If the Ubuntu installation is still present (and only GRUB was lost), sure, you can use any distro that has live booting to do so. chroot into the Ubuntu installation and install and update GRUB. If /dev/sda5 is the Ubuntu partition:

    mount /dev/sda5 /mnt
    mount -o bind /dev /mnt/dev
    mount -t proc none /mnt/proc
    mount -t sysfs none /mnt/sys
    mount -t devpts none /mnt/dev/pts
    chroot /mnt /bin/bash

    # Inside the chroot
    grub-install /dev/sda
    update-grub
    exit

    # Unmount all those mounts:
    for m in /mnt/{dev/pts,dev,proc,sys,}; do umount $m; done
    # reboot

If all you need to do is install GRUB, and updating isn't necessary, then you don't need the chroot:

    mount /dev/sda5 /mnt
    grub-install --root-directory=/mnt /dev/sda

If you have a separate boot partition, remember to mount it as well, after mounting /mnt.
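If you end up doing this more than once, the chroot steps can be wrapped in a small script. This is only a minimal sketch, assuming the root partition and the target disk are passed as arguments (e.g. /dev/sda5 and /dev/sda) and using bind mounts instead of the explicit -t mounts above:

    #!/bin/sh
    # Sketch: chroot into an installed Ubuntu and reinstall GRUB.
    # $1 = root partition (e.g. /dev/sda5), $2 = disk for GRUB (e.g. /dev/sda)
    set -e
    ROOT_PART="$1"
    GRUB_DISK="$2"

    mount "$ROOT_PART" /mnt
    for fs in dev proc sys; do
        mount --bind "/$fs" "/mnt/$fs"
    done

    # Run the repair inside the chroot
    chroot /mnt grub-install "$GRUB_DISK"
    chroot /mnt update-grub

    # Unmount in reverse order
    for fs in sys proc dev; do
        umount "/mnt/$fs"
    done
    umount /mnt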
A friend of mine is running Ubuntu and got GRUB RESCUE. Can they use a Mint ISO to repair their GRUB, as I don't have an Ubuntu ISO?
Can we use Linux Mint ISO to repair Ubuntu's Grub?
Systemd assumes certain mounts are critical to the system, and as such a failure to mount one results in it switching to emergency mode. Systemd should have reconfigured its automount units when the device was disconnected, unless it appears in /etc/fstab or you configured it as a mount unit. So the issue is likely that you still have /diskB_1TB in your fstab. From your emergency mode console, edit /etc/fstab, remove (or comment out) the line for /diskB_1TB, then reboot.
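A minimal sketch of that edit from the emergency console, assuming the mount point really is /diskB_1TB (keep a backup of fstab first):

    cp /etc/fstab /etc/fstab.bak              # backup of the original
    sed -i '\|/diskB_1TB|s|^|#|' /etc/fstab   # comment out the stale entry
    systemctl daemon-reload                   # let systemd regenerate its mount units
    reboot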
I can't boot my Debian 9.5; it fails with the following errors. If I remember correctly, the ACPI errors were shown every time I turned on the PC, but the computer always booted fine despite them, so I didn't care much. The new error starts at "A start job is running...".

When did the error start? I was going to sell my older HDD, so I erased the disk /dev/sdd via the command dd if=/dev/zero of=/dev/sdd bs=1M. The disk was mounted at /diskB_1TB. After erasing the HDD I shut the computer down and then disconnected the disk from the motherboard. After that I turned on the computer, and the error occurred for the first time. I've tried the procedure from https://askubuntu.com/questions/924170/error-on-ubuntu-boot-up-recovering-journal/924335?noredirect=1#comment1512824_924335 but it fixed nothing.

I have 4 disks:

    /dev/sda with Windows
    /dev/sdb with Debian Linux
    /dev/sdc, a 2TB data disk
    /dev/sdd, a 1TB data disk (the disk I erased and disconnected)

Is there anything I can still do in this situation? I'm pretty sure I deleted only the /dev/sdd disk. I can still access the data (via a terminal) located on /dev/sdc and on /dev/sdb, where my /home/stepaiv3 is located. Moreover, I can boot normally into my Windows on /dev/sda.
Trouble on Debian boot up "Timed out waiting for device dev-disk..."
This feature is provided by the friendly recovery menu, and in particular its dpkg plugin (which adds a menu entry titled “Repair broken packages”, translated appropriately into whatever language the user configured the system to use). This plugin uses two different approaches to repair broken packages:

if dist-upgrader is available, it uses that to repair the system, by running

    env RELEASE_UPGRADER_NO_SCREEN=1 python3 /usr/lib/python3/dist-packages/DistUpgrade/dist-upgrade.py \
        --partial --frontend DistUpgradeViewText \
        --datadir /usr/share/ubuntu-release-upgrader

otherwise, it runs

    dpkg --configure -a
    apt-get update
    apt-get install -f
    apt-get dist-upgrade

To achieve the same effect as the menu selection, you should try the first command using dist-upgrader, and if that fails because it doesn’t exist, run the four commands starting with dpkg --configure -a. Note that both these options don’t just repair broken packages, they upgrade the system to the latest versions of the packages available in whatever release is installed. (This is necessary because repairing the broken packages might involve installing missing packages, and that can only be done using the current versions of the packages from the configured repositories.)
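A small sketch that mirrors that behaviour from a root shell, using exactly the commands above (try the release upgrader first, fall back to the plain dpkg/apt sequence):

    #!/bin/sh
    # Sketch: replicate the "Repair broken packages" menu entry.
    UPGRADER=/usr/lib/python3/dist-packages/DistUpgrade/dist-upgrade.py

    if [ -f "$UPGRADER" ]; then
        env RELEASE_UPGRADER_NO_SCREEN=1 python3 "$UPGRADER" \
            --partial --frontend DistUpgradeViewText \
            --datadir /usr/share/ubuntu-release-upgrader
    else
        dpkg --configure -a
        apt-get update
        apt-get install -f
        apt-get dist-upgrade
    fi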
I recently ran into the following situation: I could not boot my computer normally. (I was shown a blinking cursor after the boot loader and Ubuntu load screen but before the login page, and never reached the login page.) I was able to enter recovery mode. If I fully continued the boot, I could get to a terminal where I could add/remove any packages with apt-get. Before fully booting into recovery mode, I was shown a menu where one of the options was dpkg, which would repair installed packages. If I selected this option, the system calculated that a repair could be made if I reinstalled 103 packages. However, saying yes to that operation ran into network issues when trying to download the packages for re-installation. I was able to resolve the situation by looking at the list of packages being offered for repair and then, using the "throw a dart and pray" strategy, I opted to run sudo apt-get install --reinstall ubuntu-gnome-desktop from the prompt offered after fully entering recovery mode. This ended up triggering a re-install of 103 packages. Once this was done, I could boot Ubuntu normally.

The question I have is: what command could I have entered at the command prompt when booted which would have performed the same operation as the dpkg menu option?
What command is run when the dpkg option is selected in recovery mode?
I think dd is a good way to proceed. However, this solution works out of the box only if there is a single /dev/sd* device. Otherwise, I would suggest listing all /dev/sd* devices except the USB one, then creating as many partitions (fdisk, n) as required on the USB drive and using dd for each /dev/sd* counted. From the link:

Insert the destination USB flash drive in my workstation, delete the existing vfat partition and create a single Linux partition using fdisk, then create a filesystem and synchronize it:

    bash# mkfs.ext3 /dev/sdb1
    bash# sync ; sync

Remove the USB flash drive from the workstation and put it in the target PC. Mount the USB drive, move the udev filesystem out of the way, and copy the local filesystem:

    bash# cd /
    bash# mkdir /mnt/sda1
    bash# mount /dev/sda1 /mnt/sda1
    bash# mkdir udev
    bash# mount --move /dev /udev
    bash# cp -ax / /mnt/sda1

That copy command might take a while. When it is done, get rid of the temporary directory /udev:

    bash# mount --move /udev /dev
    bash# rm -fr /udev

Now to make the USB drive bootable. It should still be mounted at /mnt/sda1. First, in the file /mnt/sda1/boot/grub/device.map set hd(0) to /dev/sda, and in /mnt/sda1/boot/grub/menu.lst set the kernel boot options correctly for each boot configuration, e.g.:

    title Debian GNU/Linux, kernel 2.6.18-6-486
    root (hd0,0)
    kernel /boot/vmlinuz-2.6.18-6-486 root=/dev/sda1 ro vga=792
    initrd /boot/initrd.img-2.6.18-6-486
    savedefault

Finally, install GRUB on the USB flash drive:

    bash# grub-install --root-directory=/mnt/sda1 /dev/sda

All done! Now you can reboot into the flash drive.
How can I create a persistent and bootable USB copy of a running system, on a computer, without shutting it down? The key would be the same as the computer and offer ways to use it and install it as-is on some other hardware. Root rights are OK. Quote from the Unix & Linux chat: To keep things simple, I would like to create a live backup ... for example, you are working on a machine you have to leave ... you plug in the key, you run the command, and then you plug the key in somewhere else and have everything working like at home. So not just a backup, not just a live USB, but a persistent copy of your system as-is.
How to create a persistent USB key of a running system?
Since you can't boot your system, you need some other medium - CD or USB. There is no other magic way to boot an unbootable system. Basically what you have to do is boot your machine (Slackware installer), mount your partitions and chroot into your system's / directory, then install the packages you removed (download them from some Slackware mirror and copy them, e.g. onto a USB drive). In detail:

1. Boot from the Slackware install disc or USB drive.
2. Make some directory for your broken system (a mount point), e.g.: mkdir /mnt
3. Mount the root partition (let's say it's sda2) onto the created directory, e.g.: mount /dev/sda2 /mnt
4. If your system is spread over many partitions (/boot, /var etc. on separate partitions), mount them too! Let's say your /boot is on sda1 and /var on sda3:
       mount /dev/sda1 /mnt/boot
       mount /dev/sda3 /mnt/var
5. Copy the packages you removed (e.g. from a USB drive) to some accessible place on your system partition, e.g. /mnt/root.
6. "Switch" to your system partition: chroot /mnt
7. Install the packages; they are now in /root.
8. It is done :)

Next, to clean up:

1. Exit the chroot environment (Ctrl+D or logout).
2. Unmount the partitions you mounted in step 4 and then(!) step 3, e.g.:
       umount /mnt/var
       umount /mnt/boot
       umount /mnt
3. Reboot into your hopefully rescued Slackware OS :)
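As a rough sketch of those steps from the installer shell - the device names, mount points and package file names are placeholders, and Slackware's installpkg is assumed for step 7:

    # Assuming root on /dev/sda2, /boot on /dev/sda1 and the downloaded
    # packages on a USB stick at /dev/sdc1.
    mount /dev/sda2 /mnt
    mount /dev/sda1 /mnt/boot

    # Copy the packages somewhere reachable from inside the chroot
    mkdir /usbstick
    mount /dev/sdc1 /usbstick
    mkdir -p /mnt/root/rescue-pkgs
    cp /usbstick/*.txz /mnt/root/rescue-pkgs/

    # Enter the installed system and reinstall the removed packages
    chroot /mnt /bin/bash -c 'installpkg /root/rescue-pkgs/*.txz'

    # Clean up and reboot
    umount /usbstick
    umount /mnt/boot
    umount /mnt
    reboot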
I wanted to update my system (Slackware current), which was set up for multilib. Before updating, I tried to remove all the packages (compat32 and multilib). Big mistake!!! This has broken some crucial symlinks and now gives me a kernel panic when I try to boot. I have tried several methods, including this one, but it does not work since I no longer have the original disc. Can someone tell me the proper way to recover the installation in this situation?
Linux Slackware ( Broken - kernel panic )
This appears to be a bug in older versions of LVM, one that could be corrected by compiling from source with a different set of flags to add support for thin devices. I cannot speak for the SystemRescueCD you mentioned, because I have never used it, but it may be using an older version of LVM, for whatever reason, which may have this very bug. Since you mentioned you are running Fedora, have you tried getting an official Fedora ISO image to boot from? Grab the server version here: https://getfedora.org/en/server/download/ I suggest the server version rather than the desktop version because of the troubleshooting options available. Simply boot from the ISO image (CD/DVD or USB thumb drive) and start rescue mode. Version 23 of the Fedora server ISO I tested seemed to have no issues reading any LVM volumes on a test machine whose file systems I purposely "broke". Of course, your mileage may vary. :\ I have also had success with thin-provisioning-tools, found here: https://github.com/jthornber/thin-provisioning-tools After booting from the Fedora ISO, you may need to do a bit of leg work to get your machine to boot up far enough to install the tools. Perhaps by not mounting the damaged mount points at boot, if that is possible.
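Once you are in an environment whose LVM build supports thin volumes (for instance the Fedora rescue mode mentioned above, or after installing thin-provisioning-tools), the activation and mount steps would look roughly like this; /dev/fedora/home and /mnt/old are the names from the question:

    # Scan for volume groups and activate them (requires thin-pool support)
    vgscan
    vgchange -ay fedora

    # The logical volumes should now show as ACTIVE
    lvscan

    # Mount the volume you want to recover data from
    mkdir -p /mnt/old
    mount /dev/fedora/home /mnt/old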
So my Fedora Linux machine crashed during an update, and now refuses to start properly. I'm using SystemRescueCD to try to recover the contents of the hard drive. Following the steps in this post, I have done the following, shown together with the respective output for each command. First I list the partitions: root@sysresccd /root % fdisk -l Disk /dev/loop0: 338.5 MiB, 354885632 bytes, 693136 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytesDisk /dev/sda: 477 GiB, 512110190592 bytes, 1000215216 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x283f70c2Device Boot Start End Sectors Size Id Type /dev/sda1 * 2048 1026047 1024000 500M 83 Linux /dev/sda2 1026048 1000214527 999188480 476.5G 8e Linux LVMDisk /dev/mapper/fedora-swap: 7.6 GiB, 8187281408 bytes, 15990784 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytesDisk /dev/sdb: 7.5 GiB, 8076132352 bytes, 15773696 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x29ca9ce2Device Boot Start End Sectors Size Id Type /dev/sdb1 2048 7641087 7639040 3.7G b W95 FAT32The harddrive I want to access is in sda2, so I try to mount it. root@sysresccd /root % mkdir /mnt/old root@sysresccd /root % mount /dev/sda2 /mnt/old mount: unknown filesystem type 'LVM2_member'So it is unable to mount the hard drive because it does not recognise the filesystem. 
With the lvm2 tools, I do a disk scan root@sysresccd /root % lvmdiskscan /dev/loop0 [ 338.45 MiB] /dev/mapper/fedora-swap [ 7.62 GiB] /dev/sda1 [ 500.00 MiB] /dev/sda2 [ 476.45 GiB] LVM physical volume /dev/sdb1 [ 3.64 GiB] 1 disk 3 partitions 0 LVM physical volume whole disks 1 LVM physical volumeWith lvdisplay I get the logical volume (LV) name and the volume group (VG) name root@sysresccd /root % lvdisplay WARNING: Unrecognised segment type thin-pool WARNING: Unrecognised segment type thin --- Logical volume --- LV Path /dev/fedora/pool00 LV Name pool00 VG Name fedora LV UUID Ye2FvY-Sx80-znoh-aYdi-Q5wM-e0W3-UPaQtA LV Write Access read/write LV Creation host, time localhost, 2016-01-04 15:59:45 +0000 LV Status NOT available LV Size 452.82 GiB Current LE 115922 Segments 1 Allocation inherit Read ahead sectors auto--- Logical volume --- LV Path /dev/fedora/root LV Name root VG Name fedora LV UUID DLcLQA-VcRn-u7fQ-sWaL-v9cY-M5EW-F3ZFuN LV Write Access read/write LV Creation host, time localhost, 2016-01-04 15:59:46 +0000 LV Status NOT available LV Size 50.00 GiB Current LE 12800 Segments 1 Allocation inherit Read ahead sectors auto--- Logical volume --- LV Path /dev/fedora/home LV Name home VG Name fedora LV UUID aTrVab-urfB-u0xU-zoit-PK8H-l5Sf-2MfaXV LV Write Access read/write LV Creation host, time localhost, 2016-01-04 15:59:48 +0000 LV Status NOT available LV Size 402.82 GiB Current LE 103122 Segments 1 Allocation inherit Read ahead sectors auto--- Logical volume --- LV Path /dev/fedora/swap LV Name swap VG Name fedora LV UUID MuFrai-TMdG-uiap-y7fh-5lhU-dYlL-cjjBAZ LV Write Access read/write LV Creation host, time localhost, 2016-01-04 15:59:51 +0000 LV Status available # open 0 LV Size 7.62 GiB Current LE 1952 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:0The vgdisplay command also gives similar information root@sysresccd /root % vgdisplay WARNING: Unrecognised segment type thin-pool WARNING: Unrecognised segment type thin --- Volume group --- VG Name fedora System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 9 VG Access read/write VG Status resizable MAX LV 0 Cur LV 4 Open LV 0 Max PV 0 Cur PV 1 Act PV 1 VG Size 476.45 GiB PE Size 4.00 MiB Total PE 121971 Alloc PE / Size 117932 / 460.67 GiB Free PE / Size 4039 / 15.78 GiB VG UUID WduLzz-NwqH-DXYy-8fQy-ojos-SDi4-EmmHs5Now I tried a new mount, using the LV name path: root@sysresccd /root % mount /dev/fedora/home /mnt/old mount: special device /dev/fedora/home does not existIt still refuses to mount. lvscan shows the status of the logical volume root@sysresccd /root % lvscan WARNING: Unrecognised segment type thin-pool WARNING: Unrecognised segment type thin inactive '/dev/fedora/pool00' [452.82 GiB] inherit inactive '/dev/fedora/root' [50.00 GiB] inherit inactive '/dev/fedora/home' [402.82 GiB] inherit ACTIVE '/dev/fedora/swap' [7.62 GiB] inheritAs you can see, it is still inactive and not mounted. Also, there are two warnings about unrecognised segment types. So even though I continue with the given instructions and add the device mapping module dm-mod to the kernel: root@sysresccd /root % modprobe dm-modThen I change the attributes of the volume group: root@sysresccd /root % vgchange -ay WARNING: Unrecognised segment type thin-pool WARNING: Unrecognised segment type thin Refusing activation of LV pool00 containing an unrecognised segment. Refusing activation of LV root containing an unrecognised segment. Refusing activation of LV home containing an unrecognised segment. 
1 logical volume(s) in volume group "fedora" now activeBut the attributes do not change due to the unrecognised segments, and the logical volumes stay inactive. root@sysresccd /root % lvscan WARNING: Unrecognised segment type thin-pool WARNING: Unrecognised segment type thin inactive '/dev/fedora/pool00' [452.82 GiB] inherit inactive '/dev/fedora/root' [50.00 GiB] inherit inactive '/dev/fedora/home' [402.82 GiB] inherit ACTIVE '/dev/fedora/swap' [7.62 GiB] inheritI don't know what "thin" and "thin-pool" mean in this context, but it seems quite clear that they are blocking the access to the old partitions. So if there is anyone who can spot the problem, please tell how to solve it.
mount: Unrecognised segment type thin
Might want to use backup software for your personal files first...

1. Buy a flash drive, 8 GB or larger, for this purpose.
2. Download macOS Sierra from the App Store.
3. Plug in the flash drive and rename it to "SierraInstallation", to match the command below in step five.
4. Open Terminal or iTerm2.
5. Execute, all on one line:
       sudo "/Applications/Install macOS Sierra.app/Contents/Resources/createinstallmedia" --volume /Volumes/SierraInstallation --applicationpath "/Applications/Install macOS Sierra.app" --nointeraction
6. When the command has completed, eject the flash drive and keep it to reinstall later.
7. Proceed to the Linux installation; no need to keep any partitions.

To install macOS later...

1. Plug the flash drive into the Mac.
2. Reboot the Mac and hold the Option key.
3. Choose "Install macOS Sierra" from the boot options.

You might also want to keep a ZIP file of /Applications/Install macOS Sierra.app/ on another storage device, backup drive or what have you, in case you lose the flash drive or hit some other problem.
I have this mid 2011 iMac 21.5" Core i5 on which I would like to install Linux (Debian) as the only OS, so I'd like to throw away the entire macOS installation. The web is full of tutorials for doing this, so, I think, this will be an "easy" step. My problem is: what if in future I want to go back on macOS (e.g for selling the computer)? I wonder if "OS X Recovery" (CMD + R) will work after the full Linux installation because I'll no longer be able to download macOS from Mac App Store and I have no installation CD. The best solution I think could be: keep the recovery partition and install Linux on the rest of the disk, but can this be done? How?
Installing only Linux on a mac and in case, go back to macOS
Maybe searching a little more would have helped: https://superuser.com/questions/501818/changing-ownership-without-the-sudo-command#501824

Reboot and hold down the right Shift key to bring up the GRUB 2 boot menu. Then follow these instructions to enter single-user mode: How do I boot into single user mode from grub? In single-user mode you can fix the file permissions because you are automatically the root user. Generally speaking, if it's just the file ownership that changed, you can run:

    chown -R root:root /etc

That will change ownership and group back to the default root. I have an Ubuntu Server 12.04 LTS here, and there are a small number of files/directories beneath /etc that have a different group ownership. Aside from these, all files are owned by root. The files with the different group ownership are:

    /etc:
    -rw-r----- 1 root daemon  144 Oct 26  2011 at.deny
    drwxr-s--- 2 root dip    4096 Aug 22 12:01 chatscripts
    -rw-r----- 1 root shadow  697 Oct 31 12:58 gshadow
    -rw-r----- 1 root shadow 1569 Oct 31 13:00 shadow

    /etc/chatscripts:
    -rw-r----- 1 root dip 656 Aug 22 12:01 provider

So you can run the chgrp command on those files after initially running chown. Then you should have everything restored back to how it should be. It shouldn't take an average user more than 10 minutes. e.g.

    chgrp shadow /etc/shadow

Oh, and one final step: after you've made the changes, reboot.

    reboot
I changed the owner of the /etc folder by accident while I was doing work on a web server, and now the owner of /etc and all of its subdirectories is www-data. I can't use sudo anymore for anything, and in recovery mode the console restarts after about 30 seconds and then freezes. Is there any way for me to fix this without reinstalling Ubuntu?
Changed owner of /etc folder, can't use sudo anymore
I'm sorry, but at this stage, you're probably better off restoring from a clean backup. When fsck places so many directories in /lost+found, that's a sign that there's been a lot of corruption. It's quite possible that there's more corruption, but because it's in the file contents rather than in metadata, fsck has no way to know. When restoring from a backup, make sure that it's a clean backup. The corruption may have started before it was detected.

The only way to identify what the files in lost+found are is to look at them and figure it out. There is no systematic way. If there was one, fsck would do it. Looking at the contents you show for /lost+found, it looks like the /var directory got damaged. You can try to repair it by creating /var and moving the appropriate entries in /lost+found to /var.

    # Running as root, of course
    umask 022
    mkdir /var
    mv /lost+found/\#87867 /var/lock
    mv /lost+found/\#87868 /var/run
    mv /lost+found/\#89866 /var/local
    mv /lost+found/\#89868 /var/mail
    …

I figured out the entries above from the metadata (ownership and symlink targets). You can figure more out by looking at the directory contents. Compare with an existing system installation (preferably the same distribution or at least something close, but the processor architecture doesn't matter). /var/lib may be #89865 because it tends to have a lot of subdirectories, but that's only a guess. It could be from another part of the system altogether. Don't focus on recovering /var/lib/dpkg and ignore the rest. The lack of /var/lib/dpkg is just the first symptom you noticed.

On a PC, I'd suggest to do a RAM test (with Memtest86+, which is available as a package on most distributions and is installed by default at least on Ubuntu). On a Raspberry Pi, if your system is on an SD card, I recommend replacing the SD card: SD cards are the least reliable part of the system, and if you keep using it, it'll probably keep corrupting your data.
All apt-like commands failed to create the lock file, because /var/lib/dpkg/ doesn't exist. Also, /lost+found/ has contents:

    pi@pi-top:~ $ sudo ls -al /lost+found/
    total 102456
    drwx------ 11 root root      16384 Apr  3 16:26 .
    drwxr-xr-x 23 root root       4096 May  5 17:00 ..
    -rw-------  1 root root  104857600 Apr  3 16:30 #29025
    lrwxrwxrwx  1 root root          9 Mar 29 10:05 #87867 -> /run/lock
    lrwxrwxrwx  1 root root          4 Mar 29 10:05 #87868 -> /run
    drwxr-xr-x  2 root root       4096 May  5 10:35 #89863
    drwxr-xr-x 12 root root       4096 Apr  3 16:41 #89864
    drwxr-xr-x 44 root root       4096 Apr  3 16:30 #89865
    drwxrwsr-x  2 root staff      4096 Mar 12 14:03 #89866
    drwxr-xr-x  6 root root       4096 May  5 16:30 #89867
    drwxrwsr-x  2 root mail       4096 Mar 29 10:05 #89868
    drwxr-xr-x  2 root root       4096 Mar 29 10:05 #89869
    drwxr-xr-x  5 root root       4096 Mar 29 10:32 #89870
    drwxrwxrwt  3 root root       4096 May  5 16:31 #89871

Lots of /var/lib/ is also missing, though the system isn't showing any other symptoms. Is it possible to recover the system (or at least dpkg)? If so, how?
/var/lib/dpkg/ disappeared after fsck
You don't need a live environment for this. You just need to boot into a rescue environment, which is easy. When GRUB comes up, hit the e key so that you can edit the kernel command line. Find the line that begins with "linux", use the arrow keys to move to the end of it, and type single (with a space before it). Then hit Ctrl-x or F10 to boot. This will drop you into a recovery shell. From there, just type nano /etc/network/interfaces, edit the file, hit Ctrl-o to save and Ctrl-x to exit, and type exit to continue booting.
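For illustration, the edited "linux" line would end up looking something like this; the kernel version and root device here are just placeholders for a typical Debian 7 install:

    linux /boot/vmlinuz-3.2.0-4-amd64 root=/dev/sda1 ro quiet single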
I just need to edit the /etc/network/interfaces file in my otherwise-functional Debian installation, but I can't boot up because it gets stuck at "Configuring network interfaces". So how can I access this file without booting up? Do I need a Live USB? I'm using the amd64 port of Debian 7.8.
How can I fix my Debian's /etc/network/interfaces?
It seems like you are simply looking for the rpm --root option, which is roughly analogous to dnf --installroot. This is documented in the RPM man page:

    --root DIRECTORY
        Use the file system tree rooted at DIRECTORY for all operations. Note
        that this means the database within DIRECTORY will be used for
        dependency checks and any scriptlet(s) (e.g. %post if installing, or
        %prep if building, a package) will be run after a chroot(2) to DIRECTORY.

To verify all packages installed onto a filesystem mounted at /run/media/liveuser/sda6/, run something like:

    rpm --root /run/media/liveuser/sda6/ -Va
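To go from that verification output to a targeted reinstall, one rough (untested) approach is to map the files that rpm -Va flags back to their owning packages and feed only those to dnf. This is a sketch under the assumptions that the paths match the question, the live system has network access, and none of the packaged file names contain spaces; you may also need to pass --releasever to dnf:

    #!/bin/sh
    # Sketch: reinstall only the packages whose files fail verification.
    ROOT=/run/media/liveuser/sda6/

    # rpm -Va prints one flagged file per line; the file name is the last field.
    rpm --root "$ROOT" -Va | awk '{print $NF}' |
        xargs -r rpm --root "$ROOT" -qf --queryformat '%{NAME}\n' 2>/dev/null |
        sort -u > /tmp/broken-packages.txt

    # Reinstall just those packages into the offline root.
    dnf --installroot="$ROOT" reinstall $(cat /tmp/broken-packages.txt)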
1st. Thanks in advance. This is somewhat like # 216697: Reinstalling packages with missing/corrupt files except in that person's situation, after recovery, the system still worked. Mine does not. Can't start X, no networking, systemd doesn't have all it's requirements so services can't start, etc etc. The system, to use the technical term, is hosed. (Fedora 26 i686) DNF has an --installroot command, and if I boot to a LiveUSB OS, I can mount my root filesystem partition and do dnf --installroot=/run/media/liveuser/sda6/ repolist and it does list all my configured repos. I further try dnf --installroot=/run/media/liveuser/sda6/ list --all and hundreds of package names scroll past. I am assuming the DNF db or rpmdb or whatever (I really don't know, sorry) seems intact. While DNF allows me to work with the non-running system, I can't figure out how to use RPM -V on a non-running system. It seems to only deal with the live OS. I assume I can chroot trick it, but don't want to risk messing up anything so I'm asking and googling madly, trying to find a solution to just verify the install and only force reinstall the corrupted packages, but am coming up empty. As a last resort, I'm going to use dnf --installroot={path} reinstall * but that will incur many hours of time and a many gigabytes to be wastefully downloaded. At least it can happen unattended. Alternatively I could keep using the liveOS and wait another week or two and install Fedora 27 over my disabled system when it's released, but that seems just as big a cop-out as force reinstalling every package. I really would love to learn how to do this. RPM Ninjas: HELP!
Reinstalling only packages with missing or corrupt files on a non-running system?
It seems like there isn't really much that needs fixing, at least on a fresh install of Ubuntu 15.10. Of course, if you've installed stuff, you will have files and directories that I don't. However, I believe this output will show the proper permissions to keep Ubuntu running. Some programs may be broken because of the command you ran, but Ubuntu will at least run, and you can go about reinstalling applications from there. If something doesn't work, try setting the owner to the group. It might not have been the same originally, but it's worth a shot if the app isn't working.

By running shopt -s extglob; find /!(proc|tmp|dev|run|root|lost+found) -maxdepth 1 -ls | awk '$5!="root" || $6!="root"' (thanks @terdon), I came up with the following:

    131226    4 -rw-r-----   1 root  shadow       824 Jun 21 14:34 /etc/gshadow
    131284    4 -rw-r-----   1 root  shadow      1212 Jun 21 14:34 /etc/shadow
    131095    4 drwxr-s---   2 root  dip         4096 Oct 21  2015 /etc/chatscripts
    131103    4 drwxr-xr-x   5 root  lp          4096 Jul 19 07:00 /etc/cups
    find: `/mnt/hgfs': Protocol error
    1064478   4 drwxr-xr-x  16 zw    zw          4096 Jul 19 07:26 /home/zw
    655571   36 -rwxr-sr-x   1 root  shadow     35536 Apr 22  2015 /sbin/unix_chkpwd
    655516   36 -rwxr-sr-x   1 root  shadow     35576 Apr 22  2015 /sbin/pam_extrausers_chkpwd
    150670    4 drwxrwsrwt   2 root  whoopsie    4096 Oct 21  2015 /var/metrics
    150669    4 drwxrwsr-x   2 root  mail        4096 Oct 21  2015 /var/mail
    150668    4 drwxrwxr-x  14 root  syslog      4096 Jul 19 07:00 /var/log
    150664    4 drwxrwsrwt   2 root  whoopsie    4096 Oct 21  2015 /var/crash
    150666    4 drwxrwsr-x   2 root  staff       4096 Oct 19  2015 /var/local

The command excludes /root and /lost+found, as everything under /root and /lost+found is owned by root. Make sure to set the ownership accordingly. The command excludes /proc, /tmp, /dev and /run as these directories contain files that are reset upon reboot. /mnt and /media may have had special permissions set on subdirectories. A reboot may fix the ones under /media, but I'm not sure about /mnt.

There aren't very many directories you need to pay attention to, since most of them are owned by root. If you have any extra directories under /*/* that I don't have, try setting their owners to root or their corresponding groups. For everything that does match, just fix the permissions. I would reverse the two commands by running what you ran, but replacing foobar with root. Then you can fix the other permissions afterward.
I was careless for just a second and managed to type (while logged in as root) on my Ubuntu system:

    chown foobar /*
    chown foobar /*/*

What is the possible extent of the damage, and how can I revert it?
Damage by chown command at /
Sorry, it would be very hard to restore your system from this backup. You didn't just lose file permissions, you also lost file ownership and symbolic links. With so much lost, restoring manually would be an arduous process, there'd be a lot to do manually and it would be difficult to ensure you have them all. It would be far easier to do a new, clean installation, and then restore selected configuration files (and any data files, of course) from your backup. If your backup at least preserved timestamps, you should be able to find the files that didn't come with the original system through their timestamps (you can use something like find /path/to/backup -type f -newer SOMEFILE to list the files that were modified more recently than SOMEFILE); this may mix some software updates with your changes. In principle, the files you modified should be under /etc or under home directories. You may have installed things under /opt or /usr/local as well.
I lost my drive with my media center on it, and I realized that I had an old backup lying around. But when I went to restore my backup, I realized that all of the files had been copied with a Windows utility, and more than likely my file permissions are all messed up. It will be a few hours before I'll be able to attempt to boot to this, but is there anything I can do, even manually, to restore this to a bootable condition? I assume, at the least, that the execute bit has been unset on any and all files.
Restore file permissions after Windows copy
You're trying to use the generic modesetting driver, but you somehow got xserver-xorg-video-intel installed again. Removing it should force Xorg to default back to the modesetting driver. Creating an /etc/X11/xorg.conf with the following should keep it working, even if -video-intel gets installed again:

    Section "Device"
        Identifier "Intel"
        Driver "modesetting"   # on new enough Xorg, this might be "modeset" instead
    EndSection

This will be the default in Debian Stretch according to a post on Timo Aaltonen's blog. So once you upgrade to Stretch, you should be able to remove that config.
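The removal step mentioned above would be something along these lines, assuming apt is usable from the recovery console:

    apt-get purge xserver-xorg-video-intel
    # verify it is gone:
    dpkg -l xserver-xorg-video-intel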
I did as root in my Debian 8.5 because wanted to test Matlab's thing but the commands removed some dependencies which affect the runlevelRun apt-get purge openjdk-7-jdk openjdk-7-doc openjdk-7-jre-lib openjdk-7-jre Reboot Output: notification about processing runlevel changes and staying there. All symbols were green and OK. I waited some minutes. Pressed then power off. Now, no such text are coming anymore when power on. Power on. Just white cursor blinking at the top-left corner in Fig. 1 just blinking symbol (_) on the blank display when normal bootI press fn+f1/f2 but you see no logs of other TTYs, in contrast to outputs in Recovery mode. Findings I can start up the system in Linux kernel 3.16 but not 4.6 which is my default. This seems to be a firmware issue because came suddenly. How can you you restore the system using Linux kernel 4.6 with Linux kernel 3.16?Recovery mode in Linux kernel 4.6 I get Debian's boot menu stably now where you can choose Normal boot and Recovery boot. Booting in Recovery mode and studying in Terminal/var/log/apt/history.log's last entries Start-Date: 2016-09-07 21:47:23 Commandline: apt-get purge openjdk-7-jdk openjdk-7-doc openjdk-7-jre-lib Purge: openjdk-7:amd64 (7u111-2.6.7-1'deb8u1), openjdk-7-jre-lib:amd64 (7u111-2.6.7-1'deb8u1), openjdk-7-doc:amd64 (7u111-2.6.7-1'deb8u1), default-jdk:amd64 (1.7-52) End-Date: 2016-09-07 21:47:24Start-Date: 2016-09-07 21:51:15 Commandline: apt-purge openjdk-7-jre Purge: sat4j:amd64, default-jre:amd64 (1.7-52), eclipse-platform:amd64 (3.8.1-7), eclipse-rcp:amd64(3.8.1-7), eclipse:amd64 3.8.1-7), openjdk-7-jre:amd64 (7u111-2.6.7-1`deb8u1), eclipse-pde:amd64 (3.8.1-7), eclispe-jdt:amd64 (3.8.1-7) End-Date: 2016-09-07 21:51:17/var/log/apt/term.log Log started: 2016-09-07 21:47:23 (Reading database [...]) Removing default-jdk [...] Removing openjdk-7-doc [...] Removing openjdk-7-jdk:amd64 [...] update-alternatives: using /usr/bin/fastjar to provide /usr/bin/jar (jar) in auto mode Removing openjdk-7-jre-lib [...] Log ended: 2016-09-07 21:47:24Log started: 2016-09-07 21:51:15 (Reading database [...]) Removing eclipse [and other its related eclipse-packages] Purging configuration files for eclipse-platform (3.8.1-7) ... Removing sat4j (2.3.3-1) ... Removing eclipse-rcp (3.8.1-7) ... Removing default-jre (2:1.7-52) ... Removing openjdk-7-jre:amd64 (7u111-2.6.7.1'deb8u1) ... Processing triggers for [man-db desktop-file utils gnome-menus mime-support hicolor-icon-theme) Log ended: 2016-09-07 21:51:17I run in the recovery mode as root apt-get install openjdk-7-jdk openjdk-7-doc openjdk-7-jre-lib and get Could not resolve 'security.debian.org' E: Failed to fetch http://security.debian.org/pool/updates/main/o/openjdk-7-jre_[...]E: Unable to fetch some archives, may run apt-get update or try --fix-missing?[...] W: Some index files failed to download. They have been ignored, or old ones used instead. I do as root apt-get update but I get W: Failed to fetch http://ftp.fi.debian.org/debian/dists/jessie/InRelease[...]W: Some index files failed to download. They have been ignored, or old ones used instead. I run as root apt-get upgrade but similar errors and/or warnings. I changed all Finnish (fi) to US (us) but the same issue persists. 
Using GAD3R's proposal in Linux kernel 4.6 I run as root # open internet in recovery mode by ifconfig eth0 up; dhclient eth0apt-get update apt-get upgrade apt-get install openjdk-7-jdk openjdk-7-doc openjdk-7-jre-lib openjdk-7-jreapt-get install x11-common # output: 0 upgraded, 0, newly installed, 0 to remove and 0 not upgraded. rebootOutput: condition persists, but now the blinking of _ on blank display is visible also in other TTYs. I also changed sources back to Finnish (fi) but no difference in the output. Testing derobert's proposal in Linux kernel 4.6Booting to recovery mode. I do exit or ctrl+d which just leaves the system to the state where the messages but not proceeding [ 26.566...] iwlwifi 0000:01:00.0: L1 Enabled - LTR Enabled [ 29.903871] ax88179_178a_2... eth0: ax88179 - Link status is: 1 [ 32.259410] [many wlan0 messages] [ 32.270956] wlan0: associated [ 32.078387] IPv6: wlan0: IPv6 duplicate address [ip address] detected!TestingI'd suggest (in grub), pressing "e" on the normal boot entry, and removing any "quiet" on the kernel line, and booting it (possibly, even add "verbose" as well). That should at least get boot messages. (That's just a temporary change.) X11 packages installed masi@masi:~$ dpkg --get-selections | grep xserver x11-xserver-utils install xserver-common install xserver-xephyr install xserver-xorg install xserver-xorg-core install xserver-xorg-input-all install xserver-xorg-input-evdev install xserver-xorg-input-mouse install xserver-xorg-input-synaptics install xserver-xorg-input-vmmouse install xserver-xorg-input-wacom install xserver-xorg-video-all install xserver-xorg-video-ati install xserver-xorg-video-cirrus install xserver-xorg-video-fbdev install xserver-xorg-video-intel install xserver-xorg-video-mach64 install xserver-xorg-video-mga install xserver-xorg-video-modesetting install xserver-xorg-video-neomagic install xserver-xorg-video-nouveau install xserver-xorg-video-openchrome install xserver-xorg-video-qxl install xserver-xorg-video-r128 install xserver-xorg-video-radeon install xserver-xorg-video-savage install xserver-xorg-video-siliconmotion install xserver-xorg-video-sisusb install xserver-xorg-video-tdfx install xserver-xorg-video-trident install xserver-xorg-video-vesa install xserver-xorg-video-vmware installmasi@masi:~$ apt-cache policy x11-xserver-utils x11-xserver-utils: Installed: 7.7+3+b1 Candidate: 7.7+3+b1 Version table: *** 7.7+3+b1 0 500 http://ftp.fi.debian.org/debian/ jessie/main amd64 Packages 100 /var/lib/dpkg/statusNote that xserver-xorg-video-intel is the list, while it should not be there so some dependency has installed it automatically. 
So I purge it getting the list root@masi:/home/masi# dpkg --get-selections | grep xserver x11-xserver-utils install xserver-common install xserver-xephyr install xserver-xorg install xserver-xorg-core install xserver-xorg-input-all install xserver-xorg-input-evdev install xserver-xorg-input-mouse install xserver-xorg-input-synaptics install xserver-xorg-input-vmmouse install xserver-xorg-input-wacom install xserver-xorg-video-ati install xserver-xorg-video-cirrus install xserver-xorg-video-fbdev install xserver-xorg-video-intel deinstall xserver-xorg-video-mach64 install xserver-xorg-video-mga install xserver-xorg-video-modesetting install xserver-xorg-video-neomagic install xserver-xorg-video-nouveau install xserver-xorg-video-openchrome install xserver-xorg-video-qxl install xserver-xorg-video-r128 install xserver-xorg-video-radeon install xserver-xorg-video-savage install xserver-xorg-video-siliconmotion install xserver-xorg-video-sisusb install xserver-xorg-video-tdfx install xserver-xorg-video-trident install xserver-xorg-video-vesa install xserver-xorg-video-vmware installHow to troubleshoot this? OS: Debian 8.5 64 bit Linux kernel: 4.6 (backports) Window manager: Gnome 3.14 Internet: ethernet via USB (used in Recovery mode in an attempt to fix the system) Hardware: Asus Zenbook UX303UA Graphic firmwares: modesetting, firmware-misc-nonfree done as described here Xserver: x11-xserver-utils 7.7+3+b1, (dpkg --get-selections | grep xserevr, apt-cache policy x11-xserver-utils)
How to recover Debian with a backports Linux kernel that hangs at a runlevel change?
I noticed that what happened was that I had installed the boot loader (or whatever it is called) on sdb1 or something similar, and not on sda where the boot flag was set. So what I did was boot Linux from a live USB, reinstall my Linux Mint with the boot loader going to sda, and set the boot flag there through gdisk. After that I installed GRUB from the terminal and ran sudo update-grub, and tada, my problems were fixed and GRUB now found both the Linux boot loader and the Windows 7 boot loader :)
I was working on my Linux machine earlier today, and while I was in the middle of updating my Java packages through the package manager and installing CoffeeScript, my windows stopped working and closed down. I didn't think much of it and kept on working in TeXstudio, finishing my hand-in for this week; I saved and closed it, closed my browser, and thought "Hey, maybe it just needs a reboot!" I then rebooted the computer, chose my Linux Mint partition, and here is where it gets weird. The only thing I see when I boot is the Linux Mint logo, and after that the screen is completely black and all I can see is a few light grey pixels in the top left of my screen. It looks like a terminal cursor, but it's not flashing and nothing happens when I press my keyboard. The good news here is that I can boot the Linux partition in recovery mode, but I don't really know what to do from there. Is there a way to revert some of the latest changes back to how things were set up yesterday? Or am I completely lost here? I would love to have it up and running again, because it's my favorite development environment, but I do not know much about the kernel and how it works. If you need more info, please tell me how to get it and I will post it here. The lspci output is seen above. Also, I noticed that the message ideapad_laptop: Unknown Event: 1 appears on the screen about every 10 seconds. I cannot press Ctrl + Alt + F1 to enter a tty from non-recovery mode.
Linux Mint (LMDE) suddenly crashed and won't boot again
Re-installation. All your installed programs rely on data in /usr, so you can't simply reinstall. Your idea about using a live CD/DVD goes in this direction: you use it to fix the problem, which here would mean reinstalling all your programs, and that won't work. Since this would be invoked within a chroot, it doesn't matter that you are on a live system which has some working copies. You could spend a week finding out which directories to bind-mount... Stick to making a backup with a live CD like Knoppix and a USB drive, and then reinstall. KNOPPIX
The /usr/ directory accidentally got deleted on CentOS 8. For recovering CentOS, I have found this link, which needs a live CentOS 8 installed on a USB drive. However, CentOS 8 doesn't have a live ISO release, as per this discussion. On MS Windows, I get errors when trying to write the DVD ISO to a USB drive using Rufus and also Etcher. Kindly point me to how I can recover the data in CentOS.
Accidentally deleted /usr/ directory in CentOS 8
The good news is that all your data is still there. The mixed news is that your system installation may or may not be recoverable: it depends where chmod stopped. You will need to boot into a rescue system to repair it.

From the rescue system, mount your broken installation somewhere, say /mnt. Issue the following commands:

    chmod 755 /mnt
    find /mnt -type d -perm 644 >/mnt/bad-permissions
    find /mnt -type d -exec chmod 755 {} +

The first find command saves a record of directories with bad permissions into a file. The purpose is to see where permissions have been modified. The second find command changes all directories to be publicly accessible. You now have a system where all directories listed in /mnt/bad-permissions and all files in these directories are world-readable. Furthermore, files in these directories are not executable. Depending on which files were affected, this may be easily repairable or not. See Wrongly set chmod / 777. Problems? for what you can try to get the system functional, to which you should add

    chmod a+x /bin/* /sbin/* /usr/bin/* /usr/sbin/* /lib*/ld-*

But even if you manage to get something working, there's a high risk that some permissions are still wrong, so I recommend reinstalling a new system, then restoring your data. How do I replicate installed package selections from one Debian system to another? (Debian Wheezy) should help.
I use Debian jessie and I have made one of those bad mistakes: I broke my system with a mistyped command, followed by the worse mistakes that tend to follow in such situations. Trying to fix some permissions, I mistakenly used chmod recursively on the root folder:

    # chmod -R 0644 /

Realizing it immediately, I rushed to do something to stop it, but the system was frozen, and the worse mistake was hard-powering off the machine. Now I think I have some user-manager problem: after booting with some "failed to start service" messages I don't get the GNOME user login, and I can't log in on the console either. This is what flashes several times and then stays on screen:

    [ ok ] Created slice user-113.slice
           Starting user manager for UID 113...
    [ ok ] Started user manager for UID 113
    [ ok ] Stopped user manager for UID 113
    [ ok ] Removed slice user-113.slice
Broken system after chmod -R 644 /
The solution that worked for me was reinstalling all the NVIDIA drivers.

Uninstall all NVIDIA drivers (based on this answer):

    sudo apt-get remove --purge '^nvidia-.*'
    sudo apt-get install ubuntu-desktop
    sudo rm /etc/X11/xorg.conf
    echo 'nouveau' | sudo tee -a /etc/modules

Reinstall all NVIDIA drivers (based on this thread):

    ubuntu-drivers devices
    sudo ubuntu-drivers autoinstall
After a power outage, my Ubuntu 20.04 boots from the normal option in GRUB to a black screen and hangs there. However, if I go to Advanced Options, select recovery mode and then "resume", it is able to boot up. What went wrong? How can I fix it? Note: I found many solutions to this problem; not all worked for me, so I figure there are many things that could go wrong. It might be nice to have several alternative solutions listed in the answers to this problem :)
ubuntu 20.04 only boots from recovery mode
I have never used Manjaro, but the process that works for Arch Linux should be fine in your case too. You should be able to boot off a Manjaro live USB, mount the root file system of your Manjaro installation, mount your existing /home directory and put the backup copy you have of shadow back in its place.

Then you should also be able to boot your installed Manjaro, because /etc/grub.d is only used to (re-)create your GRUB configuration and is not required during the boot process. It is however important that you restore the files it contains (editing them again, if you have no backup), otherwise your system risks becoming unbootable (or, more likely, not dual-bootable anymore) the next time some package update triggers the re-creation of your boot loader configuration.

This would also likely have worked if you had an encrypted root file system. udev takes care of activating any block devices (e.g. MD RAID arrays or LVM volumes) as soon as they become available, and the only things usually left to you are:

Opening encrypted devices; in your case, you should be able to run:

    cryptsetup open /dev/your_encrypted_device decrypted_device_name

Unless something wiped your LUKS headers, this will only require the passphrase. (Note that there is no way to recover your data if the LUKS headers have been wiped or damaged and you don't have a backup.)

Mounting file systems, e.g. mount /dev/sdaN / or mount /dev/mapper/mapped_dev /.

lsblk may help you explore your device tree and locate the right devices to open/mount (look at the TYPE column).

When pacman installs a new version of a file whose modification time does not match the one recorded in the package database, the existing file is not overwritten and a .pacnew file is created instead. Promptly taking care of .pacnew files is important because, on a rolling-release distribution, any package update may introduce breaking changes. For instance, an existing configuration file may mention options that have been deprecated in the to-be-installed version of a program, which needs different options instead. Distribution maintainers cannot take care of all the possible cases, and checking configuration files is left to the user.

pacdiff is aimed at helping in this process: it walks through the .pacnew (and .pacsave) files tracked by the package manager and offers to let you review them. "(O)verwrite with pacnew" does just what it says: the existing file is replaced with the .pacnew version and your custom configuration is lost. While the most proper action is usually to review the existing and .pacnew versions of a file and merge them when needed, some .pacnew files are not meant to be acted upon. Assuming Manjaro aligns with Arch in this respect, this is true for the user database (which includes /etc/passwd and /etc/shadow) "unless Pacman outputs related messages for action".
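As a concrete sketch of the first part, with placeholder device names (adjust to your layout), assuming /home is on its own partition, the shadow backup lives at /home/youruser/shadow.bak, and the permissions shown are the usual Arch defaults:

    # From the Manjaro live USB, as root
    mount /dev/sdXn /mnt            # root file system of the installed Manjaro
    mount /dev/sdXm /mnt/home       # separate /home partition, if you have one

    # Put the backed-up shadow file back and restore safe permissions
    cp /mnt/home/youruser/shadow.bak /mnt/etc/shadow
    chown root:root /mnt/etc/shadow
    chmod 600 /mnt/etc/shadow

    umount /mnt/home /mnt
    reboot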
After all these years of using Linux, I have never goofed up this badly. I should have known better, I honestly don't know what I was thinking. While trying to fix a small error message I was getting (this is irrelevant to my now much bigger issue), I found a post that said to "just run pacdiff and overwrite old files", and without much looking into it, I began to overwrite.. I guess I did have the fortitude to back up /etc/shadow before overwriting it and stopped overwriting after just a few entries, but I am now locked out of root and all my users. I'm on the computer right now as I'm scared to restart as I also overwrote /etc/grub.d (which I did not backup)! The /etc/shadow backup is in my /home and is owned by root so I cannot read it now, but I do have it. What exactly does pacdiff and (O)verwrite with pacnew do? I have found instructions to recover /etc/shadow file, but will I be able to get into grub-boot loader on restart now? My root and home partition are not encrypted, but I do have a LUKS encrypted partition. If worse comes to worse, and I have to reinstall without formatting my /home and encrypted partition, is any instance of cryptsetup able to open a LUKS encrypted partition with the passphrase? The information to mount and open is stored in the LUKS partition header, so I should be good, right? I am unable to find out which cipher I am using as I don't have root access, but I'm almost positive it's whatever the default is. How do I proceed from here? I'm not going to shutdown this machine until I have a plan in place. I'm on a dual boot Manjaro/Windows. I do have a bootable Manjaro USB made and another machine if need be. Any help would be much appreciated. This is not my proudest moment, but a great lesson learned. I need you guys more than ever.
Locked out: overwrote /etc/shadow and /etc/grub.d with pacnew
It refers to the time between reboots. I will explain with an example:

    root   pts/0        192.168.10.58    Mon Nov 28 10:53   still logged in
    reboot system boot  2.6.32-642.11.1. Mon Nov 28 10:14 - 11:00  (00:45)
    root   pts/0        192.168.10.58    Mon Nov 28 10:11 - down   (00:02)
    reboot system boot  2.6.32-642.11.1. Mon Nov 28 10:09 - 10:14  (00:04)
    root   pts/0        192.168.10.58    Mon Nov 28 10:08 - down   (00:01)
    reboot system boot  2.6.32-642.11.1. Mon Nov 28 10:07 - 10:09  (00:01)
    root   pts/0        192.168.10.58    Mon Nov 28 10:06 - down   (00:01)
    reboot system boot  2.6.32-642.11.1. Mon Nov 28 10:05 - 10:07  (00:01)
    root   pts/0        192.168.10.58    Mon Nov 28 09:23 - down   (00:41)
    reboot system boot  2.6.32-642.11.1. Mon Nov 28 09:21 - 10:05  (00:43)
    root   pts/0        192.168.10.58    Mon Nov 28 04:42 - down   (04:39)

The "logout" time of the most recent reboot entry keeps changing every minute: whenever you run last, the latest reboot line shows the present time, while each earlier reboot line shows the time at which the next reboot happened.
I have the following lines extracted from the output of last. It shows two reboots, and that userA was logged in right up to the reboot. So far, I am able to interpret the data. However, what I do not understand right now is the login time of the pseudo-user reboot. For any ordinary user, the two times are when the user logged in and logged out. In the case of a reboot, the entry for the logout time is crash, indicating that the user was logged in right until the bitter end. No statement as to whether the user is a victim or the culprit. My guess is that the login time of the pseudo-user reboot is the time when the system reboot was initiated. However, what determines the logout time of the user reboot?

    reboot system boot  3.10.0-327.13.1. Mon Nov 28 08:08 - 10:35  (02:26)
    userA  pts/0        10.ZZ.YY.XX      Sun Nov 27 08:01 - crash  (1+00:06)
    reboot system boot  3.10.0-327.13.1. Sun Nov 27 07:36 - 10:35  (1+02:58)
    userA  pts/9        10.ZZ.YY.XX      Fri Nov 25 17:39 - crash  (1+13:57)
    userA  pts/0        10.ZZ.YY.XX      Fri Nov 25 16:17 - crash  (1+15:18)
last - user reboot - logged-in period
You need to boot into a Linux live USB (preferably Mint or Ubuntu), make sure your Linux HDD partition is mounted read/write, and use the mv command to move the directory back to its proper location. This is how I would approach it:

Boot your system on a Linux live USB. Use the file manager on the live system to find the Linux Mint partition on the hard drive with your /home directory on it. Open a command terminal on the live system (i.e. "drop to the shell", heh). Type the command mount, which will show you a list of disks and the directories they're mounted on in the live Linux system, for example:

    ...
    /dev/sda3 on /media/aaa (ro)
    /dev/sda4 on /mnt/a3d2fe6 (ro)
    ...

Find out which directory (the path after "on" in the above output) is your Linux Mint partition. You need to look at only the lines which begin with "/dev/". For each directory, execute

    ls <dir>/home

replacing <dir> with the directory name. Do not check "/"; that's the root directory of the live USB system. When you find out which mounted directory contains your home directory, issue these commands, replacing "/yourdirectory" with the directory that you've identified in the mount command output:

    sudo mount -o remount,rw /yourdirectory
    cd /yourdirectory
    sudo mv home/etc etc
I was making a spare Mint live USB via UNetbootin, and while I was looking for my ISO file, I misclicked and moved /etc into my /home directory, which pretty much immediately rendered my system useless. How can I make this partition bootable again? I tried grabbing the folder from a live USB, but permission was denied. I then tried going into Windows, grabbed an ext4 driver that was read-only, then booted into a live USB and tried to copy the folder back into the root directory of my Linux partition, but access was still denied. How can I revive this system? Is the only way to copy my home directory and reinstall Mint? This is on a Toshiba C55DT-A5106, 12GB RAM, AMD A6 APU, 750 GB HD with 100GB for Mint, the rest going to Windows and assorted recovery/swap partitions. The OS in question is Linux Mint 17.2 Cinnamon 64-bit.
I moved /etc into /home in Linux Mint, how do I make the system usable again?
I finally was able to restore my Synology software without losing the data, thanks to this Synology Knowledge Base article. It explains how to factory-reset the configuration and operating system:

1. Use a paperclip to press the reset button on the back of the device and hold it for 4 seconds until it beeps. (This resets the configuration.)
2. Release the button.
3. Within the next 10 seconds, hold the button for 4 seconds again until it beeps. (This resets the OS.)
4. Release the button again.
5. Open Synology Assistant and check that the NAS now says "Not Configured".
6. Double-click the NAS to reinstall the OS.
I used to access it via SSH, but I wanted to connect to it with zsh instead of bash. I made a mistake with my Synology NAS: I deleted the /bin/sh symbolic link to /bin/bash to replace it with /bin/zsh, but since then I cannot connect to it using SSH:

    $ ssh synology.local
    [emailprotected]'s password:
    /bin/sh: No such file or directory
    Connection to synology.local closed.

I have no idea how I can fix that...
Fix corrupt /bin/sh link
Is the restored image the same size as the original one? You can test the restored size using:

    lz4 -d -v img.lz4 > /dev/null

If not, maybe the following line would be a bit safer:

    lz4 -d img.lz4 | dd of=/dev/sda
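A rough way to check whether the compression step is the culprit, assuming the source disk is still intact, is to compare checksums of the original device and of what the image decompresses to (device and file names as in the question):

    # Checksum of the raw source disk
    dd if=/dev/sda bs=1M | sha256sum

    # Checksum of the decompressed stream; the two should match
    lz4 -d -c img.lz4 | sha256sum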
I've been cloning complete HDD images to recover from OS crashes for a while now, using dd and gzip:

    dd if=/dev/sda | gzip > img.gz

and

    gzip img.gz | dd of=/dev/sda

This has always worked fine, but the process is a little slow: it takes more than 2 hours to create or restore an image. I started experimenting with faster (de)compression: LZ4. Again, using the same commands:

    dd if=/dev/sda | lz4 > img.lz4

and

    lz4 img.lz4 | dd of=/dev/sda

Creating and restoring an image now takes less than 50% of the time. Point is, this restored image delivers an unbootable PC. What am I doing wrong? Is LZ4 not suitable for this purpose?
HD clone using LZ4 and DD fails
What to recover

The LFS variable was presumably unset when you ran this command. So it modified /lib64/ld-linux-x86-64.so.2 and /lib64/ld-lsb-x86-64.so.3. You've corrupted the dynamic loader. As a consequence, you can't run any dynamically linked program. Pretty much every program is dynamically linked, including bash, init, ln, etc.

/lib64/ld-linux-x86-64.so.2 is the important one. It's the dynamic loader used by 64-bit Arch programs. The symbolic link is provided by the glibc package. From a working Linux system, run

    ln -snf ld-2.33.so /lib/ld-linux-x86-64.so.2

Note: the number 2.33 will change over time! Check what file /lib/ld-*.so exists on your system.

/lib64/ld-lsb-x86-64.so.3 is for compatibility with programs not built for Arch. It's provided by the ld-lsb package. If this package is installed, restore the link:

    ln -snf ld-linux-x86-64.so.2 /lib/ld-lsb-x86-64.so.3

If ld-lsb is not installed, remove /lib/ld-lsb-x86-64.so.3.

Self-contained recovery with advance planning

When dynamic libraries are corrupted, you can still run statically linked executables. If you're running any kind of unstable or rolling-release system, I recommend having a basic set of statically linked utilities. (Not just a shell: a statically linked bash is of no help to create symbolic links, for instance.) Arch Linux doesn't appear to have one. You can copy the executable from Debian's busybox-static or zsh-static: both include a shell as well as built-in core utilities such as cp, ln, etc. With such advance planning, provided you still have a running root shell, you can run busybox-static and

    ln -snf ld-2.33.so /lib/ld-linux-x86-64.so.2

Or run zsh-static and

    zmodload zsh/files
    ln -snf ld-2.33.so /lib/ld-linux-x86-64.so.2

If you've rebooted and are stuck because /sbin/init won't start, boot into the static shell: follow the steps in Crash during startup on a recent corporate computer under “Useful debugging techniques:”, starting with “press and hold Shift”. On the linux command line, add init=/bin/busybox-static (or whatever the correct path is).

Repairing from a recovery system

Without advance planning, you'll need to run a working Linux system to repair yours. The Arch wiki suggests booting a monthly Arch image. You can also use SysRescueCD. Either way, use your written notes, lsblk, fdisk -l, lvs, or whatever helps you figure out what your root partition is, and mount it with mount /dev/… /mnt. Then repair the symbolic link:

    ln -snf ld-2.33.so /mnt/lib/ld-linux-x86-64.so.2
So, recently I was working on the Linux From Scratch project with multiple terminals open, and by accident I typed the following snippet in another terminal tab (as root). It messed up the symlinks completely, and now I can't run any commands in bash.
case $(uname -m) in
  i?86) ln -sfv ld-linux.so.2 $LFS/lib/ld-lsb.so.3
  ;;
  x86_64) ln -sfv ../lib/ld-linux-x86-64.so.2 $LFS/lib64
          ln -sfv ../lib/ld-linux-x86-64.so.2 $LFS/lib64/ld-lsb-x86-64.so.3
  ;;
esac
I'm on Arch Linux. When I restarted the computer, a kernel panic also happened and it says: "switch_root: failed to execute /sbin/init: Too many levels of symbolic links." Any solutions? I hope someone can help.
switch_root: failed to execute /sbin/init: Too many levels of symbolic links
Unfortunately you will not be able to do much from that OS, because you cannot run any binaries in /usr/bin/ or /usr/sbin/. What you can do is make a live USB of any Linux OS, boot into the live environment, mount your root partition at any mountpoint (say /mnt/), undo what you did, i.e. run the command sudo chmod +x /mnt/*, and then finally reboot.
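Put together, the repair from the live session looks roughly like this (a sketch, not a tested recipe; /dev/sda1 is an assumed root partition, check yours with lsblk first):
lsblk -f                    # identify the installed root partition
sudo mount /dev/sda1 /mnt
sudo chmod +x /mnt/*        # restore the execute/search bit on the top-level entries
sudo umount /mnt
Since chmod -x /* was not recursive, it mainly stripped the bits from the directories and files directly under /, so restoring those top-level entries is what brings traversal back.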
I'm pretty new at Linux and I really don't know how to fix this. I accidentally ran sudo chmod -x /* and now every command I run returns "Permission denied", even the sudo command. Please help!
Accidentally ran "sudo chmod -x /*, is a fix possible?
Considering you have overwritten your Ubuntu partition with a Windows reinstallation, the chances of recovering files are pretty low. With:
- a LOT of time
- a pretty large storage space on another drive
- another PC where you can plug in your hard drive (stop writing to your Windows/Ubuntu drive NOW!!!)
- an indecent amount of luck (sorry)
you may recover some files with dedicated tools. I had results with Scalpel running on Debian (there is other software available, both for Windows and Linux). Scalpel (and possibly others) works by looking for "file signatures" on the drive surface, which means it helps if you instruct it what to look for: JPEGs, OpenOffice files, ... What may increase your chances too is the amount of data before/after the reinstallation: if there were, for instance, 1TB of data and you only wrote 100GB, some areas of the disk may have been untouched.
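If you go the Scalpel route, a rough outline of a session might look like this (a hedged sketch: device names and paths are made up, and you should carve from a read-only image rather than the disk itself):
# 1. Image the drive onto a separate, larger disk first (gddrescue package on Debian/Ubuntu):
sudo ddrescue /dev/sdX /mnt/bigdisk/old-drive.img /mnt/bigdisk/old-drive.map
# 2. Enable the file types you care about by uncommenting them in Scalpel's config,
#    then carve from the image into an output directory:
sudo scalpel -c /etc/scalpel/scalpel.conf -o /mnt/bigdisk/carved /mnt/bigdisk/old-drive.img
Carved files lose their names and directory structure, so expect to sort the results by hand.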
On my PC I had 2 operating systems (Ubuntu and Windows). About 5 days ago I wanted to reinstall my Windows OS because of some bugs, but instead of installing it on the partition where my old version of Windows was, I installed it on the partition where my Ubuntu OS was. So my question is: is it possible to recover some of the files which I saved on Ubuntu? I've already tried some software, but so far I have no results and I really need these files. Thank you.
Is it possible to recover overwritten files from another operating System on the current operating system?
Your variables are quoted, you don't need to escape the spaces: SYSDIR='/Volumes/Macintosh HD/System/'(etc.) Also (tangentially) related:Are there naming conventions for variables in shell scripts?
I'm trying to move some files using a bash script on a mac during recovery mode. I can successfully move the files by manually entering the command in Terminal after booting into recovery. However, one goal is to document the process and make it portable, so a script is desired. When I try to run the mv command from the script, I get the error No such file or directory. I've confirmed that the command executes correctly when I manually enter the filepaths, but the script continues to error. I've tried using double-quotes(") and dollar signs ($), dollar signs ($) and braces ({}), and a few other combinations for variable evaluation. They all fail with the same error. I've also tried using a trailing slash on the src, globbing (*) on the src, and trailing slash on both the src and dest. I'm totally stumped. It could be that this is specific to MacOS recovery mode, but surely since the prompt announces -bash-3.2, this must be a bash 3.2 shell? Please help - even our senior devs are stumped on this one... Could it have anything to do with an improper parsing? I note that there is a /* on the end of the destination, and I didn't specify that... Script is: #! /bin/bash #_# Setup echo 'Setting up variables' SYSDIR='/Volumes/Macintosh\ HD/System' EXTDIR='Library/Extensions' USERDIR='/Volumes/Macintosh\ HD/Users/admin'#_# Bluetooth echo 'Mitigating Bluetooth' BLUE='IOBluetoothFamily.kext' mkdir -p "$USERDIR/delete_me/$BLUE" mv "$SYSDIR/$EXTDIR/$BLUE"/* "$USERDIR/delete_me/$BLUE"echo '*Verify:* System Preferences/Bluetooth should display an error indicating bluetooth is not available.'Output is: -bash-3.2# ./bluetooth.sh Setting up variables Mitigating Bluetooth mv: rename /Volumes/Macintosh\ HD/System/Library/Extensions/IOBluetoothFamily.kext/* to /Volumes/Macintosh\ HD/Users/admin/delete_me/IOBluetoothFamily.kext/*: No such file or directory *Verify:* System Preferences/Bluetooth should display an error indicating bluetooth is not available. -bash-3.2#
Why is mv failing in this (bash) shell script in bash 3.2 during Mac OS recovery mode?
It's unlikely modifying anything in your home directory will have reset your password. On most distributions it's stored in /etc/shadow by default. To get to a recovery console, the easiest thing to try is to change the init kernel parameter on boot. Assuming you are booting with grub:
- switch on your computer
- at the boot menu press e to edit the boot entry
- you might be asked for your grub password if you have one set
- modify the line starting linux to add init=/bin/bash at the end
- press f10 to boot
This should give you a root command line with / mounted readonly. To make it read-write remount it:
mount -o remount,rw /
You should be able to reset your password with:
passwd <username>
And finally reboot with:
reboot
Unfortunately I made a mistake somewhere but can't figure out where. I had issues with ssh and GitHub, tried different things and they didn't work. Later I decided to (I think) move a file; I believe it was home/username/known.hosts or something like that. Additionally I deleted everything from the folder where the id_25519 and id_25519.pub files were (maybe they are related to GitHub, I don't know). And somewhere in between, or after all of that, I could no longer provide my password in the terminal. I'm 200% sure it's correct; no caps/num lock or other such things involved. And now I can't log in to my Linux Mint Cinnamon 20. I would really appreciate any help on how I can recover my laptop. It starts, but requires the password. And I can't understand how it could have changed.
password to linux changed after deleting/manipulation with ssh
SOLUTION: upon entering the root shell (just type the root password in emergency mode) I edited /etc/fstab. There I removed the bits where it said subvolid=xxx, e.g. from
UUID=xxx-yyy-zzz /home btrfs rw,noatime,compress=zstd:3,ssd,space_cache,commit=120,subvolid=257,subvol=/@home 0 0
to
UUID=xxx-yyy-zzz /home btrfs rw,noatime,compress=zstd:3,ssd,space_cache,commit=120,subvol=/@home 0 0
Save and reboot. Fixed.
Technically, I had typed mount /dev/nvme0n1p2 /home and then I used sudo vim /etc/fstab. But it should be the same result, unless I am gravely mistaken.
USE ALL AT YOUR OWN RISK :) But it worked for me. Many thanks to @Albator78 on the Arch subreddit: https://www.reddit.com/r/archlinux/comments/qhb13t/comment/hieiyyk/?utm_source=reddit&utm_medium=web2x&context=3
I am using btrfs (which seems integral to the question). Upon recovering with timeshift and rebooting, I am encountering the following error [Failed] Failed to mount /home. [Depend] Dependancy failed for Local File Systems You are in emergency mode. After logging in, type "journalctl -xb" to view system logs, "systemctl reboot" to reboot, "systemctl default" or "exit" to default mode.Obviously, Control-D, rebooting and default mode etc. do not work. I tried timeshift --restore and I get the following error.It says "Found stale mount for device /dev/nvme0n1p2 at path /run/timeshfit/837/backup. \n Unmounted successfully. \n E: Failed to remove directory. \n Ret=256". I think the problem is, that it can not mount /dev/nvme0n1p2 to /home. But I am not sure how to fix it. Would really appreciate some help sad Cheers. P.S. here is my /etc/fstab output, when I log in as root (after emergency boot)I have a feeling, that typing mount /dev/nvme0n1p2 /homemight fix it, but I am afraid it might just wipe the drive or something...
Timeshift and btrfs. Recovery unable to mount /home
I had a similar problem; below is my current lsblk. I solved it by removing an fstab entry.
sdb      8:16   0   1.8T  0 disk
├─sdb1   8:17   0    16M  0 part
└─sdb2   8:18   0   1.8T  0 part /mnt/xxx
I had an sdb3 partition which had an fstab entry. I deleted the sdb3 partition without removing the fstab entry, after which I got booted into emergency mode. I fixed it by commenting out the relevant fstab entry in /etc/fstab (you could also delete it):
#PARTUUID=XXXxX-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/winc ntfs defaults 0 0
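For anyone doing the same fix from the emergency shell rather than a live system, the steps are roughly (a sketch; assumes your root filesystem can be remounted read-write):
mount -o remount,rw /
nano /etc/fstab        # or vi; put a '#' in front of the line for the removed/missing partition
systemctl reboot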
I'm new to Linux but I'm going to try my best to describe the problem, please bare with me, if you need more information tell me what I can do to provide more information and I'll get it for you. After booting up my system and opening Steam the entire OS locked up, I could move the mouse but nothing else (including alt+f2) worked. I force rebooted the system and after the seeing the Systemd boot menu I saw a flash of gray from the Gnome login screen right before being dumped back to this screen:Pressing ^d just prints the "You are in emergency mode" speal again. And writing exit results with Failed to starte default target: Transaction for graphical.target /start is destructive I have three drives in my system, the first is an NVME with Windows installed on it (Fastboot has been disabled), the second is a Seagate HDD formatted as NTFS that the Windows and Linux (PopOS) systems share (Windows "automounts" drives on its own, I set the Seagate to auto-mount on boot in Pop via the Gnome disk manager a while back, not recently). And the third drive is a Samsung Evo Sata SSD that Pop boots off of. Booting between the current kernel and the old kernel via the systemd boot menu makes no difference. I can boot into Windows which should rule out hardware failure everywhere except the Samsung Evo (Which is less than a year old). Running cat /etc/fstab from within emergency/maintenance mode provides this:This looks okay to me, but I'm not very familiar with fstab. Booting into PopOS's recovery partition (much like plugging in a flash drive and booting a live OS) and then running fdisk -l yields this:The Seagate drive appears to be mounted as sdaThe Samsung that Pop is booting off of appears to be mounted as sdbfsck appears to believe that the Samsung boot drive is fine. If anyone has any idea what's going on here I could really appreciate the help. If worst comes for worse, would refreshing the OS from the recovery partition fix this issue?
PopOS (Ubuntu) crashed and booted into emergency mode
At the "Finish the installation" step of the d-i graphical Debian installer it runs scripts out of each parts of the installer. After installing the bootloader however it immediately runs this step. The only step left is the "finish-install" step which is here: https://salsa.debian.org/installer-team/finish-install/tree/master/finish-install.d However if for some reason it does ask you before continuing with finishing up the install then it'd be a mess to go through the dozens of scripts and it'd be better to just reinstall with preseed which allows to do most of the install headless with a config file.
I am trying to get Debian installed on a device that doesn't have enough battery capacity to power the display on full brightness for long enough to complete an installation, even while plugged in. I've got it to the stage where I've installed almost everything and it was sitting on the penultimate step of the advanced installer, but I didn't press "Finish installation" before the power ran out. How can I perform the steps that "Finish installation" normally performs in order to get my system working?
How can you complete a Debian installation interrupted just before "Finish installation"?
As argued in the response you link, it depends on your definition of "recover" (i.e. do you still trust a binary after it was potentially changed by anybody on the system, which may be "I guess I'm ok" for your private desktop machine but much much less so on a multi-user company system). Looking at my box, /bin has a manageable number of files that aren't 0755 or symlinks: ls -l /bin/ | grep -v '^-rwxr-xr-x' | grep -v '^l' | wc -l 35and that's mostly because they're setgid or setuid. So in principle, you could chmod 0755 /bin/* and manually fix the permissions on those 30, 40 binaries, if you still have root access. (su and sudo won't work until their proper permissions are restored.) But for practical purposes, that still means you need a "clean" machine somewhere for comparison purposes, but you don't need to mount its drive. (Come to think of it, I don't think wrong permissions on the binaries should stop the package manager from working, so you could conceivably try reinstalling every package that has something in /bin, but probably most everything has a dependency on something in /bin, so you might end up removing and reinstalling all packages.)
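As a concrete illustration of the manual route (a sketch only; the list of setuid binaries varies by distribution, so verify against a known-good machine or your package database before trusting it):
# Find what currently deviates, then reset everything to the common default:
ls -l /bin/ | grep -v '^-rwxr-xr-x' | grep -v '^l'
chmod 0755 /bin/*
# Restore the usual setuid-root bits by hand; these three are typical examples, not an exhaustive list:
chmod 4755 /bin/su /bin/mount /bin/umount
On RPM-based systems, rpm --setperms -a (and rpm --setugids -a) can restore the recorded permissions for all installed packages, which avoids most of the manual work.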
I know that chmod -R 777 / is extremely destructive, and I know that a chmod -R 000 /bin can be recovered by using an additional disk, but I'm wondering about chmod -R 777 /bin. If I have a root shell, but no additional disk mounted from a healthy VM, can I recover this system?(This question is for learning purposes as the actual box is not mine, nor was the mistake, and the box is already planned to be rebuilt.)
Can `chmod -R 777 /bin` be recovered from?
Solved it, answering for anyone that might encounter the same issue. My fstab also had my storage drive in it and I didn't know the system wouldn't boot unless it detected all drives in the file. Make sure to comment out or delete any drives that are not connected.
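A related precaution: for data disks that are not required for booting, you can mark the fstab entry so that a missing or renamed device no longer blocks the boot. A hedged example line (the UUID and mountpoint are placeholders):
UUID=xxxx-xxxx  /mnt/storage  ext4  defaults,nofail,x-systemd.device-timeout=5s  0  2
With nofail, systemd skips the mount if the device is absent instead of dropping to emergency mode, and the short device timeout keeps the boot from stalling.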
I cloned my installation from a 240GB SATA SSD to a 500GB NVMe SSD using Clonezilla. The cloning was completed successfully, except for a failure with initrd (unfortunately I don't remember the actual error). After rebooting I manage to get right before the login screen and then I'm kicked into emergency mode. I already checked /etc/fstab and the UUIDs of the drive are correct. I tried running update-initramfs -u but that didn't help. I'm sorry for not providing more accurate debugging info, but I don't even know where to look for it. If anyone needs some debug logs please tell me where to find them. Thanks.
Stuck in emergency mode after cloning my system with clonezilla
I had the same issue as you a while ago. Here's the command I used in the Windows installer USB command prompt:
Bcdboot C:\Windows /l en-us /s X: /f ALL
Use diskpart to mount and unmount disks. Replace C:\Windows with the Windows folder on your Windows drive, and X: with the letter you assigned to the EFI system partition (the one that holds the grub files).
I recently reinstalled Fedora 35 on a dual-boot with Windows 10. Unfortunately, I think I have accidentally formatted /boot/efi, as hints a tree /boot /boot ├── config-5.14.10-300.fc35.x86_64 ├── config-5.14.16-301.fc35.x86_64 ├── efi │ ├── EFI │ │ ├── BOOT │ │ │ ├── BOOTIA32.EFI │ │ │ ├── BOOTX64.EFI │ │ │ ├── fbia32.efi │ │ │ └── fbx64.efi │ │ └── fedora │ │ ├── BOOTIA32.CSV │ │ ├── BOOTX64.CSV │ │ ├── gcdia32.efi │ │ ├── gcdx64.efi │ │ ├── grub.cfg │ │ ├── grubia32.efi │ │ ├── grubx64.efi │ │ ├── mmia32.efi │ │ ├── mmx64.efi │ │ ├── shim.efi │ │ ├── shimia32.efi │ │ └── shimx64.efi │ ├── mach_kernel │ └── System │ └── Library │ └── CoreServices │ └── SystemVersion.plist ├── extlinux │ ... ├── grub2 │ ├── fonts │ │ └── unicode.pf2 │ ├── grub.cfg │ └── grubenv ├── initramfs-0-rescue-a26e1c2d27044f10ac613e4bc63e9612.img ├── initramfs-5.14.10-300.fc35.x86_64.img ├── initramfs-5.14.16-301.fc35.x86_64.img ├── loader │ └── entries │ ├── a26e1c2d27044f10ac613e4bc63e9612-0-rescue.conf │ ├── a26e1c2d27044f10ac613e4bc63e9612-5.14.10-300.fc35.x86_64.conf │ └── a26e1c2d27044f10ac613e4bc63e9612-5.14.16-301.fc35.x86_64.conf ├── lost+found ├── symvers-5.14.10-300.fc35.x86_64.gz -> /lib/modules/5.14.10-300.fc35.x86_64/symvers.gz ├── symvers-5.14.16-301.fc35.x86_64.gz -> /lib/modules/5.14.16-301.fc35.x86_64/symvers.gz ├── System.map-5.14.10-300.fc35.x86_64 ├── System.map-5.14.16-301.fc35.x86_64 ├── vmlinuz-0-rescue-a26e1c2d27044f10ac613e4bc63e9612 ├── vmlinuz-5.14.10-300.fc35.x86_64 └── vmlinuz-5.14.16-301.fc35.x86_64From my understanding, there should be windows appearing here. The observed consequence is that windows does not appear in grub, and it is not possible to boot windows changing the BIOS priorities. Here is what returns a fdisk -l with root access: Disk /dev/nvme0n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors Disk model: SAMSUNG MZVLB512HBJQ-000L2 Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: gpt Disk identifier: 5B49A787-6CFB-49B4-8F00-73B5F7F8A568Device Start End Sectors Size Type /dev/nvme0n1p1 2048 534527 532480 260M EFI System /dev/nvme0n1p2 534528 567295 32768 16M Microsoft reserved /dev/nvme0n1p3 567296 473878527 473311232 225.7G Microsoft basic data /dev/nvme0n1p4 998166528 1000214527 2048000 1000M Windows recovery environmen /dev/nvme0n1p5 473878528 475975679 2097152 1G Linux filesystem /dev/nvme0n1p6 475975680 998166527 522190848 249G Linux LVMPartition table entries are not in disk order.From my understanding, I can use the windows recovery to try to fix the boot issue. Unfortunately, I don't know how to boot from it. I tried to press various keys at startup (Lenovo S540), as well as to change the boot order in the BIOS. My questions are the following:Do I have an easy way to access the windows recovery from my machine? If not, how can I fix this issue?EDIT: Problem fixed. I created a Windows recovery device. The boot fix did not work natively, so I used the command prompt with BOOTREC /FIXMBR BOOTREC /FIXBOOT BOOTREC /RebuildBcd After that, windows still did not boot, but the automatic boot repair fixed everything. To conclude, I just updated grub following Fedora guidelines, and I am not officially saved.
Restoring windows boot [closed]
/run should be world-accessible, and only writable by root. chmod 755 /runHowever, you may run into more trouble, since whatever caused /run to have wrong permissions may have affected other files. There is no generic way to fix such problems. It depends on what happened. Maybe one of the previous questions on similar problems will be relevant to you. If only /run and its contents is affected, just reboot: /run is an in-memory filesystem and is recreated from scratch at each boot. On the other hand, if other parts of the system are affected, it may be better to repair what you can before rebooting, since the system may have become unbootable.
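If you want to check whether anything else was caught by the same mistake, comparing a few critical paths against their usual modes is a quick first pass (a sketch; the "expected" values are common defaults, not guaranteed on every distribution):
stat -c '%a %n' / /run /tmp /var /var/tmp /etc /usr
# typical output: 755 /, 755 /run, 1777 /tmp, 755 /var, 1777 /var/tmp, 755 /etc, 755 /usr
Anything that deviates wildly from those is worth investigating before the next reboot.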
When running dbus-launch, I get
bash: cd: /run: Permission non accordée. (Permission denied)
I think I have bad permissions. Is there a correct solution?
bash: cd: /run: Permission non accordée (Permission denied)
The Ubuntu partitions will be ext4, so you won't be able to see them on Windows. There is some software you can use to read ext4, but I have never found anything that worked satisfactorily. Your best chance is probably using a Linux live USB drive. Instructions for creating one with Ubuntu can be found here. Boot using the USB stick, and select the option to try Ubuntu without installing. Then you should be able to use Ubuntu's file manager/terminal to copy files from your old disk.
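In practice the copy from the live session looks something like this (a sketch; /dev/sdb5 is a placeholder for your Ubuntu root partition, and the destination path depends on where the live system mounts your backup drive):
lsblk -f                                   # find the ext4 partition from the old install
sudo mount -o ro /dev/sdb5 /mnt            # mount it read-only, just to be safe
cp -a /mnt/home/youruser /media/ubuntu/BACKUP/
sudo umount /mnt
Mounting read-only avoids accidental writes to a disk you are trying to rescue.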
I had a dual-boot system with Windows 10 and Ubuntu 18. Recently the system failed; the motherboard probably got fried by lightning during a thunderstorm and it's not booting anymore: no fan, no lights, totally dead. So I salvaged the hard disk to recover the data from it. I plugged the hard disk into a Windows machine using a SATA-to-USB adapter, but I can only see the Windows file system on it; the Linux partition is not showing, and its files are not showing anywhere else. How can I recover the files I had in Ubuntu? Is there any other way to see hidden partitions? Should I try plugging it directly into a SATA port instead of using the SATA-to-USB adapter? I would have to invest in a new machine to do that, and if it still can't access the data, that investment will not be very fruitful.
Dual boot system failed, ubuntu file system missing during hard disk data recovery
Problems with "READ DMA" and "I/O error" are not problems with scripts, applications or the filesystem, but problems with the actual harddrive. To answer your question: nothing in particular, but your harddrive is failing. I can recommend turning the failing drive off for now, and later take a copy of whatever can be copied to a harddrive that is not failing.
I just recently ran a bash script that froze up my Linux Mint machine, and I had no choice but to hold the power key to perform a hard power-off. Now my machine will not boot. I have no clue if the bash script did something, the hard reset did something, or it is just a coincidence and my hard drive died. I created a bash script as follows:
for file in 4k/*
do
  convert $file -resize 50% 1080p/$file
done
I ran it in the folder ~/Pictures/EastTrip/Finals/. When I ran this script the UI started to become unresponsive. Then I performed the hard reset. At the same time as I ran the script working on the directory "4k", I had the program RawTherapee writing to the directory 4k. I can't help but feel like the file system is broken and not the hard drive. I have entered recovery mode and dropped down to a root shell. When I navigate to the Pictures directory and use the ls command I get the following errors... What have I done?!?! I have like a week's worth of unsaved work :/
Failure to boot after running bash script
For your use case, try the mlockall system call to force a specific process to never be swapped out, thus avoiding the swap-thrashing slowdown. I would recommend earlyoom with custom rules over this hack.
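A hedged example of the earlyoom route (the flag names are taken from the earlyoom manpage; the thresholds and process regexes here are illustrative only):
sudo apt install earlyoom
sudo systemctl edit earlyoom
# then override the service command line in the drop-in, e.g.:
#   [Service]
#   ExecStart=
#   ExecStart=/usr/bin/earlyoom -m 4 -s 10 --avoid '(^|/)(sshd|Xorg|dbus-daemon)$' --prefer '(^|/)(chromium|java)$'
-m and -s set the free-memory and free-swap percentages at which it starts acting, --avoid protects the processes you need for manual intervention, and --prefer nominates likely culprits first.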
I strongly despise any kinds of automatic OOM killers, and would like to resolve such situations manually. So for a long time I have vm.overcommit_memory=1 vm.overcommit_ratio=200But this way, when the memory is overflowed, the system becomes unresponsive. On my old laptop with HDD and 6 GB of RAM, I sometimes had to wait many minutes to switch to a text VT, issue some commands and wait for them to be executed. That's why I have numerous performance indicators to notice such situations beforehand, and often receive questions why would I need them at all. And they don't always help too, because if a memory overflow happened when I wasn't at the laptop, it's too late already. I suspected the situation would be better on a newer laptop with SSD and 12 GB of RAM, but in fact it's even worse. I have zRam with vm.swappiness=200, which allows up to 16.4 GB of compressed swap, and when it's nearly extinguished, the system becomes even more unresponsive than on the old laptop, to the point even VT switch barely works, as well as I cannot SSH into the system from the local network, so my only resort is blindly invoking the kernel's manual OOM with Alt+SysRq+RF, which sometimes chooses to kill important process like dbus-daemon. I might make a daemon with a sound alert when the swap is almost full, but that's a partial stopgap again, as I may not come in time anyway. In the past, I tried to mitigate such situations with thrash-protect. It sends SIGSTOP to greedy processes and then automatically SIGCONT-s them, which helped a lot to postpone the total lockup and resolve the situation manually, but in strong overload conditions, it starts freezing virtually everything (which can be explicitly allowlisted though). And it has a lot of irritating side effects. For example, if a shell is frozen, its child processes may remain frozen after thawing the shell. If two processes share a message bus and one of them is frozen, the messages are rapidly accumulated in the bus, which leads to rapidly growing RAM usage again, or lockups (graphical servers and multi-process browsers are especially prone to this). I tried to run sshd with a -20 priority, like suggested in the similar question, but that doesn't really help: it's as unresponsive as with the default priority. I would like to have some emergency console which is always locked in RAM and is usable regardless of how overloaded the rest of the system is. Something akin to Ctrl+Alt+Del screen in Windows NT≥6, or even better. Given that it's possible to reserve some RAM with the crashkernel parameter, which I use for kdump, I suspect it's possible to exploit this or some other kernel mechanism for the task too?
Is it possible to reserve resources for an always-up emergency console?
Well... After tons of retries, I solved this root filesystem restore problem by using the installation DVD's rescue mode. I discovered that restoring the root filesystem from a dump always conflicts with the running OS, so using rescue mode handles it. I'll make another attempt to see whether there is any possibility of restoring the root filesystem while the OS is running.
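For reference, the rescue-mode restore usually follows this shape (a rough, hedged outline; the LV names and mount points are examples, and recreating the filesystem destroys whatever is on it):
# booted from the RHEL installation DVD in rescue mode, skipping automatic mounting
mkfs.ext4 /dev/vg_root/lv_root            # recreate an empty root filesystem
mount /dev/vg_root/lv_root /mnt/target
mount /dev/sdb1 /mnt/backup               # wherever mybackup.dump lives
cd /mnt/target                            # restore -r extracts into the current directory
restore -rf /mnt/backup/mybackup.dump
The key point is that restore -r runs against a freshly made, mounted filesystem rather than the live root, which is exactly why rescue mode avoids the conflicts seen when the OS is running.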
I'm currently testing backup/restore of a RHEL 6.4 OS via "dump" and "restore" in a testing environment, and I do know that RHEL 6.4 seems very outdated nowadays, but some enterprises are still using this version of RHEL to run their services. Here's the scenario: back up the system and critical programs in case of a host crash/failure event.
- The test RHEL 6.4 host to be backed up uses a Windows Hyper-V VM as infrastructure, and the OS root is installed on an LVM logical volume.
- In order to back up the system, I placed the system into single-user mode and used the following command to back up the root filesystem: dump -0uf /<path_to_a_second_storage_to_store_dump>/mybackup.dump /
- The dump showed "DUMP IS DONE" on screen and the dump file was created with a size of about 2.2GB, therefore I believed that the backup was successful.
- In order to simulate a host crash event, I reinstalled the RHEL 6.4 system using an LVM logical volume and booted the system into single-user mode before restoration. However, after restoring the root filesystem using restore -rf /<path_to_a_second_storage_to_store_dump>/mybackup.dump the screen showed a kernel panic and some other errors, and eventually hung.
I retried several times but always failed. Can anyone give me some hints on why the restoration can't be completed?
failed to restore root filesystem from dump backup
I figured it out: I had to specify a 'protocol' in the filter. I couldn't find much documentation on this; all the examples I could find specified the protocol as 'ip', but since this is a switch, I thought I'd try 'all' and it worked!
tc qdisc add dev eth0 root handle 1:0 htb default 2
tc class add dev eth0 parent 1:0 classid 1:1 htb rate 1mbit ceil 1mbit
tc class add dev eth0 parent 1:0 classid 1:2 htb rate 5mbit ceil 5mbit
tc filter add dev eth0 parent 1:0 protocol all handle 5 fw flowid 1:1
I have a 4 port bridge: root@Linux-Switch:~# brctl show bridge name bridge id STP enabled interfaces br0 8000.000024cd2cb0 no eth0 eth1 eth2 eth3My goal is to limit the upload speed of the eth2 interface. (eth0 is the uplink interface to the upstream switch). I've been trying to do this via tc and iptables. # tried in both the filter table and mangle table iptables -A FORWARD -t mangle -m physdev --physdev-in eth2 -j MARK --set-mark 5 tc qdisc add dev eth0 root handle 1:0 htb default 2 tc class add dev eth0 parent 1:0 classid 1:1 htb rate 1mbit ceil 1mbit tc class add dev eth0 parent 1:0 classid 1:2 htb rate 5mbit ceil 5mbit tc filter add dev eth0 parent 1:0 handle 5 fw flowid 1:1I can see that the iptables rule is matching- root@Linux-Switch:~# iptables -vL -t mangle ...Chain FORWARD (policy ACCEPT 107K packets, 96M bytes) pkts bytes target prot opt in out source destination 38269 11M MARK all -- any any anywhere anywhere PHYSDEV match --physdev-in eth2 MARK set 0x5 ... root@Linux-Switch:~# But the tc config is not reading the fw mark; all traffic in port eth2 is being limited to the 5Mb default, not the 1Mb I'm attempting to configure. root@Linux-Switch:~# tc -s class show dev eth0 class htb 1:1 root prio 0 rate 1000Kbit ceil 1000Kbit burst 100Kb cburst 100Kb Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) rate 0bit 0pps backlog 0b 0p requeues 0 lended: 0 borrowed: 0 giants: 0 tokens: 200000 ctokens: 200000class htb 1:2 root prio 0 rate 5000Kbit ceil 5000Kbit burst 100Kb cburst 100Kb Sent 11465766 bytes 39161 pkt (dropped 0, overlimits 0 requeues 0) rate 6744bit 3pps backlog 0b 0p requeues 0 lended: 39161 borrowed: 0 giants: 0 tokens: 2454400 ctokens: 2454400root@Linux-Switch:~# What am I doing wrong?
tc on bridge port
I think I finally understood how redirecting ingress to IFB is working: +-------+ +------+ +------+ |ingress| |egress| +---------+ |egress| |qdisc +--->qdisc +--->netfilter+--->qdisc | |eth1 | |ifb1 | +---------+ |eth1 | +-------+ +------+ +------+My initial assumption in figure 2, that the ifb device is inserted between ingress eth1 and netfilter and that packets first enter the ingress ifb1 and then exit through egress ifb1 was wrong. In fact redirecting traffic from an interface's ingress or egress to the ifb's egress is done directly by redirecting ("stealing") the packet and directly placing it in the egress of the ifb device. Mirroring/redirecting traffic to the ifb's ingress is currently not supported as also stated in the documentation, at least on my version: root@deb8:~# tc -V tc utility, iproute2-ss140804 root@deb8:~# dpkg -l | grep iproute ii iproute2 3.16.0-2 root@deb8:~# uname -a Linux deb8 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-1 x86_64 GNU/LinuxDocumentation I was able to get this information thanks to the following documentation:linux-ip.net Intermediate Functional Block dev.laptop.org ifb-README people.netfilter.org Linux Traffic Control Classifier-Action Subsystem Architecture PaperDebugging And some debugging using iptables -j LOG and tc filter action simple, which I used to print out messages to syslog when an icmp packet is flowing through the netdevs. The result is as follows: Jun 14 13:02:12 deb8 kernel: [ 4273.341087] simple: tc[eth1]ingress_1 Jun 14 13:02:12 deb8 kernel: [ 4273.341114] simple: tc[ifb1]egress_1 Jun 14 13:02:12 deb8 kernel: [ 4273.341229] ipt[PREROUTING]raw IN=eth1 OUT= MAC=08:00:27:ee:8f:15:08:00:27:89:16:5b:08:00 SRC=10.1.1.3 DST=10.1.1.2 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=53979 DF PROTO=ICMP TYPE=8 CODE=0 ID=1382 SEQ=1 Jun 14 13:02:12 deb8 kernel: [ 4273.341238] ipt[PREROUTING]mangle IN=eth1 OUT= MAC=08:00:27:ee:8f:15:08:00:27:89:16:5b:08:00 SRC=10.1.1.3 DST=10.1.1.2 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=53979 DF PROTO=ICMP TYPE=8 CODE=0 ID=1382 SEQ=1 Jun 14 13:02:12 deb8 kernel: [ 4273.341242] ipt[PREROUTING]nat IN=eth1 OUT= MAC=08:00:27:ee:8f:15:08:00:27:89:16:5b:08:00 SRC=10.1.1.3 DST=10.1.1.2 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=53979 DF PROTO=ICMP TYPE=8 CODE=0 ID=1382 SEQ=1 Jun 14 13:02:12 deb8 kernel: [ 4273.341249] ipt[INPUT]mangle IN=eth1 OUT= MAC=08:00:27:ee:8f:15:08:00:27:89:16:5b:08:00 SRC=10.1.1.3 DST=10.1.1.2 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=53979 DF PROTO=ICMP TYPE=8 CODE=0 ID=1382 SEQ=1 Jun 14 13:02:12 deb8 kernel: [ 4273.341252] ipt[INPUT]filter IN=eth1 OUT= MAC=08:00:27:ee:8f:15:08:00:27:89:16:5b:08:00 SRC=10.1.1.3 DST=10.1.1.2 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=53979 DF PROTO=ICMP TYPE=8 CODE=0 ID=1382 SEQ=1 Jun 14 13:02:12 deb8 kernel: [ 4273.341255] ipt[INPUT]nat IN=eth1 OUT= MAC=08:00:27:ee:8f:15:08:00:27:89:16:5b:08:00 SRC=10.1.1.3 DST=10.1.1.2 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=53979 DF PROTO=ICMP TYPE=8 CODE=0 ID=1382 SEQ=1 Jun 14 13:02:12 deb8 kernel: [ 4273.341267] ipt[OUTPUT]raw IN= OUT=eth1 SRC=10.1.1.2 DST=10.1.1.3 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=37735 PROTO=ICMP TYPE=0 CODE=0 ID=1382 SEQ=1 Jun 14 13:02:12 deb8 kernel: [ 4273.341270] ipt[OUTPUT]mangle IN= OUT=eth1 SRC=10.1.1.2 DST=10.1.1.3 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=37735 PROTO=ICMP TYPE=0 CODE=0 ID=1382 SEQ=1 Jun 14 13:02:12 deb8 kernel: [ 4273.341272] ipt[OUTPUT]filter IN= OUT=eth1 SRC=10.1.1.2 DST=10.1.1.3 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=37735 PROTO=ICMP TYPE=0 CODE=0 ID=1382 SEQ=1 Jun 14 13:02:12 deb8 kernel: [ 4273.341274] 
ipt[POSTROUTING]mangle IN= OUT=eth1 SRC=10.1.1.2 DST=10.1.1.3 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=37735 PROTO=ICMP TYPE=0 CODE=0 ID=1382 SEQ=1 Jun 14 13:02:12 deb8 kernel: [ 4273.341278] simple: tc[eth1]egress_1 Jun 14 13:02:12 deb8 kernel: [ 4273.341280] simple: tc[ifb0]egress_1The debugging was done using the following settings: iptables -F -t filter iptables -F -t nat iptables -F -t mangle iptables -F -t raw iptables -A PREROUTING -t raw -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[PREROUTING]raw ' iptables -A PREROUTING -t mangle -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[PREROUTING]mangle ' iptables -A PREROUTING -t nat -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[PREROUTING]nat ' iptables -A INPUT -t mangle -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[INPUT]mangle ' iptables -A INPUT -t filter -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[INPUT]filter ' iptables -A INPUT -t nat -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[INPUT]nat ' iptables -A FORWARD -t mangle -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[FORWARD]mangle ' iptables -A FORWARD -t filter -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[FORWARD]filter ' iptables -A OUTPUT -t raw -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[OUTPUT]raw ' iptables -A OUTPUT -t mangle -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[OUTPUT]mangle ' iptables -A OUTPUT -t nat -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[OUTPUT]nat ' iptables -A OUTPUT -t filter -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[OUTPUT]filter ' iptables -A POSTROUTING -t mangle -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[POSTROUTING]mangle ' iptables -A POSTROUTING -t nat -p icmp --icmp-type 8 -j LOG --log-level 7 --log-prefix 'ipt[POSTROUTING]nat ' iptables -A PREROUTING -t raw -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[PREROUTING]raw ' iptables -A PREROUTING -t mangle -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[PREROUTING]mangle ' iptables -A PREROUTING -t nat -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[PREROUTING]nat ' iptables -A INPUT -t mangle -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[INPUT]mangle ' iptables -A INPUT -t filter -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[INPUT]filter ' iptables -A INPUT -t nat -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[INPUT]nat ' iptables -A FORWARD -t mangle -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[FORWARD]mangle ' iptables -A FORWARD -t filter -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[FORWARD]filter ' iptables -A OUTPUT -t raw -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[OUTPUT]raw ' iptables -A OUTPUT -t mangle -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[OUTPUT]mangle ' iptables -A OUTPUT -t nat -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[OUTPUT]nat ' iptables -A OUTPUT -t filter -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[OUTPUT]filter ' iptables -A POSTROUTING -t mangle -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[POSTROUTING]mangle ' iptables -A POSTROUTING -t nat -p icmp --icmp-type 0 -j LOG --log-level 7 --log-prefix 'ipt[POSTROUTING]nat 'export TC="/sbin/tc"$TC qdisc del dev eth1 root $TC qdisc del dev eth1 ingress ip link set dev ifb0 down ip link set dev ifb1 down $TC qdisc del dev ifb0 root 
$TC qdisc del dev ifb1 root rmmod ifbmodprobe ifb numifbs=2$TC qdisc add dev ifb0 root handle 1: htb default 2 $TC class add dev ifb0 parent 1: classid 1:1 htb rate 2Mbit $TC class add dev ifb0 parent 1: classid 1:2 htb rate 10Mbit $TC filter add dev ifb0 parent 1: protocol ip prio 1 u32 \ match ip protocol 1 0xff flowid 1:1 \ action simple "tc[ifb0]egress" $TC qdisc add dev ifb0 ingress $TC filter add dev ifb0 parent ffff: protocol ip prio 1 u32 \ match ip protocol 1 0xff \ action simple "tc[ifb0]ingress"$TC qdisc add dev ifb1 root handle 1: htb default 2 $TC class add dev ifb1 parent 1: classid 1:1 htb rate 2Mbit $TC class add dev ifb1 parent 1: classid 1:2 htb rate 10Mbit $TC filter add dev ifb1 parent 1: protocol ip prio 1 u32 \ match ip protocol 1 0xff flowid 1:1 \ action simple "tc[ifb1]egress" $TC qdisc add dev ifb1 ingress $TC filter add dev ifb1 parent ffff: protocol ip prio 1 u32 \ match ip protocol 1 0xff \ action simple "tc[ifb1]ingress"ip link set dev ifb0 up ip link set dev ifb1 up$TC qdisc add dev eth1 root handle 1: htb default 2 $TC class add dev eth1 parent 1: classid 1:1 htb rate 2Mbit $TC class add dev eth1 parent 1: classid 1:2 htb rate 10Mbit $TC filter add dev eth1 parent 1: protocol ip prio 1 u32 \ match ip protocol 1 0xff flowid 1:1 \ action simple "tc[eth1]egress" pipe \ action mirred egress redirect dev ifb0 $TC qdisc add dev eth1 ingress $TC filter add dev eth1 parent ffff: protocol ip prio 1 u32 \ match ip protocol 1 0xff \ action simple "tc[eth1]ingress" pipe \ action mirred egress redirect dev ifb1
I would like to know the exact position of the following device in the packet flow for ingress traffic shaping:IFB: Intermediate Functional BlockI would like to better understand how packets are flowing to this device and exactly when this happens to understand what methods for filtering / classification can be used of the following:tc filter ... u32 ... iptables ... -j MARK --set-mark ... iptables ... -j CLASSIFY --set-class ...It seems hard to find documentation on this topic, any help where to find official documentation would be greatly appreciated as well. Documentation as far as I know:tc: tldp.org HOWTO, lartc.org HOWTO ifb: linuxfoundation.org, tc-mirred manpage, wiki.gentoo.org netfilter packet flow: kernel_flow, docum.org kptdFrom the known documentation I interpret the following: Basic traffic control figure 1 +-------+ +------+ |ingress| +---------+ |egress| |qdisc +--->netfilter+--->qdisc | |eth0 | +---------+ |eth0 | +-------+ +------+IFB? tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 action mirred egress redirect dev ifb0 will result in? figure 2 +-------+ +-------+ +------+ +------+ |ingress| |ingress| |egress| +---------+ |egress| |qdisc +--->qdisc +--->qdisc +--->netfilter+--->qdisc | |eth0 | |ifb0 | |ifb0 | +---------+ |eth0 | +-------+ +-------+ +------+ +------+
How is the IFB device positioned in the packet flow of the Linux kernel
You cannot limit incoming traffic on the destination machine because it has already arrived. To properly do what you want to do, you need to put tc onto your gateway. This is probably not an option for you, but it's the way. Ingress traffic can only be policed, in that it discards packets that exceed the speed limit. This is inefficient because you now take more bandwidth later to receive the same packet again. This works somewhat roughly because TCP is designed to handle traffic loss by slowing down when packets are lost, but you end up constantly going slower and faster as TCP scales which your most recent comment shows you are experiencing. However, there is a way to make your system into a gateway for itself by shoving an 'Intermediate Functional Block Device' into your network pathway. I suggest reading up on it and then trying that for inbound rate limiting. See this 'Theory' discussion re INGRESS / EGRESS shaping / policing on the Gentoo site.
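For the self-gateway approach mentioned above, the wiring is roughly the following (a hedged sketch; eth0 and the 2mbit figure are placeholders, and the tbf parameters should be tuned to your link):
modprobe ifb numifbs=1
ip link set dev ifb0 up
# send everything arriving on eth0 through ifb0 first
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb0
# now a normal egress shaper on ifb0 effectively shapes eth0's *inbound* traffic
tc qdisc add dev ifb0 root tbf rate 2mbit burst 32k latency 400ms
This queues and delays packets instead of discarding them outright, so TCP settles near the configured rate rather than oscillating the way it does under a plain ingress policer.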
Based on this section of the Linux Advanced Routing & Traffic Control HOWTO, I can't get tc to limit the network speed in my computer. The router is a Motorola SurfBoard modem with a few routing capabilities and firewall. The machine I want to limit the traffic is 192.168.0.5, and also the script is being run from 192.168.0.5. Here is my adaption of the commands on the link above for /etc/NetworkManager/dispatcher.d/: #!/bin/sh -eu# clear any previous queuing disciplines (qdisc) tc qdisc del dev wlan0 root 2>/dev/null ||:# add a cbq qdisc; see `man tc-cbq' for details if [ $2 = up ]; then # set to a 3mbit interface for more precise calculations tc qdisc add dev wlan0 root handle 1: cbq avpkt 1000 \ bandwidth 3mbit # leave 30KB (240kbps) to other machines in the network tc class add dev wlan0 parent 1: classid 1:1 cbq \ rate 2832kbit allot 1500 prio 5 bounded isolated # redirect all traffic on 192.168.0.5 to the previous class tc filter add dev wlan0 parent 1: protocol ip prio 16 \ u32 match ip dst 192.168.0.5 flowid 1:1 # change the hashing algorithm every 10s to avoid collisions tc qdisc add dev wlan0 parent 1:1 sfq perturb 10 fiThe problem is that I have tried setting 2832kbit to very small values for testing (like 16kbit), but I still can browse the web at high speed. The problem is not in NetworkManager, because I'm testing the script manually. EDIT: I have found that by changing dst 192.168.0.5 to src 192.168.0.5, the upload speed is reliably limited, but I still haven't figured how to get the download speed to work, which is the most important for me.
Can't get tc to limit network traffic
The solution you found was correct:
iptables -A OUTPUT -m limit --limit 10/s -j ACCEPT
But it is assuming a default policy of DROP or REJECT, which is not usual for OUTPUT. You need to add:
iptables -A OUTPUT -j REJECT
Be sure to add this rule after the ACCEPT one. Either execute them in this order, or use -I instead of -A for the ACCEPT. Also, depending on the application this might kill the connection. In that case try with DROP instead of REJECT, or try with a different --reject-with (the default is icmp-port-unreachable). I just tested with telnet against a DVR server and it didn't kill the connection. Of course, since a new connection is an output packet, trying to reconnect right after hitting the limit will fail right away if you use REJECT. I gather from the comments that your ISP also expects you to limit your INPUT packets... you cannot do this. By the time you are able to stop them they've already reached your NIC, which means they were already accounted for by your ISP. The INPUT packet count will also increase considerably when you limit your OUTPUT, because most of the ACKs won't make it out, causing lots of retransmissions. 10 packets per second is insane.
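Put together, and with an explicit burst so short spikes are tolerated, the ruleset might look like this (a hedged sketch; --limit-burst 10 is an assumption, tune it to whatever your ISP actually tolerates):
iptables -A OUTPUT -m limit --limit 10/s --limit-burst 10 -j ACCEPT
iptables -A OUTPUT -j DROP
DROP is used here instead of REJECT so that a throttled burst degrades into retransmissions rather than immediate connection errors, per the trade-off described above.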
I have a packet rate limit (max. 10 per second) which is set by my internet provider. This is a problem if I want to use the AceStream player, because if I exceed the limit I get disconnected. How can I restrict the internet access of this program? I tried the suggested command:
iptables -A OUTPUT -m limit --limit 10/s -j ACCEPT
but I get a fatal error message:
FATAL: Error inserting ip_tables (/lib/modules/3.2.0-67-generic/kernel/net/ipv4/netfilter/ip_tables.ko): Operation not permitted
iptables v1.4.12: can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
With administrator rights:
sudo iptables -A OUTPUT -m limit --limit 10/s -j ACCEPT
there is no error message anymore, but it is still not working: I get disconnected. Is there an error in the command line? Or do I have to use other arguments of iptables? Below is the actual message that I get when I exceed the provider's limits. Up to now I have tried different approaches, but none of them worked.
sudo iptables -A INPUT -p tcp --syn --dport 8621 -m connlimit --connlimit-above 10 --connlimit-mask 32 -j REJECT --reject-with tcp-reset
sudo iptables -A INPUT -m state --state RELATED,ESTABLISHED -m limit --limit 9/second --limit-burst 10 -j ACCEPT
sudo iptables -A INPUT -p tcp --destination-port 8621 --syn -m state --state NEW -m limit --limit 9/s --limit-burst 10 -j ACCEPT
None of these let me keep using the application, so I posted another question: set connection limit via iptables.
set packet rate limit via iptables
According to the Packet flow in Netfilter and General Networking schematic, tcpdump captures (AF_PACKET) after egress (qdisc). So it's normal you don't see the delay in tcpdump: the delay was already present at initial capture. You'd have to capture it one step earlier, so involve a 3rd system: S1: system1, runs tcpdump on outgoing interface R: router (or bridge, at your convenience, this changes nothing), runs the qdisc netem S2: system2, runs tcpdump on incoming interface __________________ ________________ __________________ | | | | | | | (S1) -- tcpdump -+---+- (R) -- netem -+---+- tcpdump -- (S2) | |__________________| |________________| |__________________|That means 3 network stacks involved, be they real, vm, network namespace (including ip netns, LXC, ...)Optionally, It's also possible to cheat and move every special settings on the router (or bridge) by using an IFB interface with mirred traffic: allows by a trick (dedicated for this case) to insert netem sort-of-after ingress rather than on egress: _______ ______________________________________________ _______ | | | | | | | (S1) -+---+- tcpdump -- ifb0 -- netem -- (R) -- tcpdump -+---+- (S2) | |_______| |______________________________________________| |_______|There's a basic IFB usage example in tc mirred manpage:Using an ifb interface, it is possible to send ingress traffic through an instance of sfq: # modprobe ifb # ip link set ifb0 up # tc qdisc add dev ifb0 root sfq # tc qdisc add dev eth0 handle ffff: ingress # tc filter add dev eth0 parent ffff: u32 \ match u32 0 0 \ action mirred egress redirect dev ifb0Just use netem on ifb0 instead of sfq (and in non-initial network namespace, ip link add name ifbX type ifb works fine, without modprobe). This still requires 3 network stacks for proper working.using NFLOG After a suggestion from JenyaKh, it turns out it's possible to capture a packet with tcpdump, before egress (thus before qdisc) and then on egress (after qdisc): by using iptables (or nftables) to log full packets to the netlink log infrastructure, and still reading them with tcpdump, then again using tcpdump on the egress interface. This requires only settings on S1 (and doesn't need a router/bridge anymore). So with iptables on S1, something like: iptables -A OUTPUT -o eth0 -j NFLOG --nflog-group 1Specific filters should probably be added to match the test done, because tcpdump filter is limited on nflog interface (wireshark should handle it better). If the answer capture is needed (here done in a different group, thus requiring an additional tcpdump): iptables -A INPUT -i eth0 -j NFLOG --nflog-group 2Depending on needs it's also possible to move them to raw/OUTPUT and raw/PREROUTING instead. With tcpdump: # tcpdump -i nflog:1 -n -tt ...If a different group (= 2) was used for input: # tcpdump -i nflog:2 -n -tt ...Then at the same time, as usual: # tcpdump -i eth0 -n -tt ...
I have two linux containers connected with a veth-pair. At veth-interface of one container I set up tc qdisc netem delay and send traffic from it to the other container. If I watch traffic on both sides using tcpdump/wireshark it can be seen that timestamps of the same packet at sender and receiver do not differ by selected delay. I wanted to understand more in detail at which point libpcap puts timestamps to egress traffic corresponding to tc qdisc. I searched for a scheme/image on Internet but did not find. I found this topic (wireshark packet capture point) but it advises to introduce an indirection by having one more container/interface. This is not a possible solution in my situation. Is there any way to solve the problem not introducing additional intermediate interfaces (that is, not changing topology) and only by recording at the already given veth-interface but in such a way that the delay can be seen? UPDATE: I was too quick and so got mistaken. Neither my solution present below (same as the first variant of solution of the answer of @A.B), nor the solution with IFB of @A.B (I have already checked) solve my problem. The problem is with overflow of transmit queue of interface a1-eth0 of sender in the topology: [a1-br0 ---3Gbps---a1-eth0]---100Mbps---r1---100Mbps---r2I was too quick and checked only for delay 10ms at link between a1-eth0 and router r1. Today I tried to make the delay higher: 100ms, 200ms and the results (per-packet delay and rate graphs which I get) start to differ for the topology above and for the normal topology: [a1-eth0]---100Mbps---r1---100Mbps---r2So no, certainly, for accurate testing I cannot have extra links: nor introduced by Linux bridge, nor by this IFB, nor by any other third system. I test congestion control schemes. And I want to do it in a specific topology. And I cannot change the topology just for the sake of plotting -- I mean if at the same time my rate and delay results/plots get changed. UPDATE 2: So it looks like the solution has been found, as it can be seen below (NFLOG solution). UPDATE 3: Here are described some disadvantages of NFLOG solution (big Link-Layer headers and wrong TCP checksums for egress TCP packets with zero payload) and proposed a better solution with NFQUEUE which does not have any of these problems: TCP checksum wrong for zero length egress packets (captured with iptables). However, for my tasks (testing of congestion control schemes) neither NFLOG, nor NFQUEUE are suitable. As it is explained by the same link, sending rate gets throttled when packets get captured from kernel's iptables (this is how I understand it). So when you record at sender by capturing from interface (i.e., regularly) you get 2 Gigabytes dump, while if you record at sender by capturing from iptables you get 1 Gigabyte dump. Roughly speaking. UPDATE 4: Finally, in my project I use Linux bridge solution described in my own answer bewow.
Tc qdisc delay not seen in tcpdump recording
Whenever conntrack is in use, mainly for:
- stateful firewalling (-m conntrack ...)
- NAT (-t nat ...)
an additional hidden facility gets loaded, provided by the kernel modules nf_defrag_ipv4 and nf_defrag_ipv6. This facility hooks into network prerouting at priority -400: that's before iptables' raw table, which hooks at priority -300. After a packet traverses nf_defrag_ipv[46] no fragment ever exists: packets were reassembled early. The goal is that the various protocol inspectors in Netfilter and iptables can get all of the packet contents, including for example the UDP destination port: this information is present only in the first fragment. So to avoid this, the -j NOTRACK (obsoleted by -j CT --notrack) in the raw table is not enough. One can:
- never use conntrack directly (stateful rules) or indirectly (NAT),
- or create a new network namespace to directly handle traffic (most likely using a stolen physical interface, or a macvlan interface, or being bridged but not routed by the host), making sure that no stateful rule happens in this namespace. The defragmentation facility doesn't hook in the network namespace as long as nothing forces it to do so (and probably also with a recent enough kernel),
- or have a chain that hooks before priority -400. This is actually possible in recent enough kernels:
for iptables-legacy, since kernel (probably) >= 4.16:
# modinfo -p iptable_raw
raw_before_defrag:Enable raw table before defrag (bool)
Flush the raw table, unload the module and reload it with (and adjust /etc/modprobe.d/):
modprobe iptable_raw raw_before_defrag=1
for nftables, just create chains with priority lower than -400, eg:
nft add table ip handlefrag
nft add chain ip handlefrag predefrag '{ type filter hook prerouting priority -450; policy accept; }'
nft add rule ip handlefrag predefrag ip 'frag-off & 0x3fff != 0' notrack
(to deal only with following fragments, not the first, replace 0x3fff with 0x1fff) For IPv6 the method is different, because the fragment header might not be the next header. But nft provides an easy expression in its man: exthdr frag exists to detect packets that are fragments.
Nothing exists for the iptables-nft API (which is the default on many distributions like Debian): it doesn't use the module iptable_raw and doesn't have an option to create the actual nftables chain at priority -450. So if your command's output looks like this:
# iptables -V
iptables v1.8.7 (nf_tables)
you can't use this alone along with conntrack. You must revert to iptables-legacy or switch to nft, or...
What can still be done is to mix in nftables (rules above) to tag the packets as notrack before they are defragmented, and continue with iptables to handle the remaining part. There's no issue in using nftables and iptables at the same time, as long as one understands the order of operations as seen in the schematic of Netfilter linked by OP.
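For the iptables-legacy option, the persistent form of the module parameter is just a modprobe drop-in (a sketch; the file name is arbitrary):
# /etc/modprobe.d/iptable_raw.conf
options iptable_raw raw_before_defrag=1
# then, to apply without rebooting (this flushes your raw table rules):
iptables -t raw -F
rmmod iptable_raw
modprobe iptable_raw
After that, rules in raw/PREROUTING run before the defragmentation hook, so a -j CT --notrack rule there can exempt fragments from reassembly, which is the point of the option.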
I want to handle ip fragments in user-space, and I am using the iptables NF_QUEUE to direct packets to user-space. The problem is that IPv4 packets are always re-assembled and delivered as one packet rather than individual fragments. For IPv6, fragments are delivered as they should. I thought that the conntracker might be causing it and disabled it in the raw iptables table, but it turns out that the packet is already re-assembled when it reaches the raw table: # iptables -t raw -nvL Chain PREROUTING (policy ACCEPT 58 packets, 62981 bytes) pkts bytes target prot opt in out source destination 1 30028 CT all -- * * 0.0.0.0/0 10.0.0.0/24 NOTRACKThis is when sending a 30000 byte UDP packet over IPv4. The corresponding for IPv6: # ip6tables -t raw -nvL Chain PREROUTING (policy ACCEPT 46 packets, 62304 bytes) pkts bytes target prot opt in out source destination 21 31016 CT all * * ::/0 1000:: NOTRACKThis is in a virtual environment kvm/qemu with virtio network devices, mtu=1500. Some HW offload does not seem to cause this, since I can see all IPv4 fragments with tcpdump -ni eth2 host 10.0.0.0. So my question is what in the Linux kernel can force IPv4 packets to be re-assembled before the raw/PREROUTING netfilter chain? I suspect "ingress/qdisc" as it is in between AF_PACKET (tcpdump) and the raw/PREROUTING chain, but I can't find the problem. Packet flow: https://upload.wikimedia.org/wikipedia/commons/3/37/Netfilter-packet-flow.svg
Unwanted defragmentation of forwarded ipv4 packets
The kernel's default reserved handle 0: can't be referenced correctly (as major value 0: ). You have first to (re)install the qdisc root mq, using a valid handle (ie: not 0:): # tc qdisc add dev eth2 root handle 1: mqWhich now should give you this: # tc qdisc show dev eth2 qdisc mq 1: root qdisc pfifo_fast 0: parent 1:8 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 qdisc pfifo_fast 0: parent 1:7 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 qdisc pfifo_fast 0: parent 1:6 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 qdisc pfifo_fast 0: parent 1:5 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 qdisc pfifo_fast 0: parent 1:4 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 qdisc pfifo_fast 0: parent 1:3 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 qdisc pfifo_fast 0: parent 1:2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 qdisc pfifo_fast 0: parent 1:1 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1You can now run your commands as intended using parent 1:1 instead of 0:1 etc.
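With the root mq reinstalled under handle 1:, the per-queue grafts from the question should now be accepted, along the lines of (a sketch reusing the question's fq_codel parameters; repeat for each TX queue class 1:1 through 1:8):
tc qdisc replace dev eth2 parent 1:1 fq_codel target 1ms interval 10ms quantum 9014
tc qdisc replace dev eth2 parent 1:2 fq_codel target 1ms interval 10ms quantum 9014
# ...
tc qdisc show dev eth2     # should now list fq_codel instead of pfifo_fast under each 1:N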
This LWN article suggests that one may add/replace the network scheduler for a queue "under" the mq "dummy scheduler." These two point to that end: The mq scheduler does two things:- present device TX queues as classes, allowing to attach different qdiscs to them, which are grafted to the TX queues- present accumulated statistics of all device queue root qdiscsI would appreciate being schooled on how to do this. I've tried many combinations. For example, from this listing of the default (CentOS 7.6): # tc qdisc show dev eth2 qdisc mq 0: root qdisc pfifo_fast 0: parent :8 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 qdisc pfifo_fast 0: parent :7 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 qdisc pfifo_fast 0: parent :6 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 qdisc pfifo_fast 0: parent :5 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 qdisc pfifo_fast 0: parent :4 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 qdisc pfifo_fast 0: parent :3 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 qdisc pfifo_fast 0: parent :2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 qdisc pfifo_fast 0: parent :1 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1I've tried many variations trying to experiment with grafting different schedulers under mq. Here are some attempts: # tc qdisc add dev eth2 parent 0:1 fq_codel target 1ms interval 10ms quantum 9014 RTNETLINK answers: No such file or directory # tc qdisc add dev eth2 parent 0:1 handle 1: fq_codel target 1ms interval 10ms quantum 9014 RTNETLINK answers: No such file or directoryWould anyone know the magic to put different schedulers under mq than only pfifo_fast? One point that is highly frustrating is that the manual page, and many internet articles, reference root and parents regarding the schedulers and queues. However, none do an adequate job in describing, from the output I have above from the tc qdisc show dev eth2 command, what is the root and which are the parents. I'm guessing, but my guesses seem to be far off.
Adding qdisc under the mq top-level qdisc
You need to pick a class aware qdisc like HFSC or HTB. Then you'll have to build a class tree like this:
Root Class (10MBit)
 |
 \--- XyZ Class (rate 3Mbit ceil 3Mbit)
 |     |
 |     \--- Client 10 (rate 1.5Mbit ceil 3Mbit)
 |     \--- Client 11 (rate 1.5Mbit ceil 3Mbit)
 |
 \--- Client 30 (rate 3.5Mbit ceil 10Mbit)
 \--- Client 40 (rate 3.5Mbit ceil 10Mbit)
And that on both interfaces (for upload and download shaping). With HTB, to get predictable results you should make sure that the sum of the children is always equal to the parent. So Root has 10Mbit, and its direct children add up to it (XyZ 3Mbit + Client30 3.5Mbit + Client40 3.5Mbit == 10Mbit). Likewise XyZ has 3Mbit and its children Client10+Client11. Years and years ago I wrote a script that did something similar: https://github.com/frostschutz/FairNAT It's unmaintained today, but maybe it can give you some ideas anyway. Traffic shaping in Linux was a bit of a neglected/esoteric field, hard to find good documentation too. Not sure if that ever changed... There is http://lartc.org/ (ignore the wondershaper part) and the Kernel Packet Traveling Diagram http://www.docum.org/docum.org/kptd/ (also the FAQ) Or if that's all too complicated, maybe a stateless qdisc like ESFQ will do the trick for you. It tries to achieve some kind of equilibrium between clients without actually applying any hard bandwidth limits. Good luck.
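As a rough sketch of what that tree could look like in tc commands (assumptions: eth1 is the LAN-facing interface, so this shapes downloads; the client IP addresses are placeholders in the router's 10.239.107.0/24 subnet; the same structure would be repeated on eth0 with match ip src for uploads):
tc qdisc add dev eth1 root handle 1: htb default 40
tc class add dev eth1 parent 1: classid 1:1 htb rate 10mbit ceil 10mbit       # root class
tc class add dev eth1 parent 1:1 classid 1:10 htb rate 3mbit ceil 3mbit       # XyZ group
tc class add dev eth1 parent 1:10 classid 1:11 htb rate 1.5mbit ceil 3mbit    # client 10
tc class add dev eth1 parent 1:10 classid 1:12 htb rate 1.5mbit ceil 3mbit    # client 11
tc class add dev eth1 parent 1:1 classid 1:30 htb rate 3.5mbit ceil 10mbit    # client 30
tc class add dev eth1 parent 1:1 classid 1:40 htb rate 3.5mbit ceil 10mbit    # client 40, also the default class here
tc filter add dev eth1 parent 1: protocol ip u32 match ip dst 10.239.107.10/32 flowid 1:11
tc filter add dev eth1 parent 1: protocol ip u32 match ip dst 10.239.107.11/32 flowid 1:12
tc filter add dev eth1 parent 1: protocol ip u32 match ip dst 10.239.107.30/32 flowid 1:30
# anything unmatched falls into 1:40 because of "default 40" above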
We have, say, 4 users in a private network connected to the Internet through a Linux router with a public IP address that is doing network address translation (NAT). I have to configure QoS to give the users access to the Internet, but with throttled bandwidth for 2 users and no limitations for the others.
eth0: 121.51.26.35
eth1: 10.239.107.1
eth0 of the Linux router is a 10Mbps link. eth1 is connected to a switch and 4 nodes are connected to the switch. I want to configure tc to throttle the bandwidth of 2 nodes only, i.e. a group of users (XyZ in the picture), to use only 3Mbps cumulatively. (When 1 user is downloading/uploading, he/she must get 3Mbps, but when several users are downloading/uploading simultaneously they must share it, e.g. 1Mbps each.) First, please let me know whether the requirement is achievable; if yes, how shall I proceed? Below is the topology
How to configure QoS per IP basis?
The limit_rate setting of nginx seems to overcome some of the issues in squid and varnish as recommended by other responders. From the docs:Limits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per a request, and so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit.For my scenario, where I'm looking to limit the rate of download bytes transferred for large files for individual requests without limiting the overall bandwidth for a client, this is exactly what I need. Squid Squid's delay pools group clients (usually by IP) and use a bucketed rate limiting. However even the docs say:You can not limit a single HTTP request's connection speed.Varnish Varnish's vmod_vsthrottle (and similarly libvmod-throttle) works off a token bucket algorithm and takes arbitrary keys. The implementation seems very cool, but it looks like there is not a good way to slow down traffic. Instead requests above a limit (in req/s) are responded to with something like a 429 .
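To illustrate, a minimal (hypothetical) nginx location for the large-file case might look like this; limit_rate and limit_rate_after are standard nginx directives, while the path and numbers are made up:
location /big-objects/ {
    limit_rate_after 1m;   # let the first megabyte go at full speed
    limit_rate       500k; # then cap this particular response at ~500 kB/s
}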
Looking at tools like tc, wondershaper, htb and comcast, all these tools seem to operate on the level of a network interface or at least a "connection group" for limiting bandwidth. I'd like to not throttle bandwidth for a group of connections, but instead throttle the max rate of individual connections. Specifically: Is there a tool available that I can use to shape the max download rate of individual HTTP requests?
Details
What I'm looking to do is emulate slow requests to fetch from buckets on S3. I'm seeing that for requests that are away from a data center, download of an individual item is usually slow (<500 kb/s) but downloading in parallel yields download speeds >5 mb/s. I can probably get part of the way there by adding latency in these requests (which slows down the throughput of serial requests but not overall bandwidth), but a more direct solution would be great.
Limit bandwidth of individual HTTP requests while not throttling total bandwidth
I finally found the solution I was looking for. iptables has a rateest module which does exactly that. For example:
# collect all ingress traffic into the RATEEST target
iptables -A INPUT -j RATEEST --rateest-name RE1 --rateest-interval 500.0ms --rateest-ewmalog 1s
# create a log entry (jump to the LOG target) if the rate is greater than 10Mbps
iptables -A INPUT -m rateest --rateest RE1 --rateest-gt --rateest-bps 10Mbps -j LOG --log-prefix "Ingress bandwidth >10Mbps "
I would like to log a warning message to /var/log/messages file if either ingress or egress bandwidth on eth0 interface is over a certain threshold. I could do this with a script which reads the value of /sys/devices/virtual/net/eth0/statistics/[rt]x_bytes file, stores the value, sleeps a second, reads those very same values again, calculates the amount of bits per second sent, compares the result with certain threshold and if higher, writes a message to /var/log/messages file. However, is there a smarter method? I mean for example with iptables or tc which could create a log message in case certain bandwidth threshold on interface is exceeded?
monitor bandwidth on interface with tc or iptables
Old post, but for reference, it wouldn't work for a few reasons:
- The priority should be 16 and not 1
- The filter handle should be 800::800 and not 800:800
- You must supply the parent qdisc that the filter is attached to
This should work:
tc filter del dev peth1 parent 1: handle 800::800 prio 16 protocol ip
How can I remove a single filter? tc filter show dev peth1shows filter parent 1: protocol ip pref 16 u32 filter parent 1: protocol ip pref 16 u32 fh 800: ht divisor 1 filter parent 1: protocol ip pref 16 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:2 match 5bd6aaf9/ffffffff at 12Why does that not work?: tc filter del dev peth1 pref 1 protocol ip handle 800:800 u32
remove tc filter (Traffic Shaping)
No it doesn't affect other interfaces. But the routing involved makes that any access from the server to itself stays local and uses the lo (loopback) interface whatever interface the IP address was assigned to. So lo is affected by tc ... netem. You can verify this with the ip route get command, which will give something similar to this for your case: $ ip route get 192.168.0.1 local 192.168.0.1 dev lo table local src 192.168.0.1 uid 1000 cache <local> As you can see the lo interface is used. There is no reason to disable this behavior. If you somehow manage to emit a packet to 192.168.0.1 through eth1 guess what: you won't receive it on the interface it was emitted from, so it will be lost. You could imagine further a whole convoluted setup, including configuring the switch port where eth1 is plugged to send back to its emitter traffic it received, but sooner or later this will defeat the original purpose of the experiment and the experiment won't work as intended anymore. To do correct tests when dealing with networking one should always separate local tests and remote tests. If no remote system is available, this can be simulated by using an other network namespace which has its own separate complete network stack. Below is an example (that will not be affected by OP's tc ... netem). As root user: ip netns add remote ip link add name vethtest up type veth peer netns remote name veth0 ip address add 192.0.2.1/24 dev vethtest ip -n remote link set dev veth0 up ip -n remote address add 192.0.2.2/24 dev veth0 ip netns exec remote ping 192.0.2.1There are ways to reuse the original interface but this would require some disruption in configuration.
I'm trying to modify the network behaviour of my server(s), to simulate external/WAN connection behaviours (what ever that means). After doing tc qdisc add dev lo root netem delay 100ms, I can successfully add 100ms delay to all traffic from (and to?) 127.0.0.1. E.g. ping 127.0.0.1 will have 200ms response time. However, this also affect my traffic to other interfaces. For example, I have interface eth1 with IP address 192.168.0.1 on the current server A. If I do ping 192.168.0.1, it will also have the 100ms delay (resulting in 200ms response time). I'm confused by this behaviour. I would expect lo has nothing to do with eth1, but it seems not to be the case. I assume this means Linux kernel automatically identifies 192.168.0.1 is a local address, and re-routes all traffic (originally to eth1) to lo? And if so, is there a way to disable this behaviour?Background: I would like to simulate external network behaviour even when processes on server A want to communicate to each other (through TCP/IP on the given local addresses and ports, of course). Essentially I want to add delay to eth1, but that's above this question (see my other question). My servers are running Ubuntu 18.04, but I believe that does not matter.
Why does tc-netem on loopback also affects other interfaces?
I am not quite sure about the wlan interface you are using, but I guess you are missing the virtual interface which is supposed to redirect the traffic from ethX (or in your case wlp3s0) into ifb, which then controls the incoming packets. So, something similar to:
modprobe ifb numifbs=1
ip link set dev ifb0 up
VIRTUAL=ifb0   # the ifb device created above
tc qdisc add dev wlp3s0 ingress handle ffff:   # needed so parent ffff: exists
tc filter add dev wlp3s0 parent ffff: protocol ip u32 match u32 0 0 action mirred egress redirect dev ifb0
tc qdisc add dev $VIRTUAL root handle 2: htb
tc filter add dev $VIRTUAL protocol ip parent 2: prio 1 u32 match ip sport ${PORT} 0xffff police rate ${LIMIT} burst $BURST drop flowid :1
I have created a bash script which allows you to limit bandwidth for incoming and/or outgoing traffic on a specific ip address (or network): https://gist.github.com/ole1986/d9d6be5218affd41796610a35e3b069c
Usage:
./traffic-control.sh [-r|--remove] [-i|--incoming] [-o|--outgoing] <IP>
Arguments:
 -r|--remove : removes all traffic control being set
 -i|--incoming : limit the bandwidth only for incoming packets
 -o|--outgoing : limit the bandwidth only for outgoing packets
 <IP> : the ip address to limit the traffic for
I'm trying to police my downstream bandwidth for a given port - but it seems unless I have a gigantic limit and burst, the download stops completely
IF="wlp3s0"
LIMIT="100kbit"
BURST="100kbit"
PORT="80"

echo "resetting tc"
tc qdisc del dev ${IF} ingress

echo "setting tc"
tc filter add dev ${IF} parent ffff: \
    protocol ip prio 1 \
    u32 match ip dport ${PORT} 0xffff \
    police rate ${LIMIT} burst $BURST drop \
    flowid :1
tc filter add dev ${IF} parent ffff: \
    protocol ip prio 1 \
    u32 match ip sport ${PORT} 0xffff \
    police rate ${LIMIT} burst $BURST drop \
    flowid :1
I've been tweaking things for quite some time, trying out all sorts of different values for limit and burst - wgetting chozabu.net/testfile (12mb). Any suggestions very welcome!
Why does this tc ingress limit command not work? (bandwidth drops off to nothing)
I am going to answer my own question since I have done some source code reading and research work myself. If I had not done that research, the answers by frostschutz and sourcejedi would be of great help too – they seem to be correct as far as my knowledge goes (although not in as much detail, but they give you a starting point to do the rest of the research yourself).
Some theory: There are two kinds of queuing disciplines: classful and classless. Classful disciplines are (as the answer by sourcejedi says) flexible. They allow you to attach child classes/qdiscs to them and can share bandwidth with other classes, when possible. Leaf classes have a classless (elementary/fundamental) qdisc attached to them. The queues managed by these elementary qdiscs are where the packets get queued and dequeued. The packets are enqueued to and dequeued from these classes by an algorithm corresponding to the class. Examples of classful qdiscs are: HTB and CBQ.
Classless qdiscs are the fundamental or elementary qdiscs, which are rigid in the sense that they cannot have child qdiscs attached to them, nor can they share bandwidth. In naive terms, they are on their own. These qdiscs own a queue from which they queue and dequeue packets according to the algorithm corresponding to the qdisc. Examples of classless qdiscs: pfifo, bfifo, pfifo_fast (default used by Linux tc), tbf, sfq and a few more.
In the example tree in the question, each of the leaf htb classes 1:1, 1:2 and 1:3 has an elementary qdisc attached to it, which is by default pfifo (not pfifo_fast). The elementary qdisc attached to the leaf can be changed using the tc userspace utility in the following way:
tc qdisc add dev eth0 parent 1:11 handle 30: pfifo limit 5
tc qdisc add dev eth0 parent 1:12 handle 40: sfq perturb 10
More details about this can be found in the HTB Linux queuing discipline manual. Therefore the tree actually looks like this:
          1:          root qdisc (class htb) (100)
        / | \
       /  |  \
      /   |   \
    1:1  1:2  1:3     parent qdiscs (class htb) (30) (10) (60)
     ||   ||   ||  -----> pfifo qdiscs (queue length: txqueuelen (default, can be changed by the tc utility))
Notice that the parameter txqueuelen is an interface-specific parameter. That means the parameter is a property of the interface and can be changed using iproute2 or ifconfig. By default, its value is 1000. Here is an example of how to change it to 200 via iproute2:
ip link set eth0 txqueuelen 200
When a leaf node is created (in the context of the HTB qdisc), a pfifo qdisc is attached to the leaf class by default. This pfifo is initialized with a queue limit of the interface's txqueuelen. This can be found in the function htb_change_class() in sch_htb.c, line 1395:
/* create leaf qdisc early because it uses kmalloc(GFP_KERNEL)
 * so that can't be used inside of sch_tree_lock
 * -- thanks to Karlis Peisenieks
 */
new_q = qdisc_create_dflt(sch->dev_queue, &pfifo_qdisc_ops, classid, NULL);
For the default queue length of a pfifo queue, refer to sch_fifo.c, line 61:
u32 limit = qdisc_dev(sch)->tx_queue_len;
The kernel interacts directly with the root qdisc (whether classful or classless) when it wants to queue or dequeue a packet. If the root qdisc is classful and has children, then it first classifies the packet (decides which child to send the packet to).
Kernel source where it is done: sch_htb.c, line 209:
static struct htb_class *htb_classify(struct sk_buff *skb, struct Qdisc *sch, int *qerr)
On reading the comments, one can easily infer that this function returns one of the following:
- NULL, if the packet should be dropped
- -1, if the packet should be queued into direct_queue
- a leaf node (which contains an elementary qdisc, where the packets actually end up)
This function traverses all the interior nodes (classes) of the tree until it returns a leaf node, where the packet should be queued.
While dequeuing, each of the classes follows the algorithm associated with its qdisc to decide which of the children to dequeue from, and the children do the same thing, until a packet is dequeued from an elementary qdisc attached to a leaf class. This also ensures that the rate of a child class is no more than its parent's (since the parent decides whether the packet will pass through or not). I have not gone through the source of dequeuing in htb, so I can't provide a source for that.
Direct queue: It is a special internal fifo queue maintained by the HTB qdisc from which the packets are dequeued at hardware speed. Its queue length is txqueuelen. A packet ends up in the direct queue if HTB is unable to classify it into one of the children qdiscs and no default is specified.
So the answers to my own questions:
1. Yes, since they are leaf nodes, by default they are pfifo queues with a queue length of the interface's txqueuelen, which is 1000 by default and can be changed.
2. A queuing discipline is the algorithm together with the queue, combined in a single package! If you ask me, a queuing discipline is a property of both the type of queue and the packet scheduler (here packet scheduler means the algorithm which enqueues and dequeues the packets). For example, a queue might be of type pfifo or bfifo. The algorithm used for enqueuing and dequeuing is the same, but the queue length is measured in bytes for the byte fifo (bfifo). Packets are dropped in a bfifo when the byte limit is reached. The default byte limit is calculated as mtu*txqueuelen. When a packet is enqueued, for example, the packet length is added to the current queue length; similarly, when the packet is dequeued, the packet length is subtracted from the queue length.
3. Answered above.
Some sources one might consult:
- Linux Network Traffic Control — Implementation Overview (PDF)
- Journey to the Center of the Linux Kernel: Traffic Control, Shaping and QoS
- Demystification of TC - Criteo R&D Blog
- Queueing in the Linux Network Stack - Linux Journal
I am trying to understand the queuing mechanism of the linux-htb QDisc, and QDiscs of linux tc in general. What I could gather: During TX, the packet is queued into the queue inside linux tc. This queue by default follows a pfifo_fast QDisc with a txqueuelen of 1000. The packet scheduler dequeues the packet from this queue and puts it onto the TX driver queue (ring buffer). When we use linux-htb, the txqueuelen is inherited only for the default queue. [Link]. My question: Consider the tree (rates are specified in kbits/sec in parentheses):
          1:          root qdisc (class htb) (100)
        / | \
       /  |  \
      /   |   \
    1:1  1:2  1:3     parent qdiscs (class htb) (30) (10) (60)
1. Are there internal queues maintained for each of the parent htb classes (1:1, 1:2 and 1:3)? If yes, what is their queue length? If not, how many queues are actually maintained and for what purpose? What is their queue length?
2. What exactly is meant by Queueing Discipline (QDisc)? Is it a property of the data structure used (the queue)? Or is it a property of the packet scheduler? Or maybe both combined?
3. While reading the source code of the htb QDisc [Link], I came across something called a direct queue. What is a direct_queue?
Provide links to relevant sources if possible.
queueing in linux-htb
openvpn has an option called --up cmd which runs cmd whenever the VPN connection is first established, and an --up-restart option which tells openvpn to also run the --up command when a connection is restarted. You can write a script which contains your tc qdisc ... command, make it executable with chmod +x, and then add --up /path/to/my/script --up-restart to the openvpn command line. Alternatively, the cmd can be a properly quoted string containing the entire command and all its arguments. e.g. openvpn ... --up 'tc qdisc ...' --up-restart ...This is possibly simpler, but a script is more flexible and makes it easier to do more than one thing when the connection is established. BTW, there is also a --down cmd option which is used to run scripts or other programs whenever a VPN disconnects. See man openvpn for more details about --up and --down and related options.Note: it is possible that your Linux distribution may already make use of this feature, and may have a directory where you can just create a script to have it automatically run whenever the VPN is first established or restarted. Check the documentation for your distribution's openvpn package. If it does something like that, then follow the instructions there. If not, use the --up option as mentioned above.
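As a sketch (the script path and tbf numbers are only examples): openvpn passes the tunnel device name to --up/--down scripts as the first argument (also available as $dev), so the script itself can stay generic:
#!/bin/sh
# /etc/openvpn/limit-tun.sh -- hypothetical location
tc qdisc replace dev "$1" root tbf rate 1mbit burst 32kbit latency 400ms
Then start openvpn with something like:
openvpn --config client.conf --script-security 2 --up /etc/openvpn/limit-tun.sh --up-restart
(--script-security 2 or higher is required for openvpn to execute user-defined scripts.)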
I need to use a tc qdisc command to limit bandwidth usage on an interface created by openvpn. This works great when I run the command manually but occasionally the connection drops or restarts and this appears to cancel or deactivate the previously applied bandwidth settings. Is there a way to make a tc qdisc command apply permanently (or at least until I choose to cancel it) on a particular interface in such a way that any time that interface is up, my bandwidth settings will apply? I need something like the firewall-cmd permanent flag that makes the settings stick. The command I'm currently using looks something like this: tc qdisc add dev tun0 tbf rate 1mbit latency...where tun0 is the interface name created by openvpn.
How can I permanently associate tc qdisc commands with a particular interface?
Can tc be used with virtual network interfacesYes.(like eth0:0, eth0:1)?No. Those aren't virtual network interfaces. They're aliases for network interfaces. There's a huge difference. It's an oldfashioned way to specify more than one address per interface, instead of the modern approach of ip address add/change/replace/del $ip dev $interface. https://www.kernel.org/doc/Documentation/networking/alias.txtIP-aliases are an obsolete way to manage multiple IP-addresses/masks per interface.And that's pretty much all you can use them for. Best not to use them at all. Aliases make you think they're virtual devices with all the bells and whistles but they're not. Aliases exist in name only - they don't do anything. If you need a genuine virtual network device, you can have a look at bridge devices (virtualization), or tun/tap devices (openvpn). For tc specifically, you might also be interested in IMQ / IFB. If you just want to filter by IP address, you can specify those in tc filter or mark them in iptables and then filter by mark.
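For example (a sketch, assuming you already have a classful root qdisc with handle 1: and a class 1:10 on eth0; the address is a placeholder), either match the IP directly:
tc filter add dev eth0 parent 1: protocol ip u32 match ip dst 192.0.2.10/32 flowid 1:10
or mark in iptables and match the mark with the fw classifier:
iptables -t mangle -A POSTROUTING -o eth0 -d 192.0.2.10 -j MARK --set-mark 10
tc filter add dev eth0 parent 1: protocol ip handle 10 fw flowid 1:10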
I need to simulate network environment with bad network connections for about 1000 hosts. Can tc (with netem) be used with virtual network interfaces (like eth0:0, eth0:1)? When I try to use tc on many virtual interfaces with different parameters - it seems that all virtual interfaces have one tc configuration. My problem is similar to this: https://stackoverflow.com/questions/31186010/netem-and-virtual-interfaces
How can I use `tc` with diffrent parameters on few virtual interfaces?
The problem you're experiencing with the MARK IPTABLES target not working as expected was caused by a missing kernel module which enables that specific Netfilter functionality. In order to use the MARK target, you need to load the XT_MARK module which must be compiled with the Linux kernel. Check your kernel config for CONFIG_NETFILTER_... items and ensure that ...XT_MARK and its prerequisites are compiled. If the XT_MARK item was compiled as a module, you'll need to load it with modprobe xt_mark.
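For example, on a typical distribution kernel you could check and load it like this (the exact config file path depends on the distro):
grep CONFIG_NETFILTER_XT_MARK /boot/config-$(uname -r)   # =y (built-in) or =m (module)
modprobe xt_mark                                          # only needed if built as a module
lsmod | grep xt_mark                                      # verify it is loaded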
Server has 2 network interfaces:eth1 with address 13.0.0.254/24 eth0 with address 172.20.203.4/24.It's routing traffic between this two networks. Task is to limit bandwidth between this two networks to 1Vbit/sec, but not to limit bandwidth between server and network hosts(i. e. limit all packets going though FORWARD) iptables -t mangle -A POSTROUTING -s 13.0.0.0/24 -d 172.20.203.0/24 -j MARK --set-mark 0x0001 iptables -t mangle -A POSTROUTING -s 172.20.203.0/24 -d 13.0.0.0/24 -j MARK --set-mark 0x0002# eth1 tc qdisc add dev eth1 root handle 1:0 htb default 2tc class add dev eth1 parent 1:0 classid 1:1 htb rate 1000mbps ceil 1000mbps tc class add dev eth1 parent 1:1 classid 1:2 htb rate 999mbps ceil 1000mbps tc class add dev eth1 parent 1:1 classid 1:3 htb rate 1mbpstc qdisc add dev eth1 parent 1:2 handle 2:0 sfq perturb 10 tc qdisc add dev eth1 parent 1:3 handle 3:0 sfq perturb 10tc filter add dev eth1 parent 1:0 handle 1 fw flowid 1:3 tc filter add dev eth1 parent 1:0 handle 2 fw flowid 1:3# eth0 tc qdisc add dev eth0 root handle 1:0 htb default 2tc class add dev eth0 parent 1:0 classid 1:1 htb rate 1000mbps ceil 1000mbps tc class add dev eth0 parent 1:1 classid 1:2 htb rate 999mbps ceil 1000mbps tc class add dev eth0 parent 1:1 classid 1:3 htb rate 1mbpstc qdisc add dev eth0 parent 1:2 handle 2:0 sfq perturb 10 tc qdisc add dev eth0 parent 1:3 handle 3:0 sfq perturb 10tc filter add dev eth0 parent 1:0 handle 2 fw flowid 1:3 tc filter add dev eth0 parent 1:0 handle 1 fw flowid 1:3This doesn't work. If I use this at the beginning: tc qdisc add dev eth1 root handle 1:0 htb default 3 tc qdisc add dev eth0 root handle 1:0 htb default 3it works. So problem is in filter settings. iptables -L -v -n -t mangleshows, that packets are going though MARK rules. I tried to mark packets not in POSTROUTING, but in FORWARD or PREROUTING - this does not work too. What am I doing wrong? Here is some diagnostics: # tc -s -d -r filter show dev eth0 filter parent 1: protocol [768] pref 49151 fw filter parent 1: protocol [768] pref 49151 fw handle 0x1 classid 1:3 filter parent 1: protocol [768] pref 49152 fw filter parent 1: protocol [768] pref 49152 fw handle 0x2 classid 1:3 # tc -s -d -r filter show dev eth1 filter parent 1: protocol [768] pref 49151 fw filter parent 1: protocol [768] pref 49151 fw handle 0x2 classid 1:3 filter parent 1: protocol [768] pref 49152 fw filter parent 1: protocol [768] pref 49152 fw handle 0x1 classid 1:3Kernel config: /boot # uname -a Linux armada-sc-02 2.6.32-5-amd64 #1 SMP Sun May 6 04:00:17 UTC 2012 x86_64 GNU/Linux /boot # grep CONFIG_IP_MULTIPLE_TABLES config-2.6.32-5-amd64 CONFIG_IP_MULTIPLE_TABLES=y /boot # grep CONFIG_IP_ADVANCED_ROUTER config-2.6.32-5-amd64 CONFIG_IP_ADVANCED_ROUTER=y /boot # grep CONFIG_IP_ROUTE_FWMARK config-2.6.32-5-amd64
tc don't see marked with -j MARK packets
I don't know a QDISC that would do that directly. With CBQ/HTB/HFSC, at best you could create a limited number of 10mbps classes and then hash filter IPs into them. Apart from hash collisions which will obviously happen, it would work. With some luck, you can set such limits directly at the source (like, in the webserver). But if it's not really about rate limiting, but keeping things fair between clients, maybe you're better off with SFQ/ESFQ. While it does not limit, it does provide a kind of balance.
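If fairness is enough, the SFQ route is very little work; a minimal sketch (interface name assumed):
tc qdisc add dev eth0 root handle 1: sfq perturb 10
By default SFQ balances per flow rather than per client IP; if I remember correctly, the flow classifier can be attached to hash on the destination address instead, something like:
tc filter add dev eth0 parent 1: protocol ip prio 1 flow hash keys dst divisor 1024
but treat that as a pointer to experiment with rather than a tested recipe.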
I'm trying to setup QoS in my VPS so that I can have any new outbound connection to a unique IP gets limited x speed. For instance:have 5 public IP's requesting data from my VPS. When the VPS sends data back to each of those IP's, each IP gets 10mbps dedicated outbound bandwidth speed. Never drop packets, rather queue them if client surpasses 10mbps.I don't want to limit the whole eth0 port to 10mbps outbound speed, I want each individual public IP to get 10mbps. I have frequently different public IP's contacting my VPS so I would rather not have to write rules that are static which force me to write individually each bandwidth rule per IP. Is this possible with TC qdiscs? I've looked over what appears to be the typical setup of HTB qdics which allow me to have filters etc. But could not seem to see an example or literature that describes what I want. I'm using ubuntu server 14.04. UPDATE I did the following once I understood the way TC qdiscs work a bit better. commands I used for a basic setup which appears to work quite smoothly as packets aren't dropping but rather going into the token bucket(note: this is not highly optimized but appears to run stock pretty well): tc qdisc add dev eth0 root handle 1: htb default 11 tc class add dev eth0 parent 1: classid 1:1 htb rate 10mbit tc class add dev eth0 parent 1: classid 1:2 htb rate 20mbit tc filter add dev eth0 parent 1: protocol ip prio 16 u32 match ip dst 1.2.3.4 flowid 1:1 tc filter add dev eth0 parent 1: protocol ip prio 16 u32 match ip dst 2.3.4.5 flowid 1:2
QoS with TC qdiscs: is it possible to have ALL outbound connections have x speed limit per unique IP?
That's an old problem. You'll have to know how your distro handles the netfilter kernel modules. Sometimes it's loaded and the trick is to create a rule to mark them all then split afterwards. The mangle chain is kinda tricky. Add this as your first mark rule:
iptables -t mangle -A POSTROUTING -m physdev --physdev-out interface-name -j MARK --set-mark 10
A second issue is that your distro might not compile and/or load the xt_mark kernel module. Use lsmod | grep xt_mark to check if it's there. I also have issues with OVS and iptables sometimes. I find iptables a great 90's tool, but I feel it's kinda obsolete these days. The "check how your distro handles netfilter's modules" part is very important to understand your problem. If you just want to mark your packets and iptables has no other purpose, you can use the OVS tool ovs-ofctl with its pkt_mark option.
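For example (a rough, untested sketch; the bridge name, port and mark value are placeholders), something along these lines marks IP traffic from a given port so that a tc fw classifier can pick it up on egress, since OVS's pkt_mark is carried in the kernel's skb mark as far as I understand the OVS documentation (check your OVS version's documentation for pkt_mark / set_field support):
ovs-ofctl add-flow br0 "priority=100,ip,in_port=1,actions=set_field:10->pkt_mark,NORMAL"
tc filter add dev interface-name parent 1:0 protocol ip prio 1 handle 10 fw flowid 1:10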
I just realised that all the iptables rules I have been applying to my open Vswitch interfaces never match. I am using iptables to mark some packets, and then I use TC (traffic control) filters to put packets into different priority queues depending on the Iptables match. That works for every interface, and even for Linux Bridges (using -m physdev module). How can I filter packets that go through an ovs interface and put them into different priority queues if I can not mark them with iptables? Rules (simplified): iptables -w -t mangle -A POSTROUTING -m physdev --physdev-out interface-name -m ttl --ttl-lt 10 ! -p 89 -j MARK --set-mark 10tc filter add dev interface-name parent 1:0 protocol all prio 1 handle 10 fw flowid 1:10Then I am using HTB for the priorities, lets say that there are two queues 1:10 and 1:20. The rule should send all the traffic with ttl < 10 and not OSPF to the first queue 1:10.
Alternative to Iptables for packet filtering in OVS interfaces
If I've understood correctly, you're trying to limit your ingress traffic from your ISP by limiting egress traffic on your locally facing interface. The packet loss you're seeing is probably to be expected, as dropped packets are (one of) TCP's way(s) of detecting congestion, and the way a router can signal congestion. It's also the only reasonable way your router can abide by the limitation you've given it via tc without breaking, e.g., TCP's congestion avoidance. (tc does have facilities for using RED, although I don't know enough about this to tell you anything beyond its existence). Instead of shaping egress traffic on your inward facing interface, you could check out tc's ingress qdisc: attach it to the interface facing your ISP and use a tc filter to police your ingress traffic. Packet loss will still occur, as it's probably the only viable way for your router to signal congestion. For an example, see the LARTC cookbook entry "The Ultimate Traffic Conditioner", which among other things uses tc's ingress qdisc.
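A minimal sketch of that (the interface name and numbers are placeholders; eth0 here is the interface facing the ISP):
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    police rate 8mbit burst 100k drop flowid :1
Packet loss will still occur, as noted above, but the policing is then applied at the point where traffic enters the router.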
I use this tc command to limit upload speed on an interface: tc qdisc add dev eth1 root tbf rate 2mbit burst 10kb latency 70ms peakrate 2.4mbit minburst 1540But it results in heavy packet loss. If the data coming via eth0 (WAN) is 7 GB, it will be 6.2 GB on the rate-limited interface eth1. Are there any other rate limiting solutions that cause lesser packet loss?
Reducing packet loss in tc rate limiting
The initial default qdisc set by the kernel with special handle 0: can't be modified nor referenced. It can only be overridden by a new qdisc. Using change references the existing root qdisc, but as this can't be the default kernel's qdisc, that's an error. So the first time this netem qdisc is used, the add keyword should be used, and that's probably what was done at some point in the past. Then later the change keyword can be used to alter some of its parameters (like the corruption percent), since referencing it by the root keyword is enough. As a shortcut replace will attempt change and if it fails will perform add instead. So in the end this command will work the first and the following times too: sudo tc qdisc replace dev ens8 root netem corrupt 5%To remove this qdisc this should be done once (it would fail the 2nd time because that would be again done on the default qdisc installed by the kernel which is off-limits): sudo tc qdisc delete dev ens8 rootThe usage of add, change, replace (which is change or else add) and delete follows a similar pattern among many other iproute2 commands.
The following rule corrupts 5% of the packets by introducing a single bit error at a random offset in the packet:sudo tc qdisc change dev ens8 root netem corrupt 5%But recently it gave me the following error:Error: Qdisc not found. To create specify NLM_F_CREATE flagCould you kindly help me or provide me with some other methods to simulate packet corruption? I'm trying to simulate packet corruption to see how well my error detection mechanism works.
Error when trying to corrupt packets in linux terminal (netem)
iptables -t mangle -A PREROUTING -m dscp --dscp-class AF12 -j CONNMARK --set-xmark 12
iptables -t mangle -A POSTROUTING -m connmark --mark 12 -j DSCP --set-dscp-class AF12
(Not 100% dynamic, as the DSCP value needs to be known in advance in order to get a match.)
I have seen connmark or ctinfo could work for this but couldn't find a simple effective command to make it work (Not familiar within this area). The command can be applied to the TCP termination node or any linux node as intermediary router.
Example command to set same DSCP value in the IP header for return packets within the same TCP connection
A tc action can have a control operator appended to alter further handling of packets:CONTROL The CONTROL indicates how tc should proceed after executing the action. Any of the following are valid: reclassify Restart the classifiction by jumping back to the first filter attached to the action's parent. pipe Continue with the next action. This is the default control. drop Drop the packed without running any further actions. continue Continue the classification with the next filter. pass Return to the calling qdisc for packet processing, and end classification of this packet.It seems that after a matching filter no further filter is evaluated. What you can simply do:You can combine (pipe) multiple actions on the same filter (let's suppose the 2nd mirror interface is called e101-006-0): tc filter add dev e101-001-0 ingress u32 match u32 0 0 action mirred egress mirror dev e101-005-0 pipe action mirred egress mirror dev e101-006-0You can also, instead, chain multiple filters (using action's continue control). Then an explicit prio/pref should be given because the order of filters will matter: the filter having the action with the continue control must be evaluated first. tc filter add dev e101-001-0 ingress pref 1 u32 match u32 0 0 action mirred egress mirror dev e101-005-0 continue tc filter add dev e101-001-0 ingress pref 2 u32 match u32 0 0 action mirred egress mirror dev e101-006-0This would be used over the 1st method if for example you'd want a different filter between the two actions (eg: one could match protocol ip the other protocol ipv6).
Does anyone know if it's possible to mirror to multiple interfaces from one source interface using TC? I've done the following:The first thing I did was create an ingress queue on my input interface with tc qdisc add dev e101-001-0 handle ffff: ingressIf you need to delete a qdisc you can do it with tc qdisc del dev e101-001-0 [ root | ingress ]Double check your queue with handle ffff was created with tc -s qdisc ls dev e101-001-0 Next we want to mirror all traffic from the ingress port to an output port with tc filter add dev e101-001-0 parent ffff: protocol all u32 match u32 0 0 action mirred egress mirror dev e101-005-0 Check that your port mirror appeared in the config with tc -s -p filter ls dev e101-001-0 parent ffff:If you need to delete the filters you can do so with tc filter del dev e101-001-0 parent ffff:Set queue to not shape traffic with tc qdisc add dev e101-001-0 handle 1: root prioThat got it working outputting to one interface, but I noticed if I add another filter with a new interface the first interface stops receiving traffic and it all goes to the new interface.
Mirror to Multiple Ports Using TC?
After some research here's what I found: First, some configuration: # download quota (Mb) dl_quota_mb=150 dl_quota=$(($dl_quota_mb * 1024 * 1024))# max speed once overquota (k/s) dl_cap_kb=50 dl_cap=$(($dl_cap_kb * 8))# wifi interface if_lan=wlan0# lan subnet lan=192.168.1Create tc classes for each ip to limit download speed: TCA="tc class add dev $if_lan" TQA="tc qdisc add dev $if_lan" SFQ="sfq perturb 10"$TQA root handle 1: htb # over quota speed limits for i in `seq 1 254`; do $TCA parent 1: classid 1:$i htb rate ${dl_cap}kbit ceil ${dl_cap}kbit prio 2 $TQA parent 1:$i handle $i: $SFQ doneCreate ipset for lan ips with accounting: ipset create IP_QUOTA bitmap:ip range $lan.0/24 counters ipset add IP_QUOTA $lan.1-$lan.254Classify overquota ips packets with iptables to make limits kick in: IPT="iptables -t mangle" IPT_POST="iptables -t mangle -A POSTROUTING -o $if_lan"$IPT -N overquota $IPT_POST -m set --match-set IP_QUOTA dst --bytes-gt $dl_quota -j overquota# classify packets for i in `seq 1 254`; do $IPT -A overquota --dst $lan.$i -j CLASSIFY --set-class 1:$i doneThis gives us download quotas per IP address. To get download quotas per mac address, one way is to watch for mac/ip pair changes and set/reset IP counters accordingly. I've setup a project on github which implements the full solution for OpenWrt. Note: As of June 2017, Gargoyle's download quotas are per IP address. Would be nice to implement something like this in Gargoyle eventually.
On a Linux router, how to setup download quotas for all hosts? This is for a shared wifi network with many guests:Each guest should start with a 150 Mb download quota and no restriction Once quota is reached download speed should be limited to 50 k/s Filtering must be based on mac address, IP addresses may change with dhcp.Gargoyle router implements something like this, unfortunately using gargoyle is not an option here, I need to do it with tc and iptables. This answer is a good starting point: iptables -A INPUT -p tcp -s 192.168.0.2 -m quota --quota 13958643712 -j ACCEPT iptables -A INPUT -p tcp -j CLASSIFY --set-class 1:12Making it use mac addresses instead of IP is easy, however it requires the addresses to be known in advance, which is not the case here.
iptables: download quota per mac address for all hosts
Consolidating comments into an answer: based on comments from @dirkt and @berndbausch, it seems like the bottom line is that there is no tc-specific way of persisting rules that are put in place using tc. The specifics of how to do so the Right Way will vary depending on your distro, but it will come down to re-running the tc commands from some file at boot time (for example, /etc/network/interfaces).
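For instance, on a Debian-style system using ifupdown, one common pattern (a sketch; the interface and tc command are examples) is a post-up line in /etc/network/interfaces:
iface eth0 inet dhcp
    post-up tc qdisc replace dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms
On systemd-based setups the equivalent is typically a small oneshot service, or a NetworkManager/networkd dispatcher hook, that re-runs the same tc commands when the interface comes up.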
I am trying to determine whether rules put in place using tc persist beyond a reboot (I do not believe they do by default), and whether there is any way to cause them to persist, or if the best you can do is to re-execute the commands at boot in order to put them in place again. Also: how/where do these rules get persisted? (assuming there is in fact a way to do so)
Can TC rules persist beyond a reboot? Where?
Searching a bit more around the web: in this archlinux forum post it is suggested to restart, and that worked. I just wonder why it did...
I'm trying to use this tc script on my arch box. Specifically, the first line, roughly equal to tc qdisc add dev eth0 root handle 1: htb default 3 fails with RTNETLINK answers: Operation not permitted I get another couple of those and a couple We have an error talking to the kernel. I searched around, nobody mentions tc depends on another package to use htb, I'm using it exactly as specified on the site with ./script.sh start. Running with sudo gives the same results. What's the next course of action to use tc with htb on arch linux?
how to use tc with htb on arch-linux
Perhaps the netem emulator : tc qdisc add dev eth0 root netem delay 800ms rate 1mbit
I want to throttle bandwidth and add delay to a network interface to simulate satellite communication. For example 800ms delay and 1mb/s. The following limits the bandwidth correctly but does not increase the latency: 17:16:51 root@Panasonic_FZ-55 ~ # tc qdisc add dev eth0 root tbf rate 1024kbit latency 800ms burst 1540 17:18:48 root@Panasonic_FZ-55 ~ # ping 10.10.91.58 PING 10.10.91.58 (10.10.91.58): 56 data bytes 64 bytes from 10.10.91.58: seq=0 ttl=64 time=0.938 ms 64 bytes from 10.10.91.58: seq=1 ttl=64 time=3.258 ms 64 bytes from 10.10.91.58: seq=2 ttl=64 time=1.259 ms 64 bytes from 10.10.91.58: seq=3 ttl=64 time=1.407 ms ^C --- 10.10.91.58 ping statistics --- 4 packets transmitted, 4 packets received, 0% packet loss round-trip min/avg/max = 0.938/1.715/3.258 ms 17:18:56 root@Panasonic_FZ-55 ~ # iperf -c 10.10.91.58 ------------------------------------------------------------ Client connecting to 10.10.91.58, TCP port 5001 TCP window size: 85.0 KByte (default) ------------------------------------------------------------ [ 3] local 10.10.91.57 port 34790 connected with 10.10.91.58 port 5001 [ ID] Interval Transfer Bandwidth [ 3] 0.0-10.5 sec 1.38 MBytes 1.09 Mbits/sec 17:19:19 root@Panasonic_FZ-55 ~ #I got my information from this site.
How to delay traffic and limit bandwidth at the same time with tc (Traffic Control)?
I answer my own question below. The simplest circumvention (my approach): putting one of the veth pair to another network namespace. Let's call it test. $ sudo ip netns add test $ sudo ip link add h1-eth0 type veth peer name h2-eth0 netns test$ sudo ip link set dev h1-eth0 up $ sudo ip netns exec test ip link set dev h2-eth0 up$ sudo ip addr add 10.0.0.1/24 dev h1-eth0 $ sudo ip netns exec test ip addr add 10.0.0.2/24 dev h2-eth0$ sudo tc qdisc add dev h1-eth0 root netem delay 60ms $ sudo ip netns exec test tc qdisc add dev h2-eth0 root netem delay 60msNow we check: $ ping -I h1-eth0 -c1 10.0.0.2 PING 10.0.0.2 (10.0.0.2) from 10.0.0.1 h1-eth0: 56(84) bytes of data. 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=120 ms--- 10.0.0.2 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 120.056/120.056/120.056/0.000 ms$ sudo ip netns exec test ping -I h2-eth0 -c1 10.0.0.1 PING 10.0.0.1 (10.0.0.1) from 10.0.0.2 h2-eth0: 56(84) bytes of data. 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=120 ms--- 10.0.0.1 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 120.146/120.146/120.146/0.000 msOther approaches I discovered that my question was already asked but also was not answered: https://serverfault.com/questions/585246/network-level-of-veth-doesnt-respond-to-arp. From there we see that the problem is with ARP. A question connected to this problem with ARP was asked here Linux does not reply to ARP request messages if requested IP address is associated with another (disabled) interface and the topic starter received some explanation but the problem was not still solved. The problem is that addresses 10.0.0.1 and 10.0.0.2 are present not only in main route table but also in local route table and the local route table has higher priority than the main route table. Below there are these tables for the initial setup from my question, i.e. WITHOUT putting one end of the veth pair to another network namespace test: $ ip route show table local broadcast 10.0.0.0 dev h1-eth0 proto kernel scope link src 10.0.0.1 broadcast 10.0.0.0 dev h2-eth0 proto kernel scope link src 10.0.0.2 local 10.0.0.1 dev h1-eth0 proto kernel scope host src 10.0.0.1 local 10.0.0.2 dev h2-eth0 proto kernel scope host src 10.0.0.2 broadcast 10.0.0.255 dev h1-eth0 proto kernel scope link src 10.0.0.1 broadcast 10.0.0.255 dev h2-eth0 proto kernel scope link src 10.0.0.2 ...$ ip route show table main 10.0.0.0/24 dev h1-eth0 proto kernel scope link src 10.0.0.1 10.0.0.0/24 dev h2-eth0 proto kernel scope link src 10.0.0.2 ...When having one of the ends of the veth pair in another network namespace we do not have a situation when two of the addresses are placed in the local route table at the same time. So, probably, this is why we do not have such a problem. I tried to delete the addresses from the local route table (only one of them or both -- in different combinations) but it did not help. Overall, I do not fully understand the situation so I will just stick up with setting the ends of the veth pair into different network namespaces. All the more, this is how a veth pair is mostly used, as far as I know.
I have Ubuntu 16.04 LTS with hwe kernel 4.13.0-39-generic. I configure the veth pair in the default network namespace as follows: $ sudo ip link add h1-eth0 type veth peer name h2-eth0$ sudo ip link set dev h1-eth0 up $ sudo ip link set dev h2-eth0 up$ sudo ip addr add 10.0.0.1/24 dev h1-eth0 $ sudo ip addr add 10.0.0.2/24 dev h2-eth0Here is the settings which I get after the above configuration: $ ifconfig ... h1-eth0 Link encap:Ethernet HWaddr ea:ee:1e:bb:66:55 inet addr:10.0.0.1 Bcast:0.0.0.0 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 ...h2-eth0 Link encap:Ethernet HWaddr ba:aa:99:77:ff:78 inet addr:10.0.0.2 Bcast:0.0.0.0 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 ...$ ip route show 10.0.0.0/24 dev h1-eth0 proto kernel scope link src 10.0.0.1 10.0.0.0/24 dev h2-eth0 proto kernel scope link src 10.0.0.2 ...Now I can ping one interface from another as following: $ ping -I 10.0.0.1 -c1 10.0.0.2 PING 10.0.0.2 (10.0.0.2) from 10.0.0.1 : 56(84) bytes of data. 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms--- 10.0.0.2 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 msBut the first problem is that ping fails when I try to ping using the name of the interface, rather than the ip address: $ ping -I h1-eth0 -c1 10.0.0.2 PING 10.0.0.2 (10.0.0.2) from 10.0.0.1 h1-eth0: 56(84) bytes of data. From 10.0.0.1 icmp_seq=1 Destination Host Unreachable--- 10.0.0.2 ping statistics --- 1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0msHow can this be a problem if h1-eth0 has ip address 10.0.0.1? The second problem is, I believe, related. I configure the interfaces as following: $ sudo tc qdisc add dev h1-eth0 root netem delay 60ms $ sudo tc qdisc add dev h2-eth0 root netem delay 60ms $ tc qdisc show qdisc netem 8006: dev h2-eth0 root refcnt 2 limit 1000 delay 60.0ms qdisc netem 8005: dev h1-eth0 root refcnt 2 limit 1000 delay 60.0msNow I ping again with the delay: $ ping -I 10.0.0.1 -c4 10.0.0.2 PING 10.0.0.2 (10.0.0.2) from 10.0.0.1 : 56(84) bytes of data. 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.034 ms 64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.059 ms 64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=0.027 ms--- 10.0.0.2 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3063ms rtt min/avg/max/mdev = 0.027/0.038/0.059/0.013 msAnd it can be seen that the rtt is not expected 60ms*2=120ms. So it looks like tc qdisc netem does not work for my interfaces. So overall, I see that my configuration is somehow broken.
For veth pair, ping does not recognize interface name and tc qdisc netem does not work
As per @A.B's comments:
The mark you set in mangle/INPUT has no effect on tc, because tc ingress happens waaaay before. Check: en.wikipedia.org/wiki/Netfilter#/media/ ...
To save the mark for the connection, use -j CONNMARK --save-mark on the cgroup's outbound packets, retrieve the connmark on inbound packets with tc-connmark, and finally redirect packets to ifb to apply the policer.
Mark cgroup connection packets (bidirectional):
$ sudo iptables -A OUTPUT -t mangle -m cgroup --path '/user.slice/.../app-firefox-...scope' \
    -j MARK --set-mark 0x11
$ sudo iptables -A OUTPUT -t mangle -j CONNMARK --save-mark
Create the ifb interface:
$ modprobe ifb
$ ip link set ifb0 up
$ tc qdisc add dev ifb0 root handle 1: htb   # for policing, we don't care about the qdisc type
Retrieve the connmark and redirect $IFACE ingress to ifb0:
$ tc qdisc add dev $IFACE ingress handle ffff:
$ tc filter add dev $IFACE parent ffff: protocol all prio 10 u32 match u32 0 0 flowid 1:1 \
    action connmark \
    action mirred egress redirect dev ifb0
Apply the policer to the marked packets on ifb0's root qdisc:
$ tc filter add dev ifb0 parent 1: protocol ip prio 20 handle 0x11 fw \
    action police rate 1000kbit burst 10k drop
This will limit Firefox's download rate to 1000kbit.
I am trying to limit the download (ingress) rate for a certain app within a cgroup. I was able to limit the upload (egress) rate successfully by marking app's OUTPUT packets in iptables and then set a tc filter to handle that marked packets. However, when I did the same steps for ingress it didn't work.steps I followed to limit upload:Mark OUTPUT packets by their cgroup$ sudo iptables -I OUTPUT -t mangle -m cgroup --path '/user.slice/.../app-firefox-...scope'\ -j MARK --set-mark 11filter by fw mark (11) on the root qdisc$ tc qdisc add dev $IFACE root handle 1: htb default 1 $ tc filter add dev $IFACE parent 1: protocol ip prio 1 handle 11 fw \ action police rate 1000kbit burst 10k drop This limited the upload rate for firefox to 1000kbit successfully.steps I followed trying to limit download:Mark INPUT packets by their cgroup$ sudo iptables -I INPUT -t mangle -m cgroup --path '/user.slice/.../app-firefox-...scope'\ -j MARK --set-mark 22filter by fw mark (22) on the ingress qdisc$ tc qdisc add dev $IFACE ingress handle ffff: $ tc filter add dev $IFACE parent ffff: protocol ip prio 1 handle 22 fw \ action police rate 1000kbit burst 10k drop I am able to block app's download successfully with iptables: $ sudo iptables -I INPUT -t mangle -m cgroup --path '/user.slice/.../app-firefox-....scope' -j DROPSo it seems like iptables is marking cgroup's input packets but for some reason, tc can't filter them or maybe the packets are being consumed before tc filter takes effect? if so, then what is the use of marking input packets? If there is a way to block cgroup's input packets then there must be a way to limit them, right?
How to police ingress (input) packets belonging to a cgroup with iptables and tc?
My understanding is that you are confusing the Ethernet address that you modify with tc (link layer only), with the inner CHADDR field (client's hardware address) that was embedded by the client inside the DHCPDISCOVER request (application layer which won't ever be altered by tc).
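You can confirm this by watching the DHCP exchange with verbose decoding: the Ethernet header will show the rewritten source address, while the decoded BOOTP/DHCP payload still carries the client's original hardware address. For example:
tcpdump -vvv -e -ni tap0 port 67 or port 68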
I am using tc to change the MAC address of incoming packets on a TAP interface (tap0) as follows where mac_org is the MAC address of a guest in a QEMU virtual machine and mac_new is a different MAC address that mac_org should be replaced with. tc qdisc add dev tap0 ingress handle ffff: tc filter add dev tap0 protocol ip parent ffff: \ flower src_mac ${mac_org} \ action pedit ex munge eth src set ${mac_new} pipe \ action csum ip pipe \ action xt -j LOGI also add an iptables rule to log UDP packets on the input hook. iptables -A INPUT -p udp -j LOGsyslog shows that indeed the DHCP discover packet is changed accordingly. The tc log entry looks as follows: IN=tap0 OUT= MAC=ff:ff:ff:ff:ff:ff:${mac_new}:08:00 SRC=0.0.0.0 DST=255.255.255.255 LEN=338 TOS=0x00 PREC=0xC0 TTL=64 ID=0 DF PROTO=UDP SPT=68 DPT=67 LEN=318and the log entry of the netfilter input hook which follows the tc ingress hook as the locally incoming packet is passed towards the socket shows the same result slightly differently formatted. IN=tap0 OUT= MACSRC=${mac_new} MACDST=ff:ff:ff:ff:ff:ff MACPROTO=0800 SRC=0.0.0.0 DST=255.255.255.255 LEN=338 TOS=0x00 PREC=0xC0 TTL=64 ID=0 DF PROTO=UDP SPT=68 DPT=67 LEN=318Before starting QEMU I run dnsmasq on tap0 which surprisingly shows the output: DHCPDISCOVER(tap0) ${mac_org}Running strace -f -x -s 10000 -e trace=network dnsmasq ... shows a recvmsg call that contains ${mac_org} instead of ${mac_new}. recvmsg(4, {msg_name={sa_family=AF_INET, sin_port=htons(68), sin_addr=inet_addr("0.0.0.0")}, msg_namelen=16, msg_iov=[{iov_base="... ${mac_org} ..." ...How can that happen? It almost appears as if the packet is altered after the netfilter input hook.
MAC address rewriting using tc
There is no actual problem to solve in OP's question, so I'll provide a very simple example that uses network namespacesset up communications ip -n test1 link add up type veth peer netns test2 ip -n test2 link set veth0 up ip -n test1 address add 192.0.2.11/24 dev veth0 ip -n test2 address add 192.0.2.12/24 dev veth0initial test # ip netns exec test1 ping -c1 192.0.2.12 PING 192.0.2.12 (192.0.2.12) 56(84) bytes of data. 64 bytes from 192.0.2.12: icmp_seq=1 ttl=64 time=0.068 ms--- 192.0.2.12 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 msadd a first basic qdisc: netem tc -n test1 qdisc add dev veth0 root handle 1: netem delay 100mstest the result # ip -n test1 neigh flush all # ip netns exec test1 ping -c2 192.0.2.12 PING 192.0.2.12 (192.0.2.12) 56(84) bytes of data. 64 bytes from 192.0.2.12: icmp_seq=1 ttl=64 time=200 ms 64 bytes from 192.0.2.12: icmp_seq=2 ttl=64 time=100 ms--- 192.0.2.12 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 100.152/150.223/200.294/50.071 ms(first ping gets the ARP delay in addition to the IP delay that's why it's twice the delay)add again a qdisc with the previous as parent tc -n test1 qdisc add dev veth0 parent 1: handle 2: netem delay 350mstest yet again # ip -n test1 neigh flush all # ip netns exec test1 ping -c2 192.0.2.12 PING 192.0.2.12 (192.0.2.12) 56(84) bytes of data. 64 bytes from 192.0.2.12: icmp_seq=1 ttl=64 time=900 ms 64 bytes from 192.0.2.12: icmp_seq=2 ttl=64 time=450 ms--- 192.0.2.12 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1000ms rtt min/avg/max/mdev = 450.228/675.272/900.317/225.044 msAs you can see the two netem qdisc were used: 100+350=450ms (and twice because of ARP for the first one)As long as the qdisc's specific property makes sense, one can continue: tc -n test1 qdisc add dev veth0 parent 2: handle 3: priountil it makes no sense (prio is a classful qdisc): # tc -n test1 qdisc add dev veth0 parent 3: handle 4: sfq Error: Specified class not found.or there is no support (probably because it makes no sense): # tc -n test1 qdisc del dev veth0 parent 2: handle 3: # tc -n test1 qdisc add dev veth0 parent 2: handle 3: sfq # tc -n test1 qdisc add dev veth0 parent 3: handle 4: netem delay 100ms RTNETLINK answers: Operation not supportedbut one can't add a second qdisc to something (that's why there are classful qdisc providing multiple classes instead): # tc -n test1 qdisc add dev veth0 root handle 5: netem delay 100ms Error: NLM_F_REPLACE needed to override.Conclusion:yes there can be multiple qdiscs per device in total, but at a given hierarchy, there is only one qdisc. To have multiple qdiscs "at the same level" would require a qdisc which provides multiple classes that can each be parent of a qdisc.and (only non-classful) qdisc can have an other qdisc as child.In most useful cases, one uses a classful qdisc with classes, additional qdiscs added with the classes as parent, and there are filters to choose how classes will be selected. Here's a Q/A where I made an answer with this scheme in the second part: Limit bandwidth on a specific port in CentOS 7?
Can I add multiple qdiscs to the same device with tc, or is it only possible to use one qdisc per device? Also, can a qdisc contain child qdiscs, or only child classes? i.e. is it possible to do tc qdisc add parent <existing qdisc> handle <child qdisc> <qdisc type> ?
can I use multiple qdiscs per device?
It appears net.core.default_qdisc affects an interface driver when it's loaded. If the kernel module was loaded before net.core.default_qdisc was changed, then it won't affect it afterward. Some interfaces have altered behaviour: multiqueue interfaces will keep mq but their leaves inherit this default instead. lo or veth won't get any default queue. If you want to ensure the sysctl is changed before the driver, you could:have it changed in initramfs scripts (some tweaking is probably needed),have it loaded from kernel cmdline. This Q/A tells it's possible for any arbitrary sysctl only since kernel 5.8, which you are using. So in theory you could add something this in the boot parameters (probably in GRUB's GRUB_CMDLINE_LINUX) and forget about it: sysctl.net.core.default_qdisc=fq_piebut actually this is possible only for built-in drivers. It's very unlikely that sch_fq_pie was compiled built-in.delay the loading of the driver for wlp1s0 (I wouldn't know where to do this)rmmod ath10k and modprobe ath10k so the new default applies.Anyway to immediately change an interface's qdisc, just define its qdisc, which will override the default kernel qdisc, which has the reserved handle 0:. For example: tc qdisc add dev wlp1s0 handle 1: root fq_pie
I am very interested in setting up fq_pie queue discipline for TCP congestion control. If I write net.core.default_qdisc = fq_pie to /etc/sysctl.d/90-override.conf, it should enable fq_pie on latest kernels. It does work on my desktop though. But on my laptop: $ tc qdisc show qdisc noqueue 0: dev lo root refcnt 2 qdisc noqueue 0: dev wlp1s0 root refcnt 2 qdisc mq 0: dev wlp0s20f0u3 root qdisc fq_pie 0: dev wlp0s20f0u3 parent :4 limit 10240p flows 1024 target 15ms tupdate 16ms alpha 2 beta 20 quantum 1514b memory_limit 32Mb ecn_prob 10 qdisc fq_pie 0: dev wlp0s20f0u3 parent :3 limit 10240p flows 1024 target 15ms tupdate 16ms alpha 2 beta 20 quantum 1514b memory_limit 32Mb ecn_prob 10 qdisc fq_pie 0: dev wlp0s20f0u3 parent :2 limit 10240p flows 1024 target 15ms tupdate 16ms alpha 2 beta 20 quantum 1514b memory_limit 32Mb ecn_prob 10 qdisc fq_pie 0: dev wlp0s20f0u3 parent :1 limit 10240p flows 1024 target 15ms tupdate 16ms alpha 2 beta 20 quantum 1514b memory_limit 32Mb ecn_prob 10As it can be seen that I have 2 wifi adapters. One comes inbuilt to my laptop, which is Qualcomm Atheros (ath10k), fq_pie can't be activated on this. The fq_pie however, can be activated on TP Link (RTL8188EUS) adapter. I have also tried 2 more laptops (Dell and HP), the integrated wifi adapter is not actually running fq_pie. Is there a way to forcefully activate fq_pie to the Qualcomm Atheros and other wifi adapters? System Details: $ cat /proc/version Linux version 5.8.12-xanmod1-1 (makepkg@archlinux) (gcc (GCC) 10.2.0, GNU ld (GNU Binutils) 2.35) #1 SMP PREEMPT Wed, 30 Sep 2020 14:19:49 +0000$ ip -V ip utility, iproute2-v5.7.0-77-gb687d1067169$ tc -V tc utility, iproute2-v5.7.0-77-gb687d1067169
Forcefully enable fq_pie
I use the following script to emulate various network conditions:

#!/bin/bash

intf="dev eth0"
delay="delay 400ms 100ms 50%"
loss="loss random 0%"
corrupt="corrupt 0%"
duplicate="duplicate 0%"
reorder="reorder 0%"
rate="rate 512kbit"

tc qdisc del $intf root
tc qdisc add $intf root netem $delay $loss $corrupt $duplicate $reorder $rate

echo "Cancel with:"
echo "tc qdisc del $intf root"

In your case, to introduce a 400ms delay and a rate limit of 512kbit/s on outgoing packets on device eth0:

tc qdisc del dev eth0 root
tc qdisc add dev eth0 root netem delay 400ms rate 512kbit

References:
man tc-netem
Linux Foundation Netem Wiki
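A small addition that may be useful: once the qdisc exists, its parameters can usually be adjusted in place with tc qdisc change instead of deleting and re-adding it (a sketch using the same device as above):

# tweak the emulation on the fly
tc qdisc change dev eth0 root netem delay 200ms 50ms rate 1mbit
# see what is currently applied
tc qdisc show dev eth0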
I read that there's another tool for netfilter that allows you to add latency to a ratelimit. Does anyone have an example of this?
How does one use tc to add latency to a ratelimit?
In Fedora 17 they moved a lot of unused (in common usage I guess) modules for the kernel into the package kernel-modules-extra. Installing this package will fix the problem. Source: https://serverfault.com/a/398964/112950
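For reference, a hedged sketch of the fix on Fedora 17 (yum here; newer Fedora releases use dnf, and the explicit modprobe is only needed if the netem module is not autoloaded when tc first asks for it):

yum install kernel-modules-extra
# after installing (and, if the package targets a newer kernel, rebooting):
modprobe sch_netem
tc qdisc add dev eth0 root netem delay 100ms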
After upgrading from Fedora 16 to Fedora 17, Traffic Control no longer seems to work. Running # tc qdisc show will output: qdisc pfifo_fast 0: dev eth0 root refcnt 2 bands 3 priopmap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1 However, if I run # tc qdisc add dev eth0 root netem delay 100ms or similar commands such as # tc ... loss 2% or # tc ... corrupt 3% I get the following: RNETLINK answers: No such file or directory Downgrading back to Fedora 16 allows me to use Traffic Control without this problem, so I'm convinced it's not a hardware issue. This question is similar to https://serverfault.com/questions/318926/tc-netem-possibly-missing but I believe the right components were installed by checking # yum provides */tc and ascertaining that tc is from the package iproute, whose latest installation I have. Is netem part of another package I must also install?
Is Traffic Control (tc) broken in Fedora 17?
So after some reading and rummaging in the kernel source, it seems the qdisc is ineffective because the tun driver doesn't ever tell the network stack it is busy. It simply holds packets in its own local queues (whose size is set by txqlen) and when they are full it simply drops the excess packets. Here's the relevant bit of the transmit function in drivers/net/tun.c that is called by the stack when it wants to send a packet: /* Net device start xmit */ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev) { struct tun_struct *tun = netdev_priv(dev); int txq = skb->queue_mapping; struct tun_file *tfile; int len = skb->len; rcu_read_lock(); tfile = rcu_dereference(tun->tfiles[txq]);....... Various unrelated things omitted ....... if (ptr_ring_produce(&tfile->tx_ring, skb)) goto drop; /* Notify and wake up reader process */ if (tfile->flags & TUN_FASYNC) kill_fasync(&tfile->fasync, SIGIO, POLL_IN); tfile->socket.sk->sk_data_ready(tfile->socket.sk); rcu_read_unlock(); return NETDEV_TX_OK; drop: this_cpu_inc(tun->pcpu_stats->tx_dropped); skb_tx_error(skb); kfree_skb(skb); rcu_read_unlock(); return NET_XMIT_DROP; } }A typical network interface driver should call netif_stop_queue() and netif_wake_queue() functions to stop and start the flow of packets from the network stack. When the flow is stopped, the packets are queued in the attached queue discipline, allowing the user more flexibility in how that traffic is managed and prioritised. For whatever reason, the tap/tun driver does not do this - presumably because most tunnels simply encapsulate packets and send them to real network interfaces without any additional flow control. To verify my finding I tried a simple test by stopping the flow control in the function above: if (ptr_ring_produce(&tfile->tx_ring, skb)) { netif_stop_queue(dev); goto drop; } else if (ptr_ring_full(&tfile->tx_ring)) { netif_stop_queue(dev); tun_debug(KERN_NOTICE, tun, "tun_net_xmit stop %lx\n", (size_t)skb); }and a similar additions to tun_ring_recv to stop/wake the queue based on whether it was empty after dequeuing a packet: empty = __ptr_ring_empty(&tfile->tx_ring); if (empty) netif_wake_queue(tun->dev); else netif_stop_queue(tun->dev);This is not a great system, and wouldn't work with a multiqueue tunnel, but it works well enough that I could see the qdisc reporting a backlog and a clear difference in ping times and loss rate using pfifo_fast at different ToS levels when the link was at capacity.
I am developing a tunnel application that will provide a low-latency, variable bandwidth link. This will be operating in a system that requires traffic prioritization. However, while traffic towards the tun device is clearly being queued by the kernel, it appears whatever qdisc I apply to the device it has no additional effect, including the default pfifo_fast, i.e. what should be high priority traffic is not being handled separately from normal traffic. I have made a small test application to demonstrate the problem. It creates two tun devices and has two threads each with a loop passing packets from one interface to the other and back, respectively. Between receiving and sending the loop delays 1us for every byte, roughly emulating an 8Mbps bidirectional link: void forward_traffic(int src_fd, int dest_fd) { char buf[BUFSIZE]; ssize_t nbytes = 0; while (nbytes >= 0) { nbytes = read(src_fd, buf, sizeof(buf)); if (nbytes >= 0) { usleep(nbytes); nbytes = write(dest_fd, buf, nbytes); } } perror("Read/write TUN device"); exit(EXIT_FAILURE); }With each tun interface placed in its own namespace, I can run iperf3 and get about 8Mbps of throughput. The default txqlen reported by ip link is 500 packets and when I run an iperf3 (-P 20) and a ping at the same time I see a RTTs from about 670-770ms, roughly corresponding to 500 x 1500 bytes of queue. Indeed, changing txqlen changes the latency proportionally. So far so good. With the default pfifo_fast qdisc I would expect a ping with the right ToS mark to skip that normal queue and give me a low latency, e.g ping -Q 0x10 I think should have much lower RTT, but doesn't (I have tried other ToS/DSCP values as well - they all have the same ~700ms RTT. Additionally I have tried various other qdiscs with the same results, e.g. fq_codel doesn't have a significant effect on latency. Regardless of the qdisc, tc -s qdisc always shows a backlog of 0 regardless of whether the link is congested. (But I do see ip -s link show dropped packets under congestion) Am I fundamentally misunderstanding something here or there something else I need to do make the qdisc effective? Complete source here
Traffic shaping ineffective on tun device
Contrary to Netfilter which includes a stateful NAT engine (using the conntrack lookup entries) and which will automatically de-NAT the reply traffic without explicit rule telling it to do so, implementing NAT elsewhere is stateless and requires to handle both directions. For incoming connections, that means handling NAT at ingress but also handling de-NAT at egress explicitly. As witnessed by running tcpdump on the client: # tcpdump -ttt -l -n -s0 -p -i lxcbr0 tcp tcpdump: verbose output suppressed, use -v[v]... for full protocol decode listening on lxcbr0, link-type EN10MB (Ethernet), snapshot length 262144 bytes 00:00:00.000000 IP 10.0.3.1.52542 > 10.0.3.214.80: Flags [S], seq 3033230443, win 64240, options [mss 1460,sackOK,TS val 2154801903 ecr 0,nop,wscale 7], length 0 00:00:00.000058 IP 10.0.3.214.8080 > 10.0.3.1.52542: Flags [S.], seq 1400064141, ack 3033230444, win 65160, options [mss 1460,sackOK,TS val 3949758745 ecr 2154801903,nop,wscale 7], length 0 00:00:00.000013 IP 10.0.3.1.52542 > 10.0.3.214.8080: Flags [R], seq 3033230444, win 0, length 0the current eBPF code did only the first part. So incoming TCP packets to port 80 are indeed switched to port 8080 before any other part of the network stack can know about it, but then the reply traffic will just be issued from port 8080 (knowledge of any port 80 is lost after the eBPF code), while the client expects replies from port 80 too: the client's kernel replies with a TCP RST and the client tries again, with the same outcome: no connectivity. An equivalent inverse transformation has to be done on egress. As all this is stateless that means once done it will no longer be possible to connect directly to port 8080 for the same reasons: the same effect would then happen: connections to port 8080 will now be replied using port 80. By contrast, applying an equivalent setup to UDP would have worked for incoming traffic only, because UDP doesn't need to emit back anything when receiving traffic. But sending back ICMP errors (for example to signal to the client there is no longer a server listening) would fail. Even if eBPF code was done for the other direction for UDP, an ICMP error would still include the wrong UDP port in its partial UDP payload. Netfilter's NAT also takes care of this.
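Incidentally, while testing a program like the one in the question, the bpf_printk() output can be followed live; this is just the standard kernel tracing pipe, nothing specific to this setup (the path may be /sys/kernel/tracing/trace_pipe on systems where only tracefs is mounted):

# watch the bpf_printk() lines emitted by the ingress/egress programs
sudo cat /sys/kernel/debug/tracing/trace_pipe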
I'm want to use TC BPF to redirect incoming traffic from port 80 to port 8080. Below is my own code, but I've also tried the example from man 8 tc-bpf (search for 8080) and I get the same result. #include <linux/bpf.h> #include <bpf/bpf_helpers.h> #include <bpf/bpf_endian.h> #include <linux/pkt_cls.h> #include <linux/if_ether.h> #include <linux/tcp.h> #include <linux/in.h> #include <linux/ip.h>#include <linux/filter.h>static inline void set_tcp_dport(struct __sk_buff *skb, int nh_off, __u16 old_port, __u16 new_port) { bpf_l4_csum_replace(skb, nh_off + offsetof(struct tcphdr, check), old_port, new_port, sizeof(new_port)); bpf_skb_store_bytes(skb, nh_off + offsetof(struct tcphdr, dest), &new_port, sizeof(new_port), 0); }SEC("tc_my") int tc_bpf_my(struct __sk_buff *skb) { struct iphdr ip; struct tcphdr tcp; if (0 != bpf_skb_load_bytes(skb, sizeof(struct ethhdr), &ip, sizeof(struct iphdr))) { bpf_printk("bpf_skb_load_bytes iph failed"); return TC_ACT_OK; } if (0 != bpf_skb_load_bytes(skb, sizeof(struct ethhdr) + (ip.ihl << 2), &tcp, sizeof(struct tcphdr))) { bpf_printk("bpf_skb_load_bytes ethh failed"); return TC_ACT_OK; } unsigned int src_port = bpf_ntohs(tcp.source); unsigned int dst_port = bpf_ntohs(tcp.dest); if (src_port == 80 || dst_port == 80 || src_port == 8080 || dst_port == 8080) bpf_printk("%pI4:%u -> %pI4:%u", &ip.saddr, src_port, &ip.daddr, dst_port); if (dst_port != 80) return TC_ACT_OK; set_tcp_dport(skb, ETH_HLEN + sizeof(struct iphdr), __constant_htons(80), __constant_htons(8080)); return TC_ACT_OK; }char LICENSE[] SEC("license") = "GPL";On machine A, I am running: clang -g -O2 -Wall -target bpf -c tc_my.c -o tc_my.o tc qdisc add dev ens160 clsact tc filter add dev ens160 ingress bpf da obj tc_my.o sec tc_my nc -l 8080On machine B: nc $IP_A 80On machine B, nc seems connected, but ss shows: SYN-SENT 0 1 $IP_B:53442 $IP_A:80 users:(("nc",pid=30180,fd=3))On machine A, connection remains in SYN-RECV before being dropped. I was expecting my program to behave as if I added this iptables rule: iptables -t nat -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-port 8080Maybe my expectations are wrong, but I would like to understand why. How can I get my TC BPF redirect to work? SOLUTION Following the explanation in my accepted answer, here is an example code which works for TCP, does ingress NAT 90->8080, and egress de-NAT 8080->90. 
#include <linux/bpf.h> #include <bpf/bpf_helpers.h> #include <bpf/bpf_endian.h> #include <linux/pkt_cls.h> #include <linux/if_ether.h> #include <linux/tcp.h> #include <linux/in.h> #include <linux/ip.h>#include <linux/filter.h>static inline void set_tcp_dport(struct __sk_buff *skb, int nh_off, __u16 old_port, __u16 new_port) { bpf_l4_csum_replace(skb, nh_off + offsetof(struct tcphdr, check), old_port, new_port, sizeof(new_port)); bpf_skb_store_bytes(skb, nh_off + offsetof(struct tcphdr, dest), &new_port, sizeof(new_port), 0); }static inline void set_tcp_sport(struct __sk_buff *skb, int nh_off, __u16 old_port, __u16 new_port) { bpf_l4_csum_replace(skb, nh_off + offsetof(struct tcphdr, check), old_port, new_port, sizeof(new_port)); bpf_skb_store_bytes(skb, nh_off + offsetof(struct tcphdr, source), &new_port, sizeof(new_port), 0); }SEC("tc_ingress") int tc_ingress_(struct __sk_buff *skb) { struct iphdr ip; struct tcphdr tcp; if (0 != bpf_skb_load_bytes(skb, sizeof(struct ethhdr), &ip, sizeof(struct iphdr))) { bpf_printk("bpf_skb_load_bytes iph failed"); return TC_ACT_OK; } if (0 != bpf_skb_load_bytes(skb, sizeof(struct ethhdr) + (ip.ihl << 2), &tcp, sizeof(struct tcphdr))) { bpf_printk("bpf_skb_load_bytes ethh failed"); return TC_ACT_OK; } unsigned int src_port = bpf_ntohs(tcp.source); unsigned int dst_port = bpf_ntohs(tcp.dest); if (src_port == 90 || dst_port == 90 || src_port == 8080 || dst_port == 8080) bpf_printk("INGRESS %pI4:%u -> %pI4:%u", &ip.saddr, src_port, &ip.daddr, dst_port); if (dst_port != 90) return TC_ACT_OK; set_tcp_dport(skb, ETH_HLEN + sizeof(struct iphdr), __constant_htons(90), __constant_htons(8080)); return TC_ACT_OK; }SEC("tc_egress") int tc_egress_(struct __sk_buff *skb) { struct iphdr ip; struct tcphdr tcp; if (0 != bpf_skb_load_bytes(skb, sizeof(struct ethhdr), &ip, sizeof(struct iphdr))) { bpf_printk("bpf_skb_load_bytes iph failed"); return TC_ACT_OK; } if (0 != bpf_skb_load_bytes(skb, sizeof(struct ethhdr) + (ip.ihl << 2), &tcp, sizeof(struct tcphdr))) { bpf_printk("bpf_skb_load_bytes ethh failed"); return TC_ACT_OK; } unsigned int src_port = bpf_ntohs(tcp.source); unsigned int dst_port = bpf_ntohs(tcp.dest); if (src_port == 90 || dst_port == 90 || src_port == 8080 || dst_port == 8080) bpf_printk("EGRESS %pI4:%u -> %pI4:%u", &ip.saddr, src_port, &ip.daddr, dst_port); if (src_port != 8080) return TC_ACT_OK; set_tcp_sport(skb, ETH_HLEN + sizeof(struct iphdr), __constant_htons(8080), __constant_htons(90)); return TC_ACT_OK; }char LICENSE[] SEC("license") = "GPL";Here is how I build and loaded the different sections in my program: clang -g -O2 -Wall -target bpf -c tc_my.c -o tc_my.o tc filter add dev ens32 ingress bpf da obj /tc_my.o sec tc_ingress tc filter add dev ens32 egress bpf da obj /tc_my.o sec tc_egress
Redirect port using TC BPF
tc also accepts a -s parameter, with the same meaning: statistics. Example as root applied on a veth link toward an LXC container with address 10.0.3.128: # echo; tc qdisc del dev vethlzYQu1 root 2>/dev/null; \ ip neigh flush all; \ tc qdisc add dev vethlzYQu1 root netem loss 30% 50%; \ tc -s qdisc show dev vethlzYQu1 root; \ ping -q -c 10 10.0.3.128; \ tc -s qdisc show dev vethlzYQu1 root qdisc netem 8010: root refcnt 5 limit 1000 loss 30% 50% Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) backlog 0b 0p requeues 0 PING 10.0.3.128 (10.0.3.128) 56(84) bytes of data.--- 10.0.3.128 ping statistics --- 10 packets transmitted, 8 received, 20% packet loss, time 9193ms rtt min/avg/max/mdev = 0.030/125.218/1001.185/331.084 ms qdisc netem 8010: root refcnt 5 limit 1000 loss 30% 50% Sent 826 bytes 9 pkt (dropped 3, overlimits 0 requeues 0) backlog 0b 0p requeues 0Here 9+3=12 packets should have been sent, 2 of the dropped packets were from the ping, and the other was probably an ARP request which was retried. If you need to parse tc's output in shell, better use its JSON output along jq. Eg: # tc -s -json qdisc show dev vethlzYQu1 root | jq '.[].drops' 3
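To watch the counters while ping is still running (the interface name from the question is assumed), something along these lines works:

# refresh the full qdisc statistics every second
watch -n 1 tc -s qdisc show dev ens33

# or poll only the drop counter using the JSON output and jq, as above
while sleep 1; do tc -s -json qdisc show dev ens33 root | jq '.[].drops'; done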
I found an interesting article that describes how to simulate network issues (like lost packets) on a linux server. On an Ubuntu test VM, I checked which interface is used for internet connectivity, and it's called ens33. Then I added a rule using tc to introduce packet loss: $ sudo tc qdisc add dev ens33 root netem loss 30% 50% And then I let ping run for a while, the result is as expected, some packets are lost: $ ping www.google.com ... 97 packets transmitted, 84 received, 13% packet lossWhile ping was running, I thought I could also monitor the ongoing packet loss using ip -s link show ens33, but it shows 0 dropped packets both for RX and TX. What I'm trying to do is to monitor packet loss in realtime, while ping is running.
Monitoring packet loss simulated with tc
Creating an .onion service in the Tor network is as simple as editing /etc/tor/torrc and adding: HiddenServiceDir /var/lib/tor/www_service/ HiddenServicePort 80 127.0.0.1:80After restarting the tor service with sudo service tor restart or sudo service tor reloadThe directory will be created automagically, and inside the new directory, two files are generated, hostname and private_key. The hostname file has a somewhat random name inside, which is your address in the .onion network. $sudo cat /var/lib/tor/www_service/hostname xyew6pdq6qv2i4sx.onion The names are generated in negotiation with the actual Tor network, which also explains why sites/services in the Tor network have such strange names. There appears to be scripts for getting (using brute force?) a slighter less random name, I got an impression the added complexity is not worth the extra effort. So actually, what you have configured now, is that all visits to in the Tor network to http://xyew6pdq6qv2i4sx.onion/ will be forwarded to a daemon listening to 127.0.0.1:80 (localhost:80) on your server. Now we can setup a web daemon to answer for that IP adress:port and only binding for localhost e.g. it does not answers requests in the local network, and in any public IP address in the "regular" Internet. For instance, using nginx, change the default server configuration in /etc/nginx/sites-enabled/default to: server { listen 127.0.0.1:80 default_server; server_name xyew6pdq6qv2i4sx.onion; ... }Install some pages, and voilá, you have a darknet site. The actual part of installing the service per se, is not the most difficult part however. Care must be taken for not to leak informations of the real machine in:the security setup of the server; the daemon providing the service; the firewalling/iptables rules.Special care must be taken of DNS leaks too, either via dnscrypt or tor. See the answer at resolving DNS via Tor for more information. Such setup can be either used to setup somewhat anonymous sites, or more interestingly yet, due to the properties of arriving as a reverse proxy configuration, to setup a temporary service/download files from a network where there are no firewall rules, or public IP addresses/NAT available to setup a proper www site in the Internet at large. Obviously, there is so much more to talk about security concerns, however it is out of scope of this question. For multiple services in the same host, please see the related question: How to set up multiple Tor hidden services in the same host? For an introduction to the theme, have a look at: Setting up a hidden service with NGinx and Onionshop Guide: How To Set Up a Hidden Service? If having problems opening .onion sites with FireFox, see: Visiting darknet/ Tor sites with Firefox
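As a quick sanity check from the server itself, the new site can be fetched through the local tor daemon's SOCKS port (9050 is tor's default SocksPort; the onion address is the example one generated above). The --socks5-hostname flag matters because the .onion name must be resolved by Tor, not locally:

curl --socks5-hostname 127.0.0.1:9050 http://xyew6pdq6qv2i4sx.onion/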
I have heard a lot about creating darknet sites lately. I also use the Tor browser frequently. The tor service is running in my Debian server at home, and it was installed with: sudo apt-get install tor I have an idea how the Tor network works and also use torify once in a while, in Linux, and MacOS, for doing some tests with ssh and wget over the Tor network. I have noticed the lines in /etc/tor/torrc #HiddenServiceDir /var/lib/tor/hidden_service/ #HiddenServicePort 80 127.0.0.1:80However, how to go from there? How are .onion sites/names created? What are the basics about setting up such a service in Linux?
How to create a darknet/Tor web site in Linux?
Here is how it does it: static int getdestaddr_iptables(int fd, const struct sockaddr_in *client, const struct sockaddr_in *bindaddr, struct sockaddr_in *destaddr) { socklen_t socklen = sizeof(*destaddr); int error; error = getsockopt(fd, SOL_IP, SO_ORIGINAL_DST, destaddr, &socklen); if (error) { log_errno(LOG_WARNING, "getsockopt"); return -1; } return 0; }iptables overrides the original destination address but it remembers the old one. The application code can then fetch it by asking for a special socket option, SO_ORIGINAL_DST.
There are two SOCKS proxies that I know about that support transparent proxying for any outgoing TCP connection: Tor and redsocks. Unlike HTTP proxies, these SOCKS proxies can transparently proxy any outgoing TCP connection, including encrypted protocols and protocols without metadata or headers. Both of these proxies require the use of NAT to redirect any outgoing TCP traffic to the proxy's local port. For instance, if I am running Tor with TransPort 9040 on my local machine, I would need to add an iptables rule like this: iptables -t nat -A OUTPUT -p tcp -j REDIRECT --to-port 9040To my knowledge, this would replace the original destination IP and port with 127.0.0.1 and 9040, so given that this is an encrypted stream (like SSH) or one without headers (like whois), how does the proxy know the original destination IP and port?
How does a transparent SOCKS proxy know which destination IP to use?
Take a look at this answer: How does a transparent SOCKS proxy know which destination IP to use? Quotation: iptables overrides the original destination address but it remembers the old one. The application code can then fetch it by asking for a special socket option, SO_ORIGINAL_DST.
Out of curiosity I'm reading some tutorials about transparent TOR proxies as it's quite interesting topic from a networking standpoint. As opposed to VPN gateways which just use tun/tap interfaces and are totally clear to me, TOR proxy uses a single port. All tutorials repeat the magic line: iptables -t nat -A PREROUTING -i eth0 -p tcp --syn -j REDIRECT --to-ports 9040where eth0 is the input (LAN) interface and 9040 is some TOR port. The thing is, I completely don't get why such a thing makes sense at all from networking standpoint. According to my understanding of redirect / dst-nat chains and how it seems to work in physical routers, dst-nat chain takes dst-port and dst-addr BEFORE routing decision is taken and changes them to something else. So for example:before dst-nat: 192.168.1.2:46364 -> 88.88.88.88:80 after dst-nat: 192.168.1.2:46364 -> 99.99.99.99:8080And 99.99.99.99:8080 is what further chains in IP packet flow lane see (for example filter table) and this is how the packet looks from now on after leaving device for example. Now many people around the internet (including on this stackexchange) claimed that redirect is basically the same as dst-nat with dst-addr set to local address of interface. In such light, this rule: iptables -t nat -A PREROUTING -i eth0 -p tcp --syn -j REDIRECT --to-ports 9040clearly doesn't make sense. If that would be how it works, then TOR would get all packets with destination 127.0.0.1:9040. For typical applications where app takes packet and responds to it somehow (for example web servers) it totally makes sense because after all, such a server process is the final destination of the packet anyways so it's okay that the destination address is localhost. But TOR router is well... a router so it has to know original destination of packet. Am I missing something? Does DNAT not affect what local applications receive? Or is it specific behavior of REDIRECT directive?
What does iptables -j REDIRECT *actually* do to packet headers?
First, you need tun2socks (often a part of the 'badvpn' package). tun2socks sets up a virtual interface which you can route traffic through, and that traffic will get sent through the target socks proxy. Setting it up gets a little tricky as you only want to route certain traffic through the tunnel. This script should do what you want: #!/bin/bash socks_server=127.0.0.1:8080id="$RANDOM" tun="$(printf 'tun%04x' "$id")" ip tuntap add dev $tun mode tun ip link set $tun up ip addr add 169.254.1.1/30 dev $tun sysctl -w net.ipv4.conf.$tun.forwarding=1 ip rule add fwmark $id lookup $id ip route add default via 169.254.1.2 table $id iptables -t mangle -I PREROUTING -i eth1 -p tcp -j MARK --set-mark $id iptables -t mangle -I PREROUTING -i eth2 -p tcp -j MARK --set-mark $id badvpn-tun2socks --tundev $tun --netif-ipaddr 169.254.1.2 --netif-netmask 255.255.255.252 --socks-server-addr $socks_serveriptables -t mangle -D PREROUTING -i eth2 -p tcp -j MARK --set-mark $id iptables -t mangle -D PREROUTING -i eth1 -p tcp -j MARK --set-mark $id ip route del default via 169.254.1.2 table $id ip rule del from fwmark $id lookup $id ip tuntap del dev $tun mode tunExplanation: socks_server=127.0.0.1:8080This is the socks server we will use.id="$RANDOM" tun="$(printf 'tun%04x' "$id")"These generate a random ID to use for the tunnel. Since you may have other tunnels on the system, we can't just use tun0 or tun1. 99% of the time this will work fine. Adjust accordingly though.ip tuntap add dev $tun mode tun ip link set $tun up ip addr add 169.254.1.1/30 dev $tun sysctl -w net.ipv4.conf.$tun.forwarding=1These set up the tunnel interface tun2socks will use.ip rule add fwmark $id lookup $id ip route add default via 169.254.1.2 table $idThese create a routing table with a single rule which sends any traffic with firewall mark $id (covered next) through the tunnel.iptables -t mangle -I PREROUTING -i eth1 -p tcp -j MARK --set-mark $id iptables -t mangle -I PREROUTING -i eth2 -p tcp -j MARK --set-mark $idThese set firewall mark $id on any TCP packets coming in eth1 or eth2. We only want to match TCP. Socks can't handle UDP or ICMP (tun2socks does have a way to forward UDP, but it's more complicated, and so I'm leaving it out).badvpn-tun2socks --tundev $tun --netif-ipaddr 169.254.1.2 --netif-netmask 255.255.255.252 --socks-server-addr $socks_serverThis starts tun2socks up. It'll sit in the foreground until terminated.iptables -t mangle -D PREROUTING -i eth2 -p tcp -j MARK --set-mark $id iptables -t mangle -D PREROUTING -i eth1 -p tcp -j MARK --set-mark $id ip route del default via 169.254.1.2 table $id ip rule del from fwmark $id lookup $id ip tuntap del dev $tun mode tunThese tear down everything we created during the setup process. They will only run once badvpn-tun2socks exits.
Is there a way to redirect all traffic, UDP and TCP, coming to and from eth1 and eth2 through a SOCKS proxy (Tor) which then passes it through eth0? eth0: Internet in - leads to the main router, then the cable modem eth1: A USB Ethernet port setup as a modem (I think that's the word I'm looking for, right?) eth2: A USB WiFi antenna setup as a WiFi hotspot Could I use something like iptables to directly route it through Tor or would I need an adapter like Privoxy?
Redirect ALL packets from eth1 & eth2 through a SOCKS proxy
It's not an attack, just an outdated key. There's an issue report on this matter over at the GitHub repository. A workaround reported there, which works for some systems if not all, is to run:

gpg --homedir "$HOME/.local/share/torbrowser/gnupg_homedir/" --refresh-keys --keyserver pgp.mit.edu

before torbrowser-launcher. Then it works. It's quite possible that what Kusalananda suggested would also work, but I can't check that unless I undo the key update.
I wanted to try using TOR on my new Linux Mint 18.1 installation. So I apt-get installed torbrowser-launcher and tor, then ran torbrowser-launcher. It opened a dialog box and showed me it was downloading the TOR browser; but when it was done, it said it had failed the signature check and that I may be "under attack" (oh my!). Now, it's quite unlikely I'm under some attack personally (I'm not important enough for that), so I'm guessing either it's some technical glitch, or, what would be possible although far far less likely, a man-in-the-middle attack covering my ISP rather than myself individually, nefarious government surveillance or what-not. How can I tell? What should I do? By the way, the URLs downloaded are: https://dist.torproject.org/torbrowser/6.5/tor-browser-linux64-6.5_en-US.tar.xz.asc https://dist.torproject.org/torbrowser/6.5/tor-browser-linux64-6.5_en-US.tar.xz
torbrowser signature verification fails - a glitch or an "attack"?
Answer from the Debian bug tracker: bug #942901

Edit your /etc/apparmor.d/local/torbrowser.Browser.firefox and add the following line:

owner /{dev,run}/shm/org.mozilla.*.* rw,

Also add exactly the same line to /etc/apparmor.d/local/torbrowser.Browser.plugin-container. Then:

sudo systemctl restart apparmor

The Debian bug tracker states:

Message #5
Tor Browser 9.0 shows only black screens because the default apparmor profile does not allow write access to /dev/shm/org.mozilla.ipc.. like it does for /dev/shm/org.chromium.* and I was able to fix this issue by adding this workaround:
==> /etc/apparmor.d/local/torbrowser.Browser.firefox <==
owner /{dev,run}/shm/org.mozilla.*.* rw,
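If you prefer to script the edit rather than open an editor, the same change can be made like this (purely a convenience sketch of the steps above):

echo 'owner /{dev,run}/shm/org.mozilla.*.* rw,' | sudo tee -a /etc/apparmor.d/local/torbrowser.Browser.firefox
echo 'owner /{dev,run}/shm/org.mozilla.*.* rw,' | sudo tee -a /etc/apparmor.d/local/torbrowser.Browser.plugin-container
sudo systemctl restart apparmor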
I’ve just moved from Stretch to Buster with a Cinnamon desktop. This more or less does everything I need except that I also load stable firefox (using snap), AirVPN and Tor from the buster-backport repository. Normally there aren’t any problems except this time the Tor screen (after using the launcher) is black and unresponsive. I’ve done a clean install and then just installed Tor but the problem remains. Can anybody help? Thank you.
Debian Tor Browser Showing a Black Screen
I found the solution: Firstly, I ran: echo -e "deb http://http.kali.org/kali sana main non-free contrib\ndeb http://security.kali.org/kali-security/ sana/updates main contrib non-free" > /etc/apt/sources.listand then apt-get update apt-get update --fix-missing After this Tor was installed normally with apt-get install tor.
I'm trying to install tor on my Kali Linux 2016.1 (kali-rolling). When I type apt-get install tor in Terminal, this error appears: Reading package lists... Done Building dependency tree Reading state information... Done Package tor is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another sourceE: Package 'tor' has no installation candidateHow can I fix this and install tor? UPD: I tried this: http://www.blackmoreops.com/2013/12/16/installing-tor-kali-linux/ - I added deb http://deb.torproject.org/torproject.org wheezy main to sources file, but it didn't help at all, so I deleted this string and now it's in a default condition
Problem installing tor on Kali Linux
For adding multiple Tor services in the same server, it is as simple as editing /etc/tor/torrc and adding two lines for each service, each with its own directory under /var/lib/tor/. For instance, to launch another two web sites in the same server, you can keep port 80 as the Tor-side (virtual) port and point each service at a different port on localhost, as in:

HiddenServiceDir /var/lib/tor/www2_service/
HiddenServicePort 80 127.0.0.1:8080

HiddenServiceDir /var/lib/tor/www3_service/
HiddenServicePort 80 127.0.0.1:8081

I would add that keeping port 80 on the Tor side for several sites is a welcome facility, as it does not oblige the user to add a port after the URL to access an onion site/service, while still allowing TCP-based services on their canonical ports to be mapped to ports of your own choice on the local server. nginx would then be configured with 2 new vhosts:

server {
    listen 127.0.0.1:8080;
    server_name zyew6pdq6fv4i6sz.onion;
    ...
}

server {
    listen 127.0.0.1:8081;
    server_name yyew6pdh6hv1i3sy.onion;
    ...
}

If the need also arises to temporarily access the ssh service via Tor as a poor man's VPN, and to bypass firewall rules, a 4th entry can be added to the /etc/tor/torrc file:

HiddenServiceDir /var/lib/tor/ssh_service/
HiddenServicePort 22 127.0.0.1:22

As mentioned in How to create a darknet/Tor web site in Linux?, after you run:

service tor reload

the directories will be created, and inside each of the new directories two files are generated automatically, hostname and private_key. The content of the hostname file inside each directory will be the new .onion address by which the corresponding new service can be used inside the Tor network.
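To collect every generated address at once, a small loop helps (this just follows the *_service directory naming convention used above):

# print the .onion hostname of each configured hidden service
for d in /var/lib/tor/*_service; do
    printf '%s: ' "$d"
    sudo cat "$d/hostname"
done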
When posting the question How to create a darknet/Tor web site in Linux?, @Michael Kjörling asked how to setup multiple Tor Hidden services in the same host. In that question, for setting up a single www service, it was mentioned editing /etc/tor/torrc and adding: HiddenServiceDir /var/lib/tor/www_service/ HiddenServicePort 80 127.0.0.1:80How would we then setup multiple Tor services or multiple Tor sites sharing the same server?
How to set up multiple Tor hidden services in the same host?
Short answer

Yes, it is possible: use tsocks nmap -sT IP

Long answer

First of all, Tor doesn't use privoxy; Tor provides a SOCKS proxy for connecting via the Tor network. This means you won't see any network routes or things like that on your system, but you have to configure your applications to use the Tor SOCKS proxy to connect via Tor. Typical Tor installations have privoxy or other proxy servers to provide HTTP proxies, as some browsers try to resolve the hostname locally if they are using a SOCKS proxy. But these HTTP proxy servers have nothing to do with connecting arbitrary applications through Tor. Applications like tsocks allow arbitrary applications to connect via the Tor network. This is done by hooking into specific syscalls like connect and routing them automatically via the SOCKS proxy. This only works if the program uses those syscalls and is dynamically linked. To use nmap via Tor you have to use a program like tsocks to redirect the connections via the SOCKS proxy, and use a scanning option which uses the connect syscall. Fortunately nmap provides the -sT option:

-sT (TCP connect scan)
TCP connect scan is the default TCP scan type when SYN scan is not an option. This is the case when a user does not have raw packet privileges.

So yes, it is possible to run specific nmap scans (the TCP connect scan) via the Tor network if you use tsocks.
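As an illustrative invocation (the target address is only a placeholder): -Pn skips host-discovery probes such as ICMP pings, which cannot go through Tor, and -n avoids local DNS lookups that would otherwise bypass the proxy:

tsocks nmap -sT -Pn -n -p 22,80,443 203.0.113.10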
Is it possible to run nmap via Tor? When I googled around, I got the impression that Tor uses Polipo / Privoxy, which are socks5 proxies. So any TCP / UDP aware applications should be able to use them as a gateway to route their traffic. But somewhere it also said that nmap uses raw packets, so it can't be run over Tor!
Run nmap via Tor
torify is a nice frontend for torsocks which makes life far easier. It automates and simplifies the setup of torsocks in the background, making it as simple as invoking torify before the intended commands. To run any program that uses TCP connections through the Tor network, the steps below are all you need:

Run the tor daemon; on Debian do:

sudo apt-get install tor
sudo service tor start

On MacOS (tested with Sierra 10.12.2 beta, MacPorts 2.3.5):

sudo port install tor
sudo port install torsocks
tor &

Then call, on the command line, most of the tools whose communications are based on TCP with torify. For instance:

torify wget ...

or

torify ssh ...

From man torify:

torify is a simple wrapper that attempts to find the best underlying Tor wrapper available on a system. It calls torsocks with a tor specific configuration file. torsocks is an improved wrapper that explicitly rejects UDP, safely resolves DNS lookups and properly socksifies your TCP connections. Please note that since both methods use LD_PRELOAD, torify cannot be applied to suid binaries.
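A quick way to confirm the wrapping actually works (at the time of writing the Tor Project exposes a small JSON check endpoint; if it ever changes, any "what is my IP" service will do):

torify curl https://check.torproject.org/api/ip
# expected output looks like {"IsTor":true,"IP":"..."} when traffic really exits via Tor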
I'm trying to download a file using wget, but I've exceeded (without knowing there was a limit) the limit of downloaded bytes for my IP address, which means that I can't download anymore the files I was downloading repeatedly by re-running a script. I was essentially trying to re-running a shell/bash script (somehow testing if it works correctly), but I can't do it anymore until I fake my IP address. So, I decided to install tor and torsocks and execute the following commands: echo | tor & torsocks wget <some_url>but it doesn't work. I've never really got into tor (and even less torsocks), so I'm not sure if that's the right tool for this case. Any help is appreciated. Note: I know a little bit about the tor network, and I thought that in general it should create a proxy and not show my IP address to the world, but apparently that's not exactly what's happening.
How to download file by faking the request's IP address?
Op (I, that is) didn't take this OpenVPN FAQ seriously enough: One of the most common problems in setting up OpenVPN is that the two OpenVPN daemons on either side of the connection are unable to establish a TCP or UDP connection with each other.This is almost [always] a result of: ... A software firewall running on the OpenVPN server machine itself is filtering incoming connections on port 1194 [here 5000-5007]. Be aware that many OSes will block incoming connections by default, unless configured otherwise.There's no problem with OpenVPN. I just neglected to create a firewall rule for WAN in the pfSense VM that's running the OpenVPN servers, to provide access for the hidden-service proxy in the Tor-gateway pfSense VM. How embarrassing. But this question should remain, I think, in case others make the same dumb mistake that I did.
I'm experimenting with OpenVPN connections routed through Tor, using pairs of Tor gateway and OpenVPN-hosting VMs. On the server side, link local ports are forwarded to Tor hidden-service ports on the associated gateway VM. On the client side, OpenVPN connects through socks proxies on the associated Tor gateway VM. The above setup works using Debian 7 for all Tor gateway and OpenVPN-hosting VMs. I'm using Whonix, which has been updated to OpenVPN 2.3.2 (built on 2013-09-12). Server-client ping is about 1200 msec. However, the setup does not work using pfSense 2.1 as Tor gateway and OpenVPN-hosting VMs on the server side. pfSense 2.1 also has OpenVPN 2.3.2 (built on 2013-07-24). For both Debian and pfSense clients, I see: TCP connection established with [AF_INET]192.168.0.10:9152 recv_socks_reply: TCP port read timeout expired: Operation now in progress (error ...)This is the same error reported in Debian bug #657964 for openvpn version 2.2.1-3: "openvpn: Can't connect to a VPN using SOCKS proxy". It's also been reported in OpenVPN bug #328 for openvpn version 2.3.2: "openvpn client gives up instead of retrying when proxy server is slow". However, this may not be the same bug. The problem here may be latency in forwarding OpenVPN server ports through Tor hidden services, rather than latency in Tor SOCKS proxies on the client side. Or it may be both. In any case, I find that OpenVPN 2.3.2 servers fail with this client error in pfSense 2.1, but not in Debian 7. Perhaps the latest package in the Debian 7 repository includes bug fixes that were issued since the pfSense 2.1 build. How can I configure OpenVPN to wait for slow SOCKS proxies?
How can I configure OpenVPN to wait for slow SOCKS proxies?
put this in your .profile or .bashrc or .zshrc or whichever shell you use:

function mycommand {
    tor &
    polipo &
    pianobar
}

And now just run mycommand
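Since tor can take a few seconds to become usable (see the OP's edit below), a slightly more robust variant polls tor's SOCKS port instead of sleeping a fixed amount; this sketch assumes an OpenBSD-style nc that supports -z and tor's default port 9050:

function mycommand {
    tor &
    polipo &
    # wait until tor's SOCKS port accepts connections
    until nc -z 127.0.0.1 9050 2>/dev/null; do
        sleep 0.5
    done
    pianobar
}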
I use pianobar player. But before launching it I need to run tor and polipo. Both are continuous (are running until interrupted). Basically what I'm looking for is a single command which would spawn tor and polipo processes (their output is not needed) and then open pianobar in foreground. edit: turns out tor takes several seconds to start, this is why suggested solutions didn't work at first. I post this in case anyone has the same problem: function piano { tor & polipo & sleep 3 pianobar && killall tor killall polipo }
How can I start several processes with a single command?
To minimize DNS leaks, it is indeed possible to resolve DNS via Tor. For that, add to your /etc/tor/torrc the line: DNSPort 9053And restart the tor service with: service tor restartTo test it out, do: $nslookup set port=9053 server 127.0.0.1 www.cnn.comIf using resolvconf/dnsmasq, change your /etc/dnsmasq.conf: no-resolv server=127.0.0.1#9053 listen-address=127.0.0.1If simply using /etc/resolv.conf that is not changed by a DHCP configuration, change /etc/resolv.conf to: nameserver 127.0.0.1#9053or in BIND place in /etc/bind/named.conf.options: options { forwarders { 127.0.0.1 port 9053; } }Using a reputable dnscrypt service is in principle more secure than leaving your DNS resolution up to some element in the chain of the Tor network; see Configure BIND as Forwarder only (no root hints), encrypted + RPZ blacklist / whitelist all together. Also take note that resolving DNS via a Tor gateway is notably slower, and it is strongly advised to have a local cache such as dnsmasq or BIND. I will leave here the source of the article from which I have taken the dnsmasq configuration. Resolve DNS through Tor Interestingly enough, as a complementary/alternative approach, the strategy used by redsocks for handling UDP DNS requests is giving an invalid answer to any UDP DNS request via dnstc to force the operation of DNS via TCP, and thus facilitate the proxying of DNS via Tor. See also Visiting darknet/ Tor sites with Firefox
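For a quicker, non-interactive test than the nslookup session above, dig (from the dnsutils/bind-utils package) can query the DNSPort directly:

dig -p 9053 @127.0.0.1 www.cnn.com A +short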
When I am using Tor, and not using the Tor bundle there is a possibility of DNS leaks in certain situations. What can be done to minimize it? Is it possible to resolve DNS via Tor?
resolving DNS via Tor
The "generally accepted best practice" is to install all the file sets, (which should not be confused with "run all daemons"). If you are inclined to keep some sets out because of safety concerns, remember that if someone has penetrated your system deep enough so that they can run a compiler, start daemons (like X) or something like that, you have much bigger problems already. Also, remember that even if OpenBSD comes with an (awesomely) large array of daemons in it base install, they won't be running by default unless you configure them to. In short, the files on the "non-core" data sets being there only pose a problem (security-wise), if they can be maliciously used, and if someone has enough privileges to use them against your will, you are already screwed. If you are leaving out some sets because of disk space constraints, then that's a different story. I have some small/old machines doing IPSec and some routing which run on small SD cards, so I tend to leave out everything but the kernels (including bsd.rd) and base**.tgz. Oh, and man**.tgz because I'm too lazy to switch terminals just to lookup some man page. But this has nothing to do with security. The FAQ mention has to do with cases like running a webserver that hosts some PHP app that do image/font manipulation with things like GD (via the php-gd module). I've had to install xbase on a headless server because of this. I don't think you'll need it for tor, but you can always leave X stuff out and add it later. In short, you can leave comp*, x* and games* out, but remember you did leave them out, be aware that somethings might break and be prepared to add the sets post-install, if needed be. Also, bear in mind that you need to know which sets you installed when upgrading the machine. It's easy to forget you added xbase to your (e.g.) 6.4 system, de-select it when upgrading to 6.5, and ending up with a frankenstein. Most of the time it's just easier to install everything and not having to worry about any of this. Update: sysupgrade, the tool for automated upgrades that ships with OpenBSD will automatically install all the sets when upgrading a system.
I want to install a minimal OpenBSD 6.5 (x86_64) server to run a Tor relay. On Debian I would select ssh-server and usually standard system utilities in tasksel for a basic server install. What is the closest equivalent to such a configuration in OpenBSD? The documentation is not clear about this merely stating: "New users are recommended to install all of them." As generally it is not advisable to install unneeded packages on a server, I am looking for a more specific 'best practices' recommendation for OpenBSD. The available selections are: [with my questions/comments in square brackets] General file setsbsd - The kernel (required) [do I need this, if I install bsd.mp?] bsd.mp - The multi-processor kernel [required for multi-core CPUs?] bsd.rd - The ramdisk kernel [required for upgrades?] baseXX.tgz - The base system (required) compXX.tgz - The compiler collection, headers and libraries [not needed on a server, right?] manXX.tgz - Manual pages [not needed, as they're available online] gameXX.tgz - Text-based games [definitely not needed ;-) ]X11 related file setsxbaseXX.tgz - Base libraries and utilities for X11 (requires xshareXX.tgz) xshareXX.tgz - X11's man pages, locale settings and includes xfontXX.tgz - Fonts used by X11 xservXX.tgz - X11's X servers (xservXX.tgz set is rarely needed)My inclination would be to not install compilers or any of the X11 related packages on a server, but the FAQ mentions, that some (non-X11) applications require fonts and fontconfig, which would require xbase, xshare and xfont file sets. Would many applications break, when not installing these? I doubt that a Tor relay would have any need for font manipulation. What are the generally accepted best practices when setting up an OpenBSD server?
What 'file sets' to install on a basic OpenBSD server to be used as a Tor relay?
I found an answer to my question and for a visibility purpose I think responding is better than editing. So I wanted to use Tor and a SOCKS5 proxy at the same time using proxychains. There are two ways to achieve that : With dante server Dante server is a SOCKS5 server (and client) with lots of options I don't know yet but will learn soon I hope. So first you install dante-server : wget https://www.inet.no/dante/files/dante-1.4.1.tar.gz tar xvf dante-1.4.1.tar.gz cd dante-1.4.1 ./configure make && make install#This is my launch script you can use yours obviously wget https://dl.dropboxusercontent.com/u/71868038/sockd mv sockd /etc/init.d/sockd chmod +x /etc/init.d/sockd update-rc.d sockd defaultswget https://dl.dropboxusercontent.com/u/71868038/sockd.conf mv sockd.conf /etc/You can edit your conf as you want, for example to block all the requests except from your IP address. More info here. Don't forget to change the IP address of your server in the config file ! Now that your SOCKS5 server is ready and works, you can use it along with tor thanks to proxychains. Just add your server in the config file : strict_chain proxy_dns tcp_read_time_out 15000 tcp_connect_time_out 8000 socks4 127.0.0.1 9050 socks5 1.2.3.4 1080Start tor and enjoy : service tor start proxychains iceweaselWith an SSH tunnel Simpler solution. You will need tor, torsocks and ssh apt-get install torsocks service tor start torsocks ssh -NfD 1080 1.2.3.4 proxychains iceweaselConfiguration of proxychains : strict_chain proxy_dns tcp_read_time_out 15000 tcp_connect_time_out 8000 socks5 127.0.0.1 1080What you do is you tunnel an SSH connection to your server after going through tor service (torsocks do that, I don't really know how it works yet. I'll edit if I figure out). And then : proxychains iceweaselIf someone needs more in-depth explanations just ask ;)
I'm trying to setup proxychains on Kali like this : User > Tor > SOCKS5 > OutI've created my SOCKS5 server with danted running on port 1080. I setup an SSH connection on my Kali distrib : ssh -NfD 1080 user@addressAnd I'm able to connect to the SOCKS5 server without trouble. Same when I'm trying to connect to Tor network. But when I try to connect to Tor AND to the SOCKS5 server, I get a denied error : |S-chain|-<>-127.0.0.1:9050-<>-127.0.0.1:1080-<--deniedSo I tried to allow connections from any IP address in dante, I'm not sure if it's right : logoutput: /var/log/dante.loginternal: 127.0.0.1 port = 1080 external: venet0 method: username none user.notprivileged: nobodyclient pass { from: 0.0.0.0/0 port 1-65535 to: 0.0.0.0/0 protocol: tcp udp } pass { from: 0.0.0.0/0 to: 0.0.0.0/0 protocol: tcp udp }Any idea where it could come from ?
Proxychains, Tor, SSH and Danted. Connection denied
First, let's break down what's happening. sudo is only being used to run wget, not the rest of the command. What you did is functionally equivalent to: # 1. Download a file and save as 'file.asc' sudo wget -qO- https://d...E886DDD89.asc > file.asc# 2. Dearmor that file (generates file.asc.gpg) gpg --dearmor file.asc# 3. Copy that file to /usr/share/keyrings tee /usr/share/keyrings/tor-archive-keyring.gpg >/dev/null <file.asc.gpg So you see, you used sudo to download the file. That doesn't really do much in this situation, except make file.asc owned by root. In your case, the file is piped onto stdout, so sudo really doesn't do much. Next, you used gpg --dearmor, which is fine. Finally, you used tee to copy the contents of the file to your system. This is the part that needs root permissions because you are writing to a root-owned directory. The answer is to run tee with sudo. Functionally this would look like: # 1. Download a file and save as 'file.asc' wget -qO- https://d...E886DDD89.asc > file.asc# 2. Dearmor that file (generates file.asc.gpg) gpg --dearmor file.asc# 3. Copy that file to /usr/share/keyrings sudo tee /usr/share/keyrings/tor-archive-keyring.gpg >/dev/null <file.asc.gpg In your 1-liner, it would look like: wget -qO- \ https://deb...6DDD89.asc | \ gpg --dearmor | \ sudo tee /usr/share/keyrings/tor-archive-keyring.gpg >/dev/nullor wget -qO- https://deb.torproject.org/torproject.org/A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89.asc | gpg --dearmor | sudo tee /usr/share/keyrings/tor-archive-keyring.gpg >/dev/nullIn fact, the reason we | tee /usr/share... >/dev/null instead of the much simpler >/usr/share... is so we can prefix tee with sudo.
I was preparing my Kali Linux to run a Tor's Middle Relay. I was doing Tor Project's Repository configuration according to this site. I made steps 1 and 2. The 3rd step was to add the gpg key used to sign the packages by running the following command: sudo wget -qO- https://deb.torproject.org/torproject.org/A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89.asc | gpg --dearmor | tee /usr/share/keyrings/tor-archive-keyring.gpg >/dev/nullThe problem is I don't understand what this command does and why it fails, even though I execute it with sudo permissions. ┌──(michal㉿kali)-[/usr/share/keyrings] └─$ sudo wget -qO- https://deb.torproject.org/torproject.org/A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89.asc | gpg --dearmor | tee /usr/share/keyrings/tor-archive-keyring.gpg >/dev/null [sudo] password for michal: tee: /usr/share/keyrings/tor-archive-keyring.gpg: Permission deniedThis part with wget, I understand. But I don't know what is happening after the tor repo gets downloaded to my vps. ┌──(michal㉿kali)-[/usr/share/keyrings] └─$ ls -lah total 176K drwxr-xr-x 2 root root 4.0K Jan 28 2022 . drwxr-xr-x 135 root root 4.0K Jan 3 18:09 .. -rw-r--r-- 1 root root 8.5K Feb 25 2021 debian-archive-bullseye-automatic.gpg -rw-r--r-- 1 root root 8.6K Feb 25 2021 debian-archive-bullseye-security-automatic.gpg -rw-r--r-- 1 root root 2.4K Feb 25 2021 debian-archive-bullseye-stable.gpg -rw-r--r-- 1 root root 8.0K Feb 25 2021 debian-archive-buster-automatic.gpg -rw-r--r-- 1 root root 8.0K Feb 25 2021 debian-archive-buster-security-automatic.gpg -rw-r--r-- 1 root root 2.3K Feb 25 2021 debian-archive-buster-stable.gpg -rw-r--r-- 1 root root 55K Feb 25 2021 debian-archive-keyring.gpg -rw-r--r-- 1 root root 37K Feb 25 2021 debian-archive-removed-keys.gpg -rw-r--r-- 1 root root 7.3K Feb 25 2021 debian-archive-stretch-automatic.gpg -rw-r--r-- 1 root root 7.3K Feb 25 2021 debian-archive-stretch-security-automatic.gpg -rw-r--r-- 1 root root 2.3K Feb 25 2021 debian-archive-stretch-stable.gpg -rw-r--r-- 1 root root 2.3K Jan 25 2022 kali-archive-keyring.gpg┌──(michal㉿kali)-[/usr/share/keyrings] └─$ lsb_release -a 1 ⨯ No LSB modules are available. Distributor ID: Kali Description: Kali GNU/Linux Rolling Release: 2022.4 Codename: kali-rolling
Add the gpg key used to sign the packages by running the following wget | gpg | tee >/dev/null command
As the post related to man tor describes Does the Tor Browser Bundle cache relay information?, there is such a file with cache information in the system.DataDirectory/cached-consensus and/or cached-microdesc-consensus The most recent consensus network status document we’ve downloaded.So in Debian, the file with the cache Tor relays is /var/lib/tor/cached-microdesc-consensus and the information there can be valid for up to 24h. (if not renewed, which is the normal behaviour) The stuff pertinent to this post seems to start in line 36 in my home server and ends somewhere in line 35963: 36 r mintberryCrunch ABCTIE984gTgUHkIeZdNvcDTiRE 2016-11-26 20:55:20 88.99.35.166 443 9030 37 m V1CEu0LsXapK9Ci55c+VHLEP89EG+1wWjSjsDSYyC0Y 38 s Fast Guard HSDir Running Stable V2Dir Valid 39 v Tor 0.2.5.12 40 w Bandwidth=16800 41 r CalyxInstitute14 ABG9JIWtRdmE7EFZyI/AZuXjMA4 2016-11-27 01:19:50 162.247.72.201 443 80 42 m hiyRvQn2CqLG7Xgp+eDcQe9u2IpJ44p/qZ+CrgIp+W4 43 s Exit Fast Guard HSDir Running Stable V2Dir Valid 44 v Tor 0.2.8.6 45 w Bandwidth=10800I hacked a small bash script on the command line to get from this file the top 20 speeder relays: sudo egrep ^"r |^w " /var/lib/tor/cached-microdesc-consensus | paste -d " " - - \ | sed "s/Unmeasured=. //" | \ awk ' { printf("%s %s %s ", $2, $6, $10 ); system("geoiplookup " $6 ); } ' | \ cut -f1,2,3,8- -d" " | sed "s/=/ /" | sort -k4 -n -r | head -20And the end result was: IPredator 197.231.221.211 Bandwidth 254000 Liberia cry 192.42.115.101 Bandwidth 182000 Netherlands GrmmlLitavisNew 163.172.194.53 Bandwidth 180000 France regar42 62.210.244.146 Bandwidth 164000 France xshells 178.217.187.39 Bandwidth 161000 Poland dopper 192.42.113.102 Bandwidth 159000 Netherlands TorLand1 37.130.227.133 Bandwidth 151000 United Kingdom 0x3d001 91.121.23.100 Bandwidth 151000 France hviv104 192.42.116.16 Bandwidth 149000 Netherlands colosimo 109.236.90.209 Bandwidth 136000 Netherlands Onyx 192.42.115.102 Bandwidth 135000 Netherlands redteam01 209.222.77.220 Bandwidth 134000 United States belalugosidead 217.20.23.204 Bandwidth 129000 United Kingdom redjohn1 62.210.92.11 Bandwidth 124000 France Unnamed 46.105.100.149 Bandwidth 121000 France theblazehenTor 188.138.17.37 Bandwidth 119000 France splitDNA 62.210.82.44 Bandwidth 116000 France radia2 91.121.230.212 Bandwidth 115000 France ArachnideFR5 62.210.206.25 Bandwidth 115000 France quadhead 148.251.190.229 Bandwidth 111000 GermanyOr a list of relay nodes in my home country: sudo egrep ^"r |^w " /var/lib/tor/cached-microdesc-consensus | paste -d " " - - \ | sed "s/Unmeasured=. //" | \ awk ' { printf("%s %s %s ", $2, $6, $10 ); system("geoiplookup " $6 ); } ' | \ cut -f1,2,3,8- -d" " | sed "s/=/ /" | grep Portugal | sort -k4 -n -r Output: Laika 51.254.164.50 Bandwidth 47300 Portugal freja 194.88.143.66 Bandwidth 15400 Portugal cserhalmi 188.93.234.203 Bandwidth 1870 Portugal Eleutherius 85.246.243.40 Bandwidth 1400 Portugal luster 94.126.170.165 Bandwidth 1390 Portugal undercity 178.166.97.51 Bandwidth 1180 Portugal helper123 85.245.103.222 Bandwidth 1060 Portugal Pi 94.60.255.42 Bandwidth 271 Portugal TheSpy 85.240.255.230 Bandwidth 142 Portugal MADNET00 89.153.104.243 Bandwidth 78 Portugal MADNET01 82.155.67.190 Bandwidth 14 PortugalBy the way, bandwidth in the Tor server/client by default is defined in KB.
When using some Tor browsers, for instance in iOS, I have a nice list with speeds and countries from which I can choose the relay points that can be used/from which I do get out. Can I get such a list from the Linux command line when running the tor daemon?
How to get tor relay points through the Linux command line?
To specify the IP that Tor will use, append:

ExitNodes IP

to your torrc config file (which is generally /etc/tor/torrc on Ubuntu/Debian variants; not sure for other OSes). Here IP is the wanted exit node's IP, which can either be found by already knowing some of them (for example by noting them down when using another Tor wrapper such as torify, or from what an IP-checking service reports as your IP) or by looking at the official list of exit nodes on Tor's website. One can also optionally refer to the local list of exit nodes, which can be accessed by doing:

sudo grep -B3 "^s.*Exit" /var/lib/tor/cached-microdesc-consensus | grep "^r" | awk '{print $6 ":" $7}'

If one needs to access other kinds of nodes, just change the regex pattern Exit to another valid pattern, like Guard for entry nodes. Make sure to restart Tor after modifying your config:

sudo /etc/init.d/tor restart

or

sudo systemctl restart tor

or even

pkill -sighup tor

Thanks to @A.B for pointing out the bits of the documentation where this was mentioned, and to this post for the regex trick above.
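For completeness, a hedged torrc sketch (the address below is only a placeholder to be replaced with the real exit relay's IP). StrictNodes is the option that stops Tor from silently falling back to a different exit when the pinned one is unreachable, at the cost of losing connectivity in that case:

# /etc/tor/torrc
ExitNodes 192.0.2.11
StrictNodes 1

Then restart or reload Tor as shown above.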
I want to use a specific ip from Tor without changing it, even if Tor restart/close. I'm aware that by using Tor, either by using custom flags on Tor service/process, or by editing the config, one can achieve this, though I'm unaware of the exact details. A simple example that i know of is to use torify like so: torify curl http://icanhazip.com/where the url report the ip from Tor (say, 46.165.xxx.xxx). It seems to not change (which is the wanted effect). But after some time, it does change the ip used...(even though Tor service wasn't restarted afaik) I basically don't want Tor to change ip and want it to specifically use only one ip (either specified in the config, or as a flag) How can i make Tor use a specific/specified ip without it changing on restart?
Make Tor use only one specified ip address
The answer is pretty straightforward; as root use:

service tor restart

or

service tor reload
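To confirm the route actually changed, you can compare the reported exit IP before and after, reusing the same check from the question; a quick sketch:

proxychains -q curl -s -L https://ipecho.net/plain    # note the current exit IP
sudo service tor restart
proxychains -q curl -s -L https://ipecho.net/plain    # usually reports a different exit IP

"Usually" because a freshly built circuit is not strictly guaranteed to pick a different exit node.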
I'm using the tor package coming with the debian-testing distribution:

Package: tor
Version: 0.3.5.8-1

I'm frequently using it in combination with proxychains like:

proxychains -q curl -s -L https://ipecho.net/plain

How do I change the route, in order to get a new IP, from the command line?
How do I change route (IP) of tor service from the command line?
Use this as the ProxyCommand: ProxyCommand nc -x localhost:9050 -X 5 umqkh75wp2chf5av5esqhtyzedmw4it76dvs7ild2rikbcek6eyqfsqd.onion 2222
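Combined with the stanza from the question, the whole entry could look like this (a sketch using ssh's %h/%p tokens instead of repeating the address, and assuming a netcat variant such as OpenBSD nc that understands -x/-X):

Host myphone
    User u0_a162
    Port 2222
    HostName umqkh75wp2chf5av5esqhtyzedmw4it76dvs7ild2rikbcek6eyqfsqd.onion
    ProxyCommand nc -x localhost:9050 -X 5 %h %p

With that in place, a plain ssh myphone goes through Tor's local SOCKS port (9050) without needing torsocks on the command line.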
I can reach my phone using: torsocks ssh myphoneif I have this in my .ssh/config: Host myphone User u0_a162 Port 2222 HostName umqkh75wp2chf5av5esqhtyzedmw4it76dvs7ild2rikbcek6eyqfsqd.onionCan I somehow adapt the .ssh/config so that I can simply write this and run the same: ssh myphoneCan I somehow move the torsocks into .ssh/config - possibly with ProxyCommand or similar?
ssh config to "prepend" torsocks
What you're describing is a tor middlebox. There seems to already be documentation on how to do this. http://www.howtoforge.com/how-to-set-up-a-tor-middlebox-routing-all-virtualbox-virtual-machine-traffic-over-the-tor-network This might be different depending on what network manager you use.
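For reference, the core of such a middlebox setup is transparent proxying: put the guest on a host-only adapter, point its gateway and DNS at the host, and redirect its traffic into Tor's TransPort/DNSPort. A rough, untested sketch (the interface name vboxnet0 and the port numbers are assumptions; the linked guide has the full details):

# /etc/tor/torrc on the host
TransPort 9040
DNSPort 5353

# iptables on the host: push everything arriving from the guest into Tor
iptables -t nat -A PREROUTING -i vboxnet0 -p udp --dport 53 -j REDIRECT --to-ports 5353
iptables -t nat -A PREROUTING -i vboxnet0 -p tcp --syn -j REDIRECT --to-ports 9040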
How can I configure VirtualBox to route all network access of a guest OS through TOR? My host system is running Linux, and I've already set up TOR on it.
How to configure VirtualBox to route through tor?
No, it's not OK. ICMP exists for a reason. For example, if you drop all ICMP packets you will not be able to communicate via IP with any host where the route to it is such that your machine needs to be told to fragment the packets it's sending. At the very least look at the ICMP type and only drop those that you know are not needed. This has been described on the other StackExchange sites (in descending order of cluefulness):

https://serverfault.com/questions/84963/why-not-block-icmp
https://superuser.com/questions/572172/what-are-reasons-to-disallow-icmp-on-my-server
https://security.stackexchange.com/questions/22711/is-it-a-bad-idea-for-a-firewall-to-block-icmp

(also https://networkengineering.stackexchange.com/questions/2103/should-ipv4-icmp-from-untrusted-interfaces-be-blocked but there's only one answer and it admits some bias. It's probably standard knowledge e.g. in professional certifications so it doesn't tend to come up on the professional networking site).
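In practice, "look at the ICMP type" can mean explicitly accepting the handful of types IP needs and dropping the rest; a sketch (the exact type list and rate limit are a matter of policy):

# keep path-MTU discovery and error reporting working
iptables -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT
iptables -A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT
iptables -A INPUT -p icmp --icmp-type parameter-problem -j ACCEPT
# optionally keep ping, rate-limited
iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 1/second -j ACCEPT
# drop whatever ICMP is left
iptables -A INPUT -p icmp -j DROP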
Is it OK for me to drop all types of ICMP packets, i.e. iptables -I INPUT -p icmp -j DROP? Everything except one service seems to work; I have stopped that service for now. To be specific, I was running a non-exit Tor relay and it seems to have stopped working. In 2 days I dropped 107K ICMP packets, which seems excessive to me, isn't that so? Note that I run some other services (on open ports), like Bitcoin.
Drop all ICMP packets?
You need to read the instructions. The very first (pinned) comment on the Package Details: tor-browser page says:

Before running makepkg, you must do this (as normal user):
$ gpg --auto-key-locate nodefault,wkd --locate-keys [emailprotected]

The build is failing because the public key that signed the upstream tarball is unknown to your gpg keyring. The command above fetches that key into your keyring.
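If the WKD lookup is blocked on your network, importing the key by the ID shown in the makepkg error may work as an alternative (keyserver availability varies); you can then confirm it is present and retry the build:

gpg --recv-keys EB774491D9FF06E2
gpg --list-keys EB774491D9FF06E2
makepkg -s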
I was going to install tor-browser. I had downloaded the package from here. I cloned the git repo:

git clone https://aur.archlinux.org/tor-browser.git

Then I tried to unpack and build it:

cd tor-browser
makepkg -s

I was getting error output something like this:

Validating source files with sha256sums...
    tor-browser.desktop.in ... Passed
    tor-browser.in ... Passed
    tor-browser.png ... Passed
    tor-browser.svg ... Passed
==> Validating source_x86_64 files with sha256sums...
    tor-browser-linux64-10.0.16_en-US.tar.xz ... Passed
    tor-browser-linux64-10.0.16_en-US.tar.xz.asc ... Skipped
==> Verifying source file signatures with gpg...
    tor-browser-linux64-10.0.16_en-US.tar.xz ... FAILED (unknown public key EB774491D9FF06E2)
==> ERROR: One or more PGP signatures could not be verified!
Unable to unpack tor-browser git repo
I was struggling with this too for a long time. Then I found this in the manual for dirmngr:

--standard-resolver
This option forces the use of the system's standard DNS resolver code. This is mainly used for debugging. Note that on Windows a standard resolver is not used and all DNS access will return the error ``Not Implemented'' if this option is used. Using this together with enabled Tor mode returns the error ``Not Enabled''.

So it could be that you have a line with standard-resolver in the file ~/.gnupg/dirmngr.conf. If you have that, try removing it. Also kill the dirmngr process after every change to this file. That didn't work for me, since dirmngr does something weird with DNS resolving that only works on Linux. The next step would be to try changing the option to recursive-resolver. That also didn't work for me; it gave me errors like ERR 167772360 Buffer too short <Dirmngr>. As a last-ditch attempt I added the option no-use-tor at the start of dirmngr.conf, and this finally worked for me. Later, it turned out that an ssh "DynamicForward" option on port 9050 had confused dirmngr into thinking that Tor is in use.
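To apply the fix that finally worked (no-use-tor at the top of ~/.gnupg/dirmngr.conf), restart dirmngr cleanly and retry the fetch; gpgconf takes care of killing it, and a fresh instance with the new configuration is started on the next request:

gpgconf --kill dirmngr
gpg --recv-keys A8BD96F8FD24E96B60232807B3B4C3CECC10C662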
command:

gpg -vvv --debug-all --recv-keys A8BD96F8FD24E96B60232807B3B4C3CECC10C662

output:

gpg: Note: no default option file '/home/user/.gnupg/gpg.conf'
gpg: using character set 'utf-8'
gpg: enabled debug flags: packet mpi crypto filter iobuf memory cache memstat trust hashing ipc clock lookup extprog
gpg: DBG: [not enabled in the source] start
gpg: DBG: chan_3 <- # Home: /home/user/.gnupg
gpg: DBG: chan_3 <- # Config: /home/user/.gnupg/dirmngr.conf
gpg: DBG: chan_3 <- OK Dirmngr 2.2.4 at your service
gpg: DBG: connection to the dirmngr established
gpg: DBG: chan_3 -> GETINFO version
gpg: DBG: chan_3 <- D 2.2.4
gpg: DBG: chan_3 <- OK
gpg: DBG: chan_3 -> KS_GET -- 0xA8BD96F8FD24E96B60232807B3B4C3CECC10C662
gpg: DBG: chan_3 <- ERR 167772339 Not enabled <Dirmngr>
gpg: keyserver receive failed: Not enabled
gpg: DBG: chan_3 -> BYE
gpg: DBG: [not enabled in the source] stop
gpg: keydb: handles=0 locks=0 parse=0 get=0
gpg: build=0 update=0 insert=0 delete=0
gpg: reset=0 found=0 not=0 cache=0 not=0
gpg: kid_not_found_cache: count=0 peak=0 flushes=0
gpg: sig_cache: total=0 cached=0 good=0 bad=0
gpg: random usage: poolsize=600 mixed=0 polls=0/0 added=0/0 outmix=0 getlvl1=0/0 getlvl2=0/0
gpg: rndjent stat: collector=0x0000000000000000 calls=0 bytes=0
gpg: secmem usage: 0/65536 bytes in 0 blocks
gpg recv-keys error: DBG: Not enabled <Dirmngr>, keyserver receive failed: Not enabled