The fact that efibootmgr -v works proves your system has UEFI runtime services enabled, which can only happen when the system is booted in UEFI mode. (As Marcus Müller said in the comments, the reverse inference cannot be made: it would be possible to boot in UEFI mode even without UEFI runtime services being enabled, either because the firmware chooses not to provide them, or because the kernel is missing the necessary build-time option to use them. In newer kernels, it is also possible to disable the use of UEFI run-time services by a boot option, e.g. to work around bugs in specific UEFI firmware implementations.)

BootCurrent: 0004
[...]
Boot0004* Linux Boot Manager HD(2,GPT,ed10b328-3615-45c0-bf5b-b117031e4c22,0x800,0x100000)/File(\EFI\SYSTEMD\SYSTEMD-BOOTX64.EFI)

When your system was booted up, it used the Boot0004 boot option, which uses systemd-bootx64.efi on a partition whose PARTUUID is ed10b328-3615-45c0-bf5b-b117031e4c22. So you are currently using systemd-boot. You can see the PARTUUIDs with lsblk -o +partuuid. Your dd if=/dev/sda bs=512 count=1 2>/dev/null results indicate that a BIOS-compatible i386-pc version of GRUB has been installed on that disk at some point, but unless the system is configured with the Compatibility Support Module (CSM) enabled, it will be completely meaningless to the UEFI firmware. It is possible (although not certain) that the Boot0006 boot option could represent booting from that disk in BIOS-compatible mode. Note that the i386-pc version of GRUB is not entirely contained in the Master Boot Record block: it also needs to embed the rest of the GRUB core image at a fixed location on the disk. On an MBR-partitioned disk, the unused space between the MBR and the beginning of the first partition is normally used for this; on a GPT-partitioned disk, this space is occupied by the GPT partition table structures, so a dedicated "biosboot" partition would be needed to boot with a BIOS-style GRUB from a GPT-partitioned disk. However, Microsoft chose to tie the boot method and the partitioning scheme together in their Windows OS, so a Windows installed to an MBR-partitioned disk will only ever boot in BIOS style, and a Windows installed to a GPT-partitioned disk will only ever boot in UEFI style. A boot manager normally cannot switch between boot styles, so if you have multiple OSs installed, it's most convenient to use the same boot method (either BIOS or UEFI) with all of them. The presence of the text Windows Boot Manager in the efibootmgr -v output suggests you may have a Windows OS that boots in UEFI mode, so if you want to use GRUB, you should use the UEFI-native x86_64-efi version of GRUB instead of the i386-pc BIOS version.
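As a quick cross-check over SSH, a minimal sketch (the /sys path is standard on modern kernels; the lsblk columns may vary slightly between util-linux versions):

# the kernel exposes this directory only when booted via UEFI
[ -d /sys/firmware/efi ] && echo "booted in UEFI mode" || echo "booted in BIOS/CSM mode"
# match the PARTUUID from the BootCurrent entry against your partitions
lsblk -o NAME,PARTUUID,MOUNTPOINT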
Without having access to the physical display, how can I know if my computer is being booted by grub or systemd-boot? As I have written in the title, I have both /boot/grub/ and /boot/efi/EFI/systemd/ folders. On the one hand, dd if=/dev/sda bs=512 count=1 2>/dev/null | strings returns the following:

ZRr= `|f \|f1 GRUB Geom Hard Disk Read Error

But on the other hand, cat /boot/efi/EFI/BOOT/BOOTX64.EFI | strings | grep systemd returns

#### LoaderInfo: systemd-boot 247.3-7+deb11u1 ####

So... what's going on here? Is the computer using grub or systemd-boot?

EDIT: efibootmgr -v output:

BootCurrent: 0004
Timeout: 1 seconds
BootOrder: 0004,0002,0005,0006,0007,0008,0000,0003,0001
Boot0000* Windows Boot Manager VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)WINDOWS.........x...B.C.D.O.B.J.E.C.T.=.{.9.d.e.a.8.6.2.c.-.5.c.d.d.-.4.e.7.0.-.a.c.c.1.-.f.3.2.b.3.4.4.d.4.7.9.5.}...a...............
Boot0001* Linux Boot Manager VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)
Boot0002* Linux Boot Manager HD(2,GPT,1eded8dc-d1ab-4723-b499-b718400c1898,0x800,0x100000)/File(\EFI\SYSTEMD\SYSTEMD-BOOTX64.EFI)
Boot0003* Linux Boot Manager VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)
Boot0004* Linux Boot Manager HD(2,GPT,ed10b328-3615-45c0-bf5b-b117031e4c22,0x800,0x100000)/File(\EFI\SYSTEMD\SYSTEMD-BOOTX64.EFI)
Boot0005* UEFI OS HD(2,GPT,26b839e2-9a19-4e21-ad28-dbd1c15d598d,0x800,0x100000)/File(\EFI\BOOT\BOOTX64.EFI)..BO
Boot0006* Hard Drive BBS(HD,,0x0)..GO..NO........o.S.a.m.s.u.n.g. .S.S.D. .8.7.0. .Q.V.O. .1.T.B...................A..........................>..Gd-.;.A..MQ..L.5.S.R.R.F.N.R.0.3.B.0.5.8.4. .Y. . . . .......BO..NO........o.S.T.1.2.0.0.0.N.M.0.0.1.G.-.2.M.V.1.0.3...................A..........................>..Gd-.;.A..MQ..L. . . . . . . . . . . . .T.Z.0.N.9.0.G.2.......BO..NO........o.S.T.1.2.0.0.0.N.M.0.0.1.G.-.2.M.V.1.0.3...................A..........................>..Gd-.;.A..MQ..L. . . . . . . . . . . . .T.Z.0.N.2.1.T.1.......BO..NO........o.S.T.1.2.0.0.0.N.M.0.0.1.G.-.2.M.V.1.0.3...................A..........................>..Gd-.;.A..MQ..L. . . . . . . . . . . . .T.Z.0.N.8.0.J.C.......BO..NO........o.S.a.m.s.u.n.g. .S.S.D. .8.7.0. .Q.V.O. .1.T.B...................A..........................>..Gd-.;.A..MQ..L.5.S.R.R.F.N.R.0.3.B.0.5.9.7. .P. . . . .......BO..NO........o.S.a.m.s.u.n.g. .S.S.D. .8.7.0. .E.V.O. .1.T.B...................A..........................>..Gd-.;.A..MQ..L.6.S.U.P.M.N.T.0.3.5.7.8.4.2. .L. . . . .......BO
Boot0007* UEFI OS HD(2,GPT,1eded8dc-d1ab-4723-b499-b718400c1898,0x800,0x100000)/File(\EFI\BOOT\BOOTX64.EFI)..BO
Boot0008* UEFI OS HD(2,GPT,ed10b328-3615-45c0-bf5b-b117031e4c22,0x800,0x100000)/File(\EFI\BOOT\BOOTX64.EFI)..BO
How to know if computer uses GRUB or systemd-boot through SSH? (I have both /boot/grub and /boot/efi/EFI/systemd folders)
UEFI has its own boot manager. This boot manager uses variables in NVRAM to locate and execute a bootloader, and your BIOS uses these variables to list boot options in the boot menu. It's very likely that your BIOS update interfered with your NVRAM and caused the problem.
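If the NVRAM entries were mangled, they can usually be inspected and repaired from Linux with efibootmgr. A sketch, where the entry numbers, disk, and partition are examples only (check your own efibootmgr -v output first):

# inspect the current entries and boot order
efibootmgr -v
# reorder so the Linux entry (here assumed to be 0001) comes first
sudo efibootmgr -o 0001,0000
# or recreate a missing entry for systemd-boot on the ESP
sudo efibootmgr -c -d /dev/sda -p 1 -L "Linux Boot Manager" -l '\EFI\systemd\systemd-bootx64.efi'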
My setup:
1 hdd with Windows 10 installed
1 ssd with Archlinux using systemd-boot installed
Motherboard: MSI X470 Gaming Pro
I first installed Arch, then Windows 10, and the dual boot worked like a charm, but after a BIOS update, my motherboard keeps automatically booting to Windows without going through systemd-boot first. When checking the boot order in the BIOS, my ssd is still first, but it now says Windows Boot Manager instead of UEFI OS (for Linux). I can verify that the Linux drive still has all of my stuff on it and seems to be untouched. contents of /boot/loader/ Can someone please explain to me why this happened?

In case someone is wondering how I solved it: I booted from a USB stick, mounted my ssd directories, and moved the Microsoft directory from /boot/EFI/ to somewhere else. Afterwards the BIOS label for my ssd correctly said UEFI OS again and booted to systemd-boot again (which obviously didn't show the Windows option anymore). Finally I moved the Microsoft directory back to /boot/EFI/ and everything works again. My question still is why it happened in the first place and how it can be avoided during future BIOS upgrades.
Dual boot keeps booting to Windows
Your /boot and /boot/efi are on separate partitions. The UEFI firmware expects to find the missing files on the EFI System Partition, which is mounted on /boot/efi, but they are now in /boot. Move the files to /boot/efi, or better yet, merge the two file systems so that the EFI System Partition is mounted on /boot.
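A sketch of the first option, assuming the ESP is mounted at /boot/efi and the loader entry references /vmlinuz-linux and /initramfs-linux.img as in the question:

# copy the kernel and initramfs to where the loader entry expects them
cp /boot/vmlinuz-linux /boot/efi/
cp /boot/initramfs-linux.img /boot/efi/

For the merge option, you would instead change the ESP's mount point to /boot in /etc/fstab and reinstall the kernel package so its files land there directly.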
I installed the systemd-boot loader, but while booting it shows /vmlinuz-linux and initramfs-linux.img not found. How can I fix this? My directory structure is as follows; output of lsblk:
systemd-boot: /vmlinuz-linux not found
The problem ended up being an obscure BIOS setting, which I found looking through the documentation of the machine I'm using. I saw that they noted that, in order to make Ubuntu boot on that machine, you had to turn on "PinCntrl Driver GPIO Scheme" in the BIOS. I made that change and my Yocto build started working as well. So if others have this problem, I recommend looking into various BIOS issues.
I'm having difficulties installing an image I've built with Yocto. In the past I've always used u-boot, MBR, and legacy boot. Installing Yocto meant creating boot and rootfs partitions, installing the first stage u-boot boot loader, and copying the files in /boot to the boot partition (a FAT32 partition). Now I'm trying to do something very different for an Intel machine that doesn't seem to support legacy boot. I'm using systemd-boot, GPT, and UEFI. If I directly write my .wic image that's produced by Yocto, it correctly boots. But if I instead try and follow a process as above where I manually partition and copy files over, it will run systemd-boot, but once it tries to load my boot entry, nothing happens. One thing I did notice is that the /boot directory that's in the rootfs.tar.gz produced by Yocto is different from the /boot directory that's on the .wic file. The kernels are different (different sizes) and the .wic file includes a microcode.cpio file. I tried copying the boot files from the .wic file and installing them manually when installing, but that got me to a point where it says EFI stub: Loaded initrd from LINUX_EFI_INITRD_MEDIA_GUID device path, but then nothing happens after that. Is there any guide to installing Yocto images by manually partitioning on UEFI systems? I'm not doing anything unusual other than maybe the installation method. I'm building nanbield, core-image-base, and have added the meta-intel layer. This is my local.conf: MACHINE ?= "intel-corei7-64" MACHINE ??= "qemux86-64" DISTRO ?= "poky" EXTRA_IMAGE_FEATURES ?= "debug-tweaks" USER_CLASSES ?= "buildstats" PATCHRESOLVE = "noop" BB_DISKMON_DIRS ??= "\ STOPTASKS,${TMPDIR},1G,100K \ STOPTASKS,${DL_DIR},1G,100K \ STOPTASKS,${SSTATE_DIR},1G,100K \ STOPTASKS,/tmp,100M,100K \ HALT,${TMPDIR},100M,1K \ HALT,${DL_DIR},100M,1K \ HALT,${SSTATE_DIR},100M,1K \ HALT,/tmp,10M,1K" PACKAGECONFIG:append:pn-qemu-system-native = " sdl"IMAGE_FEATURES += "read-only-rootfs" IMAGE_FSTYPES = "tar.xz"CORE_IMAGE_EXTRA_INSTALL += "kernel-modules"# OS packages CORE_IMAGE_EXTRA_INSTALL += "openssh" CORE_IMAGE_EXTRA_INSTALL += "nginx" CORE_IMAGE_EXTRA_INSTALL += "openssl" CORE_IMAGE_EXTRA_INSTALL += "gnupg" CORE_IMAGE_EXTRA_INSTALL += "iptables" CORE_IMAGE_EXTRA_INSTALL += "logrotate" CORE_IMAGE_EXTRA_INSTALL += "mongodb" CORE_IMAGE_EXTRA_INSTALL += "sudo" CORE_IMAGE_EXTRA_INSTALL += "rsync" CORE_IMAGE_EXTRA_INSTALL += "procps"# Python packages CORE_IMAGE_EXTRA_INSTALL += "python3" CORE_IMAGE_EXTRA_INSTALL += "python3-flask" CORE_IMAGE_EXTRA_INSTALL += "python3-setuptools" CORE_IMAGE_EXTRA_INSTALL += "python3-pymongo" CORE_IMAGE_EXTRA_INSTALL += "python3-cryptography" CORE_IMAGE_EXTRA_INSTALL += "python3-scrypt" CORE_IMAGE_EXTRA_INSTALL += "python3-pip" CORE_IMAGE_EXTRA_INSTALL += "python3-pyserial" CORE_IMAGE_EXTRA_INSTALL += "python3-pyudev"# Feature services CORE_IMAGE_EXTRA_INSTALL += "dnsmasq" CORE_IMAGE_EXTRA_INSTALL += "rsyslog" CORE_IMAGE_EXTRA_INSTALL += "ntp" CORE_IMAGE_EXTRA_INSTALL += "ntpq" CORE_IMAGE_EXTRA_INSTALL += "ntp-utils" CORE_IMAGE_EXTRA_INSTALL += "freeradius" CORE_IMAGE_EXTRA_INSTALL += "net-snmp"# Remove the following packages before 1.0 release CORE_IMAGE_EXTRA_INSTALL += "coreutils" CORE_IMAGE_EXTRA_INSTALL += "vim"This is my bblayers.conf: # POKY_BBLAYERS_CONF_VERSION is increased each time build/conf/bblayers.conf # changes incompatibly POKY_BBLAYERS_CONF_VERSION = "2"BBPATH = "${TOPDIR}" BBFILES ?= ""BBLAYERS ?= " \ /data/opis-current/meta \ /data/opis-current/meta-poky \ /data/opis-current/meta-yocto-bsp \ 
/data/opis-current/meta-openembedded/meta-oe \ /data/opis-current/meta-openembedded/meta-python \ /data/opis-current/meta-openembedded/meta-webserver \ /data/opis-current/meta-openembedded/meta-networking \ /data/opis-current/meta-intel \ "
How can I manually install a Yocto image?
Ok, so it turns out that to prevent accidental deletion of UEFI variables, only whitelisted ones are allowed to be deleted by default. Others are marked as immutable, which prevents them from being deleted by accident:

# lsattr
-------------------- ./BootCurrent-8be4df61-93ca-11d2-aa0d-00e098032b8c
----i--------------- ./334-71db7b7e-4165-48fa-ac9d-f9af4cefc534
----i--------------- ./2151678337-417acee0-6fa9-4a82-99d7-f9b1dd271e48
-------------------- ./Boot0000-8be4df61-93ca-11d2-aa0d-00e098032b8c
----i--------------- ./2151678336-417acee0-6fa9-4a82-99d7-f9b1dd271e48

This was done so that running rm -rf / won't wipe out unknown UEFI variables, since doing so has been found to cause some buggy firmware implementations to fail to boot. (The specs say systems must boot fine with all UEFI vars removed, but some machines aren't compliant and can be bricked this way.) The immutable attribute has to be removed first:

# chattr -i 2151678336-417acee0-6fa9-4a82-99d7-f9b1dd271e48

That still didn't allow me to remove or overwrite the variable, but it did allow me to remove a different one. I still couldn't add any new variables, but after a reboot Linux picked up the extra space and bootctl install finally succeeded. Comparing against another identical machine, it turns out this massive UEFI variable is in fact meant to be there! So you just end up with a tiny amount of free storage space for boot variables on these machines with Linux (because apparently Linux will refuse to write any UEFI vars if less than 50% of the space is free).

EDIT: You can also boot the kernel with the efi_no_storage_paranoia parameter to disable this limit and get access to the full EFI storage area, as long as you're sure your firmware isn't one of the early ones that fail to boot once the free EFI variable space drops below 50%.
I'm installing Arch Linux on a new machine, and I've gotten to the point where I need to configure the bootloader, however this is failing: # bootctl install Failed to create EFI Boot variable entry: No space left on deviceLooking at the variables, I can see the problem: # ls -la /sys/firmware/efi/efivars --sort=size --reverse ... -rw-r--r-- 1 root root 6 May 17 17:50 BootCurrent-8be4df61-93ca-11d2-aa0d-00e098032b8c -rw-r--r-- 1 root root 12 May 17 17:50 334-71db7b7e-4165-48fa-ac9d-f9af4cefc534 -rw-r--r-- 1 root root 36 May 17 17:50 2151678337-417acee0-6fa9-4a82-99d7-f9b1dd271e48 -rw-r--r-- 1 root root 124 May 17 17:50 Boot0000-8be4df61-93ca-11d2-aa0d-00e098032b8c -rw-r--r-- 1 root root 2.3K May 17 17:50 2151678336-417acee0-6fa9-4a82-99d7-f9b1dd271e48There is a very large file there taking up all the space. There is a folder with the same name in /boot with a date of a few minutes ago, so apparently one of my failed bootctl attempts somehow created this enormous UEFI variable taking up all the space. Removing this would appear to free up enough space to set the boot variables properly, but unfortunately this is not possible: # rm 2151678336-417acee0-6fa9-4a82-99d7-f9b1dd271e48 rm: cannot remove '2151678336-417acee0-6fa9-4a82-99d7-f9b1dd271e48': Operation not permittedEven though I'm doing this as root, I can't remove the file. I have been able to remove some of the other variables, but I can't remove any variable I want, and I can't add any at all, even after removing everything I can. How can I remove this bogus UEFI variable to free up NVRAM space?
Can't delete UEFI variable - no space left / operation not permitted
According to the manual of loader.conf you can disable the timeout and it does what you want:

timeout How long the boot menu should be shown before the default entry is booted, in seconds. This may be changed in the boot menu itself and will be stored as an EFI variable in that case, overriding this option. If the timeout is disabled, the default entry will be booted immediately. The menu can be shown by pressing and holding a key before systemd-boot is launched.

In the menu you can change the timeout value with these keys (see systemd-boot):

+, t Increase the timeout before the default entry is booted
-, T Decrease the timeout

and also

d Make selected entry the default

(I don't know if you can disable the timeout with these key combinations. Maybe.)
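For example, a /boot/loader/loader.conf along these lines (the entry file name is hypothetical) would boot your usual entry immediately, while holding a key during startup still brings the menu back when you want another entry:

default arch-entry3.conf
timeout 0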
I use systemd-boot as boot manager. I have a menu with many entries, but 90% of the time I choose entry #3. Is there a way to hide the menu by default? That way, when I want to boot #3, it skips the timeout and the menu is not shown, while if I want to choose anything else, I can keep pressing a key (e.g. shift) to show the menu.
Show/hide systemd-boot menu
It would be simpler to make one encrypted container and set up both / and swap on that with LVM. Like this:

sda1 boot
sda2 LUKS-crypt
     LVM
       root-LV
       swap-LV

Then you only need one key to open it, letting you skip crypttab altogether.
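A rough sketch of how that layout could be created (device names and sizes are placeholders, and this wipes /dev/sda2):

cryptsetup luksFormat /dev/sda2
cryptsetup open /dev/sda2 cryptlvm
pvcreate /dev/mapper/cryptlvm
vgcreate vg0 /dev/mapper/cryptlvm
lvcreate -L 2G vg0 -n swap
lvcreate -l 100%FREE vg0 -n root
mkfs.btrfs /dev/vg0/root   # the question uses btrfs for /
mkswap /dev/vg0/swap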
OS: Parabola GNU/Linux Libre, a GNU version of Arch. I have managed to encrypt my root partition, but I'm unsure about how to encrypt my swap partition. I know swap partitions are becoming old-fashioned and that swap files are preferred, btrfs still does not support this. lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 223.6G 0 disk ├─sda2 8:2 0 221.1G 0 part │ └─cryptroot 254:0 0 221.1G 0 crypt / ├─sda3 8:3 0 2G 0 part │ └─cryptswap 254:1 0 2G 0 crypt └─sda1 8:1 0 512M 0 part /boot/etc/fstab # /dev/mapper/cryptroot UUID=0126cb9b-d3aa-4f05-a39a-71682fa847bb / btrfs rw,relatime,ssd,space_cache,subvolid=5,subvol=/ 0 0# /dev/sda1 UUID=6F37-84A2 /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 2# /dev/mapper/cryptswap UUID=aef00636-0183-48d1-ab87-8f6653a30dd8 none swap defaults 0 0/boot/loader/entries/parabola.conf title Parabola GNU/Linux-libre linux /vmlinuz-linux-libre initrd /initramfs-linux-libre.img options rd.luks.uuid=c6b69115-15c6-4561-9691-fc4a05ac9622 rd.luks.name=c6b69115-15c6-4561-9691-fc4a05ac9622=cryptroot rd.luks.options=quiet rw root=/dev/mapper/cryptroot/etc/crypttab # crypttab: mappings for encrypted partitions # # Each mapped device will be created in /dev/mapper, so your /etc/fstab # should use the /dev/mapper/<name> paths for encrypted devices. # # The Parabola specific syntax has been deprecated, see crypttab(5) for the # new supported syntax. # # NOTE: Do not list your root (/) partition here, it must be set up # beforehand by the initramfs (/etc/mkinitcpio.conf).# <name> <device> <password> <options> cryptswap /dev/disk/by-id/ata-PH4-CE240_511160905070017677-part3 /dev/urandom swapjournalctl -b Dec 22 23:35:54 MyComputer mkswap[341]: Setting up swapspace version 1, size = 2 GiB (2147459072 bytes) Dec 22 23:35:54 MyComputer mkswap[341]: no label, UUID=c965e98e-b011-4e40-aef3-bb84d58d7a08 Dec 22 23:35:54 MyComputer systemd[1]: Started Cryptography Setup for swap. Dec 22 23:35:54 MyComputer systemd[1]: Reached target Encrypted Volumes. Dec 22 23:35:54 MyComputer systemd[1]: Found device /dev/mapper/swap. Dec 22 23:37:23 MyComputer systemd[1]: dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.device: Job dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.device/start timed out. Dec 22 23:37:23 MyComputer systemd[1]: Timed out waiting for device dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.device. Dec 22 23:37:23 MyComputer systemd[1]: Dependency failed for /dev/disk/by-uuid/aef00636-0183-48d1-ab87-8f6653a30dd8. Dec 22 23:37:23 MyComputer systemd[1]: Dependency failed for Swap. Dec 22 23:37:23 MyComputer systemd[1]: swap.target: Job swap.target/start failed with result 'dependency'. Dec 22 23:37:23 MyComputer systemd[1]: dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.swap: Job dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.swap/start failed with result 'dependency'. Dec 22 23:37:23 MyComputer systemd[1]: dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.device: Job dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.device/start failed with result 'timeout'. Dec 22 23:37:23 MyComputer systemd[1]: Mounting Temporary Directory... Dec 22 23:37:23 MyComputer systemd[1]: Mounted Temporary Directory. Dec 22 23:37:23 MyComputer systemd[1]: Reached target Local File Systems. Dec 22 23:37:23 MyComputer systemd[1]: Starting Create Volatile Files and Directories... 
Dec 22 23:37:23 MyComputer systemd[1]: Started Create Volatile Files and Directories. Dec 22 23:37:23 MyComputer systemd[1]: Starting Update UTMP about System Boot/Shutdown... Dec 22 23:37:23 MyComputer systemd[1]: Started Update UTMP about System Boot/Shutdown. Dec 22 23:37:23 MyComputer systemd[1]: Reached target System Initialization. Dec 22 23:37:23 MyComputer systemd[1]: Started Daily Cleanup of Temporary Directories. Dec 22 23:37:23 MyComputer systemd[1]: Started Daily verification of password and group files. Dec 22 23:37:23 MyComputer systemd[1]: Listening on D-Bus System Message Bus Socket. Dec 22 23:37:23 MyComputer systemd[1]: Reached target Sockets. Dec 22 23:37:23 MyComputer systemd[1]: Reached target Basic System. Dec 22 23:37:23 MyComputer systemd[1]: Starting Save/Restore Sound Card State... Dec 22 23:37:23 MyComputer systemd[1]: Starting dhcpcd on enp4s0... Dec 22 23:37:23 MyComputer systemd[1]: Starting Login Service... Dec 22 23:37:23 MyComputer systemd[1]: Started D-Bus System Message Bus. ... Dec 24 00:00:09 MyComputer systemd[1]: Started Update man-db cache. Dec 24 00:01:36 MyComputer systemd[1]: dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.device: Job dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.device/start timed out. Dec 24 00:01:36 MyComputer systemd[1]: Timed out waiting for device dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.device. Dec 24 00:01:36 MyComputer systemd[1]: Dependency failed for /dev/disk/by-uuid/aef00636-0183-48d1-ab87-8f6653a30dd8. Dec 24 00:01:36 MyComputer systemd[1]: dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.swap: Job dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.swap/start failed with result 'dependency'. Dec 24 00:01:36 MyComputer systemd[1]: dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.device: Job dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.device/start failed with result 'timeout'.[Update] New Information has come to light. Looks like what should have been the encrypted swap partition is not recognized.[Update] I've tried the following with the same result as above: parted rm 3 mkpart primary ext2 -2GiB 100% (Ignore) quit dd if=/dev/urandom of=/dev/sda3 bs=1M cryptsetup -v -y luksFormat /dev/sda3 YES cryptsetup open /dev/sda3 cryptswap mkswap /dev/mapper/cryptswap swapon /dev/mapper/cryptswap[Update] Encrypting the partition like above on the Live MATE version of Parabola returns an error. 1 root@parabolaiso / # cryptsetup -y -v luksFormat /dev/sda3 --debug :( # cryptsetup 1.7.3 processing "cryptsetup -y -v luksFormat /dev/sda3 --debug" # Running command luksFormat. # Locking memory. # Installing SIGINT/SIGTERM handler. # Unblocking interruption on signal.WARNING! ======== This will overwrite data on /dev/sda3 irrevocably.Are you sure? (Type uppercase yes): YES # Allocating crypt device /dev/sda3 context. # Trying to open and read device /dev/sda3 with direct-io. # Initialising device-mapper backend library. # Timeout set to 0 miliseconds. # Iteration time set to 2000 milliseconds. # Interactive passphrase entry requested. Enter passphrase: Verify passphrase: # Formatting device /dev/sda3 as type LUKS1. # Crypto backend (gcrypt 1.7.5) initialized in cryptsetup library version 1.7.3. # Detected kernel Linux 4.8.6-gnu-1 x86_64. # Topology: IO (512/0), offset = 0; Required alignment is 1048576 bytes. # Checking if cipher aes-xts-plain64 is usable. 
# Userspace crypto wrapper cannot use aes-xts-plain64 (-95). # Using dmcrypt to access keyslot area. # Calculated device size is 1 sectors (RW), offset 0. # dm version [ opencount flush ] [16384] (*1) # dm versions [ opencount flush ] [16384] (*1) # Device-mapper backend running with UDEV support enabled. # DM-UUID is CRYPT-TEMP-temporary-cryptsetup-10670 # dm versions [ opencount flush ] [16384] (*1) # Device-mapper backend running with UDEV support enabled. # Udev cookie 0xd4d2344 (semid 65536) created # Udev cookie 0xd4d2344 (semid 65536) incremented to 1 # Udev cookie 0xd4d2344 (semid 65536) incremented to 2 # Udev cookie 0xd4d2344 (semid 65536) assigned to CREATE task(0) with flags DISABLE_SUBSYSTEM_RULES DISABLE_DISK_RULES DISABLE_OTHER_RULES (0xe) # dm create temporary-cryptsetup-10670 CRYPT-TEMP-temporary-cryptsetup-10670 [ opencount flush ] [16384] (*1) # dm reload temporary-cryptsetup-10670 [ opencount flush readonly ] [16384] (*1) device-mapper: reload ioctl on temporary-cryptsetup-10670 failed: Invalid argument # Udev cookie 0xd4d2344 (semid 65536) decremented to 1 # Udev cookie 0xd4d2344 (semid 65536) incremented to 2 # Udev cookie 0xd4d2344 (semid 65536) assigned to REMOVE task(2) with flags DISABLE_SUBSYSTEM_RULES DISABLE_DISK_RULES DISABLE_OTHER_RULES (0xe) # dm remove temporary-cryptsetup-10670 [ opencount flush readonly ] [16384] (*1) # temporary-cryptsetup-10670: Stacking NODE_DEL [verify_udev] # Udev cookie 0xd4d2344 (semid 65536) decremented to 0 # Udev cookie 0xd4d2344 (semid 65536) waiting for zero # Udev cookie 0xd4d2344 (semid 65536) destroyed # temporary-cryptsetup-10670: Processing NODE_DEL [verify_udev] # dm versions [ opencount flush ] [16384] (*1) # Device-mapper backend running with UDEV support enabled. Failed to setup dm-crypt key mapping for device /dev/sda3. Check that kernel supports aes-xts-plain64 cipher (check syslog for more info). # Releasing crypt device /dev/sda3 context. # Releasing device-mapper backend. # Unlocking memory. Command failed with code 5: Input/output error[Update] I actually solved it by using systemd-swap (better than nothing) instead and I'll wait for btrfs to support real swap.
Timed out error waiting for encrypted swap device
It's been a while, and there seem to be many causes for this problem (fstab misconfiguration, orphan config files, etc.), but for me, running grep -r plymouth / and then deleting the statements calling plymouth solved it.
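Searching the entire filesystem with grep -r plymouth / is very slow; limiting the search to the usual configuration locations is likely enough, e.g. (paths assume an Arch-based system like Manjaro):

grep -r plymouth /etc/systemd /etc/default /etc/mkinitcpio.conf /etc/mkinitcpio.d 2>/dev/null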
My problem is the following: After my last update (pacman -Syu) my system hangs on boot, and I can't figure out the cause (it's driving me crazy, really) Searching on the web I found out that this could be caused by a bad fstab file, but this doesn't appear to be the case. The distro I'm using is Manjaro linux (it is based on Arch) and my Systemd version is 231 This is what journalctl -xb had to say about it Oct 04 11:45:02 manjarobox systemd[350]: rescue.service: Faied at step EXEC spawning /bin/plymouth: No such file or directory -Subject: Process /bin/plymouth could not be executed -Defined-by: systemd -Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel - -The process /bin/plymouth could not be executed and failed - -The error number returned by this process is 2This is the output of ls -l /etc/systemd/system/multi-user.target.wants total 0 lrwxrwxrwx 1 root root 38 Dec 22 2015 cronie.service -> /usr/lib/systemd/system/cronie.service lrwxrwxrwx 1 root root 42 Dec 27 2015 lm_sensors.service -> /usr/lib/systemd/system/lm_sensors.service lrwxrwxrwx 1 root root 44 Dec 22 2015 ModemManager.service -> /usr/lib/systemd/system/ModemManager.service lrwxrwxrwx 1 root root 46 Dec 22 2015 NetworkManager.service -> /usr/lib/systemd/system/NetworkManager.service lrwxrwxrwx 1 root root 40 Dec 22 2015 remote-fs.target -> /usr/lib/systemd/system/remote-fs.target lrwxrwxrwx 1 root root 35 Dec 22 2015 tlp.service -> /usr/lib/systemd/system/tlp.service lrwxrwxrwx 1 root root 35 Jan 13 2016 ufw.service -> /usr/lib/systemd/system/ufw.serviceAnd my /etc/fstab file looks like this: # /etc/fstab: static file system information # # <file system> <dir> <type> <options> <dump> <pass> # DEVICE DETAILS: /dev/sda1 UUID=c52d9ae9-48a8-487c-931b-77deedf8e242 LABEL=DskA_Linux # DEVICE DETAILS: /dev/sda5 UUID=170E967E185647C6 LABEL=DskD_Files # DEVICE DETAILS: /dev/sda6 UUID=eeaa09fa-4ace-4e5a-8fef-170a18e41940 LABEL=DskE_Swap UUID=c52d9ae9-48a8-487c-931b-77deedf8e242 / ext4 defaults 0 1 #UUID=170E967E185647C6 /mnt/Files ntfs-3g defaults 0 1 #UUID=eeaa09fa-4ace-4e5a-8fef-170a18e41940 swap swap defaults 0 0Also, I have never installed plymouth, nor do I intend to, if I can help it. What can I do to solve this? :S Thanks in advance
plymouth causes system to hang on boot
You should store your custom unit files in /etc/systemd/system/. After you create them, you have to enable them with systemctl enable name, which creates necessary symlinks.
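A minimal sketch of the workflow, using a hypothetical unit name:

# put the unit where local units belong
sudo cp myservice.service /etc/systemd/system/
# create the symlinks declared in the unit's [Install] section
sudo systemctl enable myservice.service
# or enable and start in one step
sudo systemctl enable --now myservice.service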
I have placed a systemd service file in /usr/lib/systemd/system/testfile.service. Here is the service file:

[Unit]
Description=Test service

[Service]
Type=notify
ExecStart=/bin/dd.sh
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
RestartSec=30s

[Install]
WantedBy=multi-user.target

I tried to start the service at boot time these two ways:

Created a softlink for the file from /usr/lib/systemd/system to /etc/systemd/system/multi-user.target.wants (manually and by using the systemctl enable command) and rebooted the system; the testfile service started successfully at boot time.
Created a dependency in an existing running service file like After=testfile.service and Wants=testfile.service, then rebooted the system; the testfile service started successfully.

But when I place the file in /usr/lib/systemd/system without using approaches 1 or 2 above, the service is not started. I feel that placing the service file in /usr/lib/systemd/system/ is enough for any service to start automatically, without creating the softlinks to the wants directory or creating the dependency with the other services. Please let me know: how do I start a service at boot time which is present in the /usr/lib/systemd/system directory without using approaches 1 or 2 above? I have also created preset files in /usr/lib/systemd/system-preset/ to disable and enable a few services, but it seems like those preset files were not executed: services which I have disabled in the preset file are still enabled after boot up. Please let me know how to debug this issue.
Start a service at boot time in systemd
I'm not sure about systemd-boot, but grub works with any name. Naming the root lv root is just a "best practice" to make it clear what the lv contains. I have a system with the root lv named 00 and swap lv 01, and it works just fine.

$ cat /proc/cmdline
BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.8.6-301.fc33.x86_64 root=/dev/mapper/fedora-00 ro resume=/dev/mapper/fedora-01 rd.lvm.lv=fedora/00 rd.lvm.lv=fedora/01 rhgb quiet

It is possible systemd-boot is confused because there is a dash in the name. A dash is usually used as a divider between the vg and lv name, but that is just a wild guess. You can have multiple systems in the same vg, but there would be a problem with booting -- /boot can't be placed on an lv, but it might be possible with a shared /boot/efi (I'm not sure, I'm not very familiar with booting on EFI systems). But the lv names shouldn't be the problem in this setup.
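As a side note on the dash theory (an assumption worth checking, not something I have verified with systemd-boot): device-mapper doubles every dash that is part of a VG or LV name, while the /dev/<vg>/<lv> udev symlinks keep the names literal, so the latter form is harder to get wrong:

# VG "crypt3-vg" + LV "ub20-root" appears in /dev/mapper with doubled dashes:
#   /dev/mapper/crypt3--vg-ub20--root
# the symlink form avoids the mangling entirely:
options root=/dev/crypt3-vg/ub20-root rw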
Or, perhaps equivalently: can a bootable root volume be named something other than "root"? While installing a new version of a Linux OS I created an lvm2 logical volume named "ub20-root", intended as a bootable root, with near success. I had a line in the systemd-boot configuration file:

options root=/dev/mapper/crypt3--vg-ub20--root

(systemd-boot is a simpler-to-configure alternative to grub). However, when trying to boot, an error message occurred stating that crypt3--vg-root couldn't be found. I renamed the volume from ub20-root to root, changed the config line to

options root=/dev/mapper/crypt3--vg-root

and it booted successfully. I am unclear as to whether the constraint to name the volume root originates from systemd-boot or elsewhere. However, perusing various examples of creating lvm2 bootable root volumes, they are all named root, even though grub is the standard boot manager. Another possibly equivalent question is: is there a way to have multiple bootable root volumes on a single volume group? If not, why not?
Can multiple lvm2 volumes in a single volume group be bootable root volumes?
Yes, you need a kernel and bootloader to boot a KVM virtual machine. But you can only use systemd-boot if your KVM virtual machine is configured to boot via UEFI, as it is a UEFI-only bootloader. Most VPS providers, including Vultr and Digital Ocean, only support legacy boot (for now). When these providers expand to support UEFI boot, or when you find another provider which does, then you can use systemd-boot there. For legacy boot, grub is your best bet.
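If you are running the VM yourself rather than renting it, you can give the guest UEFI firmware via OVMF. A sketch, where the firmware path and image name depend on your distribution's ovmf package:

qemu-system-x86_64 -enable-kvm -m 2G \
  -bios /usr/share/ovmf/OVMF.fd \
  -drive file=arch.qcow2,format=qcow2

With the guest booted in UEFI mode this way, bootctl install and systemd-boot work as on physical UEFI hardware.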
I'm installing Arch Linux manually (using an ISO image) on a KVM virtual private server. I'm booted into the ISO image, but it was not booted with EFI. Is it possible to use systemd-boot in this circumstance? This post doesn't appear to have a good answer for my situation. The instructions I'm following suggest a bootloader needs to be installed. They actually suggest grub should be installed:

pacman -S grub
grub-install /dev/vda
grub-mkconfig -o /boot/grub/grub.cfg

However, I would prefer systemd-boot if it is possible.
How to use systemd-boot on a KVM virtual server?
Try clearing the PK (Platform Key) before trying to input other keys. This should place Secure Boot into Setup Mode, in which there should be minimal restrictions on key updates. After updating any other Secure Boot key variables to suit your needs, input your key as the PK to return Secure Boot to normal mode. Primarily, you'll want your key in the db key variable, since it's the whitelist: unless specifically blacklisted, any *.efi signed with a key that's in the whitelist variable will be allowed to execute by Secure Boot. The dbx key variable is the blacklist: when loading any file signed with a blacklisted key, or whose hash matches a blacklisted hash, the firmware will stop it from loading and/or won't allow it to execute. The KEK key variable controls (programmatic?) updates to the db and dbx while Secure Boot is in normal mode. If possible, you'll want your key in this variable too. The PK variable controls updates to the KEK variable, and holds only one key - ideally yours, instead of the system manufacturer's default key. Your OvmfPkKek1.pem is probably the file needed by UEFI, but there are several possible formats it might expect. If the firmware cannot read a PEM file (either as-is or with a *.cer or *.crt suffix), try converting it into DER format:

openssl x509 -in OvmfPkKek1.pem -inform PEM -out OvmfPkKek1.cer -outform DER

The suffix of the DER file might have to be either *.cer or *.crt. Some UEFI user interfaces expect specifically EFI Signature List files (*.esl), which you can generate using the efisiglist command, which can probably be found in the pesign package, or the cert-to-efi-sig-list command from the efitools package. To convert a DER-format certificate into an EFI Signature List:

efisiglist --outfile OvmfPkKek1.esl --add --certificate=OvmfPkKek1.cer

or

cert-to-efi-sig-list OvmfPkKek1.cer OvmfPkKek1.esl

While not in Secure Boot Setup Mode (i.e. while PK is set), it's possible the firmware user interface will only accept ESL files signed with a Secure Boot key whose certificate is in the KEK or PK key variable. This follows the same rules as when updating the Secure Boot keys programmatically from a running operating system. If so, the expected file suffix for those could be *.auth. The sign-efi-sig-list command in the efitools package can generate *.auth files from *.esls. Note that you'll have to create a separate *.auth file for each key variable, even if you use the same actual key:

sign-efi-sig-list -a -c OvmfPkKek1.pem -k OvmfPkKek1.key db OvmfPkKek1.esl signed-key-for-db.auth
sign-efi-sig-list -a -c OvmfPkKek1.pem -k OvmfPkKek1.key KEK OvmfPkKek1.esl signed-key-for-KEK.auth
sign-efi-sig-list -a -c OvmfPkKek1.pem -k OvmfPkKek1.key PK OvmfPkKek1.esl signed-key-for-PK.auth
I'm producing a yocto build, and want to enable UEFI Secure Boot on the intel machine I'm using. This is a pretty basic yocto build, using core-image-minimal and meta-intel. The artifacts it produces look like: ./core-image-minimal-intel-corei7-64.wic ./bzImage-intel-corei7-64.bin ./bzImage--6.1.38+git0+d62bfbd59e_11e606448a-r0-intel-corei7-64-20240208204456.bin ./core-image-minimal-intel-corei7-64.manifest ./OvmfPkKek1.crt ./OvmfPkKek1.pem ./systemd-bootx64.efi ./core-image-minimal-intel-corei7-64-20240215181510.rootfs.tar.xz ./microcode.cpio ./modules-intel-corei7-64.tgz ./core-image-minimal-intel-corei7-64-20240215181510.rootfs.manifest ./microcode_20230808.cpio ./modules--6.1.38+git0+d62bfbd59e_11e606448a-r0-intel-corei7-64-20240208204456.tgz ./bzImage ./core-image-minimal-intel-corei7-64-20240215181510.testdata.json ./grub-efi-bootx64.efi ./ovmf.vars.qcow2 ./core-image-minimal-intel-corei7-64.qemuboot.conf ./ovmf.secboot.code.qcow2 ./linuxx64.efi.stub ./OvmfPkKek1.key ./ovmf.secboot.qcow2 ./core-image-minimal-intel-corei7-64.tar.xz ./core-image-minimal-intel-corei7-64-20240215181510.rootfs.wic ./ovmf.code.qcow2 ./core-image-minimal.env ./core-image-minimal-systemd-bootdisk-microcode.wks ./ovmf.qcow2 ./core-image-minimal-intel-corei7-64-20240215181510.qemuboot.conf ./core-image-minimal-intel-corei7-64.testdata.jsonMy boot partition looks like: ./loader ./loader/loader.conf ./loader/entries ./loader/entries/boot.conf ./EFI ./EFI/BOOT ./EFI/BOOT/bootx64.efi ./bzImageI can't figure out how to enable secure boot using these files. There's an option to enroll a signature, and when I do that using the bootx64.efi file, and then try and boot, I get some sort of bzImage error, and then something about a security policy violation. I get similar (but different) errors when I try and do the same process on a random Kali linux install off of a USB drive. There are also uefi options like "enroll signature", "enroll PK", "enroll KEK", etc., and I tried these hoping to be able to select those OvmfPkKek1* files yocto is producing, assuming those are the keys, but they don't show up on disk when browsing my boot partition via the uefi interface, even though I copied them over. I'm not sure why. Any ideas how I make this install work with secure boot?
How do I enable UEFI secure boot for a linux build made with yocto?
I'm not an Arch expert, and the Debian-centric script works well for me on Debian, but according to this Archwiki page it should, or at least is expected to, work:

Passwords entered during boot are cached in the kernel keyring by systemd-cryptsetup(8), so if multiple devices can be unlocked with the same password (this includes devices in crypttab that are unlocked after boot), then you will only need to input each password once.

What seems wrong in your setup is that you have your crypttab configured to use a keyfile for the root partition while the keyfile is stored in the encrypted root partition. Since you don't mind entering a password and use the same password for both, setting none as the keyfile in crypttab might solve your problem. The systemd-cryptsetup manpage also explicitly mentions password caching, so the order in which they are opened should not matter for you. Just in case: if you do not use hibernate/resume you could also encrypt the swap partition with a random key.
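A sketch of what the adjusted crypttab could look like, reusing the UUIDs from the question; the commented line shows the random-key variant for swap (which breaks hibernation):

luks-81733cbe-81f5-4506-8369-1c9b62e7d6be  UUID=81733cbe-81f5-4506-8369-1c9b62e7d6be  none  luks
luks-9715a3f9-f701-47b8-9b55-5143ca88dcd8  UUID=9715a3f9-f701-47b8-9b55-5143ca88dcd8  none  luks
# swap  /dev/nvme0n1p3  /dev/urandom  swap,cipher=aes-xts-plain64,size=512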
I'm using EndeavorOS (basically Arch), but with systemd-boot and dracut for initrd. I have a simple setup with an unencrypted boot partition and LUKS-encrypted root and swap partitions. Specifically, the setup is described in the output below: $ cat /etc/fstab # <file system> <mount point> <type> <options> <dump> <pass> UUID=8A2F-4076 /efi vfat defaults,noatime 0 2 /dev/mapper/luks-81733cbe-81f5-4506-8369-1c9b62e7d6be / ext4 defaults,noatime 0 1 /dev/mapper/luks-9715a3f9-f701-47b8-9b55-5143ca88dcd8 swap swap defaults 0 0 tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0$ lsblk -f NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS nvme0n1 ├─nvme0n1p1 vfat FAT32 8A2F-4076 915.6M 8% /efi ├─nvme0n1p2 crypto_LUKS 1 81733cbe-81f5-4506-8369-1c9b62e7d6be │ └─luks-81733cbe-81f5-4506-8369-1c9b62e7d6be ext4 1.0 endeavouros d8d14c59-8704-4fb8-ad02-7d20a26bc1e1 843.6G 2% / └─nvme0n1p3 crypto_LUKS 1 9715a3f9-f701-47b8-9b55-5143ca88dcd8 └─luks-9715a3f9-f701-47b8-9b55-5143ca88dcd8 swap 1 swap b003ea64-a38d-464c-8609-7278e21f8a0f [SWAP]The problem is that each time I boot up the computer, I need to enter my password twice; once for the root partition and once of the swap (note I use the same password for both if that helps). This has become nuisance. So my question is: Is there a way to automatically decrypt my swap partition upon a successful passphrase for the root? There has been a question very similar to this with a sensible answer, but did not work. The first part of the answer is Debian-centric with a script option not present in other distributions. The second part uses crypttab to specify the location of a keyfile used to decrypt other partitions. As of now, my crypttab in initrd looks like this, which specifies a /crypto_keyfile.bin that exists in the root partition to open either of the partitions: $ lsinitrd --file /etc/crypttab luks-81733cbe-81f5-4506-8369-1c9b62e7d6be /dev/disk/by-uuid/81733cbe-81f5-4506-8369-1c9b62e7d6be /crypto_keyfile.bin luks luks-9715a3f9-f701-47b8-9b55-5143ca88dcd8 /dev/disk/by-uuid/9715a3f9-f701-47b8-9b55-5143ca88dcd8 /crypto_keyfile.bin luksThis approach does not work for two reasons:Contrary to what the linked answer suggests (being that the user is queried for the partitions by the order of crypttab entries), the order is random at each boot. Even if I could automatically open my swap partition after opening the root, if swap comes first, then I am still forced to enter the password for root since keyfile is on root. It seems to me that after entering password for root, the filesystem is not mounted immediately. The /crypto_keyfile.bin is actually searched inside the initrd filesystem, which explains the following errors in journal appearing twice: systemd-cryptsetup[460]: Failed to activate, key file '/crypto_keyfile.bin' missing.So if I am on the right track, how could I ensure systemd-cryptsetup to query me first for the root partition and second for the swap each time, and how can I ensure that after opening root, the filesystem is mounted and /crypto_keyfile.bin is successfully found to open the swap partition? Otherwise, if I am completely off track here, is there a way to achieve what I want? Thanks.
How to open all LUKS volumes with use of a single password?
It was an upstream issue, now solved. Starting with version 6.10 you can launch memtest86+ from systemd-boot out of the box.
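With memtest86+ >= 6.10 installed, an entry like the one from the question should then work unchanged (the exact path of memtest.efi depends on how the package lays out the ESP):

title Memory Tester (memtest86+)
efi /memtest86+/memtest.efi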
Recently, version 6 of Memtest86+ has been released, which finally introduces UEFI support. Now, I use systemd-boot as boot manager, so I'd like to launch memtest86+ directly at boot. I set up a configuration file /boot/loader/entries/memtest.conf as described here [1]:

title Memory Tester (memtest86+)
efi /memtest86+/memtest.efi

With no luck! It results in a blank screen. Help me: what am I doing wrong? I am on Arch Linux w/ memtest86+-efi [2] 6.00-2

UPDATE1: reported the issue in the Arch Linux bug tracker [3]

[1] https://wiki.archlinux.org/title/Systemd-boot#EFI_Shells_or_other_EFI_applications
[2] https://archlinux.org/packages/extra/any/memtest86+-efi/
[3] https://bugs.archlinux.org/task/76390
memtest86+ cannot boot via systemd-boot EFI
Ok, so, for what it's worth, the following was successful for me:

sudo systemd-nspawn -bxD/

Practically identical to yours, except I don't give the machine a name and I get an -x ephemeral btrfs snapshot of my / for the container's root. That brought up the container's getty on my terminal's pty and I logged in to login and all. I confess I was a bit stumped for a little while, but after a little poking at systemctl in the container w/ zsh <tab> completion I came up with (run from within the container):

systemctl stop console-getty.service
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to manage system services or other units.
Authenticating as: mikeserv
Password:
==== AUTHENTICATION COMPLETE ===

Which got the machine to surrender its terminal control. The only thing is, I started that with sudo - which also gets its own layer of terminal control to authenticate in the first place. This left me with a blank terminal, and no amount of kill -CONT "$(pgrep ksh)" was doing me any good. And so I was again stumped for a moment or two, but (in another terminal)...

sudo fuser -v /dev/pts/*
             USER     PID   ACCESS COMMAND
/dev/pts/0:  mikeserv 8347  F.... zsh
             root     18003 F.... sudo
/dev/pts/13: mikeserv 9553  F.... zsh
             mikeserv 16838 F.... ksh
             root     17657 F.... sudo
             root     17658 F.... systemd-nspawn
/dev/pts/14: root     17675 F.... systemd

Gave me the above list, and so I thought - what the hell?

sudo kill -STOP 17657

And - lo and behold - I had ksh back in the original terminal. To wrap it up, I needed to verify I could still access the machine, though, of course, else it would be useless:

machinectl -l
MACHINE                    CLASS     SERVICE
localhost-35ceaa76b1306897 container nspawn

Ok...

sudo machinectl login localhost-35ceaa76b1306897
Connected to machine localhost-35ceaa76b1306897. Press ^] three times within 1s to exit session.

Arch Linux 4.0.7-2-ARCH (pts/0)

localhost-35ceaa76b1306897 login:

And I got another getty on another terminal!
I use systemd-nspawn to run a few containers. I can have them started in the background using systemctl start systemd-nspawn@foo. On occasion, however, I start with systemd-nspawn -bD foo. I couldn't find any way to send it to the background. Closing the terminal just kills the container as machinectl list shows. Can I do so, and if so, how? I understand a container is much more than a single process, but in this sense, the expected effect is the same as backgrounding a process - I want the container running, but my original shell given back to me.
How do I background a systemd-nspawn container?
For some reason, dnf didn't set the "right" SELinux label on /etc/passwd. But it did set a label on /bin/passwd. That mismatch is what causes the problem. Further explanations welcomed :).

$ ls -Z fedora-24/etc/passwd
unconfined_u:object_r:etc_t:s0 fedora-24/etc/passwd
$ ls -Z /etc/passwd
system_u:object_r:passwd_file_t:s0 /etc/passwd

$ ls -Z fedora-24/bin/passwd
system_u:object_r:passwd_exec_t:s0 fedora-24/bin/passwd
$ ls -Z /usr/bin/passwd
system_u:object_r:passwd_exec_t:s0 /usr/bin/passwd

Attempting to run restorecon -Rv / inside the container does nothing. IIRC libselinux detects when it's run in a container, and will not do anything.

Solution

We need to run from outside the container:

restorecon -Rv fedora-24/

It makes sure all the SELinux labels are reset. (To the value expected by the container host, i.e. unlabelled). Then we can set the root password successfully.
I combined the detailed instructions from the original blog post, and the more up-to-date instructions from the man page (using dnf instead of yum).

# sudo dnf -y --releasever=24 --installroot=$HOME/fedora-24 --disablerepo='*' --enablerepo=fedora --enablerepo=updates install systemd passwd dnf fedora-release vim-minimal

# sudo systemd-nspawn -D fedora-24
Spawning container fedora-24 on /home/alan-sysop/fedora-24
Press ^] three times within 1s to kill container.
-bash-4.3# passwd
Changing password for user root.
New password:
Retype new password:

Result:

passwd: Authentication token manipulation error

and an AVC popup, i.e. SELinux error. It says passwd is not allowed to unlink (replace) /etc/passwd. One of the suggestions from the "Troubleshoot" button is that I could assign the label passwd_file_t to /etc/passwd. What's wrong, how can I fix it?
systemd-nspawn OS container is unusable because I can't set the root password
The performance problem was that I thought whitelisting the syscalls in nspawn with --system-call-filter would improve the performance, but as they explained to me on the systemd mailing list, I should use export SYSTEMD_SECCOMP=0, because nspawn will still be processing syscalls when I whitelist them. SYSTEMD_SECCOMP was added in systemd v247 (debian buster has v241, but the backports repository has v247). So to make nspawn as quick as the baremetal host, do:

export SYSTEMD_SECCOMP=0
systemd-nspawn --capability=all -D ./bbusterboot --boot

This is equivalent to --privileged in docker/podman, and there is no need to use --system-call-filter if we use SYSTEMD_SECCOMP. Of course this is not good for security, so do it in a safe environment when running trusted code only. And if you want maximum performance, which will add performance to baremetal, nspawn, docker, podman or whatever you are using, then disable all the spectre/meltdown mitigations as I did in the question above (but this is not good for security either, if you run untrusted code like browsers with ads for example).

Read this for more details: https://github.com/systemd/systemd/issues/18370
Why nspawn is slow compared to docker podman and even qemu?! CPU tasks take twice of the time it takes in docker, podman or qemu Here is a benchmark test I did: First I disabled all the spectre/meltdown mitigations in the host kernel (and the qemu guest kernel in the case of qemu benchmark) using: GRUB_CMDLINE_LINUX_DEFAULT=noibrs noibpb nopti nospectre_v2 nospectre_v1 l1tf=off nospec_store_bypass_disable no_stf_barrier mds=off tsx=on tsx_async_abort=off mitigations=off spectre_v2_user=off spec_store_bypass_disable=off nx_huge_pages=off kvm.nx_huge_pages=off kvm-intel.vmentry_l1d_flush=never srbds=offthen I used this benchmark test: git clone https://github.com/tsuna/contextswitch cd contextswitch time makeI tested nspawn with super full privileges: export SYSTEMD_NSPAWN_USE_CGNS=0 systemd-nspawn --keep-unit --register=no --boot --capability=all --private-users=false --system-call-filter="@default @aio @basic-io @chown @clock @cpu-emulation @debug @file-system @io-event @ipc @keyring @memlock @module @mount @network-io @obsolete @privileged @process @raw-io @reboot @resources @setuid @signal @swap @sync @system-service @timer" --bind=/sys/fs/cgroup --machine=testtt -D busterdirI tested podman with privileges too: podman run --rm -it --privileged debian:10 bashI tested docker with privileges too: docker run --rm -it --privileged debian:10 bashI tested qemu with: qemu-system-x86_64 -name buster20210121210102 -m 2G -enable-kvm -cpu host -smp cores=4,threads=2,sockets=1 -object iothread,id=myio1 -device virtio-blk-pci,drive=mydisk0,iothread=myio1 -drive file=buster20210121210102.qcow2,if=none,id=mydisk0,format=qcow2,aio=native,cache=noneand here are the results: # baremetal real 0m12.998s# nspawn real 0m30.777s <==== :(# docker real 0m15.127s#podman real 0m15.207s# qemu without mitigations real 0m15.979shere I filled a request to improve nspawn performance which contain the full test result: https://github.com/systemd/systemd/issues/18370 Do you know why systemd-nspawn is slower? how can I improve it?
Why is systemd-nspawn slower than docker, podman and qemu? How to improve nspawn performance?
Depends what you are trying to do. I'll give a direct answer to your question, then answer some alternatives. I'm assuming you're using systemd as the init system in the container, which will be true if your container OS is based on Debian / Arch / Ubuntu or similar.

Executing a command after starting a nspawn container

In the .nspawn file (/etc/systemd/nspawn/yourcontainer.nspawn) add:

[Exec]
NotifyReady=yes

Then sudo machinectl start yourcontainer will wait until the container has finished booting before exiting. The second line of your script will now work because the container is ready (unless your container fails to boot, which would have put your polling into an infinite loop). Under the hood, the host's systemd-nspawn is setting up a Unix domain socket /run/host/notify in the container. When the container's systemd is ready (in other words when it reaches the multi-user.target target), it sends a READY=1 notification to that socket. The host's systemd-nspawn service waits for this message to be received. The drawback of this approach is that you can't boot the container asynchronously anymore (unless using the & notation), which gets annoying if you are debugging and the boot time is long.

Some other approaches, in order of complexity:

Running command in chroot

Assuming the container is not running,

sudo chroot /var/lib/machines/yourcontainer /bin/bash -c "$command"

This is useful if you have just created a container and are initialising it programmatically a number of times. Obviously it doesn't benefit from the sandboxing features. It also won't work if you've previously run the same container with PrivateUsers=yes, because the files will be chowned with a high UID. It might give undefined results if the container is already running.

Using systemd-nspawn directly

This approach does not need the NotifyReady=yes explained above.

systemd-nspawn -M yourcontainer -P /bin/bash -c "$command"

This will run the command inside the container with all the sandboxing turned on, but the command will run as the only process (and with PID=1) - your init service will not have run. So for example the networking will not be available (unless you are using host networking anyway). This command will not work if the container is already running.

Socket activation

If you are waiting for a server in the container to be ready, then just use socket activation (assuming your server is compatible). This is better explained elsewhere; in summary, systemd will wait for a connection to your socket (such as TCP port 80). When a client does connect, systemd will then start your container and then forward the traffic to it. In olden times inetd did the same thing. This would require a line like [emailprotected] in the .socket file.
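As a rough sketch of the socket activation variant, with all unit names and addresses being hypothetical (and assuming the container has a reachable IP): a .socket unit listens on the port, and its matching service pulls in the container and proxies traffic to it with systemd-socket-proxyd:

# httpd-proxy.socket (hypothetical)
[Socket]
ListenStream=80

[Install]
WantedBy=sockets.target

# httpd-proxy.service (hypothetical; 192.0.2.10 stands in for the container's address,
# and the proxyd path may be /lib/systemd/systemd-socket-proxyd on Debian)
[Unit]
Requires=systemd-nspawn@yourcontainer.service
After=systemd-nspawn@yourcontainer.service

[Service]
ExecStart=/usr/lib/systemd/systemd-socket-proxyd 192.0.2.10:80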
I have a script that contains the following:

sudo machinectl start "$machinename"
sudo systemd-run -PM root@"$machinename" "$command"

Failed to connect to bus: No such file or directory
Failed to start transient service unit: Transport endpoint is not connected

This fails because the first line only starts booting the container; the second line runs before the container finishes booting. For now, I have a solution that constantly polls for the container's status and blocks till it's ready:

while [ "$(sudo systemctl show "systemd-nspawn@$machinename" -P StatusText)" != "Container running: Ready." ]
do
    true
done

How do I wait for the container to finish booting, without constantly polling for the container's status?
How do I wait for a systemd-nspawn container to boot?
systemd-detect-virt can tell you whether your system is running in a VM/container. This requires systemd-detect-virt inside your container, but the systemd documentation on minimal builds suggests that you can just build a package that only includes systemd-detect-virt.
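A sketch of the .bashrc hook (the lsb_release call and the xterm title escape sequence are assumptions about your environment):

# only decorate the title when running inside a container
if systemd-detect-virt --container --quiet; then
    distro=$(lsb_release -sd 2>/dev/null)
    PS1="\[\e]0;${distro:-container}: \w\a\]$PS1"
fi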
Quite recently I started using systemd-nspawn to set up other OS instances on my Arch box. One thing I'd like to do is detect if I'm inside a container, and if so, add the distro name (from lsb_release) to the terminal title. On Debian-based systems, the default .bashrc uses debian_chroot for a similar purpose. How do I detect if I am running inside a nspawn container?
How can I detect if a system is running inside a systemd-nspawn container?
The problem is Fedora's firewalld. It seems nspawn was never integrated with firewalld. (nspawn isn't correctly integrated with Fedora's SELinux policy either). As mentioned in the question, libvirt is working fine :). We can use the same trick people discovered for running containers with LXC on Fedora. Update: the workaround stopped working after I upgraded to Fedora 30.

Run systemd-nspawn with the option --network-bridge virbr0. Instead of relying on systemd-networkd, this leverages libvirtd.service. The latter service is already started by default on Fedora. In the guest, set up your preferred DHCP client as usual.

DNS resolution when using systemd-networkd as DHCP client

Using systemd-networkd as a DHCP client might accidentally work on its own, if you have a left-over /etc/resolv.conf from a previous container boot. But you can't rely on this working in general. It's really designed to be run together with systemd-resolved.service. In turn, systemd-resolved is intended to be used with nss-resolve. However this is not essential AIUI.
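For the record, the working invocation looked roughly like this (the container path is an example, and it assumes libvirtd's default network is active so virbr0 exists):

sudo systemd-nspawn -b -D /var/lib/machines/fedora-25 --network-bridge=virbr0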
Looking at the documentation for systemd-nspawn, it must have been intended to have a very user-friendly way to launch containers in a different network namespace. You use the -n option, and simply enable systemd-networkd.service on both ends. The container gets its own IP address in one of the "private" ranges. (DNS might require some additional step). The result is I get an IP address in the range 169.254.*.*. The default route points at the host0 interface without going through any gateway/router. Attempts to reach internet servers e.g. 8.8.8.8 fail with "No route to host". (No point working out DNS if this doesn't work). If I run tcpdump -i ve-fedora-25 on the host, I can see the DHCP requests, but they are not responded to. systemd-networkd is definitely running on the host. The host-side logs show "Gained carrier" on ve-fedora-25, and networkctl shows this as "routable" & "configured" all in green. My system is Fedora 25. I want an OS container I can connect to using TCP/IP, and at the same time be able to connect out to the world (e.g. to run the dnf package manager). Just as libvirt VMs work so easily out-of-the-box. What has gone wrong?
systemd-nspawn container with separate IP address (network namespace) not working
The systemd-nspawn command has a --bind option that lets you "bind mount" a directory from the host filesystem into the container. If you just do --bind /path/to/dir then it will appear at the same path inside the container. If you do --bind /path/to/dir:/foo then it will show up as /foo inside the container. To use it in a configuration file (/etc/systemd/nspawn/<container>.nspawn), add the Bind= directive to its [Files] section.
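A minimal sketch of both forms, with placeholder paths and container name:

systemd-nspawn -D /var/lib/machines/mycontainer --bind=/srv/data:/foo

# equivalent persistent form, in /etc/systemd/nspawn/mycontainer.nspawn
[Files]
Bind=/srv/data:/foo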
I would like to expose to a container (Ubuntu 16.04 created with debootstrap) started with systemd-nspawn a directory of the host system (also an Ubuntu 16.04). Is this possible with systemd-nspawn? I would fallback on some NFS based solution (the host exposes the directory which is mounted by the guest) but a systemd native solution would be ideal.
How to expose a directory to a container?
Have you read through the systemd-nspawn man page? Nothing in there says that you require debootstrap, and in fact it shows several non-Debian examples. You do need a root filesystem, but just like a Docker container (or even a traditional chroot environment) that doesn't require anything more than your executable and any shared libraries or other resources necessary for it to run. If you just want process tree isolation, maybe instead of systemd-nspawn you want unshare:

# unshare --pid --fork --mount-proc bash
# ps -fe
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  4 09:49 pts/0    00:00:00 bash
root        24     1  0 09:49 pts/0    00:00:00 ps -fe
Is it possible to use nspawn for a single executable? My goal is just to isolate the process tree of the application, not portability. Let's say you wrote an application in C, built for your target platform. Today I am able to run this as a systemd service by configuration via a systemd unit file. Is there a way to make a minimal nspawn "container" for my application? All the articles I've read indicate that nspawn requires debianbootstrap, which ends up being almost 300mb worth of files. If I don't care about portability, is there some other way to leverage the process tree isolation features of nspawn?
Minimal systemd-nspawn container for process tree isolation with embedded applications
Setting the locale is documented in the Debian install guide - there's an appendix which provides some hints on installing directly with debootstrap and configuring the system yourself.

To configure your locale settings to use a language other than English, install the locales support package and configure it. Currently the use of UTF-8 locales is recommended.

# aptitude install locales
# dpkg-reconfigure locales

The appendix as a whole has a disclaimer that it is not comprehensive, but it is official documentation and this specific method is perfectly correct. There are other alternatives which may be preferred for scripting - this method prompts the user to choose which locale(s). There is a second issue which the appendix also mentions in passing. I am not sure if it affects your specific character issue, but it can cause issues with similar sophisticated output. You need to make sure that TERM is set correctly. Run echo $TERM outside the container. Inside the container, run e.g. export TERM=xterm-256color to set the terminal type for this session. I don't think machinectl login handles this for you either, which is sad given how it talks to systemd inside the container. If you run an SSH server inside the container, then just use that; SSH will forward TERM correctly and you don't have to do anything.
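If you want to script this instead of answering prompts, a sketch of the non-interactive route on Debian (assuming you want en_US.UTF-8; pick your own locale):

# enable the locale, generate it, and make it the default
echo 'en_US.UTF-8 UTF-8' >> /etc/locale.gen
locale-gen
update-locale LANG=en_US.UTF-8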
Trying to get powerline/airline symbols to show in vim running in a Debian container created with sudo systemd-nspawn -D ~/debian-tree/ on a Fedora host. Right now it just shows question marks in diamonds (��). I'm pretty sure I need to set the locale but I can't find a straightforward answer on how to do this properly. Output of locale:

LANG=
LANGUAGE=
LC_CTYPE="POSIX"
LC_NUMERIC="POSIX"
LC_TIME="POSIX"
LC_COLLATE="POSIX"
LC_MONETARY="POSIX"
LC_MESSAGES="POSIX"
LC_PAPER="POSIX"
LC_NAME="POSIX"
LC_ADDRESS="POSIX"
LC_TELEPHONE="POSIX"
LC_MEASUREMENT="POSIX"
LC_IDENTIFICATION="POSIX"
LC_ALL=

Output of locale -a:

C
C.UTF-8
POSIX
Setting locale in a systemd-nspawn container (debian jessie)
$ cat /proc/6211/status | grep -i Cap
CapInh:	0000000000000000
CapPrm:	0000000000000000
CapEff:	0000000000000000
CapBnd:	00000000fdecafff
CapAmb:	0000000000000000

CapInh is the set of inheritable capabilities, which is not useful for the current program, but could be passed on to any programs this process would exec() if the right conditions apply. It's all zeroes, so there are no capabilities in there anyway. CapEff is the most important one: it is the set of effective capabilities, or the privileged things this process/thread is allowed to do right now. Unfortunately, it is all zeroes here. CapPrm limits the capabilities this particular process/thread is permitted to get for itself or its child processes if it asks for them. And that is also all zeroes. So as long as this process executes the current program, it will never be able to gain any capabilities at all. CapBnd is the bounding set that limits the capabilities the descendants of this program could receive - if they would get them from somewhere else. If this process would exec() a setuid-root program, this is the set of capabilities that would become effective for it all at once. Or if, for example, this process executed a program that had a setcap 'cap_sys_resource=eip' <filename> done on it, this CapBnd value would allow the CAP_SYS_RESOURCE capability to become effective for the executed program and its child processes. So your process currently does not have the CAP_SYS_RESOURCE capability and cannot get it without exec()ing another program. To make CAP_SYS_RESOURCE immediately effective for your containerized process, you would need to add the option --ambient-capability=CAP_SYS_RESOURCE to your systemd-nspawn command line.
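For example (the container path is a placeholder, and note that --ambient-capability requires a reasonably recent systemd; older releases do not have the option):

sudo systemd-nspawn --ambient-capability=CAP_SYS_RESOURCE -D /path/to/container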
I have a systemd-nspawn container in which I am trying to change the kernel parameter for msgmnb. When I try to change the kernel parameter by directly writing to the /proc filesystem or using sysctl inside the systemd-nspawn container, I get an error that the /proc file system is read only. From the Arch wiki I see this relevant documentation:

systemd-nspawn limits access to various kernel interfaces in the container to read-only, such as /sys, /proc/sys or /sys/fs/selinux. Network interfaces and the system clock may not be changed from within the container. Device nodes may not be created. The host system cannot be rebooted and kernel modules may not be loaded from within the container.

I thought the container would inherit some properties of /proc from the host, including the kernel parameter value for msgmnb, but this does not appear to be the case, as the host and container have different values for msgmnb. The kernel parameter value in the container:

cat /proc/sys/kernel/msgmnb
16384

Writing to the proc filesystem inside the container:

$ bash -c 'echo 2621440 > /proc/sys/kernel/msgmnb'
bash: /proc/sys/kernel/msgmnb: Read-only file system

For completeness, I also tried sysctl in the container:

# sysctl -w kernel.msgmnb=2621440
sysctl: setting key "kernel.msgmnb": Read-only file system

I thought this value would be inherited from the host system. I set the value on the host, rebooted and re-created my container. The container (even new ones) maintains the value of 16384.

# On the host
$ cat /proc/sys/kernel/msgmnb
2621440

I've also tried using the unprivileged -U flag when booting the systemd-nspawn container, but I get the same results. I've also tried editing /etc/sysctl.conf in the container tree to include this line before booting the container:

kernel.msgmnb=2621440

I also looked into https://man7.org/linux/man-pages/man7/capabilities.7.html and noticed CAP_SYS_RESOURCE, which has a line that reads:

CAP_SYS_RESOURCE ... raise msg_qbytes limit for a System V message queue above the limit in /proc/sys/kernel/msgmnb (see msgop(2) and msgctl(2));

Using sudo systemd-nspawn --capability=CAP_SYS_RESOURCE -D /path/to/container, and then inside the container, when I use msgctl with IPC_SET and pass msqid_ds->msg_qbytes with a value that is higher than what is in /proc/sys/kernel/msgmnb, the syscall returns an error code. It seemed like passing CAP_SYS_RESOURCE should work here? Nothing I've tried here has changed the value for msgmnb in the container. I can't seem to find documentation on how to achieve my goal. I'd appreciate any help - thank you!

EDIT: Trying to determine if the process calling msgctl has the capability. Here is what I found:

$ cat /proc/6211/status | grep -i Cap
CapInh:	0000000000000000
CapPrm:	0000000000000000
CapEff:	0000000000000000
CapBnd:	00000000fdecafff
CapAmb:	0000000000000000
$ capsh --decode=00000000fdecafff
0x00000000fdecafff=cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_raw,cap_ipc_owner,cap_sys_chroot,cap_sys_ptrace,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap
How to increase kernel parameter (`msgmnb`) for a systemd-nspawn container
passwd works for this case. It has an option --stdin. Do not use echo my-secret-password | passwd --stdin, because echo my-secret-password may become visible if someone runs ps, or maybe even in a log file if you are unlucky.

#!/bin/sh

PASSWORD=...

passwd root --stdin <<EOF
$PASSWORD
EOF
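Note that --stdin is a Red Hat extension to passwd and is not available on every distribution. Where it is missing, chpasswd (part of the standard shadow utilities) reads user:password pairs on standard input, and a here-document keeps the secret off the command line in the same way - a sketch:

#!/bin/sh

PASSWORD=...

chpasswd <<EOF
root:$PASSWORD
EOF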
I want to change the root password of an nspawn container. I am creating the container via Ansible just after creating the rootfs, and at the very start it doesn't have any root password. Is it a good idea to change the password by using the replace module to replace the root line in the /etc/shadow file? Is there any other way to update the password non-interactively? I have tried:

echo user:pass | /usr/sbin/chpasswd

but echo is not working, as I am getting execv() failed: No such file or directory
non-interactive password change of nspawn container
Use the dmidecode | grep -A3 '^System Information' command. There you'll find all the information from the BIOS and hardware. These are examples from three different machines (excerpts of the complete output):

System Information
	Manufacturer: Dell Inc.
	Product Name: Precision M4700

System Information
	Manufacturer: MICRO-STAR INTERANTIONAL CO.,LTD
	Product Name: MS-7368

System Information
	Manufacturer: HP
	Product Name: ProLiant ML330 G6
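If dmidecode is not installed, the same DMI strings are usually also exposed read-only through sysfs on systems with DMI/SMBIOS firmware (some sibling files such as product_serial are root-only):

cat /sys/class/dmi/id/sys_vendor /sys/class/dmi/id/product_name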
I used a system information utility to take the model number of a system, and also of the motherboard.

DMI System Manufacturer    LENOVO
DMI System Product         2306CTO
DMI System Version         ThinkPad X230
DMI Motherboard Product    2306CTO

Is there a way to get the model number, in this case 2306CTO, in Linux?
How can I find the hardware model in Linux?
If I need to know what it is, say Linux/Unix, 32/64-bit:

uname -a

This would give me almost all the information I need. If I further need to know what release it is, say (CentOS 5.4, or 5.5 or 5.6) on a Linux box, I would further check the file /etc/issue to see its release info (or, for Debian/Ubuntu, /etc/lsb-release). An alternative way is to use the lsb_release utility:

lsb_release -a

Or do a rpm -qa | grep centos-release (or redhat-release for RHEL-derived systems).
Often times I will ssh into a new client's box to make changes to their website configuration without knowing much about the server configuration. I have seen a few ways to get information about the system you're using, but are there some standard commands to tell me what version of Unix/Linux I'm on and basic system information (like if it is a 64-bit system or not), and that sort of thing? Basically, if you just logged into a box and didn't know anything about it, what things would you check out and what commands would you use to do it?
How can I tell what version of Linux I'm using?
If your system supports a procfs, you can get much information about your running system. It's an interface to the kernel's data structures, so it will also contain information about your hardware. For example, to get details about the CPU in use you could run:

cat /proc/cpuinfo

For more information, see man proc. More hardware information can be obtained through the kernel ring buffer log messages with dmesg. For example, this will give you a short summary of recently attached hardware and how it is integrated into the system. These are some basic "interfaces" you will have on every distribution to obtain some hardware information. Other small tools to gather hardware information are:

lspci - PCI hardware
lsusb - USB hardware

Depending on your distribution you will also have access to one of these two tools to gather a detailed overview of your hardware configuration:

lshw
hwinfo (SuSE-specific but available under other distributions also)

The "gate" to your hardware is through the "Desktop Management Interface" (DMI). This framework exposes your system information to your software and is used by lshw, for example. A tool to interact directly with the DMI is dmidecode, available on most distributions as a package. It comes with biosdecode, which also shows you the complete available BIOS information.
How can I check what hardware I have? (With BIOS version etc.)
Getting information on a machine's hardware in Linux
You can execute:

uname -r

It will display something like:

3.13.0-62-generic

Found on https://askubuntu.com/questions/359574/how-do-i-find-out-the-kernel-version-i-am-running (see that Q&A to learn other commands you could use)
While troubleshooting a problem with my ethernet card, I've found that the driver I'm currently using may have some issues with old kernel versions. What command can I use to check the kernel version I am currently running ?
How do I check the running kernel version?
You could try running (as root) dmidecode -t memory. I believe that's what lshw uses (as described in the other answer), but it provides the information in another form, and lshw isn't available on every Linux distro. Also, in my case, dmidecode produces the Asset number, useful for plugging into Dell's support web site.
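To narrow the output to the fields that matter when pricing RAM, a rough sketch (field names as dmidecode typically prints them; adjust the pattern to taste):

sudo dmidecode -t memory | grep -E 'Size|Type:|Speed|Locator'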
I'd like to price some new RAM for our in-house VMware testing server. It's a consumer box we use for testing our software on and running business VMs. I've forgotten what kind of RAM it has and I'd rather not reboot the machine and fire up memtest86+ just to get the specs of the RAM. Is there any way I can know what kind of RAM to buy without shutting down Linux and kicking everyone off? For example, is the information somewhere in /proc?
Can I identify my RAM without shutting down Linux?
I just checked on my CentOS 5 system - after:

chgrp kmem /usr/sbin/dmidecode
chmod g+s /usr/sbin/dmidecode

it is still not possible to get dmidecode working - the kmem group only has read rights on /dev/mem, and it seems a write is involved in getting at the BIOS information. So some other options:

Use sudo
Use other information sources (e.g. /proc/meminfo)
Use an init script that writes the static output of dmidecode to a world-readable file
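A sketch of the last option - a one-off snapshot taken as root that any user can then read (the output path is arbitrary):

# run once at boot, as root
dmidecode > /var/lib/dmidecode.txt
chmod 644 /var/lib/dmidecode.txt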
I'm writing a program that displays various system information (on a CentOS system). For example, the processor type and speed (from /proc/cpuinfo), the last boot time (calculated from /proc/uptime), the IP address (from ifconfig output), and a list of installed printers (from lpstat output). Currently, several pieces of data are obtained from the dmidecode program:

The platform type (dmidecode -s system-product-name)
The BIOS version (dmidecode -s bios-version)
The amount of physical memory (dmidecode -t17 | grep Size)

These are only available if my program is run as root (because otherwise the dmidecode subprocess fails with a /dev/mem: Permission denied error). Is there an alternative way to get this information, that a normal user can access?
How to get dmidecode information without root privileges?
If that's all you need, just use free:

$ free -h | gawk '/Mem:/{print $2}'
7.8G

free returns memory info; the -h switch tells it to print in human-readable format.
On OS X, I get a nice human readable system memory reading like so:

printf -v system_memory \
  "$(system_profiler SPHardwareDataType \
  | awk -F ': ' '/^ +Memory: /{print $2}')"
echo "$system_memory"

prints out the friendly:

4 GB

Although this on Linux is correct:

lshw -class memory

it outputs:

size: 4096MiB

I need to painfully parse it and try to make it into a string as nice as the one above. Am I using the wrong command?
Human readable system memory reading from CLI?
In addition to uname -a, which gives you the kernel version, you can try:

lsb_release -idrc    # distro, version, codename, long release name

Most desktop environments like GNOME or KDE have an "about" or "info" menu option that will tell you what you are currently using, so no command line is really needed there.
I have always found it difficult to find information about the system itself in Unix, whether it be:

Which OS I am using (version number and all, to compare it with the latest available builds)?
Which desktop environment am I using? If I am using KDE, most of the programs begin with a K and I can say I am using KDE, but there should be some way to query this, say from a script.
Which kernel version am I using? (For example, I am using Fedora, and I want to know what Linux kernel version I am using.)

Basically, what I miss is a single point/utility that can get all this information for me. Most of the time the solutions to the above are themselves OS specific. Then, you are stuck.
How to find information about the system/machine in Unix?
The entire format string is to be preceded by the +:

$ date +"So this is week: %U"
So this is week: 19
$ date +"So this is week: %U of %Y"
So this is week: 19 of 2016
I want the UNIX date to output:

So this is week 35 of 2016.

Here, of course, 35 and 2016 are outputs of the date command. I have tried the following:

date +%U

This printed out the current week number. But I have to wrap it inside the specific text I want to display. I tried:

date "So this is week: %U"
date "It is currently: "+ "%U"

Both gave me an error. How can I make the date command display the week number to me in the specific format that I desire?
How to use the 'date' command to display week number of the year?
When you read from /proc, the kernel generates content on the fly. There is no hard drive involved. What you're doing is similar to what any number of monitoring programs do, so I advise you to look at what they're doing. For example, you can see what top does: strace top >/dev/nullThe trace shows that top opens /proc/uptime, /proc/loadavg, /proc/stat and /proc/meminfo once and for all. For all these files except /proc/uptime, top seeks back to the beginning of the (virtual) file and reads again each time it refreshes its display. Most of the data in /proc/cpuinfo is constant, but a few fields such as the CPU speed on some machines are updated dynamically. The proc filesystem is documented in the kernel documentation, in Documentation/filesystems/proc.txt. If you get desperate about some esoteric detail, you can browse the source.
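So for simple shell-level monitoring there is no need for anything fancier than re-reading: every open/read regenerates the content. A minimal polling sketch:

# print the MemFree value from /proc/meminfo once per second
while sleep 1; do
    awk '/^MemFree/ {print $2, $3}' /proc/meminfo
done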
Does the hard drive need to be accessed or is everything done in memory? Basically I would like to constantly get updated values from meminfo and cpuinfo. Do I need to reopen the file and then reread in order to get an updated value or can I just reread? I don't have access to a Linux install at the moment.
What happens when I open and read from /proc? [duplicate]
For CPU-Z I can't really say (/proc/cpuinfo doesn't give core speed, multiplier etc...). For hardware monitoring the sensors command (part of the lm_sensors package) should work; it doesn't have a GUI per se, however. Finally, the stresslinux distro has many stress-testing utilities.

stresslinux makes use of some utilities available on the net like: stress, cpuburn, hddtemp, lm_sensors ... stresslinux is dedicated to users who want to test their system(s) entirely on high load and monitoring the health. Stresslinux is for people (system builders, overclockers) who want to test their hardware under high load and monitor stability and thermal environment.
I am trying to overclock my machine. All the changes are made on the BIOS level, but then one needs to check all the temperatures, voltages, etc and test the stability of the overclock. Most tutorials (if not all) are written for Windows. What would be Linux alternatives for: CPU-Z: to display all the CPU information, including Core Speed, Core Voltage, current multiplier, etc. HWMonitor: check fan speed and core temperatures Prime95: stress testing with results validation Also I would like to be able to monitor VTT and NB voltages (see a short explanation of all voltages) for Intel processors (I have a Q9450) - I haven't actually found a Windows program that does it yet.
Overclocking tools in Linux
Thank you for adding the extra information about the processor to your question. It helps to know that the examples you posted refer to an Intel Core i7-920 Processor. The information provided by lscpu is more accurate because it includes all three levels of cache, L1, L2, and L3. It appears that lshw was only minimally modified to reflect Intel's addition of an L3 cache to their CPUs. Instead of displaying information about all three caches levels, the information about the size of the L3 cache is apparently reported as L2 cache. I assume that the specs you looked at did not include L1 and L2 cache because within a given microarchitecture they are all the same. For example, for Nehalem this is "64 KB L1 cache/core (32 KB L1 Data + 32 KB L1 Instruction) and 256 KB L2 cache/core.". I believe giving each core its own L1 and L2 with a single, much larger common L3 was first introduced as part of the Nehalem (microarchitecture) (in November 2008?). I do not know why lshw uses the term External Cache to refer to the L3. But it strikes me as misleading since the L3 cache is on the CPU die and not what I would consider external. Again, this feels like trying to use old software to describe newer hardware while only making minimal changes to the software. (Probably more could be learned by looking at the actual source code, but I did not have time to try to do that.) Finally, yes the L3 cache is shared among the cores/threads. The following quote is from the Wikipedia article linked above, "Hyper-threading is reintroduced along with a reduction in L2, which has been incorporated as L3 Cache which is usable by all cores."
I am trying to find out specifics about caches (in particular which caches are shared between cores and which are not) and have stumbled onto an inconsistency. sudo lshw says:

*-cache:0
     description: L1 cache
     physical id: a
     slot: Internal Cache
     size: 64KiB
     capacity: 64KiB
     capabilities: synchronous internal write-back
*-cache:1
     description: L2 cache
     physical id: b
     slot: External Cache
     size: 8MiB
     capabilities: synchronous internal write-back

but lscpu claims:

L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              8192K

I do not worry too much about instruction and data cache being added together, but where did L2 go? Observed on a machine running Ubuntu 10.10, or to let uname -a speak:

Linux name 2.6.35-32-generic #66-Ubuntu SMP Mon Feb 13 21:04:32 UTC 2012 x86_64 GNU/Linux

This is a general question, but note that neither the most precise manufacturer spec I could find nor Wikipedia have the necessary detail. Unrelated bonus question: does External Cache mean the cache is shared between the (four) cores (and Internal Cache the opposite)?
lshw and lscpu disagree on caches - which is right?
Yes. You can check /sys/kernel/security to see what's available. See also dmesg or /proc/cmdline for boot settings. If /proc/config.gz is available, then:

zgrep CONFIG_SECURITY /proc/config.gz

else:

grep CONFIG_SECURITY /boot/config-$(uname -r)
Is there a way to find out if, and in that case which, Linux security module (LSM: AppArmor, SELinux, grsecurity) is used by the kernel? To be more specific, let's assume I am a legitimate root user of the machine. If the information is available, it would furthermore be nice to know: with regard to the question, is there a difference between the machine being (a) a local computer, (b) a dedicated server and (c) a virtual server "vServer"?

Update: I know that I could for instance install the user-space stuff (on Debian, for instance, apt-get install apparmor) and check if it yields results related to the specific LSM. So for AppArmor I could do sudo apparmor_status, which would then for instance yield "apparmor module is not loaded.", which helps me rule out that option. Yet I was looking for a more general approach covering most/all LSMs.

Update 2: I have discovered the path /sys/kernel/security. Maybe this is helpful in finding an answer?
How to determine if and which linux security module (LSM) is available?
A system can have a UEFI firmware and still boot the OS in legacy BIOS mode. In that situation there is no way for the booted OS to determine if the hardware is actually capable of UEFI, because BIOS isn't forward compatible with UEFI. You can still look at the firmware interface for anything related to UEFI, but that is vendor specific and inconsistent. So there is also no definite answer from that side. The canonical method to prove your x86(_64) kernel is booted from UEFI:

$ dmesg | grep 'EFI v'
[    0.000000] efi: EFI v2.31 by EDK II

The kernel will print such a message at the main entry point of EFI boot. The kernel is booted with UEFI if and only if such a message exists. Other informative stuff:

$ dmesg | grep 'efi: mem'
[    0.000000] efi: mem00: type=7, attr=0xf, range=[0x0000000000000000-0x00000000000a0000) (0MB)
...

This is the memory map passed from the EFI firmware to the kernel.

$ ls -F /sys/firmware/efi
efivars/  systab  vars/

These are kernel ABIs related to EFI. efivars (3.8+) and vars are kernel ABIs to the EFI NVRAM, so you can change boot options with them. But lack of these clues does not prove the system is BIOS only. Empirically, recent laptops all have UEFI firmware. Latest servers are migrating to UEFI firmware.

Edit: The author of rEFInd has a more thorough explanation. Steps are the same. Also, the Firmware Test Suite from Ubuntu might detect whether your UEFI firmware has a compatibility feature for legacy BIOS, although it doesn't solve the problem of detecting UEFI-capable firmware booting in BIOS mode.
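A common interactive shorthand built on the same sysfs ABI (subject to the same caveat: a missing directory does not prove the hardware is BIOS-only):

[ -d /sys/firmware/efi ] && echo "booted in UEFI mode" || echo "booted in BIOS mode (or kernel lacks EFI support)"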
My inspiration for asking this is this other question on Ask Ubuntu. One reason I am asking is that I am just curious. I would like to know more about this for whatever value it might have in the future. But I would also like to know so that I have a procedure I can ask a user to perform when I am wondering WTF might be up with them and their system. ;-) Initially I wondered if this information might be detected and reported by a tool such as (or similar to) dmidecode. But what would happen when a UEFI BIOS is simulating a pre-UEFI BIOS? I expect this question will only become more interesting as time passes. It appears that the companies behind each of the major operating systems will insist on implementing EFI support by doing "the same thing only different". <sigh/>
Is there a command or method (other than RTFM) to determine if a system has a UEFI BIOS?
There is no portable, reliable, generic method to retrieve the hardware model name in Linux. Let me describe 2 different cases: an ARM-based Raspberry Pi with Raspbian installed and a MIPS-based TP-LINK router with OpenWRT installed. The Raspberry Pi has an ARM CPU, and ARM devices commonly use a device tree to describe hardware; the Wikipedia article even mentions that it has been mandatory since 2012. The device-tree structure is exposed to userspace and can be used for retrieving the model name by cat-ing /proc/device-tree/model, where /proc/device-tree itself is a symlink to /sys/firmware/devicetree/base (notice that there is no newline at the end of device-tree files, so we create a helper function called catn that cats the file and adds a newline):

pi@raspberrypi:~$ catn () { cat $1 && echo; }
pi@raspberrypi:~$ catn /proc/device-tree/model
Raspberry Pi 3 Model B Rev 1.2
pi@raspberrypi:~$ catn /sys/firmware/devicetree/base/model
Raspberry Pi 3 Model B Rev 1.2

or by manually dumping the /sys/firmware/fdt flattened device-tree blob with dtc:

pi@raspberrypi:~$ sudo dtc /sys/firmware/fdt 2>/dev/null | grep model
	compatible = "raspberrypi,3-model-b\0brcm,bcm2837";
	model = "Raspberry Pi 3 Model B Rev 1.2";

If the official Raspberry Pi Linux fork is in use, the model is also written to /proc/cpuinfo:

pi@raspberrypi:~$ grep "^Model" /proc/cpuinfo
Model		: Raspberry Pi 3 Model B Rev 1.2

Also notice that the full name of the board - Raspberry Pi 3 Model B Rev 1.2 - is constructed by low-level firmware, and you will not find a full string like that anywhere in the Linux kernel code:

pi@raspberrypi:~$ strings /boot/start.elf | grep 'Raspberry Pi '
Raspberry Pi %s Rev %s
Raspberry Pi Bootcode

model is a standard device-tree property described in the DTSpec. Other architectures such as RISC-V also use a device tree to describe hardware, but I don't have any RISC-V board to check. There is no /proc/device-tree, no /sys/firmware/devicetree/base and no /sys/firmware/fdt on my TP-LINK router - this means either that it doesn't come with a device tree at all, or that the appropriate Linux kernel config options have been disabled and the device tree is not exposed to userspace. The former is, however, more likely, as there is /tmp/sysinfo instead:

~ $ cat /tmp/sysinfo/board_name
tl-wdr4300
~ $ cat /tmp/sysinfo/model
TP-Link TL-WDR3600 v1

These values are generated by the ar71xx.sh script, which is rather long, but you can see that name is assigned in line 1313:

*"TL-WDR3600/4300/4310")
	name="tl-wdr4300"
	;;

based on the machine value, which is in turn taken from the machine field of /proc/cpuinfo:

machine=$(awk 'BEGIN{FS="[ \t]+:[ \t]"} /machine/ {print $2}' /proc/cpuinfo)

and is then assigned to AR71XX_BOARD_NAME and written to /tmp/sysinfo/board_name by the end of the script. The full value of the machine field in /proc/cpuinfo on this router is:

~ $ grep "^machine" /proc/cpuinfo
machine		: TP-LINK TL-WDR3600/4300/4310

But Neofetch is not looking for /tmp/sysinfo/board_name, it's looking for /tmp/sysinfo/model.
It's not taken from /proc/cpuinfo but read from the firmware flash partition:

~ $ cat /proc/mtd
dev:    size   erasesize  name
mtd0: 00020000 00010000 "u-boot"
mtd1: 0010c5a4 00010000 "kernel"
mtd2: 006c3a5c 00010000 "rootfs"
mtd3: 00490000 00010000 "rootfs_data"
mtd4: 00010000 00010000 "art"
mtd5: 007d0000 00010000 "firmware"
~ $ dd if=/dev/mtdblock5 bs=4 count=1 skip=16 2>/dev/null | hexdump -v -n 4 -e '1/1 "%02x"' && echo
36000001

The model is assigned in line 321:

"360000"*)
	model="TP-Link TL-WDR3600"
	;;

Of course it's hard to imagine that a generic program such as Neofetch would have so much knowledge about each firmware, its flash layout etc. I could, however, imagine a MIPS-based implementation that wouldn't support device-tree and wouldn't provide any useful hardware model information in /tmp/sysinfo or anywhere else, and in such cases /proc/cpuinfo could be used as a last resort to get any information about the hardware.
I am writing an application that works like Neofetch when a -w option is passed. It shows some of the the system information like memory, swap, cpu, battery usages, hostname, local ip, kernel version etc. I am wondering how to get the "Host" like in Neofetch. For example: -` sourav@archlinux-arm .o+` -------------------- `ooo/ OS: Arch Linux armv7l `+oooo: Host: Raspberry Pi 3 Model B Rev 1.2 `+oooooo: Kernel: 4.19.108-1-ARCH -+oooooo+: Uptime: 10 mins `/:-:++oooo+: Packages: 804 (pacman) `/++++/+++++++: Shell: bash 5.0.16 `/++++++++++++++: Resolution: 1366x768 `/+++ooooooooooooo/` DE: Xfce ./ooosssso++osssssso+` WM: Xfwm4 .oossssso-````/ossssss+` WM Theme: XFCE_Colour_Lite_Pink -osssssso. :ssssssso. Theme: XFCE_Colour_Lite_Pink [GTK2], X :osssssss/ osssso+++. Icons: Papirus [GTK2], Tela-orange [GT /ossssssss/ +ssssooo/- Terminal: tilix `/ossssso+/:- -:/+osssso+- CPU: BCM2835 (4) @ 1.350GHz `+sso+:-` `.-/+oso: Memory: 333MiB / 901MiB `++:. `-/+/ .` `/ I get an information like this. On my laptop: -` sourav@archlinux .o+` ---------------- `ooo/ OS: Arch Linux x86_64 `+oooo: Host: Inspiron 5567 `+oooooo: Kernel: 5.5.10-arch1-1 -+oooooo+: Uptime: 3 hours `/:-:++oooo+: Packages: 1163 (pacman) `/++++/+++++++: Shell: bash 5.0.16 `/++++++++++++++: Resolution: 1920x1080 `/+++ooooooooooooo/` DE: Xfce ./ooosssso++osssssso+` WM: Xfwm4 .oossssso-````/ossssss+` WM Theme: XFCE_Colour_Lite_Ruby -osssssso. :ssssssso. Theme: XFCE_Colour_Lite_Purple [GTK2 :osssssss/ osssso+++. Icons: Papirus [GTK2/3] /ossssssss/ +ssssooo/- Terminal: tilix `/ossssso+/:- -:/+osssso+- CPU: Intel i3-6006U (4) @ 2.000GHz `+sso+:-` `.-/+oso: GPU: Intel Skylake GT2 [HD Graphics `++:. `-/+/ Memory: 2814MiB / 3755MiB .` `/My question is related to this question, but it doesn't answer my question because my raspberry pi can't run dmidecode, (no /sys/devices/virtual/dmi/ either), no lshw installed. Also, the /etc/hostname are not the computers' model name, instead they are just archlinux-arm and archlinux. The uname -a or cat /proc/version doesn't have the 'Rapsberry Pi' string on the raspberry pi. Is there a way to get the hardware name like neofetch without using any dependency which should also run on most hardware?
Get the hardware model name in Linux
BIOS writers provide tools to update the DMI information, without needing to modify BIOS images, to companies which manufacture devices using those BIOSs. For example, AMI has a AMIDEDOS tool under DOS, or AMIDEWIN or DMIEdit for Windows (there used to be an AMIDELNX for Linux but that is no longer provided). These tools are usually provided under NDA, but some manufacturers provide them in their BIOS update images. This article provides a good description of the possibilities, and a list of tools (relevant when it was written, in 2012). Basically, what you’re asking for is possible, but using tools you probably don’t officially have access to, unless your system’s manufacturer provides them (e.g. Lenovo, but then you wouldn’t have “To Be Filled By O.E.M.” entries in the first place).
Running:

cat /sys/devices/virtual/dmi/id/{sys_vendor,chassis_vendor,product_name}

produces the output:

To Be Filled By O.E.M.
To Be Filled By O.E.M.
To Be Filled By O.E.M.

How would I change these values? I know it can be done through the registry in Windows, so hopefully there's a similarly simple way in Linux.

Edit: I've tried changing the files with sudoedit, but they're locked for editing (like most of the /sys/ directory, from what I understand). There are a couple ways to accomplish this in Windows, but I haven't found any information online about how to edit these values in Linux.
How to change OEM vendor info?
Unix systems don't really have a "system language". Unix is a multiuser system and each user is free to pick their preferred language. The closest thing to a system language is the default language that users get if they don't configure their account. The location of that setting varies from distribution to distribution; it's picked up at some point during the login process. In most cases, what is relevant is not the "system language" anyway, but the language that the user wants the application to use. Language preferences are expressed through locale settings. The setting that determines the language that applications should use in their user interface is LC_MESSAGES. There are also settings for the date, currency, etc. These settings are conveyed through environment variables which are usually set when the user logs in from some system- and user-dependent file. Finding a locale setting is a bit more complicated than reading the LC_MESSAGES variable, as several variables come into play (see What should I set my locale to and what are the implications of doing so?). There's a standard library function for that. In Python, use locale.getlocale. You first need to call setlocale to turn on locale awareness.

import locale
locale.setlocale(locale.LC_ALL, "")
message_language = locale.getlocale(locale.LC_MESSAGES)[0]
Is there any way, in Python 3, to find out the language used by the system? Even a tricky one though, like: reading from a file in a sneaky directory, and finding the string 'ENG' or 'FRE' within the file's content…
How to find system language within Python?
Well, you can tell if your CPU has FPU capabilities from the data stored in /proc/cpuinfo, filtering it with grep fpu:

$ grep "fpu" /proc/cpuinfo
fpu		: yes
fpu_exception	: yes
flags		: fpu vme de pse ...

And for info, what type of CPU are you playing with? :)

EDIT: for an ARM processor, look for the vector floating point unit (vfp); some info here. Ex:

# cat /proc/cpuinfo
Processor	: ARMv6-compatible processor rev 7 (v6l)
BogoMIPS	: 697.95
Features	: ... vfp ...
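For use in a BASH script, a rough sketch covering both cases (it assumes the kernel reports the x86 fpu field or the ARM vfp feature flag as shown above):

if grep -qw fpu /proc/cpuinfo || grep -qw vfp /proc/cpuinfo; then
    echo "floating point in hardware"
else
    echo "probably software floating point"
fi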
How can I tell if floating point arithmetic is performed in hardware or software? I could find the processor's name and Google it, but is there a way to do it in a BASH script? For instance, is there something saved in a system file that I could read?

UPDATE: output of /proc/cpuinfo on Intel:

processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 69
model name	: Intel(R) Core(TM) i3-4010U CPU @ 1.70GHz
stepping	: 1
microcode	: 0x17
cpu MHz		: 782.000
cache size	: 3072 KB
physical id	: 0
siblings	: 4
core id		: 0
cpu cores	: 2
apicid		: 0
initial apicid	: 0
fpu		: yes   <-- !!!
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid
bogomips	: 3392.25
clflush size	: 64
cache_alignment	: 64
address sizes	: 39 bits physical, 48 bits virtual
power management:

output of /proc/cpuinfo on RPi (using Raspbian v7):

processor	: 0
model name	: ARMv6-compatible processor rev 7 (v6l)
BogoMIPS	: 2.00
Features	: swp half thumb fastmult vfp edsp java tls
CPU implementer	: 0x41
CPU architecture: 7
CPU variant	: 0x0
CPU part	: 0xb76
CPU revision	: 7

Hardware	: BCM2708
Revision	: 000e
Serial		: 000000007b455c14
How can I tell if floating point arithmetic is performed in hardware or software?
When you examine the contents of /proc/cpuinfo, the flags for the CPU will include "pae".
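As a one-liner:

grep -qw pae /proc/cpuinfo && echo "PAE flag present" || echo "no PAE flag"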
Possible Duplicate: What do the flags in /proc/cpuinfo mean? I tried installing CentOS 6.3 on my computer only to see a complaint that my computer doesn't have PAE. I am not sure if my computer has it and it's just disabled or if it doesn't have PAE at all. I am using Mageia 2 right now and I want to check if I can turn it on (in case it's off) or if my computer doesn't have it. My current computer is an IBM ThinkPad X32. I know it's kind of old but this (CentOS 6.3) is the first ever Linux distro to give me that error of not having PAE.
How do I find out if my computer has PAE using Linux? [duplicate]
Interesting problem, which I would think is going to bite you in the end. You can do a script that will do the following:

rm /tmp/hw_snapshot
touch /tmp/hw_snapshot
cat /proc/cpuinfo | grep <whatever> >> /tmp/hw_snapshot
dmidecode | grep <whatever> >> /tmp/hw_snapshot
lspci | grep <whatever> >> /tmp/hw_snapshot
md5sum /tmp/hw_snapshot > /tmp/key

Now you have a unique identifier for your hardware configuration. The issue is that even within the same model line the hardware can vary widely, including CPUs, network cards, number of network cards, etc. So basically if someone has an HP DL380 model and then gets another one with an extra network card added, your unique key is no longer valid. Plus I still don't understand the purpose of hardware-based restriction on communication. If you want to control what talks to your machine, put the stuff that can on a private network with it (if you can).
I'm wondering what is the best way to manually generate a hardware key based on certain components of the machine. Here's the thing: I want only certain types of machines to be able to communicate with my server. In order to do that, I'd like to make sure that their hardware is part of the "allowed hardware" that my server recognizes (or accepts). I would like to generate a key based on said hardware so I can check it on the server side and make sure that it's among the allowed ones. Ideally, it would "check":

Processor
Motherboard
Ethernet network interface

Memory and hard drive are a bit tricky, because those may change pretty often. I am using Ubuntu 10.10 and I have seen the lshw command, which provides pretty much information about... well... about everything. Also, cat /proc/cpuinfo, dmidecode... All of them show a lot of information which I can always parse with regular expressions and do... things, but I was wondering if there's a cleaner, more direct way. Any hint or suggestion in this matter will be appreciated. Thank you.
Generate a key (number?) based on part of the machine's hardware components
Let's go from top to bottom. This guide is not distro specific (most of these commands will be available on most distros either out of the box, or through the package repositories). First you probably want to get the rough lay of your hardware specs. For this you have a couple of options.

All-in-one options:

inxi --admin --verbosity=7 --filter --width  #<- Lists your complete system specs
lshw  #<- Lists your complete system specs, sudo recommended
hwinfo > hwinfo.txt  #<- Writes your hardware info in extreme detail to hwinfo.txt

All-in-one GUI options:

hardinfo  #<- A pretty good GUI system information utility, also offers benchmarks
i-nex  #<- Similar to CPU-Z on Windows
lshw-gtk  #<- GUI for lshw

Targeted options:

cat /sys/devices/virtual/dmi/id/board_{vendor,name,version}  #<- Lists your motherboard details
lspci -Q  #<- Lists all your internal hardware and checks online for missing/updated names
lspci -v | grep "VGA controller"  #<- Displays your currently active graphics card. Very useful on laptops with hybrid/switchable graphics. (Typically this is the integrated card unless you have configured it otherwise)
lspci -v | grep "3D controller"  #<- Displays your Nvidia dedicated GPU. For laptops with hybrid/switchable graphics
lspci -v | grep "Display controller"  #<- Displays your ATI/AMD dedicated GPU. For laptops with hybrid/switchable graphics
lsusb  #<- Lists all your USB hardware
lscpu  #<- Lists detailed processor info (alternative: cat /proc/cpuinfo)
fdisk -l  #<- Lists your hard drives and partitions (may require sudo access)
free -h --si  #<- Lists your memory information; total is your total, available is your total free memory
cat /proc/meminfo  #<- Much more detailed info on your memory
ip link  #<- Lists your network devices and their status

Let's also do a quick check for kernel errors:

cat /proc/kmsg | grep -i Error  #<- Lists errors detected by the kernel (often hardware related ones), probably requires sudo access

Now that we know what we're working with, we're going to check thermals. Most distros do not have lm_sensors installed by default; lm_sensors is usually the package name, but sometimes it can be sensors. It is invoked like this:

sensors-detect  #<- Detect sensors on your PC; you only need to do this once. Requires sudo
sensors  #<- Display current values for known sensors on your PC

After this, if you want a GUI utility to monitor these sensors you can use psensor. If you want to see temps or other information for an Nvidia GPU and you have the proprietary drivers for it installed, run nvidia-smi.

Next up, we're gonna go for hard drive diagnostics. fsck is run on bootup for most Linux distributions (it's pretty much standard; it is run on Linux Mint) to check for and fix hard drive errors and bad sectors, so you pretty much don't need to do this. fsck cannot be run on a mounted drive, so if you want to further diagnose your hard drive you are going to have to boot out of your system and use a 3rd-party utility such as System Rescue CD (or another live cd/usb) or Ultimate Boot CD. Additionally smartmontools's smartctl can be used to run SMART tests; like fsck, the more in-depth tests cannot run on a currently mounted drive (but many drives do support running these tests automatically when they are offline). Anyhow, there are a few more things that can be done from your running system. hdparm can be used for analyzing and tuning a hard drive.

dd if=/dev/zero of=$HOME/testfile bs=1G count=1 conv=fdatasync oflag=direct  #<- Measures throughput of your hard drive (whichever one has your home folder on it)
hdparm -Tt /dev/sdx  #<- Gives read speed information on hard drive sdx. I won't cover this in more detail, but you can look for guides on it
smartctl -Hic /dev/sdx  #<- Gives basic info on hard drive sdx and runs an overall health assessment. (If the assessment fails, either the drive has failed or it is in the process of failing.) It then lists the drive's SMART capabilities
smartctl -t short /dev/sdx  #<- Runs a short SMART test (cannot be run on a mounted drive; some drives support offline data collection and can automatically run the test on shutdown)

For more thorough hdd benchmarking with fio, using a similar format to CrystalDiskMark which Windows users may be familiar with, see this answer or use kdiskmark.

On memory testing: for a full memory test you will most likely need to boot into a memory testing utility (like memtest86+, often embedded into live CDs; you may also be able to install it and update grub to display it), but from inside a running Linux environment you can use memtester:

memtester 1024 5  #<- Sets aside 1GB (1024MB) of free memory and runs tests on it 5 times, then displays results

The best way to properly diagnose a LAN device's performance is simply testing how fast (and how much) it can send or receive data to/from another device. To do that you can use iperf or netcat (nc) in conjunction with dd (which we used before to test the hard drive). Do note that you actually can test your network card's throughput from itself to itself by hosting the server on your computer, and then connecting to yourself using the address localhost or 127.0.0.1.

iperf -s  #<- Starts an iperf server (run this on the device you want to connect to; yes, as I said, you need another computer for this)
iperf -c <address of server computer>  #<- Connects and displays transfer rate information
nc -vvlnp 12345 >/dev/null  #<- Starts a netcat server (requires an open firewall port for port 12345 if you have a strict firewall)
dd if=/dev/zero bs=1M count=1K | nc -vvn <server IP address> 12345

For battery testing there are a few choices: gnome-battery-bench (graphical), acpi (terminal) or upower (terminal). These are example commands:

acpi -ib  #<- Lists battery status, basic specs and gives an idea of its health (shows its charge level the last time it was "full")
upower -i /org/freedesktop/UPower/devices/battery_BAT0  #<- Should provide detailed battery information

For sound testing... well, I have no idea why you would want to do that (if sound works it works, if it doesn't work it doesn't work), but let's do this anyways with ALSA (just so it'll work on all distros). You need alsa-utils for this.

speaker-test -c 6 -t wav  #<- Runs test sound on 6 speaker channels (for a 5.1 speaker setup; you can use -c 2 for stereo speakers), just to see what happens
speaker-test -r 96000 -f S32_LE  #<- Tests stereo wav sound at 32-bit on 96kHz frequencies. You can use this to test the maximum supported format and frequency (for example, while you specify 32-bit format, it may set to 16-bit format; if it does this then it will say so, so read the output)
aplay -l  #<- Lists sound output devices
speaker-test -D hw:0,0 -c 4 -r 48000 -t wav  #<- Tests on specific hardware device 0,0 at 4 channels with a 48kHz rate
arecord -l  #<- Lists recording devices
arecord -f dat -d 20 -D hw:0,0 test.wav  #<- Tests a specific recording device by outputting to a file in basic DAT quality
aplay -f dat test.wav  #<- Plays the recorded test file

Any further testing (CPU and GPU performance) will require either dedicated benchmarking/stress-testing programs, or booting into a specialized testing environment. Here is a list of the benchmarking utilities I would suggest besides the ones already mentioned. As always with graphical benchmarks, you want to make sure VSync is disabled.

glxgears (part of mesa; very basic test of OpenGL performance)
vkcube (part of vulkan-tools; very basic test of Vulkan performance)
Unigine Heaven or Unigine Valley (graphical benchmarking programs, for testing 3D gaming performance under heavy loads)
sysbench (command-line benchmarking tool for CPU, memory and hdd among others; guide)
stress (a command-line CPU and HDD stress-testing utility)

And last but not least, don't forget that to be perfectly thorough in your testing, you will want to launch a hardware-testing boot CD like Ultimate Boot CD, since there are so many things that cannot be done (at least not effectively) from a running operating system.
Are there any commands that I can use on a secondhand laptop or PC running Linux that will tell me if there are any problems with the system? If so, what are they? For example, battery life/condition, hard drive space, bad sectors, bad RAM, bus speed, video/audio hardware and driver specs, LAN card specs, etc., etc.
How can I test a used computer for problems?
There are several ways. last, who and ps are all relevant here. last is the most thorough for tracking current and past logins. From the man page for last (emphasis added):Last will list the sessions of specified users, ttys, and hosts, in reverse time order. Each line of output contains the user name, the tty from which the session was conducted, any hostname, the start and stop times for the session, and the duration of the session. If the session is still continuing or was cut short by a crash or shutdown, last will so indicate. ... If no users, hostnames or terminals are specified, last prints a record of all logins and logouts.So rather than only reporting on the sessions currently in progress, last reports on all logins and logouts.
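For a quick view of who is logged in right now, who also has a summary mode that prints all current login names plus a count line of the form # users=N:

who -q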
One way to find the names is to look at /home/ and see whichever entries are on the system. To look at current users one can use users to see how many users there are. If a single user has spawned many sessions you will get something like:

root@debian:~# users
shirish shirish shirish shirish shirish shirish shirish

Is there any other way to know about users on the system, other than the two shared above?
How to find out names and numbers of users on your system?
There are plenty of monitoring CLI commands with Solaris. They are easy to find, as almost all share the stat suffix:

vmstat
mpstat
iostat
netstat
lockstat
nfsstat
prstat
busstat
cpustat
kstat
sar
swap

kstat (or the equivalent netstat -k) provides all of the kernel statistics in raw form. About the IOWAIT statistic: because it was often poorly understood, wrongly interpreted, and meaningless with fast and/or multi-core/multi-threaded CPUs, it is no longer reported by vmstat on modern (10+) Solaris releases.
What command-line utilities come standard with SunOS to do system monitoring? I've been able to find prstat, but I would like something that will tell me memory usage and IOWAIT as well. It looks like this particular machine uses SunOS 5.8.
system monitoring tools
This can be achieved by adding echo "" in the middle of the commands where the space is required. Here are some examples.

Adding a new line in the middle. Example:

df | fgrep '/dev/'; echo ""; free -h

Output:

tmpfs           16334344    55772  16278572   1% /dev/shm

              total        used        free      shared  buff/cache   available
Mem:            31G        4.0G         21G        346M        6.0G         26G
Swap:           15G        2.3M         15G

Adding a heading describing each command (recommended). Example:

echo "=================="; echo "This is output of df"; echo "=================="; df | grep '/dev/shm'; echo ""; echo "=================="; echo "This is output of Free"; echo "=================="; free -h

Output:

==================
This is output of df
==================
tmpfs           16334344    55772  16278572   1% /dev/shm

==================
This is output of Free
==================
              total        used        free      shared  buff/cache   available
Mem:            31G        4.0G         21G        359M        6.0G         26G
Swap:           15G        2.3M         15G
I am currently trying to use Plink to run a few commands on a Linux server to monitor a couple of things on it, such as disk space, memory and CPU usage. I have the commands that I want to use but I want to format the output that I get into something a little friendlier to read. This is the command I am using inside my batch file:

FOR /F "" %%G IN (C:\Users\username\Desktop\ip_list.txt) DO "C:\Program Files\PuTTY\plink.exe" -ssh -batch username@%%G -pw password "hostname; df | fgrep '/dev/'; free -h; top -bn2 | grep 'Cpu(s)';"

and here is the output that I get. Basically, I would like to just add some lines in between the individual command outputs to make this a tiny bit easier to read. Is this possible without writing the output to a text file? Thank you
Format the output of commands
lshw uses a variety of sources to provide a comprehensive list of the hardware in a system. You should run it as root to get as much detail as possible.
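For a compact one-line-per-device table (which includes the motherboard), use its short mode:

sudo lshw -short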
I'm wondering what command I can use to see all the hardware in any PC, or at least some of it (for example, the model of the motherboard). I'm pretty new to Linux; I'm using a lightweight version to boot computers that I have to back up and then format. Some of these PCs are old, or have very specific hardware, and sometimes it's hard to find drivers.
What is the command to list specific hardware components? [duplicate]
On Debian-derived systems, for hardware information use lshw, hwinfo, udevadm, hdparm, inxi (this one needs installing first) etc. To accurately identify machines you may try to use the system serial number, vendor, model, the MAC address of the network controllers, the serial number of the hard disk etc. (You may only try, because some machines are virtual...) There is a tutorial with screen shots at http://www.binarytides.com/linux-commands-hardware-info/ and generally Google is not short of suggestions. You may want to record hostname, uname -a, lsb_release -cdr and /etc/machine-id. Also df may come in handy. For installed packages use dpkg-query, for example

dpkg-query --list | grep '^ii' | awk '{print $2 " " $3 " ("$4")"}'

produces a list beginning with

a11y-profile-manager-indicator 0.1.10-0ubuntu3 (amd64)
account-plugin-facebook 0.12+16.04.20160126-0ubuntu1 (all)
account-plugin-flickr 0.12+16.04.20160126-0ubuntu1 (all)
account-plugin-google 0.12+16.04.20160126-0ubuntu1 (all)
accountsservice 0.6.40-2ubuntu11.3 (amd64)
acl 2.2.52-3 (amd64)
... etc. etc. ...

Other Linux variants have their own mechanisms to collect system information; for example, Red Hat Enterprise Linux has sosreport. Most if not all proprietary UNIX systems have dedicated tools to collect system information, including hardware configuration and installed software. For example, HP-UX has a nice /opt/ignite/bin/print_manifest command.
I would like to fingerprint some Unix (mostly Debian) systems. By fingerprinting I mean running a script that collects hardware identifiers, system version, etc. in order to be able to:

accurately identify machines
discriminate hardware and software modifications over machines

I know that I can use a couple of commands to track down meta information, such as udev, uname, etc. My questions are:

Are there packages that perform such actions?
If not, what must I collect in order to achieve this accurately?
How to fingerprint a Unix System [closed]
Because there are many differences from vendor to vendor (and within the same vendor), the approach I chose uses two main instruments: lspci and dmesg, grepping for RAID. First I use the lspci command and, if it doesn't return the sought output, I run dmesg with the same grep. This approach has so far worked on more than 20 machines with Hewlett Packard and MegaRAID controllers.
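A sketch of that two-step probe (fall back to the kernel log only when lspci finds nothing):

lspci | grep -i raid || dmesg | grep -i raid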
Background: There's a task to automate info grabbing from servers. However, I am unable to locate any hardware or software RAID controllers.

Issue: Due to the various ways each vendor describes its controller, I am struggling to clearly define which of the block devices shown are RAID. I assume the best way to resolve this issue would be to use built-in Linux utilities. If my assumptions are wrong, please inform me.
Is there a way to clearly define hardware controller?
Websites get your browser type and OS information from the user-agent string presented by your browser. In your case, start Firefox. In the URL box, type about:config and search for useragent. You will see a few entries. One of those is responsible for presenting your OS as a Windows OS.
When I visit a page like https://www.whatismyip.com it displays the following: Browser Info Mozilla/5.0 (Windows NT 6.1; WOW64; rv:29.0) Gecko/20100101 Firefox/29.0 I am using Linux Mint 17.1 Cinnamon 64-bit with Firefox 46.0, Linux kernel 3.13.0-37-generic. What could be the next steps to investigate this behavior?
Wrong operating system gets displayed online
Use lspci as root with different verbosities (-v to -vvv); the most verbose setting will show bus speeds and IRQ (I don't know about the AGP rate - no machines with AGP graphics here). E.g. lspci: 06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168 PCI Express Gigabit Ethernet controller (rev 06)lspci -v: 06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168 PCI Express Gigabit Ethernet controller (rev 06) Subsystem: Dell Device 04b6 Flags: bus master, fast devsel, latency 0, IRQ 53 I/O ports at 2000 [size=256] Memory at f1804000 (64-bit, prefetchable) [size=4K] Memory at f1800000 (64-bit, prefetchable) [size=16K] Capabilities: [40] Power Management version 3 Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [70] Express Endpoint, MSI 01 Capabilities: [b0] MSI-X: Enable- Count=4 Masked- Capabilities: [d0] Vital Product Data Capabilities: [100] Advanced Error Reporting Capabilities: [140] Virtual Channel Capabilities: [160] Device Serial Number 00-00-00-00-00-00-00-00 Kernel driver in use: r8169lspci -vvv: 06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168 PCI Express Gigabit Ethernet controller (rev 06) Subsystem: Dell Device 04b6 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 64 bytes Interrupt: pin A routed to IRQ 53 Region 0: I/O ports at 2000 [size=256] Region 2: Memory at f1804000 (64-bit, prefetchable) [size=4K] Region 4: Memory at f1800000 (64-bit, prefetchable) [size=16K] Capabilities: [40] Power Management version 3 Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=375mA PME(D0+,D1+,D2+,D3hot+,D3cold+) Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME- Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+ Address: 00000000fee0100c Data: 41b2 Capabilities: [70] Express (v2) Endpoint, MSI 01 DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported- RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop- MaxPayload 128 bytes, MaxReadReq 4096 bytes DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr+ TransPend- LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s L1, Latency L0 <512ns, L1 <64us ClockPM+ Surprise- LLActRep- BwNot- LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+ ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt- LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt- DevCap2: Completion Timeout: Not Supported, TimeoutDis+, LTR-, OBFF Not Supported DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis- Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS- Compliance De-emphasis: -6dB LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1- EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest- Capabilities: [b0] MSI-X: Enable- Count=4 Masked- Vector table: BAR=4 offset=00000000 PBA: BAR=4 offset=00000800 Capabilities: [d0] Vital Product Data Unknown small resource type 00, will not decode more. 
Capabilities: [100 v1] Advanced Error Reporting UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol- UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol- UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol- CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+ CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+ AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn- Capabilities: [140 v1] Virtual Channel Caps: LPEVC=0 RefClk=100ns PATEntryBits=1 Arb: Fixed- WRR32- WRR64- WRR128- Ctrl: ArbSelect=Fixed Status: InProgress- VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans- Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256- Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=ff Status: NegoPending- InProgress- Capabilities: [160 v1] Device Serial Number 00-00-00-00-00-00-00-00 Kernel driver in use: r8169 For the MAC address you can use ifconfig as you're doing, or ip link | grep link.
I want to find the following information for devices in Linux: Bus speed (e.g. 66 MHz) IRQ settings Vendor identification AGP rate (e.g. 1x, 2x, 4x) MAC address I am only able to find the last one by /sbin/ifconfig | grep HWaddr How can I find this information in Linux?
How to find information about devices in Linux [duplicate]
You most certainly can have multiple processes listening on the same port if they are bound to different IPs. Here's a demonstration using nc: % nc -l 127.0.0.1 1234 & [1] 24985 % nc -l 192.168.1.178 1234 & [2] 24988 % netstat -an | grep 1234 tcp4 0 0 192.168.1.178.1234 *.* LISTEN tcp4 0 0 127.0.0.1.1234 *.* LISTEN As you see, I started nc twice in listen mode, one bound to 127.0.0.1, the other to 192.168.1.178 (which happen to be two of the IP addresses on that computer), both using port 1234. netstat then shows two listening sockets. I made the test on macOS, but on Linux you could add -p to netstat to show the two distinct processes. On macOS you can use lsof -nP to show the same thing. Note that since you are opening a "hole" in a security layer, you probably don't want to bind to an externally reachable (public) IP address, otherwise anyone can connect to that IP+port and reach the remote system which apparently needed to be protected. You should use only loopback IP addresses (127.0.0.1, 127.0.0.2...) or private IP addresses on a private network reachable only by trusted systems. For completeness, let's specify that an active TCP connection is defined by a 4-tuple (local IP, local port, remote IP, remote port), but a listening socket is indeed defined only by local IP and port. Connections established to that socket will get the full 4-tuple.
I have a number of servers to SSH into, and some of them, being behind different NATs, may require an SSH tunnel. Right now I'm using a single VPS for that purpose. When that number reaches 65535 - 1023 = 64512, and the VPS runs out of ports to attach tunnels to, do I spin up another VPS, or do I simply attach an additional IP address to the existing VPS? In other words, is a 65535 limit set per a Linux machine, or per a network interface? This answer seems to say it's per an IP address in general, and per IPv4 address specifically. So does a 5-tuple mean that introducing a new IP address will warrant a new tuple, therefore resetting the limit? And if IPv4 is the case, is it different for IPv6?
Can I have a single server listen on more than 65535 ports by attaching an IPv4 address
Although TCP and UDP are both part of TCP/IP, belong to the same TCP/IP or OSI layer, and both sit a layer above IP, they are different protocols. http://www.cyberciti.biz/faq/key-differences-between-tcp-and-udp-protocols/ puts it this way: Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are two of the core protocols of the Internet Protocol suite. Both TCP and UDP work at the transport layer of the TCP/IP model and both have a very different usage. TCP is a connection-oriented protocol. UDP is a connectionless protocol. (source: ml-ip.com) Some services do indeed answer on TCP and UDP ports at the same time, as is the case of DNS and NTP services; however, that is certainly not the case with web servers, which normally only answer by default on port 80/TCP (and do not work/listen at all on UDP). You can list your UDP listening ports on a Linux system with:
$ sudo netstat -anlpu
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
udp 0 0 0.0.0.0:1900 0.0.0.0:* 15760/minidlnad
udp 0 0 0.0.0.0:5000 0.0.0.0:* 32138/asterisk
udp 0 0 0.0.0.0:4500 0.0.0.0:* 1592/charon
udp 0 0 0.0.0.0:4520 0.0.0.0:* 32138/asterisk
udp 0 0 0.0.0.0:5060 0.0.0.0:* 32138/asterisk
udp 0 0 0.0.0.0:4569 0.0.0.0:* 32138/asterisk
udp 0 0 0.0.0.0:500 0.0.0.0:* 1592/charon
udp 0 0 192.168.201.1:53 0.0.0.0:* 30868/named
udp 0 0 127.0.0.1:53 0.0.0.0:* 30868/named
udp 0 0 0.0.0.0:67 0.0.0.0:* 2055/dhcpd
udp 0 0 0.0.0.0:14403 0.0.0.0:* 1041/dhclient
udp 17920 0 0.0.0.0:68 0.0.0.0:* 1592/charon
udp 0 0 0.0.0.0:68 0.0.0.0:* 1041/dhclient
udp 0 0 0.0.0.0:56417 0.0.0.0:* 2055/dhcpd
udp 0 0 192.168.201.1:123 0.0.0.0:* 1859/ntpd
udp 0 0 127.0.0.1:123 0.0.0.0:* 1859/ntpd
udp 0 0 192.168.201.255:137 0.0.0.0:* 1777/nmbd
udp 0 0 192.168.201.1:137 0.0.0.0:* 1777/nmbd
udp 0 0 0.0.0.0:137 0.0.0.0:* 1777/nmbd
udp 0 0 192.168.201.255:138 0.0.0.0:* 1777/nmbd
udp 0 0 192.168.201.1:138 0.0.0.0:* 1777/nmbd
udp 0 0 0.0.0.0:138 0.0.0.0:* 1777/nmbd
udp 0 0 192.168.201.1:17566 0.0.0.0:* 15760/minidlnad
And your listening TCP ports with the command:
$ sudo netstat -anlpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:5060 0.0.0.0:* LISTEN 32138/asterisk
tcp 0 0 192.168.201.1:8200 0.0.0.0:* LISTEN 15760/minidlnad
tcp 0 0 192.168.201.1:139 0.0.0.0:* LISTEN 2092/smbd
tcp 0 0 0.0.0.0:2000 0.0.0.0:* LISTEN 32138/asterisk
tcp 0 0 192.168.201.1:80 0.0.0.0:* LISTEN 7781/nginx
tcp 0 0 192.168.201.1:53 0.0.0.0:* LISTEN 30868/named
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN 30868/named
tcp 0 0 192.168.201.1:22 0.0.0.0:* LISTEN 2023/sshd
tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN 1919/perl
tcp 0 0 127.0.0.1:953 0.0.0.0:* LISTEN 30868/named
tcp 0 0 192.168.201.1:445 0.0.0.0:* LISTEN 2092/smbd
tcp 0 224 192.168.201.1:22 192.168.201.12:56820 ESTABLISHED 16523/sshd: rui [pr
Now normally nmap sends a SYN to the port being scanned, and per the TCP protocol, if a daemon/service is bound to the port, it will answer with a SYN+ACK, and nmap will show it as open. TCP/IP connection negotiation, the 3-way handshake: To establish a connection, TCP uses a three-way handshake. Before a client attempts to connect with a server, the server must first bind to and listen at a port to open it up for connections: this is called a passive open. Once the passive open is established, a client may initiate an active open.
To establish a connection, the three-way (or 3-step) handshake occurs: SYN: The active open is performed by the client sending a SYN to the server. The client sets the segment's sequence number to a random value A. SYN-ACK: In response, the server replies with a SYN-ACK. However, if a service is not running there, TCP/IP defines that the kernel will send back a "Port unreachable" ICMP message for UDP services, and TCP RST messages for TCP services. ICMP Destination unreachable: Destination unreachable is generated by the host or its inbound gateway[3] to inform the client that the destination is unreachable for some reason. A Destination Unreachable message may be generated as a result of a TCP, UDP or another ICMP transmission. Unreachable TCP ports notably respond with TCP RST rather than a Destination Unreachable type 3 as might be expected. So indeed, your UDP scanning of port 80/UDP simply receives an ICMP unreachable message back because there is no service listening on that combination of protocol/port. As for security considerations, those ICMP destination unreachable messages can certainly be blocked, if you define firewall/iptables rules that DROP all messages by default and only allow in the ports that your machine serves to the outside. That way, nmap scans of all the open ports, especially in a network, will be slower, and the servers will use fewer resources. As an additional advantage, if a daemon/service opens additional ports, or a new service is added by mistake, it won't be serving requests until it is expressly allowed by new firewall rules. Please do note that if, instead of using DROP in iptables, you use REJECT rules, the kernel won't ignore the scanning / TCP/IP negotiation attempts, and will answer with ICMP messages of Destination unreachable, code 13: "Communication administratively prohibited (administrative filtering prevents packet from being forwarded)". See also: Block all ports except SSH/HTTP in ipchains and iptables
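As a hedged sketch of such a default-DROP ruleset (the allowed ports are examples only):
iptables -P INPUT DROP
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
With rules like these in place, probes to any other port are silently ignored instead of being answered with TCP RST or ICMP unreachable messages.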
I am testing my Debian server with some Nmap port scanning. My Debian is a virtual machine running on a bridged connection. Classic port scanning using TCP SYN requests works fine and detects port 80 as open (which is correct): nmap -p 80 192.168.1.166 Starting Nmap 6.47 ( http://nmap.org ) at 2016-02-10 21:36 CET Nmap scan report for 192.168.1.166 Host is up (0.00014s latency). PORT STATE SERVICE 80/tcp open http MAC Address: xx:xx:xx:xx:xx:xx (Cadmus Computer Systems) Nmap done: 1 IP address (1 host up) scanned in 0.51 seconds But when running a UDP port scan, it fails and my Debian server answers with an ICMP Port unreachable error: nmap -sU -p 80 192.168.1.166 Starting Nmap 6.47 ( http://nmap.org ) at 2016-02-10 21:39 CET Nmap scan report for 192.168.1.166 Host is up (0.00030s latency). PORT STATE SERVICE 80/udp closed http MAC Address: xx:xx:xx:xx:xx:xx (Cadmus Computer Systems) Nmap done: 1 IP address (1 host up) scanned in 0.52 seconds Wireshark record: How is that possible? My port 80 is open; how come Debian answers with an ICMP Port unreachable error? Is that a security issue?
ICMP : Port unreachable error even if port is open
/etc/hosts can be used if you want to map a specific DNS name to a different IP address than it really has, but if the IP address is already specified by the application, that and any other techniques based on manipulating hostname resolution will be useless: the application already has a perfectly good IP address to connect to, so it does not need any hostname resolution services. If you want to redirect traffic that is going out to a specified IP address back to your local system, you'll need iptables for that. sudo iptables -t nat -I OUTPUT --dst 5x.2x.2xx.1xx -p tcp --dport 3306 -j REDIRECT --to-ports 3306This will redirect any outgoing connections from your system to the default MySQL port 3306 of 5x.2x.2xx.1xx back to port 3306 of your own system. Replace the 5x.2x.2xx.1xx and 3306 with the real IP address and port numbers, obviously. The above command will be effective immediately, but will not persist over a reboot unless you do something else to make the settings persistent, but perhaps you don't even need that?
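If you do want persistence, one common Debian-style approach (assuming the iptables-persistent package is installed) is simply:
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'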
So I have an IP address 5x.2x.2xx.1xx I want to map to localhost. In my hosts file I have: cat /etc/hosts 127.0.1.1 test test 127.0.0.1 localhost # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters ff02::3 ip6-allhosts 5x.2x.2xx.1xx 127.0.0.1 What I want to accomplish is that when I connect on this machine to 5x.2x.2xx.1xx, I go to localhost. What I really want is to connect to MySQL using mysql -uroot 5x.2x.2xx.1xx -p and, instead of pointing to that IP address, use the local MySQL server. So far it isn't working, since it still redirects to the server's IP (5x.2x.2xx.1xx). I've also tried sudo service nscd restart with no luck.
How to map an IP address to localhost
Sometimes the commands given in the question will work, but sometimes they won't. Here is the reason. Let's say: my computer's IP on my local network: 192.168.1.10; my home public IP: 203.0.113.10; my friend's public IP: 198.51.100.27. When doing this on my computer: myself$ nc -u -p 7777 198.51.100.27 8888 we have, before NAT translation: srcip srcport destip destport 192.168.1.10 7777 198.51.100.27 8888 but after the home router NAT translation we have: srcip srcport destip destport 203.0.113.10 55183(*) 198.51.100.27 8888 i.e. the source IP is rewritten by the NAT, but so is the source port. So a "hole" will indeed be created in my home firewall (accepting traffic from my friend 198.51.100.27), but for port 55183, and not for port 7777. This explains why it fails when my friend does: friend$ nc -u -p 8888 203.0.113.10 7777 Note (*): In some cases the router might keep srcport=7777 instead of rewriting it to a random port like 55183. In this case, the solution given in the question might work. But this is random behaviour!
I'm trying to connect directly (without 3rd party server) my computer to a friend's computer. We are both behind a ISP router, and would like (as a challenge!) to connect without modifying the router configuration. As suggested here and here, we tried both TCP hole punching: myself$ nc -p 7777 public-ip-friend 8888 friend$ nc -p 8888 public-ip-myself 7777and UDP hole punching: myself$ nc -u -p 7777 public-ip-friend 8888 friend$ nc -u -p 8888 public-ip-myself 7777but none of them worked. How to solve this? Note: VPS (not behind a NAT) <--> my home computer (still behind router) works with the same method.
UDP or TCP hole punching to connect two peers (each one behind a router)
The TCP stack decides how to respond to a connection attempt based on a set of rules (this can happen at the firewall level). You can REJECT the connection packet (SYN), but you can also DROP it. Dropping it makes sense as a defence against port scanning, for example.
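The difference is easy to reproduce with iptables (the port numbers here are arbitrary examples):
iptables -A INPUT -p tcp --dport 8888 -j DROP
iptables -A INPUT -p tcp --dport 8889 -j REJECT --reject-with tcp-reset
A telnet to the DROPped port hangs in Trying... until it times out, while the REJECTed port fails immediately with "connection refused".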
telnet 8.8.8.8 8888 displays Trying... I was expecting that this would be refused directly. Background: when we have an NGINX reverse proxy server, it would be great if it detected immediately when the backend is not there.
Why does telnet on a non-existent port not directly reject, but time out?
The easiest solution here is to add additional addresses to the host, and then bind one container to each address. For example, assuming that your host is 192.168.1.20, you could add additional addresses like this: ip addr add 192.168.1.21/32 dev eth0 ip addr add 192.168.1.22/32 dev eth0 ip addr add 192.168.1.23/32 dev eth0And then, when starting a container, publish port 80 in the container to port 80 on a particular host address, like this: docker run -p 192.168.1.21:80:80 mywebimage(This doesn't change the ip address of the container; it creates a map between the given ip address and port and the container's internal ip address and port.) Note that the address configuration shown here will not be persistent; if you reboot your host you will lose the addresses. Exactly how you configure addresses like this persistently varies from distribution to distribution; refer to your distribution documentation for details.
How do I configure Docker containers to have unique IP addresses that are not the default ones? The Docker containers will run Apache or some web service. These Docker containers will share one host that has one physical NIC. These containers must be identifiable by unique IP addresses with calls over port 80. Workstations will use HTTP to download files. I tried creating dummy IP addresses on the Docker host. But this caused networking to the server to drop. I tried installing Docker overlay, but I don't think it will help me with getting workstations to use HTTP requests to the containers. Docker overlay appears to be geared toward inter-container connectivity. I looked into using interlock, but I'd rather not use Swarm.
How do I configure Docker containers to have unique IP addresses that are not the default ones?
I found this information that talks about tcp_adv_win_scale. The page is titled: TCP performance tuning - how to tune linux. Excerpt: TCP performance is limited by latency and window size (and overhead, which reduces the effective window size) by window_size/RTT (this is how much data can be "in transit" over the link at any given moment). To get the actual transfer speeds possible you have to divide the resulting window by the latency (in seconds). The overhead is: window/2^tcp_adv_win_scale (tcp_adv_win_scale default is 2). So for the Linux default parameters for the receive window (tcp_rmem): 87380 - (87380 / 2^2) = 65536. Given a transatlantic link (150 ms RTT), the maximum performance ends up at: 65536/0.150 = 436906 bytes/s or about 400 kbyte/s, which is really slow today. With the increased default size: (873800 - 873800/2^2)/0.150 = 4369000 bytes/s, or about 4 Mbytes/s, which is reasonable for a modern network. And note that this is the default; if the sender is configured with a larger window size it will happily scale up to 10 times this (8738000*0.75/0.150 = ~40 Mbytes/s), pretty good for a modern network. 2.6.17 and later have reasonably good default values, and actually tune the window size up to the max allowed, if the other side supports it. So since then most of this guide is not needed. For good long-haul throughput the maximum value might need to be increased though. I was able to follow that but didn't quite understand the relationship, if any, between these two variables. I only marginally understood what it was trying to explain. At the core it sounds like this parameter controls how the buffering space is split between TCP and the application. Searching a bit more I found these explanations, which made more sense. The page was titled: Ipsysctl tutorial 1.0.4 - Chapter 3. IPv4 variable reference. Excerpt: 3.3.2. tcp_adv_win_scale This variable is used to tell the kernel how much of the socket buffer space should be used for TCP window size, and how much to save for an application buffer. If tcp_adv_win_scale is negative, the buffer overhead for window scaling is overhead = bytes - bytes/2^(-tcp_adv_win_scale), where bytes is the amount of bytes in the window. If the tcp_adv_win_scale value is positive, the overhead is overhead = bytes/2^tcp_adv_win_scale. The tcp_adv_win_scale variable takes an integer value and is per default set to 2. This in turn means that the application buffer is 1/4th of the total buffer space specified in the tcp_rmem variable. 3.3.3. tcp_app_win This variable tells the kernel how many bytes to reserve for a specific TCP window in the TCP socket's memory buffer where the specific TCP window is transferred in. The amount reserved is max(window/2^tcp_app_win, MSS). As you may understand from that calculation, the larger this value gets, the smaller the buffer space will be for the specific window. The only exception to this calculation is 0, which tells the kernel to reserve no space for this specific connection. The default value for this variable is 31 and should in general be a good value. Do not change this value unless you know what you are doing. Based on these explanations, it sounds like the first parameter, tcp_adv_win_scale, controls how the socket buffer space is carved up between TCP window use and the application buffer.
Whereas the second parameter, tcp_app_win, specifies the number of bytes to reserve for the application buffer mentioned in the tcp_adv_win_scale description.
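To see the values in effect on a given machine:
sysctl net.ipv4.tcp_adv_win_scale net.ipv4.tcp_app_win
With the default tcp_adv_win_scale of 2, this reproduces the 87380 - (87380 / 2^2) calculation quoted above.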
I can't figure out why the tcp_adv_win_scale and tcp_app_win variables coexist in Linux. The information from tcp(7) says: For tcp_adv_win_scale: tcp_adv_win_scale (integer; default: 2; since Linux 2.4) Count buffering overhead as bytes/2^tcp_adv_win_scale, if tcp_adv_win_scale is greater than 0; or bytes-bytes/2^(-tcp_adv_win_scale), if tcp_adv_win_scale is less than or equal to zero. The socket receive buffer space is shared between the application and kernel. TCP maintains part of the buffer as the TCP window, this is the size of the receive window advertised to the other end. The rest of the space is used as the "application" buffer, used to isolate the network from scheduling and application latencies. The tcp_adv_win_scale default value of 2 implies that the space used for the application buffer is one fourth that of the total. And for tcp_app_win: tcp_app_win (integer; default: 31; since Linux 2.4) This variable defines how many bytes of the TCP window are reserved for buffering overhead. A maximum of max(window/2^tcp_app_win, mss) bytes in the window are reserved for the application buffer. A value of 0 implies that no amount is reserved. So I'm not sure I understand what tcp_app_win exactly changes. It seems to me that both variables can be used to tweak the TCP application buffer, therefore there is no need to change them together. Am I correct?
What does net.ipv4.tcp_app_win do?
Like this, for example: For a static IP configuration, copy the /etc/netctl/examples/ethernet-static example profile to /etc/netctl and modify Interface, Address, Gateway and DNS as needed. For example: /etc/netctl/my_static_profile Interface=enp1s0 Connection=ethernet IP=static Address=('10.1.10.2/24') Gateway=('10.1.10.1') DNS=('10.1.10.1') Link to the official Arch Wiki here. This of course only works if you don't use Network Manager or something similar to control your network.
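Then start the profile, and enable it so it comes up at boot (the profile name matches the example above):
netctl start my_static_profile
netctl enable my_static_profile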
I am setting up an Arch/Manjaro-based machine that only occasionally will be connected to network. I.e. most of the time its Ethernet card is disconnected. I run into this curious problem - when I try to use networking commands the interface is down (I sit next to it with my laptop that has a Wi-Fi Internet connection). So I am not sure if it is working properly.How do I set up the network without an Ethernet connection present so that when I finally plug in a cable I can be sure that the address will be 192.168.1.1?I found the answer: use SkipNoCarrier=yes in the netctl profile. It is in Manjaro StaticIP wiki and in Arch netctl page.
How do I set a static IP address for a disconnected interface?
After discussion with a super-smart colleague, we think we have figured out what's wrong, but are yet to prove this. What is likely happening is that the keep-alive interval is only taken into account when no keep-alive ACK is received. So after 2 hours, if no ACK was received to this first keep-alive packet, a second packet would be sent after 75 seconds and repeatedly at 75 second intervals until an ACK was received. It turns out that the reason the interval is taken into account only when it is greater than the keep-alive time is due to the way the Linux keep-alive mechanism works, as described on my other question.
I have a server running on my Linux box, also on which the following commands were run: $ cat /proc/sys/net/ipv4/tcp_keepalive_time 7200 $ cat /proc/sys/net/ipv4/tcp_keepalive_intvl 75 $ cat /proc/sys/net/ipv4/tcp_keepalive_probes 9My server listens on port 58080, and I create a connection on it having set TCP keep-alive in my code. I then set Wireshark tracing this connection; a screenshot of the output is shown below:You can see that the first keep-alive packet is sent after 7200 seconds, or 2 hours as expected (the value of 'tcp_keepalive_time'). However I would also expect each probe to be sent at 75 seconds (the value of 'tcp_keepalive_intvl'); what I see though is that each probe is sent at 2 hours. Can someone please tell me why my configuration option for 'tcp_keepalive_intvl' is not being honoured? UPDATE It seems that specifying a keep-alive interval that is greater than the keep-alive time results in the interval time being adhered to...
TCP keep-alive parameters not being honoured
The problem was with the configuration of the tftpd-hpa server on the host. According to the guide, the file /etc/default/tftpd-hpa must be something like: TFTP_USERNAME="tftp" TFTP_DIRECTORY="/home/bogdan_liulko/tftp" TFTP_ADDRESS="0.0.0.0:69" TFTP_OPTIONS="--secure --create" RUN_DAEMON="yes" My problem was that my file didn't contain the --create parameter in TFTP_OPTIONS. Right after all the steps from the guide were done, everything started working properly.
I have two devices connected by wired Ethernet. I gave both of them addresses from the same subnet. As a result I can see the second device in the ARP table of the first: $ arp -a ? (128.247.77.90) at 10:60:4b:4b:29:50 [ether] on eth0 But ping always fails: $ ping 128.247.77.90 PING 128.247.77.90 (128.247.77.90) 56(84) bytes of data. From 128.247.77.158 icmp_seq=9 Destination Host Unreachable The first device is a laptop; it's the host. The second is a tablet under U-Boot. I have to get a file from the host via TFTP. This protocol fails too because of the ICMP error. Here are all the packets that Wireshark caught. What is the reason for this problem?
ICMP - Destination unreachable (Port unreachable)
Ok I think I've found a solution: sudo tcpdump -XX -i eth0 -w tcpamir-%s.txt -G 10 port 3050This rotates your output file every 10 seconds to a new file called tcpamir-<unixtimestamp>.txt You can also modify the output file, so it overwrites itself every day, if you are worried about the pending file size. For more information read man 3 strftime. I think of something like sudo tcpdump -XX -i eth0 -w tcpamir-%R.txt -G 86400Where %R gives the time in 24-hour notation (12:40 e.g.). Read relevant output files with sudo tcpdump -r tcpamir-<unixtimestamp>.txtSecond solution: Split it into more commands and save it as a script/function: sudo tcpdump -XX -i eth0 port 3050 >> tcptmp.txt sudo tail -n100 tcptmp.txt >> tcpamir.txt sudo rm tcptmp.txt
I am monitoring a certain port, because my application uses that port, it seems that the connection drops at random times, I want to see what are the last packets passing through before the connection drops: I used this line sudo tcpdump -XX -i eth0 port 3050 | tail >> tcpamir.txtbut for it to work I have to start another terminal and issue sudo killall tcpdumpis there a better approach? EDIT1: it is important to capture only the last packets since I don't want the file to balloon, since there is enough traffic to fill the disk space quickly .
"tcpdump" to capture the last packets
A firewall does not exist in a single place in the kernel network stack. In Linux, for instance, the underlying infrastructure to support firewall functionality is provided by the netfilter packet filter framework. The netfilter framework in itself is nothing more than a set of hooks at various points in the kernel protocol stack. Netfilter provides five hooks: NF_IP_PRE_ROUTING Packets which pass initial sanity checks are passed to the NF_IP_PRE_ROUTING hook. This occurs before any routing decisions have been made. NF_IP_LOCAL_IN Packets which are destined to the host itself are passed to the NF_IP_LOCAL_IN hook. NF_IP_FORWARD Packets destined to another interface are passed to the NF_IP_FORWARD hook. NF_IP_LOCAL_OUT Locally created packets are passed to NF_IP_LOCAL_OUT after routing decisions have been made, although the routing can be altered as a result of the hook. NF_IP_POST_ROUTING The NF_IP_POST_ROUTING hook is the final hook packets can be passed to before being transmitted on the wire. A firewall consists of a kernel module, which registers a callback function for each of the hooks provided by the netfilter framework, and userspace tools for configuring the firewall. Each time a packet is passed to a hook, the corresponding callback function is invoked. The callback function is free to manipulate the packet that triggered the callback. The callback function also determines if the packet is processed further; dropped; handled by the callback itself; queued, typically for userspace handling; or if the same hook should be invoked again for the packet. Netfilter is usually associated with the iptables packet filter. As Gnouc already pointed out in your previous question, iptables has a kernel module, ip_tables, which interfaces with netfilter, and a userspace program, iptables, for configuring the in-kernel packet filter. In fact, the iptables packet filter provides several tools, each associated with a different kind of packet processing: The iptables userspace tool and ip_tables kernel module concern themselves with IPv4 packet filtering. The ip6tables userspace tool and ip6_tables kernel module concern themselves with IPv6 packet filtering. The arptables userspace tool and arp_tables kernel module concern themselves with ARP packet filtering. In addition to the iptables packet filters, the ebtables userspace tool and eb_tables kernel module concern themselves with link layer Ethernet frame filtering. Collectively, these tools are sometimes referred to as xtables, because of the similar table-based architecture. This architecture provides a packet selection abstraction based on tables that packets traverse. Each table contains packet filtering rules organized in chains. The five predefined chains, PREROUTING, INPUT, FORWARD, OUTPUT and POSTROUTING, correspond to the five in-kernel hooks provided by netfilter. The table a rule belongs to determines the relative ordering of rules when they are applied at a particular netfilter hook: The raw table filters packets before any of the other tables. The mangle table is used for altering packets. The nat table is used for Network Address Translation (e.g. port forwarding). The filter table is used for packet filtering; it should never alter packets.
The security table is used for Mandatory Access Control (MAC) networking rules implemented by Linux Security Modules (LSMs), such as SELinux.The following diagram by Jan Engelhardt shows how the tables and chains correspond to the different layers of the OSI-model:Earlier this year, a new packet filter framework called nftables was merged in the mainline Linux kernel version 3.13. The nftables framework is intended to replace the existing xtables tools. It is also based on the netfilter infrastructure. Other kernel-based firewalls in Unix-like operating systems include IPFilter (multi-platform), PF (OpenBSD, ported to various other BSD variants and Mac OS X), NPF (NetBSD), ipfirewall (FreeBSD, ported to various operating systems).
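Returning to the xtables tools, here is a small illustration of how the tables map onto the hooks (port numbers are arbitrary examples): the first rule below attaches to the NF_IP_PRE_ROUTING hook, before any routing decision; the second attaches to NF_IP_LOCAL_IN, after the packet has been classified as destined for the local host.
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j REDIRECT --to-ports 80
iptables -t filter -A INPUT -p tcp --dport 80 -j ACCEPT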
This question is a follow-up to my previous question. Logic tells me that an in-kernel firewall sits between the network access layer and the Internet layer, because it needs to have access to the IP packet header to read the source and destination IP addresses in order to do filtering, before determining if the packet is destined for the host, or if the packet should be forwarded to the next hop if it is destined elsewhere. Somehow, it also seems logical to say that the firewall is part of the Internet layer, because that is where the routing table is, and a firewall is in some respects similar to routing table rules.
Does an in-kernel firewall sit between the network access layer and Internet layer?
If the machine is compromised, everything you typed in when logging in (such as your username and password) can be compromised, so "Remember me" doesn't really matter anymore. But even if we stick to cookies only, the hacker can extract the session cookies from the browser's profile and then use them in his own browser. Example: Firefox stores all its data in ~/.mozilla; the hacker can just copy that folder to his system and put it in place of his own profile folder, and when he uses that browser with your profile folder, all websites will think that it's actually you (except some websites that also look at the user's IP, which will be the attacker's one; sadly, not many sites offer that feature).
I was reading an article on how to sniff network packets (of course for knowledge purposes only). I came across these particular lines: For instance, say I was sniffing traffic on the network, and you logged in to Facebook and left the Remember Me On This Computer check box checked. That signals Facebook to send you a session cookie that your browser stores. I potentially could collect that cookie through packet sniffing, add it to my browser and then have access to your Facebook account. So, assuming my Linux client is compromised and I am unaware of it currently, does that mean that if I have clicked on remember me on this machine to log in to my accounts, my personal details are compromised? How can the compromised machine's cookie information be used in any hacker's browser?
Is it bad to select remember me option in the browsers of a compromised machine?
Why are LOKI and HOST_A connected by a router when they are on the same subnet? Is it really a router, or is it just a switch? (From your later comments, although the device is a router, it is just acting as a switch between LOKI and HOST_A.) What are the routing tables on HOST_A and ROUTER? If either were to try to reach 192.168.200.1, why should it use LOKI as a destination? Unless the routing tables on these devices know to use LOKI for the 192 subnet, they will just forward to the default gateway. As an alternative, you might want to investigate installing NAT on LOKI, as sketched below. For many services, it can encapsulate traffic from HOST_B as traffic from LOKI. Since other machines can get to LOKI, it can receive the traffic and forward it back to HOST_B.
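A minimal pf-based NAT sketch for LOKI, assuming em0 is its 10.0.0.x interface (untested against your exact setup):
# /etc/pf.conf on LOKI
nat on em0 from 192.168.200.0/24 to any -> (em0)
Combined with the forwarding you already enabled, HOST_B's traffic then leaves LOKI with the source address 10.0.0.2, which ROUTER and HOST_A already know how to answer.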
The environment is FreeBSD for the most part and looks like this: HOST_A <-> ROUTER <-> LOKI <-> HOST_BMinimally, I would like to be able to ping ROUTER from HOST_B.ROUTER is assigned the IP 10.0.0.1 LOKI is a multi-homed machine that is assigned IPs 10.0.0.2 and 192.168.200.1 HOST_B is assigned the IP 192.168.200.3 HOST_A is assigned the IP 10.0.0.3I have set the network up as above and added gateway_enable="YES" to loki's rc.conf netstat -r on LOKI produces: Routing tables Internet: Destination Gateway Flags Netif Expire default 10.0.0.1 UGS em0 10.0.0.0 link#1 U em0 10.0.0.2 link#1 UHS lo0 loki link#2 UH lo0 192.168.200.0 link#3 U ue0 192.168.200.1 link#3 UHS lo0which looks like a fine routing table and appears to work in all directions. netstat -r on HOST_B produces: Routing tables Internet: Destination Gateway Flags Netif Expire default 192.168.200.1 UGS em0 hostb link#2 UH lo0 192.168.200.0 link#1 U em0 192.168.200.3 link#1 UHS lo0which looks like another fine routing table, but is only able to see LOKI. In summary:LOKI is able to ping HOST_A, HOST_B and ROUTER HOST_B is able to ping LOKI, but not ROUTER or HOST_ASome additional notes: from HOST_B ping 10.0.0.1 100% packet lossIn wireshark on LOKI, while pinging 10.0.0.1 from HOST_B: 120 40.549564000 192.168.200.3 10.0.0.1 ICMP 98 Echo (ping) request id=0x5a0e, seq=92/23552, ttl=63 (no response found!)It appears to me that nothing is being routed from LOKI to ROUTER. What am I missing? I confirmed that IP forwarding was taking place by commenting out gateway_enable="YES" in /etc/rc.conf and rebooting. Then, I ran the following commands on loki: sudo tcpdump -i em0 -nS sudo tcpdump -i ue0 -nSto monitor activity on the two nics. from hostb, I ran: ping 10.0.0.1the ue0 interface on 192.168.200.1 reported: 14:44:21.870865 IP 192.168.200.3 > 10.0.0.1: ICMP echo request, id 21509, seq 0, length 64nothing was reported on the em0 interface on 10.0.0.2. I then ran: sudo sysctl -w net.inet.ip.forwarding=1and sure enough, em0 reported: 14:58:14.745369 IP 192.168.200.3 > 10.0.0.1: ICMP echo request, id 25861, seq 0, length 64But no, reply such as what I get with a ping loki from hostb: 14:44:15.724200 IP 192.168.200.3 > 192.168.200.1: ICMP echo request, id 21253, seq 4, length 64 14:44:15.724207 IP 192.168.200.1 > 192.168.200.3: ICMP echo reply, id 21253, seq 4, length 64If I ping router from loki, all is fine: 15:04:55.637839 IP 10.0.0.2 > 10.0.0.1: ICMP echo request, id 46852, seq 3, length 64 15:04:55.638324 IP 10.0.0.1 > 10.0.0.2: ICMP echo reply, id 46852, seq 3, length 64Any ideas how to find the reply?
Unable to figure out how to route a packet through a multi-homed nic
PPP can run protocols other than IP; the most common is of course IPv6. But numerous others have been (and maybe still are) run over PPP. Wikipedia even has a list of protocols that run over PPP, though I'm not sure how many work on Linux. Also — the reason you run PPP over a serial link is because you want to run a higher-level protocol like IP. If you want to avoid that overhead, just use the serial link directly. Serial links don't require PPP; you can send raw binary data over RS232 using whatever application-specific protocol you'd like.
I'm using PPP to communicate with a device. So far what I have been doing is instantiating PPP on my machine (Fedora 29) and on the device (Yocto Linux). Then I open a TCP/UDP socket and communicate with the device. My serial link (which is why I use PPP) has a low baud rate, 4800 to be exact. I cannot change it; it is a project requirement. I've been doing some reading about PPP and as far as I understand I can't just instantiate it and use it raw. I have to use TCP/IP/UDP. Am I correct? In other words, once I have a PPP connection, the only way to use it is to open a socket (UDP or TCP) and talk to the device through it. I can't just create my application-level packet and tell PPP to send it; I have to go through the TCP/IP layer (transport layer).
using PPP without TCP/IP
/proc/net/tcp is not a real file that can be edited. Each time you read from it, the kernel allocates a temporary buffer called a seq file and writes statistics there from the current in-kernel data. You may only hijack that by changing the code in tcp4_seq_show() in net/ipv4/tcp_ipv4.c and subsequent functions. Note that /proc/net/tcp is actually a symlink to /proc/self/net/tcp, so if you put your process into a (network) namespace, it won't see your connections at all.
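You can see the namespace effect directly with iproute2 (the namespace name is arbitrary):
ip netns add demo
ip netns exec demo cat /proc/net/tcp
Read inside the fresh namespace, the file lists no sockets at all, because the seq file is generated from that namespace's own TCP state.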
I need to edit one line in /proc/net/tcp while other lines of the file are being updated by the Linux kernel. Background: each line in /proc/net/tcp represents a TCP socket. The kernel uses this file to show the state and statistics of each socket in the system. I want to fake the statistics of one socket in the system, because I'm capturing its traffic and passing it directly to the network card, without the kernel's knowledge.
How can I edit /proc/net/tcp?
You set options in pf.conf with a set limit { ... } statement. You can modify the packet filter state while it is running by passing the -m (merge) option to pfctl(8), e.g.:
FreeBSD 9.3-RELEASE-p10 (GENERIC) #0: Tue Feb 24 21:28:03 UTC 2015
# pfctl -sm
No ALTQ support in kernel ALTQ related functions disabled
states hard limit 10000
src-nodes hard limit 10000
frags hard limit 5000
tables hard limit 1000
table-entries hard limit 200000
# echo "set limit { states 1000000, frags 1000000, src-nodes 100000, tables 1000000, table-entries 1000000 }" | pfctl -mf -
No ALTQ support in kernel ALTQ related functions disabled
# pfctl -sm
No ALTQ support in kernel ALTQ related functions disabled
states hard limit 1000000
src-nodes hard limit 100000
frags hard limit 1000000
tables hard limit 1000000
table-entries hard limit 1000000
We can figure out the number of connections in GNU/Linux with the ip_conntrack module, and I can print the current and maximum number of connections with: root@debian:/home/mohsen/test/shell# sysctl net.ipv4.netfilter.ip_conntrack_count net.ipv4.netfilter.ip_conntrack_count = 28 root@debian:/home/mohsen/test/shell# sysctl net.ipv4.netfilter.ip_conntrack_max net.ipv4.netfilter.ip_conntrack_max = 65536 And I can change them. And with the PF firewall I can: pfctl -si | grep current pfctl -sm | grep states Now, I have two serious questions: How can I change them in the PF firewall? How can I change the maximum and current values, as with ip_conntrack, in FreeBSD without any firewall or third party?
ip_conntrack and FreeBSD
mydomaine.fr isn't associated with an IP address in your config. You should add an A record that would associate it with the desired IP address. $TTL 1Dmydomaine.fr. IN SOA ns1.mydomaine.fr. root.mydomaine.fr.( 0; serial 1D; refresh 1H; retry 1W; expire 3H; minimum )@ IN NS ns1.mydomaine.fr. ns1 IN A 192.168.132.190 ;your bind server IP @ IN A 192.168.10.1 ;IP mydomaine.fr points toThe @ symbol substitutes the current (or synthesized) value of $ORIGIN. You can also omit it. In your case $ORIGIN inherited zone name from named.conf file (mydomaine.fr)
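After editing the zone you can check it, reload named, and query the record directly (addresses as in your config):
named-checkzone mydomaine.fr /var/named/mydomaine.zone
systemctl reload named
dig @192.168.132.190 mydomaine.fr A +short
The dig query should now return 192.168.10.1, or whatever address you put in the A record.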
For studying purposes about TCP/IP, we have to run a DNS server. I did the advised configuration; the server is running without any errors, but when I request the configured domain name from the server with the dig or nslookup commands, I get nothing. Here are the settings: System: CentOS 7. Installation of the bind package: yum install bind Configuration of /etc/named.conf: // // named.conf // // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS // server as a caching only nameserver (as a localhost DNS resolver only). // // See /usr/share/doc/bind*/sample/ for example named configuration files. // // See the BIND Administrator's Reference Manual (ARM) for details about the // configuration located in /usr/share/doc/bind-{version}/Bv9ARM.html options { listen-on port 53 { any; }; listen-on-v6 port 53 { any; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; recursing-file "/var/named/data/named.recursing"; secroots-file "/var/named/data/named.secroots"; allow-query { any; }; /* - If you are building an AUTHORITATIVE DNS server, do NOT enable recursion. - If you are building a RECURSIVE (caching) DNS server, you need to enable recursion. - If your recursive DNS server has a public IP address, you MUST enable access control to limit queries to your legitimate users. Failing to do so will cause your server to become part of large scale DNS amplification attacks. Implementing BCP38 within your network would greatly reduce such attack surface */ recursion yes; dnssec-enable yes; dnssec-validation yes; /* Path to ISC DLV key */ bindkeys-file "/etc/named.root.key"; managed-keys-directory "/var/named/dynamic"; pid-file "/run/named/named.pid"; session-keyfile "/run/named/session.key"; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; zone "." IN { type hint; file "named.ca"; }; include "/etc/named.rfc1912.zones"; include "/etc/named.root.key"; zone "mydomaine.fr" IN { file "/var/named/mydomaine.zone"; type master; allow-update {none;}; }; Configuration of /var/named/mydomaine.zone: $TTL 1D mydomaine.fr. IN SOA ns1.mydomaine.fr. root.mydomaine.fr.( 0; serial 1D; refresh 1H; retry 1W; expire 3H; minimum ) mydomaine.fr. IN NS ns1.mydomaine.fr. ns1 IN A 192.168.10.1 When I run systemctl status named.service -l: ● named.service - Berkeley Internet Name Domain (DNS) Loaded: loaded (/usr/lib/systemd/system/named.service; disabled; vendor preset: disabled) Active: active (running) since Fri 2022-01-28 19:19:32 CET; 11min ago Process: 3597 ExecStart=/usr/sbin/named -u named -c ${NAMEDCONF} $OPTIONS (code=exited, status=0/SUCCESS) Process: 3594 ExecStartPre=/bin/bash -c if [ !
"$DISABLE_ZONE_CHECKING" == "yes" ]; then /usr/sbin/named-checkconf -z "$NAMEDCONF"; else echo "Checking of zone files is disabled"; fi (code=exited, status=0/SUCCESS) Main PID: 3599 (named) Tasks: 5 CGroup: /system.slice/named.service └─3599 /usr/sbin/named -u named -c /etc/named.conf -4Jan 28 19:19:32 localhost.localdomain named[3599]: zone mydomaine.fr/IN: loaded serial 0 Jan 28 19:19:32 localhost.localdomain named[3599]: zone localhost.localdomain/IN: loaded serial 0 Jan 28 19:19:32 localhost.localdomain named[3599]: zone 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa/IN: loaded serial 0 Jan 28 19:19:32 localhost.localdomain named[3599]: zone 1.0.0.127.in-addr.arpa/IN: loaded serial 0 Jan 28 19:19:32 localhost.localdomain named[3599]: zone localhost/IN: loaded serial 0 Jan 28 19:19:32 localhost.localdomain named[3599]: all zones loaded Jan 28 19:19:32 localhost.localdomain named[3599]: running Jan 28 19:19:32 localhost.localdomain systemd[1]: Started Berkeley Internet Name Domain (DNS). Jan 28 19:19:32 localhost.localdomain named[3599]: managed-keys-zone: Key 20326 for zone . acceptance timer complete: key now trusted Jan 28 19:19:32 localhost.localdomain named[3599]: resolver priming query completeand dig mydomaine.fr gives me : G 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.8 <<>> mydomaine.fr ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 23167 ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0;; QUESTION SECTION: ;mydomaine.fr. IN A;; Query time: 7 msec ;; SERVER: 192.168.132.190#53(192.168.132.190) ;; WHEN: Fri Jan 28 19:20:25 CET 2022 ;; MSG SIZE rcvd: 30and the command nslookup mydomaine.fr gives me : Server: 192.1... Address: 192.1...#53** server can't find mydomaine.fr: NXDOMAIN
My first DNS configuration doesn't work or respond on CentOS
If this is a home lab, I would recommend setting up a DNS/DHCP server as a good learning exercise, as mentioned in the comment above by user1794469. This way, anytime the IP address changes, the DNS record is updated dynamically. The most important thing to do after that is to configure your client machines to point to your DNS server first, before they look elsewhere. DNS and DHCP were created to solve this exact problem. If you set up JUST a DNS server, you would still have to manually update the records anytime you add a new machine or a machine gets a different IP. When you set up DNS and DHCP, the DHCP daemon will "dynamically" update the records. There are plenty of guides online for setting this up, so make sure you find a guide for your choice of Linux distro.
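If a full BIND/dhcpd pair feels heavy for a home lab, dnsmasq (my suggestion; it is not mentioned elsewhere in this thread) does both jobs in one small daemon. A minimal /etc/dnsmasq.conf sketch, with a made-up MAC address:
domain=marathn.meso
dhcp-range=10.128.0.50,10.128.0.150,12h
dhcp-host=aa:bb:cc:dd:ee:ff,kafka,10.128.0.22
The dhcp-host line pins a known MAC address to a fixed IP and name, and dnsmasq serves DNS answers for the leases it hands out.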
I am a networking newb - ping cannot locate certain dns names, so I put them in /etc/hosts like this: 10.128.0.22 kafka.marathn.meso 10.128.0.31 elasticsearch.marathn.mesoand then ping can find them. Is there a more dynamic/scalable way to map the DNS names to an IP address, in case the IP address changes?
Alternative to modifying /etc/hosts for dns lookup
TCP communication is done through sockets, which you create with the socket() system call. Sockets are file descriptors, so all of the ways of reading from and writing to file descriptors (plus some additional system calls specific to sockets) work for sockets, and that's how you send and receive data. As with any other file descriptor, both reads and writes can block (if the file descriptor is configured to block) or return an indication that the operation cannot proceed immediately (in non-blocking mode), and that's how flow control works.
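One quick way to watch those system calls happen is to trace a simple client; here nc and the target are arbitrary examples, and note that nc may move the actual data with plain read()/write() rather than the socket-specific calls:
strace -e trace=%network nc example.com 80
The trace shows socket() creating the file descriptor and connect() establishing the TCP session.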
Are there standard system calls to send data to TCP? And back? How does TCP tell the application to send more or less?
How does tcp communicate back to the application? [closed]
For the first one: ndd -set /dev/tcp tcp_time_wait_interval 90000 As per the official manual, you should not set this under 60000 (= 60 seconds). For the second one: ndd -get /dev/tcp tcp_smallest_anon_port tcp_largest_anon_port No need to restart the network. But if you feel like it, in Solaris 10 it is svcadm restart network/physical.
I need to change and test some TCP/IP network parameters on a Oracle Solaris 10 as a possible workaround for a bug in 'Oracle Hyperion EPM 11.2.1.0' in the development environment. I am not a Solaris/UNIX expert so would appreciate any guidance to identify the correct parameters and also it would be very helpful if you could tell me the possible impact of the changes and how I could have the admin support rollback the changes. I need to decrease the time wait before closing the sockets. I already have the corresponding UNIX command: $ echo 3 > /proc/sys/net/ipv4/tcp_fin_timeout I need to check if the System is preventing the 'Oracle Hyperion EPM' application from using a large number of sockets. How do I check the port range and modify it? The UNIX command for it would be: $ echo "1025 65535" > /proc/sys/net/ipv4/ip_local_port_rangeI am advised that I need to make these changes as root and execute the following UNIX command for applying the changes: $ /etc/rc.d/init.d/network restart References:I looked at the parameters in $ ndd /dev/tcp \? http://docs.oracle.com/cd/E19082-01/819-2724/6n50b07lr/index.html http://www.informit.com/articles/article.aspx?p=101138&seqNum=6
How do I safely perform the below changes to network parameters on Solaris 10?
Starting in tmux 1.9 the default-path option was removed, so you need to use the -c option with new-window and split-window (e.g. by rebinding the c, ", and % bindings to include -c '#{pane_current_path}'). See some of the other answers to this question for details. A relevant feature landed in the tmux SVN trunk in early February 2012. In tmux builds that include this code, tmux key bindings that invoke new-window will create a new window with the same current working directory as the current pane's active processes (as long as the default-path session option is empty; it is by default). The same is true for the pane created by the split-window command when it is invoked via a binding. This uses special platform-specific code, so only certain OSes are supported at this time: Darwin (OS X), FreeBSD, Linux, OpenBSD, and Solaris. This should be available in the next release of tmux (1.7?). With tmux 1.4, I usually just use tmux neww in a shell that already has the desired current working directory. If, however, I anticipate needing to create many windows with the same current working directory (or I want to be able to start them with the usual <prefix>c key binding), then I set the default-path session option via tmux set-option default-path "$PWD" in a shell that already has the desired current working directory (though you could obviously do it from any directory and just specify the value instead). If default-path is set to a non-empty value, its value will be used instead of "inheriting" the current working directory from command-line invocations of tmux neww. The tmux FAQ has an entry titled "How can I open a new window in the same directory as the current window?" that describes another approach; it is a bit convoluted though.
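For tmux 1.9 and later, the rebindings mentioned at the start of this answer look like this in ~/.tmux.conf:
bind c new-window -c "#{pane_current_path}"
bind '"' split-window -c "#{pane_current_path}"
bind % split-window -h -c "#{pane_current_path}"
Each new window or pane then starts in the directory of the pane the key binding was issued from.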
Is it possible to open a new-window with its working directory set to the one I am currently in? I am using zsh, if it matters.
How to create a new window on the current directory in tmux?
From their website:How is tmux different from GNU screen? What else does it offer?tmux offers several advantages over screen:a clearly-defined client-server model: windows are independent entities which may be attached simultaneously to multiple sessions and viewed from multiple clients (terminals), as well as moved freely between sessions within the same tmux server; a consistent, well-documented command interface, with the same syntax whether used interactively, as a key binding, or from the shell; easily scriptable from the shell; multiple paste buffers; choice of vi or emacs key layouts; an option to limit the window size; a more usable status line syntax, with the ability to display the first line of output of a specific command; a cleaner, modern, easily extended, BSD-licensed codebase.There are still a few features screen includes that tmux omits:builtin serial and telnet support; this is bloat and is unlikely to be added to tmux; wider platform support, for example IRIX and HP-UX, and for odd terminals.
Browsing through questions I found out about tmux (I normally used GNU Screen). My question is: what are the pros and cons of each of them? I especially couldn't find much about tmux.
tmux vs. GNU Screen [closed]
Not according to the man page, which only calls out the attach -r option to enable read-only mode. Also, in the source code, only the following line in cmd-attach-session.c sets the read only flag. The rest of the code checks whether this flag is set, but does not change its value. So again, it looks like you are out of luck unless you can make (or request) a code change: if (cmd_check_flag(data->chflags, 'r')) ctx->cmdclient->flags |= CLIENT_READONLY;
I've been using screen for years now as a way of ensuring that any remote work is safely kept open in after disconnects/crashes. In fact, as a matter of course, I use screens even when working locally. Recently, my requirements have progressed to the stage that I switched to tmux because of the beauty of: tmux attach -rAttaching to my own sessions in readonly mode (-r) means that I don't have to worry about accidentally:pasting lines of garbage in IRC halting an important compile/deploy process typing a password in full view for passersbyOf course the issue is that I have to open a session, C-b + d to detach, and then reopen it with the -r flag to go readonly. And then, when I occasionally want to chime in to an IRC conversation, interrupt a task or anything else, I have to detach again and reconnect normally. Does anyone know of a way to make a key binding to switch between modes?
Is there a tmux shortcut to go read only?
You haven't set the active window's background color; you only set the active pane border. Try: set-window-option -g window-status-current-bg red
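Note that in tmux 2.9 and later the separate -bg/-fg options were merged into style options, so the equivalent setting there would be (to the best of my knowledge):
set-window-option -g window-status-current-style bg=red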
Is it possible to change the background of the active (current) tmux tab? I'm using tmux 1.9 on Ubuntu 15.04. $ tmux -V tmux 1.9I tried to do: set-option -g pane-active-border-fg redBut the result was not changed:I expected the 3-bash* to have a red background.
Set the active tmux tab color
tl;dr

... | tmux loadb -
tmux saveb - | ...

Explanation & Background

In tmux, all copy/paste activity goes through the buffer stack, where the top (index 0) is the most recently copied text and will be used for pasting when no buffer index is explicitly provided with -b. You can inspect the current buffers with tmux list-buffers or the default shortcut tmux-prefix+#.

There are two ways of piping into a new tmux buffer at the top of the stack: set-buffer, taking a string argument, and load-buffer, taking a file argument. To pipe into a buffer you usually want to use load-buffer with stdin, e.g.:

print -l **/* | tmux loadb -

Pasting this back into editors and such is pretty obvious (tmux-prefix+] or whatever you've bound paste-buffer to); however, accessing the paste from inside the shell isn't, because invoking paste-buffer will write the paste into stdout, which ends up in your terminal's edit buffer, and any newline in the paste will cause the shell to execute whatever has been pasted so far (potentially a great way to ruin your day). There are a couple of ways to approach this:

- tmux pasteb -s ' ' : -s replaces all line endings (separators) with whatever separator you provide. However, you still get the behavior of paste-buffer, which means that the paste ends up in your terminal edit buffer; that may be what you want, but usually isn't.
- tmux showb | ... : show-buffer prints the buffer to stdout, and is almost what's required, but as Chris Johnsen mentions in the comments, show-buffer performs octal encoding of non-printable ASCII characters and non-ASCII characters. This unfortunately breaks often enough to be annoying, even with simple things like null-terminated strings or accented Latin characters (e.g. (in zsh) print -N á | tmux loadb - ; tmux showb prints \303\241\000).
- tmux saveb - | ... : save-buffer simply does the reverse of load-buffer and writes the raw bytes unmodified to stdout, which is what's desired in most cases. You can then continue to assemble another pipe, and e.g. pass through | xargs -n1 -I{} ... to process line-wise, etc.
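A quick round trip putting the two halves together (nothing here beyond what is described above; the commands are just examples):

ls -la | tmux loadb -        # push pipe output onto the top of the buffer stack
tmux saveb - | grep total    # pull the same bytes back out into a new pipe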
When working in a shell environment I run fairly often into the need to copy 'intermediate pipe output' around (eg. from/to already running editors, to other shells, other machines, etc.). When in a windowing environment, an easy (and generic) method to solve this is often via the system clipboard, eg.:X11: ... | xsel -i / xsel -o | ... OS X: ... | pbcopy / pbpaste | ...How can I get similarly convenient behavior using the tmux copy/paste facility?
How to copy from/to the tmux 'clipboard' with shell pipes?
This question is a bit old, but I was looking for something similar, and found it here. It creates a second session that shares windows with the first, but has its own view and cursor.

tmux new-session -s alice
tmux new-session -t alice -s bob

If the sharing is happening between two user accounts, you may still have to mess with permissions (which it sounds like you had working already).

Edit: As suggested, a quote from another answer:

First, add a group for tmux users:

export TMUX_GROUP=tmux
addgroup $TMUX_GROUP

Create a directory with the group set to $TMUX_GROUP and use the setgid bit so that files created within the directory automatically have the group set to $TMUX_GROUP:

mkdir /var/tmux
chgrp $TMUX_GROUP /var/tmux
chmod g+ws /var/tmux

Next, make sure the users that want to share the session are members of $TMUX_GROUP:

usermod -aG $TMUX_GROUP user1
usermod -aG $TMUX_GROUP user2
I've decided to try tmux: I have been reading the docs and googling around, trying to find a way to have two users share a session, each with a different cursor. However, giving 777 permissions to the socket, or creating a group, chgrping the socket and adding both users to it, seems to let that same socket be used to share a session with only one cursor: both users can write, but always at the same cursor position. Right now both users are on the same home server over ssh, and the idea is to be able to have:

- a terminal in a, let's say, left pane, where I can type commands
- another terminal in a right pane, where I can see another user typing commands in his own terminal
- the same thing for the other user

What I'm doing at the moment is using two sessions (not shared) and a script -f and tail -f combination that kinda works for reading each other's keystrokes, but I reckon there is probably some way of doing this using tmux sharing capabilities. Is there a way to get this idea working with write support in each other's terminal? What is the better way to do this?
tmux: shared session, one user in a pane, another in another pane, two different cursors
Super_L is an X keysym. Tmux runs in a terminal. It is up to your terminal emulator to transform a keysym into a character sequence, so you would have to configure both your terminal emulator and tmux.

Looking at the tmux documentation, the prefix can only be a known key name with an optional modifier. So you can set the tmux prefix to a key combination you don't use, say M-F12, and get your terminal to send the character sequence for M-F12 when you press Super_L. With a little more work, you could use a key that your keyboard probably doesn't have (tmux accepts F13 through F20 as key names, but they have to be declared in terminfo).

On the terminal emulator side, you would have to arrange for Super_L to generate the key sequence \e\e[24~ (for M-F12) or \e[34~ (for F20), where \e is the escape character. How to do this depends on the terminal emulator (and some aren't configurable enough to do it). With xterm, it's done through X resources:

! Make Super_L act as Meta+F12
XTerm.VT100.translations: #override \
    <Key>Super_L: string("\033\033[24~")

You may hit a snag: Super_L is normally a modifier, and modifier keys don't always work when a non-modifier is required. If you don't want Super_L to be a modifier, you can take its modifier away, or (less confusingly) use a different keysym for the physical key. This can be done through xmodmap (old-fashioned and simple to understand), through xkb (the modern, poorly-documented, powerful and complex way), or perhaps through your desktop environment's GUI configuration tool.
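For the xmodmap route, a minimal sketch (the keycode is an assumption; run xev and use whatever keycode your left Super key actually reports):

# Stop keycode 133 (often the left Super key) from acting as a modifier
# and give it the F20 keysym instead; F20 can then serve as the tmux prefix.
xmodmap -e 'remove mod4 = Super_L'
xmodmap -e 'keycode 133 = F20'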
I find even Ctrl+b to be very Emacs-like, but I understand the point. I'm wondering if I could bind it to a single keypress of a key I don't otherwise use, namely Super_L (also known as the left Windows key; to see why I call it Super_L, start xev in a terminal and press that key).
How do I bind the tmux prefix to the Super key?
I use dwm and tmux. Before learning to use tmux, I would have multiple terminals open for different things, and have them in different tags. Now I can run everything inside of one tmux session, under a single tag, and can detach and reattach without losing state if I need to restart X.
Both terminal multiplexers (screen, tmux) and keyboard-driven tiling window managers (ratpoison, dwm, xmonad) provide similar functionality. Is there any benefit in using both at the same time? What about problems that may arise?
Does a terminal multiplexer have any benefit when used with a tiling window manager?
Maybe these schemas can clarify the situation. This is the usual setting:

      Terminal device
 (/dev/ttyX or /dev/pts/x)
            |
   (screen)<--[<output]----x-------(stdout) Process1
 (keyboard)---[input >]---o-\----->(stdin)
                             \
 (hardware console or         `----(stdout) Process2
  virtual console or            `---->(stdin)
  terminal emulators
  like xterm, …)

And there is no way to plug some new Process3 like this:

      Terminal device
            |
   (screen)<---o---[<output]--x------(stdout) Process1
 (keyboard)---/-x--[input >]-o-\---->(stdin)
              | |               \
              | |                `---(stdout) Process2
              | |                 `--->(stdin)
              | `--------------------(stdout) Process3
              `--------------------->(stdin)

What screen (and others) does is allocate a pseudo-terminal device (like xterm does) and redirect it to one or more "real" terminals (physical, virtual, or emulated):

  Terminal               pseudo-terminal device
   devices                   (/dev/pts/x)
                           _________
 Terminal <--[<output]--- |         |
    1     ---[input >]--> | screen  | <--[<output]---x-----(stdout) Process1
                          | process | ---[input >]--o-\--->(stdin)
 Terminal <--[<output]--- |         |                  \
    2     ---[input >]--> |_________|                   `--(stdout) Process2
                                                          `-->(stdin)

Using screen -x you can attach one more terminal, xterm, whatever (say Terminal 3) to the screen session.

So no, you can't communicate directly through stdin/stdout with processes attached to a different terminal. You can only do so through the process that is controlling this terminal, if it happens to be a pseudo-terminal and if this process was conceived to do so (like screen is).
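To illustrate the screen -x route mentioned above (the session name is just a placeholder):

# In the gnome-terminal on the desktop: start a named session.
screen -S shared
# Later, from the ssh login: attach a second display to the same session.
screen -x shared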
So let's say you boot up your Linux install all the way to the desktop. You start up a gnome-terminal/konsole/whatever so you have a tty to enter commands to. Now let's say I SSH into that same machine. It will bind me to another tty to enter commands to. Now let's say I want to "switch" my tty from my original SSH one to the gnome-terminal one started earlier. Basically I'm asking if there is any way to do the same thing screen -x does but without screen? I know you can easily send output to the other tty simply by echoing something into the /dev file, but I don't know a way to 'view' what's in the tty. Any ideas?
How can I switch between ttys without using screen?
Finally, I've managed to figure out the (in hindsight obvious) package which supplies screen-256color-s; it has to be installed on the remote machine:

sudo apt install ncurses-term

This fixed the problem for me: nice 256 colors and no need for ugly workarounds with environment variables. Hooray! :)
I'm trying to make an ssh connection (via lsh) from one Ubuntu host to another from within screen. If I try to run mc right after that, I get the following error:

Unknown terminal: screen-256color-s
Check the TERM environment variable.
Also make sure that the terminal is defined in the terminfo database.
Alternatively, set the TERMCAP environment variable to the desired termcap entry.

The question is: what is causing this failure? Is it the local host? The remote? Some missing package (which?), something not done by lsh-server? Or the client? Just to be clear: I don't want workarounds like "TERM=xterm mc"; I want to be able to use visual themes which support 256 colors on the (remote) console.
ssh from screen leads to unknown terminal error
Sure, with

screen -d -r

You can choose which screen to detach and reattach as usual by finding the pid (or complete name) with screen -list:

screen -d -r 12345
I use GNU Screen's virtual consoles. To detach a screen I need to press Ctrl+A followed by D, but sometimes a session is closed without detaching it. It appears as (Attached) in screen -list:

eduard@eduard-X:~$ screen -list
There are screens on:
        4561.pts-46.eduard-X   (30.03.2015 14:48:51)   (Attached)
        4547.pts-46.eduard-X   (30.03.2015 14:48:33)   (Detached)
        4329.pts-41.eduard-X   (30.03.2015 14:46:28)   (Attached)
        3995.pts-30.eduard-X   (30.03.2015 14:30:01)   (Detached)

If I try to resume it, screen responds that there is no screen to resume:

eduard@eduard-X:~$ screen -r 4329
There is a screen on:
        4329.pts-41.eduard-X   (30.03.2015 14:46:28)   (Attached)
There is no screen to be resumed matching 4329.

Can I still resume a screen that I did not detach properly?
How can I resume a screen that I did not manage to detach?
They can potentially be faster at outputting and refreshing vast amounts of information. It could also allow for smooth(er) scrolling. Human beings, however, are quite slow at reading this information, so I'm kinda doubtful this can be beneficial: the average person is unlikely to be able to comprehend it anyway. CPU usage could be lower, but it needs to be tested. At the same time such terminals are eating your VRAM, which could be an issue for users who have little VRAM or whose VRAM lives in RAM (users with integrated graphics).

One way to measure performance is to generate or find a very large text file and measure the time and CPU time it takes to output it:

$ cat bigfile > /dev/null # cache it
$ time cat bigfile

could be enough. I don't have terminals with HW acceleration, so I cannot test it.

I've just installed kitty. XFCE4 Terminal:

real    0m1.760s
user    0m0.000s
sys     0m0.342s

Kitty:

real    0m1.007s
user    0m0.000s
sys     0m0.282s

That's for a 41MB file with over 700K lines:

$ wc -l test.txt
751900 test.txt

In both cases you can barely see or read anything on the screen.
What distinguishes kitty from the vast majority of terminal emulators? It offers GPU acceleration combined with a wide feature set. It's targeted at power keyboard users. It's billed as a modern, hackable, featureful, OpenGL-based terminal emulator.

What are the advantages of hardware-accelerated terminal emulators? Is it speed? How would you notice that in daily command execution? Classic terminals don't seem too slow; the bottleneck is mostly the human typing.
What are the advantages of hardware-accelerated terminal emulators?
A terminal emulator provides a standardised character-based interface for text-mode applications; it emulates the behavior of real or idealised hardware.

Consoles typically run some sort of terminal emulation (the Linux console emulates a VT220 with some additions).

A terminal was dedicated hardware that implemented the standard and was connected to the server via a serial connection, either directly or via a concentrator. The term is often used to include terminal emulators; it can also include GUI terminals that use X or RDP instead of being text based.

A terminal multiplexer emulates several terminals and mixes their output and directs input in a way that is useful to the user.

Xterm is a terminal emulator that runs under a GUI (X). A window manager can be used to resize and relocate the windows that xterm uses. Xterm also has a graphical capability where it emulates a graphics terminal, but there aren't many applications that can exploit this (I know of only two: gnuplot and dosemu); most other GUI-based terminal emulators display text only.
For example: what is the difference between a default 'interface/console' of FreeBSD/archlinux, vs a terminal, vs a terminal emulator like Xterm, vs a terminal multiplexer like tmux, vs a window manager like awesome; and where do Bash and other 'shells' fit in all this?
What is the difference between a Console, Shell, Terminal, Terminal emulator, Terminal multiplexer, and a Window manager?
Add

set -g terminal-overrides "xterm*:smcup@:rmcup@"

to your tmux config and restart tmux. The smcup@ and rmcup@ overrides remove the "alternate screen" capability from tmux's idea of your terminal, so output ends up in the terminal's own scrollback buffer instead of the alternate screen. If you are not using a terminal matching 'xterm*', you'll have to change it, of course.
Scrollability of text in some terminals:

        | tty | xterm | urxvt | guake | terminator
 screen |  Y  |   Y   |   Y   |   Y   |     Y
 tmux   |  N  |   N   |   Y   |   N   |     N

I notice in this answer that scrolling doesn't work when using tmux. But I can scroll in urxvt. How can I have that behaviour in other terminals?
Scrollability of text when using tmux?
Although some terminal programs have support for splitting, you won't be able to access this functionality from the shell, which is running in a different layer and doesn't have access to the software displaying it. What you can do is use a terminal multiplexer such as GNU Screen or tmux, which allow you to run multiple shells in "panes" inside a console.

Screen has been around since the dawn of time and works, but lately the project has fallen into disrepair and is not being well maintained. Tmux is kind of a new player on the scene, but the code is very clean and mature, it has a few more features than screen, and it's a good deal easier to learn and configure. Even though I still use screen out of force of habit, I highly recommend you use tmux for this.

You should be able to write a script that launches a tmux session, runs your streamripper command in one pane, waits for a condition, then adds another pane to the same session, displays it as a split screen, and runs mplayer in the new pane.
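A minimal sketch of such a script. The session name and the nc-based readiness check are assumptions (it needs a netcat that supports -z); the relay port 8000 is taken from your streamripper command:

#!/bin/sh
# Start a detached session running streamripper with a local relay on port 8000.
tmux new-session -d -s rip 'streamripper http://radio.net:8000 -r 8000'
# Wait until the relay port accepts connections instead of sleeping blindly.
until nc -z localhost 8000 2>/dev/null; do sleep 1; done
# Split the window horizontally and play the relayed stream in the lower pane.
tmux split-window -v -t rip 'mplayer http://localhost:8000'
# Finally, show the whole thing in the current terminal.
tmux attach -t rip

You could then launch this in a fresh window with, e.g., konsole -e sh /path/to/script.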
I want to run streamripper in its own X terminal (window), then split the terminal horizontally, and then run mplayer in the lower half. This is simple enough to do manually, but getting a script to do it eludes me.

1. Start a new terminal window.
2. Run streamripper http://radio.net:8000 -r 8000.
3. Split the terminal window horizontally.
4. Run mplayer http://localhost:8000 in the bottom pane.

mplayer cannot be allowed to run immediately. It needs to wait for stream data, so a test for this would be better than "wait x seconds" (which is effectively what the manual method does). If the terminal is significant to this, anything will do, but I currently have konsole, gnome-terminal, and terminator installed (in Ubuntu).
How to run streamripper and mplayer in a split-screen X terminal, via a single script
Once you've launched screen, you can use its internal screen command to attach windows to additional terminal devices. Type C-a : to get the prompt, then use

screen /dev/ttyUSB1 ######

where ###### is this device's baud-rate. You can also put these commands in your .screenrc to attach the devices automatically when you start screen, or you could bind a keystroke to this command to get a shortcut. See the Window Types section of the screen manual.
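For example, a .screenrc along these lines (a sketch; the device names and baud rates are placeholders) would open both dongles at startup:

# Open each serial device in its own screen window, with a title.
screen -t usb0 /dev/ttyUSB0 115200
screen -t usb1 /dev/ttyUSB1 9600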
I use screen to connect to devices via RS232 with a USB-serial dongle. Currently, I use this command to invoke screen (where 115200 is my baud-rate): screen /dev/ttyUSB0 115200Usually, I have more than one device (/dev/ttyUSB0 and /dev/ttyUSB1). Sometimes their baud-rate differs. Currently, I open a new terminal emulator and run screen for each instance, but that kind of defeats the purpose of screen. Can I access both devices in a single instance of screen? I'm thinking this would involve launching screen with no arguments and then attaching the session to a TTY with a specified baud-rate after it is created, but I don't see a command to change TTYs within a session. I know tmux can do that, but I'd rather stick with screen.
Multiplex different TTYs with a single instance of screen
There is no general method. As observed by vinc17, different terminal emulators let you configure the TERM value in different ways, if at all.

You can drop terminfo configuration files into your home directory, organized as ~/.terminfo/INITIAL-LETTER/VALUE. For example, if you wish for xterm to point to the 256-color entry, on a typical machine you could do:

mkdir -p ~/.terminfo/x
ln -s /usr/share/terminfo/x/xterm-256color ~/.terminfo/x/xterm
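Where the emulator does let you set TERM directly, X resources are often the hook. A sketch for two of the emulators you name (resource names as commonly documented for xterm and urxvt; verify against your versions):

! ~/.Xresources: set TERM per emulator at startup
XTerm*termName:  xterm-256color
URxvt*termName:  rxvt-unicode-256color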
Is there a way to set a different $TERM for different terminal emulators? For example, if I am in xterm, $TERM will read xterm-256color; in urxvt, urxvt-256color; in sakura, xterm-256color; and in tmux, screen-256color.
Modular $TERM for different terminal emulators
To send a string literally you can use the -l option to send-keys, but as you might still have more options after the -l, you need to use something like '' (an empty string) so that tmux stops looking for further options beginning with -. You cannot mix the literal text with key names like Enter, so finally you need to give two commands, e.g.:

tmux send-keys -t session -l '' -3 \; send-keys -t session Enter
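Applied to the other example from the question (a sketch following the same pattern), this is the difference between the literal word "up" and the actual Up arrow key:

tmux send-keys -t mySession -l '' up \; send-keys -t mySession Enter   # types the text "up"
tmux send-keys -t mySession Up Enter                                   # presses the Up arrow key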
Using tmux to send commands along from one terminal to another, I realize that

$ tmux send -t mySession "text" ENTER

correctly sends text, but

$ tmux send -t mySession "up" ENTER

sends text again, probably because up is interpreted not as text but as the key name for the Up arrow. Similarly,

$ tmux send -t mySession "3" ENTER

correctly sends 3, but

$ tmux send -t mySession "-3" ENTER
tmux: unknown option -- 3
usage: send-keys [-lRM] [-t target-pane] key

fails with this error message, and this naive attempt to escape it

$ tmux send -t mySession "\-3" ENTER

sends 3 again, not the expected -3. Anyway, I'm pretty sure that I've missed something about the way tmux interprets and understands its arguments. What am I missing here? How do I ensure that my tmux command "<text>" ENTER will always be interpreted as "send the actual <text>, then send the ENTER key"?
Escape keywords with tmux send
Try vim-slime, a plugin inspired by Emacs's SLIME mode. It sends the contents of a Vim buffer to a screen or tmux session. In the future you can probably also use Xiki, but for now its Vim support is incomplete.
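A minimal configuration sketch for the tmux side of vim-slime (settings as documented in the plugin's README; treat the values as examples):

" In ~/.vimrc: send selections to the most recently active tmux pane.
let g:slime_target = "tmux"
let g:slime_default_config = {"socket_name": "default", "target_pane": "{last}"}

With that in place, visually select some lines and press C-c C-c (the plugin's default binding) to send them to the target pane.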
From time to time I want to use vim as scratch pad for commands that I would like to send to a command line shell like psql (Postgres), ghci (Haskell programming language), octave ('calculator'), gnuplot (plot) etc. The advantages would be that you could put comments next to command lines, directly document your session, incrementally develop command lines, test examples ad-hoc in manuals etc. Pro features I would like to use: send a selection to a shell, send e.g. the next 10 lines to a shell, display the output of a shell command into a vim output buffer, into a vim yank-register, directly insert it etc. There should be some support of a shell-session concept, i.e. the shell should not be started for each command from scratch. I could live with a kind of remote controlled xterm which I would put side by side to a vim window.
How to configure vim to interact with interactive command line shells?
Konsole has this feature by default. I would like to know if any others do too, as it's a feature I use regularly to ease readability.
Are there any Linux Terminal Emulators that offer "control & scroll zoom" (allowing you to adjust the visible size of fonts by holding down ctrl and scrolling your mouse-wheel)? Recently on Windows I discovered ConEMU, which offers this feature and I'd love to enjoy it on my Linux desktops too. Looks like Terminator is "going" to have this soon! Anything have it now?
Terminal Emulator with CTRL+Scroll Zoom
I read your question carefully and I have a simple, obvious suspect: did you actually run screen? Inside your gnome-terminal, you have to run the screen program first. After you've started the program inside the terminal, you can use the Ctrl+a c sequence to create a new window.
I am working on a remote Linux box using VNC. I have a single terminal with lots of tabs opened in my system. How can I split them using the screen utility? Does the screen utility work with already-opened tabs, or do I need to close all existing tabs to try it out? I have been through this answer but it is not working for me. I tried to open a new terminal using Ctrl+a then c, but it just doesn't do anything. I am using Red Hat Linux and GNOME Terminal 1.16.0. I can see screen is there on my system, as shown below:

[subhrcho@slc04lyo pcbpel]$ which screen
/usr/bin/screen

But I can't find tmux:

[subhrcho@slc04lyo pcbpel]$ which tmux
tmux: Command not found.

Here's the Linux version:

[subhrcho@slc04lyo pcbpel]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.8 (Tikanga)

Edit 1: Even if I close all the tabs and open a single window as suggested by @Jander, and then type Ctrl+a then c, I see the character c just getting printed in the shell. Am I missing something obvious?

Edit 2: Turns out that I never actually ran the screen command (thanks to @andcoz). Now I can see my terminal getting split into horizontal sections, but not vertically.
Why is screen not splitting my GNOME terminal?
The idea is to enumerate all the windows, or rather all the panes, since a window can have several of them, then capture the output of each pane and display the last line of the captured text. Put this into a script:

tmux list-windows -F '#I' | while read w; do
    tmux list-panes -F '#P' -t $w | while read p; do
        echo -n "${w}.${p}"
        tmux capture-pane -p -t "${w}.${p}" | tail -n 1
    done
done

Suppose you put this code in /some/file. After that, being in your 40-window tmux session, you create your new monitoring window and run

watch -n 1 'bash /some/file'

in there. The echo -n "${w}.${p}" part prepends each line with the window and pane index; I find it rather useful to have an idea of the output's origin. You may not want it.
I am running a tmux session with 40 windows. I need to create an overview window listing the last line of output of each window, updating as the respective windows produce output.
tmux - List output last line of all windows in new window
One approach is to use a terminal multiplexer only on remote machines. Running each shell in a separate terminal emulator has the advantage that you can put multiple shell windows side by side. On a remote machine, resistance to disconnection is a big win that justifies terminal multiplexers, but locally, they have fewer advantages. If you do want to nest terminal multiplexers, using different prefix keys locally and remotely would be the easy way to cope.
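For the nesting route, the usual trick is to give the remote tmux a different prefix. A sketch for the remote ~/.tmux.conf (C-a is just a common choice):

# Remote ~/.tmux.conf: use C-a as prefix so the local tmux keeps C-b.
set -g prefix C-a
unbind C-b
bind C-a send-prefix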
I use tmux regularly to easily handle multiple terminals on my local machine. Sometimes, I need to connect to a remote machine and start a script in one of the terminals (i.e. a pane or window in tmux). If my machine disconnects for any reason during this process, the remote script is killed and I cannot re-attach to the remote terminal that started the process. Part of the purpose of terminal multiplexers is to deal with this precise scenario, but in my case, since I am running tmux on the local machine, I can't re-attach to the terminal that started the remote process. One option would be to run tmux (or GNU Screen) on the remote machine within one of the panes of my local tmux session, but I am concerned about running into keyboard-shortcut conflicts when nesting either screen or tmux within tmux. What is a good way of handling this problem? Is nesting console multiplexers a good idea? Is it the only solution to this problem?
Combining local and remote terminal multiplexing
Do you want to suspend a job inside a screen window?

Just use Ctrl+z inside the screen window (as usual). This doesn't suspend screen, though.

Do you want to suspend screen itself?

Use Ctrl+a z inside any screen window. But notice that although this suspends the user-facing part of the screen application, it doesn't suspend the applications being managed through screen. This is because screen is designed so that its user-facing part can be detached with Ctrl+a d, and the managed processes continue to run.
As far as I've seen, pressing Ctrl-Z in any terminal multiplexer, or trying to start them in the background, does nothing or crashes. I know that, in a sense, terminal multiplexers are a "replacement" for job control, and usually they have their own mechanisms for suspending and resuming. Still, I was wondering if I could integrate them in some way into a workflow based on the shell's job control.

Answer:

- Screen suspends with C-a z
- Tmux suspends with C-b C-z
- Zellij suspends with C-o d, but unlike the previous ones, it doesn't place the process under the shell's job control.
Does any terminal multiplexer (screen, tmux, zellij) support job suspension (Ctrl-Z) in Bash?
I found my answer. Although it was under a different title and the question was slightly different, this does the job:

screen -d -S downloader -m wget https://google.com

It creates a new detached screen session called downloader and runs the command in it.
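Since the question also asks about tmux: the equivalent there is built into new-session (session name chosen to match the screen example):

tmux new-session -d -s downloader 'wget https://google.com'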
With the screen command, the -X option allows you to execute a command in the specified screen session, but when you try to use it when creating a new screen, e.g.:

screen -dmS -S downloader -X "wget https://google.com"

you get the error No screen session found.. So it's clear that the -X option only works for pre-existing screen sessions. Is it possible to specify a command to be run on the creation of a new screen? If it's not possible in screen, is it possible in another multiplexer like tmux?
Execute command when creating new screen session