output
stringlengths 9
26.3k
| input
stringlengths 26
29.8k
| instruction
stringlengths 14
159
|
---|---|---|
Inspired by doneal24's comment, I solved my issue by loading the modules only when we ssh into a specific node that has a hostname that starts with gpu.
if [[ $(hostname) == gpu* ]]; then
module load PyTorch/1.12.0-foss-2022a-CUDA-11.7.0;
module load pdsh/2.34-GCCcore-11.3.0;
fi
|
On our cluster we use LMOD to load specific pre-installed modules dynamically (like PyTorch or some other scientific packages). On top of that I want to run some code with the DeepSpeed framework which allows for optimisations to run distributed code across nodes. Under the hood it uses pdsh. The issue that I have is that the ssh session's of course do not load the modules that I have already loaded in the main node - but that leads to problems because it can then not find some needed libraries such as Python.
As an example: Let's say that I request an interactive SLURM job with multiple nodes. In the main node I load the modules PyTorch+Python and pdsh
module load PyTorch/1.12.0-foss-2022a-CUDA-11.7.0
module load pdsh/2.34-GCCcore-11.3.0Then, I can run some deepspeed command, which will launch parallel ssh sessions to all the nodes. But, because those are new sessions on those nodes, the modules specified above are not loaded. It was suggested to add these module load commands to my .bashrc but that would mean that they are always loaded, which I may not want.
I'm therefore looking for a way to detech whether a session is set by pdsh. Does pdsh set some variables that I can use in my .bashrc so that I only module load when that condition is true?
|
Detect a pdsh session
|
A guess: you are now actually using MTP for accessing your Walkman, and MTP sucks.
Details
The Operation not supported error could indicate that your Walkman uses an MTP implementation that doesn't support "direct" access. According to http://intr.overt.org/blog/?p=174 this kind of direct access is an Android-specific extension, so it's probably not supported by your Walkman.
As result, you can only use a few selected ways to access files on your Walkman using MTP: I guess everything that reads or writes files in one single operation is supported, while access to selected parts of a file is not supported for these MTP implementations. And it appears that cp and Python always use the latter access method and hence fail.
Possible Workaround
However, you might be able to just replace cp by gvfs-copy. In my tests with a Samsung Android phone (which has a crippled MTP implementation as well) gvfs-copy was able to copy files to the phone where cp failed.
Background
I couldn't find much info about these device-dependent MTP limitations; here are some snippets where the situation is explained somewhat:
https://askubuntu.com/a/284831
https://bugs.launchpad.net/ubuntu/+source/gvfs/+bug/1389001/comments/2
https://bugs.launchpad.net/ubuntu/+source/gvfs/+bug/1157583/comments/1
Why did it work before?
As to why your Walkman was accessible with cp in Mint 14 but not in Mint 17, this might be caused by an internal switch from PTP to MTP as access system. At least that's what I noticed for the Samsung device when switching from Ubuntu 12.04 to 14.04. The phone supports both PTP and MTP, but Ubuntu 12.04 apparently only supports PTP; so that's what was used. Since the new Ubuntu version has built-in support for MTP, this is now used instead.
Actually it might even be the case that your Walkman was previously accessed as USB Mass Storage Device, which is what USB hard disks and flash drives use. Maybe for some reason Linux (or your Walkman) decided that MTP was preferable over Mass Storage access.
You can see the access method used by looking at the URL for the Walkman (in Nautilus, go to the Walkman folder, press Ctrl+L and look at the address bar): for MTP the device is found under eg. mtp://[usb:001,004]/ while for PTP it's something like gphoto2://[usb:001,004]/store_00010001. For Mass Storage access the URL is just a normal path like /media/WALKMAN.
I don't know if MTP has any actual advantages over PTP or Mass Storage, or whether it's possible to switch back to PTP or Mass Storage. Under Linux, both MTP and PTP implementations have their own set of bugs, so it might depend on your use case which one is better. AFAIK Mass Storage is the most desirable option for the user but device support in phones is waning.
|
I am running Linux Mint 17.1 64-bit (based on Ubuntu 14.04). Ever since upgrading from Linux Mint 14/Ubuntu 12.10, the Python script I use to sync music to my Walkman has stopped working.
Previously, when I mounted my Walkman, it would automatically show up as the path /run/user/1000/gvfs/WALKMAN/Storage Media and would work like any other file system: I could copy tracks to it, delete tracks from it, etc, all through Python. However, I can't remember if I had to make any changes to get this to happen.
Since upgrading to Linux Mint 17 (and now 17.1), when I mount the Walkman, it shows up as the path /run/user/1000/gvfs/mtp:host=%5Busb%3A002%2C007%5D/Storage Media. Furthermore, when I try to run the same file operations, they now fail. I have discovered that this happens not just through Python, but on the command line as well. For example:
david@MILTON:~$ cp '/data/Music/10SecsWhiteNoise.mp3' '/run/user/1000/gvfs/mtp:host=%5Busb%3A002%2C006%5D/Storage Media/MUSIC'
cp: cannot create regular file ‘/run/user/1000/gvfs/mtp:host=%5Busb%3A002%2C006%5D/Storage Media/MUSIC/10SecsWhiteNoise.mp3’: Operation not supportedI have done some research on this problem, but the most common explanation seems to be that it was formerly solved by this PPA: https://launchpad.net/~langdalepl/+archive/ubuntu/gvfs-mtp
But now, Ubuntu versions since 13.10 contain all these changes so it should no longer be necessary. So why am I still having these errors? I am still able to do file operations on my Walkman through a graphical file manager (Caja, on Linux Mint), just not via the command line.
|
Unable to perform file operations on a MTP device mounted via GVFS: "Operation not supported"
|
Install jmtpfs (aptitude install jmtpfs) which allows to mount MTP devices.
Create the directory if it doesn't exist already (mkdir /tmp/myphone). Then, the following will mount your phone:
jmtpfs /tmp/myphonejmtpfs will use the first available device. If you've got more than one connected at a time, you can do jmtpfs -l to find out which one is your phone, and use the -device flag to specify it.
As an alternative, you can try go-mtpfs instead.
|
I have installed mtp-tools on my debian7.8.
mtp-files can list all my files on android phone.
How can I mount all my files on android phone on the directory /tmp/myphone?
root@debian:/home/debian# jmtpfs -l
Device 0 (VID=0b05 and PID=0c02) is UNKNOWN.
Please report this VID/PID and the device model to the libmtp development team
Available devices (busLocation, devNum, productId, vendorId, product, vendor):
1, 8, 0x0c02, 0x0b05, UNKNOWN, UNKNOWN
root@debian:/home/debian# jmtpfs /tmp/myphone
Device 0 (VID=0b05 and PID=0c02) is UNKNOWN.
Please report this VID/PID and the device model to the libmtp development team
ignoring usb_claim_interface = -6ignoring usb_claim_interface = -5PTP_ERROR_IO: failed to open session, trying again after resetting USB interface
LIBMTP libusb: Attempt to reset device
Android device detected, assigning default bug flags
fuse: bad mount point `/tmp/myphone': Input/output error
The jmptfs can't mount my phone.
chmod 777 /tmp/myphone After chmod.root@debian:/home/debian# jmtpfs /tmp/myphone
Device 0 (VID=0b05 and PID=0c02) is UNKNOWN.
Please report this VID/PID and the device model to the libmtp development team
Android device detected, assigning default bug flags
root@debian:/home/debian# jmtpfs -l
Device 0 (VID=0b05 and PID=0c02) is UNKNOWN.
Please report this VID/PID and the device model to the libmtp development team
Available devices (busLocation, devNum, productId, vendorId, product, vendor):
1, 5, 0x0c02, 0x0b05, UNKNOWN, UNKNOWN
|
Mount my mtp in my android phone on a directory?
|
Install the jmtpfs package
apt install jmtpfsEdit your /etc/fuse.conf as follows
# Allow non-root users to specify the allow_other or allow_root mount options.user_allow_otherCreate an udev rule. Use lsusb or mtp-detect to get the ID of your device
nano /etc/udev/rules.d/51-android.ruleswith the following line:
SUBSYSTEM=="usb", ATTR{idVendor}=="04e8", ATTR{idProduct}=="6860", MODE="0666", OWNER="[username]"Replace 04e8 and 6860 with yours , then run:
udevadm control --reloadReconnect your device , open the terminal and run:
mkdir ~/mtp
jmtpfs ~/mtp
ls ~/mtpsample output:
Card PhoneTo unmount your device use the following command:
fusermount -u ~/mtpAlso you can use the go-mtpfs tool:Mount MTP devices over FUSEmkdir ~/mtp
go-mtpfs ~/mtpA graphical tool to mount your device : gmtp:simple file transfer program for MTP based devicessudo apt install gmtp
gmtpkio-mtp access to MTP devices for applications using the KDE Platform
|
So I'm trying to share files between the Samsung Galaxy S5 with Android and my Debian9/KDE machine using MTP instead of KDE Connect.
The problem is that I keep getting:The process for the mtp protocol died unexpectedly.When trying to copy over files.
It also often saysNo Storages found. Maybe you need to unlock your device?I can view some of the phone's contents in dolphin after trying for a while: pressing "Allow" whenever the dialog on the phone asks for it while trying to open it in dolphin which correctly detects it as Samsung Galaxy S5.
I once could successfully copy over a bunch of images.
I already tried sudo apt-get install --reinstall libmtp-common. syslog has things like the following:
usb 1-5: usbfs: process 7907 (mtp.so) did not claim interface 0 before use
usb 1-5: reset high-speed USB device number 35 using xhci_hcd
usb 1-5: usbfs: process 7909 (mtp.so) did not claim interface 0 before use
colord-sane: io/hpmud/pp.c 627: unable to read device-id ret=-1
usb 1-5: USB disconnect, device number 35
usb 1-5: new high-speed USB device number 36 using xhci_hcd
usb 1-5: usbfs: process 7930 (mtp.so) did not claim interface 0 before use
usb 1-5: usbfs: process 7930 (mtp.so) did not claim interface 0 before use
usb 1-5: usbfs: process 7930 (mtp.so) did not claim interface 0 before use
|
How to get the Samsung Galaxy S5 to work with MTP on Debian 9?
|
That's precisely it! If a package is orphaned and another has taken its place you almost always want to go with the non-orphaned one for support, updates etc.
In this specific example, as the wiki page you linked says, the only real differences are the name and package status.
|
I recently wanted to connect some LG Android smartphone to my Debian, but the device wasn't detected. I installed the jmtpfs package, and that solved the problem. But in Debian, you have two packages with similar names: mtpfs and jmtpfs. What's the difference between the two? On the Debian wiki there's info that the mtpfs was orphaned a long time ago. Is that the only thing, and that's why I should prefer jmtpfs over mtpfs, or is there something else?
|
What's the difference between mtpfs and jmtpfs packages in Debian?
|
This is an old question, anyway, now in 2023 I can manage a Samsung phone with Android version 9 via Firefox as well as by command line (gnome-terminal) in Ubuntu Desktop 22.04.x LTS.
When the phone is connected via USB (and mounted automatically), I find the path to it using this command,
find /run/user/*/gvfs -maxdepth 1 -name 'mtp:*'Some standard commands do not work, but then I use gio, for example
gio mount ...
gio copy ...See man gio for details.
I made a small shellscript, than can mount, read, write and unmount the phone. It is taylored for my specific needs, but it might help you create your own shellscript, so if you wish I can copy my shellscript into this answer.
|
I would like to mount my Samsung Galaxy S7 to an folder using simple-mtpfs and I cannot do it as I used to (on previous Fedora and older Galaxy S4).
If I simply plug S7 to my computer, I can browse it using Nautilius, but I cannot access it in terminal as ordinary folder, what is exactly what I want to achieve.
Every time I plug S7 I check twice that it works in MTP mode, so that isn't the problem.
In the past, I simply plugged S4 and typed:
simple-mtpfs /home/adam/S4Now, I can perform it and even my phone ask me to confirm MTP choice, but the catalogue S7 is still empty.
I also tried to mount it as root or ordinary user and by device number, but with no result.
# simple-mtpfs --list-devices
1: SamsungGalaxy models (MTP)$ simple-mtpfs --device 1 /home/adam/S7
# simple-mtpfs --device 1 /media/s7$ simple-mtpfs /dev/libmtp-3-1 /home/adam/s7
# simple-mtpfs /dev/libmtp-3-1 /media/s7I even tried to do it by udev rules:
# dmesg | tail
[16821.258485] usb 3-1: Product: SAMSUNG_Android
[16821.258487] usb 3-1: Manufacturer: SAMSUNG
[16821.258489] usb 3-1: SerialNumber: 98867?????????????
[16827.556099] usb 3-1: USB disconnect, device number 29
[16830.383366] usb 3-1: new high-speed USB device number 30 using xhci_hcd
[16830.548882] usb 3-1: New USB device found, idVendor=04e8, idProduct=6860
[16830.548887] usb 3-1: New USB device strings: Mfr=2, Product=3, SerialNumber=4
[16830.548903] usb 3-1: Product: SAMSUNG_Android
[16830.548905] usb 3-1: Manufacturer: SAMSUNG
[16830.548907] usb 3-1: SerialNumber: 98867?????????????# touch /etc/udev/rules.d/10-phone.rulesContent of /etc/udev/rules.d/10-phone.rules is set to:
SUBSYSTEM=="usb", ATTR{idVendor}=="04e8", ATTR{idProduct}="6860", SYMLINK="S7"After reloading rules I have /dev/S7 and I've tried to mount it:
# udevadm control --reload-rules# ls -l /dev/S7
lrwxrwxrwx. 1 root root 15 10-20 15:03 /dev/S7 -> bus/usb/003/075# ls -l /dev/libmtp-3-1
lrwxrwxrwx. 1 root root 15 10-20 15:03 /dev/libmtp-3-1 -> bus/usb/003/075# simple-mtpfs /dev/S7 /media/s7And still without any result. Mounting doesn't give and errors, but the directory where I about to mount is still empty.
The details about my setup:
# uname -r
4.7.7-200.fc24.x86_64# rpm -qa | grep mtp
simple-mtpfs-0.2-6.fc24.x86_64
libmtp-1.1.11-1.fc24.x86_64
gvfs-mtp-1.28.3-1.fc24.x86_64# rpm -qa | grep fuse
fuse-libs-2.9.7-1.fc24.x86_64
glusterfs-fuse-3.8.4-1.fc24.x86_64
fuse-2.9.7-1.fc24.x86_64
gvfs-fuse-1.28.3-1.fc24.x86_64Extract from system log (Fedora's journalctl) after plugging the phone and typing simple-mtpfs /media/s7 :
# journalctl -n 53
-- Logs begin at śro 2016-10-19 21:29:20 CEST, end at sob 2016-10-22 09:26:43 CEST. --
paź 22 09:24:31 PRZEDNICZEK01 kernel: usb 3-1: USB disconnect, device number 10
paź 22 09:24:31 PRZEDNICZEK01 PackageKit[1559]: get-updates transaction /384_eccedcee from uid 1000 finished with success after 45ms
paź 22 09:24:32 PRZEDNICZEK01 kernel: usb 3-1: new high-speed USB device number 11 using xhci_hcd
paź 22 09:24:32 PRZEDNICZEK01 kernel: usb 3-1: New USB device found, idVendor=04e8, idProduct=6860
paź 22 09:24:32 PRZEDNICZEK01 kernel: usb 3-1: New USB device strings: Mfr=2, Product=3, SerialNumber=4
paź 22 09:24:32 PRZEDNICZEK01 kernel: usb 3-1: Product: SAMSUNG_Android
paź 22 09:24:32 PRZEDNICZEK01 kernel: usb 3-1: Manufacturer: SAMSUNG
paź 22 09:24:32 PRZEDNICZEK01 kernel: usb 3-1: SerialNumber: 98867?????????????
paź 22 09:24:32 PRZEDNICZEK01 gvfsd[1813]: PTP: reading event an error 0x02ff occurredDevice 0 (VID=04e8 and PID=6860) is a Samsung Galaxy models (MTP).
paź 22 09:24:32 PRZEDNICZEK01 gvfsd[1813]: LIBMTP ERROR: couldnt parse extension samsung.com/devicestatus:0
paź 22 09:24:32 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-WARNING **: Could not find parent node for URI:'mtp://[usb:003,011]/'
paź 22 09:24:32 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-WARNING **: NOTE: URI theme may be outside scheme expected, for example, expecting 'file://' when given 'http://' prefix.
paź 22 09:24:32 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-WARNING **: Could not find parent node for URI:'mtp://[usb:003,011]/'
paź 22 09:24:32 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-WARNING **: NOTE: URI theme may be outside scheme expected, for example, expecting 'file://' when given 'http://' prefix.
paź 22 09:24:32 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-CRITICAL **: Could not set mount point in database 'urn:nepomuk:datasource:5e7b19a6b9795726a5c47a99a89757bf', GDBus.Error:org.freedesktop.Tracker1.SparqlError.Internal: UNIQUE constraint
paź 22 09:24:32 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-CRITICAL **: Could not set mount point in database 'urn:nepomuk:datasource:5c7e6bb78b9a6691c3ecea3925b2971d', GDBus.Error:org.freedesktop.Tracker1.SparqlError.Internal: UNIQUE constraint
paź 22 09:24:34 PRZEDNICZEK01 org.gnome.Shell.desktop[1832]: (gnome-shell:1832): Gjs-WARNING **: JS ERROR: TypeError: is null
paź 22 09:24:34 PRZEDNICZEK01 org.gnome.Shell.desktop[1832]: ContentTypeDiscoverer<._onContentTypeGuessed/<@resource:///org/gnome/shell/ui/components/autorunManager.js:133
paź 22 09:24:34 PRZEDNICZEK01 org.gnome.Shell.desktop[1832]: _proxyInvoker/asyncCallback@resource:///org/gnome/gjs/modules/overrides/Gio.js:86
paź 22 09:24:34 PRZEDNICZEK01 gvfsd[1813]: ** (process:3243): WARNING **: send_infos_cb: No such interface 'org.gtk.vfs.Enumerator' on object at path /org/gtk/vfs/client/enumerator/18 (g-dbus-error-quark, 19)
paź 22 09:24:34 PRZEDNICZEK01 gvfsd[1813]: ** (process:3243): WARNING **: send_infos_cb: No such interface 'org.gtk.vfs.Enumerator' on object at path /org/gtk/vfs/client/enumerator/18 (g-dbus-error-quark, 19)
paź 22 09:24:34 PRZEDNICZEK01 gvfsd[1813]: ** (process:3243): WARNING **: send_infos_cb: No such interface 'org.gtk.vfs.Enumerator' on object at path /org/gtk/vfs/client/enumerator/18 (g-dbus-error-quark, 19)
paź 22 09:24:34 PRZEDNICZEK01 gvfsd[1813]: ** (process:3243): WARNING **: send_done_cb: No such interface 'org.gtk.vfs.Enumerator' on object at path /org/gtk/vfs/client/enumerator/18 (g-dbus-error-quark, 19)
paź 22 09:24:35 PRZEDNICZEK01 PackageKit[1559]: get-updates transaction /385_decdbbba from uid 1000 finished with success after 45ms
paź 22 09:26:37 PRZEDNICZEK01 kernel: usb 3-1: usbfs: process 3385 (simple-mtpfs) did not claim interface 0 before use
paź 22 09:26:37 PRZEDNICZEK01 kernel: usb 3-1: reset high-speed USB device number 11 using xhci_hcd
paź 22 09:26:38 PRZEDNICZEK01 kernel: usb 3-1: usbfs: process 3385 (simple-mtpfs) did not claim interface 0 before use
paź 22 09:26:38 PRZEDNICZEK01 kernel: usb 3-1: usbfs: process 3250 (events) did not claim interface 0 before use
paź 22 09:26:40 PRZEDNICZEK01 kernel: usb 3-1: USB disconnect, device number 11
paź 22 09:26:40 PRZEDNICZEK01 kernel: usb 3-1: new high-speed USB device number 12 using xhci_hcd
paź 22 09:26:40 PRZEDNICZEK01 kernel: usb 3-1: New USB device found, idVendor=04e8, idProduct=6860
paź 22 09:26:40 PRZEDNICZEK01 kernel: usb 3-1: New USB device strings: Mfr=2, Product=3, SerialNumber=4
paź 22 09:26:40 PRZEDNICZEK01 kernel: usb 3-1: Product: SAMSUNG_Android
paź 22 09:26:40 PRZEDNICZEK01 kernel: usb 3-1: Manufacturer: SAMSUNG
paź 22 09:26:40 PRZEDNICZEK01 kernel: usb 3-1: SerialNumber: 98867?????????????
paź 22 09:26:41 PRZEDNICZEK01 gvfsd[1813]: PTP: reading event an error 0x02ff occurredDevice 0 (VID=04e8 and PID=6860) is a Samsung Galaxy models (MTP).
paź 22 09:26:41 PRZEDNICZEK01 gvfsd[1813]: LIBMTP ERROR: couldnt parse extension samsung.com/devicestatus:0
paź 22 09:26:41 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-WARNING **: Could not find parent node for URI:'mtp://[usb:003,012]/'
paź 22 09:26:41 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-WARNING **: NOTE: URI theme may be outside scheme expected, for example, expecting 'file://' when given 'http://' prefix.
paź 22 09:26:41 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-WARNING **: Could not find parent node for URI:'mtp://[usb:003,012]/'
paź 22 09:26:41 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-WARNING **: NOTE: URI theme may be outside scheme expected, for example, expecting 'file://' when given 'http://' prefix.
paź 22 09:26:41 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-CRITICAL **: Could not set mount point in database 'urn:nepomuk:datasource:0e6a8582e05ac627e4014d1ca1e6ec87', GDBus.Error:org.freedesktop.Tracker1.SparqlError.Internal: UNIQUE constraint
paź 22 09:26:41 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-CRITICAL **: Could not set mount point in database 'urn:nepomuk:datasource:5c7e6bb78b9a6691c3ecea3925b2971d', GDBus.Error:org.freedesktop.Tracker1.SparqlError.Internal: UNIQUE constraint
paź 22 09:26:41 PRZEDNICZEK01 dbus-daemon[1760]: [session uid=1000 pid=1760] Activating service name='org.gnome.Shell.HotplugSniffer' requested by ':1.16' (uid=1000 pid=1832 comm="/usr/bin/gnome-shell " label="unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023")
paź 22 09:26:41 PRZEDNICZEK01 dbus-daemon[1760]: [session uid=1000 pid=1760] Successfully activated service 'org.gnome.Shell.HotplugSniffer'
paź 22 09:26:42 PRZEDNICZEK01 org.gnome.Shell.desktop[1832]: (gnome-shell:1832): Gjs-WARNING **: JS ERROR: TypeError: is null
paź 22 09:26:42 PRZEDNICZEK01 org.gnome.Shell.desktop[1832]: ContentTypeDiscoverer<._onContentTypeGuessed/<@resource:///org/gnome/shell/ui/components/autorunManager.js:133
paź 22 09:26:42 PRZEDNICZEK01 org.gnome.Shell.desktop[1832]: _proxyInvoker/asyncCallback@resource:///org/gnome/gjs/modules/overrides/Gio.js:86
paź 22 09:26:43 PRZEDNICZEK01 gvfsd[1813]: ** (process:3399): WARNING **: send_infos_cb: No such interface 'org.gtk.vfs.Enumerator' on object at path /org/gtk/vfs/client/enumerator/17 (g-dbus-error-quark, 19)
paź 22 09:26:43 PRZEDNICZEK01 gvfsd[1813]: ** (process:3399): WARNING **: send_infos_cb: No such interface 'org.gtk.vfs.Enumerator' on object at path /org/gtk/vfs/client/enumerator/17 (g-dbus-error-quark, 19)
paź 22 09:26:43 PRZEDNICZEK01 gvfsd[1813]: ** (process:3399): WARNING **: send_infos_cb: No such interface 'org.gtk.vfs.Enumerator' on object at path /org/gtk/vfs/client/enumerator/17 (g-dbus-error-quark, 19)
paź 22 09:26:43 PRZEDNICZEK01 gvfsd[1813]: ** (process:3399): WARNING **: send_done_cb: No such interface 'org.gtk.vfs.Enumerator' on object at path /org/gtk/vfs/client/enumerator/17 (g-dbus-error-quark, 19)
paź 22 09:26:43 PRZEDNICZEK01 PackageKit[1559]: get-updates transaction /386_acdeddea from uid 1000 finished with success after 48ms
|
Mount Samsung Galaxy S7 using simple-mtpfs
|
Un-comment the user_allow_other on your /etc/fuse.conf file.
Mount your android device with allow_other option on your home directory without sudo :
$ mkdir phone
$ jmtpfs -o allow_other phone/
|
I'm using Moto G4 play with Marshmallow; and CentOS 7. I'm able to mount the device and see the content as root, by using jmtpfs. First time I used sudo then I switched as root and then mounted, works fine on both occasions; but files are only visible for root user. personal user is not able to access the content
[shiva@jayan ~]$ sudo su -
[sudo] password for shiva:
Last login: Wed Dec 6 23:51:54 IST 2017 on pts/0
[root@jayan ~]# jmtpfs /media/phone/
Device 0 (VID=22b8 and PID=2e82) is UNKNOWN.
Please report this VID/PID and the device model to the libmtp development team
Android device detected, assigning default bug flags
[root@jayan ~]# ll /media/phone/
total 0
drwxr-xr-x. 38 root root 0 Dec 3 4453203 Internal storageBut when I try to view it as my user, I get permission denied, as a dumb attempt I even tried to change user after mounting.
[shiva@jayan /]$ ll /media/phone
ls: cannot access /media/phone: Permission denied
[shiva@jayan /]$ cd media/
[shiva@jayan media]$ ll
ls: cannot access phone: Permission denied
total 0
d????????? ? ? ? ? ? phone
[shiva@jayan media]$ sudo chown shiva:shiva phone
[sudo] password for shiva:
chown: changing ownership of ‘phone’: Function not implementedThen I tried to mount from my user, it detected no mtp :(
[root@jayan ~]# fusermount -u /media/phone
[root@jayan ~]# exit
logout
[shiva@jayan ~]$ jmtpfs /media/phone/
No mtp devices found.Now my question is how to resolve this?
How to make mtp devices available for my user (or)
How to access files after mounting it as root! I tried using sudo chmod -R 775, it ran forever :'( yet was not able to access those files
|
How to make Android MTP in CentOS7 available for all users?
|
There is a common library used to handle the Media Transfer Protocol. Installing various related MTP tools (and thus their dependencies) will ensure that the actually needed libraries and other required components are present.
From comments it appears installing some or any of packages jmtpfs, mtp-tools, libmtp-runtime is enough to provide the needed libraries and other code in place for LXDE to use it properly.
|
I am running Debian Linux 11 with LXDE and I am trying to connect a Galaxy Tab A tablet.
When I do, I get the Removable Medium Is Inserted dialog with the type of medium being "removable disk" and the only action available is "Open in File Manager". I click OK and I get an Error dialog box with the do-not-enter icon and the message
The name :1.31 was not provided by any .service files
It is always this message (with that specific number).
The file manager opens up to "mtp://SAMSUNG_SAMSUNG_Android_b269fabb" but it is always empty (there should be a Card directory and a Tablet directory).
What does the error message mean? Is Linux looking for a driver that is not installed? If so, which one and how do I install it?
Thanks.
|
How do I resolve "The name was not provided by any .service files" when connecting Galaxy Tab A via USB?
|
Solved
The problem was "Gvfs-Backends" package was not installed. Installed it via Synaptic Package Manager. How it became uninstalled is a mystery. I found this by comparing Gvfs packages on a working laptop to the not working laptop in the Synaptic Package Manager, for those who may encounter this problem. So everything now behaves as it should.
Thanks all,
Rudi
|
Have two Android phones Motorola and HTC both not working on Cinnamon 18.
They were both working on Cinnamon 17.2.
I have installed MTP tools. The commands mtp-detect and lsusb both return Vids and Pids with no apparent errors.
Plugging in the phones I select File Transfers but nothing happens ie no connection sound and no auto nemo popup, opening nemo manually shows no mobile phone.
I also have USB Debugging on under Developer Options.
Any help I'd be grateful
Thank you.
|
MTP File Transfer not working on Linux Cinnamon 18
|
You should install the required packages:
sudo apt-get install libmtp-dev mtp-tools mtpfsConnect your device then run mtp-detect, this command will detect and give you some information about your device.
Run mtp-connect then mtp-folders to display your folders with their ID
the mtp-files will display your files/folders with their ID
to create a list file run:
mtp-files > file_list.txtUse the command mtp-getfile to copy file from your device to your computer , there is an example from debian wiki:file_list.txt will now contain entries like this:File ID: 81
Filename: WP_20161029_16_26_49_Pro.jpg
File size 936160 (0x00000000000E48E0) bytes
Parent ID: 12
Storage ID: 0x00010001
Filetype: JPEG filewhere "Parent ID" is something like the folder where the file resides on the smartphone. So you'll want to do something like this to get that particular file:mkdir "12"
mtp-getfile "81" "12/WP_20161029_16_26_49_Pro.jpg"
|
Have a current project where I'm trying to figure out a way to copy files(a video) from a MTP device over USB.
From the wiki I discover there is an open-source implementation called libmtp. Has anyone reading this used this? Any examples, tutorials?
I'd prefer to run Ubuntu with MATE.
Unix-like systems
A free and open-source implementation of the Media Transfer Protocol is available as libmtp.This library incorporates product and device IDs from many sources, and is commonly used in other software for MTP support.
|
Copying file from MTP device using libmtp (over USB)
|
If the MTP function is not available you can transfer your file through FTP.
The easy way :install the Wifi File Transfer application on your android
device
Start the application , it will give you the username , the password and the url e,g 192.169.1.150:3332 ( the 3332 port should be allowed through the firewall )
Open the browser and type 192.169.1.150:3332 , type the password and the username
Drag an drop your files ( bidirectional) after giving the username and the password generated by the android application.
|
Apparently Motorola's phones have MTP disabled (?) and can't find a way to transfer the files to my computer. Have read a bunch of links and none of them work. Have created the file /etc/udev/51-android.rules (sources 1 and 2) and /etc/udev/69-libmtp.rules (source) as the links show and nothing works.
The 51 file as I have it written
SUBSYSTEM=="usb", ATTR{idVendor}=="22b8", ATTR{idProduct}="2e76". MODE="0666", GROUP="plugdev"the 69 file as I have it written
# Motorola Moto G (MTP+?)
ATTR{idVendor}=="22b8", ATTR{idProduct}=="2e76", SYMLINK+="libmtp-%k", MODE="660", GROUP="audio", ENV{ID_MTP_DEVICE}="1", ENV{ID_MEDIA_PLAYER}="1"Since I am not really sure of what any of it means I have tried changing the ID of the vendor to 03f0 (hp, my computer) (know it makes no sense but I had to try since nothing was working), and the ID of the product to 2e82 a supposedly earlier version of my phone.
Other sources I have read that havent worked link
|
mount moto g on debian
|
Since it's a FUSE fs mounted on the mountpoint, you use the ordinary FUSE unmount method:
fusermount -u /home/sh/srv/mtpIf that fails, you can try:
fusermount -u -z /home/sh/srv/mtp
|
I have ~/srv/mtp, which I do not remember how I created. It was back when I was trying to set up a MTP mountpoint using various tools for that.
Currently I can't do anything to the directory.
○ file mtp
mtp: ERROR: cannot open `mtp' (Input/output error)cannot read `mtp' (Is a directory)○ ll | grep mtp
ls: cannot access mtp: Input/output error
d????????? ? ? ? ? ? mtp○ rm -rf mtp
rm: cannot remove ‘mtp’: Is a directory○ sudo rm -rf mtp
[sudo] password for sh:
rm: cannot remove ‘mtp’: Is a directoryMoreover, thunar can't even list ~/srv, which is the directory containing mtp, and which I visit quite often.
How can I fix that?
EDIT: in the output of mount it's mentioned as
jmtpfs on /home/sh/srv/mtp type fuse.jmtpfs (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
|
How can I delete a former MTP directory, which now gives me input/output error whenever I try?
|
In Debian 9, there's a FUSE-based filesystem jmtpfs, packaged with the same name. It seems to work quite well with Android 8 at least. Perhaps it is available as a package for Mint too?
As it's a real FUSE filesystem, it is actually mountable, and so usable from the command line.
The source code and the developer's introduction to jmtpfs is here.
|
I can't figure out, how to access an Android 7.0 phone's storage via MTP from terminal?
Phone: Honor 8
System: Linux Mint 18.3 Cinnamon 64-bit
I can access it via Nemo file manager as:
mtp://[usb:003,002]/But I can't see it inside neither:
/var/run/user/$UID/gvfs/, nor inside:
/run/user/$UID/gvfs/I also tried to find it via mount command to list all of the mounted devices:
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=8131796k,nr_inodes=2032949,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=1631104k,mode=755)
/dev/sdb2 on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (rw,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids,release_agent=/run/cgmanager/agents/cgm-release-agent.pids)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event,release_agent=/run/cgmanager/agents/cgm-release-agent.perf_event)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb,release_agent=/run/cgmanager/agents/cgm-release-agent.hugetlb)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset,clone_children)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=38,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=13805)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/sdb1 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
cgmfs on /run/cgmanager/fs type tmpfs (rw,relatime,size=100k,mode=755)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=1631104k,mode=700,uid=1000,gid=1000)
/dev/sr0 on /media/vlastimil/My CDROM type iso9660 (ro,nosuid,nodev,relatime,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500,uhelper=udisks2)But I see there is only /dev/sr0.
My phone shows 2 devices in file manager:My CDROM:
/media/vlastimil/My CDROMwith the contents to be Windows program to control the phone.
FRD L09:
mtp://[usb:003,002]/with contents being Internal storage and SD card.
|
How to access an Android 7.0 phone via MTP from terminal?
|
I found the solution here.
It is a bug produces by a gnome-shell extension called "places status indicator".
Disabling this extension solves the problem.
To disable this extension, go to terminal and type
gnome-shell-extension-prefs
and scroll down to places status indicator and turn it off.
|
After upgrading to kali 2018(kernel 4.14), Something went wrong. Whenever I connect my android device to any of the USB port, it logs out and system returns to logging on screen, shutting down all the processes. After logging in, it behaves like the computer has been shut down.All the work/processes are lost.
I tried setting MTP to "charge only", but the behaviour seems same.
I tried replacing my android device with another. also tried different USB cable.
|
Kali Linux 2018 crash when connecting an USB device
|
I finally managed to solve the mounting problem! I spent three weeks searching for a more adequate solution.
To begin with, I applied these changes to the rules script for Android devices 51-android.rules /etc/udev/rules.d/51-android.rules:
diff --git a/51-android.rules b/51-android.rules
index d75ddb3..65f235c 100644
--- a/51-android.rules
+++ b/51-android.rules
@@ -9,7 +9,7 @@
# https://github.com/M0Rf30/android-udev-rules
# Skip testing for android devices if device is not add, or usb
-ACTION!="add", ACTION!="bind", GOTO="android_usb_rules_end"
+ENV{DEVTYPE}!="usb_device", GOTO="android_usb_rules_end"
SUBSYSTEM!="usb", GOTO="android_usb_rules_end"
# Skip testing for unexpected devices like hubs, controllers or printers
@@ -820,13 +820,16 @@ GOTO="android_usb_rule_match"
LABEL="not_ZTE"
# ZUK
-ATTR{idVendor}=="2b4c", ENV{adb_user}="yes"
+ATTR{idVendor}=="2b4c", ENV{adb_user}="yes", GOTO="android_usb_rule_match"
# Verifone
-ATTR{idVendor}=="11ca", ENV{adb_user}="yes"
+ATTR{idVendor}=="11ca", ENV{adb_user}="yes", GOTO="android_usb_rule_match"
+
+GOTO="android_usb_rules_end"
# Skip other vendor tests
LABEL="android_usb_rule_match"
+TAG+="systemd", SYMLINK+="$env{ID_VENDOR}_$env{ID_MODEL}_$env{ID_REVISION}", ENV{SYSTEMD_WANTS}+="mtp@$env{ID_VENDOR}_$env{ID_MODEL}_$env{ID_REVISION}.service"
# Symlink shortcuts to reduce code in tests above
ENV{adb_adbfast}=="yes", ENV{adb_adb}="yes", ENV{adb_fast}="yes"In this patch, I form a symbolic link to the attached device (I am forming a name from the properties of the device for a better understanding):
TAG+="systemd", SYMLINK+="$env{ID_VENDOR}_$env{ID_MODEL}_$env{ID_REVISION}", ENV{SYSTEMD_WANTS}+="mtp@$env{ID_VENDOR}_$env{ID_MODEL}_$env{ID_REVISION}.service"I also specify the systemd tag (without it, the systemd service call does not work for me) and call the service based on my template (below) /etc/systemd/system/[emailprotected]:
[Unit]
Description=Mounting MTP devices
BindsTo=dev-%i.device
After=dev-%i.device[Service]
Type=oneshot
RemainAfterExit=true
TimeoutStartSec=30
ExecStart=/etc/udev/scripts/mtp.sh add %I
ExecStop=/etc/udev/scripts/mtp.sh remove %I[Install]
WantedBy=dev-%i.deviceIn the system, my device looks like dev-NAME.device:
~ # systemctl --all --full -t device | grep Swift
dev-android.device loaded active plugged Swift_2_Plus
dev-android4.device loaded active plugged Swift_2_Plus
dev-bus-usb-001-007.device loaded active plugged Swift_2_Plus
dev-Wileyfox_Swift_2_Plus_0318.device loaded active plugged Swift_2_Plus
sys-devices-pci0000:00-0000:00:15.0-usb1-1\x2d4.device loaded active plugged Swift_2_PlusIn the handler script that is called systemd I create a directory (in my case it's Wileyfox_Swift_2_Plus_0318) and try to mount my device there. If successful, it is mounted, if not, then unmounting is triggered:
#! /bin/sh. /etc/thinstation.env
. $TS_GLOBALACTION=$1
DEVICE_NAME=$2
MOUNT=/bin/aft-mtp-mountCURRENT_DEVICE_MOUNT_PATH=$BASE_MOUNT_PATH/$USB_MOUNT_DIR/$DEVICE_NAME_logger() {
echo "$2" | systemd-cat -p $1 -t "mtp"
}_mounted() {
if [ -n "$(grep -oe "$1" /proc/mounts)" ]; then
return 0
else
return 1
fi
}_mount() {
_logger info "mount $DEVICE_NAME"
if [ -d $CURRENT_DEVICE_MOUNT_PATH ] && _mounted $CURRENT_DEVICE_MOUNT_PATH; then
_logger warning "$DEVICE_NAME already mounted"
exit 1
fi
if [ ! -d $CURRENT_DEVICE_MOUNT_PATH ]; then
mkdir $CURRENT_DEVICE_MOUNT_PATH
if is_enabled "$USB_STORAGE_SYNC" && [ ! -n "$(echo $USB_MOUNT_OPTIONS | grep -e sync)" ]; then
USB_MOUNT_OPTIONS=$USB_MOUNT_OPTIONS,sync
fi
$MOUNT -o $USB_MOUNT_OPTIONS $CURRENT_DEVICE_MOUNT_PATH
if _mounted $CURRENT_DEVICE_MOUNT_PATH && [ "$(ls -A $CURRENT_DEVICE_MOUNT_PATH)" ]; then
_logger info "mounted $DEVICE_NAME in $CURRENT_DEVICE_MOUNT_PATH"
else
_logger warning "$DEVICE_NAME failed to mount"
_umount
exit 2
fi
else
_logger warning "$CURRENT_DEVICE_MOUNT_PATH already exists"
exit 3
fi
}_umount() {
_logger info "unmount $DEVICE_NAME"
if [[ -d $CURRENT_DEVICE_MOUNT_PATH ]]; then
while _mounted $CURRENT_DEVICE_MOUNT_PATH; do
umount $CURRENT_DEVICE_MOUNT_PATH
done
_logger info "unmounted $DEVICE_NAME in $CURRENT_DEVICE_MOUNT_PATH"
rm -r $CURRENT_DEVICE_MOUNT_PATH
else
_logger warning "$DEVICE_NAME was not mounted"
fi
}if [ $ACTION == "add" ]; then
_mount
elif [ $ACTION == "remove" ]; then
_umount
fiexit 0Thus, I managed to mount the device and booting the operating system with the device connected to it will also work out without problems.
Demonstration of automatic mounting of an MTP device.
I made a repository if improvements are needed.
|
Linux: build based on the Thinstation 6.2 project (on systemd)
Task: automatic mounting of MTP devices (namely Android phones) when connected to a thin client
Problem: error loading the distribution when the device is connected or incorrect mounting of the device.
Software used when mounting: simple-mtpfs, android-file-transfer-linux, android-udev-rulesThe first attempt
In rules 51-android.rules, I added the execution of the program in case I connect my phone to a thin client:
...
# Skip other vendor tests
LABEL="android_usb_rule_match"
RUN+="/etc/udev/scripts/mtp.sh"
...My script /etc/udev/scripts/mtp.sh that performed device mounting when connected:
#!/bin/sh. /etc/thinstation.env
. $TS_GLOBALif [[ $ACTION == "add" ]]; then
echo "********************* START MOUNT *******************" | systemd-cat -p info -t "mtp"
/bin/aft-mtp-mount /phone
grep -oe "/phone" /proc/mounts | systemd-cat -p err -t "mtp"
ls "/phone/Внутренний общий накопитель" | systemd-cat -p err -t "mtp"
echo "********************** END MOUNT ********************" | systemd-cat -p info -t "mtp"
elif [[ $ACTION == "remove" ]]; then
echo "******************** START UNMOUNT ******************" | systemd-cat -p info -t "mtp"
umount /phone
echo "********************* END UNMOUNT *******************" | systemd-cat -p info -t "mtp"
fiP.S. I use the construction of recording logs as echo ..., since the logger from BusyBox does not write to the log on TS.
I used a logging script to see why my device was NOT mounted. Or rather, it was mounted and after some time fell off, while it was not visible in the system as a mounted device.
Googling on the internet, I found information that systemd by default runs systemd-udevd.service with a separate "mount namespace".
The second attempt
I rewrote the rules in 51-android.rules as follows:
...
# Skip other vendor tests
LABEL="android_usb_rule_match"
ACTION=="add", PROGRAM="/bin/aft-mtp-mount -p [emailprotected] $env{ID_MODEL}_$env{ID_VENDOR}", ENV{SYSTEMD_WANTS}+="%c"
...and in /etc/systemd/system/[emailprotected] placed the following:
[Service]
Type=oneshot
ExecStart=/etc/udev/scripts/mtp.shAccordingly, I rewrote the script /etc/udev/scripts/mtp.sh that would respond only to the add event:
#!/bin/sh. /etc/thinstation.env
. $TS_GLOBAL/bin/aft-mtp-mount /phoneIt is logical to assume that during testing I unmounted the directory myself, without a script. As a result, the phone was not always connected the first time. It feels like he had some kind of timeout for the connection limit of a few minutes. In the end, I never figured out what the problem was.
The third attempt
My whole process of solving this problem is similar to something similar that is already available on stackoverflow. I tried to reproduce the solution from this post.
Corrected the 51-android.rules script in this way:
...
# Skip other vendor tests
LABEL="android_usb_rule_match"
ACTION=="add", RUN+="/bin/systemctl start mtp@$env{ID_VENDOR}_$env{ID_MODEL}_$env{ID_REVISION}.service"
ACTION=="remove", RUN+="/bin/systemctl stop mtp@$env{ID_VENDOR}_$env{ID_MODEL}_$env{ID_REVISION}.service"
...and in /etc/systemd/system/[emailprotected] placed the following:
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/etc/udev/scripts/mtp.sh add %I
ExecStop=/etc/udev/scripts/mtp.sh remove %IAnd the mount execution script /etc/udev/scripts/mtp.sh:
#! /bin/sh. /etc/thinstation.env
. $TS_GLOBALACTION=$1
DEVICE_NAME=$2
MOUNT=/bin/aft-mtp-mountCURRENT_DEVICE_MOUNT_PATH=$BASE_MOUNT_PATH/$USB_MOUNT_DIR/$DEVICE_NAME_logger() {
echo "$2" | systemd-cat -p $1 -t "mtp"
}_mounted() {
if [ -n "$(grep -oe "$1" /proc/mounts)" ]; then
return 0
else
return 1
fi
}_mount() {
_logger info "mount $DEVICE_NAME"
if [ -d $CURRENT_DEVICE_MOUNT_PATH ] && _mounted $CURRENT_DEVICE_MOUNT_PATH; then
_logger warning "$DEVICE_NAME already mounted"
exit 1
fi
if [ ! -d $CURRENT_DEVICE_MOUNT_PATH ]; then
mkdir $CURRENT_DEVICE_MOUNT_PATH
if is_enabled "$USB_STORAGE_SYNC" && [ ! -n "$(echo $USB_MOUNT_OPTIONS | grep -e sync)" ]; then
USB_MOUNT_OPTIONS=$USB_MOUNT_OPTIONS,sync
fi
$MOUNT -o $USB_MOUNT_OPTIONS $CURRENT_DEVICE_MOUNT_PATH
if _mounted $CURRENT_DEVICE_MOUNT_PATH && [ "$(ls -A $CURRENT_DEVICE_MOUNT_PATH)" ]; then
_logger info "mounted $DEVICE_NAME in $CURRENT_DEVICE_MOUNT_PATH"
else
_logger warning "$DEVICE_NAME failed to mount"
_umount
exit 2
fi
else
_logger warning "$CURRENT_DEVICE_MOUNT_PATH already exists"
exit 3
fi
}_umount() {
_logger info "unmount $DEVICE_NAME"
if [[ -d $CURRENT_DEVICE_MOUNT_PATH ]]; then
while _mounted $CURRENT_DEVICE_MOUNT_PATH; do
umount $CURRENT_DEVICE_MOUNT_PATH
done
_logger info "unmounted $DEVICE_NAME in $CURRENT_DEVICE_MOUNT_PATH"
rm -r $CURRENT_DEVICE_MOUNT_PATH
else
_logger warning "$DEVICE_NAME was not mounted"
fi
}if [ $ACTION == "add" ]; then
_mount
elif [ $ACTION == "remove" ]; then
_umount
fiexit 0Now everything works as it was intended! But there is a nuance.
If the phone is initially attached to a thin client during its boot process, the system refuses to turn on. If the phone is disconnected during the boot process, then the thin client loads without problems and mounting also works after connection.
It seems that in the process of downloading a thin client, systemd-udevd tries to read the 51-android.rules rules and call the phone mount script, which creates a further problem with loading.
I tried to pull out the download data and this is what I saw:
Systemd services loading schedule
Kernel log
I don't understand, maybe I implemented everything incorrectly, or there are some pitfalls. It seems that I tried to try different options. It would be possible to try to mount the device via systemd-mount, but it does not support the MTP protocol to mount it. Or i can somehow implement it by other means? I have not found any additional information on this topic. I ask for help from those who know, since I have reached a dead end on the Arch forum. I have created themes on GitHub, but so far there is no progress either.
|
Mount MTP devices using systemd
|
Yes, you can do it with screen which has multiuser support.
First, create a new session:
screen -d -m -S multisessionAttach to it:
screen -r multisessionTurn on multiuser support:
Press Ctrl-a and type (NOTE: Ctrl+a is needed just before each single command, i.e. twice here)
:multiuser on
:acladd USER ← use username of user you want to give access to your screenNow, Ctrl-a d and list the sessions:
$ screen -ls
There is a screen on:
4791.multisession (Multi, detached)You now have a multiuser screen session. Give the name multisession to acl'd user, so he can attach to it:
screen -x youruser/multisessionAnd that's it.
The only drawback is that screen must run as suid root. But as far as I know is the default, normal situation.
Another option is to do screen -S $screen_id -X multiuser on, screen -S $screen_id -X acladd authorized_user
Hope this helps.
|
I am setting up a server where there are multiple developers working on multiple applications.
I have figured out how to give certain developers shared access to the necessary application directories using the setgid bit and default ACLs to give anyone in a group access.
Many of these applications run under a terminal while in development for easy access. When I work alone, I set up a user for an application and run screen as that user. This has the downside that every developer to use the screen session needs to know the password and it is harder to keep user and application accounts separate.
One way that could work is using screen multiuser features. They do not work out-of-the-box however, screen complains about needing suid root. Does giving that have any downsides? I am pretty careful about using suid root anything. Maybe there is a reason why it isn't the default?
Should I do it with screen or is there some other intelligent way of doing what I want?
|
Sharing a terminal with multiple users (with screen or otherwise)
|
Having searched for a long time I just couldn't find one. There are many multi user servers but I couldn't find one which executed as the system user.
So I wrote one myself. This is only tested as far as I can test it myself. But for what its worth, the source code is here:
https://github.com/couling/WebDAV-Daemon
|
I'm looking to completely decommission my SMBA service and replace it with a WebDav service.
All the google searches so far have pointed me to using Apache / Webdav. This is close to what I need but as far as I read it requires Apache to have access to my user's files and worse; if it creates a file the new file will be owned by Apache (not the user). Note that having files with the correct Unix ownership and permissions is a requirement as some users have direct SSH access.
So I'm quite simply looking for a way to either make Apache / Webdav work "correctly" with multi-users (that is change unix user to the logged in user before attempting to serve the file) or find a complete alternative to Apache / Webdav.
So far searches haven't turned anything up.
|
Is there a multi-user webdav server available for linux?
|
Generally, one runs a server with no actual graphical display attached to it (maybe a very simple one for diagnostic work). Clients connect via a network protocol, either X tunneled over SSH or a remote-desktop protocol like VNC or RDP.
With the former, users execute GUI programs from the remote shell and they show up seamlessly as windows on their client systems. This works well on high-speed networks as long as the graphics aren't intensive, but unfortunately the X protocol is very chatty and not highly efficient. It also requires each client to run an X server, which is automatic on Linux clients, easy on Mac OS, and somewhat cumbersome on Windows.
The other approach is to use VNC or RDP, which run an entire remote desktop session displayed as a window on the client. The actual work is done on the server and a compressed graphics stream delivered to the client program. There's also an in-between option called NX, which uses an optimized version of the X protocol to deliver a similar experience (with some performance improvements over VNC or RDP.) For these approaches, client programs are available for any major (and many minor) operating systems.
There is another entire way to go, though, which matches more what you are imaging: a ginormous octopus-like system extending direct graphical connections from a central server around a small area (or even a whole building). This is known as "Multiseat X", and you can read more about doing that in this article from x.org. The links from there indicate that there's enough interest in doing this to keep the idea alive, although I've never actually seen anyone doing it in my direct experience.
|
Lets pretend we had a *nix rather powerful system...
Now Obviously I know you can set up multiple users to login to a system.......but how exactly do you do that? Like....how would all the monitors connect and such, or would you need a smaller computer node that like....reroutes it or something?
How do System Admins and such set up multiple users for a *nix system? across a large building or something?
|
Multiple Users on a Desktop Environment [closed]
|
In Unix, users are identified by their ID (uid), which must be unique (in the scope of the local system).
So even if it were possible to create 2 different users with the same name
(adduser on my system refuses to do this, see this question for further information Can separate unix accounts share a username but have separate passwords?), they would need to get different uids. While you may be able to manipulate files containing the user information to match your criteria, every program is based on the assumption that uids are unique on the system, so such users would be identical.
EDIT: The other answer demonstrated a case where you have 2 different user names for the same uid - as far as the system is concerned though, this is like having two different names for the same user, so constructs like this should be avoided if possible, unless you specifically want to create an alias for a user on the system (see the unix user alias question on
serverfault for more information on the technicalities).
The system uses these uids to enforce file permissions.
The uid and gid (group id) of the user the file belongs to are written into the metadata of the file. If you carry the disk to another computer with a different user that randomly shares the same uid, the file will suddenly belong to this user on that system. Knowing that uids are usually
not more than 16-bit integers on a unix system, this shows that the uids
are not meant to be globally unique, only unique in scope of the local system.
|
I mean, if two users have the same name, how does the system know that they're actually different users when it enforces file permissions?
This doubt came to my mind while I was considering to rename my home /home/old-arch before reinstalling the system (I have /home on its own partition and I don't format it), so that I could then have a new, pristine /home/arch. I wondered if the new system would give me the old permissions on my files or if it would recognize me as a different arch.
|
How does Linux identify users?
|
Don't dismiss RVM's value
You can use the repository version of Ruby but I would recommend going another way and using RVM to manage Ruby. I realize it might seem like it's slowing you down, but the version of Ruby that's deployed via the repositories though usable will often lead to problems down the road. It's generally best to create dedicated versions of interpreters and any required libraries (Gems) that can be dedicated to a particular application and/or use case.
RVM provides the ability to install for single user (which is what you did) as well as do a multi-user installation.
$ curl -L https://get.rvm.io | sudo bash -s stableRunning the installation this way will automatically trigger RVM to do a multi-user installation which will install the software under /usr/local/rvm. From here the software can be accessed by anyone that's in the Unix group rvm.
$ sudo usermod -a -G rvm <user>Where <user> would be the user webide.
Installing a Ruby
Now add the following to each user's $HOME/.bashrc. I generally put this at the end of the file:
[[ -s /usr/local/rvm/scripts/rvm ]] && source /usr/local/rvm/scripts/rvmWith that, you'll want to logout and log back in.
NOTE1: It isn't enough to start another tab in gnome-terminal, it needs to be a newly logged in session. This is so that the group you just added this user to, get's picked up.
NOTE2: You'll probably not have to add the above to your $HOME/.bashrc if you find you have the following file installed here already, this does the above plus more for all users that are in the group rvm on the system.
$ ls -l /etc/profile.d/rvm.sh
-rwxr-xr-x 1 root root 1698 Nov 27 21:14 /etc/profile.d/rvm.shOnce logged in you'll need to install a Ruby. You can do this using the following steps, as user webide.
What versions available to install?
$ rvm list known | less
...
# MRI Rubies
[ruby-]1.8.6[-p420]
[ruby-]1.8.7[-p374]
[ruby-]1.9.1[-p431]
[ruby-]1.9.2[-p320]
[ruby-]1.9.3[-p484]
[ruby-]2.0.0-p195
[ruby-]2.0.0[-p353]
[ruby-]2.1.0-preview2
[ruby-]2.1.0-head
ruby-head
...NOTE: The 1st time you install a Ruby you should do this with a user that has sudo rights so that dependencies can be installed. For example on Ubuntu, you'll see this type of activity. After these are installed other users, such as webide, should be able to install additional Rubies too, into the directory /usr/local/rvm.
Installing requirements for ubuntu.
Updating system..............................................................................................................
Installing required packages: libreadline6-dev, zlib1g-dev, libssl-dev, libyaml-dev, libsqlite3-dev, sqlite3, autoconf, libgdbm-dev, libncurses5-dev, automake, libtool, bison, libffi-dev...............................................................................................
Requirements installation successful.Viewing installed versions
$ rvm listrvm rubies * ruby-1.9.3-p484 [ x86_64 ]# => - current
# =* - current && default
# * - defaultInstalling a 2nd Ruby
$ whoami
webide$ rvm install 2.0.0-p195
...
ruby-2.0.0-p195 - #validate binary
ruby-2.0.0-p195 - #setup
Saving wrappers to '/usr/local/rvm/wrappers/ruby-2.0.0-p195'........
ruby-2.0.0-p195 - #importing default gemsets, this may take time..................Now when we list what's installed:
$ rvm listrvm rubies * ruby-1.9.3-p484 [ x86_64 ]
ruby-2.0.0-p195 [ x86_64 ]# => - current
# =* - current && default
# * - defaultFrom the above we can see that user webide was able to install a Ruby.
Setting a default for all rvm users
$ rvm use ruby-2.0.0-p195 --default
Using /usr/local/rvm/gems/ruby-2.0.0-p195$ rvm listrvm rubies ruby-1.9.3-p484 [ x86_64 ]
=* ruby-2.0.0-p195 [ x86_64 ]# => - current
# =* - current && default
# * - defaultLogging in as another user that's in the group rvm we can see the effects of making ruby-2.0.0-p195 the default.
$ rvm listrvm rubies=> ruby-1.9.3-p484 [ x86_64 ]
* ruby-2.0.0-p195 [ x86_64 ]# => - current
# =* - current && default
# * - defaultSo this user is using, ruby-1.9.3-p484, and he's now configured to use ruby-2.0.0-p195 as the default too.
Slow downloads/installs
If you're experiencing a slow download you might want to make use of the offline installation method instead. This will allow you to do a re-install later on. Or perhaps the download via this system is problematic, and you could download the RVM installer on one system, and then use scp to copy the installer to this system afterwards.
$ curl -L https://github.com/wayneeseguin/rvm/tarball/stable -o rvm-stable.tar.gzSee here, RVM in offline mode for full details.
ReferencesRVM ArchLinux Wiki
Installing RVM - Quick (guided) Install
|
I intend to use Ruby when programming my Raspberry Pi which is running the Debian based Occidentals. Via SSH, I executed:
curl -L https://get.rvm.io | bash -s stable --rubywhich downloaded the ruby source and compiled it. It tool about 2 hours to complete. I would like to use ruby via AdaFruit's WebIDE - http://learn.adafruit.com/webide/. However, the ruby installation I performed via SSH created a folder called .rvm in the pi user's directory, whereas the WebIDE uses the webide user account.
What is the best way to allow the webide user account access to ruby? I tried moving the .rvm folder from /home/pi to /etc/share, but this didn't work - when trying to use ruby at a terminal I got the error "ERROR: Missing RVM environment file: '/home/pi/.rvm/environments/ruby-2.0.0-p353'" so I must've broken some link.
I'm holding back running another 2hr install for the webide user as I'm sure there's a better way!
|
Making ruby available to all users
|
Please have a look at bash manual:/etc/profile
The systemwide initialization file, executed for interactive login shells
/etc/bash.bashrc
The systemwide initialization file, executed for interactive, non-login shells.
~/.bash_profile
The personal initialization file, executed for interactive login shells
~/.bashrc
The individual per-interactive-shell startup file
~/.bash_logout
The individual login shell cleanup file, executed when a login shell exitsSo you need to put your aliases in /etc/profile or /etc/bash.bashrc in order to make them available for all users.
|
I am looking for a simple way to create a permanent alias for all users. So ~/.bashrc or ~/.bash_profile is not an option.
Hasn't anybody created a program for this? I think it should be a very common need. If not, I can always create a custom Bash script, but I need to know if there is a equivalent of .bash_profile for all users.
In my case, I am using MacOSX v10.9 (Mavericks) and Ubuntu12.04 (Precise Pangolin), but I would like a method that works on major Unix systems.
UPDATE: I was wondering about a program which automatically allows the users to manage a list of permanent of aliases directly from the command-line without having to edit files. It would have options for setting for all users, target users, interative/login shell, etc.
UPDATE 2: Reply to answer of @jimmij
$ su -m
Password:
# cat /etc/profile
alias test343="echo working"
# cat /etc/bash.bashrc
alias test727="echo working"
# test727
bash: test727: command not found
# test343
bash: test343: command not found
|
How to create permanent aliases on Unix-like systems?
|
Absolutely!
From a security perspective separation is a Good Thing (tm) - as your professional and personal usage may have very different risk profiles.
At work you may deal with code for clients, personal data for thousands of individuals, configuration of network devices etc., and that usage may be regulated (depending on your industry, employer, or clients)
At home you may be a bit more relaxed, watching videos, downloading games etc.
Without separation, you run risks which include:Allowing a compromised executable that you pick up at home compromising your work environment.
Accidentally doing something in your professional environment while you think you are in your personal environment - this happens a lot, and one of the workarounds where separation of accounts isn't possible is to have environments well labelled (eg by a different prompt, or coloured background)In reality it also makes a lot of sense to have separation of accounts used for development and production environments, so we do see this in major enterprises.
|
I'm a software developer and I have a new laptop with Ubuntu.
I plan to set up this laptop as a development workstation where I'll do my professional work (for my company) and also I'll develop some personal/home software.
I want to have my professional stuff (applications, software libraries, configurations, etc.) separate (as much as possible, in terms of tydiness, for example, unity launchers, application settings, apps at startup, broswer settings, etc. If I switch from one user to another it has to look like different environments) from my personal stuff.
So I think having separate users for separate contexts: one user for professional work and another user for personal work (and a guest user too). But I'll have two users for only one person (only I use this laptop).
It's not a big problem, but, is it a good practice to have different users for different contexts? Is there a better way to solve this issue?
|
Is it a good practice to have different users for different contexts?
|
There are two obvious answers:Give each user his own virtual machine image. Inside the virtual machine, the user has root access; outside the virtual machine, none at all. If your hardware supports it, kvm will work pretty well for this. And virtual machine images are just files, so they're easy to copy around, etc. You can use copy-on-write storage, which will save some disk space, if that's a concern.
Use the newfangled namespaces support in Linux 3.8, which actually allows everyone on the machine to have root in his own area. Depends especially on what you need root for. (Though, you can actually run an entire separate distro inside a namespace, it just has to share the same kernel).Unlike separate partitions (which are very easy for root to mess with—just mount it), the above two actually are secure (well, you have physical access to the machine, so those vulnerabilities apply regardless).
There are more painful things too, like capabilities and SELinux, depending on why you need root (sudo) access. Or, of course, if you just need a command or two, sudo has built-in support for limiting which commands may be run.
edit: For more information on namespaces, see Namespaces in operation, part 1: namespaces overview, which has six parts in total. Namespaces have been going into Linux slowly, starting several years ago. Part 5 and 6 cover the final part, added in 3.8, which allows any random user to have root in his own namespace.
|
Our class is finally installing Mint Linux on our machines. The problem is our teacher is scared that we'll play war games against the two classes that use the computers. His solution is to install two separate operating systems; due to that fact, that we will need sudo ability, but he doesn't want us to be able to break the OS for the other person (by either playing war games or by making mistakes). His solution is to install two separate operating systems, but I dislike this idea for a couple of reasons. First, we have MBR, so that limits the number of partitions; and second, it's just annoying because they are both Mint Linux, so we'll choose the wrong one a good deal of the time. Anyone know of a way of separating the two operating systems so one person can't screw it up for the other? I'm less worried that we'll play war games because we could do this with the separate partitions and because it's less important. Anyone have any ideas?
I was thinking of limiting the power a user has while still allowing them to use root; however, this could cause problems later on. The teacher wants to control the root account of course.
|
Fully separate two accounts without installing separate operating systems?
|
You can use a configuration management system to do this. Personally, I use Puppet for this. I have a single /etc/passwd and /etc/shadow file and I have Puppet sync it across all my systems. There is an interesting learning curve with them, but definitely tutorials for doing exactly what you want on their website.
I would, however, definitely recommend using LDAP and Kerberos. I know the learning curve is steep, but the security is really good. I know kerbs can be a burden sometimes, but LDAP would probably be acceptable. I have been meaning to set one up.
|
I'm administrating two ubuntu desktops and one debian server.
There are abount ~20 active users on the desktops. A few (5-10) user accounts are added each year and about the same amount become inactive.
I would like to share the user accounts and their respective homes between the two pcs. So far, my plan was to set up some kind of nfs + kerberos (+ldap/nis?), but I think kerberos is overly complicated for this simple purpose. In addition to that, the admin changes every ~2-3 years and I fear that complicated solutions will become unmaintainable for my successors (we are no professionals...).
Is there a way to split up /etc/passwd etc. in different files, so I could store these on the server and copy them to the desktops? Or is there some PAM-module that provides a similar type of "modular" authentication ? (well, except pam_krb5).
What would be the simplest way to achieve that?
|
Easiest way to manage users for two machines
|
This information is not stored by traditional filesystems. You have three main options:See who is accessing it in real time using lsof/fuser or similar;
Set up auditing (take a look at auditd);
Use something like LoggedFS.
|
I would like to find files accessed by specific user (even just read) within a folder tree. I thought the find command had this option, but it actually just searches for owner user. Is there any other command, or command combinations? The stat command offers access information, but doesn't display the user who made access.
|
Unix command to find files read by specific user
|
Programs satisfying the requirements are:Users and Groups (system-config-users), developed by Red Hat, included in RHEL/CentOS 7, has SELinux build dependency, deprecated.[1][2][3]
KUser (kuser), developed by KDE project, included in 4.x versions, unmantained.[4]Mostly identical, after installed they will be accessed under the Users and Groups (KUser) entry under Sundry (System tools) within the Applications menu.
Otherwise at the prompt type their command_name to get the user manager gui to pop up.
Users and Groups (system-config-users)Under preferences there is a checkbox for hide system users & groups so uncheck that to see everything listed in /etc/passwd within the gui.
Installation
Archlinux: AUR package
CentOS: available from the base install, but it not installed by default so you have to manually do a
yum install system-config-usersto have it available.
Fedora: system-config-users can be found as system-config-users-1.3.5-4.el7.noarch.rpm and can be freely obtained from one of the centos or fedora repositories. And I think this can also obtained via doing (depending on the version)
yum groupinstall "Graphical Administration Tools"Suse: I would use there YAST - users utility.
Ubuntu: never packaged it.
KUser (kuser)Installation
Archlinux: AUR package.
Other user management tools
They do not show all users/groups:Cockpit,
the CentOS 8.x replacement to system-config-users.[5]
webmin,
more a system management tool for the browser than just an users/groups manager.
Users and Groups (gnome-system-tools) lets you see all groups but not all users, while
GNOME Settings (gnome-control-center) only users and no groups.Related questionsHow to manage users and groups using GUI?
How to fully manage users and groups with web GUI and create templates for new users?
12.04 - How to add / list / remove a group - Ask Ubuntu
|
Is there a graphical tool that shows (edits) all users and groups on the system, so that one can avoid editing /etc/passwd and /etc/group directly?
GNOME Settings (gnome-control-center) Users view lets only see desktop users and no groups at all.
|
Is there a GUI to edit/add users and groups
|
Don't try making the login name longer, you'll probably find loads of places it breaks.
Note that you don't have a problem with the number of possible login names (you only get UID_MAX-UID_MIN uids anyway, which is 59,000 on my system).
The problem is just with how descriptive they are, but fortunately there's another field intended to be descriptive: the GECOS/comment field. Just put the URL in there, and make the username either a 32-character hash of that, or simply the UID in base 10.
Then, best case you can always hash the URL to get UID, and worst case you need a reverse lookup from URL to UID, which is pretty trivial for 59,000 items - even a grep of /etc/passwd would be sufficient.
|
Would it be difficult to increase the limit or would it just be a one-place change?
I have a system that could really elegantly make use of the Unix multiuser model for process isolation but I'd need much longer usernames than just 32 characters.
Is the name length standardized (POSIX)?
|
Linux/Unix username length limit
|
This document seems to say that multiuser was added in cifs-utils 4.7:
Originally by Igor Druzhinin in cifs-utils 4.7 and overhauled in 5.3. Kernel support in 3.3The reference was rather oblique. Corroborating information seemed needed. Version control may indicate the same as 4.7 is where the man page was added:
manpage: add mount.cifs manpage entry for "multiuser" option
Jeff Layton [Fri, 8 Oct 2010 14:11:58 -0500 (15:11 -0400)]
manpage: add mount.cifs manpage entry for "multiuser" optionSigned-off-by: Jeff Layton <[emailprotected]>Digging into the argument parser was a bit more challenging since multiuser is not present in the source code, so corroborating information was not located in other source files.
On the other hand, this web page talks about using multiuser on CentOS 7, and says something like:
When a Samba share is mounted, the mount credentials determine the access
permissions on themount point by default. The new multiuser mount option
separates the mount credentials from the credentials used to determine file
access for each user. In CentOS/RHEL 7, this can be used with sec=ntlmssp
authentication (contrary to the mount.cifs(8) man page).http://rpm.pbone.net shows cifs-utils 4.5 - 4.6 were native releases for RHEL 5, so it would not be surprising if it would not work, but, also according to http://rpm.pbone.net cifs-utils 5.9 has been built for RHEL 5 (by another vendor), so perhaps there is some hope if one wanted to deviate from the distribution supplied packages.
The "contrary to the mount.cifs(8) man page" comment looks a bit like a red flag, in that sec=ntlmssp mentioned. Have you used multiuser successfully elsewhere?
What exactly have you tried? Please give actual examples.
|
I was going to use the "multiuser" option for mount.cifs, but /var/log/messages reports:
kernel:CIFS: unknown mount option multiuserKernel is 2.6.18-433
mount.cifs is 1.12RHI can't find information which version of mount.cifs is supporting multiuser. I guess RHEL 5.11 and its kernel are too old for it? Can anyone confirm?
|
mount.cifs not supporting 'multiuser'?
|
For Yaws, you're looking for the tilde_expand configuration option.
There is nothing magical about the ~ character, but it is so common that I wouldn't recommend trying to circumvent it. If you want to forgo it completely, then give users the ability to edit content in /var/www/htdocs/<username>/ themselves.
|
We usually see personal web pages like http://example.com/~someone/, in this case, "someone" is a user of "example.com".
How to enable this by default, so whenever we add a new user "foo", then http://example.com/~foo/ is created automatically? Is the tilde in "~someone" optional, customizable?
Software environment:
Webserver: Apache or Yaws (Erlang based webserver)
|
Creating a public document directory for a normal user
|
Probably, the way to achieve this is using dbus. This is a mechanism of communication amongst disparate tasks that is highly generic and so quite difficult to use. You should be looking for events, called signals, sent over the bus by things like the systemd logind daemon and the gnome screensaver.
There are at least 3 different C libraries to access the bus, and some more manageable Python libraries on top of these. And there are some simple command-line interfaces that should be sufficient to write a shell script to extract the data. For example,
dbus-monitor --system(you may need sudo) will run continously showing calls and events. Look for stanzas beginning signal that have a path=/org/freedesktop/login1.... These will contain, in some complex way, information when someone logs in or out, or when the user on seat0 (the principal console) changes, or when the display locks due to idleness.
systemd has created its own alternative command:
sudo busctl monitorwhich provides the information in a different format.
There is also gdbus monitor ... which needs more args to say what to listen for.
Alternatively, you could try using in each login some time-tracking for worksheets tool such as track (about which I know nothing), then merging the information.
|
I have a different user for each of my own work areas on my system. This is to keep bookmarks, files and other defaults separate.
Several "users" are generally logged in at once and I just switch between them when I change which project I'm working on in that moment.
I am using Ubuntu 18.04 with Gnome Desktop Environment
I'd like to keep track of when I've been working on different projects and I figured tracking these sessions would be an easy way to do this, however I don't know if there is a log available that knows which user is using the GUI at any one time, or how to create such a tracker.
The two details I think are needed for the tracker are:
1.) Events where a user logs in, including the times when the user is unlocking a session that is already open
2.) When screen locks due to inactivity
Does anyone know if these are already/can be logged?
Thanks!
Woody
Solution: ( With help from meuh below and this question: How to run a script on screen lock/unlock? )
This following script needs to be run from ~/.profile for each user that I wish to keep track of, their login/logout times all get logged in the OUTPUTFILE
The initial echo is because the script only gets run after the user has logged in and would otherwise be missed. All subsequent logins are caught from dbus-monitor
#!/bin/bash
echo $(date), $USER, SCREEN_UNLOCKED >> OUTPUTFILENAME
dbus-monitor --session "type='signal',interface='org.gnome.ScreenSaver'" |
while read x; do
case "$x" in
*"boolean true"*) echo $(date), $USER, SCREEN_LOCKED >> OUTPUTFILENAME;;
*"boolean false"*) echo $(date), $USER, SCREEN_UNLOCKED >> OUTPUTFILENAME;;
esac
done
|
How do I keep track of when each user is actively using the GUI on Ubuntu?
|
I don't know about homebrew in particular, but in theory you could use sudo to install software. Then files are accessed with root privileges, which may or may not be what you want.In general though, if you want multiple unprivileged users to be able to write to the same location, it isn't the owner of that location that you want to change, but its group. You could create a group called homebrewers:
sudo dscl . -create /Groups/homebrewersYou'll then want to find a group ID that doesn't exist. For this I used:
dscl . -list /Groups \
| sed 's@^@/Groups/@' \
| ( while read grp; \
do dscl . -read "${grp}" PrimaryGroupID; \
done ) \
| sort -snk 2I found that the highest group number in use was 501, so 4200 was available.
So, I set the PrimaryGroupID to 4200 and the Password to * (unused). Do not forget to set these! If you do, your groups list will be corrupted and you will likely have to boot into single-user mode to correct it.
sudo dscl . -append /Groups/homebrewers PrimaryGroupID 4200
sudo dscl . -append /Groups/homebrewers Password '*'Then add your two users to that group. The example here uses shortnames (from whoami) of user1 and user2:
sudo dscl . -append /Groups/homebrewers GroupMembership user1
sudo dscl . -append /Groups/homebrewers GroupMembership user2Note that you may have to log out and log back in for these changes to take effect.
Finally, you'll want to change the directory's group to be homebrewers and its permissions to be group-writable:
chown -R :homebrewers /usr/local/var/homebrew
chmod -R g+w /usr/local/var/homebrewIf you want, you can even change the owner to root to no ill effects:
sudo chown -R root /usr/local/var/homebrewAll commands shown here were tested on Mac OS X 10.4.11 on a PowerBook G4. Much has changed since the move to Intel, so the commands as shown may not work exactly as given on a newer release. The underlying concepts will remain the same.
|
I have two users on my Mac. Both are me, but one is work mode, the other is non-work mode. I have an ongoing issue with installing via homebrew.
$ brew install x
Error: Can't create update lock in /usr/local/var/homebrew/locks!
Fix permissions by running:
sudo chown -R $(whoami) /usr/local/var/homebrewOf course, executing this suggested code solves the problem -- until I need to brew install using my other user, then I need to change ownership again. How can I set the permissions so that both users can install with homebrew?
|
Multiuser Homebrew privileges
|
Assuming you already have one user that auto-logs in, you can probably use a script that runs after logging in (use gnome-session-properties) that:gets a list of users to auto-login from some file
checks for each of those users if they are logged in yet
if one is not, use xdotool to switch to the first user (by simulating clicking on Menu and then Logout, etc) Each of these users must auto-run the script as well, thereby daisy chaining the login process
if all users are logged in, switch to some specially marked user (the first) that auto logs in, unless that is the user already currently running the script.
|
I could use some help with what I think is a basic request on the newest edition of Linux Mint (which I think would also be applicable to Ubuntu).
I have a home system with 3GB of RAM and accounts for family members (4 of them). As the initial login process takes 15-20 seconds and kids are impatient, I'd like a way to have an active session auto-started for each user when the system first boots.
... In other words, a multiple auto-login in the background and a normal login screen.
That way, when a user goes to login like normal, their session is already running and is switched to instantly. I have plenty of RAM and this machine always runs, so is there any way to pull this off via some type of login scripts?
|
Auto-start multiple background user sessions in Linux Mint
|
git does file locking to prevent corrupting the repository. You may get messages like
error: cannot lock ref 'refs/remotes/origin/develop': is at 2cfbc5fed0c5d461740708db3f0e21e5a81b87f9 but expected 36c438af7c374e5d131240f9817dabb27d2e0a2c
From github.com:myrepository
! 36c438a..2cfbc5f develop -> origin/develop (unable to update local ref)
error: cannot lock ref 'refs/remotes/origin/master': is at b9a3f6cf9dafc30df38542e5e51ae4842c50814d but expected 5e6174b3c7071c840effeda6c708d6aef36f7c6a
! 5e6174b..b9a3f6c master -> origin/master (unable to update local ref)from the git processes that fail to get the lock. That is all.
If the two git pull processes are slightly out of sync with each other, the effect will be the same as running the command twice.
|
What happens if two git pull command are run simultaneously in the same directory?
|
Running two git commands in parallel
|
Short answer
Install the
run-as
scripts and run:
run-as -X <user> <command>Long answer
Write and run a script to authorize userB to access userA graphical session.
/home/userA/.local/bin/xhost_userB
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
xhost +si:localuser:userBOptional: allow access at login.
/home/userA/.config/autostart/xhost_userB.desktop
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
[Desktop Entry]
Type=Application
Name=Graphical auth for user B
Comment=Authorize user B to run graphical software in this session
GenericName=userB xauth
Exec=/home/userA/.local/bin/xhost_userB
X-GNOME-Autostart-enabled=trueSome applications may require extra services.
/home/userA/.local/bin/xhost_userB_extra_services
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
systemctl --user restart dbus
systemctl --user import-environmentCreate a script to run the application as userB (es. Seahorse).
/home/userA/.local/bin/application
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
machinectl shell --uid=userB \
--setenv=DISPLAY="${DISPLAY}" \
--setenv=NO_AT_BRIDGE=1 \
.host \
/home/userA/.local/bin/xhost_userB_extra_servicesmachinectl shell --uid=userB \
--setenv=DISPLAY="${DISPLAY}" \
--setenv=NO_AT_BRIDGE=1 \
.host /usr/bin/applicationNote: it works on Wayland too if XWayland is running.
|
Let's an user A (userA) wants to run in his graphical session a graphical application as user B (userB). How is it done it on a modern GNU/Linux system?
|
How to run a graphical application as another user?
|
I would start by reading http://wiki.apache.org/httpd/PrivilegeSeparation
|
What are the best practices for a Debian Squeeze + Grsecurity/PAX based system when multiple, and therefore insecure, websites must run on the same server? I mean how can I mitigate an attack, so that if UserX's website got hacked, the other hosted sites remain intact etc.? Are there any good guides on this topic?
|
Best practices for secure, separated virtualhost LAMP environment
|
From the info in the comments I'll guess you are running Grafana in a docker container and you are running ps on the host.
The 472 user is the grafana user in one of the containers which is why you can't find it in the host's /etc/passwd file
|
ps aux output (only line of interest shown)
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
472 24070 0.0 0.7 1636608 59416 ? Ssl May09 10:53 grafana-server --...id to username lookup yields nothing
$ id -nu 472
id: ‘472’: no such userusername to id lookup yields nothing
$ id -u 472
id: ‘472’: no such user/etc/passwd does not contain any line with 472 anywhere in it.
This user running a program, why is it not listed anywhere?
|
USER shown in `ps aux` but not found /etc/passwd
|
what is the meaning of this run-level ?It means that word has still not percolated everywhere, even after it being explicitly stated in the systemd manual for its runlevel command since 2010, that this concept is obsolete.
Forget about run levels.
Your operating system does not have run levels. As explained at length in https://unix.stackexchange.com/a/394238/5132 , they simply do not exist outwith a few compatibility shims. What you see from the systemd runlevel command is a fiction, constructed from the states of the actual systemd mechanisms, by its almost wholly undocumented systemd-update-utmp program.
(I took a slightly different approach in the nosh toolset. I documented the login-update-utmpx command. But it makes no effort to construct fictions and I also made the runlevel shim just print "N N" to drive home the point that there are no run levels. Upstart's choice of printing "unknown" turns out to break some badly written package installation/deinstallation scripts.)
Run levels have been obsolete since 1990, and your operating system finally caught up with that 10 years ago, in RedHat Enterprise Linux version 6 when it switched to Upstart (which also did not operate in terms of run levels, but provided a slightly more extensive compatibility shim than systemd does). It has been more than 5 years since version 7 where it switched from Upstart to a system whose manual pages explicitly documented this as obsolete.
Ironically, systemd's runlevel is not even telling you that your system has not come up in multi-user mode. So it is pointless and ill-founded to ask how to switch to multi-user mode, considering that multi-user.target is probably already active on your system. To determine that of course, you use systemctl status multi-user.target, and not the runlevel command at all.
Further readingJonathan de Boyne Pollard (2018). run-levels are things of the past.. Frequently Given Answers.
Zbigniew Jędrzejewski-Szmek (2015-11-08). man: describe the reason why runlevels are obsolete. Put it at the top of the file, where it's hard to miss.. systemd. github.
Jonathan de Boyne Pollard. login-update-utmpx. nosh Guide. Softwares.
Jonathan de Boyne Pollard. runlevel. nosh Guide. Softwares.
|
we know that runlevl of multi user mode is
N 5 but on our redhat 7.2 we get the following
runlevel5 3what is the meaning of this run-level ?
how to change this machine to multi user mode ( full permissions )
and when we do
who -rwe get
run-level 3 last=5Note - we perform reboot/init 6 , but still we are in run-level 3
|
runlevel + what is runlevel 5 3 in redhat linux
|
I did this a few years back for SVN. I set up one Unix account. i.e. git@
Added user keys to ~git/authorized_keys, with the extra stuff to run the server (i.e. git-shell) when the user logs in.
So far exactly what you said in paragraph 4.
The bit that you are missing is that each entry in ~git/authorized_keys passes a different argument to git-shell telling it the name of the user. git-shell should then act accordingly to separate users. That is it has its own idea of users.
|
I want to set up a server that hosts git repositories over ssh and https with client certificates. I don't want any snazzy GUI or anything like that (i.e. not Bitbucket, GitLab, etc) I want a bare minimum configuration for hosting repositories only.
However, I do want user separation between repositories.
I already have a solution for the https case, and I have an annoying solution for ssh.
I did some searches for multi-user ssh git server configurations, but the base assumptions for the guides I found was that all users have access to all the repositories. I.e. create a "git" user, add all the users' public keys to ~git/.ssh/authorized_keys, use git-shell for the "git" user and host all the repos under the "git" user.
What I want to accomplish is to host a bunch of repositories which - by default - users can not access until given explicit permission to do so by having an administrator adding their ssh key to a project.
I have previously accomplished this by assigning each user a different account in the system, and use regular group permissions to do the sharing -- but this is cumbersome for a few reasons.
What's the general theory behind getting this feature to work smoothly?
I assume that the basic idea is to have a single "git" user and have each user in the ~git/authorized_keys, but once ssh has authenticated it passes on information about the authentication to some external user database containing all the repo permissions which is then used to perform file system access checks as appropriate.
|
Setting up a git server with ssh access
|
From a security perspective, switching VTs directly and using GNOME’s “switch user” are equivalent. The“switch user” feature is more about making it user-friendly, than it is about security: it means you don’t need to know which VT you’re logged in on, or even whether you’re logged in yet.
If your two accounts are completely separate (in particular, neither can write anywhere the other account will read important information from), this will mitigate user-level compromise from one account to the other. Determining what constitutes a “safe” option for you really requires determining what risks you want to prevent, or minimise, and how far you’re willing to go to do so.
|
On my personal laptop I'm using Fedora 29 and have two user accounts for myself; let's call them "personal_admin" and "personal_user". My personal_admin account is strictly for administrative tasks (installing software and updates mostly), which my personal_user account is for day-to-day use. Is this overkill?
How secure is switching between users using virtual terminals (CTRL+Alt+F#) versus the GNOME "Switch User" command? Does the situation change if I switch virtual terminals and then launch a GUI?
Suppose that my personal_user account is completely compromised and malware is running as a daemon with personal_user's privileges. I know that I shouldn't su personal_admin while signed into personal_user since the malicious daemon could easily record my personal_admin credentials. Does using virtual terminals or the GNOME "Switch User" command mitigate against this, or is the only safe option to sign out of personal_user (or maybe even reboot)?
|
How secure is switching TTY sessions vs GNOME "Switch User"?
|
The maximum number of users logged in on a Linux system is limited by the number of available pseudo terminals. The maximum number of these is found in /proc/sys/kernel/pty/max (4096 on my test machine).
This would be an upper limit to how many users would be able to give the screen -x command.
On non-Linux systems, the limit would also be set by the maximum number of allowed concurrent users. On NetBSD and OpenBSD, pseudo terminals may be created up to a limit of 992.
|
When using screen -x NAME is there a max number of connections to a single screen?
|
Is there a limit to # of users connected to a screen
|
I have a small arm server that runs Arch. I wanted to use only dhcpcd for my ethernet connection so I disabled netctl.service and netctl-ifplugd.service. Turns out that didn't work and I have no means of connecting to the machine anymore.Did you make sure to enable dhcpcd after disabling netctl?How can I "systemctl enable netctl.service" by manipulating files and/or symlinking files on that usb?
The equivalent alternative question is, what does "systemctl enable netctl.service" do?All systemctl enable does is create symlinks from /usr/lib/systemd/system/ or /etc/systemd/system/ to the appropriate target directories in /etc/systemd/system/, with services in the latter directory overriding ones in the former.
From the systemctl(1) manpage:enable NAME...
Enable one or more unit files or unit file instances, as
specified on the command line. This will create a number
of symlinks as encoded in the "[Install]" sections of the
unit files.Instead of using systemctl enable you could enable the netctl service manually with the following command:
ln -s /usr/lib/systemd/system/netctl.service \
/etc/systemd/system/multi-user.target.wants/netctl.serviceAnd to disable it manually you could use the following command to remove the symlink created with the previous ln command:
rm /etc/systemd/system/multi-user.target.wants/netctl.serviceThe appropriate target directory can be found by looking for the WantedBy setting in the [Install] section of the service file in question, though older service files sometimes has Alias instead of WantedBy and you may want to switch to using WantedBy instead, but either will work just as well.
Instead of reverting to using netctl you could first check that the dhcpcd service was enabled properly, and if it was you can use journalctl's --directory or --root flags to check the logs of the dhcpcd service after mounting the filesystem on your other machine and see if that can give any clues as to why it failed to work properly.
|
I have a small arm server that runs Arch. I wanted to use only dhcpcd for my ethernet connection so I disabled netctl.service and netctl-ifplugd.service. Turns out that didn't work and I have no means of connecting to the machine anymore. The server has it's root on a usb key which I can mount on my desktop and so the question is:
How can I systemctl enable netctl.service by manipulating files and/or symlinking files on that usb?
The equivalent alternative question is, what does systemctl enable netctl.service do?
|
What does systemctl enable netctl.service do
|
Arch uses systemd to manage startup processes (daemons and the like as well).
You can write a script that simply executes the command that you want, or sleep for a min and then execute. Then add it to the boot process with the instructions on the
wiki
if you add a sleep:
#!/bin/sh
sleep 60 # one min
netctl start bridgeIt should work perfectly fine. Systemd should spawn another process when it executes your script so it shouldn't make your system hang.
|
I have set up a bridge between eth0 and wlan0 with netctl. It works fine if I tell it to configure eth0 and wlan0 at startup and then for me to manually start the bridge after it boots. If I tell the bridge to start automatically as well though for some reason the wlan adapter does not connect to an access point. I therefore need "netctl start bridge" to run a minute or so after the entire system has finished booting. Any idea how I should do this?
PS. This is a headless system as in no xorg so running it at xorg startup won't work.
|
Arch Linux run script a minute after boot
|
The OP (Konrad Höffner) posted this answer
in his question on Oct21’14 at7:22:Solved
The network cable was defective.
I switched it out and it works again.
|
My DHCP Ethernet works fine in Windows,
but not in Arch Linux with netctl and dhcpcd.
What am I doing wrong?
Output of ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: wlp2s0: [...]
3: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether [my mac adress] brd ff:ff:ff:ff:ff:ffMy netctl profile
$ cat /etc/netctl/dhcp
Description='ethernet dhcp'
Interface=eno1
Connection=ethernet
IP=dhcp
#IP6=dhcp
#IP6=statelessError message after sudo netctl start dhcp
$ sudo journalctl -xn
-- Logs begin at Fr 2013-12-27 13:25:36 CET, end at Mo 2014-01-13 12:45:22 CET. --
Jan 13 12:44:50 laptop2 network[697]: DHCP IP lease attempt failed on interface 'eno1'
Jan 13 12:44:50 laptop2 network[697]: Failed to bring the network up for profile 'dhcp'
Jan 13 12:44:50 laptop2 systemd[1]: [emailprotected]: main process exited, code=exited, status=1/FAILURE
Jan 13 12:44:50
laptop2 systemd[1]: Failed to start Networking for netctl profile dhcp.OK, so it has problems getting the network up, doing it myself...
$ sudo ip link set eno1 up
$ sudo netctl start dhcp
Job for [emailprotected] failed. See 'systemctl status [emailprotected]' and 'journalctl -xn' for details.
$ sudo journalctl -xn
[...]
Jan 13 12:47:20 laptop2 network[1304]: Starting network profile 'dhcp'...
Jan 13 12:47:20 laptop2 network[1304]: The interface of network profile 'dhcp' is already up
Jan 13 12:47:20 laptop2 systemd[1]: [emailprotected]: main process exited, code=exited, status=1/FAILURE
Jan 13 12:47:20 laptop2 systemd[1]: Failed to start Networking for netctl profile dhcp.This doesn't help either, setting it down again.
$ sudo ip link set eno1 downTrying it with dhcpcd...
$ sudo systemctl start dhcpcd
$ ping www.google.de
connect: Network is unreachable
$ ip link
[...]
3: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
[...]
$ sudo systemctl stop dhcpcd$ sudo netctl start dhcp
Job for [emailprotected] failed. See 'systemctl status [emailprotected]' and 'journalctl -xn' for details.$ sudo journalctl -xn
-- Logs begin at Fr 2013-12-27 13:25:36 CET, end at Mo 2014-01-13 12:53:06 CET. --
Jan 13 12:52:36 laptop2 dhcpcd[1753]: version 6.1.0 starting
Jan 13 12:52:36 laptop2 dhcpcd[1753]: eno1: soliciting a DHCP lease
Jan 13 12:53:06 laptop2 dhcpcd[1753]: timed out
Jan 13 12:53:06 laptop2 dhcpcd[1753]: exited
Jan 13 12:53:06 laptop2 network[1707]: DHCP IP lease attempt failed on interface 'eno1'
Jan 13 12:53:06 laptop2 network[1707]: Failed to bring the network up for profile 'dhcp'
Jan 13 12:53:06 laptop2 systemd[1]: [emailprotected]: main process exited, code=exited, status=1/FAILURE
Jan 13 12:53:06 laptop2 systemd[1]: Failed to start Networking for netctl profile dhcp.After deleting the lease at /var/lib/dhcpcd/dhcpcd-eno1.lease6 and trying again, I still get the same error message.
Writing TimeoutDHCP=40 to /etc/netctl/hooks/timeout and making it executable also changes nothing.
|
DHCP IP lease attempt failed over Ethernet and DHCP with netctl
|
The functionality you are looking for is through nectl-auto. netctl is for auto connecting on boot or whenever the service through systemd is started where netctl-auto connects to the profiles enabled in its own manager and you would only have netctl-auto@[interface].service enabled.
netctl
netctl-auto
|
I am trying to setup my wifi network in Arch Linux ARM to auto-connect at home and office. But it is not always connecting automatically as intended.
netctl list
* wlan0-Spaceship
wlan0-CremeheadI am not quite sure how to debug this problem as it is acting very randomly. I have enabled both wlan0-Spaceship and wlan0-Cremehead which i assume should be the most important thing?
UPDATE 1
I tried to enable the netctl-auto service:
$ systemctl enable [emailprotected]But it has not solved the problem, but i see these two FAIL statements:
$ netctl-auto list
FAIL
FAIL $ systemctl --type=service
[emailprotected] loaded active running Automatic wireless network connection using netctl profiles
netctl.service loaded active exited (Re)store the netctl profile state
* netctl@wlan0\x2dSpaceship.service loaded failed failed Automatically generated profile by wifi-menu
* netctl@wlan0\x2dCremehead.service loaded failed failed Automatically generated profile by wifi-menuUPDATE 2
I just found out that i have problems enabling a connection. I need to do this two times to start.
Mar 11 10:05:39 proto-pi2-sandbox network[578]: The WPA supplicant did not start for interface 'wlan0'
Mar 11 10:05:39 proto-pi2-sandbox network[578]: Failed to bring the network up for profile 'wlan0-Cremehead'
Mar 11 10:05:39 proto-pi2-sandbox systemd[1]: netctl@wlan0\x2dCremehead.service: main process exited, code=exited, status=1/FAILURE
Mar 11 10:05:39 proto-pi2-sandbox systemd[1]: Failed to start Automatically generated profile by wifi-menu.
|
netctl not auto-connecting consistently
|
Execute sudo netctl enable tq84-wifi. The netctl wiki page says:Basic method
With this method, you can statically start only one profile per interface. First manually check that the profile can be started successfully with:
# netctl start profile then it can be enabled using:
# netctl enable profileThis will create and enable a systemd service that will start when the computer boots. Changes to the profile file will not propagate to the service file automatically. After such changes, it is necessary to reenable the profile:
# netctl reenable profileAfter enabling a profile, it will be started at next boot. Obviously this can only be successful, if the network cable for a wired connection is plugged in, or the wireless access point used in a profile is in range respectively.
|
I have configured a wifi connection in /etc/netctl/tq84-wifi in a Arch Linux installation and am able to manually start it with sudo netctl start tq84-wifi.
Now I'd like to have my installation start the wifi connection automatically when I start Linux. I tried sudo netctl-auto switch-to tq84-wifi, yet this command tells me Profile 'tq84-wifi' does not exist or is not available.
So, what do I have to do instead?
|
How do I automatically execute `netctl start tq84-wifi` on bootup?
|
Ok, the problem was that some packages were missing.
The packages you need are: wireless_tools, wpa_supplicant, netctl, dialog, dhcpcd and dhclient
|
After installing arch, I wanted to set up wifi. I typed wifi-menu wlp3s0 and selected my network. After that wifi-menu exited with no error. I typed ping - c3 google.com to verify my internet connection. I got ping: google.com: Temporary failure in name resolution I typed wifi-menu wlp3s0 again and in front of my wireless network is a:(handmade profile present). But i want *(active connection present). Can someone help me?
|
arch linux wifi-menu: doesn't shows an error, but doesn't connect to the network
|
So after a little bit more research I've determined why this is happening. I'm using the default Ubuntu and Debian templates to create the containers and their networking is set up so as to use DHCP to ask for a IP from the the router. So initially the static IP is set using the lxc.container.config and then when the container starts it queries the router (or whatever DHCP server you have) for a secondary IP that's then assigned to it.
The most logical way to stop this is likely to just assign the static ip inside the container. So on Debian based templates edit /etc/network/interfaces:
auto etho0
iface etho0 inet static
address 192.168.0.15
netmask 255.255.255.0
gateway 192.168.0.1And then remove the ipv4 line from the lxc config /var/lib/lxc/testcontainer/config:
lxc.network.type = veth
lxc.network.link = br0Another method is to let the host set the ip by keeping the ipv4 line in /var/lib/lxc/testcontainer/config and to tell the container explicitly to not touch the interface by setting it to manual:
auto eth0
iface eth0 inet manualApparently there are some issues with this second method if the host is suspended and then resumed. Probably best to use the first method.
|
I'm playing around with LXC on my Arch Linux workstation as a learning experience. I'm following the guide on the LXC page on the Archwiki and setting up a static ip for the container. This is what my network config is like:
/etc/netctl/lxcbridge
---------------------
Description="LXC Bridge"
Interface=br0
Connection=bridge
BindsToInterfaces=(enp1s0)
IP=static
Address=('192.168.0.20/24')
Gateway='192.168.0.1'
DNS=('192.168.0.1')And the container config:
/var/lib/lxc/testcontainer/config
---------------------------------
lxc.network.type = veth
lxc.network.link = br0
lxc.network.ipv4 = 192.168.0.198/24However according to lxc-ls -f it gets given an extra ip address.
NAME STATE AUTOSTART GROUPS IPV4 IPV6
testcontainer RUNNING 0 - 192.168.0.198, 192.168.0.220 -I only want 192.168.0.198. I'm not sure why it's getting the second one assigned to it.
|
LXC container gets two IP addresses assigned to it
|
Probably in such cases bridges are used to interconnect up multiple access points on the same network or else signal repeaters, also known as expanders signal.
|
Yesterday I travelled by train and there were multiple Wifi APs with the same SSID. I had no problem connecting to the Wifi; I'm just curios how netctl handles
such a case. Which of the available APs will it connect to, and does it seamlessly switch between the available APs if one of them gets out of range?
|
netctl: multiple Wifi APs with same SSID
|
I missed packages: dhcpcd, wireless_tools, wpa_supplicant, netctl, dialog and dhclient
|
I wanted to setup wifi on my arch linux machine with wifi-menu But it doesn't connect to wifi. It shows no error and it says, that im connected to the internet. But with ping 8.8.8.8 I get Network is unreachable. Am i missing packets or firmware for my wifi card? btw my wifi card driver is atk10-pci. Can someone help?
|
what are the requirements for netctl wifi-menu in arch linux?
|
Do I need to specify the broadcast address or gateway?From the looks of this article/thread titled: [SOLVED] Static IP wired connection doesn't work with netctl the broadcast address can be incorporated into the static IP's definition.
For example, they provided you with this:BROADCAST=255.255.255.255 or 192.168.117.255 (I was given both of these different values) I'd assume that the 2nd one, 192.168.117.255, is in fact correct, which would be a /24 mask, hence your Address= already has it:
Address='192.168.117.2/24'Is a prefix needed (and what is prefix 31)?Prefixes or, prefix lengths, are described here in these two articles titled: How do prefix-lists work?
Working with IP Addresses - The Internet Protocol Journal - Volume 9, Number 1excerptThe prefix length is just a shorthand way of expressing the subnet mask. The prefix length is the number of bits set in the subnet mask; for instance, if the subnet mask is 255.255.255.0, there are 24This table shows how they're calculated:So in your case, this information is a bit confusing. Your network address appears to be /24, but your prefix length is 31 bits. In either case, I'd ignore the 31 for the time being, and go with the /24.Is there anything else I have overlooked?Everything else in your example profile appears to check out. You should be good to go.
Referencesnetctl-profile man page
netctl wiki page - ArchLinux
|
Seeking to make a netctl profile for a tap device. Here is the info I was given about the connection.
GATEWAY=192.168.117.1
DNS=192.168.117.1
BROADCAST=255.255.255.255 **or** 192.168.117.255 (*I was given both of these different values*)
PREFIX=31
STATIC IP ADDRESS=192.168.117.2/24
TYPE=TAP Netctl includes some examples. I used the one I found in examples/tuntap:
Description='Example tuntap connection'
Interface=tun0
Connection=tuntap
Mode='tun'
User='nobody'
Group='nobody'## Example IP configuration
#IP=static
#Address='10.10.1.2/16'Here is the profile I came up with:
Description='My tap connection'
Interface=tap0
Connection=tuntap
Mode='tap'
User='nobody'
Group='nobody'
IP=static
Address='192.168.117.2/24'
UsePeerDNS=true
DefaultRoute=true
SkipDAD=yes
DHCPReleaseOnStop=yesQuestionsDo I need to specify the broadcast address or gateway?
Is a prefix needed (and what is prefix 31)?
Is there anything else I have overlooked?
|
How to make a netctl profile for a TAP device?
|
Okay, so the problem seems to be, that any interface answers the ARP-requests because they are all in the same subnet and therefore even if the network interfaces have different ip addresses the ARP entries point to the same interface.
So all the packets for the different IPs are sent to the same MAC address (because all the IPs have the same MAC in the ARP entry)
My arp cache looks like this:
$ ip neigh
192.168.100.11 dev enp2s0 lladdr 00:1e:67:a3:7f:b7 STALE
192.168.100.12 dev enp2s0 lladdr 00:1e:67:a3:7f:b7 STALEThanks @ChristopherNeylan for giving me the clue to look at ARP.
It seems that it is not easily possible to have more than one nic on the same subnet and archive the behavior I'd like.
The problem is that linux uses a weak host model but a strong host model would be needed here.
I've found a question on serverfault that addresses this issue should anybody else be interrested:
multiple physical interfaces with IPs on the same subnet
|
I have a Server with multiple (four) network interfaces running Arch Linux when I encountered something strange that I cannot explain.
I have two interfaces configured to the the ip addresses 192.168.100.11/24 and 192.168.100.12/24 (using two netctl profiles).
Both interfaces are connected to a switch which also connects to my computer.
When I enable the profiles both interface seem to work quite okay.
But when I'm pinging the address of the main interface (192.168.100.11) and removing its cable (while having the secondary interface up and running) I continue to getting replies from 192.168.100.11 even though no cable is attached to the interface. I even can successfully ssh 192.168.100.11 into the machine.
The output of ip addr shows that the interface has no carrier and is down but still has it's IP address:
2: enp7s0f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 00:1e:67:a3:7f:b6 brd ff:ff:ff:ff:ff:ff
inet 192.168.100.11/24 brd 192.168.100.255 scope global enp7s0f0
valid_lft forever preferred_lft forever
inet6 fd00::21e:67ff:fea3:7fb6/64 scope global mngtmpaddr dynamic
valid_lft 6836sec preferred_lft 3236sec
inet6 fe80::21e:67ff:fea3:7fb6/64 scope link
valid_lft forever preferred_lft forever
3: enp7s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:1e:67:a3:7f:b7 brd ff:ff:ff:ff:ff:ff
inet 192.168.100.12/24 brd 192.168.100.255 scope global enp7s0f1
valid_lft forever preferred_lft forever
inet6 fd00::21e:67ff:fea3:7fb7/64 scope global mngtmpaddr dynamic
valid_lft 6836sec preferred_lft 3236sec
inet6 fe80::21e:67ff:fea3:7fb7/64 scope link
valid_lft forever preferred_lft foreverWhen I do netctl status main to check for the status the profile is still active:
# netctl status main
● [emailprotected] - Main interface
Loaded: loaded (/etc/systemd/system/[emailprotected]; enabled; vendor preset: disabled)
Active: active (exited) since Thu 2015-01-08 11:29:07 UTC; 25min ago
Docs: man:netctl.profile(5)
Main PID: 55293 (code=exited, status=0/SUCCESS)Jan 08 11:29:03 timingserver1 network[55293]: Starting network profile 'main'...
Jan 08 11:29:07 timingserver1 network[55293]: Started network profile 'main'How is this even possible?
It is important to me to understand the mechanics behind this.
The plan is that I have two redundant servers offering a service on the ip 192.168.100.11. One of the servers will have the main interface down and both are connected to the same switch. The second interface on both servers is used for maintanance when the server is standby or degraded (of course they will not have the same IP).
So in case of a failover I will set the interface of the main-server down (or disconnect the cable) and activate the one on the backup-server. Of course, if the main-server still answers to 192.168.100.11 even if it is down, this would be bad... :-/
My netctl profile files:main interface
Description='Main interface'
Interface=enp7s0f0
Connection=ethernet
IP=static
Address=('192.168.100.11/24')
Gateway='192.168.100.1'
DNS=('192.168.100.1' '8.8.8.8')secondary interface
Description='Secondary interface'
Interface=enp7s0f1
Connection=ethernet
IP=static
Address=('192.168.100.12/24')
DNS=('192.168.100.2' '8.8.8.8')I would appreciate it very much if anybody can direct me as how to archive my desired solution.
Maybe it's just something stupid I've overlooked or don't know about but at the moment this is bothering me quite a bit... :-/
|
Why can I connect to the IP of a network interface (on a server with multiple network interfaces) when the network cable is removed?
|
systemd-networkd allowed me to do something like:
[Match]
Name=wlan0[Network]
Address=192.168.x.xto set the wireless card address (with netctl disabled, don't mix both).
When hostapd starts, it keeps that address as the access point address.
In my specific case, one can do the same for the wired card (a static address, with no further configuration). No bridge is needed, but probably is a good idea to have one address for the wired and other for the wireless (haven't tried though).
This is a dhcp-less configuration, so it requires static address setup on both ends.
|
I need to get a way to communicate to a Raspberry Pi which is acting as the brain of a project. The missing piece is the wireless TCP/IP link.
There are some tutorials for setting up a router with hostapd but I'm having trouble with some since the PI is running headless and failing to setup the interfaces correctly sometimes means taking the SD out to fix the wired connection. Also do I really need a bridge since the PI is the endpoint?
Is there any simpler solution for what I want? (Just need 1-2 clients, static IPs are fine)
Here are my netctl configs:
##Wired###################################
Interface=eth0
Connection=ethernet
IP=static
Address=('192.168.0.5/24')
##Bridge##################################
Interface=br0
Connection=bridge
BindsToInterfaces=(eth0)
IP=static
Address=('192.168.0.6/24')
SkipForwardingDelay=yesAnd the minimalist hostapd config:
interface=wlan0
ctrl_interface=/var/run/hostapd
ssid=randomssid
channel=5
auth_algs=1
driver=rtl871xdrv
hw_mode=g
logger_stdout=-1
logger_stdout_level=2
ieee80211n=1
bridge=br0With this config the problem is that the wireless card gets no IP. Am I supposed to configure it as a normal card and let hostapd take care after it?
Also as I said, I don't need anything to be routed to the wired card, can I get rid of the bridge?
|
Simple access point for remote electronics project
|
Either distribute static routes by DHCP to all hosts in the two segments, or, assuming the two routers are the default routing gateways in each segment, add a static route to each router. The latter will be less efficient.
Alternatively: Don't let the Tenda Router use a different segment with its own DHCP, bridge it instead.
|
I've got a standard router (192.168.1.1) that is connected to the internet. It also has the following connected to it:an un-managed switch which all wired devices connect to.
a Tenda Mesh WiFi Router (192.168.1.9 >> 192.168.5.1) connected to it.Devices connected to the Tenda have IPs of 192.168.5.x and I can't connect to them from devices connected to the main router.
UPDATE: I've seen some posts referencing 'Routes' within netctl, but I get:
Jan 14 22:56:00 deviceX network[3728]: Could not add route '192.168.5.0/24 via 192.168.1.9,' to interface 'eno1'
Jan 14 22:56:00 deviceX dhclient[3788]: receive_packet failed on eno1: Network is down
Jan 14 22:56:00 deviceX network[3728]: Failed to bring the network up for profile 'mynet-eno1-dhcp'My current netctl profile is:
Description='A basic dhcp ethernet connection'
Interface=eno1
Connection=ethernet
IP=dhcp
#Routes=('192.168.5.0/24 via 192.168.1.9', '192.168.1.0/24 via 192.168.1.1')
DHCPClient=dhclient
#DHCPReleaseOnStop=no
## for DHCPv6
IP6=dhcp
DHCP6Client=dhclient
## for IPv6 autoconfiguration
#IP6=statelessIn order to connect seamlessly between these two networks, how (and where) can I create a Static Route. Also, do I need to do this on multiple machines. My computers are all running Arch Linux, including several Raspberry Pis which are always on, that could server as intermediate points, if that would work.
|
Static Routes between Multiple Subnets - ROUTER (WAN) + WiFi Mesh ROUTER
|
Configure sudo to allow you to run the command without a password:
As root:
# visudoappend the following:
<username> ALL = NOPASSWD: netctl start network, netctl stop networkwhere <username> is your username (without < and >) or ALL to allow everyone to do this. You can also stipulate a group by preceding it with a % (eg %admin).
|
What I would like to do:
I would like to start my wireless network after login (As apposed to starting at boot).
Using my login credentials to run the command: sudo netctl start network. Instead of needing to Login once, then login my credentials a second time to start networking, at the same time I would prefer not having my system bootup with networking enabled.
The crux:
I thought I could just add it to .xinit but startx doesn't require sudo, however netctl does and therefore will not run.
I then thought about running it in my .bash_profile but that doesn't seem to work, for the same reason.
Is there a way to run networking at login, while just supplying login credentials once?
OS: Arch Linux
|
Start Networking at login
|
Nevermind, figured it out:
Problem was, I already had brought the respective interface up
ifconfig wlan0 up
which prevents the job from working.
Info was obtained by running
netctl status wlan0-ROUTENZUG
|
Upon login to serial console I run a command via /etc/profile
netctl start wlan0-ROUTENZUG
However it fails, yielding this output
Job for netctl@wlan0\x2ROUTENZUG.service failed. See ´sytemctl status netctl@wlan0\x2ROUTENZUG.service´ and ´journalctl -xn for details.´
The latter contains nothing about that job, the first just tells me the job is dead.
I just tested: The command fails as well when given manually.
|
netctl start PROFILE fails
|
I could finally solve it (I'm posting it using the USB connection :P).
Some points to note:wvdial (as configured in /etc/wvdial.conf) could not connect anyhow (couldn't dial with ATDT and ATD commands).
netctl created profile, 'connected' (netctl start) without (outputting) errors but there were no real connection.
So, I installed NetworkManager, started and enabled it as service alongisde ModemManager (with systemctl start).
I installed tint2 (as there's no panel (system tray) in my system) specifically to run GNOME's nm-applet on it, and configured the broadband network, and Network Manager showed it as an enabled device and I could connect through the profile I created on nm-applet (GUI, again).That's it. Thanks, gnome for nm-applet :)NetworkManager won't (probably) connect without ModemManager service.
No matter how hard I'll try I couldn't connect through the default, native netctl that comes with arch-linux.
I still would like to connect with the native application, if anyone knows how, or can help with it (reading my previous post, or by providing a 'better' profile file for the USB modem (model) noted above), I'd really appreciate that, and switch back to netctl.
But now that the native netctl couldn't/doesn't do the job, I'll have them both installed (what I don't actually like doing for minimalism) as network managers until I could connect properly through the former one.
|
I tried both netctl and NetworkManager.
Copied /etc/netctl/examples/mobile_ppp and added number and APN name as there's no pin/pass/username and set interface to /dev/ttyUSB0 (also tried the other ttyUSB1, and ttyUSB2 as well).
My `/etc/netctl/mobile_ppp` file's contents are as follows:Description='Example PPP mobile connection'
Interface='ttyUSB0'
Connection='mobile_ppp'
PhoneNumber='*99#'
# Use default route provided by the peer (default: true)
#DefaultRoute=true
# Use DNS provided by the peer (default: true)
#UsePeerDNS=true
# The user and password are not always required
#User='[emailprotected]'
#Password='very secret'
# The access point name you are connecting to
AccessPointName='internet'
# If your device has a PIN code, set it here. Defaults to 'None'
#Pin=None
# Mode can be one of 3Gpref, 3Gonly, GPRSpref, GPRSonly, None
# These only work for Huawei USB modems; all other devices should use None
Mode=3Gonly
# ^ tried all other options toonetctl start mobile_ppp "connects" silently with no errors on output and shown on journalctl -xe, but there's no actual connection.
And it also shows * sign prepended on its name when issuing netctl list (as if really connected/operating).
Currently I am connected with netctl but with wls1 (Wi-Fi profile set up with wifi-menu).
Moreover the USB modem is not shown in the nm-applet of NetworkManager, what I installed separately.
Also, the modem's LED turns blue as if connected or operating, but with no results.
I did a lot of research on the web, came across and read wikis and other users asking about similar issues, and tried the solutions, installed many many packages in Arch Linux, but unfortunately nothing worked for me to simply connect with the modem which used to automatically connect on Fedora 25 (without GNOME even).
|
Cannot connect with Huawei E3131 3GMax USB Modem
|
dig uses the OS resolver libraries. nslookup uses is own internal ones.
That is why Internet Systems Consortium (ISC) has been trying to get people to stop using nslookup for some time now. It causes confusion.
|
Why do the commands dig and nslookup sometimes print different results?
~$ dig facebook.com; <<>> DiG 9.9.2-P1 <<>> facebook.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6625
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;facebook.com. IN A;; ANSWER SECTION:
facebook.com. 205 IN A 173.252.110.27;; Query time: 291 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Sun Oct 6 17:55:52 2013
;; MSG SIZE rcvd: 57~$ nslookup facebook.com
Server: 8.8.8.8
Address: 8.8.8.8#53Non-authoritative answer:
Name: facebook.com
Address: 10.10.34.34
|
dig vs nslookup
|
Short answer:
A workaround is forcing glibc to reuse a socket for look up of the AAAA and A records, by adding a line to /etc/resolv.conf:
options single-request-reopenThe real cause of this issue might be:malconfigured firewall or a router (e.g. a Juniper firewall configuration described here) which causes dropping AAAA DNS packets
bug in DNS serverLong answer:
Programs like curl or wget use glibc's function getaddrinfo(), which tries to be compatible with both IPv4 and IPv6 by looking up both DNS records in parallel. It doesn't return result until both records are received (there are several issues related to such behaviour) - this explains the strace above. When IPv4 is forced, like curl -4 internally gethostbyname() which queries for A record only.
From tcpdump we can see that:-> A? two requests are send at the beginning
-> AAAA? (requesting IPv6 address)
<- AAAA reply
-> A? requesting again IPv4 address
<- A got reply
-> AAAA? requesting IPv6 again
<- AAAA replyOne A reply gets dropped for some reason, that's this error message:
error sending response: host unreachableYet it's unclear to me why there's a need for second AAAA query.
To verify that you're having the same issue you can update timeout in /etc/resolv.conf:
options timeout:3First create a text file with custom time reporting config:
cat >./curl-format.txt <<-EOF
time_namelookup: %{time_namelookup}\n
time_connect: %{time_connect}\n
time_appconnect: %{time_appconnect}\n
time_redirect: %{time_redirect}\n
time_pretransfer: %{time_pretransfer}\n
time_starttransfer: %{time_starttransfer}\n
----------\n
time_total: %{time_total}\n
EOFthen send a request:
$ curl -w "@curl-format.txt" -o /dev/null -s https://example.com time_namelookup: 3.511
time_connect: 3.511
time_appconnect: 3.528
time_pretransfer: 3.528
time_redirect: 0.000
time_starttransfer: 3.531
----------
time_total: 3.531There are two other related options in man resolv.conf:single-request (since glibc 2.10) sets RES_SNGLKUP in _res.options. By default, glibc performs IPv4 and IPv6 lookups in parallel since version 2.9. Some appliance DNS servers cannot handle these queries properly and make the requests time out. This option disables the
behavior and makes glibc perform the IPv6 and IPv4 requests sequentially (at the cost of some slowdown of the resolving
process).
single-request-reopen (since glibc 2.9)
The resolver uses the same socket for the A and AAAA requests. Some hardware mistakenly sends back only one
reply. When
that happens the client system will sit and wait for the second reply. Turning this option on changes this behavior so
that
if two requests from the same port are not handled correctly it will close the socket and open a new one before
sending the
second request.Related issues:DNS lookups sometimes take 5 seconds
Delay associated with AAAA request
|
I've a master bind9 DNS server and 2 slave servers running on IPv4 (Debian Jessie), using /etc/bind/named.conf:
listen-on-v6 { none; };When I try to connect from different server(s) each connection takes at least 5 seconds (I'm using Joseph's timing info for debugging):
$ curl -w "@curl-format.txt" -o /dev/null -s https://example.com
time_namelookup: 5.512
time_connect: 5.512
time_appconnect: 5.529
time_pretransfer: 5.529
time_redirect: 0.000
time_starttransfer: 5.531
----------
time_total: 5.531According to curl, lookup takes most of the time, however standard nslookup is very fast:
$ time nslookup example.com > /dev/null 2>&1real 0m0.018s
user 0m0.016s
sys 0m0.000sAfter forcing curl to use IPv4, it gets much better:
$ curl -4 -w "@curl-format.txt" -o /dev/null -s https://example.com time_namelookup: 0.004
time_connect: 0.005
time_appconnect: 0.020
time_pretransfer: 0.020
time_redirect: 0.000
time_starttransfer: 0.022
----------
time_total: 0.022I've disabled IPv6 on the host:
echo 1 > /proc/sys/net/ipv6/conf/eth0/disable_ipv6though the problem persists. I've tried running strace to see what's the reason of timeouts:
write(2, "*", 1*) = 1
write(2, " ", 1 ) = 1
write(2, "Hostname was NOT found in DNS ca"..., 36Hostname was NOT found in DNS cache
) = 36
socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 4
close(4) = 0
mmap(NULL, 8392704, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7f220bcf8000
mprotect(0x7f220bcf8000, 4096, PROT_NONE) = 0
clone(child_stack=0x7f220c4f7fb0, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7f220c4f89d0, tls=0x7f220c4f8700, child_tidptr=0x7f220c4f89d0) = 2004
rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0
rt_sigaction(SIGPIPE, NULL, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, 8) = 0
rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0
rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0
poll(0, 0, 4) = 0 (Timeout)
rt_sigaction(SIGPIPE, NULL, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, 8) = 0
rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0
rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0
poll(0, 0, 8) = 0 (Timeout)
rt_sigaction(SIGPIPE, NULL, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, 8) = 0
rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0
rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0
poll(0, 0, 16) = 0 (Timeout)
rt_sigaction(SIGPIPE, NULL, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, 8) = 0
rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0
rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0
poll(0, 0, 32) = 0 (Timeout)
rt_sigaction(SIGPIPE, NULL, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, 8) = 0
rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0
rt_sigaction(SIGPIPE, {SIG_IGN, [PIPE], SA_RESTORER|SA_RESTART, 0x7f22102e08d0}, NULL, 8) = 0
poll(0, 0, 64) = 0 (Timeout)It doesn't seem to be a firewall issues as nslookup (or curl -4) is using the same DNS servers. Any idea what could be wrong?
Here's tcpdump from the host tcpdump -vvv -s 0 -l -n port 53:
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
20:14:52.542526 IP (tos 0x0, ttl 64, id 35839, offset 0, flags [DF], proto UDP (17), length 63)
192.168.1.1.59163 > 192.168.1.2.53: [bad udp cksum 0xf9f3 -> 0x96c7!] 39535+ A? example.com. (35)
20:14:52.542540 IP (tos 0x0, ttl 64, id 35840, offset 0, flags [DF], proto UDP (17), length 63)
192.168.1.1.59163 > 192.168.1.2.53: [bad udp cksum 0xf9f3 -> 0x6289!] 45997+ AAAA? example.com. (35)
20:14:52.543281 IP (tos 0x0, ttl 61, id 63674, offset 0, flags [none], proto UDP (17), length 158)
192.168.1.2.53 > 192.168.1.1.59163: [udp sum ok] 45997* q: AAAA? example.com. 1/1/0 example.com. [1h] CNAME s01.example.com. ns: example.com. [10m] SOA ns01.example.com. ns51.domaincontrol.com. 2016062008 28800 7200 1209600 600 (130)
20:14:57.547439 IP (tos 0x0, ttl 64, id 36868, offset 0, flags [DF], proto UDP (17), length 63)
192.168.1.1.59163 > 192.168.1.2.53: [bad udp cksum 0xf9f3 -> 0x96c7!] 39535+ A? example.com. (35)
20:14:57.548188 IP (tos 0x0, ttl 61, id 64567, offset 0, flags [none], proto UDP (17), length 184)
192.168.1.2.53 > 192.168.1.1.59163: [udp sum ok] 39535* q: A? example.com. 2/2/2 example.com. [1h] CNAME s01.example.com., s01.example.com. [1h] A 136.243.154.168 ns: example.com. [30m] NS ns01.example.com., example.com. [30m] NS ns02.example.com. ar: ns01.example.com. [1h] A 136.243.154.168, ns02.example.com. [1h] A 192.168.1.2 (156)
20:14:57.548250 IP (tos 0x0, ttl 64, id 36869, offset 0, flags [DF], proto UDP (17), length 63)
192.168.1.1.59163 > 192.168.1.2.53: [bad udp cksum 0xf9f3 -> 0x6289!] 45997+ AAAA? example.com. (35)
20:14:57.548934 IP (tos 0x0, ttl 61, id 64568, offset 0, flags [none], proto UDP (17), length 158)
192.168.1.2.53 > 192.168.1.1.59163: [udp sum ok] 45997* q: AAAA? example.com. 1/1/0 example.com. [1h] CNAME s01.example.com. ns: example.com. [10m] SOA ns01.example.com. ns51.domaincontrol.com. 2016062008 28800 7200 1209600 600 (130)EDIT:
In bind logs frequently appears this message:
error sending response: host unreachableThough, each query is eventually answered (it just takes 5s). All machines are physical servers (it's not fault of NAT), it's more likely that packets are being blocked by a router. Here's quite likely related question: DNS lookups sometimes take 5 seconds.
|
Resolving hostname takes 5 seconds
|
You need a way to specify that you want to retrieve an AAAA record instead of an A record. You'll want to use the dig command for this, which is the replacement for nslookup anyways.
dig AAAA websitehostnameor if you don't want the verbose output:
dig AAAA +short websitehostname
|
Both the nslookup and host commands return IPv4 addresses only. How can i retrieve the IPv6 address of a website using the terminal?
(I have googled around, unfortunately I couldn't find anything useful)
|
Retrieve IPv6 address of website using terminal
|
Does dig +search dns01 give you what you want? If so, it it possible that +nosearch somehow got added to your ~/.digrc ?
ETA: Or, if you're like me, maybe the dig fairies failed to come and add +search to your ~/.digrc.
|
I have a lab set up with DNS running on a CentOS7 server (dns01.local.lab). The local.lab domain is defined in named.conf:
zone "local.lab" IN {
type master;
file "local.lab.zone";
allow-update { none; };
};I also have a reverse zone but that doesn't matter for this question as far as I can tell.
The zone file looks like:
$TTL 86400
@ IN SOA dns01.local.lab. root.local.lab. (
1 ; Serial
3600 ; Refresh
1800 ; Retry
604800 ; Expire
86400 ; Minimum TTL
)
@ IN NS dns01.local.lab.
@ IN A 192.168.122.100
@ IN A 192.168.122.1
dns01 IN A 192.168.122.100
virt-host IN A 192.168.122.1If I use nslookup using just the hostname I get a resolved IP:
[root@dns01 ~]# nslookup dns01
Server: 192.168.122.100
Address: 192.168.122.100#53Name: dns01.local.lab
Address: 192.168.122.100However, if I use dig using just the hostname I do not get the expected response:
[root@dns01 ~]# dig dns01; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.2 <<>> dns01
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 9070
;; flags: qr rd ra ad; QUERY: 1, ANSWER 0; AUTHORITY: 1, ADDITIONAL: 1;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;dns01. IN A;; AUTHORITY SECTION:
. 10800 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2016020401 1800 900 604800 86400;; Query time: 95 msec
;; SERVER: 192.168.122.100#53(192.168.122.100)
;; WHEN: Thu Feb 04 09:15:07 HST 2016
;; MSG SIZE rcvd: 109I only get the expected response when I use the FQDN:
[root@dns01 ~]# dig dns01.local.lab; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.2 <<>> dns01
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 9070
;; flags: qr rd ra ad; QUERY: 1, ANSWER 1; AUTHORITY: 1, ADDITIONAL: 1;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;dns01.local.lab. IN A;; ANSWER SECTION:
dns01.local.lab. 86400 IN A 192.168.122.100;; AUTHORITY SECTION:
local.lab. 86400 IN NS dns01.local.lab.;; Query time: 8 msec
;; SERVER: 192.168.122.100#53(192.168.122.100)
;; WHEN: Thu Feb 04 09:22:15 HST 2016
;; MSG SIZE rcvd: 74Reverse lookups with dig provide the expected answer. Likewise with nslookup.
I know that dig and nslookup use different resolver libraries, but from what I understand dig is considered the better way.
As the results above indicate, the correct named server is being queried. It's as if dig doesn't recognize that the server is the authority for hostname being queried.
named.conf:
options {
listen-on port 53 { 127.0.0.1; 192.168.122.100; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query {localhost; 192.168.122.0/24; };
recursion yes;
dnssec-enable yes;
dnssec-validation yes;
bindkeys-file "/etc/named.iscdlv.key";
managed-keys-directory "/var/named/dynamic";
pid-file "/run/named/named.pid";
session-keyfile "/run/named/session.key";
};logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};zone "." IN {
type hint;
file "named.ca";
};zone "local.lab" IN {
type master;
file "local.lab.zone";
allow-update { none; };
};zone "122.168.192.in-addr.arpa" IN {
type master;
file "local.lab.revzone";
allow-update { none; };
};include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
|
dig does not resolve unqualified domain names, but nslookup does
|
This is not a problem of a more basic protocol not working, but rather that there are multiple name service resolution protocols being used; ping here understands multicast DNS (mDNS) and is able to resolve the name minwinpc.local to an IP address via that protocol. dig and nslookup by contrast may only understand or use the traditional DNS protocol, which knows nothing about mDNS, and thus fail.
The .local domain is a clear indicator of mDNS (via a web search on ".local domain"); more can be read about it in [RFC 6762]. Another option for debugging a situation like this would be to run tcpdump or WireShark and look for packets that contain minwinpc.local; this may reveal the mDNS traffic.
Still another option would be to nmap the IP of the minwinpc.local device; this may well show that the device is listening on UDP/5353 and then one can research what services that port is used for (and then one could sudo tcpdump udp port 5353 to inspect what traffic involves that port).
|
I'm experimenting with a Win10 IoT board that runs a web interface on minwinpc.local. This works fine in the browser, and also when I use ping.
However, when I use dig or nslookup, I cannot get resolve working.
How can ping and the browser possibly get the IP if the more basic tools fail to do the resolve?
Setup is just a DragonBoard with Win10 IoT Core, connected to an iPhone hotspot. Client that tries connecting is running macOS Sierra. No special hosts or resolve files have been adjusted.
ping
$ping minwinpc.local
PING minwinpc.local (172.20.10.3): 56 data bytes
64 bytes from 172.20.10.3: icmp_seq=0 ttl=128 time=6.539 msdig
$ dig minwinpc.local any @172.20.10.1; <<>> DiG 9.8.3-P1 <<>> minwinpc.local any @172.20.10.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 61796
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0;; QUESTION SECTION:
;minwinpc.local. IN ANY;; Query time: 51 msec
;; SERVER: 172.20.10.1#53(172.20.10.1)
;; WHEN: ...
;; MSG SIZE rcvd: 35nslookup
$ nslookup minwinpc.local
Server: 172.20.10.1
Address: 172.20.10.1#53** server can't find minwinpc.local: NXDOMAINRelated questions:https://stackoverflow.com/questions/45616546
MSDN forums (same question)
|
dig / nslookup cannot resolve, but ping can
|
This is a feature of dnsmasq. The dnsmasq people call it "Rebind Protection". It defaults to being on. Either turn it off, or add the domain that you desire to work to the set of whitelisted rebind_domain domains.
You can see the option rebind_protection if you wish to change it in /etc/config/dhcp. If you decide to disable Rebind Protection, you'll have to run /etc/init.d/dnsmasq restart for changes to take effect.
config dnsmasq
option domainneeded '1'
option localise_queries '1'
option rebind_protection '0'
option rebind_localhost '1'If you wish to maintain that security except for one domain, you can simply add to the exclusion like this,
config dnsmasq
list rebind_domain '/acmevpn.net/'
list rebind_domain '/foobar.net/'The attack that it purports to prevent is one where an attacker, who has taken advantage of the fact that your WWW browser will automatically download and run attacker-supplied programs from the world at large, makes xyr domain name seem to rapidly alternate between an external IP address and one internal to your LAN, allowing your machine to become a conduit between another machine on your LAN with that IP address and some attacker-run content servers.
|
Setup
On some networks I'm able to use nslookup to resolve a domain name that is pointed to a private ip address:
@work> nslookup my192.ddns.net
Server: 10.1.2.3
Address: 10.1.2.3#53Non-authoritative answer:
Name: my192.ddns.net
Address: 192.168.20.20However, on my home network this same query fails:
@home> nslookup my192.ddns.net
Server: 192.168.0.1
Address: 192.168.0.1#53Non-authoritative answer:
*** Can't find my192.ddns.net: No answerWhat Works
I've found that if I change the A record for my192.ddns.net so that it points to a public IP range it will work fine:
@home> nslookup my192.ddns.net
Server: 192.168.0.1
Address: 192.168.0.1#53Non-authoritative answer:
Name: my192.ddns.net
Address: 172.217.12.238At home, if I specify the DNS server for nslookup, or set my laptop's DNS servers to Google's nslookup works as expected:
@home> nslookup my192.ddns.net 8.8.8.8
Server: 8.8.8.8
Address: 8.8.8.8#53Non-authoritative answer:
Name: my192.ddns.net
Address: 192.168.20.20But I'd like to continue to use my home router as my primary DNS so that it can resolve local network names. I'd just like it not to fail when trying to do lookups for DNS records that point to private range addresses (eg: 192.168.20.20)
Home Network
I run LEDE (formerly OpenWRT) on my home router, which run dnsmasq. I've looked over the documentation for DNS and have even setup the system so that the DNS server it uses to resolve the address is Google's (8.8.8.8) - but it still fails and I can't seem to figure out why.
Question
What's happening here and how can I fix it?
|
Why does nslookup fail for DNS records set to a private address?
|
If you want to see the nameservers listed by the registrar, those are available in the DNS system via the root servers.
For example:
dig @a.gtld-servers.net ns stackoverflow.com; <<>> DiG 9.10.2-P4 <<>> @a.gtld-servers.net ns stackoverflow.com
; (2 servers found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55658
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 3
;; WARNING: recursion requested but not available;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;stackoverflow.com. IN NS;; AUTHORITY SECTION:
stackoverflow.com. 172800 IN NS cf-dns02.stackoverflow.com.
stackoverflow.com. 172800 IN NS cf-dns01.stackoverflow.com.;; ADDITIONAL SECTION:
cf-dns02.stackoverflow.com. 172800 IN A 173.245.59.4
cf-dns01.stackoverflow.com. 172800 IN A 173.245.58.53;; Query time: 65 msec
;; SERVER: 192.5.6.30#53(192.5.6.30)
;; WHEN: Mon Sep 21 15:53:29 GMT 2015
;; MSG SIZE rcvd: 124If you modify the name servers listed in your registrar account, those servers will be reflected in the root / gtld servers. When you modify your DNS zones that your nameservers serve, they have no effect on the results returned by the root servers. Additionally, the only records the root servers will return are NS and A/AAAA defined by the registrar for the listed NS records. These are just pointers to find the authoritative (per the registrar) name servers for a domain to send your queries to.
|
A domain has nameservers and ns records. These should not, but can theoretically be different. There are multiple ways to see the ns records of a domain:
dig:
➜ ~ dig +short NS stackoverflow.com
cf-dns01.stackoverflow.com.
cf-dns02.stackoverflow.com.nslookup:
➜ ~ nslookup -type=any stackoverflow.com
Server: 195.186.1.111
Address: 195.186.1.111#53Non-authoritative answer:
stackoverflow.com nameserver = cf-dns01.stackoverflow.com.
stackoverflow.com nameserver = cf-dns02.stackoverflow.com.Both these commands give the nsrecords of a domain. Via whois, you can see the real nameservers (which in this case are the same). But since most whois outputs are formatted different for almost every tld, it would be difficult to parse them out of the whois.
Is there any way to see the nameservers of a domain (not the nsrecords) without exeucting a whois?
|
How to see the actual nameservers of a domain instead of the "nsrecords"
|
If your desired goal is to just make a table of hostnames and IP addresses, and you don't care particularly about using nslookup, I was able to seemingly create your desired output with a quick for .. echo loop:
for h in $( cat hosts.list ); do
a=$(dig +short $h | head -n1)
echo -e "$h\t${a:-Did_Not_Resolve}"
donedig is a slightly more friendly-to-scripting DNS tool than is nslookup, using the +short option makes the output even cleaner. The output of a request for which there is no record is an empty string, so I use the built-in bash default parameter expansion (${var:-default}) to handle the case of no record giving a "default" answer of Did_Not_Resolve.
$ dig www.example.com; <<>> DiG 9.10.6 <<>> www.example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23579
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4000
;; QUESTION SECTION:
;www.example.com. IN A;; ANSWER SECTION:
www.example.com. 20308 IN A 93.184.216.34;; Query time: 28 msec
;; SERVER: 172.28.8.1#53(172.28.8.1)
;; WHEN: Fri Jun 01 12:02:27 MST 2018
;; MSG SIZE rcvd: 60$ dig +short www.example.com
93.184.216.34The ultimate yield is this output:
www.example.com 93.184.216.34
www.google.com 172.217.14.68
host.doesnotexist.tld Did_Not_Resolve
unix.stackexchange.com 151.101.129.69An alternative to dig is also host:
$ for h in $(cat hosts.list); do host $h; done
www.example.com has address 93.184.216.34
www.example.com has IPv6 address 2606:2800:220:1:248:1893:25c8:1946
www.google.com has address 216.58.193.196
www.google.com has IPv6 address 2607:f8b0:4007:80d::2004
Host host.doesnotexist.tld not found: 3(NXDOMAIN)
unix.stackexchange.com has address 151.101.129.69
unix.stackexchange.com has address 151.101.1.69
unix.stackexchange.com has address 151.101.65.69
unix.stackexchange.com has address 151.101.193.69In response to the questions in the comment below:
The only option I use for dig is +short, which reduces the output to either the IP address for the given host, or an empty string otherwise. I run dig in a subshell ($( dig [...] )) because I am capturing its output and assigning it to the variable a (for "address"). I am piping the output of dig through head -n1 as some hosts (like the host unix.stackexchange.com return multiple IP addresses; for the sake of simplicity, I simply grab the first one.
The reason this is being pulled out into a variable is so that we can use a simple parameter expansion trick to provide the "Did not resolve" text in lieu of an empty string, as described previously herein.
Expanding as requested on the echo statement specifically:
echo -e "$h\t${a:-Did_Not_Resolve}"The -e switch tells echo that I will be using escape sequences. In this case, I am using \t which, when combined with -e, becomes a Tab rather than a literal escaped t.
$h is, as you would expect, simply replaced with the contents of the variable h.
\t, as explained earlier, becomes a Tab.
${a:-Did_Not_Resolve}. Ah, here's where the magic is. bash has the ability when doing parameter expansion to do a little introspection as part of the process. The syntax ${var:-default} expands to the contents of the variable var or, if that is either unset or null, the provided replacement (in this example case, default; or in the actual use case here, Did_Not_Resolve). You can find more details about this in the bash manual page, in the section labeled "Parameter Expansion".The end result of this is outputting on each line, in the following order, the hostname, a Tab, and either the address if there was one, or the text Did_Not_Resolve if there was not.
|
I am currently running a script that uses nslookup on a bunch of hosts and then uses awk to print desired lines into a table. I am printing one line to file1 and another to file2, then using paste file1 file2 >> file3 to produce this table.
The table looks like this
Host IP
name 10.10.10.10
name 10.10.10.10
name 10.10.10.10For the most part, this is working. But for some reason, about 20 of my 160 results are getting "answer:" in the left column, and the hostname is appearing in the right. Like this:
Host IP
answer: hostnameThis is showing up randomly throughout the results and I can't figure it out because the nslookup doesn't have the word "answer:" in it anywhere for the script to accidentally awk.
Here is my script for reference:
hosts='hosts.list'
filelines=`cat $hosts`Empty_Containers(){
truncate -s 0 tmp.txt
truncate -s 0 file1
truncate -s 0 file2
}for h in $filelines ;
do
Empty_Containers
nslookup $h > tmp.txt
if grep -q "NXDOMAIN" tmp.txt
then
cat tmp.txt | awk 'FNR ==4 {print$5}' > file1
echo "Did_Not_Resolve" > file2
paste file1 file2 >> i.txt
else
cat tmp.txt | awk 'FNR ==4 {print$2}' > file1
cat tmp.txt |awk 'FNR ==5 {print$2}' > file2
paste file1 file2 >> i.txt
fi
cat i.txt | column -t 2 i.txt
done
|
nslookup awk to a file showing "answer"
|
One solution might be to temporarily change the order of the nameservers in /etc/resolv.conf .
Another approach is to iterate through the nameservers and use them separately:
while read IP
do
echo "Testing nameserver ${IP}"
nslookup google.com "${IP}"
done < <(grep nameserver /etc/resolv.conf| awk '(FNR != 2) {print $2;}')
|
We have a list of dns server IPs in /etc/resolv.conf. When doing nslookup for a particular scenario we would like to ingore the second entry below, so that naming resolution occurs via other 3 DNS server IPs.
$ cat /etc/resolv.conf
domain example.com
nameserver 192.168.1.1
nameserver 10.10.10.1
nameserver 192.168.1.2
nameserver 192.168.1.3Anyone has ideas? Thanks.
|
any option to ignore a dns server ip from /etc/resolv.conf when doing nslookup?
|
You did not use fully-qualified domain names. www.google.com is not a fully-qualified (human readable form) domain name. It does not end with a dot.
You also have a search path of plannersys.net. configured in your DNS client library and a wildcard DNS resource record for *.plannersys.net..
As a consequence of these, wget and tranceroute looked up the fully qualified domain name www.google.com.plannersys.net. and received the IP address 75.102.21.14 as the result. Your DNS client library, remember, turns non-fully-qualified domain names into fully-qualified domain names using the configured search paths and then issues lookups for the fully-qualified names.
nslookup differed because it uses a different, internal, DNS client library. Amusingly, this is one instance where it did not differ from ping, but that is probably because your DNS client library is configured with multiple proxy DNS servers that do not all present the same view of the DNS namespace, or you have something like systemd-resolved in the mix that is changing your DNS client configuration on the fly. There is zero information about your DNS client library in your question, so there is not enough information from you for anyone to determine exactly why this is.
Further readingJonathan de Boyne Pollard (2017). What DNS name qualification is. Frequently Given Answers.
Jonathan de Boyne Pollard (2003). Why the results from nslookup are different to the operation of ping. Frequently Given Answers.
Jonathan de Boyne Pollard (2004). DNS diagnosis tools. Frequently Given Answers.
|
We have a managed server with centos, recently it starts showing strange behavior:
root@server [/tmp]# ping www.google.com
PING www.google.com (172.217.8.196) 56(84) bytes of data.
64 bytes from ord37s09-in-f4.1e100.net (172.217.8.196): icmp_seq=1 ttl=53 time=24.8 msSo this looks quite good.
And here it becomes strange. We get our own server instead of the google server!
root@server [/tmp]# traceroute www.google.com
traceroute to www.google.com (75.102.21.14), 30 hops max, 60 byte packets
1 server.plannersys.net (75.102.21.14) 0.041 ms 0.017 ms 0.015 msNslookup, however, looks good:
root@server [/tmp]# nslookup
> www.google.com
Server: 8.8.8.8
Address: 8.8.8.8#53Non-authoritative answer:
Name: www.google.com
Address: 172.217.8.196
> ^CAnd wget and lynx also give us our own server.
root@server [/tmp]# wget https://www.google.com
--2017-12-04 11:50:56-- https://www.google.com/
Resolving www.google.com... 75.102.21.14
Connecting to www.google.com|75.102.21.14|:443... connected.
ERROR: no certificate subject alternative name matches
requested host name “www.google.com”.
To connect to www.google.com insecurely, use ‘--no-check-certificate’.And with lynx:
SSL error:host(www.google.com)!=cert(troutaccess.com)-Continue? (y)What can be the reason for this? Why do traceroute and wget and lynx use different addresses?
|
Why does ping resolve to a different address than traceroute? and lynx?
|
Run :
ldd $( which dig) | grep crypto, this will show you which crypto lib you're using at the moment. If this is different than expected one (usually openssl) you have few options :Remove the lib which interferes
Modify LDD_LIBARY_PATH env variable, and point to the openssl lib location
Fix the problem by removing unwanted library' location from /etc/ld.so.conf and /etc/ld.so.cond.d/* files. Running ldconfig afterwards. Warning : this will most probably break application using it.
|
I'm finding that a few commands (for now dig and nslookup) that fail no matter what with the following output:
19-Jan-2016 15:01:50.219 ENGINE_by_id failed (crypto failure)
19-Jan-2016 15:01:50.219 error:2606A074:engine routines:ENGINE_by_id:no such engine:eng_list.c:389:id=gost
dig: dst_lib_init: crypto failureEven stuff like dig -h results in this, so I guess this happens before the actual command execution starts
I remember these commands used to work, but they're not something I used very often, so I can't exactly pinpoint the origin
I can, however, say that I have messed with ssl options recently. Particularly, I was having problems handling GPG keys, and had to run export OPENSSL_CONF=/etc/ssl/openssl.cnf in order to make it work
I also found this issue, which seems to be similar. But that project has nothing to do with what I'm doing, and their solution (unsetting OPENSSL_CONF) did not work for me
EDIT:
I'm running Arch Linux.
The only change I did regarding OpenSSL configurations was running export OPENSSL_CONF=/etc/ssl/openssl.cnf which I needed to use gpg, but I already tried unsetting that
Running unset OPENSSL_CONF; dig -h results in the same output
|
"crypto failure" error when running various commands
|
The problem is in your resolv.conf/DHCP configuration if resolv.conf is not static.
You have got to add to the search directive of resolv.conf the domain eai.com
When you try a DNS name lookup, the resolver libraries, if unsuccessful, try in turns to resolve the name adding the domains in the search directive until they find a resolvable name, or until they exhaust the domain list in the search directive.
|
I tried setting up a DNS Lookup on CentOS 7 (in a Virtual Box VM), which works for FQDN on the same virtual machine as the DNS. However when I try to resolve the short hostname, it fails.
I have seen this working on some servers and wanted to learn how to set it up myself. Appreciate any help on this.
Below are the configurations in place:
File - /etc/named.conf
//
// named.conf
// options {
listen-on port 53 { 127.0.0.1; 192.168.56.101; };
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query { localhost; 192.168.0.0/24; };
allow-transfer { localhost; 192.168.56.101; }; recursion yes; dnssec-enable yes;
dnssec-validation yes;
dnssec-lookaside auto; /* Path to ISC DLV key */
bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; pid-file "/run/named/named.pid";
session-keyfile "/run/named/session.key";
}; logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
}; zone "." IN {
type hint;
file "named.ca";
}; zone "eai.com" IN {
type master;
file "forward.linuxzadmin";
allow-update { none; };
}; zone "0.168.192.in-addr.arpa" IN {
type master;
file "reverse.linuxzadmin";
allow-update { none; };
}; include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";File - /etc/resolv.conf
# Generated by NetworkManager
# nameserver 169.144.126.136
# nameserver 146.11.115.200
# nameserver 153.88.112.200
# NOTE: the libc resolver may not support more than 3 nameservers.
# The nameservers listed below may not be recognized.
# nameserver 147.128.170.138
# nameserver 127.0.0.1
nameserver 192.168.56.101File - /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.101 eai16.eai.com eai16 eai16-oamFile - /var/named/forward.linuxzadmin
$TTL 86400
@ IN SOA masterdns.eai.com. root.eai.com. (
2014051001 ; serial
3600 ; refresh
1800 ; retry
604800 ; expire
86400 ; minimum
)
@ IN NS masterdns.eai.com.
@ IN A 192.168.56.101
masterdns IN A 192.168.56.101
node1 IN A 192.168.56.101
eai16 IN A 192.168.56.101File - /var/named/reverse.linuxzadmin
$TTL 86400
@ IN SOA masterdns.eai.com. root.eai.com. (
2014051001 ; serial
3600 ; refresh
1800 ; retry
604800 ; expire
86400 ; minimum
)
@ IN NS masterdns.eai.com.
@ IN PTR eai.com.
masterdns IN A 192.168.56.101
node1 IN A 192.168.56.101
eai16 IN A 192.168.56.101
101 IN PTR masterdns.eai.com.
101 IN PTR node1.eai.com.
101 IN PTR eai16.eai.com.
101 IN PTR eai16.Command Output
Hostname
[root@eai16 etc]# hostname -f
eai16.eai.com
[root@eai16 etc]# hostname -s
eai16NS Lookup on FQDN
[root@eai16 etc]# nslookup eai16.eai.com
Server: 192.168.56.101
Address: 192.168.56.101#53Name: eai16.eai.com
Address: 192.168.56.101Dig on FQDN
[root@eai16 etc]# dig eai16.eai.com; <<>> DiG 9.9.4-RedHat-9.9.4-38.el7_3.3 <<>> eai16.eai.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62927
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;eai16.eai.com. IN A;; ANSWER SECTION:
eai16.eai.com. 86400 IN A 192.168.56.101;; AUTHORITY SECTION:
eai.com. 86400 IN NS masterdns.eai.com.;; ADDITIONAL SECTION:
masterdns.eai.com. 86400 IN A 192.168.56.101;; Query time: 0 msec
;; SERVER: 192.168.56.101#53(192.168.56.101)
;; WHEN: Wed Jun 28 21:13:38 IST 2017
;; MSG SIZE rcvd: 98Host on FQDN
[root@eai16 etc]# host eai16.eai.com
eai16.eai.com has address 192.168.56.101
[root@eai16 etc]# host `hostname`
eai16.eai.com has address 192.168.56.101Now all the commands (nslookup, dig and host) fail on the short hostname.
[root@eai16 etc]# host eai16
Host eai16 not found: 2(SERVFAIL)
[root@eai16 etc]# host eai16
;; connection timed out; no servers could be reached
[root@eai16 etc]# nslookup eai16
Server: 192.168.56.101
Address: 192.168.56.101#53** server can't find eai16: SERVFAIL[root@eai16 etc]# dig eai16; <<>> DiG 9.9.4-RedHat-9.9.4-38.el7_3.3 <<>> eai16
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 23006
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;eai16. IN A;; Query time: 0 msec
;; SERVER: 192.168.56.101#53(192.168.56.101)
;; WHEN: Wed Jun 28 21:25:18 IST 2017
;; MSG SIZE rcvd: 34I know there is something missing/wrong in my configuration, but am not able to figure out what.
|
CentOS7 unable to resolve nslookup for short hostname
|
check if your hostname is registered in DNS server you are using in /etc/resolve.conf. If not register it and check it should work.
|
I am having two NICs eth0 and eth1 for my Linux VM out of which one is in public and other in private network. When I am using nslookup for same by hostname its giving following error:
** server can't find "hostname": NXDOMAIN
I have checked all entries in /etc/hosts , /etc/sysconfig/network-scripts/ifcfg-eth0 , /etc/sysconfig/network-scripts/ifcfg-eth1 etc.
These all entries seems proper.
My /etc/resolve.conf is following:
domain in.rdlabs.hpecorp.net
search in.rdlabs.hpecorp.net
nameserver 16.110.135.51
nameserver 16.110.135.52
nameserver 16.110.135.53and netstat -r is :
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
192.168.0.0 * 255.255.252.0 U 0 0 0 eth1
15.154.112.0 * 255.255.248.0 U 0 0 0 eth0
169.254.0.0 * 255.255.0.0 U 0 0 0 eth1
default 15.154.112.1 0.0.0.0 UG 0 0 0 eth0
|
'nslookup' is not working on multiple network interfaces cards in linux
|
It would seem that ilportaledellautomobilista.it does not have an A record, however the domain does exist and does have a nameserver (SOA shows that). Since that host only has a SOA record, thats what dig is returning.
www.ilportaledellautomobilista.it does have an A record though. So give that a try instead, and you will see that A records have been configured for it. Also ilportaledellautomobilista.it is being redirected to www.ilportaledellautomobilista.it (try and type ilportaledellautomobilista.it on its own in a web browser and you will see what I mean)
The A record indicates the host's (domain name) IP. The SOA record indicates what the name server is for that domain. (basically which DNS server its managed from). Since the domain name itself does not have an A record specified, you will not get an IP address returned (thats what the A record is for, to specify an IP).
|
I have a domain and when I use
dig jeeja.bizit just gives me my server ip address.
;; ANSWER SECTION:
jeeja.biz. 13914 IN A 209.15.212.171But when I use the same command on another site:
dig ilportaledellautomobilista.itthe replay is:
;; AUTHORITY SECTION:
ilportaledellautomobilista.it. 3600 IN SOA dns.it.net. root.dns.it.net. 2012071209 86400 7200 604800 86400What is that and why there is no IP address?
Why ping, nslookup and traceroute are not working on that specific address?
|
dig domain replay SOA
|
These addresses are coming from your private network or are somehow spoofed.
Addresses from 10.0.0.0 to 10.255.255.255 are reserved for private networks (not connected to the internet)
http://tldp.org/HOWTO/IP-Masquerade-HOWTO/addressing-the-lan.html
|
I got several interesting :) e-mails from IP addresses where I can't identify country and owner (internet provider), it's by example
10.180.221.97
10.220.113.130
10.52.135.39I tried several IP lookup services with no luck. Please could you help me? Is it possible that IP doesn't have country identification?
|
Can't identify IP addresses
|
nslookup is contained in bind-utils package.
You should use below command to install it:
# yum install bind-utils
|
I can not install nslookup in my CentOS 7.2:
yum install nslookup[root@localhost network-scripts]# yum install -y nslookup
added plugin:fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.sunnyvision.com
* epel: my.mirrors.thegigabit.com
* extras: mirror.sunnyvision.com
* updates: mirrors.icidc.com
No available nslookup
|
I can not install nslookup in my CentOS server
|
** server can't find some.url.ihave: NXDOMAINnslookup stops because it has received an answer to its query. It stops asking once it has an answer, obviously. That answer was that the domain does not exist.
If you don't want that to be the case, do not list in resolv.conf the IP address of a DNS server that thinks that that domain does not exist. Otherwise you'll end up sometimes, perhaps always, getting that as the answer. (There is no fixed universal rule about what order these things are tried in. Two programs from the same stable, nslookup and the BIND DNS client library from ISC, use different orders; and there are other DNS clients from other people with other behaviours still.)
Further readingJonathan de Boyne Pollard (2003). Your fallback proxy DNS servers must provide the same view of the DNS namespace as your principal one.. Frequently Given Answers.
Jonathan de Boyne Pollard (2003). Why the results from nslookup are different to the operation of ping. Frequently Given Answers.
Jonathan de Boyne Pollard (2001). nslookup is a badly flawed tool. Don't use it.. Frequently Given Answers.
|
# uname -a
Linux myserver 3.10.0-1062.18.1.el7.x86_64 #1 SMP Tue Mar 17 23:49:17 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)# cat /etc/resolv.conf# Generated by NetworkManager
options rotate
options timeout:3
options attempts:6
nameserver one.xxx.xxx.xxx
nameserver two.xxx.xxx.xxx
nameserver thr.xxx.xxx.xxx# nslookup some.url.ihave
Server: one.xxx.xxx.xxx
Address: one.xxx.xxx.xxx#53** server can't find some.url.ihave: NXDOMAINWhy does it not try the other two nameservers for DNS lookup that I have entered in resolv.conf ?
|
/etc/resolv.conf on CentOS7 doesn't seem to honor options
|
That represents IPv6,
In short form it is, 2a00:1288:110:2::4001, which represents
2a00:1288:0110:0002:0000:0000:0000:4001 in long form.
|
nslookup returns the following output on mac:
www.yahoo.com -> 46.228.47.115, 46.228.47.114, 2a00:1288:110:2::4001As I understand following IPs in DNS map to www.yahoo.com:
www.yahoo.com -> 46.228.47.115
www.yahoo.com -> 46.228.47.114But what is:
2a00:1288:110:2::4001
|
Understand the nslookup output.
|
Taken straight from HPUX documentation for nslookupls [option] domain List the information available for domain [...]. The default output contains host names and their Internet addresses.The ls subcommand will work only if you are connecting to an authoritative server and you have permission to request a zone transfer.
nslookup - 10.1.1.1 # The authoritative server for zone contoso.com
> ls contoso.com
|
I don't understand how to use the ls option in nslookup on HP-UX. It failed both interactively:
> ls
Using /etc/hosts on: hpuxlooking up FILES
Trying DNS
Name: ls.> set ls
*** Invalid option: ls
>And non-interactively:
nslookup -query=PTR 10.3.0.2 10.3.0.2 lsWhat's the right way to use it?
|
How to use ls in nslookup on HP-UX? [closed]
|
nslookup will query for both A and AAAA records, so if the A query returns immediately and the AAAA never returns, then nslookup will print an immediate response, then timeout.
Here's a table I made of how the dnsmasq server on 128.8.8.254 answered various types of queries:
dig @128.8.8.254 A focal-250 immediate success (A record)
dig @128.8.8.254 A focal-250.test immediate success (A record)
dig @128.8.8.254 AAAA focal-250 immediate SERVFAIL
dig @128.8.8.254 AAAA focal-250.test 15 second timeout, no responseWhat the output from nslookup meant is that it got the A record response (the first six lines), then timed out waiting for AAAA record.
One way I found to "fix" the problem is to tell dnsmasq that it's authoritative for the test domain by putting auth-zone=test in its config file. Now it behaves like this:
dig @128.8.8.254 A focal-250 immediate success (A record)
dig @128.8.8.254 A focal-250.test immediate success (A record)
dig @128.8.8.254 AAAA focal-250 immediate SERVFAIL
dig @128.8.8.254 AAAA focal-250.test immediate NOERROR (no records)nslookup and ping now respond immediately.
I've also found it useful to make dnsmasq "authoritative" for in-addr.arpa, for the same reason: so it returns an immediate NOERROR instead of timing out. The systemd-resolved service seems to use the answer from the server that responded with a record instead of the server that responded with nothing:
ubuntu@ca:~$ dig +short @128.8.8.254 -x 18.165.83.71
ubuntu@ca:~$ dig +short @192.168.1.1 -x 18.165.83.71
server-18-165-83-71.iad55.r.cloudfront.net.
ubuntu@ca:~$ dig +short @127.0.0.53 -x 18.165.83.71
server-18-165-83-71.iad55.r.cloudfront.net.
|
Here's what my nslookup is doing:
ubuntu@ca:~$ time nslookup focal-250
Server: 127.0.0.53
Address: 127.0.0.53#53Non-authoritative answer:
Name: focal-250.test
Address: 128.8.8.187
;; connection timed out; no servers could be reachedreal 0m15.024s
user 0m0.005s
sys 0m0.018sThe first six lines (i.e, the correct response) printed instantly, then it waited 15 seconds to "time out". Something like ping does the same thing: stalls for 15 seconds, then starts working.
It's an Ubuntu 20.04 LTS system running systemd-resolved. The only thing weird about it is that it has dnsmasq listening for name service on one of its interfaces, and that interface's address is configured as its own nameserver:
ubuntu@ca:~$ resolvectl
Global
LLMNR setting: no
MulticastDNS setting: no
DNSOverTLS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNSSEC NTA: 10.in-addr.arpa
16.172.in-addr.arpa
168.192.in-addr.arpa
17.172.in-addr.arpa
18.172.in-addr.arpa
19.172.in-addr.arpa
20.172.in-addr.arpa
21.172.in-addr.arpa
22.172.in-addr.arpa
23.172.in-addr.arpa
24.172.in-addr.arpa
25.172.in-addr.arpa
26.172.in-addr.arpa
27.172.in-addr.arpa
28.172.in-addr.arpa
29.172.in-addr.arpa
30.172.in-addr.arpa
31.172.in-addr.arpa
corp
d.f.ip6.arpa
home
internal
intranet
lan
local
private
test Link 3 (ens5)
Current Scopes: DNS
DefaultRoute setting: yes
LLMNR setting: yes
MulticastDNS setting: no
DNSOverTLS setting: no
DNSSEC setting: no
DNSSEC supported: no
Current DNS Server: 128.8.8.254
DNS Servers: 128.8.8.254
DNS Domain: test Link 2 (ens4)
Current Scopes: DNS
DefaultRoute setting: yes
LLMNR setting: yes
MulticastDNS setting: no
DNSOverTLS setting: no
DNSSEC setting: no
DNSSEC supported: no
Current DNS Server: 192.168.1.1
DNS Servers: 192.168.1.1
DNS Domain: freesoft.orgubuntu@ca:~$ ip -br addr
lo UNKNOWN 127.0.0.1/8 ::1/128
ens4 UP 192.168.4.183/24 fe80::e2c:d2ff:fe67:0/64
ens5 UP 128.8.8.254/24 fe80::e2c:d2ff:fe67:1/64ubuntu@ca:~$ tail -5 /etc/dnsmasq.conf
listen-address=128.8.8.254
bind-interfaces
dhcp-range=128.8.8.101,128.8.8.200,12h
dhcp-authoritative
domain=testubuntu@ca:~$ tail -4 /etc/resolv.conf nameserver 127.0.0.53
options edns0 trust-ad
search test freesoft.orgIt's doing what I want, which is to answer queries for the ".test" domain, but I don't understand why it stalls for 15 seconds after getting the answer.
|
Why would nslookup return a response, then timeout?
|
A python solution
#!/usr/bin/python3import socket #this module is core networking module in Python,
#can be used to resolve domain names.sourcefile = 'sourcefile.txt' #file with domain names
outfile = 'results.txt' #file to write the IP addresseswith open(sourcefile, 'r') as inputf:
#This opens the sourcefile in read mode to see what are the domains with open(outfile, 'a') as outputf:
#This opens the outfile in append mode to write the results domains = inputf.readlines()
#This reads all the domains in sourcefile line by line for domain in domains:
#This for loop will go one by one on domains. domain = domain.strip("\n")
#as the every domain in the file are in newline,
#the socket function will have trouble, so strip off the newline char try:
resolution = (socket.getaddrinfo(domain, port=80,type=2))
for ip in resolution:
outputf.write(str(ip[4][0])+" "+domain+ " www."+domain+"\n" )
except:
outputf.write("Could not resolve "+domain+" www."+domain+"\n")
#getaddinfo("domain") gets all the IP addresses. Input :
1.gravatar.com
abcya.com
allaboutbirds.org
google.com
akamai.deOutput :
192.0.73.2 1.gravatar.com www.1.gravatar.com
2a04:fa87:fffe::c000:4902 1.gravatar.com www.1.gravatar.com
104.198.14.52 abcya.com www.abcya.com
128.84.12.109 allaboutbirds.org www.allaboutbirds.org
216.58.197.78 google.com www.google.com
2404:6800:4007:810::200e google.com www.google.com
104.127.218.235 akamai.de www.akamai.de
2600:140b:a000:28e::35eb akamai.de www.akamai.de
2600:140b:a000:280::35eb akamai.de www.akamai.de
|
I'd like a way of looking up all the domains in a text file (one domain per line) and generating another text file with the output as the IP address, a space, the domain name, a space and then the domain name with www. prepending it.
For example, if the source text file contains two lines:
1.gravatar.com
abcya.comthe new text file would contain 3 lines as 1.gravatar.com has both an IPv4 and an IPv6 address:
72.21.91.121 1.gravatar.com www.1.gravatar.com
2a04:fa87:fffe::c000:4902 1.gravatar.com www.1.gravatar.com
104.198.14.52 abcya.com www.abcya.comI'm running a Ubuntu derivative and can use nslookup to get the IPv4 and IPv6 addresses. However, the source text file is a list of over 2,000 domains - so doing it by hand would take a very long time with plenty of room for error.
And if the answer could allow for no IP address too. If the domain no longer exists (as in the case of alwaysbeready.mybigcommerce.com), nslookup returns
** server can't find alwaysbeready.mybigcommerce.com: NXDOMAIN
So, maybe have NXDOMAIN in place of the IP Address in the resulting text file?
Thanks in advance to anyone who can help.
|
Look Up IPs From Text File, Generate Another Text File With Specific Formatting
|
Either disabling or setting the reverse path filtering to loose mode for the private interfaces resolved the issue.
echo 2 > /proc/sys/net/ipv4/conf/ethusb0/rp_filter
echo 2 > /proc/sys/net/ipv4/conf/ethusb1/rp_filter
|
I have an embedded system with two ethernet ports. These two ports are connected
to two different ethernet ports on a linux box. The linux box has another third port
which is connected to the WAN.
The setup looks like below
_________________
eth0 ---- USB2ETH adapter-------(ethusb0)------------------| |
(IP: 192.168.2.50) (IP: 192.168.2.1) | Linux |
(Netmask: 255.255.255.0) (Netmask: 255.255.255.0) | Box |-------ethext0----WAN
| |
eth1----USB2ETH adapter--------(ethusb1)-------------------|_________________|
(IP: 192.168.3.50) (IP: 192.168.3.1)
(Netmask: 255.255.255.0) (Netmask: 255.255.255.0)Both the interfaces are in different domain but same netmask as shown above
ethusb0 and ethusb1 run dhcp servers. I have updated the /etc/dhcp/dhcpd.conf
accordingly and eth0 and eth1 get IP addresses assigned.
On the linux box, I have setup iptables to accept and forward packets from ethusb0 to ethext0
sudo iptables --policy FORWARD ACCEPTsudo iptables -A FORWARD -i ethusb0 -o ethext0 -j ACCEPT
sudo iptables -A FORWARD -i ethext0 -o ethusb0 -m state --state ESTABLISHED,RELATED -j ACCEPT
sudo iptables -t nat -A POSTROUTING -o ethext0 -j MASQUERADESimilar iptables is setup for ethusb1 as well.
On the linux box, I have also updated the /etc/network/interfaces for the ethusb0 and ethusb1
by adding the dns-nameservers. Lets say the server address is 192.0.3.3
Now, from the embedded system, from both the ports I'm able to ping the dns server.
When I do a nslookup of the server name, it succeeds most of the time.
I monitored the ethusb0 and ethext0 wireshark and I can see the nslookup request and replies.
Requests are received on ethusb0 and then forwarded to ethext0 and replies from ethext0 to ethusb0.
I also double confirmed by checking the forward stats counter in iptables for these interfaces.
Problem:
Now coming to the issue that occurs frequently.
There are certain times when the nslookup fails. The query packets are received on ethusb0 but not forwarded to ethext0. Confirmed this by monitoring wireshark and also iptables stats. But the next nslookup query works succeeds.
I investigated further and found an abnormality in nslookup query frames originating from the embedded system side. The frames sent out on say eth0, had MAC address of eth0 but the IP address of eth1. Its only for these kind of frames, that forwarding rule breaks.
Firstly, I do not know or understand, why the query packets would contain mismatched MAC and IP address. Usually the same interface is chosen to transmit/receive packets. Its only when I bring down and up the interface(ex. eth0), then the other interface is chosen.
Secondly, I'm not quite sure as to why the packets do not get forwarded. My suspicion is that there is some sort of MAC vs IP address check,
being done which makes the linux machine to drop those packets. But iptables does not report any drop packets count.
I checked posts such as
Make nslookup use specific interface
but it did not help.
Things tried so far,Different sub net masks, but still issue is seen
But if the interfaces are in same domain, 192.168.2.x and 192.168.2.y, the issue does not occur.Could somebody please let me know, if any additional rules have to added to iptables or should the interfaces on the embedded system side be configured
in a different way?
|
nslookup and IP forwarding with multiple interfaces fails sometimes
|
Use grep to parse strings that "looks like" a valid IPv4 addresses :
nslookup unix.stackexchange.com | grep -Eo '([0-9]{1,3}[.]){3}[0-9]{1,3}[^#]'
151.101.129.69
151.101.1.69
151.101.193.69
151.101.65.69Use tool such as dig. It can output directly the info you desire without the need for external parsing tool:
dig +short A unix.stackexchange.com
151.101.65.69
151.101.129.69
151.101.193.69
151.101.1.69
|
I try to use name resolution and get IP address of a web site. I used nslookup command and use 6'th line of its output. Because I noticed that output of nslookup contains IP address(in IPv4) in 6'th line. My command was :
website=<somewebsiteURL>
IP=$(nslookup "$website" | head -n 6 | tail -n 1 | cut -d" " -f2)Also I tried sed command to reach the same goal and used :
website=<somewebsiteURL>
IP=$(nslookup "$website" | sed -n 6p | cut -d" " -f2)result was the same, result is unreliable, sometimes worked, sometimes not.
It works correctly but not always. Sometimes it reads the 7'th line, not the 6'th line and fails.
Actually I solved my problem using another approach :
website=<somewebsiteURL>
newIP=$(nslookup "$website" | grep "Address: " | head -n 1 | cut -d" " -f2)which gave the correct line and IP address everytime(although it can give more than one IP > nslookup can return more than one IP)
Why do the first two codes fail?
|
Why does nslookup script give different results?
|
No, this will not prevent the script from crashing. If any errors occur in the tar process (e.g.: permission denied, no such file or directory, ...) the script will still crash.
This is because of using > /dev/null 2>&1 will redirect all your command output (both stdout and stderr) to /dev/null, meaning no outputs are printed to the terminal.
By default:
stdin ==> fd 0
stdout ==> fd 1
stderr ==> fd 2In the script, you use > /dev/null causing:
stdin ==> fd 0
stdout ==> /dev/null
stderr ==> fd 2And then 2>&1 causing:
stdin ==> fd 0
stdout ==> /dev/null
stderr ==> stdout
|
I'm reading an example bash shell script:
#!/bin/bash# This script makes a backup of my home directory.cd /home# This creates the archive
tar cf /var/tmp/home_franky.tar franky > /dev/null 2>&1# First remove the old bzip2 file. Redirect errors because this generates some if the archive
# does not exist. Then create a new compressed file.
rm /var/tmp/home_franky.tar.bz2 2> /dev/null
bzip2 /var/tmp/home_franky.tar# Copy the file to another host - we have ssh keys for making this work without intervention.
scp /var/tmp/home_franky.tar.bz2 bordeaux:/opt/backup/franky > /dev/null 2>&1# Create a timestamp in a logfile.
date >> /home/franky/log/home_backup.log
echo backup succeeded >> /home/franky/log/home_backup.logI'm trying to understand the use of /dev/null 2>&1 here. At first, I thought this script uses /dev/null in order to gracefully ignore errors, without causing the script to crash (kind of like try catch exception handling in programming languages). Because I don't see how using tar to compress a directory into a tar file could possibly cause any type of errors.
|
redirecting to /dev/null
|
In addition to the performance benefits of using a character-special device, the primary benefit is modularity. /dev/null may be used in almost any context where a file is expected, not just in shell pipelines. Consider programs that accept files as command-line parameters.
# We don't care about log output.
$ frobify --log-file=/dev/null# We are not interested in the compiled binary, just seeing if there are errors.
$ gcc foo.c -o /dev/null || echo "foo.c does not compile!".# Easy way to force an empty list of exceptions.
$ start_firewall --exception_list=/dev/nullThese are all cases where using a program as a source or sink would be extremely cumbersome. Even in the shell pipeline case, stdout and stderr may be redirected to files independently, something that is difficult to do with executables as sinks:
# Suppress errors, but print output.
$ grep foo * 2>/dev/null
|
I am trying to understanding the concept of special files on Linux. However, having a special file in /dev seems plain silly when its function could be implemented by a handful of lines in C to my knowledge.
Moreover you could use it in pretty much the same manner, i.e. piping into null instead of redirecting into /dev/null. Is there a specific reason for having it as a file? Doesn't making it a file cause many other problems like too many programs accessing the same file?
|
Why is /dev/null a file? Why isn't its function implemented as a simple program?
|
Bash uses C-style strings internally, which are terminated by null bytes. This means that a Bash string (such as the value of a variable, or an argument to a command) can never actually contain a null byte. For example, this mini-script:
foobar=$'foo\0bar' # foobar='foo' + null byte + 'bar'
echo "${#foobar}" # print length of $foobaractually prints 3, because $foobar is actually just 'foo': the bar comes after the end of the string.
Similarly, echo $'foo\0bar' just prints foo, because echo doesn't know about the \0bar part.
As you can see, the \0 sequence is actually very misleading in a $'...'-style string; it looks like a null byte inside the string, but it doesn't end up working that way. In your first example, your read command has -d $'\0'. This works, but only because -d '' also works! (That's not an explicitly documented feature of read, but I suppose it works for the same reason: '' is the empty string, so its terminating null byte comes immediately. -d delim is documented as using "The first character of delim", and I guess that even works if the "first character" is past the end of the string!)
But as you know from your find example, it is possible for a command to print out a null byte, and for that byte to be piped to another command that reads it as input. No part of that relies on storing a null byte in a string inside Bash. The only problem with your second example is that we can't use $'\0' in an argument to a command; echo "$file"$'\0' could happily print the null byte at the end, if only it knew that you wanted it to.
So instead of using echo, you can use printf, which supports the same sorts of escape sequences as $'...'-style strings. That way, you can print a null byte without having to have a null byte inside a string. That would look like this:
for file in * ; do printf '%s\0' "$file" ; done \
| while IFS= read -r -d '' ; do echo "$REPLY" ; doneor simply this:
printf '%s\0' * \
| while IFS= read -r -d '' ; do echo "$REPLY" ; done(Note: echo actually also has an -e flag that would let it process \0 and print a null byte; but then it would also try to process any special sequences in your filename. So the printf approach is more robust.)Incidentally, there are some shells that do allow null bytes inside strings. Your example works fine in Zsh, for example (assuming default settings). However, regardless of your shell, Unix-like operating systems don't provide a way to include null bytes inside arguments to programs (since program arguments are passed as C-style strings), so there will always be some limitations. (Your example can work in Zsh only because echo is a shell builtin, so Zsh can invoke it without relying on the OS support for invoking other programs. If you used command echo instead of echo, so that it bypassed the builtin and used the standalone echo program on the $PATH, you'd see the same behavior in Zsh as in Bash.)
|
I've read that, since file-paths in Bash can contain any character except the null byte (zero-valued byte, $'\0'), that it's best to use the null byte as a separator. For example, if the output of find will be sent to another program, it's recommended to use the -print0 option (for versions of find that have it).
But although something like this works fine (printing file-paths separated by newlines — don't worry, this is just a demonstration, I'm not actually doing it in real scripts):
find -print0 \
| while IFS= read -r -d $'\0' ; do echo "$REPLY" ; donesomething like this does not work:
for file in * ; do echo -n "$file"$'\0' ; done \
| while IFS= read -r -d $'\0' ; do echo "$REPLY" ; doneWhen I try just the for-loop part, I find that it just prints all the filenames together, without the null byte in between.
Why is this? What's going on?
|
How do I use null bytes in Bash?
|
They do count as I/O, but not of the type measured by the fields you’re looking at.
In htop, IO_RBYTES and IO_WBYTES show the read_bytes and write_bytes fields from /proc/<pid>/io, and those fields measure bytes which go through the block layer. /dev/zero doesn’t involve the block layer, so reads from it don’t show up there.
To see I/O from /dev/zero, you need to look at the rchar and wchar fields in /proc/<pid>/io, which show up in htop as RCHAR and WCHAR:rchar: characters read
The number of bytes which this task has caused to be
read from storage. This is simply the sum of bytes
which this process passed to read(2) and similar system
calls. It includes things such as terminal I/O and is
unaffected by whether or not actual physical disk I/O
was required (the read might have been satisfied from
pagecache).
wchar: characters written
The number of bytes which this task has caused, or
shall cause to be written to disk. Similar caveats
apply here as with rchar.See man 5 proc and man 1 htop for details.
|
I am emptying out a hard drive on some Linux 4.x OS using this command:
sudo sh -c 'pv -pterb /dev/zero > /dev/sda'And I opened another tty and started sudo htop and noticed this:
PID USER PRI NI CPU% RES SHR IO_RBYTES IO_WBYTES S TIME+ Command
4598 root 20 0 15.5 1820 1596 4096 17223823 D 1:14.11 pv -pterb /dev/zeroThe value for IO_WBYTES seems quite normal, but IO_RBYTES remains at 4 KiB and never changes.
I ran a few other programs, for example
dd if=/dev/zero of=/dev/zero
cat /dev/zero > /dev/zeroand was surprised to see none of them generates a lot of IO_RBYTES or IO_WBYTES.
I think this is not specific to any program, but why don't reads from /dev/zero and writes to /dev/{zero,null} count as I/O bytes?
|
Why don't reads from /dev/zero count as IO_RBYTES?
|
It is a documented optimization:When the archive is being created to /dev/null, GNU tar tries to
minimize input and output operations. The Amanda backup system, when
used with GNU tar, has an initial sizing pass which uses this feature.
|
I have a directory with over 400 GiB of data in it. I wanted to check that all the files can be read without errors, so a simple way I thought of was to tar it into /dev/null. But instead I see the following behavior:
$ time tar cf /dev/null .real 0m4.387s
user 0m3.462s
sys 0m0.185s
$ time tar cf - . > /dev/nullreal 0m3.130s
user 0m3.091s
sys 0m0.035s
$ time tar cf - . | cat > /dev/null
^Creal 10m32.985s
user 0m1.942s
sys 0m33.764sThe third command above was forcibly stopped by Ctrl+C after having run for quite long already. Moreover, while the first two commands were working, activity indicator of the storage device containing . was nearly always idle. With the third command the indicator is constantly lit up, meaning extreme busyness.
So it seems that, when tar is able to find out that its output file is /dev/null, i.e. when /dev/null is directly opened to have the file handle which tar writes to, file body appears skipped. (Adding v option to tar does print all the files in the directory being tar'red.)
So I wonder, why is this so? Is it some kind of optimization? If yes, then why would tar even want to do such a dubious optimization for such a special case?
I'm using GNU tar 1.26 with glibc 2.27 on Linux 4.14.105 amd64.
|
Why does tar appear to skip file contents when output file is /dev/null?
|
Looking at the source code for mv, http://www.opensource.apple.com/source/file_cmds/file_cmds-220.7/mv/mv.c :
/*
* If rename fails because we're trying to cross devices, and
* it's a regular file, do the copy internally; otherwise, use
* cp and rm.
*/
if (lstat(from, &sb)) {
warn("%s", from);
return (1);
}
return (S_ISREG(sb.st_mode) ?
fastcopy(from, to, &sb) : copy(from, to));...int
fastcopy(char *from, char *to, struct stat *sbp)
{
...
while ((to_fd =
open(to, O_CREAT | O_EXCL | O_TRUNC | O_WRONLY, 0)) < 0) {
if (errno == EEXIST && unlink(to) == 0)
continue;
warn("%s", to);
(void)close(from_fd);
return (1);
}In the first pass through the while loop, open(to, O_CREAT | O_EXCL | O_TRUNC | O_WRONLY, 0) will fail with EEXIST. Then /dev/null will be unlinked, and the loop repeated.
But as you pointed out in your comment, regular files can't be created in /dev, so on the next pass through the loop, open(to, O_CREAT | O_EXCL | O_TRUNC | O_WRONLY, 0) is still going to fail.
I'd file a bug report with Apple. The mv source code is mostly unchanged from the FreeBSD version, but because OSX's devfs has that non-POSIX behavior with regular files, Apple should fix their mv.
|
If I do: touch file; mv file /dev/null as root, /dev/null disappears. ls -lad /dev/null results in no such file or directory. This breaks applications which depend on /dev/null like SSH and can be resolved by doing mknod /dev/null c 1 3; chmod 666 /dev/null. Why does moving a regular file to this special file result in the disappearance of /dev/null?
To clarify, this was for testing purposes, and I understand how the mv command works. What I am curious about is why ls -la /dev/null before replacing it with a regular file shows the expected output, but afterwards it shows that /dev/null does not exist even though a file was allegedly created via the original mv command and the file command shows ASCII Text. I think this must be a combination of the ls command behavior in conjunction with devfs when a non special file replaces a character/special file. This is on Mac OS X, behaviors may vary on other OS's.
|
mv a file to /dev/null breaks dev/null
|
ERROR: type should be string, got "\nhttps://wiki.gentoo.org/wiki/Device-mapper#ZeroSee Documentation/device-mapper/zero.txt for usage. This target has no target-specific parameters.\nThe \"zero\" target create that functions similarly to /dev/zero: All reads return binary zero, and all writes are discarded. Normally used in tests [...]\nThis creates a 1GB (1953125-sector) zero target:\nroot# dmsetup create 1gb-zero --table '0 1953125 zero'" |
I want to test some physical links in a setup. The software tooling that I can use to test this require a block device to read/write from/to. The block devices I have available can't saturate the physical link so I can't fully test it.
I know I can setup a virtual block device which is backed by a file. So my idea was to somehow setup a virtual block device to /dev/null but the problem is of course that I can't read from it. Is there a way I could setup a virtual block device that writes to /dev/null but just returns always zero when read?
Thank you for any help!
|
Create virtual block device which writes to /dev/null
|
You are able to pass null bytes across a pipe (like you say in the title), but the bash shell will not allow null bytes in expansions. It does not allow null bytes in expansions because the shell uses C strings to represent the results of expansions, and C strings are terminated by null bytes.
$ hexdump -C <<< $( python2 -c "print('c'*6+b'\x00'+'c'*6)" )
bash: warning: command substitution: ignored null byte in input
00000000 63 63 63 63 63 63 63 63 63 63 63 63 0a |cccccccccccc.|
0000000dPassing the data across a pipe is fine:
$ python2 -c "print('c'*6+b'\x00'+'c'*6)" | hexdump -C
00000000 63 63 63 63 63 63 00 63 63 63 63 63 63 0a |cccccc.cccccc.|
0000000eRedirecting a process substitution also works, as process substitutions don't expand to the data produced by the command but to the name of a file containing that data:
$ hexdump -C < <( python2 -c "print('c'*6+b'\x00'+'c'*6)" )
00000000 63 63 63 63 63 63 00 63 63 63 63 63 63 0a |cccccc.cccccc.|
0000000eSo, the solution is to avoid having the shell store the data containing the null byte in a string, and instead pass the data over a pipe, without using a command substitution. In your case
$ python2 -c "print('c'*6+b'\x00'+'c'*6)" | ./bfRelated:How do I use null bytes in Bash?
bash can't store hexvalue 0x00 in variable
... and others.Or switch to zsh which does allow null bytes in strings:
$ hexdump -C <<< $( python2 -c "print('c'*6+b'\x00'+'c'*6)" )
00000000 63 63 63 63 63 63 00 63 63 63 63 63 63 0a |cccccc.cccccc.|
0000000e
|
I am trying to redirect python generated input to ELF 64-bit executable in bash 5.0.3. I am getting:
> ./bf <<< $(python2 -c "print('c'*6+b'\x00'+'c'*6)")
bash: warning: command substitution: ignored null byte in input
Enter password: Password didn't match
input: ccccccccccccHow can I allow a null byte in the input?
|
Send null byte in unix pipe
|
There is no way to pass a null byte in the parameter of a command. This is not because of a limitation of bash, although bash has this limitation as well. This is a limitation of the interface to run a command: it treats a null byte as the end of the parameter. There's no escaping mechanism.
Most shells don't support null bytes in variables or in the arguments of functions and builtins. Zsh is a notable exception.
$ ksh -c 'a=$(printf foo\\0bar); printf "$a"' | od -t x1
0000000 66 6f 6f
0000003
$ bash -c 'a=$(printf foo\\0bar); printf "$a"' | od -t x1
0000000 66 6f 6f 62 61 72
0000006
$ zsh -c 'a=$(printf foo\\0bar); printf "$a"' | od -t x1
0000000 66 6f 6f 00 62 61 72
0000007But even with zsh, if you attempt to pass a parameter to an external command, then anything following a null byte is ignored —not by zsh but by the kernel.
$ zsh -c 'a=$(printf foo\\0bar); /usr/bin/printf "$a"' | od -t x1
0000000 66 6f 6f
0000003If you want to pass null bytes to a program, you need to find some way other than a command line parameter.
head -c 512 binaryFile.dd | myProgram --read-parameter2-from-stdin parameter1
myProgram --read-parameter2-from-file=<(head -c 512 binaryFile.dd) parameter1
|
So I'd like to pass the first 512 bytes of binaryFile.dd as the second parameter to myProgram but bash strips out all the NUL chars. Is there any way to avoid this in bash or am I on a hiding to nothing?
myProgram parameter1 "$(head -c 512 binaryFile.dd)"
|
Using binary data as a parameter in bash - any way to allow nuls?
|
The ${parameter:+word} parameter expansion form seems to do the job
( xyz=2; set -- ${xyz:+"$xyz"}; echo $# )
1( xyz=; set -- ${xyz:+"$xyz"}; echo $# )
0( unset xyz; set -- ${xyz:+"$xyz"}; echo $# )
0So that should translate to
program ${var:+"$var"}in your case
|
I need to pass as a program argument a parameter expansion. The expansion results in a filename with spaces. Therefore, I double-quote it to have the filename as a single word: "$var".
As long as $var contains a filename, the program gets a single-word argument and it works fine. However, at times the expansion results in an empty string, which when passed as argument, breaks the program (which I cannot change).
Not removing the empty string is the specified behavior, according to Bash Reference Manual:If a parameter with no value is expanded within double quotes, a null argument results and is retained.But then, how do I manage the case where I need to quote variables, but also need to discard an empty string expansion?
EDIT:
Thanks to George Vasiliou, I see that a detail is missing in my question (just tried to keep it short :) ). Running the program is a long java call, which abbreviated looks like this:
java -cp /etc/etc MyClass param1 param2 "$var" param4Indeed, using an if statement like that described by George would solve the problem. But it would require one call with "$var" in the then clause and another without "$var" in the else clause.
To avoid the repetition, I wanted to see if there is a way to use a single call that discards the expansion of "$var" when it is empty.
|
How to prevent word splitting without preventing empty string removal?
|
To create a backup copy of a disk while saving space, use gzip:
gzip </dev/sda >/path/to/sda.gzWhen you want to restore the disk from backup, use:
gunzip -c /path/to/sda.gz >/dev/sdaThis will likely save much more space than merely stripping trailing NUL bytes.
Removing trailing NUL bytes
If you really want to remove trailing NUL bytes and you have GNU sed, you might try:
sed '$ s/\x00*$//' /dev/sda >/path/to/sda.strippedThis might run into a problem if a large disk's data exceeds some internal limit of sed. While GNU sed has no built-in limit on data size, the GNU sed manual explains that system memory limitations may prevent processing of large files:GNU sed has no built-in limit on line length; as long as it can
malloc() more (virtual) memory, you can feed or construct lines as
long as you like.
However, recursion is used to handle subpatterns and indefinite
repetition. This means that the available stack space may limit the
size of the buffer that can be processed by certain patterns.
|
I just backed up the microSD card from my Raspberry Pi on my PC running a Linux distro using this command:
dd if=/dev/sdx of=file.bin bs=16MThe microSD card is only 3/4 full so I suppose there's a few gigs of null bytes at the end of the tremendous file. I am very sure I don't need that. How can I strip those null bytes from the end efficiently so that I can later restore it with this command?
cat file.bin /dev/zero | dd of=/dev/sdx bs=16M
|
Remove null bytes from the end of a large file
|
As per POSIX,input file shall be a text file, except that line lengths shall be unlimited¹NUL characters² in the input make it non-text, so the behaviour is unspecified as far as POSIX is concerned, so sh implementations can do whatever they want (and a POSIX compliant script must not contain NULs).
There are some shells that scan the first few bytes for 0s and refuse to run the script on the assumption that you tried to execute a non-script file by mistake.
That's useful because the exec*p() functions, env commands, sh, find -exec... are required to call a shell to interpret a command if the system returns with ENOEXEC upon execve(), so, if you try to execute a command for the wrong architecture, it's better to get a won't execute a binary file error from your shell than the shell trying to make sense of it as a shell script.
That is allowed by POSIX:If the executable file is not a text file, the shell may bypass this command execution.Which in the next revision of the standard will be changed to:The shell may apply a heuristic check to determine if the file to be executed could be a script and may bypass this command execution if it determines that the file cannot be a script. In this case, it shall write an error message, and shall return an exit status of 126.
Note: A common heuristic for rejecting files that cannot be a script is locating a NUL byte prior to a <newline> byte within a fixed-length prefix of the file. Since sh is required to accept input files with unlimited line lengths, the heuristic check cannot be based on line length.That behaviour can get in the way of shell self-extractable archives though which contain a shell header followed by binary data¹.
The zsh shell supports NUL in its input, though note that NULs can't be passed in the arguments of execve(), so you can only use it in the argument or names of builtin commands or functions:
$ printf '\0() echo zero; \0\necho \0\n' | zsh | hd
00000000 7a 65 72 6f 0a 00 0a |zero...|
00000007(here defining and calling a function with NUL as its name and passing a NUL character as argument to the builtin echo command).
Some will strip them which is also a sensible thing to do. NULs are sometimes used as padding. They are ignored by terminals for instance (they were sometimes sent to terminals to give them time to process complex control sequences (like carriage return (literally)). Holes in files appear as being filled with NULs, etc.
Note that non-text is not limited to NUL bytes. It's also sequence of bytes that don't form valid characters in the locale. For instance, the 0xc1 byte value cannot occur in UTF-8 encoded text. So in locales using UTF-8 as the character encoding, a file that contains such a byte is not a valid text file and therefore not a valid sh script³.
In practice, yash is the only shell I know that will complain about such invalid input. ¹ In the next revision of the standard, it is going to change to The input file may be of any type, but the initial portion of the file intended to be parsed according to the shell grammar (XREF to XSH 2.10.2 Shell Grammar Rules) shall consist of characters and shall not contain the NUL character. The shell shall not enforce any line length limits.explicitly requiring shells to support input that starts with a syntactically valid section without NUL bytes, even if the rest contains NULs, to account for self-extracting archives.
² and characters are meant to be decoded as per the locale's character encoding (see the output of locale charmap), and on POSIX system, the NUL character (whose encoding is always byte 0) is the only character whose encoding contains the byte 0. In other words, UTF-16 is not among the character encodings that can be used in a POSIX locale.
³ There is however the question of the locale changing within the script (like when the LANG/LC_CTYPE/LC_ALL/LOCPATH variables are assigned) and at which point the change takes effect for the shell interpreting the input.
|
Because that's what some of them are doing.
> echo echo Hallo, Baby! | iconv -f utf-8 -t utf-16le > /tmp/hallo
> chmod 755 /tmp/hallo
> dash /tmp/hallo
Hallo, Baby!
> bash /tmp/hallo
/tmp/hallo: /tmp/hallo: cannot execute binary file
> (echo '#'; echo echo Hallo, Baby! | iconv -f utf-8 -t utf-16le) > /tmp/hallo
> bash /tmp/hallo
Hallo, Baby!
> mksh /tmp/hallo
Hallo, Baby!
> cat -v /tmp/hallo
#
e^@c^@h^@o^@ ^@H^@a^@l^@l^@o^@,^@ ^@B^@a^@b^@y^@!^@
^@Is this some compatibility nuisance actually required by the standard? Because it looks quite dangerous and unexpected.
|
Are shells allowed to ignore NUL bytes in scripts?
|
The command will output the data from device /dev/null to the given file (mailbox of the root account). Since /dev/null responds just with end-of-file when reading from it nothing will be written to the file, but with the redirection > the shell will have cleared the file already. Actually this is equivalent to writing just
> /var/spool/mail/root(i.e., the same without cat or /dev/null).
|
In a document created by a former coworker there is this command:
cat /dev/null > /var/spool/mail/root It says next to it that it will clean out mailbox.
Can someone please explain how/why these commands do that. I need to know what will happen before I run the command.
We are trying to clean up space on var, as of right now it's at 91%.
|
Meaning of `cat /dev/null > file`
|
If your comm supports non-text input (like GNU tools generally do), you can always swap NUL and nl (here with a shell supporting process substitution (have you got any plan for that in mksh btw?)):
comm -23 <(tr '\0\n' '\n\0' < file1) <(tr '\0\n' '\n\0' < file2) |
tr '\0\n' '\n\0'That's a common technique.
|
I’m writing something that deals with file matches, and I need an inversion operation. I have a list of files (e.g. from find . -type f -print0 | sort -z >lst), and a list of matches (e.g. from grep -z foo lst >matches – note that this is only an example; matches can be any arbitrary subset (including empty or full) or lst), and now I want to invert this list.
Background: I’m sorta implementing something like find(1) excepton file lists (although the files do exist in the filesystem at the point of calling, the list may have been pre-filtered). If the list of files weren’t potentially so large, I could use find "${files[@]}" -maxdepth 0 -somecondition -print0, but even moderate use of what I’m writing would go beyond the Linux or BSD argv size limit.
If the lines were not NUL-separated, I could use comm -23 lst matches >inverted. If the matches were not NUL-separated, I could use grep -Fvxzf matches lst. But, from the generators I mentioned in the first paragraph, both are.
Assume GNU tools are installed, so this needs not be portable beyond e.g. Debian, as I’m using find -print0, sort -z and friends already (although some BSDs have it, so if it can be done in “more portable”, I won’t complain).
I’m trying to do code reuse here; plus, comm -23 is basically the perfect tool for this already except it doesn’t support changing the input line separator (yet), and comm is an underrated and not-enough-well-known tool anyway. If the Unix/Linux toolbox doesn’t offer anything sensible, I’m likely to reimplement a form of comm -23 (reduced to just this one use case) in shell, as the script already (for other reasons) requires a shell that happens to support read -d '' for NUL-delimited input, but that’s going to be slow (and effort… I posted this at the end of the workday in the hopes someone has got an idea for when I pick this up tomorrow or on the 28th).
|
Invert matching lines, NUL-separated
|
There should be no difference. Pipe the output through cat -v which will escape non-printable characters.
Perhaps you have some special locale settings, which modifies what -print0 does. At least with my en_US.UTF-8 settings there is no difference. Perhaps add the output of locale to your question.
Possibly your test with ruby causes ruby to interpret the \0 itself, and find is not even executed.
|
In a terminal I can run...
find . -type f -print0
./testdir/testfile2.txt./testdir/testfile.txtAnd then...
find . -type f -printf "%p\0"
./testdir/testfile2.txt./testdir/testfile.txtThey both visually appear the same, but since this is about the null character, that doesn't say much. If I run via the ruby repl:
2.5.1 :001 > `find . -type f -print0`
=> "./testdir/testfile2.txt\u0000./testdir/testfile.txt\u0000" and then
2.5.1 :002 > `find . -type f -printf "%p\0"`
Traceback (most recent call last):
3: from /usr/share/rvm/rubies/ruby-2.5.1/bin/irb:11:in `<main>'
2: from (irb):2
1: from (irb):2:in ``'
ArgumentError (string contains null byte)What is the difference in what the -print0 option outputs vs printf?
Test system info:
uname: Linux XPS-15-9570 4.15.0-30-generic #32-Ubuntu SMP Thu Jul 26 17:42:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
ruby: ruby 2.5.1p57 (2018-03-29 revision 63029) [x86_64-linux].
|
With gnu find, what is the difference between -print0 and -printf "%p\0"
|
As for your exact question:Can I safely ignore: “warning: … ignored null byte … ”?The answer is yes, since you are creating the null byte with your own code.
But the real question is: Why do you need a "null byte"?
The inotifywait command will produce an output in the form of:
$dir ACTION $filenameWhich, for your input, looks like this (for file hello4):
/home/user/Monitor/ CREATE hello4The command cut will print fields 1 and 3, and using a null delimiter in --output-delimiter="" will produce an output with an embedded null, something like:
$'/home/user/Monitor/\0hello4\n'That is not what you need, because of the added null.
The solution turns out to be very simple.
Since you are using the command read already, do this:
#!/bin/bash
monitordir="/home/user/Monitor/"
tempdir="/home/user/tmp/"
logfile="/home/user/notifyme"inotifywait -m -r -e create ${monitordir} |
while read dir action basefile; do
cp -u "${dir}${basefile}" "${tempdir}";
doneUse the default value of IFS to split on whitespace the input and just use the directory and filename to copy.
|
Is it possible to safely ignore the aforementioned error message? Or is it possible to remove the null byte? I tried removing it with tr but I still get the same error message.
this is my script:
#!/bin/bash monitordir="/home/user/Monitor/"
tempdir="/home/user/tmp/"
logfile="/home/user/notifyme" inotifywait -m -r -e create ${monitordir} |
while read newfile; do
echo "$(date +'%m-%d-%Y %r') ${newfile}" >> ${logfile};
basefile=$(echo ${newfile} | cut -d" " -f1,3 --output-delimiter="" | tr -d '\n');
cp -u ${basefile} ${tempdir};
donewhen I run inotify-create.sh and I create a new file in "monitordir"
I get:
[@bash]$ ./inotify-create.sh
Setting up watches. Beware: since -r was given, this may take a while!
Watches established.
./inotify-create.sh: line 9: warning: command substitution: ignored null byte in input
|
Can I safely ignore: "warning: command substitution: ignored null byte in input"?
|
Only GNU awk and mawk (release 1.3.4 or later) can use \0 as a record separator with the meaning "null character." Older releases of mawk, BSD awk, Busybox awk, Plan 9 awk etc. all treat the string \0 in RS as if RS had been the empty string, i.e., it enables "paragraph mode" (two or more contiguous newlines delimit records).
|
How does the following code print just a single file?
find "$fdir" -type f -name "${fnam}-*.png" -print0 | awk -v RS='\0' -F'[-.]' '{print $(NF-1), $0}' | cat -vetwhich gives me
04 /home/flora/edvart/docs/schimmel-04.png$But doing find "$fdir" -type f -name "${fnam}-*.png" gives
/home/flora/edvart/docs/schimmel-04.png
/home/flora/edvart/docs/schimmel-05.png
/home/flora/edvart/docs/schimmel-06.png
/home/flora/edvart/docs/schimmel-07.png
/home/flora/edvart/docs/schimmel-08.png
/home/flora/edvart/docs/schimmel-09.png
/home/flora/edvart/docs/schimmel-10.png
/home/flora/edvart/docs/schimmel-11.png
/home/flora/edvart/docs/schimmel-12.png
/home/flora/edvart/docs/schimmel-13.png
/home/flora/edvart/docs/schimmel-1.png
/home/flora/edvart/docs/schimmel-2.png
/home/flora/edvart/docs/schimmel-3.png
|
awk with null record separator printing just one file
|
From bash manual section on Word Splitting:Explicit null arguments ("" or '') are retained and passed to commands as empty strings. [...] the word -d'' becomes -d after word splitting and null argument removal.It's helpful to have an args shell script in your toolbox to troubleshoot argument processing. Here's one possible implementation:
#!/bin/sh
printf "%d args:" "$#"
test "$#" -gt 0 && printf " <%s>" "$@"
echo# Source:
# https://lists.gnu.org/archive/html/help-bash/2021-07/msg00044.htmlTry it with mapfile -d'' -t files:
$ args mapfile -d'' -t files
4 args: <mapfile> <-d> <-t> <files>We see that the NUL we thought we were passing to the -d option is not present. The argument to the -d option is the next argument on the command line: -t!
The documentation of -d option to mapfile [link] says:The first character of delim is used to terminate each input lineThe first character of -t is -.
Thus these two invocations are equivalent:
mapfile -d'' -t files
mapfile -d '-' filesThis explains the result you experienced.
|
bash's mapfile seems to be broken when handling NUL separated input. In particular, it isn't handling minus characters (-) correctly, treating the empty string after one as an end-of-line marker.
For example:
#!/bin/bashmkdir /tmp/junk
cd /tmp/junktouch a.txt b.txt c.txt d-e-f.txt
declare -a filesecho "mapfile using default \\n delimiter"
mapfile -t files < <(find . -maxdepth 1 -type f -name '*.txt')
typeset -p filesecho
echo "mapfile using NUL delimiter"
mapfile -d'' -t files < <(find . -maxdepth 1 -type f -name '*.txt' -print0)
typeset -p filesecho
echo "and again"
mapfile -d$'\0' -t files < <(find . -maxdepth 1 -type f -name '*.txt' -print0)
typeset -p files$ ./test.sh
mapfile using default \n delimiter
declare -a files=([0]="./d-e-f.txt" [1]="./a.txt" [2]="./c.txt" [3]="./b.txt")mapfile using NUL delimiter
declare -a files=([0]="./d-" [1]="e-" [2]="f.txt")and again
declare -a files=([0]="./d-" [1]="e-" [2]="f.txt")Is this a bug? Or am I forgetting something important and just doing it wrong?
The mapfile entry in man bash says:-d The first character of delim is used to terminate each input line, rather than newline. If delim is the empty string, mapfile will terminate a line when it reads a NUL character.bash version is 5.1.8(1)-release:
$ bash --version
GNU bash, version 5.1.8(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.Update
Even weirder, it seems to be mangling existing elements of the array (as shown above) if it already exists, but not even creating it if it doesn't.
$ unset files
$ mapfile -t -d'' files < <(find . -maxdepth 1 -type f -name '*.txt' -print0)
$ typeset -p files
-bash: typeset: files: not found
|
bash mapfile NUL bug?
|
You misread the advice, the idea is not to copy the large file to /dev/null, which wouldn't affect it in any way, outside putting it in cache if space is available.
cp bigfile /dev/null # uselessThe advice is not to remove the file then copy /dev/null to it, as it would keep the original inode unchanged and won't free any disk space as long as processes have that file open.
The advice is to replace the file content with /dev/null one, which, given the fact /dev/null size is zero by design, is effectively truncating the file to zero bytes:
cp /dev/null bigfile # workscat /dev/null > bigfile # worksIt might be noted that if you use a shell to run these commands, there is no need to use /dev/null, a simple empty redirection will have the same effect and would be more efficient, cat /dev/null being a no-op anyway.
: > bigfile # better> bigfile # even better if the shell used supports it
|
I just came across the advice that if you want to get rid of a large file and a process has the file handle open you should copy it to /dev/null and its size will be reduce to zero.
How does this work? Or does this even work?
After a quick search I found conflicting answers ranging from "Yes, this totally works" to " Your whole machine is going to blow up". Can somebody enlighten me?
I found the question here:
https://syedali.net/engineer-interview-questions/
|
cp large file to /dev/null to reduce size to zero [closed]
|
Your sed command is not changing newlines (\n) to NULs (\0) but to NULs + newlines (\0\n) (as cat -A shows).
When using GNU awk with RS set to \0, the first character of a subsequent record (and of its first field) will be \n, which will break your exact match.
And the 's/\(,"[^,"]*\)\x00/\1/' newline-splits correction doesn't change that at all -- it just appends the newline",c record to the previous one.A quick and dirty "solution" is to set RS to \0\n instead of just \0. But that way of massaging csv files so that they can be parsed by awk is not reliable, so you should REALLY find something better.
With your last example:
sed -e 's/$/\x00/' -e 's/\(,"[^,"]*\)\x00/\1/' input.txt |
gawk 'BEGIN {RS=ORS="\x00\n" ; FS=OFS=","} { if ($1=="a") print}' | cat -A
a,b,c^@$
a,"with quotes",c^@$
a,"with ,",c^@$
a,"with$
newline",c^@$sed -e 's/$/\x00/' -e 's/\(,"[^,"]*\)\x00/\1/' input.txt |
gawk 'BEGIN {RS="\x00\n" ; FS=OFS=","} { if ($1=="a") print}'
a,b,c
a,"with quotes",c
a,"with ,",c
a,"with
newline",c
|
Given a file with newlines in fields (embedded by double quotes), I tried to use NUL as record separator and then select desired records.
For this I have replaced the ends of lines with NUL and then corrected for fields split by a newline (done using sed). However then exactly matching the first field in (GNU) awk with a string fails. Interestingly a string pattern match on the first field fails, which makes me assume that RS="\x00" is correctly applied.
Why would it fail? Why does the pattern match work?
Example file input.txt:
head1,head2,head3
a,b,c
b,no a in first field,c
a,"with quotes",c
a,"with ,",c
b,a,1
a,"with
newline",c
b,1,aRecord selection via awk with exact string before introducing NUL works:
$awk 'BEGIN {FS=OFS=","} {if ($1=="a") print}' input.txtResult:
a,b,c
a,"with quotes",c
a,"with ,",c
a,"withIntroducing NUL and correcting "newline-splits" works (note the "with\n newline" entry):
$sed -e 's/$/\x00/' -e 's/\(,"[^,"]*\)\x00/\1/' input.txt | cat -Ahead1,head2,head3^@$
a,b,c^@$
b,no a in first field,c^@$
a,"with quotes",c^@$
a,"with ,",c^@$
b,a,1^@$
a,"with$
newline",c^@$
b,1,a^@$Using a pattern match for a in field 1 works (note how "a" in other fields fails, but "head1" matches):
$sed -e 's/$/\x00/' -e 's/\(,"[^,"]*\)\x00/\1/' input.txt |
awk 'BEGIN {RS=ORS="\x00" ; FS=OFS=","}
{ if ($1~"a") print}' |
cat -Ahead1,head2,head3^@$
a,b,c^@$
a,"with quotes",c^@$
a,"with ,",c^@$
a,"with$
newline",c^@HOWEVER: the exact match for "a" in field 1 fails:
sed -e 's/$/\x00/' -e 's/\(,"[^,"]*\)\x00/\1/' input.txt |
awk 'BEGIN {RS=ORS="\x00" ; FS=OFS=","} { if ($1=="a") print}' ##<no output>##Where am I wrong? Why does is work before using NUL as RS?
|
awk: Exact string match on field not working with NUL as record separator
|
Your input is not valid json. You can use https://jsonlint.com to check.
You could make it valid by changing it to something like:
[{
"access_key": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}, {
"secret_key": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}]Or jq has a workaround for this:
cat sample.json | jq "..|objects|select(.access_key) | .access_key"I recommend using a tool simply called json over jq. It's not as fast but it's much more intuitive to use and has better features.
json handles this with the -g argument:-g, --groupGroup adjacent objects into an array of objects, or concatenate adjacent arrays into a single array.$ cat sample.json | json -ag access_key
1234
|
I'm trying to print the json output using jq, But I'm getting null
How can I print only access_key and secret_key, but not null?
$ cat sample.json | jq '.'{
"access_key": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
{
"secret_key": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}$ cat sample.json | jq -e ".access_key"
xxxxxxxxxxxxxxxxxxxxxxxxx
nullThe same happens with secret_key.
If I use "raw output", I get an error:
$ cat sample.json | jq -r ".access_key" | jq --raw-output 'select(type == "string")'
parse error: Invalid numeric literal at line 2, column 0I tried this also:
$ cat sample.json | jq -re ".access_key" | jq 'select(.go != null) | .geo' > geo-only.json
parse error: Invalid numeric literal at line 2, column 0
|
Printing only the value and excluding null
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.