441,580
I'm trapping both INT and ERR with the following code:

set -ex -o pipefail
dest=$(mktemp -d)
cd "$dest"
trap "echo; echo Clean up; rm -rf $dest" INT ERR
sleep 9999

When I press ^C the clean-up callback is executed multiple times:

++ echo Clean up
Clean up
++ rm -rf /tmp/tmp.KYXL110516
++ echo
++ echo Clean up
Clean up
++ rm -rf /tmp/tmp.KYXL110516

Is that expected behavior? Is it possible to execute it only once?
You're getting both the INT and ERR traps: SIGINT is delivered to sleep, which exits with a non-zero return code. The non-zero return code then triggers the ERR trap. From the bash manual:

    If a sigspec is ERR, the command arg is executed whenever a pipeline (which may consist of a single simple command), a list, or a compound command returns a non-zero exit status...

An example, to see the separate traps:

set -ex -o pipefail
trap "echo Clean up for INT" INT
trap "echo Clean up for ERR" ERR
sleep 9999

Executing, then Control-C:

+ trap 'echo Clean up for INT' INT
+ trap 'echo Clean up for ERR' ERR
+ sleep 9999
^C
++ echo Clean up for INT
Clean up for INT
++ echo Clean up for ERR
Clean up for ERR

As for calling the trap only once, one option would be to reset the ERR trap while inside the INT trap:

...
trap "echo Clean up for INT; trap ERR" INT ERR
...

... which results in:

+ trap 'echo Clean up for INT; trap ERR' INT ERR
+ sleep 9999
^C
++ echo Clean up for INT
Clean up for INT
++ trap ERR
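Applied to the original cleanup script, a minimal sketch (bash assumed; trap - ERR is the unambiguous spelling of the reset):

#!/usr/bin/env bash
set -ex -o pipefail

dest=$(mktemp -d)
cd "$dest"

# Reset the ERR trap inside the handler itself, so the ^C that kills
# sleep does not fire the cleanup a second time via ERR.
trap 'echo; echo Clean up; rm -rf "$dest"; trap - ERR' INT ERR

sleep 9999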
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441580", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11318/" ] }
441,645
I have written a simple function in shell that returns 0 or 1 based on some condition. Let me call that function foo:

foo(){
...
...
}

Now I am trying to call foo in an if condition as follows:

if ( foo $1 )
.....

It works fine. But when I use the following approach to call it, I get an error:

if [ foo $1 ]
......

Why does it throw the error "Unary operator expected"?
When you use:

if ( foo $1 )

you are simply executing foo $1 in a subshell, and if is acting on its exit status. When you use:

if [ foo $1 ]

you are attempting to use the shell test, and foo is not a valid test operator. You can find the valid test operators here. It's not necessarily relevant for your issue, but you should also always quote variables, especially inside the shell test brackets. The shell test will succeed simply with the presence of something, so even when using a valid test operator you could get unwanted results:

$ unset var
$ [ -n $var ] && echo yes
yes
$ [ -n "$var" ] && echo yes
$ [ -n "" ] && echo yes
$ [ -n ] && echo yes
yes
$ [ foo ] && echo yes
yes
$ [ foo bar ] && echo yes
-bash: [: foo: unary operator expected

The presence of a single string inside the shell test will evaluate to true, whereas the presence of two or more strings expects one of them to be a valid test operator.
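For completeness, a minimal sketch of the idiomatic call, which needs neither parentheses nor brackets (the body of foo is hypothetical, standing in for the asker's condition):

foo() {
    # succeed (return 0) or fail (return 1) based on some condition
    [ "$1" = "yes" ]
}

if foo "$1"; then
    echo "foo succeeded"
else
    echo "foo failed"
fi

if runs the command and branches on its exit status directly, so even the subshell in ( foo $1 ) is unnecessary.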
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441645", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/286048/" ] }
441,652
A cpp file I'm working with creates a directory, i.e. mkdir( path, ... ), where path comes from an environment variable (e.g. getenv( "FOO" )). As an example, say $FOO is /foo, and path, created above, is /foo/newPath/. For my question's scenario, it is possible that /foo/oldPath/ exists and has content (assume no further subdirectories), in which case I want to move files from /foo/oldPath/ to /foo/newPath/. My question is: because /foo/newPath/ is created as a subdirectory of $FOO, i.e. /foo/newPath/ and /foo/oldPath/ have the same parent directory, is it then guaranteed that both directories are on the same "mounted file system"? My understanding of mount points and file systems on Linux is tenuous at best. The context behind this question: if /foo/newPath/ and /foo/oldPath/ are guaranteed to be on the same mounted file system, I can use rename() to perform the file movement more easily than the alternatives. The man page of the function says that it will fail if oldPath and newPath are not on the same "mounted file system."
They are not guaranteed that. It is possible that /foo/oldPath is a mount point. This can, however, be easily checked by running:

mount | grep 'on /foo/oldPath'

No output should indicate that the oldPath directory is not a mount point. You will need to be more careful if you are using nested directories, since you can have a mount point anywhere. I'm not sure whether this is automated, but it's worth noting that the 3rd field from mount (space-separated) is the mount point for each line, so a cut -d ' ' -f 3 could be used to extract the path (should you need to verify it's not just a substring of another mount point, like /foo/oldPath/nested/mountPoint). If you'd like to translate this into C/C++ code, you may be able to use system("mount | grep 'on /foo/oldPath'"), but I won't swear by that. You may have better luck on Stack Overflow for more implementation detail if you need it.
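An alternative check the answer doesn't mention, sketched on the assumption that GNU coreutils' stat is available: the kernel exposes each file's containing device as st_dev, and comparing device numbers is essentially the condition rename() itself enforces, so it avoids parsing mount output entirely:

# Equal device numbers mean both paths live on the same mounted
# filesystem, so rename() between them will not fail with EXDEV.
same_fs() {
    [ "$(stat -c %d "$1")" = "$(stat -c %d "$2")" ]
}

if same_fs /foo/oldPath /foo/newPath; then
    echo "same mounted filesystem: rename() is safe"
fi

In C++ the equivalent is calling stat(2) on both directories and comparing the st_dev members.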
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/441652", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/214773/" ] }
441,664
Sometimes when I run the commands:

sudo apk update && sudo apk upgrade

on Alpine Linux, it fails to update the packages even though I'm connected to the internet. But if I do:

sudo su "echo 'nameserver 8.8.8.8' > /etc/resolv.conf"

I manage to download them. But this solution causes me frustration: I need to set the DNS all the time, and sometimes /etc/resolv.conf gets overridden by itself. How can I have a more permanent solution?
You can solve the problem by installing the dhclient package. Enable Google's DNS servers one last time by running:

sudo su "echo 'nameserver 8.8.8.8' > /etc/resolv.conf"

Then run this cocktail of commands:

sudo apk update && sudo apk upgrade && sudo apk add dhclient

in order to get the fresh packages and install dhclient. Then configure /etc/dhcp/dhclient.conf and put in the following:

option rfc3442-classless-static-routes code 121 = array of unsigned integer 8;
send host-name = gethostname();
request subnet-mask, broadcast-address, time-offset, routers,
        domain-name, domain-name-servers, domain-search, host-name,
        dhcp6.name-servers, dhcp6.domain-search, dhcp6.fqdn, dhcp6.sntp-servers,
        netbios-name-servers, netbios-scope, interface-mtu,
        rfc3442-classless-static-routes, ntp-servers;
prepend domain-name-servers 8.8.8.8, 8.8.4.4;

And restart the networking:

sudo rc-service networking restart

Optionally you can confirm that the configuration survives a reboot:

sudo reboot

In either case you can confirm that DNS is resolved by pinging Google:

ping google.com
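A quick check that the prepend actually stuck after the restart (my addition; the first nameserver line is what the resolver tries first):

# 8.8.8.8 should now appear as the first nameserver entry
head -n 5 /etc/resolv.conf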
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441664", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173648/" ] }
441,668
I'm running rsync to back up a remote machine to a USB hard drive on an ARM SBC, and sometimes rsync just stops with "read error from input device (I/O error)". I believe the issue is related to UAS + USB 3.0 + rsync causing high I/O load, because of uas_eh_device_reset_handler in /var/log/messages:

sd 0:0:0:0: [sda] tag#1 data cmplt err -32 uas-tag 2 inflight:
sd 0:0:0:0: [sda] tag#1 CDB: opcode=0x28 28 00 38 80 0a 68 00 00 a0 00
sd 0:0:0:0: [sda] tag#0 data cmplt err -32 uas-tag 1 inflight: CMD
sd 0:0:0:0: [sda] tag#0 CDB: opcode=0x2a 2a 00 57 50 28 78 00 03 00 00
sd 0:0:0:0: [sda] tag#1 uas_eh_abort_handler 0 uas-tag 2 inflight: CMD
sd 0:0:0:0: [sda] tag#1 CDB: opcode=0x28 28 00 38 80 0a 68 00 00 a0 00
sd 0:0:0:0: [sda] tag#2 uas_eh_abort_handler 0 uas-tag 3 inflight: CMD
sd 0:0:0:0: [sda] tag#2 CDB: opcode=0x2a 2a 00 19 47 7f 20 00 00 90 00
sd 0:0:0:0: [sda] tag#0 uas_eh_abort_handler 0 uas-tag 1 inflight: CMD
sd 0:0:0:0: [sda] tag#0 CDB: opcode=0x2a 2a 00 57 50 28 78 00 03 00 00
scsi host0: uas_eh_device_reset_handler start
usb 5-1: reset high-speed USB device number 2 using ehci-platform
scsi host0: uas_eh_device_reset_handler success
sd 0:0:0:0: [sda] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
sd 0:0:0:0: [sda] tag#0 Sense Key : 0x2 [current]
sd 0:0:0:0: [sda] tag#0 ASC=0x3a ASCQ=0x0
sd 0:0:0:0: [sda] tag#0 CDB: opcode=0x2a 2a 00 57 50 28 78 00 03 00 00
sd 0:0:0:0: [sda] tag#1 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
sd 0:0:0:0: [sda] tag#1 Sense Key : 0x2 [current]
sd 0:0:0:0: [sda] tag#1 ASC=0x3a ASCQ=0x0
sd 0:0:0:0: [sda] tag#1 CDB: opcode=0x2a 2a 00 19 47 7f 20 00 00 90 00
EXT4-fs warning (device sda1): ext4_end_bio:323: I/O error 10 writing to inode 13001563 (offset 0 size 73728 starting block 53014518)

This SBC doesn't have a USB 3 port; however, it still loads the hard drive with UAS. According to this, UAS is broken on some HD enclosure chips. The solution provided is to disable UAS; however:

1- If I blacklist UAS completely with blacklist uas in /etc/modprobe.d/blacklist-uas.conf, I get:

lsusb -t
/: Bus 05.Port 1: Dev 1, Class=root_hub, Driver=ehci-platform/1p, 480M
    |__ Port 1: Dev 2, If 0, Class=Mass Storage, Driver=, 480M

Looking at Class=Mass Storage, Driver=, 480M, it seems like the system doesn't load any other way to deal with the drive.

2- If I just try to disable UAS for a specific USB device, like the post recommended, it still loads with UAS:

echo options usb-storage quirks=174c:55aa:u | tee /etc/modprobe.d/blacklist-uas.conf
update-initramfs -u
reboot
(...)
dmesg | grep sda
[    2.488105] sd 0:0:0:0: [sda] 2930277168 512-byte logical blocks: (1.50 TB/1.36 TiB)
[    2.488584] sd 0:0:0:0: [sda] Write Protect is off
[    2.488592] sd 0:0:0:0: [sda] Mode Sense: 43 00 00 00
[    2.489335] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    2.539288] sda: sda1
[    2.543875] sd 0:0:0:0: [sda] Attached SCSI disk
[    6.898109] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: errors=remount-ro,data=ordered

lsusb | grep ASMedia
Bus 005 Device 002: ID 174c:55aa ASMedia Technology Inc. ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge

lsusb -t
/: Bus 05.Port 1: Dev 1, Class=root_hub, Driver=ehci-platform/1p, 480M
    |__ Port 1: Dev 2, If 0, Class=Mass Storage, Driver=uas, 480M

What am I doing wrong? Is it possible to disable UAS and make the system still use the HD in some other way? Why doesn't options usb-storage quirks=174c:55aa:u disable UAS as it should? Thank you.
Some notes:

OS: Debian GNU/Linux 9.4 (stretch), kernel 4.14.18-sunxi64 from armbian
SBC: NanoPi NEO2
With the precious help from @A.B I managed to fix this. As he said, my kernel (probably every armbian SBC kernel) doesn't have usb_storage loaded as a module; it is built-in. In this case, we need to change the boot options that are visible under /proc/cmdline:

root=UUID=b58.... rootfstype=ext4 console=tty1 console=ttyS0,115200 panic=10 consoleblank=0 loglevel=1 ubootpart=096d26e5-01 usb-storage.quirks=0x2537:0x1066:u,0x2537:0x1068:u cgroup_enable=memory swapaccount=1

At the end there is usb-storage.quirks=0x2537:0x1066:u,0x2537:0x1068:u already set. We can't edit this file directly; in armbian these options are stored in the file /boot/armbianEnv.txt:

verbosity=1
console=both
overlay_prefix=sun50i-h5
overlays=usbhost1 usbhost2
rootdev=UUID=b58048d3-ca7b-4ea6-9812-95d403fddadd
rootfstype=ext4
usbstoragequirks=0x2537:0x1066:u,0x2537:0x1068:u

So I just added my device to the last line as ,174c:55aa:u, making it:

usbstoragequirks=0x2537:0x1066:u,0x2537:0x1068:u,174c:55aa:u

Just in case, I re-ran update-initramfs -u, and after a reboot the USB HD now uses only usb-storage instead of uas:

lsusb -t
/: Bus 05.Port 1: Dev 1, Class=root_hub, Driver=ehci-platform/1p, 480M
    |__ Port 1: Dev 2, If 0, Class=Mass Storage, Driver=usb-storage, 480M

As you can see here, uas is now properly blacklisted for the device:

dmesg | grep "usb 5-1"
[    2.308569] usb 5-1: new high-speed USB device number 2 using ehci-platform
[    2.467087] usb 5-1: New USB device found, idVendor=174c, idProduct=55aa
[    2.467106] usb 5-1: New USB device strings: Mfr=2, Product=3, SerialNumber=1
[    2.467117] usb 5-1: Product: ASM1153E
[    2.467127] usb 5-1: Manufacturer: Inateck
[    2.467137] usb 5-1: SerialNumber: 12345678910E
[    2.468297] usb 5-1: UAS is blacklisted for this device, using usb-storage instead
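An extra runtime check worth knowing (my addition; it assumes the kernel exposes the built-in module's parameters in sysfs, which it normally does): the quirks the kernel actually parsed can be read back without another reboot:

cat /proc/cmdline                                  # the usb-storage.quirks=... string should include 174c:55aa:u
cat /sys/module/usb_storage/parameters/quirks      # what the usb-storage driver is really using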
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441668", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23085/" ] }
441,675
I am a newcomer to Linux, and I want to install Linux Lite on an old Dell laptop. It has 2GB of RAM and is 32-bit. I get a message that tells me my CPU doesn't have the kernel feature required to install: PAE. I have tried to download PAE (in Mint) from the Synaptic Package Manager, and it tells me it has installed. However, it does not show up in an inquiry under flags. Another forum implies that PAE PREVENTS installation on a system with less than 3GB of RAM, whereas I understood it was necessary with an older machine. Can anybody tell me whether I need PAE or not? And if I don't, why do I get the message?
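A hedged pointer rather than a definitive answer (my assumption: this is a Pentium M/Celeron M era laptop, the usual culprits for this message): PAE is a CPU feature that most modern 32-bit kernels require; it has nothing to do with any RAM minimum, and it cannot be installed as a package. Many of those CPUs actually support PAE without advertising the flag, in which case the installer can usually be booted with the forcepae option:

# Check whether the CPU advertises PAE at all
grep -q pae /proc/cpuinfo && echo "PAE advertised" || echo "PAE flag missing"

# If the flag is missing on a Pentium M, adding 'forcepae' to the kernel
# line in the installer's boot menu usually lets the install proceed.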
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441675", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/289215/" ] }
441,676
I periodically receive a text file with phone numbers formatted in wildly different ways: ########## , ###-###-#### , (###) ###-### , etc. Usually there's ten digits, but I've seen +1 (###) ###-#### . Eventually the file gets imported into a database, but for reasons I won't go into, it'd be handy to have the phone numbers have a standard format, (###) ###-#### . The only constant is that the phone numbers always fall between the second and third tab character on each line. Is there a way to do this from the command line?
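A minimal sketch of one way to do this, offered as a suggestion rather than an authoritative answer, assuming awk is available and that every number reduces to exactly ten digits once punctuation and an optional leading 1 are stripped (with a tab field separator, the field between the second and third tab is $3):

awk -F'\t' -v OFS='\t' '{
    n = $3
    gsub(/[^0-9]/, "", n)                 # keep digits only
    if (length(n) == 11 && n ~ /^1/)      # drop a leading +1 country code
        n = substr(n, 2)
    if (length(n) == 10)                  # only rewrite well-formed numbers
        $3 = "(" substr(n,1,3) ") " substr(n,4,3) "-" substr(n,7,4)
    print
}' input.txt

Numbers that don't reduce to ten digits pass through unchanged, so oddities survive for manual review.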
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441676", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74293/" ] }
441,683
I wonder why, when I provide root's password, the following command reports failure?

$ su postgres
Password:
su: Authentication failure

Is it correct that su asks for the password of root, not of postgres? If it is the password of postgres: when I installed PostgreSQL, I didn't set up a login name to connect to the PostgreSQL server, and I didn't explicitly create the user postgres on my Ubuntu, so what is its password?

In /etc/passwd:

postgres:x:124:133:PostgreSQL administrator,,,:/var/lib/postgresql:/bin/bash

In /etc/shadow:

postgres:*:17478:0:99999:7:::

If I indeed can't su postgres, then, generally speaking, what target users can su switch to? Is it correct that they are also the users whose IDs setuid() can take as an argument? From APUE, I learned that login names without a valid login shell command can't be used for login. Can they also be su'ed to? But postgres has a valid login shell command, /bin/bash, so why can't I su postgres? Thanks.
Look at the second field of /etc/shadow:

postgres:*:17478:0:99999:7:::

Normally it would have the encrypted password, but here it has just a single asterisk. That means the account is locked: no password will be acceptable for it. This is the state any new account will have until a password is assigned to it. To transition into a user account that is currently locked, you would need a transition method that does not ask for the password of the target account. For su, that would mean you would have to fully become root first. It would be possible to configure sudo to allow you access to the postgres account even though it is locked for password authentication. The /etc/sudoers line would be something like this:

Tim ALL=(postgres) ALL

The sudo command line equivalent to su postgres would be sudo -u postgres -s. Note: in this method, some environment settings from your original account may still be in use in your session as user postgres. You may or may not want that: it could actually be useful if you have two or more database administrators, with different personal preferences for their shell/environment, both sudoing to the postgres account. If you want the environment to be exactly as it would be had user postgres logged in directly, you could also use sudo -u postgres -i (the equivalent of su - postgres). But if you want to have su postgres work, you would just need to have a password set for the user postgres. That can be achieved by running passwd postgres as root.
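Two commands that go with this, shown as a sketch (assuming the usual shadow-utils passwd; an L in the status output means locked, P means a usable password is set):

# Inspect the account's password status without reading /etc/shadow by hand
sudo passwd -S postgres

# Set a password for postgres; afterwards 'su postgres' will accept it
sudo passwd postgres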
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441683", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
441,717
If I open a terminal and execute the w command, it will show:

user     tty7     :0    12:04   39:56  36.87s  0.06s /sbin/upstart -

Now if I open terminator or xterm and execute the w command, it will show its entry in the output of w, like:

user     tty7     :0    12:04   39:56  36.87s  0.06s /sbin/upstart -
user     pts/2    :0.0  12:50    1.00s  0.02s  0.00s w

but it will not show a new entry when I open gnome-terminal or xfce4-terminal. Why is it showing a new session for terminator and not for xfce4-terminal?
w displays the information stored in utmp (/var/run/utmp typically on Linux systems). This generally is only updated by “login” sessions, i.e. login (for logins on virtual consoles or serial connections), the display manager (for graphical sessions), the SSH server (for SSH connections), and some (most?) terminal emulators. In the latter case, whether or not they update utmp depends on their built-in support and configuration; for example, xterm has the ut flag for this (-ut disables utmp updates, +ut enables them), and GNOME Terminal no longer updates utmp directly at all. So you’re seeing the entries which have been added to utmp in your case: one added by your display manager (on tty7), and others added by some of the terminal emulators you’re using. It should be possible to wrap commands to add utmp logging to anything you like, using for example libutempter, but that is apparently not as straightforward as one might hope.
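For example, xterm's behaviour can be toggled per invocation (a sketch, assuming an xterm built with utmp support, as described in its man page):

xterm +ut    # record this session in utmp: it will appear in w
xterm -ut    # skip the utmp record: w will not list it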
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441717", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/255251/" ] }
441,731
As part of my assignment I was asked to find a group called + and to write briefly what I think of it. This group in /etc/group is shown as:

+:x::

which means it has no GID and no users. But what does that mean? What does having no GID do to a group? I wrote that it might be invalid, but apparently that's a wrong answer. I couldn't find the answer in any documentation or tutorial.
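A hedged sketch of the usual explanation (my note; verify against your course material): a leading + entry is not an ordinary group but the traditional NIS "compat" syntax. When the group database is in compat mode, a +:x:: line tells the resolver to splice in the groups from the NIS/NIS+ map at that point in the file:

# /etc/nsswitch.conf -- compat mode is what gives +/- lines their meaning
passwd: compat
group:  compat

With compat mode off (e.g. group: files), the + line is a dead directive rather than a real group, which is why it has no GID and no members of its own.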
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441731", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/289267/" ] }
441,740
I want to get the BIOS version from Linux without going directly to the BIOS. I mean, is there a way to get the BIOS version from inside Linux?
Without superuser privileges

It is as simple as reading the following file:

$ cat /sys/class/dmi/id/bios_version
1.1.3

With superuser privileges

Use dmidecode:

$ sudo dmidecode -s bios-version
1.1.3

You might have to install this package first; it is available on:

Linux i386, x86-64, ia64
FreeBSD i386, amd64
NetBSD i386, amd64
OpenBSD i386, amd64
BeOS i386
Solaris x86
Haiku i586
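Related DMI fields live alongside it in the same sysfs directory (these paths are standard on Linux kernels with DMI support):

cat /sys/class/dmi/id/bios_vendor    # the firmware vendor
cat /sys/class/dmi/id/bios_date     # the firmware build date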
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/441740", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/244823/" ] }
441,765
I have a folder with multiple .json files. There are certain files with empty arrays. Sample file:

{"WarehouseActivity": []}

The file has no other data apart from the one shown above. I need to identify these files and move them into an error folder. Any suggestions on how to go about this would be great. Thanks, Kavin
jq would be the right tool for processing/analyzing JSON data:

for f in *.json; do
    if jq -e 'keys_unsorted as $keys | ($keys | length == 1) and .[($keys[0])] == []' "$f" > /dev/null; then
        mv "$f" error_dir/
    fi
done
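If the files to quarantine always look exactly like the sample, a simpler exact-match filter also works (a sketch resting on that assumption; -e makes jq's exit status reflect the test, which is what if consumes):

for f in *.json; do
    if jq -e '. == {"WarehouseActivity": []}' "$f" > /dev/null; then
        mv "$f" error_dir/
    fi
done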
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441765", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/280955/" ] }
441,876
Task

I need to unambiguously, and without "holistic" guessing, find the peer network interface of a veth end in another network namespace.

Theory ./. Reality

Albeit a lot of documentation and also answers here on SO assume that the ifindex indices of network interfaces are globally unique per host across network namespaces, this doesn't hold in many cases: ifindex/iflink are ambiguous. Even the loopback already shows the contrary, having an ifindex of 1 in any network namespace. Also, depending on the container environment, ifindex numbers get reused in different namespaces. Which makes tracing veth wiring a nightmare, especially with lots of containers and a host bridge with veth peers all ending in @if3 or so...

Example: link-netnsid is 0

Spin up a Docker container instance, just to get a new veth pair connecting from the host network namespace to the new container network namespace...

$ sudo docker run -it debian /bin/bash

Now, in the host network namespace, list the network interfaces (I've left out those interfaces that are of no interest to this question):

$ ip link show
1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
...
4: docker0: mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:34:23:81:f0 brd ff:ff:ff:ff:ff:ff
...
16: vethfc8d91e@if15: mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether da:4c:f7:50:09:e2 brd ff:ff:ff:ff:ff:ff link-netnsid 0

As you can see, the iflink is unambiguous, but the link-netnsid is 0, despite the peer end sitting in a different network namespace. For reference, check the netnsid in the unnamed network namespace of the container:

$ sudo lsns -t net
        NS TYPE NPROCS   PID USER COMMAND
...
4026532469 net       1 29616 root /bin/bash

$ sudo nsenter -t 29616 -n ip link show
1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
15: eth0@if16: mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0

So, for both veth ends, ip link show (and RTNETLINK fwiw) tells us they're in the same network namespace with netnsid 0. Which is either wrong, or correct under the assumption that link-netnsids are local as opposed to global. I could not find any documentation that makes explicit what scope link-netnsids are supposed to have.

/sys/class/net/... NOT to the Rescue?

I've looked into /sys/class/net/<if>/... but can only find the ifindex and iflink elements; these are well documented. "ip link show" also only seems to show the peer ifindex in form of the (in)famous "@if#" notation. Or did I miss some additional network namespace element?

Bottom Line/Question

Are there any syscalls that allow retrieving the missing network namespace information for the peer end of a veth pair?
Many thanks to @A.B who filled in some missing pieces for me, especially regarding the semantics of netnsids. His PoC is very instructive. However, the crucial missing piece in his PoC is how to correlate a local netnsid to its globally unique network namespace inode number, because only then can we unambiguously connect the correct corresponding veth pairs.

To summarize, and to give a small Python example of how to gather the information programmatically without having to rely on ip netns and its need to mount things:

RTNETLINK actually returns the netnsid when querying for network interfaces. It's the IFLA_LINK_NETNSID attribute, which only appears in a link's info when needed. If it's not there, then it isn't needed -- and we must assume that the peer index refers to a namespace-local network interface.

The important lesson to take home is that a netnsid/IFLA_LINK_NETNSID is only locally defined within the network namespace where you got it when asking RTNETLINK for link information. A netnsid with the same value gotten in a different network namespace might identify a different peer namespace, so be careful not to use the netnsid outside its namespace.

But which uniquely identifiable network namespace (inode number) maps to which netnsid? As it turns out, a very recent version of lsns (as of March 2018) is well capable of showing the correct netnsid next to its network namespace inode number! So there is a way to map local netnsids to namespace inodes, but it is actually backwards! And it's more an oracle (with a lowercase 'o') than a lookup: RTM_GETNSID needs a network namespace identifier, either as a PID or an FD (to the network namespace), and then returns the netnsid. See https://stackoverflow.com/questions/50196902/retrieving-the-netnsid-of-a-network-namespace-in-python for an example of how to ask the Linux network namespace oracle.

In consequence, you need to (1) enumerate the available network namespaces (via /proc and/or /var/run/netns), then (2) for a given veth network interface, attach to the network namespace where you found it, (3) ask for the netnsids of all the network namespaces you enumerated at the beginning (because you never know beforehand which is which), and finally (4) map the netnsid of the veth peer to the namespace inode number per the local map you created in step 3, after attaching to the veth's namespace.

import psutil
import os
import pyroute2
from pyroute2.netlink import rtnl, NLM_F_REQUEST
from pyroute2.netlink.rtnl import nsidmsg
from nsenter import Namespace

# phase I: gather network namespaces from /proc/[0-9]*/ns/net
netns = dict()
for proc in psutil.process_iter():
    netnsref = '/proc/{}/ns/net'.format(proc.pid)
    netnsid = os.stat(netnsref).st_ino
    if netnsid not in netns:
        netns[netnsid] = netnsref

# phase II: ask the kernel "oracle" about the local IDs for the
# network namespaces we've discovered in phase I, doing this
# from all discovered network namespaces
for id, ref in netns.items():
    with Namespace(ref, 'net'):
        print('inside net:[{}]...'.format(id))
        ipr = pyroute2.IPRoute()
        for netnsid, netnsref in netns.items():
            with open(netnsref, 'r') as netnsf:
                req = nsidmsg.nsidmsg()
                req['attrs'] = [('NETNSA_FD', netnsf.fileno())]
                resp = ipr.nlm_request(req, rtnl.RTM_GETNSID, NLM_F_REQUEST)
                local_nsid = dict(resp[0]['attrs'])['NETNSA_NSID']
                if local_nsid != 2**32 - 1:
                    print('  net:[{}] <--> nsid {}'.format(netnsid, local_nsid))
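A note on running the listing (assumptions on my part: the pyroute2, psutil and nsenter packages from PyPI, and root privileges, since entering another network namespace requires CAP_SYS_ADMIN):

sudo pip install pyroute2 psutil nsenter
sudo python map_netnsids.py    # hypothetical filename for the script above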
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441876", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288012/" ] }
441,927
I answered this question on Super User, which was related to the kind of regular expressions used while grepping an output. The answer I gave was this:

tail -f log | grep "some_string.*some_string"

And then, in three comments to my answer, @Bob wrote this:

    .* is greedy and might capture more than you want. .*? is usually better.

Then this:

    the ? is a modifier on *, making it lazy instead of the greedy default. Assuming PCRE.

I googled for PCRE, but couldn't work out its significance for my answer. And finally this:

    I should also point out that this is regex (grep doing POSIX regex by default), not a shell glob.

I only know what a regex is and the very basic usage of it in the grep command. So I couldn't understand any of those 3 comments, and I have these questions in mind:

What is the difference in usage between .*? and .* ? Which is better, and under what circumstances? Please provide examples. It would also be helpful if someone could explain the comments.

UPDATE: As an answer to the question of how regexes differ from shell globs, @Kusalananda provided this link in his comment.

NOTE: If needed, please read my answer to this question before answering, for context.
Suppose I take a string like:

can cats eat plants?

The greedy c.*s will match from the first c all the way to the last s: being a greedy operator, it continues to match until the final occurrence of s. The lazy c.*?s will only match until the first occurrence of s is found, i.e. the string can cats.

From the above example, you might be able to gather that:

"Greedy" means matching the longest possible string.
"Lazy" means matching the shortest possible string.

Adding a ? to a quantifier like *, +, ?, or {n,m} makes it lazy.
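A quick way to see the difference from the command line (a sketch assuming a GNU grep built with PCRE support, which -P enables; -o prints only the matched part):

$ echo 'can cats eat plants?' | grep -oP 'c.*s'
can cats eat plants
$ echo 'can cats eat plants?' | grep -oP 'c.*?s'
can cats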
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441927", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/259047/" ] }
441,969
I'm wondering how to display a connection status using nmcli. I understand that the following will display a list of configured connections:

nmcli con show

And I also understand that the following will show only active connections:

nmcli con show --active

And that the following will display all settings for a connection (which is a very long list):

nmcli con show {connection_name}

My question is: is there a quick way to display the status of a connection? Something similar to:

nmcli con status {connection_name}

Noting that the above is actually not a valid option on CentOS or Fedora.
As user B Layer suggested in their comment, you can specify a field name with nmcli. I think the most relevant field in your case is GENERAL.STATE:

nmcli -f GENERAL.STATE con show {connection_name}

For my current connection, this yields:

GENERAL.STATE: activated
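If you want just the bare value for scripting, newer nmcli releases also accept -g (--get-values); my assumption here is a NetworkManager recent enough to ship that option:

nmcli -g GENERAL.STATE con show {connection_name}
activated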
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/441969", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/256420/" ] }
442,057
I am trying to run Ubuntu 18.04 on a laptop with an AMD A12 processor and Radeon R7 graphics. I am having nothing but problems and am very discouraged with Ubuntu, though I used 12.04 for years without such problems. I have two problems that are maddening, and I will post them separately. The problems happen only on GNOME and "Ubuntu on Xorg", and do not happen on Wayland. However, I am told that it's best not to run GNOME on Wayland. This problem is that the computer sometimes doesn't wake up from suspend. Well, I think it does wake up, because the optical drive spins and the hard drive ticks away, but I cannot log in because the lock screen is a hash of colors or a distorted background without a place to log in. I cannot even Ctrl-Alt-F1 to get to a prompt. All input is frozen. I am wondering if Xorg is configured correctly. I am running the Oibaf video driver, which works well under Wayland, so I don't know if there is a problem between Xorg and that driver.
There seems to be an issue with the nouveau driver. Edit the grub file with sudo access:

sudo vim /etc/default/grub

Add nouveau.modeset=0 to the line that says GRUB_CMDLINE_LINUX="" so that it finally looks like this:

GRUB_CMDLINE_LINUX="nouveau.modeset=0"

Then run:

sudo update-grub

Reboot after successfully updating grub.
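After rebooting, you can confirm the parameter actually reached the kernel (a check of my own, not part of the original answer):

grep -o 'nouveau.modeset=0' /proc/cmdline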
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/442057", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/289521/" ] }
442,181
I'm running Arch Linux. After I reboot, the sshd.service fails to start because the network is not yet up. I have to manually run:

sudo systemctl restart sshd

Once I do that, the sshd service starts without any errors and operates correctly. How do I resolve this so that the systemd sshd.service will start automatically as it should? Here are the logs:

May 06 15:47:23 server2 nm-dispatcher[471]: req:3 'connectivity-change': completed: no scripts
May 06 15:47:23 server2 nm-dispatcher[471]: req:3 'connectivity-change': new request (0 scripts)
May 06 15:47:23 server2 NetworkManager[440]: <info> [1525636043.9284] manager: NetworkManager state is now CONNECTED_GLOBAL
May 06 15:47:22 server2 nm-dispatcher[471]: req:2 'up' [eth0]: completed: no scripts
May 06 15:47:22 server2 nm-dispatcher[471]: req:2 'up' [eth0]: new request (0 scripts)
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6632] manager: startup complete
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6626] device (eth0): Activation: successful, device activated.
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6566] policy: set 'Wired connection 1' (eth0) as default for IPv4 routing and DNS
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6565] manager: NetworkManager state is now CONNECTED_SITE
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6558] manager: NetworkManager state is now CONNECTED_LOCAL
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6557] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'manag>
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6555] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'manage>
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6549] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6541] dhcp4 (eth0): state changed unknown -> bound
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6536] dhcp4 (eth0): gateway 10.10.0.1
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6536] dhcp4 (eth0): hostname 'server2'
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6536] dhcp4 (eth0): domain name 'oaks'
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6536] dhcp4 (eth0): nameserver '10.10.0.1'
May 06 15:47:22 server2 dbus-daemon[439]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.Avahi.service': Unit dbus-org.freedesktop.Avahi.serv>
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6535] dhcp4 (eth0): expires in 648000 seconds
May 06 15:47:22 server2 dbus-daemon[439]: [system] Activating via systemd: service name='org.freedesktop.Avahi' unit='dbus-org.freedesktop.Avahi.service' requested >
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6535] dhcp4 (eth0): plen 24
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6535] dhcp4 (eth0): address 10.10.1.12
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6293] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6284] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6275] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6262] manager: NetworkManager state is now CONNECTING
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6258] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'manage>
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6251] device (eth0): Activation: starting connection 'Wired connection 1' (5b9f0bf9-ed10-31e5-bcad->
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6224] policy: auto-activating connection 'Wired connection 1'
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6206] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface>
May 06 15:47:22 server2 NetworkManager[440]: <info> [1525636042.6196] device (eth0): carrier: link connected
May 06 15:47:22 server2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 06 15:47:22 server2 kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
May 06 15:47:22 server2 udisksd[598]: udisks daemon version 2.7.6 starting
May 06 15:47:22 server2 systemd[1]: Starting Disk Manager...
May 06 15:47:20 server2 systemd[1]: sshd.service: Failed with result 'exit-code'.
May 06 15:47:20 server2 systemd[1]: sshd.service: Main process exited, code=exited, status=255/n/a
May 06 15:47:20 server2 sshd[564]: fatal: Cannot bind any address.
May 06 15:47:20 server2 sshd[564]: error: Bind to port 22 on 10.10.1.12 failed: Cannot assign requested address.
May 06 15:47:20 server2 systemd[1]: Started OpenSSH Daemon.
May 06 15:47:20 server2 systemd[1]: Stopped OpenSSH Daemon.
May 06 15:47:20 server2 systemd[1]: sshd.service: Scheduled restart job, restart counter is at 3.
May 06 15:47:20 server2 systemd[1]: sshd.service: Service hold-off time over, scheduling restart.
May 06 15:47:19 server2 systemd[1]: Startup finished in 12.430s (firmware) + 6.006s (loader) + 8.847s (kernel) + 2.106s (userspace) = 29.391s.
May 06 15:47:19 server2 polkitd[543]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
May 06 15:47:19 server2 systemd[1]: Started Authorization Manager.
May 06 15:47:19 server2 dbus-daemon[439]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
May 06 15:47:19 server2 polkitd[543]: Finished loading, compiling and executing 2 rules
May 06 15:47:19 server2 polkitd[543]: Error compiling script /etc/polkit-1/rules.d/40-allow-passwordless-printer-admin.rules
May 06 15:47:19 server2 polkitd[543]: Loading rules from directory /usr/share/polkit-1/rules.d
May 06 15:47:19 server2 polkitd[543]: Loading rules from directory /etc/polkit-1/rules.d
May 06 15:47:19 server2 polkitd[543]: Started polkitd version 0.114
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.9110] supplicant: wpa_supplicant running
May 06 15:47:19 server2 wpa_supplicant[546]: Successfully initialized wpa_supplicant
May 06 15:47:19 server2 systemd[1]: Started WPA supplicant.
May 06 15:47:19 server2 dbus-daemon[439]: [system] Successfully activated service 'fi.w1.wpa_supplicant1'
May 06 15:47:19 server2 systemd[1]: Started CUPS Scheduler.
May 06 15:47:19 server2 colord[526]: failed to get session [pid 465]: No data available
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.8565] device (wlan0): set-hw-addr: set MAC address to 02:6B:94:A4:D1:08 (scanning)
May 06 15:47:19 server2 kernel: IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.8563] device (wlan0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'e>
May 06 15:47:19 server2 systemd[1]: sshd.service: Failed with result 'exit-code'.
May 06 15:47:19 server2 sshd[540]: fatal: Cannot bind any address.
May 06 15:47:19 server2 systemd[1]: sshd.service: Main process exited, code=exited, status=255/n/a
May 06 15:47:19 server2 sshd[540]: error: Bind to port 22 on 10.10.1.12 failed: Cannot assign requested address.
May 06 15:47:19 server2 systemd[1]: Starting WPA supplicant...
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.8527] manager: (wlan0): new 802.11 WiFi device (/org/freedesktop/NetworkManager/Devices/3)
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.8521] device (wlan0): driver supports Access Point (AP) mode
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.8519] wifi-nl80211: (wlan0): using nl80211 for WiFi device control
May 06 15:47:19 server2 dbus-daemon[439]: [system] Activating via systemd: service name='fi.w1.wpa_supplicant1' unit='wpa_supplicant.service' requested by ':1.1' (u>
May 06 15:47:19 server2 systemd[1]: Starting Authorization Manager...
May 06 15:47:19 server2 systemd[1]: Started OpenSSH Daemon.
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.8493] manager: rfkill: WiFi now disabled by radio killswitch
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.8492] rfkill1: found WiFi radio killswitch (at /sys/devices/pci0000:00/0000:00:1d.1/0000:05:00.0/ie>
May 06 15:47:19 server2 systemd[1]: Stopped OpenSSH Daemon.
May 06 15:47:19 server2 systemd[1]: sshd.service: Scheduled restart job, restart counter is at 2.
May 06 15:47:19 server2 systemd[1]: sshd.service: Service hold-off time over, scheduling restart.
May 06 15:47:19 server2 dbus-daemon[439]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.1' (uid=>
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.8479] ovsdb: Could not connect: No such file or directory
May 06 15:47:19 server2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
May 06 15:47:19 server2 systemd[1]: sshd.service: Failed with result 'exit-code'.
May 06 15:47:19 server2 systemd[1]: sshd.service: Main process exited, code=exited, status=255/n/a
May 06 15:47:19 server2 sshd[525]: fatal: Cannot bind any address.
May 06 15:47:19 server2 sshd[525]: error: Bind to port 22 on 10.10.1.12 failed: Cannot assign requested address.
May 06 15:47:19 server2 systemd[1]: Starting Manage, Install and Generate Color Profiles...
May 06 15:47:19 server2 systemd[1]: Started OpenSSH Daemon.
May 06 15:47:19 server2 systemd[1]: Stopped OpenSSH Daemon.
May 06 15:47:19 server2 systemd[1]: sshd.service: Scheduled restart job, restart counter is at 1.
May 06 15:47:19 server2 systemd[1]: sshd.service: Service hold-off time over, scheduling restart.
May 06 15:47:19 server2 dbus-daemon[439]: [system] Activating via systemd: service name='org.freedesktop.ColorManager' unit='colord.service' requested by ':1.9' (ui>
May 06 15:47:19 server2 kernel: ath: Regpair used: 0x69
May 06 15:47:19 server2 kernel: ath: Country alpha2 being used: 00
May 06 15:47:19 server2 kernel: ath: EEPROM indicates we should expect a direct regpair map
May 06 15:47:19 server2 kernel: ath: EEPROM regdomain: 0x69
May 06 15:47:19 server2 kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.6322] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'ex>
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.6298] settings: (eth0): created default wired connection 'Wired connection 1'
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.6283] keyfile: add connection in-memory (5b9f0bf9-ed10-31e5-bcad-42dadabdcdad,"Wired connection 1")
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.6253] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.6223] manager: (lo): new Generic device (/org/freedesktop/NetworkManager/Devices/1)
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.6207] device (lo): carrier: link connected
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.6185] Loaded device plugin: NMOvsFactory (/usr/lib/NetworkManager/libnm-device-plugin-ovs.so)
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.6170] Loaded device plugin: NMTeamFactory (/usr/lib/NetworkManager/libnm-device-plugin-team.so)
May 06 15:47:19 server2 kernel: ath10k_pci 0000:05:00.0: htt-ver 3.47 wmi-op 4 htt-op 3 cal otp max-sta 32 raw 0 hwcrypto 1
May 06 15:47:19 server2 kernel: ath10k_pci 0000:05:00.0: Unknown eventid: 90118
May 06 15:47:19 server2 kernel: ath10k_pci 0000:05:00.0: Unknown eventid: 118809
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5679] Loaded device plugin: NMWifiFactory (/usr/lib/NetworkManager/libnm-device-plugin-wifi.so)
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5647] Loaded device plugin: NMBluezManager (/usr/lib/NetworkManager/libnm-device-plugin-bluetooth.s>
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5550] Loaded device plugin: NMWwanFactory (/usr/lib/NetworkManager/libnm-device-plugin-wwan.so)
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5299] Loaded device plugin: NMAtmManager (/usr/lib/NetworkManager/libnm-device-plugin-adsl.so)
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5276] Loaded device plugin: NMVxlanDeviceFactory (internal)
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5276] Loaded device plugin: NMVlanDeviceFactory (internal)
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5276] Loaded device plugin: NMVethDeviceFactory (internal)
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5276] Loaded device plugin: NMTunDeviceFactory (internal)
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5276] Loaded device plugin: NMPppDeviceFactory (internal)
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5276] Loaded device plugin: NMMacvlanDeviceFactory (internal)
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5275] Loaded device plugin: NMMacsecDeviceFactory (internal)
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5275] Loaded device plugin: NMIPTunnelDeviceFactory (internal)
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5275] Loaded device plugin: NMInfinibandDeviceFactory (internal)
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5275] Loaded device plugin: NMEthernetDeviceFactory (internal)
May 06 15:47:19 server2 nm-dispatcher[471]: req:1 'hostname': completed: no scripts
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5275] Loaded device plugin: NMDummyDeviceFactory (internal)
May 06 15:47:19 server2 nm-dispatcher[471]: req:1 'hostname': new request (0 scripts)
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5275] Loaded device plugin: NMBridgeDeviceFactory (internal)
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5275] Loaded device plugin: NMBondDeviceFactory (internal)
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5275] dhcp-init: Using DHCP client 'internal'
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5274] manager: Networking is enabled by state file
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5274] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5273] manager: rfkill: WiFi enabled by radio killswitch; disabled by state file
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.5267] settings: loaded plugin iBFT: (c) 2014 Red Hat, Inc. To report bugs please use the NetworkMa>
May 06 15:47:19 server2 ntpd[458]: Command line: /usr/bin/ntpd -g -u ntp:ntp
May 06 15:47:19 server2 ntpd[458]: ntpd [email protected] Wed Mar 7 18:48:03 UTC 2018 (1): Starting
May 06 15:47:19 server2 systemd[1]: Started Network Manager Script Dispatcher Service.
May 06 15:47:19 server2 dbus-daemon[439]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
May 06 15:47:19 server2 systemd[1]: sshd.service: Failed with result 'exit-code'.
May 06 15:47:19 server2 systemd[1]: sshd.service: Main process exited, code=exited, status=255/n/a
May 06 15:47:19 server2 sshd[463]: fatal: Cannot bind any address.
May 06 15:47:19 server2 sshd[463]: error: Bind to port 22 on 10.10.1.12 failed: Cannot assign requested address.
May 06 15:47:19 server2 systemd[1]: Started Simple Desktop Display Manager.
May 06 15:47:19 server2 autossh[462]: ssh child pid is 476
May 06 15:47:19 server2 systemd[1]: Started Permit User Sessions.
May 06 15:47:19 server2 autossh[462]: starting ssh (count 1)
May 06 15:47:19 server2 autossh[462]: port set to 0, monitoring disabled
May 06 15:47:19 server2 autossh[468]: ssh child pid is 475
May 06 15:47:19 server2 autossh[468]: starting ssh (count 1)
May 06 15:47:19 server2 autossh[461]: ssh child pid is 474
May 06 15:47:19 server2 autossh[459]: ssh child pid is 473
May 06 15:47:19 server2 autossh[461]: starting ssh (count 1)
May 06 15:47:19 server2 autossh[460]: ssh child pid is 472
May 06 15:47:19 server2 autossh[459]: starting ssh (count 1)
May 06 15:47:19 server2 autossh[460]: starting ssh (count 1)
May 06 15:47:19 server2 systemd[1]: Starting Network Manager Script Dispatcher Service...
May 06 15:47:19 server2 autossh[466]: ssh child pid is 470
May 06 15:47:19 server2 autossh[466]: starting ssh (count 1)
May 06 15:47:19 server2 autossh[468]: port set to 0, monitoring disabled
May 06 15:47:19 server2 autossh[461]: port set to 0, monitoring disabled
May 06 15:47:19 server2 autossh[459]: port set to 0, monitoring disabled
May 06 15:47:19 server2 autossh[460]: port set to 0, monitoring disabled
May 06 15:47:19 server2 autossh[466]: port set to 0, monitoring disabled
May 06 15:47:19 server2 systemd[1]: Starting Permit User Sessions...
May 06 15:47:19 server2 systemd[1]: Starting CUPS Scheduler...
May 06 15:47:19 server2 systemd[1]: Started OpenSSH Daemon.
May 06 15:47:19 server2 dbus-daemon[439]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.ser>
May 06 15:47:19 server2 systemd[1]: Starting Network Time Service...
May 06 15:47:19 server2 systemd[1]: Reached target Network.
May 06 15:47:19 server2 systemd[1]: Started Network Manager.
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.4679] manager[0x5631afba8080]: rfkill: WWAN hardware radio set enabled
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.4678] manager[0x5631afba8080]: rfkill: WiFi hardware radio set disabled
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.4674] dns-mgr[0x5631afbc3130]: init: dns=default, rc-manager=symlink
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.4672] hostname: hostname changed from (none) to "server2"
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.4672] hostname: hostname: using hostnamed
May 06 15:47:19 server2 systemd[1]: Started Hostname Service.
May 06 15:47:19 server2 dbus-daemon[439]: [system] Successfully activated service 'org.freedesktop.hostname1'
May 06 15:47:19 server2 kernel: input: HDA NVidia HDMI/DP,pcm=9 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input19
May 06 15:47:19 server2 kernel: input: HDA NVidia HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input18
May 06 15:47:19 server2 kernel: input: HDA NVidia HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input17
May 06 15:47:19 server2 kernel: input: HDA NVidia HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input16
May 06 15:47:19 server2 systemd[1]: Starting Hostname Service...
May 06 15:47:19 server2 dbus-daemon[439]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' re>
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.1725] manager[0x5631afba8080]: monitoring kernel firmware directory '/usr/lib/firmware'.
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.1684] Read config: /etc/NetworkManager/NetworkManager.conf (lib: 20-connectivity.conf)
May 06 15:47:19 server2 NetworkManager[440]: <info> [1525636039.1683] NetworkManager (version 1.10.6-3, Arch Linux) is starting... (for the first time)
May 06 15:47:19 server2 systemd-logind[441]: Watching system buttons on /dev/input/event4 ( USB Keyboard)
May 06 15:47:19 server2 systemd-logind[441]: Watching system buttons on /dev/input/event3 ( USB Keyboard)
May 06 15:47:19 server2 systemd-logind[441]: Watching system buttons on /dev/input/event0 (Sleep Button)
May 06 15:47:19 server2 systemd-logind[441]: Watching system buttons on /dev/input/event1 (Power Button)
May 06 15:47:19 server2 systemd-logind[441]: Watching system buttons on /dev/input/event2 (Power Button)
May 06 15:47:19 server2 systemd-logind[441]: New seat seat0.
May 06 15:47:19 server2 systemd[1]: Reached target Timers.
May 06 15:47:19 server2 systemd[1]: Started Daily man-db cache update.
May 06 15:47:19 server2 systemd[1]: Started Daily verification of password and group files.
May 06 15:47:19 server2 systemd[1]: Started Daily Cleanup of Snapper Snapshots.
May 06 15:47:19 server2 systemd[1]: Started Daily autocommit of changes in /etc directory.
May 06 15:47:19 server2 systemd[1]: Starting Login Service...
May 06 15:47:19 server2 systemd[1]: Starting Network Manager...
May 06 15:47:19 server2 systemd[1]: Started D-Bus System Message Bus.
May 06 15:47:19 server2 systemd[1]: Reached target Basic System.
May 06 15:47:19 server2 systemd[1]: Reached target Sockets.
May 06 15:47:19 server2 systemd[1]: Listening on CUPS Scheduler.
May 06 15:47:19 server2 systemd[1]: Listening on D-Bus System Message Bus Socket.
May 06 15:47:19 server2 systemd[1]: Started Daily Cleanup of Temporary Directories.
May 06 15:47:19 server2 systemd[1]: Started Timeline of Snapper Snapshots.
May 06 15:47:19 server2 systemd[1]: Reached target Paths.
May 06 15:47:19 server2 systemd[1]: Started CUPS Scheduler.
May 06 15:47:19 server2 systemd[1]: Started Daily rotation of log files.
May 06 15:47:19 server2 systemd[1]: Reached target System Initialization.
May 06 15:47:19 server2 systemd[1]: Started Update UTMP about System Boot/Shutdown.
May 06 15:47:19 server2 systemd[1]: Starting Update UTMP about System Boot/Shutdown...
May 06 15:47:19 server2 systemd[1]: Started Create Volatile Files and Directories.
May 06 15:47:19 server2 systemd[1]: Starting Create Volatile Files and Directories...
May 06 15:47:19 server2 systemd[1]: Started Flush Journal to Persistent Storage.
May 06 15:47:18 server2 systemd[1]: Reached target Sound Card.

/usr/lib/systemd/system/sshd.service:

[Unit]
Description=OpenSSH Daemon
Wants=sshdgenkeys.service
After=sshdgenkeys.service
After=network.target

[Service]
ExecStart=/usr/bin/sshd -D
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=always

[Install]
WantedBy=multi-user.target

# This service file runs an SSH daemon that forks for each incoming connection.
# If you prefer to spawn on-demand daemons, use sshd.socket and sshd@.service.

/etc/NetworkManager/NetworkManager.conf is empty and /etc/NetworkManager/conf.d/ is empty.

Notable log entries:

May 06 15:47:22 server2 kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
May 06 15:47:19 server2 systemd[1]: sshd.service: Failed with result 'exit-code'.
May 06 15:47:19 server2 systemd[1]: Reached target Network.

It appears that the system is reaching target Network before the actual interface is up. How is this resolved?
As A.B indicated, your ssh config has likely been adjusted to listen on a specific IP address. If that IP address is not available when sshd starts, then the service will fail. By default, sshd is configured to listen on 0.0.0.0 , which means any address, and thus doesn't depend on a specific IP being available. There are two ways you can address this: Start after network-online.target. systemd provides a network-online.target unit for services which need the network to be fully up (all interfaces configured). Thus you can make sshd start after this unit so that it starts up after the interface is configured:
mkdir /etc/systemd/system/sshd.service.d
cat > /etc/systemd/system/sshd.service.d/network-online.conf <<EOF
[Unit]
After=network-online.target
EOF
systemctl daemon-reload
Allow binding to non-local addresses. The other option is to allow listening on addresses which are not present on the host. This solution is system-wide, so it will apply to any service, not just sshd.
echo 'net.ipv4.ip_nonlocal_bind = 1' > /etc/sysctl.d/99-nonlocal_bind.conf
systemctl restart systemd-sysctl.service
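To confirm what sshd is actually configured to bind, a quick check (a sketch; it assumes the stock config paths and the default port 22):
sudo sshd -T | grep -i listenaddress   # effective ListenAddress values
sudo ss -tlnp | grep :22               # what is actually bound once sshd is up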
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/442181", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15010/" ] }
442,259
I think my Linux laptop has been hacked, for three reasons: Whenever I saved files into the Home folder, the files wouldn't appear - not even in the other folders on my computer. An unfamiliar .txt file has showed up in my Home folder. Having noticed it, I didn't open it. I immediately had a suspicion that maybe my laptop has been hacked. When checking my Firewall status, it turned out that it was inactive. Thus, I have taken the following steps: I backed-up all of my recent files using two USB Sticks that aren't as important as other USB Sticks which I own - so in case those USB Sticks get infected with the potential malware, it wouldn't infect my other backed-up important files. I've used ClamTK in order to scan the aforementioned suspicious file -but apparently, for some reason, it hasn't detected any threats. I've used chkrootkit for another scan. This is the output (up until that point, nothing seemed to have been infected): Searching for suspicious files and dirs, it may take a while... The following suspicious files and directories were found: /usr/lib/python2.7/dist-packages/PyQt4/uic/widget-plugins/.noinit /usr/lib/debug/.build-id /lib/modules/4.13.0-39-generic/vdso/.build-id /lib/modules/4.13.0-37-generic/vdso/.build-id /lib/modules/4.10.0-38-generic/vdso/.build-id /lib/modules/4.13.0-36-generic/vdso/.build-id /lib/modules/4.13.0-32-generic/vdso/.build-id /lib/modules/4.13.0-38-generic/vdso/.build-id/usr/lib/debug/.build-id /lib/modules/4.13.0-39-generic/vdso/.build-id /lib/modules/4.13.0-37-generic/vdso/.build-id /lib/modules/4.10.0-38-generic/vdso/.build-id /lib/modules/4.13.0-36-generic/vdso/.build-id /lib/modules/4.13.0-32-generic/vdso/.build-id /lib/modules/4.13.0-38-generic/vdso/.build-id And also: Searching for Linux/Ebury - Operation Windigo ssh... Possible Linux/Ebury - Operation Windigo installetd I was trying - twice - to scan my laptop with F-PROT, with fpscan, using Ultimate Boot CD. But when I tried getting into the PartedMagic section of the disc in order to use the tool, it just wouldn't work. Twice. So I was not able to use it whatsoever. When typing sudo freshclam , I got the following output: ERROR: /var/log/clamav/freshclam.log is locked by another processERROR: Problem with internal logger (UpdateLogFile = /var/log/clamav/freshclam.log). Then, I scanned the computer using rkhunter. These are the warnings I got: /usr/bin/lwp-request [ Warning ] Performing filesystem checks Checking /dev for suspicious file types [ Warning ] Checking for hidden files and directories [ Warning ] And this is the summary: System checks summary=====================File properties checks... Files checked: 143 Suspect files: 1Rootkit checks... Rootkits checked : 365 Possible rootkits: 0Applications checks... All checks skippedThe system checks took: 1 minute and 10 secondsAll results have been written to the log file: /var/log/rkhunter.logOne or more warnings have been found while checking the system.Please check the log file (/var/log/rkhunter.log) So, after all that - I do not have access to the rkhunter log file as root: n-even@neven-Lenovo-ideapad-310-14ISK ~ $ sudo suneven-Lenovo-ideapad-310-14ISK n-even # /var/log/rkhunter.logbash: /var/log/rkhunter.log: Permission denied What should I be doing now? Help much appreciated!Thanks a lot.
Based on the details in your question, your system is clean. You're making backups. OK. clamav comes up clean. That's fine, too. Based on your output of chkrootkit , your system is clean. Those files listed as suspicious are benign. The Ebury/Windigo detection is a false positive: https://github.com/Magentron/chkrootkit/issues/1 Some of the live discs you tried didn't work. That's OK. There might already be an updater running as a daemon. You're trying to execute the log file. View it in a pager instead, like less /var/log/rkhunter.log . From a logical standpoint, chkrootkit and rkhunter aren't of much use if they are used to scan the same system they execute on, since they are not realtime scanners, so any decently packaged rootkit would have sabotaged the scanners before they are run. Also, both have heuristics that result in plenty of false positives. The saved files not appearing are rarely an indication of system compromise. Without knowing the contents of the "suspicious" .txt file you mention, there can be no conclusion drawn from that. DEADJOE is a backup file created by the JOE text editor. The firewall in Linux Mint is disabled by default. Edit: Added info on DEADJOE file.
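Concretely, to read the log rather than execute it, and to jump straight to the warnings (the log is root-readable only):
sudo less /var/log/rkhunter.log
sudo grep -i warning /var/log/rkhunter.log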
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/442259", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/201398/" ] }
442,300
$ myvar="/path to/my directory"$ sudo bash -c "cd $myvar" In such case, how can I quote $myvar to avoid word splitting because of the white spaces in the value of myvar ?
There's no word splitting (as in the feature that splits variables upon unquoted expansions) in that code as $myvar is not unquoted. There is however a command injection vulnerability as $myvar is expanded before being passed to bash . So its content is interpreted as bash code! Spaces in there will cause several arguments to be passed to cd , not because of word splitting , but because they will be parsed as several tokens in the shell syntax. With a value of bye;reboot , that will reboot!¹ Here, you'd want:
sudo bash -c 'cd -P -- "$1"' bash "$myvar"
(where you pass the contents of $myvar as the first argument of that inline script; note how both $myvar and $1 were quoted for their respective shell to prevent IFS-word-splitting (and globbing)). Or:
sudo MYVAR="$myvar" bash -c 'cd -P -- "$MYVAR"'
(where you pass the contents of $myvar in an environment variable). Of course you won't achieve anything useful by running only cd in that inline script (other than checking whether root can cd into there). Presumably, you want that script to cd there and then do something else there like:
sudo bash -c 'cd -P -- "$1" && do-something' bash "$myvar"
If the intention was to use sudo to be able to cd into a directory which you otherwise don't have access to, then that cannot really work.
sudo sh -c 'cd -P -- "$1" && exec bash' sh "$myvar"
will start an interactive bash with its current directory in $myvar . But that shell will be running as root . You could do:
sudo sh -c 'cd -P -- "$1" && exec sudo -u "$SUDO_USER" bash' sh "$myvar"
To get an unprivileged interactive bash with the current directory being $myvar , but if you didn't have the permissions to cd into that directory in the first place, you won't be able to do anything in that directory even if it's your current working directory.
$ myvar=/var/spool/cron/crontabs
$ sudo sh -c 'cd -P -- "$1" && exec sudo -u "$SUDO_USER" bash' sh "$myvar"
bash-4.4$ ls
ls: cannot open directory '.': Permission denied
An exception would be if you do have search permission to the directory itself but not to one of the directory components of its path:
$ myvar=1/2
$ mkdir -p "$myvar"
$ chmod 0 1
$ cd 1/2
cd: permission denied: 1/2
$ sudo sh -c 'cd -P -- "$1" && exec sudo -u "$SUDO_USER" bash' sh "$myvar"
bash-4.4$ pwd
/home/stephane/1/2
bash-4.4$ mkdir 3
bash-4.4$ ls
3
bash-4.4$ cd "$PWD"
bash: cd: /home/stephane/1/2: Permission denied
¹ strictly speaking, for values of $myvar like $(seq 10) (literally), there would be word splitting of course upon expansion of that command substitution by the bash shell started as root
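To see the injection concretely, here is a minimal sketch (the value of myvar is invented for illustration; echo stands in for a harmful command):
$ myvar='/tmp; echo INJECTED'
$ sudo bash -c "cd $myvar"                     # root runs: cd /tmp; echo INJECTED
INJECTED
$ sudo bash -c 'cd -P -- "$1"' bash "$myvar"   # safe: cd just fails, nothing is executed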
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/442300", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
442,313
I am trying to write a script that performs diff on the output of valgrind using two different executables, but the process ID at the start of each lineis littering my diff output. I am trying to remove it using bash commands but can't seem to succeed. Here's my code so far: VG_MY=$((valgrind --leak-check=full ./executable < inputfile) 2>&1)VG_MY=${VG_MY//[0-9]/} this remove all digits from VG_MY, same as this: VG_MY="${VG_MY//[[:digit:]]/}" I've tried to add the == parts in many ways but none worked. Closest I've got is: VG_MY="${VG_MY//[==[:digit:]==]/}" Which removes all digits AND '=' from the valgrind output.I need to figure out what I am missing in order to remove only the numbers enclosed by '=' like so: ==123456== from the valgrind output. EDIT:a sample of valgrind output: ==94953== Memcheck, a memory error detector==94953== Copyright (C) 2002-2012, and GNU GPL'd, by Julian Seward et al.==94953== Using Valgrind-3.8.1 and LibVEX; rerun with -h for copyright info==94953== Command: ./executable==94953== ==94953== ==94953== HEAP SUMMARY:==94953== in use at exit: 0 bytes in 0 blocks==94953== total heap usage: 13 allocs, 13 frees, 232 bytes allocated==94953== ==94953== All heap blocks were freed -- no leaks are possible==94953== ==94953== For counts of detected and suppressed errors, rerun with: -v==94953== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 8 from 6)
With ksh or bash -O extglob (or after shopt -s extglob in a bash script) or zsh -o kshglob (or after set -o kshglob in a zsh script): VG_MY=${VG_MY//+(=)+([0-9])+(=)/} The +(...) is a ksh extended glob similar to the + extended regexp operator. +(x) matches on one or more x s. So the above removes all sequences of one or more = s followed by one or more decimal digits followed by one or more = s like sed -E 's/=+[0-9]+=+//g' ¹ would. Note that it would fail to remove 456== inside ==123====456== since the first replacement would remove ==123==== leaving something that doesn't match the pattern. To be able to remove those, you could change it to: VG_MY=${VG_MY//+(=)[0-9]*([0-9=])=/} (like sed -E 's/=+[0-9][0-9=]*=//g' ) With zsh 's own extended globs ( zsh -o extendedglob ): # is the equivalent of ERE * and ## of ERE + (and (#c1,3) of {1,3} ). So, there you can do:
set -o extendedglob
VG_MY=${VG_MY//=##[0-9]##=##/}
¹ Note that while several sed implementations support -E for extended regexps, it's not standard yet, and you can occasionally find some implementations that don't support it. With those, you can skip -E and use \{1,\} as the BRE replacement for + (or use ==* instead of =+ ).
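A quick way to test the pattern on sample data before running it on real valgrind output (bash; the sample string is invented):
$ shopt -s extglob
$ VG_MY='==123== All heap blocks were freed'
$ echo "${VG_MY//+(=)+([0-9])+(=)/}"
 All heap blocks were freed
Only the ==123== marker is removed; the leading space that followed it is preserved.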
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/442313", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/289698/" ] }
442,319
I'm trying to pair a SRS-XB40 portable speaker with my Debian Stretch desktop. The speaker works fine on a Mint laptop using the setup GUI. I installed blueman. Since it didn't work I also upgraded firmware-linux to the backport version (20170823). Hardware The machine is a Dell XPS 630i. hciconfig -ahci0: Type: Primary Bus: USB BD Address: 00:1C:26:DD:18:A9 ACL MTU: 1017:7 SCO MTU: 64:1 UP RUNNING PSCAN RX bytes:2607 acl:0 sco:0 events:153 errors:0 TX bytes:1739 acl:0 sco:0 commands:125 errors:0 Features: 0xff 0xfe 0x8d 0xfe 0x9b 0xf9 0x00 0x80 Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3 Link policy: RSWITCH HOLD SNIFF Link mode: SLAVE ACCEPT Name: 'ChromeLinux_6529' Class: 0x1c0104 Service Classes: Rendering, Capturing, Object Transfer Device Class: Computer, Desktop workstation HCI Version: 2.0 (0x3) Revision: 0x214c LMP Version: 2.0 (0x3) Subversion: 0x41f4 Manufacturer: Broadcom Corporation (15) Software Linux 4.15.0-0.bpo.2-amd64 #1 SMP Debian 4.15.11-1~bpo9+1 (2018-04-07) x86_64 GNU/Linuxfirmware-amd-graphics 20170823-1~bpo9+1firmware-linux 20170823-1~bpo9+1firmware-linux-nonfree 20170823-1~bpo9+1firmware-misc-nonfree 20170823-1~bpo9+1bluez 5.43-2+deb9u1bluez-obexd 5.43-2+deb9u1blueman 2.0.4-1 bluetooth service startup The bluetooth service starts correctly. # systemctl status bluetooth● bluetooth.service - Bluetooth service Loaded: loaded (/lib/systemd/system/bluetooth.service; enabled; vendor preset: enabled) Active: active (running) since Mon 2018-05-07 13:47:15 CEST; 33min ago Docs: man:bluetoothd(8) Main PID: 679 (bluetoothd) Status: "Running" Tasks: 1 (limit: 4915) CGroup: /system.slice/bluetooth.service └─679 /usr/lib/bluetooth/bluetoothd --noplugin=sapmai 07 13:47:15 bouzin bluetoothd[679]: Excluding (cli) sapmai 07 13:47:15 bouzin systemd[1]: Started Bluetooth service.mai 07 13:47:15 bouzin bluetoothd[679]: Bluetooth management interface 1.14 initializedmai 07 13:47:15 bouzin bluetoothd[679]: Failed to obtain handles for "Service Changed" characteristicmai 07 13:50:14 bouzin bluetoothd[679]: Endpoint registered: sender=:1.41 path=/MediaEndpoint/A2DPSourcemai 07 13:50:14 bouzin bluetoothd[679]: Endpoint registered: sender=:1.41 path=/MediaEndpoint/A2DPSink From what I gathered, the Failed to obtain handles for "Service Changed" characteristic warning should be harmless. Device setup I can "setup" the speaker in blueman applet but I can't pair with it. To pair, I push the "pairing" button on the speaker to put it in pairing mode, then ask the applet to pair. I get an error. Using bluetoothctl, it says: Failed to pair: org.bluez.Error.AuthenticationFailed Old blueman bug I got those errors in the logs: mai 07 14:23:30 bouzin bluetoothd[679]: vendor 0x0 product: 0x0mai 07 14:23:30 bouzin bluetoothd[679]: Agent /org/blueman/agent/global replied with an error: org.freedesktop.DBus.Python.KeyError, Traceback (most recent call last): File "/usr/lib/python3/dist-packages/dbus/service.py", line 707, in _message_cb retval = candidate_method(self, *args, **keywords) File "/usr/lib/python3/dist-packages/blueman/main/applet/BluezAgent.py", line 167, in RequestPinCode self.ask_passkey(device, dialog_msg, notify_msg, False, self.notifications, ok, err) File "/usr/lib/python3/dist-packages/blueman/main/applet/BluezAgent.py", line 122, in ask_passkey alias = self.get_device_alias(device_path) File "/usr/lib/python3/dist-packages/blueman/main/applet/BluezAgent.py", line 95, in get_device_alias name = props["Name"] KeyError: 'Name' This is a bug in blueman that was fixes in this commit . 
I can't upgrade to the testing/unstable version as it relies on Python 3.6, so I apply the fix to /usr/lib/python3/dist-packages/blueman/main/applet/BluezAgent.py . No agent available Now, I get:
mai 07 14:30:30 bouzin bluetoothd[4042]: vendor 0x0 product: 0x0
mai 07 14:30:30 bouzin bluetoothd[4042]: No agent available for request type 0
mai 07 14:30:30 bouzin bluetoothd[4042]: device_request_pin: Operation not permitted
From this answer , I try to launch bluetoothctl -a PIN code This gets me a little further.
pair B8:D5:0B:05:A1:62
Attempting to pair with B8:D5:0B:05:A1:62
Request PIN code
[agent] Enter PIN code: 1324
Failed to pair: org.bluez.Error.AuthenticationFailed
I'm asked for a PIN code. From this answer and comments , I must enter 0000. I get a successful pairing. In the GUI, I set "trust" on the device and now it apparently pairs automatically when the speaker is switched on. Audio sink From blueman, I click "audio sink" and I can hear a check sound coming from the speaker. blueman displays stats about the connection quality, which is excellent. For a few tens of seconds, the "audio profile" menu is not greyed out. I click "High fidelity playback (A2DP sink)". I get an error message Failed to change profile to a2dp_sink I saw this Debian bug but I don't think it is the same issue. I'm not using gdm but lightdm, and all pulseaudio processes belong to my user. Of course, I don't see the speaker in the list of audio output devices. I don't know where to go from here. I know most paragraphs above are unrelated to this last issue, but I'd like to keep them here hoping they provide useful information to people with the same issues.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/442319", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/116842/" ] }
442,323
Where can I get the original Unix (from the year 1969)? I would like to look at the source code of the original Unix.
The closest you can get to the feel of a contemporary system, freely on the Internet and pretty much tested and ready to run, is a version 7 disk image running under the PDP-11 SimH emulator — and there is even a System III disk image, with the actual C sources, also for the PDP-11 emulation under SimH. See my post with step-by-step instructions on how to download and get Unix version 7 running after installing SimH. The original site has some inconsistencies: the original instructions are for an older SimH version, and are lacking some procedures that need to be done after booting: Link to my answer in Retro Computing explaining how to boot the PDP-11 Unix version 7 disk image. SimH runs on several platforms, including MacOS, DOS (I think) and Linux. For installing SimH in Debian, the corresponding package is simh. See https://packages.debian.org/jessie/otherosfs/simh Package: simh (3.8.1-5) Emulators for 33 different computers This is the SIMH set of emulators for 33 different computers: DEC PDP-1, PDP-4, PDP-7, PDP-8, PDP-9, DEC PDP-10, PDP-11 ... To install it in Debian: sudo apt-get install simh After installation, you will have a binary called pdp11 for emulating the PDP-11. After this you can follow my answer, in the first link of this answer, on our sister site Retro Computing, as it is oriented to the same SimH version. As per the @user996142 comment, you can nowadays find the version 7 Unix source code tree at https://github.com/dspinellis/unix-history-repo As an alternative, there is a port of V7 for x86/Intel. A VM for VMware and VirtualBox can be downloaded here: http://www.nordier.com/v7x86/releases/v7x86-0.8a-vm.zip ; you boot the VM, log in as "guest", run su and enter the password "password". I think its main use is for teaching purposes. More interestingly yet, there is a System III disk image that was made from recovered tape(s), which can also be run under the PDP-11 emulator in SimH. System III has many more kernel source code lines written in C, and more utilities; the system resembles a little more the Unix we know today. The tape/disk image also comes with the source code tree, in /usr/local/src (have to check the directory), that can be read, changed and compiled inside the emulator, thus sparing you much of the effort of (re)building and modifying legacy code if you want to test out some modifications. Obviously, the utilities are much smaller than nowadays, and such a system is much easier to understand, rebuild and hack for pedagogic purposes. The HOW-TO to use and build the System III image emulation for SimH is here: http://mailman.trailing-edge.com/pipermail/simh/2009-May/002382.html ; however the download links do not work anymore; nonetheless I managed to find a working download link for the System III version here: https://unixarchive.tliquest.net/PDP-11/Distributions/usdl/SysIII/ PS. I built my working System III SimH PDP-11 emulation disk image from those files.
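Once simh is installed, launching an emulated PDP-11 from a configuration file looks roughly like this — a sketch only: the CPU model, the rl0 device and the disk image name v7.dsk are assumptions that depend on the image you downloaded, so follow the linked instructions for the exact boot procedure:
$ cat > v7.ini <<'EOF'
set cpu 11/45
attach rl0 v7.dsk
boot rl0
EOF
$ pdp11 v7.ini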
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/442323", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/283800/" ] }
442,329
I have a poem with an unknown number of rows and I want to display only the penultimate one. What command should I use?
There are many ways to do that, but this is the fastest one I've found -- and is the cleanest in my opinion. Assuming that the poem is written in a file named poem , you can use: tail -n 2 poem | head -n 1 tail -n 2 poem will write the last 2 lines of the file poem . head -n 1 will write the first line of the output provided by the previous tail command.
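If your system has tac (GNU coreutils), an equivalent one-liner is to reverse the file and take the second line:
tac poem | sed -n '2p'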
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/442329", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/289710/" ] }
442,349
How can I count the number of occurrences of a substring in a string using Bash? EXAMPLE: I'd like to know how many times this substring : Bluetooth Soft blocked: no Hard blocked: no ...occurs in this string... 0: asus-wlan: Wireless LAN         Soft blocked: no         Hard blocked: no1: asus-bluetooth: Bluetooth         Soft blocked: no         Hard blocked: no2: phy0: Wireless LAN         Soft blocked: no         Hard blocked: no113: hci0: Bluetooth         Soft blocked: no         Hard blocked: no NOTE I: I have tried several approaches with sed, grep, awk... Nothing seems to work when we have strings with spaces and multiple lines. NOTE II: I'm a Linux user and I'm trying a solution that does not involve installing applications/tools outside those that are usually found in Linux distributions. IMPORTANT: I would like something like the hypothetical example below. In this case we use two Shell variables (Bash) . EXAMPLE: STRING="0: asus-wlan: Wireless LAN Soft blocked: no Hard blocked: no1: asus-bluetooth: Bluetooth Soft blocked: no Hard blocked: no2: phy0: Wireless LAN Soft blocked: no Hard blocked: no113: hci0: Bluetooth Soft blocked: no Hard blocked: no"SUB_STRING="Bluetooth Soft blocked: no Hard blocked: no"awk -v RS='\0' 'NR==FNR{str=$0; next} {print gsub(str,"")}' "$STRING" "$SUB_STRING" NOTE: We are using awk just to illustrate!
With perl :
printf '%s' "$SUB_STRING" | perl -l -0777 -ne '
    BEGIN{$sub = <STDIN>}
    @matches = m/\Q$sub\E/g;
    print scalar @matches' <(printf '%s' "$STRING")
With bash alone, you could always do something like:
s=${STRING//"$SUB_STRING"}
echo "$(((${#STRING} - ${#s}) / ${#SUB_STRING}))"
That is $s contains $STRING with all occurrences of $SUB_STRING within it removed. We find out the number of $SUB_STRING s that were removed by computing the difference in number of characters in between $STRING and $s and dividing by the length of $SUB_STRING itself. POSIXly, you could do something like:
s=$STRING count=0
until
  t=${s#*"$SUB_STRING"}
  [ "$t" = "$s" ]
do
  count=$((count + 1))
  s=$t
done
echo "$count"
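A quick sanity check of the parameter-expansion approach with simple values (sketch):
$ STRING='abcabcabc' SUB_STRING='abc'
$ s=${STRING//"$SUB_STRING"}
$ echo "$(((${#STRING} - ${#s}) / ${#SUB_STRING}))"
3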
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/442349", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61742/" ] }
442,402
Looking for alternative to: local foo=""if [[ ! -z "$foo" ]]; then echo "foo is actually not empty."fi is there some way to check if a variable is defined (not empty) without negation?
The opposite of -z word is -n word , or simply word without an operator:
foo=x
if [[ "$foo" ]] ; then
    echo "foo is not empty"
fi
Note that you can't test between an empty value and an unset value that way. You'll need ${foo+x} for that: [[ "${foo+x}" = x ]] will be true for any set value, even an empty string. -z , -n would work with the standard [ .. ] test, too, you don't need [[ .. ]] for those. The difference here is that within [ .. ] the variables need to be quoted.
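To illustrate the difference between unset and empty with ${foo+x}:
$ unset foo
$ [ "${foo+x}" = x ] && echo set || echo unset
unset
$ foo=""
$ [ "${foo+x}" = x ] && echo set || echo unset
set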
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/442402", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/113238/" ] }
442,426
I have an http server on my Solaris 11 system and I want to block all http requests from other systems. Eventually I will allow access, but for now I cannot figure out how to block access to port 80, as nothing I do seems to work. I have the following in my /etc/ipf/ipf.conf # ipf.conf## IP Filter rules to be loaded during startup## See ipf(4) manpage for more information on# IP Filter rules syntax.# block in allblock in proto tcp from any to any port = 80block out quick proto tcp to any port = http keep state The ipfilter service is running root@test2:/etc/ipf# svcs ipfilterSTATE STIME FMRIonline 19:25:23 svc:/network/ipfilter:default However, whenever I visit 192.168.1.211 in my browser, I see "It works!" The only thing that seems to work is if I put block in all in the ipf.conf file, but that blocks ALL incoming traffic (including my SSH connection). I am not sure what I am doing. Maybe my syntax is wrong.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/442426", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139546/" ] }
442,473
Currently I am learning about the Linux booting process. Here I noticed that the initrd will create a temporary root file system that includes drivers (LVM, NFS, etc.) which are required by the kernel. After that the kernel will make use of those drivers and mount the real root file system. My question is: why shouldn't the kernel itself include the necessary drivers, and why does it depend on initrd ?
See also answers on these questions: Why don't we include File System drivers in the kernel itself instead of using Initrd/Initramfs Why do we need initramfs and initrd 1. Why not kernel itself should include the necessary drivers inside it Firstly, know that kernel memory is not demand-paged. That would be a circular dependency. If you page out your disk driver to the disk when you're low on memory, you can't load it back later. (And inside the Linux kernel, we don't try to define some higher layer which isn't involved in the storage path and can safely be paged out. Allegedly this is possible: Windows did it. I don't know what this higher layer is supposed to be though. Maybe it is defined dynamically. Or maybe Windows has the luxury of not supporting strange ideas like swap-over-NFS). Instead, we support loading modules. If we don't need NFS on this particular computer, we don't have to load it. In a modern distribution this saves on the order of a hundred megabytes of RAM overall. (Look at the space taken by /lib/modules/$VERSION/ . Note that in modern distributions, the modules are compressed e.g. .xz files). 2. and why it depends on initrd While kernel modules are the more obvious reason for the initrd, there is a second aspect. It lets userspace build arbitrarily complex storage stacks to access the root filesystem. E.g. getting an IP address using DHCP, to support NFS, or prompting for a disk encryption passphrase. Again, the kernel tries not to bloat too much e.g. with user interface code. Memory usage is only one reason. The kernel/userspace division is overloaded with a number of purposes. E.g. the kernel can be one project and work specifically on kernel things. Userspace can be anything; it could be a "normal" Linux distribution, or it could be an entirely independent project like the Android OS. This is different from other OS's, e.g. the BSDs maintain kernel + core userspace together. This is illustrated by BSDs being able to handle the 2038 problem with a single flag day conversion of both kernel and userspace .
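You can get a feel for the sizes involved on a running system (Debian-style paths assumed; lsinitramfs comes with initramfs-tools):
du -sh /lib/modules/$(uname -r)                   # all modules built for the running kernel
lsinitramfs /boot/initrd.img-$(uname -r) | less   # what your initrd actually ships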
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/442473", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213226/" ] }
442,510
I am installing a huge program, which has its resources as an rpm file. It gets stuck at these lines:
#!/bin/sh
SCITEGICPERLBIN=`dirname $0`
SCITEGICPERLHOME=`dirname $SCITEGICPERLBIN`
if [ $SCITEGICPERLHOME == "." ]
Apparently, sh works like bash on Red Hat Linux with this syntax, but it gives an "unexpected operator" error on Ubuntu. I cannot change the script to bash, as the script comes from the rpm package. I can extract and repack the rpm package, but there might be many such scripts. Is there a way to change the default shell so that #!/bin/sh is treated as bash, or anything else that can handle the [ operator?
To switch sh to bash (instead of dash , the default), reconfigure dash (yes, it’s somewhat counter-intuitive): sudo dpkg-reconfigure dash This will ask whether you want dash to be the default system shell; answer “No” ( Tab then Enter ) and bash will become the default instead ( i.e. /bin/sh will point to /bin/bash ).
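You can verify the result afterwards; /bin/sh is just a symlink:
ls -l /bin/sh        # should now show: /bin/sh -> bash
readlink -f /bin/sh  # resolves to the bash binary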
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/442510", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10780/" ] }
442,524
I have a directory called backup with file extensions *.sql . The thing I want to do is to prepend a line to all sql files in the directory backup . I did echo 'use my_db;' >> backup/*.sql , which didn't work. I tried the below but don't know what to do next: ls backup/*.sql | xargs echo "use my_db;" Any solution to prepend that line?
Using GNU sed :
for sql in backup/*.sql; do
    sed -i '1i\use my_db;' "$sql"
done
With standard sed :
for sql in backup/*.sql; do
    sed '1i\use my_db;' "$sql" >"$sql.bak" && mv "$sql.bak" "$sql"
done
This would do a in-place editing of each .sql file in backup . The editing command inserts a line before the first line in each file. This assumes that the pattern backup/*.sql only matches the files that you want to edit. Using echo and cat :
for sql in backup/*.sql; do
    { echo 'use my_db;'; cat "$sql"; } >"$sql.tmp" && mv "$sql.tmp" "$sql"
done
In this loop, we first output the line that we'd like to prepend to the file, then the contents of the file. This goes into a temporary file which is then renamed. The command echo 'use my_db;' >> backup/*.sql would expand to something like echo 'use my_db;' >> backup/file1.sql backup/file2.sql backup/file3.sql which is the same as echo 'use my_db;' backup/file2.sql backup/file3.sql >> backup/file1.sql which would append the given strings to backup/file1.sql . Your second command would not modify any files.
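After running either loop you can spot-check that every file got the new first line; with multiple files, head prints a header per file:
head -n 1 backup/*.sql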
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/442524", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62249/" ] }
442,547
I've created a new system account for a cronjob on my CentOS 7 machine and set up a cronjob for it. However my cronjob fails to run with the error: (CRON) ERROR chdir failed (/home/sysagent): No such file or directory Currently the crontab for the sysagent account looks like: 15 16 * * * HOME="/tmp" && cd path/to/project_folder && ./src/mainroutine.sh | ts "[\%Y-\%m-\%d \%H:\%M:\%S]" >> /var/log/automation/sysagent.log I've added the HOME variable and the cd to the project folder after some investigation (I used to have the full path to the shell script there), but it doesn't help. How do I make the cronjob forget about the home directory? And why is it looking for it, by the way?
On the assumption the docs are bad or wrong, a source code dive:
% rpm -qa | grep cron
cronie-1.4.11-17.el7.x86_64
cronie-anacron-1.4.11-17.el7.x86_64
crontabs-1.11-6.20121102git.el7.noarch
... some altagoobingleduckgoing here as the URL in the RPM is broken ...
% git clone https://github.com/cronie-crond/cronie && cd cronie
% fgrep -rl 'chdir failed' .
./src/security.c
... so that error only appears in one place, within the cron_change_user_permanently call that is called from various other places in the code ...
% grep cron_change_user_permanently **/*.c
src/do_command.c: if (cron_change_user_permanently(e->pwd, env_get("HOME", jobenv)) < 0)
src/do_command.c: if (cron_change_user_permanently(e->pwd, env_get("HOME", jobenv)) < 0)
src/popen.c: if (cron_change_user_permanently(pw, env_get("HOME", jobenv)) != 0)
src/security.c:int cron_change_user_permanently(struct passwd *pw, char *homedir) {
... so in all cases the HOME environment variable appears to be used to determine where to chdir to for the user, and there is always a chdir to that directory. So you'll need to ensure that the HOME directory exists, or that HOME is properly set before cron_change_user_permanently is called (which likely happens before the shell code in your cron job is even looked at). (Or monkey patch cronie to do something else, but that's probably a really really bad idea.)
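Practically, given the source dive above, either recreate the home directory (sudo mkdir -p /home/sysagent && sudo chown sysagent: /home/sysagent) or set HOME as a crontab variable assignment on its own line, which cron reads before running the job (a sketch; paths taken from the question):
HOME=/tmp
15 16 * * * cd /path/to/project_folder && ./src/mainroutine.sh >> /var/log/automation/sysagent.log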
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/442547", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/186662/" ] }
442,552
When applying sudo to a command which doesn't actually need sudo , sometimes it doesn't ask me for my password. For example under my $HOME , sudo ls . But I remember that it does for some other command, though I forget which one. So I was wondering how sudo decides whether to ask for a password, when given a command which doesn't actually need sudo ? Is there some rule in /etc/sudoers specifying that? My real problem is that when I use du , it sometimes shows "permission denied" for some directories, and sometimes not, probably because I don't have permission on some directories? I apply sudo to du regardless, and thought I would be asked for a password regardless, but actually not on my own directories.
In a typical configuration, the command is irrelevant. You need to enter your password the first time you use sudo, and you don't need your password in that particular shell for the next 15 minutes. From the computer's perspective, there is no such thing as a “command that needs sudo”. Any user can attempt to run any command. The outcome may be nothing but an error message such as “Permission denied” or “No such file or directory”, but it's always possible to run the command. For example, if you run du on a directory tree that has contents that you don't have permission to access, you'll get permission errors. That's what “permission denied” means. If you run sudo du , sudo runs du as root, so you don't get permission errors (that's the point of the root account: root¹ always has permission). When you run sudo du , du runs as root, and sudo is not involved at all after du has started. Whether du encounters permission errors is completely irrelevant to how sudo operates. There are commands that need sudo to do something useful . Usefulness is a human concept. You need to use sudo (or some other methods to run the command as root) if the command does something useful when run as root but not when run under your account. Whether sudo asks for your password depends on two things. Based on the configuration, sudo decides whether you need to be authenticated. By default, sudo requires a password. This can be turned off in several ways, including setting the authenticate option to false and having an applicable rule with the NOPASSWD tag. If sudo requires your password, it may be content to use a cached value. That's ok because the reason sudo needs your password is not to authenticate who's calling it (sudo knows what user invoked it), but to confirm that it's still you at the commands and not somebody who took control over your keyboard. By default, sudo is willing to believe that you're still at the commands if you entered your password less than 15 minutes ago (this can be changed with the timeout option). You need to have entered the password in the same terminal (so that if you remain logged in on one terminal then leave that terminal unattended and then use another terminal, someone can't take advantage of this to use sudo on the other terminal — but this is a very weak advantage and it can be turned off by setting the tty_tickets option to false). ¹ nearly, but that's beyond the scope of this thread.
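The relevant knobs live in the sudoers file (edit it with visudo, never directly). For example, to shorten the grace period or share the timestamp across terminals:
Defaults timestamp_timeout=5   # minutes; 0 means always ask for the password
Defaults !tty_tickets          # one timestamp for all of the user's terminals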
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/442552", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
442,554
I noticed somewhere that most Linux distros are based on systemd instead of SysV init. I just want to know, without installing and booting: is there any possible way to find out whether a distro is based on systemd or SysV init?
On distrowatch.com you can search for distributions using the init system as a criterion. You can even select "not systemd".
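If you do have access to a running instance of a distribution (live ISO, VM, etc.), you can also check what PID 1 is:
ps -p 1 -o comm=   # prints 'systemd' or 'init'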
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/442554", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213226/" ] }
442,558
$ sudo du -h -d 0 home/309G home/$ df -hFilesystem Size Used Avail Use% Mounted on.../dev/sda4 550G 309G 214G 60% /home... System Monitor on Ubuntu shows the used space on /home is 331.1 GB Do the different ways measure the same thing? How differently do they make the measurements? Thanks.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/442558", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
442,598
I'm connected to local area network with access to the Internet through gateway. There is DNS server in local network which is capable of resolving hostnames of computers from local network. I would like to configure systemd-resolved and systemd-networkd so that lookup requests for local hostnames would be directed (routed) exclusively to local DNS server and lookup requests for all other hostnames would be directed exclusively to another, remote DNS server. Let's assume I don't know where the configuration files are or whether I should add more files and require their path(s) to be specified in the answer.
In the configuration file for the local network interface (a file matching the name pattern /etc/systemd/network/*.network ) we have to either specify that we want to obtain the local DNS server address from the DHCP server, using the DHCP= option:
[Network]
DHCP=yes
or specify its address explicitly using the DNS= option:
[Network]
DNS=10.0.0.1
In addition we need to specify (in the same section) local domains using the Domains= option:
Domains=domainA.example domainB.example ~example
We specify local domains domainA.example domainB.example to get the following behavior (from the systemd-resolved.service, systemd-resolved man page): Lookups for a hostname ending in one of the per-interface domains are exclusively routed to the matching interfaces. This way hostX.domainA.example will be resolved exclusively by our local DNS server. We specify with ~example that all domains ending in example are to be treated as route-only domains, to get the following behavior (from the description of this commit): DNS servers which have route-only domains should only be used for the specified domains. This way hostY.on.the.internet will be resolved exclusively by our global, remote DNS server. Note: Ideally, when using the DHCP protocol, local domain names should be obtained from the DHCP server instead of being specified explicitly in the configuration file of the network interface above. See the UseDomains= option. However there are still outstanding issues with this feature – see the systemd-networkd DHCP search domains option issue. We need to specify the remote DNS server as our global, system-wide DNS server. We can do this in the /etc/systemd/resolved.conf file:
[Resolve]
DNS=8.8.8.8 8.8.4.4 2001:4860:4860::8888 2001:4860:4860::8844
Don't forget to reload the configuration and to restart the services:
$ sudo systemctl daemon-reload
$ sudo systemctl restart systemd-networkd
$ sudo systemctl restart systemd-resolved
Caution! The above guarantees apply only when names are being resolved by systemd-resolved – see the man page for nss-resolve, libnss_resolve.so.2 and the man page for systemd-resolved.service, systemd-resolved. See also: Description of routing lookup requests in systemd related man pages is unclear How to troubleshoot DNS with systemd-resolved? References: Man page for systemd-resolved.service, systemd-resolved Man page for resolved.conf, resolved.conf.d Man page for systemd.network
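After restarting, you can verify which DNS server and search/route-only domains are attached to each link (the command name depends on your systemd version):
systemd-resolve --status   # older systemd
resolvectl status          # systemd 239 and later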
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/442598", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5355/" ] }
442,634
I have a large (~180MB) xml file with some wrong characters in it, for example <Data ss:Type="String">7402953^@</Data> The ^@ part should be removed. The job is supposed to be done with sed -i 's/\^@//g' /tmp/large.xml but for some unknown reason it doesn't work as expected if the string is located in my large xml file. If the file is only a few KB in size, sed works perfectly. It looks like a bug, but I think it can't be, because the task is quite obvious. Am I doing something wrong?
Judging by your question (because there are no examples), I would say that ^@ in the big file are not actually the two characters ( ^ and @ ), but one of those unprintable characters. You can input that unprintable character in the terminal with Ctrl + v + Ctrl + 2 . Use that in sed instead of the characters ^ and @ and it should be fine. Also remove the escape sequence because it is not needed for the unprintable character.
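If the character turns out to be a NUL byte (which ^@ conventionally denotes), an alternative that avoids typing control characters is tr:
tr -d '\0' < /tmp/large.xml > /tmp/large.clean.xml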
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/442634", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/289938/" ] }
442,692
I understand the subshell syntax to be (<commands...>) , is $() just a subshell that you can retrieve variable values from? Note: This applies to bash 4.4 based on different wording in their documentation.
$(…) is a subshell by definition: it's a copy of the shell runtime state¹, and changes to the state made in the subshell have no impact on the parent. A subshell is typically implemented by forking a new process (but some shells may optimize this in some cases). It isn't a subshell that you can retrieve variable values from. If changes to variables had an impact on the parent, it wouldn't be a subshell. It's a subshell whose output the parent can retrieve. The subshell created by $(…) has its standard output set to a pipe, and the parent reads from that pipe and collects the output. There are several other constructs that create a subshell. I think this is the full list for bash: Subshell for grouping : ( … ) does nothing but create a subshell and wait for it to terminate. Contrast with { … } which groups commands purely for syntactic purposes and does not create a subshell. Background : … & creates a subshell and does not wait for it to terminate. Pipeline : … | … creates two subshells, one for the left-hand side and one for the right-hand side, and waits for both to terminate. The shell creates a pipe and connects the left-hand side's standard output to the write end of the pipe and the right-hand side's standard input to the read end. In some shells (ksh88, ksh93, zsh, bash with the lastpipe option set and effective), the right-hand side runs in the original shell, so the pipeline construct only creates one subshell. Command substitution : $(…) (also spelled `…` ) creates a subshell with its standard output set to a pipe, collects the output in the parent and expands to that output, minus its trailing newlines. (And the output may be further subject to splitting and globbing, but that's another story.) Process substitution : <(…) creates a subshell with its standard output set to a pipe and expands to the name of the pipe. The parent (or some other process) may open the pipe to communicate with the subshell. >(…) does the same but with the pipe on standard input. Coprocess : coproc … creates a subshell and does not wait for it to terminate. The subshell's standard input and output are each set to a pipe with the parent being connected to the other end of each pipe. ¹ As opposed to running a separate shell .
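A one-line demonstration that $(…) runs in a copy of the shell state:
$ x=1; y=$(x=2; echo "inner x=$x"); echo "outer x=$x, captured: $y"
outer x=1, captured: inner x=2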
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/442692", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3570/" ] }
442,698
When I log in with LightDM on my laptop running Debian Unstable, it recently started to hang for around 2 minutes until journalctl shows the message kernel: random: crng init done . When I press random keys on my keyboard while it hangs, it logs in faster (around 10 seconds). Before I didn't have this issue, is there any way I can fix it? Edit: using linux-image-4.15.0-3-amd64 instead of linux-image-4.16.0-1-amd64 works, but I don't want to use an older kernel.
Looks like some component of your system blocks while trying to obtain random data from the kernel (i. e. reading from /dev/urandom or calling getrandom() ) due to insufficient entropy (randomness) available. Indeed, as pointed out by Bigon in his answer , it appears to be a kernel bug introduced in 4.16: This bug is introduced by the "crng_init > 0" to "crng_init > 1" change in this commit: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=43838a23a05fbd13e47d750d3dfd77001536dd33 This change inadvertently impacts urandom_read, causing the crng_init==1 state to be treated as uninitialized and causing urandom to block, despite this state existing specifically to support non-cryptographic needs at boot time: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/char/random.c#n1863 Reverting 43838a23a05f ("random: fix crng_ready() test") fixes the bug (tested with 4.16.5-1), but this may cause security concerns (CVE-2018-1108 is mentioned in 43838a23a05f). I am testing a more localised fix that should be more palatable to upstream. ( https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=897572#82 ) Still, you may try using haveged or rng-tools to gather entropy faster.
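You can watch the entropy pool while the system is starved, and haveged is an easy way to keep it fed (Debian package name assumed):
cat /proc/sys/kernel/random/entropy_avail   # low values mean the pool is starved
sudo apt install haveged
sudo systemctl enable --now haveged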
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/442698", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118331/" ] }
442,714
File ( myfile.txt ) contains data as follows:
abc#ab1=23
nrt#
#clb1aX
amd#322
Desired Output:
abc
nrt
clb1aX
amd
I could do it like this:
for i in `cat myfile.txt`
do
  s1=`echo $i | cut -d'#' -f1`
  s2=`echo $i | cut -d'#' -f2`
  if [ "$s1" == "" ]; then
    echo "$s2"
  else
    echo "$s1"
  fi
done
But is there any way to do this without using for and if , like using awk or sed or cut or something else in a single line?
Short awk solution: awk -F'#' 'NF{ print ($1 != "" ? $1 : $2) }' file The output: abcnrtclb1aXamd
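The same logic in sed, in case awk is unavailable (a sketch; it assumes lines look like the sample and skips empty lines, as the NF guard does):
sed -E '/./!d; s/^#([^#]*).*/\1/; t; s/#.*//' file
The first substitution handles lines starting with # (keeping the second field) and branches past the rest; otherwise everything from the first # on is stripped.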
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/442714", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81640/" ] }
442,769
Let's say I have a file with a unique name (e.g. Screenshot20180509143013.png) that I wish to copy to /media/SD256. The file /media/drive1/Users/name/Pictures/Screenshots/Screenshot20180509143013.png is tangled in some sub-directory levels, and I wish not to navigate to /media/drive1/Users/name/Pictures/Screenshots/ to find that file with the unique name. Instead, I wish to run a command while my working directory is /media/drive1/, which looks similar to: copy --find-filename-then-copy Screenshot20180509143*3.png /dev/media/SD256/DestinationFolder Is there such a command that can first find the file and then copy?
With zsh or fish or ksh -o globstar or bash -O globstar (or after shopt -s globstar in bash ) or tcsh after set globstar or yash -o extended-glob : cp -- **/Screenshot20180509143*3.png /dev/media/SD256/DestinationFolder globstar, with the ** syntax, does a recursive search; if the remainder of the glob (filename pattern) is unique, then you'll get the results you want. Note that I copied the ...3*3 from your example, and not the e.g. filename Screenshot20180509143013.png from earlier in the question. Note that: fish and versions of bash prior to 4.3 will following symlinks when recursing. With zsh , tcsh or yash , you can use *** instead of ** to get that behaviour. fish will not find the file if it's in the current directory. Hidden files and files in hidden directories will not be considered. Many shells have a dotglob option to reenable them. See also the (D) glob qualifier in zsh . In zsh you may also want to add ([1]) at the end of the pattern. [1] is a glob qualifier to copy only the first matching file. the -i option to cp can also guard against accidental overwrite if the file is found in several directories.
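If your shell lacks globstar, find does the same job (the -exec … + form passes all matches to one cp invocation; -t is a GNU cp option that names the destination first):
find /media/drive1 -name 'Screenshot20180509143*3.png' \
    -exec cp -t /dev/media/SD256/DestinationFolder {} +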
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/442769", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270469/" ] }
442,792
How to execute a shell script via a cronjob every 45 days?
If you don't need exactly 45 days, but "one and a half months" will do, then a straightforward method would be to run at the beginning of the month every three months, and at the middle of the next month after each of those:
0 12 1 1,4,7,10 * /path/to/script
0 12 16 2,5,8,11 * /path/to/script
For general arbitrary intervals, the other answers are obviously better, but 45 days sounds like it's based on the length of a month anyway. Human users might also be more used to something happening in the beginning or the middle of a month, instead of seeing the exact date drift a day or two each time.
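For a true fixed 45-day period regardless of month lengths, a common trick is a daily job that checks the day count since the epoch (note the escaped % signs — crontab treats a bare % specially):
0 12 * * * [ $(( $(date +\%s) / 86400 \% 45 )) -eq 0 ] && /path/to/script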
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/442792", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
442,872
I'm using CentOS 7. I'm trying to write a script to start and stop a puma process but I can't figure out how to get the "master" PID, if that is even the right term. In the below command
[rails@server myproject_production]$ ps aux | grep puma
rails 15767 0.0 1.2 437904 13612 ? Sl 17:20 0:00 puma 3.11.4 (tcp://0.0.0.0:3000,unix:///home/rails/myproject_production/shared/sockets/puma.sock) [myproject_production]
rails 15779 0.6 7.6 1061248 80688 ? Sl 17:20 0:05 puma: cluster worker 1: 15767 [myproject_production]
rails 15781 0.6 7.7 1061248 80876 ? Sl 17:20 0:05 puma: cluster worker 2: 15767 [myproject_production]
rails 15785 0.6 7.4 1061964 78488 ? Sl 17:20 0:05 puma: cluster worker 3: 15767 [myproject_production]
rails 15880 0.7 7.4 1059612 78592 ? Sl 17:22 0:05 puma: cluster worker 0: 15767 [myproject_production]
rails 17106 0.0 0.1 112612 1064 pts/0 S+ 17:33 0:00 grep --color=auto puma
The master PID is "15767". If I kill that, all the other puma processes will die. How do I write a command to get that into a script variable?
Use pgrep instead of filtering the output from ps . I think in your case, pgrep -f '^([^ ]*/)?puma ' will match the right process, but experiment a bit to make sure that you're getting what you want and no more. Once you're satisfied that pgrep is finding the process you want to kill, replace pgrep by pkill .
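In a script, capture the PID and act on it; since the workers are forked by the master, pgrep -o ("oldest matching process") is another way to pick the master out (sketch):
pid=$(pgrep -o -u rails puma) || exit 1   # oldest matching process = master
kill "$pid"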
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/442872", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166917/" ] }
442,944
I am trying to understand the stuff. I have a machine with 80G storage. It looks like that:
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   50G  7.1G   43G  15% /
devtmpfs                 3.9G     0  3.9G   0% /dev
tmpfs                    3.9G  1.4M  3.9G   1% /dev/shm
tmpfs                    3.9G  409M  3.5G  11% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1                494M  125M  370M  26% /boot
/dev/mapper/centos-home   26G   23G  3.5G  87% /home
tmpfs                    782M     0  782M   0% /run/user/0
Now, from what I read the tmpfs doesn't take physical storage, but uses the virtual memory of the machine. Is that correct? Does it affect the physical storage in any way? Is there a situation where the tmpfs will be written to physical storage? Next, do all the mounted (/dev/sda1, /dev/sda1, etc...) dirs share the tmpfs? Or does each of them get a different one? Also, I tried to resize the tmpfs. I did:
mount -o remount,size=1G /dev/shm
On restart it went back to the original size. I changed /etc/fstab like this:
tmpfs /dev/shm tmpfs defaults,size=1G
And then:
mount -o remount /dev/shm
It did the trick, but on restart it again went back to its original size. I think I am missing something.
Now, from what I read the tmpfs doesn't take physical storage, but uses the virtual memory of the machine. Is it correct? Correct. tmpfs appears as a mounted file system, but it's stored in volatile memory instead of a persistent storage device. So this could answer your other questions. In reality you cannot assign physical storage to tmpfs since it only relies on virtual memory. Everything stored in tmpfs is temporary in the sense that no files will be created on the hard drive. Swap space is used as backing store in case of low memory situations. On reboot, everything in tmpfs will be lost. Many Unix distributions enable and use tmpfs by default for the /tmp branch of the file system or for shared memory. Depending on your distribution you can use tmpfs for /tmp . By default, a tmpfs partition has its maximum size set to half of the available RAM; however it is possible to overrule this value and explicitly set a maximum size. In this example, to override the default /tmp mount, use the size mount option in /etc/fstab:
tmpfs /tmp tmpfs nodev,nosuid,size=2G 0 0
source: https://wiki.archlinux.org/index.php/tmpfs
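To check that a size= change actually took effect after a remount or a reboot:
df -h /dev/shm
findmnt /dev/shm   # shows the active mount options, including size=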
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/442944", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/277288/" ] }
442,945
In the bash manual , it's written that Builtin commands are contained >>> within <<< the shell itself Also, this answer states that A built-in command is simply a command that the shell carries out itself, instead of interpreting it as a request to load and run some >>> other program <<< When I run compgen -b on bash 4.4 , I receive a list of all shell builtin commands. I see for example that [ and kill are listed to be shell builtins. But their actual locations are:
/usr/bin/[
/bin/kill
I thought that being a builtin means that the command is compiled into the /bin/bash executable. So what's really confusing me: Please correct me, but how can a separate command be a builtin , when it is actually not part of the shell?
The commands that are built into the shell are often built in because of the performance increase that this gives. Calling the external printf , for example, is slower than using the built-in printf . Since some utilities do not need to be built in, unless they are special, like cd , they are also provided as external utilities. This is so that scripts won't break if they are interpreted by a shell that does not provide a built-in equivalent. Some shells' built-ins also provide extensions to the external equivalent command. Bash's printf , for example, is able to do
$ printf -v message 'Hello %s' "world"
$ echo "$message"
Hello world
(print to a variable) which the external /usr/bin/printf simply wouldn't be able to do since it doesn't have access to the shell variables in the current shell session (and can't change them). Built-in utilities also do not have the restriction that their expanded command line has to be shorter than a certain length. Doing printf '%s\n' * is therefore safe if printf is a shell built-in command. The restriction on the length of the command line comes from the execve() C library function used to execute an external command. If the command line and the current environment is larger than ARG_MAX bytes (see getconf ARG_MAX in the shell), the call to execve() will fail. If the utility is built into the shell, execve() does not have to be called. Built-in utilities take precedence over utilities found in $PATH . To disable a built-in command in bash , use e.g. enable -n printf There's a short list of utilities that need to be built into a shell (taken from the POSIX standard's list of special built-ins ):
break
colon (:)
continue
dot (.)
eval
exec
exit
export
readonly
return
set
shift
times
trap
unset
These need to be built in since they directly manipulate the environment and program flow of the current shell session. An external utility would not be able to do that. Interestingly, cd is not part of this list, but POSIX says the following about that: Since cd affects the current shell execution environment, it is always provided as a shell regular built-in. If it is called in a subshell or separate utility execution environment, such as one of the following:
(cd /tmp)
nohup cd
find . -exec cd {} \;
it does not affect the working directory of the caller's environment. I'm therefore assuming that the "special" built-ins can't have external counterparts, while cd in theory could have (but it wouldn't do very much).
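This is also why [ and kill show up in both places; you can ask bash which one it will use:
$ type -a kill
kill is a shell builtin
kill is /bin/kill
To force the external binary, call it by its full path (e.g. /bin/kill).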
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/442945", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/264975/" ] }
443,047
I am thinking of upgrading Libreoffice 5 to 6 through backports. Do I have to purge LO5 first? Furthermore, is purging absolutely necessary before upgrading packages in general?
No, there’s no need to purge the LibreOffice 5 packages before upgrading to LibreOffice 6, at least if you’re using the Debian-provided LibreOffice packages. Purging involves removing a package along with its configuration. The only reason to do so is if you want to fully uninstall a package; when you’re upgrading a package, you shouldn’t ever need to do so. Even when a package changes names, if it’s incompatible with the previous version it will declare so in its metadata and the package management system will take care of things for you (which will involve removing the old package, not purging it, so that the new one can import the old package’s configuration if appropriate, and so that you can revert to the old package if necessary). I can’t make any guarantees concerning packages from third parties, but you shouldn’t need to purge for upgrades there either. You might need to purge when switching between Debian-provided packages and third-party alternatives, but I would expect the installation instructions to tell you so.
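For reference, pulling LibreOffice from backports is a plain targeted install, with no purge involved. Assuming a Debian 9 system with stretch-backports in your sources (adjust the release name to your setup):
$ sudo apt-get update
$ sudo apt-get install -t stretch-backports libreoffice
apt will remove or upgrade the LibreOffice 5 packages as needed on its own.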
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/443047", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/227411/" ] }
443,087
Is there a Linux command similar to yes but one that outputs newlines? Something like $ yes enter that outputs \n\n\n\n\n\n , similar to how yes 'foo' outputs
foo
foo
foo
Similar to what was mentioned in the comments, this will do it: yes '' Note that yes '<enter>' would output the literal string <enter> on every line; passing the empty string '' is what gives you blank lines (newlines).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/443087", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/290324/" ] }
443,118
Is there any sh code that is not syntactically valid bash code (won't barf on syntax)? I am thinking of overwriting sh with bash for certain commands.
Here is some code that does something different in POSIX sh and Bash: hello &> world Whether that is "invalid" for you I don't know. In Bash, it redirects both standard output and standard error from hello into the file world . In POSIX sh , it runs hello in the background and then makes an empty redirection into world , truncating it (i.e. it's treated as & > ). There are plenty of other cases where Bash extensions will do their thing when run under bash , and would have different effects in a pure POSIX sh . For example, brace expansion is another, and it too operates the same under Bash's POSIX mode and not. As far as static syntax errors go, Bash has both reserved words (like [[ and time ) not specified by POSIX, such that [[ x is valid POSIX shell code but a Bash syntax error, and a history of various POSIX incompatibility bugs that may result in syntax errors, such as the one from this question :
x=$(cat <<'EOF'
`
EOF
)
bash: line 2: unexpected EOF while looking for matching ``'
bash: line 5: syntax error: unexpected end of file
Syntax-errors-only is a pretty dangerous definition of "invalid" for any circumstance where it matters, but there it is.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/443118", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/113238/" ] }
443,146
I found line sed 's~ ~~g' in a shell script on a Linux system. What is this ~ ?
It's an alternative delimiter for the sed substitute ( s ) command. Usually, the slash is used, as in s/pattern/replacement/ , but sed allows for almost any character to be used. The main reason for using another delimiter than / in the sed substitution expression is when the expression will act on literal / characters. For example, to substitute the path /some/path/here with /other/path/now , one may do s/\/some\/path\/here/\/other\/path\/now/ This suffers from what's usually referred to as "leaning toothpick syndrome" , which means it's hard to read and to properly maintain. Instead, we are allowed to use another expression delimiter: s#/some/path/here#/other/path/now# Using ~ is just another example of a valid substitution expression delimiter. Your expression s~ ~~g is the same as s/ //g and will remove all spaces from the input. In this case, using another delimiter than / is not needed at all since neither pattern nor replacement contains / . Another way of doing the same thing is tr -d ' ' <infile >outfile
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/443146", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/289083/" ] }
443,152
What is the order of commands executed which have both a pipeline and output redirection in them? Say we do the following:
Charles@myzone:/tmp$ mkdir /tmp/testdir
Charles@myzone:/tmp$ cd /tmp/testdir
Charles@myzone:/tmp/testdir$ touch file1 file2
Charles@myzone:/tmp/testdir$ ls | wc -l
2
Charles@myzone:/tmp/testdir$ ls | wc -l > ls_result
Charles@myzone:/tmp/testdir$ cat ls_result
3
I know that if you do ls > result then result will contain the name of itself because the shell will do something like 1) create/open file named result 2) set the fd of result to be stdout 3) exec ls I was expecting ls_result to have value 2, but it's 3. Question How is the command ls | wc -l > ls_result above executed? Is it equivalent to (ls | wc -l) > ls_result ? Any links with relevant information? (I've looked up the bash manual)
utility1 | utility2 >output is not equivalent to ( utility1 | utility2 ) >output but to utility1 | { utility2 >output; } The two utilities are started at pretty much the same time, which means you would expect your command to sometimes return 3 and sometimes 2. Example:
$ { [ -f test ] && echo exists >&2; } | { echo >test; }; rm test
$ { [ -f test ] && echo exists >&2; } | { echo >test; }; rm test
$ { [ -f test ] && echo exists >&2; } | { echo >test; }; rm test
exists
$ { [ -f test ] && echo exists >&2; } | { echo >test; }; rm test
exists
$ { [ -f test ] && echo exists >&2; } | { echo >test; }; rm test
$ { [ -f test ] && echo exists >&2; } | { echo >test; }; rm test
The above shows that the file created by the right hand side of the pipeline is sometimes detected by the left hand side of the pipeline.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/443152", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/254697/" ] }
443,187
I'd like to find the PDF files whose name (excluding the extension) is longer than three characters. $ find ~ -iregex ".{3,}/.pdf" returns nothing, but $ find ~ -iregex ".+/.pdf" works. How can I enable the {3,} variant?
Assuming you’re using GNU find (which you probably are, since -iregex is a GNU extension to POSIX find ), -regex and -iregex default to Emacs regular expressions, which don’t recognise {3,} . You need to specify a different type of regular expressions using the -regextype option; in addition, you need to adjust your regular expression to the fact that the expression matches against the full path: find ~ -regextype posix-extended -iregex '.*/[^/]{3,}.pdf' You should also escape the . so that it matches “.” rather than any character: find ~ -regextype posix-extended -iregex '.*/[^/]{3,}\.pdf' The regular expression can be simplified since we only care about three non-“/” characters: find ~ -regextype posix-extended -iregex '.*[^/]{3}\.pdf' For completeness, with FreeBSD or NetBSD find (another implementation that supports -iregex , not yours though as .+ wouldn't work there without -E ), you'd write: find ~ -iregex '.*[^/]\{3\}\.pdf' or: find -E ~ -iregex '.*[^/]{3}\.pdf' Without -E , that's basic regular expression (like in grep ) and with -E extended regular expression (like in grep -E ). With ast-open's find : find ~ -iregex '.*[^/]{3}\.pdf' (that's extended regexps out of the box).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/443187", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/260114/" ] }
443,226
What determines which Linux commands require root access? I understand the reasons why it is desirable that, say, apt-get should require root privilege; but what distinguishes these commands from the rest? Is it simply a matter of the ownership and execute permissions of the executable?
It's mainly a matter of what the tool or program does . Keeping in mind that a non-superuser can only touch files that it owns or has access to, any tool that needs to be able to get its fingers into everything will require superuser access in order to do the thing which it does. A quick sample of things that might require superuser access includes, but is not limited to:
Opening a listening TCP socket on a port below 1024
Changing system configurations (e. g. anything in /etc )
Adding new globally-accessible libraries ( /lib and /usr/lib ) or binaries ( /bin , /usr/bin )
Touching any files not owned by the user who is doing the touching which don't have a sufficiently permissive mode
Changing other users' files' ownership
Escalating process priorities (e. g. renice )
Starting or stopping most services
Kernel configuration (e. g. adjusting swappiness)
Adjusting filesystem quotas
Writing to "full" disks (most filesystems reserve some space for the root user)
Performing actions as other users
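The low-port rule is easy to see for yourself; the exact error text varies by nc implementation, but the pattern is the same:
$ nc -l 80
nc: Permission denied
$ sudo nc -l 80    # binds fine; Ctrl-C to stop
On Linux the specific privilege involved is the CAP_NET_BIND_SERVICE capability, which root holds by default.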
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/443226", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/290445/" ] }
443,235
How I am supposed to use systemd-ask-password-console.service ? My aim is to trigger a password prompt and ask for input on some terminal. Currently I am trying it like this: Start systemd-ask-password-console.service . Ensure that no other password agent is running: ps aux | grep ask Ensure that no other password agent is to be started: systemctl status systemd-ask* Execute systemd-ask-password --no-tty "Password:" to trigger the password agent. Step 3 is waiting for an agent to return the password and finally times out. In the meantime the request can be seen within /run/systemd/ask-password/ . systemctl status systemd-ask-password-console.service shows: ● systemd-ask-password-console.service - Dispatch Password Requests to Console Loaded: loaded (/lib/systemd/system/systemd-ask-password-console.service; static; vendor preset: Active: active (running) since Fri 2018-05-11 16:46:43 CEST; 6min ago Docs: man:systemd-ask-password-console.service(8) Main PID: 392 (systemd-tty-ask) Tasks: 2 (limit: 4915) CGroup: /system.slice/systemd-ask-password-console.service ├─392 /bin/systemd-tty-ask-password-agent --watch --console └─393 /bin/systemd-tty-ask-password-agent --watch --console=/dev/tty1May 11 16:46:43 debian systemd[1]: Started Dispatch Password Requests to Console. I would expect that the running agent processes the request and that it will use some terminal (e.g. tty1 ) to ask for the password. What I am doing wrong?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/443235", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100500/" ] }
443,300
After a Fedora distro upgrade (27->28) with dnf , I tried to manually resolve conflicts between package versions (needed to keep older OS versions functional; effective OS version is selected at boot time in GRUB2 menu). dnf security checks prevented the removal of conflicting packages and I used rpm -e xxx --force to do that. I inadvertently removed glibc and the PC immediately errored out. I want to avoid rebuilding my computer from scratch because: I don't exactly remember all applications I installed years ago (they were automatically upgraded by dnf system-upgrade), and there would be a huge configuration work in /etc to restore custom settings for my network environment plus the servers on the machine. Using a rescue disk, I could boot and examine the hard disk. Everything seems relatively "clean". Files from glibc package are simply missing. I could not complete chroot to the former root (in order to run rpm -i glibc ) because chroot tries to launch /bin/bash which is missing. Is there a way to tell rpm to do its usual job but to install files in, say, /mnt/hard_disk/ instead of / ? I'll take care afterwards of package database consistency and integrity.
chroot can take a command to run, so this might work: chroot /mnt/hard_disk rpm -i glibc*.rpm Also, rpm has the --root option, so this is another option: rpm -i --root /mnt/hard_disk glibc*.rpm
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/443300", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/290497/" ] }
443,341
I am running Apache2 version: Server version: Apache/2.4.29 (Ubuntu)Server built: 2018-04-25T11:38:24 I would like to enable TLSv1.3 but I get an error below in Apache2 if I put SSLProtocol TLSv1.2 TLSv1.3 in the ssl.conf file: # apachectl configtestAH00526: Syntax error on line 79 of /etc/apache2/mods-enabled/ssl.conf:SSLProtocol: Illegal protocol 'TLSv1.3'Action 'configtest' failed.The Apache error log may have more information. Is it not possible to enable TLSv1.3 in Apache2 (yet)? I know Nginx can do it, but this question aims at Apache2.
Debian Buster = TLSv1.3 supported In Debian Buster (currently in testing), TLSv1.3 is supported already. The following information is dated to: # date -I 2019-02-24 Apache2 version: # apache2 -v Server version: Apache/2.4.38 (Debian) Server built: 2019-01-31T20:54:05 Where to enable Globally in: /etc/apache2/mods-enabled/ssl.conf Locally in: Your VirtualHost(s) located in: /etc/apache2/sites-enabled/ How to enable As of this date, TLSv1.1 has finally been deprecated, so you want only TLSv1.2 and TLSv1.3. To do that, put this line in the above-mentioned file: SSLProtocol -all +TLSv1.3 +TLSv1.2 Cipher suites The cipher suites are now divided into 2 categories: SSL (below TLSv1.3) and TLSv1.3. You may want to use your own set of ciphers; take this only as an example:
SSLCipherSuite TLSv1.3 TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256
SSLCipherSuite SSL ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256
Curves One important note at the end: there is one new curve you could / should enable: X25519 . You can do this for instance like this, again only as an example: SSLOpenSSLConfCmd Curves X25519:secp521r1:secp384r1:prime256v1 Example domain test on SSLLabs Experimental: This server supports TLS 1.3 (RFC 8446).
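Once enabled and Apache reloaded, you can check the handshake from any machine with OpenSSL 1.1.1 or newer (replace the host name with your own; the cipher shown depends on your SSLCipherSuite):
$ openssl s_client -connect example.com:443 -tls1_3
...
New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
If the server did not negotiate TLSv1.3, s_client reports a handshake failure instead.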
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/443341", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/290534/" ] }
443,398
I'm getting these error messages every single time I reboot my desktop (and a couple of more I don't know how to retain when it's shutting down, but those are not relevant to this question so far): [gorre@uplink ~]$ journalctl -p err..alert...-- Reboot --May 11 21:47:03 uplink kernel: ACPI BIOS Error (bug): Failure looking up [\_SB.PCI0.RP04.PXSX._SB.PCI0.RP05.PXSX], AE_NOT_FOUND (20180105/dswload2-194)May 11 21:47:03 uplink kernel: ACPI Error: AE_NOT_FOUND, During name lookup/catalog (20180105/psobject-252)May 11 21:47:03 uplink kernel: ACPI Error: Method parse/execution failed \_SB.PCI0.RP04.PXSX, AE_NOT_FOUND (20180105/psparse-550)May 11 21:47:03 uplink kernel: ACPI BIOS Error (bug): Failure looking up [\_SB.PCI0.RP08.PXSX._SB.PCI0.RP09.PXSX], AE_NOT_FOUND (20180105/dswload2-194)May 11 21:47:03 uplink kernel: ACPI Error: AE_NOT_FOUND, During name lookup/catalog (20180105/psobject-252)May 11 21:47:03 uplink kernel: ACPI Error: Method parse/execution failed \_SB.PCI0.RP08.PXSX, AE_NOT_FOUND (20180105/psparse-550)May 12 07:09:30 uplink kernel: rtc_cmos 00:03: Alarms can be up to one month in the future-- Reboot --May 12 07:10:32 uplink kernel: ACPI BIOS Error (bug): Failure looking up [\_SB.PCI0.RP04.PXSX._SB.PCI0.RP05.PXSX], AE_NOT_FOUND (20180105/dswload2-194)May 12 07:10:32 uplink kernel: ACPI Error: AE_NOT_FOUND, During name lookup/catalog (20180105/psobject-252)May 12 07:10:32 uplink kernel: ACPI Error: Method parse/execution failed \_SB.PCI0.RP04.PXSX, AE_NOT_FOUND (20180105/psparse-550)May 12 07:10:32 uplink kernel: ACPI BIOS Error (bug): Failure looking up [\_SB.PCI0.RP08.PXSX._SB.PCI0.RP09.PXSX], AE_NOT_FOUND (20180105/dswload2-194)May 12 07:10:32 uplink kernel: ACPI Error: AE_NOT_FOUND, During name lookup/catalog (20180105/psobject-252)May 12 07:10:32 uplink kernel: ACPI Error: Method parse/execution failed \_SB.PCI0.RP08.PXSX, AE_NOT_FOUND (20180105/psparse-550) I found this article that states someone can add this line: echo "disable" > /sys/firmware/acpi/interrupts/gpe6F to /etc/rc.local , but I'm not sure if that's the correct solution...moreover, if that's only "patching" the error messages, but not fixing the underlying problem ‒ if any. Or maybe should I wait for an upgrade? I'm using: [gorre@uplink ~]$ uname -aLinux uplink 4.16.8-1-ARCH #1 SMP PREEMPT Wed May 9 11:25:02 UTC 2018 x86_64 GNU/Linux ...and this is my hardware: Corsair RMX750 (750 Watt) 80+ Gold Fully Modular Power Supply Intel Core i7-8700 (BX80684I78700) Processor Asus Prime Z370-P Corsair Force MP500 M.2 2280 240GB NVMe PCI-Express 3.0 x4 MLC SSD Corsair Vengeance LPX 32GB (2 x 16GB) 288-Pin DDR4 SDRAM DDR4 2666 (PC4 21300) UPDATE New kernel 4.19.13-1-lts update: $ uname -aLinux uplink 4.19.13-1-lts #1 SMP Sun Dec 30 07:38:47 CET 2018 x86_64 GNU/Linux ...and the error/warning messages are finally gone! 
-- Reboot --Dec 28 09:40:42 uplink kernel: ACPI Error: [_SB_.PCI0.RP05.PXSX] Namespace lookup failure, AE_NOT_FOUND (20170728/dswload2-191)Dec 28 09:40:42 uplink kernel: ACPI Exception: AE_NOT_FOUND, During name lookup/catalog (20170728/psobject-252)Dec 28 09:40:42 uplink kernel: ACPI Error: Method parse/execution failed \_SB.PCI0.RP04.PXSX, AE_NOT_FOUND (20170728/psparse-550)Dec 28 09:40:42 uplink kernel: ACPI Error: [_SB_.PCI0.RP09.PXSX] Namespace lookup failure, AE_NOT_FOUND (20170728/dswload2-191)Dec 28 09:40:42 uplink kernel: ACPI Exception: AE_NOT_FOUND, During name lookup/catalog (20170728/psobject-252)Dec 28 09:40:42 uplink kernel: ACPI Error: Method parse/execution failed \_SB.PCI0.RP08.PXSX, AE_NOT_FOUND (20170728/psparse-550)Dec 28 09:41:08 uplink gnome-session-binary[712]: Unrecoverable failure in required component org.gnome.Shell.desktopDec 28 11:48:13 uplink flatpak[7192]: libostree HTTP error from remote flathub for <https://dl.flathub.org/repo/objects/3d/b5370c04103b9acd46bca2f315fb4855649926120b099a>Dec 28 11:48:16 uplink flatpak[7192]: libostree HTTP error from remote flathub for <https://dl.flathub.org/repo/objects/e0/a43c4cbae106fc801d3c7bcc004b8222e9bf0528beef04>Dec 29 12:19:37 uplink kernel: rtc_cmos 00:03: Alarms can be up to one month in the futureDec 30 09:03:02 uplink kernel: rtc_cmos 00:03: Alarms can be up to one month in the futureDec 30 19:07:11 uplink kernel: [drm:intel_pipe_update_end [i915]] *ERROR* Atomic update failure on pipe A (start=952715 end=952716) time 142 us, min 1073, max 1079, scan>Dec 31 08:11:28 uplink kernel: rtc_cmos 00:03: Alarms can be up to one month in the future-- Reboot --Jan 01 10:23:42 uplink gnome-session-binary[516]: Unrecoverable failure in required component org.gnome.Shell.desktop
Your hardware is too new, so to speak. The bugs you are seeing are harmless and may persist for some time. You could try upgrading your BIOS; that is the utmost priority. Then, you could try installing the intel-microcode non-free package. See if these two options work for you first. Today I assembled a computer with the very same CPU and I am seeing the same bugs, just on another motherboard. Update 2018-Dec-1 The error on my Dell laptop with a very recent UEFI BIOS update is still persistent as per the log:
Dec 01 06:27:07 dell-7577 kernel: ACPI Error: [\_SB_.PCI0.XHC_.RHUB.HS11] Namespace lookup failure, AE_NOT_FOUND (20170831/dswload-210)
Dec 01 06:27:07 dell-7577 kernel: ACPI Exception: AE_NOT_FOUND, During name lookup/catalog (20170831/psobject-252)
Dec 01 06:27:07 dell-7577 kernel: ACPI Exception: AE_NOT_FOUND, (SSDT:xh_OEMBD) while loading table (20170831/tbxfload-228)
Dec 01 06:27:07 dell-7577 kernel: ACPI Error: 1 table load failures, 13 successful (20170831/tbxfload-246)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/443398", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/280674/" ] }
443,427
Running newer versions of Gnome (on Wayland), you can't restart the shell with Alt + F2 , entering r & then Enter - which used to restart the shell without logging the user out of the session. More recently, on Fedora systems you used to be able to restart by sending SIGHUP to the gnome-shell process - using top or whatever. However now on Fedora 28 atleast this kills the session and sends the user back to the login screen. Restarting the shell leaving the session intact is very useful in the event of installing/modifying an extension, or (hopefully not anymore!) having to restart gnome due to it bugging out and using 100% CPU. Is there a current alternative please? EDIT: I have also tried SIGQUIT , and gnome-shell --replace (with export DISPLAY=:0 if on a TTY), and the result is to still be kicked back to the login screen
In an Xorg session one can restart GNOME Shell without losing application state, as applications are running against a separate server (X). But unlike with Xorg, in a Wayland session GNOME Shell is not separate from the display server: GNOME Shell itself acts as the Wayland compositor and display server. So there isn't any way to restart GNOME Shell in Wayland without losing application state, as the display server also goes down. It's similar to restarting the X server in an Xorg session. That is the reason why this shell restart option is disabled in Wayland (recall that usually the key sequence to kill the X server is also disabled by default in the Xorg session), and there will probably never be any non-destructive way to restart GNOME Shell in Wayland.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/443427", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52956/" ] }
443,484
There are common pairings of escape sequences to ASCII control characters, such as Ctrl-C and Ctrl-Z to ETX and SUB, respectively. On the Wikipedia Control Codes page, there are most pairings, but no cited reference. Are the control character and key sequence pairings part of a standard? Where is that listed for Linux and other OS's? Are there man pages listing these pairings? Are they purely decades of unwritten convention? References The Linux manpage for termios(3) lists some of them. the command stty -a lists some for your system
I wrote a document in 1984 that summarizes ANSI Codes X3.64-1979, ANSI X3.4-1977, and ANSI X3.41-1974. This ansicode.txt describes how the control codes affect DEC LA-series hardcopy terminals and the VT-series video terminals.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/443484", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288261/" ] }
443,501
How to manually mount a device so that it appears in file manager as disk/drive? ADDED: Sometimes device is not mounted automatically (and does not appear in Disks ) or needed to be mounted with specific options. Doing like mount /dev/sda5 /mnt/data does not make device appear in file manager in the left pane with other devices and I could not find option for mount command to make new mount appear as device in file manager (Nemo specfically).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/443501", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/266260/" ] }
443,507
1. Summary I don't understand why I need the E010 bashate rule . 2. Details I use bashate for linting .sh files. E010 rule: do not on the same line as for for bashate: Correct:
#!/bin/bash
for f in bash/*.sh; do
    sashacommand "$f"
done
Error:
#!/bin/bash
for f in bash/*.sh
do
    sashacommand "$f"
done
Are there any valid arguments for why for and do need to be on the same line? 3. Not useful I can't find an answer to my question in: Google, articles about best coding practices ( example ), or the bashate documentation . I find only : A set of rules that help keep things consistent in control blocks. These are ignored on long lines that have a continuation, because unrolling that is kind of “interesting”
Syntactically, the following two code snippets are correct and equivalent:
for f in bash/*.sh; do
    sashacommand "$f"
done

for f in bash/*.sh
do
    sashacommand "$f"
done
The latter one could possibly be said to be harder to read as do slightly obfuscates the command in the body of the loop. If the loop contains multiple commands and the next command is on a new line, the obfuscation would be further highlighted:
for i in *
do cmd1
   cmd2
done
... but to flag it as an "error" is IMHO someone's personal opinion rather than an objective truth. I would say that if you want to prepend the command in the loop with do , then feel free to do so if that makes the code consistent and readable in the eyes of whoever is reading the code. In general, almost any ; may be replaced by a newline. Both ; and newline are command terminators. do is a keyword that means "here follows what needs to be done (in this for loop)". for f in *; do ...; done is the same as
for f in *
do ...
done
and as
for f in *; do
...
done
and
for f in *
do
...
done
The reason to use one over another is readability and local/personal style conventions. Personal opinion: In loop headers that are very long, I think that it may make sense to put do on a new line, as in
for i in animals people houses thoughts basketballs bees
do
    ...
done
or
for i in \
    animals \
    people \
    houses \
    thoughts \
    basketballs \
    bees
do
    ...
done
The same goes for the then in an if statement. But again, this comes down to one's personal style preferences, or to whatever coding style one's team/project is using.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/443507", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237999/" ] }
443,511
I've seen a couple of similar topics, but they are referring to not quoting variables, which I know could lead to unwanted results. I saw this code and was wondering if it would be possible to inject something to be run when this line of code executes: echo run after_bundle
For the specific case echo run after_bundle quoting is not needed. No quoting is needed because the argument to echo are static strings that contain no variable expansions or command substitutions etc. They are "just two words" (and as Stéphane points out , they are additionally constructed out of the portable character set ). The "danger" comes when you deal with variable data that the shell may expand or interpret. In such cases, care must be taken that the shell does the correct thing and that the result is what's intended. The following two questions contain relevant information about that: Why is printf better than echo? Security implications of forgetting to quote a variable in bash/POSIX shells echo is sometimes used to "protect" potentially harmful commands in answers on this site. For example, I may show how to remove files or move files to a new destination using echo rm "${name##*/}.txt" or echo mv "$name" "/new_dir/$newname" This would output commands on the terminal instead of actually removing or renaming files. The user could then inspect the commands, decide that they look ok, remove the echo and run again. Your command echo run after_bundle may be an instruction to the user, or it may be a "commented out" piece of code that is too dangerous to run without knowing the consequences. Using echo like this, one has to know what the modified command does and one must guarantee that the modified command actually is safe (it would potentially not be if it contained redirections, and using it on a pipeline doesn't work, etc.)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/443511", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80330/" ] }
443,539
echo 'echo "hello, world!";sleep 3;' | parallel This command does not output anything until it has completed. Parallel's man page claims: GNU parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. I guess the devil is in the phrasing: you get the same output as if you would run it normally, but not the output the same as if you would run it normally. I've looked for an option that will do this, for example --results /dev/stdout , but that does not work. My use-case is seeing real-time progress output from the command that I'm running. It's not about how many tasks have completed, which parallel could display for me, but about the progress output of each command individually that I want to see. I would use a bash loop ( for i in $x; do cmd & done; ) but I want to be able to stop all tasks with a single Ctrl+C, which parallel allows me to do. Is it possible to do this in parallel? If not, is there another tool?
I think you're looking for --ungroup . The man page says: --group Group output. Output from each jobs is grouped together and is only printed when the command is finished. --group is the default. Can be reversed with -u. -u of course is a synonym for --ungroup .
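Applied to the command from the question, the output then shows up immediately instead of after the sleep finishes:
$ echo 'echo "hello, world!"; sleep 3;' | parallel -u
hello, world!
Note that with several jobs running at once, -u means their output lines may interleave arbitrarily; that is the price for seeing it in real time.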
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/443539", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30731/" ] }
443,575
What is a fast way to subtract two lists 1 . The lists may be small, maybe a direct way in shell works. Or the lists may be long, maybe external tools are the faster way. Assume you have two lists: list1=( 1 2 3 4 5 6 7 8 9 10 11 12 ) list2=( 1 2 3 5 7 8 9 11 12 ) How to remove all elements of list2 from list1 to obtain a resulting list ( listr ) equivalent to: listr=( 4 6 10 ) The lists could also be inside files, as they should be if the lists are big (keeping them in memory may use too much). To make this question brief, I am placing all algorithms in a community answer. Please read the multiple tests done in there. Multisets The original question was meant to find the missing elements of a complete list (list1) in list2, without repetitions. However, if the lists are: list1=( a a b b b c d d ) list2=( b b c c c d d e ) And the definition of multiset subtraction is as in this page , the expected result is: listr= ( a a b ) Only algorithms 1 and 3 work correctly. Neither algorithm 2 nor 4 can do this. Algorithm 5 (comm) could match this definition by doing comm -23 . Algorithm 6 (zsh) fails. I do not know how to make it work. Algorithm 7 (comm). As said above, using -23 works. I have not analyzed all the algorithms for the set symmetric difference definition, which should yield: listr=( a a b c c e ) But comm -3 list1.txt list2.txt | tr -d ' \t' works. 1 Yes, I know that it is a bad idea to process a text file (list of lines) in a shell, it is slow by design. But there are cases where it seems that it can not be avoided . I (we) am (are) open to suggestions.
You can use comm to remove anything that’s common to both lists: listr=($(comm -3 <(printf "%s\n" "${list1[@]}" | sort) <(printf "%s\n" "${list2[@]}" | sort) | sort -n)) This sorts both lists in the order comm expects, compares them, outputs only items which are unique to either list, and sorts them again in numeric order. If both lists are sorted lexicographically ( as per LC_COLLATE ), you can avoid sorting again: listr=($(comm --nocheck-order -3 <(printf "%s\n" "${list1[@]}") <(printf "%s\n" "${list2[@]}"))) This also works great if the values you need to compare are stored in files.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/443575", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
443,617
The Linux foundation list of standard utilities includes getopts but not getopt . Similar for the Open Group list of Posix utilities. Meanwhile, Wikipedia's list of standard Unix Commands includes getopt but not getopts . Similarly, the Windows Subsystem for Linux (based on Ubuntu based on Debian) also includes getopt but not getopts (and it is the GNU Enhanced version ). balter@spectre:~$ which getopt/usr/bin/getoptbalter@spectre:~$ getopt -Vgetopt from util-linux 2.27.1balter@spectre:~$ which getoptsbalter@spectre:~$ So if I want to pick one that I can be the most confident that anyone using one of the more standard Linux distros (e.g. Debian, Red Hat, Ubuntu, Fedora, CentOS, etc.), which should I pick? Note: thanks to Michael and Muru for explaining about builtin vs executable. I had just stumbled across this as well which lists bash builtins.
which is the wrong tool. getopts is usually also a builtin : Since getopts affects the current shell execution environment, it is generally provided as a shell regular built-in.
~ for sh in dash ksh bash zsh; do "$sh" -c 'printf "%s in %s\n" "$(type getopts)" "$0"'; done
getopts is a shell builtin in dash
getopts is a shell builtin in ksh
getopts is a shell builtin in bash
getopts is a shell builtin in zsh
If you're using a shell script, you can safely depend on getopts . There might be other reasons to favour one or the other, but getopts is standard . See also: Why not use "which"? What to use then?
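As a minimal sketch of portable getopts usage (the option letters here are arbitrary):
#!/bin/sh
verbose=0
while getopts 'vf:' opt; do
    case $opt in
        v) verbose=1 ;;
        f) file=$OPTARG ;;
        *) echo "usage: $0 [-v] [-f file]" >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))   # remaining arguments are now in "$@"
This runs unchanged in dash, ksh, bash and zsh, which is exactly the portability argument for preferring getopts .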
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/443617", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/151352/" ] }
443,618
Given this input:
# Lines starting with # stay the same
# Empty lines stay the same
# only lines with comments should change
ls  # show all major directories
              # and other things
cd      # The cd command - change directory
              # will allow the user to change between file directories
touch             # The touch command, the make file command
                 # allows users to make files using the Linux CLI

 #  example, cd ~
bar foo baz # foo foo foo
I need to keep lines beginning with # and lines not containing any comments as is, but align all other comments on the same column. Desired Output:
# Lines starting with # stay the same
# Empty lines stay the same
# Only lines with # in middle should change and be aligned
ls              # show all major directories
                # and other things
cd              # The cd command - change directory
                  # will allow the user to change between file directories
touch           # The touch command, the make file command
                 # allows users to make files using the Linux CLI

 #  example, cd ~
bar foo baz     # foo foo foo
Here's what I have so far:
# Building an array out of input 
while IFS=$'\n' read -r; do
    lines+=("$REPLY")
done

# Looping through array and selecting elements that need change 
for i in "${lines[@]}"
do
    if [[ ${i:0:1} == ';' || $i != *";"* ]]; then
        echo "DOESNT CHANGE: #### $i"
    else
        echo "HAS TO CHANGE: #### $i"
        array+=( "${i%%";"*}" );
        array2+=("${i##";"}")
    fi
done

# Trying to find the longest line to decide how much space I need to add for each element
max = ${array[0]}
for n in "${array[@]}" ; do
    ((${#n} > max)) && max=${#n}
    echo "Length:" ${#n} ${n}
done

#Longest line
echo $max

# Loop for populating array 
for j in "${!array2[@]}" ; do
    echo "${array2[j]} " | sed -e "s/;/$(echo "-%20s ;") /g"
done
I feel like I'm doing too much. I think there should be an easier way to tackle this problem.
If all your commands and arguments do not contain # , and one other character (say the ASCII character given by byte 1), you can insert that other character as an extra separator and use column to align the comments (see this answer ). So, something like:
$ sed $'s/#/\001#/' input-file | column -ets $'\001'
# Lines starting with # stay the same
# Empty lines stay the same
# only lines with comments should change
ls     # show all major directories
       # and other things
cd     # The cd command - change directory
       # will allow the user to change between file directories
touch  # The touch command, the make file command
       # allows users to make files using the Linux CLI

       # example, cd ~
bar foo baz  # foo foo foo
If your column doesn't support -e to avoid eliminating empty lines, you could add something to empty lines (for example, a space, or the separator character used above):
$ sed $'s/#/\001#/;s/^$/\001/' input-file | column -ts $'\001'
# Lines starting with # stay the same
# Empty lines stay the same
# only lines with comments should change
ls     # show all major directories
       # and other things
cd     # The cd command - change directory
       # will allow the user to change between file directories
touch  # The touch command, the make file command
       # allows users to make files using the Linux CLI

       # example, cd ~
bar foo baz  # foo foo foo
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/443618", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/290750/" ] }
443,643
I am trying to import fabric.api and having issues. I installed fabric using pip and it works fine when I run import fabric in the interpreter. But when I do from fabric.api import * it spews out an error saying "No module named api". I am using Python 2.7. What am I missing here?
Python 2.7.10 (default, Oct  6 2017, 22:29:07)
[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.31)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> version
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'version' is not defined
>>> import fabric
>>> import fabric.api
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named api
>>> from "fabric.api" import *
  File "<stdin>", line 1
    from "fabric.api" import *
                    ^
Fabric has made some major API changes from v1 to v2; to see the changes, visit Upgrading from Fabric 1.x: API organization . In particular, fabric.api is removed and everything is imported directly from the top-level package. This means that your scripts won't work with the current Fabric==2.0.1 version; you have two possibilities: rewrite your code to be compliant with v2, or install the latest v1 version: $ pip install "fabric<2"
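If you do rewrite for v2, the flat fabric.api namespace is replaced by an explicit Connection object. A rough sketch of the v2 style (the host and command are placeholders):
from fabric import Connection

c = Connection('user@myhost')
result = c.run('uname -s')
print(result.stdout.strip())
Task decorators moved as well (v2 provides a task decorator importable as from fabric import task ), so check the upgrade guide linked above for each fabric.api name you used.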
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/443643", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/212613/" ] }
443,650
I have several servers running Ubuntu 16.04 that suddenly have accounts-daemon process utilizing 100% of their CPU. The first time it occurred 3 weeks ago, I moved /var/log/wtmp and re-created it, which immediately solved the problem. That was the first solution I came across, another one was to disable these wtmp logs in proftpd.conf . Are there any risks associated with doing that? Will it solve the problem?
I was having the same issue with accounts-daemon taking nearly 100% CPU on a 16.04 Ubuntu. In short, the root cause was serial console agettys, continuously (i.e. a few times a minute) restarted by systemd . (I acknowledge not exactly answering Sam's main question -i.e. disabling wtmp completely-, but other people in trouble are likely to find this page - as I did) == Details for the curious: strace on accounts-daemon revealed that it was continuously accessing /var/log/wtmp, which was indeed about 300 Mbytes and constantly growing. Unfortunately, last did not show anything from it, but another utility, utmpdump , showed a lot of failed agetty attempts on ttyS* serial consoles: [6] [30697] [tyS2] [LOGIN ] [ttyS2 ] [ ] [0.0.0.0 ] [Sun Dec 30 07:19:34 2018 CET] [6] [30698] [tyS1] [LOGIN ] [ttyS1 ] [ ] [0.0.0.0 ] [Sun Dec 30 07:19:34 2018 CET] [8] [30698] [tyS1] [ ] [ttyS1 ] [ ] [0.0.0.0 ] [Sun Dec 30 07:19:44 2018 CET] [8] [30697] [tyS2] [ ] [ttyS2 ] [ ] [0.0.0.0 ] [Sun Dec 30 07:19:44 2018 CET] [5] [30707] [tyS2] [ ] [ttyS2 ] [ ] [0.0.0.0 ] [Sun Dec 30 07:19:44 2018 CET] [6] [30707] [tyS2] [LOGIN ] [ttyS2 ] [ ] [0.0.0.0 ] [Sun Dec 30 07:19:44 2018 CET] [8] [30707] [tyS2] [ ] [ttyS2 ] [ ] [0.0.0.0 ] [Sun Dec 30 07:19:48 2018 CET] Indeed, there were some serial consoles somehow activated ( systemctl | grep ttyS.*service ), which I removed by commands like "systemctl disable serial-getty@ttyS1.service" (I have no idea why and how these serial agettys were activated, but this is a very old system.) wtmp immediately stopped growing and accounts-daemon disappeared from top output. I guess accounts-daemon only activates for new wtmp records, so even if it is inefficient, it rarely runs now. Cheers: Arpad
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/443650", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/290784/" ] }
443,666
We have a Kafka production machine on Red Hat Enterprise Linux. How can we remove all the files under /var/kafka/kafka-logs that end with .index ? How can we move all the files that end with .index to another folder, /var/tmp/INDEX_BACKUP ? Example contents under /var/kafka/kafka-logs : ./hd3gd.ewhd.pri.processed-98/00000000000000000011.index./hd3gd.ewhd.pri.processed-99/00000000000000000000.index./hd3gd.ewhd.suspected_relations-0/00000000000000000000.index./hd3gd.ewhd.suspected_relations-1/00000000000000000000.index./hd3gd.ewhd.suspected_relations-2/00000000000000000000.index./hd3gd.ewhd.suspected_relations-3/00000000000000000000.index./hd3gd.ewhd.suspected_relations-4/00000000000000000000.index./hd3gd.ewhd.suspected_relations-5/00000000000000000000.index./frfwjnwe.fwefew.heartbeat-0/00000000000000000000.index./frfwjnwe.fwefew.heartbeat-1/00000000000000000000.index./frfwjnwe.fwefew.heartbeat-1/00000000000000017239.index./frfwjnwe.fwefew.heartbeat-2/00000000000000000000.index./frfwjnwe.fwefew.heartbeat-2/00000000000000017238.index
to remove all files ending with .index under /var/kafka/kafka-logs , using GNU find or compatible: find /var/kafka/kafka-logs -name \*.index -delete POSIXly: find /var/kafka/kafka-logs -name \*.index -exec rm -f {} + to move them to another folder, with GNU mv : find /var/kafka/kafka-logs -name \*.index -exec mv -t /var/tmp/INDEX_BACKUP {} + POSIXly: find /var/kafka/kafka-logs -name \*.index -exec sh -c ' exec mv "$@" /var/tmp/INDEX_BACKUP/' sh {} +
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/443666", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
443,674
When I'm connected by WLAN I can determine which network I'm on by looking at the SSID, for example:
$ iwgetid -r
ONOA72E
$ nmcli -t -f active,ssid dev wifi | egrep '^yes' | cut -d: -f2
ONOA72E
And I know that I'm at home because ONOA72E is the SSID of my router. But when I'm using a LAN connection over ethernet, I don't know what I can look at in order to know whether I'm using my home router or not.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/443674", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/235763/" ] }
443,734
I have bad csv files, and need to add some quotes In field,field2,text field with potential commas,field4,field5field,field2,text fie,ld with pot,ential commas,field4,field5field,field2,text field with, potential commas,field4,field5 Out field,field2,"text field with potential commas",field4,field5field,field2,"text fie,ld with pot,ential commas",field4,field5field,field2,"text field with, potential commas",field4,field5 sed 's/,/,"/2' will add the first quote, but how can I do the same with the second occurence backwards from the end, for each line? sed, awk, perl and other methods are welcome. Files are some million lines, speed is appreciated.
Here's an awk way: if there are more than five comma-delimited fields, then loop through the "middle" fields concatenating them before printing the new field surrounded by quotes, followed by the final two fields: awk -f awkscript.awk < input With the following as the awkscript.awk :
BEGIN {
    OFS=","
    FS=","
}
{
    if (NF > 5) {
        middle=""
        for(i=3; i <= NF-2; i++)
            middle=(middle ? middle"," : "")$i
        print $1, $2, "\""middle"\"", $(NF-1), $NF
    } else {
        print $1, $2, "\""$3"\"", $4, $5
    }
}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/443734", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/290766/" ] }
443,764
I'm using CentOS 7. I want to get the PID (if one exists) of the process running on port 3000. I would like to get this PID for the purposes of saving it to a variable in a shell script. So far I have [rails@server proddir]$ sudo ss -lptn 'sport = :3000'State Recv-Q Send-Q Local Address:Port Peer Address:PortCannot open netlink socket: Protocol not supportedLISTEN 0 0 *:3000 *:* users:(("ruby",pid=4861,fd=7),("ruby",pid=4857,fd=7),("ruby",pid=4855,fd=7),("ruby",pid=4851,fd=7),("ruby",pid=4843,fd=7)) but I can't figure out how to isolate the PID all by itself without all this extra information.
Another possible solution: lsof -t -i :<port> -s <PROTO>:LISTEN For example:
# lsof -i :22 -s TCP:LISTEN
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
sshd    1392 root    3u  IPv4  19944      0t0  TCP *:ssh (LISTEN)
sshd    1392 root    4u  IPv6  19946      0t0  TCP *:ssh (LISTEN)
# lsof -t -i :22 -s TCP:LISTEN
1392
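Since you want the PID in a shell variable, command substitution works directly; -t prints bare PIDs, one per line if several processes match:
pid=$(lsof -t -i :3000 -s TCP:LISTEN)
echo "$pid"
If nothing listens on the port, $pid is simply empty, which is easy to test with [ -n "$pid" ] .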
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/443764", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166917/" ] }
443,850
I want to insert a new line in two lines before the last line. So if my original file is: 12345 The result should be 123New line45
Using ed :
$ printf '$-1i\nNew line\n.\n,p\n' | ed -s file
1
2
3
New line
4
5
The ed editing script:
$-1i
New line
.
,p
This first moves to the line one line up from the end ( $-1 ) and inserts ( i ) new contents above that line. The contents inserted is ended with the single dot (it's allowed to be multiple lines). The last ,p displays the complete modified buffer on the terminal. You may redirect this to a new file, or you may write it back to the original file using printf '$-1i\nNew line\n.\nw\n' | ed -s file (the ,p is changed to w ). This latter is also how you would similarly use ex for this job: printf '$-1i\nNew line\n.\nw\n' | ex -s file ed and ex are standard line-oriented editors (as opposed to full-screen editors) that should come with your system. Note that -s means different things to each, but is appropriate for both when doing batch mode editing tasks like this. ed . "Shell and utilities". Base specifications . IEEE 1003.1:2017. The Open Group. ex . "Shell and utilities". Base specifications . IEEE 1003.1:2017. The Open Group. Further reading: Dale Dougherty and Tim O'Reilly (1987). "Advanced Editing". Unix Text Processing . Hayden Books.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/443850", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288476/" ] }
443,891
I have looked at the question How to count number of words from String using shell on SO, which explains how to count words inside a variable. But this only counts one word inside my variable, so I have no idea how to fix it. I have the following variables:
vmfarm1=(host1.com host2.com host3.com host4.com )
maximus=(host11.com host 12.com host 13.com)
firefly=(host5.com)
I need a way to count the number of host names inside each variable. After this, the counted number has to be used as a variable in this line. I have tried:
echo "$input" | wc -w
printf ' \n|/4.vmfarm1 ' >> textfile.txt
I have to write the 4 above by hand, and I need it to be done automatically; this is why I need a variable.
Given an array arr , its length (number of elements) is given by ${#arr[@]} . Using this with your vmfarm1 array: printf ' \n|/%d.vmfarm1 ' "${#vmfarm1[@]}" >>textfile.txt
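For example, with the first array from the question:
$ vmfarm1=(host1.com host2.com host3.com host4.com)
$ echo "${#vmfarm1[@]}"
4
(Note this counts array elements, so maximus as written contains five elements because of the stray spaces in host 12.com and host 13.com .)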
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/443891", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/255177/" ] }
443,898
I've set up several network namespaces on my Linux system (kernel version 3.10), and now I want to configure each network namespace to have its own DNS settings. I created resolv.conf files in each /etc/netns/[namespace] directory, and now I want to make my system work in the following way: In bash command line, whenever I enter the context of a particular network namespace with nsenter --net=/run/netns/[namespace name] , I want all processes launched from command line (like nslookup, ping) to run with the DNS settings that I configured with the matching /etc/netns/[namespace name]/resolv.conf . If I run my commands like this: "ip netns exec [namespace name] [command]" then the DNS settings of the namespace apply. However, when running the commands without "ip netns exec", the DNS settings are taken from /etc/resolv.conf , even though running "netns get cur" indicates that the context is set to the desired network namespace. I tried doing mount --bind /etc/netns/[namespace name]/resolv.conf /etc/resolv.conf in the context of the appropriate network namespace, but this applies the mount in the entire system rather then only in the context of that network namespace. I suspected that using mount namespaces may help, so I tried reading the man page of mount namespaces, however couldn't make anything out of it in the short time that I dedicated to it. Is there an easy and elegant way to achieve this goal? Any help/direction toward the solution will be greatly appreciated!
Just look at what ip netns exec test ... is doing in your situation, using strace . Excerpt:
# strace -f ip netns exec test sleep 1 2>&1|egrep '/etc/|clone|mount|unshare'|egrep -vw '/etc/ld.so|access'
unshare(CLONE_NEWNS)                    = 0
mount("", "/", 0x55f2f4c2584f, MS_REC|MS_SLAVE, NULL) = 0
umount2("/sys", MNT_DETACH)             = 0
mount("test", "/sys", "sysfs", 0, NULL) = 0
open("/etc/netns/test", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 5
mount("/etc/netns/test/resolv.conf", "/etc/resolv.conf", 0x55f2f4c2584f, MS_BIND, NULL) = 0
so to reproduce (partially, e.g. /sys isn't handled here) what ip netns exec test ... is doing:
~# ip netns id
~# head -1 /etc/resolv.conf 
# Generated by NetworkManager
~# nsenter --net=/var/run/netns/test unshare --mount sh -c 'mount --bind /etc/netns/test/resolv.conf /etc/resolv.conf; exec bash'
~# ip netns id
test
~# head -1 /etc/resolv.conf 
# For namespace test
~#
So that's right. nsenter alone isn't enough. unshare has to be used, to change to a newly created mount namespace (basing the new one on a copy of the previous one) and alter it, rather than just using an existing one verbatim, since there is no existing one yet that fits. That's what the syscall of the same name is doing, as strace shows.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/443898", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/290957/" ] }
443,976
I am working on a script and I have a while loop in which I want to take the user input and use it as an integer value with a counter, like so:
read -p "How many bytes would you like to replace :> " $numOfBytes
echo "$numOfBytes bytes to replace"
while [ $counter -le $numOfBytes ]
do
    echo "testing counter value = $counter"
    let $counter++
done
To my understanding it doesn't currently work because it is taking the numOfBytes variable as a string. Do I need to convert the string to an int somehow? Is it possible to do something like that? Is there an alternative?
You want to read an integer and then do a loop from 1 to that integer, printing the number in each iteration:
#!/bin/bash
read -p 'number please: ' num
for (( i = 1; i <= num; ++i )); do
    printf 'counter is at %d\n' "$i"
done
Notice how the $ is not used when reading the value. With $var you get the value of the variable var , but read needs to know the name of the variable to read into, not its value. or, with a while loop,
#!/bin/bash
read -p 'number please: ' num
i=0
while (( ++i <= num )); do
    printf 'counter is at %d\n' "$i"
done
The (( ... )) in bash is an arithmetic context. In such a context, you don't need to put $ on your variables, and variables' values will be interpreted as integers. or, with /bin/sh ,
#!/bin/sh
printf 'number please: ' >&2
read num
i=1
while [ "$i" -le "$num" ]; do
    printf 'counter is at %d\n' "$i"
    i=$(( i + 1 ))
done
The -le ("less than or equal") test needs to act on two quoted variable expansions (in this code). If they were unquoted, as in [ $i -le $num ] , then, if either variable contained a shell globbing character or a space, you might get unexpected results, or errors. Also, quoting protects the numbers in case the IFS variable happens to contain digits. Related questions: Security implications of forgetting to quote a variable in bash/POSIX shells Why does my shell script choke on whitespace or other special characters?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/443976", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/290453/" ] }
444,017
I'm creating a login screen for when I start my CLI Arch Linux, and I built a script in the file /etc/bash.bashrc like the one below:
#COMMANDS CREATED INSIDE /ETC/BASH.BASHRC FILE
# USING ANSI COLORS
RED="\e[31m"
ORANGE="\e[33m"
BLUE="\e[94m"
GREEN="\e[92m"
STOP="\e[0m"
# LOGIN SCREEN MESSAGE 
screenfetch
printf "${GREEN}"
printf "=================================\n"
printf "${ORANGE}"
figlet -w 200 -f standard "F4NT0 ARCH LINUX"
printf "${BLUE}"
figlet -w 200 -f small "CLI Operational System"
printf "${GREEN}"
printf "=================================\n"
printf "${STOP}"
In the code above, I build the variables that hold the colors and I let them "leak" onto the messages created with the figlet program via printf . This way I can color the messages: everything printed after a variable call keeps that color until the next variable call, and so on, until the STOP variable stops the leaking of the colors. I like how it works in my Arch, but as a programming practice I find it "dirty"... Is there a way to add colors (ANSI, tput or others) to the figlet command itself, so this becomes cleaner to use inside scripts in Unix/Linux?
The way I show in this question is the best way I found to put colors on figlet output: printing a color before the command is the only way I found to make it work, and after I call the next color, the next line will change to the new color. If anyone wants to know, the way I call the colors is using the ANSI color codes, like below: Regular colors:
\e[30m = Black
\e[31m = Red
\e[32m = Green
\e[33m = Yellow
\e[34m = Blue
\e[35m = Purple
\e[36m = Cyan
\e[37m = White
Light colors:
\e[90m = Light Black
\e[91m = Light Red
\e[92m = Light Green
\e[93m = Light Yellow
\e[94m = Light Blue
\e[95m = Light Purple
\e[96m = Light Cyan
\e[97m = Light White
The way I use isn't the only way, nor the complete one. For complete information about ANSI colors, read this site: https://misc.flogisoft.com/bash/tip_colors_and_formatting About the variables: I put the color name all in caps because it is the best way to avoid confusion: GREEN="\e[92m" To call the created variable, you need to fetch the value of the variable, using the ${} construction: ${GREEN} To make the color take effect before the command, you need to use printf to leak the color (I use printf , but I think echo works too): printf "${GREEN}" The next line that prints something on the screen will have the color of the variable:
printf "${GREEN}"
figlet -f standard "This Will Be Green"
To stop the color from leaking where it shouldn't, there are two options: If you want to put a new color, just call the new color. If you want to stop the color, use the following variable: STOP="\e[0m" Put the stop at the end, after the last text whose color you wanted to change: printf "${STOP}"
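Since the question also mentions tput : an equivalent and arguably cleaner setup asks the terminal for its own codes instead of hard-coding ANSI sequences. The color numbers follow the usual 0-7 convention (1 = red, 2 = green, ...):
RED=$(tput setaf 1)
GREEN=$(tput setaf 2)
ORANGE=$(tput setaf 3)   # color 3 is yellow; it often renders orange/brown
BLUE=$(tput setaf 4)
STOP=$(tput sgr0)        # sgr0 resets all attributes

printf '%s' "$GREEN"
figlet "This Will Be Green"
printf '%s' "$STOP"
Because tput emits whatever the current terminal's terminfo entry says, this keeps working on terminals whose escape codes differ from the ANSI ones.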
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/444017", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/290652/" ] }
444,080
I have a list of files.txt, below:

-rw-rw-r-- 1 root dev 11 May 16 12:18 20_SumActive.txt
-rw-rw-r-- 1 root dev 11 May 16 12:18 22_SumActive.txt
-rw-rw-r-- 1 root dev  7 May 16 12:18 24_SumActive.txt
-rw-rw-r-- 1 root dev  0 May 16 12:18 26_SumActive.txt
-rw-rw-r-- 1 root dev  0 May 16 12:18 28_SumActive.txt

Output:

kpgmeddev01> cat 2[0-8]_SumActive.txt
Sum: 47760
Sum: 72000
Sum: 0

How to get output as below:

Sum: 47760
Sum: 72000
Sum: 0
[Blank]
[Blank]

Need guidance.
cat cannot output data that does not exist in the files. If a file is empty, it does not even have a newline character to provide an empty line as output. You could make sure that the files contained at least a single newline character. This is how you use GNU awk to ensure that (this modifies the empty files):

awk 'ENDFILE { if (FNR == 0) printf("\n") >>FILENAME }' 2[0-8]_SumActive.txt

The ENDFILE block will be executed after finishing reading any of the files. If FNR is zero, we never saw any lines in the file, so we insert a single newline into it. The script then continues with the next file. You can then use cat as you did in the question.

Alternatively, without changing the files, using GNU awk instead of cat :

awk 'ENDFILE { if (FNR == 0) printf("\n") } 1' 2[0-8]_SumActive.txt

This does the same kind of detection of empty files as above, but prints the newline to standard output rather than to the file. The 1 at the end could be replaced by { print } and will cause all data in the non-empty files to be outputted.

Alternatively, a shell loop (should work in any POSIX shell):

for name in ./2[0-8]_SumActive.txt; do
    if [ -s "$name" ]; then
        cat "$name"
    else
        printf '\n'
    fi
done

The -s test will be true if the file exists and has a size greater than zero. If you want the literal string [Blank] to be outputted for empty files, simply insert that string in front of \n in the calls to printf above (this will also work in the awk code).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/444080", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/291089/" ] }
444,110
It seems I have problems with my connection pool tool: there is a big delay when it obtains the DB connection. What I try to achieve is to extract from the log file all the cases where this incident occurs. The related log entries look like

...
2018-03-12 16:18:44,070 efault task-166 gine.jdbc.internal.LogicalConnectionImpl DEBUG Obtaining JDBC connection
...
2018-03-12 16:20:23,172 efault task-166 gine.jdbc.internal.LogicalConnectionImpl DEBUG Obtained JDBC connection
...

So if the pattern ' DEBUG Obtaining JDBC connection ' occurs, then extract the date ' 2018-03-12 16:18:44,070 '; when the pattern ' DEBUG Obtained JDBC connection ' is found, then extract its date and compare the two dates. If the difference is more than 2 seconds, then log them. I know it is pretty complicated to solve with one line of code, but is it possible without writing a program to do that?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/444110", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/291106/" ] }
444,113
I need to use "find" command to find several different sets of files in my Bash function, depending on my script input. So, I have thing like: DAYS=30case $1 inA1) ARGLINE="-name 'FOO*.xml' -or -name 'BAR*.xml' -or -name 'BTT*.txt'" ;;A2) ARGLINE="-name 'PO*xml' -or -name 'PR*xml'" ;;...esacfind . -maxdepth 1 -type f -mtime +${DAYS} `${ARGLINE}` This works. However, as soon as I want to use variable for number of days to search back to, like this: DAYS=30case $1 inA1) ARGLINE="-name 'FOO*.xml' -or -name 'BAR*.xml' -or -name 'BTT*.txt'" ;;A2) ARGLINE="-name 'PO*xml' -or -name 'PR*xml'" ;;...esacif [[ $# -gt 1 ]]; then DAYS=$2fifind . -maxdepth 1 -type f -mtime +${DAYS} `${ARGLINE}` The function fails when find doesn't find any files matching, with the following error: No command '-name' found, did you mean: Command 'uname' from package 'coreutils' (main) -name: command not found It however works correctly, when the number of days is such that find finds some files.It also fails when I try to pipe the output of succesful run into another command. How should I correctly build the argument line for "find"?
In bash , use an array:

args=( '(' -name 'FOO*.xml' -or -name 'BAR*.xml' -or -name 'BTT*.txt' ')' )

The extra parentheses are there for creating the correct boolean logical grouping, since you use -or . Then, in the find command:

find ...some arguments... "${args[@]}"

You have an additional issue in that you use `$ARGLINE` . This is a command substitution, similar to $( $ARGLINE ) , and the shell will try to execute $ARGLINE (its value) as a command. This is why you get "No command '-name' found". The command substitution fails, but the find runs, which is why you think it "works".
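To sketch the whole thing with an array (patterns copied from the question; this is illustrative, not tested against your data):

#!/bin/bash
days=30
case $1 in
A1) args=( '(' -name 'FOO*.xml' -or -name 'BAR*.xml' -or -name 'BTT*.txt' ')' ) ;;
A2) args=( '(' -name 'PO*xml' -or -name 'PR*xml' ')' ) ;;
esac
if [[ $# -gt 1 ]]; then
    days=$2
fi
# the array expands to one argument per element, with quoting preserved
find . -maxdepth 1 -type f -mtime +"$days" "${args[@]}"

If $1 matches none of the cases, args stays unset and "${args[@]}" expands to nothing, so find simply matches every file.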
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/444113", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/121187/" ] }
444,139
I am a little bit lost and confused. I have some secure servers where, inside ~/.ssh/authorized_keys , I have a key to remotely log in to those servers from my desktop (a MacBook). Now those secure servers need to SFTP/SCP to some third-party servers. They provided me an SSH IP, username and password, but I need to log in with a key. They asked me to share a key. Now I am confused about which key I have to share with them: id_rsa.pub or the key from ~/.ssh/authorized_keys ? NOTE: if I share my id_rsa.pub key with the third party, can they use it to hack my secure servers from point 1?
SSH keys have two parts, the secret/private key (usually in ~/.ssh/id_rsa ), and the public key ( ~/.ssh/id_rsa.pub ). The secret key can be used to prove who you are (or at least that you hold that secret key), and the public key can be used to check the secret key. You never pass the secret key to any other party , as that would give them the ability to impersonate you. As for which public key you install (or send to be installed) on the remote server, that is up to you: it depends on what private key you want to use to login there. If you have a private key on your Macbook, and want to login using that, then send the public key corresponding to that. That's probably the one in your first server's authorized_keys or in your Mac's id_rsa.pub . If you want to login from the first server, then send the public key of that server's key, the one in the machine's id_rsa.pub . If you want to login using both keys, you'll need to arrange both in the authorized_keys on the target server. If you wanted to, you could create multiple private keys on the same system and use different ones for different remote systems. That just requires a bit of bookkeeping to know which key you used where, and some configuration of the SSH client so that it knows to try to use all of the keys. (If you have lots of keys, you may need to configure it per-host.) Passing the public key ( id_rsa.pub ) to a third party is no risk. It's in fact exactly what you need to do to allow them to identify you by your private key.
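To illustrate the mechanics (the hostname and account below are placeholders), installing a public key on the third-party server typically looks like:

# run on the machine whose private key you want to use
ssh-copy-id -i ~/.ssh/id_rsa.pub user@thirdparty.example.com

or, where ssh-copy-id is unavailable, appending it manually:

cat ~/.ssh/id_rsa.pub | ssh user@thirdparty.example.com 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'

Either way, only the .pub file ever leaves your machine.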
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/444139", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
444,148
I have a dualboot system with fedora 27 and windows 10. I am running out of space on my linux volume group (i.e. partition) and I don't have unallocated space. I've read that I might need Gparted and that maybe resize2fs can be used to increase a linux partition ( reference 1, reference 2 ), but all those cases were dealing with extending the root partition where unallocated space already exists.

Output of df -h :

Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 3.9G     0  3.9G   0% /dev
tmpfs                    3.9G  192M  3.7G   5% /dev/shm
tmpfs                    3.9G  2.0M  3.9G   1% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/fedora-root   43G   32G  8.1G  80% /
tmpfs                    3.9G   14M  3.9G   1% /tmp
/dev/sda5                976M  196M  713M  22% /boot
tmpfs                    789M   16K  789M   1% /run/user/42
tmpfs                    789M   11M  778M   2% /run/user/1000
tmpfs                    789M     0  789M   0% /run/user/0

and output of fdisk -l :

Disk /dev/sda: 238.5 GiB, 256060514304 bytes, 500118192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xb72b0508

Device     Boot     Start       End   Sectors   Size Id Type
/dev/sda1  *         2048   1026047   1024000   500M  7 HPFS/NTFS/exFAT
/dev/sda2         1026048 395909025 394882978 188.3G  7 HPFS/NTFS/exFAT
/dev/sda3       498311168 500113407   1802240   880M 27 Hidden NTFS WinRE
/dev/sda4       395909120 498311167 102402048  48.8G  5 Extended
/dev/sda5       395911168 398008319   2097152     1G 83 Linux
/dev/sda6       398010368 498311167 100300800  47.8G 8e Linux LVM

Partition table entries are not in disk order.

Disk /dev/mapper/fedora-root: 43 GiB, 46103789568 bytes, 90046464 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/fedora-swap: 4.9 GiB, 5247074304 bytes, 10248192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

My partitions are:

Windows 10 has 80 GB of free space. I want to take 70 GB from Windows and give it to fedora. How do I do this without losing either or both of my operating systems and any data? Can I shrink windows first to create unallocated space, or should I create a partition of 70 GB inside the windows partition? If so, can I do this inside windows? Otherwise, if I have to use Gparted, can I download it to my external hard drive which contains other files, or is a blank memory stick necessary? My linux partition was created in windows before installing fedora using rufus, and I have roughly 5 GB of swap. If possible I would also like to increase the size of the swap to match my RAM size, because I find my system tends to use up all of my swap partition.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/444148", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/289865/" ] }
444,154
I run Debian 9 with LXQt, which is completely fine. However, attached USB volumes are not displayed in the PCManFM-Qt sidebar under the Devices category intended for that (see screenshot). Which package is responsible for that? I activated all three necessary options in Preferences -> Volume , but the drives still don't appear in the sidebar. Ironically, the taskbar widget shows the drives and I can easily open them in PCManFM-Qt by clicking on the widget. However, they are never displayed in the sidebar, while the Devices category remains completely empty. During common workflows that is a bit uncomfortable... Screenshot of the issue: Screenshot of the working taskbar widget: Again, when clicking on the taskbar widget, the device gets opened in PCManFM-Qt without any problems. Only displaying its existence in the sidebar does not work.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/444154", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/241507/" ] }
444,177
I need to disable the CSRF protection in Jenkins, which is enabled by default. The problem is that after containerizing this, whenever I spin up a new container with Jenkins inside it, it throws a "No valid crumb" error. I am currently using this command to start the Jenkins application:

/usr/bin/java -server -Djava.net.preferIPv4Stack=true -Dhudson.security.csrf.GlobalCrumbIssuerConfiguration=false -jar /usr/share/jenkins/jenkins.war --webroot=/var/cache/jenkins/war --httpPort=9090 --ajp13Port=-1
To disable CSRF, it can be done with groovy: open "Manage Jenkins" / "Script Console" and run

import jenkins.model.Jenkins

def instance = Jenkins.instance
instance.setCrumbIssuer(null)

Source: https://stackoverflow.com/a/57869141
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/444177", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206348/" ] }
444,189
This works perfectly well on any Linux:

$ echo foo bar | sed -n '/foo/{/bar/{;p;}}'
foo bar

But it fails on OS X's ancient BSD variant:

❯ echo foo bar | sed -n '/foo/{/bar/{;p;}}'
sed: 1: "/foo/{/bar/{;p;}}": extra characters at the end of } command

Am I missing some magical incantation? Is there a way to write this in a portable manner? I'd hate to have to revert to a pipeline of grep | grep | grep commands.

Update: low rep here so I can't upvote, but thanks, all repliers, for your well-considered advice.
A sed editing command should be terminated by ; or a literal newline. GNU sed is very forgiving about this. Your script:

/foo/{/bar/{;p;}}

Expanded:

/foo/{
  /bar/{
    p
  }
}

This would work as a sed script fed to sed through -f . If we make sure to replace newlines with ; (only needed at the end of commands and {...} groups of commands) so that we can use it on the command line, we get

/foo/{/bar/{p;};}

This works with OpenBSD sed (the original did not, due to that second ; missing). In this particular case, this may be further simplified to

/foo/{/bar/p;}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/444189", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/254392/" ] }
444,214
When using the bash shell, I tried to use Shift + LEFT to highlight and copy the command I typed in (rather then using the mouse). However, I got a lot of C's instead. I later realized that Shift + UP makes A , Shift + DOWN makes B , and Shift + RIGHT also makes D . Why does this happen? I think it is from the raw keystroke data ( ^[[A , ^[[B , ^[[C , and ^[[D ), but it is just a capital letter (no ^[[ at the beginning).
This is a keyboard input protocol that goes back to the 1980s, and your shell, not your "terminal driver" (whatever that is supposed to be) as in M. Vazquez-Abrams's answer, is not handling it properly. It is, moreover, a perfectly valid control sequence.

Background

Terminals emit control sequences for function key and extended key presses. They can emit:

- DECFNK control sequences, which are CSI-introduced control sequences;
- Linux function key control sequences, which are a different kind of CSI-introduced sequence;
- SCO console function key control sequences, which are a third kind of CSI-introduced sequence;
- shifted single characters, prefixed with SS3;
- or, as in this case, ECMA-48 standard sequences for various things.

(SS3 and CSI are control characters, in the C1 range: Single Shift 3 and Control Sequence Introducer.)

You have two particular keypads on your (IBM Model M-alike or similar) keyboard, a calculator keypad and a cursor keypad. The model employed by DEC VT-style terminal emulators (which is most of the terminal emulators that you are likely to encounter, from the one in your kernel to unicode-rxvt) is that both keypads have separately switchable application/normal modes. A full-screen TUI application, something using the libedit or GNU readline libraries (or ZLE) such as your shell, and a few other types of application specify which mode they want, and then listen for control sequences coming from the terminal by reading bursts of characters (on the grounds that a human cannot type a full ECMA-48 control sequence anywhere near as fast as a terminal or a terminal emulator sends control sequences, so all coming in one burst is what distinguishes a user pressing the Esc key from the terminal emulator sending a control sequence starting with the ␛ character).

In application mode, the arrow keys on each keypad produce shifted single characters prefixed with SS3. Modifiers cannot really have any effect (despite XTerm having botched this) because ECMA-35 and ECMA-48 define SS2 and SS3 as only acting on a single following character. But, on the flip side, the calculator and cursor keypads generate different SS3-shifted characters, allowing the two keypads to be distinguished from each other.

In normal mode, the arrow keys on each keypad produce the same CSI-introduced control sequences, and they are the ones from ECMA-48 with augmentations from DEC VTs . In particular, the cursor keys send the ECMA-48 control sequences CUU, CUD, CUR, and CUL (CUrsor Up, CUrsor Down, CUrsor Right, and CUrsor Left). The DEC augmentations to the ECMA-48 control sequences are that the control sequence includes the current modifier state. So one has a choice between application mode, where one cannot know what modifiers are pressed but one can distinguish the two Left Arrow keys, and normal mode, where one cannot distinguish between two arrow keys but one can know what modifiers are pressed.

In more detail: the DEC augmentations to the ECMA-48 control sequences are that the control sequence has two parameters:

- The first parameter is analogous to the first parameter that a CUU, CUD, CUR, or CUL can actually have, per ECMA-48. It is the occurrence count, and is thus always 1.
- The second parameter is the interesting one. It contains the modifier key state, which (for reasons involving how parameters in CSI-introduced control sequences work when omitted) is a set of bitflags for various modifier keys, plus 1, encoded as a decimal number.

This is how DEC VT terminals have been doing things since the 1980s.
In recent years, several terminal emulators finally introduced the same functionality (albeit, as mentioned, XTerm got it rather wrong).

What's going on.

The problem is that your GNU readline library, libedit, ZLE, and so forth don't really handle the protocol properly . They are not totally to blame. They rely upon the termcap and terminfo systems, which simply aren't up to the job here. termcap and terminfo don't really have the notion of an input control sequence that can vary , let alone multiple-mode keypads. For that you have to look to the likes of Vim, which can be programmed with special overrides for terminfo to specify control sequences that follow the aforegiven protocol (c.f. :help xterm-modifier-keys in Vim), or NeoVIM, which uses Paul Evans's libtermkey and its CSI driver . libtermkey's CSI driver is how one has to handle keyboard input properly from DEC VT-alike terminal emulators. It's an actual ECMA-48 state machine parser that decodes control sequences properly. But what your shell is doing is looking up entries for the arrow keys in terminfo, and only matching those specific control sequences . Specifically: your shell is looking up the kcub1 capability for your terminal in its terminfo record. Here's the one from the teken record , for example:

% tput -T teken kcub1|hexdump -C
00000000  1b 5b 44                                          |.[D|
00000003
%

It is only matching that specific input sequence as ← Left Arrow . When you press ⇧ Level 2 Shift + ← Left Arrow your terminal emulator is sending the control sequence CSI 1 ; 2 D . Rather, it is using the 7-bit alternatives and sending ␛ [ 1 ; 2 D , where ␛ [ is the way of encoding CSI in 7-bit characters. Your shell fails to match that against any known fixed input sequence from terminfo, and aborts processing. On my Bourne Again shell here, it ends up swallowing the first two characters and acts as if I have pressed ; 2 D . On your Bourne Again shell, it ends up swallowing the first four characters and acts as if you have pressed D . What the failure mode is is dependent from exactly what set of input sequences it is attempting to pattern match, as that determines how many characters it swallows before it determines that it has a sequence with no possible matches. This of course is in turn dependent from what your terminal's terminfo/termcap record actually contains and what terminal type you have told your shell that your terminal is.

Fixes

The local fix for this sort of thing is to get creative with the keybindings in your shell. It's why, for example, you'll find people doing this sort of thing with the Z shell in their .zshrc s:

bindkey "\e[1;5D" backward-word
bindkey "\e[1;5C" forward-word

Unfortunately, there is no non-local fix. It would involve rearchitecting your shell's input handling quite significantly. Such rearchitecting is long overdue. (Witness NeoVIM.) But no-one has tackled it yet.

Further reading

- Character Code Structure and Extension Techniques . ECMA-35. 6th edition. 1994. ECMA International.
- Control Functions for Coded Character Sets . ECMA-48. 5th edition. 1991. ECMA International.
- "ANSI, Short ANSI, and PC Keyboard Codes". VT420 Programmer Reference Manual . EK-VT420-RM-002. February 1992. Digital.
- DECFNK . "ANSI Control Functions". VT510 Terminal Programmer Information . EK-VT510-RM. November 1993. Digital.
- VT520/VT525 Video Terminal Programmer Information . EK-VT520-RM. July 1994. Digital.
- https://github.com/fish-shell/fish-shell/issues/2139#issuecomment-388706768
- https://unix.stackexchange.com/a/238932/5132
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/444214", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/279258/" ] }
444,351
I'm trying to determine which group(s) a running child process has inherited. I want to find all groups the process is in given its uid. Is there a way to determine this via the /proc filesystem?
The list of groups is given under Groups in /proc/ <pid> /status ; for example,

$ grep '^Groups' /proc/$$/status
Groups: 4 24 27 30 46 110 115 116 1000

The primary group is given under Gid :

$ grep '^Gid' /proc/$$/status
Gid:    1000    1000    1000    1000

ps is also capable of showing the groups of a process, as the other answers indicate.
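For the ps route: with procps-ng (the ps found on most Linux systems), the supplementary groups can be requested directly as an output column, for example

ps -o pid,user,group,supgrp -p "$pid"

where supgrp prints the group names and supgid the numeric IDs ( $pid stands for the process ID you are inspecting).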
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/444351", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/120583/" ] }
444,358
I don't have enough confidence to do this alone and risk the server not booting or something. I would like to upgrade the kernel from:

$ uname -r
4.9.0-6-amd64
$ uname -v
#1 SMP Debian 4.9.88-1+deb9u1 (2018-05-07)

to kernel version 4.15 or 4.16, whichever you recommend. I just think I know how to list the available versions:

$ apt-cache search linux-image | grep amd64
linux-headers-4.9.0-6-amd64 - Header files for Linux 4.9.0-6-amd64
linux-headers-4.9.0-6-rt-amd64 - Header files for Linux 4.9.0-6-rt-amd64
linux-image-4.9.0-6-amd64 - Linux 4.9 for 64-bit PCs
linux-image-4.9.0-6-amd64-dbg - Debug symbols for linux-image-4.9.0-6-amd64
linux-image-4.9.0-6-rt-amd64 - Linux 4.9 for 64-bit PCs, PREEMPT_RT
linux-image-4.9.0-6-rt-amd64-dbg - Debug symbols for linux-image-4.9.0-6-rt-amd64
linux-image-amd64 - Linux for 64-bit PCs (meta-package)
linux-image-amd64-dbg - Debugging symbols for Linux amd64 configuration (meta-package)
linux-image-rt-amd64 - Linux for 64-bit PCs (meta-package), PREEMPT_RT
linux-image-rt-amd64-dbg - Debugging symbols for Linux rt-amd64 configuration (meta-package)
linux-headers-4.9.0-3-amd64 - Header files for Linux 4.9.0-3-amd64
linux-headers-4.9.0-3-rt-amd64 - Header files for Linux 4.9.0-3-rt-amd64
linux-headers-4.9.0-4-amd64 - Header files for Linux 4.9.0-4-amd64
linux-headers-4.9.0-4-rt-amd64 - Header files for Linux 4.9.0-4-rt-amd64
linux-headers-4.9.0-5-amd64 - Header files for Linux 4.9.0-5-amd64
linux-headers-4.9.0-5-rt-amd64 - Header files for Linux 4.9.0-5-rt-amd64
linux-image-4.9.0-3-amd64 - Linux 4.9 for 64-bit PCs
linux-image-4.9.0-3-amd64-dbg - Debug symbols for linux-image-4.9.0-3-amd64
linux-image-4.9.0-3-rt-amd64 - Linux 4.9 for 64-bit PCs, PREEMPT_RT
linux-image-4.9.0-3-rt-amd64-dbg - Debug symbols for linux-image-4.9.0-3-rt-amd64
linux-image-4.9.0-4-amd64 - Linux 4.9 for 64-bit PCs
linux-image-4.9.0-4-amd64-dbg - Debug symbols for linux-image-4.9.0-4-amd64
linux-image-4.9.0-4-rt-amd64 - Linux 4.9 for 64-bit PCs, PREEMPT_RT
linux-image-4.9.0-4-rt-amd64-dbg - Debug symbols for linux-image-4.9.0-4-rt-amd64
linux-image-4.9.0-5-amd64 - Linux 4.9 for 64-bit PCs
linux-image-4.9.0-5-amd64-dbg - Debug symbols for linux-image-4.9.0-5-amd64
linux-image-4.9.0-5-rt-amd64 - Linux 4.9 for 64-bit PCs, PREEMPT_RT
linux-image-4.9.0-5-rt-amd64-dbg - Debug symbols for linux-image-4.9.0-5-rt-amd64
linux-headers-4.15.0-0.bpo.2-amd64 - Header files for Linux 4.15.0-0.bpo.2-amd64
linux-headers-4.15.0-0.bpo.2-cloud-amd64 - Header files for Linux 4.15.0-0.bpo.2-cloud-amd64
linux-headers-4.16.0-0.bpo.1-amd64 - Header files for Linux 4.16.0-0.bpo.1-amd64
linux-headers-4.16.0-0.bpo.1-cloud-amd64 - Header files for Linux 4.16.0-0.bpo.1-cloud-amd64
linux-image-4.15.0-0.bpo.2-amd64 - Linux 4.15 for 64-bit PCs
linux-image-4.15.0-0.bpo.2-amd64-dbg - Debug symbols for linux-image-4.15.0-0.bpo.2-amd64
linux-image-4.15.0-0.bpo.2-cloud-amd64 - Linux 4.15 for x86-64 cloud
linux-image-4.15.0-0.bpo.2-cloud-amd64-dbg - Debug symbols for linux-image-4.15.0-0.bpo.2-cloud-amd64
linux-image-4.16.0-0.bpo.1-amd64 - Linux 4.16 for 64-bit PCs
linux-image-4.16.0-0.bpo.1-amd64-dbg - Debug symbols for linux-image-4.16.0-0.bpo.1-amd64
linux-image-4.16.0-0.bpo.1-cloud-amd64 - Linux 4.16 for x86-64 cloud
linux-image-4.16.0-0.bpo.1-cloud-amd64-dbg - Debug symbols for linux-image-4.16.0-0.bpo.1-cloud-amd64
linux-headers-4.9.0-4-grsec-amd64 - Header files for Linux 4.9.0-4-grsec-amd64
linux-image-4.9.0-4-grsec-amd64 - Linux 4.9 for 64-bit PCs, Grsecurity protection (unofficial patch)
linux-image-grsec-amd64 - Linux image meta-package, grsec featureset
linux-image-cloud-amd64 - Linux for x86-64 cloud (meta-package)
linux-image-cloud-amd64-dbg - Debugging symbols for Linux cloud-amd64 configuration (meta-package)

I need headers too. On Ubuntu there is also a package called extra or similar, so I am confused not to see it here. What is the proper way of installing a new kernel manually on Debian 9?
If you want to install a newer Debian-packaged kernel, you should use one from the backports repository. You seem to have that repository already added to your apt configuration, so you're all set. Since your current kernel is the basic amd64 version, I assume you won't need the realtime scheduler version, nor the cloud version. Just run

apt-get install linux-image-4.16.0-0.bpo.1-amd64 linux-headers-4.16.0-0.bpo.1-amd64

i.e. "install the basic -amd64 version of the 4.16 kernel backported for Debian 9, and the corresponding headers package". Unlike for regular packages, the new version linux-image package will not outright replace the existing 4.9.0 kernel, but will install alongside it. (That's because the version number is included as part of the package name.) The bootloaders will automatically be configured at linux-image post-install to either present the available kernels in a version-number-based order, or if that is not possible for some bootloaders, just automatically set the most recently installed one as the preferred one. If it turns out that your new kernel won't boot, you can just select the previous kernel from the bootloader, and then remove the kernel package that proved to be non-functional. If you accidentally tell the package manager to remove the kernel you're currently running on, it is smart enough to know that isn't a good thing to do, and will abort the operation.
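If you'd rather not pin a specific version, there is also a backported meta-package that tracks the newest backports kernel (assuming your sources list names the suite stretch-backports, the standard name for the Debian 9 backports suite):

apt-get install -t stretch-backports linux-image-amd64 linux-headers-amd64

Because the backports suite is marked ButAutomaticUpgrades, future upgrades will then pull in newer backported kernels as they appear.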
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/444358", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
444,366
Currently working on some RegExp to parse an input file for correct content. I'm using the below RegExp to parse some input:

cell-(90|855|80|70)-(DEV|DEVL|SANDP|CAT|(SIT[a-z]|SIT[1-9])|TAT|PROD)(?:-(DEV|DEVL|SANDP|CAT|(SIT[a-z]|SIT[1-9])|TAT|PROD))-[a-z]

Input it should match:

cell-80-sandp-sit-a

Or match this:

cell-80-sandp-a

The -sit part of the input should be an optional capture group, which to my understanding means the RegExp will continue successfully if it does not find this capture group, or also finish successfully if it does find it. For this instance, I would be using it in an if statement:

if [[ "$Input" =~ $RegExp ]]; then
    #stuff
fi

Can anyone point out what is wrong with the above? I have been using regex101.com to test it.
bash understands standard extended regular expressions ("ERE"), not PCRE ("Perl-compatible regular expressions"). Your PCRE: cell-(90|855|80|70)-(DEV|DEVL|SANDP|CAT|(SIT[a-z]|SIT[1-9])|TAT|PROD)(?:-(DEV|DEVL|SANDP|CAT|(SIT[a-z]|SIT[1-9])|TAT|PROD))-[a-z] The (?:...) in a PCRE is a non-capturing group (not an optional group). There is no equivalent in an ERE and all groups are capturing. To make an expression optional, you may qualify it with ? , as I have done below. The ? means that the previous expression should match one or zero times. As an ERE: cell-(90|855|80|70)-(DEV|DEVL|SANDP|CAT|(SIT[a-z]|SIT[1-9])|TAT|PROD)(DEV|DEVL|SANDP|CAT|(SIT[a-z]|SIT[1-9])|TAT|PROD)?-[a-z] or, contracting (SIT[a-z]|SIT[1-9]) into SIT[a-z1-9] , cell-(90|855|80|70)-(DEV|DEVL|SANDP|CAT|SIT[a-z1-9]|TAT|PROD)(-(DEV|DEVL|SANDP|CAT|SIT[a-z1-9]|TAT|PROD))?-[a-z] You may also want to add anchoring to this: ^cell-(90|855|80|70)-(DEV|DEVL|SANDP|CAT|SIT[a-z1-9]|TAT|PROD)(-(DEV|DEVL|SANDP|CAT|SIT[a-z1-9]|TAT|PROD))?-[a-z]$ ... otherwise it would match somethingcell-...-ablahblah
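A quick sketch of how this could be wired up in bash (variable names taken from the question; keeping the pattern in a variable avoids quoting headaches):

RegExp='^cell-(90|855|80|70)-(DEV|DEVL|SANDP|CAT|SIT[a-z1-9]|TAT|PROD)(-(DEV|DEVL|SANDP|CAT|SIT[a-z1-9]|TAT|PROD))?-[a-z]$'
if [[ $Input =~ $RegExp ]]; then
    printf 'matched: %s\n' "${BASH_REMATCH[0]}"
fi

Note that the sample inputs are lower-case while the alternatives are upper-case; shopt -s nocasematch makes the [[ ... =~ ... ]] comparison case-insensitive, if that is what's intended.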
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/444366", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/285147/" ] }
444,383
I am curious why we can't switch to a user's home directory with either

$ cd ~"$USER"

or

$ cd ~${USER}
That very much depends on the shell and the order the expansions are done in those shells. ~$user expands to the home directory of the user whose name is stored in $user in csh (where that ~user feature comes from), AT&T ksh, zsh, fish. Note however these variations:

$ u=daemon/xxx csh -c 'echo ~$u'
/usr/sbin/xxx    # same in zsh/fish
$ u=daemon/xxx ksh93 -c 'echo ~$u'
~daemon/xxx
$ u=daemon/xxx csh -c 'echo ~"$u"'
Unknown user: daemon/xxx.
$ u=daemon/xxx zsh -c 'echo ~"$u"'
/usr/sbin/x      # same in fish
$ u=" daemon" csh -c 'echo ~$u'
/home/stephane daemon
$ u=" daemon" zsh -c 'echo ~$u'
~ daemon         # same in ksh/fish
$ u="/daemon" csh -c 'echo ~$u'
/home/stephane/daemon    # same in zsh
$ u="/daemon" fish -c 'echo ~$u'
~/daemon         # same in ksh

It expands to the home directory of the user named literally $user in bash (provided that user exists, which is very unlikely of course). And to neither in pdksh , dash , yash , presumably because they don't consider $user to be a valid user name.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/444383", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/286001/" ] }
444,402
I am currently running the command: echo "hello world" | xargs curl http://localhost:8080/function/func_wordcount -d" which takes the stdout of echo "hello world" and then pipes it to func_wordcount using the -d option. The -d option is for sending raw data and my func_wordcount takes the raw data input and prints the number of words and the number of letters. For example, when I write echo "hello" | xargs curl http://localhost:8080/function/func_wordcount -d" the output is: 1, 5 meaning that there was one word which contained 5 letters. However, when I try to include many words I get an error. When I write echo "hello world" | xargs curl http://localhost:8080/function/func_wordcount -d" I get the output 1, 5, then a newline with the error: curl: (6) could not resolve host: world . So I am pretty sure that it is splitting hello world into two words when I convert the stdout to raw data using the -d option. Also, just to show that the function works without piping and converting, when I run my function with just curl http://localhost:8080/function/func_wordcount -d "hello world" I get 2, 11 showing that there are two words and 11 characters. My question is how to work around this splitting issue. The part that I find confusing is why it is parsing just the first half of the input and completing that, and then throwing an error on the second part instead of just sending one chunk of data. I have only been able to send input that is not delimited at all by spaces so the functions uses become very limited.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/444402", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/291361/" ] }
444,417
I have a directory

foo/
    bar.txt
    baz.yzw
    wun/
        a.out

Now, I would like to basically add a directory in between, i.e. I would like to make it

foo/
    var1/
        bar.txt
        baz.yzw
        wun/
            a.out

with the intent of also adding other stuff to foo , but kept separate from the old contents. I could of course do it like this:

$ mkdir foo-new
$ mv foo foo-new
$ mv foo-new foo

or

$ cd foo
$ mkdir var1
$ mv $(ls | grep -v var1) var1

but both seem inelegant and are error-prone. Is there a better way to do it?
$ cd foo
$ mkdir var1
$ mv * var1

The shell and mv command are smart enough to not try to move the var1 directory into itself.
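One caveat (harmless for the tree shown above): * does not match hidden files, so any dotfiles in foo would stay behind. In bash you can opt in with:

shopt -s dotglob
mv -- * var1

dotglob makes * include names beginning with a dot ( . and .. are still excluded, so this remains safe).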
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/444417", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54602/" ] }
444,466
When I opened wireshark (fresh install of live usb kali with persistence) for the first time it complained about me being root . This is what I found in googling it: Yes it's recommended and advisable not to run such tools in super user or high permission account. Giving root to such tools can go sideways should the tool malfunction. You can create a non-super user account or non root account and that should fix the error dialog. Also, the tool should still work when that error dialog shows up, It just warns you of the privileges you are assigning to the tool. https://null-byte.wonderhowto.com/forum/problems-with-wireshark-as-root-user-kali-0169494/ But when I create a user and try to log out as root , I get stuck in an autologin loop as root. I also saw elsewhere that most of the tools in Kali require root permissions (I figured I'd just sudo everything. I started in ubuntu years ago and have become so used to sudo I prefer it to root account). But my question is: which is the proper way of doing things in Kali? Creating a sub-account and disabling the log in (thus, my sub question is how?) or disabling the warnings in wireshark . What can go wrong with wireshark if you run it in root ? And finally: is this some kind of hazing ritual with Kali? (Bill Labanovic jokes in his book on python that installing pip is a kind of hazing ritual for new programmers in python. I thought maybe this is like that). UPDATE: Even when I find a way to log in as non-root (by waiting for the screen to lock and selecting a different user) I can't run wireshark because I don't have the permissions! And I can't seem to figure out a way to run wireshark with permissions without resorting to cli! Update2: I am not new to linux. By a "fresh install of live usb with persistence" I mean I just set up a live usb stick with kali linux and configured persistence following tutorials like this . The error message in wireshark is this one . Most of the tutorials I found online are about disabling the warning. I have set up a sudoer account but every time I boot Kali or attempt to log out of Kali, I go through the boot process and the last item is always something along the lines of 'Started up User Manager for UID 0.', as in root . How do I disable this? Update3: I'm going to clarify my question: what is the normal, correct way of doing this in Kali. According to the meta question on Kali questions, Kali doesn't even have proper apt support. So I'm afraid to go following the directions Draconis posted, because it requires installing special libraries. I'm not trying to run Kali as a production environment or a normal desktop (I have it on a flash drive), I'm just trying to use some tools in it to pentest my server (I am totally new to pentesting, and even though pentesterlab says not to bother using Kali, but just use the distro you're already comfortable in, I didn't want to go installing wireshark and a bunch of other tools in my desktop distro--I figured I'd start learning the basics of pentesting in Kali with pre-rolled tools. Perhaps this was a mistake). I did figure out how to log out as root : it requires using the lock screen button. Sounds amateur not to know that, but I'm not used to gnome and I spend more time in cli than not . I run an ubuntu server and mostly use my bunsenlabs box to mess around with mysql databases and write python scripts with vim . And it seems funny to me that I can't log out as root without getting stuck in an autologin loop. I am not new to linux. 
But I have been using, for the past 10 years off and on, somewhat easier distros (ubuntu, crunchbang, and now bunsenlabs. Have even dipped my toes into slackware and fedora). But for a tool that I figured was meant to be started up on a flash drive, there's an awful lot of configuration that has to happen before I can even get started using wireshark. This doesn't make sense to me. There is no way that a pentester goes and sets up special accounts to run wireshark every time he boots his flash drive (given most people, I would imagine, wouldn't even bother setting up persistence). So my question is: is it thoroughly normal to run wireshark as root in Kali Linux when running it from a flash drive? If it is not, what is the typical way of doing things in Kali?
There are a few different important points here. But the first one is, Kali is not a good first Linux distribution to start off with. If you're not familiar with account permissions, and especially if you don't want to use the command line, then Kali isn't right for you. I'd recommend Ubuntu instead. Anything you can do in Kali, you can also do in Ubuntu (once you install the right packages and tools), and it actually has a learning curve as opposed to Kali's "learning sheer vertical cliff". (OP, you say you've been using Ubuntu for years, so this warning isn't intended for you. But it's worth saying anyway for other people who find this question.)

Second of all, Kali is designed to run pretty much everything as root. Which isn't a very good security practice! Some versions of Wireshark come with a warning:

WIRESHARK CONTAINS OVER ONE POINT FIVE MILLION LINES OF SOURCE CODE. DO NOT RUN THEM AS ROOT.

But if you're using Kali, it's assumed that:

- You know what you're doing, and you know why running as root in general is a bad idea, and so you're going to make sure that
- Even root isn't going to have the power to do anything really bad (because e.g. you're definitely not running this on the production server full of sensitive information)

As far as what could go wrong, Wireshark has quite a lot of different "dissectors" to analyze incoming traffic. Because there are so many, and they're so complicated, it's hard to be sure none of them will glitch out when given specially-crafted packets. At best, this could make Wireshark crash. At worst, it could allow arbitrary code execution. And arbitrary code execution as root is very bad .

The recommended way of using Wireshark, without letting it run as root, involves giving its dumpcap executable two extra capabilities: CAP_NET_ADMIN (allowing it to control network interfaces) and CAP_NET_RAW (allowing it to access raw packets). Full details on this are outside the scope of this question, but this article explains how to do that. Unfortunately, manipulating capabilities does require using the command line. If you're not comfortable with that, then Kali probably isn't right for you: it's built for command line use first and foremost.
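For reference, the capability-based setup usually boils down to something like the following (the paths, the group name, and the username are placeholders that vary by system; on Debian-derived systems, including Kali, sudo dpkg-reconfigure wireshark-common offers to set this up for you):

sudo groupadd wireshark                       # if the group doesn't already exist
sudo usermod -aG wireshark yourusername       # placeholder username; re-login afterwards
sudo chgrp wireshark /usr/bin/dumpcap
sudo chmod 750 /usr/bin/dumpcap
sudo setcap cap_net_raw,cap_net_admin+eip /usr/bin/dumpcap

After that, members of the wireshark group can capture packets while Wireshark itself runs as an ordinary user.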
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/444466", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288980/" ] }
444,476
If the input is

foo,bar,baz
bar,baz,qux
qux,quux,baz
bar,foo,qux
waldo,fred,garply

the output should be

foo,bar,baz
bar,baz,qux
waldo,fred,garply

As you can see, records are deduplicated based on the 3rd column's value. If multiple records have the same 3rd column value, pick a random one (or the first one; doesn't matter)
The idiomatic awk answer is

awk -F, '!seen[$3]++' file

That will print a line the first time a value is seen in the 3rd column.
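Run against the sample input, it keeps the first line for each distinct third field:

$ awk -F, '!seen[$3]++' file
foo,bar,baz
bar,baz,qux
waldo,fred,garply

The trick: seen[$3]++ is 0 (false) the first time a third-field value appears, so !seen[$3]++ is true and the line is printed (printing the line being awk's default action for a true condition with no action block); on every later occurrence the counter is non-zero and the line is suppressed.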
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/444476", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/291421/" ] }
444,503
I found a command rename.ul on my Ubuntu machine. It comes from the util-linux package. It is odd to me because I rarely see executables with an extension. In addition, it seems unnecessary because the file is compiled. Are there any historical or technical reasons for it? I'm also confused because I could not find a file format associated with this extension.
The extension is to avoid conflict with the multitude of rename commands otherwise available on Debian. This change was made in 2007 in response to Debian bug #439647 : /usr/bin/rename is managed by the alternatives system (with Perl's version the default). util-linux 2.13~rc3-8 installs its own binary there, instead of registering it as an alternative. In response, the util-linux rename was renamed to be rename.ul . Even so, rename.ul syntax is so far different from the Perl variants that it's not added to the alternatives system by default (see Debian bug #439935 ).
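Usage-wise the two are quite different, too: the util-linux tool performs a literal substring replacement rather than taking a Perl expression, e.g.

rename.ul .htm .html *.htm    # replace the first occurrence of ".htm" in each name

whereas the Perl variant would express the same as rename 's/\.htm$/.html/' *.htm (file names here are illustrative).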
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/444503", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
444,601
Is there any terminal shell/emulator out there that supports editing your current command with the mouse? Specifically things like placing the cursor by clicking (useful for long commands) or double clicking and pressing delete to select and delete a word etc. For example, the terminal at the bottom of Midnight Commander (mc) has support for placing the cursor by mouse click. I'm looking for something similar that is more terminal-focused, as mc is mainly a file manager. It's fine if it only works under a GUI environment (I'm on Ubuntu 18.04 with GNOME3).
zsh can be extended to support mouse operation like you describe, using Stéphane Chazelas’ mouse.zsh ZLE widget :

wget http://stchaz.free.fr/mouse.zsh
. ./mouse.zsh
zle-toggle-mouse

(and once you’ve tested it, add it to your ~/.zshrc ). It will work in any terminal with VT200 mouse tracking, and in the Linux console with gpm .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/444601", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/291533/" ] }
444,610
I have a JSON file on CentOS where all text is on the same line. How can I pretty format it with all the correct indents and everything?
Use jq , a very good JSON processor; from personal preference, it's the best tool available for this. To just pretty-print, use:

jq . file_name
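If installing jq isn't an option, Python's bundled json.tool module is a common fallback (the file name here is a placeholder):

python -m json.tool file.json

It reads the file (or standard input) and writes an indented version to standard output.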
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/444610", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34039/" ] }
444,624
Is it equivalent to have commands print to a file directly, as opposed to writing to a file descriptor?

Illustration

Writing to file directly:

for i in {1..1000}; do >>x echo "$i"; done

Using an fd:

exec 3>&1 1>x
for i in {1..1000}; do echo "$i"; done
exec 1>&3 3>&-

Is the latter one more efficient?
The main difference between opening the file before the loop with exec , and putting the redirection in the command in the loop is that the former requires setting up the file descriptor just once, while the latter opens and closes the file for each iteration of the loop. Doing it once is likely to be more efficient, but if you were to run an external command inside the loop, the difference would probably disappear in the cost of launching the command. ( echo here is probably builtin, so that doesn't apply) If the output is going to be sent to something other than a regular file (e.g. if x is a named pipe), the act of opening and closing the file may be visible to other processes, so there may be differences in behaviour, too. Note that there's really no difference between a redirection through exec and a redirection on the command, they both open the file and juggle file descriptor numbers. These two should be pretty much equivalent, in that they both open() the file and write() to it. (There's differences in how fd 1 is stored for the duration of the command, though.):

for i in {1..1000}; do
    >>x echo "$i"
done

for i in {1..1000}; do
    exec 3>&1 1>>x   # assuming fd 3 is available
    echo "$i"        # here, fd 3 is visible to the command
    exec 1>&3 3>&-
done
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/444624", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
444,651
In Centos 7, I want to install some packages that I see in the following URL : http://mirror.centos.org/centos/7.4.1708/extras/x86_64/Packages/ How can I add this URL to my yum package manager ? PS: downloading a single rpm file doesn't work, because it looks recursively for dependencies with the same version.
I had to add a new repo file, e.g. /etc/yum.repos.d/myrepo.repo , with this repo configuration:

[myrepo]
name=My extras packages for CentOS 7.4.1708
baseurl=http://mirror.centos.org/centos/7.4.1708/extras/x86_64/
enabled=1

Then, to install for example docker-1.12.6-55.gitc4618fb.el7.centos run:

$ sudo yum install -y docker-1.12.6-55.gitc4618fb.el7.centos

Options --disablerepo=* with --enablerepo=myrepo can be used to enforce only the new repo file to be considered.

--- UPDATE ---

Package version 7.4.1708 doesn't exist anymore in mirror.centos.org . You should rather use:

baseurl=http://vault.centos.org/centos/7.4.1708/extras/x86_64/
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/444651", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223439/" ] }
444,681
I usually turn the alert sound (by default a water drop sound) off by going to control-center→Sound→Sound Effects and muting the Alert volume. This is in Gnome. I wanted to turn it off in a custom live build of Debian by default, but I can't figure where this setting is stored. I tried dconf and looked around config directories extensively without success. I tried find ~ -mmin -1 also gio monitor and inotifywatch without success. The only output by find ~ -mmin -1 was .config/dconf/ and .config/dconf/user which get edited all the time the control center is opened anyway. I replaced this user file in a vm to test and all dconf settings were updated except the one I need (the alert sound). I also tried dconf watch / which gave no output when I tried editing the alert sound setting I'd like someone to tell me how to mute this setting from command line and possibly tell me where it is stored.
This can be achieved with this command:

dconf write /org/gnome/desktop/sound/event-sounds "false"

However, this doesn't turn off the sound volume slider effect. To completely turn off the sound effects, the closest way I've found was to live boot into a clean iso of the distro, open System settings > Sound > Sound effects and turn these sounds off as preferred, then copy the file ~/.config/pulse/*-stream-volumes.tdb and save it. Then, to turn off the "sound effects" on an installed environment or while building a custom version of the distro, do

cp saved-pulse-volumes.tdb ~/.config/pulse/*-stream-volumes.tdb
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/444681", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/291581/" ] }
444,682
Vim 8.1 added the :terminal command, which opens up a new bash terminal as a split. However, it always seems to be a horizontal split, and I prefer vertical splits. Is there a way to open a terminal as a vertical split without using:

:vsp
:terminal
<c-w>j
:q

Alternatively, is there a way I could add it as a command in my .vimrc , like so:

command Vterm :vsp | :terminal | <c-w>j | :q

The command above chokes on trying to execute <c-w>j and opens a new vim split with the following:

executing job failed: No such file or directory

Just having:

command Vterm :vsp | :terminal

works fine, but leaves the original split.
You can use the :vert[ical] command modifier :

:vert term

:vertical works with any command that splits a window, for example:

:vert copen
:vert help vert
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/444682", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/291583/" ] }
444,754
I was trying to concatenate text files in sub-folders and tried: cat ./{mainfolder1,mainfolder2,mainfolder3}/{subfolder1}/book.txt > out$var However this did not return anything. So, tried adding a non existing 'subfolder2' cat ./{mainfolder1,mainfolder2,mainfolder3}/{subfolder1,subfolder2}/book.txt > out$var And this time it did work out, concatenating the files successfully.Why does this happens?
By definition, brace expansion in GNU Bash requires either a sequence expression or a series of comma-separated values: "Patterns to be brace expanded take the form of an optional preamble, followed by either a series of comma-separated strings or a sequence expression between a pair of braces, followed by an optional postscript." You can read the manual for details. A few simple samples:

$ echo {subfolder1}
{subfolder1}
$ echo {subfolder1,subfolder2}
subfolder1 subfolder2
$ echo subfolder{1}
subfolder{1}
$ echo subfolder{1..2}
subfolder1 subfolder2
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/444754", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/291642/" ] }
444,767
Doing man -t man > man.ps will export the man page for man in postscript. How can I export it in PDF? I have gone through the manuals and learnt about the -T option but it's a bit unclear to me.
If groff and gropdf exists on your Linux system, you should be able to use man -Tpdf man >man.pdf (note the absence of a space between -T and pdf ) On an Ubuntu system, it should be enough to install the groff package to get access to gropdf . The option argument to -T is passed on to groff and groff will use its -T option with the same option argument. So, read the groff manual about -T for more info. On systems using mandoc , the groff utility does not need to be installed for the above command to work since the mandoc utility (called by man ) would convert the manual to PDF by itself.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/444767", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/286001/" ] }