672,777
I'd need to filter through a list files (output of top command) for some info: Firstly I need to list files with a find Then, I need to grep for lines with a pattern Finally, I need to pickup one column I'm interested into. Seems I'm unable to bind the find with both grep and awk: My first attempt: find . -name "high-cpu.out" -exec grep "jboss" | awk '{ print $5 }' {} \;find: missing argument to `-exec'awk: fatal: cannot open file `{}' for reading (No such file or directory) Second attempt: find . -name "high-cpu.out" -exec grep "jboss" {} \ | awk '{ print $5 }';find: missing argument to `-exec' Any help?Thanks
I recommend using AWK to match the pattern instead of grep : find . -name "high-cpu.out" -exec awk '/jboss/ { print $5 }' {} + This also allows the use of the + variant of -exec , resulting in fewer awk invocations. When you tried find . -name "high-cpu.out" -exec grep "jboss" | awk '{ print $5 }' {} \; the shell processed the pipe first, not find . That led to find failing with “missing argument to ‘-exec’”: {} \; wasn’t included in the arguments passed to it (they were passed to awk ). Your second attempt failed because find couldn’t see the closing semi-colon since it wasn’t separated from the awk argument. In any case the pipe wouldn’t have worked, since find doesn’t delegate to the shell.
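If you prefer to keep grep and awk separate, a pipeline outside of -exec also works; this is a rough sketch assuming a grep with the -h option (GNU or BSD), which suppresses the per-file name prefix grep adds when it is handed several files at once:
find . -name "high-cpu.out" -exec grep -h "jboss" {} + | awk '{ print $5 }'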
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/672777", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80451/" ] }
672,871
Suppose I have a file direction with the lines eastnorthsouthwestsouth-west and using a loop and echo in a shell script I want to generate this output: Direction: eastDirection: northDirection: southDirection: westLast direction: south-west So in other words I want to do something different with the last line in the script.
bash can't detect the end of a file (without trying to read the next line and failing), but perl can with its eof function: $ perl -n -e 'print "Last " if eof; print "Direction: $_"' direction Direction: eastDirection: northDirection: southDirection: westLast Direction: south-west note: unlike echo in bash, the print statement in perl doesn't print a newline unless you either 1. explicitly tell it to by including \n in the string you're printing, or 2. are using perl's -l command-line option, or 3. if the string already contains a newline....as is the case with $_ - which is why you often need to chomp() it to get rid of the newline. BTW, in perl, $_ is the current input line. Or the default iterator (often called "the current thingy" probably because "dollarunderscore" is a bit of a mouthful) in any loop that doesn't specify an actual variable name. Many perl functions and operators use $_ as their default/implicit argument if one isn't provided. See man perlvar and search for $_ . sed can too - the $ address matches the last line of a file: $ sed -e 's/^/Direction: /; $s/^/Last /' direction Direction: eastDirection: northDirection: southDirection: westLast Direction: south-west The order of the sed rules is important. My first attempt did it the wrong way around (and printed "Direction: Last south-west"). This sed script always adds "Direction: " to the beginning of each line. On the last line ( $ ) it adds "Last " to the beginning of the line already modified by the previous statement.
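For completeness, the same idea works in plain awk by buffering one line, so the last line can be labelled differently; a minimal sketch assuming the file is named direction as in the question:
awk 'NR > 1 { print "Direction: " prev } { prev = $0 } END { print "Last direction: " prev }' direction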
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/672871", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117409/" ] }
672,996
If I give ls -1 I get like this, file_0001.jpegfile_0002.jpegfile_0003.jpegfile_0004.jpegfile_0005.jpegfile_0006.jpegfile_0007.jpegfile_0008.jpegfile_0009.jpegfile_0010.jpegfile_0011.jpegfile_0012.jpegfile_0013.jpegfile_0014.jpegfile_0015.jpegfile_0016.jpegfile_0017.jpegfile_0018.jpegfile_0019.jpegfile_0020.jpegfile_0021.jpeg...file_0999.jpeg Is there a way using awk or other tool to see if some file is missing in this consecutive incremental way.
You could use Awk to filter out the missing ones. On GNU Awk, with support for multi-char FS, you could pipe your result to awk -F'[_.]' ' $2 != prev+1 { print "file " prev+1 " missing" }{ prev = $2 }' Or using perl perl -F'[_.]' -ane 'if ($F[1] != $prev+1) {printf "file %d missing\n",$prev+1}; $prev=$F[1]' If there are more gaps anticipated, you could have awk print out the range of file numbers missing. Modifying the above awk -F'[_.]' ' $2 != prev+1 { print "file(s) " prev+1 "-" $2-1 " missing" }{ prev = $2 }' file
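Another approach, assuming GNU seq and a shell with process substitution (bash/zsh/ksh), is to generate the expected names and compare them against what exists; comm -23 then prints only the expected names that are absent. A sketch using the fixed file_NNNN.jpeg naming from the question:
comm -23 <(seq -f 'file_%04g.jpeg' 1 999) <(ls file_*.jpeg)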
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/672996", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22558/" ] }
673,011
CentOS 8 EOL date is 31st December 2021, whilst CentOS 7 EOL is 30th June 2024. This seems a very odd situation to me, where a later release is expiring before an earlier release. Can anyone explain this to me? I cannot be running EOL systems in 2022 and I don't want to migrate my CentOS 8 servers (which run nginx as a reverse proxy) to CentOS stream. I'm not too enthusiastic about alternatives such as Rocky Linux either. Perhaps it's an unusual step, but would downgrading to CentOS 7 be a better option to buy some time?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/673011", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90367/" ] }
673,153
in /etc/ssh/sshd_config , PAM is enabled by default on Debian 10: UsePAM yes In a situation when I don't want to allow login with password or kerberos, and only want to allow SSH key authentication, does it still have any advantage to enable PAM in sshd? Or, would it simplify the process and perhaps make it more secure, if UsePAM is set to no? What would be the practical effects of disabling PAM in sshd? Would I notice any difference?
Contrary to what the manpage (and another answer) claims, UsePAM yes not only allows you to run sshd as a non-root user, but also allows a sshd running as non-root user to perform password authentication (for the same user it's running as) via the setuid /sbin/unix_chkpwd program. The latter of which is quite unexpected. user$ /usr/sbin/sshd -f /dev/null -p 9009 -h ~/.ssh/id_rsauser$ /usr/sbin/sshd -f /dev/null -o UsePAM=yes -p 7007 -h ~/.ssh/id_rsauser$ ssh -p 7007 localhostThe authenticity of host '[localhost]:7007 ([::1]:7007)' can't be established....Password: <correct password>Linux deb11 5.10.0-8-amd64 #1 SMP Debian 5.10.46-4 (2021-08-03) x86_64...user$ <I'm logged in!>user$ ^DConnection to localhost closed.user$ ssh -p 9009 localhostThe authenticity of host '[localhost]:9009 ([::1]:9009)' can't be established....user@localhost's password:Permission denied, please try again.user@localhost's password:Permission denied, please try again.user@localhost's password:user@localhost: Permission denied (publickey,password,keyboard-interactive).user$
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/673153", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155832/" ] }
673,163
I'm using firejail to sandbox firefox. When I use lsof -i , there are no connections shown. Firejail does namespace isolation on the process, so i do this ps aux | grep firefox | awk ' { print $2} ' | while read p ; do nsenter -t $p lsof -i ; done to enter each namespace and to lsof -i. I've tried nsenter -t <pid> -n lsof -i as well but nothing appears. But this works when I lsof as root. Shouldn't a user be able to list open socket connections?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/673163", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/345639/" ] }
673,190
What is the correct way to update the sudoers file programmatically? Specifically: How can I add ,timestamp_timeout=600 to the end of the Defaults env_reset line in my sudoers files (to increase the sudo nag time to 10 hours), and doing this programmatically and without destroying the system (I tried this once and made my Linux system unbootable and had to reinstall). I have read that chmod 440 might be important for this. I understand that this is dangerous, I understand why it is protected, but these are my home systems where I have a script that runs through dozens of simple configuration changes (and I rebuild those systems fairly regularly also, so it would be useful to me to be able to automate this). I am most interested in how to do with this with standard Linux tools that I can put into a bash script, but I would be very interested to also see how this exact operation is done in Ansible so that I could roll out simple changes like this to all sudoers files on my home network. On this page there is a discussion on the sudoers file, but I don't quite understand the references to visudo -c -f ; I think what is being suggested there is: copy the sudoers file, then make changes to that copy, then visudo -c -f to check that the new file is valid, then overwrite sudoers , then chmod 440 on that new file, is that it? I'm not sure of the steps to implement this.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/673190", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/441685/" ] }
673,192
I mean the use of GNU env or BSD env command in the form of: env [name=value ...] [utility [args ...]] Looks like there is no way to escape special characters in value part but I am not pretty sure how env is implemented to parse the value part. I know there are many ways to do this by shell's feature but I want to pass literal string without shell's help (a bit like execute env by exec ). That is to say, I need to find some kind of literal string format with newline and supported by env command. For example: env FOO=LITERAL_STRING ruby -e 'puts ENV["FOO"]' Here the LITERAL_STRING should contain a literal string with newline and env should understand that format. With the above command, the expected output should be: helloworld I wonder if it is possible. I would appreciate for your help. Environment env I use BSD env so it can't print the version. Don't know if man can help: $ man env | tail -n1BSD April 17, 2008 BSD OS $ sw_versProductName: macOSProductVersion: 11.6BuildVersion: 20G165
Your understanding is incorrect. When you store the content "hello\nworld" into a variable, the \n is stored literally (a backslash followed by an n). Only if you invoke tools such as printf , or echo with the -e flag, are those backslash sequences expanded when printing to the console. In your case, you want to pass the variable to the environment with the newline character expanded; I suggest using ANSI C style quoting in bash for this: env FOO=$'hello\nworld' ruby -e 'puts ENV["FOO"]' Or, if portability is an issue, a trivial solution is to put a literal newline where you want it by breaking the assignment across lines: f="hello
world"
env FOO="$f" ruby -e 'puts ENV["FOO"]'
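Another portable option, assuming any POSIX shell, is to let printf produce the newline inside a command substitution (the substitution strips trailing newlines, but the embedded one survives):
env FOO="$(printf 'hello\nworld')" ruby -e 'puts ENV["FOO"]'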
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/673192", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223471/" ] }
673,212
In the light of emmc wearout monitoring I'm wondering about the size as displayed inside a running Linux OS. An emmc chip has an internal manager which keeps track of the usage intensity of all the different byte registers. In theory, a flash device like SSD, USB stick or emmc chip should shrink in capacity when the end of live limits are being reached. $ lsblk -b .. returns the blocksizes in bytes like this: If the capacity and therefore the size of the whole emmc image goes down, does this figure update itself automatically? Are there any other tools which can achieve a real time representation of the actual available blocksize? Edit: After the comments from @Marcus and @Artem An emmc driver has this virtual file system entry where an "End Of Life" indication has been implemented. If I'm not mistaken 0x02 stands for 80% loss of capacity size, 0x03 stands for 90% loss of capacity size. This is kind of late to realize your emmc is gone, so I'm searching for a way to indicate this crucial information at an earlier stage. look at : cat /sys/class/block/mmcblk1/device/pre_eol_info
No. A device with internal wear leveling like eMMC and some flash drives will not advertise their full capacity in the first place. So, a device may have 1536 blocks but shows only 1024 to the system. The 1024 blocks that can be accessed by the OS are never guaranteed to be the same blocks, they can be re-allocated and so on. So, the size shown to the OS cannot be used to determine the current end-of-life status.
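If the goal is an earlier warning than pre_eol_info, kernels that are new enough (roughly Linux 4.11+, for eMMC 5.0+ devices) also expose a life_time estimate alongside it; this is a sketch assuming the same device path as in the question, and the two hex values are wear estimates for the device's two flash areas in 10% steps:
cat /sys/class/block/mmcblk1/device/life_time
cat /sys/class/block/mmcblk1/device/pre_eol_info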
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/673212", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/477016/" ] }
673,591
There is this famous entry level programming exercise where you're asked to swap two variables. The obvious solution is to use a third, ephemeral variable. But if your language has something like tuples, you can write a very simple helper function that returns its arguments in reversed order: def swap(a: Int, b: Int): (Int, Int) = (b, a)val (two, one) = swap(1, 2) // => (2, 1) I wonder if the same is possible for file operations on Linux. If e.g. I have a configurations that I want to exchange depending on the situation, is there a command that takes two filenames and swaps their contents? For example, imagine file a.txt has the content "Hello" while b.txt reads "World". After calling what I'd call swap here, I would expect a.txt to contain "World" and b.txt "Hello". The issue is way harder to research than I originally thought because swap partitions dominate all search results.
There is a Linux-specific system call able to do this, at the kernel level. The relevant system call is renameat2() which is a Linux-specific extension to renameat() and can be used with an additional specific flag to address this question. It was added in Linux 3.15 and the glibc support in glibc 2.28. RENAME_EXCHANGE Atomically exchange oldpath and newpath . Both pathnames must exist butmay be of different types (e.g., one could be a non-empty directoryand the other a symbolic link). There might be further limitations on which filesystem can support this feature. This git search tells about support in various filesystems added since 2014: ext4, fuse, f2fs, shmem/tmpfs, xfs, gfs2, overlayfs, btrfs, ubifs, affs ... Without proper command to use this syscall, here's an example in Python that is very architecture specific (amd64/x86_64) where all symbols were resolved "by hand" (with SYS_renameat2 = 316 in this architecture etc.) to atomically swap files named a and b , with strace showing what would be done with the system call in C: $ echo Hello > a.txt; echo World > b.txt$ cat a.txtHello$ cat b.txtWorld$ strace -e trace=renameat2 python3 -c 'import ctypes; libc = ctypes.CDLL(None); libc.syscall(316, -100, b"a.txt", -100, b"b.txt", 2);'renameat2(AT_FDCWD, "a.txt", AT_FDCWD, "b.txt", RENAME_EXCHANGE) = 0+++ exited with 0 +++$ cat a.txtWorld$ cat b.txtHello Of course using a proper library would simplify this Python example.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/673591", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/424482/" ] }
673,604
I have a line logger Ok in my script. When I run it from command line with either of ./myscript.shsudo ./myscript.shsudo bash ./myscript.sh it writes in log Oct 17 22:32:01 d40688 mysqlf: Ok I.e. it knows my username and doesn't think I am root. While if I run this script from /var/spool/cron/root it writes Oct 17 22:32:01 d40688 root: Ok i.e. it thinks I am a root. How to simulate latter run from command line?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/673604", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28089/" ] }
673,641
Maybe I haven't had enough coffee yet today, but I can't remember or think of any reason why /proc/PID/cmdline should be world-readable - after all, /proc/PID/environ isn't. Making it readable only by the user (and maybe the group. and root, of course) would prevent casual exposure of passwords entered as command-line arguments. Sure, it would affect other users running ps and htop and the like - but that's a good thing, right? That would be the point of not making it world-readable.
I suspect the main, and perhaps only, reason is historical — /proc/.../cmdline was initially world-readable, so it remains that way for backwards compatibility. cmdline was added in 0.98.6, released on December 2, 1992, with mode 444; the changelog says - /proc filesystem extensions. Based on ideas (and some code) by Darren Senn, but mostly written by yours truly. More about that later. I don’t know when “later” was; as far as I can tell, Darren Senn’s ideas are lost in the mists of time. environ is an interesting counter-example to the backwards compatibility argument: it started out world-readable, but was made readable only by its owner in 1.1.85. I haven’t found the changelog for that so I don’t know what the reasoning was. The overall accessibility and visibility of /proc/${pid} (including /proc/${pid}/cmdline ) can be controlled using proc ’s hidepid mount option , which was added in version 3.3 of the kernel . The gid mount option can be used to give full access to a specific group, e.g. so that monitoring processes can still see everything without running as root.
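As an illustration of those mount options — the group name here is just an example — remounting /proc with hidepid=2 hides other users' /proc/PID entries entirely, while gid= exempts a chosen group:
mount -o remount,hidepid=2,gid=adm /proc
or persistently in /etc/fstab:
proc /proc proc defaults,hidepid=2,gid=adm 0 0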
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/673641", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7696/" ] }
673,679
I found lots of questions about how to rename multiple files using the command line. However I am not able to solve my specific issue which is renaming this file name: something_4M_something_something_manyothersomethings.csv into this: something_4_M_something_something_manyothersomethings.csv What I need is to split the 4M field into 4_M but I am not able to do it (notes: every something is separated by an underscore, there are many other fields, and I believe this is not important for the task). What I did is the following but it does not work as I am expecting I think it is a problem with the regexp, but I can't figure out a better one: rename -n 's/.4M/$&_4_M/' * Also, I don't know how exactly the thing I wrote is working, since I found something similar in a comment to one of the similar-to-this-questions but I can't find it anymore.
If all you want to do is really replace 4M by 4_M , then a variant of your regexp does this: $ lssomething_4M_something_something_manyothersomethings.csv$ rename -n 's/4M/4_M/' *'something_4M_something_something_manyothersomethings.csv' would be renamed to 'something_4_M_something_something_manyothersomethings.csv' The regexp works on anything inside the name, so you don't need to do something specific to keep the leading "something". If you want to do something else, like move the first "something" to the second "something", or restrict the 4M to the first occurrence after an underscore, please edit your question and clarify. As observed in the comments, -n does a dry-run, so it shows what would have happened, and doesn't actually perform the operation. As the example in the question was already using -n , I was under the assumption that this was understood by the person asking.
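If the Perl rename utility isn't available, a plain bash loop with parameter substitution does the same job; a sketch assuming bash, at least one matching file, and that only the first _4M_ in each name needs changing:
for f in *_4M_*; do mv -i -- "$f" "${f/_4M_/_4_M_}"; done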
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/673679", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/352134/" ] }
673,777
This is syntactically correct: for f in *bw;do echo $f;done But How would I add an extension to loop through one or the other? The following doesn't work: for f in *bw|*txt;do echo $f;done And this doesn't work either: for f in *bw or *txt;do echo $f;done
In Bash , with shopt -s extglob , and also zsh with KSH_GLOB enabled: for f in *.@(bw|txt) Note that in bash, if there are no matches, the loop will run with $f set to the literal string *.@(bw|txt) . To avoid this, in bash: shopt -s nullglobfor f in *.@(bw|txt) In zsh, by default, you'll get an error if there are no matches. To avoid this, add the N glob qualifier . for f in *.@(bw|txt)(N) In zsh , there's a simpler solution that works with default options, again with (N) to do nothing if there are no matches: for f in *.(bw|txt)(N) All of those will order all the entries alphabetically, with files with both extensions intermingled (that is, duplicate names that differ only in the extensions will (likely¹) be consecutive). You can list as many pipe-separated pattern entries within the parentheses as required, and they can include further globs (e.g. (zip|tar.?z) ). ¹ foo.bw foot.bw foot.txt foo.txt however would sort in that order in locales where . is ignored in first instance in the collation algorithm as is common these days (as footb or foott come before footx and after foobw ).
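If you'd rather avoid extglob entirely, simply listing both patterns also works in any POSIX-like shell; note the two groups are not intermingled alphabetically, and the -e test skips a pattern that matched nothing and was left literal:
for f in *.bw *.txt; do [ -e "$f" ] || continue; echo "$f"; done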
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/673777", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/440181/" ] }
673,982
I have a file that looks like that: chr1 3143567 3143568 .3-2704 1.000000|ENSMUSG00000102693.2chr1 3143599 3143600 .3-2705 1.000000|ENSMUSG00000102693.2chr1 3143631 3143632 .3-2706 1.000000|ENSMUSG00000102693.2chr1 3143663 3143664 .3-2707 1.000000|ENSMUSG00000102693.2chr1 3143695 3143696 .3-2708 1.000000|ENSMUSG00000102693.2chr1 3143727 3143728 .3-2709 1.000000|ENSMUSG00000102693.2 I'm writing 2 sed expressions to filter everything before the | first and with result file I discard everything after the . like so: sed -n -e 's/^.*|//p' original_file.txt > first_result.txt sed -n -e 's/\..*//p' first_result.txt > final_result.txt How can I write all of that in one line ? The end goal is to capture ENSMUSG00000102693
Your commands would discard lines containing no | character, and lines where the mouse gene identifier has no version number. I'm not certain this is intended, but it's a side effect of using sed -n with the p flag on the s command. I'm going to assume that this is unintended. Just use two expressions with sed : sed -e 's/.*|//' -e 's/\..*//' file >newfile With a grep command that has the non-standard -o option, and assuming that you just want to extract all Ensembl mouse gene stable IDs from the file (and that the file only contains stable IDs that you'd like to extract), grep -o 'ENSMUSG[[:digit:]]*' file >newfile You may also use two chained cut commands, each one doing similar modifications of the data as the two sed substitutions earlier in this answer. Using static cut would probably be quicker than using a regular expression, but I doubt you'd see any major speed differences unless your input data is huge. cut -d '|' -f 2 file | cut -d '.' -f 1 >newfile
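If you do want a single sed expression, a capture group can do both steps at once; a sketch that assumes every line of interest looks like the sample data (an ENSMUSG id followed by a version number after the |), using the file names from the question:
sed -E 's/.*\|(ENSMUSG[0-9]+)\..*/\1/' original_file.txt > final_result.txt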
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/673982", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/440181/" ] }
674,181
I'm using the following command to make a full, recursive copyof the contents of one directory into another : rsync -avzhe /path1/to1/dir1/* /path2/to2/dir2/ It works as I expect it to, except that the first file (in alphabetical order), be it a directory or a file, is not copied. Every other file gets copied. Why ? My OS is MacOS 10.14.6 if that matters.
To make a full copy of the directory /path1/to1/dir1 called /path2/to2/dir2 , use rsync -av /path1/to1/dir1/ /path2/to2/dir2 There is no point in using compression ( -z ) for a local copy. The -e option specifies the command used to establish the network connection (which is why your command fails in copying the first file; rsync uses it as the option-argument to the -e option), so that should be removed too in this scenario. Also, don't use * at the end of the source path, as that would ordinarily not match any hidden names. Globbing all names under dir1 could potentially also expand to a list too long for the command to execute at all. Just make sure that the source path ends with a slash. A slash at the end of the destination path makes no difference. Removing the slash from the source path would copy dir1 inside dir2 . A slash at the end of the source path makes dir2 a copy of dir1 . Other than that, you may want to use -H to make sure hard links are established correctly at the destination and --sparse if you know you are copying files that may be sparse (like pre-allocated disk images). Use --delete to also delete entries from the destination that are not part of the source file hierarchy. Use this with caution. You may want to test run with -n ( --dry-run ) first.
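Putting those options together, a cautious way to run this is a dry run first and then the real copy; just a sketch using the paths from the question:
rsync -aHvn --delete /path1/to1/dir1/ /path2/to2/dir2/
rsync -aHv --delete /path1/to1/dir1/ /path2/to2/dir2/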
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/674181", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/133175/" ] }
674,185
I have a dataset which contains contact information of students, the sample data set is as follows First Name, Last Name, Address, Phone NumberJohn, Doe, "House # 11, Street xyz, Road, Area",00000000Sara, Taylor, "Jake Lake%, Apartment #22, Main Road, Area XYZ", 00000000 I am running the following command to replace , inside Address column to | to load it into the DB. awk '!(NR%2){gsub(",","|")} {printf RFS $0} {RFS="\""}' RS=\" fileName.txt > output.txt The issue I am facing is the whenever I ran this command it returns me the following error, Initially it was running ok awk: run time error: not enough arguments passed to printf(""Jake Lake%, Apartment #22, Main Road, Area XYZ") Is there any solution to that? I noticed that % is coming in the address is that the issue?
For robustness, never do printf $0 , always use printf "%s", $0 instead as the former will fail when your input contains printf formatting characters (as you are currently seeing). The same applies to using printf with any input data. For clarity and robustness, never use all-upper-case variable names, e.g. RFS to avoid clashes with builtin variable names and to avoid obfuscating your code by making it look like you're using a built-in variable when you aren't. For readability, don't set variables, e.g. RS , after your script unless you need to set them to different values for different input files, set variables before or at the start of your script so when reading your script we see them being set before we see them being used. For efficiency, simplicity, robustness, the first argument to *sub() is a regexp, not a string, so use regexp ( /.../ ), not string ( "..." ) delimiters around it unless you NEED a dynamic instead of static regexp for some reason. For clarity and maintainability, when you have 2 variables that must have the same value, e.g. RS and RFS , don't set them separately to the same value, e.g. RS="\""; RFS="\"" , either set them together to that value, e.g. RS=RFS="\"" or set one to the other, e.g. RS="\""; RFS=RS . This is how to write the code in your question correctly: $ awk -v RS='"' '!(NR%2){gsub(/,/,"|")} {printf "%s%s", rfs, $0; rfs=RS}' fileFirst Name, Last Name, Address, Phone NumberJohn, Doe, "House # 11| Street xyz| Road| Area",00000000Sara, Taylor, "Jake Lake%| Apartment #22| Main Road| Area XYZ", 00000000 To do any more than that with a CSV using awk, see whats-the-most-robust-way-to-efficiently-parse-csv-using-awk .
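A quick way to see that failure mode in isolation — the input string here is made up for illustration — is to feed printf a line that happens to contain a format specifier:
echo '100%s of rows' | awk '{ printf $0 }'         # fails: %s has no argument to consume
echo '100%s of rows' | awk '{ printf "%s\n", $0 }' # prints the line literally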
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/674185", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/470189/" ] }
674,207
I found this interesting command: grep -v '^>' test.fasta | tr -d '\n' | sed -e 's/\(.\)/\1\n/g' | sort | uniq -c | sort -rn I have some grasp what it means (it counts letters from a text file), but my question is about this: sed -e 's/\(.\)/\1\n/g' I know that it is compose of three substitute commands. One is to substitute new lines ( \n ), one that matches any characters except newlines ( \(.\) ), but I am lost at /\1\ ?
The command sed -e 's/\(.\)/\1\n/g' is a single GNU sed substitution command that replaces every character with itself, followed by a newline character. The effect of this is to fold the input into a single column of single characters. $ echo hello | sed -e 's/\(.\)/\1\n/g'hello The \(.\) is a "capture group", capturing a single character. The \1 is a "back-reference" to the first capture group. Using \1 in the replacement text would insert whatever was captured by the first parentheses. It could also be written without so many backslashes as sed 's/./&\n/g' where & simply means "whatever was matched by the expression". The sed command requires GNU sed as standard sed can't insert newlines with \n like that. To do that more efficiently with standard tools, use fold -w 1 instead. This is more efficient as no regular expression matching is needed for each character in the input. Using fold , your pipeline could be written grep -v '^>' file | tr -d '\n' | fold -w 1 | sort | uniq -c | sort -rn Alternatively, using awk to get rid of a few steps of that pipeline, awk '!/^>/ { for (i = 1; i <= length; ++i) count[substr($0,i,1)]++ } END { for (ch in count) print count[ch], ch }' file |sort -rn The awk code counts the number of times each character has been seen. It does that by incrementing the value in the array count corresponding to each character in the input stream. At the end of input, a summary of the counts and characters counted are outputted.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/674207", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/380953/" ] }
674,227
I am trying to understand shell expansions in bash (GNU bash, version 4.4.20(1)-release (i686-pc-linux-gnu)). Typing in my interactive bash shell x='$(id)'$x$(echo $x) I was expecting from any of last 2 lines an error of the form bash: uid=xxx(user): command not found but got bash: $(id): command not found . I don't undestand why command substitution does not occur here. Shouldn't it be realised after variable expansion ? My guess is that it has to do with Shell operations as described here https://www.gnu.org/savannah-checkouts/gnu/bash/manual/bash.html#Shell-Operation Can someone explains this behavior ? I am just interested in understanding more precisely bash expansions. I am not interested in running actual script in my question.
$(…) is a command substitution (“process substitution” is <(…) and the like). Variable substitutions and command substitutions occur in the same pass, from left to right in the string. The only things that occur on the result of these substitutions are word splitting and globbing. So x='$(id)' sets x to the 5-character string $(id) . Then, to run $x , the shell replaces $x by the value $(id) . This does not contain any whitespace or globbing character so it is treated as a command name. Contrast with: x='@(id)'shopt -s extglobecho /none/$x /usr/bin/$x Assuming that the file /none/id doesn't exist but /usr/bin/id does, the echo command expands to three words: echo (obviously), /none/@(id) (the glob pattern /none/@(id) doesn't match anything so it's left unchanged), and /usr/bin/id (the glob pattern /usr/bin/@(id) matches one file, so it's replaced by the one-element list of matches). In the bash manual, the relevant sentence is at the beginning of the Shell Expansions section. The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, arithmetic expansion, and command substitution (done in a left-to-right fashion); word splitting; and filename expansion. Everything between two semicolons is one pass. Each pass works on the result of the previous pass. Beware that a single sentence (even a complex one like the one I cited above) can't tell the whole story. Shell semantics is convoluted. I doubt that any shell's manual has the details of all the corner cases. The POSIX specification is more formal but doesn't cover bash-specific extensions and even it leaves some really odd cases undefined.
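If you really do want the stored $(id) to be expanded and run, you have to explicitly request another round of evaluation, e.g. with eval — shown only as an illustration, since eval executes whatever the variable contains and is risky with untrusted data:
x='$(id)'
eval "$x"    # runs id, then tries to run its output as a command, giving the "uid=...: command not found" error the question expected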
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/674227", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/201090/" ] }
674,234
Hi I'm currently trying to execute the following code. I have stored a string inside the variable DATE which is in the format YYYY/MM/DD. Im trying to extract the year by using the cut command. I receive an error stating it is not a file or directory. Is there a modification I could make or a different way of doing it? for file in ~/filesToSort/*do DATE=$(head -1 $file | tr "-" "/") echo "${DATE}" YYYY=$(cut -c1-4 accounts $DATE) #echo "${YYYY}"done Thanks
The cut utility reads data from its standard input stream, it does not operate on strings given as arguments. To use cut , therefore, you need to pass the data on standard input: YYYY=$( printf '%s\n' "$DATE" | cut -d '/' -f 1 ) However, that would be very slow in a loop. Instead, use a built-in parameter substitution to delete everything after the first / in the $DATE string: YYYY=${DATE%%/*} This removes the longest suffix string from $DATE that matches the shell pattern /* . If the string is 2021/10/21 , then this returns 2021 . To get the first four characters of every file in a directory (which is the essence of what I believe your current code is attempting to do), you could use sed like so: for name in "$HOME"/filesToSort/*; do sed -e 's/\(....\).*/\1/' -e q "$name"done This reads in the first line of each file, replaces the contents of the line with the first four characters of the line, and then quits after outputting the result to the terminal.
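A quick interactive check of that parameter expansion, using the example value from above:
DATE=2021/10/21
echo "${DATE%%/*}"    # prints 2021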
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/674234", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/497923/" ] }
674,267
Hi I am trying to loop through a directory and all its subdirectories to find the string 'foo' in all the files, for example. I then want to display the file names (including full path) of all files which contain it. The operating system is unix and using bash shell. Any suggestions would be appreciated.Thanks
You can use a recursive grep . In GNU grep , at least, that's the -R option: grep -R foo /path/to/parent/directory If you use a full path, as above, for example /home/callsign223/foo/ then all files will be shown with their qualified paths. And if you only want to see the file names, not the matching lines, you can use -l to only print the file name: grep -lR foo /path/to/parent/directory Note that the -R option is not portable and isn't supported by all grep implementations.
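On systems whose grep lacks -R, a portable equivalent is to let find do the recursion and grep do the matching; -l again prints just the file names:
find /path/to/parent/directory -type f -exec grep -l foo {} +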
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/674267", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/497923/" ] }
674,384
I have a flat file which has a phone number in field starting at position 314 till 323. Now I wanted to dummy out that field with 1234567890 . For this I tried using the below commands and both are throwing error: awk '{var=substr($0,314,10);gsub("[0-9]","1234567890",$var); print}' final_phone.txt >final_phone.txt1 fatal: grow_fields_arr: fields_arr: can't allocate 9849885432 bytes of memory (Cannot allocate memory) In second case awk 'var=substr($0,314,10) { var = "1234567890" }1' final_phone.txt >final_phone.txt1 This worked but the values didn't change. The output remained the same. Can someone help me with the syntax here? In the first case I tried to assign the substring to a variable and in gsub() I wanted to check for numbers pattern and substitute with 1234567890 . can someone help me with this
You need to print two substrings, one part before that position and the other part after it, something like: $ awk -v dummy='0123456789' -v start=314 -v len=10 '{ print substr($0, 1, start-1) dummy substr($0, start+len) }' infile >outfile testing: $ awk -v dummy='0123456789' -v start=4 -v len=10 '{ print substr($0, 1, start-1) dummy substr($0, start+len) }' <<<'0009876543210999'0000123456789999 Issues with your command: you are using $var instead of var as the third argument to gsub() ; as a result, gsub() operates on the field whose number is the value of var , a 10-digit field number, so awk tries to gsub() on that field #xxxxxxxxxx and fails to allocate memory while re-evaluating that huge number of fields (using any field other than $0 as the third argument to gsub() forces awk to rebuild the record on the default OFS). Even with issue #1 fixed, you would then be replacing every single digit in var with the string 1234567890 , which is not what you want. Finally, your plain print outputs the current line without changes, since you never actually update it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/674384", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/215445/" ] }
674,450
Problem I want to copy the output of tldr to clipboard, and then paste that to text editor. I execute: tldr pwd | xclip -sel clip When I paste from clipboard, I get: pwd[0mPrint name of current/working directory.More information: https://www.gnu.org/software/coreutils/pwd. - [23;22;24;25;32mPrint the current directory:[23;22;24;25;33m pwd[0m - [23;22;24;25;32mPrint the current directory, and resolve all symlinks (i.e. show the "physical" path):[23;22;24;25;33m pwd -P[0m[0m I want to get rid of timestamps and also want to know why this is happening. Observation tldr pwd (without passing into xclip) doesn't display timestamps man pwd | xclip -sel clip doesn't include timestamps when pasted So, only when passing tldr to xclip I find this happening The timestamps looks like escape codes Environment Static hostname: debian Icon name: computer-desktop Chassis: desktop Operating System: Debian GNU/Linux 10 (buster) Kernel: Linux 4.19.0-17-amd64 Architecture: x86-64
Those are not timestamps. They are colour-codes. According to the v0.91 Changelog , tldr merged a feature to disable colours in July 2021, either by setting a NO_COLOR environment variable or using a new --no-color command-line option. Unfortunately, v0.91 of tldr is much newer than the version currently in Debian (0.6.4)....so, either uninstall the Debian package and compile/install it yourself(*) or submit a bug report asking for the new version to be packaged. Or both. That's the long-term solution. In the short-term, using sed or something to remove the colour codes from the output (as in @GMaster's answer) is probably the best you do. (*) I wouldn't normally suggest switching from a packaged version of a program to a self-compiled version (because doing that is likely to cause compatibility problems or issues with upgrading in future), but hard-coded colour codes that can't be disabled are a UI abomination.
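As a stop-gap, the escape sequences can be stripped in the pipeline before xclip; a sketch assuming GNU sed (for the \x1b escape), which removes only the colour (SGR) sequences:
tldr pwd | sed 's/\x1b\[[0-9;]*m//g' | xclip -sel clip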
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/674450", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/496102/" ] }
674,540
I have following problem. I am new to Linux and went with Mint as my distro. On windows I always used Chrome as my go to browser because I do a lot of webdev and like to have the best support for browser features so I also went for Google Chrome here on Linux. I found out that when I maximize the Screen of Chrome that the Window goes over the bounds of the screen. I couldn't replicate that behavior with any other application so it could be some issue with Chrome. Does Chrome use GTK2 because it also overrides the Style of the installed theme? I changed some dotfiles in my user directory and changed the default compositor to compton. There I just applied some settings to get a blur effect and transparent windows nothing too special. This is my system specs output from neofetch: OS: Linux Mint 20.2 x86_64 Host: 20287 Lenovo IdeaPad Z510 Kernel: 5.4.0-89-generic Uptime: 10 hours, 50 mins Packages: 2014 (dpkg) Shell: bash 5.0.17 Resolution: 1920x1080 DE: Xfce WM: Xfwm4 WM Theme: Sweet-Dark Theme: Sweet-Dark [GTK2/3] Icons: ePapirus [GTK2/3] Terminal: xfce4-terminal Terminal Font: Monospace 10 CPU: Intel i5-4200M (4) @ 3.100GHz GPU: NVIDIA GeForce GT 740M GPU: Intel 4th Gen Core Processor Memory: 1984MiB / 7719MiB As you can see in the screenshot the window decorations and scrollbar are outside of the screen which is quite annoying. So my questions is if this could be due to some weird bug on chromes side or maybe due to some misconfigs on my side?
UPDATE Seems like it is fixed with this version: Version 96.0.4664.45 (Official Build) (64-bit) seems like it is a bug within chrome. since no other window on my machine has this behavior and it just appeared after updating chrome.. found an open bug, too: https://bugs.chromium.org/p/chromium/issues/detail?id=1261797&q=maximize&can=2 hope it gets fixed soon! ;)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/674540", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/498197/" ] }
674,704
The so-called "standard streams" in Linux are stdin,stdout,stderr. But they must be called "standard" for a reason. Are there non-standard streams? And are these non-standard streams fundamentally treated differently by the kernel?
In this context, a “stream” is an open file in a process. (The word “stream” can have other meanings that are off-topic here.) The three standard streams are the ones that are supposed to be already open when a program starts. File descriptor 0 is called standard input because that's where a program is supposed to read user input or its default data input. File descriptor 1 is called standard output because that's where a program is supposed to write its normal data output. File descriptor 2 is called standard error because that's where a program is supposed to write its error messages. Other file descriptor numbers are not standard anything because they don't have such a preassigned role. They'll end up being used for whatever the program wants. So you could call any file opened by a program a “nonstandard stream”, but it would be weird and confusing: “open file other than stdin, stdout or stderr” doesn't really need a name, and “nonstandard stream” sounds like it's some special type of file or a file opened by a nonstandard method, which is not the case. The conventional role of file descriptors 0–2 is granted by the standard library and by certain programs. For example, console login programs and terminal emulators start the shell (or other program) with the terminal open on these file descriptors. The C standard library creates FILE* objects (what C calls streams) for these three standard descriptors. There's no special treatment in the kernel.
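To make the "no preassigned role" point concrete, a shell can open any other descriptor and use it however it likes; a small sketch with an arbitrary, made-up log path:
exec 3> /tmp/example.log   # open fd 3 for writing (path is hypothetical)
echo "normal output"       # goes to fd 1, standard output
echo "a log line" >&3      # goes to fd 3, which has no conventional meaning
exec 3>&-                  # close fd 3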
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/674704", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/495520/" ] }
674,823
It seems that clear and bash Ctrl - L are quite different; clear completely removes all previous terminal information (so you cannot scroll up) while Ctrl - L just scrolls the screen so that the cursor is at the top of the page so that you can still scroll up and see previous information. I much prefer the Ctrl - L system. Is there a way to override clear so that it does a Ctrl - L instead of wiping all previous terminal information? This is not a huge issue, but I'm just wondering out of curiosity if there is a way to alias clear to point at my preferred Ctrl - L functionality. As a side note, I just noticed that PowerShell also has a binding for Ctrl - L and it performs the same way as Ctrl - L on bash; it seems that the PowerShell designers there took a lot from bash, while cmd.exe consoles do not have this functionality.
Is there a way to override clear so that it does a Ctrl-L instead of wiping all previous terminal information? alias clear='tput -x clear' Yes, Ctrl-L in bash (while in set -o emacs mode) does exactly the same thing. Or you can just hardwire the escape with alias clear='printf "\033[H\033[2J"' which should work in most terminal emulators, and does not assume that you have ncurses or bash installed. NB: the clear applet from busybox does NOT wipe off the scrollback buffer, so you don't have to do anything special if you're using some busybox-based system, as most embedded Linux systems are.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/674823", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/441685/" ] }
674,938
I'm trying to use find to return all file names that have a specific directory in their path, but don't have another specific directory anywhere in the file path. Something like: myRegex= <regex> targetDir= <source directory>find $targetDir -regex $myRegex -print I know I might also be able to do this by piping one find command into another, but I would like to know how to do this with a single regular expression. For example, I want every file that has the directory "good" in it's path, but doesn't have the directory "bad" anywhere in its path no matter the combination. Some examples: /good/file_I_want.txt #Captured/good/bad/file_I_dont_want.txt #Not captured/dir1/good/file_I_want.txt #Captured/dir2/good/bad/file_I_dont_want.txt #Not captured/dir1/good/dir2/file_I_want.txt #Captured/dir1/good/dir2/bad/file_I_want.txt #Not captured/bad/dir1/good/file_I_dont_want.txt #Not captured Keep in mind some file names might contain "good" or "bad", but I only want to account for directory names. /good/bad.txt #Captured/bad/good.txt #Not captured My research suggests I should use a Negative Lookahead and a Negative Lookbehind. However, nothing I have tried has worked so far. Some help would be appreciated. Thanks.
As Inian said, you don't need -regex (which is non standard, and the syntax varies greatly between the implementations that do support -regex ¹). You can use -path for that, but you can also tell find not to enter directories called bad , which would be more efficient than discovering every file in them for later filtering them out with -path : LC_ALL=C find . -name bad -prune -o -path '*/good/*.txt' -type f -print ( LC_ALL=C so find 's * wildcard doesn't choke on filenames with sequence of bytes not forming valid characters in the locale). Or for more than one folder name: LC_ALL=C find . '(' -name bad -o -name worse ')' -prune -o \ '(' -path '*/good/*' -o -path '*/better/*' ')' -name '*.txt' -type f -print With zsh , you can also do: set -o extendedglob # best in ~/.zshrcprint -rC1 -- (^bad/)#*.txt~^*/good/*(ND.) print -rC1 -- (^(bad|worse)/)#*.txt~^*/(good|better)/*(ND.) Or for the lists in arrays: good=(good better best)bad=(bad worse worst)print -rC1 -- (^(${(~j[|])bad})/)#*.txt~^*/(${(~j[|])good})/*(ND.) To not descend into dirs called bad , or (less efficient like with -path '*/good/*' ! -path '*/bad/*' ): print -rC1 -- **/*.txt~*/bad/*~^*/good/*(ND.) In zsh -o extendedglob , ~ is the except (and-not) globbing operator while ^ is the negation operator and # is 0-or-more-of-the-preceding-thing like regexp * . ${(~j[|])array} joins the elements of the array with | , with that | being treated as a glob operator instead of a literal | with ~ . In zsh , you'd be able to use PCRE matching after set -o rematchpcre : set -o rematchpcreregex='^(?!.*/bad/).*/good/.*\.txt\Z'print -rC1 -- **/*(ND.e['[[ $REPLY =~ $regex ]]']) But that evaluation of shell code for every file (including those in bad directories) is likely to make it a lot slower than other solutions. Also beware that PCRE (contrary to zsh globs) would choke on sequences of bytes that don't form valid characters in the locale, and doesn't support multi-byte charsets other than UTF-8. Fixing the locale to C like for find above would address both for this particular pattern. If you'd rather [[ =~ ]] only does extended regexp matching like in bash , you can also instead just load the pcre module ( zmodload zsh/pcre ) and use [[ -pcre-match ]] instead of [[ =~ ]] to do PCRE matching. Or you could do the filtering with grep -zP (assuming GNU grep or compatible): regex='^(?!.*/bad/).*/good/.*\.txt\Z'find . -type f -print0 | LC_ALL=C grep -zPe "$regex" | tr '\0' '\n' (though find still discovers all files in all bad directories). Replace tr '\0' '\n' with xargs -r0 cmd if you need to do anything with those files (other than printing them one per line). ¹ In any case, I don't know any find implementation that supports perl-like or vim-like regular expressions which you'd need for look-around operators.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/674938", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/425126/" ] }
675,008
I'm not new in Linux world but I never liked Bash. Generally for me programming is an art and I'm very sensitive to the language syntax. When I have to work with bash, it feels like I'm writing in Stone Age people language. Maybe Fish is not perfect shell, but it is a way better than Bash and I want to switch to it completely, but something tells me that removing Bash from the system is not a great idea. So I decided to ask this crazy question, even though I have little faith that anyone will have an answer - is there existed some usable Linux distro that have no bash out of the box?
Nothing prevents you from changing your shell and forgetting about bash existence altogether. chsh --shell /usr/bin/fish# orusermod --shell /usr/bin/fish luarocks It looks like you don't like its presence in principle but there could be many things in life which we don't like yet manage to disregard without going mad. AFAIK there are no major well-supported distros with a large audience which don't use bash. If you really want to deal with less supported ones, you could even use the ones which come with busybox .
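One practical note: on most distributions, chsh refuses (for non-root users) a shell that isn't listed in /etc/shells, so it may be worth checking first; the fish path below is the one from the commands above and may differ on your system:
grep -x /usr/bin/fish /etc/shells || echo /usr/bin/fish | sudo tee -a /etc/shells
chsh --shell /usr/bin/fish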
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/498751/" ] }
675,015
What is utility tool to detect and give info of compression method (preferably more info) in any mean (CL or GUI) ? IOW what windows 7z tool equivalent for Linux, giving info easily in GUI?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675015", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
675,165
find is always a complete mystery to me whenever I use it; I just want to exclude everything under /mnt (I am in bash on Ubuntu 20.04 on WSL so don't want it to search in the Windows space) from my search, but find just blunders into those directories completely ignoring me. I found syntax from this page. https://stackoverflow.com/questions/4210042/how-to-exclude-a-directory-in-find-command and tried all variations - all failed. sudo find / -name 'git-credential-manager*' -not -path '/mnt/*'sudo find / -name 'git-credential-manager*' ! -path '/mnt/*'sudo find / -name 'git-credential-manager*' ! -path '*/mnt/*' When I do this, it just blunders into /mnt and throws errors (which is really frustrating as the syntax above looks clear, and the stackoverflow page syntax seems correct): find: ‘/mnt/d/$RECYCLE.BIN/New folder’: Permission deniedfind: ‘/mnt/d/$RECYCLE.BIN/S-1-5-18’: Permission denied Can someone show me how to stop find from ignoring my directory exclusion switches?
Find's -not -path doesn't exclude directories from the search; it only stops matching names from being reported, so find still descends into /mnt and searches it (hence the permission errors). What you want is -prune (from man find ): -prune True; if the file is a directory, do not descend into it. If -depth is given, then -prune has no effect. Because -delete implies -depth, you cannot usefully use -prune and -delete together. For example, to skip the directory src/emacs and all files and directories under it, and print the names of the other files found, do something like this: find . -path ./src/emacs -prune -o -print So, you want: sudo find / -path /mnt -prune -o -name 'git-credential-manager*' -print Although, based on what you're trying to exclude, it might be easier to use -mount (GNU find ) or -xdev (others): From man find : -mount Don't descend directories on other filesystems. An alternate name for -xdev , for compatibility with some other versions of find. So: sudo find / -mount -name 'git-credential-manager*'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675165", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/441685/" ] }
675,168
Warning : k8s greenhorn on this side. I need to run a task that will be set up in a k8s cronjob. I need it to run every 45 minutes. Having this in the schedule does not work: 0/45 * * * * Because it would run at X:00 , then X:45 then X+1:00 instead of X+1:30 . So I might need to set up multiple schedule rules instead: 0,45 0/3 * * *30 1/3 * * *15 2/3 * * * I am wondering if it's possible to set up multiple schedules in a single CronJob definition or if I will have to setup multiple CronJobs so that each CronJob takes care of each line. https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/cron-job-v1/ Update : I just read that it's possible to have more than a single manifest written in a single yaml file so it might work with 3 manifests.... but knowing if it's possible with a single manifest would be awesome.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675168", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/236478/" ] }
675,173
I'm trying to do a query and store every row result in an array element in ksh (maybe bash).I do: result=($($PATH_UTI/querysh "set heading offset feedback offSELECT columnA,columnb FROM user.comunication;")) I have that: row1 = HOUSE CARrow2 = DOC CATecho "${result[1]}" and it gives me HOUSE But I would like to get: echo "${result[1]}" gives: "HOUSE CAR"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675173", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182248/" ] }
675,237
I'm working on a custom ZSH prompt and I want to repeat a char n times in a string (such as spaces for padding). This string is printed with print -rP (the -r flag ignores echo escape conventions and the -P flag performs prompt expansions). I have working code using some kind of string substitution, but I don't know how it works. For some reason I have to multiply the number of characters I want to print by two which feels like a hack. $ n=3$ c='a'$ print -rP "${(l:$n::$c:)}" # why doesn't this work?ca$ print -rP "${(l:(( $n * 2 ))::$c:)}" # but this does?aaa So, 1) why does this work when multiplied by two, and 2) what's the correct syntax to repeat a char within a string?
1) why does this work when multiplied by two, The expansion "${(l:3::$c:)}" expands to c$c whereas "${(l:3*2::$c:)}" expands to $c$c$c . If the option PROMPT_SUBST is set and this string used as part of a prompt string, it is evaluated for parameter expansion, command substitution and arithmetic expansion. So if c=a , then c$c becomes ca and $c$c$c becomes aaa . Test with XTRACE set: $ n=3 c=a zsh -o PROMPT_SUBST -xc 'print -rP -- "${(l:n::$c:)}"'+zsh:1> print -rP -- 'c$c'ca$ n=3 c=a zsh -o PROMPT_SUBST -xc 'print -rP -- "${(l:n*2::$c:)}"'+zsh:1> print -rP -- '$c$c$c'aaa and 2) what's the correct syntax to repeat a char within a string? The l parameter expansion flag can be used in the same way you are already using it. However, the p flag should be used to allow $c as the string argument to be taken as the value of the variable c prior to padding (thanks @StéphaneChazelas for pointing this out). $ n=3 c=a zsh -xc 'print -r -- "${(pl:n::$c:)}"'+zsh:1> print -r -- aaaaaa Note that this is the only form of parameter expansion accepted by this construct, as per man zshexpn (in the section about Parameter Expansion Flags): p Recognize the same escape sequences as the print builtin in string arguments to any of the flags described below that follow this argument. Alternatively, with this option string arguments may be in the form $var in which case the value of the variable is substituted. Note this form is strict; the string argument does not undergo general parameter expansion.
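For instance, a small sketch of where this padding idiom is handy in practice (assuming an interactive zsh, where the shell keeps the COLUMNS variable up to date):
    # pad a label to a fixed width of 20 with dots
    s='section'
    print -r -- "${(pl:20::.:)s}"       # prints .............section
    # draw a rule across the whole terminal width
    print -r -- "${(pl:COLUMNS::-:)}"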
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675237", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/498974/" ] }
675,239
After upgrading from PulseAudio to PipeWire my sound devices now feature the "Pro Audio" profile however I've Googled for it and haven't found anything interesting. You can find it by running PulseAudio Volume Control and see it under the Configuration tab for your devices. Would be nice if someone could, I don't know, glance over PipeWire sources (I'm not a C programmer per se and I don't really understand digital audio aside from the very basics) and explain what it is and why the user may want to use it instead of e.g. something which is offered by default.
The Pro Audio profile provides "raw device access with themaximum number of channels and no mixer controls" (from the release notes with the feature). Based on the code creating this profile , it looks like it adds direct mappings from each PCM device provided by ALSA to a corresponding input or output channel in PipeWire. This is in contrast with higher-level options such as the ALSA Use Case Manager , which would associate some of these channels to particular combination of verb and device type (e.g. "Voice Call" and "Mic", respectively). The main reason someone might want to use the Pro Audio profile is to access all the channels of interfaces with more than a single stereo input/output; for example, a USB mixer with 8 channels, which may not all be usable through the default profile. By using Pro Audio , these extra channels could be connected to various other applications with PipeWire's graph architecture . Here's an additional source describing the use of PipeWire for professional audio work, showing that not all channels are available by default.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/675239", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/260833/" ] }
675,248
I know there is a shortcut to copy and paste contents in a Linux terminal. In addition, you can scroll through the contents of a terminal window by using Shift + PgUp and Shift + PgDown . But is there any shortcut to select text or highlight text in a terminal without using a mouse? Unfortunately, I could not find an answer to this question; that is why I am asking here. To clarify, I wanted to know about a keyboard command that will scroll through the terminal contents or output in the terminal. And it does not have to be a Gnome terminal; it should be some universal command for all kinds of the terminal. Like selecting the ID of a docker container after building the image.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/675248", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/383131/" ] }
675,521
I have a systemd service running under a specific user. I erroneously assumed that the service would have access to the environment variables all users inherit from scripts/exports under /etc/profile.d Is there a way to accomplish this without having to manually copy the variables in systemd unit file definition. For example, I have the following $ cat /etc/profile.d/somexportsexport VAR1=VALUE1export VAR2=VALUE2 Can this be passed / exported to a systemd service?
There are a few possible sources of environment: Using Environment= which lets you set variables Using EnvironmentFile= which lets you load values from a file Using PassEnvironment= which lets you define variables which should be passed from PID 1. Static configuration (e.g. $USER ) It might sound like EnvironmentFile=/etc/profile.d/somexports is what you want, but that's not the case. /etc/profile.d/* is often sourced by your shell and can be parsed by your shell. systemd is shell agnostic and so it will not rely on bash syntax. The EnvironmentFile should contain newline-separated variable assignments, which is much stricter. systemd 's design discourages dynamically changing units or their environments. Even the EnvironmentFile= option was only added as a result of pressure and was later considered to be a mistake by systemd 's developers. One example of this design is that $PATH does not affect which binaries are used. This keeps things more deterministic: when you define a unit, you are defining everything about how that unit should run without worrying about external influence. So the short answer is: no, you cannot load /etc/profile.d/* into systemd and that's intentional. But the answer you probably want is: yes, you can load it. You just need to run your application through a login shell. You can do that by changing: ExecStart=/usr/bin/myservice To ExecStart=/usr/bin/bash -lc myservice (the -l makes bash a login shell, so it sources /etc/profile and, through it, /etc/profile.d/* ; a plain bash -c would not read those files). That will cause bash to be the parent process, which loads /etc/profile.d/ and forwards that environment to its child. Also note that I did not specify a full absolute path to myservice . In this case, myservice will be looked up via $PATH and that may or may not be /usr/bin/myservice . You can see how this might make things more difficult to troubleshoot and that's the disadvantage of going this route.
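For reference, a minimal sketch of the systemd-native route (the file name, unit name and variable names are illustrative, not from the question): an environment file for EnvironmentFile= must contain plain KEY=VALUE lines, with no export keyword and no shell expansion.
    # /etc/myservice.env  (hypothetical path)
    VAR1=VALUE1
    VAR2=VALUE2

    # in the unit, or in a drop-in created with: systemctl edit myservice.service
    [Service]
    EnvironmentFile=/etc/myservice.env
    Environment=VAR3=VALUE3
    ExecStart=/usr/bin/myservice
After editing, run systemctl daemon-reload and restart the service for the new environment to take effect.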
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675521", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136100/" ] }
675,552
After installing Fedora on my laptop, i am stuck in the Grub screen. Here are my disk partitions I tried this grub> set root=(hd0,2)grub> linux /vmlinuz-5.11.12-300.fc34x86grub> initrd /initramfs-5.11.12-300.fc34.x86_65grub> boot And got this error Am I using the wrong commands?Could somebody tell me which commands I need to write. Thanks for helping out!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675552", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/499262/" ] }
675,621
I have a directory named "labels" in which there are text files which contain labels for "cat" or "dog" or both on separate lines. Contents of files in labels directory are: cat labels/1.txtcatcat labels/2.txtdogcat labels/3.txtcat dog I want to get the names of files which contain label "cat" only. I tried following command: ls labels | grep -Rwl "cat" labels/1.txt labels/3.txt But this command returns the names of those files which contain "cat" or both. But my requirement is to get those file names which contain only "cat", not both "cat" and "dog". Similarly when I try to get names of those files which contain "dog" only. If I search in the same fashion then it returns file names which contain "dog" or both labels. ls labels | grep -Rwl "dog"labels/2.txt labels/3.txt
You could use grep twice: a) for listing all files with cat , then b) sieve out dog -containing ones. Use -l and -L , respectively, where -l lists filenames with matches and -L filenames without matches: grep -L 'dog' $(grep -l 'cat' <list of files>) See man grep : -L, --files-without-match Suppress normal output; instead print the name of each input file from which no output would normally have been printed. The scanning will stop on the first match. -l, --files-with-matches Suppress normal output; instead print the name of each input file from which output would normally have been printed. The scanning will stop on the first match.
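With the labels directory from the question, that would be, for example (assuming the file names contain no whitespace, since the inner command substitution is unquoted):
    grep -L -w 'dog' $(grep -l -w 'cat' labels/*.txt)
Here -w matches cat and dog only as whole words, so a label such as "category" would not count as a match.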
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675621", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/499312/" ] }
675,638
I need to clean a very large CSV, which has wrongly escaped double quotes ( \\" instead of \" ). How can I replace all instances of \\"\\\"\\\\"..... with \" or just space? Since it has \ I asked this question to avoid adding to the mess.
This should be enough: sed 's/\\\\*"/\\"/' This replaces a backslash ( \\ ) followed by any number of backslashes ( \\* ) and a double quote ( " ), with a backslash followed by a double quote ( \\" ). Use sed 's/\\\\*"/\\"/g ' for replacing all occurrences in a line.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675638", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10780/" ] }
675,679
I have several files with the following content: GGHTERR_01218 GGHTERR_02418 GGHTERR_01991GGHTERR_02211 GGHTERR_02297 GGHTERR_02379GGHTERR_02294 GGHTERR_02455 GGHTERR_02374GGHTERR_00532 GGHTERR_00534GGHTERR_00533 GGHTERR_00535GGHTERR_00776 GGHTERR_00779GGHTERR_01220 GGHTERR_01620GGHTERR_01760 GGHTERR_01761GGHTERR_01774 GGHTERR_02404GGHTERR_01889 GGHTERR_01890GGHTERR_02081 GGHTERR_02287GGHTERR_02152 GGHTERR_02153GGHTERR_02260 GGHTERR_02321GGHTERR_02295 GGHTERR_02375GGHTERR_02419 GGHTERR_02437GGHTERR_02420 GGHTERR_02438GGHTERR_02430 GGHTERR_02448GGHTERR_00001GGHTERR_00002GGHTERR_00003GGHTERR_00004GGHTERR_00005GGHTERR_00006GGHTERR_00007 I would like to know if there is an easy way to count the number of rows that have 3 columns, 2 columns and 1 column. So the output should look like: 3 columns: 32 columns: 141 colums: 7
Awk is perfect for this. It will split lines at whitespace (by default; change with the -F option) and the internal variable NF (number of fields) has the number of fields per line. So, just go through the file, saving the NF for each line: awk '{ nums[NF]++ } END{ for(num in nums){ printf "%d columns: %d\n", num, nums[num] } }' file The code above just stores the number of fields ( NF ) in the associative array nums whose keys are the number of fields and values are the number of times that number of columns was found in the file. At the end, we just go through the array and print. Running the above on your example results in: $ awk '{ nums[NF]++}END{for(num in nums){printf "%d columns: %d\n", num, nums[num]}}' file 1 columns: 7 2 columns: 14 3 columns: 3 One (small) drawback of this approach is that you will need to keep an entry for each line in the file in memory. That won't be a problem unless your file is absolutely gigantic or you have extremely little memory available, but if it is, you can get around it by just printing out the number of fields per line and then counting: $ awk '{ print NF}' file | sort | uniq -c 7 1 14 2 3 3 Or, to get the same output: $ awk '{ print NF}' file | sort | uniq -c | while read count nf; do printf "%d columns: %d\n" "$nf" "$count"; done 1 columns: 7 2 columns: 14 3 columns: 3
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675679", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/348686/" ] }
675,722
I have a file with multiple lines and I want to join lines if they both fit a specific pattern. I know that I can find lines that fit the pattern and get the next line with: grep -E -A1 'Pattern' filename But how can I check if the next line also fits the pattern and how would I go about joining the two? For example,I have a file like this: HelloiamJohnSmith An example pattern could be the following: '^[A-Z][a-z]+' So in this case, I would like to combine the rows, if they both start with capital letters. The output I would like to achieve would be: Helloiam John Smith
/^[A-Z][a-z]+/{ :a N /\n[A-Z][a-z]+/{ s/\n/ / b a }} Save it as join.sed and to execute: sed -Ef join.sed file . If the line matches the pattern, we start a loop that appends the next lineto pattern space and replaces the newline character with a space as long asthat line also matches the pattern. For GNU Sed you can collapse it to an one-liner: sed -E '/^[A-Z][a-z]+/{:a;N;/\n[A-Z][a-z]+/{s/\n/ /;b a}}' file Alternatively, an Awk script, join.awk , for which the pattern should be given as p : { if($0~p)c+=1 else c=0 printf "%s%s", (c>1 ? " " : ors), $0 ors=ORS}END{print ""} To execute: awk -f join.awk p='^[A-Z][a-z]+' file .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675722", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/499425/" ] }
675,733
I am running Debian 11 Bullseye for AMD64 on an HP Pavillion Touch 14-N009LA laptop, using IBus and MATE as desktop environment, having upgraded recently from Buster. Prior to upgrading point release, I could use the Latin American keyboard layout with IBus; afterwards, I am no longer able to do so. The Keyboard Preferences app on MATE Control Center shows the Latin American Spanish layout, and I can manually set it with setxkbmap latam on a terminal (before IBus kicks in and replaces it), but on IBus I am only presented with the "Spanish" keyboard, which corresponds to the Spaniard Spanish keyboard that has different punctuation keys; there is no option for "Latin American" or anything similar. Running ibus list-engine gives me the following output, in which I can't see the Latin American Spanish layout, and no matches for latam or anything similar: <irrelevant languages omitted>language: Spanish xkb:es:nodeadkeys:spa - Spanish (no dead keys) xkb:es:sundeadkeys:spa - Spanish (Sun dead keys) xkb:es:winkeys:spa - Spanish (Windows) xkb:es:dvorak:spa - Spanish (Dvorak) xkb:es:deadtilde:spa - Spanish (dead tilde) xkb:es:mac:spa - Spanish (Macintosh) xkb:es::spa - Spanish<irrelevant languages omitted> So far I could only find a guide that only seems to apply to Ubuntu , and the Arch Linux guide for IBus . The former guide suggested that maybe I had to generate a Spanish locale for my system, which I did by uncommenting the es-MX locales from /etc/locale.gen and then running locale-gen . Afterwards, I rebooted my system. It didn't work. Any other idea on how could I use the Latin American Spanish layout on IBus for Debian Bullseye?
UPDATE. I've found that the latest commit in the IBus source has the blacklist already implemented, and that all Latin American layouts are blacklisted by default . This affects the generation process, which is done with a Python script on build time, which in turn, sources the available X layouts from /usr/share/X11/xkb/rules/evdev.xml , as this comment clearly states . The exact commit on which this restriction was implemented is here . As for the reason why this was done, is honestly beyond me, and until this situation is properly addressed, the fix I propose below must be applied every time IBus is updated (as stated in this previous answer ). I've faced the same problem in Xubuntu 22.04, and recently used a workaround that involves editing a whitelist. Even though it's been suggested that IBus 1.5.23 would include a blacklist, in place of the currently used whitelist , so that engines added would automatically appear as selectable layouts, it seems this feature is yet to be implemented (I have version 1.5.26 right now). What I did to make it work is as follows: Open the file /usr/share/ibus/component/simple.xml using sudo , and your editor of choice. Locate the xkb:es::spa engine. In my machine, it looks like this: <engine> <name>xkb:es::spa</name> <language>es</language> <license>GPL</license> <author>Peng Huang &lt;[email protected]&gt;</author> <layout>es</layout> <longname>Spanish</longname> <description>Spanish</description> <icon>ibus-keyboard</icon> <rank>50</rank></engine> Once found, copy the <engine> tag and paste it beside it (as a sibling, on the same level), and change the following tag values: name , from xkb:es::spa to xkb:latam::spa . layout , from es to latam . longname , to any text of your choice so that you can distinguish it from other layouts. It should now look like this: <!-- I added this one. vvv --><engine> <name>xkb:latam::spa</name> <language>es</language> <license>GPL</license> <author>logo_writer</author> <layout>latam</layout> <longname>Spanish Latam</longname> <description>Spanish Latam</description> <icon>ibus-keyboard</icon> <rank>50</rank></engine><!-- I added this one. ^^^ --><engine> <name>xkb:es::spa</name> <language>es</language> <license>GPL</license> <author>Peng Huang &lt;[email protected]&gt;</author> <layout>es</layout> <longname>Spanish</longname> <description>Spanish</description> <icon>ibus-keyboard</icon> <rank>50</rank></engine> Once the new engine is added, save the file. Restart the IBus service, by issuing the command ibus restart . Once IBus restarts, type ibus list-engine and check that the new engine appears in the list. In my machine, I have the following configurations. The one I added is Spanish Latam . $ ibus list-engine | grep -A 7 Espaidioma: Español xkb:es:nodeadkeys:spa - Spanish (no dead keys) xkb:es:winkeys:spa - Spanish (Windows) xkb:es:dvorak:spa - Spanish (Dvorak) xkb:es:deadtilde:spa - Spanish (dead tilde) xkb:latam::spa - Spanish Latam xkb:es:mac:spa - Spanish (Macintosh) xkb:es::spa - Spanish Using ibus-setup or ibus engine , set the layout to the one you previosuly created. At this point, it should work. I hope this works for you. :)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675733", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47085/" ] }
675,734
I have a problem with netcat.I tried the "pickle rick" challenge on tryhackme,The problem is that I don't get a shell with netcat: nc -lnvp 9999 and this is the output: Listening on 0.0.0.0 9999Connection received on 10.10.164.203 37776 but I don't get the shell. I know that the output should be like that: Listening on 0.0.0.0 9999Connection received on 10.10.164.203 37776./bin/sh: 0: can't access tty; job control turned off then I should get the shell: $ Does someone know why I don't get the shell?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675734", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/499432/" ] }
675,769
When I am using this command ls repo/* | xargs -I {} cp {} backup_repo/ , I am getting the error -bash: /usr/bin/ls: Argument list too long . I do understand the reason this occurs is because bash actually expands the asterisk to every matching file, producing a very long command line. How can I fix this error?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675769", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/499368/" ] }
675,781
I'm on a Linux busybox 1.27 only system so no output=progress available, no busybox's own implementation of pv which is pipe_progress nor pv itself. I have two questions. The first is based on https://www.linux.com/training-tutorials/show-progress-when-using-dd/ . It says that by sending the USR1 signal to dd it "pauses" the process and dd after printing its current status will continue with the job it was doing. I'm trying to do some benchmark tests with dd so I would like to have minimal impact on the dd operation. I want to get an output of the current operation every second because the data that's passing through dd is fluctuating and it is important to me to recognize when the transfer rate drops. First question: Is it true that 'dd' "pauses" every time it receives a USR1 signal? If dd pauses every second then I'll be adding hours to the operation when tens of gigabytes are being transferred. Second question: Assuming yes as an answer to the first question, I would like to know if it's possible to get dd to print its current status without sending any signal to the process, maybe some kind of redirection for STDOUT (like 2>&1)? What I'm referring to is: # bs with 1Mib so I can have more control on the test.dd if=/dev/zero of=/dev/null bs=1048576 count=1024# Printing current operation status.sudo kill -USR1 $dd_pid
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675781", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/491396/" ] }
675,833
I'm trying to duplicate lines in a text file that contain certain special characters, but in the duplicate, substitute the special character with "regular" ASCII characters. The concrete use-case are accented characters. Input: évatestfrédéric Desired output: évaevatestfrédéricfrederic For now I can duplicate the lines containing the é character, but I'm not sure how to search and replace in the capture group. Here is what I've got so far: echo 'éva\ntest\nfrédéric' | sed 's/\(.*é.*\)/&\n\1/' Can I do that with sed ? If not, I'll be glad to work with awk ...
You can match on é and then apply multiple commands: sed '/é/{p;s/é/e/g;}' For any line containing é , this prints the current pattern space, then replaces all é s with e (and prints the pattern space again). The AWK equivalent is awk '/é/{print; gsub(/é/, "e")}1' sed ’s s command can re-use the address pattern: sed '/é/{p;s//e/g;}' and if your replacements are all single-character replacements, the y command is more efficient: sed '/é/{p;y/é/e/;}'
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/675833", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/499545/" ] }
675,881
I'm removing the trailing spaces using sed -i 's/[ \t]*$//' *.txt However, this command will rewrite all the files. Is there a convenient way to judge that if there are trailing spaces in a text file and skip those without trailing spaces?
You could use grep first to find if there are lines that need modifying, though that would still read the files twice at worst (in the case where just the last line needs modifying): for f in ./*.txt; do grep -q '[[:blank:]]$' "$f" && sed -i 's/[[:blank:]]*$//' "$f"done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675881", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67765/" ] }
675,890
vendor software configuration requires these settings on our Linux server: xerox soft nofile 16384xerox hard nofile 262144 in file /etc/security/limits.conf Because of security consideration is it possible to replace these configuration in the user bash_profile? can I use ulimit -n 262144 in /home/xerox/.bash_profile Will it be the same? UPDATE Still confused and would like to know What will be the equivalent commands to xerox soft nofile 16384 and xerox hard nofile 262144 in bash_profile Thank you!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675890", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/266062/" ] }
675,897
I have this code to optimize image size of all my images inside /dir directory: find /dir/ -iregex ".*\.\(jpg\|jpeg\)" -exec jpegoptim --strip-all {} \; When I run this code, it consumes a lot of my server CPU. So I am wondering: is it possible to me add a delay between each exec ? For example, I want 100 miliseconds of delay between each time exec is called to every image, this way the CPU does not get very busy. What would you suggest? My server is running Centos 8.
You could also add more commands using more -exec s find /dir/ -iregex ".*\.\(jpg\|jpeg\)" -exec jpegoptim --strip-all {} \; -exec sleep 0.1 \; But as a general rule, if you want to have it working full but being nice to other processes, it's very simple to use nice: nice find /dir/ -iregex ".*\.\(jpg\|jpeg\)" -exec jpegoptim --strip-all {} \;
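If disk I/O rather than CPU is what makes the server sluggish, nice can be combined with ionice — a sketch, assuming util-linux's ionice is available (class 3 means "idle": only use the disk when nothing else wants it; its effect depends on the I/O scheduler in use):
    nice -n 19 ionice -c 3 find /dir/ -iregex ".*\.\(jpg\|jpeg\)" -exec jpegoptim --strip-all {} \;
Both the niceness and the I/O class are inherited by the jpegoptim processes that find spawns.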
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675897", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/499599/" ] }
675,914
Whenever multiple users are interacting with a Nextcloud installation, there is a possibility of error. Potentially, a family member may delete an old picture or a co-worker might accidentally tick off a task or calendar event, resulting in issues for other users. When full filesystem snapshots or backups of the whole Nextcloud directory are available, they can be used to restore an old state of the entire server. This is fine in case of a complete system crash. However, it becomes an issue if the problem is only noticed after a while and in the mean time, users have modified other data. Then, a choice must be made between rescuing old data and destroying all progress or keeping the current state. The Nextcloud documentation only describes a way to restore the entire installation. Is there a way to more intelligently back up all Nextcloud data automatically (files, messages, calendars, tasks, etc.) so that it can be restored independently? (Maybe even in an online state?)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675914", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/369957/" ] }
675,916
I've found other links on the stackoverflow communities that were similar but they didn't answer my question exactly. I have 2 files with a different number of lines BUT I have them both sorted. My original files are hundreds of lines long but for troubleshooting purposes, I made file1 have 12 lines and file2 have 5 lines. File2 is a subset of file1. What I want to do is run a command that outputs all the lines that are in file1 but are not in file2. I tried using the Unix commands diff and comm but they both list the full contents of file1, which is not what I want. A quick example of this would be: File1 File2A BB EC IE NG OILMNOX So here, we can see everything that's in file2 is in file1. For some reason, diff and comm both showed the full contents of file1. I assume it's because it's doing a line by line comparison and not searching thru the whole file. Is there another Unix command I can run that will output what I am expecting? EDIT: The commands I used to attempt to get what I needed were: a) diff file1 file2 This basically listed everything from file1 with a < in front of it showing the content was from file1, and everything from file2 with a > in front of it. Definitely not what I needed b) comm -23 file1 file2 This showed the whole content of file1 again and not the diff like I was expecting. I also c) comm -3 file1 file2 The help page for comm said this would print lines in file 1 but not in file 2 and vice versa but this also didn't show what I wanted b/c in my example, B appears in both files but on different lines. However, the output thinks it's in one but not the other and therefore prints it out.So the output looked like this: AB BCE Eetc. And it wasn't what I was expecting. I was expecting ACGLMX
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/675916", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30038/" ] }
675,949
I have a file with records in this format : D20220327,S2927,977,1D20220328,S2927,977,1D20220329,S2927,977,1D20220330,S2927,977,1D20220331,S2927,977,1D20220401,S2927,977,1D20220402,S2927,977,1D20220403,S2927,977,1D20220404,S2927,977,1 However after applying the transformation to shift these to 7 days back in the past, it is not working for dates from 28th Mar to 03rd April, However same code logic is working fine on 27th March & 04th April. I am not able to figure why this is not working for one week only.This is the output D20220320,S2927,977,1 -- correctD20220320,S2927,977,1 -- incorrect D20220321,S2927,977,1 -- incorrectD20220322,S2927,977,1 -- incorrectD20220323,S2927,977,1 -- incorrectD20220324,S2927,977,1 -- incorrectD20220325,S2927,977,1 -- incorrectD20220326,S2927,977,1 -- incorrectD20220328,S2927,977,1 -- correct Logic used here is : BEGIN { OFS = FS = ","}{ t = mktime(sprintf("%4d %.2d %.2d 00 00 00", substr($1,2,4), substr($1,6,2), substr($1,8,2))); $1 = substr($1,1,1) strftime("%Y%m%d", t - 7*24*60*60) print}
Your calculations are done in local time, and you are affected by the switch over to daylight saving on the 27th of March. To do the calculation in UTC time instead (Unix timestamps are not in local time), using a recent release of GNU awk , make sure that you pass an extra 1 as a last argument to mktime() : t = mktime(sprintf("%4d %.2d %.2d 00 00 00", substr($1,2,4), substr($1,6,2), substr($1,8,2)), 1); This is a GNU awk extension available in GNU awk release 4.2.0+. As an alternative, you could instead avoid using a time around midnight (UTC) as your reference time of day: t = mktime(sprintf("%4d %.2d %.2d 12 00 00", substr($1,2,4), substr($1,6,2), substr($1,8,2))); This would make it work in older GNU awk implementations and in any other awk that has the required functions. Yet another alternative is to make the script run with an altered local time zone: TZ=UTC awk -f script.awk inputfile This sets the TZ environment variable to UTC for the execution of the awk script, which alters the time zone used by the mktime() and related functions.
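The size of the jump can be checked directly with GNU awk; a small demonstration, assuming a time zone that switched to DST on 2022-03-27 (Europe/Paris is used here purely as an example):
    TZ=Europe/Paris awk 'BEGIN{ print mktime("2022 03 28 00 00 00") - mktime("2022 03 27 00 00 00") }'
    # 82800  (23 hours: the local day of the switch is an hour short, so
    #         "midnight minus 7*24h" lands at 23:00 on the previous day)
    TZ=UTC awk 'BEGIN{ print mktime("2022 03 28 00 00 00") - mktime("2022 03 27 00 00 00") }'
    # 86400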
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/675949", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/430478/" ] }
676,043
I need to change my python version from 3.8 to 3.6 ?How can I achieve this in Ubuntu 20.04. I tried pyenv, but when I try to use pyenv like pyenv global 3.6.0 then I do python3 and I have still 3.8 verision.
Do not downgrade the system version: it's likely that some parts of the system would stop working. Never change /usr/bin/python3 , and avoid putting an older version of python3 before it in the $PATH . The deadsnakes archive provides packages of most supported Python versions for currently supported Ubuntu LTS versions. To make these packages available, follow the usual instructions to enable a PPA. Then install the package(s) you want. sudo add-apt-repository ppa:deadsnakes/ppa ; sudo apt-get update ; sudo apt-get install python3.6 You can then create a virtual environment for your chosen Python version and with a chosen set of packages. python3.6 -m venv ~/python/foo-3.6 ; sh -c 'export PYTHONNOUSERSITE=1; . ~/python/foo-3.6/bin/activate; pip install …' To run a program in this environment, source the bin/activate script in a shell. $ bash $ export PYTHONNOUSERSITE=1 $ . ~/python/foo-3.6/bin/activate $ ./my_python_program
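You can also run things through the virtual environment's own interpreter without activating it at all; for example (the script name is just a placeholder):
    ~/python/foo-3.6/bin/python my_program.py
    ~/python/foo-3.6/bin/pip list     # shows only the packages installed in that venv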
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/676043", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/439475/" ] }
676,116
Inside a file test.txt I have a hexadecimal value 0x0000000000000000000000000000000000000000047546124890225541102135415377465907 Only one line . There are no other lines or characters. It is also identified only by 0x. I want to convert this hex value to decimal ( 388355321549592156970965297418600041568519 ), subtract 1, and overwrite the original value with the result of this operation in test.txt Ultimately, the data in the test.txt file should be converted from 0x0000000000000000000000000000000000000000047546124890225541102135415377465907 to 388355321549592156970965297418600041568518 I would be very grateful if you could tell me how to do it with Bash (Linux shell).
With perl , editing the file in-place: perl -Mbigint -lpi -e '$_ = hex($_) - 1' your-file That assumes the whole line is the hex number. To convert all hex numbers, wherever they're found: perl -Mbigint -lpi -e 's{\b0x[\da-f]+\b}{hex($&) - 1}gie' your-file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/676116", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/499838/" ] }
676,118
This GRUB Quiet Splash says: The splash (which eventually ends up in your /boot/grub/grub.cfg )causes the splash screen to be shown. At the same time you want the boot process to be quiet, as otherwiseall kinds of messages would disrupt that splash screen. Although specified in GRUB these are kernel parameters influencing theloading of the kernel or its modules, not something that changes GRUBbehaviour. However, I have not found splash on https://www.kernel.org/doc/html/v5.0/admin-guide/kernel-parameters.html , but AFAIK it works on modern distros which are kernel 5+ based. Why?
If you specify a boot option that the kernel does not recognize, it does not cause an error: the unknown boot parameter will have no effect on the kernel, other than being listed in /proc/cmdline . Then initramfs scripts or other userspace programs can look for it and use it to modify their behavior. The unknown boot parameters are also passed to the init process, whichever it may be (whether SysVinit, systemd or something else). In fact, this is how important troubleshooting/recovery boot options work, like single to boot a SysVinit system to single-user mode, or systemd.unit=emergency.target for the closest equivalent on a system with systemd . If your distribution uses user-space boot splash software like plymouth , the kernel just "passes through" any splash / nosplash boot option to /proc/cmdline , and plymouth in initramfs will check for it. Your distribution may have other troubleshooting/recovery functions implemented as extra boot options by the initramfs generator package. In Debian/Ubuntu and related distributions, see man 7 initramfs-tools for a list of boot options specific to initramfs files created by the initramfs-tools package; in modern RedHat/Fedora, see man dracut .
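To illustrate the mechanism, here is a rough sketch of how a boot script or any other program might react to such a pass-through option (the option names mysplash and myoption=... are made up for the example):
    #!/bin/sh
    # flag-style option: present or absent
    if grep -qw mysplash /proc/cmdline; then
        echo "mysplash was given on the kernel command line"
    fi
    # key=value style option
    for opt in $(cat /proc/cmdline); do
        case $opt in
            myoption=*) echo "myoption is set to ${opt#myoption=}" ;;
        esac
    done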
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/676118", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/446998/" ] }
676,134
What is the difference between a link and a directory entry. Is every link (symbolic and hard link) a directory entry? What is a directory entry, which is not a link for example?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/676134", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/418220/" ] }
676,209
Every time we run a normal command like ps multiple times, it has different process ids (PID). I wanted to know if that means for every command we run in terminal (bash) it creates a child process of bash to execute that command.
Yes, pretty much every command you'd run on the command line runs in a process of its own, and those processes are children of the shell launching them. The exceptions here are builtin commands of the shell. Bash implements some standard utilities itself, like printf , echo , true , false , kill and [ / test , so running those doesn't involve forking a child. The same applies to things like cd , read and mapfile , though they affect the shell's internal state, so they have to be builtin. (Also break , continue , and return , which oddly enough are builtin utilities, not shell keywords like if and while .) There's really no way for the shell to run an external program in the same process with itself, while still being able to go back. It is however possible for the shell to replace itself with another program. E.g. if you run echo $$ to see the PID of the shell and subsequently run exec ps , you'll see ps running with the same PID. And when ps exits, that shell no longer exists. Actually, a similar thing happens every time you run a program in the normal way, just that the shell makes a copy of itself (the fork() system call) before replacing the child with the program to run ( execve() ). In between, the shell code running in the child process takes care of setting up any redirections and such for the child. It would be possible for a shell to implement other tools as builtins too, e.g. Busybox implements a largish set of standard utilities in the same program file. But as far as I tested, it still forks a child when running them, probably since it's an easy way to ensure the utilities don't mess with the shell's state unnecessarily.
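You can watch this parent/child relationship from any terminal; for instance (output not shown, but ps will list itself with a PPID equal to the shell's PID):
    echo $$               # the PID of the interactive shell
    ps -o pid,ppid,comm   # ps runs as a child: its PPID column shows that same PID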
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/676209", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/474035/" ] }
676,309
So, I tried to mount a hard disk /dev/sdc2 on / directory in my Ubuntu workstation, and upon mounting, I am not able to remote into the Ubuntu workstation anymore. The exact command I run to mount is sudo mount /dev/sdc2 / I am not sure what has gone wrong, but I thought mounting a hard disk on / directory is fine. From what I read, it does not erase all the files and folders in / directory. I was so anxious and I am only allowed to return to office to unplug the hard disk to see whether I can remote into my Ubuntu workstation again after unplugging. Any insight into this issue is much appreciated! :)
Unfortunately, mounting any filesystem on the / mount-point of a running Linux installation is fatal, unless that newly-mounted filesystem happens to contain in itself a complete Linux installation (and even then it is a bad idea). The reason is as follows: / is the root mount point to which the entire filesystem tree of the OS is attached, including configuration files, pseudo-filesystems for accounting, and the binary executables of any and all commands that don't happen to be builtin commands of your shell. If you mount anything to a mount-point where another filesystem is already attached, the previous filesystem content is shadowed by the content of the new filesystem. That means that while the original installation is still on your hard-drive, your operating system instead sees the contents of /dev/sdc2 where it would expect the OS. This in effect makes it completely inoperative. Since you cannot call any command anymore (remember, the shell would try to locate the executable file from a filesystem it no longer sees) your only choice is to try the "Magic SysRq keystroke" : Press Alt + SysRq and while keeping both pressed, press in addition the sequence R E I S U B . A nice mnemonic mentioned by @TooTea for that sequence is " R eboot E ven I f S ystem U tterly B roken". This will instruct the running kernel to try to sync and shut down the system in as orderly a way as possible (but if that doesn't work, your only choice is a hard power-off). You can then start the computer again - since you didn't modify the fstab to mount /dev/sdc2 , it will boot again with the original filesystem where your OS is installed mounted as / . For the future , the "dedicated" mount-point to temporarily attach hard-drives is /mnt .
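For completeness, the harmless way to look at that disk next time is to attach it to an empty directory instead, e.g. (the directory name is arbitrary):
    sudo mkdir -p /mnt/data
    sudo mount /dev/sdc2 /mnt/data
    ls /mnt/data            # browse the disk
    sudo umount /mnt/data   # detach it when finished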
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/676309", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/500012/" ] }
676,403
I do dual boot Kali-Linux with Windows11. So the problem is I want to connect with my Bluetooth speaker (JBL GO), but unfortunately I cannot connect. I use both GUI and CLI to connect to my speaker. It was working perfectly before. I can connect in Windows, but not in my Kali. Here is the message using GUI: And here is the message using CLI: $ bluetoothctl Agent registered[bluetooth]# agent KeyboardOnlyAgent is already registered[bluetooth]# default-agent Default agent request successful[bluetooth]# power onChanging power on succeeded[bluetooth]# scan onDiscovery started[CHG] Controller 00:1A:7D:DA:71:15 Discovering: yes[CHG] Device 30:C0:1B:95:1D:C3 RSSI: -51[CHG] Device 30:C0:1B:95:1D:C3 TxPower: 0[bluetooth]# remove 30:C0:1B:95:1D:C3[DEL] Device 30:C0:1B:95:1D:C3 JBL GODevice has been removed[NEW] Device 30:C0:1B:95:1D:C3 JBL GO[CHG] Device 30:C0:1B:95:1D:C3 TxPower: 0[CHG] Device 30:C0:1B:95:1D:C3 UUIDs: 0000111e-0000-1000-8000-00805f9b34fb[CHG] Device 30:C0:1B:95:1D:C3 UUIDs: 00001108-0000-1000-8000-00805f9b34fb[CHG] Device 30:C0:1B:95:1D:C3 UUIDs: 0000110b-0000-1000-8000-00805f9b34fb[CHG] Device 30:C0:1B:95:1D:C3 UUIDs: 0000110d-0000-1000-8000-00805f9b34fb[CHG] Device 30:C0:1B:95:1D:C3 UUIDs: 0000110e-0000-1000-8000-00805f9b34fb[bluetooth]# trust 30:C0:1B:95:1D:C3[CHG] Device 30:C0:1B:95:1D:C3 Trusted: yesChanging 30:C0:1B:95:1D:C3 trust succeeded[bluetooth]# pair 30:C0:1B:95:1D:C3Attempting to pair with 30:C0:1B:95:1D:C3[CHG] Device 30:C0:1B:95:1D:C3 Connected: yes[CHG] Device 30:C0:1B:95:1D:C3 UUIDs: 00001101-0000-1000-8000-00805f9b34fb[CHG] Device 30:C0:1B:95:1D:C3 UUIDs: 00001108-0000-1000-8000-00805f9b34fb[CHG] Device 30:C0:1B:95:1D:C3 UUIDs: 0000110b-0000-1000-8000-00805f9b34fb[CHG] Device 30:C0:1B:95:1D:C3 UUIDs: 0000110c-0000-1000-8000-00805f9b34fb[CHG] Device 30:C0:1B:95:1D:C3 UUIDs: 0000110e-0000-1000-8000-00805f9b34fb[CHG] Device 30:C0:1B:95:1D:C3 UUIDs: 0000111e-0000-1000-8000-00805f9b34fb[CHG] Device 30:C0:1B:95:1D:C3 ServicesResolved: yes[CHG] Device 30:C0:1B:95:1D:C3 Paired: yesPairing successful[CHG] Device 30:C0:1B:95:1D:C3 ServicesResolved: no[CHG] Device 30:C0:1B:95:1D:C3 Connected: no[bluetooth]# connect 30:C0:1B:95:1D:C3Attempting to connect to 30:C0:1B:95:1D:C3Failed to connect: org.bluez.Error.Failed[bluetooth]# exit I check the bluetooth.service: $ sudo systemctl status bluetooth● bluetooth.service - Bluetooth service Loaded: loaded (/lib/systemd/system/bluetooth.service; disabled; vendor preset: disabled) Active: active (running) since Sat 2021-11-06 08:32:21 WIB; 47min ago Docs: man:bluetoothd(8) Main PID: 3844 (bluetoothd) Status: "Running" Tasks: 1 (limit: 38347) Memory: 2.0M CPU: 439ms CGroup: /system.slice/bluetooth.service └─3844 /usr/libexec/bluetooth/bluetoothdNov 06 09:12:21 [hostname] bluetoothd[3844]: Endpoint registered: sender=:1.87 path=/MediaEndpoint/A2DPSink/sbcNov 06 09:12:21 [hostname] bluetoothd[3844]: Endpoint registered: sender=:1.87 path=/MediaEndpoint/A2DPSource/sbcNov 06 09:12:21 [hostname] bluetoothd[3844]: Endpoint registered: sender=:1.87 path=/MediaEndpoint/A2DPSink/sbc_xq_453Nov 06 09:12:21 [hostname] bluetoothd[3844]: Endpoint registered: sender=:1.87 path=/MediaEndpoint/A2DPSource/sbc_xq_453Nov 06 09:12:21 [hostname] bluetoothd[3844]: Endpoint registered: sender=:1.87 path=/MediaEndpoint/A2DPSink/sbc_xq_512Nov 06 09:12:21 [hostname] bluetoothd[3844]: Endpoint registered: sender=:1.87 path=/MediaEndpoint/A2DPSource/sbc_xq_512Nov 06 09:12:21 [hostname] bluetoothd[3844]: Endpoint registered: sender=:1.87 
path=/MediaEndpoint/A2DPSink/sbc_xq_552Nov 06 09:12:21 [hostname] bluetoothd[3844]: Endpoint registered: sender=:1.87 path=/MediaEndpoint/A2DPSource/sbc_xq_552Nov 06 09:12:30 [hostname] bluetoothd[3844]: /org/bluez/hci0/dev_30_C0_1B_95_1D_C3/sep1/fd0: fd(42) readyNov 06 09:12:30 [hostname] bluetoothd[3844]: profiles/audio/avctp.c:avctp_connect_browsing_cb() Browsing: connect to 30:C0:1B:95:1D:C3: Connection refused (111) I already try rfkill , alsa but no result. But when I try these command: $ pulseaudio -k$ pulseaudio -D$ pulseaudio --start , it works. But I cannot find my device in pavucontrol . Now I'm stuck :| Here is my version of Kali: $ uname -aLinux [my_hostname] 5.14.0-kali2-amd64 #1 SMP Debian 5.14.9-2kali1 (2021-10-04) x86_64 GNU/Linux Bluetoothctl version: bluetoothctl: 5.61 Blueman version: 2.2.2-1
I have been beating my head about this issue. I have been encountering it for a few days since the latest round of apt updates available for Kali. After doing some digging I discovered that there are some package changes associated with PipeWire and pulseaudio components (specifically the removal of pipewire-media-session, the new installation of pipewire-pulse, and an upgrade to pipewire) I found the following article: https://wiki.debian.org/BluetoothUser/a2dp#PipeWire "At minimum, you will need to install the libspa-0.2-bluetooth package, remove the pulseaudio-module-bluetooth package (if previously installed), and then either reboot your computer or restart the PipeWire services, otherwise device connections will fail with "Protocol not available".

apt install libspa-0.2-bluetooth && apt purge pulseaudio-module-bluetooth
reboot

I now have audio endpoints and I can still configure with pavucontrol. This was a very simple fix to a very ugly issue that surfaced out of nowhere. I hope this may help someone out there still facing this. ADDITIONAL I found that adding the load-module functions to /etc/pulse/default.pa described in the other resolution response from @sup2069 seems to have corrected an issue I was having where it would not remember my audio profiles. I would have to reconfigure the default audio source in pavucontrol every time the device reconnects. I just wanted to throw that bit of information out there. Thank you folks for all of your contributions to this community!
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/676403", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/474428/" ] }
676,608
I'm perplexed but still guess I misunderstood Bash somehow. /$ if [ -e /bin/grep ]; then echo yea; else echo nay ; fiyea/$ if [ ! -e /bin/grep ]; then echo yea; else echo nay ; finay/$ if [ -a /bin/grep ]; then echo yea; else echo nay ; fiyea/$ if [ ! -a /bin/grep ]; then echo yea; else echo nay ; fiyea Why negation ! reverses effect of -e test but not -a test? Man bash says: test : 3 arguments The following conditions are applied in the order listed. If the second argument is one of the binary conditionaloperators listed above under CONDITIONAL EXPRESSIONS, the result of the expression is the result of the binary test using the firstand third arguments as operands. The -a and -o operators are considered binary operators when there are three arguments. If the first argument is ! , the value is the negation of the two-argument test using the second and third arguments. Bash Conditional Expressions Conditional expressions are used by the [[ compound command and the test and [ builtin commands -a file True if file exists. -b file True if file exists and is a block special file. -c file True if file exists and is a character special file. -d file True if file exists and is a directory. -e file True if file exists.
-a is both a unary (for a ccessible, added for compatibility with the Korn shell, but otherwise non-standard and now redundant with -e ) and binary (for a nd, in POSIX (with XSI) but deprecated there) operator. Here [ ! -a /bin/grep ] invokes the binary operator as required by POSIX. It's [ "$a" -a "$b" ] to test whether $a is non-empty and $b is non empty, here with $a == ! and $b == /bin/grep . As both strings are non-empty, it returns true . See also the "The -a and -o operators are considered binary operators when there are three arguments" in the text you quoted. -a is deprecated in both the unary and binary form, the unary one because it's superseded by -e , the binary one because it makes for unreliable and ambiguous test expressions. To test for file existence (though in effect, it's more a test whether the file is accessible , whether stat() would succeed on the path¹), use [ -e filepath ] . To and two conditions, use && between two invocations of [ . To test whether a string is non empty, I personally prefer the [ -n "$string" ] form over the [ "$string" ] one. So: test for file existence: [ -e "$file" ] # not [ -a "$file" ][ ! -e "$file" ] # not [ ! -a "$file" ] test for two strings being non-empty: [ -n "$a" ] && [ -n "$b" ] # not [ "$a" -a "$b" ][ "$a" ] && [ "$b" ] From the rationale in the POSIX specification for the test (aka [ ) utility : The XSI extensions specifying the -a and -o binary primaries and the '(' and ')' operators have been marked obsolescent. (Many expressionsusing them are ambiguously defined by the grammar depending on the specific expressions being evaluated.) Scripts using these expressionsshould be converted to the forms given below. Even though many implementations will continue to support these obsolescent forms, scriptsshould be extremely careful when dealing with user-supplied input that could be confused with these and other primaries and operators. and: An early proposal used the KornShell -a primary (with the same meaning), but this was changed to -e because there were concerns about the high probability of humans confusing the -a primary with the -a binary operator. The manuals of yash , bosh , GNU coreutils do guard against using binary -a / -o in their respective [ / test implementations and zsh 's manual never documented them², but many others including bash (the GNU shell) unfortunately still don't discourage their usage nor deprecate them. ¹ more on that in this answer of mine to a related stackoverflow Q&A ² a test / [ builtin was only added to zsh in version 2.0.3 in 1991. The [[ ... ]] special construct, from the Korn shell was always preferred there, and has its own syntax where && and || are used for and and or operators.
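To make the pitfall concrete, here is a small demonstration (safe to run in bash or any shell whose [ treats a three-argument -a as the binary "and", which the question already showed) of why the negated unary form goes wrong:

f=/etc;          [ ! -a "$f" ] && echo "claims missing"   # prints it even though /etc exists
f=/no/such/file; [ ! -a "$f" ] && echo "claims missing"   # also prints it: both "!" and "$f" are non-empty strings
f=/no/such/file; [ ! -e "$f" ] && echo "really missing"   # -e behaves as intended

With three arguments the test is "is the string ! non-empty AND is $f non-empty", so it is true regardless of whether the file exists.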
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/676608", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/446998/" ] }
676,617
How do we make our own Linux binary recognize and use its library dependencies from a local library directory, i.e. /usr/local/lib, given that we install the binary itself in /usr/local/bin?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/676617", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
676,731
I have a large JSON file in which I want to replace one string with another. It should not, but it might happen that this string would be used in a context where I do not want to replace it. I know how many times it is in the file in the right context so I would like to also print how many occurrences of the string were replaced with sed . How can I do that? For finding and replacing the string I use: sed -i "" "s/my_string/new_string/g" my_file.json Note that I'm on mac but I would need it for Linux too, like: sed -i "s/my_string/new_string/g" my_file.json I know I could just run a grep to find the string in the file and return the count like this: grep -o my_string my_file.json | wc -l but that's not what I'm asking. I am asking if there is a way how to do the same as any text editor (word, notepad, geany, ...) does -- if I give a string, it tells me how many times it saw this string and replaced it with another. More information -- it will run in a Bash script so if there would be another better way, I am open to that.
If you must use sed then one way could be, as taken from the suggestions from @Phillipos, the following (the line breaks inside the script matter, including the backslash-newline in the replacement):

sed -i "" -e 's/my_string/new_string\
/g;s/\n//w /dev/stdout
s///g' my_file.json | wc -l

place a newline after every occurrence of my_string + do the changes also. then take away one newline since sed implicitly puts one while printing. the writing to stdout happens conditionally, only when the substitution succeeded , meaning only when that line had my_string. then we take away the remaining newline markers.
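If sed is not a hard requirement, a more direct way is to let the tool doing the substitution report the count itself. A minimal sketch with perl (assuming perl is available; it edits the file in place like sed -i):

perl -i -pe 'BEGIN { $c = 0 } $c += s/my_string/new_string/g; END { warn "$c replacements\n" }' my_file.json

Here s///g returns the number of substitutions made on each line, the running total is kept in $c, and the END block prints it to standard error so it does not end up in the edited file.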
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/676731", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/448469/" ] }
676,733
I'm trying to get the exit code of the function that I'm repeatedly calling in the "condition" part of a Bash while loop: while <function>; do <stuff> done When this loop terminates due to an error, I need the exit code of <function> . Any thoughts on how I can get that?
You can capture the exit value from the condition and propagate that forward:

while rmdir FOO; ss=$?; [[ $ss -eq 0 ]]
do echo in loop
done
echo "out of loop with ?=$? but ss=$ss"

Output

rmdir: failed to remove 'FOO': No such file or directory
out of loop with ?=0 but ss=1

In this instance the exit status from rmdir FOO has been captured in the variable ss and is 1 . (Try replacing rmdir FOO with ( exit 4 ) . You'll find that ss=4 .) How does this work? Remember that the syntax is actually while list-1; do list-2; done , and not the much more usual expectation of while command; do list; done . The list-1 can be a sequence of semicolon-separated commands, and the documentation states that the " while command continuously executes the list list-2 as long as the last command in the list list-1 returns an exit status of zero. " As an alternative presentation of the messy-looking while condition, it is possible to assign a variable while inside an expression (( ... )) , and then to use the result. This gives the harder-to-read but more compact assign-and-test structure:

while rmdir FOO; ((! (ss=$?)))
do echo in loop
done
echo "out of loop with ?=$? but ss=$ss"

Alternatively you can use while rmdir FOO; ! (( ss=$? )) . These work because ((1)) evaluates arithmetically to 1, which is generally associated with true, and so the exit code of that evaluation is 0 (success). On the other hand, ((0)) evaluates arithmetically to 0, which is generally associated with false, and so the exit code of that evaluation is 1 (failure). This may seem confusing, as after all both evaluations ((.)) are "successful", but this is a hack to bring the value of arithmetic expressions representing true/false in line with bash's exit codes of success/failure, and make conditional expressions like if ...; then ...; fi , while ...; do ...; done , etc, work correctly, whether based on exit codes or arithmetic values.
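For the case in the question, where the condition is a shell function rather than rmdir , the same trick applies unchanged. A small sketch (my_function is a hypothetical stand-in for whatever you are actually calling):

my_function() {
    # ... real work here ...
    return 3    # pretend the work failed with status 3
}

while my_function; rc=$?; [[ $rc -eq 0 ]]
do
    : # <stuff>
done
echo "my_function last exited with status $rc"

After the loop, rc still holds the function's exit status (3 here), whereas $? at that point only reflects the while compound command itself.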
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/676733", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/58343/" ] }
676,786
I want to validate a text file with a script. The file to validate is: FDFHDK JKL1545665 152HDKFHDK UHGYRYRUBH DFG867HDKE WER Valid lines must match the regex '[A-Z]{7}+[[:space:]]+[A-Z]{3}' . If all the lines are valid, the script shows a message saying that the file is OK. If at there is at least one line that doesn't match the regex, the script should show a message and display the lines that don't match the regex. The script is: #!/usr/bin/env bashresult=""output=$(grep -vE '[A-Z]{7}+[[:space:]]+[A-Z]{3}' "$1" |wc -l)if [[ $output > 0 ]]then echo "These lines don't match:" result="${resultado} $(grep -vE '[A-Z]{7}+[[:space:]]+[A-Z]{3}' "$1") \n" echo -e $resultelse echo "The text file is valid"fi The expected output is These lines don't matchFDFHDK JKL1545665 152867HDKE WER But I'm getting These lines don't match:FDFHDK JKL 1545665 152 867HDKE WER So the actual script is not taking the line break into account.
There is absolutely no reason to use an intermediate variable to store output of commands just to perform a test or output that data.

#!/bin/sh -

if grep -q -v -x -E -e '[A-Z]{7}[[:space:]]+[A-Z]{3}' -- "$1"
then
    echo 'Does not verify. Bad lines follow...'
    grep -v -x -E -e '[A-Z]{7}[[:space:]]+[A-Z]{3}' -- "$1"
fi

The regular expression has been corrected to delete the extra + after {7} . The if statement tests the exit status of grep directly. The grep command in the if statement, and later, use -x to force a whole-line match, and the first grep statement uses -q to stop at the first match without outputting anything. The actual issue in your code is using $result unquoted, which causes the shell to split the value on spaces, tabs, and newlines, and then do filename globbing on the generated words. The final set of words are then given as arguments to echo which prints them with spaces as delimiters. If you are concerned about running grep twice, then run it only once and store the output of it to e.g. a temporary file:

#!/bin/sh -

tmpfile=$(mktemp)

if grep -v -x -E -e '[A-Z]{7}[[:space:]]+[A-Z]{3}' -- "$1" >"$tmpfile"
then
    echo 'Does not verify. Bad lines follow...'
    cat -- "$tmpfile"
fi

rm -f -- "$tmpfile"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/676786", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216688/" ] }
676,846
In Fedora 35, WirePlumber has replaced pipewire-media-session as the audio session manager. There is a highly annoying problem with audio on many built-in soundcards on Linux where the audio sink is suspended after nothing is played for 3 seconds. Upon resuming playback after 3 seconds have passed, audio is delayed or pops in. How do we fix this default behavior?
The relevant configuration file is /usr/share/wireplumber/main.lua.d/50-alsa-config.lua , but don't edit the system version of it! You need to copy it into /etc/wireplumber/main.lua.d/ (global config) or ~/.config/wireplumber/main.lua.d/ (user config) and make the necessary changes. The easiest way is to copy it into the global config location so that it applies to all user accounts: sudo cp -a /usr/share/wireplumber/main.lua.d/50-alsa-config.lua /etc/wireplumber/main.lua.d/50-alsa-config.luasudo nano /etc/wireplumber/main.lua.d/50-alsa-config.lua You then need to scroll down to the bottom of the file, inside the apply_properties section, and add a line there which says: ["session.suspend-timeout-seconds"] = 0 I've done a lot more changes and customized it for my own personal hardware. Here's my configuration for reference, but this config is only useful for my exact devices. You actually only need the line above to disable the auto-suspend. Add it to your own default config. Do not copy my config. The other changes I've made are unrelated. alsa_monitor.properties = { ["alsa.jack-device"] = true, ["alsa.reserve"] = true, ["alsa.midi.monitoring"] = true}alsa_monitor.rules = {{ matches = { { { "device.name", "matches", "alsa_card.*" } } }, apply_properties = { ["api.alsa.use-acp"] = true, ["api.acp.auto-profile"] = false, ["api.acp.auto-port"] = false } }, { matches = { { { "node.name", "matches", "alsa_output.pci-0000_0c_00.4.iec958-ac3-surround-51" } } }, apply_properties = { ["api.alsa.period-size"] = 128, ["api.alsa.headroom"] = 2048, ["session.suspend-timeout-seconds"] = 0 } }, { matches = { { { "node.name", "matches", "alsa_input.usb-BEHRINGER_UMC202HD_192k-00.analog-mono" } } }, apply_properties = { ["api.alsa.period-size"] = 128 } }} Setting the session.suspend-timeout-seconds property to 0 prevents the suspension/sleep behavior. It completely disables the behavior, as can be seen in WirePlumber's source code. WirePlumber has to be restarted in order for changes to take effect: systemctl --user restart wireplumber
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/676846", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/465249/" ] }
676,868
Is there any difference between these three code blocks in bash? Using IFS= : #!/usr/bin/env bashwhile IFS= read -r item; do echo "[$item]"done </dev/stdin Using IFS=$'\n' : #!/usr/bin/env bashwhile IFS=$'\n' read -r item; do echo "[$item]"done </dev/stdin Using -d $'\n' : #!/usr/bin/env bashwhile read -rd $'\n' item; do echo "[$item]"done </dev/stdin If there are differences between the two IFS values and the -d deliminator alternative, then under which circumstances would the differences present themselves? From my testing, they all appear the same: echo $'one two\nthree\tfour' | test-stdin # outputs:# [one two]# [three four]
IFS= and IFS=$'\n' are identical when it comes to read (assuming the read delimiter is not changed from the default), since the only difference is whether a newline inside a line separates words, but a newline never appears inside a line. read and read -d $'\n' are identical since $'\n' (newline) is the default delimiter. IFS= and IFS=$'\n' do make a difference for field splitting: IFS= completely turns off field splitting, whereas IFS=$'\n' splits on newlines.

IFS=$'\n'
echo $(echo a; echo b)
# prints "a b" on a single line since $'a\nb' is split at
# the newline and therefore echo receives two arguments "a" and "b"

IFS=
echo $(echo a; echo b)
# prints "a" and "b" on separate lines since $'a\nb' is passed
# as a single argument to echo
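One place where setting IFS for read does matter is leading and trailing whitespace: with the default IFS (space, tab, newline) read strips it, while IFS= (or IFS=$'\n') preserves it. A small illustration, runnable in bash:

printf '  indented line  \n' | { read -r line; printf '[%s]\n' "$line"; }
# [indented line]        -- default IFS trims the spaces

printf '  indented line  \n' | { IFS= read -r line; printf '[%s]\n' "$line"; }
# [  indented line  ]    -- IFS= keeps them, which is why "while IFS= read -r" is the usual idiom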
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/676868", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50703/" ] }
676,944
Coming from the Windows world, where I'm in the habit of putting every new EXE or Installation file through something like Virustotal, or searching Stack Exchange/Reddit for reviews on the safety (no malware, no spyware, etc) of a particular piece of software before installing it. With Linux, is it mostly completely safe to install any utility or software so long as I'm using the default repositories that come with new installs of the OS from vendor images? If not, what is a general process for validating the safety of a particular Linux utility/program/application?
Short answer Yes, it is 'mostly safe' to install any utility or software so long as you are using the default repositories that come with new installs of the OS. The default repositories contain software that is tested by the developers and/or maintainers of the Linux distro. Example There are levels of security. Take Ubuntu as an example: The Ubuntu developers/maintainers working at Canonical are fully responsible for the central program packages (the repository main etc) that are used in the server version and all flavours of Ubuntu desktop. In some cases they are developing these programs, but in many cases these programs are developed and packaged 'upstream' by other persons/groups, for example Debian. Regardless of the packages' origin, all packages in main benefit from full security support provided by Ubuntu itself. The functionality of the software in the repositories universe and multiverse is tested, but the software is developed and packaged by other people or groups of people, and Ubuntu cannot guarantee the security. Software from a PPA is not tested by the Ubuntu developers and/or maintainers. The quality and security depends of the developer/maintainer. (I'm responsible for one PPA, and I am using some other PPAs, but I know many people who stay away from them because of the security risk.) All the software above are kept updated automatically. Software downloaded separately (like the typical case of Windows applications) are less secure (for example, you must check that they are up to date). Software that you compile yourself or even develop yourself may or may not be safe depending on your skill and what the software is dealing with. These links describe the Ubuntu case in more detail: Which Ubuntu repositories are totally safe and free from malware? https://ubuntu.com/security General conclusion In similar ways other Linux distros have repositories that are more or less tested for function and security. You should check carefully the origin, reputation, and maintenance of more 'peripheral' software before you install it. Before installing it is a good idea to test software in a separate 'test' system for example in a virtual machine or a live system or a second computer.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/676944", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/500782/" ] }
676,953
I'm trying to use something like the pattern "John.*Smith" but instead of matching anything in between John and Smith I'd like to match anything between the words as long as it's of X characters length or less. If X = 5 for instance: Lines that should be a match: - John Smith- Jonh F. Smith Lines that shouldn't be a match: - John Ferdinand Smith- Joe Brown; John Johson; Mary Smith How could I do that with grep?
Try using a "bound". man regex: A bound is '{' followed by an unsigned decimal integer, possiblyfollowed by ',' possibly followed by another unsigned decimal integer, always followed by '}'. The integers must lie between 0 and RE_DUP_MAX (255(!)) inclusive, ... grep -E "John.{0,5}Smith" fileJohn SmithJohn F. Smith The 0 can be omitted (EDIT: in some implementations): grep -E "John.{,5}Smith" file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/676953", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/119229/" ] }
677,038
This question is about generating random numbers between a range, which is fine, but it doesn't fit my case. I'll explain in SQL terms because it seems to me that's easier to understand, though the question is about bash . My idea is with the results of the bash code build a SQL script. I have two MySQL tables, one of people and one of places. Each record has an unique integer id which goes from 1 to 139 (places) and 1 to 1519 (people). They are linked to each other by a foreign key, meaning: a place can have many people, but a person can have only one place. # 1-139 # 1-1519place1 → person1 → person2 → person3 ... and so on The data I have right now is that in one place all the people are linked, and the rest of places without any. The places are 139 and the people are 1519, so I have one place with 1519 people. My goal is to distribute the people randomly to the places, and that each place has at least one person. My code so far is this: $ c=1519$ while [[ $c -ne 0 ]]; do x=$((shuf -i 1-139 -n 1)) [[ $x -gt 139 ]] && continue echo $x (( c-- )) done This code generates 1519 random numbers between 1-139, so now I can have each person linked to a random place. My questions are: Is there a more efficient way to accomplish this? How can I control that each place has at least one person? I prefer to do this in bash , but I'm open to other solutions not involving it.
If you want to do this only using commonly-available tools (at least on Linux distributions), the most efficient way is probably to ask shuf : shuf -i 1-139 -n 1519 -r This produces 1519 numbers randomly chosen between 1 and 139. To ensure that each place gets one person, shuffle 139 numbers first without repeating:

shuf -i 1-139
shuf -i 1-139 -n 1380 -r

To reduce the “first 139” effect (the first 139 people would all end up in different places), shuffle all this again: (shuf -i 1-139; shuf -i 1-139 -n 1380 -r) | shuf
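To turn that output into the SQL the question is ultimately after, one option (in bash) is to pair each generated place id with a person id and print UPDATE statements. This is only a sketch: the table and column names (people, place_id, id) are hypothetical and need to be replaced with your real schema.

(shuf -i 1-139; shuf -i 1-139 -n 1380 -r) | shuf |
  paste -d ' ' - <(seq 1 1519) |
  awk '{ printf "UPDATE people SET place_id = %d WHERE id = %d;\n", $1, $2 }' > assign_places.sql

paste reads the 1519 place numbers from stdin (-) and the person ids 1 to 1519 from the process substitution, and awk formats one statement per pair.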
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/677038", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/338177/" ] }
677,040
I created a chroot jail and copied multiple binaries and their corresponding libraries to the relevant subdirectories. Example: cp -v /usr/bin/edit /home/jail/usr/binldd /usr/bin/edit linux-vdso.so.1 (0x00007fff565ae000) libm.so.6 => /lib64/libm.so.6 (0x00007f7749145000) libtinfo.so.5 => /lib64/libtinfo.so.5 (0x00007f7748f11000) libacl.so.1 => /lib64/libacl.so.1 (0x00007f7748d08000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f7748b04000) libperl.so => /usr/lib/perl5/5.18.2/x86_64-linux-thread-multi/CORE/libperl.so (0x00007f7748771000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f7748554000) libc.so.6 => /lib64/libc.so.6 (0x00007f77481ad000) libattr.so.1 => /lib64/libattr.so.1 (0x00007f7747fa8000) /lib64/ld-linux-x86-64.so.2 (0x00007f7749446000) libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007f7747d6d000)cp -v /lib64/{libm.so.6,libtinfo.so.5,libacl.so.1,libdl.so.2,libpthread.so.0,libc.so.6,libattr.so.1,ld-linux-x86-64.so.2,libcrypt.so.1} /home/jail/lib64/ I did the same with the man command and copied all manual files with cp -rv /usr/share/man/ /home/jail/usr/share/ , but if I execute it, it returns this error: -bash-4.2$ man gzipexecve: No such file or directory What could be missing? More details: -bash-4.2$ ls /usr/share/manca da el es fr.ISO8859-1 hu it man0p man1p man3 man4 man6 man8 mann pl pt_BR sk sv zh zh_TWcs de eo fr fr.UTF-8 id ja man1 man2 man3p man5 man7 man9 nl pt ru sr uk zh_CN Update: -bash-4.2$ strace -f /usr/bin/mandb ls 2>ls.log-bash-4.2$ cat ls.logexecve("/usr/bin/mandb", ["/usr/bin/mandb", "ls"], [/* 45 vars */]) = 0brk(0) = 0x138b000mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fd43a9ac000access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/lib64/tls/x86_64/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)stat("/lib64/tls/x86_64", 0x7ffde87d2510) = -1 ENOENT (No such file or directory)open("/lib64/tls/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)stat("/lib64/tls", 0x7ffde87d2510) = -1 ENOENT (No such file or directory)open("/lib64/x86_64/libc.so.6", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)stat("/lib64/x86_64", 0x7ffde87d2510) = -1 ENOENT (No such file or directory)open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\20\34\2\0\0\0\0\0"..., 832) = 832fstat(3, {st_mode=S_IFREG|0755, st_size=1974416, ...}) = 0mmap(NULL, 3828256, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fd43a3e6000mprotect(0x7fd43a584000, 2093056, PROT_NONE) = 0mmap(0x7fd43a783000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x19d000) = 0x7fd43a783000mmap(0x7fd43a789000, 14880, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fd43a789000close(3) = 0mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fd43a9ab000mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fd43a9aa000mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fd43a9a9000arch_prctl(ARCH_SET_FS, 0x7fd43a9aa700) = 0mprotect(0x7fd43a783000, 16384, PROT_READ) = 0mprotect(0x601000, 4096, PROT_READ) = 0mprotect(0x7fd43a9ad000, 4096, PROT_READ) = 0brk(0) = 0x138b000brk(0x13ac000) = 0x13ac000open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/usr/share/locale/locale.alias", O_RDONLY|O_CLOEXEC) = -1 
ENOENT (No such file or directory)open("/usr/lib/locale/de_DE.UTF-8/LC_CTYPE", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/usr/lib/locale/de_DE.utf8/LC_CTYPE", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/usr/lib/locale/de_DE/LC_CTYPE", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/usr/lib/locale/de.UTF-8/LC_CTYPE", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/usr/lib/locale/de.utf8/LC_CTYPE", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/usr/lib/locale/de/LC_CTYPE", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)getuid() = 1000geteuid() = 1000getgid() = 100execve("/usr/lib/man-db/mandb", ["/usr/bin/mandb", "ls"], [/* 45 vars */]) = -1 ENOENT (No such file or directory)dup(2) = 3fcntl(3, F_GETFL) = 0x8001 (flags O_WRONLY|O_LARGEFILE)close(3) = 0write(2, "execve: No such file or director"..., 34execve: No such file or directory) = 34exit_group(-22) = ?+++ exited with 234 +++ Update2:Ok this part was missing: cp -rv /usr/lib/man-db/ usr/lib/ Now I get this error: man: error while loading shared libraries: libmandb-2.6.6.so: cannot open shared object file: No such file or directory Strangely it's not part of the ldd return: # which mandb/usr/bin/mandb# ldd /usr/bin/mandb linux-vdso.so.1 (0x00007fffd64d0000) libc.so.6 => /lib64/libc.so.6 (0x00007f1885120000) /lib64/ld-linux-x86-64.so.2 (0x00007f18854c7000) Finally I needed those libraries: cp /usr/lib64/libmandb-2.6.6.so usr/lib64/libmandb-2.6.6.socp /usr/lib64/libgdbm.so.4 usr/lib64/libgdbm.so.4 After that man loaded, but no text is displayed: # man lsMan: find all matching manual pages (set MAN_POSIXLY_CORRECT to avoid this) * ls (1) ls (1p)Man: What manual page do you want?Man: 1 I compared the strace results of the jail and root user and they differ now only in this part (jail is left): As I added a bind mount to /var/run/nscd , the socket is available for the jail user: -bash-4.2$ if [[ -S /var/run/nscd/socket ]]; then echo "socket is available"; fisocket is available So the problem seems to be something else?! Update3:@nobodyYes, passwd and group are present: -bash-4.2$ ls -la /etctotal 124drwxr-xr-x 4 root root 216 Nov 11 14:15 .drwxr-xr-x 13 root root 183 Nov 4 08:49 ..-rw-r--r-- 1 root root 779 Nov 3 12:43 group-rw-r--r-- 1 root root 67659 Nov 11 13:55 ld.so.cache-rw-r--r-- 1 root root 2335 Nov 4 09:02 localtime-rw-r--r-- 1 root root 12061 Nov 11 13:16 manpath.config-rw-r--r-- 1 root root 1304 Nov 11 14:15 nsswitch.conf-rw-r--r-- 1 root root 3961 Nov 3 12:43 passwddrwxr-xr-x 2 root root 4096 Nov 3 14:13 postfix-rw-r--r-- 1 root root 9168 Nov 4 09:02 profiledrwxr-xr-x 2 root root 4096 Nov 4 09:02 profile.d-rw-r--r-- 1 root root 8006 Nov 4 09:17 vimrc Update4: The -Tascii flag returned more missing binaries: -bash-4.2$ man -Tascii lsman: can't execute tbl: No such file or directoryman: can't execute groff: No such file or directoryman: command exited with status 255: /usr/bin/zsoelim | /usr/lib/man-db/manconv -f UTF-8:ISO-8859-1 -t ANSI_X3.4-1968//IGNORE | tbl | groff -mandoc -Tascii So I copied tbl , groff and zsoelim and the complete dir /usr/share/groff. 
Now two additional binaries were missing: -bash-4.2$ man -Tascii lsgroff: couldn't exec troff: No such file or directorygroff: couldn't exec grotty: No such file or directoryman: command exited with status 4: /usr/bin/zsoelim | /usr/lib/man-db/manconv -f UTF-8:ISO-8859-1 -t ANSI_X3.4-1968//IGNORE | tbl | groff -mandoc -Tascii After copying these, the manual was displayed: But without the -Tascii flag its still black/empty. :| Update5: Default pager seems to be less -bash-4.2$ env | grep MANPATHMANPATH=/usr/share/man-bash-4.2$ env | grep PAGERPAGER=less
You should type the command strace -f man ls 2>ls.log and see how many execve lines there are in the ls.log file. You will have /usr/bin/pager, nroff, groff, tbl… groff would surely need a lot of files to work properly. See how many openat in the log file are successful.
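If it helps, copying each missing program together with the libraries ldd reports can be scripted rather than done by hand. A rough sketch only (it assumes a GNU userland, the jail at /home/jail , and that the program list is whatever your strace log shows as missing):

jail=/home/jail
for prog in groff troff grotty tbl nroff less; do
    p=$(command -v "$prog") || continue
    cp -v --parents "$p" "$jail"
    # copy every shared library the binary is linked against
    ldd "$p" | grep -oE '/[^ ]+' | while read -r lib; do
        cp -v --parents "$lib" "$jail"
    done
done

cp --parents recreates the directory structure under the jail. groff in particular also needs its data files under /usr/share/groff , which a library copy alone will not catch.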
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/677040", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101920/" ] }
677,092
On Pop!_OS, I'm trying to remove an apt package that's marked as "essential". I understand the consequences and know how to make sure I'll still be able to use my system afterward. In examples and documentation about this online, it says all I'd have to do is type Yes, do as I say! into apt, but instead of getting prompted to do that, I just get this message: This operation is not permitted because it will break the system.Abort. This is my own system, so I shouldn't need permission to remove whatever packages I want and then deal with the consequences myself. How can I bypass this?
The reason this happens on Pop!_OS is that they applied this patch to their version of apt. To bypass this block and get the normal behavior of apt from now on, do sudo touch /etc/apt/break-my-system .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/677092", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/217726/" ] }
677,097
I have already gotten that error twice since having my laptop, and I was previously able to fix it after some Google searching and switching to AHCI in the BIOS. However, this time it seems to be another problem, because even though the SATA mode is AHCI it kicks me back to initramfs. I tried this, How to switch from IDE to AHCI , in the hope of not getting that error in the future. After that I sadly got the error that I am now stuck on. Original error message Output from cat /proc/modules and ls dev Help is greatly appreciated.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/677097", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/500915/" ] }
677,590
I have written this script: #!/bin/bashDAYSOLD="2"users_dir="/users"cd $users_dirdate >> cleanup.log; du -h --max-depth=1 | sort -hr >> cleanup.logfind $users_dir -mtime +$DAYSOLD -type f -exec rm -v {} \; I would like to delete files only under the "workspace" directory and older than 2 days.How should this be done? /users/user1/workspace/users/user2/workspace/users/user3/workspace Thanks for helping. I would love to learn how to do this.
-mtime "+$days" selects files whose age (based on last modification time) rounded down to an integer number of days is stricktly greater than $days . A file that is 2 days and 23 hours old would not be selected by -mtime +2 because, it's considered to be 2 day old which is not greater than 2 . So you need: find "$users_dir"/*/workspace -mtime "+$(( DAYSOLD - 1 ))" -type f -delete To delete files that are older than $DAYSOLD days¹ in any workspace directory² found in any subdirectory³ of $users_dir . Though you could also do: find "$users_dir"/*/workspace '(' -mtime "$DAYSOLD" -o -mtime "+$DAYSOLD" ')' \ -type f -delete Which deletes files whose age (in integer number of days) is either $DAYSOLD or greater than $DAYSOLD . -delete is a non-standard extension, but is available in GNU find as found on Ubuntu and is safer and more efficient than using -exec rm . You can make it -delete -print if you want the list of files that have been successfully deleted. Also remember to quote your variables and check the exit status of cd before carrying one: (cd -P -- "$users_dir" && exec du...) or du would run in the wrong directory if cd failed. Also note the use of a subshell or it wouldn't work properly if $users_dir was a relative path. It would actually make more sense to write it as: cd -P -- "$users_dir" || exitdu ...find . ... (which would be more consistent if $users_dir was a symlink) ¹ strictly speaking, that's at least $DAYSOLD days old but considering that the comparison is done using nanosecond precision and that it takes several hundred nanoseconds to start find , that distinction is hardly relevant. ² strictly speaking, if workspace is a file that is not of type directory , that will still be passed to find , and if that file is older than $DAYSOLD days (and is a regular files for -type f ), it will be deleted as well. You could switch to zsh and add (/) to the pattern to make sure only workspace files of type directory are considered. Do not just append / as you'd run into ³ below with more severe consequences (like when /users/joe/workspace is a symlink to /bin or / ). ³ Note that symlinks are followed there. So if /users/joe is a symlink to / for instance, that will delete old files in the /workspace directory.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/677590", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/501350/" ] }
677,591
I'm getting the following error from sudo : $ sudo lssudo: /etc/sudoers is owned by uid 1000, should be 0sudo: no valid sudoers sources found, quittingsudo: unable to initialize policy plugin Of course I can't chown it back to root without using sudo . We don't have a password on the root account either. I honestly don't know how the system got into this mess, but now it's up to me to resolve it. Normally I would boot into recovery mode, but the system is remote and only accessible over a VPN while booted normally. For the same reason, booting from a live CD or USB stick is also impractical. The system is Ubuntu 16.04 (beyond EOL, don't ask), but the question and answers are probably more general.
The procedure described here (which may itself be an imperfect copy of this Ask Ubuntu answer ) performed the miracle. I'm copying it here, and adding some more explanations. Procedure Open two SSH sessions to the target server. In the first session, get the PID of bash by running: echo $$ In the second session, start the authentication agent with: pkttyagent --process 29824 Use the pid obtained from step 1. Back in the first session, run: pkexec chown root:root /etc/sudoers /etc/sudoers.d -R Enter the password in the second session password promt. Explanation Similar to sudo , pkexec allows an authorized user to execute a program as another user, typically root . It uses polkit for authentication; in particular, the org.freedesktop.policykit.exec action is used. This action is defined in /usr/share/polkit-1/actions/org.freedesktop.policykit.policy : <action id="org.freedesktop.policykit.exec"> <description>Run programs as another user</description> <message>Authentication is required to run a program as another user</message> <defaults> <allow_any>auth_admin</allow_any> <allow_inactive>auth_admin</allow_inactive> <allow_active>auth_admin</allow_active> </defaults> </action> auth_admin means that an administrative user is allowed to perform this action. Who qualifies as an administrative user? On this particular system (Ubuntu 16.04), that is configured in /etc/polkit-1/localauthority.conf.d/51-ubuntu-admin.conf : [Configuration]AdminIdentities=unix-group:sudo;unix-group:admin So any user in the group sudo or admin can use pkexec . On a newer system (Arch Linux), it's in /usr/share/polkit-1/rules.d/50-default.rules : polkit.addAdminRule(function(action, subject) { return ["unix-group:wheel"];}); So here, everyone in the wheel group is an administrative user. In the pkexec manual page, it states that if no authentication agent is found for the current session, pkexec uses its own textual authentication agent, which appears to be pkttyagent . Indeed, if you run pkexec without first starting the pkttyagent process, you are prompted for a password in the same shell but it fails after entering the password: polkit-agent-helper-1: error response to PolicyKit daemon: GDBus.Error:org.freedesktop.PolicyKit1.Error.Failed: No session for cookie This appears to be an old bug in polkit that doesn't seem to be getting any traction. More discussion . The trick of using two shells is merely a workaround for this issue.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/677591", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177652/" ] }
677,608
I have a series of headings in a file that have names like this: grep ">scaffold_3" DM_v6.1_unanchoredScaffolds.fasta >scaffold_3>scaffold_303>scaffold_31>scaffold_34>scaffold_36>scaffold_37>scaffold_39>scaffold_33>scaffold_300 I would like to select only the first, so I tried: $ grep ">scaffold_3 " file.fasta $$ grep ">scaffold_3[[:blank:]]" file.fasta $$ grep ">scaffold_3\t" file.fasta $$ grep ">scaffold_3\ " file.fasta $$ grep ">scaffold_3 " file.fasta $$ grep ">scaffold_3[[:space:]]" file.fasta $$ grep ">scaffold_3$" file.fasta >scaffold_3 How can I get the exact name but not the synonyms, given that the character after the name might be a space, tab, newline (perhaps from Windows too) and that [[:space:]] did not work? Thank you
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/677608", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/277882/" ] }
677,669
The latest official release of the Linux kernel released by CentOS is kernel-3.10.0-1160.45.1.el7.x86_64.rpm, which was updated on 15th October 2021. Furthermore, the kernel version recommended for CVE-2021-43267 is provided by a third-party repository named ELRepo, which means that the recommended kernel update is not yet supported/released officially by the CentOS repositories? Or how secure is it to update the kernel from any other source than the official CentOS repo? We did try to update the kernel of one of our dev environment servers to the latest recommended kernel version, i.e. 5.15.2. This resulted in a broken operating system where, after reboot, the system landed in kernel emergency mode as it was unable to boot from the updated kernel and couldn’t configure the boot partitions automatically. Currently, our production servers are running on Linux kernel 3.10.0-1160.21.1.el7.x86_64, which can be updated to the latest stable release 3.10.0-1160.45.1.el7.x86_64. Given all these observations, should we stick with official CentOS updates only, as updating the kernel from third-party sources may break operating system functionality in a production environment?
The CentOS 7 and RHEL 7 kernels aren’t affected by CVE-2021-43267 , so there’s no need to do anything.
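If you want extra reassurance, note that CVE-2021-43267 lives in the TIPC protocol module, which is rarely used; you can check that it is not loaded and, as a general hardening measure (not something this particular CVE requires on EL7), prevent it from being auto-loaded at all:

lsmod | grep -w tipc || echo "tipc not loaded"
echo "install tipc /bin/true" | sudo tee /etc/modprobe.d/disable-tipc.conf

Sticking with the official 3.10.0-1160.x updates remains the safer choice for production; the ELRepo mainline kernels bypass Red Hat's backporting and testing.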
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/677669", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271692/" ] }
677,817
I want to use sed 's transliterate ( y/// ) to replace one set of characters by another. I would expect this to work just as well as using the tr program. $ echo '[]{}abc' | tr '[ab}' 'gefh' g]{hefc However, when I go to perform this same operation with sed, I see the following error: $ echo '[]{}abc' | sed 'y/[ab}/gefh/' sed: 1: "y/[ab}/gefh/": unbalanced brackets ([]) This makes some sense, as I expect to need to escape the [ character. However, when I do try and escape that, I receive the following, different error: $ echo '[]{}abc' | sed 'y/\[ab}/gefh/' sed: 1: "y/\[ab}/gefh/": transform strings are not the same length My current work-around is to either (1) just use tr or (2) insert a "dummy character" in the right-hand side of the transliteration whose job is to do nothing but match the escape character. $ echo '[]{}abc' | sed 'y/\[ab}/_gefh/' g]{hefc This is however unsatisfying and suspicious. It's also not very safe, e.g. when \ is in the input string. $ echo '[]{}abc\' | sed 'y/\[ab}/_gefh/' g]{hefc_ What's the correct way to escape a character in a sed transliteration without sed treating the escape character itself a part of the translation?
Assuming you are on macOS (the only system whose native sed I can get to display this issue, although I haven't checked on FreeBSD whence macOS's sed comes): $ echo '[]{}abc' | sed 'y/[ab}/gefh/'sed: 1: "y/[ab}/gefh/": unbalanced brackets ([]) $ echo '[]{}abc' | sed 'y/\[ab}/gefh/'sed: 1: "y/\[ab}/gefh/": transform strings are not the same length $ echo '[]{}abc' | sed 'y/\[ab}/\gefh/'g]{hefc So, one solution is to Escape the [ in the first string to avoid having an unbalanced bracket, and Make the two strings equal length by adding a "no-op" backslash to the second string too. Or, You could also enclose both strings in [...] , which, upon reflection, may be the safest way to deal with this as it can be done in a mechanical manner without caring about where in the string the [ is located: $ echo '[]{}abc' | sed 'y/[[ab}]/[gefh]/'g]{hefc Or install GNU sed via e.g. Homebrew on macOS, or FreeBSD's package system, and use that. I would treat this as a bug in this sed implementation.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/677817", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/259874/" ] }
678,159
I'm trying to figure out how to use the ${parameter%word} expansion with $@ and $* . It started by trying to make a script to combine pdfs using ghostscript, but I ran into some weird behavior with parameter expansions, and now I'm just curious why this behavior is happening. Basically I'm trying to remove the '.pdf' from the end of each argument, then join them with an arbitrary string (I'm testing with '-'), then add a '.pdf' to the end of the result. E.g expected behavior is test.sh a.pdf b.pdf c.pdf -> a-b-c.pdf . Here is a test script I'm running: IFS='-'echo ${*%.pdf}.pdfecho "${*%.pdf}.pdf"a=${*%.pdf}.pdfb="${*%.pdf}.pdf"echo $aecho $b If I bash test.sh a.pdf b.pdf c.pdf , I get: a b c.pdfa-b-c.pdfa b c.pdfa b c.pdf If I zsh test.sh a.pdf b.pdf c.pdf , I get: a b c.pdfa.pdf-b.pdf-c.pdfa-b-c.pdfa.pdf-b.pdf-c.pdf I understand that zsh and bash are different, so I'm not worried about why they give different results from each other. However, in each case, only 1 of the 4 methods of constructing the string works as intended (the second one for bash, and the third one for zsh). Why do these seemingly similar attempts to construct the string result in such different results? Any insight is appreciated. Thanks!
How ${*%word} and the like work depends on the shell. POSIX leaves the result unspecified. There are two main plausible behaviors: the transformation (prefix or suffix removal) can be applied to each word, or to the result of joining the words. In shells that support arrays, it's natural apply the transformation to each word: that's what bash and ksh93 do. In shells that don't support arrays, it's natural join the words first (that's what ash/dash does). For example: # No arrays: $* joined = 'abc abc'; strip off b* → 'a'$ dash -c 'echo ${*%%b*}' _ abc abca# Arrays: $* = ('abc' 'abc'); strip off b* from each element → ('a' 'a'); then join$ bash -c 'echo ${*%%b*}' _ abc abca a The first character IFS is used to join the words that make up $* . It only makes a difference to what is stripped if the pattern can match that character. For example: # No arrays: $* joined = 'abc-def-ghi'; strip off -* → 'abc'$ dash -c 'IFS=-; echo "${*%%-*}"' _ abc-def ghiabc# Arrays: $* = ('abc-def' 'ghi'); strip off -* from each element → ('abc' 'ghi'); then join$ bash -c 'IFS=-; echo "${*%%-*}"' _ abc-def ghiabc-ghi When the substitution is in a word context, the expansion ends here. Word contexts include double quotes and assignments; see When is double-quoting necessary? and Expansion of a shell variable and effect of glob and split on it for more details. This explains echo "${*%.pdf}.pdf" : the first character of IFS is used for joining, and there is no subsequent splitting, hence the output in bash is a-b-c.pdf . The value of both a and b is a-b-c.pdf as well. When the substitution is in a list context (i.e. unquoted), as in your first example, the result undergoes word splitting (and globbing). This is based on IFS , hence a-b-c.pdf is split into a , b and c.pdf . The echo command prints the three words with a space in between. Exactly the same thing happens with echo $a and echo $b in your example: the value of a or b is split at IFS characters. Zsh treats @ and * differently. With * as the parameter name, it applies the string-style behavior (join first then transform) inside double quotes, and the array-style behavior (transform each element) otherwise. On the other hand, the parameter @ is always treated as an array. Thus: $ zsh -c 'echo "${*%.pdf}"' _ a.pdf b.pdf c.pdfa.pdf b.pdf c$ zsh -c 'echo ${*%.pdf}' _ a.pdf b.pdf c.pdfa b c$ zsh -c 'echo "${@%.pdf}"' _ a.pdf b.pdf c.pdfa b c$ zsh -c 'echo ${@%.pdf}' _ a.pdf b.pdf c.pdfa b c Unlike what happens in other shells, a string assignment does not cause $* to be processed string-style: the double quotes are what matters. This explains why a=${*%.pdf}; echo $a works like echo ${*%.pdf} and not like a="${*%.pdf}"; echo $a . With IFS=- , a dash is used when joining, which happens to * whenever it's in a word context, whether due to double quotes or to a string assignment. # ('a.pdf' 'b.pdf' 'c.pdf); strip each element → ('a' 'b' 'c'); print list$ zsh -c 'IFS=-; echo ${*%.pdf}' _ a.pdf b.pdf c.pdfa b c# join → 'a.pdf-b.pdf-c.pdf'; strip the single word and print it$ zsh -c 'IFS=-; echo "${*%.pdf}"' _ a.pdf b.pdf c.pdfa.pdf-b.pdf-c# ('a.pdf' 'b.pdf' 'c.pdf); strip each element → ('a' 'b' 'c'); `$*` in word context so join → 'a-b-c'; print word$ zsh -c 'IFS=-; a=${*%.pdf}; echo "$a"' _ a.pdf b.pdf c.pdfa-b-c# join → 'a.pdf-b.pdf-c.pdf'; strip the single word; print the word$ zsh -c 'IFS=-; a="${*%.pdf}"; echo "$a"' _ a.pdf b.pdf c.pdfa.pdf-b.pdf-c Note that you should almost never use $* . 
It's only useful to join the positional arguments with IFS , and it makes it impossible to distinguish IFS characters created by the joining from IFS characters that were already in the arguments. "$@" is almost always the right form. Note that you do need the double quotes to avoid word expansions (even in zsh, although the effect of omitting the quotes is much smaller in zsh). To make your script simple to understand, do one step at a time: strip off the suffix from each part, then join the parts. Use an array variable to store the intermediate result.

parts=("${@%.pdf}")      # using @ because we want to have array behavior
IFS=-
joined="${parts[*]}"     # using * and not @ for joining
echo "$joined.pdf"

This snippet works identically in bash and zsh.
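For illustration, saving that last snippet as a script and invoking it the way the question does (the name test.sh is just taken from the question) should behave roughly like this — a quick sketch rather than a transcript:

$ bash test.sh a.pdf b.pdf c.pdf
a-b-c.pdf
$ zsh test.sh a.pdf 'some file.pdf' c.pdf
a-some file-c.pdf

The second call shows why the quoted "${@%.pdf}" form matters: an argument containing a space survives intact instead of being split into further words.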
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/678159", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/497020/" ] }
678,302
I have a list of paths that are in a file called pathlist.txt . It looks like so: /home/abc/dirA/home/abc/dirB/home/abc/dir with space/home/abc/dirX I need to find all files in each of those paths. The following approach works but only with paths that do not have spaces in them: find $(tr '\n' ' ' < pathlist.txt) -type f -printf "%p, %AY-%Am-%Ad \n" I tried setting IFS=$'\n' and some experimentation with xargs but no success. Any suggestions as to how to make sure that find accepts paths that have spaces (and possibly other special chars) in them?
Assuming bash , one way is:

readarray -t a < pathlist.txt
find "${a[@]}" -type f ....
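If readarray isn't available (it is a bash 4+ builtin), a sketch of a portable alternative is to run find once per line of the file; the whole line is quoted, so spaces in the paths are preserved. This reuses the GNU -printf format from the question:

while IFS= read -r dir; do
    find "$dir" -type f -printf "%p, %AY-%Am-%Ad \n"
done < pathlist.txt

The trade-off is one find invocation per directory instead of a single one for the whole list.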
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/678302", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8141/" ] }
678,699
I have a file with quite many lines, and I would need the nth last line (specifically the 95th from bottom). How could I go on about doing this? One way would be to use tail but then it prints everything from last to 95th last, when I would only need the 95th.
As you noted, you can use tail to get the last 95 lines from a file. You just want the first of these, for which there is a utility called head . So tail -95 file | head -1 Using tail is probably the best you can do. Another approach would be to read the lines into an array of lines, and just print out the n-95 line when you get to the end of the file. You don't actually need to store all the lines so you can have a circular buffer with 95 elements to store the last 95 lines you have read.
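The circular-buffer idea sketched above can be written in a few lines of awk (any POSIX awk should do); it keeps only the last n lines in memory and, at end of input, prints the oldest of them:

awk -v n=95 '
    { buf[NR % n] = $0 }
    END { if (NR >= n) print buf[(NR - n + 1) % n] }
' file

Note one small behavioural difference: if the file has fewer than n lines this prints nothing, whereas tail -95 file | head -1 would print the file's first line.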
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/678699", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/502434/" ] }
678,777
I have recently installed CentOS Stream 9 and I'm finding that I can't get epel to work properly following any instructions I can find online. Resources for CentOS 8 / CentOS Stream 8 don't work and there are barely any resources for CentOS 9 Stream. I want to install epel so I can install packages like ddclient , but I can't figure it out. Normally I'd just do this: dnf install epel-releasednf config-manager --set-enabled powertools # I have also tried PowerTools But I get this error: Error: No matching repo to modify: powertools. I have messed around a lot trying to get epel so my set up may be messed up now, but here are my currently installed relevant packages and repos: $ dnf list installed | grep -E 'centos|epel'centos-gpg-keys.noarch 9.0-3.el9 @baseoscentos-logos.x86_64 90.4-1.el9 @AppStreamcentos-logos-httpd.noarch 90.4-1.el9 @appstreamcentos-stream-release.noarch 9.0-3.el9 @baseoscentos-stream-repos.noarch 9.0-3.el9 @baseosepel-release.noarch 8-13.el8 @@commandline$ dnf repolistrepo id repo namePlex Plexappstream CentOS Stream 9 - AppStreambaseos CentOS Stream 9 - BaseOS Any help would be much appreciated, I'm almost at the point of just wiping and installing CentOS Stream 8 instead. Thanks!
On EL9 (CentOS Stream 9 / RHEL 9), the powertools repository has been renamed to crb (CodeReady Linux Builder). To enable it, run

dnf config-manager --set-enabled crb

For other versions of EPEL, check the documentation from Fedora. https://developers.redhat.com/blog/2018/11/15/introducing-codeready-linux-builder
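Put together for CentOS Stream 9, the whole sequence would look roughly like this (using ddclient from the question purely as a placeholder — whether a given package has actually been built for EPEL 9 yet is a separate question). Given the leftover epel-release 8 package visible in the question's output, removing it first is probably wise:

sudo dnf remove epel-release          # clear out the stray EL8 package
sudo dnf install epel-release         # should pull in the EL9 one from the CentOS repos
sudo dnf config-manager --set-enabled crb
sudo dnf install ddclient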
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/678777", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/194119/" ] }
678,930
I would like to output this on completion of my bash script.

 /\_/\
( o.o )
 > ^ <

I have tried the following but all return errors.

echo /\\_/\\\n\( o.o \)\n > ^ <
echo \/\\_\/\\\r\n( o.o )\r\n > ^ <
echo /\\_/\\\n\( o.o \)\n \> ^ <

How do I escape these characters so that bash renders them as a string?
In this case, I'd use cat with a (quoted) here-document:

cat <<'END_CAT'
 /\_/\
( o.o )
 > ^ <
END_CAT

This is the best way of ensuring the ASCII art is outputted the way it is intended without the shell "getting in the way" (expanding variables etc., interpreting backslash escape sequences, or doing redirections, piping etc.) You could also use a multi-line string with printf :

printf '%s\n' ' /\_/\
( o.o )
 > ^ <'

Note the use of single quotes around the static string that we want to output. We use single quotes to ensure that the ASCII art is not interpreted in any way by the shell. Also note that the string that we output is the second argument to printf . The first argument to printf is always a single quoted formatting string, where backslashes are far from inactive. Or multiple strings with printf (one per line):

printf '%s\n' ' /\_/\' '( o.o )' ' > ^ <'

printf '%s\n' \
    ' /\_/\' \
    '( o.o )' \
    ' > ^ <'

Or, with echo (but see Why is printf better than echo? ; basically, depending on the shell and its current settings, there are possible issues with certain escape sequences that may not play nice with ASCII drawings),

echo ' /\_/\
( o.o )
 > ^ <'

But again, just outputting it from a here-document with cat would be most convenient and straight-forward I think.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/678930", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/465869/" ] }
679,064
I'm trying to override malloc/free functions for the program, that requires setuid/setgid permissions. I use the LD_PRELOAD variable for this purpose. According to the ld documentation , I need to put my library into one of the standard search directories (I chose /usr/lib) and give it setuid/setgid permissions. I've done that. However, I still can't link to my .so file, getting the error: object 'liballoc.so' from LD_PRELOAD cannot be preloaded: ignored What can be the possible reasons for that? Tested this .so file on programs that don't have setuid/setgid permissions and all works fine.OS: RedHat 7.0
According to the ld documentation, I need to put my library into one of the standard search directories (I chose /usr/lib )

That was the mistake. You should've put it in /usr/lib64 (assuming that your machine is an x86_64). I've just tried the recipe from the manpage on a Centos 7 VM (which should be ~identical to RHEL 7) and it works. As root:

cynt# cc -shared -fPIC -xc - <<<'char *setlocale(int c, const char *l){ errx(1, "not today"); }' -o /usr/lib64/liblo.so
cynt# chmod 4755 /usr/lib64/liblo.so

As a regular user with a setuid program:

cynt$ LD_PRELOAD=liblo.so su -
su: not today

Whether it's a good idea to use that feature is a totally different matter (IMHO, it isn't).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/679064", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/502776/" ] }
679,131
I want to use a shortcut command for printf with a specific format string, and came up with the following:

local PF="printf %s\n"
$PF "Some Text"

It does the job, but I wonder if there exist any caveats when using such an approach that could lead to misinterpretation of the format string.
Why execute the value of a variable when you could just use a function - that's what they're for.

PF() { printf "%s\n" "$@" ; }
PF "Some Text"

or an alias:

alias PF='printf "%s\n"'
PF "Some Text"

If you really want to have a variable involved, use it to hold the format string. e.g.

fmt="%s\n"
printf "$fmt" "Some Text"

If you want to keep the line lengths under, say, 80 chars, then assign the variable in two or more statements. e.g.

fmt="......part1......"
fmt+="......part2......"
fmt+="......part3......"
...

This is also useful if you just want to make assigning the format string more readable, or to add individual comments about some or all of the parts.
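To make the caveat from the question concrete: storing the command in a string only works as long as the format string happens to survive word splitting. A small sketch of the failure mode (the format strings are made up for the demonstration):

PF="printf %s\n"
$PF "Some Text"            # happens to work: the format contains no spaces

PF="printf '%s %s\n'"
$PF one two                # broken: the single quotes become literal characters
                           # and the space splits the format into two words

A function or alias sidesteps this entirely, since no unquoted expansion of a command string is involved.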
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/679131", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/501709/" ] }
679,201
I am trying to populate a txt with all the names of .fits files from a folder with this command: ls *.fits > output_all.txt The number of .fits files in the folder is >330k and I get the error message bash: /usr/bin/ls: Argument list too long How can I solve this? Alternatively, it might be possible to avoid at all the creation of the file output_all.txt . I only need it to tell the STILTS tool what .fits files to merge into a large .fits file with this command stilts tcat in=@output_all.txt out=table_stilts.fits icmd='keepcols "FLUX LOGLAM"' If you know a way in which to tell STILTS to accept as input a directory, not a file it will solve my problem with ls . Tnx
In ls *.fits , it's the shell that does all the hard work finding the filenames that end in .fits and don't start with . . Then it passes that list to ls , which sorts it (again, as shell globs already sort the list before passing to ls ) and displays it (in columns or one per line depending on the implementation and whether the output goes to a terminal or not) after having checked that each file exists. So it's a bit counter-productive especially considering that: you forgot the -- option delimiter, so any filename starting with - would cause problems. you forgot the -d option, so if any file is of type directory, ls would list their contents instead of themselves. as ls is a separate command from the shell (in most shells including bash ), it ends up having to be executed in a separate process using the execve() system call and you end-up tripping its limit on the cumulative size of arguments and environment variables. If you just need to print the list generated by the shell from *.fits , you can use printf instead which is built-in in most shells (and therefore doesn't invoke execve() and its limit): printf '%s\n' *.fits > output_all.txt That leaves one problem though: If *.fits doesn't match any file, in the bash shell, *.fits is left as-is, so printf will end-up printing *.fits<newline> . While ls would give you an error message about that non-existent *.fits file and leave the output_all.txt empty. That can be changed with the nullglob option (which bash copied from zsh ) which causes *.fits to expand to nothing instead. But then we run into another problem: when not passed any argument beside the format, printf still goes through the format once as if passed empty arguments, so you'd end up with one empty line in output_all.txt . That one can be worked around with: shopt -s nullglobprintln() { [ "$#" -eq 0 ] || printf '%s\n' "$@"}println *.fits > output_all.txt If you can switch to zsh instead of bash , it becomes easier: print -rC1 -- *.fits(N) > output_all.txt Where N enables nullglob for that one glob and print -rC1 prints its arguments r aw on 1 C olumn, and importantly here: prints nothing if not passed any argument. With zsh , you can also restrict the list to regular files only (excluding directories, symlinks, fifos..) using the . glob qualifier ( *.fits(N.) for instance), or include hidden files with D ( *.fits(ND.) )... Lastly you can also always defer to find to find the files, but if you do need the list to be sorted and hidden files to be excluded, and avoid a ./ prefix, that becomes quickly tedious as well and you'd need GNU extensions. For example, for the equivalent of print -rC1 -- *.fits(N.) : LC_ALL=C find . -maxdepth 1 ! -name '.*' -type f -printf '%P\0' | sort -z | tr '\0' '\n' > output_all.txt
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/679201", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/392430/" ] }
679,495
Background In the past, if you wanted to install software from an Ubuntu PPA in Debian, the approach was to import/trust the developer's GPG key from keyserver.ubuntu.com, $ sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com E58A9D36647CAE7F then add the repository to /etc/apt/sources.list.d/... # /etc/apt/sources.list.d/papirus-ppa.listdeb http://ppa.launchpad.net/papirus/papirus/ubuntu focal main (Off the top of my head, examples can be found in this Ubuntu docs wiki for mkusb or the Papirus icon theme readme .) Problem The problem is that this approach now produces deprecation warnings ( apt-key was deprecated over a year ago ): $ apt-key adv ...Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)) Ninja edit See this answer below for yet another, separate deprecation in this apt-key command! Solution? The new approach (as exemplified by, say, Docker ) is twofold: Save the developer's GPG key to disk, $ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg then specify the path to that GPG key when defining a new APT source: # /etc/apt/sources.list.d/docker.listdeb [... signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian buster stable ⬑------------------ this part is new -----------------⬏ Step 1 is the part that replaces apt-key , but it doesn't seem possible to fetch individual GPG keys off of keyserver.ubuntu.com. Is it possible to adapt this approach for Ubuntu PPAs? If not, how can Ubuntu PPAs be added as software sources in Debian without the use of apt-key ?
apt-key adv basically passes CLI arguments/options directly to gpg , but only after setting up a temporary keyring. You can do the same manually with: $ export GNUPGHOME="$(mktemp -d)" # optional (skipping this means keys will be imported to your GPG keyring)$ gpg --recv-keys --keyserver keyserver.ubuntu.com 54B8C8AC$ gpg --export 54B8C8AC | sudo tee /usr/share/keyrings/mkusb-archive-keyring.gpg$ cat <<-SOURCE | sudo tee /etc/apt/sources.list.d/mkusb.list deb [signed-by=/usr/share/keyrings/mkusb-archive-keyring.gpg] http://ppa.launchpad.net/mkusb/ppa/ubuntu focal main SOURCE ( apt-key is just a shell script, so you can examine the code yourself in your favorite editor; e.g., vim $(which apt-key) .) If it's not working... At first, I was receiving this error: $ sudo apt update...Get:12 http://ppa.launchpad.net/papirus/papirus/ubuntu focal InRelease [18.0 kB]Err:12 http://ppa.launchpad.net/papirus/papirus/ubuntu focal InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E58A9D36647CAE7F which turned out to be a file permissions issue: $ ls -l /usr/share/keyrings/*...-rw-r--r-- 1 root root 3375 Nov 22 21:38 /usr/share/keyrings/filebot-archive-keyring.gpg-rw-r--r-- 1 root root 1124 Nov 22 21:38 /usr/share/keyrings/mkusb-archive-keyring.gpg-rw------- 1 root root 1126 Nov 29 08:15 /usr/share/keyrings/papirus-archive-keyring.gpg Be sure you are saving developer GPG keys with 644 permissions. Another problem From the gpg(1) manpage: --keyserver name This option is deprecated - please use the --keyserver in ‘dirmngr.conf’ instead. Apparently the original approach has been doubly deprecated! AFAIK gpg does not issue warnings about the use of this CLI option (yet) but a proper solution to this problem would seem to look something like this? $ echo "keyserver hkp://keyserver.ubuntu.com" >> "${GNUPGHOME}/dirmngr.conf"$ gpgconf --kill dirmngr$ gpg --recv-keys 54B8C8AC... except I tried this and got gpg: keyserver receive failed: Connection timed out So if anyone has any ideas, I'm all ears.
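One guess, offered as an assumption rather than a verified fix: dirmngr speaks HKP on port 11371 by default, and many networks block that port, so pointing it at port 80 sometimes cures exactly this kind of timeout:

echo "keyserver hkp://keyserver.ubuntu.com:80" >> "${GNUPGHOME}/dirmngr.conf"
gpgconf --kill dirmngr
gpg --recv-keys 54B8C8AC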
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/679495", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176219/" ] }
679,512
Looking to rsync a drive that contains folders that begin with a single dash or a double dash, for example: -Archives- --Archives-- I've tried the following: rsync -azP -e <source drive>/* <destination drive> ...however, the folders that start with dashes (-) or double-dashes (--) are not getting synced. How can I ensure that any folders beginning with dashes (-) or double-dashes (--) get properly synced?
Prefix the relative pathname (filename) with ./ and it'll no longer start with a dash rsync -av ./-items* destination:destPath/ Or remove the wildcard * and transfer the parent directory rsync -av ./ destination:destPath/dir
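Applied to the directory names from the question (the destination path below is a placeholder), that would look like:

rsync -azP ./-Archives- ./--Archives-- /path/to/destination/

The ./ prefix is enough to stop the leading dashes from being parsed as options.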
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/679512", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/503203/" ] }
679,569
What is the difference between /lib and /usr/lib and /var/lib ? Some of the files are symbolic links that "duplicate" content of other directories.
Someone else can probably explain this with much more detail and historical reference but the short answer: /lib is a place for the essential standard libraries. Think of libraries required for your system to run. If something in /bin or /sbin needs a library that library is likely in /lib . /usr/lib the /usr directory in general is as it sounds, a user based directory. Here you will find things used by the users on the system. So if you install an application that needs libraries they might go to /usr/lib . If a binary in /usr/bin or /usr/sbin needs a library it will likely be in /usr/lib . /var/lib the /var directory is the writable counterpart to the /usr directory which is often required to be read-only. So /var/lib would have a similar purpose as /usr/lib but with the ability to write to them.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/679569", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/496510/" ] }
679,595
I installed new os debian bullseye in laptop. I cant find wifi option in network manager. sudo lshw -C network gives, ... *-network UNCLAIMED description: Network controller product: RTL8821CE 802.11ac PCIe Wireless Network Adapter vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:02:00.0 version: 00 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress cap_list configuration: latency=0 resources: ioport:2000(size=256) memory:c0600000-c060ffff... lspci gives output like ...Network controller: Realtek Semiconductor Co., Ltd. RTL8821CE 802.11ac PCIe Wireless Network Adapter... lsmod gives ...rtw88_8821ce 16384 0rtw88_8821c 77824 1 rtw88_8821certw88_pci 28672 1 rtw88_8821ce... After adding backports to the source list, I run sudo apt install -t bullseye-backports firmware-realtek , then is shows firmware-realtek is already the newest version (20210315-3) . I can see that RTL8821C is available without backports here . output of sudo modprobe wl && dmesg | grep wl is modprobe: FATAL: Module wl not found in directory /lib/modules/5.10.0-9-amd64 Output of ifconfig eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 10.42.0.2 netmask 255.255.255.0 broadcast 10.42.0.255 inet6 xxxx::xxxx:xxxx:xxxx:xxxx prefixlen 64 scopeid 0x20<link> ether xx:xx:xx:xx:xx:xx txqueuelen 1000 (Ethernet) RX packets 16449 bytes 16751257 (15.9 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 13241 bytes 1848301 (1.7 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1000 (Local Loopback) RX packets 129 bytes 11324 (11.0 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 129 bytes 11324 (11.0 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 When reinstalling firmware I get below warnings, W: Possible missing firmware /lib/firmware/amdgpu/arcturus_gpu_info.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/navy_flounder_ta.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/navy_flounder_sos.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/arcturus_ta.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/arcturus_asd.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/arcturus_sos.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/arcturus_rlc.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/arcturus_mec2.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/arcturus_mec.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/navy_flounder_rlc.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/navy_flounder_mec2.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/navy_flounder_mec.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/navy_flounder_me.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/navy_flounder_pfp.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/navy_flounder_ce.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/arcturus_sdma.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/navy_flounder_sdma.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/sienna_cichlid_mes.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/navi10_mes.bin for module amdgpuW: 
Possible missing firmware /lib/firmware/amdgpu/navy_flounder_vcn.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/arcturus_vcn.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/navy_flounder_smc.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/arcturus_smc.bin for module amdgpuW: Possible missing firmware /lib/firmware/amdgpu/navy_flounder_dmcub.bin for module amdgpu Also sudo journalctl | grep rtw returns Nov 18 15:27:26 debian kernel: rtw_8821ce 0000:02:00.0: firmware: failed to load rtw88/rtw8821c_fw.bin (-2)Nov 18 15:27:26 debian kernel: rtw_8821ce 0000:02:00.0: Direct firmware load for rtw88/rtw8821c_fw.bin failed with error -2Nov 18 15:27:26 debian kernel: rtw_8821ce 0000:02:00.0: failed to request firmwareNov 18 15:27:26 debian kernel: rtw_8821ce 0000:02:00.0: failed to load firmwareNov 18 15:27:26 debian kernel: rtw_8821ce 0000:02:00.0: failed to setup chip efuse infoNov 18 15:27:26 debian kernel: rtw_8821ce 0000:02:00.0: failed to setup chip informationNov 18 15:27:26 debian kernel: rtw_8821ce: probe of 0000:02:00.0 failed with error -22Nov 24 21:38:57 debian kernel: rtw_8821ce 0000:02:00.0: firmware: direct-loading firmware rtw88/rtw8821c_fw.binNov 24 21:38:57 debian kernel: rtw_8821ce 0000:02:00.0: Firmware version 24.8.0, H2C version 12Nov 24 21:38:57 debian kernel: rtw_8821ce 0000:02:00.0: rfe 2 isn't supportedNov 24 21:38:57 debian kernel: rtw_8821ce 0000:02:00.0: failed to setup chip efuse infoNov 24 21:38:57 debian kernel: rtw_8821ce 0000:02:00.0: failed to setup chip informationNov 30 11:16:48 debian sudo[2358]: username : TTY=pts/0 ; PWD=/home/username ; USER=root ; COMMAND=/usr/sbin/modprobe rtw88_8821ceNov 30 11:23:31 debian sudo[2561]: username : TTY=pts/0 ; PWD=/home/username ; USER=root ; COMMAND=/usr/sbin/modprobe rtw88_8821ce and sudo dkms status returns nothing. I disabled secure boot and reinstalled driver. But not worked.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/679595", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/503297/" ] }
679,891
I'm practicing shell scripts and am trying to make a simple script that takes a directory as an argument, loops through each file within and prints out its name and size. #!/bin/bash# A practice shell script to try and display a list of file names# and their sizes using the output of ls -l and cut.declare -i indexexport index=0export name=""export size=0for file in $1 ; do index+=1 name=`basename $file` size=`ls -l $file | cut -d " " -f 5` echo "$index: $name, size: $size bytes"done When I give ./* as the argument, it does it for one file and that's it. However, if I edit the code above and just put ./* in place of $1 , it works and loops through all files in the current directory. Why won’t it do the same, when $1 is supposed to equal ./* ?
The reason is that the shell from which you call the script expands the globbing pattern ./* before it is passed to the script. That means, that if your globbing pattern matches e.g. file1.txt to file4.txt , calling the script as ./my_script.sh ./* will actually be interpreted as ./my_script.sh file1.txt file2.txt file3.txt file4.txt and these will be the arguments that the shell script sees. For further reading, have a look at the section on shell expansion order in the Bash Reference manual. There are two possibilities to overcome the problem: If you are sure that you always want to iterate over all files in a given directory, pass the directory as argument, and iterate over for f in "$1"/*do # operations on "$f"done Alternative, if you are sure that you will only pass file names to operate on, iterate over the entire argument list, as in for f in "$@"do # operations on "$f"done If you want to do it by passing a glob pattern into the script - which certainly is an interesting exercise - this is also possible (see comment by @ilkkachu). As mentioned by @fra-san in a comment, the approach has advantages - it can add more flexibility to the script usage, and it circumvents the limitation on shell command-line parameters (cf. "argument list too long"; though RAM will still limit the length of the resulting filename list) - but requires you to be extra careful. You can prevent the shell from expanding the glob by enclosing the argument in quotes (single or double), or escaping the glob character with a backslash: ./my_script.sh "./*"./my_script.sh './*'./my_script.sh ./\* Inside the script, you would refer to the positional parameter $1 unquoted so that it actually is interpreted by the shell (something that we often want to avoid). Since that "interpretation" not only involves expansion (see above), but also word splitting you need to set the input field separator IFS to the empty string to ensure that no word-splitting occurs. The for loop would then look like IFS=for f in $1do # Operations on "$f"done A few general notes on your script: Always quote shell variables , in particular when they contain filenames, as otherwise your script will stumble on filenames with spaces or other, even more exotic characters in them - remember, even the newline is an allowed character for filenames (yuck)! Parsing the output of ls is highly discouraged for similar reasons. If you want to identify attributes of a particular file, the stat tool is a better choice. For determining the size of a file, e.g., you could use size=$(stat --printf="%s" "$f") It is recommended to use the "new" $( ... ) style for command substitutions rather then the old "backtick-style" ` ... ` . It is a good habit to check your shell scripts with shellcheck , also available as standalone tool in many Linux distributions, to guard against this (and other) possible error sources.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/679891", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/503662/" ] }
679,896
I installed NordVPN from the AUR (nordvpn-bin package) around a week or two weeks ago. After installing and getting logged in it worked as it was supposed to. However, after rebooting my computer, every time I try to connect, no matter what server I try to connect to, I get the following message: at 07:44:37 ❯❯❯ nordvpn connect chicagoConnecting to United States #8798 (us8798.nordvpn.com)Whoops! We couldn't connect you to 'chicago'. Please try again. If the problem persists, contact our customer support. I tried logging out and back in, restarting nordvpnd, and running as sudo. All of my packages are up to date. I'm not sure what else to try. Any ideas?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/679896", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85047/" ] }
679,911
I need to run a python script several times in parallel. So far I have been executing it in the background like this:

ipython program.py &
ipython program.py &
...

I want to know if this way uses one core per execution or just executes program.py using threads. By the way, I want to explore the use of GNU Parallel, but the examples that I find are about commands like "cat" or "find". How can I use GNU Parallel for executing program.py concurrently, each time on a different core? Thanks for your help.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/679911", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205483/" ] }
679,925
I am trying to clone a 500 GB SSD to a 1TB SSD. For some reason, it keeps failing when the data being copied reaches 8GB. This is the third 1TB SSD I've tried this on and they all get stuck at the same place. I've ran the following command: dd if=/dev/sda of=/dev/sdb bs=1024k status=progress I've also tried to clone the drive using Clonezilla which fails at the same spot. I used GParted to reformat the drive and set it to a EXT4 file system but it still gets stuck at the same spot. Sda is internal and sdb is plugged in externally. The error I'm getting says: 7977443328 bytes (8.0 GB, 7.4 GB) copied, 208s, 38.4 MB/sdd: error reading '/dev/sda': Input/output error7607+1 records in7607+1 records out Thanks to @roaima for the answer below. I was able to run ddrescue and it copied most of the data over. I took the internal SSD out and connected both the new and old SSDs to a CentOS box via USB3. I ran the following: ddrescue -v /dev/sdb /dev/sdc tmp --force It ran for over 15 hours. It stopped overnight. But the good thing is it picked back up where it left off when I ran the command again. I used screen so that I wouldn't be locked into a single session the second time around :) . I used Ctrl+c to exit the ddrescue command after 99.99% of the data was rescued since it was hanging there for hours. I was able to boot from the new drive and it booted right up. Here is the state where I exited the ddrescue: Initial status (read from mapfile)rescued: 243778 MB, tried: 147456 B, bad-sector: 0 B, bad areas: 0Current status ipos: 474344 MB, non-trimmed: 1363 kB, current rate: 0 B/s ipos: 474341 MB, non-trimmed: 0 B, current rate: 0 B/s opos: 474341 MB, non-scraped: 522752 B, average rate: 8871 kB/snon-tried: 0 B, bad-sector: 143360 B, error rate: 0 B/s rescued: 500107 MB, bad areas: 123, run time: 8h 1m 31spct rescued: 99.99%, read errors: 354, remaining time: 14h 31m time since last successful read: 6m 7sScraping failed blocks... (forwards)^C Interrupted by user Hopefully this helps others. I think my old drive was starting to fail. Hopefully no data was lost. Now on to resizing the LUKS partition :)
The error is, dd: error reading '/dev/sda': Input/output error , which tells you that the problem is reading the source disk and not writing to the destination. You can replace the destination disk as many times as you like and it won't resolve the issue of reading the source. Instead of using dd , consider rescuing the data off the disk before it dies completely. Either copy the files using something like rsync or cp , or take an image copy with ddrescue . ddrescue -v /dev/sda /dev/sdb /some/path/not/on/sda_or_sdb The last parameter points to a relatively small temporary file (the map file) that is on neither /dev/sda nor /dev/sdb . It could be on an external USB memory stick if you have nothing else. The ddrescue command understands that a source disk may be faulty. It reads relatively large blocks at a time until it hits an error, and at that point it marks the section for closer inspection and smaller copy attempts. The map file is used to allow for restarts and continuations in the event that your source disk locks up and the system has to be restarted. It'll do its best to copy everything it can. Once you've copied the disk, your /dev/sdb will appear to have partitions corresponding only to the original disk's size. You can use fdisk or gparted / parted to fix that up afterwards. If you had an error copying data you should first use one of the fsck family to check and fix the partitions. For example, e2fsck -f /dev/sdb1 .
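Since the question's follow-up mentions growing a LUKS volume afterwards: once the partition itself has been enlarged with gparted/fdisk, the usual sequence is roughly the one below. This assumes ext4 sitting directly on the LUKS mapping and uses a placeholder mapping name; with LVM in between you would also need pvresize/lvextend, so treat it as a sketch, not a recipe:

cryptsetup resize luks-mapping-name          # grow the dm-crypt mapping to fill the partition
e2fsck -f /dev/mapper/luks-mapping-name
resize2fs /dev/mapper/luks-mapping-name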
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/679925", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/289356/" ] }
679,976
I would like to list the files matching a specific pattern along with their number of rows.So far I have tried the following, which list me the files matching the desired pattern: find 2021.12.*/ -maxdepth 2 -name "myfilepattern.csv" -ls123456789 32116 -rw-rw-r-- 1 user1 user1 32881884 Dec 1 23:59 2021.12.01/myfilepattern.csv234567891 4 -rw-rw-r-- 1 user1 user1 144 Dec 2 00:00 2021.12.02/myfilepattern.csv I would like to add a column to this result containing the number of rows of each files 2021.12.01/myfilepattern.csv and 2021.12.02/myfilepattern.csv . I don't have any specific requirements about the position of such column. Can be at the beginning or at the end.
You can use -printf and -exec actions, along with wc -l to count lines/rows: find 2021.12.*/ -maxdepth 2 -name "myfilepattern.csv" -printf '%i\t%k\t%M\t%n\t%u\t%g\t%s\t%Tb %Td %TH:%TM\t' -exec wc -l {} \; The row count will be the second to last column.
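A variant of the same idea, in case you would rather have find print the path itself (so you control its position) and get only the bare number from wc: redirecting the file into wc -l makes it print just the count. A sketch with a reduced set of columns:

find 2021.12.*/ -maxdepth 2 -name "myfilepattern.csv" \
    -printf '%p\t%s\t%TY-%Tm-%Td\t' \
    -exec sh -c 'wc -l < "$1"' sh {} \;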
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/679976", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/322397/" ] }
680,095
I'm using speedtest CLI in a BASH script and trying to grab the output by using only one line if possible. Typical output of Speedtest: Speedtest by Ookla Server: (censored) ISP: (censored) Latency: 93.85 ms (222.66 ms jitter)Download: 85.75 Mbps (data used: 134.8 MB) Upload: 5.68 Mbps (data used: 6.2 MB) I would like to grab Latency, Download speed, Upload speed, and jitter. Most ideal format: Download Speed: xx MbpsUpload Speed: xx MbpsLatency: xx msJitter: xx ms My current test code is using 2 wasteful statements: dl_speed=`speedtest | grep "Download: " | head -2 | tail -1 | awk {'print$2'} | cut -f1 -d:`ul_speed=`speedtest | grep "Upload: " | head -2 | tail -1 | awk {'print$2'} | cut -f1 -d:`echo "Download Speed: $dl_speed Mbps"echo "Upload Speed: $ul_speed Mbps"
With GNU awk. I used at least one space and ( as field separators. Append this to your speedtest command. | awk 'BEGIN{ FS=" +|\\(" }; /Download/{ dow=$3 " " $4 }; /Upload/ { upl=$3 " " $4 }; /Latency/ { lat=$3 " " $4 }; /jitter/ { jit=$6 " " $7 }; END{ print "Download Speed:", dow; print "Upload Speed:", upl; print "Latency:", lat; print "Jitter:", jit }' Output to stdout: Download Speed: 85.75 MbpsUpload Speed: 5.68 MbpsLatency: 93.85 msJitter: 222.66 ms
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/680095", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/468913/" ] }
680,271
I've previously written a script that searches a directory tree for .h and .c files and then runs clang-format on them: find $directory -name '*.[hc]' -exec clang-format -i {} \; This works just as expected. Now I'd like to add .cpp files to the search. However, neither -name '*.{[hc],cpp}' nor -name '{*.[hc],*.cpp}' work. That is, they find no files. I know that I could get my logic to work if I used find 's -o option. However, there must be a way to do this with a single -name directive.
The patterns used with the -name predicate in find are standard filename globbing patterns. What you are trying to use is a brace expansion, which find does not support. Note that there is no single standard globbing pattern that matches filenames that end with .c , .h or .cpp . You might have wanted to use something like '*.'{c,h,cpp} , which expands to *.c , *.h , and *.cpp , but that does not include the -name predicate nor the -o . The next thing to try is '-o -name "*.'{c,h,cpp}'"' , but this expands to the three strings -o -name "*.c" -o -name "*.h" , and -o -name "*.cpp" . This also can't be used as you would have to split them on spaces to get find to recognize the separate substrings (and remove the -o from the first one). It would possibly work with eval though, but it seems like more hassle than it's worth. Instead of that, you may use two -name tests with an OR in-between: find "$directory" -type f \( -name '*.[ch]' -o -name '*.cpp' \) \ -exec clang-format -i {} + This uses two -name tests as described earlier ( -o is the OR operator), and also calls clang-format as few times as possible by passing batches of found pathnames to the tool rather than invoking it once for each file. With a tiny extra bit of programming, you may store all the filename suffixes that you want to pick up in a list, and create the needed find expression from that. Since you did not mention what shell you're using, I'm doing this for the POSIX sh shell: set -- c h cppfor suffix do set -- "$@" -o -name "*.$suffix" shiftdoneshift # shifts off the initial "-o"find "$directory" -type f \( "$@" \) -exec clang-format -i {} + or set --for suffix in c h cpp; do set -- "$@" -o -name "*.$suffix"doneshiftfind "$directory" -type f \( "$@" \) -exec clang-format -i {} + The list that "$@" expands to in this example would be the equivalent of -name '*.c' -o -name '*.h' -o -name '*.cpp'
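Since the original script is a bash script, the same build-the-expression-from-a-list idea can also be written with a bash array instead of the positional parameters — a sketch:

suffixes=(c h cpp)
tests=()
for s in "${suffixes[@]}"; do
    tests+=(-o -name "*.$s")
done
tests=("${tests[@]:1}")      # drop the leading -o
find "$directory" -type f \( "${tests[@]}" \) -exec clang-format -i {} +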
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/680271", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/413026/" ] }
680,272
Why couldn't the Live ISOs just be a minimal Linux system with an installer? Is there any reason to use squashfs to hold the root of the filesystem? Is it just for better compression, or are there other reasons? I've seen some answers (and comments) say that it's for read-only reasons. What about persistence, like what Ubuntu or EndeavourOS has in their Live USBs?
There are a couple of important reasons for this, but the big two are space constraints, and requirements from the filesystem itself. SquashFS is a highly optimized filesystem image format that provides, among other benefits: High levels of data compression. Built-in block-level deduplication (any given block is stored only once, and all files that contain a copy of that block just reference that one copy). No practical limitations on file sizes (this usually does not matter, but is worth mentioning IMO). Proper support for file ownership, file permissions, extended attributes (needed for example for SELinux) Very low runtime overhead despite the above benefits. Reasonably good performance. A burnable live system image needs to conform to the filesystem format required by the media type it’s being used with, either ISO 9660 if it’s an optical disk (because while it could use UDF, almost nobody actually does that), or FAT32 on most USB connected storage devices. FAT32 notoriously supports none of those first four benefits listed above. ISO 9660 technically has support for POSIX-style file ownership and permissions (the Rock Ridge extensions), but it lacks practical support for compression and deduplication. However, Linux needs POSIX-style file ownership and permissions to work correctly, and in most cases it’s extremely desirable for the final live system image to be as small as possible and therefore good compression is desirable, and because SquashFS does this better than any other options available for Linux right now, it’s what gets used for the root filesystem for the live image (since that is generally the biggest part of the image by a significant margin, and is also the only part that the bootloader does not need to understand).
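As for the persistence feature mentioned in the question (Ubuntu, EndeavourOS): the squashfs stays strictly read-only, and the live system's initramfs stacks a writable layer on top of it with overlayfs (older releases used aufs), with that writable layer living on a persistence partition or file on the USB stick. Very roughly — this is a hand-drawn sketch with made-up paths, the real hooks (casper, dracut live modules, etc.) are considerably more involved:

mkdir -p /run/live/ro /run/live/persist/upper /run/live/persist/work /run/live/root
mount -t squashfs -o loop /path/to/filesystem.squashfs /run/live/ro
# /run/live/persist must already be a mounted, writable filesystem
# (the "casper-rw"/persistence volume on the stick)
mount -t overlay overlay \
    -o lowerdir=/run/live/ro,upperdir=/run/live/persist/upper,workdir=/run/live/persist/work \
    /run/live/root

All writes land in the upper layer, which is why a compressed, read-only squashfs and persistence don't conflict.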
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/680272", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/502557/" ] }
680,296
I find myself using history | grep ‘whatever' then copying and pasting the cmd, and I find this tedious. I also sometimes do $(history | grep 'whatever') to execute the cmd if it’s 1 line (more than that and I have to process it) but I have to use sed to strip the line numbers. So when copy paste wasn’t available did people just retype cmds? Or did they just use sed and awk to process then execute?
Speaking from my own history here, not necessarily UNIX chronological order. In the "old days" there was neither command line editing nor mouse. You literally retyped the command, as you would for vanilla sh or dash . I suspect this was one reason why commonly used commands were short. Simple scripts in a personal $HOME/bin directory incorporated in the $PATH helped with more frequently used operations. For a number of years this was my environment, augmented by X Windows running on a PC ( XVision ), which allowed me to copy and paste with a mouse. Then came the C Shell ( csh ) with its command history. Still no command line editing but you could repeat previous commands and perform simple modifications to them. Here, % is the C Shell prompt:

% echo hello, world
hello, world
% ^world^earth
echo hello, earth
hello, earth
% history
    12  echo hello, world
    13  echo hello, earth
    14  history
% !13
echo hello, earth
hello, earth
% !13:s/hello/goodbye
echo goodbye, earth
goodbye, earth
% !h
history
    12  echo hello, world
    13  echo hello, earth
    14  history
    15  echo hello, earth
    16  echo goodbye, earth
    17  history

Then came tcsh which offered a command line editing tool. This became my favourite session shell for a number of years, although what scripts I wrote were for sh . Then I discovered bash , which meant I could use history and command line substitution (with ^old^new , as above) but also the same syntax as for my scripts. I've not yet moved to zsh so this is my current preferred environment.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/680296", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/474559/" ] }
680,358
I am using Ubuntu and I want to extract the values between two patterns and the required string is not in the file. My data is as shown below: [{"rows":[{"_uuid":["uuid","11111-222-33333-4444444"]}]}] I want to get the text between , and ] , and that means I want 11111-222-33333-444444 . How can I do this using sed ? Ubuntu is the operating system I'm using. It was not stored in any file or variable. It was the output of one command. I want to pipe the output of command1 to sed and parse the above string to get only the required information. It was in JSON format. This was the only data we would be getting...
Using jq (which does not care whether the input is in compact or multi-line form):

your-command | jq -r '.[0].rows[0]._uuid[1]'

Your JSON document is an array consisting of objects and you want the first of these top-level objects, .[0] . That object contains a rows array, and you want the first of its elements, .rows[0] . That element has another array called _uuid , and you want that array's second element, ._uuid[1] . With -r you get the decoded, "raw", data back. Without -r , you get a (quoted) JSON string. A totally different way of getting the data from this particular JSON document would be to get the last value in the document:

your-command | jq -r 'getpath([paths(scalars)][-1])'

This first generates all "paths" for each scalar value in the whole document with paths , and picks out the last of them. The expression then uses this path-of-the-last-scalar with getpath to pull out the last value. For the given document, this results in the expected output. The following is probably doing the same thing, but with explicit recursion using .. and with select() to pull out all scalar values:

your-command | jq -r '[.. | select(scalars)][-1]'

Personally, I would go with the topmost suggestion in this answer, as it uses the structure of the document, which is bound to be meaningful to the user in some way. That code would have to be revisited, and the question reformulated, if any of the involved arrays starts containing more elements.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/680358", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/433403/" ] }
680,382
so my problem is as follows. I install Kali Linux from the wsl store with $wsl --install -d kali-linux .Following has been installed PRETTY_NAME="Kali GNU/Linux Rolling"NAME="Kali GNU/Linux"ID=kaliVERSION="2019.2"VERSION_ID="2019.2"ID_LIKE=debianANSI_COLOR="1;31"HOME_URL="https://www.kali.org/"SUPPORT_URL="https://forums.kali.org/"BUG_REPORT_URL="https://bugs.kali.org/" If I then do $sudo apt-get update , I get the following error Get:1 http://kali.download/kali kali-rolling InRelease [30.6 kB]Err:1 http://kali.download/kali kali-rolling InRelease The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository <[email protected]>Fetched 30.6 kB in 1s (40.9 kB/s)Reading package lists... DoneW: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://kali.download/kali kali-rolling InRelease: The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository <[email protected]>W: Failed to fetch http://http.kali.org/kali/dists/kali-rolling/InRelease The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository <[email protected]>W: Some index files failed to download. They have been ignored, or old ones used instead. Then i manually install kali-archive-keyring_2020.2_all.deb with $sudo dpkg -i kali-archive-keyring_2020.2_all.deb Then I do $sudo apt-get update and it updates the repositories as expected. Then I do $sudo apt-get upgrade And so far it works I upgraded my kali to PRETTY_NAME="Kali GNU/Linux Rolling"NAME="Kali GNU/Linux"ID=kaliVERSION="2021.4"VERSION_ID="2021.4"VERSION_CODENAME="kali-rolling"ID_LIKE=debianANSI_COLOR="1;31"HOME_URL="https://www.kali.org/"SUPPORT_URL="https://forums.kali.org/"BUG_REPORT_URL="https://bugs.kali.org/" Now when I execute the command $sudo apt-get upgrade it says that there are packages kept back. Reading package lists... DoneBuilding dependency treeReading state information... DoneCalculating upgrade... DoneThe following packages have been kept back: apt apt-utils bind9-host bsdmainutils bsdutils coreutils cron curl dnsutils dpkg e2fsprogs fdisk findutils iproute2 iptables isc-dhcp-client libbind9-161 libbsd0 libc-bin libc6 libcryptsetup12 libcurl4 libdevmapper1.02.1 libext2fs2 libgnutls30 libiptc0 libirs161 libisccc161 libisccfg163 libldap-2.4-2 liblocale-gettext-perl liblwres161 libmount1 libp11-kit0 libpam-modules libpam-modules-bin librtmp1 libselinux1 libsemanage-common libslang2 libstdc++6 libsystemd0 libtext-charwidth-perl libtext-iconv-perl libudev1 libxml2 libxtables12 login logrotate mawk mlocate mount net-tools passwd perl-base procps rsyslog sed sudo systemd tar udev util-linux vim-common vim-tiny wget whois0 upgraded, 0 newly installed, 0 to remove and 67 not upgraded. From this point it doesn't matter what I am trying to install it always results in following error and the kali is broken I'm not able to further work with the OS. /usr/bin/perl: error while loading shared libraries: libcrypt.so.1: cannot open shared object file: No such file or directorydpkg: error processing package libc6:amd64 (--configure): installed libc6:amd64 package post-installation script subprocess returned error exit status 127Errors were encountered while processing: libc6:amd64E: Sub-process /usr/bin/dpkg returned an error code (1) For example if I try $sudo apt-get update afterwards it results in sudo: account validation failure, is your account locked? 
I looked the errors up but had no luck finding a plausible solution. I would appreciate any help with this problem. Thanks in advance!
I can reproduce this, and I'm really surprised they let this slip through. The problem, as described in this answer is that the Kali files are woefully out-of-date for the manually installed version. And sadly, it looks like that location is what is being used when we do wsl --install for Kali. Updating the keyring (as you tried) used to be enough to fix it, but not any longer. Now even more signatures and/or packages are out of date. We might find a way to fix them as well, but the easiest solution for now is to install using the Microsoft Store, rather than the wsl --install command. I have Kali installed in WSL from the Microsoft Store, and it installs as 2021.3 off the bat. Note that you should do another wsl --unregister kali-linux first before installing the Microsoft Store version. Alternatives If you can't install from the Microsoft Store (and, from the comments, you can't, due to corporate policy), then there are a few alternatives. All involve obtaining a Kali tarball and then wsl --import ing it. I've personally tested each of these successfully with Kali: Option 1: Copy WSL Kali from another Computer that can access the Microsoft Store This is probably the most reliable method if it works for you. You won't be installing any Store package on your work computer, so it shouldn't violate that policy, at least. Use another, non-work PC (assuming you have access) to install WSL and Kali from the Store. Configure it with your username and password (the one you want to use on your work computer ultimately -- It doesn't matter whether it exists in Windows on that computer or not). Optionally, go ahead and sudo apt get update && sudo apt get upgrade . Exit Kali From PowerShell or CMD: wsl --export kali-linux kali_clean.tar` Transfer the resulting tarball to your work computer using a USB drive, assuming that's allowed by policy. If USB isn't allowed, then put the resulting tarball somewhere in the cloud that you have access and download it to your work computer. If you're going to be installing a VM as an alternative anyway, it seems to me that this is just as safe (and policy-compliant) as that process anyway. Ultimately, your going to download something to get Kali (or any other distribution) on your work computer. Now on your work computer ... Skip to the instructions below for "Installing and Configuring Kali from tarball" ... Option 2: Build from Kali WSL build-script Kali is one of the few distributions I've seen that makes their WSL build process very easy to find. It's listed directly on the Get Kali page. You'll need a separate WSL instance first here. Since we know that Ubuntu works from the wsl --install -d Ubuntu , just start out with that. You can delete it when you are done. In Ubuntu ... sudo apt install -y debootstrapgit clone https://gitlab.com/kalilinux/build-scripts/kali-wsl-chroot.gitcd kali-wsl-chrootsudo ./build_chroot.sh# The build should complete for x64 but fail for ARM. That's okay as long as `./x64/install.tar.gz` is created.sudo mv ./x64/install.tar.gz /mnt/c/somewhere/on/c/kali.install.tar.gz` Exit Ubuntu Uninstall the Ubuntu distribution ( wsl --unregister Ubuntu ) if you want. Skip to the instructions below for "Installing and Configuring Kali from tarball" ... Option 3: Use Kali Docker image to create a tarball Microsoft provides instructions here on how to manually import almost any distro. 
You'll need to: First install another distribution such as Ubuntu (which does work correctly via wsl --install , of course) Install Docker Desktop (if policy and license allows). Note that Docker Desktop now requires a paid license for corporate use, depending on the size of your company. As an alternative, you can install Docker Engine (which continues to be free/OSS) in the Ubuntu distribution. docker pull kalilinux/kali or docker pull kalilinux/kali-rolling (see Kali Docker images . Run the image ( docker run kalilinux/kali:latest ) Get the name of the image from docker ps -a Export the container to a tarball with docker export <name_or_id_from_above> kali.tar Continue below to the instructions for "Installing and Configuring Kali from tarball" ... Installing and Configuring Kali from the tarball Create a directory in Windows for Kali. I tend to use something like %userprofile%\WSL\instances\kali myself. cd to that directory in PowerShell. wsl --import Kali . path\to\kali.tar --version 2 (or kali.tar.gz for Option 2) This will create a Kali instance using the tarball which can then be started with wsl -d kali . You can also set it as the default with wsl --set-default kali . For the second (build script) and third (Docker image) options, you'll need to adduser your WSL user and set its password: adduser <username>usermod <username> -aG adm,cdrom,sudo,dip,plugdev For all of these, WSL will automatically start as the root user.That can be changed by creating /etc/wsl.conf per this Super User answer .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/680382", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/504155/" ] }