source_id | question | response | metadata
---|---|---|---|
688,833 | I've recently bought a Core i5 12600K and a Gigabyte Z690, and I can't get Ubuntu 21.10 suspend working properly. The system suspends, then when I wake it up the screen stays dark even though my computer "seems" to start (fans are on again, as well as RGB) and I need to fully restart the computer. I'm currently on Ubuntu 21.10 running the 5.13 kernel but I've tried 5.16 as well. I've also tried the latest Manjaro, same issue there, which is why I don't think it is related to Ubuntu itself. I've enabled S3 suspend in the BIOS. Suspend works well on Windows. I don't know what I should try after that, any ideas? Thanks! | One way to handle this is to exit when c goes back to 0: c { print; if (--c == 0) exit }; /XCHT/{c=10} or more concisely, c; c && !--c { exit }; /XCHT/{c=10} GNU grep can do something similar: grep -m1 -A10 XCHT (but this will show the first line matching “XCHT” as well). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/688833",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/512566/"
]
} |
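As a quick check of the two approaches in the response above, a small hedged sketch (the input file name is made up for illustration):

```sh
# sample input: one marker line followed by data lines
printf '%s\n' header XCHT one two three > log.txt

awk 'c { print; if (--c == 0) exit } /XCHT/ { c = 10 }' log.txt   # -> one two three
grep -m1 -A10 XCHT log.txt                                        # -> XCHT one two three
```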
688,835 | About a Linux shell script having the following: verifyIfFileExists(){ ... returns 0 # if the file exists ... returns 1 # if the file does not exist }...something(){ verifyIfFileExists resultVerification=$? if [[ $resultVerification -eq 0 ]]; then ... else ... fi ...} The code shown above works as expected. I want to know whether it is possible, and how, to do the method call and the evaluation within the if statement, to avoid the resultVerification=$? declaration - something like: something(){ verifyIfFileExists if [[ $(verifyIfFileExists) -eq 0 ]]; then ... else ... fi | if command succeeds if command exits with a 0 status; in your case, if verifyIfFileExists; then ...else ...fi [ and [[ are themselves commands which return 0 or 1 depending on the result of evaluating the expressions given as arguments. So if [[ ... is an instance of the generic if command shown above. Storing an exit status in a different variable can be useful if you want to use it later; for example ... run a commandresult=$?printf "Command foo exited with result %s.\n" "$result"if [[ "$result" -eq 0 ]]; then ...fi If you don’t need that, then commandif [[ "$?" -eq 0 ]]; then can be rewritten as if command; then which I find simpler to read. This is especially true if your functions are named accordingly, e.g. if fileExists; then See also shellcheck’s SC2181 which lists more pitfalls. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/688835",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/383045/"
]
} |
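A minimal sketch of the pattern the response recommends, using a hypothetical fileExists helper:

```sh
fileExists() { [ -e "$1" ]; }   # exit status 0 if the file exists

if fileExists /etc/passwd; then
    echo "found"
else
    echo "missing"
fi
```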
688,845 | I have a number of files in a folder on a Linux machine with the following names: 11, 12, 13, 14, 15, 21, 22, 23, 24, 25, 31, 32, 33, 34, 35 I would like to use regex in order to rename with the .inp extension I tried mv * *.inpmv: target '*.inp' is not a directory which provided an error. I also tried using the regex [123][12345] instead of the *. So, I understand that mv is used to move files around. I also got the idea that perhaps I could use ./*.inp to force mv to write in the same folder but it failed. So, apart from not understading correctly how mv works, how would I proceed to have have this done with mv ? | The issue with your command is that the mv command only can move/rename a single file (when given exactly two command line arguments), or move a bunch of files to a single destination directory (more than two command line arguments). In your case, you use the expansion of * *.inp as the arguments, and this is going to expand to all the visible filenames in the current directory, followed by the names that matches *.inp . Assuming that this expands to more than two names, then the last argument needs to be the name of a directory for the command to be a valid mv command, or you'll get a "is not a directory" error. In this case, we instead want to use mv with two arguments at a time, and for that we need to use a shell loop: for name in [123][1-5]; do mv "$name" "$name.inp"done This loops over all names that matches (a variant of) the filename globbing pattern that you mentioned (note, this is not a regular expression). In the loop body, the current name will be stored in the name variable, and the mv simply renames the file by adding .inp at the end of the name. This does not prevent mv from overwriting existing files in the case where there might be a name collision. For that, assuming you use GNU mv , you may want to use mv with its --no-clobber (or -n ) option, or possibly with its --backup (or -b ) option. Or, you could do an explicit check for the existence of the destination name and skip the current file if it exists (which would also avoid moving files into exiting directories if you happened to have a directory with the same name as the destination name): for name in [123][1-5]; do [ -e "$name.inp" ] || [ -L "$name.inp" ] && continue mv "$name" "$name.inp"done Using GNU mv with --no-target-directory (or -T ) in combination with either -n or -b would avoid overwriting existing files (or back them up, with -b ) and also avoid moving the files into a subdirectory that happened to have the same name as the destination name. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/688845",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/396747/"
]
} |
688,885 | I am trying to write a service file that unbinds a PCI device like this, after I run systemctl stop servicefile-name : ExecStop=/bin/echo 1 > "/sys/bus/pci/devices/0000\:03\:00.0/remove" However the unbinding never occurs and the device is still active and running. Executing the echo command from the bash command line has no problems and removes the device just fine: echo 1 > /sys/bus/pci/devices/0000:03:00.0/remove Running systemctl status servicefile-name shows that the echo command ran without any errors: (code=exited, status=0/SUCCESS) . I am clearly not using the correct echo command syntax within the service file. I also tried to remove the escape characters but it still did not work ExecStop=/bin/echo 1 > "/sys/bus/pci/devices/0000:03:00.0/remove" Does anyone know how to remove/unbind devices at runtime from a systemd service file, give the special characters ":"? | The issue with your command is that the mv command only can move/rename a single file (when given exactly two command line arguments), or move a bunch of files to a single destination directory (more than two command line arguments). In your case, you use the expansion of * *.inp as the arguments, and this is going to expand to all the visible filenames in the current directory, followed by the names that matches *.inp . Assuming that this expands to more than two names, then the last argument needs to be the name of a directory for the command to be a valid mv command, or you'll get a "is not a directory" error. In this case, we instead want to use mv with two arguments at a time, and for that we need to use a shell loop: for name in [123][1-5]; do mv "$name" "$name.inp"done This loops over all names that matches (a variant of) the filename globbing pattern that you mentioned (note, this is not a regular expression). In the loop body, the current name will be stored in the name variable, and the mv simply renames the file by adding .inp at the end of the name. This does not prevent mv from overwriting existing files in the case where there might be a name collision. For that, assuming you use GNU mv , you may want to use mv with its --no-clobber (or -n ) option, or possibly with its --backup (or -b ) option. Or, you could do an explicit check for the existence of the destination name and skip the current file if it exists (which would also avoid moving files into exiting directories if you happened to have a directory with the same name as the destination name): for name in [123][1-5]; do [ -e "$name.inp" ] || [ -L "$name.inp" ] && continue mv "$name" "$name.inp"done Using GNU mv with --no-target-directory (or -T ) in combination with either -n or -b would avoid overwriting existing files (or back them up, with -b ) and also avoid moving the files into a subdirectory that happened to have the same name as the destination name. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/688885",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/512170/"
]
} |
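On the systemd question itself: ExecStop= lines are not run through a shell, so the > redirection is passed to /bin/echo as a literal argument rather than opening the sysfs file, which is why the unit reports SUCCESS while nothing is unbound. A hedged sketch of the usual workaround, reusing the device path from the question, is to invoke a shell explicitly:

```sh
# sketch: let an explicit shell perform the redirection inside the unit file
ExecStop=/bin/sh -c 'echo 1 > /sys/bus/pci/devices/0000:03:00.0/remove'
```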
688,976 | I have the following script: installRequiredFiles=$1install_with_pkg(){ arg1=$1 echo $arg1 echo "[+] Running update: $arg1 update."}if [ $installRequiredFiles = "true" ]; then if [ -x "$(command -v apt)" ]; then install_with_pkg "apt" elif [ -x "$(command -v yum)" ]; then install_with_pkg "yum" elif [ -x "$(command -v apk)" ]; then install_with_pkg "apk" else echo "[!] Can't install files." fifi When I run it straight forward it works fine: root@ubuntu:/# ./myscript.sh "true"apt[+] Running update: apt update.Printing installRequiredFiles: truePrinting arg1: apt But when I am using sh -c I am getting the following error: root@ubuntu:/# sh -c ./myscript.sh "true"./c.sh: 11: [: =: unexpected operatorPrinting installRequiredFiles:Printing arg1: I want to be able to run it correctly with sh -c and I want it to support sh and bash which currently does. | That's not what the -c option is for. You normally don't give it a file, you give it shell commands. It is meant for doing things like this: $ sh -c 'echo hello'hello Now that you gave it a file, it is trying to read it and execute the commands found in it, but the argument isn't passed to the script ( myscript.sh ), the argument is only given to the sh command itself as you can see if you simply print the arguments: $ cat script.shecho "ARGS: $@"$ sh ./script.sh trueARGS: true$ sh -c ./script.sh trueARGS: All you need to do is to not use -c and it will work as expected: sh ./myscript.sh "true" Or, if you absolutely must use -c for some reason, pass the script and the argument for the script as a single, quoted argument to sh : sh -c './myscript.sh "true"' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/688976",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/316437/"
]
} |
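A short hedged demonstration of where arguments after the -c string go (they populate $0, then $1, $2, ...):

```sh
sh -c 'echo "ARGS: $@"' sh true      # prints: ARGS: true  ($0 is "sh", $1 is "true")
sh -c './myscript.sh "$1"' sh true   # passes "true" through to the script
```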
689,019 | Here is an example of how to add a string at the beginning of a line when it matches the UUID number in fstab: sed -e "/UUID=953b1921-ac73-4b7b-abaf-ff983b0fce8a/ s/^/###FAULTY_DISK###/" -i /etc/fstab and we can verify with more /etc/fstab ###FAULTY_DISK###UUID=953b1921-ac73-4b7b-abaf-ff983b0fce8a /data/sdc ext4 defaults,noatime 0 0 but on the second run we get sed -e "/UUID=953b1921-ac73-4b7b-abaf-ff983b0fce8a/ s/^/###FAULTY_DISK###/" -i /etc/fstabmore /etc/fstab###FAULTY_DISK######FAULTY_DISK###UUID=953b1921-ac73-4b7b-abaf-ff983b0fce8a /data/sdc ext4 defaults,noatime 0 0 How do I tell sed to skip adding ###FAULTY_DISK### in case it is already set in the fstab file? | Append only if the line starts with that UUID: sed -e 's/^UUID=953b1921-ac73-4b7b-abaf-ff983b0fce8a/###FAULTY_DISK###&/' /etc/fstab or, in case you wanted to ignore leading whitespace too if any: sed -e 's/^[[:blank:]]*UUID=953b1921-ac73-4b7b-abaf-ff983b0fce8a/###FAULTY_DISK###&/' /etc/fstab | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/689019",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
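A hedged sketch showing why the anchored substitution is idempotent (using a scratch copy rather than the real fstab):

```sh
cp /etc/fstab fstab.test
sed -i 's/^UUID=953b1921-ac73-4b7b-abaf-ff983b0fce8a/###FAULTY_DISK###&/' fstab.test
sed -i 's/^UUID=953b1921-ac73-4b7b-abaf-ff983b0fce8a/###FAULTY_DISK###&/' fstab.test
grep -c '###FAULTY_DISK###' fstab.test   # still 1: once tagged, the line no longer starts with UUID=
```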
689,032 | I have a text file, fred.txt: % cat -e fred.txt 00:$00:04:$01:00:23:34$01:$01:40:$01:40:32:$% I can grep for a line with 2 digits and a colon: % pcregrep -e '[\d]{2}:' fred.txt 00:00:04:01:00:23:3401:01:40:01:40:32:% but when I try to get repeating patterns of that pattern, it doesn't find them: % pcregrep -e '[[\d]{2}:]{2}' fred.txt% I'm looking to get the same output as this: % pcregrep -e '[\d]{2}:[\d]{2}:' fred.txt00:04:01:00:23:3401:40:01:40:32:% Eventually I'll be looking for more nested repeating patterns in a larger file so I don't want to define each time the pattern repeats. How do I grep for the lines that have that pattern repeating? | Using GNU grep $ grep -Eo '([0-9]{2}:){2,}' fred.txt 00:04:01:00:23:01:40:01:40:32: | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/689032",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/234187/"
]
} |
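For the asker's own tool: in PCRE, [...] is a character class and cannot group a subpattern for repetition; (...) does that. A hedged equivalent with pcregrep:

```sh
# group with (...), not [...], when repeating a subpattern
pcregrep -e '(\d{2}:){2}' fred.txt
```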
689,175 | When I run podman run I'm getting a particularly weird error, ❯ podman run -ti --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:latest✔ docker.io/rancher/rancher:latestTrying to pull docker.io/rancher/rancher:latest...Getting image source signatures[... blob copying...]Writing manifest to image destinationStoring signatures Error processing tar file(exit status 1): potentially insufficient UIDs or GIDs available in user namespace (requested 630384594:600260513 for /usr/bin/etcdctl): Check /etc/subuid and /etc/subgid: lchown /usr/bin/etcdctl: invalid argumentError: Error committing the finished image: error adding layer with blob "sha256:b4b03dbaa949daab471f94bcfd68cbe21c1147e8ec2acfe3f46f1520db48baeb": Error processing tar file(exit status 1): potentially insufficient UIDs or GIDs available in user namespace (requested 630384594:600260513 for /usr/bin/etcdctl): Check /etc/subuid and /etc/subgid: lchown /usr/bin/etcdctl: invalid argument What does "potentially insufficient UIDs or GIDs available in user namespace" mean and how can I remedy this problem? | In order to get around this I had to run --storage-opt ignore_chown_errors=true this ignores chmod errors and forces your container to only support one user. You can read about this in "Why can’t rootless Podman pull my image?" . Note that this is an option to podman , not to podman run . And as such using it looks like this, podman --storage-opt ignore_chown_errors=true run [....] In my case because I did not have the kernel overlayfs driver I needed to use the FUSE version (installed with sudo apt install fuse-overlayfs ), podman --storage-opt mount_program=/usr/bin/fuse-overlayfs --storage-opt ignore_chown_errors=true run [....] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/689175",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3285/"
]
} |
689,186 | I have a script to open a text file which has a list of 1650 site locations. I am attempting to loop through this list and print two separate lists of files: 1) those that are found and 2) those that are missing. The list of site locations is a single column. The problem I am running into is that the script is not reading the input file and looping through it. The script is just looking for the single file "instead of reading and looping thru each line of this file". #!/bin/bashfile=Sites-1.txtdoif "$file" ;then echo "$file found" >> found.tmpelse echo "$file has not been found" >> missing.tmpfidone input example from Sites-1.txt for files looking for 01-033-0043-SO2-2014.dat.out01-033-0044-SO2-2014.dat.out01-033-1002-SO2-2014.dat.out01-071-0020-SO2-2014.dat.out01-071-0026-SO2-2014.dat.out Expected output files composition found.tmp01-033-0043-SO2-2014.dat.out found01-033-0044-SO2-2014.dat.out found 01-071-0026-SO2-2014.dat.out found missing.tmp01-033-1002-SO2-2014.dat.out has not been found01-071-0020-SO2-2014.dat.out has not been found | In order to get around this I had to run --storage-opt ignore_chown_errors=true this ignores chmod errors and forces your container to only support one user. You can read about this in "Why can’t rootless Podman pull my image?" . Note that this is an option to podman , not to podman run . And as such using it looks like this, podman --storage-opt ignore_chown_errors=true run [....] In my case because I did not have the kernel overlayfs driver I needed to use the FUSE version (installed with sudo apt install fuse-overlayfs ), podman --storage-opt mount_program=/usr/bin/fuse-overlayfs --storage-opt ignore_chown_errors=true run [....] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/689186",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/396038/"
]
} |
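For the Sites-1.txt loop in this question, a minimal sketch of the missing read loop (assuming the listed *.dat.out files are looked up in the current directory):

```sh
#!/bin/bash
# hedged sketch: read each name from Sites-1.txt and test whether it exists
while IFS= read -r file; do
    if [ -e "$file" ]; then
        echo "$file found" >> found.tmp
    else
        echo "$file has not been found" >> missing.tmp
    fi
done < Sites-1.txt
```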
689,189 | I am trying to find (and store to a variable) the path of a file that is stored on another server by running the following command in one of my functions (included in .bashrc): FILE_PATH=$(ssh -T user@host 'find <directory> -name *<filename>*') However this is not returning any output. I have checked that there should be a file in the location that is grabbed, and I have also been able to replicate the command and store output to variable on the command line, but it does not work when running the function. Does anyone know what's going on and why the command isn't working? EDIT: Here is a representation of what I am seeing: function get_path { FILE=$1 FILE_PATH=$(ssh -T user@hostB 'find /home/daverbuj -name *${FILE}*') echo "Here is the file: ${FILE_PATH}"} [daverbuj@hostA]$ FILE_A=$(ssh -T user@hostB 'find /home/daverbuj -name foo.bar')[daverbuj@hostA]$ echo $FILE_A/home/daverbuj/foo.bar[daverbuj@hostA]$ get_path foo.barHere is the file: I am seeing what I expect when I run from the command line, but not when I run the function. | In order to get around this I had to run --storage-opt ignore_chown_errors=true this ignores chmod errors and forces your container to only support one user. You can read about this in "Why can’t rootless Podman pull my image?" . Note that this is an option to podman , not to podman run . And as such using it looks like this, podman --storage-opt ignore_chown_errors=true run [....] In my case because I did not have the kernel overlayfs driver I needed to use the FUSE version (installed with sudo apt install fuse-overlayfs ), podman --storage-opt mount_program=/usr/bin/fuse-overlayfs --storage-opt ignore_chown_errors=true run [....] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/689189",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/512952/"
]
} |
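For the ssh question above, note that single quotes stop the local shell from expanding ${FILE} before ssh runs, so the remote find receives the literal string. A hedged rewrite of the function from the question:

```sh
get_path() {
    FILE=$1
    # double quotes: ${FILE} expands locally; inner single quotes keep the globs for the remote shell
    FILE_PATH=$(ssh -T user@hostB "find /home/daverbuj -name '*${FILE}*'")
    echo "Here is the file: ${FILE_PATH}"
}
```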
689,197 | I was looking for a way to escape a variable containing format specifiers and special characters like quotes, backslashes and line breaks so that when passing it to print -P it'll print out literally. So essentially I want these two to print the same: > cat file.txt > my_var="$(cat file.txt)"> print -P "${<magic>my_var}" A good example file for test cases I used is this: Backslash \Double Backslash \\Single Quote 'Double Quote "-----------------------Escaped Linebreak \n-----------------------Color codes: %F{red}not red%fVariable expansion $SHELL Closest I got is ${${(q+)my_var}//\%/%%} though that has issues with quotes, linebreaks, backslashes and variable expansion: $'Backslash Double Backslash \Single Quote 'Double Quote "-----------------------Escaped Linebreak \n-----------------------Color codes: %F{red}not red%fVariable expansion /usr/bin/zsh' I am aware of printf '%s\n' "$my_var" . However in practice there's a lot of actual formatting going on in the print -P around the variable, so this isn't useful to me. Using print -P around the variable and printf for the actual variable sadly also doesn't work as there are cases where string manipulation is applied to the contents of the variable. | In order to get around this I had to run --storage-opt ignore_chown_errors=true this ignores chmod errors and forces your container to only support one user. You can read about this in "Why can’t rootless Podman pull my image?" . Note that this is an option to podman , not to podman run . And as such using it looks like this, podman --storage-opt ignore_chown_errors=true run [....] In my case because I did not have the kernel overlayfs driver I needed to use the FUSE version (installed with sudo apt install fuse-overlayfs ), podman --storage-opt mount_program=/usr/bin/fuse-overlayfs --storage-opt ignore_chown_errors=true run [....] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/689197",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45867/"
]
} |
689,219 | Is there a way to blacken parts of a pdf file (i.e. personal data that I don't want to send with the pdf)? Maybe from the command line where I can say make everything black on page 2 from pixel X455 to X470 and Y300 to Y320. | In order to get around this I had to run --storage-opt ignore_chown_errors=true this ignores chmod errors and forces your container to only support one user. You can read about this in "Why can’t rootless Podman pull my image?" . Note that this is an option to podman , not to podman run . And as such using it looks like this, podman --storage-opt ignore_chown_errors=true run [....] In my case because I did not have the kernel overlayfs driver I needed to use the FUSE version (installed with sudo apt install fuse-overlayfs ), podman --storage-opt mount_program=/usr/bin/fuse-overlayfs --storage-opt ignore_chown_errors=true run [....] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/689219",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/438538/"
]
} |
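For the PDF redaction question, one hedged approach (tool names are assumptions: poppler's pdftoppm, ImageMagick, img2pdf) is to rasterize the pages, paint the rectangle, and reassemble; rasterizing also keeps the covered text from remaining extractable:

```sh
pdftoppm -r 150 -png input.pdf page                    # page-1.png, page-2.png, ...
convert page-2.png -fill black \
        -draw "rectangle 455,300 470,320" page-2.png   # coordinates are pixels at the chosen -r
img2pdf page-*.png -o redacted.pdf
```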
689,433 | I have some data that looks like this: abc123456789def111222333ghi999888777666 i.e. the records are separated by multiple newlines but in the wrong place. What I want is to get it like this: abc123456789def111222333ghi999888777666 I have tried setting RS to \n\n\n in awk but that ends up with the records cut up wrong; the abc term ends up as the final field of the previous record rather than the first field of the current record. I'm not sure how to use sed for this either since that works on a line-by-line basis. | Try awk '!NF {next} /[^0-9]/ {printf XRS; XRS = ORS} 1' file2abc123456789def111222333ghi999888777 It deletes empty lines (I read from your spec that those are really empty, no spaces etc.), then checks if there is any non-digit, indicating record headers, for which it prints a newline except for the first one which gets an empty string. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/689433",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/178272/"
]
} |
689,463 | I have the following file from which I want to extract only Removed '2022-01-30_01-00-05' , at the end. Removing '2022-01-30_01-00-05'... 0.46% complete (00:03:45 remaining)^M 5.49% complete (00:00:17 remaining)^M 24.90% complete (00:00:06 remaining)^M 60.56% complete (00:00:01 remaining)^M 82.12% complete (00:00:00 remaining)^M 82.39% complete (00:00:01 remaining)^M 84.24% complete (00:00:01 remaining)^M 86.48% complete (00:00:01 remaining)^M 88.58% complete (00:00:01 remaining)^M 89.66% complete (00:00:01 remaining)^M101.08% complete (00:00:00 remaining)^M104.62% complete (00:00:00 remaining)^M ^MRemoved '2022-01-30_01-00-05' I've tried dos2unix but it didn't work. I've tried these variations, below, but when I less output they either don't remove the ^M characters, or the whole line is captured: tr -d $'\r' < /file | grep "Removed" > outputtr -d '^M' < /file | grep "Removed" > outputtr -d ^M < /file | grep "Removed" > outputsed 's/\r//g' < /file | grep "Removed" > output | The grep command will print the entire matching line and since lines in *nix are defined by \n and not \r , what you describe is normal behavior. In other words, your first and last commands (the tr -d '\r' and the sed 's/\r//g' ) are both working as intended, it's just that grep is doing what it's supposed to do and printing the entire line. To only print part of a line, you need GNU grep and its -o option. For example: $ grep -oP "Removed\s*'[^']+'" fileRemoved '2022-01-30_01-00-05' Alternatively, change the \r (the ^M ) to newline characters instead of deleting them: $ tr '\r' '\n' < file | grep RemovedRemoved '2022-01-30_01-00-05' Or $ sed 's/\r/\n/g' file | grep RemovedRemoved '2022-01-30_01-00-05' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/689463",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/265166/"
]
} |
689,465 | I have a file with several thousand lines describing the output from an elastic wave modecode. The size of the file is variable depending on the frequency, the thickness of themodel, and the number of modes found for the particular model. The header informationcontains the number of modes found. The modes are numbered from 0 to N and are stored nextto the keyword MODE. Following are how the file looks for the first two modes. In this example there are a total of 4 modes, 0 through 4.Following the record "I DEPTH Y1 Y2 Y3 Y4" are over a thousand records giving the amplitudes. I have shown only the first two records for the first two modes. It is very easy to pick off the individual MODE numbers using awk and the pattern /MODE /. I want to create individual files for each mode (mode_0, mode_1, ...) with the thousand or so values in each file corresponding to that mode. I can create the files with the first awk call, but am unable to get the thousand or so records of corresponding mode amplitude values into the files the first awk call creates. One unsuccessful attempt to do that is the second awk call. ########## MODE NUMBER is " 0" (RAYLEIGH WAVE) ##########I DEPTH Y1 Y2 Y3 Y41 3.000000E-01 9.999983E-01 1.166993E+06 -1.280462E-02 0.000000E+002 6.000000E-01 9.999933E-01 2.351593E+06 -2.580244E-02 0.000000E+00 This continues for a thousand or so records.-1 0.000000E+00 0.000000E+00 0.000000E+00 0.000000E+00 0.000000E+00 ########## MODE NUMBER is " 1" (RAYLEIGH WAVE) ##########I DEPTH Y1 Y2 Y3 Y4 1 3.000000E-01 9.999960E-01 1.183126E+06 -1.280343E-02 0.000000E+002 6.000000E-01 9.999840E-01 2.367720E+06 -2.562274E-02 0.000000E+00 This continues for a thousand or so records.-1 0.000000E+00 0.000000E+00 0.000000E+00 0.000000E+00 0.000000E+00 The last line for an individual model always ends with a -1 in the first field of the lastrecord. The number of records is variable, typically 1000 or more records. Then the next modestarts with exactly the same format as the previous mode, starting with 1 in the firstfield of the third record and ending with -1 in the first field of the last record of the mode. What I have been trying to do is: Create a separate file for each mode labelled mode_0, mode_1, mode_2, ..., mode_N for each of the individual modes. Write the mode amplitude values to the corresponding mode_n file. These values are the floating point numbers below the "I DEPTH ... " labels. I am quite inexperienced with awk as you can see from my latest attempt below. The examplehas a total of 5 modes, mode_0 through mode_4. The first call to awk works as expected,creating the individual mode files. The second awk call is one of my many unsuccessfulattempts to write the values to the individual mode files. I also tried the awk rangepattern / 1 /,/ -1 / which also did not work. I tried to get the secondawk call to work for just one mode, listed below, also unsuccessfully. I tried to figure out how to grab all the mode amplitude values between the record with "I" in the first field of the first line, and "-1", the first field in the last record of the mode amplitude values. Although the mode amplitude floating point numbers can be negative, the " -1 " is strictly integer and surrounded by spaces, making it a good pattern to search on for the last record of each individual mode amplitude value. gawk '/MODE / {if($6 == "0\"" ) $6 = 0 # Remove double quotes from MODE 0" which only occurs for mode 0. 
arr[i] = substr( $6,1,length($6-1)) {print $0 >> ("mode_"arr[i])}}' inputfile gawk '{ for (i = 1 ; i <= 4; i++) if ( ( arr[i] == 0 ) && ( $1 == " I " && $1 != " -1 ") ) print $0 >> ("mode_"arr[i])}' inputfile | The grep command will print the entire matching line and since lines in *nix are defined by \n and not \r , what you describe is normal behavior. In other words, your first and last commands (the tr -d '\r' and the sed 's/\r//g' ) are both working as intended, it's just that grep is doing what it's supposed to do and printing the entire line. To only print part of a line, you need GNU grep and its -o option. For example: $ grep -oP "Removed\s*'[^']+'" fileRemoved '2022-01-30_01-00-05' Alternatively, change the \r (the ^M ) to newline characters instead of deleting them: $ tr '\r' '\n' < file | grep RemovedRemoved '2022-01-30_01-00-05' Or $ sed 's/\r/\n/g' file | grep RemovedRemoved '2022-01-30_01-00-05' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/689465",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/513197/"
]
} |
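For the mode-splitting question in this row, a hedged awk sketch (it assumes the only digits on a MODE header line are the mode number, as in the sample):

```sh
awk '/MODE NUMBER/ { n = $0; gsub(/[^0-9]/, "", n); out = "mode_" n; next }
     /^I[[:blank:]]/ || $1 == -1 { next }   # skip the column-header and terminator lines
     out { print > out }' inputfile
```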
689,494 | I have a bunch of files. AcademicCapIcon.svelte ArrowSmLeftIcon.svelte CalculatorIcon.svelteAdjustmentsIcon.svelte ArrowSmRightIcon.svelte CalendarIcon.svelte...... All files has the same format. For example AcademicCapIcon.svelte has the following: <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke="currentColor" aria-hidden="true"> <path d="M12 14l9-5-9-5-9 5 9 5z"/> <path d="M12 14l6.16-3.422a12.083 12.083 0 01.665 6.479A11.952 11.952 0 0012 20.055a11.952 11.952 0 00-6.824-2.998 12.078 12.078 0 01.665-6.479L12 14z"/> <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 14l9-5-9-5-9 5 9 5zm0 0l6.16-3.422a12.083 12.083 0 01.665 6.479A11.952 11.952 0 0012 20.055a11.952 11.952 0 00-6.824-2.998 12.078 12.078 0 01.665-6.479L12 14zm-4 6v-7.5l4-2.222"/></svg> I'd like to insert the following at the beginning of each file. <script> export let className = "h-6 w-6";</script> And insert class={className} after xmlns="http://www.w3.org/2000/svg" . For example the final result of the above AcademicCapIcon.svelte file will be: <script> export let className = "h-6 w-6";</script><svg xmlns="http://www.w3.org/2000/svg" class={className} fill="none" viewBox="0 0 24 24" stroke="currentColor" aria-hidden="true"> <path d="M12 14l9-5-9-5-9 5 9 5z"/> <path d="M12 14l6.16-3.422a12.083 12.083 0 01.665 6.479A11.952 11.952 0 0012 20.055a11.952 11.952 0 00-6.824-2.998 12.078 12.078 0 01.665-6.479L12 14z"/> <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 14l9-5-9-5-9 5 9 5zm0 0l6.16-3.422a12.083 12.083 0 01.665 6.479A11.952 11.952 0 0012 20.055a11.952 11.952 0 00-6.824-2.998 12.078 12.078 0 01.665-6.479L12 14zm-4 6v-7.5l4-2.222"/></svg> How can I do this using a terminal or Bash script? | The grep command will print the entire matching line and since lines in *nix are defined by \n and not \r , what you describe is normal behavior. In other words, your first and last commands (the tr -d '\r' and the sed 's/\r//g' ) are both working as intended, it's just that grep is doing what it's supposed to do and printing the entire line. To only print part of a line, you need GNU grep and its -o option. For example: $ grep -oP "Removed\s*'[^']+'" fileRemoved '2022-01-30_01-00-05' Alternatively, change the \r (the ^M ) to newline characters instead of deleting them: $ tr '\r' '\n' < file | grep RemovedRemoved '2022-01-30_01-00-05' Or $ sed 's/\r/\n/g' file | grep RemovedRemoved '2022-01-30_01-00-05' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/689494",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120293/"
]
} |
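For the Svelte-icon question, a hedged GNU sed sketch (GNU sed is assumed, both for in-place editing and for \n inside the i command's text):

```sh
for f in ./*Icon.svelte; do
    sed -i \
        -e '1i <script>\n  export let className = "h-6 w-6";\n</script>' \
        -e 's|xmlns="http://www.w3.org/2000/svg"|& class={className}|' "$f"
done
```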
689,509 | I switch between windows and mac regularly. On windows I can clear the screen with cls, and on mac, I do the same with command+k. I have been trying to find a way to bind cls to command+k. Such that when I type cls in console, it clears the screen. I use zsh. | The grep command will print the entire matching line and since lines in *nix are defined by \n and not \r , what you describe is normal behavior. In other words, your first and last commands (the tr -d '\r' and the sed 's/\r//g' ) are both working as intended, it's just that grep is doing what it's supposed to do and printing the entire line. To only print part of a line, you need GNU grep and its -o option. For example: $ grep -oP "Removed\s*'[^']+'" fileRemoved '2022-01-30_01-00-05' Alternatively, change the \r (the ^M ) to newline characters instead of deleting them: $ tr '\r' '\n' < file | grep RemovedRemoved '2022-01-30_01-00-05' Or $ sed 's/\r/\n/g' file | grep RemovedRemoved '2022-01-30_01-00-05' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/689509",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/513228/"
]
} |
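For the zsh cls question, no key binding is needed; a hedged one-liner in ~/.zshrc gives cls the Windows behaviour:

```sh
alias cls='clear'   # then: exec zsh (or open a new terminal) to pick it up
```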
689,530 | I have been using Void Linux for more than a year now and very satisfied with the user experience. Till now I have not fell short of any utility, library or program that was not found the repo. However, today while compiling i3WM, I figured Void Linux did not have the libstartup-notification0-dev package which is a dependency for compiling i3WM. I have been an end-user of i3WM for more than 2 years, but thought of getting to the Dev side of it. Below is the error for reference. [prashant@void i3]$ meson ./build/ && cd ./build/The Meson build systemVersion: 0.60.3Source dir: /home/prashant/i3Build dir: /home/prashant/i3/buildBuild type: native buildProject name: i3Project version: 4.20.1C compiler for the host machine: cc (gcc 10.2.1 "cc (GCC) 10.2.1 20201203")C linker for the host machine: cc ld.bfd 2.35.1Host machine cpu family: x86_64Host machine cpu: x86_64Compiler for C supports arguments -Wunused-value: YESProgram meson/meson-dist-script found: YES (/home/prashant/i3/meson/meson-dist-script)Checking for function "strndup" : YESChecking for function "mkdirp" : NOConfiguring config.h.in using configurationFound git repository at /home/prashant/i3Program /usr/bin/meson found: YES (/usr/bin/meson)Library m found: YESLibrary rt found: YESLibrary iconv found: NOFound pkg-config: /usr/bin/pkg-config (1.8.0)Run-time dependency libstartup-notification-1.0 found: NO (tried pkgconfig)meson.build:305:0: ERROR: Dependency "libstartup-notification-1.0" not found, tried pkgconfig Here is the search result on the Void repo: [prashant@void i3]$ sudo xbps-query -Rs libstartup-notification0-dev Anyone has a workaround for this?Requesting Void Linux maintainers to kindly get this package/library in the repository.Thanks in advance. | The grep command will print the entire matching line and since lines in *nix are defined by \n and not \r , what you describe is normal behavior. In other words, your first and last commands (the tr -d '\r' and the sed 's/\r//g' ) are both working as intended, it's just that grep is doing what it's supposed to do and printing the entire line. To only print part of a line, you need GNU grep and its -o option. For example: $ grep -oP "Removed\s*'[^']+'" fileRemoved '2022-01-30_01-00-05' Alternatively, change the \r (the ^M ) to newline characters instead of deleting them: $ tr '\r' '\n' < file | grep RemovedRemoved '2022-01-30_01-00-05' Or $ sed 's/\r/\n/g' file | grep RemovedRemoved '2022-01-30_01-00-05' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/689530",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/357804/"
]
} |
689,668 | On Linux, here is some sample output from the ncdu NCurses Disk Usage tool: command: ncdu /boot ncdu 1.14.1 ~ Use the arrow keys to navigate, press ? for help --- /boot ----------------------------------------------------- 100.2 MiB [##########] initrd.img-5.13.0-28-generic 100.2 MiB [######### ] initrd.img-5.13.0-27-generic 11.2 MiB [# ] vmlinuz-5.11.0-46-generic 9.7 MiB [ ] vmlinuz-5.13.0-28-generic 9.7 MiB [ ] vmlinuz-5.13.0-27-generic 9.7 MiB [ ] vmlinuz-5.13.0-25-generic 8.0 MiB [ ] /grub 5.7 MiB [ ] System.map-5.13.0-28-generic 5.7 MiB [ ] System.map-5.13.0-27-generic 5.7 MiB [ ] System.map-5.13.0-25-generic 5.6 MiB [ ] System.map-5.11.0-46-generic 252.0 KiB [ ] config-5.13.0-28-generic 252.0 KiB [ ] config-5.13.0-27-generic 252.0 KiB [ ] config-5.13.0-25-generic 252.0 KiB [ ] config-5.11.0-46-generic 184.0 KiB [ ] memtest86+_multiboot.bin 184.0 KiB [ ] memtest86+.elf 180.0 KiB [ ] memtest86+.bin! 16.0 KiB [ ] /lost+found! 4.0 KiB [ ] /efi@ 0.0 B [ ] initrd.img.old@ 0.0 B [ ] initrd.img@ 0.0 B [ ] vmlinuz.old@ 0.0 B [ ] vmlinuz BUT, it's a human-interactive program and that output isn't scriptable. I'd like to store it into a variable, so, how can I get similar output with du instead? This is a follow-on question to my question here: How to make ncdu show a quick summary of disk usage and exit? The end-usage will look something like this: output_before="$(du /boot)"# do a bunch of stuff here which reduces the size of /bootoutput_after="$(du /boot)"echo "Before:"echo "$output_before"echo ""echo "After:"echo "$output_after" Here's a start, but it doesn't show the output in proper descending order of largest to smallest in size: du --all --max-depth=1 -h /boot What I'd ideally like to see: --- /boot ----------------------------------------------------- 100.2 MiB [##########] initrd.img-5.13.0-28-generic 100.2 MiB [######### ] initrd.img-5.13.0-27-generic 11.2 MiB [# ] vmlinuz-5.11.0-46-generic 9.7 MiB [ ] vmlinuz-5.13.0-28-generic 9.7 MiB [ ] vmlinuz-5.13.0-27-generic 9.7 MiB [ ] vmlinuz-5.13.0-25-generic 8.0 MiB [ ] /grub 5.7 MiB [ ] System.map-5.13.0-28-generic 5.7 MiB [ ] System.map-5.13.0-27-generic 5.7 MiB [ ] System.map-5.13.0-25-generic 5.6 MiB [ ] System.map-5.11.0-46-generic 252.0 KiB [ ] config-5.13.0-28-generic 252.0 KiB [ ] config-5.13.0-27-generic 252.0 KiB [ ] config-5.13.0-25-generic 252.0 KiB [ ] config-5.11.0-46-generic 184.0 KiB [ ] memtest86+_multiboot.bin 184.0 KiB [ ] memtest86+.elf 180.0 KiB [ ] memtest86+.bin 16.0 KiB [ ] /lost+found 4.0 KiB [ ] /efi 0.0 B [ ] initrd.img.old 0.0 B [ ] initrd.img 0.0 B [ ] vmlinuz.old 0.0 B [ ] vmlinuz But, the minimum acceptable answer will look something like this: 100.2 MiB /boot/initrd.img-5.13.0-28-generic 100.2 MiB /boot/initrd.img-5.13.0-27-generic 11.2 MiB /boot/vmlinuz-5.11.0-46-generic 9.7 MiB /boot/vmlinuz-5.13.0-28-generic 9.7 MiB /boot/vmlinuz-5.13.0-27-generic 9.7 MiB /boot/vmlinuz-5.13.0-25-generic 8.0 MiB /boot/grub 5.7 MiB /boot/System.map-5.13.0-28-generic 5.7 MiB /boot/System.map-5.13.0-27-generic 5.7 MiB /boot/System.map-5.13.0-25-generic 5.6 MiB /boot/System.map-5.11.0-46-generic 252.0 KiB /boot/config-5.13.0-28-generic 252.0 KiB /boot/config-5.13.0-27-generic 252.0 KiB /boot/config-5.13.0-25-generic 252.0 KiB /boot/config-5.11.0-46-generic 184.0 KiB /boot/memtest86+_multiboot.bin 184.0 KiB /boot/memtest86+.elf 180.0 KiB /boot/memtest86+.bin 16.0 KiB /boot/lost+found 4.0 KiB /boot/efi 0.0 B /boot/initrd.img.old 0.0 B /boot/initrd.img 0.0 B /boot/vmlinuz.old 0.0 B /boot/vmlinuz | Small python script 
that reads from ncdu -o- : read_ncdu.py : #!/usr/bin/env python3import sys, jsondef sizeof_fmt(num, suffix='B'): for unit in ['','Ki','Mi','Gi','Ti','Pi','Ei','Zi']: if abs(num) < 1024.0: return "%3.1f%s%s" % (num, unit, suffix) num /= 1024.0 return "%.1f%s%s" % (num, 'Yi', suffix)def get_recursive(item): size = 0 if isinstance(item, dict): name = item["name"] size = item["asize"] else: name = item[0]["name"] for sub in item: size += get_recursive(sub)[1] return (name, size)data = json.loads(sys.stdin.read())items=[]for i in data[3][1:]: items.append(get_recursive(i))sum_sizes = sum([item[1] for item in items])biggest = max([item[1] for item in items])print("------ {} --- {} -------".format(data[3][0]["name"], sizeof_fmt(sum_sizes)))for item in sorted(items, key=lambda x:x[1], reverse=True): size=item[1] hsize=sizeof_fmt(item[1]) name=item[0] percent=(size/sum_sizes*100) percent_str="({:.1f}%)".format(percent) print("{} {:8} [{}{}] {}".format( " " * (10 - len(str(hsize)))+ str(hsize), " " * (8 - len(percent_str)) + percent_str, ('#' * round(size/biggest*10)), ('-' * round(10-size/biggest*10)), item[0]) ) You might want to improve the script: Use dsize (disk size) instead asize (apparant size) if you like, or introduce arguments to the script to let the user decide. Make the script standalone with os.walk() instead of using ncdu -o- input. See also here for explanation of the ncdu json output format. Run: Make read_ncdu.py executable --> chmod +x read_ncdu.py , then you can run: ncdu -o- /boot | ./read_ncdu.py Output: ------ /boot --- 224.3MiB ------- 56.8MiB (25.3%) [##########] initrd.img-5.13.0-28-generic 56.7MiB (25.3%) [##########] initrd.img-5.13.0-27-generic 55.4MiB (24.7%) [##########] initrd.img-5.11.0-46-generic 11.2MiB (5.0%) [##--------] vmlinuz-5.11.0-46-generic 9.7MiB (4.3%) [##--------] vmlinuz-5.13.0-28-generic 9.7MiB (4.3%) [##--------] vmlinuz-5.13.0-27-generic 6.5MiB (2.9%) [#---------] grub 5.7MiB (2.5%) [#---------] System.map-5.13.0-28-generic 5.7MiB (2.5%) [#---------] System.map-5.13.0-27-generic 5.6MiB (2.5%) [#---------] System.map-5.11.0-46-generic 251.7KiB (0.1%) [----------] config-5.13.0-28-generic 251.6KiB (0.1%) [----------] config-5.13.0-27-generic 248.1KiB (0.1%) [----------] config-5.11.0-46-generic 180.6KiB (0.1%) [----------] memtest86+_multiboot.bin 180.1KiB (0.1%) [----------] memtest86+.elf 178.4KiB (0.1%) [----------] memtest86+.bin 16.0KiB (0.0%) [----------] lost+found 28.0B (0.0%) [----------] initrd.img 28.0B (0.0%) [----------] initrd.img.old 25.0B (0.0%) [----------] vmlinuz 25.0B (0.0%) [----------] vmlinuz.old | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/689668",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/114401/"
]
} |
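For the "minimum acceptable" plain-du form asked for in this row, a short sketch using GNU du and sort's human-numeric mode (no bar graph):

```sh
du -ah --max-depth=1 /boot | sort -rh   # human-readable sizes, largest first
```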
689,670 | Does anybody know how to solve this problem? ansible -m shell -i test.txt -a "sudo find / -regex '.*\(tar\|zip\)$' -type f 2>/dev/null | while read i; do echo $i; done" The problem is the "while" part. It doesn't show any result. When I run it directly in Bash, then all is OK. (I need this "while" part, because I do various operations with files found by the find , like unzip and grep ) Thank you | Small python script that reads from ncdu -o- : read_ncdu.py : #!/usr/bin/env python3import sys, jsondef sizeof_fmt(num, suffix='B'): for unit in ['','Ki','Mi','Gi','Ti','Pi','Ei','Zi']: if abs(num) < 1024.0: return "%3.1f%s%s" % (num, unit, suffix) num /= 1024.0 return "%.1f%s%s" % (num, 'Yi', suffix)def get_recursive(item): size = 0 if isinstance(item, dict): name = item["name"] size = item["asize"] else: name = item[0]["name"] for sub in item: size += get_recursive(sub)[1] return (name, size)data = json.loads(sys.stdin.read())items=[]for i in data[3][1:]: items.append(get_recursive(i))sum_sizes = sum([item[1] for item in items])biggest = max([item[1] for item in items])print("------ {} --- {} -------".format(data[3][0]["name"], sizeof_fmt(sum_sizes)))for item in sorted(items, key=lambda x:x[1], reverse=True): size=item[1] hsize=sizeof_fmt(item[1]) name=item[0] percent=(size/sum_sizes*100) percent_str="({:.1f}%)".format(percent) print("{} {:8} [{}{}] {}".format( " " * (10 - len(str(hsize)))+ str(hsize), " " * (8 - len(percent_str)) + percent_str, ('#' * round(size/biggest*10)), ('-' * round(10-size/biggest*10)), item[0]) ) You might want to improve the script: Use dsize (disk size) instead asize (apparant size) if you like, or introduce arguments to the script to let the user decide. Make the script standalone with os.walk() instead of using ncdu -o- input. See also here for explanation of the ncdu json output format. Run: Make read_ncdu.py executable --> chmod +x read_ncdu.py , then you can run: ncdu -o- /boot | ./read_ncdu.py Output: ------ /boot --- 224.3MiB ------- 56.8MiB (25.3%) [##########] initrd.img-5.13.0-28-generic 56.7MiB (25.3%) [##########] initrd.img-5.13.0-27-generic 55.4MiB (24.7%) [##########] initrd.img-5.11.0-46-generic 11.2MiB (5.0%) [##--------] vmlinuz-5.11.0-46-generic 9.7MiB (4.3%) [##--------] vmlinuz-5.13.0-28-generic 9.7MiB (4.3%) [##--------] vmlinuz-5.13.0-27-generic 6.5MiB (2.9%) [#---------] grub 5.7MiB (2.5%) [#---------] System.map-5.13.0-28-generic 5.7MiB (2.5%) [#---------] System.map-5.13.0-27-generic 5.6MiB (2.5%) [#---------] System.map-5.11.0-46-generic 251.7KiB (0.1%) [----------] config-5.13.0-28-generic 251.6KiB (0.1%) [----------] config-5.13.0-27-generic 248.1KiB (0.1%) [----------] config-5.11.0-46-generic 180.6KiB (0.1%) [----------] memtest86+_multiboot.bin 180.1KiB (0.1%) [----------] memtest86+.elf 178.4KiB (0.1%) [----------] memtest86+.bin 16.0KiB (0.0%) [----------] lost+found 28.0B (0.0%) [----------] initrd.img 28.0B (0.0%) [----------] initrd.img.old 25.0B (0.0%) [----------] vmlinuz 25.0B (0.0%) [----------] vmlinuz.old | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/689670",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/513392/"
]
} |
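For the ansible question, the double-quoted -a string lets the local shell expand $i to an empty string before ansible runs; a hedged fix escapes it (a host pattern, all here, is also assumed):

```sh
ansible all -m shell -i test.txt \
  -a "sudo find / -regex '.*\(tar\|zip\)$' -type f 2>/dev/null | while read i; do echo \$i; done"
```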
689,770 | i need to remove line with condition, column 2 only "eating" and combination values on column 3 and column 4 already occurred in a previous line my sample data csv like this: a,eating,apple,2b,throwing,banana,1c,eating,apple,3d,eating,apple,1e,eating,banana,2f,throwing,apple,2g,throwing,banana,2h,throwing,banana,3i,eating,apple,2j,eating,apple,3k,eating,banana,1l,throwing,banana,2m,throwing,banana,1n,throwing,apple,1o,eating,apple,3p,eating,banana,2q,throwing,apple,1r,throwing,apple,2s,eating,apple,1 the output should be like this a,eating,apple,2b,throwing,banana,1c,eating,apple,3d,eating,apple,1e,eating,banana,2f,throwing,apple,2g,throwing,banana,2h,throwing,banana,3k,eating,banana,1l,throwing,banana,2m,throwing,banana,1n,throwing,apple,1q,throwing,apple,1r,throwing,apple,2 | Assuming that the input data is "simple CSV", i.e. that there are no embedded commas or newlines in any fields, then we may use awk like so: $ awk -F, '$2 != "eating" || !seen[$3,$4]++' filea,eating,apple,2b,throwing,banana,1c,eating,apple,3d,eating,apple,1e,eating,banana,2f,throwing,apple,2g,throwing,banana,2h,throwing,banana,3k,eating,banana,1l,throwing,banana,2m,throwing,banana,1n,throwing,apple,1q,throwing,apple,1r,throwing,apple,2 This prints the current line if the 2nd comma-delimited field is not precisely the string eating or (if the 2nd field is eating ) if the combination of the 3rd and 4th fields has not been seen before. The logical expression $2 != "eating" || !seen[$3,$4]++ may be rewritten as !($2 == "eating" && seen[$3,$4]++) (which is the way the conditions were specified in the question) depending on which way is easiest to understand. The two expressions are equivalent. This is a simple variation of the common idiomatic way to remove duplicated lines while preserving the original record order using awk : awk '!seen[$0]++' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/689770",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/377330/"
]
} |
689,816 | This file just has a single column: fs://derps123/20210103/fs://derps123/20210104/fs://derps123/20210105/fs://derps123/20210106/fs://derps123/20210107/fs://derps123/20210108/fs://derps123/20210109/ I want to filter lines between 20210105 and 20210108 I tried to filter using awk with date but this throws a syntax error: awk -v date='gs://derps123/''$1!=date{next};/20210105/,/20210108/' folders.txt | You get a syntax error because you have no whitespace between value of -v option and awk program. You can treat dates like numbers like that: $ awk -v low=20210105 -v high=20210108 -v FS='/' '$4 >= low && $4 <= high' folders.txtfs://derps123/20210105/fs://derps123/20210106/fs://derps123/20210107/fs://derps123/20210108/ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/689816",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29731/"
]
} |
689,923 | I am attempting to copy a file into a directory where my user account is not the directory owner but belongs to a group that is the directory group owner. These are the steps I have taken: Create a group and add me to that group stephen@pi:~ $ sudo groupadd test-groupstephen@pi:~ $ sudo usermod -a -G test-group stephenstephen@pi:~ $ grep 'test-group' /etc/grouptest-group:x:1002:stephen Create a file and list permission stephen@pi:~ $ touch example.txtstephen@pi:~ $ ls -l example.txt-rw-r--r-- 1 stephen stephen 0 Feb 9 10:46 example.txt Create a directory, modify the group owner to the new group and alter permission to the directory to grant write permission to the group stephen@pi:~ $ sudo mkdir /var/www/testdirstephen@pi:~ $ sudo chown :test-group /var/www/testdir/stephen@pi:~ $ sudo chmod 664 /var/www/testdir/stephen@pi:~ $ sudo ls -l /var/wwwtotal 8drwxr-xr-x 2 root root 4096 Oct 31 12:17 htmldrw-rw-r-- 2 root test-group 4096 Feb 9 10:48 testdir Copy the newly created file into this directory stephen@pi:~ $ cp example.txt /var/www/testdir/straight-copy.txtcp: failed to access '/var/www/testdir/straight-copy.txt': Permission denied To me, this should have been successful; I'm a member of the group that has ownership of this directory, and the group permission is set to rw. Ultimately, I want any files that are copied into this directory to inherit the permission of the parent directory (/var/www/testdir). I can copy with sudo , but this does not inherit the owner or permission from the parent directory, nor does it retain the original ownership (probably as I'm elevated to root to copy): Copy with sudo and list ownership/permission of file stephen@pi:~ $ sudo cp example.txt /var/www/testdir/straight-copy.txtstephen@pi:~ $ sudo ls -l /var/www/testdir/total 0-rw-r--r-- 1 root root 0 Feb 9 11:06 straight-copy.txt Please is someone able to explain to me what is happening? | When you did: sudo usermod -a -G test-group stephen Only the group database (the contents of /etc/group in your case) was modified. The corresponding gid was not automagically added to the list of supplementary gids of the process running your shell (or any process that has stephen 's uid as their effective or real uids). If you run id -Gn (which starts a new process (inheriting the uids/gids) and executes id in it), or ps -o user,group,supgrp -p "$$" (if supported by your ps ) to list those for the shell process, you'll see test-group is not among the list. You'd need to log out and log in again to start new processes with the updated list of groups ( login (or other logging-in application) calls initgroups() which looks at the passwd and group database to set the list of gids of the ancestor process of your login session). If you do sudo -u stephen id -Gn , you'll find that test-group is in there as sudo does also use initgroups() or equivalent to set the list of gids for the target user. Same with sudo zsh -c 'USERNAME=stephen; id -Gn' Also, as mentioned separately , you need search ( x ) permission to a directory be able to access (including create) any of its entries. So here, without having to log out and back in, you could still do: # add search permissions for `a`ll:sudo chmod a+x /var/www/testdir# copy as the new you:sudo -u stephen cp example.txt /var/www/testdir/ You can also use newgrp test-group to start a new shell process with test-group as its real and effective gid, and it added to the list of supplementary gids. 
newgrp will allow it since you've been granted membership of that group in the group database. No need for admin privilege in this case. Or sg test-group -c 'some command' to run something other than a shell. Doing sg test-group -c 'newgrp stephen' would have the effect of adding only adding test-group to your supplementary gids while restoring your original (e)gid. It's also possible to make a copy of a file and specify owner, group and permissions all at once with the install utility: sudo install -o stephen -g test-group -m a=r,ug+w example.txt /var/www/testdir/ To copy example.txt , make it owned by stephen , with group test-group and rw-rw-r-- permissions. To copy the timestamps, ownership and permissions in addition to contents, you can also use cp -p . GNU cp also has cp -a to copy as much as possible of the metadata (short for --recursive --no-dereference --preserve=all ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/689923",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/513665/"
]
} |
689,949 | I have some data files and I wish to rename them for my pipeline. The files look like this: {unique_ids}_{experiment_condition}_L{3_digit_number}.txt I need to rename them so the experiment condition flag will appear at the end of the filename, before the extension as follows: {unique_ids}_L{3_digit_number}_{experiment_condition}.txt Length of unique_ids and experiment_condition is not fixed. Example: ghad312fd2_Mb_L002.txt becomes ghad312fd2_L002_Mb.txt . Thank You! | Using the Perl-based rename utility to rename all the files in the current directory matching the pattern ./*_*_*.txt (i.e. any file whose nome contains at least two underscores and ends with .txt ): rename -n 's/([^_]+)_([^_]+)\.txt$/$2_$1.txt/' ./*_*_*.txt This swaps the last two underscore-delimited parts of the filename, excluding the filename suffix .txt . Remove -n to run this for real after ensuring that it seems to be doing the correct thing. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/689949",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/513699/"
]
} |
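If the Perl rename utility is unavailable, a hedged pure-shell fallback (it assumes the experiment condition itself contains no underscore, though the unique id may):

```sh
for f in ./*_*_L[0-9][0-9][0-9].txt; do
    base=${f%.txt}
    lnum=${base##*_}      # L002
    rest=${base%_*}       # ids_cond
    cond=${rest##*_}      # cond
    ids=${rest%_*}        # ids (may itself contain underscores)
    mv -- "$f" "${ids}_${lnum}_${cond}.txt"
done
```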
689,957 | I currently extract text from image using: import png:- | convert png:- -units PixelsPerInch -resample 300 -sharpen 12x6.0 png:- | tesseract -l eng stdin stdout | xsel -ib However, import png:- command to take screenshot is not working well for me. It somehow do not quite suit Linux Mint. Is there any other command which I can use to directly send screenshot to STDOUT for further processing. | Using the Perl-based rename utility to rename all the files in the current directory matching the pattern ./*_*_*.txt (i.e. any file whose nome contains at least two underscores and ends with .txt ): rename -n 's/([^_]+)_([^_]+)\.txt$/$2_$1.txt/' ./*_*_*.txt This swaps the last two underscore-delimited parts of the filename, excluding the filename suffix .txt . Remove -n to run this for real after ensuring that it seems to be doing the correct thing. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/689957",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/206574/"
]
} |
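For the screenshot-to-stdout question, one hedged alternative (assuming the maim utility is installed; it writes PNG to stdout when no output file is given):

```sh
maim -s | convert png:- -units PixelsPerInch -resample 300 -sharpen 12x6.0 png:- \
    | tesseract -l eng stdin stdout | xsel -ib
```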
690,128 | In my home directory ~ , I issued ln -s Subfolder1/Subfolder2/Subfolder3 I then have a soft-linked folder Subfolder3 in my home directory. When I pushd into it, both pwd and dirs show my current workingdirectory ( cwd ) to be /home/ My.User.Name /Subfolder3 . My bash prompt also contains the cwd , which displays as ~/Subfolder3 . I recall that many years ago, after a cd or pushd into a symbolicallylinked folder, the full path ~/Subfolder1/Subfolder2/Subfolder3 wouldbe shown by pwd , dirs , and in the bash prompt. Is it a simple settingto get that behaviour back? | The documentation ( man bash , search for symbolic ) shows you can either handle this each time you use cd and pushd , or by setting a global option cd -P path/through/symlinkpushd -P pathset -P This global option switches bash to use real paths everywhere | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/690128",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163540/"
]
} |
690,194 | I am reading the AWS eic_harvest_hostkeys script and I don't understand this line: key=$(/usr/bin/awk '{$1=$1};1' < "${file}") What is the of benefit awk?Isn't key=$(/bin/cat "${file}") better? | The assignment to $1 forces awk to rewrite the input line to a canonical format. { echo 'one two three'; echo ' indented with trailing '; } | catone two three indented with trailing{ echo 'one two three'; echo ' indented with trailing '; } | awk '{$1=$1}; 1'one two threeindented with trailing Furthermore, when file=-x , i.e. it's referencing a filename that begins with a dash, using key=$(<"$file") will work correctly whereas key=$(cat "$file") will produce an error such as cat: unknown option -- x with many (all?) implementations of cat . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/690194",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/486398/"
]
} |
690,195 | I have just install brightnessctl to control my screen brightness, but can only run it as root. Doing otherwise prints the suggestion I "get write permission for device files". What is the correct way to do this? I would also like to be able to set volume with amixer without root privileges, which I assume is the same issue. | The assignment to $1 forces awk to rewrite the input line to a canonical format. { echo 'one two three'; echo ' indented with trailing '; } | catone two three indented with trailing{ echo 'one two three'; echo ' indented with trailing '; } | awk '{$1=$1}; 1'one two threeindented with trailing Furthermore, when file=-x , i.e. it's referencing a filename that begins with a dash, using key=$(<"$file") will work correctly whereas key=$(cat "$file") will produce an error such as cat: unknown option -- x with many (all?) implementations of cat . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/690195",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/179/"
]
} |
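If your brightnessctl package does not ship udev rules, a minimal rule of the same kind (a sketch based on the commonly used backlight rule; adjust the group for your system) can go into /etc/udev/rules.d/90-backlight.rules :
ACTION=="add", SUBSYSTEM=="backlight", RUN+="/bin/chgrp video /sys/class/backlight/%k/brightness", RUN+="/bin/chmod g+w /sys/class/backlight/%k/brightness"
Reload with sudo udevadm control --reload && sudo udevadm trigger , then log out and back in so the new group membership takes effect.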
690,441 | I recently acquired an old Packard Bell machine with 8 MB of RAM and an Intel 486SX. I need to put an OS on that hardware. I know that FreeDOS might run on this system, but naturally, I am not nearly as familiar with the C-prompt as with bash . So, I'm wondering if there are any Unix-like operating systems still maintained that would run on this hardware? | Provided you can cross-compile on some other system, the only solution I know is to… do it yourself from sources. You will need: The Linux kernel; the 5.4 version should fit , and provided you take extreme care to select only the drivers you need, it should happily fit into 2 MB. Busybox (many common UNIX utilities in a single small executable), which should fit into ~1 MB. Choose your init system (I'd go with openrc , but, as suggested in comments, busybox's init might well fulfill your needs) and configure it with the minimal number of services. Then consider what you can do with the remaining ~4 MB. Of course you'll have to forget everything running behind an X server. If semi-graphical is enough for you, then ncurses is IMHO the way to go; I've even heard about ncurses-based desktop environments but never tried one. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/690441",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/514257/"
]
} |
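For the kernel part of such a build, a sensible starting point looks like this (a sketch; the cross-compiler prefix is a placeholder for whatever 486-capable toolchain you actually install):
make ARCH=x86 tinyconfig      # smallest possible baseline configuration
make ARCH=x86 menuconfig      # enable only the 486 CPU, your disk and NIC drivers
make ARCH=x86 CROSS_COMPILE=i486-linux-musl- -j"$(nproc)" bzImage
tinyconfig produces a kernel well under 1 MB before drivers; everything you enable afterwards is a deliberate, size-visible choice.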
690,465 | we have Linux RHEL server - 7.6 version in server disks are : lsblk -SNAME HCTL TYPE VENDOR MODEL REV TRANsda 0:2:0:0 disk DELL PERC FD33xD 4.27sdb 1:0:0:0 disk ATA INTEL SSDSC1BG40 DL2B satasdc 2:0:0:0 disk ATA INTEL SSDSC1BG40 DL2B sata sdc and sdb are the OS disks about sda is disk that represented by RAID so sda include number of disks , but the question is how to count the number of disks in RAID we tried the following but we not sure if this cli described the number of disks in RAID? smartctl --scan/dev/sda -d scsi # /dev/sda, SCSI device/dev/sdb -d scsi # /dev/sdb, SCSI device/dev/sdc -d scsi # /dev/sdc, SCSI device/dev/bus/0 -d megaraid,0 # /dev/bus/0 [megaraid_disk_00], SCSI device/dev/bus/0 -d megaraid,1 # /dev/bus/0 [megaraid_disk_01], SCSI device/dev/bus/0 -d megaraid,2 # /dev/bus/0 [megaraid_disk_02], SCSI device/dev/bus/0 -d megaraid,3 # /dev/bus/0 [megaraid_disk_03], SCSI device/dev/bus/0 -d megaraid,4 # /dev/bus/0 [megaraid_disk_04], SCSI device/dev/bus/0 -d megaraid,5 # /dev/bus/0 [megaraid_disk_05], SCSI device/dev/bus/0 -d megaraid,6 # /dev/bus/0 [megaraid_disk_06], SCSI device/dev/bus/0 -d megaraid,7 # /dev/bus/0 [megaraid_disk_07], SCSI device/dev/bus/0 -d megaraid,8 # /dev/bus/0 [megaraid_disk_08], SCSI device/dev/bus/0 -d megaraid,9 # /dev/bus/0 [megaraid_disk_09], SCSI device/dev/bus/0 -d megaraid,10 # /dev/bus/0 [megaraid_disk_10], SCSI device/dev/bus/0 -d megaraid,11 # /dev/bus/0 [megaraid_disk_11], SCSI device/dev/bus/0 -d megaraid,12 # /dev/bus/0 [megaraid_disk_12], SCSI device/dev/bus/0 -d megaraid,13 # /dev/bus/0 [megaraid_disk_13], SCSI device/dev/bus/0 -d megaraid,14 # /dev/bus/0 [megaraid_disk_14], SCSI device/dev/bus/0 -d megaraid,15 # /dev/bus/0 [megaraid_disk_15], SCSI devicelspci -vv | grep -i raid06:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS-3 3108 [Invader] (rev 02) Kernel driver in use: megaraid_sas mdadm --detail /dev/sdamdadm: /dev/sda does not appear to be an md device cat /proc/mdstatPersonalities : [raid1]md1 : active raid1 sdb2[0] sdc2[1] 390054912 blocks super 1.2 [2/2] [UU] bitmap: 2/3 pages [8KB], 65536KB chunkmd0 : active raid1 sdb1[0] sdc1[1] 524224 blocks super 1.0 [2/2] [UU] bitmap: 0/1 pages [0KB], 65536KB chunkunused devices: <none>lsscsi[0:2:0:0] disk DELL PERC FD33xD 4.27 /dev/sda[1:0:0:0] disk ATA INTEL SSDSC1BG40 DL2B /dev/sdb[2:0:0:0] disk ATA INTEL SSDSC1BG40 DL2B /dev/sdc cat /proc/partitionsmajor minor #blocks name 8 0 13670809600 sda 8 16 390711384 sdb 8 17 524288 sdb1 8 18 390185984 sdb2 8 32 390711384 sdc 8 33 524288 sdc1 8 34 390185984 sdc2 9 0 524224 md0 9 1 390054912 md1 253 0 104857600 dm-0 253 1 16777216 dm-1 253 2 104857600 dm-2 253 3 10485760 dm-3 ll /sys/block/total 0lrwxrwxrwx 1 root root 0 Oct 17 07:27 dm-0 -> ../devices/virtual/block/dm-0lrwxrwxrwx 1 root root 0 Oct 17 07:27 dm-1 -> ../devices/virtual/block/dm-1lrwxrwxrwx 1 root root 0 Oct 17 07:27 dm-2 -> ../devices/virtual/block/dm-2lrwxrwxrwx 1 root root 0 Oct 17 07:27 dm-3 -> ../devices/virtual/block/dm-3lrwxrwxrwx 1 root root 0 Oct 17 07:27 md0 -> ../devices/virtual/block/md0lrwxrwxrwx 1 root root 0 Oct 17 07:27 md1 -> ../devices/virtual/block/md1lrwxrwxrwx 1 root root 0 Oct 17 07:27 sda -> ../devices/pci0000:00/0000:00:03.0/0000:02:00.0/0000:03:01.0/0000:04:00.0/0000:05:01.0/0000:06:00.0/host0/target0:2:0/0:2:0:0/block/sdalrwxrwxrwx 1 root root 0 Oct 17 07:27 sdb -> ../devices/pci0000:00/0000:00:11.4/ata1/host1/target1:0:0/1:0:0:0/block/sdblrwxrwxrwx 1 root root 0 Oct 17 07:27 sdc -> 
../devices/pci0000:00/0000:00:11.4/ata2/host2/target2:0:0/2:0:0:0/block/sdcll /sys/block/ |grep 'primary'no output | The mdadm command will handle Linux Software RAID only . In case of hardware RAID, such as your Dell PERC FD33xD / LSI MegaRAID SAS-3 3108, you'll need a tool that will be able to communicate with the RAID controller using vendor-specific protocols to query the information. Unfortunately, since the ownership of that RAID controller product line has passed from Symbios to LSI to Avago to (current) Broadcom, it can be quite difficult to find the management tools for some RAID controller models from the original equipment manufacturer. But Dell is actually supporting a version of the management tool, known as perccli , for their branded versions of the RAID controllers. But you apparently cannot use an identifier like "PERC FD33xD" or "LSI MegaRAID SAS-3 3108" to search for drivers on Dell's support site: you need either the name of a server model that contains the RAID controller in question, or some Dell product name or support identifier that unfortunately won't appear in lsblk / lsscsi / lspci outputs. By some quick Googling, it appears that "PowerEdge FD332" is one of the models that might contain that RAID controller. So go to Dell support page , type in "PowerEdge FD332" (or your actual Dell server model, if applicable) and select "Drivers & Downloads". You'll see a box titled with "Find a driver for your PowerEdge FD332" (or whatever model you picked) with four drop-down menus. From the "Operating System" drop-down, pick your operating system, "RedHat Enterprise Linux 7" in this case. Then from the "Category" drop-down, pick "SAS RAID". And the list of downloadable drivers updates, and somewhere near the top (currently the very first entry for me!) should be "Linux PERCCLI Utility for all Dell HBA/PERC controllers". Download and install it: it will be a .tar.gz package containing both a .rpm file for RedHat and other distributions, and a .deb file for Debian and related distributions. After that, you should have the tool available in the /opt/MegaCLI/perccli/ directory, as either perccli or perccli64 . The first command you should use with the tool should probably be: /opt/MegaCLI/perccli/perccli64 /show This will display the installed compatible RAID controllers and identify the numbers this tool will use for each. If there is just one RAID controller, it presumably is number 0. 
To get the list of actual physical disks from RAID controller #0: /opt/MegaCLI/perccli/perccli64 /c0 /eall /sall show all The list should look similar to this: ------------------------------------------------------------------------------EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp ------------------------------------------------------------------------------252:0 7 Onln 0 465.25 GB SATA HDD N N 512B WDC WD5003ABYX-01WERA1 U 252:1 6 Onln 1 465.25 GB SATA HDD N N 512B WDC WD5003ABYX-01WERA1 U 252:2 5 Onln 2 74.0 GB SATA SSD N N 512B INTEL SSDSC2BB080G4 U 252:3 4 Onln 2 74.0 GB SATA SSD N N 512B INTEL SSDSC2BB080G4 U ------------------------------------------------------------------------------EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroupDHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global HotspareUBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-InterfaceMed-Media Type|SED-Self Encryptive Drive|PI-Protection InfoSeSz-Sector Size|Sp-Spun|U-Up|D-Down|T-Transition|F-ForeignUGUnsp-Unsupported The numbers in the DID column are the numbers you can use with the smartctl command, e.g. smartctl -a -d megaraid,<DID value> /dev/sda Reference: https://www.thomas-krenn.com/en/wiki/Smartmontools_with_MegaRAID_Controller Note: Older and/or non-Dell-specific versions of these tools used to be known as MegaCLI and/or storcli , but those seem to be behind a labyrinth of stale web links and revised product naming schemes. The only link for MegaRAID SAS-3 3108 Linux tools on Broadcom's pages I managed to find currently points to a page in avago.com that no longer exists. So, I say this based on my 20 years of experience with enterprise-grade computer hardware: if you have systems with hardware RAID controllers, make sure you download any vendor-specific controller configuration tools from the vendor support site when initially setting up the server, and save them . And even if you have no problems with the controller, check for updates once in a while. If the product line is sold to a different company or the hardware vendor simply decides that their support site needs a new design, some tools may go missing for a while: in the case of RAID controller configuration tools, it is indeed very much better to have them and not need them, than vice versa. If you are planning to use old server models beyond their vendor support lifetime for any reason (even as test servers only!), make sure you download all the applicable vendor-specific tools and drivers before the end-of-support date, and archive them in a safe location . After the support ends, the downloads may vanish from the vendor's website without any warning. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/690465",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
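To pull SMART data for every drive behind the controller in one pass, loop over the DID values reported by perccli (a sketch; the DIDs here are the example values from the listing above):
for did in 4 5 6 7; do
    echo "=== megaraid,$did ==="
    smartctl -i -H -d megaraid,"$did" /dev/sda
done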
690,474 | I am running a RPi 4b with Raspbian (Bullseye), Logitech Media Server (LMS) 8.2.0 and Squeezelite 1.9.9. For automated startup of the Squeezelite process whenever a certain USB device is connected, I have defined the following udev rules: SUBSYSTEM=="usb", ACTION=="add", ENV{DEVTYPE}=="usb_device", ENV{PRODUCT}=="154e/300a/3", RUN+="/usr/bin/DAC_start.sh"SUBSYSTEM=="usb", ACTION=="remove", ENV{DEVTYPE}=="usb_device", ENV{PRODUCT}=="154e/300a/3", RUN+="/usr/bin/DAC_stop.sh" This is my DAC_start.sh script: #!/bin/sh######### DAC_start.sh #########date >> /tmp/udev.logecho "Starting Squeezelite" >> /tmp/udev.logsleep 5s/usr/bin/squeezelite -o hw:CARD=ND8006,DEV=0 -D -n MediaPlayer -d all=debug -f /tmp/sq.log | at now############################### This is my DAC_stop.sh script: #!/bin/sh######### DAC_stop.sh #########date >> /tmp/udev.logecho "Stopping Squeezelite ..." >> /tmp/udev.logpkill squeezelite############################### Both scripts work fine when I execute them manually (both as pi and root): Squeezelite successfully connects to the LMS and USB device, music can be played.The udev rules also work and get fired when I connect my USB DAC (which I can see from the log files).However, when squeezelite gets started by udev , squeezelite seems to be unable to connect to my LMS server which is on the same LAN, actually on the same machine. This is the Squeezelite logfile (I think the more important messages are at the very bottom, but i copied all messages for your convience in case I am overlooking something): /usr/bin/squeezelite -o hw:CARD=ND8006,DEV=0 -D -n MediaPlayer -d all=debug -f /tmp/sq.log[16:22:50.362611] stream_init:454 init stream[16:22:50.362971] stream_init:455 streambuf size: 2097152[16:22:50.376806] output_init_alsa:936 init output[16:22:50.377007] output_init_alsa:976 requested alsa_buffer: 40 alsa_period: 4 format: any mmap: 1[16:22:50.377081] output_init_common:360 outputbuf size: 3528000[16:22:50.377333] output_init_common:384 idle timeout: 0[16:22:50.410804] test_open:301 sample rate 1536000 not supported[16:22:50.410907] test_open:301 sample rate 1411200 not supported[16:22:50.411049] test_open:301 sample rate 32000 not supported[16:22:50.411085] test_open:301 sample rate 24000 not supported[16:22:50.411118] test_open:301 sample rate 22500 not supported[16:22:50.411151] test_open:301 sample rate 16000 not supported[16:22:50.411184] test_open:301 sample rate 12000 not supported[16:22:50.411216] test_open:301 sample rate 11025 not supported[16:22:50.411249] test_open:301 sample rate 8000 not supported[16:22:50.411330] output_init_common:426 supported rates: 768000 705600 384000 352800 192000 176400 96000 88200 48000 44100[16:22:50.500287] output_init_alsa:1002 memory locked[16:22:50.500456] output_init_alsa:1008 glibc detected using mallopt[16:22:50.501072] output_init_alsa:1026 unable to set output sched fifo: Operation not permitted[16:22:50.501080] output_thread:685 open output device: hw:CARD=ND8006,DEV=0[16:22:50.501156] decode_init:153 init decode[16:22:50.502046] alsa_open:354 opening device at: 44100[16:22:50.502132] register_dsd:908 using dsd to decode dsf,dff[16:22:50.502166] register_alac:549 using alac to decode alc[16:22:50.502198] register_faad:663 using faad to decode aac[16:22:50.502229] register_vorbis:385 using vorbis to decode ogg[16:22:50.502325] register_opus:328 using opus to decode ops[16:22:50.502361] register_flac:336 using flac to decode ogf,flc[16:22:50.502392] register_pcm:483 using pcm to decode aif,pcm[16:22:50.502433] 
register_mad:423 using mad to decode mp3[16:22:50.502463] decode_init:194 include codecs: exclude codecs:[16:22:50.503117] alsa_open:425 opened device hw:CARD=ND8006,DEV=0 using format: S32_LE sample rate: 44100 mmap: 1[16:22:50.503159] discover_server:795 sending discovery[16:22:50.503272] alsa_open:516 buffer: 40 period: 4 -> buffer size: 1764 period size: 441[16:22:50.503349] discover_server:799 error sending disovery[16:22:55.504955] discover_server:795 sending discovery[16:22:55.505246] discover_server:799 error sending disovery[16:23:00.510091] discover_server:795 sending discovery[16:23:00.510360] discover_server:799 error sending disovery[16:23:05.515053] discover_server:795 sending discovery[16:23:05.515329] discover_server:799 error sending disovery[16:23:10.519882] discover_server:795 sending discovery[16:23:10.520185] discover_server:799 error sending disovery[16:23:15.528387] discover_server:795 sending discovery[16:23:15.528659] discover_server:799 error sending disovery[16:23:20.535819] discover_server:795 sending discovery[16:23:20.536007] discover_server:799 error sending disovery[16:23:25.541079] discover_server:795 sending discovery[16:23:25.541333] discover_server:799 error sending disovery[16:23:30.549470] discover_server:795 sending discovery[16:23:30.549640] discover_server:799 error sending disovery[16:23:35.559568] discover_server:795 sending discovery[16:23:35.559857] discover_server:799 error sending disovery[16:23:40.568356] discover_server:795 sending discovery[16:23:40.568646] discover_server:799 error sending disovery[16:23:45.576730] discover_server:795 sending discovery[16:23:45.577009] discover_server:799 error sending disovery[16:23:50.586202] discover_server:795 sending discovery[16:23:50.586502] discover_server:799 error sending disovery[16:23:55.596574] discover_server:795 sending discovery[16:23:55.596872] discover_server:799 error sending disovery[16:24:00.604989] discover_server:795 sending discovery[16:24:00.605269] discover_server:799 error sending disovery[16:24:05.615978] discover_server:795 sending discovery[16:24:05.616278] discover_server:799 error sending disovery[16:24:10.625168] discover_server:795 sending discovery[16:24:10.625472] discover_server:799 error sending disovery[16:24:15.633952] discover_server:795 sending discovery[16:24:15.634246] discover_server:799 error sending disovery[16:24:20.642357] discover_server:795 sending discovery[16:24:20.642648] discover_server:799 error sending disovery[16:24:25.650821] discover_server:795 sending discovery[16:24:25.651113] discover_server:799 error sending disovery[16:24:30.662745] discover_server:795 sending discovery[16:24:30.663055] discover_server:799 error sending disovery[16:24:35.670289] discover_server:795 sending discovery[16:24:35.670566] discover_server:799 error sending disovery[16:24:40.674134] discover_server:795 sending discovery[16:24:40.674460] discover_server:799 error sending disovery[16:24:45.679650] discover_server:795 sending discovery[16:24:45.679984] discover_server:799 error sending disovery[16:24:50.689070] discover_server:795 sending discovery[16:24:50.689366] discover_server:799 error sending disovery[16:24:55.697415] discover_server:795 sending discovery[16:24:55.697709] discover_server:799 error sending disovery[16:25:00.705845] discover_server:795 sending discovery[16:25:00.706128] discover_server:799 error sending disovery[16:25:05.714279] discover_server:795 sending discovery[16:25:05.714583] discover_server:799 error sending disovery[16:25:10.723306] 
discover_server:795 sending discovery[16:25:10.723601] discover_server:799 error sending disovery[16:25:15.728709] discover_server:795 sending discovery[16:25:15.728977] discover_server:799 error sending disovery It seems as if Squeezelite, when started by udev , has no access to the LAN? I also tried starting Squeezelite with the -s 192.168.1.20 parameter (which is the IP of my LMS) -- but without success. It still cannot connect to the LMS server. Any ideas what I am doing wrong? I used the approach described above on an RPi with piCore OS (which is a Tiny Core Linux distribution), and it worked like a charm... | Programs launched from a udev RUN key are not meant to be long-running daemons. udev executes them in a short-lived, sandboxed context: processes left over after the event has been handled are killed, and on newer systemd-based systems the udev workers may be restricted to the loopback interface, so anything they spawn has no usable network; that matches the endless "error sending discovery" in your log. piCore's much simpler udev has no such sandbox, which is why the same trick worked there. The recommended pattern is to let udev merely activate a systemd service instead of starting the player itself: SUBSYSTEM=="usb", ACTION=="add", ENV{DEVTYPE}=="usb_device", ENV{PRODUCT}=="154e/300a/3", TAG+="systemd", ENV{SYSTEMD_WANTS}="squeezelite.service" The squeezelite command line then moves into squeezelite.service , which runs with full network access; stopping on unplug can be handled by binding the service to the device unit instead of a remove rule. See the sketch after this entry. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/690474",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/514279/"
]
} |
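A matching minimal unit for the SYSTEMD_WANTS pattern above (a sketch; paths and options are assumptions to adapt, and stop-on-unplug can be added with BindsTo= on the DAC's device unit, discoverable via systemctl list-units -t device ):
# /etc/systemd/system/squeezelite.service
[Unit]
Description=Squeezelite player for the USB DAC
Wants=network-online.target
After=network-online.target sound.target
[Service]
ExecStart=/usr/bin/squeezelite -o hw:CARD=ND8006,DEV=0 -n MediaPlayer -s 192.168.1.20
Restart=on-failure
Note there is no -D flag: systemd expects the service to stay in the foreground.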
690,603 | The following post's solution works as expected: How to pass an array as function argument? Therefore, from his answer:
function copyFiles() {
    arr=("$@")
    for i in "${arr[@]}"; do
        echo "$i"
    done
}
array=("one 1" "two 2" "three 3")
copyFiles "${array[@]}"
The reason for this post is: what if the following case happens?
copyFiles "${array[@]}" "Something More"
copyFiles "Something More" "${array[@]}"
Problem: I realized that when the arguments are received as parameters in the function, they are effectively merged, so $1 and $2 no longer work as expected: the "array" argument blends into the other argument. I have already done some research: How do I pass an array as an argument? Sadly typeset -n does not work, and the approach in How to pass an array to a function as an actual parameter rather than a global variable does not work as expected either; in that answer there is a comment indicating an issue (with a link to a demo/verification) about the array size ( ${#array[@]} ) being different within the function. So how do I accomplish this goal? | It is not possible to pass an array as an argument like that. Even though it looks like you do, it does not work as you expect. Your shell (e.g. here: bash ) will expand "${array[@]}" to the individual items before executing the function! So, this copyFiles "Something More" "${array[@]}" will actually call copyFiles "Something More" "one 1" "two 2" "three 3" So, inside the function it is not possible to distinguish the array from other arguments. (You could add a reference to an array , but I would argue against using it, as it doesn't seem to be very portable; also you won't want to mix scopes if not necessary.) You can use shift , e.g.
copyFiles() {
    var1=$1
    shift
    for i in "$@"; do ... ; done
}
(Note that arr=("$@") is superfluous; $@ already behaves like an array, and you don't even need to write "$@" : you could also use for i; do ...; done .) Or parse arguments with something like getopts . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/690603",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/383045/"
]
} |
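One portable workaround is to pass the array's length first and slice the positional parameters inside the function (a sketch):
copyFiles() {
    local n=$1; shift
    local arr=( "${@:1:n}" )       # the first n arguments: the array items
    local extra=( "${@:n+1}" )     # whatever followed the array
    printf 'item: %s\n' "${arr[@]}"
    printf 'extra: %s\n' "${extra[@]}"
}
array=("one 1" "two 2" "three 3")
copyFiles "${#array[@]}" "${array[@]}" "Something More"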
690,773 | I use Arch Linux (x86_64). I updated my repositories today with the following command: sudo pacman -Syu But the xampp program no longer runs: Stopping all servers...Restarting all servers...Starting MySQL Database...Starting Apache Web Server...Exit code: 8Stdout:apache config test fails, abortingStderr:/opt/lampp/bin/httpd: error while loading shared libraries: libcrypt.so.1: cannot open shared object file: No such file or directoryStarting ProFTPD...Exit code: 8Stdout:proftpd config test fails, abortingStderr:/opt/lampp/sbin/proftpd: error while loading shared libraries: libcrypt.so.1: cannot open shared object file: No such file or directory After a bit of checking I found that the libcrypt.so.1 file exists. Output of the command locate libcrypt.so.1 : [ahmadreza@ahmadreza-sys ~]$ locate libcrypt.so.1/usr/lib/libcrypt.so.1 The version of the files is as follows: [root@ahmadreza-sys lib]# file libcrypto.so.1*libcrypto.so.1.1: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=4c926b672d97886b123e03a008387aecf0786de4, stripped[root@ahmadreza-sys lib]# Output of the command sudo ldconfig -v | grep libcrypt : [ahmadreza@ahmadreza-sys ~]$ sudo ldconfig -v | grep libcryptldconfig: Path `/usr/lib64' given more than once(from <builtin>:0 and <builtin>:0)ldconfig: Can't stat /usr/libx32: No such file or directory libcrypt.so.2 -> libcrypt.so.2.0.0 libcrypto.so.1.1 -> libcrypto.so.1.1 libcryptsetup.so.12 -> libcryptsetup.so.12.7.0[ahmadreza@ahmadreza-sys ~]$ Output of the command file /opt/lampp/bin/httpd : [ahmadreza@ahmadreza-sys ~]$ file /opt/lampp/bin/httpd /opt/lampp/bin/httpd: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=00effd3a02918135bf3106612c2b59866e4f92fe, stripped[ahmadreza@ahmadreza-sys ~]$ How can I fix it? | I had the same error and fixed it by installing the "libxcrypt-compat" package. It is not in the official repositories; install it from the AUR: yay -S libxcrypt-compat | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/690773",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/508718/"
]
} |
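You can confirm which binaries still want the old soname, before and after installing the compat package (illustrative output):
ldd /opt/lampp/bin/httpd | grep libcrypt
# before: libcrypt.so.1 => not found
# after:  libcrypt.so.1 => /usr/lib/libcrypt.so.1 (0x00007f...)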
690,813 | I foolishly started a job that turned out to be so big and busy that it froze everything. I wish I could type a kill command or use xkill, but the system is unresponsive, apart from the audible swapping. On Windows, Ctrl + Alt + Del helps in these situations; does Linux have a way to knock through into an overloaded system? Just saw this one and couldn't stop myself from sharing: | Ctrl + Alt + F4 opens a console window, where you can log in and kill things as necessary or reboot the system. Use Ctrl + Alt + F2 or Ctrl + Alt + F1 to go back. In some cases you can restart the GNOME session by pressing Alt + F2 , then typing r in the window that opens. This should leave all programs running, but GNOME itself will restart, so if the issue is in GNOME it may help. If the above don't help, you can do a warm reboot by pressing the following key sequence: while keeping both the Alt and Print Screen keys pressed down, sequentially (one by one) press the keys R E I S U B . This will sync and unmount the file systems and do a safe reboot. The keys have the following meaning:
R: Switch the keyboard from raw mode to XLATE mode
E: Send the SIGTERM signal to all processes except init
I: Send the SIGKILL signal to all processes except init
S: Sync all mounted filesystems
U: Remount all mounted filesystems in read-only mode
B: Immediately reboot the system, without unmounting partitions or syncing
Source Finally, if all else fails, keep the power-on button pressed for a few seconds to force a cold reboot, or take the power cable/battery out ;-). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/690813",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112656/"
]
} |
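The REISUB sequence only works if the kernel's magic SysRq keys are enabled; you can check and (until reboot) widen the permitted set like this:
cat /proc/sys/kernel/sysrq     # a bitmask; 1 means everything is allowed
sudo sysctl kernel.sysrq=1     # enable all SysRq functions for this boot
Many distributions ship a restrictive default (e.g. 176 on Ubuntu), which still allows the sync/remount/reboot keys but not the SIGTERM/SIGKILL ones.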
690,862 | I am trying to read the source code to understand which process is responsible for this mapping, but I still can't figure it out. Can anyone give me a hint about which code is related to it? Source code: agetty+login (util-linux project), systemd | No process is responsible for this mapping. This is a device driver function, part of the kernel, with ctrl-c as the default. ctrl-c is mapped to SIGINT by the tty (or pty) device, and will be sent to the foreground processes for the controlling terminal. Systemd connects agetty to the tty device and starts it, then agetty initializes the tty device (with the system call version of stty or tcsetattr) and waits for input, and eventually exec()s login. If, for instance, tcsetattr is used, it applies the termios structure to the tty, which includes the c_cc array holding the special characters the tty maps to actions (including line editing, signals and other things), among them VINTR , which defaults to ctrl-c , and also sets mode flags that allow such characters to be interpreted by the tty. Some time later, applications (like bash or vim or emacs) would also manipulate termios and change the mode bits to possibly disable the line editing characters (and then emulate them) or even disable some or all of the interrupt characters so that they can be read literally and used as the application desires. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/690862",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/479151/"
]
} |
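You can watch, and change, that mapping from a shell, since it is terminal state rather than any process's state:
stty -a | grep -o 'intr = [^;]*'   # e.g.: intr = ^C
stty intr ^G                       # now Ctrl+G raises SIGINT instead
stty intr ^C                       # restore the default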
690,964 | I have a very long input string that's multiline, in a local variable $INPUT, for example:
a
b
c
d
e
f
How do I trim it to n lines max, let's say n=3, and add a message at the end:
a
b
c
... message too long
This is what I have; it doesn't work on multi-line input: $OUTPUT=$('$INPUT' | awk '{print substr($0, 1, 15) "...") | How about output=$(echo "$input" | awk -v n=3 'NR>n {print "... message too long"; exit} 1') or output=$(echo "$input" | sed -e '3{a\... message too long' -e 'q}') output=$(echo "$input" | sed -e '3{$!N;s/\n.*/\n... message too long/' -e 'q}') POSIXly output=$(echo "$input" | sed -e '3{
  $!N
  s/\n.*/\n... message too long/
  q
}') or with GNU sed: output=$(echo "$input" | sed -e '4{i\... message too long' -e 'Q}') | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/690964",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/514514/"
]
} |
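A quick check of the first variant (hypothetical session):
input=$'a\nb\nc\nd\ne\nf'
echo "$input" | awk -v n=3 'NR>n {print "... message too long"; exit} 1'
a
b
c
... message too long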
690,974 | I am trying to connect to the internet, but am facing issues with configuring my network to get rid of an error saying "Temporary failure in name resolution". I am using a Hyper-V VM for Kali Linux. My resolv.conf file has 8.8.8.8 . 192.168.0.208 is the IP I see when I run ipconfig in Windows 11 PowerShell, where the VM is being run. This is the output from running ipconfig. I can successfully ping 192.168.0.208 . I can't ping 8.8.8.8: $ ping 8.8.8.8PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.From 192.168.0.208 icmp_seq=1 Destination Host UnreachableFrom 192.168.0.208 icmp_seq=2 Destination Host UnreachableFrom 192.168.0.208 icmp_seq=3 Destination Host UnreachableFrom 192.168.0.208 icmp_seq=4 Destination Host UnreachableFrom 192.168.0.208 icmp_seq=5 Destination Host UnreachableFrom 192.168.0.208 icmp_seq=6 Destination Host UnreachableFrom 192.168.0.208 icmp_seq=7 Destination Host UnreachableFrom 192.168.0.208 icmp_seq=8 Destination Host UnreachableFrom 192.168.0.208 icmp_seq=9 Destination Host UnreachableFrom 192.168.0.208 icmp_seq=10 Destination Host UnreachableFrom 192.168.0.208 icmp_seq=11 Destination Host UnreachableFrom 192.168.0.208 icmp_seq=12 Destination Host UnreachableFrom 192.168.0.208 icmp_seq=13 Destination Host UnreachableFrom 192.168.0.208 icmp_seq=14 Destination Host UnreachableFrom 192.168.0.208 icmp_seq=15 Destination Host Unreachable^C--- 8.8.8.8 ping statistics ---17 packets transmitted, 0 received, +11 errors, 100% packet loss, time 16285ms From 192.168.0.208 icmp_seq=1 Destination Host UnreachableFrom 192.168.0.208 icmp_seq=1 Destination Host Unreachable | "Temporary failure in name resolution" is only the downstream symptom: the name servers are unreachable because the VM has no working path to the internet at all. A ping answered with Destination Host Unreachable from an address on your own segment means the packets are not even getting past the local network (no usable default gateway, or the virtual switch is not forwarding), so the 8.8.8.8 entry in /etc/resolv.conf never gets a chance to matter. On Hyper-V, check which virtual switch the VM's network adapter is attached to: an Internal or Private switch has no path to the outside. Attach the adapter to the "Default Switch" (NATed) or to an External switch bridged to your physical NIC, boot the VM, and let it take an address via DHCP; then confirm with ip route that a default via ... line exists and that you can ping the gateway before testing 8.8.8.8 and DNS. A checklist of commands to run inside the VM follows after this entry. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/690974",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/514720/"
]
} |
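Commands to run inside the VM to pin down which layer is broken, in order (a generic checklist, not Hyper-V-specific):
ip -br addr                    # do we even have an address?
ip route                       # is there a 'default via ...' line?
ping -c3 "$(ip route | awk '/^default/ {print $3; exit}')"   # reach the gateway
ping -c3 8.8.8.8               # routing beyond the gateway
getent hosts kali.org          # only now does DNS/resolv.conf matter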
691,199 | I have a requirement where I'm getting a .txt file as output from some tool and I want to split it into three. Example text from file:
First line
Second line
23456
45677
45678
Third line
90909
90678
Last line
Z567Z
6787T
Expected outputs: -> file1.txt
23456
45677
45678
-> file2.txt
90909
90678
-> file3.txt
Z567Z
6787T
Basically, files will have 5-digit numeric/alphanumeric values which we want to use, and the text in between works as an identifier to split the file into multiple files. I'm looking to use an awk or sed command to do this. | If this isn't all you need: $ awk '
    /^[[:alnum:]]{5}$/ {
        if ( !inBlock++ ) {
            close(out)
            out = "file" (++cnt) ".txt"
        }
        print > out
        next
    }
    { inBlock = 0 }
' file
$ head file?.txt
==> file1.txt <==
23456
45677
45678

==> file2.txt <==
90909
90678

==> file3.txt <==
Z567Z
6787T
then edit your question to provide clearer requirements and more truly representative sample input/output. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/691199",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/515025/"
]
} |
691,400 | I have a big confusion regarding system calls in an OS. According to the book "Operating System Concepts, 9th edition", it is mentioned (on page 63) that: Most programmers never see this level of detail, however. Typically, application developers design programs according to an application programming interface (API). Behind the scenes, the functions that make up an API typically invoke the actual system calls on behalf of the application programmer. This means that, as a programmer, we don't use the system calls directly. However, I see videos that teach how to use system calls directly, like this one , where it accesses the read() and write() system calls. So can system calls be used directly, through APIs, or both? | Typically, application developers design programs according to an application programming interface (API). This means that, as a programmer, we don't use the system calls directly. No, that means you typically don't use the system calls directly. You can use them, but when programming you usually use some higher level library and use functions from its API instead of system calls. For example if you are writing a Gtk application, you'll use g_file_set_contents to write to a file instead of using write directly, simply because it is easier to use. But nothing is stopping you from using the system call, even if some higher level API is available, if you think it is better suited for the task you are working on. This usually depends on what kind of application or library you are working on. If you are programming a new low level library or a system tool, you'll probably use the system calls directly. If you are working on a new GUI application, you'll probably use the API provided by Gtk or Qt. And if you are using some higher level programming language, you'll probably use its API, like the built-in open in Python for example. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/691400",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/513569/"
]
} |
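You can watch an ordinary command fall through the layers into real system calls with strace, which makes the book's distinction concrete (illustrative output):
$ strace -e trace=write /bin/echo hi
write(1, "hi\n", 3)                     = 3
+++ exited with 0 +++
The echo binary calls libc, and libc ends up in the kernel's write(2): API on top, system call underneath.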
691,476 | I have this file, and I'd like to select all the IP addresses under the masters/hosts section if their line is not commented. I tried this sed 's/.*\ \([0-9\.]\+\).*/\1/g' , but it did not work.
metal:
  children:
    masters:
      hosts:
        dev0: {ansible_host: 172.168.1.10}
        # dev1: {ansible_host: 185.168.1.11}
        # dev2: {ansible_host: 142.168.1.12}
    workers:
      hosts: {}
      # dev3: {ansible_host: 172.168.1.13} | The same way that dealing with JSON is better handled in shell with an adequate tool, jq , what if there was a similar tool for YAML? There is actually a jq wrapper called... yq that can be used like jq with YAML input. It doesn't appear to be packaged in distributions. Anyway, once this Python command is built and installed (along with the mandatory jq tool, which is widely available), one can parse the file accordingly, since the YAML parser will naturally ignore comments. yq -r '.metal.children.masters.hosts[].ansible_host' < myfile.yaml which will dump the IP addresses in masters as long as the syntax is valid. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/691476",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/405537/"
]
} |
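With the file above saved as myfile.yaml , the call resolves like this (hypothetical session):
$ yq -r '.metal.children.masters.hosts[].ansible_host' < myfile.yaml
172.168.1.10
The commented dev1 / dev2 lines never reach the filter at all, because the YAML parser drops comments; that is exactly why this beats a sed approach here.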
691,479 | How do you deal with a missing libcrypt.so.1 on Arch Linux? Trying to run openoffice4 (the LibreOffice binary) results in: /opt/openoffice4/program/javaldx: error while loading shared libraries: libcrypt.so.1: cannot open shared object file: No such file or directory/opt/openoffice4/program/soffice.bin: error while loading shared libraries: libcrypt.so.1: cannot open shared object file: No such file or directory The file libcrypt.so.1 indeed does not exist. There is, however: └[/usr/lib]> ls -al libcrypt*lrwxrwxrwx 1 root root 16 Dec 18 11:31 libcrypto.so -> libcrypto.so.1.1-rwxr-xr-x 1 root root 2999144 Dec 18 11:31 libcrypto.so.1.1lrwxrwxrwx 1 root root 23 Feb 3 12:16 libcryptsetup.so -> libcryptsetup.so.12.7.0lrwxrwxrwx 1 root root 23 Feb 3 12:16 libcryptsetup.so.12 -> libcryptsetup.so.12.7.0-rwxr-xr-x 1 root root 484192 Feb 3 12:16 libcryptsetup.so.12.7.0lrwxrwxrwx 1 root root 17 Feb 2 08:12 libcrypt.so -> libcrypt.so.2.0.0lrwxrwxrwx 1 root root 17 Feb 2 08:12 libcrypt.so.2 -> libcrypt.so.2.0.0-rwxr-xr-x 1 root root 165824 Feb 2 08:12 libcrypt.so.2.0.0 Simply creating a new symlink to libcrypt.so.1 fails as this is the incorrect version: /opt/openoffice4/program/javaldx: /usr/lib/libcrypt.so.1: version `GLIBC_2.2.5' not found (required by /opt/openoffice4/program/libuno_sal.so.3)/opt/openoffice4/program/soffice.bin: /usr/lib/libcrypt.so.1: version `GLIBC_2.2.5' not found (required by /opt/openoffice4/program/libuno_sal.so.3) I've tried to see what package provides libcrypt.so.1 using pacman -F : Which is: core/glibc 2.33-5 [installed: 2.35-2] usr/lib/libcrypt.so.1core/lib32-glibc 2.33-5 [installed: 2.35-2] usr/lib32/libcrypt.so.1community/aarch64-linux-gnu-glibc 2.34-1 usr/aarch64-linux-gnu/lib/libcrypt.so.1community/riscv64-linux-gnu-glibc 2.32-1 (risc-v) usr/riscv64-linux-gnu/lib/libcrypt.so.1 The latter two are not applicable to me as the architecture is different. Reinstalling glibc and lib32-glibc didn't resolve my issue. Moreover, checking with pacman -Ql to see if the file in question ( libcrypt.so.1 ) is actually present in glibc and lib32-glibc, doesn't show it exists. This problem appears to have occurred after an update which pulled in a newer version of glibc: [ALPM] upgraded glibc (2.33-5 -> 2.35-2) I've also tried to simply reinstall LibreOffice. Nothing changes. | I had the same issue and I fixed it by installing the libxcrypt-compat package, which is now available from the core repository. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/691479",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/515314/"
]
} |
691,553 | I have been using grep "^[^#]" file.txt | wc -l to count the number of lines in a single file. How do I use awk to count the number of lines not starting with # and print it out with file names like below? file1.txt 30 file2.txt 33 .... | This is how you can do it with awk (the same ^[^#] from grep applies in awk too, but in awk a regex match is written inside a pair of slashes /.../ ; there are other ways too): find . -type f -exec \
 awk '/^[^#]/ {count++}; END{ print FILENAME, count+0 }' {} \; or alternatively, if you have GNU awk, with the ENDFILE{} block: gawk '/^[^#]/{ count++ }; ENDFILE{ print FILENAME, count+0; count=0 }' ./multiple.files* Or recursively: find . -type f -exec \
 gawk '/^[^#]/{ count++ }; ENDFILE{ print FILENAME, count+0; count=0 }' {} + Notes:
Using /^[^#]/ will skip counting empty lines.
Using !/^#/ is equivalent to /^[^#]/ only if there are no empty lines in your input; otherwise !/^#/ counts empty lines too, which is what it really does, and is the more correct reading of "count the lines not starting with #".
The exact equivalent of !/^#/ would be /^([^#]|$)/ (so both would count the empty lines too).
Considering the above points, if you would like to count empty lines too, as in "count the lines not starting with #", do: awk '!/^#/{count++} END{ print FILENAME, count+0 }' or, equally: awk '/^([^#]|$)/{count++} END{ print FILENAME, count+0 }' or alternatively (as suggested by @EdMorton ): awk '/^#/{ count++ } END{ print FILENAME, FNR-count }' and applying the same to the GNU awk form: awk '/^#/{ count++ } ENDFILE{ print FILENAME, FNR-count; count=0 }' If you wanted to skip printing the output when a file consists entirely of commented lines: awk '/^#/{ count++ } END{ if(FNR>count) print FILENAME, FNR-count }' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/691553",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/515385/"
]
} |
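A quick demonstration with two throwaway files (hypothetical session):
printf '#hdr\nA\nB\n' > f1.txt
printf 'x\n#c\ny\nz\n' > f2.txt
for f in f?.txt; do awk '/^#/{c++} END{print FILENAME, FNR-c}' "$f"; done
f1.txt 2
f2.txt 3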
691,558 | Problem I have one Ubuntu20 Desktop (GNOME) and I want it to be fully black and minimal. As a reference, I took one Ubuntu20 Server and installed Openbox on it. I hid the title bar of all applications in Openbox by editing the configuration in /etc/xdg/openbox/rc.xml and adding <application class="*"> <decor>no</decor> <maximized>false</maximized> </application></applications> so that it hides the title bar of all applications, and I was able to achieve what I wanted. Image shown below as an example. I want to do the same on the GNOME desktop as well, but I don't know which config file to edit. Currently, my GNOME looks like this. Any help is appreciated. Thank you. NOTE: I don't want to use only the terminal; I will also be using other applications like Firefox, Chrome, VLC, etc. without a title bar | GNOME's window manager (Mutter) has no equivalent of Openbox's rc.xml ; there is no configuration file where decorations can be switched off globally. Practical options: (1) install a GNOME Shell extension that removes or merges title bars, such as Unite or No Title Bar from extensions.gnome.org (enable it with gnome-extensions enable <uuid> after installing); (2) on X11, strip the decoration hint per window with a window-matching tool such as devilspie2 and its undecorate_window() action, which can be applied to every window; (3) applications drawing client-side decorations (most GNOME apps, and Firefox/Chrome once their own "use system title bar" settings are turned off) keep their header bars regardless, so they must be configured individually. On Wayland there is no general way for an outside tool to undecorate other clients, so an extension is effectively the only route there. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/691558",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/512050/"
]
} |
691,602 | I ran grep 'd(o|i)g' a.txt and a.txt contains dig but nothing was returned. Is this the right way to match dog or dig ? | Yes, d(o|i)g is the correct way to do it. You can also do d[oi]g since you are dealing with single characters. You need to use the -E flag on your grep call to get extended regexes. $ cat a.txtbirddogcatdug$ grep 'd(o|i)g' a.txt$ grep -E 'd(o|i)g' a.txtdog | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/691602",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/515438/"
]
} |
691,795 | We want to disable swap on all our RHEL servers (Hadoop servers). We have 2 options: 1. set swappiness to 0, and swapoff -a & swapon -a 2. swapoff -a , and disable swap from fstab From my understanding both options disable swap completely; option 2 for sure, since we swapoff -a and remove the swap entries from fstab. But what about option 1: does option 1 give the same results as option 2? | What happened when you tested it? Did you read the documentation ? A value of 0 instructs the kernel not to initiate swap until the amount of free and file-backed pages is less than the high water mark in a zone. i.e. a value of zero does not disable swap, it just defers it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/691795",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
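For the record, the reliable recipe for option 2 on each host (standard commands; run it while the node is drained, since swapoff has to page everything back into RAM):
sudo swapoff -a                                    # stop using swap now
sudo sed -i.bak '/\sswap\s/ s/^#*/#/' /etc/fstab   # comment out swap lines for future boots
swapon --show                                      # should print nothing
vm.swappiness merely tunes how eagerly the kernel swaps; even at 0 it can still swap under memory pressure.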
691,805 | How can I get nested heredocs working in a shell script? It doesn't seem to recognize the EOF. #!/bin/bash
test(){
cat > /users/shared/test.sh << EOF
#!/bin/bash
check_VPN_status(){
# Returns vpn_status as connected // disconnected
 anyconnect_status=$(/opt/cisco/anyconnect/bin/vpn status | grep 'Connected')
 globalprotect_status=$(ifconfig | grep -c flags=8050)
 if [[ ! -z $anyconnect_status \
 || $globalprotect_status == 0 && -d /Applications/GlobalProtect.app ]]; then
 vpn_status=connected
 else
 vpn_status=disconnected
 fi
}
create_pf_conf(){
# Set the network interface to be used
if="en0"
# Set ports to be allowed
allowed_ports="{22}"
cat > "$pfconf" << EOF
# Default Deny Policy
block in all
# Skip the loop back interface
set skip on lo0
# Allowed inbound ports
pass in quick on $if proto tcp to any port $allowed_ports keep state
EOF
}
#----------------------------------------------------------#
# Global Variables #
#----------------------------------------------------------#
pfconf="/var/client/pf.conf"
#----------------------------------------------------------#
# Start Workflow #
#----------------------------------------------------------#
# Check if firewall is enabled and enable if needed
enable_firewall
# Get VPN connection status
check_VPN_status
if [[ $vpn_status == "connected" ]]; then
 # If connected to VPN, create pf.conf and enable pf
 create_pf_conf
 /sbin/pfctl -e -f $pfconf
else
 # If disconnected from VPN, disable pf
 /sbin/pfctl -d
fi
}
EOF
/bin/echo "hi"
test | A here-document ends at the first line consisting solely of its delimiter, and both your here-documents use EOF . So the outer cat > /users/shared/test.sh << EOF is terminated by the first EOF line, the one you meant to close the inner create_pf_conf here-document, and everything after it is parsed and executed by the outer script immediately, which is why the final EOF is never "recognized". The fix is to use distinct delimiters, and, because you want $if , $allowed_ports and the $(...) substitutions to land literally in test.sh , to quote the outer one: open with cat > /users/shared/test.sh << 'OUTERSCRIPT' , keep the inner cat > "$pfconf" << EOF ... EOF as is, and close with OUTERSCRIPT on its own line. Two further nits: test shadows the shell builtin of the same name, so give the function another name; and each closing delimiter must begin in column one (or use <<- with tab indentation throughout). A minimal working skeleton follows after this entry. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/691805",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/515639/"
]
} |
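A minimal working skeleton of the corrected nesting (a sketch; note both terminators start in column one):
make_script() {
cat > /tmp/outer.sh << 'OUTERSCRIPT'
#!/bin/bash
msg="hello from the inner heredoc"
cat << EOF
inner heredoc sees: $msg
EOF
OUTERSCRIPT
}
make_script
bash /tmp/outer.sh    # prints: inner heredoc sees: hello from the inner heredoc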
691,813 | I have the below shell script. It works fine on one host; on the second one it doesn't work. When I remove 2>&1 << ‘EOF’ it works fine on the second host. What does 2>&1 << ‘EOF’ mean? Why does it work fine on one host, but not on the second? Thanks start () {
 case ${1} in
 APP1)
 ssh -q user@host 2>&1 << ‘EOF’
 exit $?
EOF
 ;;
 APP2)
 ssh -q user@host 2>&1 << ‘EOF’
 exit $?
EOF
 ;;
} | Taken apart: 2>&1 merges ssh's stderr into its stdout, and << EOF opens a here-document that becomes ssh's stdin, i.e. the lines up to the delimiter are the commands the remote shell executes (here just exit $? ). Quoting the delimiter ( << 'EOF' ) additionally stops the local shell from expanding $? before it is sent. But look closely at the quotes in your script: they are typographic quotes ‘EOF’ , not ASCII 'EOF' . Curly quotes are ordinary characters to the shell, so the delimiter becomes the literal five-character word ‘EOF’ , and the plain EOF line never terminates the here-document; the shell swallows everything up to the end of the file (bash prints a warning like "here-document delimited by end-of-file") and the rest of the script is lost. That also explains the difference between your hosts: the two copies of the script almost certainly differ in which quote characters they actually contain, a classic side effect of copy-pasting through a word processor, chat client or web page. Retype the lines with plain single quotes, ssh -q user@host 2>&1 << 'EOF' , on both hosts. (Note the case statement as posted is also missing its closing esac .) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/691813",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/513647/"
]
} |
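A quick way to audit a script for such characters (assuming a GNU grep built with PCRE support for -P ):
grep -nP '[\x{2018}\x{2019}\x{201C}\x{201D}]' start.sh
Any line it prints contains typographic quotes that need retyping as plain ' or ".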
691,860 | Morning everybody, I would like to divide groups of matching lines of a file with a blank line. Being new to awk I tinkered around a bit and came up with this: awk '!($0 in a) {print "\n"; a[$0]}; {print}' which in my head reads as: if the current line is not in array 'a', then print a newline and add the line to 'a'. Print the current line. If I run this against a test file the output looks like


abc
abc


def
def
def


ghi
i.e. there are two blank lines printed instead of one. Where do the extra lines come from? This is the test file I used:
abc
abc
def
def
def
ghi | You don't need an associative array: awk 'prev!=""{ print prev!=$0? prev ORS : $0 }
 { prev=$0 }
END{ if(prev!="") print prev }' infile Output:
abc
abc

def
def
def

ghi
As for why your awk prints the newline twice: the print statement by default appends ORS (the O utput R ecord S eparator, which defaults to a newline) to whatever you print, so print "\n" emits two newlines; you need to use printf "\n" instead, or just print "" . And with your own solution, you could do the following (applying some fixes): awk '!($0 in a) { if(c++) print "" } { a[$0]; print}' infile or, more compactly: awk '!($0 in a) && c++{ print ""} ++a[$0]' infile | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/691860",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/487176/"
]
} |
691,898 | I know how to replace a set of characters in filenames. E.g for i in *example_text* ;do mv "$i" "`echo $i | sed 's/example_text//'`"done How do I do so if a part of the filename is ";" (between quotes). I get an error whenever I try to run: for i in *;* ;do mv "$i" "`echo $i | sed 's/;//'`"done Any tips? | Escape the semicolon, it's a special character for the shell. for i in *\;*# orfor i in *';'* If you're using bash , parameter expansion is much faster than shelling out to sed . for f in *';'* ; do mv -- "$f" "${f//;/}"done Note that it can overwrite a file if it already exists. Use mv -n or check for existence with test -e . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/691898",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/296913/"
]
} |
691,909 | I have the following folder and files: .├── photos│ ├── photo-a│ │ ├── photo-a.meta.json│ │ └── photo-a.json│ ├── photo-b│ │ ├── photo-b.meta.json│ │ └── photo-b.json...There are more folders and files in the photos folder in the same structure I would want to copy all the files photo-a.json , photo-b.json and others into another folder called photos-copy . Basically, I want to exclude all the files that end with .meta.json but copy only those that end with .json files. So, if done correctly, photos-copy folder would look like this: .├── photos-copy│ ├── photo-a.json│ └── photo-b.json... I tried something along cp -a ./photos/*.json ./photos-copy but this will end up copying everything because the .meta.json files also end with .json extensions. How can I do this? | A couple of options spring to mind. rsync --dry-run -av --prune-empty-dirs --exclude '*.meta.json' --include '*.json' photos/ photos-copy/ Or if you don't have rsync (and why not!?), this will copy the files retaining the structure cp -av photos/ photos-copy/rm -f photos-copy/*/*.meta.json This variant will flatten the files into a single directory cp -av photos/*/*.json photos-copy/rm -f photos-copy/*.meta.json You can do more fancy things with bash and its extended pattern matching, which here tells the shell to match everything that does not contain .meta in its name: shopt -s extglobcp -av photos/*/!(*.meta).json photos-copy/ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/691909",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/515752/"
]
} |
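Since the goal is a flat photos-copy directory, a find-based variant also works and never touches the .meta.json files in the first place (GNU cp assumed, for -t ):
mkdir -p photos-copy
find photos -name '*.json' ! -name '*.meta.json' -exec cp -t photos-copy {} +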
691,925 | I have a flash drive that is formatted as exFAT. I wish to change the label of the drive without formatting. I've seen ways of doing this for ext4, fat or ntfs partitions but not for exFAT. Is this possible? how? | Depending on the tools you are using (or that are available for your distributions) you'll need to use either exfatlabel exfatlabel <device> <label> from exfat-utils which is provided by the older FUSE implementation of exFAT or tune.exfat tune.exfat -L <label> <device> from exfatprogs which is the userspace counterpart of the 5.7 kernel implementation and should be available in newer distributions. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/691925",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30019/"
]
} |
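Typical invocation on a stick's first partition (the device name is an example; check yours with lsblk -f first, and unmount before relabelling):
sudo umount /dev/sdb1
sudo exfatlabel /dev/sdb1 BACKUP             # exfat-utils
# or: sudo tune.exfat -L BACKUP /dev/sdb1    # exfatprogs
lsblk -f /dev/sdb1                           # verify the LABEL column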
692,178 | I've copied a directory with cp -as /media/user/dir symlinks and now I'm terrified to rm -rf symlinks as it might delete files in /media/user/dir What is the safe way to only delete the directory structure and the symbolic links in symlinks without touching anything in /media/user/dir ? As a test, I did this: $ mkdir test$ touch test/file$ mkdir test/dir$ touch test/dir/file2$ cp -as test syms$ rm -rf syms This test didn't touch the original test directory. Is this a complete test? Is it always like this? I don't have the space to make a backup of /media/user/dir | You may remove the directory containing the symbolic links without fear that this would also remove the original files. The POSIX specification for the rm utility says (about what happens when encountering a symbolic link): The rm utility shall not traverse directories by following symbolic links into other parts of the hierarchy, but shall remove the links themselves. And then, a bit later (in the Rationale section): The rm utility removes symbolic links themselves, not the files they refer to, as a consequence of the dependence on the unlink() functionality, per the DESCRIPTION. When removing hierarchies with -r or -R , the prohibition on following symbolic links has to be made explicit. The GNU rm manual doesn't say anything about this, but we must assume that it does not break with POSIX in this regard. The manual on other systems sometimes contains this promise explicitly. Here's from OpenBSD (FreeBSD and NetBSD has identical wordings): The rm utility removes symbolic links, not the files referenced by thelinks. ... and from AIX (Solaris has a similar wording): If the file is a symbolic link, the link is removed, but the file or directory that the symbolic link refers to remains. Note that the behavior of rm with regards to symbolic links may be tested easily locally: $ touch file$ ls -ltotal 0-rw-r--r-- 1 myself wheel 0 Feb 26 09:32 file$ ln -s file link$ ls -l linklrwxr-xr-x 1 myself wheel 4 Feb 26 09:32 link -> file$ rm link$ ls -ltotal 0-rw-r--r-- 1 myself wheel 0 Feb 26 09:32 file A similar exercise could be carried out for symbolic links in a directory. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/692178",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233403/"
]
} |
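For belt-and-braces reassurance before the rm , confirm the copy contains nothing but directories and symlinks:
find symlinks ! -type d ! -type l
Empty output means every non-directory entry is a symlink, so rm -rf symlinks can only unlink link files, never the data under /media/user/dir .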
692,392 | I'm using bc in Ubuntu Linux. It has a pre-defined constant PI that's set to 99. Why does bc define PI to be 99 instead of 3.14159...? | There is no built-in constant π in bc . If invoked with the -l option, some built-in functions become available, allowing you to evaluate π trigonometrically – the man page includes this example: EXAMPLES In the shell, the following will assign the value of π to the shell variable pi . pi=$(echo "scale=10; 4*a(1)" | bc -l) What is happening when you try to evaluate PI is the result of input base conversion, as described in the texinfo documentation for GNU bc (here version 1.07.1): … bc converts constants into internal decimal numbers using the current input base, specified by the variable IBASE . noting that … For multi-digit numbers, bc changes all input digits greater or equal to IBASE to the value of IBASE −1. This makes the number ZZZ always be the largest 3-digit number of the input base. Correspondingly, in the default ibase=10 , conversion of any pair of non-decimal digits results in decimal 99. Earlier versions of GNU bc have a maximum ibase value of 16, and only make provision for characters in the set [0-9A-F]; characters outside of this range result in an error condition in that case. You can see this in the bc Command Manual, version 1.06, which also contains the above passages. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/692392",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/516303/"
]
} |
692,418 | I have a file structure which looks like this: Project | +-- video_1 | | | +-- video_1_cropped | | | +--frame_00000.jpg | +--frame_00001.jpg | +... | +--frame_00359.jpg | +-- video_2 | | | +-- video_2_cropped | | | +--frame_00000.jpg | +--frame_00004.jpg | +--frame_00005.jpg | +... | +--frame_00207.jpg | Now these frames are not all numbered consecutively as they had been previously processed and not every frame was eligible to be processed. I would like to know if it is possible to go through all these directories, check if 10 frames are consecutively numbered and copy them to another directory, making a new directory there as well. The new directory would look like: Videos | +-- video_00001 | | | +--frame_00000.jpg | +--frame_00001.jpg | +... | +--frame_00009.jpg | +-- video_00002 | | | +--frame_00013.jpg | +--frame_00014.jpg | +... | +--frame_00022.jpg ... Some extra notes: (1) I want to be able to copy multiple 10-frame sequences from the same video, if there exist multiple such sequences. (2) If there is no 10-frame sequence in a video, then it can be skipped. (3) If the sequence is longer than 10 frames, then I would want it to be split into 10-frame sequences still. So if the frames were numbered from 10-59, then I would create 5 new directories, each with 10 frames in them (frames 10-19, 20-29 etc.) (4) The source videos shouldn't be related to each other, as when the 10-frame sequences are copied to a new directory, they wouldn't be in the same subdirectory anyway. So you should be able to copy the same sequence (e.g. 20-29) multiple times from different videos. | One way to approach this: for every *_cropped directory, list the frame numbers it contains in ascending order, walk that list while tracking the current run of consecutive numbers, and every time the run reaches 10, copy those frames into a freshly numbered video_NNNNN directory and start counting again. That way a run of 50 consecutive frames yields 5 output directories, a video with no 10-frame run contributes nothing, and runs from different videos never interfere — see the sketch after this record. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/692418",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/516330/"
]
} |
692,462 | I want to remove all characters till the first letter [a-zA-Z] in a string. For example: #121Abc --> Abc %Ab#c --> Ab#c Ab#c --> Ab#c Abc --> Abc 1Abc --> Abc 1 21Abc --> Abc 1^1 Abc --> Abc Note: All non-letters include non-printing characters. Is it possible with bash tools? | with awk : awk 'sub(/^[^[:alpha:]]*/, "")' infile with sed : sed 's/^[^[:alpha:]]*//' infile Note: lines that contain no alphabetic character at all will end up as empty lines in the output; to skip printing those, as well as any empty lines in the input, use: awk 'sub(/^[^[:alpha:]]*/, "") && NF' infileawk 'sub(/^[^[:alpha:]]*/, "") && /./' infilesed 's/^[^[:alpha:]]*//;/./!d' infile or do the same with grep (thanks to @glennjackman ) grep -o '[[:alpha:]].*' infile | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/692462",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/220462/"
]
} |
692,466 | My laptop keyboard has been water damaged and the left control key occasionally activates. I am aware of xmodmap -e "keycode 37=" to disable the left control key. I seek to do something else as this causes a new problem. When I hold down any other key, e.g. 'a', a will be repeatedly typed until left ctrl activates. Although left control doesn't do anything now (thanks to xmodmap), it still interrupts key hold down. I suppose that I need to stop the keyboard even listening for the left control key. Does anyone have a fix for this? I am using linux mint. Thanks | Your own diagnosis is right: xmodmap only changes how X interprets keycode 37; the kernel still generates press/release events for the broken key, and each spurious key press is what interrupts the autorepeat of the key you are holding. To silence the key at the source, you need to remap its scancode to nothing at the kernel/udev level, so that no event is emitted at all — a sketch of that udev hwdb approach is shown after this record. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/692466",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/516367/"
]
} |
692,483 | I have a comma-separated list in this rather simple format: IPrangestart,IPrangeend,int number,date (delimited by slashes),Name (is dirty, contains spaces, dots, dashes, quotemarks etc) The first three columns can't be empty. I want to transform this to a standard firewall-blocking viable format denoting the range separated by a dash: IPrangestart-IPrangeend Sometimes the fields are empty. What's the quickest and smartest way to do this for thousands of lines? I tried RegEx like [A-Za-z] for each letter and [0-9] for each number, but that doesn't solve the issues with the random " . and similar stuff specified above... I tried this RegEx but I don't know how to make it recognize the dash in between (\b25[0-5]|\b2[0-4][0-9]|\b[01]?[0-9][0-9]?)(\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3} | If the IP-related columns are always there (based on your comment), use cut to keep the first two comma-separated columns, and replace , with a - : cut -d, -f1,2 --output-delimiter=- If, for some reason, you haven't got access to GNU cut from coreutils (which I doubt, since you tagged the question with linux) and are therefore missing the --output-delimiter option: sed 's/^\([^,]*\),\([^,]*\).*/\1-\2/' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/692483",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/367734/"
]
} |
692,494 | Once upon a time I remember doing something roughly like this: vimdiff <(scp://some_server/home/user/.zshrc) <(scp://some_server-qa/home/user/.zshrc) (where some_server and some_server-qa are defined in my ~/.ssh/config with user and key and so forth, so it's not necessary to include that in the shell). The problem is that I can't figure out what syntax is needed to make this work, and I haven't been able to find anything by Googling. I'm sure someone here knows what I'm missing. What am I missing? | vim does support opening remote files with some URLs, so you can just do: vimdiff scp://some_server{,-qa}/home/user/.zshrc Enter :h scp within vim for the documentation. If that support has not been enabled at build time, you can always do instead: vimdiff -R <(ssh some_server cat /home/user/.zshrc) \ <(ssh some_server-qa cat /home/user/.zshrc) Though you won't be able to modify the remote files. The -R is to make vim read-only as a reminder that it's pointless to edit those file (though you could always do: :w !ssh host 'cat > file' to send the edited file back (or just :w !ssh host '>file' if your login shell on host is also zsh where $NULLCMD happens to be cat by default)). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/692494",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3358/"
]
} |
692,612 | I noticed a weird antipattern in some CI scripts I've taken over, which basically boils down to this code checking whether a particular file is present in a package: dpkg --contents some.deb > contents.txtgrep --quiet foo contents.txt I tried the obvious refactor of dpkg --contents some.deb | grep --quiet foo , but I keep getting this error: dpkg-deb: error: tar subprocess was killed by signal (Broken pipe) From some more investigation, this is definitely a timing issue. If I use a regex which matches early in the input stream I get the error, but if I use a regex which specifically matches a late line it succeeds. The most obvious conclusion is that dpkg (or possibly tar ) does something wrong with SIGPIPE. Is this a known issue? Platform: # lsb_release --allNo LSB modules are available.Distributor ID: UbuntuDescription: Ubuntu 18.04.6 LTSRelease: 18.04Codename: bionic# dpkg --versionDebian 'dpkg' package management program version 1.19.0.5 (amd64).This is free software; see the GNU General Public License version 2 orlater for copying conditions. There is NO warranty.# tar --versiontar (GNU tar) 1.29Copyright (C) 2015 Free Software Foundation, Inc.License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.This is free software: you are free to change and redistribute it.There is NO WARRANTY, to the extent permitted by law.Written by John Gilmore and Jay Fenlason. | dpkg uses tar to list the package contents. When tar can’t process an archive in full, it indicates an error, and that’s what dpkg is reporting. Both commands expect that an inability to complete their task is an error, and act accordingly. You can avoid this by ensuring that grep reads all its input before exiting: | grep foo > /dev/null (instead of -q , which exits as soon as it matches). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/692612",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3645/"
]
} |
692,821 | I have a scenario where I am looping through all directories and subdirectories in a given path; if a file with specific extension (.txt) is found, store the name of the directories and subdirectories in an array. Later, I read and execute commands on those directories. Here is what I am performing: !/bin/bashx=( $(find . -name "*.txt") ); echo "${x[@]}"for item in "${x[@]}"; { echo "$item"; } My current output is: ./dir1/file1.txt./dir1/file2.txt./dir2/subdir1/subdir2/file3.txt but what I want to achieve is in the array x there should not be any duplicates even if the dir contains more than one .txt file. Additionally, I don't want to store the file name in as a path; the array should contain only the directory name. The expected output: ./dir1./dir2/subdir1/subdir2/ | Using bash : shopt -s globstarshopt -s dotglob nullglobdirs=( ./**/*.txt ) # glob the namesdirs=( "${dirs[@]%/*}" ) # remove the filenames at the end This gives you an array of directory paths with possible duplicates. To delete the duplicates, use an associative array: declare -A seenfor dirpath in "${dirs[@]}"; do seen["$dirpath"]=''donedirs=( "${!seen[@]}" ) # extract the keys from the "seen" hash Then, to print them, printf '%s\n' "${dirs[@]}" In the zsh shell, you would do it similarly, but use a unique array and the shell's fancy globbing qualifiers to strip off the filename at the end of the paths: typeset -U dirsdirs=( ./**/*.txt(DN:h) ) The D and the N in the globbing qualifier after the pattern acts as dotglob and nullglob in bash , i.e., they enable matching of hidden names and remove the pattern if there are no matches at all. The final :h gives you the "head" of the generated pathnames, i.e., the directory path without the filename at the end. The zsh shell does not have to enable the use of ** explicitly, as you have to do in bash with setting the globstar shell option. Then, to print them, print -r -C1 -- $dirs Also related: Why is looping over find's output bad practice? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/692821",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/510509/"
]
} |
692,900 | I'm fairly new to Linux and I'll make it tl;dr.I installed Ubuntu Server 20.04.4 LTS on my Computer and used it as a server.I only Installed Java, Filezilla, Forge(Minecraft Server) and a Discord bot.Today I unintentionally blocked port 22 for SSH connections so I plugged it on a monitor and did a restart to open the port for SSH.I was welcomed with a GUI which totally confused me.How did the Server version install a GUI without me? I don't want it to use more resources than necessary. Output of sudo aptitude why gnome : i ubuntu-standard Recommends plymouthi A plymouth Suggests desktop-basep desktop-base Suggests gnome | kde-standard | xfce4 | wmaker and sudo aptitude why ubuntu-standard : Manually installed, current version 1.450.2, priority standard No dependencies require to install ubuntu-standard I'm 100% positive I never typed ubuntu-standard into console.How can I revert without formatting again? Just like @terdon suggested I deleted ubuntu-standard via sudo apt remove ubuntu-standard filezilla plymouth desktop-basesudo apt autoremove Now "why gnome" responds with: i grub-efi-amd64-signed Depends grub-efi-amd64 | grub-pcp grub-pc Depends grub-pc-bin (= 2.04-1ubuntu26.13)p grub-pc-bin Suggests desktop-base (>= 4.0.6)p desktop-base Suggests gnome | kde-standard | xfce4 | wmaker What's also interesting is that "echo $XDG_CURRENT_DESKTOP responds with GNOME and sudo apt remove gnome responds with Package 'gnome' is not installed, so not removed . I'm hella confused... | You installed filezilla which is a graphical FTP client. That would have brought in the other GUI packages. The other option is that you yourself installed ubuntu-standard , which is what seems to have happened here based on the output of aptitude . This would also install the full GUI environment. You can now run apt remove ubuntu-standard but since this is a meta package, that won't remove everything it installed. These two commands should remove most of the unnecessary packages: sudo apt remove ubuntu-standard filezilla plymouth desktop-basesudo apt autoremove | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/692900",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/516797/"
]
} |
693,003 | Say I have two possible paths I want to list directories and files under on a Linux machine: /some/path1//some/path2/ If I do the following in tcsh , I get 0 exit code, if at least one of path1 or path2 exists: ls -d /some/{path1,path2}/* But if I do the exact same thing in bash , I get 2 exit code, with a stderr message reporting path1 does not exist (if path1 is the one that doesn't exist). How can I make bash behave like tcsh in this case? Is there a switch to ls that I can ask it to give back 0 if at least one path exists? If neither one exists, I do expect non-zero code, which is what tcsh gives back. | Most of your questions are already answered at Why is nullglob not default? . One thing to bear in mind is that: ls -d /some/{path1,path2}/* In csh / tcsh / zsh / bash / ksh (but not fish , see below) is the same as: ls -d /some/path1/* /some/path2/* As the brace expansion is performed before (not as part of ) the glob expansion, it's the shell that expands those /some/pathx/* patterns into the list of matching files to pass as separate arguments to ls . bash , like ksh which it mostly copied has inherited a misfeature introduced by the Bourne shell in that, when a glob pattern doesn't match any file, it is passed as-is, literally as an argument to the command. So, in those shells, if /some/path1/* matches at least one file and /some/path2/* matches none, ls will be called with -d , /some/path1/file1 , /some/path1/file2 and a literal /some/path2/* as arguments. As the /some/path2/* file doesn't exist, ls will report an error. csh behaves like in the early Unix versions, where globs were not performed by the shell but by a /etc/glob helper utility (which gave their name to globs). That helper would perform the glob expansions before invoking the command and report a No match error without running the command if all the glob patterns failed to match any file. Otherwise, as long as there was at least one glob with matches, all the non-matching ones would simply be removed. So in our example above, with csh / tcsh or the ancient Thompson shell and its /etc/glob helper, ls would be called with -d , /some/path1/file1 and /some/path1/file2 only, and would likely succeed (as long as /some/path1 is searchable). zsh is both a Korn-like and csh-like shell. It does not have that misfeature of the Bourne shell whereby unmatched globs are passed as is¹, but by default is stricter than csh in that, all failing globs are considered as a fatal error. So in zsh , by default, if either /some/path1/* or /some/path2/* (or both) fails to match, the command is aborted. A similar behaviour can be enabled in the bash shell with the failglob option². That makes for more predictable / consistent behaviours but means that you can run into that problem when you want to pass more than one glob expansion to a command and would not like it to fail as long as one of the globs succeeds. You can however set the cshnullglob option to get a csh-like behaviour (or emulate csh ). 
That can be done locally by using an anonymous function: () { set -o localoptions -o cshnullglob; ls -d /some/{path1,path2}/*; } Or just using a subshell: (set -o cshnullglob; ls -d /some/{path1,path2}/*) However here, instead of using two globs, you could use one that matches all of them using the alternation glob operator: ls -d /some/(path1|path2)/* Here, you could even do: ls -d /some/path[12]/* In bash , you can enable the extglob option for bash to support a subset of ksh's extended glob operators, including alternation: (shopt -s extglob; ls -d /some/@(path1|path2)/*) Now, because of that misfeature inherited from the Bourne shell, if that glob doesn't match any file, /some/@(path1|path2)/* would be passed as-is to ls and ls could end up listing a file called literally /some/@(path1|path2)/* , so you'd also want to enable the failglob option to guard against that: (shopt -s extglob failglob; ls -d /some/@(path1|path2)/*) Alternatively, you can use the nullglob option (which bash copied from zsh ) for all non-matching globs to expand to nothing. But: (shopt -s nullglob; ls -d /some/path1/* /some/path2/*) Would be wrong in the special case of the ls command, which, if not passed any arguments, lists . (the current directory).
If instead you want to check if there's at least one non-hidden file in any of those two directories, you can do: # bashif (shopt -s nullglob; set -- /some/path1/* /some/path2/*; (($#))); then echo yeselse echo nofi Or using a function: has_non_hidden_files() ( shopt -s nullglob set -- "$1"/* (($#)))if has_non_hidden_files /some/path1 || has_non_hidden_files /some/path2then echo yeselse echo nofi # zshif ()(($#)) /some/path1/*(N) /some/path2/*(N); then echo yeselse echo nofi Or with a function: has_non_hidden_files() ()(($#)) $1/*(NY1)if has_non_hidden_files /some/path1 || has_non_hidden_files /some/path2then echo yeselse echo nofi ( Y1 as an optimisation to stop after finding the first file) Beware those has_non_hidden_files would (silently) return false for directories that are not readable by the user (whether they have files or not). In zsh , you could detect this kind of situation with its $ERRNO special variable. ¹ The Bourne behaviour (which was specified by POSIX) can be enabled though in zsh by doing emulate sh or emulate ksh or with set +o nomatch ² beware there are significant differences in behaviour as to what exactly is cancelled when the glob doesn't match, the fish behaviour being generally the more sensible, and the bash -O failglob probably the worst | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/693003",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227020/"
]
} |
693,025 | I'm currently trying to install Splint in an openSUSE Leap 15.2 distro locally (without sudo privileges). I tried to follow the instructions here : In my home directory: git clone https://github.com/splintchecker/splint I entered the splint directory after that. The next instruction was to run configure . But there was no such file present. Following the suggestion here , I ran: autoreconf -i And then: ./configuremake At this point, the build appeared to be a success. So I tried running splint and got a command-not-found message. The answer here seemed to suggest running make install , so I tried that next, but to no avail. Are there some other steps I should take? Did I mess up somewhere? Edit: Here is the output from make . The output was too long so I have truncated it. (CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/sh /home/styx/splint/config-aŭ/missing autoheader)rm -f stamp-h1touch config.h.incd . && /bin/sh ./config.status config.hconfig.status: creating config.hconfig.status: config.h is unchangedmake all-recursivemake[1]: Entering directory '/home/styx/splint'Making all in srcmake[2]: Entering directory '/home/styx/splint/src'bison -v -t -d --debug --no-lines -p lsl signature.ysignature.tab.h generatedcat bison.head signature.tab.h bison.reset >signature_gen.hbison -v -t -d --debug --no-lines cgrammar.ycgrammar.y: warning: 159 shift/reduce conflicts [-Wconflicts-sr]cgrammar.y: warning: 123 reduce/reduce conflicts [-Wconflicts-rr]* Note: Expect 159 shift/reduce conflicts and 123 reduce/reduce conflicts. (see cgrammar.y for explanation)cgrammar.tab.h generatedcat bison.head cgrammar.tab.h bison.reset | /usr/bin/sed 's/YYSTYPE/cgrammar_YYSTYPE/g' | /usr/bin/sed 's/lsllex/cgrammar_lsllex/g' >cgrammar_tokens.hbison -v -t -d --debug --no-lines -p yl llgrammar.yllgrammar.y: warning: 2 shift/reduce conflicts [-Wconflicts-sr]* Note: Expect 2 shift/reduce conflictsllgrammar.tab.h generatedcat bison.head llgrammar.tab.h bison.reset >llgrammar_gen.hbison -v -t -d --debug --no-lines -p mt mtgrammar.ymtgrammar.y: warning: 11 shift/reduce conflicts [-Wconflicts-sr]* Note: Expect 11 shift/reduce conflicts.mtgrammar.tab.h generatedcat bison.head mtgrammar.tab.h bison.reset >mtgrammar_tokens.hflex -L -o cscanner.lex.c cscanner.lcat flex.head cscanner.lex.c flex.reset | /usr/bin/sed 's/YYSTYPE/cgrammar_YYSTYPE/g' | /usr/bin/sed 's/lsllex/cgrammar_lsllex/g' >cscanner.ccat bison.head cgrammar.tab.c bison.reset | /usr/bin/sed 's/YYSTYPE/cgrammar_YYSTYPE/g' | /usr/bin/sed 's/lsllex/cgrammar_lsllex/g' >cgrammar.ccat bison.head mtgrammar.tab.c bison.reset >mtgrammar.ccat bison.head llgrammar.tab.c bison.reset >llgrammar.ccat bison.head signature.tab.c bison.reset >signature.c/usr/bin/grep "FLG_" flags.def >flag_codes.genmake all-ammake[3]: Entering directory '/home/styx/splint/src'gcc -DHAVE_CONFIG_H -I. -I.. -I./Headers -I. -g -O2 -MT cscanner.o -MD -MP -MF .deps/cscanner.Tpo -c -o cscanner.o cscanner.cmv -f .deps/cscanner.Tpo .deps/cscanner.Po.........gcc -DHAVE_CONFIG_H -I. -I.. -I./Headers -I. -g -O2 -MT lsymbol.o -MD -MP -MF .deps/lsymbol.Tpo -c -o lsymbol.o lsymbol.cmv -f .deps/lsymbol.Tpo .deps/lsymbol.Pogcc -DHAVE_CONFIG_H -I. -I.. -I./Headers -I. 
-g -O2 -MT mapping.o -MD -MP -MF .deps/mapping.Tpo -c -o mapping.o mapping.cmv -f .deps/mapping.Tpo .deps/mapping.Pogcc -g -O2 -o splint cscanner.o cgrammar.o mtgrammar.o llgrammar.o signature.o cppmain.o cpplib.o cppexp.o cpphash.o cpperror.o context.o uentry.o cprim.o macrocache.o qual.o qtype.o stateClause.o stateClauseList.o ctype.o cvar.o clabstract.o idDecl.o clause.o globalsClause.o modifiesClause.o warnClause.o functionClause.o functionClauseList.o metaStateConstraint.o metaStateConstraintList.o metaStateExpression.o metaStateSpecifier.o functionConstraint.o pointers.o cscannerHelp.o structNames.o transferChecks.o varKinds.o nameChecks.o exprData.o cstring.o fileloc.o message.o inputStream.o fileTable.o cstringTable.o valueTable.o stateValue.o llerror.o messageLog.o flagMarker.o aliasTable.o ynm.o sRefTable.o genericTable.o ekind.o usymtab.o multiVal.o lltok.o sRef.o lcllib.o randomNumbers.o fileLib.o globals.o flags.o general.o osd.o reader.o mtreader.o clauseStack.o filelocStack.o cstringList.o cstringSList.o sRefSetList.o ctypeList.o enumNameList.o enumNameSList.o exprNodeList.o exprNodeSList.o uentryList.o fileIdList.o filelocList.o qualList.o sRefList.o flagMarkerList.o idDeclList.o flagSpec.o globSet.o intSet.o typeIdSet.o guardSet.o usymIdSet.o sRefSet.o mtscanner.o stateInfo.o stateCombinationTable.o metaStateTable.o metaStateInfo.o annotationTable.o annotationInfo.o mttok.o mtDeclarationNode.o mtDeclarationPieces.o mtDeclarationPiece.o mtContextNode.o mtValuesNode.o mtDefaultsNode.o mtAnnotationsNode.o mtMergeNode.o mtAnnotationList.o mtAnnotationDecl.o mtTransferClauseList.o mtTransferClause.o mtTransferAction.o mtLoseReferenceList.o mtLoseReference.o mtDefaultsDeclList.o mtDefaultsDecl.o mtMergeItem.o mtMergeClause.o mtMergeClauseList.o exprNode.o exprChecks.o llmain.o help.o rcfiles.o constraintList.o constraintResolve.o constraintGeneration.o constraintTerm.o constraintExprData.o constraintExpr.o constraint.o loopHeuristics.o lsymbolSet.o sigNodeSet.o lslOpSet.o sortSet.o initDeclNodeList.o sortList.o declaratorInvNodeList.o interfaceNodeList.o sortSetList.o declaratorNodeList.o letDeclNodeList.o stDeclNodeList.o storeRefNodeList.o lslOpList.o lsymbolList.o termNodeList.o ltokenList.o traitRefNodeList.o pairNodeList.o typeNameNodeList.o fcnNodeList.o paramNodeList.o programNodeList.o varDeclarationNodeList.o varNodeList.o quantifierNodeList.o replaceNodeList.o importNodeList.o tokentable.o scan.o scanline.o lslparse.o lh.o checking.o lclctypes.o imports.o lslinit.o syntable.o usymtab_interface.o abstract.o ltoken.o lclscanline.o lclsyntable.o lcltokentable.o sort.o symtable.o lclinit.o shift.o lclscan.o lsymbol.o mapping.o -lflmake[3]: Leaving directory '/home/styx/splint/src'make[2]: Leaving directory '/home/styx/splint/src'Making all in libmake[2]: Entering directory '/home/styx/splint/lib'../src/splint -nof -nolib +impconj standard.h -dump standardSplint 3.1.2 --- 05 Mar 2022Finished checking --- no warnings../src/splint -nof -nolib +impconj -DSTRICT standard.h -dump standardstrictSplint 3.1.2 --- 05 Mar 2022Finished checking --- no warnings../src/splint -nof -nolib +impconj standard.h posix.h -dump posixSplint 3.1.2 --- 05 Mar 2022Finished checking --- no warnings../src/splint -nof -nolib +impconj -DSTRICT standard.h posix.h -dump posixstrictSplint 3.1.2 --- 05 Mar 2022Finished checking --- no warnings../src/splint -supcounts -nof -incondefs -nolib +impconj standard.h posix.h unix.h stdio.h stdlib.h -dump unixSplint 3.1.2 --- 05 Mar 2022Finished checking --- no 
warnings../src/splint -supcounts -nof -incondefs -nolib +impconj -DSTRICT standard.h posix.h unix.h stdio.h stdlib.h -dump unixstrictSplint 3.1.2 --- 05 Mar 2022Finished checking --- no warningsmake[2]: Leaving directory '/home/styx/splint/lib'Making all in importsmake[2]: Entering directory '/home/styx/splint/imports'LARCH_PATH="../lib:../lib" ../src/splint stdlib.lclSplint 3.1.2 --- 05 Mar 2022Finished checking --- no code processedLARCH_PATH="../lib:../lib" ../src/splint assert.lclSplint 3.1.2 --- 05 Mar 2022Finished checking --- no code processedLARCH_PATH="../lib:../lib" ../src/splint ctype.lclSplint 3.1.2 --- 05 Mar 2022Finished checking --- no code processedLARCH_PATH="../lib:../lib" ../src/splint errno.lclSplint 3.1.2 --- 05 Mar 2022Finished checking --- no code processedLARCH_PATH="../lib:../lib" ../src/splint limits.lclSplint 3.1.2 --- 05 Mar 2022Finished checking --- no code processedLARCH_PATH="../lib:../lib" ../src/splint locale.lclSplint 3.1.2 --- 05 Mar 2022Finished checking --- no code processedLARCH_PATH="../lib:../lib" ../src/splint math.lclSplint 3.1.2 --- 05 Mar 2022Finished checking --- no code processedLARCH_PATH="../lib:../lib" ../src/splint setjmp.lclSplint 3.1.2 --- 05 Mar 2022Finished checking --- no code processedLARCH_PATH="../lib:../lib" ../src/splint signal.lclSplint 3.1.2 --- 05 Mar 2022Finished checking --- no code processedLARCH_PATH="../lib:../lib" ../src/splint stdarg.lclSplint 3.1.2 --- 05 Mar 2022Finished checking --- no code processedLARCH_PATH="../lib:../lib" ../src/splint stdio.lclSplint 3.1.2 --- 05 Mar 2022Finished checking --- no code processedLARCH_PATH="../lib:../lib" ../src/splint string.lclSplint 3.1.2 --- 05 Mar 2022Finished checking --- no code processedLARCH_PATH="../lib:../lib" ../src/splint strings.lclSplint 3.1.2 --- 05 Mar 2022Finished checking --- no code processedLARCH_PATH="../lib:../lib" ../src/splint time.lclSplint 3.1.2 --- 05 Mar 2022Finished checking --- no code processedmake[2]: Leaving directory '/home/styx/splint/imports'Making all in docmake[2]: Entering directory '/home/styx/splint/doc'make[2]: Nothing to be done for 'all'.make[2]: Leaving directory '/home/styx/splint/doc'Making all in testmake[2]: Entering directory '/home/styx/splint/test'Use make check to run the test suitemake[2]: Leaving directory '/home/styx/splint/test'make[2]: Entering directory '/home/styx/splint'make[2]: Leaving directory '/home/styx/splint'make[1]: Leaving directory '/home/styx/splint' Here is the output from make install : Making install in srcmake[1]: Entering directory '/home/styx/splint/src'make install-ammake[2]: Entering directory '/home/styx/splint/src'make[3]: Entering directory '/home/styx/splint/src' /usr/bin/mkdir -p '/usr/local/bin' /usr/bin/install -c splint '/usr/local/bin'/usr/bin/install: cannot create regular file '/usr/local/bin/splint': Permission deniedmake[3]: *** [Makefile:628: install-binPROGRAMS] Error 1make[3]: Leaving directory '/home/styx/splint/src'make[2]: *** [Makefile:976: install-am] Error 2make[2]: Leaving directory '/home/styx/splint/src'make[1]: *** [Makefile:970: install] Error 2make[1]: Leaving directory '/home/styx/splint/src'make: *** [Makefile:374: install-recursive] Error 1 | The build itself succeeded; make install then failed only because your unprivileged account cannot write to the default install location ( /usr/bin/install -c splint '/usr/local/bin' ... Permission denied ). Since you have no sudo rights, tell the build system to install under your home directory instead: ./configure --prefix="$HOME/.local" && make && make install Then make sure $HOME/.local/bin is on your PATH — for example add export PATH="$HOME/.local/bin:$PATH" to ~/.bashrc (or ~/.profile ) and log in again — after which the splint command will be found. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/693025",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/512432/"
]
} |
693,083 | This is the output: $ ps x | grep rclone 7111 ? Sl 0:00 rclone mount xxx 7112 ? Sl 0:00 rclone mount xxx 7113 ? Sl 10:16 rclone mount xxx 9843 pts/1 S+ 0:00 grep --color=auto rclone I am thinking of somehow passing (pipe maybe) the second column of info, i.e. 7111, 7112, 7113 to be killed like so: kill 7111kill 7112kill 7113 Xargs is all I have in mind, but not sure if that is correct nor the way to use it. Thank you! Fedora 35 KDE if that matters. | This is what killall and pkill are for: killall rclone or pkill rclone . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/693083",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/446381/"
]
} |
693,160 | For a class on cryptography, I am trying to drain the entropy pool in Linux (e.g. make /proc/sys/kernel/random/entropy_avail go to 0 and block a command reading from /dev/random ) but I can't make it happen. I'm supposed to get reads from /dev/random to block. If I execute these two commands: watch -n 0.5 cat /proc/sys/kernel/random/entropy_avail to watch entropy and then: od -d /dev/random to dump the random pool, the value from the watch command hovers between 3700 and 3900, and gains and loses only a little while I run this command. I let both commands run for about three minutes with no discernible substantial change in the size of entropy_avail . I didn't do much on the computer during that time. From googling around I find that perhaps a hardware random number generator could be so good that the entropy won't drop but if I do: cat /sys/devices/virtual/misc/hw_random/rng_available I see nothing, I just get a blank line. So I have a few questions: What's replenishing my entropy so well, and how can I find the specific source of randomness? Is there any way to temporarily disable sources of randomness so I can force this blocking to happen? | There is a surprising amount of development going on around the Linux random device. The slow, blocking /dev/random is gone and replaced by a fast /dev/random that never runs out of data. You'll have to travel back in time, like prior to linux 4.8 ( which introduced a much faster crng algorithm ) or possibly linux 5.6 ( which introduced jitter entropy generation ). There is no way to get the original behavior back in current kernels. If you are seeing this issue in older versions of Linux, hwrng aside, you might be using haveged or rng-tools rngd , or similar userspace entropy providers. Some distros install these by default to avoid hangs while waiting for a few random bits, in that case you can uninstall or disable them or try it from within an initrd / busybox shell where no other processes are running. If the issue still persists, you might just have a very noisy piece of hardware from which kernel keeps collecting entropy naturally. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/693160",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/517039/"
]
} |
693,163 | My script is setting array from grep results in such a way that empty grep results are also being stored in array. Eg. set -xecho "${Arr[@]}" + echo '' 'word' '' 'word2' Can someone help unset those empty valuesso that echo "${#Arr[@]}" gives 2 instead of 4 Tried var=-1 && for i in "${Input_Arr[@]}"; do var=$((var+1)); if [ -z "$i" ]; then unset "${Input_Arr[$var]}"; fi; done But it isn't working | First, there's no need to invent a dummy index - you can access the array's indices using the indirection operator ! Second, "${Input_Arr[$var]}" is the element's value ; unset needs the element's name , Input_Arr[$var] or just Input_Arr[var] , since it's already an arithmetic context). So given: $ arr=(foo '' bar '' baz)$ declare -p arrdeclare -a arr=([0]="foo" [1]="" [2]="bar" [3]="" [4]="baz") then $ for i in ${!arr[@]}; do [[ -z ${arr[i]} ]] && unset arr[i]; done leaves $ declare -p arrdeclare -a arr=([0]="foo" [2]="bar" [4]="baz") This also works for associative arrays - with suitable adjustments for the non-numeric keys (including double quoting expansions to prevent potential split + glob): $ declare -A Arr=(['1st val']=foo ['2nd val']='' ['3rd val']=bar ['4th val']='' ['5th val']=baz)$ declare -p Arrdeclare -A Arr=(["5th val"]="baz" ["2nd val"]="" ["4th val"]="" ["3rd val"]="bar" ["1st val"]="foo" )$ for i in "${!Arr[@]}"; do [[ -z ${Arr[$i]} ]] && unset Arr["$i"]; done$ declare -p Arrdeclare -A Arr=(["5th val"]="baz" ["3rd val"]="bar" ["1st val"]="foo" ) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/693163",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/516583/"
]
} |
693,185 | On linux (ubuntu) I want to record basically as low a quality and small audio file I can for some long recordings. So long as the audio is discernable and not so bad it's irritating to listen to (scratchy or something) I want to make it as small as possible. It seems like the -f parameter controls this, and there are a number of formats listed, however I can't find any documentation on what they really represent as far as quality. Does anyone know the proper flag(s) for this? I might add I don't really care if it's arecord or rec, or some other sound recording app. So long as it's recording audio from my browser, like arecord does, that works. thanks | With arecord , -f selects the raw sample format (e.g. -f S16_LE , -f U8 ) or a preset such as -f cd ; it is not a compression-quality knob, because arecord's WAV output is always uncompressed. The only way to shrink a WAV is fewer bits, channels and samples, e.g. arecord -f S16_LE -r 8000 -c 1 out.wav . For genuinely small files at listenable (speech) quality, record through a lossy codec instead — for example, assuming an ffmpeg build with ALSA and Opus support: ffmpeg -f alsa -i default -ac 1 -ar 16000 -b:a 16k out.opus (adjust the input device to taste). Note also that arecord captures from a sound-card capture device, not from the browser as such; to record application output you would capture the monitor source of your output device via PulseAudio/PipeWire. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/693185",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/517058/"
]
} |
693,193 | I've been stuck trying to figure this one out. Here is an example of my files: ~/download/Video/CE/~/download/Video/CE/153~/download/Video/CE/153/2022-03-04~/download/Video/CE/153/2022-03-05~/download/Video/CE/281~/download/Video/CE/281/2022-03-04~/download/Video/CE/281/2022-03-05~/download/Video/GA/~/download/Video/GA/154~/download/Video/GA/154/2022-03-04~/download/Video/GA/154/2022-03-05~/download/Video/GA/615~/download/Video/GA/615/2022-03-04~/download/Video/GA/615/2022-03-05...etc There are several dozen folders in /download/Video and then several hundred folders in /download/video/*/ I am trying to move all folders that contain "2022-03-04" and all of it's contents into a new directory. My desired outcome is: /mnt/d/archive/Video/CE/153/2022-03-04/mnt/d/archive/Video/CE/281/2022-03-04/mnt/d/archive/Video/GA/154/2022-03-04...etc I have tried various things but my biggest problems is that I can't just simply move the video folder to /mnt/d/archive because I want to keep the 2022-03-05 folders (and the superdirectories) on the main drive. Here is what I've tried so far: cd ~/download/Video/CE; for subdir in *; do mv $subdir/2022-03-04 /mnt/d/archive/Video/CE/$subdir/; done; ^ and then just copy and paste this code for GA and all the other subdirectories in /Video/ This created the following: /mnt/d/archive/Video/CE/153/file.mp4/mnt/d/archive/Video/CE/153/file2.mp4/mnt/d/archive/Video/CE/281/file.mp4...etc Which is great, except it doesn't copy the '2022-03-04' folder itself. I need the 2022-03-04 folder. So I tried this code instead: cd ~/download/Video/CE; for subdir in *; do mv $subdir/2022-03-04 ~/mnt/d/archive/Video/CE/$subdir/2022-03-04; done; However, now I ran into another problem. None of the folders nor files copied this time. I just get a "No such file or directory" error. Any ideas? | Two separate things are going wrong. In the second attempt the destination starts with ~/mnt/d/... — that stray ~ points inside your home directory, where no mnt/d exists, hence "No such file or directory". And in both attempts the per-video destination directory doesn't exist yet; when the target of mv is a nonexistent path, mv renames the source to that path, which is most likely why the first attempt left the contents of 2022-03-04 directly under .../CE/153 instead of under .../CE/153/2022-03-04 . Create each destination directory first and the dated directory moves over intact: cd ~/download/Video/CE; for subdir in */; do [ -d "$subdir/2022-03-04" ] || continue; mkdir -p "/mnt/d/archive/Video/CE/$subdir" && mv "$subdir/2022-03-04" "/mnt/d/archive/Video/CE/$subdir/"; done — then repeat per top-level folder (GA, ...), or wrap it in an outer loop over ~/download/Video/*/ . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/693193",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/398851/"
]
} |
693,257 | I have this sed filter: /.*[1-9][0-9][0-9] .*/{ s/.*\([1-9][0-9][0-9] .*\)/\1/}/.*[1-9][0-9] .*/{ s/.*\([1-9][0-9] .*\)/\1/}/.*[0-9] .*/{ # but this is always preferred/executed s/.*\([0-9] .*\)/\1/} The problem is that the first two are more restrictive, and they are not executed because the last, third one is more "powerful" because it includes the first two. Is there a way to make sed take the first two, with a "priority order"? Like if the first matches do first thingselif the second matches do second thingselif the third matches do third things | Without knowing exactly what your goal is with those regexps, and without test data, what you need is a t branch, which will cause your sed to branch to the end of the script and skip processing the rest of the code once a successful substitution has been done. sed '/regexp1/ s/do/things/; t /regexp2/ s/do/things/; t /regexp3/ s/do/things/ ' infile Based on the command you used, you can also use the b branch (unconditionally branch to the specified label, or to the end of the script if the label is omitted) sed -e '/regex/{ s/do/things/; b' -e '}; ...' too. From man sed : t label If a s/// has done a successful substitution since the last input line was read and since the last t or T command, then branch to label; if label is omitted, branch to end of script. b label Branch to label; if label is omitted, branch to end of script. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/693257",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/314643/"
]
} |
693,667 | I'm trying to understand how to crack passwords from shadow file and I see the root user contains the following content on /etc/shadow : root:!:17888:0:99999:7::: What does It mean? How can I crack this password using john ? | The ! indicates that the account has no usable password. You cannot crack it. This information can be obtained from the documentation installed on your system. See man 5 shadow and search for ! | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/693667",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/517658/"
]
} |
693,679 | I'm trying to sort a directory of files by LoC. But sort appears to do nothing if the lines are piped in: paths=`find ./src/ | egrep "\.(cpp|h)$"`for path in $paths; dowc -l $path | sort -n;done Results in something like this (pre-sorted by find , but the wc numbers are ignored): 50 /a/a.cpp10 /a/a.h200 /b/b.cpp13 /b/b.h... If I use sort on a file instead of a pipe: for path in $paths; dowc -l $path >> test.txt;donesort -n test.txt it does work: 10 /a/a.h13 /b/b.h50 /a/a.cpp200 /b/b.cpp... Why does the pipe version not work? | Because sort runs inside the loop, each invocation receives only the single line produced by that iteration's wc -l — one line is trivially "sorted", so the output simply appears in whatever order the loop produces it. The file-based version works because all the lines are collected in test.txt before sort runs once over the whole set. Do the same with a pipe by moving sort outside the loop: for path in $paths; do wc -l "$path"; done | sort -n (quoting "$path" also guards against file names with spaces). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/693679",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/494826/"
]
} |
693,683 | I have a file which looks like this: 120E+00 360E+01 258E-09-784E+02 -847E+00-360E+01 242E-08 420E+05 14E+00 360E+01 254E+00 78E+02 -120E+00-254E-01-223E-09-786E-02 E+00,E+01,E-01 etc are in scientific notation meaning ^0,^1,^-1 etc. I would like to obtain the file in the following format: 120E+00 360E+01 258E-09 -784E+02 -847E+00 -360E+01 242E-08 420E+05 14E+00 360E+01 254E+00 78E+02 -120E+00 -254E-01 -223E-09 -786E-02 I want to add a space before the negative numbers, without modifying the scientific notation, or the negative number itself. I also don't want to add a space if the negative number is the first one in the line (first column), just from column 2 to the end of the line. I would also like to use awk. I hope my explanation is clear. Thank you! | So, essentially, you want to add a space before every - if the character immediately before the - is a number. You can do that in awk but it really isn't a good choice here. It is far simpler to do it with sed : $ sed -E 's/([0-9])-/\1 -/g' file120E+00 360E+01 258E-09 -784E+02 -847E+00 -360E+01 242E-08 420E+05 14E+00 360E+01 254E+00 78E+02 -120E+00 -254E-01 -223E-09 -786E-02 Or Perl: $ perl -pe 's/([0-9])-/\1 -/g' file120E+00 360E+01 258E-09 -784E+02 -847E+00 -360E+01 242E-08 420E+05 14E+00 360E+01 254E+00 78E+02 -120E+00 -254E-01 -223E-09 -786E-02 For both of the above, add -i if you want to modify the original file instead of printing it out: sed -i -E 's/([0-9])-/\1 -/g' fileperl -i -pe 's/([0-9])-/\1 -/g' file And here's a way to do it in GNU awk any modern version of mawk and any other awk that supports gensub , courtesy of Ed Morton who provided this in a now deleted comment: awk '{print gensub(/([0-9])-/,"\\1 -","g")}' file With GNU awk ( gawk ), you can use -i inplace to edt the original file: gawk -i inplace '{print gensub(/([0-9])-/,"\\1 -","g")}' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/693683",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/516564/"
]
} |
693,720 | I have some test output, looking like PASS: tests/test_mutex_rmwPASS: tests/test_mutex_trylockPASS: tests/test_malloc_irregFAIL: tests/ARMCI_PutS_latencyFAIL: tests/ARMCI_AccS_latencyPASS: tests/test_groupsPASS: tests/test_group_splitPASS: tests/test_malloc_groupFAIL: tests/test_accsFAIL: tests/test_accs_dla I want to filter the output to just view the failures. It would be convenient to just copy the text from screen and paste into stdin to pass into grep, e.g. grep FAIL and Shift-Ctrl-V (or middle mouse button) to copy the text in. What I want to see is just FAIL: tests/ARMCI_PutS_latencyFAIL: tests/ARMCI_AccS_latencyFAIL: tests/test_accsFAIL: tests/test_accs_dla But instead, the input pasted in is displayed to screen, and because of buffering the input gets mixed with the final output: $ grep FAILPASS: tests/test_mutex_rmwPASS: tests/test_mutex_trylockPASS: tests/test_malloc_irregFAIL: tests/ARMCI_PutS_latencyFAIL: tests/ARMCI_AccS_latencyPASS: tests/test_groupsPASS: tests/test_group_splitPASS: tests/test_malloc_groupFAIL: teFAIL: tests/ARMCI_PutS_latencysts/test_accsFAIL: tests/test_accs_dlaFAIL: tests/ARMCI_AccS_latencyFAIL: tests/test_accsFAIL: tests/test_accs_dla It would make sense to me for the input to first be provided to cat and then passed to grep, cat | grep FAIL , but that doesn't actually help. The buffer mixup still occurs. Of course it can be filtered cleanly if the data is placed in a file which is passed to grep. But what I'm looking for is a convenience tool to simply provide light filtering of text copied from the terminal output via the clipboard buffer. How is that best done? Equivalently, how can pasting be done without echoing to screen (providing data silently as stdin for the command)? One method is to explicitly switch off echoing, stty -echo; grep FAIL; stty echo That does work, but I suspect there are ways of doing it without toggling stty. Do you know other shell-based approaches? I use bash (on Debian GNU/Linux), but POSIX shell solutions are also interesting. | Mark the text with your mouse, then use xclip : xclip -o | grep FAIL or copied from Clipboard (Ctrl-c): xclip -selection clipboard -o | grep FAIL Or: xclip -sel c -o | grep FAIL for short. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/693720",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/492800/"
]
} |
693,760 | I am writing a shell script and I need to print the nth argument of the script. For example,suppose we have n=3 and our script is run with enough arguments.Now I need to print the n th argument, i.e. $3 . But if n=2 , we would print argument $2 . I don't want to use if statements. I wanted to do something like echo $($n) but the above doesn't work the way I need it to. | By chronological order, in various shells: csh (late 70s): $argv[$n] (also works in zsh since 1991 and fish since 2005) zsh (1991): $argv[n] / $@[n] / $*[n] (the last too also supported by yash but only with extra braces: ${@[n]} / ${*[n]} ) rc (early 90s): $$n / $*($n) (also works in es, akanga) ksh93 (1993): ${@:n:1} , ${*:n:1} (also supported by bash since 1996; zsh also since 2010 , though you need ${@:$n:1} or ${@: n:1} there to avoid conflict with csh-style modifiers and see there about the "$*" case) bash (1996): ${!n} zsh ( 1999 ): ${(P)n} . Remember that in ksh93/bash/yash, you need to quote parameter expansions in list contexts at least, and csh can hardly be used to write reliable code. In bash , there's a difference between "${!n}" and "${@:n:1}" in list context when the n th positional parameter is not set in that the latter then expands to no argument at all whilst the former expands to one empty element. In Bourne-like shells (but not the Bourne shell where that won't work for indices past the 9th), and with standard POSIX sh syntax, you can also do: eval "nth=\${$n}" There will also be differences in behaviour among all those if $n does not contain the canonical decimal representation of an integer number strictly greater than 0. If you can't guarantee that will be the case, using most of those (not just the eval one) would introduce an arbitrary command execution vulnerability (the only exceptions being with rc and maybe csh above). Also remember that except in zsh (with echo -E - $argv[n] ), yash (with ECHO_STYLE=raw echo "${*[$n]}" ) and fish (with echo -- $argv[$n] ), echo can't be used to output arbitrary data, use printf '%s\n' ... instead ). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/693760",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/268128/"
]
} |
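A follow-up sketch, not part of the answer above: before using the eval form in plain POSIX sh, validate that $n holds a canonical positive integer, since the answer notes that unvalidated values open an arbitrary-command-execution hole. The case pattern below is one assumed definition of "safe index":

case $n in
  ("" | *[!0-9]* | 0*) echo >&2 "invalid index: $n"; exit 1;;
esac
eval "nth=\${$n}"
printf '%s\n' "$nth"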
693,942 | I need to put one text inside another text. 1) I have a file with list of input values:

A1
B2
C3
D4
E5

I have a wrapper pattern which should contain the text:

$wgSpecialPageLockdown['INPUT_COMES_HERE'] = array('sysop');

For each input value, a wrapper with input should be created, so the final result should be a file with:

$wgSpecialPageLockdown['A1'] = array('sysop');
$wgSpecialPageLockdown['B2'] = array('sysop');
$wgSpecialPageLockdown['C3'] = array('sysop');
$wgSpecialPageLockdown['D4'] = array('sysop');
$wgSpecialPageLockdown['E5'] = array('sysop');

I am open to do that in GUI as well, such as Visual Studio Code. How would you prefer to do such an action? And, by the way, how is such a textual operation commonly named? | You can use awk :

$ awk '{ print "$wgSpecialPageLockdown[\47"$0"\47] = array(\47sysop\47);" }' file > newfile
$wgSpecialPageLockdown['A1'] = array('sysop');
$wgSpecialPageLockdown['B2'] = array('sysop');
$wgSpecialPageLockdown['C3'] = array('sysop');
$wgSpecialPageLockdown['D4'] = array('sysop');
$wgSpecialPageLockdown['E5'] = array('sysop'); | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/693942",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/517913/"
]
} |
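An alternative sketch using sed instead of awk (an addition, not part of the answer above), assuming any POSIX sed; & in the replacement stands for the whole matched line, and \$ keeps the shell from expanding the dollar sign inside double quotes:

sed "s/.*/\$wgSpecialPageLockdown['&'] = array('sysop');/" file > newfile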
694,163 | A script for terminating a server on a certain port:

#!/bin/bash
PORT="$1"
SIGNAL="$2"
if [ "${SIGNAL}" != "" ]; then SIGNAL=" -${SIGNAL}"; fi
lsof -i:"${PORT}" |\
 grep -e "localhost:${PORT}" -e TCP -e LISTEN |\
 tr -s ' ' |\
 cut -d' ' -f2 |\
 tee /dev/tty |\
 xargs --no-run-if-empty kill "$SIGNAL"

Works: killbyport 4242 But if I want to do a kill -9 I'd do: killbyport 4242 9 , and that errors:

kill: (-9): No such process

The xargs and kill aren't cooperating – how do I fix it? (PS: I want to fix this script, rather than change it to something else. It's almost working.) | The problem is that you're explicitly adding a space to SIGNAL :

SIGNAL=" -${SIGNAL}"

and then referencing it in quotes:

kill "$SIGNAL"

Since kill is seeing an argument that doesn't begin with dash (because it begins with space, and then dash), it isn't seen as an option, but as an ordinary argument — in this case, a PID. A quick fix is to not add the space to SIGNAL :

SIGNAL="-${SIGNAL}"

But it doesn't make sense that this is working when SIGNAL is null. See But what if …? . The first example ( ignorecase ) almost exactly matches your situation. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/694163",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291147/"
]
} |
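A sketch of the whole script with the answer's fix folded in, plus a guard for the no-signal case (untested; everything else is the asker's original pipeline). ${SIGNAL:+"-$SIGNAL"} expands to nothing at all when $SIGNAL is empty, so kill never receives a stray empty argument:

#!/bin/bash
PORT="$1"
SIGNAL="$2"
lsof -i:"${PORT}" |\
 grep -e "localhost:${PORT}" -e TCP -e LISTEN |\
 tr -s ' ' |\
 cut -d' ' -f2 |\
 tee /dev/tty |\
 xargs --no-run-if-empty kill ${SIGNAL:+"-$SIGNAL"}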
694,236 | grep manual quote:

-o, --only-matching
    Print only the matched (non-empty) parts of a matching line, with each such part on a separate output line.

matched (non-empty) parts of a matching line means non empty line right? Update In case comments are deleted, from Stephen's answer below: Paraphrase: How about "empty strings in a line" instead "empty parts of a matching line"? –Lahor 10 hours ago I think the important label is "matched parts"; "matched strings" would work too, I'm not the right person to ask which is better. –Stephen Kitt 59 mins ago | It really refers to non-empty matched parts of a matching line, not non-empty lines. The result is that the output only contains non-empty lines, each one containing a non-empty match. grep -o prints one line per match:

$ grep -Eo '.?' <<<"hello"
h
e
l
l
o

but it doesn't output empty matches. Many regular expressions can match empty text; for example:

$ grep -E '.?' <<<""

outputs a blank line because the empty line matches. However, adding -o results in no output at all:

$ grep -Eo '.?' <<<""

because -o ignores empty matches. Specifying that -o ignores empty matches means the implementers don't need to decide what to do with zero-length matches, which would otherwise produce rather lengthy (and useless) output in many circumstances. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/694236",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/517913/"
]
} |
694,242 | I am trying to install NixOS on my MacBook following the manual ( https://nixos.org/manual/nixos/stable/index.html#sec-installation-partitioning-formatting ) and I am stuck at the partitioning/formatting stage. After creating necessary partitions on the disk I took for NixOS (an MS-DOS partition made via Disk Utility) with parted , I need to initialise them as ext4 partitions and a swap partition. And in order to do that, I need to reference them. The problem is that my NixOS disk is known as /dev/nvme0n1p3 (p1 and p2 being ESP and OS X drive respectively) and I cannot access the reference names of the partitions I need to initialise; gdisk shows them correctly but lsblk does not count them as devices. Is there a way to see how to reference these "sub-sub-subpartitions"? P.S.: I am a complete newbie, this is the first time I install a Linux system so I can make real blunders in terminology etc. I apologise for this in advance.
Edit: the lsblk output
Edit 2: partition table of nvme0n1p3 according to gdisk
Edit 3: so I tried to see what fdisk will show me, and it labels these partitions as nvme0n1p3p1, nvme0n1p3p2 and nvme0n1p3p3. Technically the question is solved, but now I cannot do anything with these partitions since all the commands involving them result in "no such file or directory" error. | It really refers to non-empty matched parts of a matching line, not non-empty lines. The result is that the output only contains non-empty lines, each one containing a non-empty match. grep -o prints one line per match:

$ grep -Eo '.?' <<<"hello"
h
e
l
l
o

but it doesn't output empty matches. Many regular expressions can match empty text; for example:

$ grep -E '.?' <<<""

outputs a blank line because the empty line matches. However, adding -o results in no output at all:

$ grep -Eo '.?' <<<""

because -o ignores empty matches. Specifying that -o ignores empty matches means the implementers don't need to decide what to do with zero-length matches, which would otherwise produce rather lengthy (and useless) output in many circumstances. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/694242",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/518246/"
]
} |
694,323 | I am having a 5.10.0-11-cloud-amd64 kernel and have installed the 5.10.0-12-amd64 kernel on Debian 11.2. I want to set 5.10.0-12-amd64 as the default kernel temporarily. I am new to Grub; how do I set the default kernel to 5.10.0-12-amd64? My /lib/modules :

5.10.0-10-cloud-amd64  5.10.0-11-cloud-amd64  5.10.0-12-cloud-amd64
5.10.0-11-amd64        5.10.0-12-amd64

My /boot/ only has grub folder and no grub2 folder. Command grep -e "menuentry " -e submenu -e linux /boot/grub/grub.cfg yields:

### BEGIN /etc/grub.d/10_linux ###
set linux_gfx_mode=
export linux_gfx_mode
menuentry 'Debian GNU/Linux' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-ccb4ba21-fd62-42c9-b8eb-75a437b1747d' {
	linux /boot/vmlinuz-5.10.0-12-cloud-amd64 root=UUID=ccb4ba21-fd62-42c9-b8eb-75a437b1747d ro console=tty0 console=ttyS0,115200 earlyprintk=ttyS0,115200 consoleblank=0
submenu 'Advanced options for Debian GNU/Linux' $menuentry_id_option 'gnulinux-advanced-ccb4ba21-fd62-42c9-b8eb-75a437b1747d' {
	menuentry 'Debian GNU/Linux, with Linux 5.10.0-12-cloud-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.10.0-12-cloud-amd64-advanced-ccb4ba21-fd62-42c9-b8eb-75a437b1747d' {
	linux /boot/vmlinuz-5.10.0-12-cloud-amd64 root=UUID=ccb4ba21-fd62-42c9-b8eb-75a437b1747d ro console=tty0 console=ttyS0,115200 earlyprintk=ttyS0,115200 consoleblank=0
	menuentry 'Debian GNU/Linux, with Linux 5.10.0-12-cloud-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.10.0-12-cloud-amd64-recovery-ccb4ba21-fd62-42c9-b8eb-75a437b1747d' {
	linux /boot/vmlinuz-5.10.0-12-cloud-amd64 root=UUID=ccb4ba21-fd62-42c9-b8eb-75a437b1747d ro single console=tty0 console=ttyS0,115200 earlyprintk=ttyS0,115200 consoleblank=0
	menuentry 'Debian GNU/Linux, with Linux 5.10.0-12-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.10.0-12-amd64-advanced-ccb4ba21-fd62-42c9-b8eb-75a437b1747d' {
	linux /boot/vmlinuz-5.10.0-12-amd64 root=UUID=ccb4ba21-fd62-42c9-b8eb-75a437b1747d ro console=tty0 console=ttyS0,115200 earlyprintk=ttyS0,115200 consoleblank=0
	menuentry 'Debian GNU/Linux, with Linux 5.10.0-12-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.10.0-12-amd64-recovery-ccb4ba21-fd62-42c9-b8eb-75a437b1747d' {
	linux /boot/vmlinuz-5.10.0-12-amd64 root=UUID=ccb4ba21-fd62-42c9-b8eb-75a437b1747d ro single console=tty0 console=ttyS0,115200 earlyprintk=ttyS0,115200 consoleblank=0
	menuentry 'Debian GNU/Linux, with Linux 5.10.0-11-cloud-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.10.0-11-cloud-amd64-advanced-ccb4ba21-fd62-42c9-b8eb-75a437b1747d' {
	linux /boot/vmlinuz-5.10.0-11-cloud-amd64 root=UUID=ccb4ba21-fd62-42c9-b8eb-75a437b1747d ro console=tty0 console=ttyS0,115200 earlyprintk=ttyS0,115200 consoleblank=0
	menuentry 'Debian GNU/Linux, with Linux 5.10.0-11-cloud-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.10.0-11-cloud-amd64-recovery-ccb4ba21-fd62-42c9-b8eb-75a437b1747d' {
	linux /boot/vmlinuz-5.10.0-11-cloud-amd64 root=UUID=ccb4ba21-fd62-42c9-b8eb-75a437b1747d ro single console=tty0 console=ttyS0,115200 earlyprintk=ttyS0,115200 consoleblank=0
	menuentry 'Debian GNU/Linux, with Linux 5.10.0-11-amd64' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.10.0-11-amd64-advanced-ccb4ba21-fd62-42c9-b8eb-75a437b1747d' {
	linux /boot/vmlinuz-5.10.0-11-amd64 root=UUID=ccb4ba21-fd62-42c9-b8eb-75a437b1747d ro console=tty0 console=ttyS0,115200 earlyprintk=ttyS0,115200 consoleblank=0
	menuentry 'Debian GNU/Linux, with Linux 5.10.0-11-amd64 (recovery mode)' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.10.0-11-amd64-recovery-ccb4ba21-fd62-42c9-b8eb-75a437b1747d' {
	linux /boot/vmlinuz-5.10.0-11-amd64 root=UUID=ccb4ba21-fd62-42c9-b8eb-75a437b1747d ro single console=tty0 console=ttyS0,115200 earlyprintk=ttyS0,115200 consoleblank=0
### END /etc/grub.d/10_linux ###
### BEGIN /etc/grub.d/20_linux_xen ###
### END /etc/grub.d/20_linux_xen ###

Thanks telcoM for providing the above command | First, check /etc/default/grub . There should be a GRUB_DEFAULT= variable in it. If it is set to GRUB_DEFAULT=0 or unset, the default will be to boot the first entry in the boot menu (entry #0). If it is set to anything other than GRUB_DEFAULT=saved , the only way to reliably change the default will be to edit GRUB_DEFAULT= in /etc/default/grub and then run update-grub (or grub-mkconfig > /boot/grub/grub.cfg ) as root.

If the setting is GRUB_DEFAULT=saved , then two commands, grub-reboot and grub-set-default will be usable. The former will set the kernel to boot for one boot only and then will return to the previous default. The latter will switch the GRUB default entry until you change it again, either by using grub-set-default or by selecting something else in the GRUB boot menu.

The simplest form of default setting specifies just the menu entry number (starting from entry #0 at the top). But modern GRUB menu is usually constructed to have the newest kernel in the first position, then a submenu of all the other kernels in the second position, and any other OSs and other custom entries after that submenu.

To view the GRUB menu in useful way, run grep -e "menuentry " -e submenu -e linux /boot/grub/grub.cfg . (The space after menuentry is needed to filter out some false hits.) You will see a number of fairly long menuentry and submenu lines, in the exact same order the real menu will have. Also, the entries of the submenu will be indented, while the main menu entries won't be. This will allow you to see the structure of the currently active GRUB menu without rebooting the system. The title of the topmost menu item is usually unhelpfully just "Debian GNU/Linux" with no kernel version number, but the command I gave above will also list the linux /boot/vmlinuz-<kernel version number> ... command that is part of the first menu entry block, and that will reveal the exact kernel version that will be booted by the topmost entry.

If you need to select a menu entry that is within the submenu (i.e. its menuentry line is indented), then the default entry specification should be the identifier of the submenu line, a > character, and then the identifier of the actual menu entry you want. The menu entry identifiers can be menu entry numbers (starting from 0 in each menu), identifier strings (the quoted strings after the $menuentry_id_option on each menuentry or submenu line), or the visible titles of each menu item and submenu. The identifier strings for Linux kernels seem to be of the form gnulinux-simple-<Linux root filesystem UUID> for the first entry, and gnulinux-<kernel version>-advanced-<Linux root filesystem UUID> for the entries in the "Advanced options ..." submenu.
The visible menu item titles in US English-configured Debian 11 are "Debian GNU/Linux" for the first item, "Advanced options for Debian GNU/Linux" for the submenu, and "Debian GNU/Linux, with Linux " for the non-recovery-mode entries in the submenu. So, assuming your GRUB menu has no other OSs complicating the matters, you could set the 5.10.0-12-amd64 kernel as the default until you change it back by editing the GRUB_DEFAULT= line in your /etc/default/grub to: GRUB_DEFAULT="1>Debian GNU/Linux, with Linux 5.10.0-12-amd64" and running update-grub as root. If you want more flexibility, you might instead set GRUB_DEFAULT=saved , run update-grub , and then run grub-set-default "1>Debian GNU/Linux, with Linux 5.10.0-12-amd64" to change the default until you change it back, or run grub-reboot "1>Debian GNU/Linux, with Linux 5.10.0-12-amd64" to change the default for one boot only. The 1> prefix comes from the requirement to first select the submenu entry, and the fact that it's always the second entry in the main GRUB menu (i.e. menu item #1). If you used grub-set-default , you can return to whatever kernel is currently the "latest" according to simple alpha-numeric sorting, by using grub-set-default 0 . Remember, the first entry in each menu level is numbered #0. With your menu entries, you could specify the menuentry 'Debian GNU/Linux, with Linux 5.10.0-12-amd64' line with menu entry numbers as: GRUB_DEFAULT="1>2" i.e. the second entry (entry #1) opens the submenu, and then pick the third entry (entry #2) of the submenu. Or with menu titles as: GRUB_DEFAULT="Advanced options for Debian GNU/Linux>Debian GNU/Linux, with Linux 5.10.0-12-amd64" Or with menu ID strings as: GRUB_DEFAULT="gnulinux-advanced-ccb4ba21-fd62-42c9-b8eb-75a437b1747d>gnulinux-5.10.0-12-amd64-advanced-ccb4ba21-fd62-42c9-b8eb-75a437b1747d" Or with any combination of the above methods. The advantage of using the menu titles or ID strings is that they keep referring to the same kernel even if you install & remove kernel packages, as long as the chosen kernel is still available. Using the menu item numbers would require you to check (and adjust if necessary) the settings after each kernel update, so if used together with any kind of automatic updates, it might cause nasty surprises. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/694323",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/514562/"
]
} |
694,324 | If one downloads a webpage with curl or wget it comes down as html. But if I wish to download it as plain text (i.e. no HTML parsing whatsoever), exactly or almost exactly as it would be plainly read in a web browser (with any image/video/audio omitted of course), what would be a way to do that? | First, check /etc/default/grub . There should be a GRUB_DEFAULT= variable in it. If it is set to GRUB_DEFAULT=0 or unset, the default will be to boot the first entry in the boot menu (entry #0). If it is set to anything other than GRUB_DEFAULT=saved , the only way to reliably change the default will be to edit GRUB_DEFAULT= in /etc/default/grub and then run update-grub (or grub-mkconfig > /boot/grub/grub.cfg ) as root. If the setting is GRUB_DEFAULT=saved , then two commands, grub-reboot and grub-set-default will be usable. The former will set the kernel to boot for one boot only and then will return to the previous default. The latter will switch the GRUB default entry until you change it again, either by using grub-set-default or by selecting something else in the GRUB boot menu. The simplest form of default setting specifies just the menu entry number (starting from entry #0 at the top). But modern GRUB menu is usually constructed to have the newest kernel in the first position, then a submenu of all the other kernels in the second position, and any other OSs and other custom entries after that submenu. To view the GRUB menu in useful way, run grep -e "menuentry " -e submenu -e linux /boot/grub/grub.cfg . (The space after menuentry is needed to filter out some false hits.) You will see a number of fairly long menuentry and submenu lines, in the exact same order the real menu will have . Also, the entries of the submenu will be indented, while the main menu entries won't be. This will allow you to see the structure of the currently active GRUB menu without rebooting the system. The title of the topmost menu item is usually unhelpfully just "Debian GNU/Linux" with no kernel version number, but the command I gave above will also list the linux /boot/vmlinuz-<kernel version number> ... command that is part of the first menu entry block, and that will reveal the exact kernel version that will be booted by the topmost entry. If you need to select a menu entry that is within the submenu (i.e. its menuentry line is indented), then the default entry specification should be the identifier of the submenu line, a > character, and then the identifier of the actual menu entry you want. The menu entry identifiers can be menu entry numbers (starting from 0 in each menu), identifier strings (the quoted strings after the $menuentry_id_option on each menuentry or submenu line), or the visible titles of each menu item and submenu. The identifier strings for Linux kernels seem to be of the form gnulinux-simple-<Linux root filesystem UUID> for the first entry, and gnulinux-<kernel version>-advanced-<Linux root filesystem UUID> for the entries in the "Advanced options ..." submenu. The visible menu item titles in US English-configured Debian 11 are "Debian GNU/Linux" for the first item, "Advanced options for Debian GNU/Linux" for the submenu, and "Debian GNU/Linux, with Linux " for the non-recovery-mode entries in the submenu. 
So, assuming your GRUB menu has no other OSs complicating the matters, you could set the 5.10.0-12-amd64 kernel as the default until you change it back by editing the GRUB_DEFAULT= line in your /etc/default/grub to: GRUB_DEFAULT="1>Debian GNU/Linux, with Linux 5.10.0-12-amd64" and running update-grub as root. If you want more flexibility, you might instead set GRUB_DEFAULT=saved , run update-grub , and then run grub-set-default "1>Debian GNU/Linux, with Linux 5.10.0-12-amd64" to change the default until you change it back, or run grub-reboot "1>Debian GNU/Linux, with Linux 5.10.0-12-amd64" to change the default for one boot only. The 1> prefix comes from the requirement to first select the submenu entry, and the fact that it's always the second entry in the main GRUB menu (i.e. menu item #1). If you used grub-set-default , you can return to whatever kernel is currently the "latest" according to simple alpha-numeric sorting, by using grub-set-default 0 . Remember, the first entry in each menu level is numbered #0. With your menu entries, you could specify the menuentry 'Debian GNU/Linux, with Linux 5.10.0-12-amd64' line with menu entry numbers as: GRUB_DEFAULT="1>2" i.e. the second entry (entry #1) opens the submenu, and then pick the third entry (entry #2) of the submenu. Or with menu titles as: GRUB_DEFAULT="Advanced options for Debian GNU/Linux>Debian GNU/Linux, with Linux 5.10.0-12-amd64" Or with menu ID strings as: GRUB_DEFAULT="gnulinux-advanced-ccb4ba21-fd62-42c9-b8eb-75a437b1747d>gnulinux-5.10.0-12-amd64-advanced-ccb4ba21-fd62-42c9-b8eb-75a437b1747d" Or with any combination of the above methods. The advantage of using the menu titles or ID strings is that they keep referring to the same kernel even if you install & remove kernel packages, as long as the chosen kernel is still available. Using the menu item numbers would require you to check (and adjust if necessary) the settings after each kernel update, so if used together with any kind of automatic updates, it might cause nasty surprises. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/694324",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/517913/"
]
} |
694,397 | I'm currently working on a little project; in a kml file called weatherdata.kml, I would like to extract the sea level pressure for each <Placemark> element. I'm trying to parse the information about the sea level pressure and put it into a file called report.csv ; and print the sea level pressure on a new line each time. I think this would work with awk and so far I've tried this:

awk -F '[>,]' '/minSeaLevelPres/ {print $2}' report.csv

But when I run this command in shell, I get this:

1002</minSeaLevelPres
1002</minSeaLevelPres
1002</minSeaLevelPres
1001</minSeaLevelPres
1001</minSeaLevelPres
1001</minSeaLevelPres
1001</minSeaLevelPres
1001</minSeaLevelPres
1001</minSeaLevelPres
1001</minSeaLevelPres
1001</minSeaLevelPres
1002</minSeaLevelPres
1002</minSeaLevelPres
1003</minSeaLevelPres

when I want to get this:

1002
1002
1002
1001
1001
1001
1001
1001
1001
1001
1001
1002
1002
1003

I can't work out how to get rid of </minSeaLevelPres . Would anyone be able to help? Below is an example of part of a placemark element in weatherdata.kml

<Placemark>
  <styleUrl>#ex</styleUrl>
  <lat>19.2</lat>
  <lon>-24.1</lon>
  <stormName>NINE</stormName>
  <stormNum>10</stormNum>
  <basin>AL</basin>
  <stormType>LO</stormType>
  <intensity>20</intensity>
  <intensityMPH>23</intensityMPH>
  <intensityKPH>37</intensityKPH>
  <minSeaLevelPres>1002</minSeaLevelPres>
  <atcfdtg>2020082350</atcfdtg>
  <dtg>0000 UTC JAN 07</dtg>
</Placemark> | I suggest to use a tool that can handle XML correctly:

xmlstarlet select --template --value-of '//minSeaLevelPres' -n weatherdata.kml

Output:

1002

See: xmlstarlet select --help | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/694397",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
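For completeness, a sketch of how the original awk attempt could be repaired instead (an assumption about the asker's intent, since the accepted answer switches tools): split on the angle brackets rather than on > and , , and read weatherdata.kml rather than report.csv (the question's command reads the output file by mistake):

awk -F '[<>]' '/minSeaLevelPres/ {print $3}' weatherdata.kml

With that field separator, a line like <minSeaLevelPres>1002</minSeaLevelPres> splits so that $3 is the bare value 1002.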
694,796 | I am trying to find out how I can use: grep -i With multiple strings, after using grep on another command. For example:

last | grep -i abc
last | grep -i uyx

I wish to combine the above into one command, but when searching on the internet I can only find references on how to use multiple strings with grep, when grep is used with a file, and not a command. I have tried something like this:

last | grep -i (abc|uyx)

Or

last | grep -i 'abc|uyx'

But that doesn't work. What is the correct syntax to get the results I expect? Thanks in advance. | Many options with grep alone, starting with the standard ones:

grep -i -e abc -e uyx

grep -i 'abc
uyx'

grep -i -E 'abc|uyx'

With some grep implementations, you can also do:

grep -i -P 'abc|uyx' # perl-like regexps, sometimes also with
                     # --perl-regexp or -X perl
grep -i -X 'abc|uyx' # augmented regexps (with ast-open grep) also with
                     # --augmented-regexp
grep -i -K 'abc|uyx' # ksh regexps (with ast-open grep) also with
                     # --ksh-regexp
grep -i 'abc\|uyx'   # with the \| extension to basic regexps supported by
                     # some grep implementations. BREs are the
                     # default but with some grep implementations, you
                     # can make it explicit with -G, --basic-regexp or
                     # -X basic

You can add (...) s around abc|uyx ( \(...\) for BREs), but that's not necessary. The ( s and ) s, like | also need to be quoted for them to be passed literally to grep as they are special characters in the syntax of the shell language. Case insensitive matching can also be enabled as part of the regexp syntax with some grep implementations (not standardly).

grep -P '(?i)abc|uyx'  # wherever -P / --perl-regexp / -X perl is supported
grep -K '~(i)abc|uyx'  # ast-open grep only
grep -E '(?i)abc|uyx'  # ast-open grep only
grep '\(?i\)abc|uyx'   # ast-open grep only which makes it non-POSIX-compliant

Those don't really bring much advantage over the standard -i option. Where it could be more interesting would be for instance if you want abc matching to be case sensitive and uyx not, which you could do with:

grep -P 'abc|(?i)uyx'

Or:

grep -P 'abc|(?i:uyx)'

(and equivalent variants with other regexp syntaxes). The standard equivalent of that would look like:

grep -e abc -e '[uU][yY][xX]'

(bearing in mind that case-insensitive matching is often locale-dependent; for instance, whether uppercase i is I or İ may depend on the locale according to grep -i i ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/694796",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/518746/"
]
} |
694,819 | I have data like that (this is only a small part):

454SOL OW20704 0.317 0.251 5.525
454SOL HW120705 0.388 0.322 5.531
454SOL HW220706 0.229 0.290 5.553
462SOL OW20728 0.130 1.821 5.295
462SOL HW120729 0.120 1.806 5.394
462SOL HW220730 0.044 1.857 5.259
469SOL OW20749 0.461 1.266 5.451
469SOL HW120750 0.411 1.267 5.365
469SOL HW220751 0.398 1.248 5.527
500SOL OW20842 1.754 1.223 5.312
500SOL HW120843 1.845 1.184 5.299
500SOL HW220844 1.762 1.319 5.336
502SOL OW20848 1.592 1.629 5.349
502SOL HW120849 1.619 1.663 5.259
502SOL HW220850 1.671 1.591 5.395
515SOL OW20887 1.587 0.779 5.394
515SOL HW120888 1.495 0.817 5.389
515SOL HW220889 1.647 0.826 5.329
516SOL OW20890 1.013 0.105 5.494
516SOL HW120891 1.019 0.190 5.442
516SOL HW220892 1.045 0.029 5.437
522SOL OW20908 1.728 0.935 5.578
522SOL HW120909 1.682 0.928 5.489
522SOL HW220910 1.666 0.979 5.644

I want to delete three lines, leave the next 9 lines alone, and then again delete 3 lines. In this case, I want to delete lines 10,11,12 and 22,23,24. Expected output:

454SOL OW20704 0.317 0.251 5.525
454SOL HW120705 0.388 0.322 5.531
454SOL HW220706 0.229 0.290 5.553
462SOL OW20728 0.130 1.821 5.295
462SOL HW120729 0.120 1.806 5.394
462SOL HW220730 0.044 1.857 5.259
469SOL OW20749 0.461 1.266 5.451
469SOL HW120750 0.411 1.267 5.365
469SOL HW220751 0.398 1.248 5.527
502SOL OW20848 1.592 1.629 5.349
502SOL HW120849 1.619 1.663 5.259
502SOL HW220850 1.671 1.591 5.395
515SOL OW20887 1.587 0.779 5.394
515SOL HW120888 1.495 0.817 5.389
515SOL HW220889 1.647 0.826 5.329
516SOL OW20890 1.013 0.105 5.494
516SOL HW120891 1.019 0.190 5.442
516SOL HW220892 1.045 0.029 5.437

I tried this:

#!/bin/bash
awk 'NR%10==0 || NR%11==0 || NR%12==0' sol.txt | tee sol_after.txt

and it doesn't work. I need to delete lines on the whole file, so delete 10-12, then 22-24, then 34-36, 46-48, etc | Using any awk in any shell on every Unix box:

$ seq 30 | awk 'NR%12 !~ /^(10|11|0)$/'
1
2
3
4
5
6
7
8
9
13
14
15
16
17
18
19
20
21
25
26
27
28
29
30

or if you prefer:

$ seq 30 | awk 'NR%12 == 10{c=3} !(c&&c--)'
1
2
3
4
5
6
7
8
9
13
14
15
16
17
18
19
20
21
25
26
27
28
29
30

See https://stackoverflow.com/a/17914105/1745001 for what c&&c-- does and more information. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/694819",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/358262/"
]
} |
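A commented restatement of the answer's second one-liner, applied to the asker's file names (sol.txt / sol_after.txt); the behaviour is unchanged:

awk 'NR % 12 == 10 { c = 3 }   # at relative line 10 of each 12, arm a 3-line delete counter
     !(c && c--)               # default action (print) fires only while the counter is idle
    ' sol.txt > sol_after.txt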
694,829 | I want to start editing my /etc/fstab file more comfortably and not rely on random forums anymore. But wherever I go, I see very scarce info about it. I can nowhere find a webpage that, for example, explains all of the options available. So, who owns the fstab file, what program uses it, and where can I find the official documentation for it from its creator? Specifically, I want to understand the difference between none , mem and tmpfs devices (the first field). I know, I can probably google it and eventually find the answer, but as I said, I don't want to do it anymore, I want to go full-geek mode and read from the official resources. EDIT: Quick answer: The difference should be only in the string (the name) and should only matter to systemd that reads the file when mounting filesystems. | The documentation for system files such as fstab is (almost always) on your machine. In this instance man fstab will answer your question - up to a point:

The first field ( fs_spec ). This field describes the block special device or remote filesystem to be mounted. [...] For filesystems with no storage, any string can be used, and will show up in df (1) output, for example. Typical usage is proc for procfs; mem , none , or tmpfs for tmpfs.

You've just mentioned in a comment that you want to create a tmpfs entry. Here's an example of one for /mnt/mytmpfs :

tmpfs /mnt/mytmpfs tmpfs nosuid,nodev,noatime 0 0

Don't forget to create the directory yourself ( mkdir /mnt/mytmpfs ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/694829",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/507861/"
]
} |
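A sketch of putting that entry to use (an addition, not part of the answer above); the size= option is a standard tmpfs mount option that caps the filesystem's size:

sudo mkdir -p /mnt/mytmpfs
sudo mount /mnt/mytmpfs      # picks up the options from the new fstab entry
mount | grep mytmpfs         # verify it is mounted

and a capped variant of the fstab line:

tmpfs /mnt/mytmpfs tmpfs nosuid,nodev,noatime,size=512M 0 0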
694,848 | I am trying to modify the ethernet device name without modifying grub. I have tried to modify the device name, but when I do, the device stops working. Things I have tried: I've tried this

nmcli con edit id "Wired connection 1"
set connection.id testname
save
quit

I've also tried this:

nmcli connection modify ens33 connection.id testname

But neither of those change the device name, which is what I need (so I can access the device with ifconfig or ip addr ). I've also tried this

ifdown ens33
ifconfig ens33 down
ip link set ens33 name testname
mv /etc/sysconfig/network-scripts/ifcfg-ens33 /etc/sysconfig/network-scripts/ifcfg-testname
vi /etc/sysconfig/network-scripts/ifcfg-testname
ifconfig testname up
ifup testname

which appears to work at first glance because I can access the device with ifconfig, but after I bring the interface back up it fails to ping the target device (although it can ping itself). The answer on this page looks promising, but I can't access it: https://access.redhat.com/solutions/108823 I must be missing a step, does anybody have an idea? | The documentation for system files such as fstab is (almost always) on your machine. In this instance man fstab will answer your question - up to a point:

The first field ( fs_spec ). This field describes the block special device or remote filesystem to be mounted. [...] For filesystems with no storage, any string can be used, and will show up in df (1) output, for example. Typical usage is proc for procfs; mem , none , or tmpfs for tmpfs.

You've just mentioned in a comment that you want to create a tmpfs entry. Here's an example of one for /mnt/mytmpfs :

tmpfs /mnt/mytmpfs tmpfs nosuid,nodev,noatime 0 0

Don't forget to create the directory yourself ( mkdir /mnt/mytmpfs ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/694848",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255139/"
]
} |
695,902 | How can I turn the text

St1
number1
1234
St2
number2
456

into the following one?

st1,number1,1234
st2,number2,1234 | With the standard paste command :

$ paste -sd ',,\n\0' file
St1,number1,1234
St2,number2,456

serially pastes the lines of file with , , , , newline, nothing (not a NUL character as one might think) as delimiters in turn. Or:

$ paste -d ',,\0' - - - - < file
St1,number1,1234
St2,number2,456

pastes stdin 4 times with , , , and nothing as delimiters between them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/695902",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/484919/"
]
} |
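An equivalent sketch with awk instead of paste (an addition, not part of the answer above), assuming the records always come in groups of exactly three lines:

awk '{ printf "%s%s", $0, (NR % 3 ? "," : "\n") }' file

Lines 1 and 2 of each group are followed by a comma, line 3 by a newline.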
695,905 | As mentioned in the title, I'm unable to simultaneously record my headset microphone and my desktop audio when using SimpleScreenRecorder in KDE Neon 5.24. In the "Audio Input" section of the recording settings, I selected "PulseAudio" as Backend , whereas for Source I've tried both "Built-in Audio Analog Stereo" as well as "Monitor of Built-in Audio Analog Stereo" (see below): When selecting "Built-in Audio Analog Stereo", the microphone gets recorded but the desktop audio does not, while the situation is exactly the opposite when I select "Monitor of Built-in Audio Analog Stereo" (can record desktop audio but not headset microphone). Does anybody have an idea of how to solve this issue? | With the standard paste command : $ paste -sd ',,\n\0' fileSt1,number1,1234St2,number2,456 s erially paste s the lines of file with , , , , newline , nothing (not a NUL character as one might think) as d elimiters in turn. Or: $ paste -d ',,\0' - - - - < fileSt1,number1,1234St2,number2,456 paste s stdin 4 times with , , , and nothing as d elimiters between them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/695905",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/379221/"
]
} |
695,906 | I know that there is a possibility to use tee to copy the contents of output to a file and still output it to the console. However, I can't seem to find a way to prepare shell scripts (like a fixed template) without using tee for either each command in the script or to execute the script using a pipe to tee . Thus, I have to start calling the script each time with a pipe to tee instead of the script automatically doing this for me. I tried using a modified shebang using a pipe but had no success, and I can't seem to find a way to accomplish this. So instead of calling the script like this:

./myscript.sh |& tee scriptout.txt

I would like to have the same effect just by calling it like this:

./myscript

Of course the script needs to know the filename, which is set in a variable inside the script. How can I accomplish this? | You could wrap the contents of your script in a command group and pipe the group's output to tee :

#!/bin/bash
{
echo "example script"
} | tee -a /logfile.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/695906",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/518817/"
]
} |
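An alternative sketch that avoids wrapping the whole body in a group: redirect the script's own stdout and stderr through tee with process substitution (bash-specific; the logfile variable name is illustrative):

#!/bin/bash
logfile=/logfile.txt
exec > >(tee -a "$logfile") 2>&1
echo "example script"   # everything from here on is both shown and logged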
695,959 | I'm on a Mac computer accessing by ssh a linux machine to run simulations. I'm currently running multiple commands in parallel (I put them in background with &); when they finish I extract the files of interest, then delete all command output and redo it. I wondered if it was possible to check, using the PID, if those tasks are finished in order to automatically extract the wanted files and launch the exact same command one more time. I'm posting a message here to know how to get the PID of a command which is executed through a ssh -c, that is not done by me (as I'm on a remote linux machine). I tried the solutions shown in how-to-get-pid-of-just-started-process in order to get the PID, but neither $! nor ssh -c nor jobs -p give me the correct PID. I'm wondering if it doesn't work because I'm on remote access (the command appearing in htop is a ssh -c ...) or I'm just doing things poorly. Here is my first bash script :

#!/bin/bash
./createFiles.sh $1 # create the needed $1 folders
for i in $(seq 1 $1)
do
 cd $i
 myCommand &
 cd ..
done

When this one is finished I use :

#!/bin/bash
for i in $(seq 1 $1)
do
 cd $i
 cp output/file.txt ../file_$2_$i.txt # $2 is the number of the run
 cd ..
done
./deleteFiles.sh $2 # delete the needed $1 folders to start anew

And then I loop 5 times those two. And I wanted to know if it's possible to loop automatically 5 times and not having me standing in front of my computer. If any of you have any idea, it would be great :) P.S: I hope it was clear, english is not my native language hihi | You are building GNU Parallel :) --transfer --return --cleanup is doing exactly what you are trying to do. Read chapter 8: https://zenodo.org/record/1146014 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/695959",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/518847/"
]
} |
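Independently of GNU Parallel, the waiting part of the question can be handled with the shell's wait builtin; a sketch merging the asker's two scripts into one loop (myCommand, createFiles.sh and deleteFiles.sh are the asker's own names):

#!/bin/bash
for run in 1 2 3 4 5; do
  ./createFiles.sh "$1"
  for i in $(seq 1 "$1"); do
    ( cd "$i" && myCommand ) &   # subshell avoids the cd .. bookkeeping
  done
  wait                           # blocks until every background job has finished
  for i in $(seq 1 "$1"); do
    cp "$i/output/file.txt" "file_${run}_${i}.txt"
  done
  ./deleteFiles.sh "$run"
done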
695,970 | I am trying to yank all of the individual Unix commands out of a large group of text files. Here is what I have so far: In this example I am pulling out all instances of the tx command. The big group of text documents are all sitting over in /PROJECT/DOCS and they are all named whatever .EXT .

#!/bin/bash
rm -f ~/Documents/proc-search.txt
cd /PROJECT/DOCS
for file in *
do
 echo "PROC Name: "$file >> ~/Documents/proc-search.txt
 echo "Description:" >> ~/Documents/proc-search.txt
 awk 'NR==1' $file >> ~/Documents/proc-search.txt
 echo "UNIX Commands:" >> ~/Documents/proc-search.txt
 awk '/tx/{print}' $file >> ~/Documents/proc-search.txt
 echo "########################################" >> ~/Documents/proc-search.txt
done

I opened proc-search.txt and was all excited because it did indeed grab all instances of the tx command. But it also is outputting information for files I don't want because they don't include the tx command. Like in ACPFM.EXT in the example below. Is there a way I can make it exclude files that don't have tx ? This is the output I get, called proc-search.txt. And it looks good, except for the fact that I want to NOT see any report about ACPFM.EXT , or any other .EXTs that don't use the tx command.

PROC Name: 17.EXT
Description:
* NORMPARD (EDIT CONTRL FILE)
UNIX Commands:
# tx @CONTRL <- YAY! This is a result that I want.
########################################
PROC Name: ACPFM.EXT <- I don't want this stanza.
Description:
* ACPFM (Account PARameter File Maintenance)
UNIX Commands:
########################################
PROC Name: ACTDARA.EXT
Description:
*
UNIX Commands:
#tx @SEQFILE <- YAY! This is a result that I want.
########################################
PROC Name: ACTEDIT.EXT
Description:
*
UNIX Commands:
#tx @SEQFILE <- YAY! This is a result that I want.
######################################## | You are building GNU Parallel :) --transfer --return --cleanup is doing exactly what you are trying to do. Read chapter 8: https://zenodo.org/record/1146014 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/695970",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/518858/"
]
} |
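Since the paired answer above doesn't address this question directly, a sketch of the guard the asker appears to want, built from the question's own loop body: test each file for tx first with grep -q, and only write the stanza when it matches:

for file in *; do
  if grep -q 'tx' "$file"; then
    {
      echo "PROC Name: $file"
      echo "Description:"
      awk 'NR==1' "$file"
      echo "UNIX Commands:"
      awk '/tx/{print}' "$file"
      echo "########################################"
    } >> ~/Documents/proc-search.txt
  fi
done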
695,995 | I want to merge lines between ^pattern2 and its "; . Change this:

pattern2
"xxx xxxxxx xxxxxxxx";
pattern2 "xxxx xxxxxxx xxxxxxxxx xxxxxxxxx";
pattern2 "xxxx xxxxxxx xxxxxxxxx xxxxxxxxx
yyyy yyyyyy yy yyyyyyyyyy yyyyyyy";
pattern3
"xxx xxxxxx xxxxxxxx
xxx xxxxxx xxxxxxxx";
pattern2
"xxx xxxxxx xxxxxxxx";

to

pattern2 "xxx xxxxxx xxxxxxxx";
pattern2 "xxxx xxxxxxx xxxxxxxxx xxxxxxxxx";
pattern2 "xxxx xxxxxxx xxxxxxxxx xxxxxxxxx yyyy yyyyyy yy yyyyyyyyyy yyyyyyy";
pattern3
"xxx xxxxxx xxxxxxxx
xxx xxxxxx xxxxxxxx";
pattern2 "xxx xxxxxx xxxxxxxx";

I've used this sed command before

sed -i -e '/^pattern2/!b' -e :a -e 'N;/\;/!ba' -e 's/\n/ /g' input_file

but in this case it gives this output:

pattern2 "xxx xxxxxx xxxxxxxx";
pattern2 "xxxx xxxxxxx xxxxxxxxx xxxxxxxxx"; pattern2 "xxxx xxxxxxx xxxxxxxxx xxxxxxxxx
yyyy yyyyyy yy yyyyyyyyyy yyyyyyy";
pattern3
"xxx xxxxxx xxxxxxxx
xxx xxxxxx xxxxxxxx";
pattern2 "xxx xxxxxx xxxxxxxx";

Thanks | You are building GNU Parallel :) --transfer --return --cleanup is doing exactly what you are trying to do. Read chapter 8: https://zenodo.org/record/1146014 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/695995",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/518376/"
]
} |
696,200 | I want to have a script to automatically move all files into a directory structure that I have in place (it looks like ~/jpg , ~/png , ~/mp3 , ~/zip , and so on). So far, this is almost doing exactly what I want it to:

#!/bin/zsh
echo "Executing Script"
find . -iname '*.jpg' -exec gmv -i --target-directory=$HOME/jpg '{}' +
find . -iname '*.png' -exec gmv -i --target-directory=$HOME/png '{}' +

I have no experience with shell scripting, so this is just something I've managed to cobble together. Is simply issuing consecutive commands on consecutive lines the proper way to script for the shell? I initially tried this with mv and there was error handling involved and it just wasn't elegant. I'm trying to expand on user EvilSoup's answer here. Besides what I've already mentioned, I included the mv -i flag so that I don't overwrite anything that already exists (this part is extremely important), but maybe the -n is better for that, I'm not exactly sure. Regarding find , I want the mv operations to happen only in the current directory , and for some reason, the find seems to be somewhat recursive, although I don't exactly understand how to limit that. I want to run my script in each directory and have it mv only the files that find finds in the current working directory. +1 for any zsh-on-macOS-specific introductory material re: shell scripting. | You're not doing anything wrong. It's totally normal when you begin scripting to have your first script be nothing but a list of commands one after another. But hopefully you will learn to go beyond that. I'm not saying your script is bad, but by contrast, I want to show you what I would have written instead of your script. I'm using conditionals and loops and safety checks, and I've put in a bunch of comments so you can see what the script is doing.

#!/usr/bin/env zsh
# announce that the script has started
echo "Running sorting script."
# the variable "$1" means the first argument to the script, i.e., if
# the script is named "mysorter.zsh" if you ran mysorter.zsh ~/tmp
# then $1 would be "~/tmp"
# I'm assigning that to the variable "argument"
argument="$1"
# if $argument is blank, I want the script to run in the present
# directory; if not, I want to move to that directory first
# [[ -n "$argument" ]] means "$argument is not blank" (not length 0)
# for other conditions like -n, see
# https://zsh.sourceforge.io/Doc/Release/Conditional-Expressions.html
if [[ -n "$argument" ]] ; then
  # [[ -d "$argument" ]] checks if "$argument" is a directory; if it's
  # not, the command is a mistake and the script should quit
  # the "!" means "not"
  if [[ ! -d "$argument" ]] ; then
    # echo prints text; the ">&2" means print to the standard error
    # stream, rather than regular output stream, because this is an
    # error message
    echo "$argument either does not exist or is not a directory" >&2
    # exit quits the script; the number indicates the error code; if
    # you write "exit 0" that means no error; but this is an error
    # so I give it a non-zero code
    exit 1
  fi
  # if we made it here, then "$argument" is indeed a folder, so I
  # move into it
  cd "$argument"
fi
# this is a loop that will be separately processed for all files in
# the active directory
for file in * ; do
  # I indent inside the loop to indicate better where the loop starts
  # and stops
  # first I check if "$file" is a folder/directory; if it is,
  # I don't want to do anything; "continue" means cease this cycle
  # through the loop and move on to the next one
  if [[ -d "$file" ]] ; then
    continue
  fi
  # what I want to do next depends on the extension of "$file";
  # first I will determine what that is
  # the notation "${file##*.}" means cut off everything prior to
  # the final . ;
  # see https://zsh.sourceforge.io/Doc/Release/Expansion.html#Parameter-Expansion
  extension="${file##*.}"
  # I want to treat extensions like JPG and jpg alike, so I will make
  # the extension lowercase; see the same page above
  extension="${(L)extension}"
  # I want to move the file to a folder in $HOME named after the
  # extension, i.e., $HOME/$extension; first I check if it doesn't
  # exist yet
  if [[ ! -d "$HOME/$extension" ]] ; then
    # since it doesn't exist, I will create it
    mkdir "$HOME/$extension"
  fi
  # by default we want the move target to have the same name
  # as the original file
  targetname="$file"
  # but I may need to add a number to it; see below
  num=0
  # now I want to check if there is already a file in there with
  # that name already, and keep changing the target filename
  # while there is
  while [[ -e "$HOME/$extension/$targetname" ]] ; do
    # a file with that name already exists; but I still want
    # to put the file there without overwriting the other one;
    # I will keep checking numbers to add to the name until I
    # find one for a file that doesn't exist yet
    # increase by one
    let num++
    # the new targetname is filename with the extension cut off,
    # plus "-$num" plus the extension
    # e.g. "originalfilename-1.zip"
    targetname="${file%.*}-${num}.${extension}"
  done
  # we have now found a safe file name to use for the target,
  # so we can move the file
  echo "Moving file $file to $HOME/$extension/$targetname"
  # here we try to move the file, but if it fails, we
  # print an error and quit the script with an error
  if ! mv "$file" "$HOME/$extension/$targetname" ; then
    echo "Move command failed!" >&2
    exit 1
  fi
  # we're now done with this file and can move on to the next one
done
# done indicates the end of the loop
#
# if we got here everything was successful and quit with exit code 0
echo "All files successfully sorted."
exit 0

This is just to give you a sense of what is possible. Don't be worried about your scripts being perfect right away. You'll get better at it when you get more practice. If the scripts work for you, they work for you. Obviously I put in some links to https://zsh.sourceforge.io which is a good resource for learning more about zsh. I won't put in anything mac specific, because you should try to learn not to be tied into any one platform, especially if you might foresee yourself moving away from proprietary platforms towards those that are open source and truly embrace the UNIX philosophy. (I'm doing my best not to be preachy about it. Is it working?) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/696200",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/484911/"
]
} |
696,224 | I can use the following command to display the certificate in a PEM file:

openssl x509 -in cert.pem -noout -text

But it will only display the information of the first certificate. A PEM file may also contain a certificate chain. How can I display all contained certificates? | It seems the PEM format is not handled very well when there is more than one certificate. Based on this answer :

openssl crl2pkcs7 -nocrl -certfile cert.pem | openssl pkcs7 -print_certs -text -noout

It first converts to PKCS#7 and then displays it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/696224",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/402870/"
]
} |
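An alternative sketch that stays with openssl x509 (an addition, not part of the answer above): split the bundle into one file per certificate with awk, then print each one. It assumes the bundle starts with a BEGIN CERTIFICATE line; the cert-NNN.pem names are illustrative:

awk '/-----BEGIN CERTIFICATE-----/ { n++; out = sprintf("cert-%03d.pem", n) } { print > out }' cert.pem
for f in cert-*.pem; do openssl x509 -in "$f" -noout -subject -issuer; done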
696,227 | I am using Ansible letsencrypt module to generate SSL certificates. The playbook is working fine, the certificate is generated and applied. Now I want that this task is only played when certain conditions are matched, otherwise skipped this task. Condition: If /etc/letsencrypt/certs directory is emtpy (when running first time) OR If 30 days are remaining in certificate expiry date OR If certificate is expired Could anyone please confirm the commands to achieve this. Example: - include_tasks: tasks/letsencrypt-issue-jetty.yml when: > | Seems like PEM format is not handled very well with more than one certificate. Based on this answer : openssl crl2pkcs7 -nocrl -certfile cert.pem | openssl pkcs7 -print_certs -text -noout it first convert to pkcs7 and then display it | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/696227",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/519149/"
]
} |
696,303 | I'm typing that command and my cursor is at the end of "supprimer des warnings":

$ git commit -m "Nettoyage :
> - Suppression de sources ou projets inutiles
> - Corrections mineures sur les sources pour supprimer des warnings"

It's at that moment I notice that I should have written "Nettoyage (deuxième partie)" at the beginning of my commit message. ...but how, being at the last line of my command, may I go up to the beginning of it, to edit it, on its first line? | Unfortunately, command entry in Bash is line-oriented, and you can’t go back to a previous line while entering a multi-line command. What you can do however is start an editor with the full command entered so far. To do so, in Emacs mode (the default), press Ctrl x Ctrl e ; in vi mode, press Esc v . This will open your editor with everything you’ve entered so far; fix what needs fixing, complete the command , exit the editor and Bash will run the edited command. In this particular case you could use an editor for the entire git commit message: omit the -m option and git will start an editor for you. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/696303",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/350549/"
]
} |
696,328 | I just finished updating the linux kernel via APT and restarted my machine. Then I checked for more updates and it said this:

The following packages were automatically installed and are no longer required:
  linux-headers-5.4.0-100 linux-headers-5.4.0-100-generic
  linux-image-5.4.0-100-generic linux-modules-5.4.0-100-generic
  linux-modules-extra-5.4.0-100-generic
Use 'sudo apt autoremove' to remove them.

Should I use autoremove or not? | Unfortunately, command entry in Bash is line-oriented, and you can’t go back to a previous line while entering a multi-line command. What you can do however is start an editor with the full command entered so far. To do so, in Emacs mode (the default), press Ctrl x Ctrl e ; in vi mode, press Esc v . This will open your editor with everything you’ve entered so far; fix what needs fixing, complete the command , exit the editor and Bash will run the edited command. In this particular case you could use an editor for the entire git commit message: omit the -m option and git will start an editor for you. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/696328",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/519245/"
]
} |
696,335 | I am relatively new to Linux. I have installed Endeavour OS on my laptop (an HP Victus 16), and noticed underwhelming performance on apps like waydroid . It seems like Linux is only detecting the iGPU in my system. When I run xrandr --listproviders it gives me the output:

Providers: number : 0

Even going to Settings > About shows the graphics card as "AMD Renoir" only. Running lspci shows the dGPU connected as:

Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 14 [Radeon RX 5500/5500M / Pro 5500M] (rev c1)

but it seems like it doesn't work anywhere else? Configuration of my laptop if it matters:

AMD Ryzen 5600h
16 GB RAM
AMD RX 5500M graphics

And the OS details:

Endeavour OS Linux x86_64
Kernel: 5.17.0-247-tkg-pds | Unfortunately, command entry in Bash is line-oriented, and you can’t go back to a previous line while entering a multi-line command. What you can do however is start an editor with the full command entered so far. To do so, in Emacs mode (the default), press Ctrl x Ctrl e ; in vi mode, press Esc v . This will open your editor with everything you’ve entered so far; fix what needs fixing, complete the command , exit the editor and Bash will run the edited command. In this particular case you could use an editor for the entire git commit message: omit the -m option and git will start an editor for you. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/696335",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/519248/"
]
} |
696,345 | Hello, I have a file called users. In that file I have a list of users, for example:

user1
user2
user3

Now I have another file called searches where there is a specific string called owner = user, for example:

owner = user1
random text
random text
owner = user15
random text
random text
owner = user2

So is it possible to find all the users based on the users file and rename those users to [email protected]? For example:

owner = [email protected]
random text
random text
owner = user15
random text
random text
owner = [email protected]

I got some bits and pieces working using the ack command and the cat command, but I am new to programming so I can't get a proper output. What I figured out is below, but it does not really do what I need. Any help is highly appreciated.

cat users | xargs -i sed 's/{}/moo/' searches | Unfortunately, command entry in Bash is line-oriented, and you can’t go back to a previous line while entering a multi-line command. What you can do however is start an editor with the full command entered so far. To do so, in Emacs mode (the default), press Ctrl x Ctrl e ; in vi mode, press Esc v . This will open your editor with everything you’ve entered so far; fix what needs fixing, complete the command , exit the editor and Bash will run the edited command. In this particular case you could use an editor for the entire git commit message: omit the -m option and git will start an editor for you. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/696345",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/469475/"
]
} |
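Since the paired answer above doesn't address this question, a sketch of one way to do it with awk. The domain was redacted in the question, so @email.com below is a placeholder; matching the whole third field means user1 never matches user15. Note that rewriting a field makes awk normalize the line's whitespace:

awk 'NR==FNR { users[$0]; next }
     $1 == "owner" && $3 in users { $3 = $3 "@email.com" }
     1' users searches > searches.new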
696,353 | Inspired by this answer : When I run type -p in the command prompt, it reliably tells me the path if the command exists:

pi@raspberrypi:~ $ type -p less
/usr/bin/less
pi@raspberrypi:~ $ type -p asdf
pi@raspberrypi:~ $

However, when used in a script it is as if the -p parameter is interpreted as a command of itself. It seems the type command ignores it as its option, because there is always some rogue text of -p not found in the result. It breaks the rest of the script:

#!/usr/bin/sh
main() {
  for mycommand in $1; do
    echo Checking $mycommand
    loc="$(type -p "$mycommand")"
    echo $loc
    if ! [ -f "$loc" ]; then
      echo I think I am missing $mycommand
    fi
  done
}
main "less asdf"

Output of the script:

Checking less
-p: not found
less is /usr/bin/less
I think I am missing less
Checking asdf
-p: not found
asdf: not found
I think I am missing asdf

Can you please help me out here? Is there something about the shell on a raspberry pi that is causing this? | -p is not a standard option for the type command¹. The type utility itself, though standard is optional in POSIX (only required on systems implementing XSI, though that is required to obtain UNIX compliance certification). Instead of type , you can use command -v to check for availability of a command. No need to check its output, command will tell you via its exit status if it succeeded to find the command or not (regardless of whether it's an external command found in $PATH , a builtin or a function):

#!/usr/bin/sh -
main() {
  for mycommand
  do
    printf >&2 '%s\n' "Checking $mycommand"
    if ! command -v -- "$mycommand" > /dev/null 2>&1; then
      printf >&2 '%s\n' "I think I am missing $mycommand"
    fi
  done
}
main less asdf

command is a mandatory POSIX utility. The -v option to command used to be optional, but it's not any longer in the latest version of the POSIX specification. Also remember that echo can't be used to display arbitrary data , -- must be used to separate options and non-options, errors (and in general diagnostics including progress/advisory information) should preferably go to stderr.

¹ it is however an option supported by the type builtin of the bash , mksh , ksh93 , busybox ash , zsh and yash shells². In mksh and yash , it's meant to search the command in the standard $PATH (as returned by getconf PATH for instance), in zsh , it's for searching the command only in $PATH , not in aliases/builtins/functions. Same in ksh93 , except that it also only prints the path of the found command. In bash (where it used to be -path ; still supported though no longer documented), type -p succeeds but prints nothing if the command is a builtin, and prints the path only like in ksh93 when it's an external command. In busybox ash , it behaves similarly to command -v | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/696353",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/345193/"
]
} |
696,367 | Two files: data1

Name       |formula           |no. |dose|days|cost  |msg|em|notes
Fname-Lname|BXXXT+GG          |8262|4   |14  |57.78 |   |  |sq
Fname-Lname|SJXXT+GG          |8263|4¾  |14  |105.15|   |  |IB
Fname-Lname|FJDHT+BH,LG,CQC,ZX|8264|5¾  |14  |46.20 |   |  |IB

data2

10/12/2020|more-data-3456|105.15
10/12/2020|more-data-3456|95.10
11/12/2020|more.data-3456|30.30
14/12/2020|more-data-3456|45.55

I am using the code snippet

awk 'BEGIN {FS = "|" } NR==FNR{a[$6];next} $3 in a {print $0}' data1 data2

to match where a value in $6 of file data1 also occurs in $3 of file data2. Where there is a match, print out the whole record ($0) containing the match from file data2. I am expecting:

10/12/2020|more-data-3456|105.15

But I am only getting an output of a blank line. I removed the field separators "|" using a " " as replacement; the command then worked exactly as expected. However, I really want to preserve the field separator as | if at all possible. I would like to understand why the addition of a BEGIN block has caused this. Has it caused awk to load an empty array in place of taking data from $6? My awk level is just above beginner. Edit: I have also used the -F parameter with the same result, an output of a blank line. I am using gawk . | -p is not a standard option for the type command¹. The type utility itself, though standard is optional in POSIX (only required on systems implementing XSI, though that is required to obtain UNIX compliance certification). Instead of type , you can use command -v to check for availability of a command. No need to check its output, command will tell you via its exit status if it succeeded to find the command or not (regardless of whether it's an external command found in $PATH , a builtin or a function):

#!/usr/bin/sh -
main() {
  for mycommand
  do
    printf >&2 '%s\n' "Checking $mycommand"
    if ! command -v -- "$mycommand" > /dev/null 2>&1; then
      printf >&2 '%s\n' "I think I am missing $mycommand"
    fi
  done
}
main less asdf

command is a mandatory POSIX utility. The -v option to command used to be optional, but it's not any longer in the latest version of the POSIX specification. Also remember that echo can't be used to display arbitrary data , -- must be used to separate options and non-options, errors (and in general diagnostics including progress/advisory information) should preferably go to stderr.

¹ it is however an option supported by the type builtin of the bash , mksh , ksh93 , busybox ash , zsh and yash shells². In mksh and yash , it's meant to search the command in the standard $PATH (as returned by getconf PATH for instance), in zsh , it's for searching the command only in $PATH , not in aliases/builtins/functions. Same in ksh93 , except that it also only prints the path of the found command. In bash (where it used to be -path ; still supported though no longer documented), type -p succeeds but prints nothing if the command is a builtin, and prints the path only like in ksh93 when it's an external command. In busybox ash , it behaves similarly to command -v
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/696367",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/512523/"
]
} |