source_id | question | response | metadata
---|---|---|---|
654,885 | I want strict mode in my scripts. I would also appreciate portability. set -o pipefail seems compulsory. Yet shellcheck (a static linter) is unhappy that "In POSIX sh, set option pipefail is undefined". Is it correct? If so, is this a bash solely feature or is it rather prolific? | The pipefail shell option is specific to a number of shells that shellcheck claims to support 1 . You can use this shell option and be portable, if by "portability" you assume that any target system has a shell that supports it (and any other constructs that you may be using). This is the same type of "portability" that you get with any other specific scripting language. The shellcheck linter will complain if it finds set -o pipefail in a shell script that is a sh script, since it's currently not supported by POSIX sh . To ensure that your script is a script interpreted by the bash shell (or any specific shell that you are coding for), the script should have a #! -line pointing to the correct shell interpreter, e.g., #!/bin/bash or possibly #!/usr/bin/env bash or something similar. With a proper #! -line that additionally indicates that the script will be interpreted by a particular shell that is not sh , the shellcheck linter will not complain about you setting the pipefail shell option in your script. If you don't use a #! -line in your scripts, then you should consider doing so (or always run your scripts with an explicit interpreter on the command line). Meanwhile, the shellcheck command line tool can be told to switch to mode using its -s (or --shell= ) option: shellcheck --shell=bash myscript 1 I suspect that shellcheck has a "POSIX sh mode" and an "other mode" to support bash , dash , and ksh ( shellcheck does not claim to support zsh ). The pipefail shell option is documented to work with bash , and ksh . The zsh shell has a PIPE_FAIL shell option that can be set in the same way. The dash shell does not support the option, but if the #! -line mentions dash , shellcheck won't complain about pipefail . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654885",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20506/"
]
} |
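A minimal sketch of the strict-mode header the answer above describes. The file name deploy.sh and the example pipeline are illustrative (not from the original post); the #!-line and the shellcheck invocation are the ones the answer recommends.

```sh
#!/usr/bin/env bash
# Declare the interpreter explicitly so shellcheck lints this as a bash
# script and does not flag pipefail as undefined in POSIX sh.
set -o errexit -o nounset -o pipefail

# Without pipefail the pipeline's status would be that of wc (success);
# with it, the failing grep makes the whole pipeline fail.
grep pattern /nonexistent/file | wc -l || echo "pipeline failed with status $?"
```

Linting can then be done either by letting shellcheck read the #!-line or by forcing the dialect: shellcheck --shell=bash deploy.sh.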
654,889 | what I'im trying to do is make a simple shell script /bin/bash to count files in a given directory and then add the file count in the directory name. I can imagine how to count files (with find) and even to store file count in a variable, but I cannot figure it out how to move into dir, then goes back, get the dir path and rename.. I usually use Platypus to convert a shell script into a OSX app, enabling drag&drop. So the usage should be: drag a folder into this appthe app count files into this folderthe app append file count into folder name Any help will be appreciate ThanksMatt | The pipefail shell option is specific to a number of shells that shellcheck claims to support 1 . You can use this shell option and be portable, if by "portability" you assume that any target system has a shell that supports it (and any other constructs that you may be using). This is the same type of "portability" that you get with any other specific scripting language. The shellcheck linter will complain if it finds set -o pipefail in a shell script that is a sh script, since it's currently not supported by POSIX sh . To ensure that your script is a script interpreted by the bash shell (or any specific shell that you are coding for), the script should have a #! -line pointing to the correct shell interpreter, e.g., #!/bin/bash or possibly #!/usr/bin/env bash or something similar. With a proper #! -line that additionally indicates that the script will be interpreted by a particular shell that is not sh , the shellcheck linter will not complain about you setting the pipefail shell option in your script. If you don't use a #! -line in your scripts, then you should consider doing so (or always run your scripts with an explicit interpreter on the command line). Meanwhile, the shellcheck command line tool can be told to switch to mode using its -s (or --shell= ) option: shellcheck --shell=bash myscript 1 I suspect that shellcheck has a "POSIX sh mode" and an "other mode" to support bash , dash , and ksh ( shellcheck does not claim to support zsh ). The pipefail shell option is documented to work with bash , and ksh . The zsh shell has a PIPE_FAIL shell option that can be set in the same way. The dash shell does not support the option, but if the #! -line mentions dash , shellcheck won't complain about pipefail . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654889",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/478107/"
]
} |
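The question above (count the files in a dropped folder and append the count to the folder name) can be handled with a short script. The following is only a sketch: it assumes the directory path arrives as the first argument, the way a Platypus drag-and-drop app passes it, and the "(N files)" suffix format is an assumption.

```sh
#!/bin/bash
# Append the number of regular files in a directory to the directory's name.
# "$1" is the dropped folder; the "(N files)" suffix format is an assumption.
dir=${1%/}                               # strip a trailing slash, if any
count=$(find "$dir" -type f | wc -l)     # recursive count of regular files
mv -- "$dir" "${dir} (${count} files)"   # e.g. "Photos" -> "Photos (42 files)"
```

Names containing newlines would be miscounted by wc -l; for typical macOS folder names this is not an issue.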
654,975 | I have text files which I'd like to divide up into different files based on arbitrary "tags" I put at the start of various lines. Example text file: I CELEBRATE myself, and sing myself, And what I assume you shall assume, For every atom belonging to me as good belongs to you.#here I loafe and invite my soul, #here I lean and loafe at my ease observing a spear of summer grass.#there My tongue, every atom of my blood, form'd from this soil, this air,#there Born here of parents born here from parents the same, and their parents the same, #here I, now thirty-seven years old in perfect health begin, #here Hoping to cease not till death. In this example I'd like to remove every line beginning with #here and append it onto a file called here.txt , every line beginning with #there into a file called there.txt , and leave every untagged line in the original file. (Ideally removing the #here #there tags in the process.) I think this solution using awk might be helpful, but I am such a Unix noob that I don't know how to adapt this to my problem: How to split a file by using keyword boundaries Any suggestions on how to proceed? PS: I'm using the command line on OS X. | Your case is simpler than the linked case - you only need to look at each line (or "record" in awk parlance) and decide where to send it. So: awk '/^#here/{print > "here.txt"; next} /^#there/{print > "there.txt"; next} {print}' input.txt The remaining lines will be printed to standard output; portably, you can redirect that to a third file ( rest.txt say) then rename it to the name of the original. If you have GNU awk, you can use the inplace module to modify the original file directly: gawk -i inplace '/^#here/{print > "here.txt"; next} /^#there/{print > "there.txt"; next} {print}' input.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654975",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/478158/"
]
} |
655,017 | I have a file in the form of: XXX XXXX XXX-6VwvOkZvzuI.description How can I get just the file name XXX XXXX XXX ?I've tried: for file in $(ls .d*)do fname="${file%*-}" ext="${filename%.*}"done | Do not parse the output of ls (see e.g. Why *not* parse `ls` (and what to do instead)? ). The issue that you're running into is that your loop variable, file , will take the values of your filenames after they have all been concatenated into a single long string and then split on whitespaces (and after any split-up word that happens to be a globbing pattern has been expanded). You'll get three iterations of your loop for the filename XXX XXXX XXX-6VwvOkZvzuI.description , for example, one each for the values XXX , XXXX , and XXX-6VwvOkZvzuI.description . To iterate over all files that have a dash in their names and then a filename suffix of .description use for name in *-*.description; do ...; done To pick out the part before the - in $name , use a standard parameter expansion that removes everything after the first - in the string: prefix=${name%%-*} The difference between using %% and % here is that with %% the longest matching tail string is removed. This matters if there happens to be multiple - characters in any name. Your loop then becomes for name in *-*.description; do prefix=${name%%-*}done The filename suffix is already known ( .description ), but you can can get the bit of the filename from the - to the suffix using infix=${name#"$prefix"}infix=${infix%.description} Finally, with a script like #!/bin/shsuffix=.descriptionfor name in *-*"$suffix"; do prefix=${name%%-*} infix=${name#"$prefix"} infix=${infix%.description} printf 'prefix="%s", infix="%s", suffix="%s"\n' \ "$prefix" "$infix" "$suffix"done you'll get $ lsXXX XXXX XXX-6VwvOkZvzuI.description XXX XXXX XXX-6VwvOkZvzuK.descriptionXXX XXXX XXX-6VwvOkZvzuJ.description script $ ./scriptprefix="XXX XXXX XXX", infix="-6VwvOkZvzuI", suffix=".description"prefix="XXX XXXX XXX", infix="-6VwvOkZvzuJ", suffix=".description"prefix="XXX XXXX XXX", infix="-6VwvOkZvzuK", suffix=".description" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/655017",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/475899/"
]
} |
655,072 | I have a pdf document with over 200 duplicate pages among the total 900 of the document. When there is a duplicate, it appears immediately after the original. Maybe with pdftk the job can be done, but I need some way to find out the duplicates... | comparepdf is a command line tool for comparing PDFs. The exit code is 0 if the files are identical and non-zero otherwise. You may compare by text content or visually (interesting for e.g. scans): comparepdf 1.pdf 2.pdfcomparepdf -ca 1.pdf 2.pdf #compare appearance instead of text So what you could do is explode the PDF, then compare pairwise and delete accordingly: #!/bin/bash#explode pdfpdftk original.pdf burst#compare 900 pages pairwisefor (( i=1 ; i<=899 ; i++ )) ; do #pdftk's naming is pg_0001.pdf, pg_0002.pdf etc. pdf1=pg_$(printf 04d $i).pdf pdf2=pg_$(printf 04d $((i+1))).pdf #Remove first file if match. Loop not forwarded in case of three or more consecutive identical pages if comparepdf $pdf1 $pdf2 ; then rm $pdf1 fidone#renunite in sorted manner:pdftk $(find -name 'pg_*.pdf' | sort ) cat output new.pdf EDIT: Following @notautogenerated's remark, one might be bettor off selecting pages from the orginal file instead of unifying single-page PDFs. After the pairwise comparison is done, one could do the following: pdftk original.pdf cat $(find -name 'pg_*.pdf' | awk -F '[._]' '{printf "%d\n",$3}' | sort -n ) output new.pdf | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/655072",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/416168/"
]
} |
655,096 | Could someone please give me the name of the default ZSH theme that Kali uses? Also, if you could please provide a link to it. | Kali Linux does not use a separate theme file for its zsh customizations. So you cannot download the Kali Linux zsh theme, drop it in themes/ , and set ZSH_THEME to its name as you usually can in Oh My Zsh . Instead, the customizations are made to .zshrc directly. You can inspect .zshrc which is included into Kali Linux and choose what customizations you want. Be careful to keep a backup of your existing ~/.zshrc . I also used zsh -d -f; source /path/to/file as suggested and explained in another answer to test the configurations without replacing the existing configuration file at all. Here is some further reading: Kali Linux zsh for macos | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/655096",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/478281/"
]
} |
655,164 | Does anyone know how I can "split" the lines below (it is just an example):
mercedes|$40000|black|$42000|white|$41000|red
audi|$31000|blue|$10000|white
mercedes|$5000|blue
The output I expect is:
mercedes|$40000|black
mercedes|$42000|white
mercedes|$41000|red
audi|$31000|blue
audi|$10000|white
mercedes|$5000|blue
Thanks | A simple awk script to output pairs of fields from each line from the 2nd field on, prefixing each outputted pair with the 1st field on the line.
$ awk -F '|' 'BEGIN { OFS=FS } { for (i = 2; i+1 <= NF; i += 2) print $1, $i, $(i+1) }' file
mercedes|$40000|black
mercedes|$42000|white
mercedes|$41000|red
audi|$31000|blue
audi|$10000|white
mercedes|$5000|blue
This assumes that the input conforms to expectations, which is that the final data should be organized into three columns. This means that the input is expected to strictly follow title|pair 1a|pair 1b|pair 2a|pair 2b|...|pair Na|pair Nb | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/655164",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/478364/"
]
} |
655,246 | Here is a sample text.(It's name is 20210622_090009) nvmeSerial Endpoint nvmeSpeed nvmeWidth================================================================================nvme0n1 c7:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme1n1 c8:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme2n1 c9:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme3n1 ca:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme4n1 85:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme5n1 86:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme6n1 87:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme7n1 88:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme8n1 41:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme9n1 42:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme10n1 43:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme11n1 44:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme12n1 45:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme13n1 46:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme14n1 47:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme15n1 48:00.0 Width x2 (downgraded)nvme16n1 01:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme17n1 02:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme18n1 03:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme19n1 04:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme20n1 05:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme21n1 06:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme22n1 07:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme23n1 08:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme24n1 09:00.0 Speed 8GT/s (ok) Width x2 (downgraded)nvme25n1 0a:00.0 Speed 32GT/s (ok) Width x2 (downgraded) Here is the script: #! /bin/bashIFS_old="$IFS"IFS=$'\n'for line in $(cat 20210622_090009.txt | tail -n 26 | cut -f 5 | awk '{print $2}' )do echo "$line" doneIFS="$IFS_old"exit 0 The script output is 8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s32GT/s I want to grab the nvmeSpeed(Ex:8GT/s) whether the speed has the number or not. As you see, nvmeSpeed in nvme15n1 is a whitespace. And the output doesn't show up. My question is: How to awk a whitespace to become a for loop input ? | awk alone can do all of this. You don't need a shell script wrapper, you certainly don't need anything as baroque as cat 20210622_090009.txt | tail -n 26 | cut -f 5 | awk '{print $2}' ), and you should avoid using a shell while-read loop (or a for loop over the output of a language like awk or perl) wherever possible (see Why is using a shell loop to process text considered bad practice? for reasons why). Rule of thumb: if you ever find yourself thinking "I want to iterate over awk's output" you should change your thinking to "I should almost certainly do this with just awk", or a shell wrapper that sets up input and output redirection for awk to do the bulk processing work. Same for perl and most other languages. Any other language will do the processing work better than shell, and you're only going to make your job harder by trying to do it with shell. Anyway, the following script prints column 4 if there are exactly 8 columns ( NF == 8 ). If there are less than 8 columns ( NF < 8 ), it prints a blank line. In both cases, it ignores the two headers lines at the beginning of every input file (it can handle one or more filename arguments. FNR < 3 {next} . In awk, NR is the total number of lines read while FNR is the line number of the current file). 
$ awk 'FNR < 3 {next}; NF == 8 {print $4}; NF < 8 {print ""}' 20210622_090009.txt 8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s8GT/s32GT/s | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/655246",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/461875/"
]
} |
655,262 | I want to write a data parser script. The example data is: name: John Doedescription: AMemail: [email protected]: 999999999999999status: activename: Jane Doedescription: HRemail: [email protected]: 8888888888status: active...name: Foo Bardescription: XXemail: [email protected]: inactive The key-value pairs are always in the same order ( name , description , email , lastLogon , status ), but some of the fields may be missing. It is also not guaranteed that the first record is complete. The expected output is delimiter-separated (e.g. CSV) values: John Doe,AM,[email protected],999999999999999,activeJane Doe,HR,[email protected],8888888888,active...Foo Bar,XX,[email protected],n/a,inactive My solution is by using a while read loop. The main part of my script: while read line; do grep -q '^name:' <<< "$line" && status='' case "${line,,}" in name*) # capture value ;; desc*) # capture value ;; email*) # capture value ;; last*) # capture value ;; status*) # capture value ;; esac if test -n "$status"; then printf '%s,%s,%s,%s,%s\n' "${name:-n\a}" ... etc ... unset name ... etc ... fidone < input.txt This works. But obviously, very slow. The execution time with 703 lines of data: real 0m37.195suser 0m2.844ssys 0m22.984s I'm thinking about the awk approach but I'm not experienced enough using it. | The following awk program should work. Ideally, you would save it to a separate file (e.g. squash_to_csv.awk ): #!/bin/awk -fBEGIN { FS=": *" OFS="," recfields=split("name,description,email,lastLogon,status",fields,",")}function printrec(record) { for (i=1; i<=recfields; i++) { if (record[i]=="") record[i]="n/a" printf "%s%s",record[i],i==recfields?ORS:OFS; record[i]=""; }} $1=="name" && (FNR>1) { printrec(current) }{ for (i=1; i<=recfields;i++) { if (fields[i]==$1) { current[i]=$2 break } }}END { printrec(current)} You can then call this as awk -f squash_to_csv.awk input.datJohn Doe,AM,[email protected],999999999999999,activeJane Doe,HR,[email protected],8888888888,activeFoo Bar,XX,[email protected],n/a,inactive This will perform some initialization in the BEGIN block: set the input field separator to "a : followed by zero or more spaces" set the output field separator to , initialize an array of field names (we take a static approach and hard-code the list) If the name field is encountered, it will check if it is on the first line of the file, and if not , print the previously collected data. It will then start collecting the next record in the array current , beginning with the name field just encountered. For all other lines (I assume for simplicity that there are no empty or comment lines - but then again, this program should just silently ignore those), the program checks which of the fields is mentioned on the line, and stores the value at the appropriate position in the current array used for the current record. The function printrec takes such an array as parameter and performs the actual output. Missing values are substituted with n/a (or any other string you may want to use). After printing, the fields are cleared so that the array is ready for the next bunch of data. At the end, the last record is also printed. Note If the "value" part of the file can also include : -space-combinations, you can harden the program by replacing current[i]=$2 by sub(/^[^:]*: */,"")current[i]=$0 which will set the value to "everything after the first : -space combination" on the line, by removing ( sub ) everything up to including the first : -space-combination on the line. 
If any of the fields can contain the output separator character (in your example , ), you will have to take appropriate measures to either escape that character or quote the output, depending on the standard you want to adhere to. As you correctly noted, shell loops are very much discouraged as tools for text processing. If you are interested in reading more, you may want to look at this Q&A . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/655262",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245871/"
]
} |
655,470 | I am trying to calculate the used bandwidth on the Ethernet interface (which is 1000 Mbit/s). To test my script, I am using the iperf tool to generate huge bandwidths. The problem I am facing is when eth0_rx1 and eth0_rx2 gets the values which are greater than maximum 32-bit value. I am getting the difference as 0. Somehow printf 'eth0 Download rate: %s B/s\n' "$((eth0_rx2-eth0_rx1))" is giving the correct value, but when tried with eth0_diff=expr $eth0_rx2 - $eth0_rx1 I am getting the value 0. Is there a way to handle if rx_bytes or tx_bytes are more than 32 bits? I am not sure this is an elegant way of calculating used bandwidth. If not, please suggest other alternate way. Sample output: eth0_rx1 = 2134947002 \eth0_rx2= 2159752166 \eth0 Download rate: 24805164 B/s \eth0_diff = 12536645 \eth0_rx_kB = 12242 \eth0_rx_kB_100 = 1224200 \eth0_rx_kB_BW = 9eth0_rx1 = 2159752166 \eth0_rx2= 2184557522 \eth0 Download rate: 24805356 B/s \eth0_diff = 0 \eth0_rx_kB = 0 \eth0_rx_kB_100 = 0 \eth0_rx_kB_BW = 0 Script used: #!/bin/sheth0_rx1=$(cat /sys/class/net/eth0/statistics/rx_bytes)while sleep 1; do eth0_rx2=$(cat /sys/class/net/eth0/statistics/rx_bytes) echo "eth0_rx1 = $eth0_rx1" echo "eth0_rx2= $eth0_rx2" printf 'eth0 Download rate: %s B/s\n' "$((eth0_rx2-eth0_rx1))" eth0_diff=`expr $eth0_rx2 - $eth0_rx1` echo "eth0_diff = $eth0_diff" #convert bytes to Kilo Bytes eth0_rx_kB=`expr $eth0_diff / 1024` echo "eth0_rx_kB = $eth0_rx_kB" #bandwidth calculation eth0_rx_kB=`expr $eth0_rx_kB \* 100` echo "eth0_rx_kB_100 = $eth0_rx_kB" #125000 = 1000 Mbit/s eth0_rx_kB=`expr $eth0_rx_kB / 125000` echo "eth0_rx_kB_BW = $eth0_rx_kB" eth0_rx1=$eth0_rx2 eth2_rx1=$eth2_rx2done | Given that printf 'eth0 Download rate: %s B/s\n' "$((eth0_rx2-eth0_rx1))" is giving you the correct value, as long as integer arithmetic is good enough, you’ve got your answer: $((eth0_rx2-eth0_rx1)) , i.e. shell arithmetic . Many shells, notably Bash, use 64-bit integers , even on 32-bit platforms. Thus: eth0_diff=$((eth0_rx2 - eth0_rx1))... eth0_rx_kB=$((eth0_diff / 1024))... eth0_rx_kB=$((eth0_rx_kB * 100))... eth0_rx_kB=$((eth0_rx_kB / 125000)) GNU expr can support arbitrary-precision arithmetic, if it is built with the GNU MP library . In other cases it uses native integers, and apparently on your system (assuming you’re using GNU expr ) those are 32 bits in size. Other implementations probably have similar limits. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/655470",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/450545/"
]
} |
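A small demonstration of the point made in the answer above, using counter values above 2^31 like the rx_bytes readings in the question; whether expr overflows depends on how it was built, so the second line is only illustrative.

```sh
#!/bin/bash
# Byte counters larger than 2^31, as read from rx_bytes in the question.
a=2159752166
b=2184557522

echo "shell arithmetic: $(( b - a ))"         # bash uses 64-bit integers: 24805356
echo "expr:             $(expr "$b" - "$a")"  # may be wrong (or 0) if expr only has 32-bit ints
```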
655,551 | Using sed, how can I replace the Nth to last occurrence of a character on each line of a given file? In this case I want to replace the 3rd to last ; with , input 1;2;3;4;5;6;7;8;910;20;30;40;50;60;70;80;90100;200;300;400;500;600;700;800;900 expected output 1;2;3;4;5;6,7;8;910;20;30;40;50;60,70;80;90100;200;300;400;500;600,700;800;900 I know I could replace the 6th occurrence like this sed 's/;/,/6' input_file.csv > output_file.csv Or the last one sed -r 's/(.*);/\1,/' input_file.csv > output_file.csv But in my particular case and because of some nuances, I need to start from the end. I've tried something like sed -r 's/(.*);/\1,/3' input_file.csv > output_file.csv | You could explicitly capture two more delimited fields: $ sed -r 's/(.*);([^;]*;[^;]*;)/\1,\2/' input_file.csv1;2;3;4;5;6,7;8;910;20;30;40;50;60,70;80;90100;200;300;400;500;600,700;800;900 or more programatically $ sed -r 's/(.*);(([^;]*;){2})/\1,\2/' input_file.csv1;2;3;4;5;6,7;8;910;20;30;40;50;60,70;80;90100;200;300;400;500;600,700;800;900 where the number in the quantifier {n-1} replaces the nth from last. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/655551",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/340265/"
]
} |
655,631 | I'm working remotely on a SLES11 machine (woe is me). On this machine, I'm using git, and specifically, git diff , which passes its results to less with some of colorization. Now, for some reason, instead of seeing color, I'm seeing lines which look like: ESC[1mdiff --git a/path/to/file.h b/path/to/file.hESC[mESC[1mindex 1ab153f..0491db9 100644ESC[m etc. I know the terminal supports color (ls results are colorized); I have TERM=xterm and COLORTERM=1 in my environment. How can I get the colorized diff to display properly? | As terdon says , less ’ default behaviour is to display equivalents of special characters, in cat -v style. less -R will change that so that escape sequences are passed on to whatever is handling the display. less ’s defaults can be specified with the LESS environment variable, e.g. export LESS=-R git has its own idea of what its pager should do. If no LESS environment variable is set, it will set it to FRX when invoking less , which matches git ’s expectations; if LESS is set, it will leave it unchanged, which can result in unreadable output if LESS doesn’t include -R . There are two ways of configuring less for use with git : either configure it globally using the LESS variable, or change the core.pager setting , e.g. git config --global core.pager "less -R" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/655631",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34868/"
]
} |
655,715 | The word command refers to two different concepts in Linux: An executable program, such as grep (or a shell built-in, such as cd ). Example usage: "Here are the top 10 Linux commands you should learn." A full text string sent to the shell for execution, such as grep com /etc/hosts . Example usage: "Type a Linux command and press Enter." Does anyone have any best practices for avoiding this ambiguity when writing prose about Linux commands? Here are some attempts I've rejected already: Using the word program or executable for meaning #1. It's inaccurate for shell built-ins. Using the phrase command line for meaning #2. That's confusing because "command line" is also a synonym for "shell." Using the phrase command string for meaning #2. It's imprecise because both #1 and #2 are strings. Any advice appreciated. | POSIX refers to the things that are like grep and cd as " utilities " , and reserves " command " for the instructions . Used consistently, these terms are unambiguous. To address your cases in turn "An executable program, such as grep (or a shell built-in, such as cd)" is a utility : Utility A program, excluding special built-in utilities provided as part of the Shell Command Language, that can be called by name from a shell to perform a specific task, or related set of tasks. Which is further clarified with : The system may implement certain utilities as shell functions or built-in utilities to be explicit that this incorporates ordinary utilities like true that are commonly found as shell builtins. Formally, " special built-in utilities " are separated from utilities not further specified; these are things like break , . , eval , set , and trap , which affect the shell's internal state, but they do not include cd , which is a regular built-in. Outside of the nuanced needs of the specification (certain variable assignment behaviour differs and they are not available with execvp ), "utility" suffices to cover both categories at the user level. Articles of shell syntax such as if and while are not utilities at all. "A full text string sent to the shell for execution, such as grep com /etc/hosts " is a command : Command A directive to the shell to perform a particular task. Commands do include simple commands like grep com /etc/hosts , pipelines, and compound commands like if constructs and grouping commands with ( ... ) , but the word "command" never refers to a utility itself. Within a command, a command name may appear that identifies a utility or a function: the command name in grep com /etc/hosts is grep , referring to the grep utility . Vernacular uses of "command" to mean a utility or function may be disambiguated by context, but the formal meaning is only of an instruction. If total avoidance of ambiguity is required, you can consistently use "utility" and "command" for those two roles. You probably can't expect users to make that distinction themselves, though, so search-engine-optimised "top 10 Linux commands" articles are probably making the right choice for their incentives. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/655715",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45254/"
]
} |
655,790 | On GNOME, gsettings set org.gnome.desktop.interface gtk-theme 'Adwaita-dark' can set the dark theme system-wide. However, the same doesn't happen with a sway session. The Firefox browser still detects the light theme. | Firefox uses the GTK theme settings. On sway I could set gtk-application-prefer-dark-theme to 1 to get Firefox and GTK apps to use a dark theme by default. You'll need to edit (or create if it doesn't exist) the following file: ~/.config/gtk-3.0/settings.ini so that it looks something like this:
[Settings]
gtk-application-prefer-dark-theme=1
After that, Firefox and GTK apps were dark by default. Applications that are already running will likely need to be restarted. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/655790",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/236045/"
]
} |
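A convenience sketch of the edit described in the answer above; it only creates the file when it is missing, so an existing settings.ini is not overwritten (in that case the key has to be added by hand).

```sh
#!/bin/sh
# Write the GTK dark-theme preference used by Firefox and GTK apps under sway.
cfg="$HOME/.config/gtk-3.0/settings.ini"
mkdir -p "${cfg%/*}"
if [ ! -e "$cfg" ]; then
    printf '[Settings]\ngtk-application-prefer-dark-theme=1\n' > "$cfg"
    echo "created $cfg"
else
    echo "$cfg already exists; add gtk-application-prefer-dark-theme=1 under [Settings] manually."
fi
```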
655,799 | I'm doing an ssh to a PC that no one can actually physically reach, yet.Said PC was made after 2011, so it defintely has UEFI . The issue is that every posts on the web is about checking if the PC has a UEFI or BIOS (e.g. How to know if I'm booting using UEFI? ), which is not what I'm trying to find out. My question is then pretty simple : How do I tell if a PC with UEFI, booted with CSM/Legacy/BIOS Mode enabled or not ? Update 1 : There's lot of ways to tell if it's UEFI or not, but none of them can definitely tell since they all contradict themselves. Details: The drive is MBR There's no sign of an ESP partition at all (fstab, etc) There's no sign of EFI files on /boot at all cat /sys/firmware/efi/fw_platform_size gave 64 , which wouldn't work at all if it was in CSM Mode on another PC. update-grub gave Adding boot menu entry for EFI firmware configuration efibootmgr gave me a boot order, which usually shows when there's a UEFI. | Firefox uses the GTK theme settings. On sway I could set the setting gtk-application-prefer-dark-theme to 1 to get Firefox and GTK apps to use a dark theme by default. You'll need to edit (or create if it doesn't exist) the following file: ~/.config/gtk-3.0/settings.ini To look something like this: [Settings]gtk-application-prefer-dark-theme=1 After that firefox and GTK apps were dark by default. It will likely require a restart of the application in the case of them already running. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/655799",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88282/"
]
} |
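For the question in the record above (whether a UEFI machine was booted through CSM/legacy mode), a commonly used check, which the asker's own outputs already hint at, is whether the running kernel exposes /sys/firmware/efi. It reports only how the current boot was performed, which is what the asker is trying to determine over ssh.

```sh
#!/bin/sh
# Report how the running kernel was booted: UEFI exposes /sys/firmware/efi,
# a CSM/legacy (BIOS) boot does not.
if [ -d /sys/firmware/efi ]; then
    echo "UEFI boot, $(cat /sys/firmware/efi/fw_platform_size)-bit firmware interface"
else
    echo "Legacy BIOS / CSM boot (no /sys/firmware/efi)"
fi
```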
655,811 | I have a file file.txt c-hcos-49.84.202106221c-hcos-4.4.9-openc-hcos-4.9.9-open I want to grep lines that contain either 49 or 4.9 immediately after the string hcos- . I used cat file.txt| grep 'hcos' | grep '4\\.9\\|49' Its showing all lines in outputActual output c-hcos-49.56.202106221c-hcos-4.4.9-openc-hcos-4.9.9-open The expected output is c-hcos-49.84.202106221c-hcos-4.9.9-open | In your case, you want to find all lines that contain 49. or 4.9. immediately after the string hcos- . To do so, use grep -E 'hcos-4\.?9\.' file.txt where the -E option instructs grep to use extended regular expressions syntax. In basic regular expression syntax this would be achieved by: grep 'hcos-4\.\{0,1\}9\.' file.txt You don't need to cat a file into grep as grep can open the files on its own. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/655811",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/478970/"
]
} |
655,827 | I need to find the files that fulfill relatively complex condition.For example, I want to find all files that fulfill all below conditions: does contain word AAAA does contain word BBB or CCCCC (may contain both of them) does not contain word DDD The words may appear in any order and may appear in different lines. I have one solution, that combines find and egrep , but is not very legible. find . \( -type f -and -exec egrep -q 'BBB|CCCCC' {} \; \ -and -exec egrep -q AAAA {} \; \ -and -not -exec egrep -q DDD {} \; \) -print Is there any better way to solve that problem? | Your solution is pretty legible for the task, in my opinion. However, it's slow, because it spawns 3 processes per file. I reckon Awk is better suited here because it will allow to read a whole batch of files (as allowed by ARG_MAX) in a single go, using {} + instead of {} \; . GNU Awk: find . -type f -exec gawk ' BEGINFILE{c1=c2=c3=0} /AAA/ {c1=1} /BBB/||/CCC/{c2=1} /DDD/ {c3=1; nextfile} ENDFILE{if(c1 && c2 && !c3)print FILENAME}' {} + POSIX * : find . -type f -exec awk ' FNR==1{ if(NR>1 && c1 && c2 && !c3)print f c1=c2=c3=0 f=FILENAME } /AAA/ {c1=1} /BBB/||/CCC/{c2=1} /DDD/ {c3=1; nextfile} END{if(c1 && c2 && !c3)print f}' {} + *Actually, nextfile is still not POSIX but it has been accepted to the next issue of the standard . You can remove it for POSIX issue 7 compliance; the result will be the same, but with a performance penalty. Note : Awk bails out if it does not have permissions to read a file. In GNU Find, simply add the -readable flag to avoid that. If GNU Find is not available, Test can be used as an additional filter: find . -type f -exec test -r {} \; -exec awk ' ...' {} + But spawning a Test for each file represents a performance penalty. Further reading: POSIX Find . POSIX Awk . The BEGINFILE/ENDFILE special patterns . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/655827",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60841/"
]
} |
656,005 | I just read the following sentence: Case Sensitivity is a function of the Linux filesystem NOT the Linux operating system. What I deduced from this sentence is if I'm on a Linux machine but I am working with a device formatted using the Windows File System, then case sensitivity will NOT be a thing. I tried the following to verify this: $ ~/Documents: mkdir Test temp$ ~/Documents: touch Test/a.txt temp/b.txt$ ~/Documents: ls te*b.txt And it listed only the files within the temp directory, which was expected because I am inside a Linux Filesystem. When I navigated to a Windows File System (NOTE: I am using WSL2), I still get the same results, but I was expecting it to list files inside both directories ignoring case sensitivity. $ /mnt/d: mkdir Test temp$ /mnt/d: touch Test/a.txt temp/b.txt$ /mnt/d: ls te*b.txt I tried it with both bash and zsh. I feel that it's somehow related to bash (or zsh), because I also read that bash enforces case sensitivity even when working with case insensitive filesystems. This test works on Powershell, so it means that the filesystem is indeed case insensitive. | Here, you're running: ls te* Using a feature of your shell called globbing or filename generation (pathname expansion in POSIX), not of the Linux system nor of any filesystem used on Linux. te* is expanded by the shell to the list of files that match that pattern. To do that, the shell requests the list of entries in the current directory from the system (typically using the readdir() function of the C library, which underneath will use a system-specific system call ( getdents() on Linux)), and then match each name against the pattern. And unless you've configured your shell to do that matching case insensitively (see nocaseglob options in zsh or bash) or use glob operators to toggle case insensitivity (like the (#i) extended glob operator in zsh ), te* will only expand to the list of files whose name as reported by readdir() starts with te , even if pathname resolution on the system or file system underneath is case insensitive or can be made to be like NTFS. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/656005",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/479172/"
]
} |
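A small, self-contained demonstration of the behaviour discussed above: the glob is expanded by the shell, and bash only matches case-insensitively if told to. The /tmp/globdemo scratch directory is just for the demo; in zsh the equivalent switch is setopt nocaseglob or the (#i) glob flag mentioned in the answer.

```sh
#!/bin/bash
# Reproduce the question's layout in a scratch directory.
mkdir -p /tmp/globdemo/Test /tmp/globdemo/temp
touch /tmp/globdemo/Test/a.txt /tmp/globdemo/temp/b.txt
cd /tmp/globdemo || exit 1

echo "default globbing:";  ls te*   # expands to "temp" only
shopt -s nocaseglob
echo "with nocaseglob:";   ls te*   # expands to "Test temp"
shopt -u nocaseglob
```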
656,179 | Recently I decided to read a bit more on the bash built-ins declare , local , and readonly , which led me to switch from: local variable_namevariable_name='value'readonly variable_name To: variable_name='value'declare -r variable_name This change cut down the number of lines written and allowed me to set a few attributes, like telling bash that the value of a variable is an integer, which was nice. However, while creating a function that will serve as an alias for cURL, I noticed that variables inside an array never expand if I use declare , but expand just fine with local and readonly . Here is an example: #!/usr/bin/env bashset -o errexit -o errtrace -o pipefail -o nounsetIFS=$'\n\t'curl() { curl_version="$(command curl --version | awk 'NR==1 {print $2}')" declare -r curl_version curl_args=( --user-agent "curl/${curl_version}" --silent --fail ) command curl "${curl_args[@]}" \ "${@}"}curl --url 'https://httpbin.org/get' Because the variables do not expand for whatever reason, the --user-agent part of the array makes the script exit with an error, since as far as bash knows, this is an unbound variable, and those are not allowed because of set -o nounset . I have been trying to get this to work for a few days now, so I guess it is time to throw the towel and ask for help. Can anyone point me in the right direction to understand what I am doing wrong, please? EDIT: Forgot to mention, but the variable does expand if I declare it in the same line, like declare -r variable_name . The problem is, if I do that, I hit SC2155 from ShellCheck , hence why I am trying to declare after the value is set. | With: curl_version="$(command curl --version | awk 'NR==1 {print $2}')"declare -r curl_version within a function, you're setting the $curl_version global variable to some value and then creating a separate local and readonly variable which is initially unset. It looks like you want: # instantiate a new local variable (but in bash it inherits the "export"# attribute if any of the variable with same name in the parent scope)local curl_version# unset to remove that export attribute if any. Though you could# also change the above to local +x curl_versionunset -v curl_version# give a value:curl_version="$(command curl --version | awk 'NR==1 {print $2}')"# make that local variable read onlylocal -r curl_version (here using local instead of declare to make it clearer that you want to make the variable local¹). Or do all at the same time with: local +x -r curl_version="$(command curl --version | awk '{print $2; exit}')" (though as noted by shellcheck, you then lose the exit status of the pipeline²). In any case, I wouldn't use readonly / typeset -r in shells like you would use const in C especially in bash . Shells (other than ksh93) don't have static scoping like in C. And in bash (contrary to zsh for instance), you can't create a variable local to a function if it has been made readonly in the global scope. For instance: count() { local n for (( n = 0; n < $1; n++ )) { echo "$n"; }}readonly n=5count "$n" would work in zsh but not in bash. It may be OK if you only use local -r and never readonly . ¹ in any case typeset / declare / local are all the same in bash , the only difference being that if you try to use local outside of a function, it reports an error. The difference between typeset -r and readonly (same as between typeset -x and export ) being that the latter doesn't instantiate a new variable if called within a function. 
² See how with that exit in awk in that version for awk to stop processing the input after the first line, curl could be killed with a SIGPIPE (very unlikely in practice as curl would send its output in one go and it would fit in the pipe) and because of pipefail , the pipeline could end up failing with 141 exit status, but local itself would still succeed as long as it can assign a value to the variable. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/656179",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/479351/"
]
} |
656,180 | I had a dual boot (Windows/Linux Mint) laptop with a 500GB SSD, and replaced the SSD with a 1TB SSD using the procedure described below. When I reboot, it boots directly into Windows, without grub menu. Why did this happen, and how can I restore the grub menu? This is what I did: Boot the laptop from a live bootstick (running Linux Mint) Use dd to copy the whole 500GB SSD to a network drive Shut down the laptop and replace the SSD by a 1GB SSD. Boot again with a live bootstick and use dd to copy the file on the network drive back to the SSD. Reboot without live bootstick I know that this leaves half of my new SSD unused; I was hoping to fix that later. dd worked correctly, or at least I can mount all partitions including live partitions from a live bootstick. Here are some hardware details: Laptop: Dell XPS 15 (9550) Old SSD: PM951 NVMe SAMSUNG 512GB New SSD: Kingston Technology KC2500 M.2 1000 GB PCI Express 3.0 3D TLC NVMe Pastebin link, from Boot-Repair: http://paste.ubuntu.com/p/DkMGvNXdYq/ In case it matters: Windows fast boot was disabled when I cloned the disk. | With: curl_version="$(command curl --version | awk 'NR==1 {print $2}')"declare -r curl_version within a function, you're setting the $curl_version global variable to some value and then creating a separate local and readonly variable which is initially unset. It looks like you want: # instantiate a new local variable (but in bash it inherits the "export"# attribute if any of the variable with same name in the parent scope)local curl_version# unset to remove that export attribute if any. Though you could# also change the above to local +x curl_versionunset -v curl_version# give a value:curl_version="$(command curl --version | awk 'NR==1 {print $2}')"# make that local variable read onlylocal -r curl_version (here using local instead of declare to make it clearer that you want to make the variable local¹). Or do all at the same time with: local +x -r curl_version="$(command curl --version | awk '{print $2; exit}')" (though as noted by shellcheck, you then lose the exit status of the pipeline²). In any case, I wouldn't use readonly / typeset -r in shells like you would use const in C especially in bash . Shells (other than ksh93) don't have static scoping like in C. And in bash (contrary to zsh for instance), you can't create a variable local to a function if it has been made readonly in the global scope. For instance: count() { local n for (( n = 0; n < $1; n++ )) { echo "$n"; }}readonly n=5count "$n" would work in zsh but not in bash. It may be OK if you only use local -r and never readonly . ¹ in any case typeset / declare / local are all the same in bash , the only difference being that if you try to use local outside of a function, it reports an error. The difference between typeset -r and readonly (same as between typeset -x and export ) being that the latter doesn't instantiate a new variable if called within a function. ² See how with that exit in awk in that version for awk to stop processing the input after the first line, curl could be killed with a SIGPIPE (very unlikely in practice as curl would send its output in one go and it would fit in the pipe) and because of pipefail , the pipeline could end up failing with 141 exit status, but local itself would still succeed as long as it can assign a value to the variable. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/656180",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/434086/"
]
} |
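For the cloned-SSD question above, a first diagnostic pass from the live USB might look like the following; it is only a sketch, the device and partition names are illustrative, and it assumes a UEFI installation (likely on that hardware).

```sh
# Run from the live environment; adjust device names to match lsblk output.
sudo efibootmgr -v                   # is there still a boot entry for the Linux loader?
lsblk -o NAME,SIZE,FSTYPE,PARTLABEL  # find the EFI System Partition on the new disk
sudo mount /dev/nvme0n1p1 /mnt       # illustrative ESP partition name
ls /mnt/EFI                          # is the distribution's EFI directory still there?
```

If the firmware no longer lists a Linux entry, it can usually be re-created from a chroot with grub-install and update-grub, or directly with efibootmgr.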
656,205 | https://sks-keyservers.net/ ( Internet Archive snapshot ) says This service is deprecated. This means it is no longer maintained, and new HKPS certificates will not be issued. Service reliability should not be expected. Update 2021-06-21: Due to even more GDPR takedown requests, the DNS records for the pool will no longer be provided at all. Which keyservers can I use for gpg --keyserver "$keyserver1" --recv-key keyid that I can expect not will go away anytime soon? | Which keyservers can I use for gpg --keyserver "$keyserver1" --recv-key keyid that I can expect not will go away anytime soon? The recommendation is to use keys.openpgp.org , however this keyserver only includes User IDs for keys whose owners have personally confirmed via email (basically eliminating large swaths of of the PGP ecosystem). It also does not include any 3rd party signatures on keys to mitigate the possibility of a "poisoned key" attack. As of December 2021, this is the default (if none is configured by the user) keyserver for GnuPG packaged by Debian since gnupg2 2.2.17-1 (released in 2019). Personally, I'd recommend a Hockeypuck-based keyserver like keyserver.ubuntu.com , which isn't so limited (although it does strip 3rd party signatures). GnuPG has since changed this to the default as of versions 2.2.29 and 2.3.2. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/656205",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2972/"
]
} |
656,375 | I would like to understand what Linux does when it runs a .deb file. What I mean by this isare there any specific files it will look for or a default name of a file within the .deb file.I'm quite new to Linux so a simple, straight to the point answer would be great!Thank you in advance! | A .deb file is an archive (extract it via ar x package.deb or just list contents via dpkg -c package.deb ) with the following contents: data.tar.xz , control.tar.gz , debian-binary data.tar.xz Extract this archive via tar -xvf data.tar.xz and you will have the actual files in the folder structure where they will be installed. All programs are already compiled (in contrast to downloading source code and compiling it yourself).Say contents for a small package are ./usr/bin/program (the binary) and ./usr/lib/program/special.so (a library the program uses), then what is done during installation is just copying these files in the /usr directory. control.tar.gz Extract via tar -xzvf control.tar.gz . Contains references for controlling the installation: hash values for safety reasons, exact description of the package version and versions of each file, information regarding dependencies and what files are used for configurations. The need for describing version and dependencies is obvious. File versions are interesting, as even with installing updates via .deb -files e.g. some library files might be the same - so reinstalling them is not needed. Config files on the other hand usually had been adapted by the user, so overwriting them is a no-go. debian-binary Just tells the system what .deb -file standard is used. 2.0 - nowadays. What else is happening? With the version information dpkg updates its logs of installed packages and where to find them. Needed for version and dependency checks as well as when removing packages. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/656375",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/479608/"
]
} |
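The commands from the answer above, collected into one inspection sequence; package.deb is a placeholder, and the compression suffixes of the inner tarballs vary between packages (gz, xz or zst).

```sh
# Inspect a .deb without installing it.
dpkg -c package.deb        # list the files the package would install
ar x package.deb           # unpack into debian-binary, control.tar.*, data.tar.*
tar -xvf data.tar.xz       # payload: files laid out as they will appear under /
tar -xzvf control.tar.gz   # control data: versions, dependencies, md5sums, conffiles
```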
656,465 | When I do ls /dev/tty* , I see the following output: /dev/tty /dev/tty12 /dev/tty17 /dev/tty21 /dev/tty26 /dev/tty30 /dev/tty35 /dev/tty4 /dev/tty44 /dev/tty49 /dev/tty53 /dev/tty58 /dev/tty62 /dev/ttyS0/dev/tty0 /dev/tty13 /dev/tty18 /dev/tty22 /dev/tty27 /dev/tty31 /dev/tty36 /dev/tty40 /dev/tty45 /dev/tty5 /dev/tty54 /dev/tty59 /dev/tty63 /dev/ttyS1/dev/tty1 /dev/tty14 /dev/tty19 /dev/tty23 /dev/tty28 /dev/tty32 /dev/tty37 /dev/tty41 /dev/tty46 /dev/tty50 /dev/tty55 /dev/tty6 /dev/tty7 /dev/ttyS2/dev/tty10 /dev/tty15 /dev/tty2 /dev/tty24 /dev/tty29 /dev/tty33 /dev/tty38 /dev/tty42 /dev/tty47 /dev/tty51 /dev/tty56 /dev/tty60 /dev/tty8 /dev/ttyS3/dev/tty11 /dev/tty16 /dev/tty20 /dev/tty25 /dev/tty3 /dev/tty34 /dev/tty39 /dev/tty43 /dev/tty48 /dev/tty52 /dev/tty57 /dev/tty61 /dev/tty9 The formatting is well and I can see all the files in modest block inside terminal. But when I run this command such a way watch -d -n1 'ls /dev/tty*' , I see: Every 1.0s: ls /dev/tty* debian: Wed Jun 30 21:08:06 2021/dev/tty/dev/tty0/dev/tty1/dev/tty10/dev/tty11/dev/tty12/dev/tty13/dev/tty14/dev/tty15/dev/tty16/dev/tty17/dev/tty18/dev/tty19/dev/tty2/dev/tty20/dev/tty21... So the output listed in vertical and doesn't fit my screen. What is the reason? How can I solve this? | What is the reason? When watch executes commands they are not connected to theterminal. In other words, isatty(3) returns 0. You can use thefollowing isatty.c to check if a command is connected to the terminalwhen it's ran: #include <stdio.h>#include <stdlib.h>#include <unistd.h>int main(void){ printf("%d\n", isatty(STDOUT_FILENO)); return EXIT_SUCCESS;} Compile: gcc isatty.c -o isatty Run in your terminal emulator: $ ./isatty1 Run it in watch: $ watch ./isattyEvery 2.0s: ./isatty darkstar: Wed Jun 30 20:42:51 20210 How can I solve this? Use -C option with ls in watch: watch -d -n1 'ls -C /dev/tty*' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/656465",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165555/"
]
} |
656,492 | Using dumpe2fs on some ext4 partition, I see in the initial data that the first inode is #11. However, if I ls -i the root of this partition, I get that its inode number is #2 (as expected). So... what is this "first inode" reported by dumpe2fs? | #11 is the first "non-special" inode, i.e. the first one that can be used for a regularly created file or directory (usually used for lost+found). The number of that inode is saved in the filesystem superblock (s_first_ino), so technically it doesn't need to be #11, but mke2fs always sets it that way. Most of the inodes from #0 to #10 have special purposes (e.g. #2 is the root directory) but some are reserved or used in non-upstream versions of the ext filesystem family. The usages are documented on kernel.org:
inode 0: n/a
inode 1: list of defective blocks
inode 2: root directory
inode 3: user quota
inode 4: group quota
inode 5: reserved for boot loaders
inode 6: undelete directory (reserved)
inode 7: "resize inode"
inode 8: journal
inode 9: "exclude" inode (reserved)
inode 10: replica inode (reserved) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/656492",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/266328/"
]
} |
656,588 | My question and the title may not be very well-formed, and I apologise in advance for that. Suppose I would like to execute a command (to be specific, xsdcxx ) as such in zsh : $ xsdcxx cxx-tree schemas/core/**/*.xsd --output-dir /absolute/path/to/globbed/path so that the directory structure of the generated files are in the same directory as the input .xsd schemas. How do I do this? In PowerShell, this is quite straightforward (assuming xsdcxx is already in the PATH ): > Get-ChildItem -Recurse -Include '*.xsd' | Foreach { xsdcxx cxx-tree --output-dir $_.Directory.Fullname $_.Fullname } | With zsh , that would be just: for f (schemas/core/**/*.xsd) xsdcxx cxx-tree --output-dir $f:h $f Where $f:h is the head (dirname) of $f like in csh or vim. Change $f to $f:P and $f:h to $f:h:P to get the realpath ¹ of the file and file head respectively. Here, you may want to change schemas/core/**/*.xsd to schemas/core/**/*.xsd(N) (where N enables nullglob for that one glob expansion) to avoid the error if there's no match. Or (N.) to restrict to regular files only (excluding all other types of files like sockets, fifos, directories, symlinks etc), or (N-.) to also include symlinks to regular files. ¹, that is the canonical path to the corresponding file: absolute and symlink-free, like with the realpath() standard function. See also the :a and :A modifiers described at info zsh modifiers as possible alternatives. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/656588",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/393789/"
]
} |
656,595 | I am fairly certain the answer is no, but I was wondering if it's possible to decipher the original parent of a daemon process, prior to their daemonization and subsequent re-parenting process. | With zsh , that would be just: for f (schemas/core/**/*.xsd) xsdcxx cxx-tree --output-dir $f:h $f Where $f:h is the head (dirname) of $f like in csh or vim. Change $f to $f:P and $f:h to $f:h:P to get the realpath ¹ of the file and file head respectively. Here, you may want to change schemas/core/**/*.xsd to schemas/core/**/*.xsd(N) (where N enables nullglob for that one glob expansion) to avoid the error if there's no match. Or (N.) to restrict to regular files only (excluding all other types of files like sockets, fifos, directories, symlinks etc), or (N-.) to also include symlinks to regular files. ¹, that is the canonical path to the corresponding file: absolute and symlink-free, like with the realpath() standard function. See also the :a and :A modifiers described at info zsh modifiers as possible alternatives. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/656595",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/303085/"
]
} |
656,599 | I am trying to run an "exe" file on Linux using Ubunto through Wine. When I run it through the terminal I get this error: This application requires a Java Runtime Environment 1.8.0(32 bit). Wine has referred me to an Oracle link and although I followed the instructions and downloaded the JRE, it is still not working. | With zsh , that would be just: for f (schemas/core/**/*.xsd) xsdcxx cxx-tree --output-dir $f:h $f Where $f:h is the head (dirname) of $f like in csh or vim. Change $f to $f:P and $f:h to $f:h:P to get the realpath ¹ of the file and file head respectively. Here, you may want to change schemas/core/**/*.xsd to schemas/core/**/*.xsd(N) (where N enables nullglob for that one glob expansion) to avoid the error if there's no match. Or (N.) to restrict to regular files only (excluding all other types of files like sockets, fifos, directories, symlinks etc), or (N-.) to also include symlinks to regular files. ¹, that is the canonical path to the corresponding file: absolute and symlink-free, like with the realpath() standard function. See also the :a and :A modifiers described at info zsh modifiers as possible alternatives. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/656599",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/479817/"
]
} |
656,683 | I have a csv file which contains a nanosecond-resolution timestamp in the format "YYYY-MM-DDTHH:MM:SS.fffffffffZ" followed by some data 2021-04-26T09:30:04.786235633Z,102 2021-04-26T09:30:04.786235633Z,524 2021-04-26T09:30:04.786235633Z,566 2021-04-26T09:30:04.791050014Z,391 2021-04-26T09:30:09.882687589Z,922 2021-04-26T09:30:09.886405549Z,744 2021-04-26T09:30:09.886405549Z,702 2021-04-26T09:30:14.986237837Z,436 2021-04-26T09:30:14.986237837Z,636 2021-04-26T09:30:14.986298532Z,353 2021-04-26T09:30:14.986298532Z,445 2021-04-26T09:30:14.986298532Z,785 2021-04-26T09:30:14.986298532Z,917 2021-04-26T09:30:20.086229659Z,195 2021-04-26T09:30:20.086229659Z,228 2021-04-26T09:30:20.086229659Z,486 2021-04-26T09:30:20.086229659Z,41 2021-04-26T09:30:20.086229659Z,421 2021-04-26T09:30:20.090214746Z,386 2021-04-26T09:30:25.186477272Z,678 2021-04-26T09:30:25.186477272Z,198 2021-04-26T09:30:25.190264104Z,459 2021-04-26T09:30:25.190460283Z,123 2021-04-26T09:30:25.190460283Z,318 2021-04-26T09:30:26.442994013Z,200 I would like to process it in such a way that only the last row per second is output: 2021-04-26T09:30:04.791050014Z,391 2021-04-26T09:30:09.886405549Z,702 2021-04-26T09:30:14.986298532Z,917 2021-04-26T09:30:20.090214746Z,386 2021-04-26T09:30:25.190460283Z,318 2021-04-26T09:30:26.442994013Z,200 Is it possible to do this with awk or some such tool? | Yes, this is possible: keep track of the last second and corresponding line, and whenever the second changes, output the memorised line: awk -F. 'NR > 1 && lastsec != $1 { print lastline } { lastsec = $1; lastline = $0 } END { if (NR) print }' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/656683",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28680/"
]
} |
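A minimal usage sketch for the awk answer above; input.csv and filtered.csv are placeholder file names, and the program is exactly the one quoted in the answer:
    awk -F. 'NR > 1 && lastsec != $1 { print lastline } { lastsec = $1; lastline = $0 } END { if (NR) print }' input.csv > filtered.csv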
656,717 | How to watch for sysfs file changes (like /sys/class/net/eth0/statistics/operstate ) and execute a command on content change? inotify does not work on sysfs I don't want to poll. I want to set a listener with a callback routine once | I have not read the source code that populates operstate , but generally, reading a file in sysfs executes some code on the kernel side that returns the bytes you're reading. So, without you reading operstate , it has no "state". The value is not stored anywhere. How to watch for sysfs file change Since these are not actually files, the concept "change" doesn't exist. There's probably a better way to achieve what you want! netlink was designed specifically for the task of monitoring networking state; it's easy to interface . For example, this minimally modified sample code from man 7 netlink might already solve your problem: struct sockaddr_nl sa; memset(&sa, 0, sizeof(sa)); sa.nl_family = AF_NETLINK; // Link state change notifications: sa.nl_groups = RTMGRP_LINK; fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE); bind(fd, (struct sockaddr *) &sa, sizeof(sa)); Generally, if this is not about ethernet-level connectivity but, say, connectivity to some IP network (or, the internet), systemd/NetworkManager is the route you'd go on a modern system instead. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/656717",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/306382/"
]
} |
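For completeness, a shell-level sketch of the same no-polling idea, assuming the iproute2 ip utility is available; it blocks on link-state change notifications (which come from netlink) and runs something on each one. The printf is a placeholder for whatever command should react to the change:
    ip monitor link | while read -r line; do
        # runs once per link-state event reported by the kernel
        printf 'link event: %s\n' "$line"
    done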
656,963 | So, I've a simple nested loop where the outer for loops over all the files in a directory and the inner for loops over all characters of these filenames. #!/bin/bashif [ $# -lt 1 ]then echo "Please provide an argument" exitfifor file in `ls $1`do for ch in $file do echo $ch donedone The script above doesn't work. The inner loop doesn't loop over all the characters in the filename but instead loops over the entire thing. UPDATE: Based on @ilkkachu's answer I was able to come up with the following script and it works as expected. But I was curious can we not use the for...in loop to iterate over strings? #!/bin/bashif [ $# -lt 1 ]then echo "Please provide an argument" exitfifor file in `ls $1`; do for ((i=0; i<${#file}; i++)); do printf "%q\n" "${file:i:1}" donedone | Since you're using Bash: #!/bin/bashword=foobarfor ((i=0; i < ${#word}; i++)); do printf "char: %q\n" "${word:i:1}" done ${var:p:k} gives k characters of var starting at position p , ${#var} is the length of the contents of var . printf %q prints the output in an unambiguous format, so e.g. a newline shows as $'\n' . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/656963",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/479172/"
]
} |
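Applied to the original question (iterate over every file name in the directory given as the first argument and print each character), a sketch using the same ${name:i:1} expansion; the glob replaces the ls parsing from the question:
    #!/bin/bash
    for file in "$1"/*; do
        name=${file##*/}                      # strip the directory part
        for ((i=0; i<${#name}; i++)); do
            printf '%s\n' "${name:i:1}"
        done
    done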
656,989 | I'm running Debian 9 with Linux 4.9.0-16-amd64 on a Lenovo T440s,which has been stable until recently but started to hang a couple oftimes per day. There have been no upgrades on it, so I suspect thehangs may be caused by hardware. There are errors in /var/log/syslog, such as this (that did not immediately cause a hang): Jul 4 12:46:39 dumaty kernel: [ 2345.071294] ------------[ cut here ]------------Jul 4 12:46:39 dumaty kernel: [ 2345.071314] WARNING: CPU: 2 PID: 366 at /build/linux-hrcSIZ/linux-4.9.272/drivers/net/wireless/intel/iwlwifi/mvm/rs.c:1212 iwl_mvm_rs_tx_status+0x159/0x1950 [iwlmvm]Jul 4 12:46:39 dumaty kernel: [ 2345.071315] Modules linked in: ctr ccm binfmt_misc rfcomm fuse cmac bnep iTCO_wdt iTCO_vendor_support intel_rapl arc4 x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel snd_hda_codec_hdmi kvm iwlmvm irqbypass intel_cstate mac80211 joydev evdev intel_uncore pcspkr intel_rapl_perf snd_hda_codec_realtek rtsx_pci_ms serio_raw iwlwifi sg hid_multitouch snd_hda_codec_generic memstick uvcvideo lpc_ich cfg80211 videobuf2_vmalloc videobuf2_memops videobuf2_v4l2 videobuf2_core btusb cdc_mbim btrtl cdc_wdm snd_hda_intel videodev btbcm shpchp btintel snd_hda_codec i915 media cdc_ncm cdc_acm snd_hda_core bluetooth usbnet drm_kms_helper mii snd_hwdep drm mei_me snd_pcm snd_timer mei i2c_algo_bit thinkpad_acpi wmi nvram snd soundcore ac rfkill battery video button parport_pc ppdev lp parport ip_tables x_tablesJul 4 12:46:39 dumaty kernel: [ 2345.071369] autofs4 ext4 crc16 jbd2 crc32c_generic fscrypto ecb mbcache algif_skcipher af_alg usbhid hid dm_crypt dm_mod sd_mod crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel rtsx_pci_sdmmc mmc_core aesni_intel aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd psmouse ahci libahci i2c_i801 i2c_smbus libata xhci_pci scsi_mod ehci_pci xhci_hcd ehci_hcd rtsx_pci e1000e mfd_core ptp usbcore pps_core usb_common thermalJul 4 12:46:39 dumaty kernel: [ 2345.071401] CPU: 2 PID: 366 Comm: irq/47-iwlwifi Not tainted 4.9.0-16-amd64 #1 Debian 4.9.272-1Jul 4 12:46:39 dumaty kernel: [ 2345.071402] Hardware name: LENOVO 20ARA0YL00/20ARA0YL00, BIOS GJET77WW (2.27 ) 05/20/2014Jul 4 12:46:39 dumaty kernel: [ 2345.071403] 0000000000000000 ffffffffae213377 0000000000000000 0000000000000000Jul 4 12:46:39 dumaty kernel: [ 2345.071406] ffffffffadc7aa2b ffff9cf649fb0900 0000000000000005 ffff9cf64a4c1568Jul 4 12:46:39 dumaty kernel: [ 2345.071409] 00000000ffffffea 000000000d9afcfb ffff9cf580809a28 ffffffffc0b479e9Jul 4 12:46:39 dumaty kernel: [ 2345.071411] Call Trace:Jul 4 12:46:39 dumaty kernel: [ 2345.071417] [<ffffffffae213377>] ? dump_stack+0x66/0x81Jul 4 12:46:39 dumaty kernel: [ 2345.071421] [<ffffffffadc7aa2b>] ? __warn+0xcb/0xf0Jul 4 12:46:39 dumaty kernel: [ 2345.071429] [<ffffffffc0b479e9>] ? iwl_mvm_rs_tx_status+0x159/0x1950 [iwlmvm]Jul 4 12:46:39 dumaty kernel: [ 2345.071432] [<ffffffffadcb768e>] ? find_busiest_group+0x3e/0x4d0Jul 4 12:46:39 dumaty kernel: [ 2345.071436] [<ffffffffadce86b4>] ? lock_timer_base+0x74/0x90Jul 4 12:46:39 dumaty kernel: [ 2345.071453] [<ffffffffc0d68162>] ? ieee80211_tx_status+0x3b2/0x8b0 [mac80211]Jul 4 12:46:39 dumaty kernel: [ 2345.071459] [<ffffffffc0b3b8d6>] ? iwl_mvm_rx_tx_cmd+0x296/0x770 [iwlmvm]Jul 4 12:46:39 dumaty kernel: [ 2345.071462] [<ffffffffae2224a5>] ? __switch_to_asm+0x35/0x70Jul 4 12:46:39 dumaty kernel: [ 2345.071468] [<ffffffffc0cd3832>] ? iwl_pcie_rx_handle+0x2d2/0x840 [iwlwifi]Jul 4 12:46:39 dumaty kernel: [ 2345.071473] [<ffffffffc0cd4e51>] ? 
iwl_pcie_irq_handler+0x181/0x730 [iwlwifi]Jul 4 12:46:39 dumaty kernel: [ 2345.071475] [<ffffffffadcd7190>] ? irq_finalize_oneshot.part.36+0xf0/0xf0Jul 4 12:46:39 dumaty kernel: [ 2345.071477] [<ffffffffadcd71b1>] ? irq_thread_fn+0x21/0x60Jul 4 12:46:39 dumaty kernel: [ 2345.071479] [<ffffffffadcd79b6>] ? irq_thread+0x136/0x1c0Jul 4 12:46:39 dumaty kernel: [ 2345.071481] [<ffffffffae21d4d1>] ? __schedule+0x241/0x6f0Jul 4 12:46:39 dumaty kernel: [ 2345.071483] [<ffffffffadcbdb0f>] ? __wake_up_common+0x4f/0x90Jul 4 12:46:39 dumaty kernel: [ 2345.071485] [<ffffffffadcd7280>] ? irq_forced_thread_fn+0x90/0x90Jul 4 12:46:39 dumaty kernel: [ 2345.071487] [<ffffffffadcd7880>] ? irq_thread_check_affinity+0xd0/0xd0Jul 4 12:46:39 dumaty kernel: [ 2345.071490] [<ffffffffadc9af29>] ? kthread+0xd9/0xf0Jul 4 12:46:39 dumaty kernel: [ 2345.071493] [<ffffffffae2224b1>] ? __switch_to_asm+0x41/0x70Jul 4 12:46:39 dumaty kernel: [ 2345.071496] [<ffffffffadc9ae50>] ? kthread_park+0x60/0x60Jul 4 12:46:39 dumaty kernel: [ 2345.071498] [<ffffffffae222537>] ? ret_from_fork+0x57/0x70Jul 4 12:46:39 dumaty kernel: [ 2345.071499] ---[ end trace e62295838fbe3e4e ]--- Later, another error happened. I remember having seen other swap_free errors previously, too. Jul 4 15:11:21 dumaty kernel: [11027.163548] swap_free: Unused swap file entry 3ffff8c9d3f8aJul 4 15:11:21 dumaty kernel: [11027.163554] BUG: Bad page map in process CompositorTileW pte:e6c580ea2a pmd:24ca96067Jul 4 15:11:21 dumaty kernel: [11027.163557] addr:000055f7a8fc0000 vm_flags:08100073 anon_vma:ffff9cf5bb2a9e10 mapping: (null) index:55f7a8fc0Jul 4 15:11:21 dumaty kernel: [11027.163559] file: (null) fault: (null) mmap: (null) readpage: (null)Jul 4 15:11:21 dumaty kernel: [11027.163563] CPU: 3 PID: 6137 Comm: CompositorTileW Tainted: G W 4.9.0-16-amd64 #1 Debian 4.9.272-1Jul 4 15:11:21 dumaty kernel: [11027.163564] Hardware name: LENOVO 20ARA0YL00/20ARA0YL00, BIOS GJET77WW (2.27 ) 05/20/2014Jul 4 15:11:21 dumaty kernel: [11027.163565] 0000000000000000 ffffffffae213377 000055f7a8fc0000 ffff9cf55af3b0c8Jul 4 15:11:21 dumaty kernel: [11027.163568] ffffffffaddb7c31 000055f7a9034000 0000000000000000 0000000000000000Jul 4 15:11:21 dumaty kernel: [11027.163571] 000055f7a8fc0000 ffff9cf58ca96e00 000000e6c580ea2a ffffb7c4839a3c38Jul 4 15:11:21 dumaty kernel: [11027.163573] Call Trace:Jul 4 15:11:21 dumaty kernel: [11027.163580] [<ffffffffae213377>] ? dump_stack+0x66/0x81Jul 4 15:11:21 dumaty kernel: [11027.163582] [<ffffffffaddb7c31>] ? print_bad_pte+0x1d1/0x2a0Jul 4 15:11:21 dumaty kernel: [11027.163584] [<ffffffffaddba434>] ? unmap_page_range+0x5d4/0x9d0Jul 4 15:11:21 dumaty kernel: [11027.163586] [<ffffffffaddbabfc>] ? unmap_vmas+0x4c/0xa0Jul 4 15:11:21 dumaty kernel: [11027.163589] [<ffffffffaddc3b9f>] ? exit_mmap+0x8f/0x140Jul 4 15:11:21 dumaty kernel: [11027.163593] [<ffffffffadc77604>] ? mmput+0x54/0x100Jul 4 15:11:21 dumaty kernel: [11027.163594] [<ffffffffadc7f1be>] ? do_exit+0x27e/0xb60Jul 4 15:11:21 dumaty kernel: [11027.163596] [<ffffffffadc7fb1a>] ? do_group_exit+0x3a/0xa0Jul 4 15:11:21 dumaty kernel: [11027.163599] [<ffffffffadc8abe1>] ? get_signal+0x161/0x850Jul 4 15:11:21 dumaty kernel: [11027.163602] [<ffffffffadcfea0f>] ? do_futex+0x14f/0xba0Jul 4 15:11:21 dumaty kernel: [11027.163605] [<ffffffffadc26486>] ? do_signal+0x36/0x690Jul 4 15:11:21 dumaty kernel: [11027.163607] [<ffffffffadd2d5a4>] ? __seccomp_filter+0x74/0x270Jul 4 15:11:21 dumaty kernel: [11027.163610] [<ffffffffadcff4df>] ? 
SyS_futex+0x7f/0x160Jul 4 15:11:21 dumaty kernel: [11027.163613] [<ffffffffadc03721>] ? exit_to_usermode_loop+0x71/0xb0Jul 4 15:11:21 dumaty kernel: [11027.163615] [<ffffffffadc03bd9>] ? do_syscall_64+0xe9/0x100Jul 4 15:11:21 dumaty kernel: [11027.163619] [<ffffffffae22238e>] ? entry_SYSCALL_64_after_swapgs+0x58/0xc6Jul 4 15:11:21 dumaty kernel: [11027.163620] Disabling lock debugging due to kernel taintJul 4 15:11:21 dumaty kernel: [11027.165144] BUG: Bad rss-counter state mm:ffff9cf5bb398000 idx:2 val:-1 Later on still: Jul 4 16:03:38 dumaty kernel: [14164.368364] BUG: unable to handle kernel NULL pointer dereference at 0000000000000018Jul 4 16:03:38 dumaty kernel: [14164.368412] IP: [<ffffffffadf61e1f>] swiotlb_unmap_sg_attrs+0x1f/0x50Jul 4 16:03:38 dumaty kernel: [14164.368447] PGD 0 Jul 4 16:03:38 dumaty kernel: [14164.368457] Jul 4 16:03:38 dumaty kernel: [14164.368467] Oops: 0000 [#2] SMPJul 4 16:03:38 dumaty kernel: [14164.368483] Modules linked in: ctr ccm binfmt_misc rfcomm fuse cmac bnep iTCO_wdt iTCO_vendor_support intel_rapl arc4 x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel snd_hda_codec_hdmi kvm iwlmvm irqbypass intel_cstate mac80211 joydev evdev intel_uncore pcspkr intel_rapl_perf snd_hda_codec_realtek rtsx_pci_ms serio_raw iwlwifi sg hid_multitouch snd_hda_codec_generic memstick uvcvideo lpc_ich cfg80211 videobuf2_vmalloc videobuf2_memops videobuf2_v4l2 videobuf2_core btusb cdc_mbim btrtl cdc_wdm snd_hda_intel videodev btbcm shpchp btintel snd_hda_codec i915 media cdc_ncm cdc_acm snd_hda_core bluetooth usbnet drm_kms_helper mii snd_hwdep drm mei_me snd_pcm snd_timer mei i2c_algo_bit thinkpad_acpi wmi nvram snd soundcore ac rfkill battery video button parport_pc ppdev lp parport ip_tables x_tablesJul 4 16:03:38 dumaty kernel: [14164.368930] autofs4 ext4 crc16 jbd2 crc32c_generic fscrypto ecb mbcache algif_skcipher af_alg usbhid hid dm_crypt dm_mod sd_mod crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel rtsx_pci_sdmmc mmc_core aesni_intel aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd psmouse ahci libahci i2c_i801 i2c_smbus libata xhci_pci scsi_mod ehci_pci xhci_hcd ehci_hcd rtsx_pci e1000e mfd_core ptp usbcore pps_core usb_common thermalJul 4 16:03:38 dumaty kernel: [14164.369092] CPU: 2 PID: 1819 Comm: chrome Tainted: G B D W 4.9.0-16-amd64 #1 Debian 4.9.272-1Jul 4 16:03:38 dumaty kernel: [14164.369126] Hardware name: LENOVO 20ARA0YL00/20ARA0YL00, BIOS GJET77WW (2.27 ) 05/20/2014Jul 4 16:03:38 dumaty kernel: [14164.369158] task: ffff9cf5e8136100 task.stack: ffffb7c482578000Jul 4 16:03:38 dumaty kernel: [14164.369191] RIP: 0010:[<ffffffffadf61e1f>] [<ffffffffadf61e1f>] swiotlb_unmap_sg_attrs+0x1f/0x50Jul 4 16:03:38 dumaty kernel: [14164.369230] RSP: 0018:ffffb7c48257bc70 EFLAGS: 00010212Jul 4 16:03:38 dumaty kernel: [14164.369257] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000Jul 4 16:03:38 dumaty kernel: [14164.369294] RDX: 0000000000001000 RSI: 0000000080eed000 RDI: ffff9cf36fd09400Jul 4 16:03:38 dumaty kernel: [14164.369328] RBP: 0000000000000021 R08: 0000000000000000 R09: 000000000000ffffJul 4 16:03:38 dumaty kernel: [14164.369357] R10: ffff9cf62fd12a20 R11: ffff9cf5e1bbf738 R12: 0000000000000000Jul 4 16:03:38 dumaty kernel: [14164.370826] R13: 0000000000000040 R14: ffff9cf64f8a40a0 R15: ffff9cf64b600000Jul 4 16:03:38 dumaty kernel: [14164.372260] FS: 00007fac918be000(0000) GS:ffff9cf65e280000(0000) knlGS:0000000000000000Jul 4 16:03:38 dumaty kernel: [14164.373705] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033Jul 4 
16:03:38 dumaty kernel: [14164.375100] CR2: 0000000000000018 CR3: 00000002a82f2000 CR4: 0000000000160670Jul 4 16:03:38 dumaty kernel: [14164.376543] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000Jul 4 16:03:38 dumaty kernel: [14164.377938] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400Jul 4 16:03:38 dumaty kernel: [14164.379216] Stack:Jul 4 16:03:38 dumaty kernel: [14164.380610] ffff9cf649f5d300 0000000000000000 ffffffffc09d6da0 ffff9cf649f5d300Jul 4 16:03:38 dumaty kernel: [14164.382141] ffffffffc06c73b8 ffffffffc094b02e ffff9cf649f5d300 0000000000000000Jul 4 16:03:38 dumaty kernel: [14164.383372] ffffffffc09d6da0 ffff9cf64b600000 ffffffffc06c73b8 ffff9cf64b600000Jul 4 16:03:38 dumaty kernel: [14164.384541] Call Trace:Jul 4 16:03:38 dumaty kernel: [14164.385717] [<ffffffffc094b02e>] ? i915_gem_object_put_pages_gtt+0x3e/0x260 [i915]Jul 4 16:03:38 dumaty kernel: [14164.386885] [<ffffffffc09490e2>] ? i915_gem_object_put_pages+0x72/0xf0 [i915]Jul 4 16:03:38 dumaty kernel: [14164.388043] [<ffffffffc094de9c>] ? i915_gem_free_object+0xcc/0x280 [i915]Jul 4 16:03:38 dumaty kernel: [14164.389419] [<ffffffffc06a48c6>] ? drm_gem_object_unreference_unlocked+0x76/0x80 [drm]Jul 4 16:03:38 dumaty kernel: [14164.391121] [<ffffffffc06a49e1>] ? drm_gem_object_release_handle+0x51/0x90 [drm]Jul 4 16:03:38 dumaty kernel: [14164.393000] [<ffffffffc06a4a79>] ? drm_gem_handle_delete+0x59/0x80 [drm]Jul 4 16:03:38 dumaty kernel: [14164.394899] [<ffffffffc06a5c2a>] ? drm_ioctl+0x1fa/0x470 [drm]Jul 4 16:03:38 dumaty kernel: [14164.396774] [<ffffffffc06a5150>] ? drm_gem_handle_create+0x40/0x40 [drm]Jul 4 16:03:38 dumaty kernel: [14164.398721] [<ffffffffade2a5b6>] ? current_time+0x36/0x70Jul 4 16:03:38 dumaty kernel: [14164.400573] [<ffffffffadda43ec>] ? shmem_truncate_range+0x1c/0x40Jul 4 16:03:38 dumaty kernel: [14164.402625] [<ffffffffadd2d5a4>] ? __seccomp_filter+0x74/0x270Jul 4 16:03:38 dumaty kernel: [14164.404488] [<ffffffffade220e2>] ? do_vfs_ioctl+0xa2/0x620Jul 4 16:03:38 dumaty kernel: [14164.406324] [<ffffffffadc03337>] ? syscall_trace_enter+0x117/0x2c0Jul 4 16:03:38 dumaty kernel: [14164.408169] [<ffffffffade226d4>] ? SyS_ioctl+0x74/0x80Jul 4 16:03:38 dumaty kernel: [14164.410000] [<ffffffffadc03b7d>] ? do_syscall_64+0x8d/0x100Jul 4 16:03:38 dumaty kernel: [14164.411822] [<ffffffffae22238e>] ? 
entry_SYSCALL_64_after_swapgs+0x58/0xc6Jul 4 16:03:38 dumaty kernel: [14164.413687] Code: 40 00 66 2e 0f 1f 84 00 00 00 00 00 83 f9 03 74 48 41 56 41 55 49 89 fe 41 54 55 31 ed 85 d2 53 41 89 d5 48 89 f3 41 89 cc 7e 25 <8b> 53 18 48 8b 73 10 44 89 e1 4c 89 f7 83 c5 01 e8 9c ff ff ff Jul 4 16:03:38 dumaty kernel: [14164.415765] RIP [<ffffffffadf61e1f>] swiotlb_unmap_sg_attrs+0x1f/0x50Jul 4 16:03:38 dumaty kernel: [14164.417692] RSP <ffffb7c48257bc70>Jul 4 16:03:38 dumaty kernel: [14164.419631] CR2: 0000000000000018Jul 4 16:03:38 dumaty kernel: [14164.421605] ---[ end trace e62295838fbe3e50 ]---Jul 4 16:04:00 dumaty kernel: [14186.946643] GpuWatchdog[1835]: segfault at 0 ip 00005564adf60a02 sp 00007fac7f8656f0 error 6 in chrome[5564a96c6000+7bf3000]Jul 4 16:04:52 dumaty kernel: [14238.504806] BUG: unable to handle kernel paging request at 000000030ea51897Jul 4 16:04:52 dumaty kernel: [14238.507190] IP: [<ffffffffadc98962>] __task_pid_nr_ns+0x42/0x90Jul 4 16:04:52 dumaty kernel: [14238.509452] PGD 0 Jul 4 16:04:52 dumaty kernel: [14238.509464] Jul 4 16:04:52 dumaty kernel: [14238.511711] Oops: 0000 [#3] SMPJul 4 16:04:52 dumaty kernel: [14238.513959] Modules linked in: ctr ccm binfmt_misc rfcomm fuse cmac bnep iTCO_wdt iTCO_vendor_support intel_rapl arc4 x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel snd_hda_codec_hdmi kvm iwlmvm irqbypass intel_cstate mac80211 joydev evdev intel_uncore pcspkr intel_rapl_perf snd_hda_codec_realtek rtsx_pci_ms serio_raw iwlwifi sg hid_multitouch snd_hda_codec_generic memstick uvcvideo lpc_ich cfg80211 videobuf2_vmalloc videobuf2_memops videobuf2_v4l2 videobuf2_core btusb cdc_mbim btrtl cdc_wdm snd_hda_intel videodev btbcm shpchp btintel snd_hda_codec i915 media cdc_ncm cdc_acm snd_hda_core bluetooth usbnet drm_kms_helper mii snd_hwdep drm mei_me snd_pcm snd_timer mei i2c_algo_bit thinkpad_acpi wmi nvram snd soundcore ac rfkill battery video button parport_pc ppdev lp parport ip_tables x_tablesJul 4 16:04:52 dumaty kernel: [14238.521475] autofs4 ext4 crc16 jbd2 crc32c_generic fscrypto ecb mbcache algif_skcipher af_alg usbhid hid dm_crypt dm_mod sd_mod crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel rtsx_pci_sdmmc mmc_core aesni_intel aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd psmouse ahci libahci i2c_i801 i2c_smbus libata xhci_pci scsi_mod ehci_pci xhci_hcd ehci_hcd rtsx_pci e1000e mfd_core ptp usbcore pps_core usb_common thermalJul 4 16:04:52 dumaty kernel: [14238.529141] CPU: 2 PID: 8675 Comm: top Tainted: G B D W 4.9.0-16-amd64 #1 Debian 4.9.272-1Jul 4 16:04:52 dumaty kernel: [14238.531734] Hardware name: LENOVO 20ARA0YL00/20ARA0YL00, BIOS GJET77WW (2.27 ) 05/20/2014Jul 4 16:04:52 dumaty kernel: [14238.534341] task: ffff9cf60fe45100 task.stack: ffffb7c488158000Jul 4 16:04:52 dumaty kernel: [14238.537059] RIP: 0010:[<ffffffffadc98962>] [<ffffffffadc98962>] __task_pid_nr_ns+0x42/0x90Jul 4 16:04:52 dumaty kernel: [14238.539736] RSP: 0018:ffffb7c48815bd78 EFLAGS: 00010286Jul 4 16:04:52 dumaty kernel: [14238.542405] RAX: 0000000000000508 RBX: ffff9cf64a93de40 RCX: ffff9cf64d816e00Jul 4 16:04:52 dumaty kernel: [14238.545097] RDX: 000000030ea51067 RSI: 0000000000000004 RDI: ffff9cf6354d1588Jul 4 16:04:52 dumaty kernel: [14238.547772] RBP: ffff9cf64d816e00 R08: 000000000000044c R09: 0000000000000000Jul 4 16:04:52 dumaty kernel: [14238.550439] R10: 0000000000000007 R11: ffff9cf64aa452a6 R12: ffff9cf6354d1080Jul 4 16:04:52 dumaty kernel: [14238.553111] R13: ffffffffae61bb79 R14: 0000000000000066 R15: ffff9cf64da0c840Jul 4 16:04:52 
dumaty kernel: [14238.555792] FS: 00007fa93fea2280(0000) GS:ffff9cf65e280000(0000) knlGS:0000000000000000Jul 4 16:04:52 dumaty kernel: [14238.558422] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033Jul 4 16:04:52 dumaty kernel: [14238.560971] CR2: 000000030ea51897 CR3: 000000030b9c2000 CR4: 0000000000160670Jul 4 16:04:52 dumaty kernel: [14238.563467] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000Jul 4 16:04:52 dumaty kernel: [14238.565880] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400Jul 4 16:04:52 dumaty kernel: [14238.568210] Stack:Jul 4 16:04:52 dumaty kernel: [14238.570452] ffffffffade856ff ffffffffae83eee0 ffffffffae845d20 ffff9cf635772800Jul 4 16:04:52 dumaty kernel: [14238.572715] 00000000000003ff 000000000000044c 0000000000000040 ffffb7c48815bed8Jul 4 16:04:52 dumaty kernel: [14238.574970] ffffb7c48815beec 0000000000000001 0000000000000000 0000000000000000Jul 4 16:04:52 dumaty kernel: [14238.577206] Call Trace:Jul 4 16:04:52 dumaty kernel: [14238.579412] [<ffffffffade856ff>] ? proc_pid_status+0x46f/0x9f0Jul 4 16:04:52 dumaty kernel: [14238.581621] [<ffffffffaddea428>] ? __kmalloc+0x188/0x580Jul 4 16:04:52 dumaty kernel: [14238.583821] [<ffffffffade7ff51>] ? proc_single_show+0x51/0x80Jul 4 16:04:52 dumaty kernel: [14238.586022] [<ffffffffade34326>] ? seq_read+0x106/0x400Jul 4 16:04:52 dumaty kernel: [14238.588217] [<ffffffffade0d6e1>] ? vfs_read+0x91/0x130Jul 4 16:04:52 dumaty kernel: [14238.590405] [<ffffffffade0ebfa>] ? SyS_read+0x5a/0xd0Jul 4 16:04:52 dumaty kernel: [14238.592578] [<ffffffffadc03b7d>] ? do_syscall_64+0x8d/0x100Jul 4 16:04:52 dumaty kernel: [14238.594739] [<ffffffffae22238e>] ? entry_SYSCALL_64_after_swapgs+0x58/0xc6Jul 4 16:04:52 dumaty kernel: [14238.596894] Code: 08 05 00 00 74 1a 83 fe 04 74 0e 89 f6 48 8d 04 76 48 8d 04 c5 08 05 00 00 48 8b bf d0 04 00 00 48 01 c7 48 8b 0f 48 85 c9 74 1f <8b> b2 30 08 00 00 31 c0 3b 71 04 77 0d 48 c1 e6 05 48 01 f1 48 Jul 4 16:04:52 dumaty kernel: [14238.599315] RIP [<ffffffffadc98962>] __task_pid_nr_ns+0x42/0x90Jul 4 16:04:52 dumaty kernel: [14238.601602] RSP <ffffb7c48815bd78>Jul 4 16:04:52 dumaty kernel: [14238.603887] CR2: 000000030ea51897Jul 4 16:04:52 dumaty kernel: [14238.606195] ---[ end trace e62295838fbe3e51 ]--- I have run memtest, memtest86+, stress-ng (cpu, hdd, vm stressors) forhours without triggering a failure. Turning swap off made thingsstable for almost a day. Due to this and because smartctl -t short doesn't seem to finish at all, I ordered a replacement ssd. Shortlyafter the above failures happened. I believe all crashes were whilewatching youtube (there is not a lot of other use, though). glxgearsdidn't trigger failures. Any ideas what could be causing this and how to diagnose it? | Since you're using Bash: #!/bin/bashword=foobarfor ((i=0; i < ${#word}; i++)); do printf "char: %q\n" "${word:i:1}" done ${var:p:k} gives k characters of var starting at position p , ${#var} is the length of the contents of var . printf %q prints the output in an unambiguous format, so e.g. a newline shows as $'\n' . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/656989",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/480202/"
]
} |
657,060 | Given a nested directory, I would like to list all tif files with extension .tif , .TIF , .tiff , .TIFF . Currently I'm using find . -type f -iname *.TIF -print ; find . -type f -iname *.TIFF -print; Using -iname allows me to be case-insensitive but it goes through the directory twice to get files with .tif and .tiff . Is there a better way to do this? Perhaps with brace expansion? Why not *.tif* ? In some cases, my directories might have auxiliary files with extension .tif.aux.xml alongside the tiffs. I'd like to ignore those. | find supports an “or” disjunction, -o : find . -type f \( -iname \*.tif -o -iname \*.tiff \) This will list all files whose name matches *.tif or *.tiff , ignoring case. -print is the default action so it doesn’t need to be specified here. * , ( , and ) are escaped so that they lose their significance for the shell. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/657060",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/480275/"
]
} |
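If GNU find is available, a single case-insensitive regular expression is another way to cover both extensions; -iregex must match the whole path, so names like foo.tif.aux.xml stay excluded. A sketch:
    find . -type f -regextype posix-extended -iregex '.*\.tiff?'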
657,072 | Hi I have a md file containing the below string and I want to write a regular expression for this. Conditions The id will be anything. The type will be youtube,vimeo etc ID and type are mandatory fields {% include video.html id="T3q6QcCQZQg" type="youtube" %} So I want to check the string is in a proper format in bash script otherwise will through an error. Current code look like this . The below code is working for me without an ID. But I need to add a regex for id as well IFS=$'\n' read -r -d '' -a VIDEOS < <( grep "video.html" "$ROOT_DIR$file" && printf '\0' )#output => {% include video.html id="T3q6QcCQZQg" type="youtube" %}for str in "${VIDEOS[@]}" do if [[ "$str" =~ ({%)[[:space:]](include)[[:space:]](video.html)[[:space:]](type="youtube"|type="vimeo")[[:space:]](%})$ ]]; then flag="dummy" echo "Invalid format:: $second" fidone Please help | find supports an “or” disjunction, -o : find . -type f \( -iname \*.tif -o -iname \*.tiff \) This will list all files whose name matches *.tif or *.tiff , ignoring case. -print is the default action so it doesn’t need to be specified here. * , ( , and ) are escaped so that they lose their significance for the shell. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/657072",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/480295/"
]
} |
657,086 | I am using solutions from Test if a command outputs an empty string . In addition to those solutions, I want to print command's output if its not empty. I was wondering would it be possible to print command's output if its not empty by calling the command only one time. I can do it by re-calling the same command inside the if condition , but now second call may consume additional CPU usage, since we already obtained its result. Possible approach could be, where the same command is called two times: if [[ $(ls -A) ]]; then ls -Aelse echo "no files found"fi output=$(git diff --ignore-blank-lines --color-words HEAD | tr -d ' \n\r\t ' | wc -c)if [ "$output" -gt 0 ]; then git diff --ignore-blank-lines --color-words HEAD else echo "diff is empty"fi | Note: The question was originally tagged bash , but OP has since stated that the shell used is zsh . This will invalidate the answer somewhat. You can store the output of a command in a variable and test if that variable is empty: output="$(your_command)"if [[ -n $output ]]then printf -- "%s\n" "$output"else printf -- "No output\n"fi The -n test will check if $output is a non-empty string. If it is, the content of $output will be printed. Otherwise, a message that no output was produced will be printed. Notice that the quotes around the "$output" are necessary. Notice also that (as mentioned by @ilkkachu), trailing newlines will be removed by the command substitution, so if you care about these, a different approach is necessary. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/657086",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/198423/"
]
} |
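Applied to the git example from the question, a sketch of the same pattern with a single invocation and no wc; as the answer notes, trailing newlines are stripped by the command substitution:
    output=$(git diff --ignore-blank-lines --color-words HEAD)
    if [[ -n $output ]]; then
        printf '%s\n' "$output"
    else
        echo "diff is empty"
    fi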
657,341 | I'd like to extract only a specific value from command output. The string that the command returns is something like this: Result: " 5 Secs (11.2345%) 60 Secs (22.3456%) 300 Secs (33.4567%)" And I want to filter only the "60 Secs" value between () 22.3456% How can I do that? | If that is the exact string that the command returns, then sed will work. command_output | sed 's/.*60 Secs..\(.*\)..300.*/\1/' That prints everything between 60 Secs ( and ) 300 . Result: 22.3456% | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/657341",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/480595/"
]
} |
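If GNU grep with PCRE support is available, an alternative sketch that keeps only the text between 60 Secs ( and the closing parenthesis (command_output is the same placeholder used in the answer):
    command_output | grep -oP '60 Secs \(\K[^)]+'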
657,359 | In Ed I can do a search to replace all blank lines as follows: g/^$/d This deletes all blank lines. But what if I wish to delete two or more blank lines and keep 1? For example: Line 1Line 2Line 3 Becomes: Line 1Line 2Line 3 | Adapted from the Vim Wiki : ed -s file <<EOFv/./.,/./-1jwqEOF v/./ : select all lines that don't match the regex . (i.e., select all blank lines). Execute the following action on them: .,/./-1j : the j oin command is applied from a selected line ( . ) up to the the line above the next non-blank line ( /./-1 ). w q : save and quit. You could use %p Q instead to only display the output without modifying the file. Although equally valid, my original suggestion was more complicated: printf '%s\n' 'g/^$/.,/./-1d\' 'i\' '' w q | ed -s file This one uses two commands for a single g lobal command (usually the command list consists of a single command), which requires prefixing newlines with backslashes for its command list. g/^$/ : select all blank lines. .,/./-1d\ : d elete from the selected line ( . ) up to the line above the next non-blank line ( /./-1 ). This would delete all blank lines, so 'i\' '' : i nsert a new blank line above. It is equivalent to use here-docs or Printf to feed Ed. Just pick the one you like best. Reference: POSIX Ed . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/657359",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321709/"
]
} |
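Outside of ed, GNU and BSD cat can do the same squeeze in one pass; this only prints the result rather than editing the file in place. A sketch:
    cat -s file > file.squeezed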
657,496 | I would like to replace number between patterns with multiplicated numbers and print the all the lines.The file is a tree file in newick format and consisted only a single line. My targets are all the numbers after ) and before : . I wanted to multiply all the numbers in between the two symbols with 100. file: ((((A_8:0.000846,(A_5:0.002449,(A_1:1e-06,((A_4:1e-06,((A_7:1e-06,A_6:0.001061)0.714000:1e-06,A_3:1e-06)0.314500:1e-06)0.358667:1e-06,A_2:1e-06)0.361000:1e-06)0.434800:1e-06)0.683500:0.001619)0.888571:0.001931,A_9:0.00069)0.688471:0.000691,... The easiest way to me seemed to be splitting the file by replacing all the ":" symbols with a new line first. So all my target numbers are now in separate lines and appear after ) . Then, I was using the awk script below to multiply the target numbers with 100, but didn't manage to keep the lines without my target number though. script: sed 's/:/\n/g' df9.tree | awk -F")" '{OFS=")"} $2=$2*100 {print $0}'sed 's/:/\n/g' df9.tree | awk '$NF ~/)/ {$NF *=100}1' How can I multiply the numbers after ) and print the entire file in this case? Or is there other simpler way to directly look for the numbers lie between : and ) , multiply them by 100 and print the whole file? Update:Expected output ((((A_8:0.000846,(A_5:0.002449,(A_1:1e-06,((A_4:1e-06,((A_7:1e-06,A_6:0.001061)71.4000:1e-06,A_3:1e-06)31.4500:1e-06)35.8667:1e-06,A_2:1e-06)36.1000:1e-06)43.4800:1e-06)68.3500:0.001619)88.8571:0.001931,A_9:0.00069)68.8471:0.000691,...) | $ perl -pe 's/\)([-0-9.]+):/sprintf ")%.4f:", $1 * 100/eg' df9.tree((((A_8:0.000846,(A_5:0.002449,(A_1:1e-06,((A:1e-06,((A_7:1e-06,A:0.001061)71.4000:1e-06,A:1e-06)31.4500:1e-06)35.8667:1e-06,A:1e-06)36.1000:1e-06)43.4800:1e-06)68.3500:0.001619)88.8571:0.001931,A:0.00069)68.8471:0.000691,... replaces all numbers (defined as a sequence of one-or-more digits, periods, or minus characters) immediately following a ) character and terminated by a : character with the number multiplied by 100. e.g. )0.714000: gets changed to )71.4000: It uses perl's /e regex evaluation modifier to execute perl code in the RHS of the s/// operator. See man perlop and search for s\/PATTERN for details. sprintf is used to format the number to have 4 decimal places. If the number between ) and : could be in either plain decimal notation ("0.714000") or "C float"-style scientific notation ("1e-06"), the regex needs to be just a tiny bit more complicated to match all the possible variations: $ perl -pe 's/\)(([+-]?)(?=\d|\.\d)\d*(\.\d*)?([Ee]([+-]?\d+))?):/sprintf ")%.4f:", $1 * 100/eg' df9.tree((((A_8:0.000846,(A_5:0.002449,(A_1:1e-06,((A_4:1e-06,((A_7:1e-06,A_6:0.001061)71.4000:1e-06,A_3:1e-06)31.4500:1e-06)35.8667:1e-06,A_2:1e-06)36.1000:1e-06)43.4800:1e-06)68.3500:0.001619)88.8571:0.001931,A_9:0.00069)68.8471:0.000691,...) The following may also work, but there may be some numbers it won't match: perl -pe 's/\)([-0-9.eE+]+):/sprintf ")%.4f:", $1 * 100/eg' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/657496",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/345181/"
]
} |
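To modify the tree file in place instead of printing to standard output, perl's -i switch can be added to the same one-liner; -i.bak keeps a backup of the original. A sketch based on the first command from the answer:
    perl -i.bak -pe 's/\)([-0-9.]+):/sprintf ")%.4f:", $1 * 100/eg' df9.tree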
657,536 | I want to run a command that requires sudo , but non-interactively, so typing in a password isn't an option. In my specific case I want to create an automator/shortcut action on my phone to execute this script on my server via an ssh login. In my case, I want to start a virtual machine with virsh start , which I know I can configure to allow a non-root user to start the machine, but I'm curious about this solution in general, especially in a case where a non-root option isn't available. I'm also picturing that the unprivileged user only needs to be able to run a single script that I define, not an arbitrary command. In other words, the user simply needs to be able to trigger the script to run - they do not supply any arguments, and even if they found a way to do so the arguments would be ignored. Also in my case the server is only used by me, and is behind a firewall. It has internet connectivity (In that it can download things), but cannot be accessed from the internet. I either have to be on my LAN or connected via VPN to reach it, and I only expect to use this script while specifically at home. Still - I'm concerned about learning "good practice" here. The server itself is running Debian Testing. My concern is allowing passwordless sudo seems risky. Is there a better way to do this where a theoretically unprivileged account can trigger a privileged script or command to run? Should I just use a script running as root to monitor for some file to be created every minute or so and have the script just touch that file? Can I create a user who can only sudo that script? | You can configure sudo to allow specific users (or groups) to run named commands; in /etc/sudoers (use visudo to edit it): user ALL = NOPASSWD: /path/to/command will allow user user to run sudo /path/to/command without being prompted for a password. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/657536",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/480794/"
]
} |
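A sketch for the virsh case mentioned in the question, kept in a drop-in file rather than the main sudoers; the file name, user name, VM name and the /usr/bin/virsh path are assumptions to adapt. Listing the arguments restricts the rule to exactly that command line:
    # edit safely with: sudo visudo -f /etc/sudoers.d/vm-start
    user ALL = (root) NOPASSWD: /usr/bin/virsh start myvm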
657,648 | After learning (more or less) some useful discussions about pipes like Get exit status of process that's piped to another and Exit when one process in pipe fails I still can't avoid starting the second command when the first command fails. Am I missing a fundamental detail about pipes? So for example $ somecommand | tar -T - -czf /tmp/someProject.tar.gz shouldn't create the almost empty tar.gz file if somecommand didn't work properly and produced just a few error messages instead of the expected file list. | Yes, there is a bit of a fundamental detail about pipes there. The point of a pipeline is to run the two or more commands in parallel, which avoids having to store all the data in full and can save time in that all processes can work at the same time. This by definition means that the second command starts before the first exits, so the exit status for the first isn't available yet. The simple workaround is to use a temporary file instead. It shouldn't be much of a problem with storage here since we're passing just the list of file names, and not the data itself. E.g.: tmp=$(mktemp)if somecommand > "$tmp"; then tar -T - -czf /tmp/someProject.tar.gz < "$tmp"firm -f "$tmp" Or indeed like terdon comments, just let the tar run, and remove the tar file afterwards if somecommand failed. But if somecommand produces a partial but significant list of files before failing, that can still cause some amount of unnecessary I/O when creating the to-be-removed archive. Also, at least in GNU tar, by default -T does some processing of quotes and lines that look like command line options, so if you have nasty filenames, you may need to take that into account, or look into --verbatim-files-from , or --null . Similar issues might exist with other tar implementations. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/657648",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83728/"
]
} |
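A bash-specific variant of the "remove the archive afterwards" idea from the answer, using PIPESTATUS to check the first command's exit status (it must be read immediately after the pipeline). A sketch:
    somecommand | tar -T - -czf /tmp/someProject.tar.gz
    if [ "${PIPESTATUS[0]}" -ne 0 ]; then
        rm -f /tmp/someProject.tar.gz
    fi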
657,776 | I'm trying to find a way to check if tmux server is running or not in order to update my zsh prompt with that information. when tmux server is alive > tmux ls0: 1 windows (created Sat Jul 10 13:47:36 2021) (attached) so I can grep that output, correct? grepping that output: > tmux ls | grep -i "windows"0: 1 windows (created Sat Jul 10 13:47:36 2021) windows is colored in red (for me), or whatever color you have in your terminal, so the command worked. I chose windows because the word will always appear in the tmux ls output. now checking if the server is offline After killing the tmux server: tmux kill-server now I check to see if server is alive using tmux ls > tmux lsno server running on /tmp/tmux-1000/default normal output, as expected. The grep seems to fail when when the tmux server is dead; check this strange thing out: > tmux ls | grep -i "windows"no server running on /tmp/tmux-1000/default exit code of this command: > echo $?1 meaning that grep failed to grab the output. another test case showing strangeness > tmux ls > file.txtno server running on /tmp/tmux-1000/default It's always printing no server ... bla bla on the screen, no matter what you run. When catting the file: > cat file.txt nothing there How do I check if tmux server is running or not in order to update my zsh prompt with that information? NOTE: variable $TMUX is useless for me, because I don't want to know if I'm in a session while already being in a session, that is pointless. I want to know if the server is alive or dead , no matter if I'm attached to a session or not. EXTRA (for zsh users only) if you want to put tmux server status in your zsh prompt here's what i got: to get this just add these lines to your .zshrc function update_prompt_and_venv () { if tmux ls &> /dev/null; then tmux_server="%{$terminfo[bold]$fg[black]%}(%{$terminfo[bold]$fg[green]%}tmux%{$terminfo[bold]$fg[black]%}) " fi PROMPT="$tmux_server ... $your_cwd> "}precmd_functions+=(update_prompt_and_venv) contents: a function to be called every time your press enter in terminal a variable with tmux server status the prompt updated in the function make sure you add the function to precmd_functions in order to by called before every entered command and when you press enter after you closed the server you will see the updates immediately | Your confusion comes from the fact that tmux , like all other utilities, writes error messages and other diagnostic messages to the standard error stream rather than to the standard output stream . With the > redirection, you only redirect the standard output stream, not the error messages. Likewise, when you pipe the output of a command, you only pipe the standard output stream, not the error messages. This is by design. However you don't need to grep anything here and can instead rely on investigating the exit status of tmux itself. There is a has-session sub-command of tmux that exists solely to tell you whether a session exists (or if a specific session exists): has-session [ -t target-session ] (alias: has ) Report an error and exit with 1 if the specified session does notexist. If it does exist, exit with 0. This means that you could use if tmux has-session 2>/dev/null; then echo session existselse echo no sessionsfi This relies on investigating the exist status of tmux has-session and does not require parsing the output of the command for specific strings. We redirect the error stream to /dev/null to discard it using 2>/dev/null . 
The tmux command will output an error message there if there are no sessions available, but we're not interested in that message. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/657776",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/461559/"
]
} |
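For the prompt use case from the question, a reduced sketch of the same has-session test; note it also clears the variable when no server is running, which the EXTRA snippet in the question never does:
    if tmux has-session 2>/dev/null; then
        tmux_server="(tmux) "
    else
        tmux_server=""
    fi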
657,804 | Recently I started the process of gradual switching my shell to nu and because of it I thought about assigning its path to SHELL in cron. Having read a good part of the manual at man 5 crontab , I took a look at PATH and copied the convention of using : in between the values attempting to assign two shells to SHELL: SHELL=/bin/bash:/home/jerzy/.cargo/bin/nu It does not work, the scripts from my crontab are not doing their job. Whereas either SHELL=/bin/bash and SHELL=/home/jerzy/.cargo/bin/nu works fine. Can I assign two shells to SHELL? Does it even make sense to do so? | No, you can’t assign two shells to SHELL : cron needs to know which shell to start, there can only be one. The SHELL variable in crontab doesn’t specify possible shells, it specifies the shell to use. cron reads the value in SHELL , if any, and uses that as the command to run; it doesn’t interpret : or any other symbol. A fallback can’t work either: if something fails with nu , cron can’t know whether it failed because of nu or something else. Most scripts are written for a given interpreter (specified in their shebang), you can’t try running them with one and again with another. Likewise, crontab entries are written with the specified SHELL in mind (if not the default /bin/sh ). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/657804",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/346682/"
]
} |
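A sketch of the usual workaround: leave SHELL pointing at a single shell and start nu explicitly in the job lines that need it. The schedule and script path are placeholders, and this assumes nu accepts a script file as its argument:
    SHELL=/bin/bash
    # every 5 minutes, run a nushell script with nu itself
    */5 * * * * /home/jerzy/.cargo/bin/nu /home/jerzy/scripts/task.nu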
657,943 | On our RHEL 7.6 server we create the following folder # mkdir -p /var/data/data-logs_temp the second part is to move all content under /var/data/ to /var/data/data-logs_temp by: # mv /var/data/* /var/data/data-logs_temp but the output that we get from the mv command is: mv: cannot move ‘/var/data/data-logs_temp’ to a sub directory of itself, ‘/var/data/data-logs_temp/data-logs_temp’ The mv command is correct about this. But is it possible to tell mv to ignore this, as we need the exit code from the mv command to be 0? Or is there any other option that ignores the attempt to move the directory into itself? | You can use bash's "extended glob" syntax to exclude the data-logs_temp subdirectory from the list of files to be moved: shopt -s extglobmv /var/data/!(data-logs_temp) /var/data/data-logs_temp See Greg's wiki and this question for more info about extended globs. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/657943",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
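An alternative that avoids extglob, using GNU find and mv -t (both present on RHEL 7); it moves every top-level entry of /var/data except the target directory and exits 0 when nothing goes wrong:
    find /var/data -mindepth 1 -maxdepth 1 ! -name data-logs_temp \
         -exec mv -t /var/data/data-logs_temp {} +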
658,001 | I have a playbook in which one of the tasks gathers info about running docker containers on a specific host. - name: Gather info hosts: "{{ hosts }}" gather_facts: no tasks: - name: Check all running containers become: yes command: docker ps --format "{{ \.Names }}" register: dkr_ps - debug: msg="{{dkr_ps}}" But somehow the docker command run by the command module keeps throwing the following error: TemplateSyntaxError: unexpected char u'\\' at 23 line 1 I assume I'm not escaping correctly? | You have two conflicting templates: the template expected by the docker command, and the Jinja2 templates used in Ansible. Jinja2 is trying to interpret {{ .Names }} which isn’t valid; that’s where the error message is coming from. The general rule of thumb is to escape the Docker template by turning the template symbols into valid Jinja2 expressions: command: docker ps --format "{{ '{{' }} .Names {{ '}}' }}" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/658001",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/312758/"
]
} |
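Another commonly used escape is to wrap the whole Docker template in a Jinja2 string literal, which renders to the literal braces; a sketch of just the task line, with the rest of the task unchanged:
    command: docker ps --format "{{ '{{ .Names }}' }}"
Either form avoids the TemplateSyntaxError, because Jinja2 never tries to evaluate .Names itself.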
658,031 | I'm a newbie to Ubuntu/Linux world, so don't be hard on me. I'm using Ubuntu on Windows with WSL2 and the bash shell. Sometimes I want to copy files from Windows to Ubuntu or the other way around. I found some tutorials online on how to do this, and the simplest way is to navigate to /mnt/c/Users/<your_user> . I want to create a variable like tilde ~ that would mean my Windows user's home directory. I thought about using double tilde ~~ since I saw that it isn't reserved or something. I created an executable file in /bin named ~~ which would echo the necessary path. But that is cumbersome to use. Is there a way to create a special character like ~ ? If not is there a better way to do this? The end result I'd like is to be able to do something like: cp ~~/Desktop/somefile.txt ~/somefile.txt | My solution under WSL is to create a symlink in my home directory to my Windows profile. I personally use: ln -s /mnt/c/Users/<username> ~/winhome But you could shorten it. Under bash , at least ~~ appears to work as the symlink name as well, but I'm worried that in certain cases it could be misinterpreted. If you want to try that route: ln -s /mnt/c/Users/<username> ~/~~ Then accessing it becomes something like: cp ~/myfile ~/~~/Documents/ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/658031",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/481253/"
]
} |
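If you would rather not hard-code the Windows user name, a sketch that asks Windows for it at link-creation time; this assumes cmd.exe is reachable from WSL (it normally is via /mnt/c) and that wslpath is available, and it is a convenience on top of the answer, not something from it:
    winhome=$(wslpath "$(cmd.exe /c 'echo %USERPROFILE%' 2>/dev/null | tr -d '\r')")
    ln -s "$winhome" ~/winhome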
658,051 | I have a file with several columns. I want to remove entire rows from this file at which the first and the second columns show the same value. For instance, my file is as the following: Variant rsid chr pos1:10177_A_AC rs367896724 1 101771:10352_T_TA rs201106462 1 103521:10511_G_A rs534229142 1 105111:10616_CCGCCGTTGCAAAGGCGCGCCG_C 1:10616_CCGCCGTTGCAAAGGCGCGCCG_C 1 10616 I want to remove the line in which the value at Variant column is equal to rsid column, so I would like to obtain a final file such as the following: Variant rsid chr pos1:10177_A_AC rs367896724 1 101771:10352_T_TA rs201106462 1 103521:10511_G_A rs534229142 1 10511 I've tried to run the following commands: awk '$1==$2{sed -i} input.file > output.fileawk -F, '$1==$2' input.file > output.file But none of them worked. How could I solve it by using awk and/or sed ? | You already have the best, general answer , but in your specific case, you can also simply select all lines where the second field starts with rs : $ awk '$2 ~ /^rs/' fileVariant rsid chr pos1:10177_A_AC rs367896724 1 101771:10352_T_TA rs201106462 1 103521:10511_G_A rs534229142 1 10511 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/658051",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/481271/"
]
} |
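The general form this answer alludes to, which works even when the duplicated value does not start with rs, simply keeps rows whose first two whitespace-separated fields differ (the header passes because Variant differs from rsid). A sketch:
    awk '$1 != $2' input.file > output.file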
658,173 | I am not too familiar with Unix and am working on a very large CSV file right now. Here is an example: ABC1,ABC2,ABC3,DDD,EEE,FFF1,2,3,4,5,61,2,3,4,5,6 How can I extract all columns that start with ABC ? | The following awk program will do. Store it in a file, e.g. extract.awk : #!/bin/awk -fBEGIN { FS=OFS=","}FNR==1 { for (i=1;i<=NF;i++) { if (index($i,startstr)==1) cols[++ncol]=i; }}{ for (j=1;j<=ncol;j++) printf("%s%s",$(cols[j]),j==ncol?ORS:OFS) } You would then call it as ~$ awk -f extract.awk -v startstr="ABC" input.csvABC1,ABC2,ABC31,2,31,2,3 where you define the string you are looking for in the variable startstr . This will first set the input and output field separators to , . In the first (header) line it will check if any column names start with your search string, which is stored in the variable startstr . If so, the column number will be added to an array cols of "columns to print". For each line (including the first), it will then print the value of all columns stored in cols , followed by either the field separator or the record separator (defaults to newline) if it is the last column to print. Note that we use a literal string match using the index() function of awk rather than a regular-expression based match, in case your actual search string contains characters that are special in the context of regular expressions. If you must use a regular-expression base search, change the if (index($i,startstr)==1) cols[++ncol]=i; to if ($i ~ startstr) cols[++ncol]=i but beware that all characters in startstr are then subject to interpretation as regular expression tokens, which can lead to unexpected behavior if one is not careful. For the example you mentioned, startstr would be ^ABC . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/658173",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/481385/"
]
} |
658,199 | I have the following code: testval="aaa_bbb_ccc," testval+="_ddd"echo ${testval} But, I need to get rid of the comma after " ccc ". It will always be the last character of ${testval} before I add the " _ddd ". There could be other commas that need to remain in the string. But so far I can't find anything that works. e.g. testval="aaa_bbb_ccc," ; testval=${testval} | rev | cut -c 2- | rev ; testval+="_ddd" ; echo ${testval} or testval="aaa_bbb_ccc," ; testval=${testval} | sed 's/.$//' ; testval+="_ddd" ; echo ${testval} both result in: aaa_bbb_ccc,_ddd I also tried: testval="aaa_bbb_ccc," ; testval=$(${testval} | rev | cut -c 2- | rev) ; testval+="_ddd" ; echo ${testval}` and testval="aaa_bbb_ccc," ; testval=$(${testval} | sed 's/.$//') ; testval+="_ddd" ; echo ${testval} both result in: -su: {testval}: command not found** Is there not the equivalent or rtrim of some similar function? that simply trims the last n characters from a string/variable | Is there not the equivalent or rtrim of some similar function? that simply trims the last n characters from a string/variable Yes there is: ${parameter%word} ${parameter%%word} Remove matching suffix pattern. The % form removes the shortest match while the %% removes the longest match. The corresponding "ltrim"s are ${parameter#word} and ${parameter##word} . So for example $ testval="aaa_bbb_ccc,"$ testval=${testval%,}$ echo ${testval}aaa_bbb_ccc For your case $ testval="aaa_bbb_ccc,"$ testval=${testval%,}$ testval+="_ddd"$ echo ${testval}aaa_bbb_ccc_ddd You can change , to ? to remove any single character; or use ??? to replace n=3 trailing characters and so on. In bash, word can be a KSH-style extended glob pattern if the extglob shell option is set. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/658199",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102428/"
]
} |
658,270 | I often want a temporary directory where I can unpack some archive (or create a temporary project) and look around some files. It is unpredictable in advance how long a particular directory may be needed. Such directories often clutter the home directory, /tmp , and project directories. They often have names like weak passwords ( qqq , 1 , test ) that become undescriptive a month later. Is there some shell command or external program that can help manage such throw-away directories, so that they get cleaned up automatically when I lose interest in them, where I don't need to invent a name for them, but that can be given a name and made persistent easily? If there is no such tool, is it a good idea to create one? | It doesn’t quite cover all the features you mention (easily making the temporary directory persistent), but I rather like Kusalananda’s shell for this. It creates a temporary directory, starts a new shell inside it and cleans the temporary directory up when the shell exits. Before the shell exits, if you decide you want to keep the temporary directory, send a USR1 signal to shell ; typically kill -USR1 $PPID When you exit, shell will tell you where to find the temporary directory, and you can move it somewhere more persistent. If there is no such tool, is it a good idea to create one? This is the best kind of tool to create — you already know it would be useful for you. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/658270",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17594/"
]
} |
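A minimal sketch of such a tool as a shell function; this is not Kusalananda's script, just an illustration of the idea, and the KEEP marker file is an invented convention for marking a directory as persistent:
    tmpsh() {
        local d
        d=$(mktemp -d) || return
        ( cd "$d" && "${SHELL:-/bin/sh}" )    # work inside a throw-away directory
        if [ -e "$d/KEEP" ]; then
            printf 'kept: %s\n' "$d"          # touch KEEP inside the directory to keep it
        else
            rm -rf "$d"
        fi
    }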
658,271 | This is a really stupid question but I can't find an answer anywhere. So I have a command that prints out lines of text like this: htopkvantumalacritty And I need to check for a line, not a substring | It doesn’t quite cover all the features you mention (easily making the temporary directory persistent), but I rather like Kusalananda’s shell for this. It creates a temporary directory, starts a new shell inside it and cleans the temporary directory up when the shell exits. Before the shell exits, if you decide you want to keep the temporary directory, send a USR1 signal to shell ; typically kill -USR1 $PPID When you exit, shell will tell you where to find the temporary directory, and you can move it somewhere more persistent. If there is no such tool, is it a good idea to create one? This is the best kind of tool to create — you already know it would be useful for you. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/658271",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/391386/"
]
} |
658,290 | Before the attempt to format a flash drive: $ sudo fdisk -l......Disk /dev/sdc: 7.32 GiB, 7864320000 bytes, 15360000 sectorsDisk model: DataTraveler 3.0Units: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: gptDisk identifier: F89B0513-2DBE-8D40-BCDF-22BE8A5C5E45Device Start End Sectors Size Type/dev/sdc1 2048 15359966 15357919 7.3G Linux filesystem During the attempt: $ sudo mkfs.ntfs -I /dev/sdc1 Cluster size has been automatically set to 4096 bytes.Initializing device with zeroes: 100% - Done.Creating NTFS volume structures.mkntfs completed successfully. Have a nice day. After the attempt: $ sudo fdisk -l......Disk /dev/sdc: 7.32 GiB, 7864320000 bytes, 15360000 sectorsDisk model: DataTraveler 3.0Units: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: gptDisk identifier: F89B0513-2DBE-8D40-BCDF-22BE8A5C5E45Device Start End Sectors Size Type/dev/sdc1 2048 15359966 15357919 7.3G Linux filesystem How is it possible? What am I doing wrong? | Here's what you're missing. There's a partition table and there are file systems - they are related but different. You can perfectly have partitions type Linux filesystem (MBR notation Linux ) formatted as NTFS and partitions type Microsoft basic data (MBR notation HPFS/NTFS/exFAT ) formatted as e.g. ext4 . mkfs.* utilities simply format the storage, they never touch the partition table. To change the partition type in the partition table you need to use any of these tools: fdisk , parted , sfdisk , gdisk , etc. Linux GUI applications like GParted or KDE Partition Manager will set the correct partition type automatically when you create a new partition in the free space of your disk. If you come from Windows then its partition tools do that automatically. Lastly Windows normally will refuse to mount a NTFS formatted partition when its type is not set to Microsoft basic data and if you have a partition type Microsoft basic data but it contains any other filesystem or its contains just binary zeros Windows will offer to format it. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/658290",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164309/"
]
} |
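A sketch of actually changing the type after formatting, assuming sgdisk (from the gdisk package) is installed; 0700 is sgdisk's code for Microsoft basic data, and the device and partition number match the question:
    sudo sgdisk --typecode=1:0700 /dev/sdc
    sudo partprobe /dev/sdc      # ask the kernel to re-read the partition table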
658,319 | Intro This is a question that seeks an extension of the answers at how-to-restrict-an-ssh-user-to-only-allow-ssh-tunneling Problem Statement I have tried to implement the suggestion at the reference to create a user with no shell access but I could not connect to a tunnel in the name of that user. Requirement I want to be able to have an unprotected remote computer (machine A) initiate a reverse tunnel back to my firewall. Machine A is a headless box with no user input. It is out in the wild and may be behind a firewall. My firewall (MyFW) is a linux machine and I have CLI root access. I need Machine A to initiate a reverse ssh tunnel back to MyFW. Usually a local tunnel will be created on a user Machine C on the protected side of MyFW. The local ssh tunnel would connect to the reverse tunnel at MyFW as required. If someone breaks into Machine A, and finds the reverse tunnel, I don't want them to be able use the reverse tunnel to gain access beyond the tunnel entrance (port) at MyFW. What have I tried I have used ssh and tunnels for some years, so I understand their general config and operation. For testing I setup 2 machines on my desk, machines A & B. Both are linux machines, neither is a firewall. The aim being to get the simplest setup working. I started by creating ssh keys on machine A with: ssh-keygen In the usual way. adding a user "tunnel" to the machine B acting as MyFW. The following commands were executed by another user with sudo. useradd tunnel -m -d /home/tunnel -s /bin/true The tunnel user does not have a password.I created a home dir because the above command does not. mkdir /home/tunnel/.ssh I created ssh keys for the user tunnel and put them in the /home/tunnel dir ssh-keygen I manually copied the public key from the remote machine A to the /home/tunnel/.ssh/authorized_keys file in the MyFW machine B. I set the values in the ssh config files: AllowTcpForwarding YesGatewayPorts yesExitOnForwardFailure=yes The purpose of this was to create a user at the firewall, to create and link tunnels while blocking any access attempt by a hacker. There is nothing of value on the real remote Machine A so it wouldn't matter much if a hacker trashed it. Testing To test, I setup a local tunnel on the remote Machine A, then attempted to ssh from Machine A to MyFW. This works for a normal 'user': $ ssh -L 9022:localhost:222 [email protected]$ ssh localhost -p 9022 As expected, the first command creates the tunnel. The 2nd command tunnels ssh to the CLI of the machine on the other end. but this doesn't work for the restricted 'tunnel': $ ssh -L 9222:localhost:222 [email protected]$ ssh localhost -p 9222 The CLI reports that the connection closes. There is nothing I can see in the syslog to show what the problem is. Given that the tunnel user doesn't have a shell, should I expect a tunneled ssh to work??As I write this I think maybe not.How should I test the tunnel? Question What am I doing wrong?? Can someone please try this out and post a set of working instructions or errors in mine. How can a tunnel with a restricted user be tested? EDIT The user tunnel account was created without a password. This caused problems using ssh because the tunnel account was reported locked. The fix for a locked account is at this link . It seems that creating an account without a password, in an attempt to stop password hacking (can't hack a password if it doesn't exist), is a bad idea and weakens security. | Here's what you're missing. 
There's a partition table and there are file systems - they are related but different. You can perfectly well have partitions of type Linux filesystem (MBR notation Linux) formatted as NTFS, and partitions of type Microsoft basic data (MBR notation HPFS/NTFS/exFAT) formatted as e.g. ext4. The mkfs.* utilities simply format the storage; they never touch the partition table. To change the partition type in the partition table you need to use one of these tools: fdisk, parted, sfdisk, gdisk, etc. Linux GUI applications like GParted or KDE Partition Manager will set the correct partition type automatically when you create a new partition in the free space of your disk. If you come from Windows, its partitioning tools do that automatically too. Lastly, Windows will normally refuse to mount an NTFS-formatted partition when its type is not set to Microsoft basic data, and if a partition of type Microsoft basic data contains some other filesystem, or just binary zeros, Windows will offer to format it. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/658319",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/481529/"
]
} |
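As a concrete illustration of the partition-type point above, a minimal sketch of changing the type from the command line with sfdisk (the device /dev/sdb and partition number 1 are made up for the example):
$ sudo sfdisk --part-type /dev/sdb 1 07    # MBR table: 07 = HPFS/NTFS/exFAT
$ sudo sfdisk --part-type /dev/sdb 1 EBD0A0A2-B9E5-4433-87C0-68B6B72699C7    # GPT table: Microsoft basic data
The same change can be made interactively with the "t" command in fdisk or gdisk; none of this touches the filesystem itself.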
658,324 | I am trying to install Python 3.9, but Debian comes with 3.7. When I run sudo apt install python3.9 , I receive this: E: Unable to locate package python3.9E: Couldn't find any package by glob 'python3.9'E: Couldn't find any package by regex 'python3.9' When I run sudo apt install python3 , I get this: python3 is already the newest version (3.7.3-1). I've manually went and found that there is in fact Python 3.9 on Debian's servers. It's located here . I then downloaded the deb file for Python 3.9 and ran sudo apt install ./python3.9* , I resulted in this: Note, selecting 'python3.9' instead of './python3.9_3.9.6-1_i386.deb'Some packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: python3.9 : Depends: python3.9-minimal (= 3.9.6-1) but it is not installable Depends: libpython3.9-stdlib (= 3.9.6-1) but it is not installableE: Unable to correct problems, you have held broken packages. I then downloaded the deb files for these two and tried the same thing, but it ended up with more dependencies that are 'not installable'. I know that you cannot install packages that are not configured for the version of Debian I am running, but is there a way to install Python 3.9? Is there a way to install it without building it from source? I am running Debian 10.10 ( buster ) with the i386 (32-bit) architecture. My sources.list (I am using the Australian mirror): deb http://ftp.au.debian.org/debian/ buster maindeb-src http://ftp.au.debian.org/debian/ buster maindeb http://ftp.au.debian.org/debian/ buster/updates maindeb-src http://ftp.au.debian.org/debian/ buster/updates main# buster-updates, previously known as 'volatile'# deb http://ftp.au.debian.org/debian/ buster-updates main# deb-src http://ftp.au.debian.org/debian/ buster-updates main | You can see from Debian's tracker that: Debian 9 ( stretch / oldstable ) shipped with 3.5.3 Debian 10 ( buster / stable ) ships with 3.7.3 Debian 11 ( bullseye / testing ) will ship with 3.9.2 experimental has 3.9.4 python3 is not available in stable-backports bullseye will release in a few weeks. If you do a dist-upgrade to bullseye , you'll get python3.9. That's the safest thing you can do. It's possible to add experimental to your souces.list , and then sudo apt install -t experimental python3 , but you are replacing a very core package and that has some dangerous potential if things don't go perfectly. Also note: If you are on Debian 10, all native packages will work with python3.7. There is very little reason to upgrade unless you are writing python code yourself and you are specifically trying to use the features provided by the newer version. If you are relying on pip packages, most of those are available as python3-<pkg> for python3.7 in debian 10. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/658324",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/481007/"
]
} |
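For the experimental route mentioned in the answer above, a hedged sketch (the repository line is an assumption about your mirror, and pulling core packages from experimental carries exactly the risks the answer describes):
$ echo 'deb http://deb.debian.org/debian experimental main' | sudo tee /etc/apt/sources.list.d/experimental.list
$ sudo apt update
$ sudo apt install -t experimental python3.9
Expect apt to want extra dependencies from unstable/experimental; if it proposes replacing large parts of the system, abort and wait for bullseye instead.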
658,400 | My aim: replace filenames by their contents in a JSON file with a command sed or awk or other... An example: JSON file to modify ( file.json ) ... "value": "{{<test/myData.txt>}}"... where the value key is located at .tests[].commands[].value in the document structure. Source file for data ( test/myData.txt ) blablablabla Desired result ( result.json ) ... "value": "blabla\nblabla"... My problem: I tried with sed : sed -E "s/\{\{<([^>]+)>\}\}/{r \1}/" file.json > result.json but the file is not read, I have this result: ... "value": "{r test/myData.txt}"... An idea to resolve my problem with sed (or a better idea)? SOLUTION: Thank you so much ! All the answers were helpful but I wanted to use a command without installing any new tools in the default environment of GitHub actions.So I chose between sed and jq because they installed by default.Sed does not cover automatically conversion of raw strings in a json document, so logically I preferred to use jq. I use jq play to debug the jq script. Here the final script: #!/bin/bashif [ $# -eq 0 ]; then printf "Utilization:\n" printf "$0 <FILE_INPUT> [[--output|-o] <FILE_OUTPUT>]\n" printf "example : ./importFile.sh test/testImportFile.side -o aeff.side" exit 1fiwhile [ $# -gt 0 ]; do case $1 in --output|-o) output="${2}" shift ;; *) input="${1}" esac shiftdonecp -p $input $output while : ; do cp -p $output "$output.tmp" datafile=$(jq -r 'first(.tests[].commands[].value | select(startswith("{{<"))| select(endswith(">}}")) | ltrimstr("{{<") | rtrimstr(">}}"))' "$output.tmp") #echo "datafile $datafile" if [ -z "$datafile" ]; then # echo NOT FOUND break elif [ -f "$datafile" ]; then # echo FOUND jq --arg data "$(cat "$datafile")" '(first(.tests[].commands[].value | select(startswith("{{<"))| select(endswith(">}}")))) |= $data' "$output.tmp" > $output else printf 'Could not find "%s" referenced by "%s"\n' "$datafile" $input >&2 exit 1 fidonerm "$output.tmp"echo DONE You can find the project with this script on github. | You will have issues doing this with sed since you will need to both parse the document, decode the pathname stored in the JSON file (it may have certain characters JSON-encoded), and encode the contents of the file for inclusion into the JSON document. This is certainly doable with sed , it just means that you have to implement a JSON parser in sed . Let's use an already existing JSON-aware tool, such as jq . Since we don't see much of the file in the question, I will assume that the file looks something like { "description": "hello world example", "value": "{{<test/myData.txt>}}"} or the equivalent {"description":"hello world example","value":"{{<test/myData.txt>}}"} i.e., that the value key is one of the top-level keys in the JSON file. What we want to do here is to parse out the value from the value key that is between {{< and >}} and to replace the whole value with that of the file corresponding to the pathname that we are left with. The pathname could be had using jq with jq -r '.value | ltrimstr("{{<") | rtrimstr(">}}")' file.json This removes the flanking {{< and >}} and returns the decoded string value. 
We can put this string into a shell variable like so: datafile=$( jq -r '.value | ltrimstr("{{<") | rtrimstr(">}}")' file.json ) or we may let jq create an assignment statement that we evaluate in the shell (this would allow the pathname to end with a newline), eval "$( jq -r '.value | ltrimstr("{{<") | rtrimstr(">}}") | @sh "datafile=\(.)"' file.json )" The @sh operator ensures that the value that we parse from the JSON file is safely quoted for the shell. With my example JSON document, this would eval the string datafile='test/myData.txt' . Then it's just a matter of getting the file's data and updating that key's value in the original file: jq --arg data "$(cat "$datafile")" '.value |= $data' file.json This creates a jq variable $data that contains the JSON-encoded data of the file. The data is used to update the value of the value key. The result, given my small example file, and your test/myData.txt example file: { "description": "hello world example", "value": "blabla\nblabla"} Then redirect to a new filename if you wish to do so. Summary: datafile=$( jq -r '.value | ltrimstr("{{<") | rtrimstr(">}}")' file.json )jq --arg data "$(cat "$datafile")" '.value |= $data' file.json >result.json Add sanity checking and diagnostic messages to taste: datafile=$( jq -r '.value | ltrimstr("{{<") | rtrimstr(">}}")' file.json )if [ -f "$datafile" ]; then jq --arg data "$(cat "$datafile")" '.value |= $data' file.json >result.jsonelse printf 'Could not find "%s" referenced by "%s"\n' "$datafile" file.json >&2fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/658400",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/481607/"
]
} |
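If the value key really sits at .tests[].commands[].value as described in the question, the same two-step approach from the answer above carries over; a hedged, untested sketch:
$ datafile=$( jq -r 'first(.tests[].commands[].value | select(startswith("{{<")) | ltrimstr("{{<") | rtrimstr(">}}"))' file.json )
$ jq --arg data "$(cat "$datafile")" '(first(.tests[].commands[].value | select(startswith("{{<")))) |= $data' file.json > result.json
Repeating this in a loop, as in the script at the end of the question, handles several placeholders one at a time.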
658,464 | free -m
             total       used       free     shared    buffers     cached
Mem:         15708      15539        168        124          6       6272
-/+ buffers/cache:       9260       6447
Swap:            0 1759218604          0
sysctl vm.swappiness
vm.swappiness = 0
grep Swap /proc/meminfo
SwapCached:            0 kB
SwapTotal:             0 kB
SwapFree:             36 kB
I have set vm.swappiness=0 to disable swap, but the output of free -m shows swap usage of 1759218604, a very huge number. I think the used swap memory should be 0, so why is it not 0? CentOS version: 6.7, Linux kernel: 2.6 | That's a very old RHEL/CentOS 6 kernel bug, you need to update to kernel-2.6.32-573.6.1.el6 (or newer). See this RH customer portal article (requires RH account) and this question on serverfault for more details. I would also recommend upgrading your system, CentOS 6 is no longer supported and 6.7 is not even the latest minor version (last was 6.10). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/658464",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160632/"
]
} |
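A quick way to check for and pick up the fixed kernel named in the answer above (a reboot is needed before the new kernel is actually used):
$ uname -r              # kernel currently running
$ rpm -q kernel         # kernel packages installed
$ sudo yum update kernel
$ sudo reboot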
658,485 | I'm using | sed to indent the result of some command (let's take a plain $ echo something for this example). I want to prefix the result with, say, 10 spaces. The following works fine: $ echo something | sed 's/^/ /' something But isn't there a way to use quantifiers in the | sed expression? I tried for example $ echo something | sed 's/^/ {10}/' {10}something But obviously it doesn't work. {10} isn't interpreted as a quantifier. And protecting the braces with backslashes doesn't work either: $ echo something | sed 's/^/ \{10\}/' {10}something Is there a way to use quantifiers in the substitution expression? | A lot of good solutions, I especially like @nezabudka's | pr -T -o10 solution, really short and simple. Now, for the fun of it, I found a " sed only" solution (no printf , no expand , no nothing) : $ echo something | sed ':L; /^ \{10\}/! {s/^/ /;b L}' something EDIT: At @Barmar's request, some explanation : :L; creates a label, named L (you could name it anything) We append a space at the beginning of the line, with s/^/ / . Then we go back to the label ( b L , we b ranch to the label L , kind of a goto ) And we make this expression conditional : it's only executed as long as the line doesn't start with 10 spaces ( /^ \{10\}/! ) So, basically, we repeat "add a space at the beginning of the line" until the line begins with 10 spaces While @nezabudka's pr solution is clearly the easiest way to append spaces in front of a line, this sed solution is more versatile. To add for example 5 times XYZ- in front of a line : $ echo something | sed ':L; /^\(XYZ-\)\{5\}/! {s/^/XYZ-/;b L}'XYZ-XYZ-XYZ-XYZ-XYZ-something | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/658485",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152418/"
]
} |
658,495 | I am working on Linux and I'm running the following: $ ls -t postgresq*.log | head -n1 | xargs grep "ALTER USER" < pg_user 2021-07-15 05:03:41.609 EDT > LOG: statement: ALTER USER username WITH ENCRYPTED PASSWORD 'JUly@#12' valid until '2021-07-20'; But I only want to grep the string below: ALTER USER username WITH ENCRYPTED PASSWORD 'JUly@#12' valid until '2021-07-20'; | A lot of good solutions, I especially like @nezabudka's | pr -T -o10 solution, really short and simple. Now, for the fun of it, I found a " sed only" solution (no printf , no expand , no nothing) : $ echo something | sed ':L; /^ \{10\}/! {s/^/ /;b L}' something EDIT: At @Barmar's request, some explanation : :L; creates a label, named L (you could name it anything) We append a space at the beginning of the line, with s/^/ / . Then we go back to the label ( b L , we b ranch to the label L , kind of a goto ) And we make this expression conditional : it's only executed as long as the line doesn't start with 10 spaces ( /^ \{10\}/! ) So, basically, we repeat "add a space at the beginning of the line" until the line begins with 10 spaces While @nezabudka's pr solution is clearly the easiest way to append spaces in front of a line, this sed solution is more versatile. To add for example 5 times XYZ- in front of a line : $ echo something | sed ':L; /^\(XYZ-\)\{5\}/! {s/^/XYZ-/;b L}'XYZ-XYZ-XYZ-XYZ-XYZ-something | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/658495",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/384421/"
]
} |
658,655 | Running printf "lol\nlol\nfoo\n\n\n\n\nbar\nlol\nlol\nfoo\nlol\nfoo" | uniq --unique prints foobarfoololfoo Why is foo printed three times? Shouldn't uniq --unique remove them? Also, notably, it seems all duplicates of lol were removed. Why were lol duplicates removed, but not foo duplicates? | uniq requires the input to be sorted (from man uniq ) if you want it to remove all duplicate lines: DESCRIPTION Filter adjacent matching lines from INPUT (or standard input), writingto OUTPUT (or standard output). As you can see above, it only filters adjacent matching lines. This is why the lol s were removed. So sort your data before passing to uniq : $ printf "lol\nlol\nfoo\n\n\n\n\nbar\nlol\nlol\nfoo\nlol\nfoo" | sort | uniq barfoolol Or, with GNU sort , skip uniq : $ printf "lol\nlol\nfoo\n\n\n\n\nbar\nlol\nlol\nfoo\nlol\nfoo" | sort --uniquebarfoolol Finally, if you want to completely remove lines that were present more than once (instead of keeping one copy, the default behavior), use uniq -u or --unique as in your question: $ printf "lol\nlol\nfoo\n\n\n\n\nbar\nlol\nlol\nfoo\nlol\nfoo" | sort | uniq -ubar In all cases, however, the sorting is necessary. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/658655",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139042/"
]
} |
658,923 | I have a directory with a bunch of subdirectories with a bunch of files with the extension .no_sub . I want to rename every file with extension .no_sub to the same name with .no_sub removed. So foo.no_sub -> foo . Can I do this in Bash? (I am on Ubuntu 20.04) | Use the power of your shell to get all files, then use the usual tools to rename them shopt -s nullglob ## as recommended and explained in the commentsshopt -s globstarshopt -s dotglobfor fname in **/*.no_sub ; do mv -- "${fname}" "${fname%.no_sub}"done Here, shopt -s globstar enables ** as a recursive glob shopt -s dotglob enables finding .*.no_sub The for loop is a special-character-safe way to go through all files ( don't ever parse ls for that ) The mv syntax is mv source target ; I'm sometimes overly careful, but I also like ${fname} better than just $fname , because there can't be a variable name confusion. It just expands to the content of the variable fname , i.e. to the current file The variable expansion ${variable%pattern} expands to the variable content, but reduced by the suffix pattern pattern | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/658923",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/220721/"
]
} |
658,964 | First, I think the fact that I'm using cygwin is highly relevant here. In theory, I already know how to do this: cat file | xargs grep pattern The problem is, some of the file paths in file have spaces. The file looks like this: subdir/foo/bar.htmlsubdir/f o o/ba r.htmlsubdir/~foo/bar.html This causes errors. I read how to solve this: use xargs -0 . But I don't know how to make cat output null terminated line endings, so i think that means it squashes the whole file into one line. As a result, it gives this error: xargs: argument line too long Update: it turned out the file I was reading from had some paths that no longer exist. Coincidentally, all of these paths had a ~ in them, so I mistakenly thought that that was an issue. Everything about the spaces still stands though. Turns out those files simply didn't exist. | Use xargs -d '\n' if the input file contains one filename per line. e.g. xargs -d '\n' grep pattern < file If your filenames start with ~ to indicate "my home directory", you first need to replace the ~ symbols with your actual home directory. For example: sed -e "s=^~=$HOME=" file | xargs -d '\n' grep pattern Note that this sed script is double-quoted because we want to interpolate the variable $HOME into the sed script, and uses = as the delimiter for the s operator because $HOME is going to contain /s (but is unlikely to contain an = ). Or, if you're using find , use find's -print0 option combined with xargs' -0 option. e.g. find ... -print0 | xargs -0 grep pattern or just use find's -exec option: find ... -exec grep pattern {} + | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/658964",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101311/"
]
} |
659,002 | I have a file on RedHat with the data below: $ cat hello.txtmumdfw2as123v USER=wladmin MOUNTPOINT=/appsMUMFW2as97v.mrshmc.com USER=wladmin MOUNTPOINT=/appsMUMFW3AS65V USER=user MOUNTPOINT=DR-/uMUMDFW3AS66V USER=oracle MOUNTPOINT=/umumdfw3AS69v_oracle ansible_host=mumdfw3as69v USER=oracle MOUNTPOINT=/web I wish to convert only the first column to lowercase and save the changes to the same file. I do not have nawk tool as I did find a solution using 'nawk' Can you please suggest? | Here's a simple approach: $ awk -F'[ ]' '{$1=tolower($1)}1' filemumdfw2as123v USER=wladmin MOUNTPOINT=/appsmumfw2as97v.mrshmc.com USER=wladmin MOUNTPOINT=/appsmumfw3as65v USER=user MOUNTPOINT=DR-/umumdfw3as66v USER=oracle MOUNTPOINT=/umumdfw3as69v_oracle ansible_host=mumdfw3as69v USER=oracle MOUNTPOINT=/web That simply changes $1 (the first field) to itself in lower case. The 1 at the end is awk shorthand for "print this line". The fun bit is the -F'[ ]' where we are setting the input field separator to a space, but because it is presented as a regular expression (a character class), that forces awk to recalculate the input line and means we can keep the original spacing of the input file. Without it, we would get: $ awk '{$1=tolower($1)}1' filemumdfw2as123v USER=wladmin MOUNTPOINT=/appsmumfw2as97v.mrshmc.com USER=wladmin MOUNTPOINT=/appsmumfw3as65v USER=user MOUNTPOINT=DR-/umumdfw3as66v USER=oracle MOUNTPOINT=/umumdfw3as69v_oracle ansible_host=mumdfw3as69v USER=oracle MOUNTPOINT=/web To edit the file in place, you can use GNU awk (the default on linux systems): $ gawk -F'[ ]' -i inplace '{$1=tolower($1)}1' file$ cat filemumdfw2as123v USER=wladmin MOUNTPOINT=/appsmumfw2as97v.mrshmc.com USER=wladmin MOUNTPOINT=/appsmumfw3as65v USER=user MOUNTPOINT=DR-/umumdfw3as66v USER=oracle MOUNTPOINT=/umumdfw3as69v_oracle ansible_host=mumdfw3as69v USER=oracle MOUNTPOINT=/web | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/659002",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/392596/"
]
} |
659,013 | I want to restart celery from server, because the celery has many process, to I write this script to query all process id and kill it: ## stop celery process#PID=`ps -ef|grep -w ${CELERY_PROGRAM_NAME}|grep -v grep|cut -c 9-15`if [ -z "${PID}" ]; then echo "Process aready down..."else array=(${PID//\n/}) for var in "${array[@]}" do single_pid=`echo ${var} | awk 'gsub(/^ *| *$/,"")' ` if [[ ${single_pid} -gt 1 ]]; then kill -15 "${single_pid}" else echo "Process ${PROGRAM_NAME} not found" fi donefi from the log, I found the pid did not convert to array, the next step did not split correctly. I run this script remotely from GitHub Actions. This is the log output from GitHub Actions: ======CMD======cd /opt/apps/pydolphin. /opt/apps/pydolphin/restart.sh======END======err: +/opt/apps/pydolphin/restart.sh:16> PROGRAM_NAME=schedulespider.py err: +/opt/apps/pydolphin/restart.sh:17> CELERY_PROGRAM_NAME=celery err: +/opt/apps/pydolphin/restart.sh:18> PYTHON_BIN_PATH=/usr/bin/python3 err: +/opt/apps/pydolphin/restart.sh:23> PID=+/opt/apps/pydolphin/restart.sh:23> ps -eferr: +/opt/apps/pydolphin/restart.sh:23> PID=+/opt/apps/pydolphin/restart.sh:23> grep -w celeryerr: +/opt/apps/pydolphin/restart.sh:23> PID=+/opt/apps/pydolphin/restart.sh:23> grep -v greperr: +/opt/apps/pydolphin/restart.sh:23> PID=+/opt/apps/pydolphin/restart.sh:23> cut -c 9-15err: +/opt/apps/pydolphin/restart.sh:23> PID=' 9777 err: 9778 err: 9779 err: 9865 err: 9867 err: 9868 ' err: +/opt/apps/pydolphin/restart.sh:24> [ -z ' 9777 err: 9778 err: 9779 err: 9865 err: 9867 err: 9868 ' ']'err: +/opt/apps/pydolphin/restart.sh:27> array=( ' 9777 err: 9778 err: 9779 err: 9865 err: 9867 err: 9868 ' ) err: +/opt/apps/pydolphin/restart.sh:28> var= 9777 err: 9778 err: 9779 err: 9865 err: 9867 err: 9868 err: +/opt/apps/pydolphin/restart.sh:30> single_pid=+/opt/apps/pydolphin/restart.sh:30> echo ' 9777 err: 9778 err: 9779 err: 9865 err: 9867 2021/07/19 06:00:52 Process exited with status 1err: 9868 'err: +/opt/apps/pydolphin/restart.sh:30> single_pid=+/opt/apps/pydolphin/restart.sh:30> awk 'gsub(/^ *| *$/,"")'err: +/opt/apps/pydolphin/restart.sh:30> single_pid='9777err: 9778err: 9779err: 9865err: 9867err: 9868' err: +/opt/apps/pydolphin/restart.sh:31> [[ '9777err: 9778err: 9779err: 9865err: 9867err: 9868' -gt 1/opt/apps/pydolphin/restart.sh:31: bad math expression: operator expected at `9778\n9779\n...'err: ]] I read my script and did not found where is going wrong, what should I do to make it work? | Here's a simple approach: $ awk -F'[ ]' '{$1=tolower($1)}1' filemumdfw2as123v USER=wladmin MOUNTPOINT=/appsmumfw2as97v.mrshmc.com USER=wladmin MOUNTPOINT=/appsmumfw3as65v USER=user MOUNTPOINT=DR-/umumdfw3as66v USER=oracle MOUNTPOINT=/umumdfw3as69v_oracle ansible_host=mumdfw3as69v USER=oracle MOUNTPOINT=/web That simply changes $1 (the first field) to itself in lower case. The 1 at the end is awk shorthand for "print this line". The fun bit is the -F'[ ]' where we are setting the input field separator to a space, but because it is presented as a regular expression (a character class), that forces awk to recalculate the input line and means we can keep the original spacing of the input file. 
Without it, we would get: $ awk '{$1=tolower($1)}1' filemumdfw2as123v USER=wladmin MOUNTPOINT=/appsmumfw2as97v.mrshmc.com USER=wladmin MOUNTPOINT=/appsmumfw3as65v USER=user MOUNTPOINT=DR-/umumdfw3as66v USER=oracle MOUNTPOINT=/umumdfw3as69v_oracle ansible_host=mumdfw3as69v USER=oracle MOUNTPOINT=/web To edit the file in place, you can use GNU awk (the default on linux systems): $ gawk -F'[ ]' -i inplace '{$1=tolower($1)}1' file$ cat filemumdfw2as123v USER=wladmin MOUNTPOINT=/appsmumfw2as97v.mrshmc.com USER=wladmin MOUNTPOINT=/appsmumfw3as65v USER=user MOUNTPOINT=DR-/umumdfw3as66v USER=oracle MOUNTPOINT=/umumdfw3as69v_oracle ansible_host=mumdfw3as69v USER=oracle MOUNTPOINT=/web | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/659013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171959/"
]
} |
659,158 | I have an input file with different columns, like the following: VARIANT,SNP,chr,pos,A1,A2,BETA,P_value 7:106350628_G_A,rs6977865,7,106350628,G,A,-0.0808873,8.6E-3097:106353698_T_C,rs74804152,7,106353698,T,C,-0.0808701,9.3E-30920:57674276_T_A,rs6026699,20,57674276,T,A,-0.0945835,6.0E-3141:10177_A_AC,rs367896724,1,10177,A,AC,0.000264372,9.3E-011:10642_G_A,rs558604819,1,10642,G,A,0.0425225,7.0E-012:31467079_G_A,rs2295471,2,31467079,G,A,-0.0830949,8.6E-320 Now, I'd like to remove the rows at which the P-value is less than 2.23E-308, in order to have the following output file: VARIANT,SNP,chr,pos,A1,A2,BETA,P_value1:10177_A_AC,rs367896724,1,10177,A,AC,0.000264372,9.3E-011:10642_G_A,rs558604819,1,10642,G,A,0.0425225,7.0E-01 I ran the following command in the Unix shell: awk -F, '$8!"<2.23E-308"' input.file > output.file However, I still have the first input file, with all the rows... Is the command wrong? May be there a problem in recognizing the set threshold? I am using Linux. | Your expression isn't quite right - it should be a >= b or (if you prefer) !(a < b) rather than a!"<b" . However in your particular case there's a subtler issue that the numerical values are smaller than the smallest value representable as a double precision (64-bit) floating point number. If you have a version of GNU awk ( gawk ) that is built with the GNU MPFR/MP libraries, you may need to enable arbitrary precision handling via the -M or --bignum command line options: $ gawk -F, -M '$8 >= 2.23E-308' input.fileVARIANT,SNP,chr,pos,A1,A2,BETA,P_value1:10177_A_AC,rs367896724,1,10177,A,AC,0.000264372,9.3E-011:10642_G_A,rs558604819,1,10642,G,A,0.0425225,7.0E-01 Otherwise, one possible workaround would be to force numeric conversion before the comparison: $ mawk -F, '$8+0 >= 2.23E-308' input.file1:10177_A_AC,rs367896724,1,10177,A,AC,0.000264372,9.3E-011:10642_G_A,rs558604819,1,10642,G,A,0.0425225,7.0E-01$ awk -F, '$8+0 >= 2.23E-308' input.file1:10177_A_AC,rs367896724,1,10177,A,AC,0.000264372,9.3E-011:10642_G_A,rs558604819,1,10642,G,A,0.0425225,7.0E-01 but note that this will force values outside the range of a IEEE double to zero (because they're initially converted as strings, and the numerical value of a string is 0). If you want the header row as well, then add that as a separate logical test: awk -F, 'NR==1 || $8+0 >= 2.23E-308' input.fileVARIANT,SNP,chr,pos,A1,A2,BETA,P_value1:10177_A_AC,rs367896724,1,10177,A,AC,0.000264372,9.3E-011:10642_G_A,rs558604819,1,10642,G,A,0.0425225,7.0E-01 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/659158",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/481271/"
]
} |
659,162 | Here's my setup $ lsblkNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTSnvme0n1 259:0 0 238.5G 0 disk├─nvme0n1p1 259:1 0 100M 0 part /boot/efi├─nvme0n1p2 259:2 0 250M 0 part /boot└─nvme0n1p3 259:3 0 238.1G 0 part └─Be-Water-My-Friend 254:0 0 238.1G 0 crypt ├─Arch-swap 254:1 0 2G 0 lvm [SWAP] └─Arch-root 254:2 0 236.1G 0 lvm / I have one main LUKS2-encrypted partition ( nvme0n1p3 ), with one LVM volume group ( Be-Water-My-Friend ) containing two logical volumes Arch-swap and Arch-root . The Arch-root is a btrfs . When I set that up, I only chose 2GB of swap which turns out to be insufficient for my needs. I would like to increase that to 24GB of swap. For that, I think I need to boot on a USB live key decrypt the LUKS2 partition mount the Arch-root volume shrink the Arch-root file system with btrfs filesystem resize -22g remove the Arch-swap logical volume recreate the Arch-swap logical volume taking all available space in the Be-Water-My-Friend volume group. Is there anything I'm missing? I really don't want to screw that up! | You need one extra step between 4 and 5 -- shrink the Arch-root logical volume using lvresize -L-22G Arch/root ( lvresize has option --resizefs to resize both the LV and the filesystem, but it currently doesn't support btrfs so you can't use it here). This answer nicely explains difference between resizing filesystem (btrfs in your case) and the block device (LVM logical volume). You also might want to use --uuid with mkswap to set your old swap UUID for the new swap. Swaps are usually not referred with UUID in /etc/fstab and GRUB, but using the old UUID might save you some problems. Also if you just want a bigger swap, you can create a swap file on btrfs and use it as second swap. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/659162",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96407/"
]
} |
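Put together, the whole swap-resize operation from a live USB might look like the sketch below. It is untested; the volume group name Arch is taken from the lsblk output in the question, and <old-swap-uuid> is a placeholder you would fill in from blkid before starting:
$ cryptsetup open /dev/nvme0n1p3 Be-Water-My-Friend
$ vgchange -ay Arch
$ mount /dev/Arch/root /mnt
$ btrfs filesystem resize -22g /mnt      # shrink the filesystem first
$ umount /mnt
$ lvresize -L -22G Arch/root             # then shrink the logical volume (the extra step)
$ lvremove Arch/swap
$ lvcreate -l 100%FREE -n swap Arch
$ mkswap --uuid <old-swap-uuid> /dev/Arch/swap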
659,253 | I use Bash 5.1.8. Running man shows the manual page but with the following errors man pssh: bat: line 10: syntax error near unexpected token `('sh: bat: line 10: ` *.?(ba)sh)'sh: error importing function definition for `bat' I think some shell ( sh ) is finding Bashisms obnoxious. These errors vanish if I remove Bash-isms like the following from ~/.bashrc : function bat { # lines snipped for brevity case "$f" in *.rs ) opt_syntax="--syntax=rust";; *.?(ba)sh ) opt_syntax="--syntax=shellscript";; *.?(m)m ) opt_syntax="--syntax=objc";; esac # lines snipped for brevity } export -f bat I'm sure .bashrc itself has no problems, as I see no errors or warnings when Bash starts. Debugging further I noticed .profile sourcing .bashrc # source Bash customizations[ -n "${BASH_VERSION}" ] && [ -r "${HOME}/.bashrc" ] && . "${HOME}/.bashrc" Here's how my ~/.bashrc starts # If not running interactively, don't do anything[[ "$-" != *i* ]] && return Questions: Why does man have to source .profile just before starting? Despite two checks why is the aforementioned code parsed by a non-Bash shell? Check in .profile to not source .bashrc when it's not Bash Check in .bashrc to stop further processing when non-interactive From @muru's comments I realize that I should NOT have Bashisms in a function that's exported since there's a risk of it being imported by a non-Bash shell. A question that still remains: why man calls sh ? . | man is not reading your ~/.profile but it will run sh to interpret some command lines (or to interpret nroff which at least on my system is a sh script wrapper to groff ), and on your system sh happens to be bash , which imports functions exported by bash 's export -f (in variables named BASH_FUNC_funcname%% since shellshock), even when running as sh . Here, you're exporting a function whose syntax depends on the extglob option. So, when sh ( bash in sh mode) starts and imports all the exported functions, it fails to parse it as extglob is not an options that is enabled by default (in sh mode or not). IOW, exporting a function means that function will be available in all bash invocations and all sh invocations on systems where sh is implemented with bash . So you need to be careful that the syntax in those functions be compatibles with both bash and sh -as- bash in their default settings, or avoid exported functions altogether as the code in those functions will be parsed even if the functions will never end up being invoked. See: $ env 'BASH_FUNC_f%%=() { case x in ?(x)) echo x; esac; }' bash -O extglob -c fx$ env 'BASH_FUNC_f%%=() { case x in ?(x)) echo x; esac; }' bash -c fbash: f: line 0: syntax error near unexpected token `('bash: f: line 0: `f () { case x in ?(x)) echo x; esac; }'bash: error importing function definition for `f'bash: line 1: f: command not found If your function needs a non-default option that affects syntax parsing like extglob , you can define it as: f() ( shopt -s extglob eval ' function body here ')export -f f Here running the code in a subshell for the option to only be set during the execution of the function ( bash , contrary to zsh or ksh has no local scope for options (other than the ones set with set in recent versions)), and using eval delays the parsing until the function is invoked and the extglob option set. Here, you could also do: f() ( shopt -s extglob pattern='*.?(ba)sh' case ... in ($pattern)... esac) Though, you could also do: f() { case ... in (*.sh | *.bash) ... esac} Which is standard sh syntax that doesn't require extglob . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/659253",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30580/"
]
} |
659,282 | How to delete all lines containing def AND jkl from the following file (lines 2 and 4)? I want the match to apply to sub-string matches of the fields, too. The fields in the file are space-separated. $ cat test2.txt 1. abc def ghi2. def ghi jkl3. jkl mno pqr4. jkl def stu5. vwx yza bcd I managed to do it using a boolean OR ( \| ): $ sed '/def.*jkl\|jkl.*def/d' test2.txt 1. abc def ghi3. jkl mno pqr5. vwx yza bcd Isn't there a simpler syntax with a boolean AND , something like $ sed '/defANDjkl/d' ? I tried sed '/def&jkl/d' , sed '/def&&jkl/d' , sed '/def\&jkl/d' and sed '/def\&\&jkl/d' , but nothing works. | With sed specifically, you could do: sed -e '/def/!b' -e /jkl/d Where the first e xpression b ranches out (which prints the line as we didn't pass the -n option) if def is not ( ! ) found, and the second d eletes if jkl is found. So in the end the line is deleted only if both def and jkl are found. To generalise to any number of regexps, you can do: sed ' /regexp1/!b /regexp2/!b /regexp3/!b d' Note that \| is not a standard basic regular expression (BRE) operator. Few sed implementations support it. Standard BREs have neither OR nor AND operators. Standard EREs ( extended regular expressions , as supported by sed -E with many sed implementations) do support OR ( | ) but not AND. The ast-open implementation of sed does have a AND operator ( & ) in its augmented regexps enabled with -A or -X , but you'd need: sed -A '/.*def.*&.*jkl.*/d' as A&B matches on strings that are matched by both A and B . With sed implementations that support perl-like regexps (like sed -P with ast-open's or sed -R with ssed ), you can use look ahead operators: sed -P '/^(?=.*def)(?=.*jkl)/d' Which matches on the start of the line provided it is followed ( (?=...) ) by any number of characters ( .* ) followed by def and that it is followed by any number of characters followed by jkl . There are more implementations of grep that support -P than sed though, so: grep -vP '^(?=.*def)(?=.*jkl)' would be more portable. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/659282",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152418/"
]
} |
659,287 | Scenario: I am fetching a date value from a file into a variable and it is in DD-MM-YYYY format by default. I have to subtract this date from the system date. Subtraction gives an incorrect result if both dates are in DD-MM-YYYY format. So I read a bit on Google and decided to format both dates as YYYY-MM-DD, as this will give the correct value after subtraction. I have the system date formatted successfully as YYYY-MM-DD, but I am having a hard time converting the date obtained from the file to YYYY-MM-DD format. The solution below works fine with single-digit dates:
$ date -d $(sed "s/-/\//g" <<< '9-2-1832') +%Y-%m-%d
Output: 1832-09-02
but when I try to convert a date with double digits like below:
$ date -d $(sed "s/-/\//g" <<< '19-07-2021') +%Y-%m-%d
I get the output: Invalid date '19/07/2021'
Where 19 is the day of the month, 07 is the month (i.e. July in this case) and 2021 is the year. Desired output: 2021-07-19. I am working on RH Linux with date (GNU coreutils) 8.22. Please help to provide a solution for the above problem. | In your case, awk might be a better method:
$ awk -F'-' '{printf("%04d-%02d-%02d\n",$3,$2,$1)}' <<< '19-07-2021'
2021-07-19 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/659287",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/482498/"
]
} |
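If you would rather stay with GNU date and let it validate the value, one hedged alternative is to reorder the fields first and feed date an ISO-style string:
$ IFS=- read -r d m y <<< '19-07-2021'
$ date -d "$y-$m-$d" +%Y-%m-%d
2021-07-19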
659,585 | I have a firewall ( csf ) that lets you to separately allow incoming and outgoing TCP ports. My question is, why would anyone want to have any outgoing ports closed? I understand that by default you might want to have all ports closed for incoming connections . From there, if you are running an HTTP server you might want to open port 80. If you want to run an FTP server (in active mode) you might want to open port 21. But if it's set up for passive FTP mode, a bunch of ports will be necessary to receive data connections from FTP clients... and so on for additional services. But that's all. The rest of ports not concerned with a particular service that the server provides, and especially if you are mostly a client computer, must be closed. But what about outgoing connections ? Is there any security gain in having destination ports closed for outbound connections? I ask this because at first I thought that a very similar policy of closing all ports as for incoming connections could apply. But then I realised that when acting as a client in passive FTP mode, for instance, random high ports try to connect to the FTP server. Therefore by blocking these high ports in the client side you are effectively disabling passive FTP in that client, which is annoying. I'm tempted to just allow everything outgoing, but I'm concerned that this might be a security threat. Is this the case? Is it a bad idea, or has it noticeable drawbacks just opening all (or many) ports only for outgoing connections to facilitate services such as passive FTP? | There can be many reasons why someone might want to have outgoing ports closed. Here are some that I have applied to various servers at various times The machine is in a corporate environment where only outbound web traffic is permitted, and that via a proxy. All other ports are closed because they are not needed. The machine is running a webserver with executable code (think PHP, Ruby, Python, Perl, etc.) As part of a mitigation against possible code flaws, only expected outbound services are allowed. A service or application running on the machine attempts to connect to a remote resource but the server administrator does not want it to do so. Good security practice: what is not explicitly permitted should be denied. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/659585",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27990/"
]
} |
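As a concrete illustration of the "deny by default, allow only what is needed" policy for outbound traffic, a minimal iptables sketch (this is not csf syntax, and the allowed ports are only examples):
$ sudo iptables -A OUTPUT -o lo -j ACCEPT
$ sudo iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
$ sudo iptables -A OUTPUT -p udp --dport 53 -j ACCEPT     # DNS lookups
$ sudo iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT    # outbound HTTPS
$ sudo iptables -P OUTPUT DROP                            # everything else is dropped
Anything a compromised process tries to open beyond these destinations is simply dropped, which is the gain the answer describes.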
659,729 | In the new update, vscode doesn't open as root in debian. Even after specifying an alternative directory using --user-data-dir Has anyone ever faced this issue in the new update of vscode or is there any way to fix this? The terminal doesn't output any errors after executing the command (it just doesn't open as root). I couldn't find any solution online either because most of the vscode and root account related problems are associated with the person failing to specify path using --user-data-dir and in my case it doesn't open at all. Operating system: Debian 10 Vscode version: 1.58.2-1626302803 [NOTE: I didn't face this issue until I updated to version 1.58.2-1626302803. The old versions of vscode was working fine in the root account.] | The code script eventually executes: /usr/share/code/bin/../code /usr/share/code/bin/../resources/app/out/cli.js --user-data-dir /tmp/ff which comes back with: [6113:0724/111813.659159:FATAL:electron_main_delegate.cc(263)] Running as root without --no-sandbox is not supported. See https://crbug.com/638180.Trace/breakpoint trap (core dumped) Adding --no-sandbox does bring up the window. The moral of this story, they really do not want you to run as root. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/659729",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/448763/"
]
} |
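Putting the question's flag and the answer's finding together, a hedged invocation for running VS Code as root would look like the line below (the data directory path is just an example):
$ code --no-sandbox --user-data-dir=/root/.vscode-root
Running a Chromium-based application as root without its sandbox is exactly what upstream is warning against, so treat this as a workaround rather than a recommendation.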
659,744 | Due to its high CPU usage I want to limit the Chromium web browser with cpulimit, and use the terminal to run: cpulimit -l 30 -- chromium --incognito but it does not limit CPU usage as expected (i.e. to a maximum of 30%). It still uses 100%. Why? What am I doing wrong? | Yeah, chromium doesn't care much when you stop one of its threads. cpulimit is, in 2021, really not the kind of tool that you want to use, especially not with interactive software: it "throttles" processes (or unsuccessfully tries to, in your case) by stopping and resuming them via signals. That's a terrible hack, and it leads to unreliability you really don't want in a modern browser that might well be processing audio and video, or trying to scroll smoothly. Good news is that you really don't need it. Linux has cgroups, and these can be used to limit the resource consumption of any process, or group of processes (if you, for example, don't want chromium, skype and zoom together to consume more than 50% of your overall CPU capacity). They can also be used to limit other things, like storage access speed and network transfer. In the case of your browser, that'd boil down to (top of head, not tested):
# you might need to create the right mountpoints first
sudo mkdir /sys/fs/cgroup/cpu
sudo mount -t cgroup -o cpu cpu /sys/fs/cgroup/cpu
# Create a group that controls `cpu` allotment, called `/browser`
sudo cgcreate -g cpu:/browser
# Create a group that controls `cpu` allotment, called `/important`
sudo cgcreate -g cpu:/important
# allocate few shares to your `browser` group, and many shares of the CPU time to the `important` group.
sudo cgset -r cpu.shares=128 browser
sudo cgset -r cpu.shares=1024 important
cgexec -g cpu:browser chromium --incognito
cgexec -g cpu:important make -j10   # or whatever
The trick is usually giving your interactive session (e.g. gnome-session) a high share, and other things a lower one. Note that this guarantees shares; it doesn't take away, unless necessary. I.e. if your CPU can't do anything else in that time (because nothing else is running, or because everything with more shares is blocked, for example by waiting for hard drives), it will still be allocated to the browser process. But that's usually what you want: it has no downsides (it doesn't make the rest of the system run any slower; the browser is just quicker "done" with what it has to do, which on the upside probably even saves energy on average: when multiple CPU cores are done, things can be clocked down/suspended automatically). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/659744",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
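On a systemd-based desktop the same kind of limit can be tried without creating cgroups by hand, assuming your distribution delegates the cpu controller to user sessions (a sketch, not guaranteed to work everywhere):
$ systemd-run --user --scope -p CPUQuota=30% chromium --incognito
Note the semantics differ from cpu.shares: CPUQuota is a hard ceiling that applies even when the rest of the machine is idle.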
659,757 | my libevent installed version is 2.0.12 I install the new version of libevent (v2.1.12) through the following command and everything goes well but after that yum info show libevent version 2.0.12 again. what's wrong with yum? and how can I update yum database? $ wget https://github.com/libevent/libevent/releases/download/release-2.1.12-stable/libevent-2.1.12-stable.tar.gz $ tar -zxf libevent-*.tar.gz $ cd libevent-*/ $ ./configure --prefix=/usr/local --enable-shared $ sudo make && make install | yum only looks at what is in the RPM database as yum is just a front end for rpm . As you compiled the package from source and didn't install it with yum or rpm , it's not in the RPM database and yum isn't going to operate on it and will instead only account for the libevent that's in the RPM database. As that libevent is already up to date, yum isn't going to do anything and neither will rpm . There isn't anything that you need to do with the database as it's functioning as it should. The reason for compiling software from source is to get a version that isn't available in the repos so that you can add it to environment without causing conflicts that can ruin your system by putting you in the notorious "dependency hell". | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/659757",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/482988/"
]
} |
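To see both copies side by side (the /usr/local path comes from the --prefix used in the question's build):
$ rpm -q libevent                 # the 2.0.x version recorded in the RPM database
$ ls /usr/local/lib/libevent*     # the 2.1.12 build installed from source
Existing programs linked against the distro's libevent 2.0 keep using it; only things you build against /usr/local will pick up the new version, so the two can coexist without yum ever knowing about the source-built one.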
659,774 | So I just installed Fedora 34 and did the following: sudo dnf install powertop However, rather than installing the package, it begins downloading updates. Because I'm using a metered connection, I'd like to simply install that package. Same thing happens with yum . | You can use sudo dnf install powertop -C to tell dnf not to update the metadata cache. Note that yum redirects to dnf , so it's really the same command, and the latter should be used. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/659774",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/474303/"
]
} |
659,837 | Let's say I have a file like following 1,2,3-5,61,2,3-5,6,11-31,2,3-,4,5-71,2,3-,4,5-7,1,2,-3,4,51,2,-,3,41,2,,,3,4,1,2,3 Only combination of following rules should be considered as valid: Ranges [0-9]+-[0-9]+ Groups [0-9]+,[0-9]+ Single Numbers [0-9]+ The lines could ending with comma should also be considered valid I want to extract only 1,2,3-5,61,2,3-5,6,11-3 As the other lines shown below do not match the rules 1,2,3-,4,5-71,2,3-,4,5-7,1,2,-3,4,51,2,-,3,41,2,,,3,4,1,2,3 Because some lines have incomplete ranges, some have missing numbers in groups P.S: A PCRE compatible grep only solution would be awesome, but other solutions are also welcome | The full pcre that will match the strings you listed (and those that start with a , ) might be: grep -P '^([0-9]+(-[0-9]+)?(,|$))+$' How have we got there? The most basic element to match is a digit, lets assume that [0-9] , or the simpler \d in PCRE, is a correct regex for a English (ASCII) digit. Which might as well not be . It could match Devanagari numerals , for example. Then you would need to write: [0123456789] to be precise. Then, a run of digits would be matched by [0-9]+ . After a number (1 or 3 or 26) ther could be a dash '-' followed by one or several digits ( a number again ): [0-9]+(-[0-9]+)? Where the ? makes the dash-number sequence optional. Then, each of those numbers: 3 (or number ranges: 4-9 ) should be followed by a comma , (several times): ([0-9]+(-[0-9]+)?,)+ Except that the last comma might be missing: ([0-9]+(-[0-9]+)?(,|$))+ And, if required, a leading comma might be present: (^|,)([0-9]+(-[0-9]+)?(,|$))+ It is a very good idea to anchor the regex to the beginning and end of the text tested: ^((^|,)([0-9]+(-[0-9]+)?(,|$))+)$ You may test and edit the PCRE regex in this site If the leading comma should be rejected, use: ^(([0-9]+(-[0-9]+)?(,|$))+)$ That leaves no optional interpretations to the regex machine. All must be matched, and anything that is not matched gets rejected. It may be written as an (GNU) extended regex: grep -E '^(([0-9]+(-[0-9]+)?(,|$))+)$' As a Basic Regular Expression (BRE): grep '^\(\([0-9]\{1,\}\(-[0-9]\{1,\}\)\{0,1\},\{0,1\}\)\{1,\}\)$' Where the comma , is optional {0,1} , the regex engine might take some decisions about what to match. Descriptive Regex? A more descriptive regex, with spaces and comments might be had by starting it with (?x) in pcregrep pcregrep '(?x) # tell the regex engine to allow # white space and comments. (?(DEFINE) # subroutines that will be used. (?<nrun> [0-9]+) # run of digits (n-run). # define a range pair. A number run followed by # an optional ( dash and another number run ) (?<range> (?&nrun) (-(?&nrun))? ) # range pair. (?<sep> ,) # separator used. ) # end of definitions. # Actual regex to use: # (range) that ends in a (sep) # or is at the end of the line, # several times (+). ^( (?&range) ((?&sep)|$) )+$ ' file This regex (once compiled) is exactly equivalent to the original one and will run equally fast. Of course, there is an (negligible) additional time used to compile the regex. Test example is here | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/659837",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/224025/"
]
} |
659,909 | Could anyone help explain the parameters in the following SFTP command please? sftp://user:[email protected]:22 This was used as part of the LFTP command of the source. What is the xx after the first : ? Is that the directory (couldn't find any such directory on the sever of this user)? | It's where the password for [email protected] gets put in the command, although this is not a recommended approach for sftp. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/659909",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/483149/"
]
} |
659,935 | Currently I need to access the folder in my desktop, and then a bin subfolder to reach ./pycharm . So I want to automate all of: cd Desktopcd pycharm-community-2021.3.1cd bin./pycharm.sh to pycharm I would like to ease the process by just inputting 'pycharm' in the terminal and have it launch. I'm aware Linux is incredible for such things - I just forgot how its called and how its done so. If someone could point me in the right direction. I would appreciate. | It's where the password for [email protected] gets put in the command, although this is not a recommended approach for sftp. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/659935",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/483163/"
]
} |
659,936 | Sample Data: Tab separated tsv file a.1.58 fadado/CSV https://github.com/fadado/CSVa.1.63 jehiah/json2csv https://github.com/jehiah/json2csva.1.80 stedolan/jq https://github.com/stedolan/jq/issues/370 Following selects one record using fzf, and stores 2nd and 3rd column to an array Link: mapfile -d $'\t' -t Link < <(awk 'BEGIN{FS="\t"; OFS="\t"} {print $2,$3}' "${SessionP}" | fzf) Issue In the above command I have used -t option of mapfile, but echo "${Link[1]}" prints a trailing new line! Why is it not getting eliminated? Reference mapfile Man Page - Linux - SS64.com | Check your local documentation instead of the documentation found someplace else on the web. In an interactive bash shell session, type help mapfile , or look up he documentation for mapfile in the bash manual ( man bash ). Depending on your version of bash , the documentation may vary from what's found on the web. On my system, with bash 5.1.8, help mapfile documents the -t option to mapfile like this: -t Remove a trailing DELIM from each line read (default newline) The DELIM is set with -d : -d delim Use DELIM to terminate lines, instead of newline This means that when using -d $'\t' -t with mapfile , it would remove a trailing tab character , if there was one, not a trailing newline character. The bash shell has had mapfile -d since release 4.4. The introduction of this option was documented like this : The mapfile builtin now has a -d option to use an arbitrary characteras the record delimiter, and a -t option to strip the delimiter assupplied with -d . To remove the trailing newline from your data when printing the last element, use "${Link[1]%$'\n'}" when outputting the element. This removes the last newline from the element if the last character is a newline character. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/659936",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/220462/"
]
} |
659,961 | I am using the find and grep commands. Getting quite confused about when multiple options are joined by "or"with the -o flag and the use of grouped parentheses, and when grouped parentheses are not used. When using find , grouped parentheses seem necessary find $fdir ( -name *.texi -o -name *.org ) When using grep , grouped parentheses not used grep --include "*.texi" --exclude "*.org" | Most programs, including grep don't treat parenthesis as arguments specially. If you did this: grep "(" --include "*.texi" --exclude "*.org" ")" grep would treat the first ( as the pattern to search for, and the last ) as a filename. (*) Same as if they were foo and bar instead. So, you can't group options to grep . But here's the thing: -name , -type , -o , and ( etc. aren't options to find . It does take some options, namely -P / -H / -L , which affect symlink processing, but these aren't options. Instead, they're part of the search expression, which is a thing specific to find . (**) Emphasis on expression there. When you give find the expression ( -name *.texi -o -name *.org ) it's more like the C-like expression ( patternmatch(filename, "*.texi") || patternmatch(filename, "*.texi") ) than anything else. And find evaluates that expression for each file it sees. If you had e.g. this instead: ( -name *.texi -o -name *.org ) -printf something You'd need the parens, because without them: -name *.texi -o -name *.org -printf something would be the same as -name *.texi -o -name *.org -a -printf something because there's an implied and between atoms unless -o is given, and then the expression would be patternmatch(...) || patternmatch(...) && printf(...) and the and operation binds tighter than the or operation, exactly in the same way it does in pretty much all programming languages, and in the same way multiplication binds tighter than addition. And find can't know what you wanted, because it supports arbitrary expressions. (***) So, in this case, it wouldn't work like you want without the parens. As others noted, the command you have doesn't need parens, since if there are no "actions" ( -print , -exec etc.) in the find expression, it defaults to printing matching filenames, and also implicitly puts parenthesis around the expression. So, find "$fdir" -name "*.texi" -o -name "*.org" acts like find "$fdir" \( -name "*.texi" -o -name "*.org" \) -print but if you explicitly put the -print there, you also need to explicitly put the parenthesis to get the processing order right. See: `find` with multiple `-name` and `-exec` executes only the last matches of `-name` Going back to grep : grep doesn't take parens, and doesn't need them, since it doesn't process expressions. It has no concept of nesting or operators like and and or in general. Instead, it has hard-coded behaviours. With --include and --exclude , I think it tries to fulfil both the include and exclude rules at the same time. (Or, at least one of the individual --include rules and none of the individual --exclude rules.) But with multiple search patterns, it's enough to match one, or another. Both of these are static rules: you can't give it a more complicated expression of which patterns should match. (* GNU grep would take the middle ones as options, other implementations might take them as filenames too, as the non-option argument earlier stopped option processing. Also, you need to quote or escape the parens to prevent their special meaning to the shell ; that's unrelated to what grep does with them.) 
(** In the same way that it's specific to grep that the first non-option argument is a pattern, and only the rest are filenames, or that the last argument to mv is a destination while the others are files to move, and it's specific to git what it does with whatever arguments it takes. The tools do different things, so they have to use the command line arguments in different ways.) (*** Someone once said that evaluating expressions is the main thing find does. That is to say, it doesn't find filenames to print them, it goes through a tree of files to evaluate an expression on them. Printing and running external commands is just a side-effect.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/659961",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/451524/"
]
} |
660,057 | I have a Linux VM on AWS in EC2 that starts up, performs a task, and then shuts itself down. I am issuing the shutdown command like this: shutdown -h 5 I have a 5 minute delay to give myself time to ssh into the server and cancel the shutdown if I want to do something with the server. The problem I have is once I issue the shutdown command, Linux will no longer allow new logins. There doesn't seem to be anything in the man page to allow it to issue a shutdown, but still allow new logins. Is there a way to issue this shutdown command, but still allow a new ssh login? | You could simply not use shutdown with a time specification, but echo shutdown -h now | at now + 5 minutes or similar; a simple sleep $((60*5)) ; shutdown -h now would do, too. However, using at has the advantage that you can review waiting commands using atq and cancel them using atrm . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/660057",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/483270/"
]
} |
660,068 | I am attempting to run a shell script on a remote server in jenkins scripted pipeline using the sshScript remote: remote, script: command.This line of code currently looks like this: sshScript remote: remote, script: './bash.sh' "$env.gitTag" "$env.Version" However, it keeps saying that the parameters are null there is an error being thrown. I have tried everything and cannot find an answer. I echo the parameters before they are thrown and they contain strings which I expect. 15:56:35 WARNING: Unknown parameter(s) found for class type 'org.jenkinsci.plugins.sshsteps.steps.CommandStep': script[Pipeline] }[Pipeline] // stage[Pipeline] }[Pipeline] // node[Pipeline] }[Pipeline] // timestamps[Pipeline] End of Pipelinejava.lang.IllegalArgumentException: command is null or empty at org.jenkinsci.plugins.sshsteps.steps.CommandStep$Execution.run(CommandStep.java:69) at org.jenkinsci.plugins.sshsteps.util.SSHStepExecution.lambda$start$0(SSHStepExecution.java:84) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829)Finished: FAILURE These are the errors being published at the end. | You could simply not use shutdown with a time specification, but echo shutdown -h now | at now + 5 minutes or similar; a simple sleep $((60*5)) ; shutdown -h now would do, too. However, using at has the advantage that you can review waiting commands using atq and cancel them using atrm . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/660068",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/483273/"
]
} |
660,110 | I'm trying to align output from a bash for loop. Currently, I'm getting output from my loop that looks like so: Directory: /some/long/directory/path Remote: some-remote Directory: /some/dir/path Remote: other-remote Which I'm trying to align like so: Directory: /some/long/directory/path Remote: some-remote Directory: /some/dir/path Remote: other-remote The current, basic loop that generates this output looks something like this: for dir in $(find /some/path -type d -name .git); do cd $dir remote=$(git remote) printf "Directory: $dir\tRemote: $remote\n" done I've tried using: column (which formats each line separately, as it's a for loop), printf ( printf "Directory: %s Remote: %s\n" "$dir" "$remote" ) and awk ( echo "Directory: $dir Remote: $remote" | awk '{printf ("%s-20s %s-20s %s-20s %s-20s",$1 $2 $3 $4)}' ), among many other variations of these commands. I'm probably missing something basic (I tried my best at looking at other examples online and reading the man pages), but I couldn't get it to work. I'd really appreciate any pointers as to what I'm doing wrong. | column should work just fine. However, you don't add it to each loop iteration, but at the end: for...done | column -t Output: Directory: /some/long/directory/path Remote: some-remote Directory: /some/dir/path Remote: other-remote Some additional notes regarding your script: Do not loop over find output like this. Check here. Quote file/directory variables --> cd "$dir" Do not use variables in the printf FORMAT string --> printf 'Directory: %s\tRemote: %s\n' "$dir" "$remote" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/660110",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/470190/"
]
} |
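A minimal runnable sketch combining the notes above — pipe the whole loop, don't word-split find output, quote the variables (the /some/path placeholder comes from the question; using find -print0 with a while read loop is an assumption, not part of the original answer): find /some/path -type d -name .git -print0 | while IFS= read -r -d '' gitdir; do dir=${gitdir%/.git}; printf 'Directory: %s\tRemote: %s\n' "$dir" "$(git -C "$dir" remote)"; done | column -t -s $'\t'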
660,135 | I have a seemingly very simple problem but I am unable to come up with a satisfying solution. I have a simple input file containing IPs and ports, like 10.155.78.0 445 172.17.11.0 3389 Now I want to execute nc -vvv <ip> <port> for each line in a for loop. All I can come up with is splitting the line twice with cut: for x in $(cat inputfile); do nc -vvv $(echo -n $x | cut -d" " -f1) $(echo -n $x | cut -d" " -f2) or using gawk and starting a sub-shell for x in $(cat dingens); do cmd=$(echo $x | gawk -F" " '{ print "nc -vvv -w 2 " $1 " " $2 }'); echo -n $cmd | bash; done but both solutions seem terribly complicated. Isn't there a better solution? | while IFS=" " read -r Ip Port Junk <&3; do nc -vvv "${Ip}" "${Port}" 3<&-; done 3< inFile The purpose of Junk is to receive any fields after the second (for example, a comment). We open inFile on fd 3 instead of stdin, as otherwise nc, invoked within that loop, would also end up reading the contents of inFile | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/660135",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/162773/"
]
} |
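The same loop can carry the -w 2 timeout from the asker's gawk attempt, with inFile being the two-column "IP port" file from the question: while IFS=" " read -r Ip Port Junk <&3; do nc -vvv -w 2 "${Ip}" "${Port}" 3<&-; done 3< inFile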
660,148 | I have a function plist that is able to call head and tail commands. But for processing regions I call a different function pregion . # --- plist --- ("-H"|"--head") local -r hn="$2" ; shift 2 ;; ("-T"|"--tail") local -r tm="$2" ; shift 2 ;; ("--FS") # field separator local fs="$2" ; shift 2 ;; ("--incl") local incl+=("$2") ; shift 2 ;; # file type suffix ("--excl") local excl+=("$2") ; shift 2 ;; # file type suffix ("--RP") local pn=$2 ; shift 2 ;; ("--RQ") local qn=$2 ; shift 2 ;; ("--dyn"|"--dynamic") local dyn="1" ; shift 1 ;; ("-C"|"--context") local ctx=$2 ; shift 2 ;; ("-d"|"--directory") local fdir=$2 ; shift 2 ;; (--) shift; break ;; ... if [[ -v hn ]]; then head -v -hn "$n" elif [[ -v tm ]]; then tail -v -n "$tm" elif [[ -v dyn ]]; then pregion "$@" # requires original options here fi With head and tail , I only use options -H , -T ', --FS ,and --incl . Because I am using shift when processing options, I need to have a copy ef the original plist input arguments, because I cannot simply pass "$@" to pregion . This will call head or tail plist -H 8 ./01cuneus plist -T 13 ./01cuneus Examples of calling pregion plist --dyn -C 8 "Martin" ./01cuneusplist --incl .texi --incl .org --RP 8 --RQ 13 ./01cuneusplist --incl .texi --incl .org --dyn -C 8 "Martin" ./01cuneus | while IFS=" " read -r Ip Port Junk <&3; do nc -vvv "${Ip}" "${Port}" 3<&-done 3< inFile The purpose of Junk is to receive any fields after the second (for example, a comment). We open inFile on fd 3 instead of stdin as otherwise nc , invoked within that loop would also end up reading the contents of inFile | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/660148",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/451524/"
]
} |
660,178 | I have 2 column file like: $ cat dataa4 b1a4 c2a4 b4z4 c2 I want to match both columns such asif (column1 = a4 and column2 = b1) OR (column1 = a4 and column2 = c2) then the output in column3 should be (DESIRED OUTPUT) : a4 b1 matcheda4 c2 matcheda4 b4 -z4 c2 - so I tried to incorporate my logic into 1 liner awk: $ awk '{print $1, $2, (($1 = a4 && $2 = b1) || ($1 = a4 && $2 = c2) ? "a4-matched" : "-")}' data and I'm getting - for whole column3, I guess I have wrong awk syntax in place, or something else missing -- below is result: a4 b1 -a4 c2 -a4 b4 -z4 c2 - | You were almost there, but you seem to have introduced a syntax error: $1=a4 would not check if the first column is equal to a4 , but assign the content of the awk variable a4 (which is undefined and therefore empty) to the first column, thereby overwriting its content (which you already printed, so you didn't notice) and also evaluating to "false" because an uninitialized variable evaluates as "false". The same is true for your other comparisons. That is why you never get the "matched" condition as "true". With the (small) required corrections, the program would look as follows: awk '{if (($1=="a4" && $2=="b1") || ($1=="a4" && $2=="c2")) $3="matched"; else $3="-"} 1' data.txt It works as follows: For every line, it will check whether the conditions you mention are met, and adds a third column to the line by setting $3 to either - or matched . It will then print the current line including any modifications made. This is the meaning of the seemingly stray 1 outside of the rule block - awk will print the current line including any previous modifications if it encounters a condition that evaluates to "true" outside of a rule. Note that the above program is written explicitly for ease of understanding and to demonstrate the point. It can be shortened in your case because the condition on $1 is the same for both "allowed" cases of $2 : awk '{if ($1=="a4" && ($2=="b1" || $2=="c2")) $3="matched"; else $3="-"} 1' data.txt Also note that modifying any field will cause awk to rebuild the line from its individual fields using the output field separator (defaults to one space), so if the input fields were separated by more than one space, the original formatting will be clobbered. If that is an issue, you should go with the "appending" strategy you already chose in the attempt you presented, although you should then print $0, ( your conditional string ) instead of $1, $2, ( your conditional string ) . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/660178",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117409/"
]
} |
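For completeness, running the shorter form on the sample data file from the question produces exactly the desired third column: $ awk '{if ($1=="a4" && ($2=="b1" || $2=="c2")) $3="matched"; else $3="-"} 1' data a4 b1 matched a4 c2 matched a4 b4 - z4 c2 -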
660,284 | I want to use awk to sort content of an input file into different output files. Simple example Assuming the following input file: $cat sample.txt START Unix Linux START Solaris Aix SCO The awk program awk '/START/{x="F"++i;}{print > x}' sample.txt produces the following output into files: $ cat F1 START Unix Linux $ cat F2 START Solaris Aix SCO Actual usage scenario When I apply this technique to my actual use case, awk '/Certificate Revocation List (CRL):/{x="F"++i;}{print > x}' test_cert.pem does not extract the contents starting from Certificate Revocation List (CRL): Instead it gives following error: awk: cmd. line:1: (FILENAME=test_cert.pem FNR=1) fatal: expression for `>' redirection has null string value I tried putting the pattern in quotes and all, but it does not work, not sure if the pattern is multiword how we extract the content. The test_cert.pem looks as follows: Certificate Revocation List (CRL): Version 2 (0x1) Signature Algorithm: sha256WithRSAEncryption Issuer: C = XX, O = XXXXX, OU = 0003 374154744412350, CN = XXX Last Update: Aug 15 04:37:16 2021 GMT Next Update: Sep 23 03:47:16 2021 GMT CRL extensions: X509v3 CRL Number: 209 X509v3 Authority Key Identifier: keyid:09:DF:3B:15:GE:10:08:D5:86:8F:5B:E7:E6:36:B9:A1:A8:1A:83:18Revoked Certificates: Serial Number: AAS60F19DABCDA8AGHIK3E4A59988AAFDA8E6 Revocation Date: Jan 29 12:45:09 2021 GMT Serial Number: GGF0HHHABCDA8AGHIK3E4A599KKKAFDA8E6 Revocation Date: Jul 25 4:32:24 2021 GMT Signature Algorithm: sha256WithRSAEncryption 1e:cc:8e:9d:gv:ae:eb:0a:67:95:4b:8b:b6:5d:9e:bd:48:42: a5:25:e8:eb:b2:22:BV:42-----BEGIN X509 CRL-----MIIDLLKKARMCAQEwLLKKAOKONcNAQELBQYYUzvgfzELLLKKA1UEBhMCRlIxDzANBgNVmZ7YI0YYUzvgrzYYUzvgz9Deb78UGbaedXkYYUzvgr5Hu1Zm16YYUzvgXo67IiNUI=-----END X509 CRL----- | The problem in your case is two-fold. The first problem is that your matching pattern contains characters that are special to regular expressions, in this case the ( ... ) . You need to escape them in order for your program to actually find the match. Currently, your program doesn't find the match and therefore x is never initialized. That is the reason for the "redirection has null string value" error. In addition, even if the regular expression were formulated correctly, it would fail for anything that comes before the first occurence of the Certificate Revocation List (CRL): string. So you need to correct your regular expression ensure that nothing is printed unless x is initialized. Your could change your program to awk '/Certificate Revocation List \(CRL\):/{x="F"++i;}{if (x) print > x}' test_cert.pem and it will work again. But this is again an example why you shouldn't use regular expression matches if you simply look for a fixed string . To harden your program against this kind of problem, use awk '$0=="Certificate Revocation List (CRL):"{x="F"++i}{if (x) print >x}' test_cert.pem instead. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/660284",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/450545/"
]
} |
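As a further fixed-string variant (a sketch, not shown in the answer above): awk's index() function also avoids regex metacharacters entirely, and matches the marker anywhere on the line rather than requiring the whole line to equal it: awk 'index($0, "Certificate Revocation List (CRL):") {x="F"++i} {if (x) print > x}' test_cert.pem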
660,388 | Why are the numbers so irregular? echo {1..200000} | xargs perl -E 'say "ok:", scalar @ARGV'ok:23691ok:21840ok:21840ok:21840ok:20261ok:18720ok:18720ok:18720ok:18720ok:15648 It's more civilised with standard argument length. perl -E' say "1 " x 900000' | xargs perl -E 'say "ok:", scalar @ARGV'ok:65520ok:65520ok:65520ok:65520ok:65520ok:65520ok:65520ok:65520ok:65520ok:65520ok:65520ok:65520ok:65520ok:48240 What's the key factor anyhow? | The numbers that matter are the total length of (all) the arguments, and the command buffer size xargs decides to use. The first depends on the fixed command line to the command you run, and the arguments xargs gives each invocation. perl -E 'say "ok:", scalar @ARGV' is 32 bytes, counting the NUL bytes that terminate the strings (i.e. perl<NUL>-E<NUL>say "ok:", scalar @ARGV<NUL> . And in the second example, all the arguments are two bytes each, 1<NUL> . So 32 + 65520 * 2 bytes, or 131072 B = 128 * 1024 B = 128 kB. Obviously in the first example, the lengths of arguments vary, giving varying counts, but the logic should be the same. E.g. 21840 args for the second to fourth runs matches 5-digit arguments (6 bytes each): 21840 * 6 + 32 = 131072. The size of the command buffer may depend on the implementation, but GNU xargs can show it with xargs --show-limits , and on my Linux, I get: $ echo | xargs --show-limitsYour environment variables take up 2305 bytesPOSIX upper limit on argument length (this system): 2092799POSIX smallest allowable upper limit on argument length (all systems): 4096Maximum length of command we could actually use: 2090494Size of command buffer we are actually using: 131072Maximum parallelism (--max-procs must be no greater): 2147483647 Looking at the second to last line, that's exactly the same number. You can change the size of the buffer it uses with -s , e.g. with just 10 kB buffer: $ perl -E' say "1 " x 90000' | xargs -s 10240 perl -E 'say "ok:", scalar @ARGV' ok:5104ok:5104ok:5104... And of course there's also -n to limit the number of individual arguments: $ echo {1..200000} | xargs -n 10000 perl -E 'say "ok:", scalar @ARGV' ok:10000ok:10000ok:10000... --show-limits mentions environment variables because they use the same space as command line arguments, and if you raise the buffer size enough, close to the system maximum, their size starts to matter too. I'm not sure if the system also counts the sizes of the pointers to the argument strings against the limit, but at least xargs doesn't seem to care about that. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/660388",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
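The arithmetic above can be checked directly in the shell: $ echo $(( 32 + 65520 * 2 )) # fixed command line plus 65520 two-byte arguments 131072 $ echo $(( 131072 / 1024 )) # i.e. the 128 kB command buffer 128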
662,461 | Imagine you have a silly script test.sh to which arguments are passed that would look like this: bash test.sh arg1 arg2 arg3 with test.sh being a silly script that displays its command line: #!/bin/bashecho "$0 $*" I would like to do the same using bash heredoc << to feed the script to bash.So I tried this : bash <<'EOF' -- arg1 arg2 arg3echo "$0 $*"EOF But it fails with error bash: arg1: No such file or directory Any idea ? | The here document is passed to the inner bash on its standard input. You need to instruct bash to read from standard input. With no command line arguments, this happens automatically. If there's at least one non-option argument and no -s or -c option, the first non-option argument is the name of the script file to run. You can pass the -s option to tell bash to read from standard input. bash <<'EOF' -s -- arg1 arg2 arg3echo "$0 $*"EOF | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/662461",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/402499/"
]
} |
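With -s, the inner shell's $0 is simply the name bash was invoked as, so the snippet should print something like: $ bash <<'EOF' -s -- arg1 arg2 arg3 echo "$0 $*" EOF bash arg1 arg2 arg3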
662,484 | I have a TOML file in the following format (categories may have any name, the sequential numbering is just an example and not guaranteed): [CATEGORY_1]A=1B=2[CATEGORY_2]C=3D=4E=5...[CATEGORY_N]Z=26 What I want to achieve is to retrieve the text inside a given category. So, if I specify, let's say, [CATEGORY_1] I want it to give me the output: A=1B=2 I tried using grep to achieve this task, with the z flag, so it could interpret newlines as null-byte characters and using this regular expression: (^\[.*]) # Match the category ((.*\n*)+? # Match the category content in a non-greedy way (?=\[|$)) # Lookahead to the start of other category or end of line It wasn't working unless I removed the ^ at beginning of the expression. However, if I do this, it will misinterpret loose pairs of brackets as a category. Is there a way to do it correctly? If not with grep , with other tool, such as sed or awk . | If I understand you correctly, you can use this sed command: # Choose the category until the next [ character# and then delete any line starting with the [ character$ sed -n '/^\[CATEGORY_2\]/,/^\[/p' file | sed '/^\[/d'C=3D=4E=5 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/662484",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/382867/"
]
} |
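An awk alternative (a sketch assuming the simple layout shown, where every section header is a whole line of the form [NAME]): awk -v sec='CATEGORY_2' '/^\[/{f = ($0 == ("[" sec "]")); next} f' file — for the sample input this prints C=3 D=4 E=5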
662,607 | I've multiple configurations in YAML file and I need to change some paramaters using a Bash script. Is it possible? I want to avoid using any external dependency. My YAML Looks like %YAML 1.2---name: miccomponents:- name: Mic parameters: period_count: 4 alsa_device_name: "pulse"---name: speakercomponents:- name: Speaker parameters: period_duration_ms: 20 period_count: 4 alsa_device_name: "pulse" I want it to behave like if I provide --mic logitec --speaker hk34 then it should modify alsa_device_name for both mic and speaker in my config to %YAML 1.2---name: miccomponents:- name: Mic parameters: period_count: 4 alsa_device_name: "logitec"---name: speakercomponents:- name: Speaker parameters: period_duration_ms: 20 period_count: 4 alsa_device_name: "hk34" Is it possible to do it using only Bash, and if so, how? For now I am using a Python script but this adds an extra dependency of having python3 , which is something I want to avoid. | Firstly, parsing a language like YAML that has a well defined grammar using line oriented tools is a bad idea. Don't use it in production! Never. There are syntax aware tools for parsing YAML via command line like Python yq and Go yq . They support constructs within YAML like anchors, block literals which is not understood by standard line oriented tools. For one time processing like above, you could use awk like BEGIN { map["name: mic"] = "\"logitec\"" map["name: speaker"] = "\"hk34\""}match($0, "name: mic") || match($0, "name: speaker") { key = substr($0, RSTART, RLENGTH)}/alsa_device_name/ { sub(/:.*/, ": " map[key])}{ print } You could put the above in a awk script ( .awk ) and run it as awk -f script.awk yaml or run it as part of command line. You can also define the values in the command line as awk -v mic='"logitec"' -v speaker='"hk34"' -f script.awk yaml and process the arguments in the BEGIN block as BEGIN { map["name: mic"] = mic map["name: speaker"] = speaker} Also note that your input isn't a valid YAML file, the header line %YAML 1.2 would make most standard YAML parsers throw an error on the syntax of your input. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/662607",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/439190/"
]
} |
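A rough wrapper sketch for the --mic/--speaker interface the question asks for, feeding the BEGIN-block variant of the awk program from the answer above (the file name script.awk and the "pulse" defaults are assumptions): #!/bin/sh mic='"pulse"' speaker='"pulse"' while [ "$#" -gt 0 ]; do case $1 in --mic) mic="\"$2\""; shift 2 ;; --speaker) speaker="\"$2\""; shift 2 ;; *) file=$1; shift ;; esac done awk -v mic="$mic" -v speaker="$speaker" -f script.awk "$file"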
662,645 | I have an file whose fields are ID , Designation , ParentID , and ParentDesignation . The file content is the following. A1 M.D-Sales 0 UmbrellaCorpa1 Sr.Sales A1b1 Sr.R&D B1b2 Jr.SR&D B1a2 Jr.Sales A1B1 M.D-R&D 0 UmbrellaCorp I want to get ParentDesignation for those lines that are missing the fourth column, which would essentially mean to: Read each line Get ParentID from the third column Match it with the value in the first column Insert it into the fourth column4 in front of that child. The result would be the following one. A1 M.D-Sales 0 UmbrellaCorpa1 Sr.Sales A1 M.D-Salesb1 Sr.R&D B1 M.D-R&Db2 Jr.SR&D B1 M.D-R&Da2 Jr.Sales A1 M.D-SalesB1 M.D-R&D 0 UmbrellaCorp I know how I could do the same task in Excel with vlookup , but I need to use a script. | Final answer given more comments below and updated sample input/output in question: I'd sort the data first so the act of filling in the missing values is more efficient and uses less memory than doing a 2-pass approach within awk and the final output is much better organized than the input was for readability: $ cat tst.sh#!/usr/bin/env bashawk ' BEGIN { FS=OFS="\t" } { print (NR>1), ($4=="" ? $3 : $1), $4, $1, NR, $0 }' "${@:--}" |sort -t$'\t' -k1,1n -k2,2 -k3,3r -k4,4 -k5,5n |cut -f6- |awk ' BEGIN { FS=OFS="\t" } $4 != "" { d = $2 } $4 == "" { $4 = d } { print }' $ ./tst.sh file | column -s$'\t' -tID Designation ParentID ParentDesignationA1 M.D-Sales 0 UmbrellaCorpa1 Sr.Sales A1 M.D-Salesa2 Jr.Sales A1 M.D-SalesB1 M.D-R&D 0 UmbrellaCorpb1 Sr.R&D B1 M.D-R&Db2 Jr.SR&D B1 M.D-R&D The first call to awk just decorates the input so it can be sorted by: (NR>1) = header-or-not 0-or-1 indicator to ensure the header line remains first after sorting, ($4=="" ? $3 : $1) = the ID or ParentID for each row to group related rows together $4 = the ParentDesignation so we can sort it such that rows with a ParentDesignation come before those that don't for the same ID/ParentID, $1 = the ID so we can sort children alphabetically by their ID, NR = so if everything else is common we can print the lines in the same order as they occurred in the input (probably not necessary in this case as every ID appears to be unique but good practice for other similar situations). Then we just sort by the above fields and then remove the decorations using cut before passing to the final awk script to actually do the $4 population. If you're not sure what any of those steps do, just change each | to | cat; exit one at a time and then you'll see what's happening at each step. 
Previous answer: Given the comments below, this might be what you want, assuming a parent (if it exists) always occurs before a child in your data: $ cat tst.awkBEGIN { FS=OFS="\t" }$4 != "" { id2des[$1] = $2}$4 == "" { $4 = id2des[$3]}{ print } $ awk -f tst.awk fileID Designation ParentID ParentDesignationA1 M.D-Sales 0 UmbrellaCorpa1 Sr.Sales A1 M.D-Salesa2 Jr.Sales A1 M.D-SalesB1 M.D-R&D 0 UmbrellaCorpb1 Sr.R&D B1 M.D-R&Db2 Jr.SR&D B1 M.D-R&D Original answer: Your problem actually seems to be simpler than you specified as you appear to have a parent row with all info followed by children rows missing $4 in which case you don't need to look up anything, all you need is: $ awk 'BEGIN{FS=OFS="\t"} $4!=""{d=$2} $4==""{$4=d} 1' fileID Designation ParentID ParentDesignationA1 M.D-Sales 0 UmbrellaCorpa1 Sr.Sales A1 M.D-Salesa2 Jr.Sales A1 M.D-SalesB1 M.D-R&D 0 UmbrellaCorpb1 Sr.R&D B1 M.D-R&Db2 Jr.SR&D B1 M.D-R&D $ awk 'BEGIN{FS=OFS="\t"} $4!=""{d=$2} $4==""{$4=d} 1' file | column -s$'\t' -tID Designation ParentID ParentDesignationA1 M.D-Sales 0 UmbrellaCorpa1 Sr.Sales A1 M.D-Salesa2 Jr.Sales A1 M.D-SalesB1 M.D-R&D 0 UmbrellaCorpb1 Sr.R&D B1 M.D-R&Db2 Jr.SR&D B1 M.D-R&D | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/662645",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117409/"
]
} |
662,696 | I am in the process of developing a bash script which automates the addition of a USB wifi dongle to a virtual machine (QEmu/KVM virtualization) and therefore to add a wifi key to a VM. [edit]This VM is for the moment with Debian Buster distro[/edit] From the host when I plug in my TP-Link TL-WN823N USB dongle, the following interface is added : user@host:~$ ip -o link | grep wlx57: wlx123456789012: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000\ link/ether 2a:51:d5:12:34:56 brd ff:ff:ff:ff:ff:ff When I list the USB devices I get : user@host:~$ lsusb | grep TP-LinkBus 002 Device 009: ID 2357:0109 TP-Link TL WN823N RTL8192EU It is therefore identified by the wlx123456789012 interface, its vendor:product ID is 2357:0109 and is device #9 of USB bus #2 .* How to determine for sure vendor:product ID when we only know the name of the interface ? @meuh suggestion user@host:~$ ls --format=commas /sys/class/net/wlx123456789012/deviceauthorized, bAlternateSetting, bInterfaceClass, bInterfaceNumber,bInterfaceProtocol, bInterfaceSubClass, bNumEndpoints, driver,ep_01, ep_02, ep_03, ep_04, ep_05, ep_06, ep_81, ieee80211, leds,modalias, net, power, subsystem, supports_autosuspend, uevent So no vendor nor device file directly in this location (according to him it would be due to the fact that its test concerns an on-board wifi) But that inspired me, so I tried : user@host:~$ grep -iEr "2357|0109" /sys/class/net/wlx123456789012/device.../sys/class/net/wlx123456789012/device/modalias:usb:v2357p0109d0101dc00dsc00dp00icFFiscFFipFFin00/sys/class/net/wlx123456789012/device/uevent:PRODUCT=2357/0109/101/sys/class/net/wlx123456789012/device/uevent:MODALIAS=usb:v2357p0109d0101dc00dsc00dp00icFFiscFFipFFin00 So in /sys/class/net/wlx123456789012/device , there are : modalias : usb:v 2357 p 0109 d0101dc00dsc00dp00icFFiscFFipFFin00 uevent : PRODUCT= 2357 / 0109 /101 So I found traces but the fact that @meuh gives me another localization makes me doubt that the solution (especially if I change the version of the distro or just the distro) @Tom Yan suggestion user@host:~$ udevadm info /sys/class/net/wlx123456789012 \ | sort -r | awk '/ID_(VENDOR|MODEL)_ID/'E: ID_VENDOR_ID=2357E: ID_MODEL_ID=0109 Notas : here MODEL is used instead of PRODUCT ; sort -r is used to sort VENDOR line prior to MODEL line | Final answer given more comments below and updated sample input/output in question: I'd sort the data first so the act of filling in the missing values is more efficient and uses less memory than doing a 2-pass approach within awk and the final output is much better organized than the input was for readability: $ cat tst.sh#!/usr/bin/env bashawk ' BEGIN { FS=OFS="\t" } { print (NR>1), ($4=="" ? $3 : $1), $4, $1, NR, $0 }' "${@:--}" |sort -t$'\t' -k1,1n -k2,2 -k3,3r -k4,4 -k5,5n |cut -f6- |awk ' BEGIN { FS=OFS="\t" } $4 != "" { d = $2 } $4 == "" { $4 = d } { print }' $ ./tst.sh file | column -s$'\t' -tID Designation ParentID ParentDesignationA1 M.D-Sales 0 UmbrellaCorpa1 Sr.Sales A1 M.D-Salesa2 Jr.Sales A1 M.D-SalesB1 M.D-R&D 0 UmbrellaCorpb1 Sr.R&D B1 M.D-R&Db2 Jr.SR&D B1 M.D-R&D The first call to awk just decorates the input so it can be sorted by: (NR>1) = header-or-not 0-or-1 indicator to ensure the header line remains first after sorting, ($4=="" ? 
$3 : $1) = the ID or ParentID for each row to group related rows together $4 = the ParentDesignation so we can sort it such that rows with a ParentDesignation come before those that don't for the same ID/ParentID, $1 = the ID so we can sort children alphabetically by their ID, NR = so if everything else is common we can print the lines in the same order as they occurred in the input (probably not necessary in this case as every ID appears to be unique but good practice for other similar situations). Then we just sort by the above fields and then remove the decorations using cut before passing to the final awk script to actually do the $4 population. If you're not sure what any of those steps do, just change each | to | cat; exit one at a time and then you'll see what's happening at each step. Previous answer: Given the comments below, this might be what you want, assuming a parent (if it exists) always occurs before a child in your data: $ cat tst.awkBEGIN { FS=OFS="\t" }$4 != "" { id2des[$1] = $2}$4 == "" { $4 = id2des[$3]}{ print } $ awk -f tst.awk fileID Designation ParentID ParentDesignationA1 M.D-Sales 0 UmbrellaCorpa1 Sr.Sales A1 M.D-Salesa2 Jr.Sales A1 M.D-SalesB1 M.D-R&D 0 UmbrellaCorpb1 Sr.R&D B1 M.D-R&Db2 Jr.SR&D B1 M.D-R&D Original answer: Your problem actually seems to be simpler than you specified as you appear to have a parent row with all info followed by children rows missing $4 in which case you don't need to look up anything, all you need is: $ awk 'BEGIN{FS=OFS="\t"} $4!=""{d=$2} $4==""{$4=d} 1' fileID Designation ParentID ParentDesignationA1 M.D-Sales 0 UmbrellaCorpa1 Sr.Sales A1 M.D-Salesa2 Jr.Sales A1 M.D-SalesB1 M.D-R&D 0 UmbrellaCorpb1 Sr.R&D B1 M.D-R&Db2 Jr.SR&D B1 M.D-R&D $ awk 'BEGIN{FS=OFS="\t"} $4!=""{d=$2} $4==""{$4=d} 1' file | column -s$'\t' -tID Designation ParentID ParentDesignationA1 M.D-Sales 0 UmbrellaCorpa1 Sr.Sales A1 M.D-Salesa2 Jr.Sales A1 M.D-SalesB1 M.D-R&D 0 UmbrellaCorpb1 Sr.R&D B1 M.D-R&Db2 Jr.SR&D B1 M.D-R&D | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/662696",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/455838/"
]
} |
662,777 | My PC fails to run snap packages; when I try, I get: 2021/07/31 20:56:38.255535 cmd_run.go:576: WARNING: XAUTHORITY environment value is not a clean path: "/mnt/e664d184-8567-4278-93ce-c986567c66af/home/iaquobe/.Xauthority" cannot create user data directory: /home/iaquobe/snap/shapezio/2: Not a directory The directories do, however, exist. So far the packages I tested are 0ad, shapezio and whatsdesk; all of them had the same issue. Those packages do run on my laptop. One thing that is different is that on my PC /home/iaquobe is a symbolic link to a drive at /mnt/[...]/home . This is the only cause for this error I could think of; what do you think? And what could I do to fix it? Thanks in advance :) | The /home symlink is indeed causing the issue. This is a known snap bug (or, to be more precise, a design limitation of snap) -- with snap packages, home can't be a symlink or a directory other than /home; see this bug for details. The suggested workaround/fix is to run sudo dpkg-reconfigure apparmor, but some people in the bug discussion said it didn't help, so it might not work. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/662777",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/315546/"
]
} |
662,839 | I'm working with XML files, each of which could be dozens of lines long. There are literally hundreds of these files, all over a directory structure. Yes, it is Magento. I need to find the file that has the <foo><bar><boom><bang> element. A <boom><bang> tag could be defined under other tags, so I need to search for the full path not just the end tag or tags. There could be dozens of lines between each tag, and other tags between them: <foo> <hello_world>... 50 lines .... </hello_world> <bar> <giraffe>... 50 lines .... </giraffe> <boom> <bang>Vital information here</bang> </boom> </bar></foo> What is the elegant, *nix way of searching for the file that defines <foo><bar><boom><bang> ? I'm currently on an up-to-date Debian-derived distro. This is my current solution, which is far from eloquent: $ grep -rA 100 foo * | grep -A 100 bar | grep -A 100 boom | grep bang | grep -E 'foo|bar|boom|bang' | You could try xmlstarlet to sel ect i f the path exists then output the f ilename: find . -name '*.xml' -exec xmlstarlet sel -t -i '/foo/bar/boom/bang' -f -n {} + | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/662839",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9760/"
]
} |
662,958 | I have this XML file (example) <This is a line of text with a year=2020 month=12 in itThis line of text does not have a year or month in itThis year=2021 is the current year the current month=1This is the year=2021 the month=2/><This is a line of text with a year=33020 month=12 in itThis line of text does not have a year or month in itThis year=33020 is the current year the current month=1This is the year=33020 the month=2/> Using the sed installation provided by my Linux distribution ( sed (GNU sed) 4.2.2) I search within this file with the following regexp: sed -En 'N;s/\<(This.*2020.*[\s\S\n]*?)\>/\1/gp' test2.txt However, it captures only this string: <This is a line of text with a year=2020 month=12 in itThis line of text does not have a year or month in it But I try to capture the entire first paragraph between < and > that contains the pattern. What am I doing wrong here? | The reason this doesn't work as you expect is that < and > do not need to be escaped in regular expressions, they don't have any special meaning. However, \< and \> do have special meaning for GNU extended regular expressions (which you activate with -E ): they match at word boundaries. \< matches the beginning of a word and \> the end. So \<(This isn't actually matching the < , it is matching the beginning of the word This . Similarly for the \> at the end. The GNU sed manual has an example which is almost exactly what you're after: $ sed -En '/./{H;1h;$!d} ; x; s/(<This.*2020.*?>)/\1/p;' file<This is a line of text with a year=2020 month=12 in itThis line of text does not have a year or month in itThis year=2021 is the current year the current month=1This is the year=2021 the month=2/> I find sed particularly ill-suited to this sort of task. I would use perl instead: $ perl -000 -ne 'chomp;/<.*2020.*?>/s && print "$_\n"; exit' file<This is a line of text with a year=2020 month=12 in itThis line of text does not have a year or month in itThis year=2021 is the current year the current month=1This is the year=2021 the month=2/> Here, we are using Perl in "paragraph mode" ( -000 ) which means that a "line" is defined by two consecutive \n characters, by a blank line. The script will: chomp : remove the trailing newline at the end of the "line" (paragraph). /<.*2020.*?>/s && print "$_\n" : if this "line" (paragraph) matches a < then 0 or more characters until 2020 and zero or more characters and then a > , then print this line appending a newline character ( print "$_\n" ). The s modifier to the match operator allows . to match newlines. Another option is awk : $ awk 'BEGIN{RS="\n\n"} /<.*2020.+?>/' file<This is a line of text with a year=2020 month=12 in itThis line of text does not have a year or month in itThis year=2021 is the current year the current month=1This is the year=2021 the month=2/> We set the record separator RS to two consecutive newlines and then match using the same regex as above. Since in awk the default behavior when a match is found (or any other operation returns true) is to print the current record, this will print out what you need. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/662958",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/188662/"
]
} |
663,245 | I am running iwconfig | grep -v "no wireless extensions" but can't get the -v option to work as expected. I want to exclude lines including "no wireless extensions", i.e., I want to display only the active/working wireless interface or whatever this should be called. In the beginning I thought perhaps the command outputs to a stream different than the one piped to grep, so I tried cat myFile | grep -v myExclusionPattern . This works as expected, so I concluded -v does what I expect it does.I then tried iwconfig | grep "no wireless extensions" and again the result is the expected - meaning the output of iwconfig is what is piped to grep.So I am left with the question why specifically -v is not working when piping the results of iwconfig to grep . Here is my output to iwconfig: enp4s0 no wireless extensions.docker0 no wireless extensions.lo no wireless extensions.wlp5s0 IEEE 802.11 ESSID:"myEssid" Mode:Managed Frequency:5.24 GHz Access Point: 74:83:C2:75:86:2A Bit Rate=6 Mb/s Tx-Power=30 dBm Retry short limit:7 RTS thr:off Fragment thr:off Power Management:on Link Quality=59/70 Signal level=-51 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:747 Missed beacon:0 Summarizing: grep -v works for me when I pipe to grep the output of a file with cat . I can't reproduce the same behavior as in (1) when piping the output of iwconfig to grep . I read the following questions on grep -v, but am not able to find the answer to the above in any of them: Why "grep -q -v" only works with single line input? Pipe find into grep -v grep -v without output? grep -v -f alternative | iwconfig outputs to both standard output and to standard error, depending on whether it found or did not find any wireless extensions for an interface. Piping only affects standard output. Example removing the output sent to the standard error stream (only shows interfaces that have wireless extensions): $ /usr/sbin/iwconfig 2>/dev/nullwlp4s0 IEEE 802.11 ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=22 dBm Retry short limit:7 RTS thr:off Fragment thr:off Power Management:on Example removing the output sent to the standard output stream (only shows interfaces where iwconfig failed to find wireless extensions): $ /usr/sbin/iwconfig >/dev/nulllo no wireless extensions.enp0s31f6 no wireless extensions.wwan0 no wireless extensions.docker0 no wireless extensions.br-ca679f9ee354 no wireless extensions.veth232fd86 no wireless extensions.vboxnet0 no wireless extensions. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/663245",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/433212/"
]
} |
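To make the original grep -v pipeline behave as expected, merge standard error into standard output before the pipe: $ iwconfig 2>&1 | grep -v "no wireless extensions"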
663,285 | I have a directory named data . Inside data there is one directory samples which has more than 50 directories and there is one shell script testing.sh inside data . The setup looks like below: data |___ samples |______ PREC001 |______ PREC003 |______ PREC023 |______ KRES118 |______ TWED054 . . . |______ PREC098 |___ testing.sh I want to create a .txt file with the path for testing.sh for all the directories in the samples and also the directory names. I am working on linux. The created .txt file should look like below: /data/testing.sh PREC001 samples/data/testing.sh PREC003 samples/data/testing.sh PREC023 samples/data/testing.sh KRES118 samples/data/testing.sh TWED054 samples.../data/testing.sh PREC098 samples How to do that in linux. I actually tried with some basic commands but I couldn't get what I want. Thankyou. | It should just be a matter of: (cd data/samples && printf '/data/testing.sh %s sample\n' *) > file.txt Or to restrict to files of type directory only, with zsh : printf '/data/testing.sh %s sample\n' data/samples/*(/:t) > file.txt Note that it also has the advantage of reporting an error and not clobbering file.txt if no matching file can be found. Replace (/:t) with (-/:t) to also include files of type symlink that eventually resolve to a file of type directory . If that's to generate shell code to be interpreted by sh later, you'd also want to make sure the names of the files are properly quoted. You could replace %s with %q for that, but then you'd need to make sure the script is interpreted by zsh instead of sh as %q could use some form of quoting that are specific to zsh . Another alternative is to use the qq parameter expansion flag that always uses single quotes for quoting which are the most portable and safest quoting operators : () { printf '/data/testing.sh %s sample\n' ${(qq)@}} data/samples/*(/:t) > file.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/663285",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/273584/"
]
} |
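A plain bash sketch as an alternative to the zsh-specific forms above (directory layout as in the question; the trailing word follows the desired output shown there): for d in data/samples/*/; do printf '/data/testing.sh %s samples\n' "$(basename "$d")"; done > file.txt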
663,347 | Why is sed writing to my file in reverse order? I need to read a text file ( old_file.txt ) that contains the following data: what's in the file old_file.txt : 1.002.003.00 I have a second file, which contains: what's in the file new_file.txt : AAAABBBBCCCCDDDDEEEEFFFFGGGG Now, I need to move the cursor to the nth line of the file new_file.txt , and insert everything that was in old_file.txt , like this: What I need: AAAABBBBCCCC1.00 <-- 2.00 <-- 3.00 <--DDDDEEEEFFFFGGGG But I get: What I get: AAAABBBBCCCC3.00 <--2.00 <--1.00 <--DDDDEEEEFFFFGGGG This is my code: #!/bin/bashline=3 # line that I need (exemple)while read x do sed -i "${line}a\ $x" new_file.txt done < old_file.txt I don't know why writing happens in reverse order! | It's happening because your loop appends 1.00 after CCCC , then appends 2.00 after CCCC , then appends 3.00 after CCCC . To get the order you want, you could instead i nsert $x before the (n+1)th line - however you don't need a shell loop here at all if you use r ead from the file directly: sed -i "${line}"'r old_file.txt' new_file.txt | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/663347",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/485535/"
]
} |
663,385 | I want to replace the value of "CTE_LG_LL_OE" from "warn" to "ready" { "abcd": { "aav": "on", "vvg": "iio7890_APPID", "ct": "b-tte", "eSL": true, "it": "https://%ght%.mjk.com/", "tpo": "i-1", "pd": false, "pm": false, "pr": "" }, "en": { "CTE_LG_LL_OE": "warn", "S_G_EL_OD": "INFO", "G_LG_EL_OVE": "info" }, "EN_HTTPS": false } I am able to find the same in file using the below command , but how do I replace it with one liner command. jq '.' /abc/temp/config.json | grep CTE_LG_LL_OE | To get the key's value, you would use a JSON-aware tool, like jq rather than grep . You would do this for a few reasons: The value that you are extracting may be encoded . The jq tool would decode this for you if you use it with the -r ( --raw-output ) option. Using grep makes no distinction between values and keys, so you may accidentally extract data that you did not plan to extract. Extracting the value of the CTE_LG_LL_OE key in the top-level en entry: jq -r '.en.CTE_LG_LL_OE' file Setting the value to the string ready and writing the resulting document to the file new-file : jq '.en.CTE_LG_LL_OE |= "ready"' file >new-file The |= operator is the "update operator" and it takes a "path" to a key to the left and a new value for the key on the right. To set the value from a shell variable: jq --arg newval "$newvalue" '.en.CTE_LG_LL_OE |= $newval' file >new-file This creates a jq variable called $newval from the shell variable newvalue , which we then use in the jq expression. The value in the variable will automatically be JSON-encoded by jq . For readability, space things out a bit (assuming this is part of a shell script, since the question is tagged with shell-script ): jq --arg newval "$newvalue" \ '.en.CTE_LG_LL_OE |= $newval' file >new-file To do in-place editing of the file ( jq does not support in-place editing by itself): tmpfile=$(mktemp)cp file "$tmpfile" &&jq --arg newval "$newvalue" \ '.en.CTE_LG_LL_OE |= $newval' "$tmpfile" >file &&mv "$tmpfile" file &&rm -f "$tmpfile" As a once-liner: tmpfile=$(mktemp); cp file "$tmpfile" && jq --arg newval "$newvalue" '.en.CTE_LG_LL_OE |= $newval' "$tmpfile" >file && mv "$tmpfile" file && rm -f "$tmpfile" If you're not sure of where in the document structure the CTE_LG_LL_OE key is located and just want to update the values of all CTE_LG_LL_OE keys that have the value warn : jq ' ( .. | select(type == "object" and .CTE_LG_LL_OE? == "warn").CTE_LG_LL_OE ) |= "ready"' file This examines all keys and values recursively in the whole document. It finds all objects that has a CTE_LG_LL_OE key with th evalue warn and updates these to instead be ready . The newlines are only for readability. The value could be taken from a shell variable as before: jq --arg newval "$somevariable" ' ( .. | select(type == "object" and .CTE_LG_LL_OE? == "warn").CTE_LG_LL_OE ) |= $newval' file This could obviously be combined with doing in-place editing too, as show in the first half of this answer. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/663385",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/348364/"
]
} |
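If the moreutils package is available, sponge can stand in for the temporary-file dance above (a sketch, assuming sponge is installed): jq --arg newval "$newvalue" '.en.CTE_LG_LL_OE |= $newval' file | sponge file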
663,393 | pwd is my home directory. ls */ -d displays the directories. ~$ ls */ -dBlog/ Desktop/ Documents/ Downloads/ Music/ Pictures/ Public/ Templates/ Videos/ And ls -a displays all hidden directories and files. Hidden directories - .cache , .local etc. $ ls -a. .cache .gnupg .xsession-errors.old.. .config .gtkrc-2.0 Templates.bash_history Desktop .gtkrc-xfce Pictures .themes.bash_logout .linuxmint .pki Videos.bashrc Documents .local .profile .viminfoBlog Downloads .mozilla Public .XauthorityMusic .ssh .xsession-errors But when I run ls -ad */ , it won't displays any hidden directories. Anyone care to explain? $ ls -ad */ Blog/ Desktop/ Documents/ Downloads/ Music/ Pictures/ Public/ Templates/ Videos/ | * deliberately doesn't match hidden files or directories, because most of the time that's exactly what is wanted (as dot files and directories are generally used for configuration, not data), and you can override the default by explicitly specifying a glob starting with . if/when you need to. If you want them to be matched by a glob, use ls -d .* (or ls -d .*/ to match only hidden directories without regular files). To match both hidden and non-hidden directories, use ls -d -- */ .*/ NOTE: as mentioned by @rexkogitans in a comment, globs like * are expanded by the shell before they are passed to a program. ls never sees a * or .* (or .*/ ), it sees the list of file and/or directory names that are the result of the shell expanding the glob. This is why you need to quote or escape (by prefixing with a backslash) glob characters if you want to pass them as string literals to a program. Also note: if there are no files/dirs matching the glob then the glob is passed to the program as is, unexpanded (i.e. ls will see * as its argument). bash (and some other bourne-like shells) allow this behaviour to be over-ridden - e.g. in bash, you can use the nullglob or failglob shell options. nullglob causes it to be expanded to nothing (i.e. it is removed from the argument list), while failglob triggers an error. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/663393",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174912/"
]
} |
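In bash specifically, the dotglob shell option makes * include hidden names, so a single pattern covers both kinds of directories: $ shopt -s dotglob $ ls -d -- */ $ shopt -u dotglob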
663,415 | i am trying to use following commands in a shell script.. any suggestions to do it a correct way? [root@testserver ~]# crontab -u oracle -e >> 0 0 * * * /usr/local/scrips/setup.shcrontab: usage error: no arguments permitted after this optionUsage: crontab [options] file crontab [options] crontab -n [hostname]Options: -u <user> define user -e edit user's crontab -l list user's crontab -r delete user's crontab -i prompt before deleting -n <host> set host in cluster to run users' crontabs -c get host in cluster to run users' crontabs -s selinux context -x <mask> enable debuggingDefault operation is replace, per 1003.2 | The -e switch will make crontab interactive, which isn't the wished behaviour. I suggest you use the crontab -u user file syntax. Below is an example: root@c:~# crontab -l -u userno crontab for userroot@c:~# echo "10 10 * * * /bin/true" >> to_installroot@c:~# crontab -u user to_installroot@c:~# crontab -l -u user10 10 * * * /bin/trueroot@c:~# crontab -l -u user > temproot@c:~# echo "12 12 * * * /bin/false" >> temproot@c:~# crontab -u user temproot@c:~# crontab -l -u user10 10 * * * /bin/true12 12 * * * /bin/false | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/663415",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/468332/"
]
} |
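A non-interactive sketch that appends to the user's existing crontab (run as root; with the Vixie/cronie crontab shown in the question, "-" means read the new crontab from standard input, and 2>/dev/null only hides the "no crontab for oracle" notice on first use): ( crontab -l -u oracle 2>/dev/null; echo "0 0 * * * /usr/local/scrips/setup.sh" ) | crontab -u oracle -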
663,532 | I have a list of names in a list/text file (file test.txt ). For example: smith johnson west How would I add every letter as a prefix for each line, and output it as a new text file? Desired output: asmith bsmith csmith dsmith ... ajohnson bjohnson cjohnson etc., etc. | Using awk : awk '{ for (asc=97; asc<=122; asc++) printf ("%c%s\n", asc, $0) }' infile We used printf and its %c (character conversion modifier, see man awk for more details) to print the character for each ASCII code of the lower-case English letters, starting from ASCII code 97 (character a) up to ASCII code 122 (character z), followed by the current line itself. See the ASCII table. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/663532",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/485711/"
]
} |
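A pure-bash alternative using brace expansion, producing the same order as the awk answer above: while read -r name; do for letter in {a..z}; do printf '%s%s\n' "$letter" "$name"; done; done < test.txt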
663,647 | Long story short: how to print in a terminal the binary digits constituting a file e.g. a library .so or a simple text .txt file PC hardware works with electrical signal (basically it's an ON/OFF behaviour) which is well logically translated by the binary system (digits 0s and 1s). Visualizing the content of a file would be an interesting educational exercise , as well as comparing a .txt and an executable that prints the same text. | xxd can give binary output. Example below. $ cat fooHello World$ xxd -b foo00000000: 01001000 01100101 01101100 01101100 01101111 00100000 Hello00000006: 01010111 01101111 01110010 01101100 01100100 00001010 World.$ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/663647",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186963/"
]
} |
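If only the raw bit stream is wanted, a sketch that strips the offset and character columns (one byte per line via -c1, keeping awk's second field): $ xxd -b -c1 foo | awk '{printf "%s", $2} END {print ""}' — for the Hello World example this yields 96 bits, starting with 01001000 for "H".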