source_id | question | response | metadata
---|---|---|---|
642,983 | I recently installed Arch Linux (I have used it before), and installed KDE Plasma 5.21.3. I have an external monitor and it is physically higher than my laptop's screen, however, whenever i open settings (Display and Monitor > Display Configuration), it aligns the top edge of the displays together, which makes life very confusing, whenever I try to drag the monitor up, it decides to move the entire window instead of moving the actual screen, which means that I am unable to move the display up. Can someone please point out if this is a bug, or if not, then point out a workaround until it gets fixed, I've heard that something like this is possible using xrandr but not sure exactly how. Any help would be appreciated. I did not have this problem when using Debian, and also in Arch Linux late last year. Specs: KDE Plasma Version: 5.21.3 Qt Version: 5.15.2 Kernel Version: 5.11.10-arch1-1 OS Type: 64-bit Graphics Platform: Wayland Graphics Processor: Mesa Intel Iris Graphics 540 | What you can do is turn the comparison around: case "example" in "$1"*) echo OK ;; *) echo Error ;;esac With multiple words, you can stick with your original idea case "$1" in e|ex|exa|exam|examp|exampl|example) : ;; t|te|tes|test) : ;; f|fo|foo) : ;; *) echo error ;;esac or use a loop and a "boolean" variable match=""for word in example test foo; do case "$word" in "$1"*) match=$word; break ;; esacdoneif [ -n "$match" ]; then echo "$1 matches $match"else echo Errorfi You can decide which is better. I think the first one is elegant. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/642983",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/429684/"
]
} |
643,026 | I have a disk mounted to VM without as a whole. I created a file system on that disk. It has no partitions. Now, I resized the disk from 100G to 200G. Do I need to do anything else to let the file system to make full use of the disk size? For file systems on some disk partition, we need to update the size of the partition that holds the file system. But I'm not sure do we need to do anything in my above senario. | You will need to verify that the kernel has recognized the new size, by e.g. running fdisk -l /dev/<device> or cat /sys/block/<device>/size and checking that the total size matches the new size instead of the old one. If you are using paravirtualized drivers in a VM, most of them will handle this automatically. But if the old size is still displayed, echo 1 > /sys/block/<device>/device/rescan can be used to tell the kernel that the size of the device has changed. Once the kernel knows the new size of the whole device, there is no partition table to edit in your case, so you can proceed directly to extending the filesystem, using a filesystem-dependent tool. For ext2/ext3/ext4 filesystems, you can use resize2fs /dev/<device> , no matter if the filesystem is currently mounted or not. For XFS, the filesystem must be mounted to extend it, and the command will be xfs_growfs <mount point pathname> . Other filesystem types have their own rules and extension tools. If your distribution includes fsadm , it provides an unified method for resizing ext2/ext3/ext4 filesystems, ReiserFS and XFS (hopefully it will be extended to cover other filesystem types in the future). The command would be fsadm resize /dev/<device> . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/643026",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/464541/"
]
} |
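
A condensed sketch of the sequence described in the answer for 643,026 above, assuming an ext4 filesystem sitting directly on the whole disk; `/dev/sdb` is a placeholder for the actual device name:

```sh
# 1. Check whether the kernel already sees the new size (in 512-byte sectors)
cat /sys/block/sdb/size

# 2. If it still reports the old size, ask the kernel to rescan the device
echo 1 > /sys/block/sdb/device/rescan

# 3. There is no partition table to edit, so grow the filesystem directly
resize2fs /dev/sdb            # ext2/3/4: works whether mounted or not
# xfs_growfs /mnt/data        # XFS instead: pass the mount point, must be mounted
```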
643,174 | In my crontab , I set following bash function and applied it for my job. it is indicated to add timestamp to the log. adddate() { while IFS= read -r line; do printf '%s %s\n' "$(date)" "$line"; done}30 06 * * * root $binPath/zsh/test.zsh | adddate 1>>$logPath/log.csv 2>>$errorLogPath/error.txt But when I see error.txt the bash function didn't work well. /bin/bash: adddate: command not found Where is root cause of this? If someone has opinion, please let me know. Thanks | Cron doesn't accept shell functions, create a script like #!/bin/bashadddate() { while IFS= read -r line; do printf '%s %s\n' "$(date)" "$line"; done}$binPath/zsh/test.zsh | adddate 1>>$logPath/log.csv 2>>$errorLogPath/error.txt and put that in cron. (I'm assuming here that you used $binPath and $logPath for the purpose of this question. If this isn't the case you have to set them in the script) Setting SHELL=/bin/bash in your crontab might be a way to use shell functions. (I didn't try it and it would surprise me if it works). But even if it works I would certainly not advise it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/643174",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/449704/"
]
} |
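
The wrapper script proposed in the answer for 643,174, reformatted so it is runnable as written; `$binPath`, `$logPath` and `$errorLogPath` are the asker's placeholders and would need real values or exports:

```bash
#!/bin/bash
# prepend a timestamp to every line read on stdin
adddate() {
    while IFS= read -r line; do
        printf '%s %s\n' "$(date)" "$line"
    done
}

"$binPath"/zsh/test.zsh | adddate 1>>"$logPath"/log.csv 2>>"$errorLogPath"/error.txt
```

The crontab entry then runs this script instead of trying to call a shell function directly.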
643,226 | I am trying to generate a comma separated unordered list of ints between 1 and 10, I have tried the following but it results in an ordered list: seq -s "," 10 | shuf | You can use paste -s to join lines: shuf -i1-10 | paste -sd, - This uses -i option of shuf to specify a range of positive integers. The output of seq can be piped to shuf: seq 10 | shuf | paste -sd, - Or -e to shuffle arguments: shuf -e {1..10} | paste -sd, - | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/643226",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/464797/"
]
} |
643,296 | How can I let my script determine the largest number for itself? I looked through my environment variables, and I found these two that looked promising: ~# declare -p BASH_VERSINFO HOSTTYPEdeclare -ar BASH_VERSINFO=([0]="5" [1]="0" [2]="11" [3]="1" [4]="release" [5]="x86_64-slackware-linux-gnu")declare -- HOSTTYPE="x86_64" ...but could I really trust parsing those, in order to draw a conclusion about what the largest number in Bash arithmetic would be? There must be a better way, programmatically. Any suggestions? | Bash arithmetic uses signed numbers. So the quick answer would be: ((MAX=(1<<63)-1)) But since you want your script to not know about the bitness of the system it's running on, then let's keep going. Brute force would be, keep adding 1 in a loop, until you hit the point where it will overflow unto a negative number. But that could take years! :-) A quicker and more elegant way to do it is with a simple bit-shift. Let's find the sign bit, i.e., let's find the number that has 1 in the most signifficant bit, and zeros in all the other bits, however many they may be. Once we have that number, we'll simply subtract 1 from it, and we'll get the largest signed number. # MIN -- the smallest signed number 0x8000...00 (it equals MAX+1)# MAX -- the largest signed number 0x7Fff...FF <-- what we are looking forMIN=1; until (( (MIN<<=1) < 0 )) ;do :;done((MAX=MIN-1))echo $MAXResult:9223372036854775807 Or, here's a one-liner, without a loop. We put the hex representation of a number in a variable, and then mask the sign bit through the variable expantion when passing it to the printf builtin: printf -v MAX %x -1 && printf -v MAX %d 0x${MAX/f/7}echo $MAXResult:9223372036854775807 On a machine with a different bitness than mine, the result will be a different number. And just for illustration, in my case: printf "MAX %X %d\nMIN %X %d\n" $MAX $MAX $MIN $MINMAX 7FFFFFFFFFFFFFFF 9223372036854775807MIN 8000000000000000 -9223372036854775808 A little side note about MIN: You may want to constrain yourself to using ((MIN=-MAX)) , otherwise you will occasionally run into problems with some arithmetic operations. ((MIN=-MAX)) ; printf "MIN %X %d\n" $MIN $MINMIN 8000000000000001 -9223372036854775807 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/643296",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
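
The bit-shift approach from the answer for 643,296, reformatted into a runnable snippet; it assumes two's-complement shell arithmetic, which is what bash uses:

```bash
#!/bin/bash
# Find the sign bit by shifting left until the value goes negative,
# then subtract 1 to get the largest representable signed integer.
min=1
until (( (min <<= 1) < 0 )); do :; done
max=$(( min - 1 ))
printf 'MAX %d (0x%X)\n' "$max" "$max"   # 9223372036854775807 on 64-bit builds
```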
643,318 | I ran into a strange problem. To demonstrate, let's take the largest unsigned number on my machine ( printf "%X \n" -1 gives me FFFFFFFFFFFFFFFF ), and try to shift some bits. First, shift to the left: printf "%X \n" $(( 0xFFFFFFFFFFFFFFFF<<4 ))FFFFFFFFFFFFFFF0printf "%X \n" $(( 0xFFFFFFFFFFFFFFFF<<8 ))FFFFFFFFFFFFFF00printf "%X \n" $(( 0xFFFFFFFFFFFFFFFF<<16 ))FFFFFFFFFFFF0000 So far so good. As expected. Now let's try the right shift: printf "%X \n" $(( 0xFFFFFFFFFFFFFFFF>>4 ))FFFFFFFFFFFFFFFFprintf "%X \n" $(( 0xFFFFFFFFFFFFFFFF>>8 ))FFFFFFFFFFFFFFFFprintf "%X \n" $(( 0xFFFFFFFFFFFFFFFF>>16 ))FFFFFFFFFFFFFFFF Wait, what?? Why is this not working? Is that a bug? Edit: I am dreading that someone will suggest some connection with the sign bit being raised. But we are not talking about arithmetic, so the concept of sign has no place here. Other tools like * and / are for arithmetic. The whole point of having a tool that can manipulate bits is to be able to manipulate bits -- no matter how I'll chose to display those bits later, as signed or as unsigned. Right? Like: printf "%u \n" -118446744073709551615 Any ideas anybody? EDIT : Since the answers here went straight to talking about multiplication or division, let me try to explain my concern more clearly. Multiplication/division and bit-shifting are two different things, although I can see the connection between them in the minds of long-time programmers. When doing arithmetic, you have to have the concept of sign; for bit-shifting you don't. Bash has given us two distinctly different sets of tools for these two different things. When I want to multiply a number by 2, I reach for the * tool. The fact that under the hood Bash can use bit-shifts for arithmetic is beyond the point. To quote one of the answers... If the sign bit wasn't copied, the result would turn into an unsigned number. E.g. shifting the 8-bit value 1111 0000 once to the right would give 0111 1000 But turning 1111 0000 into 0111 1000 is exactly what I want. If I wanted to do a division, then I would use arithmetic operstor instead. Anyway, is there at least some way of explicitly specifying with what kind of bits it should fill when shifting? | There are two different ways to shift right in common use. The "logical right shift" inserts zero bits on the left, so the result of shifting one bit to the right corresponds to dividing the unsigned binary number by two. echo $(( 16 >> 1 )) gives 8 . And, the "arithmetic right shift" inserts a copy of the sign bit on the left, so the result of shifting one bit to the right corresponds to dividing the signed binary number by two. echo $(( 16 >> 1 )) gives 8 , and echo $(( -16 >> 1 )) gives -8 . Except that on two's complement numbers, it doesn't match the rounding of an actual division: -15 >> 1 gives -8 ; while -15 / 2 gives -7 . If the sign bit wasn't copied, but zeroed, the result would be a positive number. E.g. shifting the 8-bit value 1111 0000 (0xf0, -16) once to the right would give 0111 1000 (0x78, +120). Now, which one of these is used is a hairier matter. In practice, many implementations would use the arithmetic shift for signed numbers, and shell arithmetic is mostly done on a signed long. But that's not exactly guaranteed, at all. The POSIX definition for shell arithmetic refers to the C standard for most of the behaviour, and e.g. the operator table doesn't say anything about what sort of a shift >> is supposed to be. 
(see: Shell Command Language, 2.6.4 Arithmetic Expansion and Shell & Utilities, 1.1.2 Concepts Derived from the ISO C Standard: Arithmetic Precision and Operations ) Integer variables and constants, including the values of operands and option-arguments, [...] shall be implemented as equivalent to the ISO C standard signed long data type [...] Arithmetic operators and control flow keywords shall be implemented as equivalent to those in the cited ISO C standard section, [...] << , >> : Section 6.5.7, Bitwise Shift Operators cppreference.com says of the C operators that For negative a , the value of a >> b is implementation-defined (in most implementations, this performs arithmetic right shift, so that the result remains negative). (That may be a remnant of a world where not everything was two's complement. A shift to the right of a ones' complement or a sign-magnitude number would be different from the shift to the right of a two's complement number. But the result is the same: implementation-defined it is.) Some other programming languages, like Javascript , have distinct operators for arithmetic right shift >> , and logical right shift >>> . But C doesn't, and neither do any of the shells I tried. As an aside, if you were to do shifts with offsets greater than the word width, you'd also see strange things happen. On an x86, 1 << 64 is just 1 , because the processor only looks at the lowest 6 bits of the shift value, so it's the same as 1 << 0 . (1 << 32) << 32 is 0 , though, and the result might be different on another processor. You said, But the concept of sign has no place here. I mean, a number is a number, regardless of whether later you'll chose to display it as signed or unsigned, right? And that's true for addition, subtraction, and the low part of a multiplication (e.g. 32x32 -> 32) on a two's complement machine. But it's not true for the high part of a multiplication or division in general. The 8-bit value 0xff can mean the unsigned number 255 or the signed number -1. An 8x8 -> 16 multiplication for e.g. 0xff * 0xff is either 0x0001 or 0xfe01 , depending on if it's signed (-1 * -1) or unsigned (255 * 255). Also e.g. 0xff / 3 is either 0 or 0x55 , depending on if it's signed (-1 / 3 == 0), or unsigned (255 / 3 == 85). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/643318",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/439686/"
]
} |
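
Not from the answer: a hedged sketch of how a zero-filling ("logical") right shift could be emulated in bash, given that bash in practice performs a sign-copying (arithmetic) shift on negative values. It assumes 64-bit shell integers:

```bash
# Shift once while clearing the sign bit; after that the value is non-negative,
# so the remaining shifts fill with zeros either way.
lsr() {
    local x=$1 n=$2
    if (( n <= 0 )); then
        printf '%d\n' "$x"
    else
        printf '%d\n' $(( ( (x >> 1) & 0x7FFFFFFFFFFFFFFF ) >> (n - 1) ))
    fi
}

printf '%X\n' "$(lsr -1 4)"   # 0xFFFFFFFFFFFFFFFF >> 4, zero-filled: FFFFFFFFFFFFFFF
```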
643,519 | I have a file like this. 12345 X678GHR 0 ADD23445 HGT6787 1 ADD12345 X678GHR 0 REM67894 OIY5678 0 ADD12345 OIY5678 0 ADD12345 X678GHR 1 ADD I have to compare the lines in a file to delete the lines that were added and removed later. So the output should look like this: 23445 HGT6787 1 ADD67894 OIY5678 0 ADD12345 OIY5678 0 ADD12345 X678GHR 1 ADD Cleaned the records added and deleted later from the file. Update: I also have to make sure that columns 2 and 3 also match between deleting records. In my original file, delimiter is not space. It is a closed bracket ")" Please help. I'm very new to UNIX | If you don't need to guarantee the order of entries, then given $ cat file12345)X678GHR)0)ADD23445)HGT6787)1)ADD12345)X678GHR)0)REM67894)OIY5678)0)ADD12345)OIY5678)0)ADD12345)X678GHR)1)ADD the following awk $ awk -F ')' ' $NF == "ADD" {lines[$1 FS $2 FS $3] = $0} $NF == "REM" {delete lines[$1 FS $2 FS $3]} END {for(i in lines) print lines[i]}' file12345)X678GHR)1)ADD67894)OIY5678)0)ADD23445)HGT6787)1)ADD12345)OIY5678)0)ADD If you do need to preserve order, then you can do so by making two passes over the file: $ awk -F ')' ' NR == FNR {if($NF == "REM") rem[$1 FS $2 FS $3]; next} !($1 FS $2 FS $3 in rem)' file file23445)HGT6787)1)ADD67894)OIY5678)0)ADD12345)OIY5678)0)ADD12345)X678GHR)1)ADD | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/643519",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/464607/"
]
} |
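
The order-preserving two-pass variant from the answer for 643,519, reformatted; the `)` field separator matches the asker's real delimiter:

```sh
awk -F ')' '
    # first pass: remember the key of every record that was later REMoved
    NR == FNR { if ($NF == "REM") rem[$1 FS $2 FS $3]; next }
    # second pass: print only records whose key was never removed
    !($1 FS $2 FS $3 in rem)
' file file
```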
643,520 | Along the lines of /dev/null (path to an empty source/sink file), is there a path that will never point to a valid file on at least Linux? This is mostly for testing purposes of some scripts I'm writing, and I don't want to just delete or move a file that doesn't belong to the script if it exists. | As an alternative, I would suggest that your script create a temporary directory, and then look for a file name in there. That way, you are 100% certain that the file doesn't exist, and you have full control and can easily clean up after yourself. Something like: dir=$(mktemp -d)if [ -e "$dir"/somefile ]; then echo "Something is seriously wrong here, '$dir/somefile' exists!"firmdir "$dir" You can write the equivalent code in any language, the vast majority (all?) higher level languages will have some dedicated tool to handle creating and deleting temporary directories. This seems like a far safer and cleaner approach than trying to guess a file name that should not exist. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/643520",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/304684/"
]
} |
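
The approach suggested in the answer for 643,520 as a self-contained sketch, with cleanup added via `trap`; the file name inside the directory is arbitrary:

```sh
#!/bin/sh
dir=$(mktemp -d) || exit 1
trap 'rmdir "$dir"' EXIT        # remove the (still empty) directory on exit

missing=$dir/does-not-exist
if [ -e "$missing" ]; then
    echo "Something is seriously wrong here, '$missing' exists!" >&2
else
    echo "'$missing' is guaranteed not to exist yet; safe to use in tests"
fi
```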
643,537 | I have a file with this content: $ cat compromised_header.txtsome unique string 1some other unique string 2another unique string 3 I wanted to find all files that have all the lines of above file exactly in the same order and those lines have no intermediary lines in between. Example input file: $ cat a-compromised-file.txtsome unique string 1some other unique string 2another unique string 3unrelated line xunrelated line yunrelated line z I tried using below grep : grep -rlf compromised_header.txt dir/ But I wasn't sure it will give the expected files as it will also match this file: some unique string 1unrelated line xunrelated line yunrelated line z | Using an awk that supports nextfile : NR == FNR { a[++n]=$0; next}$0 != a[c+1] && (--c || $0!=a[c+1]) { c=0; next}++c >= n { print FILENAME; c=0; nextfile} with find for recursion: find dir -type f -exec gawk -f above.awk compromised_header.txt {} + Or this might work: pcregrep -rxlM "$( perl -lpe '$_=quotemeta' compromised_header.txt )" dir Using perl to escape metacharacters because pcregrep doesn't seem to combine --fixed-strings with --multiline . With perl in slurp mode (won't work with files that are too large to hold in memory): find dir -type f -exec perl -n0777E 'BEGIN {$f=<>} say $ARGV if /^\Q$f/m' compromised_header.txt {} + | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/643537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245871/"
]
} |
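
The awk program from the answer for 643,537, reformatted into a standalone file (`above.awk`, the name used in the answer) so it can be run with the accompanying find command:

```sh
cat > above.awk <<'EOF'
# First input file: the block of lines to look for (consecutive, in order).
NR == FNR { a[++n] = $0; next }
# In every other file, advance a counter while lines keep matching the block.
$0 != a[c+1] && (--c || $0 != a[c+1]) { c = 0; next }
++c >= n { print FILENAME; c = 0; nextfile }
EOF

find dir -type f -exec gawk -f above.awk compromised_header.txt {} +
```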
643,616 | i want to test if a variable has more than 4 digits something like this #!/bin/bashif [ $input has more than 4 digits ]; then echo " * Please only 4 digits" >&2 echo""else the other optionfi | If you care about the number of digits (and not the numerical value), you could match against a regex in Bash/Ksh/Zsh (* see footnote on [[:digit:]] ) : #!/bin/bashinput=$1re='^[[:digit:]]{1,4}$'if [[ $input =~ $re ]]; then echo "'$input' contains 1 to 4 digits (and nothing else)"else echo "'$input' contains something else"fi Or e.g. [[ $input =~ ^[[:digit:]]{5,}$ ]] to check for "5 or more digits (and nothing else)", etc. Or in a pure POSIX shell, where you have to use case for the pattern match: #!/bin/shinput=$1case $input in *[![:digit:]]*) onlydigits=0;; # contains non-digits *[[:digit:]]*) onlydigits=1;; # at least one digit *) onlydigits=0;; # emptyesacif [ $onlydigits = 0 ]; then echo "'$input' is empty or contains something other than digits"elif [ "${#input}" -le 4 ]; then echo "'$input' contains 1 to 4 digits (and nothing else)"else echo "'$input' contains 5 or more digits (but nothing else)"fi (You could put all the logic inside the case , but nesting an if there is somewhat ugly, IMO.) Note that [[:digit:]] should match whatever the current locale's idea of "digits" is. That might or might not be more than the ASCII digits 0123456789 . On my system, [[:digit:]] does not match e.g. ⁴ (superscript four, U+2074), but [0-9] does. Matching other "digits" might be a problem, esp. if you do arithmetic on the number in the shell. So, if you want to be stricter, use [0123456789] to accept just the ASCII digits. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/643616",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/465179/"
]
} |
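
The bash version from the answer for 643,616, reformatted so the line breaks are visible:

```bash
#!/bin/bash
input=$1
re='^[[:digit:]]{1,4}$'

if [[ $input =~ $re ]]; then
    echo "'$input' contains 1 to 4 digits (and nothing else)"
else
    echo "'$input' contains something else"
fi
```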
643,777 | I am trying to send messages from kafka-console-producer.sh , which is #!/bin/bashif [ "x$KAFKA_HEAP_OPTS" = "x" ]; then export KAFKA_HEAP_OPTS="-Xmx512M"fiexec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleProducer "$@" I am pasting messages then via Putty terminal. On receive side I see messages truncated approximately to 4096 bytes. I don't see anywhere in Kafka, that this limit is set. Can this limit be from bash/terminal or Putty? | 4095 is the limit of the tty line discipline internal editor length on Linux. From the termios(3) man page: The maximum line length is 4096 chars (including the terminating newline character); lines longer than 4096 chars are truncated. After 4095 characters, input processing (e.g., ISIG and ECHO* processing) continues, but any input data after 4095 characters up to (butnot including) any terminating newline is discarded. This ensures that the terminal can always receive more input until at least oneline can be read. See also the corresponding code in the Linux kernel . For instance, if you enter: $ wc -c Enter Enter in the shell's own line editor (readline in the case of bash) submits the line to the shell. As the command line is complete, the shell is ready to execute it, so it leaves its own line editor, puts the terminal device back in canonical (aka cooked ) mode, which enables that crude line editor (actually implemented in tty driver in the kernel). Then, if you paste a 5000 byte line, press Ctrl + D to submit that line, and once again to tell wc you're done, you'll see 4095 as output. (Note that that limit does not apply to bash 's own line editor, you'll see you can paste a lot more data at the prompt of the bash shell). So if your receiving application reads lines of input from its stdin and its stdin is a terminal device and that application doesn't implement its own line editor (like bash does) and doesn't change the input mode, you won't be able to enter lines longer than 4096 bytes (including the terminating newline character). You could however disable the line editor of the terminal device (with stty -icanon ) before you start that receiving application so it reads input directly as you enter it. But then you won't be able to use Backspace / Ctrl + W for instance to edit input nor Ctrl + D to end the input. If you enter: $ saved=$(stty -g); stty -icanon icrnl; head -n1 | wc -c; stty "$saved" Enter paste your 5000 byte long line and press Enter , you'll see 5001. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/643777",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28089/"
]
} |
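
The experiment from the end of the answer for 643,777, reformatted; it temporarily disables the tty line editor so a pasted line longer than 4095 bytes is no longer truncated:

```sh
saved=$(stty -g)        # remember the current terminal settings
stty -icanon icrnl      # leave canonical mode: no 4096-byte line-editor limit
head -n1 | wc -c        # paste the long line here, then press Enter
stty "$saved"           # restore the terminal
```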
644,271 | I need to extract some values from a file in bash , on my CentOS system. In myfile.txt I have a list of objects called Info_region in which each object is identified with a code (eg. BARD1_region_005 or BIRC2_region_002 etc.) Moreover there are some others columns in which are reported some numerical variable. The same object (same code name) can be repeated several times in my file.I also have a file that contains a completed list with all object codes without duplicates.I would like to obtain an output.txt file in which each object (code name) is reported only once as in my list-file.txt and I would like to associate to this the maximum possible values associated with that code name in myfile.txt. myfile.txt: (columns are separated by tab ) Info_region Lig_score Lig_prevista Lig_prevista_+1 Int_score Expo_score Protac_scoreBARD1_region_005 0 3 3 0 1 1BARD1_region_006 0 1 1 0 1 1BIRC2_region_001 1 6 7 0 1 2BIRC2_region_001 1 7 8 0 1 2BIRC2_region_001 0 2 2 0 0 0BIRC2_region_001 0 12 12 0 1 1BIRC2_region_001 1 10 11 -1 1 1BIRC2_region_001 1 2 3 0 1 2BIRC2_region_001 1 0 1 0 1 2BIRC2_region_001 1 6 7 0 1 2BIRC2_region_002 0 0 0 0 1 1BIRC2_region_002 1 0 0 -1 0.5 0.5BIRC2_region_003 0 0 0 0 1 1BIRC2_region_004 0 1 1 0 1 1UHRF1_region_004 0 0 0 1 1 2UHRF1_region_004 0 0 0 1 1 2UHRF1_region_004 1 0 1 0 0.5 1.5UHRF1_region_004 0 0 0 1 1 2UHRF1_region_005 0 3 3 1 1 2UHRF1_region_005 1 0 0 -1 1 1 file-list.txt: Info_regionBARD1_region_005BARD1_region_006BIRC2_region_001BIRC2_region_002BIRC2_region_003BIRC2_region_004UHRF1_region_004UHRF1_region_005 output.txt: Info_region Lig_score Lig_prevista Lig_prevista_+1 Int_score Expo_score Protac_scoreBARD1_region_005 0 3 3 0 1 1BARD1_region_006 0 1 1 0 1 1BIRC2_region_001 1 12 12 0 1 2BIRC2_region_002 1 0 0 0 1 1BIRC2_region_003 0 0 0 0 1 1BIRC2_region_004 0 1 1 0 1 1UHRF1_region_004 1 0 1 1 1 2UHRF1_region_005 1 3 3 1 1 2 Could someone help me please? Thank you! | Assuming the data is in the file called file and that it is sorted on the first column, the GNU datamash utility could do this in one go on the data file alone: datamash -H -W -g 1 max 2-7 <file This instructs the utility to use whitespace separated columns ( -W ; remove this if your column are truly tab-delimited), that the first line of the data contains headers ( -H ), to group by he first column ( -g 1 ), and to calculate the maximum values for the 2nd through to the 7th columns. The result, given the data in the question: GroupBy(Info_region) max(Lig_score) max(Lig_prevista) max(Lig_prevista_+1) max(Int_score) max(Expo_score) max(Protac_score)BARD1_region_005 0 3 3 0 1 1BARD1_region_006 0 1 1 0 1 1BIRC2_region_001 1 12 12 0 1 2BIRC2_region_002 1 0 0 0 1 1BIRC2_region_003 0 0 0 0 1 1BIRC2_region_004 0 1 1 0 1 1UHRF1_region_004 1 0 1 1 1 2UHRF1_region_005 1 3 3 1 1 2 You could also use --header-in in place of -H to get header-less output, and then take the header from the original data file: { head -n 1 file; datamash --header-in -W -g 1 max 2-7 <file; } >output Here, I'm also writing the result to some new output file called output . Using awk and assuming tab-delimited fields: awk -F '\t' ' BEGIN { OFS = FS } NR == 1 { print; next } { n[$1] = 1 for (i = 2; i <= NF; ++i) a[$1,i] = (a[$1,i] == "" || $i > a[$1,i] ? $i : a[$1,i]) } END { nf = NF for (j in n) { $0 = j for (i = 2; i <= nf; ++i) $i = a[$1,i] print } }' file This calculates the maximum value in each column for each group. These numbers are stored in the a array while the n array just holds the group names as keys. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/644271",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/393805/"
]
} |
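
The datamash variant from the answer for 644,271 that keeps the original header line, reformatted (input must be sorted on the first column; drop `-W` if the columns are truly tab-delimited):

```sh
# per-group maxima of columns 2-7, original header preserved
{ head -n 1 file; datamash --header-in -W -g 1 max 2-7 < file; } > output
```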
644,282 | I have two systems: An Ubuntu 20.04 server on the Internet and a Raspberry Pi 2B (Ubuntu 18.04) in my local LAN (behind a NAT router). I am opening a reverse tunnel from the Raspberry Pi to the server like so: ssh -f -N -R 13333:127.0.0.1:22 server The config file looks like: Host server Hostname <public IP> User serveruser IdentityFile /home/piuser/.ssh/serverkey Then on the server I am running the following command: ssh -p13333 -i ~/.ssh/pikey [email protected] For the pikey, a command is executed on the Raspberry (set in the authorized_keys file). I tried this setup a few times yesterday, but today (~24h later) I couldn't use the tunnel anymore. ss showed me the open ports, and I was able to execute the command on the server, but there was no response- the command just "hung". Then, I killed the tunnel and re-established it, without any problem. To troubleshoot the problem more, I added -vvv to the command sent from the server: ssh -vvv -p13333 -i ~/.ssh/pikey [email protected] This showed me the stuck connection at the following point: OpenSSH_8.2p1 Ubuntu-4ubuntu0.2, OpenSSL 1.1.1f 31 Mar 2020debug1: Reading configuration data /home/serveruser/.ssh/configdebug1: Reading configuration data /etc/ssh/ssh_configdebug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no filesdebug1: /etc/ssh/ssh_config line 21: Applying options for *debug2: resolve_canonicalize: hostname 127.0.0.1 is addressdebug2: ssh_connect_directdebug1: Connecting to 127.0.0.1 [127.0.0.1] port 13333.debug1: Connection established.debug1: identity file /home/serveruser/.ssh/pikey type 0debug1: identity file /home/serveruser/.ssh/pikey-cert type -1debug1: Local version string SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.2 This time, on the site of the Raspberry Pi I saw: $ Warning: remote port forwarding failed for listen port So my last try was rebooting the Raspberry, and, voila after establishing the tunnel again, it worked as it's supposed to work. Finally, my question: How can I determine what went wrong? I rely on that connection when abroad, so I really need to figure out what went wrong and how I can fix that... Btw, I Googled a bit, but none of the suggestions / solutions work for my case, since I was able to establish the connection in the first place, but some hickup brought it into a strange state. If you need anymore information, please let me know :) | Assuming the data is in the file called file and that it is sorted on the first column, the GNU datamash utility could do this in one go on the data file alone: datamash -H -W -g 1 max 2-7 <file This instructs the utility to use whitespace separated columns ( -W ; remove this if your column are truly tab-delimited), that the first line of the data contains headers ( -H ), to group by he first column ( -g 1 ), and to calculate the maximum values for the 2nd through to the 7th columns. 
The result, given the data in the question: GroupBy(Info_region) max(Lig_score) max(Lig_prevista) max(Lig_prevista_+1) max(Int_score) max(Expo_score) max(Protac_score)BARD1_region_005 0 3 3 0 1 1BARD1_region_006 0 1 1 0 1 1BIRC2_region_001 1 12 12 0 1 2BIRC2_region_002 1 0 0 0 1 1BIRC2_region_003 0 0 0 0 1 1BIRC2_region_004 0 1 1 0 1 1UHRF1_region_004 1 0 1 1 1 2UHRF1_region_005 1 3 3 1 1 2 You could also use --header-in in place of -H to get header-less output, and then take the header from the original data file: { head -n 1 file; datamash --header-in -W -g 1 max 2-7 <file; } >output Here, I'm also writing the result to some new output file called output . Using awk and assuming tab-delimited fields: awk -F '\t' ' BEGIN { OFS = FS } NR == 1 { print; next } { n[$1] = 1 for (i = 2; i <= NF; ++i) a[$1,i] = (a[$1,i] == "" || $i > a[$1,i] ? $i : a[$1,i]) } END { nf = NF for (j in n) { $0 = j for (i = 2; i <= nf; ++i) $i = a[$1,i] print } }' file This calculates the maximum value in each column for each group. These numbers are stored in the a array while the n array just holds the group names as keys. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/644282",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/235218/"
]
} |
644,295 | I am trying the following but it doesn't seem to give me my desired outcome, I basically want to run dig until I get a response at which point exit and continue with rest of script. until dig +answer example.com > /dev/null ;do :; done | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/644295",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
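
A hedged sketch, not taken from any recorded answer, of one way to do what the question for 644,295 asks: keep retrying dig until the name resolves, then let the script continue.

```sh
until dig +short example.com | grep -q .; do
    sleep 1            # avoid hammering the resolver
done
echo "example.com resolved; continuing"
```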
644,322 | The libssh2 1.9 can't be installed from EPEL repository on RHEL 8.1 and newer (tested on RHEL 8.3): # dnf --enablerepo=epel install libssh2-1.9.0...All matches were filtered out by modular filtering for argument: libssh2-1.9.0Error: Unable to find a match: libssh2-1.9.0 Other EPEL RPMs can be installed without any obstacles. How can I install the libssh2 without downloading it and installing localy? | The easiest you can do is bypass module filtering. Edit /etc/yum.repos.d/epel.repo and add module_hotfixes=1 line under the [epel] section. Done. The installation will succeed. However the above can be too broad solution. The alternative could be to set module_hotfixes just in the command via --setopt : dnf --enablerepo=epel --setopt=epel.module_hotfixes=true install libssh2-1.9.0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/644322",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/173916/"
]
} |
644,343 | I'm currently learning how to write bash scripts. How do I stop while loop once I get a 200 response code on my curl request? aws --endpoint-url http://s3.sample.com/ s3 cp hello.php s3://bucket/while [ true ]do curl http://sample.com/hello.php &> /dev/nulldone | The accepted answer to the proposed duplicate target shows how to make curl fail on any server error, returning 22 as its exit status. Based on that, you may write: until curl -s -f -o /dev/null "http://example.com/foo.html"do sleep 5done Which reads "until curl successfully completes the requested transfer, wait 5 seconds and retry". -f makes curl fail on server errors, -s prevents it from printing messages and the progress meter, -o /dev/null assumes you are not interested in the content of the response. However, curl is able to retry by itself, there is no need for a shell loop. For instance, to make it retry ten times and sleep five seconds before retrying: curl --retry 10 --retry-delay 5 -s -o /dev/null "http://example.com/foo.html" Or, if you want it to retry even on non-transient HTTP errors (e.g. 404): curl --retry 10 -f --retry-all-errors --retry-delay 5 \ -s -o /dev/null "http://example.com/foo.html" If, instead, you are interested in running curl until the response shows a specific HTTP status: until [ \ "$(curl -s -w '%{http_code}' -o /dev/null "http://example.com/foo.html")" \ -eq 200 ]do sleep 5done -w instructs curl to display the information specified by a format string (here, %{http_code} ) after a completed transfer. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/644343",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/465955/"
]
} |
644,403 | Everything on my system (that needs it) supports UTF-8 just fine. That's all nice when you want output...But what if you want easy in put ? At the moment the only non-ASCII chars I can easily type are chars like é by using AtlGr . But for chars like ₂ ² ≈ √ π at the moment I have to: Open a browser Surf to https://www.utf8icons.com or a similar site Click, type and search a lot on the site to get to a page that contains the symbol i want Copy it Paste it in the program where I need it (Optionally) close the browser What I'm looking for is a program that can do something like this: Run in the background in a modern desktop environment (in my case Cinnamon) Jump to the foreground to show a whole list of reasonably popular UTF-8 symbols after pressing something like F1 Let me click a symbol after which it will be sent to the program I was last using as if it was a keypress Give me the option to configure it to either stay visible after this "fake keypress" or jump back to background In short: Are there virtual keyboard programs with support for non-ASCII UTF-8 ? Actually... I am already happy with any method that improves mine. Edit: For others ending up here and don't want to read all the answers themselves (or add a answer that's already given): These are the options already mentioned + links to the answers + pro's and contra's. Feel free to add extra solutions below (after providing them as detailed answer) : ibus (usually with Ctrl Shift E ) → Can't get it to work on Cinnamon onboard → pro : Seems to do everything I need + has support for snippets, con : Only (by default) included non-latin layout is for math, other layouts with popular UTF-8 chars have to be created manually gucharmap → pro: Lots of chars and easy to search con: Doesn't easily jump between foreground/background (can probably be handled with a workaround in Cinnamon itself) kcharselect → Same pro/con as gucharmap Solutions from the programs themselves (e.g. Ctrl . for a couple of them) → pro : Ideal for that exact program con : Most programs, including the ones where it's needed the most, don't have one + it's not uniform https://www.unicodeit.net/ → pro : Good for long math formula's. con : Same problem as the one I originally stated + useless for non-math symbols Keyboard with extra symbols → pro : Easy con : Small amount of chars + extra keyboard needed for each system Shortcuts for the most used chars with xcompose → pro : Easy con : Depending on your memory (as human, not as computer) it only works for a limited amount of chars HTML entities to compose - pro/con : Too much of each, see answer Use Ctrl Shift U , Hexcode , Space : pro/con : Same as above | You could use Onboard Onscreen Keyboard which is available in most distros. It allows to create a custom layout with the characters you need, e.g. In case you don't want to create a new layout it offers a feature called "Snippets" where you have the choice of entering different characters or even text. In order to show it just create a shortcut in your desktop environment which will simply execute onboard or dbus-send --type=method_call --dest=org.onboard.Onboard /org/onboard/Onboard/Keyboard org.onboard.Onboard.Keyboard.Show In order to hide it create a shortcut for dbus-send --type=method_call --dest=org.onboard.Onboard /org/onboard/Onboard/Keyboard org.onboard.Onboard.Keyboard.Hide Or you could toggle visibility with dbus-send --type=method_call --dest=org.onboard.Onboard /org/onboard/Onboard/Keyboard org.onboard.Onboard.Keyboard.ToggleVisible | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/644403",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/275743/"
]
} |
644,442 | On 3 machines I get: $ speedtest-cli Retrieving speedtest.net configuration...Traceback (most recent call last): File "/usr/bin/speedtest-cli", line 11, in <module> load_entry_point('speedtest-cli==2.1.2', 'console_scripts', 'speedtest-cli')() File "/usr/lib/python3/dist-packages/speedtest.py", line 1986, in main shell() File "/usr/lib/python3/dist-packages/speedtest.py", line 1872, in shell speedtest = Speedtest( File "/usr/lib/python3/dist-packages/speedtest.py", line 1091, in __init__ self.get_config() File "/usr/lib/python3/dist-packages/speedtest.py", line 1173, in get_config ignore_servers = list(ValueError: invalid literal for int() with base 10: '' I have tested one of these machines on two different internet connections with the same result. Why is it not working? | From this speedtest-cli Pull Request , I gather the speedtest site have changed something in the response their API gives out. Looking at the first commit in the PR, you just need to modify a single line in speedtest.py. If you're in Ubuntu or similar, and you have the file in the location shown in your output, you can fix it with: ## Backup original codesudo gzip -k9 /usr/lib/python3/dist-packages/speedtest.py## Make the line substitutionsed -i "s/^ map(int, server_config\['ignoreids'\].split(','))$/ map(int, (server_config['ignoreids'].split(',') if len(server_config['ignoreids']) else []) )/" /usr/lib/python3/dist-packages/speedtest.py EDIT: the final patch is at https://github.com/sivel/speedtest-cli/commit/cadc68 , and published in v2.1.3 . It's too complex for a simple one-line sed command, but you could still apply it yourself manually. Or you could try downloading that version of the speedtest.py file yourself: sudo gzip -k9 /usr/lib/python3/dist-packages/speedtest.pysudo wget https://raw.githubusercontent.com/sivel/speedtest-cli/v2.1.3/speedtest.py \ -O /usr/lib/python3/dist-packages/speedtest.py (Again, you should double-check the location of the speedtest.py file. The above location seems to be common for Ubuntu, but not across all versions of Unix/Linux.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/644442",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2972/"
]
} |
644,472 | The code below is a simple bash script I am running on a Linux machine, and I was wondering that why the time interval between each output is four seconds instead of eight ? $ for test in test1 test2 test3; do (echo ${test}; sleep 4s; echo hop2; sleep 4s; echo hop3) | date; doneSun 11 Apr 2021 12:42:27 AM +07Sun 11 Apr 2021 12:42:31 AM +07Sun 11 Apr 2021 12:42:35 AM +07 Despite increasing the latter time value to be somewhat longer, the time interval between each output is still four seconds. $ for test in test1 test2 test3; do (echo ${test}; sleep 4s; echo hop2; sleep 50s; echo hop3) | date; doneSun 11 Apr 2021 12:42:44 AM +07Sun 11 Apr 2021 12:42:48 AM +07Sun 11 Apr 2021 12:42:52 AM +07 This is very confusing, I will really appreciate it if anyone can explain this. What'd more confusing is that if I put the date command in the front, it looks like none of the sleep commands are executed: $ for test in test1 test2 test3; do (date; echo ${test}; sleep 50s; echo hop2; sleep 50s; echo hop3) | date; doneSun 11 Apr 2021 01:22:35 AM +07Sun 11 Apr 2021 01:22:35 AM +07Sun 11 Apr 2021 01:22:35 AM +07 | To clarify this, let me add add some debugging output to stderr (bypassing the pipe), before & after echoing "hop2": $ for test in test1 test2 test3; do (echo ${test}; sleep 4s; echo before hop2 >&2; echo hop2; echo after hop2 >&2; sleep 4s; echo hop3) | date; doneSat Apr 10 11:29:46 PDT 2021before hop2Sat Apr 10 11:29:50 PDT 2021before hop2Sat Apr 10 11:29:54 PDT 2021before hop2 Note that echo after hop2 >&2 never executes , and neither do the commands after it: the second sleep and the echo hop3 . As I understand it, here's what happens. Within the loop, two separate processes execute in parallel, with output from the first piped to input of the second. The two processes execute: echo ${test}sleep 4secho before hop2 >&2echo hop2echo after hop2 >&2sleep 4secho hop3 and date Here's the rough sequence of execution (the exact sequence of the first 3 steps and the beginning of step 4 will be somewhat random): Process 1 executes echo ${test} ; this writes "test1" (and a newline) into the pipe, where it's buffered so that it can be read later. Process 2 executes date , printing the current date to the terminal. Process 2 exits, closing its end of the pipe. Process 1 executes sleep 4s . After 4 seconds, process 1 executes echo before hop2 >&2 , printing "before hop2" to the terminal. Process 1 tries to execute echo hop2 , but since the pipe's only reader has closed it, it gets a SIGPIPE errror. This apparently causes the entire subshell process (not just the echo command) to exit. Note that this happens only because echo is a shell builtin; if you used /bin/echo hop2 (an external command, instead of the shell's echo builtin) it would execute the second sleep as you expected. BTW, this is relatively consistent between different shells. I get the same results running this in bash, zsh, dash, and ksh (interactively). ksh in a script is a bit different, because it apparently doesn't wait for process 1 to exit before continuing, so the date s all execute immediately, followed (4 seconds later) by a series of "before hop2" lines. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/644472",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/466091/"
]
} |
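
A small illustration (not in the answer itself) of the mechanism the answer for 644,472 describes: once the right-hand side of the pipe has exited, the next write from the builtin echo receives SIGPIPE and the rest of the subshell never runs.

```sh
( echo first; sleep 1; echo second; echo "never reached" >&2 ) | head -n1
# prints "first"; "never reached" is not printed, because echoing "second"
# into the closed pipe kills the whole subshell with SIGPIPE
```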
644,491 | I've got a 20GB RAR file to extract with a password on Debian Linux Google Cloud VM. I first tried sudo apt-get install unrar but the following output was given: Reading package lists... DoneBuilding dependency tree Reading state information... DonePackage unrar is not available, but is referred to by another package.This may mean that the package is missing, has been obsoleted, oris only available from another sourceE: Package 'unrar' has no installation candidate I found that this is likely to be because I don't have the multiverse activated, so I tried sudo add-apt-repository multiverse . This didn't work: Error: 'multiverse' invalid I eventually found a post saying that 'unrar free' could be installed. I installed it, and ran unrar-free -x -p Filename.rar . It is currently going through each file in the archive and giving the following output: Extracting Folder_name/image/0/1.jpg Failed Extracting Folder_name/image/0/10.jpg Failed Extracting Folder_name/image/0/100.jpg Failed Extracting Folder_name/image/0/1000.bmp Failed Apparently unrar-free is unable to extract archives in the RAR 3.0 format. I don't know how to tell which version of RAR this archive was compressed in. How can I extract this RAR file? I don't mind paying some money if it means faster extraction - I've got 140GB of RAR files to get through. | You can extract RAR archives, including RAR 5 archives, in Debian with unar , which is available in the main repositories. To be able to install the unrar package, you need to enable the non-free repositories ( non-free in the “free as in freedom” sense ): sudo sed -i.bak 's/buster[^ ]* main$/& contrib non-free/g' /etc/apt/sources.listsudo apt update (The sed command adds contrib non-free to the end of every line containing “buster”; use the appropriate codename if you’re using a different release.) This will allow you to run sudo apt install unrar and use that to extract your RAR archives. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/644491",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/465230/"
]
} |
644,558 | OS: Debian Bullseye, uname -a : Linux backup-server 5.10.0-5-amd64 #1 SMP Debian 5.10.24-1 (2021-03-19) x86_64 GNU/Linux I am looking for a way of undoing this wipefs command: wipefs --all --force /dev/sda? /dev/sda while the former structure was: fdisk -l /dev/sdaDisk /dev/sda: 223.57 GiB, 240057409536 bytes, 468862128 sectorsDisk model: CT240BX200SSD1 Units: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 4096 bytesI/O size (minimum/optimal): 4096 bytes / 4096 bytesDisklabel type: gptDisk identifier: 8D5A08BF-0976-4CDB-AEA2-8A0EAD44575EDevice Start End Sectors Size Type/dev/sda1 2048 1050623 1048576 512M EFI System/dev/sda2 1050624 468860927 467810304 223.1G Linux filesystem and the output of that wipefs command (is still sitting on my terminal): /dev/sda1: 8 bytes were erased at offset 0x00000052 (vfat): 46 41 54 33 32 20 20 20/dev/sda1: 1 byte was erased at offset 0x00000000 (vfat): eb/dev/sda1: 2 bytes were erased at offset 0x000001fe (vfat): 55 aa/dev/sda2: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef/dev/sda: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54/dev/sda: 8 bytes were erased at offset 0x37e4895e00 (gpt): 45 46 49 20 50 41 52 54/dev/sda: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa I might have found an article hosted on https://sysbits.org/ , namely: https://sysbits.org/undoing-wipefs/ I will quote the wipe and undo parts from there, I want to know if it's sound and I can safely execute it on my server, which I did not yet reboot, and since then trying to figure out a work-around from this hell of a typo: wipe part wipefs -a /dev/sda/dev/sda: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54/dev/sda: 8 bytes were erased at offset 0x3b9e655e00 (gpt): 45 46 49 20 50 41 52 54/dev/sda: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa undo part echo -en '\x45\x46\x49\x20\x50\x41\x52\x54' | dd of=/dev/sda bs=1 conv=notrunc seek=$((0x00000200))echo -en '\x45\x46\x49\x20\x50\x41\x52\x54' | dd of=/dev/sda bs=1 conv=notrunc seek=$((0x3b9e655e00))echo -en '\x55\xaa' | dd of=/dev/sda bs=1 conv=notrunc seek=$((0x000001fe))partprobe /dev/sda Possibly alternative solution Just now, I ran the testdisk on that SSD drive, and it found many partitions, but only these two match the original: TestDisk 7.1, Data Recovery Utility, July 2019Christophe GRENIER <[email protected]>https://www.cgsecurity.orgDisk /dev/sda - 240 GB / 223 GiB - CHS 29185 255 63 Partition Start End Size in sectors 1 P EFI System 2048 1050623 1048576 [EFI System Partition] [NO NAME] 2 P Linux filesys. data 1050624 468860927 467810304 Can I / Should I just hit Write (Write partition structure to disk)? If not, why not? | You're lucky that wipefs actually prints out the parts it wipes. These, wipefs -a /dev/sda/dev/sda: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54/dev/sda: 8 bytes were erased at offset 0x3b9e655e00 (gpt): 45 46 49 20 50 41 52 54/dev/sda: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aaecho -en '\x45\x46\x49\x20\x50\x41\x52\x54' | dd of=/dev/sda bs=1 conv=notrunc seek=$((0x00000200))echo -en '\x45\x46\x49\x20\x50\x41\x52\x54' | dd of=/dev/sda bs=1 conv=notrunc seek=$((0x3b9e655e00))echo -en '\x55\xaa' | dd of=/dev/sda bs=1 conv=notrunc seek=$((0x000001fe)) do look sensible to me in general. But note that the offsets there are different from the ones in your case! You'll need to use the values you got from wipefs . 
Based on the offset values (0x3b9e655e00 vs 0x37e4895e00), they had a slightly larger disk than you did (~256 GB vs ~240 GB). Using their values would mean that the backup GPT at the end of disk would be left broken.That shouldn't matter much, in that any partitioning tool should be able to rewrite it as long as the first copy is intact. But if it was the other way around, and the wrong offset you used happened to be within the size of your disk, you'd end up overwriting some random part of the drive. Not good. Also, the magic numbers for the filesystems of course need to be in the right places. I tested wiping and undoing it with a VFAT image, and wrote this off the top of my head before reading your version too closely: printf "$(printf '\\x%s' 46 41 54 31 36 20 20 20)" | dd bs=1 conv=notrunc seek=$(printf "%d" 0x00000036) of=test.vfat that's for the single wipefs output line (repeat for others): test.vfat: 8 bytes were erased at offset 0x00000036 (vfat): 46 41 54 31 36 20 20 20 The nested printf at the start allows to copypaste the output from wipefs , without having to manually change 46 41 54 31... to \x46\x41\x54\x31... . Again, you do need to take care to enter the correct values in the correct offsets! It probably wouldn't be too bad to automate that further, but what with the risk involved, I'm not too keen to post such a script publicly without significant testing. If you can, take a copy of the disk contents before messing with it. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/644558",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
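
The restore command the answer for 644,558 tested against a filesystem image, reformatted with the caveats spelled out. The byte values and the offset below are the ones from that answer's own wipefs output for a VFAT test image; for a real disk they must be copied from the wipefs output of that exact disk, and the target should be verified (and backed up) first.

```sh
# wipefs reported: "8 bytes were erased at offset 0x00000036 (vfat): 46 41 54 31 36 20 20 20"
printf "$(printf '\\x%s' 46 41 54 31 36 20 20 20)" |
    dd of=test.vfat bs=1 conv=notrunc seek="$(printf '%d' 0x00000036)"
```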
644,626 | I need to kill some processes that are run as sudo, matching them by full name, and count the number of original command invocations. For each process there are two processes: the command itself, and the sudo one, but I can work this around: $ sudo -b perf record sleep 100$ pgrep -fa '/perf record'2245700 /usr/lib/linux-tools/.../perf record sleep 100 The problem is that when I invoke sudo pkill , it will find, kill and count itself: $ sudo pkill -ef '/perf record' | wc -l2 Is there a simple way for pkill not to include itself, in this case? I've tried using pid files, which could be acceptable, but the pgrep/pkill documentation is lacking, and it seems not to work in the basic form: $ pgrep -f '/perf record' | tee /tmp/pids$ sudo pkill -F /tmp/pids killed (pid 2249211) as it will kill only the first. The documentation says: -F, --pidfile file Read PID's from file. This option is perhaps more useful for pkill than pgrep. but it's ambiguous about the meaning of PID's . | Try: sudo pkill -ef '/[p]erf record' | wc -l It's a trick which uses a character class containing only the single letter p . The regex is looking for /perf , but the sudo pkill command line has /[p]erf , which doesn't match. This method has been used for decades in commands like ps aux | awk '/[f]oo/ {print $1}' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/644626",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12814/"
]
} |
644,628 | I have few directories whose names looks like the following: I want to remove the newline from the end recursively. I checked Recursively rename directories I also checked Remove newlines in file names The solution it suggests is: find -name $'*\n*' -exec rename $'s|\n| |g' '{}' \; But in my case find -name $'*\n*' returns nothing. If I remove the $ it can find the directories % find . -name '*\n*'./second?% find . -name '*\r*'./third?./first? However, when I run find . -name '*\n*' -exec rename $'s|\n| |g' '{}' \; it does not rename the directory. I also tried find . -name $'*\n*' -exec rename $'\n' ' ' {} \; from Recursively remove newline in file names . It is also not renaming the directories. What can I do? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/644628",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/206574/"
]
} |
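
A hedged sketch, not taken from any recorded answer, of one way to do what the question for 644,628 asks: strip carriage returns and newlines from names, depth-first so children are renamed before their parent directories.

```sh
find . -depth -name $'*[\r\n]*' -exec sh -c '
    for p do
        dir=${p%/*}
        base=${p##*/}
        newbase=$(printf %s "$base" | tr -d "\r\n")   # drop CR and LF
        mv -- "$p" "$dir/$newbase"
    done
' sh {} +
```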
644,769 | I can't seem to add an alias for git add **/ I think those two asterisks is causing an issue, or it could be that forward slash. How do I solve this? So far I have tried alias ga='git add **/' When I run the above alias in my terminal, I issue this command ga somename.java which would be an equivalent if alias was not there would be git add **/somename.java . I have also tried adding more asterisks, slashes, dollar sign followed by a quote, etc. | ga somename.java is short for git add **/ somename.java . The first argument to the alias is not concatenated to the last word inside the alias. You can think of it this way: the space that you type after ga is not removed. To do anything more complex than give a command an alternate name or pass arguments to a command, use a function instead of an alias. Here's a zsh version which is pretty versatile. function ga { git add **/$^~@(.N)}alias ga='noglob ga' Here's how it works for ga foo* bar : The alias ga expands to noglob ga . Thanks to the noglob precommand modifier , noglob ga foo* bar does not expand the wildcard in foo* , and instead passes foo* literally as an argument to ga . Since ga (the one in noglob ga … ) isn't the first word in the command, it is not looked up as an alias, and therefore it refers to the function, which is called with the arguments foo* and bar . $^~@ uses the ${^spec} and ${~spec} parameter expansion forms . The modifier ^ causes **/ to be prepended to each element of the array $@ , resulting in **/foo* and **/bar . Without this modifier, **/$@ would expand to **/foo* and bar . $~@ causes the wildcard characters in the arguments to be expanded now. This way, the pattern foo* is not expanded by itself: what's expanded is **/foo* , so this matches things like sub/dir/foobar . The . glob qualifier causes only regular files to be matched. Make it -. to also match symbolic links to regular files, or @. to match all symbolic links in addition to regular files. The N glob qualifier causes patterns that match nothing to be omitted. Don't put (N) if you prefer to have an error if one of the patterns doesn't match anything. About the only nice thing that's missing here is completion, which is doable but I think not worth the trouble here. In bash, you can't have it as nice. There's no way to prevent wildcards from being expanded immediately, so ga foo* will be equivalent to ga foo1 foo2 if the directory content is .git …foo1foo2subdirsubdir/foobar which will miss subdir/foobar . In bash, since you need to quote wildcards in arguments, you might as well rely on Git's own pattern matching in a pathspec . Note in particular that * matches directory separators, i.e. it has the shell ** behavior built in. function ga { local x for x in "$@"; do git add "*/$x" done}ga 'foo*' bar On a final note, if you keep your .gitignore up-to-date and commit often, you'll rarely need anything other than git add . . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/644769",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/366308/"
]
} |
644,819 | I want to move the .tmux.conf file from ~ to ./config/... , but I'm not sure about whether this would lead to tmux would not know about this location. So where are the alternative location(s) where tmux will source its .tmux.conf file? | Starting with tmux version 3.1, ~/.config/tmux/tmux.conf works as an alternative to ~/.tmux.conf . Notice that it cannot be a hidden file in that directory. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/644819",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/201322/"
]
} |
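A minimal sketch of relocating the file, assuming tmux >= 3.1 as stated in the answer above (paths are the standard ones, not taken from the original post):

tmux -V                                    # confirm the version is 3.1 or newer
mkdir -p ~/.config/tmux
mv ~/.tmux.conf ~/.config/tmux/tmux.conf   # note: tmux.conf, not .tmux.conf
tmux source-file ~/.config/tmux/tmux.conf  # optional: reload it in a running session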
644,821 | new to stackoverflow and linux usage, have a NAS setup on HC4 currently trying to set up a steam cache, after installing docker I was trying to install network-manager which lead me down a rabbit hole because it returned errors such as : W: Failed to fetch http://deb.debian.org/debian/dists/stable/InRelease Could not resolve 'deb.debian.org'W: Failed to fetch http://deb.debian.org/debian/dists/buster-updates/InRelease Could not resolve 'deb.debian.org'W: Failed to fetch http://deb.debian.org/debian-security/dists/buster/updates/InRelease Could not resolve 'deb.debian.org'W: Failed to fetch http://ftp.debian.org/debian/dists/buster-backports/InRelease Could not resolve 'ftp.debian.org'W: Failed to fetch https://download.docker.com/linux/debian/dists/buster/InRelease Could not resolve 'download.docker.com'W: Failed to fetch http://packages.openmediavault.org/public/dists/erasmus/InRelease Could not resolve 'packages.openmediavault.org'W: Failed to fetch https://openmediavault-plugin-developers.github.io/packages/debian/dists/usul/InRelease Could not resolve 'openmediavault-plugin-developers.github.io'W: Failed to fetch http://packages.openmediavault.org/public/dists/usul/InRelease Could not resolve 'packages.openmediavault.org'W: Failed to fetch http://ppa.linuxfactory.or.kr/dists/buster/InRelease Could not resolve 'ppa.linuxfactory.or.kr'W: Some index files failed to download. They have been ignored, or old ones used instead.W: Target Packages (stable/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:24 and /etc/apt/sources.list.d/docker.list:1W: Target Translations (stable/i18n/Translation-en) is configured multiple times in /etc/apt/sources.list:24 and /etc/apt/sources.list.d/docker.list:1W: Target Packages (stable/binary-arm64/Packages) is configured multiple times in /etc/apt/sources.list.d/docker.list:1 and /etc/apt/sources.list.d/omvextras.list:2W: Target Packages (stable/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:24 and /etc/apt/sources.list.d/omvextras.list:2W: Target Translations (stable/i18n/Translation-en) is configured multiple times in /etc/apt/sources.list:24 and /etc/apt/sources.list.d/omvextras.list:2 now I get those errors with apt-get update, and apt-get install network-manager returns Reading package lists... DoneBuilding dependency treeReading state information... 
DonePackage network-manager is not available, but is referred to by another package.This may mean that the package is missing, has been obsoleted, oris only available from another sourceE: Package 'network-manager' has no installation candidate and because I messed with this a lot, here is my sources.list: #------------------------------------------------------------------------------## OFFICIAL DEBIAN REPOS#------------------------------------------------------------------------------####### Debian Main Reposdeb http://deb.debian.org/debian stable maindeb-src http://deb.debian.org/debian stable maindeb http://deb.debian.org/debian buster-updates maindeb-src http://deb.debian.org/debian buster-updates maindeb http://deb.debian.org/debian-security/ buster/updates maindeb-src http://deb.debian.org/debian-security/ buster/updates maindeb http://ftp.debian.org/debian buster-backports maindeb-src http://ftp.debian.org/debian buster-backports main#------------------------------------------------------------------------------## UNOFFICIAL REPOS#------------------------------------------------------------------------------####### 3rd Party Binary Repos###Docker CEdeb [arch=amd64] https://download.docker.com/linux/debian buster stable###openmediavaultdeb http://packages.openmediavault.org/public erasmus maindeb-src http://packages.openmediavault.org/public erasmus main any help or a direction to point me in would be greatly appreciated | Starting with tmux version 3.1, ~/.config/tmux/tmux.conf works as an alternative to ~/.tmux.conf . Notice that it cannot be a hidden file in that directory. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/644821",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/466484/"
]
} |
644,827 | Hello everyone I have recently installed ubuntu on my dos machine and when the installation got finished it told me to restart. But after restarting the system is not booting into ubuntu automatically, instead i get this grub screen I have tried other solution also but in that they select linux kernel in /boot folder But i cant find that file either Can someone please help me. I am attaching my screenshot and all the file contents i found | Starting with tmux version 3.1, ~/.config/tmux/tmux.conf works as an alternative to ~/.tmux.conf . Notice that it cannot be a hidden file in that directory. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/644827",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/466494/"
]
} |
644,897 | I am creating an automation script. As part of it, I want to add a cron job.Here's a part of the script that fails: BACKUP_USER=backupbotSCRIPT_NAME=backup-script.shscp -i ./ssh-key ./$SCRIPT_NAME user@server:/tmpssh -i ./ssh-key user@server " sudo mv /tmp/$SCRIPT_NAME /home/$BACKUP_USER/bin/ && sudo chown $BACKUP_USER /home/$BACKUP_USER/bin/$SCRIPT_NAME && sudo chmod 100 /home/$BACKUP_USER/bin/$SCRIPT_NAME && sudo sed -i 's/THE_URL/'${1}'/' /home/$BACKUP_USER/bin/$SCRIPT_NAME && sudo echo '*/1 * * * *' $BACKUP_USER /home/$BACKUP_USER/bin/$SCRIPT_NAME > /etc/cron.d/discourse-backup" The problematic command is: sudo echo '*/1 * * * *' $BACKUP_USER /home/$BACKUP_USER/bin/$SCRIPT_NAME > /etc/cron.d/discourse-backup I'm getting: bash: line 5: /etc/cron.d/discourse-backup: Permission denied Until this one, everything is executed as it should. What is the issue with my last command?I thought it is some problem with quotes - I tried multiple combinations of single- and double- quotes, but I ended up with the same (or worse) results. | When you run a command like sudo echo some text > file the redirection is done by your shell as the normal user before running sudo . Edit, answering a comment: The shell doesn't treat sudo as anything specific compared to other commands, and it doesn't know that sudo will run with elevated privileges. The shell's behavior will be the same as with /bin/echo some text > file When the shell parses one of the command lines above, it finds the redirection. So it will first open the file, then fork a process for the program to execute, dup the file descriptor to stdout and exec the program. Then either /bin/echo or sudo is run with stdout already redirected. In your use case, opening the file for the redirection will fail as the normal user. Try something like echo '*/1 * * * *' $BACKUP_USER /home/$BACKUP_USER/bin/$SCRIPT_NAME | sudo tee /etc/cron.d/discourse-backup >/dev/null In this case, the file is a command line argument for sudo which will run as root and then passed the file name argument to tee which will then be executed with elevated privileges. This will allow tee to open the file for writing. 2nd edit : This answer was focused on solving the problem related to sudo and redirection not on other possible problems. As mentioned by user cas in a comment, the variables should be quoted, either individually or as the entire string, e.g. echo "*/1 * * * * $BACKUP_USER /home/$BACKUP_USER/bin/$SCRIPT_NAME" | sudo ... In the use case of the question, the quoting might be less critical for two reasons. The arguments are used for echo only, and the output must be a valid crontab line. This forbids several "problematic" characters in the variables anyway. But in general, correct quoting is always recommended. As this command would be part of a longer quoted string, the quotes could be escaped, e.g. ssh -i ./ssh-key user@server " ... echo \"*/1 * * * * $BACKUP_USER /home/$BACKUP_USER/bin/$SCRIPT_NAME\" | sudo ... " | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/644897",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/345969/"
]
} |
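Another common pattern for the same problem, sketched here as an assumption-laden illustration rather than a drop-in replacement (it reuses the variables from the question and would need extra escaping if embedded inside the quoted ssh command string):

sudo sh -c "echo '*/1 * * * * $BACKUP_USER /home/$BACKUP_USER/bin/$SCRIPT_NAME' > /etc/cron.d/discourse-backup"
# here the redirection is performed by the sh that sudo starts as root, so opening /etc/cron.d/... succeeds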
645,008 | I ran this command: grep -i 'bro*' shows.csv and got this as output 1845307,2 Broke Girls,2011,138,6.7,890931702042,An Idiot Abroad,2010,21,8.3,29759903747,Breaking Bad,2008,62,9.5,14025772249364,Broadchurch,2013,24,8.4,893781733785,Bron/Broen,2011,38,8.6,563572467372,Brooklyn Nine-Nine,2013,145,8.4,2095717569592,Chilling Adventures of Sabrina,2018,36,7.6,690417221388,Cobra Kai,2018,31,8.7,729931355642,Fullmetal Alchemist: Brotherhood,2009,69,9.1,111111118360,Johnny Bravo,1997,67,7.2,32185455275,Prison Break,2005,91,8.3,465246115341,Sabrina the Teenage Witch,1996,163,6.6,334581312171,The Umbrella Academy,2019,20,8,1408003339966,Unbreakable Kimmy Schmidt,2015,51,7.6,61891 Where is bro in breaking bad? In fact, o doesn't even appear in "Breaking bad". I tried it once more, and got the same result. It is not accounting for the last character. Is there something wrong in the way I am writing it? You can download the file shows.csv from https://cdn.cs50.net/2021/x/seminars/linux/shows.csv | In your code o* means "zero or more occurrences of o ". It seems you confused regular expressions with glob syntax (where o* means "one o and zero or more whatever characters"). In Breaking Bad there is exactly zero o characters after Br , so it matches bro* (case-insensitively). grep -i bro shows.csv will do what (I think) you want. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/645008",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/466667/"
]
} |
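A small demonstration of the regex-versus-glob point, using titles from the question's data (illustrative only):

printf '%s\n' 'Breaking Bad' 'Broadchurch' 'Cobra Kai' | grep -i 'bro*'   # all three match: "br" plus zero or more "o"
printf '%s\n' 'Breaking Bad' 'Broadchurch' 'Cobra Kai' | grep -i 'bro'    # only Broadchurch contains a literal "bro"
printf '%s\n' 'Breaking Bad' 'Broadchurch' 'Cobra Kai' | grep -i 'broo*'  # regex way to require at least one "o"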
645,027 | I was trying to install npm.. └─$ sudo apt-get install npm I got some error/message https://paste.ubuntu.com/p/ZvGd7Kt96f/ That's very huge that's why I didn't add them here.. Then, I tried sudo apt --fix-broken install I got these messages I am using Debian Based Linux Distro.. I tried as the website also.https://www.how2shout.com/linux/how-to-install-npm-and-nodejs-14-x-on-kali-linux/ curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -sudo apt-get updatesudo apt-get install nodejs I got following error Unpacking nodejs (14.16.1-deb-1nodesource1) over (12.21.0~dfsg-1) ...dpkg: error processing archive /var/cache/apt/archives/nodejs_14.16.1-deb-1nodesource1_amd64.deb (--unpack): trying to overwrite '/usr/share/doc/nodejs/api/cli.json.gz', which is also in package nodejs-doc 12.21.0~dfsg-1dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)Errors were encountered while processing: /var/cache/apt/archives/nodejs_14.16.1-deb-1nodesource1_amd64.debE: Sub-process /usr/bin/dpkg returned an error code (1) | In your code o* means "zero or more occurrences of o ". It seems you confused regular expressions with glob syntax (where o* means "one o and zero or more whatever characters"). In Breaking Bad there is exactly zero o characters after Br , so it matches bro* (case-insensitively). grep -i bro shows.csv will do what (I think) you want. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/645027",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/464778/"
]
} |
645,031 | First of all, if I do a simple ssh [email protected] "cat backup.tar" > backup.tar this is not happening! So, there is something else converting it which I don't know. I have the requirement to provide a backup of the current state of the system without giving access to the server. What I have done is the following: Create a new user. Let's call him username Allow only access via ssh-key Put the keys into the authorized-keys Change the command / shell in /etc/passwd to a custom script for username Custom script gets the data by doing a database dump and copy some files and eventually creates a tar archive to the stdout Make sure that there is no additional output to stdout . To get the data you need to do: ssh [email protected] > backup.tar . Now, nearly everything is working fine except that the line ending is changed. The tar gets bigger and I can see that every line has the dos line ending with a ^M symbol. If I force dos2unix on the binary tar file the file matches the file on the server. Why is this happening and what is changing the line ending?I even simplified it to the following: The script on the server for username linked in /etc/passwd just make a cat .viminfo and the line endings are still transferred wrong when using the above ssh command to login as username and redirect the output to a file on my local machine. | What is likely happening: when ssh is invoked with no command argument (and no RemoteCommand option specified), a pseudo-terminal is by default allocated on the remote system for the session. That happens regardless of the command the remote system is configured to run. A pseudo-terminal is usually configured to translate line feed characters into carriage return + line feed sequences ( LF → CRLF , see this answer for a thorough explanation). Hence, the output of your script is written to a pseudo-terminal device, which alters it, and then sent to the client side. The allocation of a pseudo-terminal can be prevented in several ways, including: invoking ssh with the -T option on the client side (or using RequestTTY no in ssh 's configuration file (likely ~/.ssh/config )); prepending no-pty (note the white space) to the user's key in authorized_keys on the remote system; using PermitTTY no in the configuration of the SSH server on the remote system (possibly /etc/ssh/sshd_config ), likely using a Match conditional block to make it only affect a specific user); for instance: Match User="username" ForceCommand /path/to/your/script DisableForwarding yes PermitTTY no (which assumes a working command interpreter is set for the user in /etc/passwd ; you may then want to lock the user's password and make sure they have no other means of logging in); (in your case, invoking ssh with a command argument should work too (e.g. ssh user@host : ); though I would consider this no more than a workaround). Of course, the most suitable option depends on your use case and, in particular, on whether the user is supposed to be allowed to choose. See also: Creating a UNIX account which only executes one command — for a partially alternative approach, noting that, as pointed out in a comment to your question, a script used as a user's default interpreter does (may) prevent the user from getting a shell, but may also add exploitable elements of its own. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/645031",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/466714/"
]
} |
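A sketch of the client-side options named in the answer above, with a generic user@host standing in for the redacted names in the question:

ssh -T user@host > backup.tar      # -T: never request a pseudo-terminal for this session
# permanently, in ~/.ssh/config:
#   Host backuphost
#       HostName host
#       User user
#       RequestTTY no
# or per key on the server, in ~/.ssh/authorized_keys (key shown as a placeholder):
#   no-pty ssh-ed25519 AAAA... backup-key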
645,358 | AFAICT, having continue in for loop that calls another function breaks the errexit semantics. In the main() function, I want to continue onto the next iteration if anything fails in the build() function: #! /usr/bin/env bashexport PS4='# ${BASH_SOURCE}:${LINENO}: ${FUNCNAME[0]}() - [${SHLVL},${BASH_SUBSHELL},$?] 'set -o xtraceset -o errexitbuild() { local _foo=$1 if [ "${_foo}" -eq 1 ]; then false fi printf "%s with foo=%s builds ok\\n" "${FUNCNAME[0]}" "${_foo}"}main() { for i in 1 2 3; do build $i || continue done}main "$@" However, continue inside the for loop causes the code to continue inside the build() function instead, removing the effect of the errexit flag: $ ./foo.sh # ./foo.sh:5: () - [3,0,0] set -o errexit# ./foo.sh:23: () - [3,0,0] main# ./foo.sh:18: main() - [3,0,0] for i in 1 2 3# ./foo.sh:19: main() - [3,0,0] build 1# ./foo.sh:8: build() - [3,0,0] local _foo=1# ./foo.sh:10: build() - [3,0,0] '[' 1 -eq 1 ']'# ./foo.sh:11: build() - [3,0,0] false# ./foo.sh:14: build() - [3,0,1] printf '%s with foo=%s builds ok\n' build 1build with foo=1 builds ok# ./foo.sh:18: main() - [3,0,0] for i in 1 2 3# ./foo.sh:19: main() - [3,0,0] build 2# ./foo.sh:8: build() - [3,0,0] local _foo=2# ./foo.sh:10: build() - [3,0,0] '[' 2 -eq 1 ']'# ./foo.sh:14: build() - [3,0,0] printf '%s with foo=%s builds ok\n' build 2build with foo=2 builds ok# ./foo.sh:18: main() - [3,0,0] for i in 1 2 3# ./foo.sh:19: main() - [3,0,0] build 3# ./foo.sh:8: build() - [3,0,0] local _foo=3# ./foo.sh:10: build() - [3,0,0] '[' 3 -eq 1 ']'# ./foo.sh:14: build() - [3,0,0] printf '%s with foo=%s builds ok\n' build 3build with foo=3 builds ok As you can see on the line with the printf , the exit code of the previous line, the false , is indeed 1 (the third number inside the bracket in front of it), so it is running as if errexit wasn't in place: # ./foo.sh:14: build() - [3,0,1] printf '%s with foo=%s builds ok\n' build 1 I've confirmed that removing the || continue makes the shell exit when i=1 , so the errexit is passed onto the subhshell/function. Any help would be much appreciated. Versions ~ $ bash --version GNU bash, version 5.0.3(1)-release (x86_64-pc-linux-gnu) Update Lots of good answers as to why this is. As for how to solve it, I've found this solution to be the easiest to make the script to what I want: Changing the false to: false || return $? The drawback of course, is that I'll have to do that for all the commands the function calls out to. I might have to go back to my old approach of using a run() wrapper, which executes the passed command, checks the return code of it and fails the script accordingly. Doing what you would expect errexit to do, I suppose :-) | This seems to match the description of -e / -errexit in the bash documentation : The shell does not exit if the command that fails is part of thecommand list immediately following a while or until keyword, part ofthe test in an if statement, part of any command executed in a && or|| list except the command following the final && or ||, any commandin a pipeline but the last, or if the command’s return status is beinginverted with !. [...] If a compound command or shell function executes in a context where -eis being ignored, none of the commands executed within the compoundcommand or function body will be affected by the -e setting, even if -eis set and a command returns a failure status. 
This has been covered in this stackoverflow question , which links to this email with the following text: > My initial gripe about errexit (and its man page description) is that the > following doesn't behave as a newbie would expect it to:> > set -e> f() {> false> echo "NO!!"> }> f || { echo "f failed" >&2; exit 1; } Indeed, the correct behavior mandated by POSIX (namely, that 'set -e' is completely ignored for the duration of the entire body of f(), because f was invoked in a context that ignores 'set -e') is not intuitive. But it is standardized, so we have to live with it. The POSIX description of -e says: -e When this option is on, if a simple command fails for any of the reasons listed in Consequences of Shell Errors or returns an exit status value >0, and is not part of the compound list following a while, until, or if keyword, and is not a part of an AND or OR list , and is not a pipeline preceded by the ! reserved word, then the shell shall immediately exit. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/645358",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36427/"
]
} |
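A condensed illustration of the quoted rule (not from the original thread):

set -e
f() { false; echo "still here: -e was ignored inside f"; }
f || echo "f failed"    # prints "still here...", and the || branch is not taken because f returns 0
false                   # outside any || list, so errexit applies and the shell exits here
echo "never reached"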
645,436 | I have 2 files with * delimiter, each file with 3k records. There are common fields in different positions. In file1 (count=1590) the position is 1 and in file2 (2707) the position is 2. file2 count and output count should be same. Note: in file2 2nd position numbers will be present in file1 we need to take corresponding $3 value which is 1 or 0 In both files total count was 3k, both files were * delimter, in that file1 $1 and file2 $2 was common field for both files, we need check whether common field has 0 or 1 which present in file1 $3. we need to write the file like 1==>000000001 D056002001 1 2==>000000003 D079291785 0, $1=seqno,$2=matched9digit value follwed byD and $3 whether is 0 or 1 All $2 values from file2 will be present as $1 values in file1. file1: D056002001**1D005356216**1D079291785**0D610350290**1 file2: 000000001*D056002001000000002*D610350290000000003*D079291785 output: 000000001*D056002001*1000000002*D610350290*1000000003*D079291785*0 I tried using the following awk commands: awk -F'*' 'NR==FNR{c[$1]++;next};c[$2]' file1 file2 > outputawk -F"*" '{ OFS="*"; if (NR==FNR) { a[$1+0]=$0;} else { if (a[$1+0]) { print $1, a[$2+0]}}}' file1 file2 > outputawk -F"*" '{ OFS="*"; if (NR==FNR) { a[$1+0]=$0;NEXT; } else { if (a[$2+0]) { print $0,a[$2+0]; } else { print $0,"***"; }}}' file1 file2 > outputawk -F"*" '{ OFS="*"; if (NR==FNR) {a[$1]=1; b[$1]=$2;next;} else { if ( a[$1]==1) { print $0,b[$1]} else { print $0,"0";}}}' file1 file2 > output Please help on that? | awk 'BEGIN{FS=OFS="*"} NR==FNR{map[$1]=$3; next} {print $0, map[$2]}' file1 file2000000001*D056002001*1000000002*D610350290*1000000003*D079291785*0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/645436",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
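An optional defensive variant of the same join; the question states that every key in file2 exists in file1, so the fallback below is only illustrative:

awk 'BEGIN{FS=OFS="*"} NR==FNR{map[$1]=$3; next} {print $0, (($2 in map) ? map[$2] : "MISSING")}' file1 file2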
645,447 | I'm looking for a command that can perform a grep operation in a specific file contained in a tar.gz archive. Example: file: archive.tar.gz, which contains: fileA.txtfileB.txtfileC.txt I want to grep only inside fileA.txt, not in the other two, without extract the files from the original archive, with only one command. Is it possible? I have tried: for f in /path/*.gz; do tar -xzf "$f" --to-command='grep -Hn --label="$TAR_ARCHIVE/$TAR_FILENAME" pattern || true'done This command performs the grep in all files included in the archive, but this is not exactly what I need. I need a command that greps only in the file I want to search in. | Tell tar which file it should process inside the archive: for f in /path/*.gz; do tar -xzf "$f" --to-command='grep -Hn --label="$TAR_ARCHIVE/$TAR_FILENAME" pattern || true' fileA.txtdone ( fileA.txt at the end of the tar command). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/645447",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/257551/"
]
} |
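With GNU tar, an alternative sketch is to stream just that member to stdout and grep it there, assuming the file is stored in the archive under exactly that name:

for f in /path/*.gz; do
    tar -xzOf "$f" fileA.txt | grep -Hn --label="$f/fileA.txt" pattern
done
# -O (--to-stdout) writes the extracted member to standard output instead of creating a file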
645,719 | I have this awk statement that reads a YAML file and outputs a particular value. I need to loop this awk inside a loop where I read a key value from a list of values and pass that key to awk. The YAML file has this structure: abc: NAME: Bob OCCUPATION: Techniciandef: NAME: Jane OCCUPATION: Engineer Say I want to get key abc OCCUPATION value of TECHNICIAN , through googling I managed to construct an awk statement that gives what I want > awk 'BEGIN{OFS=""} /^[^ ]/{ f=/^abc:/; next } f{ if (sub(/:$/,"")) abc=$2; else print abc,$1 $2}' test.yml| grep "OCCUPATION:" | cut -d':' -f2Technician However passing -v option to awk does not seem to give anything if I use this loop: items="abc,def"for item in $(echo $items | sed "s/,/ /g"); do echo $item; awk -v name="$item" 'BEGIN{OFS=""} /^[^ ]/{ f=/^\name:/; next } f{ if (sub(/:$/,"")) name=$2; else print name,$1 $2}' test.yml| grep "OCCUPATION:" | cut -d':' -f2; done I get just the debug echos I set out abcdef Where am I going wrong? I thought the variable should be interpreted correctly inside awk? EDIT: Based on steeldrivers comment I have changed the input a little items="abc,def"for item in $(echo $items | sed "s/,/ /g"); do echo $item; awk -v name="$item" 'BEGIN{OFS=""} /^[^ ]/{ f=name; next } f{ if (sub(/:$/,"")) name=$2; else print name,$1 $2}' test.yml| grep "OCCUPATION:" | cut -d':' -f2; done However now I am getting all values for OCCUPATION printed: abcTechnicianEngineerdefTechnicianEngineer I tried to use the ~ operator but I think I am not using it right as it is giving me errors, so I decided to just parse the value directly, but this is giving duplicates :/ | When working with structured text like YAML or JSON or XML, you really should use a parser that "understands" the structure. There are several specific command-line tools for various kinds of structured text (e.g. xmlstarlet for xml, jq for json, and yq for yaml), and most programming/scripting languages have libraries for parsing and processing structured text. Here's how to do it in perl, using the perl core YAML module: (this requires a version of perl >= 5.14, which is when the YAML module was included as a standard part of the core module distribution. perl 5.14 was released in 2013. For earlier versions of perl, you can install YAML with cpan ). #!/usr/bin/perluse strict;use YAML qw(LoadFile);my $file = shift; # first arg is the input filenamemy $data = LoadFile($file); # load the yaml data into a hashref variable# loop over the remaining args (i.e. the keys)foreach my $item (@ARGV) { print "$item\n"; print $$data{$item}{'OCCUPATION'}, "\n";} Save this as, e.g. yaml.pl and make it executable with chmod +x yaml.pl . If your yaml data is save in a file called input.yaml , you can run it like this: $ ./yaml.pl input.yaml abc defabcTechniciandefEngineer Like awk or sed, this can also be condensed into an inscrutable one-liner: $ perl -MYAML=LoadFile -E '$data=LoadFile(shift);foreach (@ARGV) {say $_;say $$data{$_}{"OCCUPATION"}}' input.yaml abc defabcTechniciandefEngineer perl can also automatically split the arguments for you. e.g. if you change the foreach loop to: foreach my $item (split /\s*,\s*/,join(",",@ARGV)) { you can run it as: $ ./yaml.pl input.yaml abc def or $ ./yaml.pl input.yaml "abc,def" or any combination (asuuming hypothetical ghi and jkl keys): $ ./yaml.pl input.yaml "abc,def" ghi jkl | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/645719",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/467438/"
]
} |
645,758 | If I subtract a time amount from the current date, GNU date works intuitively: date '+%F %R'; date '+%F %R' --date='- 1 hour'2021-04-19 15:352021-04-19 14:35 However, when I use a date as operand, the result is unexpected: $ date '+%F %R' --date='2000/1/2 03:04:05 - 1 hour'2000-01-02 06:04$ date '+%F %R' --date='2000/1/2 03:04:05 + 1 hour ago'2000-01-02 02:04 How is date intepreting the $date - 1 hour expression? | In short: The date you give with --date is taken in local time, unless you specify a time zone, and something like +/- NNN is taken as one. Only anything after that, even if it's just hour is taken as the relative modifier. So - 1 hour doesn't mean to subtract one hour from the given time, but to specify that the time is in the time zone UTC-01, and then to add one hour to it. What I think should work for what you're trying, would be to either explicitly give the timezone before the offset, or put the offset first so it can't be confused with a timezone. Here, using the Central European Summer Time timezone (CEST), and today's date, with %Z added to the output to show the timezone. (You could also use %z to output the numeric timezone, or +0200 here.) $ date +'%F %T %Z' -d '2021-04-19 12:00:00 CEST + 5 hours'2021-04-19 17:00:00 CEST$ date +'%F %T %Z' -d '+ 5 hours 2021-04-19 12:00:00'2021-04-19 17:00:00 CEST Though of course for a January date like in the question, a summer-time time zone like CEST would not be a valid one. But rearranging the two still works, the time you give is just taken as the local time at that time. $ date +'%F %T %Z' -d '+ 5 hours 2021-01-01 12:00:00'2021-01-01 17:00:00 CET (And for 2021-10-31 02:30:00 I get CET, even though that time also exists in CEST...) (See older revisions of this answer for more examples on how it interprets various inputs.) As per @muru's answer on another question , we can also use the --debug option to have the program actually tell us what it did. Note the second and third lines: $ date --debug +'%F %T %Z' -d '2021-04-19 12:00:00 - 1 hour'date: parsed date part: (Y-M-D) 2021-04-19date: parsed time part: 12:00:00 TZ=-01:00 date: parsed relative part: +1 hour(s) date: input timezone: -01:00 (set from parsed date/time string)date: using specified time as starting value: '12:00:00'date: starting date/time: '(Y-M-D) 2021-04-19 12:00:00 TZ=-01:00'date: '(Y-M-D) 2021-04-19 12:00:00 TZ=-01:00' = 1618837200 epoch-secondsdate: after time adjustment (+1 hours, +0 minutes, +0 seconds, +0 ns),date: new time = 1618840800 epoch-secondsdate: output timezone: +01:00 (set from TZ="Europe/Berlin" environment value)date: final: 1618840800.000000000 (epoch-seconds)date: final: (Y-M-D) 2021-04-19 14:00:00 (UTC0)date: final: (Y-M-D) 2021-04-19 16:00:00 (output timezone TZ=+01:00)2021-04-19 16:00:00 CEST The man page says: The date string format is more complex than is easily documented here [...] Which indeed seems quite apt. The more comprehensive documentation is in the info pages, or online: https://www.gnu.org/software/coreutils/manual/html_node/Date-input-formats.html | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/645758",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12814/"
]
} |
645,791 | When I try to install this, I get this error; what is the solution? ubuntu@ip-xxx-xxx-xxx-xxx:~/URLuploader-With-Hotstar$ pip3 install -r requirements.txtTraceback (most recent call last): File "/usr/bin/pip3", line 9, in <module> from pip import main File "/usr/lib/python3/dist-packages/pip/__init__.py", line 14, in <module> from pip.utils import get_installed_distributions, get_prog File "/usr/lib/python3/dist-packages/pip/utils/__init__.py", line 23, in <module> from pip.locations import ( File "/usr/lib/python3/dist-packages/pip/locations.py", line 9, in <module> from distutils import sysconfigImportError: cannot import name 'sysconfig' from 'distutils' (/usr/lib/python3.8/distutils/__init__.py) | In short: The date you give with --date is taken in local time, unless you specify a time zone, and something like +/- NNN is taken as one. Only anything after that, even if it's just hour is taken as the relative modifier. So - 1 hour doesn't mean to subtract one hour from the given time, but to specify that the time is in the time zone UTC-01, and then to add one hour to it. What I think should work for what you're trying, would be to either explicitly give the timezone before the offset, or put the offset first so it can't be confused with a timezone. Here, using the Central European Summer Time timezone (CEST), and today's date, with %Z added to the output to show the timezone. (You could also use %z to output the numeric timezone, or +0200 here.) $ date +'%F %T %Z' -d '2021-04-19 12:00:00 CEST + 5 hours'2021-04-19 17:00:00 CEST$ date +'%F %T %Z' -d '+ 5 hours 2021-04-19 12:00:00'2021-04-19 17:00:00 CEST Though of course for a January date like in the question, a summer-time time zone like CEST would not be a valid one. But rearranging the two still works, the time you give is just taken as the local time at that time. $ date +'%F %T %Z' -d '+ 5 hours 2021-01-01 12:00:00'2021-01-01 17:00:00 CET (And for 2021-10-31 02:30:00 I get CET, even though that time also exists in CEST...) (See older revisions of this answer for more examples on how it interprets various inputs.) As per @muru's answer on another question , we can also use the --debug option to have the program actually tell us what it did. Note the second and third lines: $ date --debug +'%F %T %Z' -d '2021-04-19 12:00:00 - 1 hour'date: parsed date part: (Y-M-D) 2021-04-19date: parsed time part: 12:00:00 TZ=-01:00 date: parsed relative part: +1 hour(s) date: input timezone: -01:00 (set from parsed date/time string)date: using specified time as starting value: '12:00:00'date: starting date/time: '(Y-M-D) 2021-04-19 12:00:00 TZ=-01:00'date: '(Y-M-D) 2021-04-19 12:00:00 TZ=-01:00' = 1618837200 epoch-secondsdate: after time adjustment (+1 hours, +0 minutes, +0 seconds, +0 ns),date: new time = 1618840800 epoch-secondsdate: output timezone: +01:00 (set from TZ="Europe/Berlin" environment value)date: final: 1618840800.000000000 (epoch-seconds)date: final: (Y-M-D) 2021-04-19 14:00:00 (UTC0)date: final: (Y-M-D) 2021-04-19 16:00:00 (output timezone TZ=+01:00)2021-04-19 16:00:00 CEST The man page says: The date string format is more complex than is easily documented here [...] Which indeed seems quite apt. The more comprehensive documentation is in the info pages, or online: https://www.gnu.org/software/coreutils/manual/html_node/Date-input-formats.html | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/645791",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/467526/"
]
} |
645,847 | We are using rsync to synchronize data from two NFS servers. One NFS server is on the east coast, the other is on west coast. RTT is about 110ms. On the east coast NFS server I mount the west coasts NFS server mount point. <server>:/home/backups on /mnt/backups type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=krb5,clientaddr=x.x.x.x,local_lock=none,addr=y.y.y.y) . The data is ALREADY on both servers and just to do a validation of the data (e.g. sync folders and when nothing needs to be changes). The following is how long it takes to validate that east coast server is the same as west cost server of a 7GB folder. The follow takes about 8 minutes to complete over 7GB of data. rsync -r -vvvv --info=progress2 --size-only /<local_path>/ /<remote_path>/ The following (which avoids using NFS mount) takes about 15seconds to complete over 7GB of data (same as above). rsync -r -vvvv --info=progress2 --size-only /<local_path>/ <user>@<west_cost_NFS>:/<remote_path>/ again the above is NOT moving any data as the folders are already synchronized, its just validating the data is the same (based on size of files). I've tried using -o async on client and in /etc/exports async on the server but the client won't ever show async when I run "mount" on the client. I assume async is default. I've tried changing rsize, wsize as well to larger values, but performance doesn't get much better. Am I just SOL on getting any better performance out of NFS? | It seems to me you're trying to use rsync wrong. Rsync's protocol is designed for the exact senario of comparing / synchronising large file systems on two separate servers. It does at much as it can locally on both the local and remote machine before comparing in the middle. Its protocol is designed such that an rsync agent on one machine talks to an rsync agent on another and the protocol is designed to massively reduce the number of round trips (and total data) required to complete the task. That is rsync is designed to work: [fast] [slow SSH] [fast]File system <----> rsync <----------> rsync <----> File system Rsync is optimised for network performance between the two agents, but it has no way to control the protocol used to access the disk. So when you mount a remote NFS file system you change the profile of network access: [fast] [fast] [slow NFS]File system <----> rsync <------> rsync <---------> File system Rsync can't do anything about this because it has absolutely no control over the NFS protocol. One concrete difference here is that over NFS, every file must be individually requested. To explore a file tree containing /foo/bar/baz you have to request / [wait] then request /foo [wait] then request /foo/bar [wait] then finally request /foo/bar/baz . At 110ms latency per request that's 330ms latency and you only got one file. Rsync's protocol between agents doesn't have this limitation. The agent running on the remote machine eagerly compiles a list of every file and directory in the remote file system being synchronised and sends over everything. There's only one request for the entire file tree! See how rsync works | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/645847",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100218/"
]
} |
645,849 | I have a folder with ~300 files PD26414b.fixedheader.hs37d5.cramPD26414b.fixedheader.hs37d5.cram.craiPD26415g.fixedheader.hs37d5.cramPD26415g.fixedheader.hs37d5.cram.crai I want to replace the IDs (PD26414b,PD26415g) in the file names with their homolog names which I have saved in a text file head names.homologs.txtPD26414b SAMEA3471115PD26415g SAMEA3471120PD26433c SAMEA3471126PD26429d SAMEA3471130 so the homolog names of PD26414b is SAMEA3471115. My desired file names would be SAMEA3471115.fixedheader.hs37d5.cramSAMEA3471115.fixedheader.hs37d5.cram.craiSAMEA3471120.fixedheader.hs37d5.cramSAMEA3471120.fixedheader.hs37d5.cram.crai Is there any way to do it in Linux?I know it should be a combination of sed and mv but do not know exactly the command | It seems to me you're trying to use rsync wrong. Rsync's protocol is designed for the exact senario of comparing / synchronising large file systems on two separate servers. It does at much as it can locally on both the local and remote machine before comparing in the middle. Its protocol is designed such that an rsync agent on one machine talks to an rsync agent on another and the protocol is designed to massively reduce the number of round trips (and total data) required to complete the task. That is rsync is designed to work: [fast] [slow SSH] [fast]File system <----> rsync <----------> rsync <----> File system Rsync is optimised for network performance between the two agents, but it has no way to control the protocol used to access the disk. So when you mount a remote NFS file system you change the profile of network access: [fast] [fast] [slow NFS]File system <----> rsync <------> rsync <---------> File system Rsync can't do anything about this because it has absolutely no control over the NFS protocol. One concrete difference here is that over NFS, every file must be individually requested. To explore a file tree containing /foo/bar/baz you have to request / [wait] then request /foo [wait] then request /foo/bar [wait] then finally request /foo/bar/baz . At 110ms latency per request that's 330ms latency and you only got one file. Rsync's protocol between agents doesn't have this limitation. The agent running on the remote machine eagerly compiles a list of every file and directory in the remote file system being synchronised and sends over everything. There's only one request for the entire file tree! See how rsync works | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/645849",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/216256/"
]
} |
645,865 | I need to create a binary file that is filled only with 11111111 . I only know how to create zero-filled binary with dd if=/dev/zero bs=18520 count=1 Could you please say to me what a command in pipeline should I use to fill the bin with 1 ? How can I use awk in this case? | Probably easiest to use tr to convert the zeroes from /dev/zero to whatever you want, and then cut to length using dd or head or such. This would write 18520 bytes with all bits ones, so value 0xff or 255, or 377 in octal as the input must be: < /dev/zero tr '\000' '\377' | head -c 18520 > test.bin (To convert from hex or decimal to octal, you could use printf "%o\n" 0xff or such. ) To produce streams of longer than one-byte strings, I'd use Perl: perl -e '$a = "abcde" x 1024; print $a while 1' | head ... (Of course you could use "\xff" or "\377" there, too. I used the repetition operator x 1024 there mostly to avoid minimally small writes. Changing it may or may not affect the performance of that.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/645865",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/269307/"
]
} |
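A couple of other ways to produce the same 18520 bytes of 0xff, shown only as alternatives (GNU tools and python3 assumed):

head -c 18520 /dev/zero | tr '\000' '\377' > test.bin    # same idea, cut to length first
python3 -c 'import sys; sys.stdout.buffer.write(b"\xff" * 18520)' > test.bin
od -c test.bin | head    # quick check: every byte should show as 377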
645,914 | In order to adjust the screenpad backlight on my ASUS Zenbook, I am using a kernel module I found here. Per his instructions, to make keybind shortcuts using a simple screenpad x command to adjust the brightness, I need to add sudo chmod a+w '/sys/class/leds/asus::screenpad/brightness' to 'rc.local', as the command is required with each reboot, and needs a password every time. By running automatically I could immediately use the custom keyboard shortcuts as they'd function normally with the drivers on Windows, without needing to run the command and enter my password each boot. I'm a new Linux user, on Parrot OS. From what I've gathered, it's not recommend to use rc.local, and I should instead use either systemd, cronjob, or run it as process using the GUI startup applications menu. I'm completely lost as to go about doing this with systemd or cronjob. I tried making a file called 'screenpad-perms.sh' and put it in /usr/local/bin, with just these lines in it based on what I've read: #! /bin/bashsudo chmod a+w '/sys/class/leds/asus::screenpad/brightness' I then made it executable using chmod +x screenpad-perms.sh . Finally, I opened the GUI Autostart app and added it as a Login Script. Restarted the PC but it doesn't work, typing screenpad x gets a permissions denied error unless I manually type sudo chmod a+w '/sys/class/leds/asus::screenpad/brightness' and enter my password; so it seems to not be executing. Again apologies as I'm very new to Linux, just really hoping to get this screen working properly. What am I missing here? | If your system is using systemd, that is your best option for what you want to do. The systemd unit will already be executed as root, so sudo is not needed, and you can set it up to run during bootup without even needing anyone to be there to log in. Here's one link with information on systemd: https://www.digitalocean.com/community/tutorials/understanding-systemd-units-and-unit-files systemd unit files are more or less similar to a Microsoft *.INI file. They have [SectionHeadings] followed by Directive=Value line. Here are the steps you will need: Either load a root shell ( sudo bash ) or prefix most of the commands with sudo to run as root. Create a shell script for the systemd service unit to execute. Typically, you will put the file in /usr/local/sbin . Let's call it /usr/local/sbin/fix-backlight.sh (as root): editor /usr/local/sbin/fix-backlight.sh (Assuming editor launches your preferred editor and it creates the file if it does not exist.) In the file, put (the #! MUST be the first line of the file): #!/bin/bash chmod a+w '/sys/class/leds/asus::screenpad/brightness' Go ahead and save it and close your editor. Then make the file only read/write/executable by root (for security): chmod 0700 /usr/local/sbin/fix-backlight.sh Create the systemd unit file (usually in /etc/systemd/system , but there are other locations; the link above gives more detail): editor /etc/systemd/system/fix-backlight.service In the editor for that file, put: [Unit] Description=Fix perms for the 'screenpad x' backlight command [Service] ExecStart=/usr/local/sbin/fix-backlight.sh [Install] WantedBy=multi-user.target Save and exit the editor. Test the unit: systemctl start fix-backlight.service If all went well and from (a non-root) shell the 'screenpad x' cmmand is working, enable the unit to start on boot: systemctl enable fix-backlight.service Go ahead then and reboot, and make sure it's all working now. (And if it does not and blows up the neighbor's cat, blame the dog!) 
If needed, you can also systemctl disable fix-backlight.service to make it stop running at boot. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/645914",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/466295/"
]
} |
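A hedged alternative to the service-plus-script approach: a udev rule can apply the permission whenever the LED device appears, with no helper script at all. The SUBSYSTEM/KERNEL match values below are assumptions derived from the sysfs path in the question, so verify them with udevadm info before relying on this:

# /etc/udev/rules.d/99-screenpad-backlight.rules (one line)
ACTION=="add", SUBSYSTEM=="leds", KERNEL=="asus::screenpad", RUN+="/bin/chmod a+w /sys/class/leds/%k/brightness"
# then reload the rules:
sudo udevadm control --reload-rules && sudo udevadm trigger --subsystem-match=leds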
645,966 | My title may be a bit oddly worded, so here's my situation: I have a bunch of directory paths, e.g. /a/b/a/b/c/a/b/c/d/a/e/f/g/h/a/e/f/g/h/i/j/k/l/a/e/f/g/m/n/o/a/e/f/g/m/n/p and I want to filter out all lines that are child paths of an entry that already exists in the list, e.g. /a/b/a/e/f/g/h/a/e/f/g/m/n/o/a/e/f/g/m/n/p The directory paths are obtained from find , so they should reliably be in top-down order. Solutions for parsing as an array or multi-line string are both welcome. | A short awk solution: <infile sort -u |awk 'NR==1 || index($0, pre"/")!=1{print; pre=$0}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/645966",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143458/"
]
} |
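A demonstration run with the sample paths from the question:

printf '%s\n' /a/b /a/b/c /a/b/c/d /a/e/f/g/h /a/e/f/g/h/i/j/k/l /a/e/f/g/m/n/o /a/e/f/g/m/n/p |
  sort -u |
  awk 'NR==1 || index($0, pre"/")!=1{print; pre=$0}'
# /a/b
# /a/e/f/g/h
# /a/e/f/g/m/n/o
# /a/e/f/g/m/n/p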
646,266 | Is it possible to echo the two-character string -n using just the echo command built into the bash shell? I know I can do printf '%s\n' -n but was just wondering if the echo built-in is capable of outputting the string -n at all. | Using -e and octal 55 for the - : $ echo -e '\055n'-n ... or octal 156 for the n , or octal for both: $ echo -e '\055\0156'-n If the -e is bothering you, set the shell option xpg_echo to make echo always interpret backslash sequences (this is not usually what you want though): $ shopt -s xpg_echo$ echo '-\0156'-n The echo in bash also recognizes hexadecimal: $ echo -e '\x2d\x6e'-n | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/646266",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/440131/"
]
} |
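A one-line check that the two characters really come out as "-" and "n" (illustrative):

echo -e '\x2d\x6e' | od -c
# 0000000   -   n  \n
# 0000003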
646,532 | I'm writing a program to pipe one command to another. Inputs will be from the command line: $ ./a.out ls '|' wcc2 PID 6804c1 PID 6803PARENT PID 6802$ 2 2 17 Why does the output print after the prompt returns. Is there any way to prevent that? This is the code I've written: #include <stdio.h>#include <string.h>#include <sys/types.h>#include <sys/wait.h>#include <unistd.h>int main(int argc, char * argv[]){ if(argc <= 1 ) { printf("ERROR: No arguments passed\n"); printf("USAGE: ./pipe <command 1> | <command 2>\n"); return 1; } char * cmd1[50]; char * cmd2[50]; int cmd1_arg = 0; int cmd2_arg = 0; int pipe_num = 0; for(int cla = 1; cla<argc; cla++) { if( !strcmp(argv[cla],"|") ) pipe_num++; else if(pipe_num == 0) cmd1[cmd1_arg++] = argv[cla]; else if(pipe_num == 1) cmd2[cmd2_arg++] = argv[cla]; } cmd1[cmd1_arg] = (char *)NULL; cmd2[cmd2_arg] = (char *)NULL; if(pipe_num != 1) { printf("ERROR: Insufficient arguments passed\n"); printf("USAGE: ./pipe <command 1> | <command 2>\n"); return 1; } int pipe_fd[2]; pipe(pipe_fd); pid_t pid = fork(); if(pid == -1) { perror("FORK FAILED"); return 1; } if(pid != 0) { pid_t cmd_pid = fork(); if(cmd_pid == -1) { perror("FORK FAILED"); return 1; } if(cmd_pid != 0) { waitpid(pid,NULL,0); waitpid(cmd_pid,NULL,WNOHANG); printf("PARENT PID %d\n",getpid()); } if(cmd_pid == 0) { printf("c2 PID %d\n",getpid()); close(pipe_fd[1]); int stdin_fd = dup(0); close(0); dup(pipe_fd[0]); if(execvp(cmd2[0],cmd2) == -1 ) perror("CMD2 FAIL"); close(0); dup(stdin_fd); } } if(pid == 0) { printf("c1 PID %d\n",getpid()); close(pipe_fd[0]); int stdout_fd = dup(1); close(1); int test = dup(pipe_fd[1]); if( execvp(cmd1[0],cmd1) == -1 ) perror("CMD1 FAIL"); close(1); dup(stdout_fd); } return 0;} | You have: waitpid(cmd_pid,NULL,WNOHANG); By including the WNOHANG option, you're telling waitpid() to not wait for the process to terminate if it hasn't already terminated. My guess is that you added that because your program hangs if you don't include it. That's because the original parent still has an open file descriptor to the write-end of the pipe, so the reading process is still blocked waiting for input on that pipe. Here's a revised version of your program that closes the pipe file descriptors, and that does not use WNOHANG . 
#include <stdio.h>#include <string.h>#include <sys/types.h>#include <sys/wait.h>#include <unistd.h>int main(int argc, char *argv[]){ if (argc <= 1) { printf("ERROR: No arguments passed\n"); printf("USAGE: ./pipe <command 1> | <command 2>\n"); return 1; } char *cmd1[50]; char *cmd2[50]; int cmd1_arg = 0; int cmd2_arg = 0; int pipe_num = 0; for (int cla = 1; cla < argc; cla++) { if (!strcmp(argv[cla], "|")) { pipe_num++; } else if (pipe_num == 0) { cmd1[cmd1_arg++] = argv[cla]; } else if (pipe_num == 1) { cmd2[cmd2_arg++] = argv[cla]; } } cmd1[cmd1_arg] = NULL; cmd2[cmd2_arg] = NULL; if (pipe_num != 1) { printf("ERROR: Insufficient arguments passed\n"); printf("USAGE: ./pipe <command 1> | <command 2>\n"); return 1; } int pipe_fd[2]; if (pipe(pipe_fd) < 0) { perror("pipe"); return 1; } const pid_t pid = fork(); if (pid < 0) { perror("fork"); return 1; } else if (pid != 0) { const pid_t cmd_pid = fork(); if (cmd_pid < 0) { perror("fork"); return 1; } else if (cmd_pid != 0) { printf("PARENT PID %d\n", getpid()); close(pipe_fd[0]); close(pipe_fd[1]); if (waitpid(pid, NULL, 0) < 0) { perror("waitpid"); } if (waitpid(cmd_pid, NULL, 0) < 0) { perror("waitpid"); } } else { printf("c2 PID %d\n", getpid()); if (dup2(pipe_fd[0], STDIN_FILENO) < 0) { perror("dup2"); return 1; } close(pipe_fd[0]); close(pipe_fd[1]); if (execvp(cmd2[0], cmd2) < 0) { perror("CMD2 FAIL"); return 1; } } } else { printf("c1 PID %d\n", getpid()); if (dup2(pipe_fd[1], STDOUT_FILENO) < 0) { perror("dup2"); return 1; } close(pipe_fd[0]); close(pipe_fd[1]); if (execvp(cmd1[0], cmd1) < 0) { perror("CMD1 FAIL"); return 1; } } return 0;} A run of this program gives me: ./a.out echo 1 2 3 '|' wcPARENT PID 20412c1 PID 20413c2 PID 20414 1 3 6$ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/646532",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/468391/"
]
} |
646,590 | I'm running Arch Linux, and use ext4 filesystems. When I run ls in a directory that is actually small now, but used to be huge - it hangs for a while. But the next time I run it, it's almost instantaneous. I tried doing: strace ls but I honestly don't know how to debug the output. I can post it if necessary, though it's more than a 100 lines long. And, no, I'm not using any aliases. $ type lsls is hashed (/usr/bin/ls)$ df .Filesystem 1K-blocks Used Available Use% Mounted on/dev/sda9 209460908 60427980 138323220 31% /home | A directory that used to be huge may still have a lot of blocks allocated for directory entries (= names and inode numbers of files and sub-directories in that directory), although almost all of them are now marked as deleted. When a new directory is created, only a minimum number of spaces are allocated for directory entries. As more and more files are added, new blocks are allocated to hold directory entries as needed. But when files are deleted, the ext4 filesystem does not consolidate the directory entries and release the now-unnecessary directory metadata blocks, as the assumption is that they might be needed again soon enough. You might have to unmount the filesystem and run a e2fsck -C0 -f -D /dev/sda9 on it to optimize the directories, to get the extra directory metadata blocks deallocated and the existing directory entries consolidated to a smaller space. Since it's your /home filesystem, you might be able to do it by making sure all regular user accounts are logged out, then logging in locally as root (typically on the text console). If umount /home in that situation reports that the filesystem is busy, you can use fuser -m /dev/sda9 to identify the processes blocking you from unmounting /home . If they are remnants of old user sessions, you can probably just kill them; but if they belong to services, you might want to stop those services in a controlled manner. The other classic way to do this sort of major maintenance to /home would be to boot the system into single-user/emergency mode. On distributions using systemd , the boot option systemd.unit=emergency.target should do it. And as others have mentioned, there is an even simpler solution, if preserving the timestamps of the directory is not important , and the problem directory is not the root directory of the filesystem it's in: create a new directory alongside the "bloated" one, move all files to the new directory, remove the old directory, and rename the new directory to have the same name as the old one did. For example, if /directory/A is the one with the problem: mkdir /directory/Bmv /directory/A/* /directory/B/ # regular files and sub-directoriesmv /directory/A/.??* /directory/B/ # hidden files/dirs toormdir /directory/Amv /directory/B /directory/A Of course, if the directory is being used by any services, it would be a good idea to stop those services first. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/646590",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/414347/"
]
} |
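A quick way to spot the symptom described above: the directory file itself stays large after its entries are deleted (illustrative commands, not from the original answer):

ls -ld /path/to/that/directory          # a once-huge directory can show megabytes in the size column
mkdir ./freshdir && ls -ld ./freshdir   # a newly created ext4 directory starts at 4096 bytes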
646,715 | My wrong command : find . -type f -name '*2019*' -exec mv {} ./backup_2019 \; result : mv: ‘./backup_2019/2019-A.txt’ and ‘backup_2019/2019-A.txt’ are the same filemv: ‘./backup_2019/2019-B.txt’ and ‘backup_2019/2019-B.txt’ are the same filemv: ‘./backup_2019/2019-C.txt’ and ‘backup_2019/2019-C.txt’ are the same file I think it finds the file that it previously moved. Can I solve this without using maxdepth? Is there a way to exclude only the target directory? | You will need to ignore the backup directory so that find does not enter into it. There is already an answer showing how to do this. However, you may run the risk of deleting data if you back up files in this way. If two or more files, in different subdirectories, have the same names, they would over-write each other on the destination, in the backup directory. It would be better to use some real backup software to back up the data, such as restic . If that is not possible, use a solution that preserves the relative path to the files that you are backing up. The following command uses rsync to copy (not move) all files that have names containing the substring 2019 into the directory backup_2019 : rsync --itemize-changes --archive --prune-empty-dirs \ --exclude='/backup_2019/***' --include='*/' --include='*2019*' --exclude='*' \ ./ ./backup_2019 This would avoid looking inside ./backup_2019 for files or directories to transfer, but would otherwise copy all things that contains the substring 2019 . Directories on the target that end up empty are removed. Everything that is copied is copied into a location under backup_2019 that is the same as the file's location under the current directory: Example: $ tree -F.|-- dir1/| |-- file-1| |-- file-2019-A| `-- subdir/| |-- file-2| `-- file-2019-B|-- dir2/| |-- file-1| |-- file-2019-A| `-- subdir/| |-- file-2| `-- file-2019-B`-- dir3/ |-- file-1 |-- file-2019-A `-- subdir/ |-- file-2 `-- file-2019-B $ rsync --itemize-changes --archive \ --prune-empty-dirs \ --exclude='/backup_2019/***' --include='*/' --include='*2019*' --exclude='*' \ ./ ./backup_2019cd+++++++++ ./cd+++++++++ dir1/>f+++++++++ dir1/file-2019-Acd+++++++++ dir1/subdir/>f+++++++++ dir1/subdir/file-2019-Bcd+++++++++ dir2/>f+++++++++ dir2/file-2019-Acd+++++++++ dir2/subdir/>f+++++++++ dir2/subdir/file-2019-Bcd+++++++++ dir3/>f+++++++++ dir3/file-2019-Acd+++++++++ dir3/subdir/>f+++++++++ dir3/subdir/file-2019-B $ tree -F.|-- backup_2019/| |-- dir1/| | |-- file-2019-A| | `-- subdir/| | `-- file-2019-B| |-- dir2/| | |-- file-2019-A| | `-- subdir/| | `-- file-2019-B| `-- dir3/| |-- file-2019-A| `-- subdir/| `-- file-2019-B|-- dir1/| |-- file-1| |-- file-2019-A| `-- subdir/| |-- file-2| `-- file-2019-B|-- dir2/| |-- file-1| |-- file-2019-A| `-- subdir/| |-- file-2| `-- file-2019-B`-- dir3/ |-- file-1 |-- file-2019-A `-- subdir/ |-- file-2 `-- file-2019-B13 directories, 18 files You may add --remove-source-files to the list of rsync options to perform a "move" rather than "copy" of the files that you back up. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/646715",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/275107/"
]
} |
646,722 | I have downloaded Raspberry Pi OS Lite and "burned" it onto the flash card and it's in my RPI (v1). I have also put a FLAC (or OGG, or MP3) sound file onto it. It's not going to have any keyboard/mouse/monitor/network access. Its sole purpose is to perpetually loop the same sound file (10 hours recorded rain), outputting it to the loudspeakers attached to the RPI, as soon as it gets power. If I cut the power at any point, I need it to start back up again the next time I plug it in, and not require me to do any kind of "fiddling about" because it was "unexpectedly shut down" or anything like that. It's a poor man's "white noise generator" to help me sleep with noisy neighbours. Since I have the RPI and the loudspeakers, I thought this would be more than doable, and almost insultingly "low-tech" for such a capable electronic device. What exact steps do I need to take to make it so that it does this? I assume that I have to make some kind of edit on the flash card to make it not ask for username/password on boot, and another edit to make it actually play the sound file (and loop it) when it has started up? | You will need to ignore the backup directory so that find does not enter into it. There is already an answer showing how to do this. However, you may run the risk of deleting data if you back up files in this way. If two or more files, in different subdirectories, have the same names, they would over-write each other on the destination, in the backup directory. It would be better to use some real backup software to back up the data, such as restic . If that is not possible, use a solution that preserves the relative path to the files that you are backing up. The following command uses rsync to copy (not move) all files that have names containing the substring 2019 into the directory backup_2019 : rsync --itemize-changes --archive --prune-empty-dirs \ --exclude='/backup_2019/***' --include='*/' --include='*2019*' --exclude='*' \ ./ ./backup_2019 This would avoid looking inside ./backup_2019 for files or directories to transfer, but would otherwise copy all things that contains the substring 2019 . Directories on the target that end up empty are removed. 
Everything that is copied is copied into a location under backup_2019 that is the same as the file's location under the current directory: Example: $ tree -F.|-- dir1/| |-- file-1| |-- file-2019-A| `-- subdir/| |-- file-2| `-- file-2019-B|-- dir2/| |-- file-1| |-- file-2019-A| `-- subdir/| |-- file-2| `-- file-2019-B`-- dir3/ |-- file-1 |-- file-2019-A `-- subdir/ |-- file-2 `-- file-2019-B $ rsync --itemize-changes --archive \ --prune-empty-dirs \ --exclude='/backup_2019/***' --include='*/' --include='*2019*' --exclude='*' \ ./ ./backup_2019cd+++++++++ ./cd+++++++++ dir1/>f+++++++++ dir1/file-2019-Acd+++++++++ dir1/subdir/>f+++++++++ dir1/subdir/file-2019-Bcd+++++++++ dir2/>f+++++++++ dir2/file-2019-Acd+++++++++ dir2/subdir/>f+++++++++ dir2/subdir/file-2019-Bcd+++++++++ dir3/>f+++++++++ dir3/file-2019-Acd+++++++++ dir3/subdir/>f+++++++++ dir3/subdir/file-2019-B $ tree -F.|-- backup_2019/| |-- dir1/| | |-- file-2019-A| | `-- subdir/| | `-- file-2019-B| |-- dir2/| | |-- file-2019-A| | `-- subdir/| | `-- file-2019-B| `-- dir3/| |-- file-2019-A| `-- subdir/| `-- file-2019-B|-- dir1/| |-- file-1| |-- file-2019-A| `-- subdir/| |-- file-2| `-- file-2019-B|-- dir2/| |-- file-1| |-- file-2019-A| `-- subdir/| |-- file-2| `-- file-2019-B`-- dir3/ |-- file-1 |-- file-2019-A `-- subdir/ |-- file-2 `-- file-2019-B13 directories, 18 files You may add --remove-source-files to the list of rsync options to perform a "move" rather than "copy" of the files that you back up. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/646722",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/468588/"
]
} |
646,724 | I've been carefully reading the linux man page for clone(), and I understand the difference between the clone() wrapper and the "raw" system call. But what I don't understand is why the parent process needs to allocate a stack for the child, even if CLONE_VM is not used in the wrapper. Does the wrapper simply ignore the stack argument if CLONE_VM is not used? Why require it at all then? The raw system call allows it to be null which makes sense, but I don't understand why the wrapper requires this. Will the wrapper make the child and parent share memory even if you don't tell it to? | The required stack argument goes hand-in-hand with the fn argument . The raw kernel syscall doesn’t always need a stack because it behaves like fork : execution in the child starts at the return of the system call. The libc wrapper then needs to set things up to call fn , and to do so, it needs the stack (and has always done so ). As a result, a stack is always required when calling the wrapper, to pass information across the clone system call to the code which calls the fn function ( thread_start in the glibc code). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/646724",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/333595/"
]
} |
646,735 | In QEMU 5.1 zstd compression of your qcow2 files was introduced.But it's not described in the manual for qemu-img.How do you enable it? | The required stack argument goes hand-in-hand with the fn argument . The raw kernel syscall doesn’t always need a stack because it behaves like fork : execution in the child starts at the return of the system call. The libc wrapper then needs to set things up to call fn , and to do so, it needs the stack (and has always done so ). As a result, a stack is always required when calling the wrapper, to pass information across the clone system call to the code which calls the fn function ( thread_start in the glibc code). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/646735",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/179937/"
]
} |
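For the qemu-img question above, a hedged sketch; the compression_type option is assumed here to be the QEMU 5.1+ way of selecting zstd:

    # create a new qcow2 image whose compressed clusters will use zstd
    qemu-img create -f qcow2 -o compression_type=zstd new.qcow2 20G
    # convert an existing image, writing it back compressed with zstd
    qemu-img convert -O qcow2 -c -o compression_type=zstd old.qcow2 compressed.qcow2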
646,741 | I have a tab file from, which I am extracting the column containing the row numbers I need to extract lines from another file. I got the line numbers with cut -f and I stored them into a variable list . I tried to use sed with the following: $ list="2 5 7 10"$ echo $list2 5 7 10$ sed -n "$list p" longText.txt sed: -e expression #1, char 3: unknown command: `5'$ sed -n "${list}p" longText.txt sed: -e expression #1, char 3: unknown command: `5' What is the error? What is the correct syntax? | The required stack argument goes hand-in-hand with the fn argument . The raw kernel syscall doesn’t always need a stack because it behaves like fork : execution in the child starts at the return of the system call. The libc wrapper then needs to set things up to call fn , and to do so, it needs the stack (and has always done so ). As a result, a stack is always required when calling the wrapper, to pass information across the clone system call to the code which calls the fn function ( thread_start in the glibc code). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/646741",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/277882/"
]
} |
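For the sed question above, one hedged sketch that turns the list into explicit print commands (it relies on word splitting of the unquoted $list); the awk alternative keeps the numbers in an array:

    list="2 5 7 10"
    sed -n "$(printf '%dp;' $list)" longText.txt    # expands to: sed -n '2p;5p;7p;10p;'
    awk -v nums="$list" 'BEGIN{split(nums,a," "); for(i in a) want[a[i]]} NR in want' longText.txt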
646,826 | I have a wget process that I am unable to kill.This question is similar as one asked before , but here the D in the STAT column seems to indicate that it is in uninterruptible sleep (usually IO) , while in the other question the process was in state R . $ ps -axuf | grep `id -un`USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND[...]biogeek 2833351 0.0 0.0 0 0 ? D Apr12 0:03 [wget][...] Trying to kill it doesn't produce any output $ kill -9 2833351 and when I run ps -axuf again, the wget process is still there. How do I figure out which software/hardware fault caused this issue? | Since the process has received a SIGKILL, it will die when it returns from its current system call. Furthermore the kernel will make the process return as soon as it gets into a state where it can safely abort the system call. A process only remains in uninterruptible sleep (state D ) for a long time if something unusual is happening inside the kernel. For more information about unkillable processes, see What if 'kill -9' does not work? One way to investigate what the process is doing is to run a diagnostic tool such as strace or dtrace or other similar tools, depending on your unix flavor. This will tell you what system call the process is making and with what arguments. For example, you might see something like this: strace -p2833351strace: Process 2833351 attachedread(3, This tells you that the process is currently reading from file descriptor 3. The next step would be to find out what's on this file descriptor, for example with lsof -p2833351 or with ls -l /proc/2833351/fd/3 . This could point to the origin of the problem, for example a non-responding NFS server or a buggy disk controller that left the filesystem driver in an unexpected state. You may also find clues in the system logs. The clues may be difficult to find because this is unusual behavior that can be caused by very different things that would have very different telltale signs. It could be a kernel bug directly related to what the process is doing, an unrelated kernel bug that corrupted some memory, defective RAM that corrupted some memory, a defective peripheral such as a disk drive that isn't responding when it should, etc. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/646826",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8221/"
]
} |
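A short sketch of the /proc checks that complement the strace and lsof steps described above (PID from the question; reading /proc/PID/stack usually requires root):

    cat /proc/2833351/wchan; echo       # kernel function the process is currently sleeping in
    sudo cat /proc/2833351/stack        # kernel stack trace, when available
    ls -l /proc/2833351/fd              # open files and sockets, same idea as lsof -p
    dmesg | tail -n 50                  # look for I/O errors or hung-task warnings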
647,179 | I'm puzzled by bash (and dash) behavior when -e option is set. Simple example: #!/bin/bash -efunc() { false && true}false && trueecho "1"funcecho "2" outputs: 1 expected output: 12 While first occurrence works as expected, second occurrence (inside function) leads to immediate exit. I searched documentation but was unable to find explanation to such different behavior.Is there any rationale behind this, or is this bug? I tested this in bash and dash with same results. According to bash manpage: -e Exit immediately if a pipeline (which may consist of a single simple command), a list, or a compound command (see SHELL GRAMMAR above), exits with a non-zero status. The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test following the if or elif reserved words, part of any command executed in a && or || list except the command following the final && or || , any command in a pipeline but the last, or if the command's return value is being inverted with ! . If a compound command other than a subshell returns a non-zero status because a command failed while -e was being ignored, the shell does not exit. A trap on ERR , if set, is executed before the shell exits. This option applies to the shell environment and each subshell environment separately (see COMMAND EXECUTION ENVIRONMENT above), and may cause subshells to exit before executing all the commands in the subshell. If a compound command or shell function executes in a context where -e is being ignored, none of the commands executed within the compound command or function body will be affected by the -e setting, even if -e is set and a command returns a failure status. If a compound command or shell function sets -e while executing in a context where -e is ignored, that setting will not have any effect until the compound command or the command containing the function call completes. | The script terminates upon returning from func , since its exit status is non-zero. The script is not terminating inside func . The false && true list is unaffected by -e , and the script does not terminate from it, not in the main part of the script and not in the function. However, the false in the function sets the function's exit status to non-zero, so when the function returns, the shell terminates. Your script may be simplified into #!/bin/bash -efalse && trueecho "1"falseecho "2" You may also want to test returning zero from your function to convince yourself that the false && true list in the function is not terminating the script: #!/bin/bash -efunc() { false && true return 0}false && trueecho "1"funcecho "2" Running this outputs 12 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/647179",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/469107/"
]
} |
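As a follow-up sketch in the same spirit as the examples above, handling the function's status at the call site also keeps set -e from terminating the script:

    #!/bin/bash -e
    func() { false && true; }
    echo "1"
    if func; then :; fi                      # -e is ignored while func runs as an if condition
    func || echo "func returned non-zero"    # the || list absorbs the non-zero status
    echo "2"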
647,383 | I'm checking if file present with find command like following - find ${pwd} | grep 'Test.*zip' This command returns output with relative path like - ./ReleaseKit/Installable/Test-5.2.0.11.zip Is there a way to get absolute path of found file using find command? | The problem with your find ${pwd} | grep 'Test.*zip' is that you don't have a variable called pwd . So this is the same as find | grep 'Test.*zip' . You want to give the current directory as the starting point. Either use $(pwd) or $PWD instead of ${pwd} . $(pwd) runs the pwd program whilst $PWD uses the variable that bash and other POSIX shells maintain to give the current directory. Not all shells are POSIX. You should also quote the variable or the command substitution to defend against unusual characters in the directory path, so you end up with find "$PWD" | grep 'Test.*zip' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/647383",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/405069/"
]
} |
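A small sketch building on the answer above that lets find do the name matching itself, with the recommended quoting (pattern mirrors the question's):

    find "$PWD" -type f -name 'Test*.zip'
    # with GNU find, -regex is matched against the whole path:
    find "$PWD" -type f -regex '.*Test.*zip.*'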
647,409 | I am trying to use substr to split a datetime column, the fifth one (previous_test) into three different ones at the end. Input: id,tester,company,chief,previous_test,test,date,result,cost6582983b-61d4-4371-912d-bbc76bb8208b,Audrey Feest,Pagac-Gorczany,Claudine Moakson,18/02/2019,Passwords,20/05/2020,none,£11897.96 Expected Output: id,tester,company,chief,previous_test,test,date,result,cost,day,month,year6582983b-61d4-4371-912d-bbc76bb8208b,Audrey Feest,Pagac-Gorczany,Claudine Moakson,18/02/2019,Passwords,20/05/2020,none,£11897.96,18,02,2019 I've tried using: awk -F, -v OFS="," '{s = substr($5, 1, 2)} {g = substr($5, 4, 2)} {l = substr($5, 7, 4)} {print s, g, l}' file.csv And all I get is only the date separated by commas, but not as three additional columns appended to the existing columns. I am missing how to append the output into three separate columns. | Your code prints only the substring values that are intended for the new columns, not the existing columns. You need a special handling for the first line. awk -F, -v OFS="," 'NR==1 { print $0,"day,month,year"; next }{ s = substr($5, 1, 2); g = substr($5, 4, 2); l = substr($5, 7, 4); print $0, s, g, l}' file.csv prints id,tester,company,chief,previous_test,test,date,result,cost,day,month,year6582983b-61d4-4371-912d-bbc76bb8208b,Audrey Feest,Pagac-Gorczany,Claudine Moakson,18/02/2019,Passwords,20/05/2020,none,£11897.96,18,02,2019 Explanation: The condition NR==1 is valid for the first record/line. $0 is the whole input record/line The next command jumps to the next record/line and skips all remaining commands for the current record/line. This means the other commands will be executed for all records/lines except the first one. Edit: As suggested in a comment by Olivier Dulac , the splitting of the date string can be simplified with the split function. awk -F, -v OFS="," 'NR==1 { print $0,"day,month,year"; next }{ split($5,a,"/"); print $0, a[1], a[2], a[3] }' file.csv | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/647409",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/469363/"
]
} |
647,540 | I have written this shell script to test sha-516 hash password string : myhash='$6$nxIRLUXhRQlj$t29nGt1moX3KcuFZmRwUjdiS9pcLWpqKhAY0Y0bp2pqs3fPrnVAXKKbLfyZcvkkcwcbr2Abc8sBZBXI9UaguU.' #Which is created by mkpasswd for testi=0while [[ 1 -eq 1 ]]do testpass=$(mkpasswd -m sha-512 "test") i=$[ $i + 1 ] if [[ "$testpass" == "$myhash" ]]; then echo -e "found\n" break else echo -e "$myhash /= $testpass :-> $i Testing....\n" fidone After running 216107 numbers loop test I never found match.But in case of my linux OS(Ubuntu) system make so quickly match sign in credentials.My question is Why do I not get the same so quickly? | A password hash (like what you put in myhash ) contains some metadata indicating which hash function is used and with what parameters such as cost, a salt, and the output of the hash function. In the modern Unix password hash format, the parts are separated by $ : 6 indicates the Unix iterated SHA-512 method (a design that is similar, but not identical, to PBKDF2 ). There are no parameters, so the cost factor is the default value. nxIRLUXhRQlj is the salt. t29nGt1moX3KcuFZmRwUjdiS9pcLWpqKhAY0Y0bp2pqs3fPrnVAXKKbLfyZcvkkcwcbr2Abc8sBZBXI9UaguU. is the expected output. Each time you run mkpasswd -m sha-512 , it creates a password hash with a random salt. So each run of this command produces a different output. When you type your password, the system calculates iterated_sha512(default_cost, "nxIRLUXhRQlj", typed_password) and checks whether the output is "t29nGt1moX3KcuFZmRwUjdiS9pcLWpqKhAY0Y0bp2pqs3fPrnVAXKKbLfyZcvkkcwcbr2Abc8sBZBXI9UaguU." . What your program is doing is different: you generate a random salt, then compare it plus the output of the password hashing function to myhash . This only matches if you've happened to generate the same salt, which has a negligible probability (your computer isn't going to generate the same salt twice in your lifetime). But you don't need to guess the salt: it's right there in the password hash. Recommended reading: How to securely hash passwords? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/647540",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/221660/"
]
} |
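A hedged sketch of the point made above, reusing the salt stored in the hash so the comparison is meaningful (the -S/--salt option is assumed to be the one provided by the whois-package mkpasswd):

    myhash='$6$nxIRLUXhRQlj$t29nGt1moX3KcuFZmRwUjdiS9pcLWpqKhAY0Y0bp2pqs3fPrnVAXKKbLfyZcvkkcwcbr2Abc8sBZBXI9UaguU.'
    salt=$(printf '%s' "$myhash" | cut -d'$' -f3)       # -> nxIRLUXhRQlj
    testpass=$(mkpasswd -m sha-512 -S "$salt" "test")
    [ "$testpass" = "$myhash" ] && echo found || echo "no match"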
647,551 | I understand that "everything is a file" is not entirely true, but as far as I know, every process gets a directory in /proc with lots of files. Read/write operations are often great bottlenecks in speed, and having to read/write from/to files all the time can significantly slow down processing. Does having to keep a bunch of files in /proc slow things down? If it doesn't, how is having to do a lot of I/O operations not a huge design flaw in Linux? | Files in /proc and /sys exist purely dynamically, i.e. when nothing is reading them, they aren't there at all and the kernel spends no time generating them. You could think of /proc and /sys files as API calls. If you don't execute them, the kernel doesn't run any code. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/647551",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/370354/"
]
} |
647,554 | This should be simple but I am missing something, need some help.My requirement is to read the log file via tail to get latest logs, grep Download Config & Copying all files of and write it in MyOwnLogFile.log but I want this to stop as soon as .myworkisdone file appears in /usr/local/FOLDER/One thing is sure that .myworkisdone will be generated at the last when all logs are done… but the script just continues to read the log file and never comes out of it, even if the file is created. while [[ ! -e /usr/local/FOLDER/.myworkisdone ]];do sudo tail -f -n0 /var/log/server22.log | while read line; do echo "$line" | grep -e 'Downloading Config’ -e ‘Copying all files of' ; done >> /var/tmp/MyOwnLogFile.logdone I also tried until instead of while to check the file but still the script cant break the spell of reading the log file.Thank you in advance. | Files in /proc and /sys exist purely dynamically, i.e. when nothing is reading them, they aren't there at all and the kernel spends no time generating them. You could think of /proc and /sys files as API calls. If you don't execute them, the kernel doesn't run any code | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/647554",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/205940/"
]
} |
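For the tail question above, one hedged sketch that ends the follow once the sentinel file appears, by backgrounding the pipeline and killing it from the watching loop:

    sudo tail -f -n 0 /var/log/server22.log \
      | grep --line-buffered -e 'Downloading Config' -e 'Copying all files of' \
      >> /var/tmp/MyOwnLogFile.log &
    watcher=$!                                    # PID of the last command in the pipeline (grep)
    until [ -e /usr/local/FOLDER/.myworkisdone ]; do sleep 2; done
    kill "$watcher"                               # tail then exits on its next write (SIGPIPE)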
647,567 | I've got a folder on a non reflink -capable file system (ext4) which I know contains many files with identical blocks in them. I'd like to move/copy that directory to an XFS file system whilst simultaneously deduplicating them. (I.e. if a block of a copied file is already present in a different file, I'd like to not actually copy it, but to make a second block ref point to that in the new file.) One option would of course be first copying over all files to the XFS filesystem, running duperemove on them there, and thus removing the duplicates after the fact. Small problem: this might get time-intense, as the target filesystem isn't as quick on random accesses. Therefore, I'd prefer if the process that copies over the files already takes care of telling the kernel that, hey, that block is a duplicate of that other block that's already there. Is such a thing possible? | Files in /proc and /sys exist purely dynamically, i.e. when nothing is reading them, they aren't there at all and the kernel spends no time generating them. You could think of /proc and /sys files as API calls. If you don't execute them, the kernel doesn't run any code | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/647567",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106650/"
]
} |
647,584 | Is there a command-line tool to convert/format time? (I mean time, not date!) Here is what I was expecting: $ timeconverter --format '%s' 1h3600$ timeconverter --format '%h:%m:%s' 3665s1:01:05$ timeconverter --format '%H:%m:%s' 3665s01:01:05$ timeconverter --format '%Hh%Mm%Ss' 3966s1h5m6s$ timeconverter --format '%m' 2d2880$ timeconverter --format '%d' 1w7$ timeconverter --format '%y' 365d1$ timeconverter --format '%s' 1h10m4200 Where %m is minute, %d is day, %y is year, %h is hour, %s is second, %w is week, and so on. I myself could write a shell script for it, but it is very likely that someone else already did that, and I want to save my time. | You have the GNU software units (available as a package units in many Linux distributions). Here is simple run to find out the number of days in 2 weeks: $ units "2 week" day* 14/ 0.071428571 The manpage explains in full detail the input, output, exit code, and other ways to use the command. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/647584",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/462354/"
]
} |
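For the hour:minute:second formats shown in the question above, a small extra sketch with GNU date; it formats an epoch offset, so it is only valid for durations under 24 hours:

    date -u -d @3665 +%H:%M:%S     # 01:01:05
    date -u -d @3966 +%Hh%Mm%Ss    # 01h06m06s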
647,628 | I want to make a function that would run something like this: youtube-dl -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best' "https://www.youtube.com/watch?v=_OBlgSz8sSM" Currently, in .zshrc, I've been trying downloadyoutube(){ youtube-dl -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best' "$1" But it still wants me to add the quotes around the URL when I call it. Instead, I would like to run downloadyoutube https://www.youtube.com/watch?v=_OBlgSz8sSM without quotes. And not downloadyoutube "https://www.youtube.com/watch?v=_OBlgSz8sSM" Is there a way to do this? | You have the GNU software units (available as a package units in many Linux distributions). Here is simple run to find out the number of days in 2 weeks: $ units "2 week" day* 14/ 0.071428571 The manpage explains in full detail the input, output, exit code, and other ways to use the command. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/647628",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/469583/"
]
} |
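For the zsh question above, a hedged sketch of the usual workaround: tell zsh not to glob the function's arguments, so the ? in the URL needs no quotes:

    downloadyoutube() {
      youtube-dl -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best' "$1"
    }
    alias downloadyoutube='noglob downloadyoutube'
    # now: downloadyoutube https://www.youtube.com/watch?v=_OBlgSz8sSM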
647,648 | I have a bunch of text files in the following format: Lorem ipsum dolor sit amet,consetetur sadipscing elitr,sed diam nonumy eirmod temporinvidunt ut labore et doloremagna aliquyam erat, sed diamvoluptua. - At vero eos et accu-sam et justo duo dolores et earebum. - Stet clita kasd guber-gren, no sea takimata sanctusest Lorem ipsum dolor sit amet. How can I print this as continuous text on the command line, but with removing the syllable division on the line ends: Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua. - At vero eos et accusam et justo duo dolores et ea rebum. - Stet clita kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet. I could use tr '\n' ' ' to convert the new-lines into spaces The problem is tr can only replace one character and I would need some command to remove the -\n in advance. How can I achieve this on the bash comman-line? | Using awk : awk -F'-$' '{ printf "%s", sep $1; sep=/-$/?"":OFS } END{ print "" }' infile with the -F'-$' , we defined the F ield S eparator to single hyphen at the end of line, so with this and by taking the first field $1 , we will always have the line without that hyphen for those line having this hyphen or still entire line for those not having that hyphen. then we do simply printing it with a sep in between but that changes when reading the next line to empty-string if current line was ending with a hyphen otherwise to OFS ( O utput F ield S eparator , default is Space character). at the END{...} block we are adding a final newline character to make it a POSIX text file , if you don't want that to be added, just remove that part. Using sed , alternatively: sed ':loop /-$/N;s/-\n//;t loop; N;s/\n/ /;t loop' infile :loop if a line ended with a hyphen (testing with /-$/ ), do read the N ext line and replace the "hyphen+ \n ewline" with an empty string. if substitution was successful (testing with t ), then jump to the label loop and process the next line and skip executing rest of the code. else, read the N ext line and replace the embedded \n ewline in between those two lines with a space character. if substitution here was also successful, then jump to the label loop and process the next line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/647648",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240990/"
]
} |
647,677 | I had a csv file of data such like that when read into shell: name,income,reward,paymentJackson,10000,2000,1000Paul,2500,700,200Louis,5000,100,1800 and I want to find the net earning for each person, use formula: "net = income+reward-payment". when I used command to do this, it only calculate the first row of data. $ cat data.csv | awk -F ',' '{for (i=1;i<=NF;i++) net[i] = $2+$3-$4} END {for (p in total) print p, "net = ", net[p]}' > result.txt How can I do the calculation here? By the way, the names are not unique, so I try (for loop) to create index for the array [net]. My expected output is: 1 Jackson net = 110002 Paul net = 30003 Louis net = 3300 | $ awk -F, -v OFS=, 'NR>1 { print $1, $2+$3-$4 }' data.csv Jackson,11000Paul,3000Louis,3300 Or if you want the net appended to the existing data, along with the (updated) header line: $ awk -F, -v OFS=, 'NR==1 {print $0,"net"}; NR>1 {print $0, $2+$3-$4}' data.csv name,income,reward,payment,netJackson,10000,2000,1000,11000Paul,2500,700,200,3000Louis,5000,100,1800,3300 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/647677",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/469640/"
]
} |
647,815 | I have a large file containing two fields the first one representing an object name and the second field represents the size of that object: A 1A 2B 4ABC 12C 5A 9B 3ABC 6 I would like to summarize the list with the following format: A 1,2,9ABC 12,6B 4,3C 5 the solution I came up with is to create a unique list of the objects present in the file, iterate through it and match it with the original file for object in $(awk '{print $1}' objects_with_sizes.txt | sort -u);do echo -n "$object " awk -v pattern="$object" '$1==pattern{printf "%s%s" ,sep,$2;sep=","} END{print ""}' objects_with_sizes.txt done This implementation takes a long time to run, is there a more efficient way of creating the desired output? | $ awk '{ object[$1]= (object[$1]==""?"":object[$1] ",") $2 } END { for(obj in object) print obj, object[obj] }' infileA 1,2,9B 4,3C 5ABC 12,6 A bit more efficient (use of the memory; matter for a huge file that cannot fit into the memory), i.e not buffering the file partially into the memory that awk command alone does above but only until a object key changes: $ <infile sort -k1,1 -k2,2n |\ awk 'pre!=$1 { if(obj) { print obj; obj="" } } { obj= (obj==""?$1 " ":obj ",") $2; pre=$1 } END{ if(obj) print obj }'A 1,2,9ABC 6,12B 3,4C 5 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/647815",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/453131/"
]
} |
647,843 | Consider this sample C program which writes to /dev/tty and doesn't have command line options to make it not do so. #include <stdio.h>int main (void) { FILE* fout = fopen("/dev/tty", "w"); fprintf(fout, "Hello, World!\n"); fclose(fout);} How could I redirect the output of it to /dev/null in a shell script? P.S. I read this answer , but I didn't understand much. In any case, I'm expecting an answer that doesn't modify the code source of the program. | TL,DR: script -c myprogram /dev/null </dev/null >/dev/null You can't “redirect” /dev/tty in the same sense that you can redirect standard output. Standard output is defined as a file descriptor. Programs write to whatever file is already open on file descriptor 1 when they start. Some operating systems offer /dev/stdout as a file that's equivalent to standard output, but it's an “alias” for standard output. In contrast, /dev/tty is a file name, which refers to the process's controlling terminal. If a program opens /dev/tty , explicitly it opens /dev/tty , and that can't be redirected. What you can do is run the program with a controlling terminal that isn't the same as the controlling terminal of the program that runs it. A simple way to do this is with the script command . In its simplest form: script -c myprogram /dev/null >/dev/null When myprogram runs and opens /dev/tty , this is a terminal provided by script , not the terminal in which script runs. What script does when it detects a write on the terminal is to both write to its own standard output and write to the indicated typescript file; hence I set both script 's standard output and the typescript file to /dev/null . If myprogram reads from the terminal, script reads from its own standard input, so you'll probably want to redirect this to /dev/null as well. Note that script does not pass the exit status of myprogram to its caller. Some implementations (e.g. the one in Debian and derivatives) have a -e option to do that. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/647843",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/216846/"
]
} |
647,859 | I am working on my yocto distribution including cryptsetup in the 2.3.2 version I am running such distribution on a board with 1 GB RAM and I am incurring in an "out of memory" error trying to open an encrypted partition that I am not able to properly debug. Any ideas? My distro runs from an mSD with 3 partitions; the third one (30 MB) is the encrypted one. I used the steps described on the ArchLinux guide to encrypt that partition, with ext3 instead of ext4 # cryptsetup -y -v luksFormat /dev/sda2# cryptsetup open /dev/sda2 cryptroot# mkfs.ext3 /dev/mapper/cryptroot But trying to open that partition on my board raises an error: cryptsetup --debug open /dev/mmcblk0p3 cryptroot# cryptsetup 2.3.2 processing "cryptsetup --debug open /dev/mmcblk0p3 cryptroot"# Running command open.# Locking memory.# Installing SIGINT/SIGTERM handler.# Unblocking interruption on signal.# Allocating context for crypt device /dev/mmcblk0p3.# Trying to open and read device /dev/mmcblk0p3 with direct-io.# Initialising device-mapper backend library.# Trying to load any crypt type from device /dev/mmcblk0p3.# Crypto backend (OpenSSL 1.1.1k 25 Mar 2021) initialized in cryptsetup library version 2.3.2.# Detected kernel Linux 4.1.35-rt41 ppc.# Loading LUKS2 header (repair disabled).# Acquiring read lock for device /dev/mmcblk0p3.# Opening lock resource file /run/cryptsetup/L_179:3# Verifying lock handle for /dev/mmcblk0p3.# Device /dev/mmcblk0p3 READ lock taken.# Trying to read primary LUKS2 header at offset 0x0.# Opening locked device /dev/mmcblk0p3# Veryfing locked device handle (bdev)# LUKS2 header version 2 of size 16384 bytes, checksum sha256.# Checksum:43e122216ab19330fdfb6d2f9d7b586c4e5189884aef24be884e7159228e9ee5 (on-disk)# Checksum:43e122216ab19330fdfb6d2f9d7b586c4e5189884aef24be884e7159228e9ee5 (in-memory)# Trying to read secondary LUKS2 header at offset 0x4000.# Reusing open ro fd on device /dev/mmcblk0p3# LUKS2 header version 2 of size 16384 bytes, checksum sha256.# Checksum:4ed9a44c22fde04c4b59a638c20eba6da3a13e591a6a1cfe7e0fec4437dc14cc (on-disk)# Checksum:4ed9a44c22fde04c4b59a638c20eba6da3a13e591a6a1cfe7e0fec4437dc14cc (in-memory)# Device size 32505856, offset 16777216.# Device /dev/mmcblk0p3 READ lock released.# Only 1 active CPUs detected, PBKDF threads decreased from 4 to 1.# Not enough physical memory detected, PBKDF max memory decreased from 1048576kB to 255596kB.# PBKDF argon2i, time_ms 2000 (iterations 0), max_memory_kb 255596, parallel_threads 1.# Activating volume cryptroot using token -1.# Interactive passphrase entry requested.Enter passphrase for /dev/mmcblk0p3:# Activating volume cryptroot [keyslot -1] using passphrase.device-mapper: ioctl: 4.31.0-ioctl (2015-3-12) initialised: [email protected]# dm version [ opencount flush ] [16384] (*1)# dm versions [ opencount flush ] [16384] (*1)# Detected dm-ioctl version 4.31.0.# Device-mapper backend running with UDEV support enabled.# dm status cryptroot [ opencount noflush ] [16384] (*1)# Keyslot 0 priority 1 != 2 (required), skipped.# Trying to open LUKS2 keyslot 0.# Keyslot 0 (luks2) open failed with -12.Not enough available memory to open a keyslot.# Releasing crypt device /dev/mmcblk0p3 context.# Releasing device-mapper backend.# Closing read only fd for /dev/mmcblk0p3.# Unlocking memory.Command failed with code -3 (out of memory). | LUKS2 uses Argon2i key derivation function which is memory-hard -- meaning it requires a lot of memory to open the device to prevent (or at least make it harder) brute force attacks using GPUs. 
You can check how much memory you need to open your device using cryptsetup luksDump /dev/sda2 , look for the line Memory: 755294 under Keyslots . When creating the device, cryptsetup checks how much memory is available and adjusts the amount required for opening it accordingly, but if you did create the LUKS device from a different computer (for example when formatting the SD card on a desktop) or even on the same machine with more memory available, it's possible you simply don't have enough memory now. And we are talking only about RAM, swap is not used in this case. I recommend re-creating the LUKS device with --pbkdf pbkdf2 to switch to the "old" (used to be default in LUKS1) key derivation function PBKDF2 which doesn't use extra memory. Alternatively you can also use --pbkdf-memory <num> to force lower amount of memory for the default Argon2i. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/647859",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/324258/"
]
} |
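A hedged sketch of the re-keying suggested above, done in place with luksConvertKey rather than reformatting (assumed available in cryptsetup 2.x; run it from a machine with enough RAM, device name as in the question):

    cryptsetup luksDump /dev/sda2                          # check the keyslot's PBKDF and Memory values
    cryptsetup luksConvertKey --pbkdf pbkdf2 /dev/sda2     # switch the keyslot to PBKDF2
    # or keep Argon2i but cap its memory cost:
    cryptsetup luksConvertKey --pbkdf argon2i --pbkdf-memory 131072 /dev/sda2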
647,876 | I have copied a partial csv file. publish_date,headline_text,likes_count,comments_count,shares_count,love_count,wow_count,haha_count,sad_count,thankful_count,angry_count20030219,aba decides against community broadcasting licence,1106,118,109,155,6,5,2,0,620030219,act fire witnesses must be aware of defamation,137,362,67,0,0,0,0,0,020030219,a g calls for infrastructure protection summit,357,119,212,0,0,0,0,0,020030219,air nz staff in aust strike for pay rise,826,254,105,105,21,45,7,0,9020030219,air nz strike to affect australian travellers,693,123,153,17,113,4,103,0,720030219,ambitious olsson wins triple jump,488,57,161,0,0,0,0,0,020030219,antic delighted with record breaking barca,386,60,80,3,4,0,93,0,6820030219,aussie qualifier stosur wastes four memphis match,751,45,297,0,0,0,0,0,020030219,aust addresses un security council over iraq,3847,622,141,1,0,0,0,0,020030219,australia is locked into war timetable opp,1330,205,874,0,0,0,0,0,020030219,australia to contribute 10 million in aid to iraq,3530,130,0,23,16,4,1,0,020030219,barca take record as robson celebrates birthday in,13875,331,484,0,0,0,0,0,020030219,bathhouse plans move ahead,11202,450,2576,433,51,20,4,0,3420030219,big hopes for launceston cycling championship,3988,445,955,0,0,0,0,0,020030219,big plan to boost paroo water supplies,460,101,92,0,0,0,0,0,020030219,blizzard buries united states in bills,303,223,193,0,0,0,0,0,0 I would like to find a shell command that will help me able to make a new column that will add up each entries (likes_count+ love_count + thankful_count) - (angry_count + sad_count) and name the column emotional_polarity. I have tried awk -F , {$12=$3+$6+$10-$11-$9;}{print $1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12} file but it does not work for some reason the columns become mixed together.i think this may be because I am losing the comma when I perform this | set OFS ( O utput F ield S eparator ) too so that you don't lose commas. It loses the commas when you do $12=$3+$6+$10-$11-$9 , i.e, setting/updating any column's value which in this case awk does the field splitting on the current line based on the OFS internal variable, which is Space character by default, so setting it to a comma will keep those on output when printing. awk 'BEGIN{ FS=OFS="," } { $(NF+1)=(NR==1? "emotional_polarity" : $3+$6+$10-$11-$9); print }' infile or simply append the new updates to the current input line: awk -F, '{ $0=$0 FS (NR==1? "emotional_polarity" : $3+$6+$10-$11-$9); print }' infile from the awk manual : FS The input field separator (see section Specifying How Fields AreSeparated ). The value is a single-character string or a multicharacterregular expression that matches the separations between fields in aninput record. OFS The output field separator (see section Output Separators ). It isoutput between the fields printed by a print statement. Its defaultvalue is " ", a string consisting of a single space. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/647876",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/443484/"
]
} |
647,907 | Looks like I cannot run any normal linux binaries if their name ends with .exe , any idea why? $ cp /bin/pwd pwd$ ./pwd/home/premek This is ok. But... $ cp /bin/pwd pwd.exe$ ./pwd.exe bash: ./pwd.exe: No such file or directory$ ls -la pwd.exe -rwxr-xr-x 1 premek premek 39616 May 3 20:27 pwd.exe$ file pwd.exe pwd.exe: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=2447335f77d6d8c4245636475439df52a09d8f05, stripped$ ls -la /lib64/ld-linux-x86-64.so.2lrwxrwxrwx 1 root root 32 May 1 2019 /lib64/ld-linux-x86-64.so.2 -> /lib/x86_64-linux-gnu/ld-2.28.so$ ls -la /lib/x86_64-linux-gnu/ld-2.28.so-rwxr-xr-x 1 root root 165632 May 1 2019 /lib/x86_64-linux-gnu/ld-2.28.so$ file /lib/x86_64-linux-gnu/ld-2.28.so/lib/x86_64-linux-gnu/ld-2.28.so: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=f25dfd7b95be4ba386fd71080accae8c0732b711, stripped | I spent one day on this and of course 1 second after posting this question I remembered something like this existed to register .exe files for wine: $ sudo cat /proc/sys/fs/binfmt_misc/wine enabledinterpreter /usr/bin/wineflags: extension .exe and /usr/bin/wine did not exist. I got rid of it using: $ sudo update-binfmts --remove wine /usr/bin/wineupdate-binfmts: warning: no executable /usr/bin/wine found, but continuing anyway as you request and it works now | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/647907",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20860/"
]
} |
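An equivalent sketch without update-binfmts, driving the binfmt_misc interface directly as documented by the kernel (entry name from the question):

    echo 0  | sudo tee /proc/sys/fs/binfmt_misc/wine    # disable the entry
    echo -1 | sudo tee /proc/sys/fs/binfmt_misc/wine    # or remove it entirely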
647,996 | I'm currently working on a little home project where i host various services on a raspberry pi 4 via docker. While working on this project i now encountered a dns problem which i can't really get my head around.I'm hosting pihole inside a container and configured it to use my router as an upstream dns server. On my router i have configured my raspberry pi as the local dns server and added a fiew other upstream dns servers. From my understanding this would lead to all dns requests getting routed trough my pihole container on my raspberry pi and then back to my router to get it resolved.So far this setup works for all my devices on my local network including the raspberry pi itself. The only problem i now encounter is with other containers on the same raspberry pi that are inside the same and/or different networks than pihole. All of them seem to have problems with resolving dns queries. For example: I have a phpmyadmin countainer connected to the same docker network as the pihole container. If i now ssh into the phpmyadmin container and want to execute 'ping google.com' or 'apt-get update' it won't be able to execute these commands because of failing dns. What i already checked: I looked at /etc/resolv.conf of the phpmyadmin container => It includes 127.0.0.11 - which is correct by my knowlegde I looked at /etc/resolv.conf of the host => It includes the actual ip of my raspberry pi (NOT 127.0.0.1). I do not understand why it uses the actual ip instead of localhost here but it does work anyway I restarted docker daemon I recreated the networks included in my docker-compose.yml I recreated the phpmyadmin container So far none of the above steps solved the problem. Out of curiosity i then set the ip of my router in /etc/dhcpcd.conf on my host as a static nameserver and reloaded both the dhcpcd and docker daemon. If i now ssh into my phpmyadmin container dns suddenly works. I excluded my routers ip again to verify my problem and dns stops working immediately.This leads me to the conclusion that all my docker containers (excluding pihole - because i specified dns 127.0.0.1 for this container) seem to have a problem with using my hosts ip address for dns. 
My current docker-compose.yml: version: '3'services: portainer: image: portainer/portainer-ce:linux-arm container_name: portainer restart: unless-stopped environment: TZ: Europe/Berlin networks: - frontend volumes: - /etc/localtime:/etc/localtime:ro - /var/run/docker.sock:/var/run/docker.sock - portainer_data:/data labels: - traefik.enable=true - traefik.docker.network=compose_frontend - traefik.http.routers.portainer.entrypoints=web_tcp - traefik.http.routers.portainer.rule=Host(`portainer.mydomain`) - traefik.http.services.portainer.loadbalancer.server.port=9000 traefik: image: traefik:latest container_name: traefik restart: unless-stopped environment: TZ: Europe/Berlin networks: - frontend ports: - 80:80 volumes: - /etc/localtime:/etc/localtime:ro - /var/run/docker.sock:/var/run/docker.sock:ro - /home/farmadmin/config/traefik:/etc/traefik labels: - traefik.enable=true - traefik.docker.network=compose_frontend - traefik.http.routers.traefik.entrypoints=web_tcp - traefik.http.routers.traefik.rule=Host(`traefik.mydomain`) - traefik.http.services.traefik.loadbalancer.server.port=8080 pihole: image: pihole/pihole:latest container_name: pihole restart: unless-stopped environment: TZ: Europe/Berlin networks: - frontend dns: - 127.0.0.1 ports: - 53:53/tcp - 53:53/udp volumes: - /etc/localtime:/etc/localtime:ro - etc-pihole:/etc/pihole/ - etc-dnsmasq.d:/etc/dnsmasq.d/ labels: - traefik.enable=true - traefik.docker.network=compose_frontend - traefik.http.routers.pihole.entrypoints=web_tcp - traefik.http.routers.pihole.rule=Host(`pihole.mydomain`) - traefik.http.routers.pihole.middlewares=dashboard_prefix - traefik.http.middlewares.dashboard_prefix.addprefix.prefix=/admin - traefik.http.services.pihole.loadbalancer.server.port=80 mariadb: image: linuxserver/mariadb:latest container_name: mariadb restart: unless-stopped environment: - TZ=Europe/Berlin - PUID=1000 - PGID=1000 networks: - backend volumes: - mariadb_data:/config phpmyadmin: image: phpmyadmin:latest container_name: phpmyadmin restart: unless-stopped environment: - TZ=Europe/Berlin - PMA_HOST=mariadb - PMA_PORT=3306 networks: - frontend - backend labels: - traefik.enable=true - traefik.docker.network=compose_frontend - traefik.http.routers.phpmyadmin.entrypoints=web_tcp - traefik.http.routers.phpmyadmin.rule=Host(`phpmyadmin.mydomain`) - traefik.http.services.phpmyadmin.loadbalancer.server.port=80networks: frontend: backend: internal: truevolumes: # Persistent Portainer Data portainer_data: # Persistent Pihole Data etc-pihole: etc-dnsmasq.d: # Persistent MariaDB Data mariadb_data: So my questions would be:Why does the hosts resolv.conf include its full own ip instead of localhost?Why is my host able to resolve dns queries with its own ip but my docker containers aren't?How can i solve this problem without setting the hosts nameserver to my router? | So, digging into this issue further (I had the same issue on one of my Pis ), I found that if you're running a local DNS resolver on that server, and you want Docker's DNS to work with it properly, you need to make sure the Pi's /etc/resolv.conf file has the nameserver 127.0.0.1 (as you mentioned). And to make that change stick, you should edit the /etc/resolvconf.conf file and uncomment the line: # If you run a local name server, you should uncomment the below line and# configure your subscribers configuration files below.name_servers=127.0.0.1 After that you can reboot or regenerate the file immediately with sudo resolvconf -u . 
As to why resolvconf drops the IP address of the Pi into /etc/resolv.conf by default—I believe unless you override the name_servers or have things customized in /etc/network/interfaces , it typically drops in the IP(s) of the DNS server(s) your router provides—which would be the IP address of your Pi! (Catch 22... but it's tricky to debug since many things besides Docker work fine with that setup!). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/647996",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/469938/"
]
} |
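A complementary sketch for the Docker side of the answer above: the daemon's documented "dns" setting in daemon.json can also point containers at an explicit upstream (the address below is a placeholder for the Pi's or the router's IP; merge by hand if the file already has other settings):

    printf '{ "dns": ["192.168.1.2"] }\n' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker
    docker exec -it phpmyadmin getent hosts google.com   # quick resolution check from a container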
647,999 | Hello I am currently working with a csv file. I want to find a bash command that will help me find special characters ?, !, #, *, % and also and character spaces such as ' ' any advice would be helpful, I am looking at potentially using the grep function but not too sure how this would apply to the above specifications. | So, digging into this issue further (I had the same issue on one of my Pis ), I found that if you're running a local DNS resolver on that server, and you want Docker's DNS to work with it properly, you need to make sure the Pi's /etc/resolv.conf file has the nameserver 127.0.0.1 (as you mentioned). And to make that change stick, you should edit the /etc/resolvconf.conf file and uncomment the line: # If you run a local name server, you should uncomment the below line and# configure your subscribers configuration files below.name_servers=127.0.0.1 After that you can reboot or regenerate the file immediately with sudo resolvconf -u . As to why resolvconf drops the IP address of the Pi into /etc/resolv.conf by default—I believe unless you override the name_servers or have things customized in /etc/network/interfaces , it typically drops in the IP(s) of the DNS server(s) your router provides—which would be the IP address of your Pi! (Catch 22... but it's tricky to debug since many things besides Docker work fine with that setup!). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/647999",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/443484/"
]
} |
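For the grep question above, a hedged sketch; the bracket expression lists the characters literally, including the space:

    grep -n '[?!#*% ]' data.csv                      # lines (with numbers) containing ? ! # * % or a space
    grep -o '[?!#*% ]' data.csv | sort | uniq -c     # count how often each character occurs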
648,101 | I have a file named data.txt with the following content: 1 aFDLKSFD_FDSJFskadfsff_fsadklfj_fdsaf IT524234 2 bFDLKSFD_FDSJFskadfsff_fsadklfj_fdsaf IT524234 3 cFDLKSFD_FDSJFskadfsff_fsadklfj_fdsaf IT524234 4 dFDLKSFD_FDSJFskadfsff_fsadklfj_fdsaf IT524234 5 eFDLKSFD_FDSJFskadfsff_fsadklfj_fdsaf IT524234 6 fFDLKSFD_FDSJFskadfsff_fsadklfj_fdsaf IT524234 7 gFDLKSFD_FDSJFskadfsff_fsadklfj_fdsaf IT524234 8 hFDLKSFD_FDSJFskadfsff_fsadklfj_fdsaf IT524234 9 iFDLKSFD_FDSJFskadfsff_fsadklfj_fdsaf IT52423410 jFDLKSFD_FDSJFskadfsff_fsadklfj_fdsaf IT52423411 kFDLKSFD_FDSJFskadfsff_fsadklfj_fdsaf IT524234 Please note that the first field is the line number. Now I want to construct a shell script such that I could call that script with some line number arguments and it should print out the 1st and 2nd field of the corresponding line numbers in data.txt . For example: get.sh 1 3 5 should print: 1 aFDLKSFD_FDSJFskadfsff_fsadklfj_fdsaf3 cFDLKSFD_FDSJFskadfsff_fsadklfj_fdsaf5 eFDLKSFD_FDSJFskadfsff_fsadklfj_fdsaf I think awk can be used for printing only 1st and 2nd field but I am stuck at filtering only specific lines based on the arguments passed to the shell script. Thanks in advance. | In awk, you could collect the line numbers to an array and read through the file once, printing the lines that are mentioned in the array: #!/bin/shawk -v lines="$*" 'BEGIN { split(lines, a, "[, ]"); for (i in a) b[a[i]] = 1;} NR in b {print $1, $2}' < data.txt The split() splits the variable lines along spaces and commas to array a , and the for loop builds the array b such that the keys of that array contain the lines we're interested in. Then NR in b just checks if the key matching the current line number exists. Note that that will print each line only once, regardless of how many times it exists in the input, and the lines will be printed input numeric order, not the order given by the argument: $ bash get.sh 7 3 33 cFDLKSFD_FDSJFskadfsff_fsadklfj_fdsaf7 gFDLKSFD_FDSJFskadfsff_fsadklfj_fdsaf ( get.sh 7,3,3 works too) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/648101",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/358102/"
]
} |
648,268 | Here are the outputs. echo *.*file1.txt file2.txt file3.txtecho * .*file1.txt file2.txt file3.txt . .. Why does the second command include the files if they have no space character before the period? Thank you for the help much love! | Those are two separate globs. The first, * matches everything except hidden files, so that prints out file1.txt , file2.txt , and file3.txt . The second, .* matches hidden files and directories only: those whose name starts with a . , so this prints out . and .. . If you only want to print out file/dir names with a space followed by a period, you would need to escape the space so that it isn't treated as a separator: $ ls file1.txt 'file2 .txt'$ echo *\ .*file2 .txt Finally, I should probably mention that the *.* doesn't mean "match everything". Unlike in Windows systems, *nix systems don't require an extension to file names (with very few exceptions, extensions are completely optional and arbitrary). On *nix systems, the *.* glob will only print file names with a . in their name: $ ls file file1.txt 'file2 .txt'$ echo *.*file1.txt file2 .txt To print all files, use a single * : $ echo *file file1.txt file2 .txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/648268",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/470219/"
]
} |
648,308 | I feel like this should be straightforward but I've never seen anyone ask this that I can tell. The situation is pretty straight forward. Whenever I become a user, ie su user it always starts in /root directory instead of it's home directory. Let me show you. [root@st-test2 ~]# grep "postgres" /etc/passwdpostgres:x:26:26:PostgreSQL Server:/var/lib/pgsql/:/bin/bash[root@st-test2 ~]# su postgresbash-4.2$ pwd/root[root@st-test2 ~]# ls -lhart /var/lib |grep postgresdrwx------. 4 postgres postgres 86 May 5 16:07 pgsql So, you can see that the postgres user's home directory exists and that its set in /etc/passwd...but for some reason, they start in the root directory. This happens with every user that I have created and I have no idea why. I can't say that I've ever seen this happen before either. | If you only give a username as argument, su changes user without changing much else : For backward compatibility, su defaults to not change the currentdirectory and to only set the environment variables HOME and SHELL (plus USER and LOGNAME if the target user is not root). So su postgres stays in the same directory. However since HOME is set to the new user’s home directory, cd will take you to the right place. To log in and start from the user’s default directory, you need to ask su to start a login shell set up appropriately: su -l postgres or its common synonym, su - postgres | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/648308",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/470260/"
]
} |
648,319 | While following instructions at a forum post to install something over ssh at an external machine, I executed the following command without thinking: . .bashrc I have never seen the command before but am guessing (from having had a similar problem previously) that it recursively sources ~/.bashrc because now I can't execute any commands. When I login, I can't do anything. I immediately get: -bash: /usr/bin/whoami: Argument list too long-bash: /usr/bin/cut: Argument list too long-bash: /usr/bin/logger: Argument list too long Unfortunately, I can't do what solved the problem when I had a similar problem in the past (login without using the bash shell by doing ssh -t user@host /bin/sh and then modify ~/.bashrc ) because there does not seem to be any problem with ~/.bashrc . It looks exactly the same as it did before I messed up. Whatever I did, modifying ~/.bashrc does not appear to be a fix. Can anyone please suggest an alternative solution? Here is ~/.bashrc : # .bashrc# Source global definitionsif [ -f /etc/bashrc ]; then . /etc/bashrcfiexport PATH=$PATH:$HOME/.local/bin:$HOME/binexport PATH=$PATH:$HOME/.local/bin:$HOME/bin/prog1:$PATHexport PATH=$PATH:$HOME/.local/bin:$HOME/bin/prog2:$PATHexport PATH=$PATH:$HOME/.local/bin:$HOME/prog2:$PATHexport PATH=$PATH:$HOME/.local/bin:$HOME/prog2/bin:$PATHexport PATH=$PATH:$HOME/.local/bin:$HOME/bin/prog3/tools/newtool:$PATHexport PYTHONPATH=$PATH:$HOME/.local/bin:$HOME/prog2:$PYTHONPATH | Replace the first set of export lines with this export PATH="$PATH:$HOME/.local/bin:$HOME/bin"[[ -d "$HOME/bin/prog1" ]] && PATH="$PATH:$HOME/bin/prog1"[[ -d "$HOME/bin/prog2" ]] && PATH="$PATH:$HOME/bin/prog2"[[ -d "$HOME/prog2" ]] && PATH="$PATH:$HOME/prog2"[[ -d "$HOME/prog2/bin" ]] && PATH="$PATH:$HOME/prog2/bin"[[ -d "$HOME/bin/prog3/tools/newtool" ]] && PATH="$PATH:$HOME/bin/prog3/tools/newtool" What was happening was that you were doubling up $PATH on every line ( $PATH + new item + $PATH ). Very strange. In this replacement code each [[ ... ]] section ensures the corresponding directory exists before adding it to your $PATH . Not essential but certainly cleaner | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/648319",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105179/"
]
} |
648,352 | A set of commonly used symbols to represent that a variable belongs to a given real coordinate space are ∈ ("ELEMENT OF", Unicode U+2208) and ℝ ("DOUBLE-STRUCK CAPITAL R", Unicode U+211D). Are those two symbols available in eqn , troff , and/or groff ? I can not find them in the documentation. Edit: I have tested provided answer and I can get symbol ∈ ("ELEMENT OF", Unicode U+2208), but not symbol ℝ ("DOUBLE-STRUCK CAPITAL R", Unicode U+211D). Specifically, if I do: .TL Test.NHIntroduction.LPGiven an input in subspace \[u211D]:.EQx \[mo] \[u211D] sup 2.ENwith output estimated value:.EQy hat.EN I get the following error: cat test.ms | eqn | groff -ms > test.pstroff: <standard input>:8: warning: can't find special character 'u211D' As it can be seen in the PS output ∈ is shown, but ℝ is not: I am using FreeBSD 12 eqn and groff . | If you want to see a unicode character like U+211D in groff you need to find a font that contains it, and provide the font metrics file for it to groff, usually by converting a ttf file to pfa and adding it to a list. One site that does the look-up for you for some common fonts is fileformat.info which shows most of the DejaVu fonts contain this character, eg DejaVu Serif . On Fedora this ttf font can be installed from a package dejavu-sans-fonts , and so I presume FreeBSD might have something similar. (If not, try one of the other matched fonts). Alternatively, if you have the fc-match command you can find font files you already have with the character: fc-match -s -f '%{file}\n' ':charset=211D' You need to pick out the TrueType files (usual suffix .ttf ) from this list. Alternatively, if you have the fc-list and ttx commands you can do a slow search through the ttf fonts for the character name with: fc-list | sed -n 's/\.ttf: .*/.ttf/p' |xargs -l -t ttx -t cmap -o - 2>&1 |grep 'ttx\|DOUBLE-STRUCK CAPITAL R' If it finds the glyph it will output the filename and the match, eg: ttx -t cmap -o - /usr/share/fonts/dejavu/DejaVuSansMono.ttf <map code="0x211d" name="uni211D"/><!-- DOUBLE-STRUCK CAPITAL R --> You can then read Peter Schaffter's explanation about Adding fonts to groff . Though this is written for the mom macros, it applies to groff in general, though your macros may not handle a family automatically. He conveniently provides a shell script to do the work for you. Some tweaking may be needed as every distribution likes to place files in different places. You can then add the following to your eqnrc , for example: define @R '"\f[DejaVuR]\[u211D]\fR"' The following doesn't need any new fonts: define in '{type "relation" size +3 \[mo]}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/648352",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/378578/"
]
} |
648,365 | I am on grub rescue mode, and the avalible command just ls, the search, search.file not there. I already found my partition which use ext2, that is hd0 msdos6. When i do ls, it consist oftmp/ root/ var/ dev/ proc/ run/ sys/ What we are looking to normal boot is root also boot right? But where is the boot/ ?I don't even know why the grub is broken right now. I am already aware that it can be fixed by another linux like boot seq dick , but i am hesitant to do that. Because i need to borrow someone pc to install it on a thumb drive first. | If you want to see a unicode character like U+211D in groff you need to find a font that contains it, and provide the font metrics file for it to groff, usually by converting a ttf file to pfa and adding it to a list. One site that does the look-up for you for some common fonts is fileformat.info which shows most of the DejaVu fonts contain this character, eg DejaVu Serif . On Fedora this ttf font can be installed from a package dejavu-sans-fonts , and so I presume FreeBSD might have something similar. (If not, try one of the other matched fonts). Alternatively, if you have the fc-match command you can find font files you already have with the character: fc-match -s -f '%{file}\n' ':charset=211D' You need to pick out the TrueType files (usual suffix .ttf ) from this list. Alternatively, if you have the fc-list and ttx commands you can do a slow search through the ttf fonts for the character name with: fc-list | sed -n 's/\.ttf: .*/.ttf/p' |xargs -l -t ttx -t cmap -o - 2>&1 |grep 'ttx\|DOUBLE-STRUCK CAPITAL R' If it finds the glyph it will output the filename and the match, eg: ttx -t cmap -o - /usr/share/fonts/dejavu/DejaVuSansMono.ttf <map code="0x211d" name="uni211D"/><!-- DOUBLE-STRUCK CAPITAL R --> You can then read Peter Schaffter's explanation about Adding fonts to groff . Though this is written for the mom macros, it applies to groff in general, though your macros may not handle a family automatically. He conveniently provides a shell script to do the work for you. Some tweaking may be needed as every distribution likes to place files in different places. You can then add the following to your eqnrc , for example: define @R '"\f[DejaVuR]\[u211D]\fR"' The following doesn't need any new fonts: define in '{type "relation" size +3 \[mo]}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/648365",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/470338/"
]
} |
648,370 | I have some issue I just want to catch exact CUDA version using nvidia-smi from command line and it is working in shell: $ nvidia-smi | awk -F"CUDA Version:" 'NR==3{split($2,a," ");print a[1]}'11.0 But when I am doing the same operation from makefile I have syntax error: ver_cuda: CUDA = $(nvidia-smi | awk -F"CUDA Version:" 'NR==3{split($2,a," ");print a[1]}'); VER_CUDA ?= $(CUDA); Result: awk: line 1: syntax error at or near ,expr: syntax error: unexpected argument ‘11.0’make: Nothing to be done for 'ver_cuda' If somebody could help me I will be really apreciate ! | If you want to see a unicode character like U+211D in groff you need to find a font that contains it, and provide the font metrics file for it to groff, usually by converting a ttf file to pfa and adding it to a list. One site that does the look-up for you for some common fonts is fileformat.info which shows most of the DejaVu fonts contain this character, eg DejaVu Serif . On Fedora this ttf font can be installed from a package dejavu-sans-fonts , and so I presume FreeBSD might have something similar. (If not, try one of the other matched fonts). Alternatively, if you have the fc-match command you can find font files you already have with the character: fc-match -s -f '%{file}\n' ':charset=211D' You need to pick out the TrueType files (usual suffix .ttf ) from this list. Alternatively, if you have the fc-list and ttx commands you can do a slow search through the ttf fonts for the character name with: fc-list | sed -n 's/\.ttf: .*/.ttf/p' |xargs -l -t ttx -t cmap -o - 2>&1 |grep 'ttx\|DOUBLE-STRUCK CAPITAL R' If it finds the glyph it will output the filename and the match, eg: ttx -t cmap -o - /usr/share/fonts/dejavu/DejaVuSansMono.ttf <map code="0x211d" name="uni211D"/><!-- DOUBLE-STRUCK CAPITAL R --> You can then read Peter Schaffter's explanation about Adding fonts to groff . Though this is written for the mom macros, it applies to groff in general, though your macros may not handle a family automatically. He conveniently provides a shell script to do the work for you. Some tweaking may be needed as every distribution likes to place files in different places. You can then add the following to your eqnrc , for example: define @R '"\f[DejaVuR]\[u211D]\fR"' The following doesn't need any new fonts: define in '{type "relation" size +3 \[mo]}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/648370",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/470339/"
]
} |
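A note on the make/awk question above: inside a makefile, $( ... ) is make's own expansion, so the pipeline never reaches the shell intact and awk receives a mangled program; a variable assigned on a recipe line also only lives in that recipe's sub-shell. A minimal sketch of the usual fix, assuming GNU make (the shell function and := are GNU extensions); the variable and target names are illustrative and the awk pipeline is kept as in the question:

# Makefile (recipe lines must be indented with a literal tab)
CUDA_VER := $(shell nvidia-smi | awk -F'CUDA Version:' 'NR==3{split($$2,a," "); print a[1]}')
VER_CUDA ?= $(CUDA_VER)

ver_cuda:
	@echo "Detected CUDA $(VER_CUDA)"

If the detection has to run inside the recipe instead, every dollar meant for the shell or awk must be doubled, e.g. ver=$$(nvidia-smi | awk -F'CUDA Version:' 'NR==3{split($$2,a," "); print a[1]}'); echo "$$ver"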
648,421 | I'm using AWK to generate values between 1 - 6 which need to come out in random order. I have managed to sort out the logic for the creation of the right range of numbers but am struggling with reading those in to an array to prevent duplicate numbers being output. Currently my code has this;- BEGIN{FS=""}{for (i=1; i<=6; ++i) {v=(int (rand()*6)+1 print v } This currently outputs six numbers but shows duplicates 2, 2, 6, 1, 4, 2.What I need the output to be is something like 1, 4, 2, 5, 6, 3 Can anyone please help with the array side of this for my AWK program? Many thanks | Why use awk when, on most Unix boxes at least, you can just do: $ seq 6 | shuf523416 or as @StéphaneChazelas mentioned in a comment shuf -i 1-6 . If you do want to use awk though then here's one approach using a Knuth Shuffle : $ cat tst.awkfunction shuf(arr, i, j, n, tmp) { n = length(arr) for (i=n; i>1; i--) { j = int( 1 + rand()*i ) tmp = arr[i] arr[i] = arr[j] arr[j] = tmp }}BEGIN { srand() for (i=1; i<=n; i++) { arr[i] = i } shuf(arr) for (i=1; i<=n; i++) { print arr[i] }} $ awk -v n=6 -f tst.awk315462 which just populates an array with the values you want, then swaps the value stored at every index in the array with a value stored at some other randomly selected index, then prints the array. Note that the shuf() function above works in a single pass of the array. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/648421",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/470391/"
]
} |
648,530 | Is it possible to use a for -loop with the watch command? I'm not really sure what to make of this error with what I've tried: $ for i in 1 2 3; do echo $i; done123$ watch -n 10 for i in 1 2 3; do echo $i; done-bash: syntax error near unexpected token `do'$ watch for i in 1 2 3; do echo $i; done-bash: syntax error near unexpected token `do'$ | watch 's command argument(s) are a script that is run with sh -c . If the command arguments are just a list of tokens separated by spaces (e.g. watch ls -l ), it concatenates them all and runs them. But unquoted shell meta-characters are used by the shell that you run watch from and are never seen by watch . This means that meta-characters like ; & | < > etc need to be escaped or quoted to prevent the shell in which you run watch from seeing those characters as, e.g., instructions to mark the end the watch command, run the watch command in the background, or pipe the output of watch into another program (rather than run the pipe inside the watch script). The usual quoting rules apply - single-quotes to prevent variable interpolation, double-quotes otherwise. man watch has an EXAMPLES section at the end showing this. For example: watch -n 10 'for i in 1 2 3; do echo $i; done' or watch -n 10 'grep something /var/log/kern.log | tail' Note: you can use watch 's -x option if you want to exec something without sh -c . e.g. watch -x awk -f script.awk . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/648530",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/214773/"
]
} |
648,619 | I am new to Linux and thought /bin/sh was a folder. I did mv path/to/file /bin/sh and now I can't open terminal and Ubuntu Software anymore. There are probably more broken programs I haven't noticed yet. I get the error: Failed to spawn child process /bin/sh Too many layers of symbolic links Any advice? I am running Ubuntu 20.04 | /bin/sh is a symlink, and overwriting didn't actually delete anything, it just invalidates the link. Which is a problem because all kinds of scripts use /bin/sh in the shebang header. This is probably why various random things are also failing. You need to, as root or via sudo: 1 cd /bin rm sh ln -s dash sh Hopefully the meaning of that is clear enough, since depending on what mechanismyou find to do this using absolute paths may be easier (the original link likely did not use an absolute path but this should not matter much). If you are unfamiliar with (symbolic) file links see man ln . This should allow you to use a terminal normally again. If it works, you probably want to reboot in case any failed script from earlier has ongoing consequences. This is Debian/Ubuntu and family specific; other distros may not include the dash shell and instead symlink to bash . If there's no dash in /bin , use bash . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/648619",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/470592/"
]
} |
648,697 | Current logs: 18:56:54 Info: Starting18:56:55 Error: timed out18:56:56 Error: timed out18:56:57 Error: timed out18:56:58 Info: reconnected18:56:59 Error: timed out Desired output: 18:56:54 Info: Starting18:56:55 Error: timed out (3)18:56:57 Info: reconnected18:56:58 Error: timed out I have log files that can have thousands of repeated lines, I want to copy the behaviour of chrome logs using bash/linux commands. I found this, which is close: Remove partial duplicates consecutive lines but keep first and last It gives this magic awk command: awk '{n=$2$3$4$5$6$7}l1!=n{if(p)print l0; print; p=0}l1==n{p=1}{l0=$0; l1=n}END{print}' file (Crucially having n=$1 excluded allows the timestamp to be different, which is needed. Exact timestamp shown for compressed lines isn't important.) But I need a counter added as well, so I have clear idea of what was eliminated, giving a decent compromise between readability and accuracy (the only info lost will be the exact timing of repeated messages, having the first or last timestamp is enough.) Thanks, I'm bad at awk and just learned about uniq, hopefully someone can link me to a solution, or sees this as a fun exercise. Cheers. | No need for awk , just use uniq directly, uniq -c -f 1 file The -c option gives the count for the number of times a line was found consecutively in the input, and you can skip the timestamp in the first space or tab-delimited field with -f 1 . Example given the data in the question: $ uniq -c -f 1 file 1 18:56:54 Info: Starting 3 18:56:55 Error: timed out 1 18:56:58 Info: reconnected 1 18:56:59 Error: timed out | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/648697",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/470670/"
]
} |
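The uniq -c -f 1 answer above prefixes every line with its count; if the exact layout asked for in the question is wanted (count appended to the first occurrence, first timestamp kept), a small awk sketch in the same spirit is below. It assumes the timestamp is the first space-separated field and that consecutive lines are compared on everything after it; logfile is a placeholder name:

awk '
  function emit() { printf "%s%s\n", line, (count > 1 ? " (" count ")" : "") }
  { key = substr($0, index($0, " ") + 1) }      # the message with the leading timestamp stripped
  key != prev { if (NR > 1) emit(); line = $0; count = 0; prev = key }
  { count++ }
  END { if (NR) emit() }
' logfile

On the sample input this prints the first "timed out" line with "(3)" appended and leaves unrepeated lines untouched.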
648,816 | When you run luksDump on a LUKS device, I get this: $ sudo cryptsetup luksDump /dev/sda1 LUKS header informationVersion: 2Epoch: 3Metadata area: 16384 [bytes]Keyslots area: 16744448 [bytes]UUID: 4640c6e4-[…]Label: (no label)Subsystem: (no subsystem)Flags: (no flags)[…] I’s quite obvious what “version” refers to (the current best is v2, so this is what you should aim for) and I’ve seen values for Epoch from 3 to 5. However, what does Epoch refer to, actually?And what value should I aim at? Does it matter (security-wise) what number is stated there?Is it bad if it is still Epoch 3 e.g.? Can one upgrade that Epoch? I’ve searched the web and the FAQ for information, but the word epoch is not mentioned there. | The Epoch increases every time you change anything in your LUKS header (like when adding or removing keys, etc.). The LUKS2 header specification states: uint64_t seqid; // sequence ID, increased on update seqid is a counter (sequential number) that is always increased whena new update of the header is written. The header with a higher seqid is more recent and is used for recovery (if there are primary and secondaryheaders with different seqid, the more recent one is automatically used). Why this is called a "sequence ID" in code and technical documentation, but uses the term "Epoch" when shown to the end user, remains a mystery. That it is in fact the same thing, can be seen if you read the fine source , which prints seqid as Epoch: log_std(cd, "Epoch: \t%" PRIu64 "\n", hdr->seqid); tl;dr You can safely ignore the Epoch, it is a harmless counter with no specific meaning. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/648816",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146739/"
]
} |
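To watch the counter described above move, something along these lines can be used (device path as in the question; luksAddKey/luksRemoveKey are just convenient header-modifying operations and prompt for passphrases, so only try this on a device where that is acceptable). How much the number grows per operation is not specified, only that a newer header carries a higher value:

sudo cryptsetup luksDump /dev/sda1 | grep '^Epoch'   # note the current value
sudo cryptsetup luksAddKey /dev/sda1                 # any header update, here: add a keyslot
sudo cryptsetup luksDump /dev/sda1 | grep '^Epoch'   # the value has gone up
sudo cryptsetup luksRemoveKey /dev/sda1              # undo the change; the counter rises again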
649,004 | I'm trying to substitute (with sed ) an entire line containing a specific word and the newline at the end. Here the testfile: this # target for substitutionthis is a testanother test? Now, I already posted here, and from the linked post, I understand how to do this in some way: sed 's/^this$/test/g' testfile That works, or at least it seems so, because the newline at the end of the word this is still there: test # target for substitution but newline is still therethis is a testanother test? Given the above, I'm also fully aware sed can't match the newline directly (although I do recall that I could use '\n' in certain version of sed , but that's beside the point). I do know how to at least delete the entire word/line and the newline: sed '/^this$/d' testfile Except I need to substitute it instead. How can I do this? (with sed preferably) | As I understand you, you want to replace a line consisting only of the word this and the following newline by test , so foothisthis is a test should become footestthis is a test In sed you can do simply join the next line with N and replace everything up to the newline: sed '/^this$/{N;s/.*\n/test/;}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/649004",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/409852/"
]
} |
649,013 | I have been experimenting with hex numbers in AWK ( gawk ), but sometimes when I print them using e.g. printf , they are printed with some LSBs masked out, like in the following example: awk 'BEGIN { x=0xffffffffbb6002e0; printf("%x\n", x); }'ffffffffbb600000 Why do I experience this behaviour and how can I correct it? I'm using gawk on Debian Buster 10. | Numbers in AWK are floating-point numbers by default, and your value exceeds the precision available. 0xffffffffbb6002e0 ends up represented as 0 10000111110 1111111111111111111111111111111101110110110000000000 in IEEE-754 binary64 ( double-precision ) format, which represents the integer value 0xffffffffbb600000 . Note the change in the low 12 bits, rounded to zero. The smallest positive integer to get any rounding error when converted to double is 2 53 + 1. The larger the number, the larger the gap between values a double can represent. (Steps of 2, then 4, then 8, etc; that's why the low hex digits of your number round to zero.) With GAWK, if it’s built with MPFR and MP (which is the case in Debian), you can force arbitrary precision instead with the -M option: $ awk -M 'BEGIN { x=0xffffffffbb6002e0; printf("%x\n", x); }'ffffffffbb6002e0 For calculations, this will default to the same 53 bits of precision as available with IEEE-754 doubles, but the PREC variable can be used to control that. See the manual linked above for extensive details. There is a difference in handling for large integers and floating-point values requiring more than the default precision, which can result in surprising behaviour; large integers are parsed correctly with -M and its default settings (only subsequent calculations are affected by PREC ), whereas floating-point values are stored with the precision defined at the time they are parsed, which means PREC needs to be set appropriately beforehand: # Default settings, integer value too large to be exactly represented by a binary64$ awk 'BEGIN { v=1234567890123456789; printf "%.20f\n", v }'1234567890123456768.00000000000000000000# Forced arbitrary precision, same integer value stored exactly without rounding$ awk -M 'BEGIN { v=1234567890123456789; printf "%.20f\n", v }'1234567890123456789.00000000000000000000# Default settings, floating-point value requiring too much precision$ awk 'BEGIN { v=123456789.0123456789; printf "%.20f\n", v }'123456789.01234567165374755859# Forced arbitrary precision, floating-point parsing doesn’t change$ awk -M 'BEGIN { v=123456789.0123456789; printf "%.20f\n", v }'123456789.01234567165374755859# Forced arbitrary precision, PREC set in the BEGIN block, no difference$ awk -M 'BEGIN { PREC=94; v=123456789.0123456789; printf "%.20f\n", v }'123456789.01234567165374755859# Forced arbitrary precision, PREC set initially$ awk -M -vPREC=94 'BEGIN { v=123456789.0123456789; printf "%.20f\n", v }'123456789.01234567890000000000 When reading input values, AWK only recognises decimal values as numbers; to handle non-decimal values (octal or hexadecimal), fields should be processed using GAWK’s strtonum function . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/649013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128739/"
]
} |
649,189 | I'm running the following code on iOS using my iPhone's terminal, to be clear, this command is run within my jailbroken iphone using a slim terminal tweak called New Term 2: cd /var/mobile/Library/Widgetsfind . -maxdepth 3 -name 'index.html' -printf "%h\n" This returns the list of the folders containing index.html . I'd like to know how to add another file: Config_extra.js (if it exists, it'll be located in the same folder as index.html) to the search in a way that the results show only folders containing both files Thanks in advance | You were almost there; once find finds the index.html file, we ask it to look for the Config_extra.js file within the same directory via the -execdir (a non-POSIX option that is supported by some find implementations, including BSD- find which is on iOS) and upon success we print the directory name. find . -maxdepth 3 -type f -name index.html -execdir test -f Config_extra.js \; -printf '%h\n' The above command written in a spread out fashion: find . -maxdepth 3 \ -type f -name index.html \ -execdir test -f Config_extra.js \; \ -printf '%h\n' ; Another way to solve this problem is via perl using the File::Find module which is standard and part of Perl core since a very long time. Meaning, if you have perl you have File::Find cfg='Config_extra.js'perl -MFile::Find -le ' find( sub { my $cfg = $ARGV[0]; my $d = $File::Find::dir; -d && "$d/" =~ m|(?:.*/){3}| && $File::Find::prune++; -f && /^index\.html$/ && -f $cfg && print($d); }, shift, )' . "$cfg" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/649189",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/471005/"
]
} |
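If -execdir or -printf turn out to be unavailable in a particular find build, the same test can be done portably by handing the matches to a small inline script with the standard -exec ... {} + form; Config_extra.js and the depth limit are as in the question:

find . -maxdepth 3 -type f -name index.html -exec sh -c '
  for f in "$@"; do
    d=${f%/*}                                   # directory holding this index.html
    [ -f "$d/Config_extra.js" ] && printf "%s\n" "$d"
  done
' sh {} +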
649,222 | I have a script - run from cron.daily that gathers SMART stats from two identical SATA SSD's.However, smartctl -A /dev/sda sometimes returns the stats for /dev/sdb - and if does so smartctl -A /dev/sdb returns the stats for /dev/sdb. However, sometimes it gets it right! The system boots into / on a M2 nvme0n1 with /home on one of the SATA SSD's and all filesystems are mounted via fstab using UUID references. I have tried inserting random sleep commands - but this makes no difference. The output of smartctl doesn't include any notification of what it is the output of - example output:- smartctl 6.6 2017-11-05 r4594 [x86_64-linux-5.10.0-0.bpo.5-amd64] (local build)Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org=== START OF READ SMART DATA SECTION ===SMART Attributes Data Structure revision number: 1Vendor Specific SMART Attributes with Thresholds:ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0 9 Power_On_Hours 0x0032 099 099 000 Old_age Always - 2396. . . uname -a Linux hal 5.10.0-0.bpo.5-amd64 #1 SMP Debian 5.10.24-1~bpo10+1 (2021-03-29) x86_64 GNU/Linux Here is the script, which writes all the output as a single CSV line to a log file. #!/bin/sh# SMART DISK PROCESSING# =====================tmpfile=$(mktemp -q)today=$(date -u +%d-%m-%Y)smartctl -A /dev/sdb > $tmpfile# Output log as a single line - note "Unknown_Attribute" is "POR_Recovery_Count" [unexpected shutdown]echo -n $today ', ' >> /var/log/disk-monitor.d/sdb-errors.csvawk 'NR>=8 && NR<=21 {print $1,",",$2,",",$10,",";}' $tmpfile | tr -d '\n' | sed 's/Unknown_Attribute/POR_Recovery_Count/;s/\,$/\n/' >> /var/log/disk-monitor.d/sdb-errors.csv#------------------------------smartctl -A /dev/sda > $tmpfile# Output log as a single line - note "Unknown_Attribute" is "POR_Recovery_Count" [unexpected shutdown]echo -n $today ', ' >> /var/log/disk-monitor.d/sda-errors.csvawk 'NR>=8 && NR<=21 {print $1,",",$2,",",$10,",";}' $tmpfile | tr -d '\n' | sed 's/Unknown_Attribute/POR_Recovery_Count/;s/\,$/\n/' >> /var/log/disk-monitor.d/sda-errors.csvexit 0 | Device nodes for drives aren't guaranteed to be consistent across reboots. They're allocated on a first-seen basis, at boot time. This may vary due to hardware changes, kernel changes, module loading order, minor variations in timing, etc. If you want persistent device node naming, use the symlinks under /dev/disk/*/ . They will always point to the correct device node for the same device, no matter what order the kernel found it in. I prefer to use the symlinks in /dev/disk/by-id/ because they provide the device type (e.g. nvme or ata or usb), the device brand, model, and serial number. I print sticky labels with the serial numbers for each drive, so I can easily find one if it needs to be replaced without risking confusion with device node names. e.g. 
some of the SATA SSDs on one of my systems (partitions from these are used for its zfs rootfs pool): # ls -lF /dev/disk/by-id/ata-Crucial* | grep -v partlrwxrwxrwx 1 root root 9 May 9 20:06 /dev/disk/by-id/ata-Crucial_CT275MX300SSD1_163313AAxxx -> ../../sdllrwxrwxrwx 1 root root 9 May 9 20:06 /dev/disk/by-id/ata-Crucial_CT275MX300SSD1_163313AAExxx -> ../../sdqlrwxrwxrwx 1 root root 9 May 9 20:06 /dev/disk/by-id/ata-Crucial_CT275MX300SSD1_163313AAFxxx -> ../../sdolrwxrwxrwx 1 root root 9 May 9 20:06 /dev/disk/by-id/ata-Crucial_CT275MX300SSD1_163313AB0xxx -> ../../sdp# zpool status ganesh pool: ganesh state: ONLINE scan: scrub repaired 0B in 00:22:42 with 0 errors on Sun May 9 00:46:44 2021config: NAME STATE READ WRITE CKSUM ganesh ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 ata-Crucial_CT275MX300SSD1_163313AADxxx-part5 ONLINE 0 0 0 ata-Crucial_CT275MX300SSD1_163313AAExxx-part5 ONLINE 0 0 0 mirror-1 ONLINE 0 0 0 ata-Crucial_CT275MX300SSD1_163313AAFxxx-part5 ONLINE 0 0 0 ata-Crucial_CT275MX300SSD1_163313AB0xxx-part5 ONLINE 0 0 0errors: No known data errors These symlinks will be 100% consistent across every reboot (unless you remove or replace the drive, of course). Whenever a given drive is in the system, the exact same symlinks will be created. And symlinks for each partition on it, too. BTW, these symlinks are made by udev rules. On my Debian system, /lib/udev/rules.d/60-persistent-storage.rules . You can write your own rules if you want your own naming scheme instead of, or in addition to, these ones. There's not many reasons to want to do that, but you can if you need to. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/649222",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/205853/"
]
} |
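Applied to the SMART script from the question, the fix is simply to pass the by-id symlinks to smartctl instead of /dev/sda and /dev/sdb. A sketch of the relevant part; the two IDs below are made-up placeholders, so substitute the real names shown by ls -l /dev/disk/by-id/ on that machine:

#!/bin/sh
tmpfile=$(mktemp -q)
# placeholder IDs -- copy the real ata-...SERIAL entries from /dev/disk/by-id/
for disk in ata-ExampleSSD_Model_SERIAL111 ata-ExampleSSD_Model_SERIAL222; do
    smartctl -A "/dev/disk/by-id/$disk" > "$tmpfile"   # always the same physical drive, whichever sdX it got today
    # ... same awk | tr | sed post-processing as before, writing to "/var/log/disk-monitor.d/$disk-errors.csv"
done
rm -f "$tmpfile"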
649,363 | It tried to grep strings with number <4 in 3rd column.My data: 52343523412312;52343523412312;4 52343523412312;52343523412312;452343523412312;52343523412312;452343523262412;52343523262412;3 I tried AWK: awk -F; '$3!="4"' But still receive an error - awk: option requires an argument -- F What I'm doing wrong? | A few things. Your shell uses ; as the command separator, so you need to quote it (or escape it with \ ) for your command. Also, you shouldn't quote the 4 as it's a number. Lastly, you wanted "less than 4", not "not equal to 4". So, overall, you can do: awk -F';' '$3<4' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/649363",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/471343/"
]
} |
649,377 | Why I am getting this error when I want to load the ebpf program into kernel?? ebpf_prog.c: #include <bpf/bpf_helpers.h>#include <bpf/libbpf.h>int main(int argc, char **argv) { struct bpf_object *obj; int map_fd, prog_fd; int i, sock; FILE *f; if (bpf_prog_load("ebpf_prog.o", BPF_PROG_TYPE_SOCKET_FILTER, &obj, &prog_fd)){ printf("The kernel didn't load the BPF program\n"); return -1; } return 0;} load_prog.c: #include <linux/bpf.h>#include <bpf/bpf_helpers.h>#include <bpf/libbpf.h>int main(int argc, char **argv) { struct bpf_object *obj; int map_fd, prog_fd; int i, sock; FILE *f; if (bpf_prog_load("ebpf_prog.o", BPF_PROG_TYPE_SOCKET_FILTER, &obj, &prog_fd)){ printf("The kernel didn't load the BPF program\n"); return -1; } return 0;} Error: $ gcc ebpf_prog.c -c -o ebpf_prog.o$ gcc load_prog.c -o load_prog -lbpf$ ./load_proglibbpf: elf: sock_example.o is not a valid eBPF object fileThe kernel didn't load the BPF program what's wrong with my code?? | A few things. Your shell uses ; as the command separator, so you need to quote it (or escape it with \ ) for your command. Also, you shouldn't quote the 4 as it's a number. Lastly, you wanted "less than 4", not "not equal to 4". So, overall, you can do: awk -F';' '$3<4' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/649377",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/468844/"
]
} |
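On the eBPF question above: the object libbpf rejects was produced by the host gcc, so it is native x86-64 code rather than BPF bytecode, and the posted ebpf_prog.c is a copy of the user-space loader (a main() calling bpf_prog_load) rather than a kernel-side program. A hedged sketch of the usual build split; the kernel-side file would contain a SEC("socket")-annotated filter function plus a license string instead of main():

# kernel-side program: build with clang for the BPF target, not with the host gcc
clang -O2 -g -target bpf -c ebpf_prog.c -o ebpf_prog.o
file ebpf_prog.o          # recent versions of file report this as an eBPF ELF, not x86-64

# user-space loader: ordinary native build linked against libbpf
gcc load_prog.c -o load_prog -lbpf
sudo ./load_prog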
649,392 | I’ve installed antiX 19.3 recently on a 16-yro (or older), laptop. One issue I’ve been having is that the thing keeps going to sleep! Every 30-45 seconds or so, it goes into sleep mode; and comes back up on a keypress. This includes even the boot sequence: While running the init scripts for runlevel 5, this already happens once. It continues after my desktop environment (IceWM) has loaded. I've read this highly related question , and found a workaround: Completely disable ACPI and APM on the grub2 boot line for the kernel: acpi=off apm=off . But that’s not a good solution, because it is important for the laptop to go to sleep when unused; and you want fan speed control etc. Another suggestion there involve systemd facilities - but my distribution doesn't use systemd. What else can I do? Also, what could be the cause of this? Here's the repeating segment of my dmesg: [Wed May 12 17:11:00 2021] VFS: busy inodes on changed media or resized disk sr0[Wed May 12 17:11:26 2021] PM: suspend entry (deep)[Wed May 12 17:11:26 2021] PM: Syncing filesystems ... done.[Wed May 12 17:11:26 2021] Freezing user space processes ... (elapsed 0.001 seconds) done.[Wed May 12 17:11:26 2021] OOM killer disabled.[Wed May 12 17:11:26 2021] Freezing remaining freezable tasks ... (elapsed 0.001 seconds) done.[Wed May 12 17:11:26 2021] Suspending console(s) (use no_console_suspend to debug)[Wed May 12 17:11:26 2021] sd 0:0:0:0: [sda] Synchronizing SCSI cache[Wed May 12 17:11:26 2021] sd 0:0:0:0: [sda] Stopping disk[Wed May 12 17:11:28 2021] ACPI: EC: interrupt blocked[Wed May 12 17:11:28 2021] ACPI: Preparing to enter system sleep state S3[Wed May 12 17:11:28 2021] ACPI: EC: event blocked[Wed May 12 17:11:28 2021] ACPI: EC: EC stopped[Wed May 12 17:11:28 2021] PM: Saving platform NVS memory[Wed May 12 17:11:28 2021] Disabling non-boot CPUs ...[Wed May 12 17:11:28 2021] ACPI: Low-level resume complete[Wed May 12 17:11:28 2021] ACPI: EC: EC started[Wed May 12 17:11:28 2021] PM: Restoring platform NVS memory[Wed May 12 17:11:28 2021] ACPI: Waking up from system sleep state S3[Wed May 12 17:11:28 2021] ACPI: EC: interrupt unblocked[Wed May 12 17:11:28 2021] usb usb2: root hub lost power or was reset[Wed May 12 17:11:28 2021] usb usb3: root hub lost power or was reset[Wed May 12 17:11:28 2021] usb usb4: root hub lost power or was reset[Wed May 12 17:11:28 2021] 8139too 0000:01:00.0 eth0: link up, 100Mbps, full-duplex, lpa 0xC5E1[Wed May 12 17:11:28 2021] sd 0:0:0:0: [sda] Starting disk[Wed May 12 17:11:28 2021] ACPI: EC: event unblocked[Wed May 12 17:11:28 2021] ata1.00: ACPI cmd ef/03:0c:00:00:00:a0 (SET FEATURES) filtered out[Wed May 12 17:11:28 2021] ata1.00: ACPI cmd ef/03:45:00:00:00:a0 (SET FEATURES) filtered out[Wed May 12 17:11:28 2021] ata2.00: ACPI cmd ef/03:0c:00:00:00:a0 (SET FEATURES) filtered out[Wed May 12 17:11:28 2021] ata2.00: ACPI cmd ef/03:42:00:00:00:a0 (SET FEATURES) filtered out[Wed May 12 17:11:29 2021] usb 3-2: reset full-speed USB device number 2 using uhci_hcd[Wed May 12 17:11:29 2021] firewire_core 0000:01:02.0: rediscovered device fw0[Wed May 12 17:11:30 2021] OOM killer enabled.[Wed May 12 17:11:30 2021] Restarting tasks ... done.[Wed May 12 17:11:30 2021] PM: suspend exit[Wed May 12 17:11:35 2021] VFS: busy inodes on changed media or resized disk sr0[Wed May 12 17:12:01 2021] PM: suspend entry (deep) Notes: I should mention that this did not happen with the Windows XP installation which the laptop used to have. 
The laptop’s battery is almost dead, so I only run it with mains power plugged in. I tried switching the kernel version from 4.9.something to 4.19.something (antix-packaged images); no effect. Laptop info: Clevo M3CW, Pentium M 1.6GHz, 1 GB memory, 40GB HDD. Has a built-in CD which is giving me another kind of trouble that's probably unrelated. | A few things. Your shell uses ; as the command separator, so you need to quote it (or escape it with \ ) for your command. Also, you shouldn't quote the 4 as it's a number. Lastly, you wanted "less than 4", not "not equal to 4". So, overall, you can do: awk -F';' '$3<4' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/649392",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34868/"
]
} |
649,408 | Never thought this would happen to me, but there you go. ¯\_(ツ)_/¯ I ran a build script from a repository inside the wrong directory without looking at the source first. Here's the script Scripts/BuildLocalWheelLinux.sh : cd ../Dependencies/cpythonmkdir debugcd debug../configure --with-pydebug --enable-sharedmakecd ../../..cd ..mkdir -p cmake-build-localcd cmake-build-localrm -rf *cmake .. -DMVDIST_ONLY=True -DMVPY_VERSION=0 -DMVDPG_VERSION=local_buildmake -jcd ..cd Distributionpython3 BuildPythonWheel.py ../cmake-build-local/[redacted]/core.so 0python3 -m ensurepippython3 -m pip install --upgrade pip[more pip install stuff]python3 -m setup bdist_wheel --plat-name manylinux1_x86_64 --dist-dir ../distcd ..cd Scripts The dangerous part seems to be mkdir -p cmake-build-localcd cmake-build-localrm -rf * But thinking about it, it actually seems like it couldn't possibly go wrong. The way you're supposed to run this script is cd Scripts; ./BuildLocalWheelLinux.sh . When I ran it the first time, it showed an error on the very last line (as I learned afterwards). I was in a hurry, so I though "maybe the docs are outdated, I'll try running from the project root instead. So I ran ./Scripts/BuildLocalWheelLinux.sh . Suddenly, vscodes theme and zoom level changed, my zsh terminal config was reset, terminal fonts were set to default, and I Ctrl+C'd once I realized what was happening. There are some files remaining, but there's no obvious pattern to them: $ ls -latotal 216drwx------ 27 felix felix 4096 May 12 18:08 .drwxr-xr-x 3 root root 4096 Apr 15 16:39 ..-rw------- 1 felix felix 12752 Apr 19 11:07 .bash_history-rw-r--r-- 1 felix felix 3980 Apr 15 13:40 .bashrcdrwxrwxrwx 7 felix felix 4096 May 12 18:25 .cachedrwx------ 8 felix felix 4096 May 12 18:26 .configdrwx------ 3 root root 4096 Apr 13 21:40 .dbusdrwx------ 2 felix felix 4096 Apr 30 12:18 .dockerdrwxr-xr-x 8 felix felix 4096 Apr 15 13:40 .dotfiles-rw------- 1 felix felix 8980 Apr 13 18:10 examples.desktop-rw-r--r-- 1 felix felix 196 Apr 19 15:19 .gitconfig-rw-r--r-- 1 felix felix 55 Apr 16 13:56 .gitconfig.old-rw-r--r-- 1 felix felix 1040 Apr 15 13:40 .gitmodulesdrwx------ 3 felix felix 4096 May 6 10:10 .gnupg-rw-r--r-- 1 felix felix 1848 May 5 14:24 heartbeat.tcl-rw------- 1 felix felix 1610 Apr 13 20:36 .ICEauthoritydrwxr-xr-x 5 felix felix 4096 Apr 21 16:39 .ipythondrwxr-xr-x 2 felix felix 4096 May 4 09:35 .jupyter-rw------- 1 felix felix 161 Apr 27 14:23 .lesshstdrwx------ 3 felix felix 4096 May 12 18:08 .local-rw-r--r-- 1 felix felix 140 Apr 29 17:54 minicom.logdrwx------ 5 felix felix 4096 Apr 13 18:25 .mozilladrwxr-xr-x 2 felix felix 4096 Apr 13 18:10 Musicdrwxr-xr-x 6 felix felix 4096 May 12 17:16 Nextcloud-rw-r--r-- 1 felix felix 52 Apr 16 11:43 .nix-channels-rw------- 1 felix felix 1681 Apr 20 10:33 nohup.outdrwx------ 3 felix felix 4096 Apr 15 11:16 .pki-rw------- 1 felix felix 946 Apr 16 11:43 .profiledrwxr-xr-x 2 felix felix 4096 Apr 13 18:10 Publicdrwxr-xr-x 2 felix felix 4096 May 12 18:08 .pylint.d-rw------- 1 felix felix 1984 May 12 18:06 .pythonhist-rw-r--r-- 1 felix felix 2443 Apr 19 13:40 README.mddrwxr-xr-x 13 felix felix 4096 May 12 18:08 reposdrwxr-xr-x 6 felix felix 4096 Apr 19 11:08 snapdrwx------ 3 felix felix 4096 May 5 15:33 .sshdrwxr-xr-x 5 felix felix 4096 Apr 26 17:39 .stm32cubeidedrwxr-xr-x 5 felix felix 4096 May 5 15:52 .stm32cubemxdrwxr-xr-x 2 felix felix 4096 Apr 23 11:44 .stmcubedrwxr-xr-x 2 felix felix 4096 Apr 13 18:10 Templatesdrwxr-xr-x 3 felix felix 4096 Apr 19 11:57 testdrwxr-xr-x 2 felix felix 
4096 Apr 13 18:10 Videos-rw------- 1 felix felix 14313 May 12 10:45 .viminfo-rw-r--r-- 1 felix felix 816 Apr 15 13:40 .vimrcdrwxr-xr-x 3 felix felix 4096 Apr 16 12:08 .vscode-rw-r--r-- 1 felix felix 2321 Apr 19 18:47 weird_bug.txt-rw-r--r-- 1 felix felix 162 Apr 15 13:40 .xprofile .config is gone, as well as some standard XDG dirs like Pictures and Desktop, but .bashrc is still there. .nix-channels is still there, but .nix-defexpr was nuked. So, this leads me to two questions: What went wrong? I'd like to fix this build script and make a PR to prevent this from happening in the future. What order were the files deleted in? Obviously not in alphabetical order, but * expands in alphabetical order, so something else is going on here, it seems. | Ouch. You aren't the first victim . What went wrong? Starting in your home directory, e.g. /home/felix , or even in /home/felix/src or /home/felix/Downloads/src . cd ../Dependencies/cpython Failed because there is no ../Dependencies . mkdir debugcd debug You're now in the subdirectory debug of the directory you started from. ../configure --with-pydebug --enable-sharedmake Does nothing because there's no ../configure or make . cd ../../..cd .. If you started out no more than three directory levels deep, with cd debug reaching a fourth level, the current directory is now the root directory. If you started out four directory levels deep the current directory is now /home . mkdir -p cmake-build-local This fails since you don't have permission to write in / or /home . cd cmake-build-local This fails since there is no directory cmake-build-local . We now get to… What order were the files deleted in? rm -rf * This tries to recursively delete every file in the current directory, which is / or /home . The home directories are enumerated in alphabetical order, but the files underneath are enumerated in the arbitrary order of directory traversal. It's the same order as ls --sort=none (unless rm decides to use a different order for some reason). Note that this order is generally not preserved in backups, and can change when a file is created or removed in the directory. How to fix the script First, almost any shell script should have set -e near the top. set -e causes the script to abort if a command fails. (A command fails if its exit status is nonzero.) set -e is not a panacea, because there are circumstances where it doesn't go into effect. But it's the bare minimum you can expect and it would have done the right thing here. (Also the script should start with a shebang line to indicate which shell to use, e.g. #!/bin/sh or #!/bin/bash . But that wouldn't help with this problem.) rm -rf * , or variants like rm -rf $foo.* (what if $foo turns out to be empty?), are fragile. Here, instead of mkdir -p cmake-build-localcd cmake-build-localrm -rf * it would be more robust to just remove and re-create the directory. (This would not preserve the permissions on the directory, but here this is not a concern.) rm -rf cmake-build-localmkdir cmake-build-localcd cmake-build-local Another way is more robust against deleting the wrong files, but more fragile against missing files to delete: delete only files that are known to have been built, by running make clean which has rm commands for known build targets and for known extensions (e.g. rm *.o is ok). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/649408",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67771/"
]
} |
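Pulling the answer's suggestions together, the fragile part of BuildLocalWheelLinux.sh could start out roughly like this (a sketch, not the project's actual script; cmake options abbreviated):

#!/bin/sh
set -e                             # abort as soon as any command fails

cd ../Dependencies/cpython         # with set -e, running from the wrong directory stops the script right here
mkdir -p debug
cd debug
../configure --with-pydebug --enable-shared
make

cd ../../..
cd ..
rm -rf cmake-build-local           # recreate the known directory instead of running `rm -rf *` inside it
mkdir cmake-build-local
cd cmake-build-local
cmake .. -DMVDIST_ONLY=True
make -j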
649,440 | i'm working on big script and i'm stucking in this part, if anyone can help me please this is the file sample 31:49.9,9.92,TCP ,1_19,490,EXT_SERVER,22,5,257,1,.ASF,0,normal and this is the part of my script read -p "Please input the function itself ex, > : " functionread -p "Please input the value: " value if i used like this, working fine with me cat temp | awk -F',' -v val="$value" '{if($8 > val) print $0}' > $destfile but when i try use like this cat temp | awk -F',' -v val="$value" -v fn="$function" '{if($8 fn val) print $0}' > $destfile Thanks | Not many programming languages allow producing an operator from the contents of a variable at runtime. When the code is parsed, the expression in the condition of the if-statement has to be syntactically valid, and $ number variable variable isn't. (Of course in the shell and test / [ you could do e.g. [ "$a" "$op" "$b" ] , but that's because [ is a command, and parses its arguments after the shell has expanded the variables.) You have some options. You could have the shell expand the operator before awk parses the code, so something like this: cat temp | awk -F',' -v val="$value" ' {if($8 '"$function"' val) print $0}' > $destfile then if $function contained e.g. < , awk would see {if($8 < val) ... . If you do this, you SHOULD verify that $function is an acceptable operator before running awk. Otherwise your users could get surprising syntax errors, or worse. As another alternative, you could have the user enter both a lower and upper limit, and check both in the code, always. Substitute e.g. -inf / +inf or some other sufficiently low/high values if the user doesn't want to enter a lower/upper limit respectively. read -p "Please enter lower limit: " loread -p "Please enter upper limit: " hicat temp | awk -F',' -v lo="${lo:--inf}" -v hi="${hi:-+inf}" ' {if (lo <= $8 && $8 <= hi) print $0}' > $destfile ( -inf / +inf seem to work directly in GNU awk and Busybox, mawk appears to require forcing them to numbers with lo + 0 <= $8 etc.) I suppose in GNU awk a third choice would be making functions for all different comparison operators you want to provide, and then using indirect function calls to call one of them, based on a name given in a variable. E.g. for less-than and greater-than tests: read -p "Please input the function (lt or gt): " opread -p "Please input the value: " valuecat temp | gawk -F',' -v op="$op" -v val="$value" ' function lt(a, b) { return a < b; } function gt(a, b) { return a > b; } {if (@op($8, val)) print $0}' The downside of this is of course that you'll have to explicitly create all the functions, and if the user enters an invalid function name, you they get an error. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/649440",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/442487/"
]
} |
649,524 | oldfile=test.csvread -p "Enter File location: "$savelocation1read -p "Enter File name: " $newfile1grep "foobar" $oldfile > $savelocation1/$newfile1awk ' BEGIN {FS=","} { printf "%-5s %-10s" \n, $1, $3 } ' < $savelocation1/$newfile1 grep will create a new file called $newfile that only contains the lines "foobar" from the $oldfile but then when i do an awk to print the 1st and 3rd columns into the $newfile1 the problem with this is that it doesn't write the output of awk to the $newfile1 but it just prints the results to the terminal. Edit - I used a tempory file to store then output then transfer it to the original file. however this only works for grep and not any awk statement for some reason. $savelocation1/$oldfile> $savelocation1/tempfile.csv && mv $savelocation1/tempfile.csv $savelocation1/$newfile | If you use GNU awk: gawk -i inplace '...' filename # NOT redirected If you install the moreutils package awk '...' filename | sponge filename Use a temp file (only overwrite the original if the awk process completes successfully) t=$(mktemp)awk '...' filename >"$t" && mv "$t" filename BUT, you don't need separate grep and awk here: awk -F, -v pattern="foobar" ' $0 ~ pattern { printf "%-5s %-10s\n", $1, $3 }' $oldfile > $savelocation1/$newfile1 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/649524",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/471138/"
]
} |
649,525 | I'm sure I once had a way to read email files from the commandline that was fall-off-a-log simple but I can't for the life of me find it again now. I have files in MailDir format, I wish to view their contents (headers, body (HTML/plain), MIME-decoded, extract attachments maybe). These aren't my emails; it's not that I want a MUA capable of fetching, sorting, sending mail for me - they're just raw files that I need to inspect. | The package maildir-utils (at least it's called so in Debian) contains a program called mu , that has a nice functionality to display the contents of a Maildir mail message. It displays only the headers, the text/plain part plus list of attachments. See man page . Example: mu view /path/to/email-file . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/649525",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23542/"
]
} |
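mu can also pull the MIME parts out of such a file, which covers the "extract attachments" part of the question; the message path below is a made-up Maildir filename and the option names are those of mu 1.x, so check mu extract --help on the installed version:

mu view /path/to/Maildir/cur/1620800000.12345.host:2,S
mu extract --save-attachments --target-dir=/tmp/parts \
    /path/to/Maildir/cur/1620800000.12345.host:2,S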
649,526 | I am building a shell script that uses a JSON file. { "property1": true, "list": [ { "id": 1, "name": "APP1" }, { "id": 2, "name": "APP2" } ], "property2": false} I need to use shell script to read the name from the list and remove its parent object from list. Basically I need to remove the object with name APP1 from the list using shell. Editing JSON structure is not an option. | Using the del function in jq : jq 'del(.list[] | select(.name=="APP1"))' If you wanted to pass the app name as a shell variable to jq you can use the --arg option: jq --arg name "$name" 'del(.list[] | select(.name==$name))' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/649526",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/471518/"
]
} |
649,529 | I don't know how to print a variable inside a string of a string. I first started off without a variable like this and it worked perfectly: #!/bin/bashssh 1.1.1.1 $'sudo -H -u apache bash -c \'cd ~/html; echo development > stuff.text\'' When I login to my server at 1.1.1.1 , I can see that the file stuff.text has the word development . Perfect. Then I made this bash script: #!/bin/bashBRANCH=developmentssh 1.1.1.1 $'sudo -H -u apache bash -c \'cd ~/html; echo ${BRANCH} > stuff.text\'' But running this bash script causes an empty stuff.text file. I also tried each of these commands, but they all gave syntax/parse errors: ssh 1.1.1.1 $`sudo -H -u apache bash -c 'cd ~/html; echo ${BRANCH} > stuff.text'`ssh 1.1.1.1 ${`sudo -H -u apache bash -c 'cd ~/html; echo ${BRANCH} > stuff.text'`}ssh 1.1.1.1 ${sudo -H -u apache bash -c 'cd ~/html; echo ${BRANCH} > stuff.text'}ssh 1.1.1.1 ${"sudo -H -u apache bash -c 'cd ~/html; echo ${BRANCH} > stuff.text'"} How do I write a variable inside the string of another string? | Using the del function in jq : jq 'del(.list[] | select(.name=="APP1"))' If you wanted to pass the app name as a shell variable to jq you can use the --arg option: jq --arg name "$name" 'del(.list[] | select(.name==$name))' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/649529",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56021/"
]
} |
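On the ssh quoting question above: $BRANCH never expands because it sits inside the single-quoted part of a $'...' string, so the remote side receives the literal text ${BRANCH}. Letting the local shell expand it is the usual fix; a sketch, assuming $BRANCH only ever holds a simple word such as development (values with spaces or quotes need more careful escaping):

#!/bin/bash
BRANCH=development

# outer double quotes: the local shell substitutes $BRANCH before ssh sends the command;
# the inner single-quoted string is what `sudo -H -u apache bash -c` runs on the remote host
ssh 1.1.1.1 "sudo -H -u apache bash -c 'cd ~/html; echo $BRANCH > stuff.text'"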
649,598 | Microdnf's README says that it is "A minimal dnf for (mostly) Docker containers that uses libdnf and hence doesn't require Python." It doesn't list microdnf 's features and doesn't expand on what sense it is "minimal" compared to dnf . Red Hat's Atomic Base Image annoucement mentions that "Microdnf is not a full yum replacement" but it also doesn't expand on what's missing. There is a man page online -- I'm not sure how official it is -- but it also doesn't expand on what's the gap compared to dnf . The question is: what is the gap between microdnf and dnf ? What can only be done with dnf and not with microdnf ? Is there a resource that lists that? | It doesn't list microdnf's features and doesn't expand on what sense it is "minimal" compared to dnf. As minimal as stated: no Python and no Python module dependencies. Which are quite many packages to think of it. rpm -q --requires dnfpython3-dnf = 4.2.23-4.el8 rpm -q --requires python3-dnfpython3-gpgpython3-hawkey >= 0.48.0-3python3-libcomps >= 0.1.8python3-libdnfpython3-libdnf >= 0.48.0-3python3-rpm >= 4.14.2-35 The actual dependency tree will expand far more if each python module dependency is checked. what is the gap between microdnf and dnf I suppose the actual gap will come also from the fact of not using Python: There is no Python interface, and thus you can't invoke microdnf from a Python code using a consistent API. You'll have to resort to using the subprocess Python module Actual dnf can be expanded with many additional commands provided by the dnf-plugins-core and other plugin packages. You may not expect any of those features in microdnf . They will hardly ever make it to microdnf . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/649598",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56711/"
]
} |
649,628 | let's say, I have this file with five hundred lines of text. onetwothreefourfivesixseveneightnineten...five-hundred If I wanted to replace the fifth line after match with some string, I would just do this: sed '/two/{n;n;n;n;n;s/.*/MODIFIED/}' inputfile output: onetwothreefourfivesixMODIFIEDeightnineten...five-hundred But what if I want to replace the 60th line after match?I don't want to write 'n' sixty times. I tried playing around with x, h/H, g/G, and ranges, but I can't still get my desired output. | Workaround with awk : awk '/two/{ n=NR+5 } NR==n{ sub(/.*/, "MODIFIED") }1' file or if you want to replace the line awk '/two/{ n=NR+5 } NR==n{ $0="MODIFIED" }1' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/649628",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/471609/"
]
} |
649,639 | I need to read PCI device information from files. But it gives unusable output when I use command like that: cat /proc/bus/pci/05/00.0 Output: �h�� How could I fix this? OS: Debian-like Linux x64, Kenel 4.19 | Workaround with awk : awk '/two/{ n=NR+5 } NR==n{ sub(/.*/, "MODIFIED") }1' file or if you want to replace the line awk '/two/{ n=NR+5 } NR==n{ $0="MODIFIED" }1' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/649639",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/461582/"
]
} |
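On the PCI question above: the files under /proc/bus/pci/ hold the device's raw configuration space, so cat shows binary bytes by design rather than anything being broken. The usual way in is through a tool; the device address below is the one from the question:

lspci -s 05:00.0 -vv                          # decoded view of that device
lspci -s 05:00.0 -xxx                         # hex dump of its whole config space
od -Ax -tx1 /proc/bus/pci/05/00.0 | head      # raw bytes, if the file itself must be read
setpci -s 05:00.0 VENDOR_ID DEVICE_ID         # individual registers (may need root)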
649,759 | While there is usually no need for more than the 64k available ports, I am interested in the PoC that having a port number on 64 bits would mitigate the regular attacks on the access ports (ssh, vpn...). Having a 64b port makes it almost impossible to randomly attack a service, targeting either DoS or a login. Like ssh -p 141592653589793238 my.site.com Is it possible to configure Linux to use 64 bit ports? (of course both client and server should be configured) and practically Would that disturb the Internet equipment? ('transport' is OSI layer 4, above IP, thus the routing itself should not be impacted, but some devices go up to the upper layers for analysis / malware detection... ; a 64 bit ports Linux box would act as home router) | Is it possible to configure Linux to use 64 bit ports? You cannot change a parameter to use 64bit ports in TCP/UDP. You could create similar protocols, but you would only be able to communicate with your modified hosts and it would not be TCP / UDP, but a new set of protocols, let's say TCP64 / UDP64. Here are just some of the things you'd have to add for these protocols to work, just to start, before even considering memory impact and a ton of other issues: a definition of the the TCP64 ( a modification of the current TCP segment ) a new family AF_INET capable of holding the extended ports, along with the kernel code to handle it (if you're thinking about copy/paste, note that you have to change, at the very least, a list the structure definitions, type definitions and calls to htons() or ntohs() for example code to all userspace programs meant to use the new stacks, including those at the edges of the network, such as firewalls if you plan to filter the traffic. Since it will be a different set of protocols, with their own IP numbers , they would not disturb the routing nodes, though they could be dropped by them along the route, because the IP protocol number would not be known. As for mitigation: software like fail2ban and custom service ports (in the 16-bit range) are usual techniques, though not the only ones . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/649759",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3527/"
]
} |
649,776 | I am a new learner in this field.I want to subtract few seconds from date_time. I used this code to extract data and then subtract the seconds.BUT I can not save this output into a variable. Could you please help me to save this? for stnm in H33 do cd $stnm for file in $input_dir/$stnm/2018/350.hyd echo $file do dat=`saclst kzdate f $file | awk '{print substr($2,1,10)}'` time=`saclst kztime f $file| awk '{print substr($2,1,11)}'` echo $dat $time "############################" # new_time= date -d "$(date -Iseconds -d "$dat $time" ) - 2 minutes - 0.05 seconds" new_time= date -d " $dat $time Z - 2 minutes - 0.05 seconds" +%Y/%m/%d_%H:%M:%S | awk '{print substr($1,1,24)}' echo $dat $time $new_time "####################" donedone Output /NAS2/Abbas/TS14_OBS/H33/2018/350.hyd2018/12/16 00:00:00.00 ############################2018/12/15_23:57:592018/12/16 00:00:00.00 #################### | Is it possible to configure Linux to use 64 bit ports? You cannot change a parameter to use 64bit ports in TCP/UDP. You could create similar protocols, but you would only be able to communicate with your modified hosts and it would not be TCP / UDP, but a new set of protocols, let's say TCP64 / UDP64. Here are just some of the things you'd have to add for these protocols to work, just to start, before even considering memory impact and a ton of other issues: a definition of the the TCP64 ( a modification of the current TCP segment ) a new family AF_INET capable of holding the extended ports, along with the kernel code to handle it (if you're thinking about copy/paste, note that you have to change, at the very least, a list the structure definitions, type definitions and calls to htons() or ntohs() for example code to all userspace programs meant to use the new stacks, including those at the edges of the network, such as firewalls if you plan to filter the traffic. Since it will be a different set of protocols, with their own IP numbers , they would not disturb the routing nodes, though they could be dropped by them along the route, because the IP protocol number would not be known. As for mitigation: software like fail2ban and custom service ports (in the 16-bit range) are usual techniques, though not the only ones . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/649776",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/471808/"
]
} |
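On the date question above: the assignment never happens because of the space after the equals sign; new_time= date ... runs date with an empty new_time in its environment, which is exactly why the computed timestamp shows up on the terminal on its own line. Command substitution does the capture. Keeping the GNU date expression from the script unchanged (the trailing awk substr was a no-op and can be dropped):

new_time=$(date -d "$dat $time Z - 2 minutes - 0.05 seconds" '+%Y/%m/%d_%H:%M:%S')
echo "$dat $time $new_time ####################"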
649,799 | How can I get the details and transpose it to horizontal form? Every record ends after Couse . Couse will never be blank or null. Note: These four headers will be there for the below data: Name, City, Age, Couse If you see the second record, there isn't any "Name": "" -> missing so it should be null in place of that and the remaining will be appended after that with a pipe separated like this: null | Ors | 11 | MB I have data like below in the demo.txt file "Name":"asxadadad ,aaf dsf""City":"Mum""Age":"23""Couse":"BBS""City":"Ors""Age":"11""Couse":"MB""Name":"adad sf""City":"Kol""Age":"21""Couse":"BB""Name":"pqr""Age":"21""Couse":"NN" Expected Output: asxadadad ,aaf dsf | Mum | 23 | BBSnull | Ors | 11 | MBadad sf | Kol | 21 | BBpqr | null | 21 | NN I tried the below code: but not working my logic counter=0var_0='Couse' while read -r line echo "$line" counter=$(( counter + 1 )) var_1=`echo "$line" | grep -oh "Couse"` if [ $var_0 == $var_1 ] then head -$counter demo.txt > temp.txt sed -i '1,$counter' demo.txt counter = 0 else echo "No thing to do" fi done < demo.txt | Using any awk in any shell on every Unix box: $ cat tst.awkBEGIN { numTags = split("Name City Age Couse",nums2tags) for (tagNr=1; tagNr<=numTags; tagNr++) { tag = nums2tags[tagNr] tags2nums[tag] = tagNr wids[tagNr] = ( length(tag) > length("null") ? length(tag) : length("null") ) } OFS=" | "}(NR==1) || (prevTag=="Couse") { numRecs++}{ gsub(/^"|"$/,"") tag = val = $0 sub(/".*/,"",tag) sub(/[^"]+":"/,"",val) tagNr = tags2nums[tag] vals[numRecs,tagNr] = val wid = length(val) wids[tagNr] = ( wid > wids[tagNr] ? wid : wids[tagNr] ) prevTag = tag}END { # Uncomment these 3 lines if youd like a header line printed: # for (tagNr=1; tagNr<=numTags; tagNr++) { # printf "%-*s%s", wids[tagNr], nums2tags[tagNr], (tagNr<numTags ? OFS : ORS) # } for (recNr=1; recNr<=numRecs; recNr++) { for (tagNr=1; tagNr<=numTags; tagNr++) { val = ( (recNr,tagNr) in vals ? vals[recNr,tagNr] : "null" ) printf "%-*s%s", wids[tagNr], val, (tagNr<numTags ? OFS : ORS) } }} $ awk -f tst.awk fileasxadadad ,aaf dsf | Mum | 23 | BBSnull | Ors | 11 | MBadad sf | Kol | 21 | BBpqr | null | 21 | NN or if you didn't want to use a hard-coded list of tags (field/column names): $ cat tst.awkBEGIN { OFS=" | " }(NR==1) || (prevTag=="Couse") { numRecs++}{ gsub(/^"|"$/,"") tag = val = $0 sub(/".*/,"",tag) sub(/[^"]+":"/,"",val) if ( !(tag in tags2nums) ) { tagNr = ++numTags tags2nums[tag] = tagNr nums2tags[tagNr] = tag wids[tagNr] = ( length(tag) > length("null") ? length(tag) : length("null") ) } tagNr = tags2nums[tag] vals[numRecs,tagNr] = val wid = length(val) wids[tagNr] = ( wid > wids[tagNr] ? wid : wids[tagNr] ) prevTag = tag}END { for (tagNr=1; tagNr<=numTags; tagNr++) { printf "%-*s%s", wids[tagNr], nums2tags[tagNr], (tagNr<numTags ? OFS : ORS) } for (recNr=1; recNr<=numRecs; recNr++) { for (tagNr=1; tagNr<=numTags; tagNr++) { val = ( (recNr,tagNr) in vals ? vals[recNr,tagNr] : "null" ) printf "%-*s%s", wids[tagNr], val, (tagNr<numTags ? OFS : ORS) } }} $ awk -f tst.awk fileName | City | Age | Couseasxadadad ,aaf dsf | Mum | 23 | BBSnull | Ors | 11 | MBadad sf | Kol | 21 | BBpqr | null | 21 | NN Note that the order of the columns in the output for that second script will be the order those tags appear in the input which is why they need a header row to identify the values unless all tags are guaranteed to occur in the input in the order you want them output. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/649799",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/471836/"
]
} |