source_id (int64, values 1 to 74.7M) | question (string, lengths 0 to 40.2k) | response (string, lengths 0 to 111k) | metadata (dict)
---|---|---|---|
721,421 | This is working as I would expect : $ sudo -u romain bash -c 'a="b" ; echo $a'b$ But with -i the variable is not echoed, why? $ sudo -iu romain bash -c 'a="b" ; echo $a'$ I was wondering if -i was adding a level of bash indirection or variable interpolation but if it is, how can this be working? $ sudo -iu romain bash -c 'a="b" ; echo ${a}'b$ | When you use -i with sudo , the user's login shell, $SHELL , will be used, and will be invoked as a login shell. When you additionally give the command a command to run, as in sudo -u user -i 'some-command' ... sudo will run that command with $SHELL -c , meaning it needs to convert the list of arguments it gets itself into a single command line string that gets evaluated by the shell again. For this to work, it has to escape each character in some-command , except for alphanumerics, underscores, hyphens, and dollar signs. This means that sudo -u user -i bash -c 'a="b" ; echo ${a}' will be executed as the user user , escaped as the equivalent of $SHELL -c bash\ -c\ \'a\=\"b\"\ \;\ echo\ $\{a\}\' ... while using $a turns the command into $SHELL -c bash\ -c\ \'a\=\"b\"\ \;\ echo\ $a\' Note that in this last command, $a is expanded by the user's login shell before it can start bash -c . In the previous command, where ${a} is used, the $\{a\} is not a valid expansion, so the user's shell makes no expansion, and the inline bash -c shell sees ${a} and can expand it. This extra quoting that happens is explained in the sudo manual, in the section describing the -i option: -i, --login Run the shell specified by the target user's password database entry as a login shell. This means that login- specific resource files such as .profile, .bash_profile, or .login will be read by the shell. If a command is specified, it is passed to the shell as a simple command using the -c option. The command and any arguments are concatenated, separated by spaces, after escaping each character (including white space) with a backslash (‘\’) except for alphanumerics, underscores, hyphens, and dollar signs. If no command is specified, an interactive shell is executed. [...] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/721421",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/545639/"
]
} |
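A minimal way to reproduce the difference described in the answer above, assuming a target user named romain exists (as in the question) and sudo allows running commands as that user:

    sudo -iu romain bash -c 'a="b" ; echo ${a}'   # prints: b  (the inner bash expands ${a})
    sudo -iu romain bash -c 'a="b" ; echo $a'     # prints nothing useful: the login shell expanded the unset $a before bash -c ran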
721,464 | Need help in formatting below test data in unix. Sample data: Dan serv1 p1Dan serv2 p2Dan serv3 p3 Dus serv2 p1Dus serv3 p2Dus Serv5 p3 Tes serv3 p1Tes serv5 p3 Needed format: Name p1 p2 p3Dan serv1 serv2 serv3Dus serv2 serv3 serv5TEs serv3 Serv5 | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/721464",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/545683/"
]
} |
721,844 | I have two similar scripts with different names. One works fine but other throws error.Can anyone please tell me what is the issue? This is my test.sh scripts which works fine [nnice@myhost Scripts]$ cat test.sh #!/bin/bashfunction fun { echo "`hostname`"}fun [nnice@myhost Scripts]$ ./test.sh myhost.fedora Here is my another script demo.sh but it throws error [nnice@myhost Scripts]$ cat demo.sh #!/bin/bashfunction fun { echo "`hostname`"}fun[nnice@myhost Scripts]$ ./demo.sh bash: ./demo.sh: cannot execute: required file not found Both scripts having the same permissions [nnice@myhost Scripts]$ ll test.sh -rwxr-xr-x. 1 nnice nnice 65 Oct 21 10:47 test.sh[nnice@myhost Scripts]$ ll demo.sh -rwxr-xr-x. 1 nnice nnice 58 Oct 21 10:46 demo.sh | Your demo.sh script is a DOS text file. Such files have CRLF line endings, and that extra CR (carriage-return) character at the end of the line is causing you issues. The specific issue it's causing is that the interpreter pathname on the #! -line now refers to something called /bin/bash\r (with the \r symbolising a carriage-return, which is a space-like character, so it's usually not visible). This file is not found, so this is what causes your error message. To solve this, convert your script from a DOS text file to a Unix text file. If you are editing scripts on Windows, you can probably do this by configuring the Windows text editor to create Unix text files, but you may also use the dos2unix utility, available for most common Unix variants. $ ./scriptbash: ./script: cannot execute: required file not found $ dos2unix script$ ./scriptharpo.local Regarding your code: Please never do echo `some-command` or echo $(some-command) to output the output of some-command . Just use the command directly: #!/bin/shfun () { hostname}fun (Since the script now does not use anything requiring bash , I also shifted to calling the simpler /bin/sh shell.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/721844",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/314628/"
]
} |
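Two quick checks that usually confirm this diagnosis before reaching for dos2unix (the exact wording of the file output varies by version):

    file demo.sh                    # reports "... with CRLF line terminators" for DOS text files
    head -n 1 demo.sh | cat -A      # a trailing ^M$ on the #! line reveals the carriage return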
721,907 | I'd like to print out the first line of each .txt file that is more than a 100 bytes. So far, I've managed to gather the files that match the criteria but I don't know how to print out the files' first line of text. find -size +100c -name "*.txt" Or am I on the wrong track completely? | You need to run head -n 1 on each file: find . -size +100c -name '*.txt' -execdir head -n 1 {} \; or, if your find doesn’t support -execdir , find . -size +100c -name '*.txt' -exec head -n 1 {} \; If your head version supports -q , or you don’t care about the headers shown for each file when multiple files are processed in one invocation, you can make both variants slightly more efficient using … -exec head -q -n 1 {} + (See Race Conditions with -exec to understand the benefits of -execdir .) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/721907",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/546118/"
]
} |
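Spelled out with the options from the question, the batched form suggested at the end of the answer would look like this (a sketch; head -q is not available in every head implementation):

    find . -size +100c -name '*.txt' -exec head -q -n 1 {} +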
722,323 | I'm trying to find a way to lock a script based on a parameter given, but was unsuccessful in finding a proper answer. What I'm trying to achieve is prevent another user from running a script based on some parameter: so if user A executes the script with parameter JOHN_DOE (e.g: -d JOHN_DOE ) and user B executes it with parameter -d ANNA_DOE then it runs without any problems, but if user B tries to execute it with JOHN_DOE as parameter while the first instance of the script hasn't finished running then it does not allow the user B to run it. Is there a proper way to achieve this? | A tool such as flock can help manage locks. (It may not work with NFS, depending on whether you believe the documentation or the practice, and similarly may or may not work on SMB or indeed any other remote filesystem.) The documentation, man flock , does have several examples of use. Here's one of them tailored to your scenario. #!/bin/bash# Example of using flock(1) to provide a named exclusive lock# Parse command line optionswhile getopts 'd:' OPTdo case "$OPT" in d) lockParam="$OPTARG" ;; *) echo "Usage: ${0##*/} -d <parameter>" >&2; exit 1 esacdoneshift $(($OPTIND -1))# Sanitise lock parameter value (do not trust the user)lockName="$(printf "%s\n" "${lockParam^^}" | tr -cd '[:alnum:]\n')"lockFile="/tmp/lock.${##*/}.${lockName:-noname}"echo "Attempting lock with parameter '$lockParam' sanitised to '$lockName'" >&2( # Get the lock or report failure. See "man flock" for other options flock -n 9 || exit 9 # This section is managed by the exclusive lock. Your program code # would go here. echo "Achieved exclusive lock on '$lockFile'" >&2 sleep=10 echo "Waiting for $sleep second(s) to simulate activity" >&2 sleep $sleep # Exit status 0=ok, otherwise 1-8 is your choice of error codes echo "Releasing lock" >&2 exit 0 # End of exclusive lock section) 9>"$lockFile"ss=$?# Report on exit status from actual codeif [ $ss -eq 9 ]then echo "Failed to acquire lock" >&2fi# Exit with meaningexit $ss Make the script executable (if you call it lockeg then chmod a+x lockeg ), and run it ./lockeg -d JOHN_DOE | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/722323",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/546521/"
]
} |
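For comparison, flock(1) also has a one-shot form that wraps a single command while holding the lock, which can be enough when the fuller reporting of the script above is not needed. A rough sketch, reusing the sanitised parameter idea; some-command and its arguments are placeholders:

    # assumes $lockParam was parsed with getopts -d as in the script above
    lockName=$(printf '%s' "$lockParam" | tr -cd '[:alnum:]')
    flock -n "/tmp/lock.${lockName:-noname}" some-command --args ||
        echo "another run with parameter '$lockParam' is already in progress" >&2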
722,327 | I have a scenario where I need to print a line, but using an awk if to search for a number the is appened with a double colon with more numbers: See below example: test1 test2 37:375003 test3 test4test1 test2 38:375004 test3 test4test1 test2 39:375005 test3 test4test1 test2 40:375006 test3 test4test1 test2 41:375007 test3 test4 What I want to achieve is using the command like below: cat test_out.txt | awk "{if ($3 == 37~/\:*/ ) print $0;}" The above should give me the below line: test1 test2 37:375003 test3 test4 Getting the syntax error below: Syntax Error The source line is 1.The error context is {if ( >>> == <<<awk: 0602-502 The statement cannot be correctly parsed. The source line is 1. | You need to use the ~ binary operator whose syntax is: string ~ regexp To match a string against a regular expression, so: <test_out.txt awk '$3 ~ /^37:[[:digit:]]+$/' To print the records ( {print} , short for {print $0} being the default action) whose third field matches the ^37:[[:digit:]]+$ extended regexp. In ERE syntax: ^ matches at the start of the subject [...] : matches any character or collating element in the set. [:digit:] in the set above means any character classified as decimal digit in the locale (on most systems, that's limited to 0123456789). Change to 0123456789 in mawk which doesn't support those POSIX character classes or if you don't want to match other decimal digits. 0-9 would also work in mawk but could also match on other characters in some awk implementations. + is for one-or-more of the preceding thing. So here one-or-more digits $ matches at the end of the subject. If you don't care whether the part after 37: is made of digits or not, then the regexp is just ^37: ( 37: at the start of the subject). Another approach would be: <test_out.txt awk '$3 + 0 == 37' Where the + 0 numeric operation forces awk to try and convert $3 to a number, ignoring anything past the initial number. Then that would match on 37:anything , but also 37.0;whatever ¹, 3.7e+1 ¹, possibly 0x25#xxx with some awk implementations, +37+38 ... Using +$3 == 37 though standard, doesn't work with some awk implementations. For the value (here 37 ) to come from a shell variable, you could construct the regexp in the shell and pass it to awk via an ENVIRON ment variable: var=37ERE='^'$var':[[:digit:]]+$' <test_out.txt awk '$3 ~ ENVIRON["ERE"]' Or make an awk v ariable out of the shell variable²: var=37<test_out.txt awk -v n="$var" '$3 ~ "^" n ":[[:digit:]]+"' Avoid expanding the shell variable into the awk code as in: <test_out.txt awk '$3 ~ /^'"$var"':[[:digit:]]+$/' as that typically introduces command injection vulnerabilities (the worst type of vulnerability). Some comments on your attempt: as already noted by @RudyC , you used double quotes around your awk code. Shells perform parameter expansion inside those, so the $3 would be expanded to the value of the third argument to the shell script, and $0 to the name of the script. $3 == 37 ~ /\:*/ . == has higher precedence than ~ . So that's ($3 == 37) ~ /\:*/ . So that's matching the \:* regexp against the result of that comparison (1 or 0 depending on whether $3 is 37 or not) \:* as a regexp is unspecified as \: is unspecified. To match a literal : , it's : alone. :* would be 0 or more : s so match on anything since any string contains at least 0 : s. * in regexps matches on 0 or more of the previous thing. You may be confusing it with the * of shell wildcards that matches 0 or more characters. In regexps, 0 or more characters is .* , . 
being the operator to match a single character. awk statements are of the form condition {action} , where either condition or action can be omitted. In your case, you omitted the condition and used if in the action , and used {print $0} which happens to be the default action . While that works, that will look very awk ward to awk users. you used cat to con cat enate a single file which hardly makes sense. The shell can open the file by itself to make it the stdin of awk using redirection which saves a process and the need to shove the contents through a pipe. You could also pass the file name as argument to awk which can also open it by itself. ¹ assuming the decimal radix character is . and not , in the locale, at least with some awk implementations such as GNU awk in POSIX mode. ² beware that -v mangles backslashes, so using ENVIRON is safer in the general case. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/722327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186444/"
]
} |
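Running the first suggestion against the sample data from the question (saved as test_out.txt) gives the expected single line:

    $ awk '$3 ~ /^37:[[:digit:]]+$/' test_out.txt
    test1 test2 37:375003 test3 test4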
722,371 | #!/bin/bashcd /path-to-directorymd5=$(find . -type f -exec md5sum {} \; | sort -k 2 | md5sum)zenity --info \--title= "Calculated checksum" \--text= "$md5" The process of a recursive checksum calculation for a directory takes a while.The bash script doesn´t wait until the process is finished it just moves to the next command which is the dialog box, that displays the calculated checksum. So the dialog box shows a wrong checksum. Is there an option to tell the script to wait until the calculation of the checksum is finished?Furthermore, is there an option to pipe the progress of the checksum calculation to some kind of progress bar like in zenity for example? | As written, it would be waited for. For a pulsating progress bar: #! /bin/sh -export LC_ALL=Ccd /path/to/dir || exit{ md5=$( find . -type f -print0 | sort -z | xargs -r0 md5sum | md5sum ) exec >&- zenity --info \ --title="Checksum" \ --text="$md5"} | zenity --progress \ --auto-close \ --auto-kill \ --pulsate \ --title="${0##*/}" \ --text="Computing checksum" For an actual progress bar, you'd need to know the number of files to process in advance. With zsh : #! /bin/zsh -export LC_ALL=Cautoload zargscd /path/to/dir || exit{ files=(.//**/*(ND.))} > >( zenity --progress \ --auto-close \ --auto-kill \ --pulsate \ --title=$0:t \ --text="Finding files")md5=( $( zargs $files -- md5sum \ > >( awk -v total=$#files '/\/\// {print ++n * 100 / total}' | { zenity --progress \ --auto-close \ --title=$0:t \ --text="Computing checksum" || kill -s PIPE $$ }) \ | md5sum ))zenity --info \ --title=$0:t \ --text="MD5 sum: $md5[1]" Note that outside of the C locale, on GNU systems at least, filename order is not deterministic, as some characters sort the same and also filenames are not guaranteed to be made of valid text, hence the LC_ALL=C above. The C locale order is also very simple (based on byte value) and consistent from system to system and version to version. Beware that means that error messages if any will be displayed in English instead of the user's language (but then again the Computing checksum , Finding files , etc are not localised either so it's just as well). Some other improvements over your approach: Using -exec md5sum {} + or -print0 | xargs -r0 md5sum (or zargs equivalent) minimises the number of md5sum invocations, each md5sum invocation being passed a number of files. -exec md5sum {} \; means running one md5sum per file which is very inefficient. we sort the list of files before passing to md5sum . Doing sort -k2 in general doesn't work as file names can contain newline characters. In general, it's wrong to process file paths line-based. You'll notice we use a .// prefix in the zsh approach for awk to be able to count files reliable. Some md5sum implementations also have a -z option for NUL-delimited records. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/722371",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/543372/"
]
} |
722,381 | I've set the SUID & SGID bit on a folder belonging to user foo with sudo chmod g+s myfolder & sudo chmod u+s myfolder drwsr-sr-x 24 foo www-data 4,0K Okt 25 16:17 myfolder Then I went inside and created a folder with sudo mkdir xyz , but the user of the folder gets overwritten with root while the group was protected successfully. drwxr-sr-x 2 root www-data 4,0K Okt 25 16:24 xyz I expect the user to be protected, it should stay at foo after executing sudo mkdir xyz . What have I missed? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/722381",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124191/"
]
} |
722,424 | I basically want to do this: tail -f trades.csv | csvtool readable - I want to read a CSV file in a readable format using csvtool and I want to keep watching it. I think that command doesn't work because tail -f never signals the end of the stream so csvtool is waiting indefinitely. Surely, there is a workaround for this common issue? Thank you | There is no such thing as “emitting EOF”. EOF is not an out-of-band signal. EOF is when an attempt to read reports that there is no data left to read. If you pipe the output of tail -f into a program that reads the whole input before it starts emitting output, the program won't emit any output until it has read the complete input. And since tail -f never closes its output (since it never stops emitting output), that only happens once you kill the tail process. csvtool readable reads all the input rows, then determines the width of each cell, calculates the maximum width of cells in each column, and finally emits all the rows with columns in a consistent width. It is impossible to perform this calculation until all the input is available, since the last row might be the one that has the widest cells. So it is logically impossible to design csvtool readable in a way that starts emitting output¹ before it has read all the input. Maybe you don't care that all the rows have the same column widths. Maybe you just want mostly widths, that get enlarged if a wider row appears. This would be reasonable. But it isn't a feature that csvtool offers. In many cases, “ foo | bar doesn't emit output immediately when foo emits output gradually” is due to output buffering in foo . See Turn off buffering in pipe . This isn't what's happening here though. It could be the problem in different circumstances, for csvtool subcommands that don't require the whole input, with input coming from a program that does buffer its output. If all you want is to convert commas in CSV to some column alignment, and you're willing to specify the column widths manually, here's a two-liner: tail -f … | python3 -u -c 'import csv, sys for row in csv.reader(sys.stdin): print("\t".join(row))' | expand -t 11,13,17 You don't need the expand step if you're happy with the default tab stops every 8 columns that most terminals and editors use. ¹ For the nitpickers: beyond the first cell of the first row, which wouldn't help. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/722424",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36230/"
]
} |
722,485 | I have a file with lines similar to the following (unfortunately, this is the only format in which another software outputs results): 1 2 3 5/2 7 17/5 9 10/3 15 I need to replace it with the following line: 1 2 3 2.5 7 3.4 9 3.33 15 In other words, I want GAWK to do the division and replace the fractions (rational quantities) 5/2, 17/5 and 10/3 with their decimal values 2.5, 3.4 and 3.33. I tried multiple FS (field separators), but nothing worked.. What's a good way to do this using GAWK? Thanks. Would it make it easier if I change the slash (/) to a colon (:) ? Why do I ask this question? I was trying to search whether / is a substring of $i .. (and if the answer is yes, then I would split() $i into 2 parts and then do the division). I read elsewhere that to check whether a field $i begins with F , they use if ($i~/^F/) -- and so I tried if ($i~///) , then if ($i~/"/"/) , then if ($i~/\//) (escaping / with a \) etc.. None of these worked.. So I thought / is a special character in Awk.. To avoid special character complications, I thought, let me use : . | Iterate over the fields and split each one on / . If the split generates exactly two substrings, use these to calculate the field's new value: $ awk '{ for (i=1; i<=NF; ++i) if (split($i,a,"/")==2) $i = a[1]/a[2] };1' file1 2 3 2.5 7 3.4 9 3.33333 15 To two decimals, use the %.2f format specifier with sprintf() : $ awk '{ for (i=1; i<=NF; ++i) { if (split($i,a,"/")==2) $i = sprintf("%.2f",a[1]/a[2]) } };1' file1 2 3 2.50 7 3.40 9 3.33 15 Similarly, using Miller : $ mlr --nidx put 'for (k,v in $*) { a=splitnv(v,"/"); if (length(a)==2) { $[k]=a[1]/a[2] } }' file1 2 3 2.500000 7 3.400000 9 3.333333 15 $ mlr --nidx put 'for (k,v in $*) { a=splitnv(v,"/"); if (length(a)==2) { $[k]=fmtnum(a[1]/a[2],"%.2f") } }' file1 2 3 2.50 7 3.40 9 3.33 15 Note that when using the nidx input and output format, the default field delimiter is a single space character. This means that the input shown in the question has 17 fields, some of which are empty. These are all replicated in the output, which means the spaces are preserved. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/722485",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/546671/"
]
} |
722,563 | When installing Ubuntu for the first time, I separated / and home into different partitions. Thinking back on it, how is this possible? Isn't home "in" / . | A partition can contain a file system. Linux can mount a file system at a mountpoint (a directory). This mountpoint can be in another file system's directory tree, and the mountpoint /home is in the root directory / . Mounting means that the content of the mounted file system is made available at the mountpoint. This means that the home directory appears in the root directory tree, but it is still located in a file system of its own, on its own partition. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/722563",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/546767/"
]
} |
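You can see this arrangement on a running system; df shows / and /home as separate mounts backed by different devices (device names and sizes below are only illustrative):

    $ df -h / /home
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda2        50G   20G   28G  42% /
    /dev/sda3       400G  120G  260G  32% /home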
722,622 | I am getting the below output from a command. { "data": { "access-type": "ObjectRead", "access-uri": "/p/u4yRbnS_Yv29ivICXNWz-76BAgBqfln0SthBVYLZ3AdPOs9BKTQEH48MZEJNvXaT/n/bmjx6wj24zrv/b/season5/o/abcd.zip", "bucket-listing-action": null, "id": "tePBaSkrsUEBY+rKK0HiwraPn76TLD86BOsqm7dr3cqjNXp6026BouTf9kQoKzZk:abcd.zip", "name": "abcd.zip", "object-name": "abcd.zip", "time-created": "2022-10-27T02:20:17.430000+00:00", "time-expires": "2023-02-01T00:00:00+00:00" }} Can someone help me to extract the value of access-uri According the above example the output should be /p/u4yRbnS_Yv29ivICXNWz-76BAgBqfln0SthBVYLZ3AdPOs9BKTQEH48MZEJNvXaT/n/bmjx6wj24zrv/b/season5/o/abcd.zip | I'd go with jq . In your example it would be (the Input comes from your command of course): myvar=$(commandX | jq -r '.data."access-uri"?') | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/722622",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/525521/"
]
} |
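The same filter works on a saved file too; quoting the key is the important part, since access-uri contains a hyphen (response.json is a hypothetical file holding the JSON from the question):

    jq -r '.data."access-uri"' response.json
    # /p/u4yRbnS_Yv29ivICXNWz-76BAgBqfln0SthBVYLZ3AdPOs9BKTQEH48MZEJNvXaT/n/bmjx6wj24zrv/b/season5/o/abcd.zip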
722,840 | I have the following list of files: main.acnmain.acrmain.algmain.auxmain.glgmain.glomain.glsmain.istmain.lofmain.logmain.lotmain.nlomain.out main.pdf main.tex main.toc I want to rm all of the main.* files except for main.tex and main.pdf file. I tried rm main.* !("main.tex","main.pdf") but that did not work and removed all the main.* without exception. Any ideas? | !("main.tex","main.pdf") matches anything that isn't main.tex,main.pdf , so probably all files in the directory. The key to note is that !(...) already acts like * , in that it itself matches anything that isn't one of the things inside. And the separator is | , not , . So, main.!(tex|pdf) should work. Or !(main.tex|main.pdf) if you want to remove every file but those two, not just the ones with names that start with main. . See: https://www.gnu.org/software/bash/manual/html_node/Pattern-Matching.html https://mywiki.wooledge.org/glob#extglob | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/722840",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/534405/"
]
} |
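One caveat worth checking if the pattern is rejected or matches nothing: in bash, the !(...) form is part of extended globbing, which is off by default in scripts (interactive shells often have it switched on by completion setups):

    shopt -s extglob          # enable extended globbing if it is not already on
    rm main.!(tex|pdf)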
722,974 | When exploring some of the flags associated with mv in zsh terminal, I encountered the following prompt: overwrite untitled folder 2/test.txt? (y/n [n]) My question is what is the second n in brackets seeking to ask or suggest? Why is the prompt not just y/n and why is it y/n [n] ? What does this mean? Can someone please explain the significance of [n] ? | If you just hit enter, it assumes the part in brackets. In your case, the default is “n”. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/722974",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/547176/"
]
} |
723,336 | How do command-line applications like Vim work? Specifically, how do they take control of the terminal in the manner they do? Also, for future reference, is there a specific term for applications that take control of the terminal the way Vim does? | vim and other semi-graphical (the capability to display semi-graphic characters such as corners, full crosses…) applications control the terminal (manage the position of the cursor, the position of displayable characters, color settings…) by sending dedicated escape sequences, control codes that the terminal will translate into some dedicated action it will execute. Because it would be just a nightmare for any programmer to echo the escape sequences to the stdout, not to say implying non portable code since there have always been many different terminals with different capabilities and different escape sequences, a library abstracting all that work was created : curses . Nowadays named ncurses which also provides higher level functions such as window management. ncurses relies on the terminfo database for acquiring the appropriate terminal description of capabilities. It is thanks to this library of functions that vim, iptraf-ng, alsa amixer, less, gdb, the primary kernel configuration utility and many other control the terminal. Note that these apps are typically not called " command-line " utilities which generally handle a single line of input with basic cursor management and editing facilities thanks to the readline library. Per contrast and as you can read in the ncurses man page linked hereabove, these programs can be called : Interactive, Screen Oriented. vim is typically named a screen-oriented editor per contrast to ed the line-oriented editor. Note following suggestion in comments : When started, the application will inherit the tty driver settings from the shell that launched it which are likely to be very similar to those initially set by the original agetty. This including buffering of input until catching a newline charater, echoing of input keys at instant cursor's position… all sort of features whatever screen oriented application is not likely to want When initializing, the program will save current tty driver settings and force those according to the programmer wishes. Before quitting… the programmer is strongly invited to restore the initial settings… unless facing the risk of coming back to the calling shell in rather unpredictable but certainly messy conditions… | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/723336",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/547548/"
]
} |
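A small way to watch those terminfo-driven escape sequences in action without writing any C: tput looks up the current terminal's capabilities and emits the corresponding sequences, which is roughly what a screen-oriented program does through ncurses. A sketch, assuming a terminal whose terminfo entry defines these capabilities:

    tput smcup                 # switch to the alternate screen, as vim does at startup
    tput clear
    tput cup 5 10              # move the cursor to row 5, column 10
    printf 'drawn at a fixed position'
    sleep 2
    tput rmcup                 # restore the normal screen contents on exit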
723,368 | I have a command <streaming ls> | wc -l , it works fine, but the <streaming ls> takes a while, which means I don't get the final line count until a few minutes later. Is there a way to have the output of wc -l update in real time? | You can’t use wc -l for this, but you can produce a running count of lines seen using other tools, for example AWK: <streaming ls> | awk '{ printf "%d\r", NR } END { print NR }' This will update the count of lines seen every time a line is seen, and finish with the total number of lines at the end of the process. For commands producing lots of output, the overhead can be reduced by printing every n lines: … | awk 'NR % 10 == 0 { printf "%d\r", NR } END { print NR }' (for n = 10) or by printing every second: … | awk 'systime() > lasttime { lasttime = systime(); printf "%d\r", NR } END { print NR }' (or every n seconds by changing the condition to >= lasttime + n ). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/723368",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321024/"
]
} |
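If pv happens to be installed, its line-count mode gives a similar live counter with no AWK at all (mentioned only as an alternative, not as part of the answer above):

    <streaming ls> | pv -l > /dev/null     # running line count is printed to stderr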
723,387 | Can anyone tell me the most efficient way to convert a variable to a repeating -param in a script call? I don't know how to describe it properly, but the examples speak for themselves. At least I hope they do :) Example1: # inputexport DOMAINS="domain1.tld,domain2.tld"# tranform to./example-script.sh -d "domain1.tld" -d "domain2.tld" Example2 (input is singular): # inputexport DOMAINS="domain1.tld"# tranform to./example-script.sh -d "domain1.tld" [UPDATE]I'm sorry for not adding this in the first post. Context I should have added: DOMAINS is env variable added to a Docker container Container only has sh shell, so zsh and bash specific options won't work. Sorry for adding the initial bash tag. | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/723387",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/547598/"
]
} |
723,616 | So i first encountered this problem when i tried to run nodejs on my system. And i got this error message node: error while loading shared libraries: libcrypto.so.1.1: cannot open shared object file: No such file or directory I also encountered it when i tried to run virtualBoxand it gave me same think like: "we can't start because we do not have libcrypto.so.1.1"I also tried to search a solution for it but i couldn't find any that would work for me.Btw i use arch as my operating system and everything i find about anything was for ubuntu. | i also ran into this on arch. The solution for me was to also install openssl-1.1 which provides libcrypto.so.1.1 . The upgrade may have also broke pacman for you; if so, you will have to download the package from a mirror and manually place libcrypto.so.1.1 and libssl.so.1.1 into /usr/lib/ . Then, you can run pacman -U --overwrite '/usr/lib/*' openssl-1.1-1.1.1.s-2-x86_64.pkg.tar.zst to install the full package. Note that sudo may also be broken if pacman is, so if you don't have a root password to log in via su you may need to recover from an install disk. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/723616",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/547854/"
]
} |
723,891 | I've two files. One is called file1.txt and the other is person1.txt . Both of them contain some lines of text as in the following example: file1.txt
word1
word2
word3
word4
word5
word6
word7
word8
word9
person1.txt
givi sixarulidze
What I want to do is to replace first seven lines of code in file1.txt with the content of person1.txt . Desired output:
givi sixarulidze
word8
word9
How can I achieve that? | cp person1.txt result.txt
tail -n+8 file1 >> result.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/723891",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/548161/"
]
} |
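The same result as a single pipeline, if you would rather not copy the file first (file names as in the question; result.txt must not be the same file as file1.txt):

    { cat person1.txt; tail -n +8 file1.txt; } > result.txt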
723,943 | Problem: Grepping all parameters from a list of URLs A link, for example, https://www.google.com/search?q=grep+urls+from+a+file&rlz=1C5CHFA_enIL1008IL1008&oq=grep+urls+from+a+file What I've tried: Grepping any text that start with ? and end with = , or start with & and end with & or empty string. Desired outcome: qrlzoq | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/723943",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/548195/"
]
} |
724,086 | There is a file of 40 GB in gz format. I want to find the record count and cksum of this file in uncompressed format. One approach I have is: Unzip the file using gunzip Use wc , cksum commands on unzipped file Zip the file again using gzip . Problem with this approach is extracting and compressing the file will be taking lot of time. May be around 30-40 mins. Another approach may be using zcat to calculate record count and cksum zcat <file name> | wc -l zcat <file name> | cksum This approach may take less time, but using zcat twice on same file. Is there a better approach? May be using one command to find both record count and cksum ? | You can use tee and bash's process substitution this: $ zcat foo.gz | tee >(md5sum >&2) | wc6f869e2acc27a0330b10d9ffa6655e7b - 36568 45710 2743552 That decompresses the file once, passes the decompressed data to tee which passes it as an input file to md5sum which is told to print its output to standard error (and therefore isn't caught by | wc ), and then we also pass the output to wc . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/724086",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/431562/"
]
} |
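Applied to the exact commands from the question (line count plus cksum instead of md5sum), the same single-pass pattern looks like this; it needs bash or another shell with process substitution, file.gz stands for your archive, and the cksum result arrives on stderr:

    zcat file.gz | tee >(cksum >&2) | wc -l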
724,117 | The answer to this question put me on the right path but I still have no idea how to manually download these two packages to follow the answer's mentioned steps. | Use the link provided in the answer to the question you linkedto find the package of the Arch Linux site. You can also search for any package in the official repositories. You can also search for something like libcrypto and it should list the packages that provide those libraries. In this case the openssl package provides both of the libraries you're looking for. On the package page you can see the Provides line lists the libraries you are looking for. Under the Package Contents section you can expand to show all the files that are in the package to confirm it has the files you are looking for, like usr/lib/libcrypto.so.1.1 In the upper right, under Package Actions, click Download From Mirror at the bottom. You should end up with a file like openssl-1.1-1.1.1.s-2-x86_64.pkg.tar.zst . Extract the contents with tar --use-compress-program=unzstd -xvf openssl-1.1-1.1.1.s-2-x86_64.pkg.tar.zst In the unpackaged folder, find the files you need at the locations from the file list in step 2 and copy them to the corresponding location on your system. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/724117",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/548390/"
]
} |
724,121 | The Filesystem Hierarchy Standard (FHS) is the formal codification for root file tree on Linux installations, as inherited from earlier iterations of Unix and POSIX, and subsequently adapted. It standardizes the exact uses of the familiar /home , /etc , /usr , /var , and so on, from various historic differences of convention, and resolves where application-specific and site-specific file names may be added, or not. Basic Linux installations historically have placed the entire tree on a single file system, though some variations have utilized a separate partition for /home , presumably to facilitate backup and migration. More recently, Btrfs has gained increasing adoption, which allows a single partition to host various subvolumes. Subvolumes are appealing because they may be captured in snapshots, and require no pre-allocation of space. The mapping of subvolumes to nodes on the FSH appears to vary widely.Sensible standards and policies respecting such matters are important, for supporting optimal management of files on the system with respect to snaphots and related concerns. Following are some observations: Debian appears to place the entire tree on a single subvolume beneath root. Ubuntu appears to allocate a subvolume for /home , and another for the remainder of the root tree. Arch Linux appears to extend the separation adopted by Debian by placing /var/log and /var/cache each in a separate subvolume. openSUSE has a single subvolume for /var , and one each for /home , /root , /usr/local , /opt , and /srv , as well as one for the remainder of the root tree, a further one for each installed grub architecture. Have any standards emerged that have attempted to resolve the various design considerations, and to unify the approaches adopted by various operating systems? Has any agreement emerged concerning how to reconcile the functions of the various file tree nodes with policies concerning snapshots? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/724121",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/377123/"
]
} |
724,429 | Why can the command { echo err >&2; echo out >&1; } | : print "err", but the second can not? Some tests in container: Debian : 11.5 (bash/5.1.4) Rocky Linux : 9.0 (bash/5.1.8), CentOS : 7.7.1908 (bash/4.2.46), CentOS: 8.4.2105 (bash/4.4.19). This also happened in host CentOS 8.0.1905, Debian 11.4. ~# { echo err >&2; echo out >&1; } | :err~# { echo out >&1; echo err >&2; } | :~# And not using : seems right: ~# { echo out >&1; echo err >&2; } | sed 's/.*/-&-/'err-out-~# But print in another host CentOS 7.7.1908 (bash/4.2.46): ~# { echo out >&1; echo err >&2; } | :err~# | The difference between { echo err >&2; echo out >&1; } | : ... and { echo out >&1; echo err >&2; } | : ... is the order in which the two echo calls are made. The order matters only in terms of how quickly the first echo can execute. The two sides of the pipeline, { ...; } and : , are started concurrently, and : does not read from its standard input stream. This means the pipe is torn down as soon as : terminates, which will be almost immediately. If the pipe is torn down, there is no pipe buffer for echo out >&1 to write to. When echo tries to write to the non-existent pipe, the left-hand side of the pipeline dies from receiving a PIPE signal. If it dies before echo err >&2 has executed, there will be no output. So: You will always get err as output from { echo err >&2; echo out >&1; } | : since echo err >&2 will not care about the state of the pipe. You will sometimes get err as output from { echo out >&1; echo err >&2; } | : depending on whether : has terminated before echo out >&1 has the chance to run. If : terminates first, there will be no output due to the left-hand side getting killed prematurely. I'm guessing that most people will not experience the non-deterministic behaviour of the second pipeline on most systems. Still, you have found combinations of operating systems and shell versions that show this. I have also replicated the non-deterministic behaviour under zsh on FreeBSD, even in a single shell session (although the shell seems to tend to one behaviour rather than the other, so I needed 10+ tries to finally see the switch). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/724429",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/548782/"
]
} |
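One way to make the normally timing-dependent case reproduce reliably is to give : a head start; by the time the left-hand side writes, the pipe is already gone, so the first echo is killed by SIGPIPE and err is typically never printed:

    { sleep 1; echo out >&1; echo err >&2; } | :    # usually prints nothing at all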
724,575 | On a new mac, trying to switch to bash for a minute: alex.mills@alex uta-phd %alex.mills@alex uta-phd % echo $SHELL/bin/zshalex.mills@alex uta-phd %alex.mills@alex uta-phd % exec bashThe default interactive shell is now zsh.To update your account to use zsh, please run `chsh -s /bin/zsh`. For more details, please visit https://support.apple.com/kb/HT208050.%na%m %1~ %#%na%m %1~ %# echo $SHELL/bin/zsh%n@%m %1~ %# echo "$SHELL" keeps saying I am using /bin/zsh What to do? | You are running bash. That's why the prompt looks weird. Bash and zsh both use the PS1 variable as the main setting for the prompt, but they have different escape sequences (backslash-character in bash, percent-character in zsh, and the second characters have different meanings). Normally the PS1 variable should be set by the shell's initialization file ( .bashrc or .zshrc ) and not exported to the environment (since it only makes sense inside one given program), but many systems (including major Linux distributions) are poorly configured and export PS1 . MacOS correctly does not export PS1 out of the box as far as I can tell, but it seems that your own files do (or maybe a different version of macOS from the one I checked), and for some reason the system bashrc on macOS specifically does not change PS1 if it's already set. The SHELL environment variable does not indicate what shell you are running. It indicates what shell you want to run. It tells programs that want to start a shell which shell to run. Just running another shell manually does not change $SHELL . If you see a shell prompt and you aren't sure what shell is running, you can check with ps . In all Bourne-style shells (sh, ash, bash, ksh, zsh, …), $$ stands for the shell's process id, thus ps $$ tells you what program the current shell is. In fish you'd get an error that makes it obvious you're running fish because $$ is not valid there. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/724575",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
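For example, from inside the misbehaving prompt:

    ps -p $$ -o comm=     # prints the name of the shell you are actually in; here it would say bash even though $SHELL is /bin/zsh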
724,759 | Whenever I try to create a signed git commit, I need to enter my GPG key. It spawns some GUI application to receive the password. It looked like the application was seahorse , so I uninstalled it, but git still uses some GUI app. Polybar doesn't report the application name and it's title is just [process]@MYPC . How do I get git to use the command line / pinentry? Versions: gpg: 2.2.19 git: 2.25.1 pinentry: 1.1.0 | This is a gpg configuration issue, not a git configuration issue. You can force gpg to use a terminal-based dialog for entering your password by setting the pinentry-program in your gpg-agent.conf . For a simple terminal prompt, put the following in your ~/.gnupg/gpg-agent.conf: pinentry-program /usr/bin/pinentry-tty For a curses-based prompt: pinentry-program /usr/bin/pinentry-curses | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/724759",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/494501/"
]
} |
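One extra step that is easy to forget: a gpg-agent that is already running keeps its old pinentry, so restart it after editing the configuration (gpgconf ships with GnuPG 2.x):

    gpgconf --kill gpg-agent      # the agent restarts with the new pinentry on next use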
724,786 | I'm trying to launch a program across many Ubuntu servers that can not be headless. I would need to relaunch these every few days so I'm looking for a way to easily do this. I installed a VNC Server on one linux machine and I'm able to launch my program by connecting to that but the problem is it takes effort to manually open up TightVNC and then open the terminal inside the TightVNC GUI to run the command to launch the program. What I want ideally is to write one line in the terminal on my local windows machine to launch this program on my server and then even if my local computer was to turn off, I would still have this program running on the server. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/724786",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/549151/"
]
} |
725,160 | I like to use set -x in scripts to show what's going on, especially if the script is going to run in a CI/CD pipeline and I might need to debug some failure post-hoc. One annoyance with doing this is that if I want to echo some text to the user (e.g., a status message or "I'm starting to do $X" ) then that message gets output twice - once for the echo command itself being echoed, and then once as the output of that echo command. What's a good way to make this nicer? One solution is this: set -x... bunch of normal commands that get echoed( # Temporarily don't echo, so we don't double-echo set +x echo "Here is my status message")... rest of commands get echoed again But the two problems with that are That's a lot of machinery to write every time I want to tell the user something, and it's "non-obvious" enough that it probably requires the comment every time It echoes the set +x too, which is undesirable. Is there another option that works well? Something like Make's feature of prepending an @ to suppress echoing would be great, but I've not been able to find such a feature in Bash. | You can achieve this in Bash by not using set -x and instead trapping DEBUG and doing your own tracing: #!/bin/bashset -Ttrap '! [[ "$BASH_COMMAND" =~ ^(echo|printf) ]] && printf "+ %s\n" "$BASH_COMMAND"' DEBUGfoo=barecho This is a testecho $foo[[ $foo = bar ]] && /usr/bin/printf 'Matched\n' The idea is to add commands you want to ignore to the regex in the trap line. Running the above produces + foo=barThis is a testbar+ [[ $foo = bar ]]+ /usr/bin/printf 'Matched\n'Matched set -T ensures that the trap is inherited by shell functions. You can add a separate mechanism to enable and disable the trap, e.g. with a shell variable: #!/bin/bashset -Tstatus() { NOTRACE=1 echo "$@"}trap '! [[ "$BASH_COMMAND" =~ ^(status|NOTRACE=) ]] && printf "+ %s\n" "$BASH_COMMAND"' DEBUGfoo=bartest=$(echo baz)echo This is a teststatus This is a status reportecho $foo[[ $foo = bar ]] && /usr/bin/printf 'Matched\n' This produces + foo=bar+ test=$(echo baz)+ echo This is a testThis is a testThis is a status report+ echo $foobar+ [[ $foo = bar ]]+ /usr/bin/printf 'Matched\n'Matched | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/725160",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45532/"
]
} |
725,215 | I found that my PATH on my MacBook Pro running Mac OS Ventura 13.0.1 has a strange directory inside all of the normal ones, i.e. /opt/homebrew/bin:/usr/local/bin:XXX:/usr/bin:/bin:/usr/sbin:/sbin: /System/Cryptexes/App/usr/bin is inserted into my PATH at the point with XXX As noted from the command: echo $PATH Where is it coming from? Is it safe to keep? Is it okay to remove from PATH? | Where is it coming from? Is it safe to keep? Is it okay to remove from PATH? /System/Cryptexes is part of macOS security . Mostly Safari and a few other features use it. So it came from Apple, it is safe to keep, and if you don't use anything in there it is likley safe to remove. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/725215",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/549584/"
]
} |
725,364 | I'm trying to modify a markdown file. In a file, there are many links like this one. [string one](/stringtwo/#stringthree) I'd like to change these to the following: [string one](stringtwo.html#stringthree) Remove slashes and add .html . I tried the following: sed -i 's/](\(\/.*\)#/](\1.html#/g' file But it returns [global configuration](/config/.html#globals) . It doesn't remove slashes. How can I achieve this using bash or sed ? | This seems to do the trick: $ cat 725364.in[string one](/stringtwo/#stringthree)[example label](/path/to/doc/#anchor)$ sed 's_\(\[[^]]*]\)(/\([^#]*\)/\(#[^)]*\))_\1(\2.html\3)_g' 725364.in[string one](stringtwo.html#stringthree)[example label](path/to/doc.html#anchor) To break it down: Firstly, I use s_needle_pin_flags for sed rather than s/needle/pin/flags so as not to have to escape literal / s. sed will search for, using this expression \(\[[^]]*]\)(/\([^#]*\)/\(#[^)]*\)) , broken down as: \(\[[^]]*]\) - Definition of group 1 (the link label): A literal [ Followed by zero or more or of anything that is not a ] Followed by a literal ] (/ - A literal (/ \([^#]*\) - Definition of group 2 (the URL): Zero or more of anything that is not a literal # / - A literal / \(#[^)]*\) - Definition of group 3 (the anchor): A literal # Followed by zero or more of anything that is not a literal ) ) - A literal ) And transform it using \1(\2.html\3) , broken down as: The match for group 1, followed by ( , followed by The match for group 2, followed by .html , followed by The match for group 3, followed by ) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/725364",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120293/"
]
} |
725,590 | I’m asking because string comparisons are slow, but indexing is fast, and a lot of scripts I write are in bash, which to my knowledge performs a full string lookup for every executable call. All those ls ’s and grep ’s would be a little bit faster without performing a string lookup on each step. Of course, this now delves into compiler optimization. Anyways, is there a way to directly invoke a program in Linux using only its inode number (assuming you only had to look it up once for all invocations)? | The short answer is no. The longer answer is that linux user API doesn't support accessing files by any method using the inode number. The only access to the inode number is typically through the stat() system call which exposes the inode number, which can be useful for identifying if two filenames are the same file, but is not used for anything else. Accessing a file by inode would be a security violation, as it would bypass permissions on the directories that contain the file linked to the inode. The closest you can get to this would be accessing a file by open file handle. But you can't run a program from that either, and this would still require opening the file by a path. (As noted in comments, this functionality was added to linux for security reasons along with the rest of the *at system calls, but is not portable.) There's also numerous ways of using the inode number to find the file (basically, crawl the filesystem and use stat) and then run it normally, but this is the opposite of what you want, as it is enormously more expensive than just accessing the file by pathname and doesn't remove that cost either. Having said that, worrying about this type of optimization is probably moot, as Linux has already optimized the internal inode lookup a great deal. Also, traditionally, shells hash the path location of executables so they don't have to hunt for them from all directories in $PATH every time. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/725590",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/441393/"
]
} |
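To make the last point concrete: the inode number is easy to obtain and to search for, but the search is just a filesystem crawl that still ends in an ordinary path-based execution (the paths and the inode value below are only illustrative; -inum is a GNU/BSD find extension):

    $ ls -i /usr/bin/ls
    1835627 /usr/bin/ls
    $ find /usr -xdev -inum 1835627 2>/dev/null
    /usr/bin/ls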
725,851 | I have a script something like this: for chain in http https sshdo iptables -nvxL $chain | tail -1 | awk '{print $2}'done But what I actually want to do is capture the output of the iptables command for each iteration into a different variable, whose name should be the equal to the current value of chain . So for the first loop iteration (where $chain is http ) I want this to happen: http=$(iptables -nvxL http | tail -1 | awk '{print $2}') Then for the next I want this: https=$(iptables -nvxL https | tail -1 | awk '{print $2}') Hopefully you get the idea, not sure how to do this. | In your case, I would use an associative array for that: declare -A rulesfor chain in http https sshdo rules[$chain]=$(iptables -nvxL $chain | tail -1 | awk '{print $2}')done You can then access the output by dereferencing, as in printf -- "%s\n" "${rules['http']}" or for chain in http https sshdo printf -- "%s\n" "${rules[$chain]}"done | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/725851",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90367/"
]
} |
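A minimal sketch of the same associative-array pattern that does not need root or iptables, in case you want to try it first; the files used as keys are arbitrary examples.
declare -A sizes
for f in /etc/hostname /etc/hosts; do
    sizes[$f]=$(wc -c < "$f")        # one captured value per key
done
for f in "${!sizes[@]}"; do          # "${!sizes[@]}" expands to the keys
    printf '%s -> %s bytes\n' "$f" "${sizes[$f]}"
done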
725,869 | I have some files(.v.gz). The data is present in the file is shown below syntax: module **module_name**(out, In, clk, rst )statement 1statement 2 : :statement nendmodule the actual data is file module mod_reg_lif(out, In, clk, rst ) #statement 1statement 2 : :statement nendmodulemodule dff_reg_net(out, In, clk, rst ) statement 1statement 2 : :statement nendmodulemodule dat_log_out (out, In, clk, rst ) statement 1statement 2 : :statement nendmodulemodule rest_wire_lib (out, In, clk, rst ) statement 1statement 2 : :statement nendmodule actual file contains similar type of multiple data module switch(in1, datainready1, atainready) always @(posedge clk) begin if (reset == 1) begin counter<= 0; datainreadyl<= 0; if(counter ==0) dataoutready<= 0; end endmodule here switch is a module name The above example please check the module_names are mod_reg_lif ,dat_log_out ,dff_reg_net, rest_wire_lib , consider after module as module_name (in the same line or next line it stars) till ( occurs if it's in the same line or next line.in output, I want only module names like this mod_reg_lif dat_log_out dff_reg_netrest_wire_lib what output I am getting mod_reg_lif rest_wire_lib I am not able to fetch all the data as I mentioned in the output, because both module and module_name are present in the same line. My bash code is for file in *.v.gz;do zgrep -A1 "^module" "$file" | sed -n -e 's/^\(module \)*\(.*(.*)\).*$/\2/p' | cut -f1 -d"(" > $(basename "$file" .v.gz).txtdone please correct my sed -n -e 's/^(module ) (. (. )). $/\2/p' command i am not sure how to fix. ps: please comment if you have any doubts about the question. | In your case, I would use an associative array for that: declare -A rulesfor chain in http https sshdo rules[$chain]=$(iptables -nvxL $chain | tail -1 | awk '{print $2}')done You can then access the output by dereferencing, as in printf -- "%s\n" "${rules['http']}" or for chain in http https sshdo printf -- "%s\n" "${rules[$chain]}"done | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/725869",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/545686/"
]
} |
726,261 | Suppose we have an if statement as follows: if [ $(false) ]; then echo "?"fi Then "?" is not printed (the condition is false). However, in the following case, "?!" is printed, why? if [ $(false) -o $(false) ]; then echo "?!"fi | $(false) doesn’t evaluate to false, it produces an empty string. Because it isn’t quoted, if [ $(false) ]; then evaluates to if [ ]; then which is false, because [ with an empty expression is false. if [ $(false) -o $(false) ]; then evaluates to if [ -o ]; then This doesn’t use the -o operator, it evaluates -o as an expression with a single string; [ with such an expression is true, so the then part of the if statement runs. See the POSIX specification for test , in particular: The algorithm for determining the precedence of the operators and the return value that shall be generated is based on the number of arguments presented to test. (However, when using the "[...]" form, the <right-square-bracket> final argument shall not be counted in this algorithm.) In the following list, $1, $2, $3, and $4 represent the arguments presented to test: 0 arguments: Exit false (1). 1 argument: Exit true (0) if $1 is not null; otherwise, exit false. test only considers operators if it is given at least two arguments. If you want to use a command’s exit status as a condition, don’t put it either in a command substitution or in [ ] : if false; then and if false || false; then Note too that test ’s -a and -o operators are deprecated and unreliable ; you should use the shell’s && and || operators instead, e.g. if [ "$a" = b ] || [ "$a" = c ]; then | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/726261",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5228/"
]
} |
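The argument-counting rule is easy to verify interactively; these lines are a small demonstration added here, not part of the original question.
[ ];    echo "$?"          # 1: zero arguments -> false
[ -o ]; echo "$?"          # 0: one non-empty argument -> true; -o is not an operator here
[ "" ]; echo "$?"          # 1: one empty argument -> false
false || false; echo "$?"  # 1: the shell's own || is the reliable way to combine tests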
726,423 | I have a shell server on an embedded system (it's a 32-bit ARMel system). When I go to log in to it, I use: $ ssh root@ip Unable to negotiate with ip port 22: no matching host key type found. Their offer: ssh-rsa,ssh-dss I tried to give it one of the expected cipher types with the -c option: $ ssh -c ssh-dss root@ip Unknown cipher type 'ssh-dss' or: $ ssh -c ssh-rsa root@ip Unknown cipher type 'ssh-rsa' So I'm not sure what to do next. I have a UART serial console I can send commands to, but I'd rather be on SSH. I know it's running the service, but I don't know how to log in to it. | Try using this: ssh -oHostKeyAlgorithms=+ssh-rsa root@ip Notes: OpenSSH man page | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/726423",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41051/"
]
} |
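If this is a box you connect to regularly, the same option can live in ~/.ssh/config instead of being typed each time; the host alias and address below are placeholders. On newer OpenSSH you may also need PubkeyAcceptedAlgorithms +ssh-rsa if you authenticate to it with an RSA key.
# ~/.ssh/config
Host old-embedded
    HostName 192.0.2.10
    User root
    HostKeyAlgorithms +ssh-rsa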
726,431 | I know its a big classical but I didn't found the exact situation that concerns me I need a mkdir + mv command that can be invoked like that : mvdir /home/user/Documents/irs.pdf /mnt/work/45/223/insight/irs1970.pdf Exactly like a normal mv command works, just with a creation of path instead of a no such file or directory Considering that work/45/223/insight/ doesn't exist and need to be created All other command that I've found can't be invoked like that, needs some more informations, need to distinguish the path and file ourself, or something Attempt: mkdir -p /mnt/work/45/223/insight && mv /home/user/Documents/irs.pdf /mnt/work/45/223/insight/irs1970.pdf | Try using this: ssh -oHostKeyAlgorithms=+ssh-rsa root@ip Notes: OpenSSH man page | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/726431",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/403461/"
]
} |
726,684 | I have the following sed expressions: echo 'abcabcabc' | sed -n ':-A s/a/x/1; s/a/&/2;t-A; p' Which supposedly should replace every occurrence of 'a' with 'x' except the last one.So, the expected output is this: xbcxbcabc but the actual ouput is this: xbcxbcxbc Which replaces all 'a' with 'x' I know there have been already similar questions like this, like Replace every occurence of a character except the last one in every line But I'm trying a different approach here using sed conditional branch. Let me break down my sed expressions with my own understanding so first here's the sed expressions again: echo 'abcabcabc' | sed -n ':-A s/a/x/1; s/a/&/2;t-A; p' sed pulls this abcabcabc into the pattern space then set a label :-A then s/a/x/1; replace the first occurence of 'a' with 'x' . Now the pattern space contains this xbcabcabc s/a/&/2; checks if the pattern space contains two 'a' , which it does, so it replaces the two 'a' with themselves & . So the pattern space still contains this xbcabcabc t-A since the most recent subtitution is successful, jump back to label -A starting from label -A it does this again s/a/x/1; which turns the content of pattern space from this xbcabcabc to this xbcxbcabc s/a/&/2 it checks if there are two 'a' again. This time the pattern space contains this xbcxbcabc which does not have two 'a' so the substitution is unsuccessful. t-A since the most recent subtitution is unsuccessful , it should not jump back to label -A but instead proceed with p which prints whatever in the pattern space which is this xbcxbcabc then exit.But instead, even when the subtitution is unsuccessful it jumps back to label -A again and replaces the remaining 'a' with 'x' . so it ends up with this xbcxbcxbc if I insert l between expressions: echo 'abcabcabc' | sed -n ':-A s/a/x/1; l; s/a/&/2;t-A; p' output: xbcabcabc$xbcxbcabc$xbcxbcxbc$xbcxbcxbc$xbcxbcxbc we can see it branches again even when the pattern space contains this xbcxbcabc So, what am I missing here? | Note that s/a/&/2 would replace the 2nd a with itself. It would not replace two a . Likewise, s/a/x/1 is always the same as s/a/x/ (it replaces the 1st a with x ). This is irrelevant to the question but still a misconception that may come back to bite you in some other context. The t command branches if there has been a successful substitution since the last input line was read unless another t command has been triggered since then, according to the GNU sed manual: t label If a s/// has done a successful substitution since the lastinput line was read and since the last t or T command, thenbranch to label ; if label is omitted, branch to end of script. The POSIX specification for the same command agrees with this: [2addr]t [label] Test. Branch to the : command verb bearing the label if any substitutions have been made since the most recent reading of an input line or execution of a t . If label is not specified, branch to the end of the script. So, to summarise: If any s command is successful for your single line of input, since the most recent t command, the t command will always branch to the given label. Your data is first transformed into xbcabcabc , then to xbcxbcabc . When arriving at this result, the initial s command for the iteration successfully replaced the first a with x , so the t command branches, giving you xbcxbcxbc . 
A way to fix this is to insert an extra t command and a dummy label: echo abcabcabc |sed -e :A -e 's/a/x/' -e tB \ -e :B -e 's/a/&/2' -e tA The execution of tB "resets the successful-flag" of that first s command. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/726684",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/471609/"
]
} |
727,287 | I have a 38GB folder with 800 MP4 videos in it. After re-downloading it, the file name has no spacing , and all words are joined, but it's still TitleCase. So from TitleCase I need Title Case . What would be the most effective way of bulk renaming these files? I remember a rename or autorename being included a long time ago in my distro, but I don't seem to have it now. | If you want to add spaces between each 'Words' of mp4 filename TitleCase (PascalCase to Word Separated By Spaces): rename -n 's/\B[[:upper:]]/ $&/g' ./*.mp4 Output rename(./FooBarBaz.mp4, ./Foo Bar Baz.mp4) Check this post that details in depth the Perl's rename What about rename different versions and usage ? What is the recommended way to use the Perl version especially? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/727287",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/551807/"
]
} |
727,334 | I have 86 PDFs that I don't want to share on the internet and merge with online tools.I am using the PopOS Linux distro and I would like to merge it using the terminal. PDF names are like 1.SubjectA , 2. SubjectB (start with Number. , so they are well ordered)Here is what I found but none of them are merging in order: qpdf --empty --pages *.pdf -- out.pdf Example file names: 1. Why to learn System and Network.pdf2. Network, Hardwares, LAN-WAN.pdf3. Protocols-Ports, OSI-TCP IP.pdf4. ARP, ICMP, RFC, IANA.pdf... Pattern isNumber + .(dot) + space + name | If you want to add spaces between each 'Words' of mp4 filename TitleCase (PascalCase to Word Separated By Spaces): rename -n 's/\B[[:upper:]]/ $&/g' ./*.mp4 Output rename(./FooBarBaz.mp4, ./Foo Bar Baz.mp4) Check this post that details in depth the Perl's rename What about rename different versions and usage ? What is the recommended way to use the Perl version especially? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/727334",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/549638/"
]
} |
727,352 | I am using Arch which was updated a couple of weeks ago. Packages is use are Arch 6.0.10-arch2-1 BASH 5.1.16(1)-release gnu sed 4.9 gnu grep 3.8 openbox 3.6.1 Within my Openbox rc.xml text file I have two lines as follows <!-- <keybind key="Right"> --><keybind key="s-Up"> I wish to frequently toggle/swap the above two lines for the below two lines using a script <!-- <keybind key="s-Up"> --><keybind key="Right"> This allows me to quickly change a keybinding without tediously editing my keybindings set-up text file ( rc.xml ) every time. The script I have so far is below, and is not working, though I know it is close. I'm not too bothered how this toggling is achieved, but having spent a significant amount of time already on this it would be nice to get below script working. The sed expressions below that do the text swapping work as they should. The if statement seems to do the first sed swap if the condition is met but not the second sed swap if the condition is not met, which is the issue. var_a=""#var_a=$(grep -zoP "<\!\-\- <keybind key=\"Right\"> \-\->\n\n<keybind key=\"s-Up\">" /home/kes/Dropbox/lubuntu-rc.xml | sed ':a;N;$!ba;s|\n\n||g')var_a=$(grep -zoP "<\!\-\- <keybind key=\"Right\"> \-\->\n\n<keybind key=\"s-Up\">" /home/kes/Dropbox/lubuntu-rc.xml | tr -d '\n' )# result of grep is# <!-- <keybind key="Right"> --><keybind key="s-Up">echo $var_a; sleep 0.5#if [[ -z ! "$var_a" ];thenif [[ '$var_a'=='<!-- <keybind key="Right"> --><keybind key="s-Up">' ]]; then sed -ie ':a;N;$!ba;s|<!-- <keybind key="Right"> -->\n\n<keybind key="s-Up">|<!-- <keybind key="s-Up"> -->\n\n<keybind key="Right">|g' /home/kes/Dropbox/lubuntu-rc.xmlelse sed -ie ':a;N;$!ba;s|<!-- <keybind key="s-Up"> -->\n\n<keybind key="Right">|<!-- <keybind key="Right"> -->\n\n<keybind key="s-Up">|g' /home/kes/Dropbox/lubuntu-rc.xmlfi | If you want to add spaces between each 'Words' of mp4 filename TitleCase (PascalCase to Word Separated By Spaces): rename -n 's/\B[[:upper:]]/ $&/g' ./*.mp4 Output rename(./FooBarBaz.mp4, ./Foo Bar Baz.mp4) Check this post that details in depth the Perl's rename What about rename different versions and usage ? What is the recommended way to use the Perl version especially? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/727352",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46470/"
]
} |
727,492 | Why can I still ssh into my Ubuntu machine using a password? This is the /etc/ssh/sshd_config file of my Ubuntu 20.04 on ovh hosting (showing only non-commented lines for brevity): Include /etc/ssh/sshd_config.d/*.confPort xxxPermitRootLogin noAllowUsers user1 user2PubkeyAuthentication yesAuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys2PasswordAuthentication noChallengeResponseAuthentication noUsePAM yesX11Forwarding yesPrintMotd noAcceptEnv LANG LC_*Subsystem sftp /usr/lib/openssh/sftp-server The permissions of the relevant files seem to be OK: $ stat -c %a /home/user1/.ssh/700$ stat -c %a /home/user1/.ssh/authorized_keys`600 I have run sudo service ssh restart and sudo service sshd restart . Why am I still able to log into my Ubuntu machine by password over ssh? I can login by user and password over ssh (PuTTY), it only asks for password. Both user1 and user2 have their key in .ssh home folder. What is missing? I checked the include file: -rw------- 1 root root 27 Dec 1 12:52 50-cloud-init.conf...:/etc/ssh/sshd_config.d$ sudo cat 50-cloud-init.confPasswordAuthentication yes so I guess that is the cause? However, wouldn’t my config overwrite this setting?since it is included above (line wise)? | Ubuntu/Debian distributions have the non-standard entry Include /etc/ssh/sshd_config.d/*.conf at the beginning of the distribution sshd_config . The purpose of this is to allow users to customize their sshd configuration without modifying the core sshd_config file, which can minimize conflicts or unexpected configuration changes on apt update of OpenSSH. Because the first encountered configuration line is the one applied, any password commands in a custom configuration file in /etc/ssh/sshd_config.d/*.conf will pre-empt the PasswordAuthentication no line in the primary configuration. Ensure that all configuration is as you expect. As noted by @dexter, you can output the effective configuration with sudo sshd -T , which may highlight when one configuration file overrides another. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/727492",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/552031/"
]
} |
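Two commands that make this kind of override easy to track down; the paths are the Debian/Ubuntu defaults already mentioned in the question.
sudo sshd -T | grep -i passwordauthentication             # the value sshd will actually use
grep -ri '^[[:space:]]*PasswordAuthentication' \
    /etc/ssh/sshd_config /etc/ssh/sshd_config.d/          # every file that sets it, including drop-ins
# edit the offending drop-in (here 50-cloud-init.conf), then: sudo systemctl restart ssh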
727,691 | I have several files in a directory with this kind of content: Wood *NailsLarge Hammer * Some names have a star after them, some don't. I have multiple files with such content. In each file a product may or may not have a single star next to it.I need to make a bash script to count the number of star occurrences for each individual product in all the files. For example, the output needs to be like this: Wood 12Yellow Lamps 6Nails 4... Which means that in all the files it found 12x a star next to Wood, 6x a star next to the lamps, etc... It's pretty easy to parse it in C, but I don't want a binary to run. I want a shell script, and I'm not as versatile with grep and awk, which I'm sure I need here. I know how to count the stars per se, but I'm not sure how to track which star count belongs to which product. | Like this, with one awk : awk '$NF=="*"{$NF=""; arr[$0]++}END{for (i in arr) print i arr[i]}' ./* $NF is the latest string separated by space(s) by default the main trick is to create an associative named arr ay with the current words as key and incrementing as value at the END we iterate over the arr ay to print each keys/values With perl one-liner: perl -anE ' if ($F[-1] eq "*") { $k = join " ", @F[0..@F-2]; $a->{$k}++ } END{say "$_ $a->{$_}" for keys %$a}' ./* The -a is the split mode in @F default array | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/727691",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/467270/"
]
} |
727,701 | I need to open a file named - . (I am playing this hacker game) And when I try to use command cat -- - , after I type that, any command I type does not work, for that fact, everything I type just repeats itself after I hit enter. I assume I entered into some type of loop or extended command mode or something. What do I type, or press to get out of this "loop" ? Just to be clear, I typed "hhel" once, and it showed up twice, I typed "test" once, and as you can see in the photo, it showed up twice in the terminal. | Running cat -- - is effectively the same as running just cat . The - is understood to mean standard input, but cat 's default behaviour without any arguments is to read standard input anyway. As for the duplicating text, that's the terminal echoing your input as you type it (the first time) and cat reading input and printing it to output (which is also to the terminal, hence showing the text a second time). To exit this, press Ctrl D on a new line. You can also use Ctrl C to send SIGINT to cat , causing it to die. To view the contents of a file called - in the current directory using cat , use cat ./- | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/727701",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/552278/"
]
} |
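A few equivalent ways to open a file literally called -, collected here as a small reference; none of this is specific to the game.
cat ./-      # a path prefix stops cat treating the name as "read stdin"
cat < -      # with redirection the shell opens the file itself, so - is just a filename
less -- ./-  # -- ends option parsing for most utilities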
727,740 | The command df [-h] gives the used/remaining space on all disks. Can I somehow pipe this output to a different command and get the usage for my current disk (current disk = the disk where my current working directory is located)? | You can tell df to operate on any directory you like; thus df -h . will report the available space on the filesystem containing the current directory. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/727740",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/526898/"
]
} |
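Beyond the current directory, the same idea works for any path, and GNU df can trim the output to a single column if that is all you want to pipe onward; the /var/log path is just an example.
df -h .                               # filesystem holding the current directory
df -h /var/log                        # or any other path
df --output=avail -h . | tail -n 1    # GNU df: just the available-space figure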
727,878 | I sometimes forget the version numbers and codenames from Debian. Sure, lsb-release -a or cat /etc/os-release prints a well formatted information regarding the current system I logged in, but: Is there a manpage or help document that lists all previous versions? (Ideally it's something "native", so no tool to install. And something relatively easy to remember, so no curl ing a webpage) I didn't find a man -page and thought I might find something in in /usr/share/doc but unfortunatley not. What I did find, was a python file, that gives: grep 'Description: Debian' /usr/share/python-apt/templates/Debian.infoDescription: Debian 11 'bullseye'Description: Debian 10 'buster'Description: Debian 9 'stretch'Description: Debian 8 'jessie'Description: Debian 7 'Wheezy' Description: Debian 6.0 'Squeeze' Description: Debian 5.0 'Lenny' Description: Debian 4.0 'Etch'Description: Debian 3.1 'Sarge'Description: Debian current stable releaseDescription: Debian testingDescription: Debian 'Sid' (unstable) which is cool as a workaround, but dependent on the python-apt package to be installed and thats not what I was going for. EDIT: Thank @Gilles for finding /usr/share/distro-info/debian.csv (and ubuntu.csv) This file is nearly perfect, it even contains the dates of creation,release and "endoflive for LTS", e.g.: ...9,Stretch,stretch,2015-04-25,2017-06-17,2020-07-06... | There's no man page : find /usr/share/man -exec zgrep -li 'Wheezy|Potato' {} + 2>/dev/null There's no match . One way, using just one awk : awk -F, '{print $1, $2}' /usr/share/distro-info/debian.csvversion codename1.1 Buzz1.2 Rex1.3 Bo2.0 Hamm2.1 Slink2.2 Potato3.0 Woody3.1 Sarge4.0 Etch5.0 Lenny6.0 Squeeze7 Wheezy8 Jessie9 Stretch10 Buster11 Bullseye12 Bookworm13 Trixie Sid Experimental For Ubuntu : awk -F, '{print $1, $2}' /usr/share/distro-info/ubuntu.csv version codename4.10 Warty Warthog5.04 Hoary Hedgehog5.10 Breezy Badger6.06 LTS Dapper Drake6.10 Edgy Eft7.04 Feisty Fawn7.10 Gutsy Gibbon8.04 LTS Hardy Heron8.10 Intrepid Ibex9.04 Jaunty Jackalope9.10 Karmic Koala10.04 LTS Lucid Lynx10.10 Maverick Meerkat11.04 Natty Narwhal11.10 Oneiric Ocelot12.04 LTS Precise Pangolin12.10 Quantal Quetzal13.04 Raring Ringtail13.10 Saucy Salamander14.04 LTS Trusty Tahr14.10 Utopic Unicorn15.04 Vivid Vervet15.10 Wily Werewolf16.04 LTS Xenial Xerus16.10 Yakkety Yak17.04 Zesty Zapus17.10 Artful Aardvark18.04 LTS Bionic Beaver18.10 Cosmic Cuttlefish19.04 Disco Dingo19.10 Eoan Ermine20.04 LTS Focal Fossa20.10 Groovy Gorilla21.04 Hirsute Hippo21.10 Impish Indri22.04 LTS Jammy Jellyfish22.10 Kinetic Kudu23.04 Lunar Lobster Another way: xidel -se '//div[@id="toc"]/ul//li//li/a' \ https://en.wikipedia.org/wiki/Debian_version_history | cut -d ' ' -f2- or curl -sL https://en.wikipedia.org/wiki/Debian_version_history | xmlstarlet format -H - 2>/dev/null | xmlstarlet sel -t -v '//div[@id="toc"]/ul//li//li/a' - | cut -d ' ' -f2- output Debian 1.1 (Buzz)Debian 1.2 (Rex)Debian 1.3 (Bo)Debian 2.0 (Hamm)Debian 2.1 (Slink)Debian 2.2 (Potato)Debian 3.0 (Woody)Debian 3.1 (Sarge)Debian 4.0 (Etch)Debian 5.0 (Lenny)Debian 6.0 (Squeeze)Debian 7 (Wheezy)Debian 8 (Jessie)Debian 9 (Stretch)Debian 10 (Buster)Debian 11 (Bullseye)Debian 12 (Bookworm) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/727878",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/161003/"
]
} |
728,319 | I have a dd image of disk, which is a LUKS container containing a filesystme, and which I can loop mount and unlock to access the files. The filesystem in the container is only about 1/4 full. What is the proper way to take advantage of compression, while allowing me to be able to mounted and unlock the disk? | You can't compress LUKS encrypted data. However, if all involved filesystems support it, you can discard free space using fstrim, resulting in a sparse file where free space is zero and does not occupy space. # du -h foobar.img1.0G foobar.img# cryptsetup open --allow-discards foobar.img foobarEnter passphrase for foobar.img: foobar# mount /dev/mapper/foobar loop/# df -h loop/Filesystem Size Used Avail Use% Mounted on/dev/mapper/foobar 974M 129M 780M 15% /foobar/loop# fstrim -v loop/loop/: 845.7 MiB (886763520 bytes) trimmed# du -h foobar.img179M foobar.img Yet another option may be to shrink the filesystem itself and truncate the size of the image file accordingly (remember to account for the LUKS header offset, usually 2 MiB for LUKS 1 and 16 MiB for LUKS 2). The alternative would be to compress the unencrypted data instead. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/728319",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58343/"
]
} |
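Once fstrim has zeroed the free space as shown, the image also stores and transfers much better; these follow-up commands are a sketch using the same placeholder file name, and assume GNU cp and an installed zstd.
cp --sparse=always foobar.img foobar.sparse.img   # re-punch holes wherever blocks are all zero
zstd -T0 foobar.img -o foobar.img.zst             # or compress the image for archival/transfer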
728,329 | I have a list of files like: 100119329_d01.png100119329_d08.png100119329_d02.png100119329_d05.png100119329_d03.png100119329_d04.png100119329_d07.png100119329_f02.png100119329_f01.png I want to sort by numbers and then by the preceding character in reverse to get the following output: 100119329_f01.png100119329_f02.png100119329_d01.png100119329_d02.png100119329_d03.png100119329_d04.png100119329_d05.png100119329_d07.png100119329_d08.png I've tried: cat <file> |sort -k1.11r -k1.12,1.13n but only one of the arguments work at a time. So I can only sort by number or by reversed character.How can I get both to work at the same time? | You can't compress LUKS encrypted data. However, if all involved filesystems support it, you can discard free space using fstrim, resulting in a sparse file where free space is zero and does not occupy space. # du -h foobar.img1.0G foobar.img# cryptsetup open --allow-discards foobar.img foobarEnter passphrase for foobar.img: foobar# mount /dev/mapper/foobar loop/# df -h loop/Filesystem Size Used Avail Use% Mounted on/dev/mapper/foobar 974M 129M 780M 15% /foobar/loop# fstrim -v loop/loop/: 845.7 MiB (886763520 bytes) trimmed# du -h foobar.img179M foobar.img Yet another option may be to shrink the filesystem itself and truncate the size of the image file accordingly (remember to account for the LUKS header offset, usually 2 MiB for LUKS 1 and 16 MiB for LUKS 2). The alternative would be to compress the unencrypted data instead. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/728329",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194276/"
]
} |
728,390 | I need to feed a program some specific files, in the correct order and grouped two by two. If I have A_file.txtB_file.txtC_file.txtD_file.txt I need to feed it to a program so that file A and B are processed first, then C and D and so on. In essence: for i in *.txt; do some_program A_file.txt B_file.txt > output_AB some_program C_file.txt D_file.txt > output_CD I know that the above doesn't make sense, but it was to illustrate the point. Essentially, iterate over all .txt files in the folder but feed them two at a time to the program, then move to the next two. Looking to learn, many thanks. | #!/bin/shset -- *_file.txtuntil [ "$#" -lt 2 ]; do process "$1" "$2" >"output_${1%_file.txt}${2%_file.txt}" shift 2done This sets the positional parameters to the list of filenames you are interested in, based on a filename globbing pattern matching the names in the question. It then uses a loop to iterate over this list until there are less than two names left in the list ( $# is the length of the list of positional parameters). In each iteration, the first two elements of the list, $1 and $2 , are processed and then shifted off the list using shift 2 . The output from the processing is redirected to a file named output_ followed by the concatenation of the variable parts of the two filenames (whatever is before the static _file.txt string in each). This assumes that the files are named in such a way that sorting the names in lexicographical order (which the expansion of the globbing pattern will do) results in a list of names that can be paired in the way shown in the question. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/728390",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291379/"
]
} |
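A dry run of the pairing logic, with echo standing in for the real program, makes it easy to confirm the output names before touching any data; the file names follow the pattern from the question.
set -- A_file.txt B_file.txt C_file.txt D_file.txt
until [ "$#" -lt 2 ]; do
    echo "would run: some_program $1 $2 > output_${1%_file.txt}${2%_file.txt}"
    shift 2
done
# prints: would run: some_program A_file.txt B_file.txt > output_AB   (then the C/D pair)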
728,551 | I'd like to create a script that is able to install my desired set of packages in any Linux distro (initially create it to run in Ubuntu and later expand it to whichever) and basically function as an environment setup. Examples of such packages would be zsh, omz, fzf, autocomplete etc... My question is this, should I rely on the existence of a package manager ( apt-get ) in order to install the package and just call that through the script or manually wget the source files and proceed with the installation? Which do you think is best? | I'd recommend to always prefer the native package manager for anything that needs to link against system libraries, or anything that other things on the target system might link against. Otherwise, you'll be going through the effort of making things work with distro quirks for every single platform you're targeting, and will lead to people having problems if they try to use your version vs the version that a package they've installed requires. As @ArtemS.Tashkinov says, you need to deal with each target individually. That really does not scale: the GNU Radio community has a tool called PyBOMBS, which tried to do this for but a handful of platforms. It's a maintenance nightmare. We should have gone with existing prefix managers like conda from the start¹ for those use cases where users actually wanted an isolated prefix, and put more effort in coordination with the official distro package maintainers to make the software we shipped installable via apt, yum, pacman, emerge, zypper,... and actually work flawlessly out of the box. And specifically regarding your software: under no circumstances should you install a hand-compiled zsh to your system path. That's a recipe for nightmarish incompatibilities. Just install the zsh your distro brings. A Linux distribution is a software conflict avoidance mechanism that you should use whether possible. Building things from source and installing them that a user night reasonably install via their package manager leads to conflicts, and thus breaks your user's system. ¹ This is my private opinion on it; PyBOMBS1 and PyBOMBS2 still enabled a great deal of great applications! It's definitely cool to have it. It's just been an immense effort, duplicating what larger communities are already doing. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/728551",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/463960/"
]
} |
728,569 | I have a Lenovo IdeaPad 3-15ADA6 Laptop, Type 82KR, with Debian 11 Bullseye. I have just installed a brand new RAM chip, Corsair Vengeance 8Gb DDR4 2400MHz, as from Lenovo specifications. If I run sudo lshw I can see all the installed RAM correctly: *-memory description: System Memory physical id: 1 slot: System board or motherboard size: 12GiB *-bank:0 description: Row of chips DDR4 Synchronous Unbuffered (Unregistered) 2400 MHz (0,4 ns) product: CMSX8GX4M1A2400C16 vendor: Unknown physical id: 0 serial: 00000000 slot: DIMM 0 size: 8GiB width: 64 bits clock: 2400MHz (0.4ns) *-bank:1 description: SODIMM DDR4 Synchronous Unbuffered (Unregistered) 2400 MHz (0,4 ns) product: HMA851S6DJR6N-XN vendor: Hynix physical id: 1 serial: 00000000 slot: DIMM 0 size: 4GiB width: 64 bits clock: 2400MHz (0.4ns) I expected to have 12 GB of RAM, but if I run htop , the graphical system monitor or simply the free command, this is what I get: $ free -ht total used free shared buff/cache availableMem: 2,8Gi 1,8Gi 260Mi 45Mi 713Mi 700MiSwap: 976Mi 973Mi 3,0MiTotal: 3,8Gi 2,8Gi 264Mi The system tends to freeze with a small number of open applications. I thought the memory might be broken, but I should be able to see 4 GB anyway, the amount of fixed installed memory on the motherboard, not 2.8 GB! | I'd recommend to always prefer the native package manager for anything that needs to link against system libraries, or anything that other things on the target system might link against. Otherwise, you'll be going through the effort of making things work with distro quirks for every single platform you're targeting, and will lead to people having problems if they try to use your version vs the version that a package they've installed requires. As @ArtemS.Tashkinov says, you need to deal with each target individually. That really does not scale: the GNU Radio community has a tool called PyBOMBS, which tried to do this for but a handful of platforms. It's a maintenance nightmare. We should have gone with existing prefix managers like conda from the start¹ for those use cases where users actually wanted an isolated prefix, and put more effort in coordination with the official distro package maintainers to make the software we shipped installable via apt, yum, pacman, emerge, zypper,... and actually work flawlessly out of the box. And specifically regarding your software: under no circumstances should you install a hand-compiled zsh to your system path. That's a recipe for nightmarish incompatibilities. Just install the zsh your distro brings. A Linux distribution is a software conflict avoidance mechanism that you should use whether possible. Building things from source and installing them that a user night reasonably install via their package manager leads to conflicts, and thus breaks your user's system. ¹ This is my private opinion on it; PyBOMBS1 and PyBOMBS2 still enabled a great deal of great applications! It's definitely cool to have it. It's just been an immense effort, duplicating what larger communities are already doing. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/728569",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/553253/"
]
} |
728,955 | Fresh install on a machine that has two disks: 1TB HDD150GB SSD I installed Fedora on the latter but for some reason only 15GB of it is allocated for the root filesytem: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTSsda 8:0 0 931.5G 0 disk ├─sda1 8:1 0 579M 0 part └─sda2 8:2 0 930.9G 0 part /run/media/richard/12a41467-0624-427c-a60b-86863514358asdb 8:16 0 149.1G 0 disk ├─sdb1 8:17 0 600M 0 part /boot/efi├─sdb2 8:18 0 1G 0 part /boot└─sdb3 8:19 0 147.5G 0 part └─fedora_fedora-root 253:0 0 15G 0 lvm /sdc 8:32 1 0B 0 disk sdd 8:48 1 0B 0 disk sde 8:64 1 0B 0 disk sr0 11:0 1 1024M 0 rom zram0 252:0 0 8G 0 disk [SWAP] First question - is this normal to not use the whole thing? Second question - what is the right set of commands to follow to make more of it available? Update I was able to increase the size using: lvextend -L +10G /dev/mapper/fedora_fedora-root But is this the best solution? | This is default behaviour of Fedora Server -- the root filesystem will be 15 GiB and rest of the disk space is left unused for the user to either resize the root logical volume or use for different use case (for /var or virtualization etc.). If you want a different storage layout, you need to use the custom partitioning in the installer and create the mountpoints manually . One of the reasons is that the XFS filesystem used by Fedora Server (and only server, Workstation and other flavours use btrfs) cannot be shrunk so if the installer uses the entire free space, it will be really hard to change the default layout. If you want to resize your root filesystem you can use lvextend -L+<size> --resizefs fedora_fedora/root . Where <size> can be for example 50G for 50 GiB. Edit: The --resizefs is important, without this the lvresize command will resize only the volume and not the filesystem on it. If you run lvresize without the --resizefs option you can resize the filesystem afterwards with xfs_growfs /dev/mapper/fedora_fedora-root . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/728955",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/553667/"
]
} |
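Put together, a typical resize on the default Fedora Server layout looks like this; the +50G figure is only an example, adjust it to the space you want to claim.
sudo lvs                                              # confirm the VG/LV names first
sudo lvextend -L +50G --resizefs fedora_fedora/root   # grow the LV and its XFS filesystem in one step
df -h /                                               # verify the new size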
728,975 | Printing the $RANDOM variable in zsh is giving me the same result if I don't use it in another context. It's like the shell has a cache of command outputs or variable values so I can't have the new value. I am, running this: zsh $ printf \\$(printf "%o" $(( $RANDOM % 123)))\\$(printf "%o" $(( $RANDOM % 123))) It's two times the same $RANDOM result and in different successive runs, I still get the same result as the first run. Still get the exact same behaviour printf \\$(printf "%o" $(( $(echo $RANDOM) % 123)))\\$(printf "%o" $(( $(echo $RANDOM) % 123))) | It's a property of zsh . See also man zshparam on RANDOM : The values of RANDOM form an intentionally-repeatable pseudo-randomsequence; subshells that reference RANDOM will result in identicalpseudo-random values unless the value of RANDOM is referenced orseeded in the parent shell in between subshell invocations. You're evaluating $RANDOM inside subshells, so this is the expected result: $ RANDOM=123$ echo $(echo $RANDOM) $(echo $RANDOM) $(echo $RANDOM)17313 17313 17313 Same command in bash or busybox ash: $ echo $(echo $RANDOM) $(echo $RANDOM) $(echo $RANDOM)12554 22752 18907 Different shell, different behavior. Different example in zsh : $ RANDOM=123$ echo $RANDOM $(echo $RANDOM) $RANDOM $(echo $RANDOM) $RANDOM 17313 7829 7829 9329 9329 Whenever the parent shell uses $RANDOM , the subshells that follow will give the next number in the pseudo-random sequence. But since the subshell does not affect the parent shell, the numbers are repeated when the parent shell uses $RANDOM again. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/728975",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/403461/"
]
} |
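For the original one-liner, the practical fix in zsh is to take the two numbers in the current shell rather than inside command substitutions; this is a small sketch of that idea.
r1=$(( RANDOM % 123 )); r2=$(( RANDOM % 123 ))   # both expansions happen in the parent shell
print $r1 $r2                                     # two independent pseudo-random values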
729,004 | I'm running debian bullseye (was a fresh install but with old $HOME) and mate desktop.Whenever I lock my Notebook with mate-screensaver; I can not unlock it neither with my main user nor an unmodified testuser. In journal I found journalctl | grep mate-screensaverDez 19 18:06:28 Taomon dbus-daemon[541]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.70' (uid=1000 pid=2931 comm="mate-screensaver-preferences ")Dez 19 18:06:28 Taomon pulseaudio[806]: Looking for .desktop file for mate-screensaver-preferencesDez 19 18:06:28 Taomon pulseaudio[806]: Found /usr/share/applications/mate-screensaver-preferences.desktop.Dez 19 18:06:28 Taomon pulseaudio[806]: Parsing configuration file '/usr/share/applications/mate-screensaver-preferences.desktop'Dez 19 18:08:33 Taomon mate-screensaver-dialog[3008]: pam_unix(mate-screensaver:auth): authentication failure; logname= uid=1000 euid=1000 tty=:0 ruser= rhost= user=alexDez 19 18:08:50 Taomon pulseaudio[806]: Looking for .desktop file for mate-screensaver-dialogDez 19 18:08:50 Taomon mate-screensaver-dialog[3008]: pam_unix(mate-screensaver:auth): authentication failure; logname= uid=1000 euid=1000 tty=:0 ruser= rhost= user=alexDez 19 18:09:02 Taomon pulseaudio[806]: Freed 17 "mate-screensaver-dialog"Dez 19 18:16:23 Taomon mate-screensaver-dialog[3987]: pam_unix(mate-screensaver:auth): authentication failure; logname= uid=1001 euid=1001 tty=:0 ruser= rhost user=alex My workaround ist to hit STR+Alt+F1 and unlock it there with mate-screensaver-command -u If it matters display-manager is lightdm gsettings list-recursively org.mate.screensaverorg.mate.screensaver themes ['screensavers-personal-slideshow', 'screensavers-popsquares', 'screensavers-gnomelogo-floaters', 'screensavers-footlogo-floaters', 'screensavers-cosmos-slideshow']org.mate.screensaver embedded-keyboard-command ''org.mate.screensaver user-switch-enabled trueorg.mate.screensaver status-message-enabled trueorg.mate.screensaver embedded-keyboard-enabled falseorg.mate.screensaver logout-command ''org.mate.screensaver idle-activation-enabled falseorg.mate.screensaver lock-enabled falseorg.mate.screensaver logout-enabled falseorg.mate.screensaver power-management-delay 30org.mate.screensaver logout-delay 120org.mate.screensaver cycle-delay 10org.mate.screensaver lock-delay 1org.mate.screensaver mode 'random'org.mate.screensaver picture-filename '/usr/share/images/desktop-base/desktop-background'org.mate.screensaver lock-dialog-theme 'default' I have no idea where to start. (maybe pkaction or pam?) | It's a property of zsh . See also man zshparam on RANDOM : The values of RANDOM form an intentionally-repeatable pseudo-randomsequence; subshells that reference RANDOM will result in identicalpseudo-random values unless the value of RANDOM is referenced orseeded in the parent shell in between subshell invocations. You're evaluating $RANDOM inside subshells, so this is the expected result: $ RANDOM=123$ echo $(echo $RANDOM) $(echo $RANDOM) $(echo $RANDOM)17313 17313 17313 Same command in bash or busybox ash: $ echo $(echo $RANDOM) $(echo $RANDOM) $(echo $RANDOM)12554 22752 18907 Different shell, different behavior. Different example in zsh : $ RANDOM=123$ echo $RANDOM $(echo $RANDOM) $RANDOM $(echo $RANDOM) $RANDOM 17313 7829 7829 9329 9329 Whenever the parent shell uses $RANDOM , the subshells that follow will give the next number in the pseudo-random sequence. 
But since the subshell does not affect the parent shell, the numbers are repeated when the parent shell uses $RANDOM again. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/729004",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/400156/"
]
} |
729,022 | I'm using grep to search through 1 TB of files. I want to grep filenames and put the names in a text file AND I want to cp all files with a match to the dir /home/user/matches . I want to both tasks without searching through all my files with grep twice. I had the idea to put filenames output into a text file with grep grep -ril "xxx" . >> /home/user/matches/output-filename.txt And now use output-filename.txt as input for cp and make cp execute line per line. How do I do that? awk? Or do you guys have other ideas to avoid searching through all files twice | File paths are sequences of bytes other than 0; they're not necessarily text let alone lines of texts. In particular, a file path may contain newline characters may contain sequences of bytes that don't form valid characters may be longer than LINE_MAX The GNU implementation of grep (the one that added the -r option), can print the paths in a non-text format with -Z that is post-processable safely. For instance, GNU xargs can process that format with its -0 option: xargs -r0 -a <( grep -rilZ xxx . | tee file.list) cp -it /home/user/matches -- (here also assuming GNU cp for its -t option) If you want to print that list in a text¹ format that is understandable by a human, with GNU printf : xargs -r0a file.list printf '%q\n' ¹ Well, it should ensure bytes that can't be decoded as characters are rendered as $'\234' representations. Same for control characters including newline, which is rendered as $'\n' . That addresses the first two points above, but it doesn't guarantee the output will have lines shorter than LINE_MAX (but then again, GNU implementations of standard text utilities generally don't have a limit on length of lines they support). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/729022",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/553723/"
]
} |
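An equivalent loop form of the same NUL-delimited pipeline, in case the xargs options differ on your system; the pattern, list file and destination directory are the ones from the question.
grep -rilZ -- xxx . | tee file.list |
  while IFS= read -r -d '' f; do
      cp -i -- "$f" /home/user/matches/
  done
tr '\0' '\n' < file.list   # rough human-readable view (only misleading for names containing newlines)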
729,101 | I am troubleshooting a script and I wanted to manually enter each line, one by one, to find the error. My script has an if statement: if [ -d dir1 ]; then rm -rf dir1fi So I tried entering as a one line command $ if [ -d dir1 ]; then rm -rf dir1 fi> I tried Ctrl-D but then get message : bash: syntax error: unexpected end of file So how do I enter this if statement on the command line? | Short answer: you need a semi-colon after rm -rf dir1 . See it in action: enter the if block exactly as it appears in the script: $ if [ -d dir1 ]; then> rm -rf dir1> fi After running it, use the up-arrow to call back the last command, it will show you the entire if block on one line: $ if [ -d dir1 ]; then rm -rf dir1; fi The shell needs some way to interpret breaks between commands inside the if-block. It can't simply treat it as a single command executed all at once, so the semi-colons serve as those breaks. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/729101",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81765/"
]
} |
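When stepping through a script by hand, the test can also be collapsed without an if block at all, which sidesteps the semicolon pitfalls entirely.
[ -d dir1 ] && rm -rf dir1    # same effect as the if/then/fi block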
729,124 | I'm using a Wyse50 Terminal emulation on Rocky Linux 9. Man pages, help screens, etc., are unusable because the system outputs attribute and color codes that the terminal emulator doesn't understand; so, for example, man cp reads like this: 1mSYNOPSIS0mm 1mcp 22m[4mOPTION24m]... [4m-T24m] 4mSOURCE24m 4mDEST0mm 1mcp 22m[4mOPTION24m]... 4mSOURCE24m... 4mDIRECTORY0mm 1mcp 22m[4mOPTION24m]... 4m-t24m 4mDIRECTORY24m 4mSOURCE24m...m m1mDESCRIPTION0mm I've updated the latest terminfo packages and did an infocmp between the latest wy50 and the wy50 from a Linux 4 where everything worked fine. I see nothing in profile, bash_profile, bashrc or .bashrc that would set any color parameters. What am I missing? (other than a legacy application that belongs in the prior century) | Short answer: you need a semi-colon after rm -rf dir1 . See it in action: enter the if block exactly as it appears in the script: $ if [ -d dir1 ]; then> rm -rf dir1> fi After running it, use the up-arrow to call back the last command, it will show you the entire if block on one line: $ if [ -d dir1 ]; then rm -rf dir1; fi The shell needs some way to interpret breaks between commands inside the if-block. It can't simply treat it as a single command executed all at once, so the semi-colons serve as those breaks. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/729124",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/553829/"
]
} |
729,153 | I have a few commands that don't run as a script. It's supposed to create a bulleted list of the file names, then sort the columns of each file alphabetically, grab the second and its matching third column and put them in HTML tags. It works well for the most part, but doesn't successfully remove the .csv file extension. Every time I run the script, it returns line 10 p: No such file or directory found and line 21: /a No such file or directory found repeatedly. Update: It seems like it's an error with sort on my computer. I've run a simple shell script with only a sort comment and I get a no such file or directory error. I'm already on my sudoers list but maybe I'm doing something wrong. I don't understand why some people have commented that they have no problem running the script. #!/bin/bashfor file in *.csv; doname=${file%.csv} echo "<li><a href='$file'>$file</a></li>" >> final.htmldonefor file in *.csv; do name=${file%.csv} sort -t"," -k2 "$file" | awk -vfile="$name" -F"," ' BEGIN { printf "<h3 id=\"%s\">%s</h3>\n", file, file printf "<ul class=\"no-bullet\">\n" } { printf "<li>%s (%s)</li>\n", $2, $3 } END { printf "</ul>\n" printf "<p id=\"%s\" class=\"text-smaller block-looser\">[<a href=\"#top\">Return to Top</a>]</p>\n", file } ' >> final.htmldone When I have run bash -x myscript.sh I noticed that it eventually starts to return this: BEGIN { printf "<h3 id=\"%s\">%s</h3>\n", file, file printf "<ul class=\"no-bullet\">\n" } {+ sort -t, -k2 file.csv printf "<li>%s (%s)</li>\n", $2, $3 } END { printf "</ul>\n" printf ' 'id="%s"' 'class="text-smaller' 'block-looser"'myscript.sh: line 9: p: No such file or directory+ href='"#top"'+ to Topmyscript.sh: line 21: /a: No such file or directory I don't understand what the discrepancy is between running it in the shell versus the script especially when I run it in the same directory as my .csv files. These issues don't appear when the commands are pasted into the command line. Update: running the code with a directory only makes the first for loop run but still has an issue going through the second loop. I accidentally put down lines 10 and 22 instead of lines 10 and 21 and updated the post to reflect that. There might have been an extra line somewhere in an attempt to debug/parse out what's going on. Here is a sample .csv file that I'm trying to run with this shell script. Contacts.csv ID,Name,State1,John,NY2,Rachel,SC The expected output for this code: <li><a href='Contacts'> Contacts</a></li><h3 id='Contacts'> Contacts </h3><ul class="no-bullet"><li>John (NY) </li><li>Rachel (SC) </li></ul><p id='Contacts' class="text-smaller">[<a href="#top">Return to top</a>]</p> I've also tried using printf "<p and printf '<p to no avail. Second update:The script still doesn't run as expected even when pasted from this post. I updated homebrew and tried a different machine but no luck. What could be going on??? | Short answer: you need a semi-colon after rm -rf dir1 . See it in action: enter the if block exactly as it appears in the script: $ if [ -d dir1 ]; then> rm -rf dir1> fi After running it, use the up-arrow to call back the last command, it will show you the entire if block on one line: $ if [ -d dir1 ]; then rm -rf dir1; fi The shell needs some way to interpret breaks between commands inside the if-block. It can't simply treat it as a single command executed all at once, so the semi-colons serve as those breaks. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/729153",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/541397/"
]
} |
729,279 | I compiled a list of materials I needed in a game, starting from the very top down to its most primitive ingredients. However, now I'm looking for a way to quickly tally up the numbers. 21 reinforced alloy 21 damascus steel 21 steel 21 iron dust 21 carbon 21 iron 21 iron dust 21 carbon 21 iron 21 hardened metal 21 damascus steel 21 steel 21 iron dust 21 carbon 21 iron 21 iron dust 21 carbon 21 iron 21 duralmin 21 aluminum dust 21 copper dust 21 aluminum 21 aluminum dust 21 compressed carbon 84 carbon 21 aluminum bronze 21 aluminum dust 21 bronze 21 copper dust 21 tin dust 21 copper 21 aluminum 21 aluminum dust 21 corinthian bronze 21 silver dust 21 gold dust 21 copper dust 21 bronze 21 copper dust 21 tin dust 21 copper 21 solder 21 lead dust 21 tin dust 21 lead 21 lead dust 21 billon 21 silver dust 21 copper dust 21 silver 21 silver dust 21 gold 24 carat The top levels don't matter, since I'm looking for the raw materials I need to collect. For example, 21 hardened metal and 21 damascus steel don't matter, because I'm looking for the total of 42 damascus steel , which also doesn't matter, because I'm looking for 42 iron dust , 42 carbon , and 42 iron (this example doesn't count the rest of the list), the total raw materials count. So far I did this on a regex testing website , but eventually I'd like to be able to use grep so I don't have to open up a website to do the counting. I'd like to get something like "there are 5 occurences of carbon, here are the matching lines" so I can calculate easier, since if I know there are 5 occurences of carbon with 4 of them being 21 carbon and 1 being 84 carbon , I can now easily calculate that I need a total of 21*4 + 84 = 168 carbon . I'm trying to count lines that don't have another line with a larger amount of tabs following it, since presumably if it does then it's not the raw material. /(\t+)\d+ aluminum\n(?!\1)/g (replacing "aluminum" with whatever raw material count I'm trying to find) This is not finding anything though. Is there a way to achieve what I'm trying to achieve with regex at all? If so, how? Thank you for your time. I'm not sure whether to put this on SO or this SE, but given that I eventually want to be able to use grep I thought this might be the more appropriate place. | If you want to use perl-like regexps, why not using the real thing: <your-file perl -l -0777 -ne ' while (m{^(\s*+)(\d+) (.*)$(?!\n\1\s)}mg) { $count{$3} += $2 } END { printf "%4d %s\n", $count{$_}, $_ for sort keys %count }' Which gives: 84 aluminum dust 168 carbon 42 copper 105 copper dust 21 gold 24 carat 21 gold dust 84 iron 84 iron dust 42 lead dust 63 silver dust 63 tin dust -0777 -n means the whole input is slurped into $_ . The m ultiline flag to the m{...} operator makes so that ^ and $ match at the beginning and end of each line within $_ rather than just at the start and end of $_ . Without the s flag, . doesn't match on a newline character, but beware that \s does which could throw things off here if there were blank lines in the input. \s*+ is the non-backtracking version of \s* . Not strictly necessary here since what follows ( \d+ ) can't match a whitespace. Standard grep doesn't support perl-like regexps such as those \d and (?!\1) perl RE operators you're using, but you could use pcregrep which happens to also support -o and a multiline mode with -M : <your-file pcregrep -Mo '^(\s*+)\K.*$(?!\n\1\s)' You'd still need to pipe to something else like perl or awk to do the sums, so that has little advantage over using perl for everything. 
If the indentation may have a mix of tabs and spaces, you may want to have the input go through either expand or unexpand first to consolidate those into just spaces or just tabs. By default, they consider tab stops to be 8 columns apart like most terminals or browsers do (but not stackexchange which annoyingly has them 4 columns apart), but see the -t option to change that. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/729279",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/553983/"
]
} |
729,344 | I'm able to run this command successfully: tail -f my_file.txt | grep foo It shows only the lines with the string foo , and it keeps showing them. But when I run this command: tail -f my_file.txt | grep foo | grep bar It doesn't show any lines, even though there are lines that include both foo and bar . I know there is a solution for using multiple patterns in a single grep call, but I want to know why this line failed. | That's because the default behaviour of the C runtime library is to buffer writes to stdout until a full block of data is written (some kilobytes, usually), unless stdout is connected to a terminal. You'll get output once the middle grep has printed a full block, but then you have to wait again for the next block to fill, and so on. It's an optimization for throughput, and works much better when the left-hand command just does some task and terminates, instead of waiting for something. GNU grep has the --line-buffered option to turn off that buffering, so this should work better: tail -f my_file.txt | grep --line-buffered foo | grep bar The last grep prints to the terminal so it's line buffered by default and doesn't need an option. See Turn off buffering in pipe for generic solutions to the buffering issue. In this particular case of two greps, you could use e.g. a single AWK instead as Stéphane Chazelas mentioned in a comment: tail -f my_file.txt | awk '/foo/ && /bar/' (Incidentally, you could also do things like awk '/foo/ && !/bar/' , catching lines with foo but no bar .) Doing the same in grep would be harder, as grep -e foo -e bar matches any lines that contain either foo or bar . You'd need something like ... | grep -E -e 'foo.*bar|bar.*foo' instead. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/729344",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11086/"
]
} |
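If the middle command has no --line-buffered equivalent, GNU coreutils' stdbuf is a generic way to get the same effect on any dynamically linked filter; shown here next to the two fixes from the answer.
tail -f my_file.txt | stdbuf -oL grep foo | grep bar        # force line-buffered output on the middle stage
tail -f my_file.txt | grep --line-buffered foo | grep bar   # GNU grep's own switch
tail -f my_file.txt | awk '/foo/ && /bar/'                  # or collapse the two filters into one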
729,695 | HI supposed I want to extract only a component of a match, for example up to the first "_" echo "Ha00030_Z6_L008_I1_001.fastq.gz" | grep -P -o '^H.+?_' however the above returns, Ha00030_ , but I only want Ha00030 is there something I can do, parenthesis or something to indicate to grep that I only want a certain component of the match? edit: the ^H is not a requirement. so matching up to the first "_" is sufficient. | Like this, using exclude character class : $ echo "Ha00030_Z6_L008_I1_001.fastq.gz" | grep -Po '^[^_]+'Ha00030 Or the same without PCRE aka -P that is not on all boxes at this time, like on latest freeBSD : echo "Ha00030_Z6_L008_I1_001.fastq.gz" | grep -o '^[^_]\+' The [^_]+ means all but not a _ with + quantifier using bash using parameter expansion replacement , see: http://mywiki.wooledge.org/BashFAQ/073 and "Parameter Expansion" in man bash . Also see http://wiki.bash-hackers.org/syntax/pe $ str=Ha00030_Z6_L008_I1_001.fastq.gz$ echo "${str//_*/}"Ha00030 or $ IFS=_ read str _ <<< "Ha00030_Z6_L008_I1_001.fastq.gz"$ echo "$str"Ha00030 using cut (any shell) POSIX ly $ printf '%s\n' "Ha00030_Z6_L008_I1_001.fastq.gz" | cut -d'_' -f1Ha00030 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/729695",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/283063/"
]
} |
729,955 | I have this: -rw-r--r-- 1 user user 36166999908 Jan 29 2022 tmp.archive.part1.zip-rw-r--r-- 1 user user 5579574562 Jan 29 2022 tmp.archive.part2.zip-rw-r--r-- 1 user user 5097536636 Jan 29 2022 tmp.archive.part3.zip-rw-r--r-- 1 user user 10612382236 Dec 29 02:19 tmp.archive.part4.zip G M k so these ZIP files are 36 GB, 5, 5, and 10 GB in size, all of them would be past the 2^32 4GB maximum that I read at one place. They say "zip64" allows 2^64 size, but I don't know what I have, zip -h says: Copyright (c) 1990-2008 Info-ZIP - Type 'zip "-L"' for software license.Zip 3.0 (July 5th 2008). Usage: ... and file tells me: file tmp.archive.part1.ziptmp.archive.part1.zip: Zip archive data, at least v1.0 to extract so how can that be? I do notice that zipmerge completely fails to operate with these files. My problem is, I need to combine these zip files into one (if possible) and do so without actually extracting them (no space and file count quota on the system this is on). I tried a zip2tar python script which someone posted here to another question but that also fails. They don't like the file, say it's no zip file, or just crash with core dump. If these zip files were created with the zip 3.0 which I showed, then is there perhaps a better zipmerge or something which will not choke on the size? | Because there is more than one type of ‘ZIP’ archive. The original ZIP format, as implemented by the first version of PKZIP, did indeed have a 4 GiB limit on archive size (as well as a corresponding limit on archive member sizes, both compressed and uncompressed). However, with version 4.5 of the format, the ZIP64 extensions were introduced, which extended this limit to 16 EiB by moving the relevant fields in the file header and archive entries to supplementary fields stored elsewhere in the archive, as well as expanding the limit on the number of archive members (classic ZIP was limited to 65535 archive members) in a similar manner. However , unless a tool is actually looking for those extended fields, they get ignored and the tool will just fail to work correctly. This is because a ZIP64 archive is still technically a valid ‘classic’ ZIP archive unless you try to validate the member sizes (and this is an excellent example of why backwards compatibility can sometimes be a bad thing). Possibly of note, there are actually a lot of other potential incompatibilities in the ZIP format. Of particular note, there are multiple incompatible encryption mechanisms that can be used with ZIP archives, and almost a dozen different compression algorithms with most implementations not supporting all of them (though you have to go out of your way to use something other than ‘Store’, ‘Deflate’, or ‘Deflate64’, and those are supported by pretty much everything). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/729955",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/386302/"
]
} |
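A quick way to confirm that the tools on a given box actually understand the ZIP64 fields described above before attempting a merge; Info-ZIP unzip 6.0 and later handle ZIP64, and the filename below is the one from the question:
# If the listing completes and the reported total exceeds 4 GiB, the reader is using the ZIP64 fields
unzip -l tmp.archive.part1.zip | tail -n 3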
729,960 | Consider the following shell script function: #!/bin/bashdeclare -a dirfunction() {local -a directories=( "A/B/C D" "E/F G H" ) #Initialize local array.printf "%q " "${directories[@]}" #"Return" values of array in escaped form.}dir=( $(funcion) )for i in "${dir[@]}"; do echo $idone How I would the command substitution works: dir=( A/B/C\ D E/F\ G\ H ) #Escape the whitespace. The result of script shuld is: A/B/C DE/F G H How the command substitution realy works (I think, because its result): dir=( A/B/C\\ D E/F\\ G\\ H ) #Escape the backslash. The result of script is: A/B/C\DE/F\G\H Is there a way to make this works like I would? | Because there is more than one type of ‘ZIP’ archive. The original ZIP format, as implemented by the first version of PKZIP, did indeed have a 4 GiB limit on archive size (as well as a corresponding limit on archive member sizes, both compressed and uncompressed). However, with version 4.5 of the format, the ZIP64 extensions were introduced, which extended this limit to 16 EiB by moving the relevant fields in the file header and archive entries to supplementary fields stored elsewhere in the archive, as well as expanding the limit on the number of archive members (classic ZIP was limited to 65535 archive members) in a similar manner. However , unless a tool is actually looking for those extended fields, they get ignored and the tool will just fail to work correctly. This is because a ZIP64 archive is still technically a valid ‘classic’ ZIP archive unless you try to validate the member sizes (and this is an excellent example of why backwards compatibility can sometimes be a bad thing). Possibly of note, there are actually a lot of other potential incompatibilities in the ZIP format. Of particular note, there are multiple incompatible encryption mechanisms that can be used with ZIP archives, and almost a dozen different compression algorithms with most implementations not supporting all of them (though you have to go out of your way to use something other than ‘Store’, ‘Deflate’, or ‘Deflate64’, and those are supported by pretty much everything). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/729960",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/554728/"
]
} |
729,966 | today I wrote a script which should copy all files per ftp to my destination server. Script is executed inside zsh-shell, so I guess I should use "sh script.sh" to execute it correctly? Also I pasted my script to https://www.shellcheck.net/ , which says its fine, but inside my zsh-shell I get following errors: upload_file: command not found #!/bin/bashFTP_SERVER=ftp.ipFTP_USER=ftp.userFTP_PASS=ftp.passwdFTP_DESTINATION_DIR="/destination_directory"LOCAL_SOURCE_DIR="/home/$USER/pictures/local_pictures"function upload_file { local file=$1 echo "Hochladen von Datei $file" ftp -inv "$FTP_SERVER" << EOFuser $FTP_USER $FTP_PASSbinarycd $FTP_DESTINATION_DIRput $filequitEOF}find "$LOCAL_SOURCE_DIR" \( -name "*.jpg" -o -name "*.png" \) -exec bash -c 'upload_file "$0"' {} \; I am not very used to this kind of scripting... | Because there is more than one type of ‘ZIP’ archive. The original ZIP format, as implemented by the first version of PKZIP, did indeed have a 4 GiB limit on archive size (as well as a corresponding limit on archive member sizes, both compressed and uncompressed). However, with version 4.5 of the format, the ZIP64 extensions were introduced, which extended this limit to 16 EiB by moving the relevant fields in the file header and archive entries to supplementary fields stored elsewhere in the archive, as well as expanding the limit on the number of archive members (classic ZIP was limited to 65535 archive members) in a similar manner. However , unless a tool is actually looking for those extended fields, they get ignored and the tool will just fail to work correctly. This is because a ZIP64 archive is still technically a valid ‘classic’ ZIP archive unless you try to validate the member sizes (and this is an excellent example of why backwards compatibility can sometimes be a bad thing). Possibly of note, there are actually a lot of other potential incompatibilities in the ZIP format. Of particular note, there are multiple incompatible encryption mechanisms that can be used with ZIP archives, and almost a dozen different compression algorithms with most implementations not supporting all of them (though you have to go out of your way to use something other than ‘Store’, ‘Deflate’, or ‘Deflate64’, and those are supported by pretty much everything). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/729966",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/458922/"
]
} |
730,130 | I have a process running that spits out a line of data every second. I want to output all the output to 'output.txt', but also output lines that have @ to a different file 'emails.txt'. I tried something like below,but the grep part doesn't work. myProgrram | pee 'tee output.raw 2>&1' 'grep @ > email.txt' Any ideas on how to improve this?Thanks. | Here is one solution: myprogram 2>&1 | tee output.txt | grep --line-buffered @ > emails.txt Explanation: stdin is "standard in" (file descriptor number 0 ). stdout is "standard out" (file descriptor number 1 ). stderr is "standard error" (file descriptor number 2 ). 2>&1 redirects anything that is sent to stderr to stdout . | pipes stdout on the left side to stdin on the right side. tee output.txt does two things at the same time: writes stdin to output.txt passes on stdin to stdout . | pipes stdout to stdin . grep --line-buffered @ picks out any line with @ from stdin and sends it to stdout . > emails.txt writes whatever is on stdin to emails.txt . The --line-buffered flag makes grep use line buffering on output which can cause a performance penalty but ensures that all output will be printed to the file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/730130",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/489284/"
]
} |
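An alternative sketch in the spirit of the original pee attempt, using bash process substitution instead of a second pipeline stage (bash-specific):
# tee duplicates the stream to output.txt and to the grep process; the copy on stdout is discarded
myprogram 2>&1 | tee output.txt >(grep --line-buffered @ > emails.txt) > /dev/null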
730,277 | Goal : I'm writing a very simple image viewer for framebuffer /dev/fb0 (something like fbi ). Current state: My software takes the pixel resolution from /sys/class/graphics/fb0/virtual_size (such as 1920,1080 ). And then (for each row) it will write 4 bytes (BGRA) for each of the 1920 row-pixels (total 4x1920=7680 bytes) to /dev/fb0 . This works perfectly fine on one of my laptops with a 1920x1080 resolution. More precisely: setting a pixel at y -row x -col => arr[y * 1920 * 4 + x * 4 + channel] where the value channel is 0,1,2,3 (for B , G , R , and A , respectively). Problem : When I try the same software on my old laptop ( /sys/.../virtual_size -> 1366,768 resolution), the image is not shown correctly (a bit skewed). So I played around with the pixel-width value and found out the value was 1376 (not 1366). Questions : Where do these 10 extra bytes come from? And, how can I get this value of 10 extra bytes on different machines (automatically, not by manual tuning)? Why do some machines need these extra 10 bytes, when some machines don't need them? | Programmatically, to retrieve information about a framebuffer you should use the FBIOGET_FSCREENINFO and FBIOGET_VSCREENINFO ioctls: #include <fcntl.h>#include <linux/fb.h>#include <stdio.h>#include <stdlib.h>#include <sys/ioctl.h>int main(int argc, char **argv) { struct fb_fix_screeninfo fix; struct fb_var_screeninfo var; int fb = open("/dev/fb0", O_RDWR); if (fb < 0) { perror("Opening fb0"); exit(1); } if (ioctl(fb, FBIOGET_FSCREENINFO, &fix) != 0) { perror("FSCREENINFO"); exit(1); } if (ioctl(fb, FBIOGET_VSCREENINFO, &var) != 0) { perror("VSCREENINFO"); exit(1); } printf("Line length: %u\n", fix.line_length); printf("Visible resolution: %ux%u\n", var.xres, var.yres); printf("Virtual resolution: %ux%u\n", var.xres_virtual, var.yres_virtual);} line_length gives you the line stride. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/730277",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/470206/"
]
} |
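If you prefer to stay with sysfs rather than the ioctl approach shown above, many kernels also expose the stride there; the attribute names other than virtual_size are an assumption to verify on your own machine:
cat /sys/class/graphics/fb0/virtual_size     # e.g. 1366,768
cat /sys/class/graphics/fb0/bits_per_pixel   # e.g. 32
cat /sys/class/graphics/fb0/stride           # bytes per line, e.g. 5504 = 1376 * 4
# pixel offset = y * stride + x * (bits_per_pixel / 8) + channel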
730,298 | want to create codebuild project, but i neeed build source to be codepipeline, not a specific codecommit branch, so that i want to force codebuild to get the source from the codepipeline source stage in order to use one codebuild project for multiple pipelines. | Programmatically, to retrieve information about a framebuffer you should use the FBIOGET_FSCREENINFO and FBIOGET_VSCREENINFO ioctl s : #include <fcntl.h>#include <linux/fb.h>#include <stdio.h>#include <stdlib.h>#include <sys/ioctl.h>int main(int argc, char **argv) { struct fb_fix_screeninfo fix; struct fb_var_screeninfo var; int fb = open("/dev/fb0", O_RDWR); if (fb < 0) { perror("Opening fb0"); exit(1); } if (ioctl(fb, FBIOGET_FSCREENINFO, &fix) != 0) { perror("FSCREENINFO"); exit(1); } if (ioctl(fb, FBIOGET_VSCREENINFO, &var) != 0) { perror("VSCREENINFO"); exit(1); } printf("Line length: %ld\n", fix.line_length); printf("Visible resolution: %ldx%ld\n", var.xres, var.yres); printf("Virtual resolution: %ldx%ld\n", var.xres_virtual, var.yres_virtual);} line_length gives you the line stride. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/730298",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/540297/"
]
} |
730,580 | Scripts typically start with a shebang such as #!/usr/bin/env bash , which specifies the shell to be used for execution. The execution behavior when the shebang is not present seems to be up to the calling shell . Either way, the script is run from a "new" shell which doesn't know about the variables and functions defined in the calling shell. Alternatively, scripts can also be source d into a shell, which by my understanding is equivalent to a copy-paste of the script's contents into the current shell. If any functions or variables are defined in the script, then they will remain in the calling shell after "execution", which may not be desirable. Is there something between these two options? Is it possible to execute a script as a subshell of the calling shell, such that we would have access to everything defined in the calling shell, yet would not modify it (unless perhaps with export and such commands)? Writing (source myscript.sh) seems to be doing what I am after; is this the right way to go about it? Is there an equivalent shebang that would yield the same behavior by calling ./myscript.sh instead? | (. ./myscript.sh) is the right way ( the standard form is . , not source , and . with no directory specified searches on the PATH ). Doing this using a shebang would require having a reliable way of finding the binary for the shell running the parent script, without relying on environment variables. On Linux, one might imagine a /proc/parent similar to /proc/self , pointing to the parent process; then one could write #! /proc/parent/exe - to have a shell script use whatever binary is running the parent. (This would work even if the original shell were deleted or replaced; /proc/self/exe can be used even if the binary is deleted, and /proc/parent/exe could be made to behave in the same way.) But blindly relying on the parent process’ binary to be able to run a script is itself unreliable (there are races if the parent dies while the child is starting). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/730580",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45354/"
]
} |
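A small demonstration of the behaviour described above: the sourced script sees the caller's variables, but anything it defines dies with the subshell (the file name is made up):
cat > myscript.sh <<'EOF'
echo "inside: caller_var=$caller_var"
foo=from_script
EOF
caller_var=hello
(. ./myscript.sh)                  # prints: inside: caller_var=hello
echo "after: foo='${foo:-unset}'"  # prints: after: foo='unset'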
730,626 | I'm on Fedora 37. I deleted my grub.cfg file and when I re-booted my laptop, I booted into the grub boot prompt. I was able to repair my system and reboot. I regenerated the grub.cfg file and re-installed grub. However, reading into this process has left me confused, because in some websites the advice is to regenerate the config file first, and then re-install grub, yet in other sites, the opposite advice is given. In what cases would one need to regenerate the config file after re-installing grub? Does the order of these two operations matter? | (. ./myscript.sh) is the right way ( the standard form is . , not source , and . with no directory specified searches on the PATH ). Doing this using a shebang would require having a reliable way of finding the binary for the shell running the parent script, without relying on environment variables. On Linux, one might imagine a /proc/parent similar to /proc/self , pointing to the parent process; then one could write #! /proc/parent/exe - to have a shell script use whatever binary is running the parent. (This would work even if the original shell were deleted or replaced; /proc/self/exe can be used even if the binary is deleted, and /proc/parent/exe could be made to behave in the same way.) But blindly relying on the parent process’ binary to be able to run a script is itself unreliable (there are races if the parent dies while the child is starting). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/730626",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/555479/"
]
} |
730,657 | I have Ubuntu 22.04 with Nvidia 515.86.01 (proprietary), along with the CUDA toolkit and cuDNN. nvidia/515.86.01, 5.15.0-53-generic, x86_64 After 3 weeks of holiday I am back and wanted to install a few tools (Evince for example). I was surprised to see that nothing regarding apt-get related to package installation or upgrade works due to a mysterious libnvidia-nscq : Reading package lists... Done Building dependency tree...Done Reading state information...Done You might want to run 'apt --fix-broken install' to correct these. The following packages have unmet dependencies: libnvidia-nscq-510 : Depends: libnvidia-nscq-515 but it is not installedE: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution). From what I can tell NSCQ is a NVswitch, which is something from Nvidia regarding servers and GPUs Version 510 is a transitional package Running dpkg -l *nvidia* returns the following (among others) iU libnvidia-nscq-510 515.86.01-0ubuntu0.22.04.1 amd64 Transitional package for libnvidia-nscq-515in libnvidia-nscq-515 <none> amd64 (no description available) I cannot figure out where this NSCQ dependency is coming from. In addition it is rather strange that 510 depends on 515 but perhaps I've misunderstood this line. apt-cache depends libnvidia-nscq-515libnvidia-nscq-515 Conflicts: <libnvidia-nscq> libnvidia-nscq-450 libnvidia-nscq-470 libnvidia-nscq-525 Replaces: <libnvidia-nscq> libnvidia-nscq-450 libnvidia-nscq-470 libnvidia-nscq-515 libnvidia-nscq-525 I can neither run apt-get upgrade , nor apt-get autoremove , nor apt-get install <package> . I did apt-get clean to remove the cached packages and then pulled fresh ones using apt-get update . If I am to remove the Nvidia drivers and CUDA toolkit, I am pretty sure it will break my machine learning setup (PyTorch and TensorFlow). These tools are very fiddly when it comes to which version of driver/CUDA/cuDNN is to be used. UPDATE: As per request in the comments: apt policy libnvidia-nscq-515libnvidia-nscq-515: Installed: (none) Candidate: 515.86.01-0ubuntu0.22.04.1 Version table: 515.86.01-0ubuntu0.22.04.1 500 500 http://de.archive.ubuntu.com/ubuntu jammy-updates/restricted amd64 Packages 500 http://security.ubuntu.com/ubuntu jammy-security/restricted amd64 Packages 515.48.07-0ubuntu0.22.04.2 500 500 https://ppa.launchpadcontent.net/canonical-kernel-team/ppa/ubuntu jammy/main amd64 Packages | (. ./myscript.sh) is the right way ( the standard form is . , not source , and . with no directory specified searches on the PATH ). Doing this using a shebang would require having a reliable way of finding the binary for the shell running the parent script, without relying on environment variables. On Linux, one might imagine a /proc/parent similar to /proc/self , pointing to the parent process; then one could write #! /proc/parent/exe - to have a shell script use whatever binary is running the parent. (This would work even if the original shell were deleted or replaced; /proc/self/exe can be used even if the binary is deleted, and /proc/parent/exe could be made to behave in the same way.) But blindly relying on the parent process’ binary to be able to run a script is itself unreliable (there are races if the parent dies while the child is starting). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/730657",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79008/"
]
} |
730,779 | I have been following a tutorial online to teach myself some basic command line stuff and accidentally I used the dd command (which I haven't gotten to and don't understand), but it seems to have done something to my computer. The image below shows exactly what I typed ( dd , followed by hitting enter, another d , a few more enters, a few more enters, before a Ctrl+C to stop execution as I had learned). Could I ask what this has done, if it is potentially harmful, and if it is, how I would remediate it? | dd with no arguments reads from its standard input and writes to its standard output; you haven’t done anything to your computer. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/730779",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/555656/"
]
} |
730,906 | Output of locate txt | head : /etc/brltty/brl-ba-all.txt/etc/brltty/brl-bd-all.txt/etc/brltty/brl-bl-18.txt/etc/brltty/brl-bl-40_m20_m40.txt/etc/brltty/brl-ec-all.txt/etc/brltty/brl-ec-spanish.txt/etc/brltty/brl-eu-all.txt/etc/brltty/brl-lb-all.txt/etc/brltty/brl-lt-all.txt/etc/brltty/brl-mb-all.txt Output of locate *.txt | head : /home/abc/capital.txt/home/abc/state.txt Why is there such a huge difference in the output?The second command only seems to check my home folder, but the first command seems to check many directories. Why is that the case? | locate txt locates all files (of any type including regular, symlink, directory, socket...) whose path contains txt ¹, so includes /foo/xtxty/bar , /foo/bar.txt , /foo/txt.bar , etc. locate *.txt is wrong because that * is left unquoted, so the *.txt would be expanded by the shell to all the filenames in the current directory matching that pattern first and the result passed to locate , so for instance, if the current directory contained a.txt and b.txt , that would end up running locate a.txt b.txt which locates paths that contain either a.txt and b.txt or depending on the locate implementation, both of them such as /foo/da.txtob.txt (yours looking like it's one in the first category and you likely ran the command from within /home/abc ). If there's no .txt file in the current directory, depending on the shell you get either an error or locate is called with *.txt literally as argument. To always call locate with a literal *.txt which is what you want here, you want to make sure that * is quoted for the shell, either with "..." , '...' or backslash if in a Bourne-like shell, single quotes being the best as they quote every character in Bourne-like shell and are the most portable among shells: locate '*.txt' Then, as that argument contains a wildcard ( * ), locate switches from a subtstring search to a pattern matching search (same ones as those recognised by the shell or find -name for instance) and returns all the file paths that match that pattern, that is, all the file paths that end in .txt , like /foo/.txt or /foo/bar.txt . locate is not a standard command, and there are many incompatible implementations around, but those simple behaviours above are common to most if not all. Most implementations support various options to do the matching differently. Check your own locate documentation with man locate , not some random pages on the internet as they may very well document a different implementation and/or version. ¹ and that did exist at the time the locate database was last updated | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/730906",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/547763/"
]
} |
730,919 | There is already the Nimbus ExaDrive 100TB SSD and the 200TB SSD will come soon . As you can read here ext4 supports up to 256 TB. It's only a matter of time hardware will reach this limit. Will they update ext4 or will there be ext5? What will happen? | 64-bit ext4 file systems can be up to 64ZiB in size with 4KiB blocks, and up to 1YiB in size with 64KiB blocks , no need for an ext5 to handle large volumes. 1 YiB, one yobibyte, is 1024 8 bytes. There are practical limits around 1 PiB and 1 EiB , but that’s still (slightly) larger than current SSDs, and the limits should be addressable within ext4, without requiring an ext5. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/730919",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/376049/"
]
} |
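For reference, a couple of commands to check or set the 64bit feature that the large limits above depend on (/dev/sdX1 is a placeholder device):
sudo tune2fs -l /dev/sdX1 | grep -i 'filesystem features'   # look for "64bit" in the list
sudo mkfs.ext4 -O 64bit /dev/sdX1                           # explicit; recent mke2fs versions enable it by default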
730,947 | uptime piped to sed fixes the first part but what about memory usage? top runs, at least by default, interactively. So: I need to watch used RAM excluding opportunistic caching(which gets dropped as soon as the memory is needed). And ask about both because I expect a single standard tool can do it, instead of two. Or - even better - something in /proc indicating the RAM part. | 64-bit ext4 file systems can be up to 64ZiB in size with 4KiB blocks, and up to 1YiB in size with 64KiB blocks , no need for an ext5 to handle large volumes. 1 YiB, one yobibyte, is 1024 8 bytes. There are practical limits around 1 PiB and 1 EiB , but that’s still (slightly) larger than current SSDs, and the limits should be addressable within ext4, without requiring an ext5. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/730947",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20506/"
]
} |
730,957 | I try to use a public key to connect to a remote server running centos7. I generated a key by ssh-keygen then copy the key to the server by ssh-copy-id [email protected] the authorized_keys is created on the remote machine, but the ssh login still requires the password. I try to login with triple verbose option ssh -v [email protected] and it give me something like: OpenSSH_7.6p1 Ubuntu-4ubuntu0.7, OpenSSL 1.0.2n 7 Dec 2017debug1: Reading configuration data /etc/ssh/ssh_configdebug1: /etc/ssh/ssh_config line 19: Applying options for *debug1: Connecting to chip02.phy.ncu.edu.tw [140.115.32.12] port 22.debug1: Connection established.debug1: identity file /home/longhoa/.ssh/id_rsa type 0debug1: key_load_public: No such file or directorydebug1: identity file /home/longhoa/.ssh/id_rsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/longhoa/.ssh/id_dsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/longhoa/.ssh/id_dsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/longhoa/.ssh/id_ecdsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/longhoa/.ssh/id_ecdsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/longhoa/.ssh/id_ed25519 type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/longhoa/.ssh/id_ed25519-cert type -1debug1: Local version string SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.7debug1: Remote protocol version 2.0, remote software version OpenSSH_7.4debug1: match: OpenSSH_7.4 pat OpenSSH* compat 0x04000000debug1: Authenticating to chip02.phy.ncu.edu.tw:22 as 'hoa'debug1: SSH2_MSG_KEXINIT sentdebug1: SSH2_MSG_KEXINIT receiveddebug1: kex: algorithm: curve25519-sha256debug1: kex: host key algorithm: ecdsa-sha2-nistp256debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: nonedebug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: nonedebug1: expecting SSH2_MSG_KEX_ECDH_REPLYdebug1: Server host key: ecdsa-sha2-nistp256 SHA256:ALKc8EF9HMXaCSs/aN4wsfpFN8Bh1W9twUxOTueP5Kkdebug1: Host 'chip02.phy.ncu.edu.tw' is known and matches the ECDSA host key.debug1: Found key in /home/longhoa/.ssh/known_hosts:1debug1: rekey after 134217728 blocksdebug1: SSH2_MSG_NEWKEYS sentdebug1: expecting SSH2_MSG_NEWKEYSdebug1: SSH2_MSG_NEWKEYS receiveddebug1: rekey after 134217728 blocksdebug1: SSH2_MSG_EXT_INFO receiveddebug1: kex_input_ext_info: server-sig-algs=<rsa-sha2-256,rsa-sha2-512>debug1: SSH2_MSG_SERVICE_ACCEPT receiveddebug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,passworddebug1: Next authentication method: gssapi-keyexdebug1: No valid Key exchange contextdebug1: Next authentication method: gssapi-with-micdebug1: Unspecified GSS failure. Minor code may provide more informationNo Kerberos credentials available (default cache: FILE:/tmp/krb5cc_1000)debug1: Unspecified GSS failure. 
Minor code may provide more informationNo Kerberos credentials available (default cache: FILE:/tmp/krb5cc_1000)debug1: Next authentication method: publickeydebug1: Offering public key: RSA SHA256:S79m96anBkvF16Rjihe80MYbcU1fZlfPxE5686k/vn4 /home/longhoa/.ssh/id_rsadebug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,passworddebug1: Trying private key: /home/longhoa/.ssh/id_dsadebug1: Trying private key: /home/longhoa/.ssh/id_ecdsadebug1: Trying private key: /home/longhoa/.ssh/id_ed25519debug1: Next authentication method: [email protected] password: debug1: Authentication succeeded (password) I searched on google, some mentioned setting the correct permission, I followed the instruction and ended up withthe key on my computer: -rw------- 1 longhoa longhoa 1.7K 23-01-08|14:14:40 id_rsa-rw-r--r-- 1 longhoa longhoa 399 23-01-08|14:14:40 id_rsa.pub the permission on the remote server: drwx------. 2 hoa zh 4.0K 23-01-08|15:10 /home/hoa/.ssh-rw-------. 1 hoa zh 399 23-01-08|14:23 /home/hoa/.ssh/authorized_keysdr-xr-xr-x. 29 root root 4096 22-12-27|17:26 /drwxrwxrwx. 41 root root 4096 22-11-24|18:38 /homedrwx------. 58 hoa zh 12288 23-01-11|00:47 /home/hoa/ There are other answer mention SELinux and debuging from the server but I don't have root access to that server, so I can't not do anything. So how do I make it work? Thank you very much. Update 1 I tried @roaima suggestion. ssh -nvv -o NumberOfPasswordPrompts=0 [email protected] 2>&1 | grep "debug2: host key" which returns: debug2: host key algorithms: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsadebug2: host key algorithms: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 I also tried id_dsa and id_ed25519, but none seems to work. Update2 @roaima and @telcoM pointed out that the remote host was not set up correctly. I will update the status after I talk to the admin. | 64-bit ext4 file systems can be up to 64ZiB in size with 4KiB blocks, and up to 1YiB in size with 64KiB blocks , no need for an ext5 to handle large volumes. 1 YiB, one yobibyte, is 1024 8 bytes. There are practical limits around 1 PiB and 1 EiB , but that’s still (slightly) larger than current SSDs, and the limits should be addressable within ext4, without requiring an ext5. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/730957",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/555836/"
]
} |
731,072 | I am facing a very confusing behaviour of ls that I cannot search for. It displays that there are contents in a directory, but only when I'm in the directory where these were created from. Let me show you: ciprian Documents $ pwd/Users/ciprian/Documentsciprian Documents $ ls ../Downloads/rss22/22rss-USB/ciprian Documents $ ls ../Downloads/rss22/22rss-USB/HTML/ciprian Documents $ cd ../Downloads/rss22ciprian rss22 $ lsciprian rss22 $ ls 22rss-USB/gls: cannot access '22rss-USB/': No such file or directory After I cd ed to ../Downloads/rss22 , its contents are displayed as empty. It is also shown empty if I cd ~/Desktop and then I ls ../Downloads/rss22/ , like the first case here. To me, this indicates that there might be a folder named ../Downloads/rss22 inside Documents . But I cannot figure out how to display it. ls -a ~/Documents does not show anything related to these folders. What is going on? The files were created by trying a partial extraction from an archive: unzip 22rss-USB.zip "22rss-USB/HTML/**/*" -d ../Downloads/rss22/ For reference, I am on macOS , though I do not think this is relevant (it's a Unix, right?). I am using Bash 5.1.16 (changed from default zsh). Output of type pwd : pwd is a shell builtin. It turns out that if I do cd -P ../Downloads/igarss22/ then it shows the contents that I expect. Where can I see more about this? man cd doesn't show anything about -P . Now, after cd -P ../Downloads/igarss22/ ciprian Documents $ cd -P ../Downloads/igarss22ciprian igarss22 $ pwd/Users/ciprian/Library/CloudStorage/OneDrive/Downloads/igarss22 Right. So I forgot this ; my ~/Documents is a symlink to a folder under my OneDrive: $ ll ~/ | grep Doclrwxr-xr-x 1 ciprian 38 May 19 2022 Documents -> /Users/ciprian/OneDrive/Documents Which, due to some magic and changes in macOS v12 (Monterey), actually lives under /Users/ciprian/Library/CloudStorage/OneDrive . I'm still not sure what exactly is going on | The difference is physical vs. logical treatment of .. : You apparently have two separate Downloads directories: one is /Users/ciprian/Downloads the other is /Users/ciprian/Library/CloudStorage/OneDrive/Downloads Some shells, including bash and all POSIX shells, have the option of treating .. in the cd command as "take the current path exactly as the user expressed it, cut off the right-most element and change to the resulting path". Across symbolic links this will work as a logical "go back to where we came from", returning from /Users/ciprian/Library/CloudStorage/OneDrive/Documents (symlinked to, and referred by user as ~/Documents ) to /Users/ciprian (so the new path at this point is just ~ ) and then from there to /Users/ciprian/Downloads . If you use cd -L , this is explicitly the behavior you'll get. Other programs typically won't do this: they will instead follow the physical paths, so for them, changing directories to ../Downloads starting from /Users/ciprian/Library/CloudStorage/OneDrive/Documents will always mean changing to /Users/ciprian/Library/CloudStorage/OneDrive/Downloads . This is also what you'll get with cd -P . If you don't like this (mis)feature of the shell, there is probably a shell-specific way to disable it. With the bash shell, adding set -P or set -o physical to your ~/.bashrc or similar shell start-up script would make bash behave like all cd commands had the -P option unless the -L option is explicitly used. Note . The documentation of -P is in man bash (section SHELL BUILTIN COMMANDS) because cd is an internal command of Bash. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/731072",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68186/"
]
} |
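A self-contained reconstruction of the situation in the question (run it in an empty scratch directory; the directory names are made up):
mkdir -p home/Downloads onedrive/Documents onedrive/Downloads
ln -s ../onedrive/Documents home/Documents        # like ~/Documents -> OneDrive
cd home/Documents
cd -L ../Downloads && pwd -P                      # logical ..  -> .../home/Downloads
cd - >/dev/null
cd -P ../Downloads && pwd -P                      # physical .. -> .../onedrive/Downloads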
731,205 | I have a new desktop computer with Intel i7-12700 32GB RAM. I am doing build code stuff, I use the sensors command to check CPU temperature, and I found most of the cores are @ 100C. Is that normal? Will CPU hardware itself control the frequency to fit the temperature? update I checked dmesg and found many logs as below: mce: CPUxx: Package temperature above threshold, cpu clock throttled It looks like the CPU control itself not higher than 100C. | Whether it’s normal for your system depends on a number of factors; however 100°C is on the high end for a desktop system and you should try to address that. Typically, that would involve improving the system’s cooling: the overall airflow in the case itself (assuming your CPU isn’t water-cooled), the CPU cooler and its interface to the CPU, etc. In any case, your CPU won’t cook itself: it knows its limits, and it will throttle itself (reduce its frequency) if it needs to cool down. If that happens, you’ll see corresponding messages in the kernel logs ( sudo dmesg ). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/731205",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/368823/"
]
} |
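A few commands to keep an eye on what is described above: current temperatures, current clocks, and whether the kernel has already throttled (the dmesg pattern matches the message quoted in the question):
watch -n 2 sensors                         # live temperature readout
grep 'cpu MHz' /proc/cpuinfo               # current per-core frequencies
sudo dmesg | grep -ic 'clock throttled'    # count of throttle events seen so far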
731,342 | Need to add a white space or line break before the next line,if the next line starts with blank it has to ignore. My input file is like below #qwert TRWQQA 01 40 /* this is the sample test */ /* STILL COMING UP... AFTER UP ENQ GOES AWAY */ /* FEB AND 30TH */#TFCDF DWERTY 01 40 (FEB AND 30TH) /* AND (qwert-01 01 #qwert OR */ /* (START SCD RTGFG)) XDYGH #qwert */#HYUOIK YUPOIH 01 40 FEB AND 30TH /* AND (qwert-01 01 #qwert OR */ /* (START SCD qwert)) SDFGH #qwert */#NHUYUOI GHTYHD 01 40 (FEB AND 30TH) AND (qwert-01 01 #qwert OR (START SCD SDFRE))#KJYY ERTYUB 01 40 (FEB AND 30TH) AND (qwert-01 03 #qwert OR (START SCD DERF))RTYUH POMHY 01 40 ERTYUJ RTYUJQWERG PIJHGV 01 40 MNBV LKJH Expected output. #qwert TRWQQA 01 40 /* this is the sample test */ /* STILL COMING UP... AFTER UP ENQ GOES AWAY */ /* FEB AND 30TH */ #TFCDF DWERTY 01 40 (FEB AND 30TH) /* AND (qwert-01 01 #qwert OR */ /* (START SCD RTGFG)) XDYGH #qwert */ #HYUOIK YUPOIH 01 40 FEB AND 30TH /* AND (qwert-01 01 #qwert OR */ /* (START SCD qwert)) SDFGH #qwert */ #NHUYUOI GHTYHD 01 40 (FEB AND 30TH) AND (qwert-01 01 #qwert OR (START SCD SDFRE)) #KJYY ERTYUB 01 40 (FEB AND 30TH) AND (qwert-01 03 #qwert OR (START SCD DERF)) RTYUH POMHY 01 40 ERTYUJ RTYUJQWERG PIJHGV 01 40 MNBV LKJH **strong text** | Using awk to add a line before each line that does not start with a blank (but avoid doing that for the first line): awk 'NR != 1 && /^[^[:blank:]]/ { print "" }; 1' file The command uses two tests. If both tests are true , a blank line is inserted. The NR != 1 test is true if the current line is not the first line. This ensures that we don't add a blank line at the start of the output if the first line starts with a non-blank. The /^[^[:blank:]]/ test is true if the current line starts with a non-blank character. Here, a "blank character" means space or tab, a set that may include additional blank characters depending on your locale. The trailing 1 causes the current line to be outputted unconditionally (it's short-hand for { print } ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/731342",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/431343/"
]
} |
731,970 | I was running a pipe command with one section being the following: sort -t $'\t' -T . -k1,1g When I was monitoring htop I saw this instead: What is the reason behind this? Does this mean my command is wrong or is there something wrong with htop ? | There’s nothing wrong with your command, htop replaces control characters with question marks : (((unsigned char)data_c[j]) >= 32 ? ((unsigned char)data_c[j]) : '?') (characters with values less than 32 are control characters). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/731970",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/304576/"
]
} |
732,793 | I'm using the -e flag. Usage: #!/bin/bash -e Explained: -e Exit immediately if a simple command (see SHELL GRAMMAR above) exits with a non-zero status When a command in the script fails, the script exits and doesn't continue to execute the rest of the commands, which is exactly what I want. But, the failure contains just the info that the failed command chooses to disclose. Sometimes the failed command is very complicated, like a curl with many headers. How do I print the failed command? I mean, immediately after it failed. I know I can use the -x bash flag but it will print all of the executed commands. I'd like to see just the failed one. | You can use a trap to identify the failing command (and its line number). Here's an example #!/bin/bash -e# This script can be used to fail intentionally and exit# Declare an error handlertrapERR() { ss=$? bc="$BASH_COMMAND" ln="$BASH_LINENO" echo ">> Failing command is '$bc' on line $ln and status is $ss <<" >&2 exit $ss}# Arrange to call trapERR when an error is raisedtrap trapERR ERR # Start heredateecho 'hello, world'#sleep rt # Remove the leading comment to echo 'all done'exit 0 Successful completion: 23 Jan 2023 15:58:03hello, worldall done Remove the comment in front of the sleep so that you introduce an error 23 Jan 2023 15:58:34hello, worldsleep: invalid time interval ‘rt’Try 'sleep --help' for more information.>> Failing command is 'sleep rt' on line 12 and status is 1 << | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/732793",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6272/"
]
} |
733,013 | I am confused by the terminology used to describe Linux signal delivery. Most texts say things like "the signal is delivered to the process" or "the signal is delivered to the thread". It is my understanding that a signal is "delivered" to a signal handler, which resides in a process, when the kernel calls that handler. The process itself is running asynchronously, and this "delivery" process is akin to a CPU calling an interrupt handler. The interrupt handler (signal handler) is not the process thread, nor any thread running under that process, correct? It is a separate thread of its own started by the kernel. So the signal is not delivered to a thread or a process, but is delivered to a signal handler residing in the process and not necessarily associated with any specific thread. If this is not correct, please tell me, for example, the association between the signal handler and a pthread that justifies the terminology of "signal delivered to a pthread". | A signal handler is just a function within a given process' address space. This function is executed whenever the signal is received. There's nothing special about it (although there are certain actions that should not be performed within a signal handler), and it does not reside in a special thread. While signals are often described as being software interrupts, they aren't actually asynchronous. * When a signal is sent to a process, the kernel adds it to the process' pending signal set. It doesn't cause anything to happen immediately. The signal will only actually do anything at the next context switch back to userspace (whether that's a syscall returning or the scheduler switching to that process). If a process were to, for whatever reason, never switch from kernel to user, the signal would be kept in the pending signal set and never acted upon. † When a process establishes a signal handler, it gives the kernel an address to a function. When the process is to receive a signal, the next context switch from kernelspace to userspace will not restore the execution context from before the process entered the kernel (usually, the context is saved when entering the kernel and restored upon exiting it). Instead, it will "restore" execution at the location of the signal handler. When the signal handler returns, it executes code which calls rt_sigreturn() , which restores the real execution context, allowing the process to continue where it left off. When a process has multiple threads (i.e. there are multiple processes in a given thread group), the signal is sent to one of the threads in the thread group at random. This is because threads typically share memory and many other resources and run the same code. * While they aren't asynchronous from the perspective of hardware, they are effectively asynchronous as far as userspace applications are concerned. This is why they are sometimes called software interrupts. † When I refer to context switches, I mean privilege or process switches (i.e. both simple mode transitions between kernel and user within the same process and "true" context switches between processes or kernel threads). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/733013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/558025/"
]
} |
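A tiny shell demonstration of the point above: the handler is ordinary code run by the very same process that receives the signal, not by a separate thread started by the kernel:
trap 'echo "handler ran inside PID $$"' USR1
echo "shell PID is $$"
kill -USR1 $$        # signal ourselves; the trap runs in this same shell process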
733,021 | I am getting confused about setting local variables in bash functions. It seems that using local dgt local ltr local braces local da could be safer than using local dgt ltr braces da I am worried about the possibility of a variable not getting defined as local, or not having the value set. Could that happen? For instance, consider local foo="$(mycmd)" The exit status of the command is overridden by the exit status of the creation of the local variable. Then the correct code would be local foofoo=$(mycmd) | A signal handler is just a function within a given process' address space. This function is executed whenever the signal is received. There's nothing special about it (although there are certain actions that should not be performed within a signal handler), and it does not reside in a special thread. While signals are often described as being software interrupts, they aren't actually asynchronous. * When a signal is sent to a process, the kernel adds it to the process' pending signal set. It doesn't cause anything to happen immediately. The signal will only actually do anything at the next context switch back to userspace (whether that's a syscall returning or the scheduler switching to that process). If a process were to, for whatever reason, never switch from kernel to user, the signal would be kept in the pending signal set and never acted upon. † When a process establishes a signal handler, it gives the kernel an address to a function. When the process is to receive a signal, the next context switch from kernelspace to userspace will not restore the execution context from before the process entered the kernel (usually, the context is saved when entering the kernel and restored upon exiting it). Instead, it will "restore" execution at the location of the signal handler. When the signal handler returns, it executes code which calls rt_sigreturn() , which restores the real execution context, allowing the process to continue where it left off. When a process has multiple threads (i.e. there are multiple processes in a given thread group), the signal is sent to one of the threads in the thread group at random. This is because threads typically share memory and many other resources and run the same code. * While they aren't asynchronous from the perspective of hardware, they are effectively asynchronous as far as userspace applications are concerned. This is why they are sometimes called software interrupts. † When I refer to context switches, I mean privilege or process switches (i.e. both simple mode transitions between kernel and user within the same process and "true" context switches between processes or kernel threads). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/733021",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/540882/"
]
} |
733,139 | For POSIX shell utilities which take one or more files as arguments, does POSIX guarantee that a symbolic link can be passed instead (and that the utility will resolve it)? Is it documented somewhere? | POSIX does not require that all the utilities it specifies resolve any symbolic link provided as an argument expecting a file name or path. It does however document in detail how symbolic links should be handled (look for the “Symbolic Link” entry). As a general rule, a path ending in a symlink pointing to a directory must be followed if the path component is suffixed with / . Then There are four domains for which default symbolic link policy is established in a system. In almost all cases, there are utility options that override this default behavior. The four domains are as follows: Symbolic links specified to system calls that take pathname arguments Symbolic links specified as command line pathname arguments to utilities that are not performing a traversal of a file hierarchy Symbolic links referencing files not of type directory, specified to utilities that are performing a traversal of a file hierarchy Symbolic links referencing files of type directory, specified to utilities that are performing a traversal of a file hierarchy System call behaviour varies for historical reasons. Utilities not traversing a file system follow symlinks, with some exceptions. This covers most utilities: The general rule is that the utilities in this category follow symbolic links named as arguments. Utilities traversing a file system handle symlinks to files other than directories without following them, where that makes sense, and follow them otherwise. For utilities traversing a file system, POSIX doesn’t mandate a specific behaviour regarding symlinks to directories, but it recommends that the utilities not follow symlinks (with detailed reasoning). There are exceptions to the above, and variations depending on whether the system assigns certain attributes to symlinks ( e.g. whether symlinks have permissions independent of the file they link to). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/733139",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/456507/"
]
} |
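A quick illustration of the default policies summarised above for utilities that are not traversing a file hierarchy:
touch realfile && ln -s realfile mylink
cat mylink      # follows the link: prints realfile's contents
ls -l mylink    # describes the link itself, not the target
ls -lL mylink   # -L asks ls to dereference the link
rm mylink       # removes only the link; realfile is untouched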
733,306 | After SFTP logins were no longer working with error 127 we found that the SSH connection created some output by a script installed in /etc/profile.d/. Previous versions of sshd had an option available called "UseLogin" which by default was set to "no", thus did not execute stuff from profile. Now that option no longer seems to exist and the default behavior seems to have changed. In sshd_config I now replaced Subsystem sftp /usr/lib/ssh/sftp-server by Subsystem sftp internal-sftp This seems to work better (SFTP is working). But as we all know: If you don't understand why it works, it isn't fixed. Can someone explain? And maybe suggest a better "fix"? Update SLES15.4 (openssh-server-8.4p1) | SFTP uses SSH as transport. Any SFTP client expects the SSH server to establish communication with an SFTP server (like sftp-server ). At least with OpenSSH, when an SSH server is told to run something, it uses the target user's shell for this. One can define a subsystem (it may be a custom subsystem) by adding Subsystem … entry in sshd_config . Even then the supplied command will be executed in the user's shell. This happened to you with the standard sftp subsystem specified as /usr/lib/ssh/sftp-server . If the shell (or anything really) prints some "garbage" when your SFTP client expects to talk to an SFTP server, it's outside of the SFTP protocol and thus the communication breaks. As long as the SSH server uses the user's shell, no option can totally reliably make everything work. This is because in general : The user's shell may be anything. Even sane shells source some files. Some shells may be told not to, but there is no portable option for this and there is no way for a client to tell the SSH server to use a custom option when invoking a shell. The sourced files may print something; or they may run something that prints something. The only way to avoid the user's shell is to use a subsystem handled internally by the SSH server. AFAIK for now the only internal subsystem in sshd from OpenSSH is internal-sftp . internal-sftp solved your problem because it does not rely on a shell. An alternative fix is to make sure nothing but SFTP server uses the standard streams provided by the SSH server. This solution includes silencing the user's shell and anything the shell starts before it runs the actual requested command like /usr/lib/ssh/sftp-server . A person with root access on the server or the user himself/herself may easily break this. Some interesting cases, for comparison: An unfortunate edit to .bashrc locks the user out . The only way to fix on their own via SSH is with internal-sftp . If internal-sftp hadn't been already enabled, the user would need to use some protocol other than SSH to fix things (e.g. contacting an administrator is such protocol). A user wants to run commands without any interference from (possibly goofy) remote shells . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/733306",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40285/"
]
} |
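A sketch of the "better fix" direction: keep internal-sftp and additionally pin SFTP-only accounts to it, so no login shell or profile script can ever interfere. The group name sftponly is an assumption; always validate with sshd -t before reloading:
sudo tee -a /etc/ssh/sshd_config <<'EOF'
Match Group sftponly
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
EOF
sudo sshd -t && sudo systemctl reload sshd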
733,476 | When I try to install a package, say git, using: dnf install -y git in the very same virtual machine image, dnf/yum sometimes simply installs the package, but sometimes (usually if some time has passed since the latest dnf/yum usage) it starts to update repositories like: AlmaLinux 9 - AppStream 4.7 MB/s | 8.0 MB 00:01...and so on for all configured and enabled repositories. How does dnf/yum decide when to update and when not to? I am installing the very same package on the very same image... so the Linux and package manager state is supposedly also the same. | For those without a Red Hat account, the answer can be found here , under the metadata_expire parameter. Also, man yum.conf will give useful information. On my Rocky Linux 9.1 system this parameter is set inside almost every " rocky " repo contained in the /etc/yum.repos.d/rocky*.repo files. You can see it under one of the repos: [baseos]name=Rocky Linux $releasever - BaseOSmirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=BaseOS-$releasever$rltype#baseurl=http://dl.rockylinux.org/$contentdir/$releasever/BaseOS/$basearch/os/gpgcheck=1enabled=1countme=1metadata_expire=6hgpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Rocky-9 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/733476",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/491418/"
]
} |
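A few concrete knobs for the behaviour explained above, all standard dnf usage:
sudo dnf clean expire-cache && sudo dnf makecache   # force a metadata refresh right now
sudo dnf -C install -y git                          # -C/--cacheonly: never refresh, use local metadata only
# or raise the default globally in /etc/dnf/dnf.conf under [main], e.g.:  metadata_expire=24h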
733,529 | I have file1.txt 0.12340.3330.220.1 and file2.txt 210100100 I'm looking for a command that multiplies the lines with each other and outputs as many decimal places as file1.txt had at this position/line. output.txt 0.24683.33022.0010.0 so far i use this command: paste file1.txt file2.txt | awk '{printf "%.4f", ($1 * $2)}' > output.txt But of course "%.4f" always produces 4 decimal places... | If your awk implementation provides access to the underlying C library's dynamic width and precision features, and the numbers in file1.txt can never include mantissa-exponent representations like 1.23e4 , then you could do something like $ paste file1.txt file2.txt | awk '{split($1,a,/[.]/); printf "%.*f\n", length(a[2]), ($1 * $2)}'0.24683.33022.0010.0 or perhaps a little more robustly paste file1.txt file2.txt | awk '{l = (match($1,/[.][[:digit:]]+/) > 0) ? RLENGTH-1 : 0; printf "%.*f\n", l, ($1 * $2)}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/733529",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/558210/"
]
} |
733,999 | I am trying to understand why wc and stat report different things for /proc/[pid]/cmdline . wc says my shell's cmdline file is 6 bytes in size: $ wc --bytes /proc/$$/cmdline6 /proc/10425/cmdline stat says the file is 0 bytes in size: $ stat --format='%s' /proc/$$/cmdline0 file agrees with stat : $ file /proc/$$/cmdline/proc/10425/cmdline: empty cat gives this output: $ cat -vE /proc/$$/cmdline-bash^@ All of this is on Linux rather than on any other *nix OS. Do the stat and wc programs have a different algorithm for computing the number of bytes in a file? | The files under /proc are not your usual regular files, but virtual things created on the fly by the kernel. For most (all?) of them, the system doesn't bother calculating a size beforehand; a program reading one just gets whatever data there is to get. The difference between what your wc does and what stat and e.g. ls do, is that here, wc opens the file, reads it, and counts what it gets, while stat and ls use the stat() system call to ask the system about the metadata of the file, including the size (but also getting e.g. the owner and permissions). In the case of virtual files, these don't give the same result. If you run e.g. ls -l /proc/$$/ , you'll see a lot of files of size 0, even though most of them can be read for data. Device nodes like /dev/sda are similar, though in their case ls doesn't even bother to show the size, but shows the device numbers instead. With file in particular, you can use file -s to ask it to just read the data and not care whether it's a special file. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/733999",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/392334/"
]
} |
734,124 | On AlmaLinux during setup there is an option to choose a Security Profile. I run live and public websites on this server, so security is good, but I don't know what these are and how it could benefit me. Should I choose one of these, and if so, which one? Or, should I ignore this, is it only for special use cases? | These are OpenSCAP profiles to ensure compliance with various government security standards. These are mostly used in situations when you are required to adhere to some specific security policy. So you'd usually choose a security policy if you are working for a governmental organization or your company is a government contractor or something similar. The installer basically checks the policy rules and makes changes (or ask you to make changes) to follow the policy. The rules can define partition layout (for example force encryption), specify what packages should be installed (or should not), what services needs to be enabled and how should they be configured (for example SSH with root login disabled) etc. The rules are public, if you are interested, you can read for example the first one from your screenshot, the French ANSSI-BP-028 . You can read more about this in the RHEL installer guide . The rules generally can have some useful security "tips & tricks" but I wouldn't bother using them on a private machine, using some general guides for server hardening is probably better than picking a specific government policy. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/734124",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/557788/"
]
} |
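If you later want to see what a given profile would actually check without reinstalling, the openscap-scanner and scap-security-guide packages let you inspect and evaluate it on a running system. The data-stream path below is an assumption; check what your scap-security-guide package actually installs, and take the profile id from the oscap info output:
oscap info /usr/share/xml/scap/ssg/content/ssg-almalinux9-ds.xml        # list available profiles
sudo oscap xccdf eval --profile <profile-id-from-oscap-info> --report report.html /usr/share/xml/scap/ssg/content/ssg-almalinux9-ds.xml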
734,150 | I have been away from scripting for years, so I was wondering if someone can help in the below. I am migrating from Google Photos to Amazon Photos (about 40k photos). Here is an example of what I downloaded from Google: IMG-20180601-WA0004-modifié.jpgIMG-20180601-WA0004.jpgIMG-20180601-WA0004.jpg.jsonIMG-20180601-WA0005-modifié.jpgIMG-20180601-WA0005.jpgIMG-20180601-WA0005.jpg.jsonIMG-20180601-WA0008-modifié.jpgIMG-20180601-WA0008.jpgIMG-20180601-WA0008.jpg.jsonIMG-20180601-WA0009-modifié.jpgIMG-20180601-WA0009.jpgIMG-20180601-WA0009.jpg.jsonIMG-20180601-WA0010-modifié.jpgIMG-20180601-WA0010.jpgIMG-20180601-WA0010.jpg.jsonIMG-20180601-WA0011-modifié.jpgIMG-20180601-WA0011.jpgIMG-20180601-WA0011.jpg.jsonIMG-20180601-WA0013-modifié.jpgIMG-20180601-WA0013.jpgIMG-20180601-WA0013.jpg.jsonIMG-20180601-WA0014-modifié.jpgIMG-20180601-WA0014.jpgIMG-20180601-WA0014.jpg.jsonIMG-20180601-WA0015-modifié.jpgIMG-20180601-WA0015.jpgIMG-20180601-WA0015.jpg.jsonIMG-20180601-WA0020.jpgIMG-20180601-WA0020.jpg.jsonIMG-20180601-WA0036-modifié.jpgIMG-20180601-WA0036.jpgIMG-20180601-WA0036.jpg.jsonVID-20180601-WA0012.mp4.jsonVID_20180601_195857.mp4.jsonmétadonnées.json I want the following: Search across all the directories within the main folder of the downloaded photos, delete all the video files such as mov, mp4, mpeg, mpg, avi, m4v and wmv (note that sometimes the file extension is in caps) for the photos, you will notice that most of the file names are duplicated (one without the word "modifié" and the other with). Note that not all of them have a "modifié" version, e.g. IMG-20180601-WA0020.jpg. I would like to delete all photos where the filename does not contain the word "modifié" except if the original file does not have a "modifié" version then keep it (IMG-20180601-WA0020.jpg is an example to keep) . I prefer to keep the json files as is if they are not related to a video file otherwise delete them. | These are OpenSCAP profiles to ensure compliance with various government security standards. These are mostly used in situations when you are required to adhere to some specific security policy. So you'd usually choose a security policy if you are working for a governmental organization or your company is a government contractor or something similar. The installer basically checks the policy rules and makes changes (or ask you to make changes) to follow the policy. The rules can define partition layout (for example force encryption), specify what packages should be installed (or should not), what services needs to be enabled and how should they be configured (for example SSH with root login disabled) etc. The rules are public, if you are interested, you can read for example the first one from your screenshot, the French ANSSI-BP-028 . You can read more about this in the RHEL installer guide . The rules generally can have some useful security "tips & tricks" but I wouldn't bother using them on a private machine, using some general guides for server hardening is probably better than picking a specific government policy. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/734150",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/559200/"
]
} |
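As a rough illustration of the clean-up described in the question above, the sketch below uses GNU find and bash. The file-name patterns are taken from the sample listing; everything else (the starting directory, the exact extension list) is an assumption, so run it with -print only, or on a copy, before letting it delete anything.

    # 1. Delete video files and their .json sidecars, matching extensions case-insensitively
    find . -type f \( -iname '*.mov' -o -iname '*.mp4' -o -iname '*.mpeg' -o -iname '*.mpg' \
                      -o -iname '*.avi' -o -iname '*.m4v' -o -iname '*.wmv' \
                      -o -iname '*.mov.json' -o -iname '*.mp4.json' -o -iname '*.mpeg.json' \
                      -o -iname '*.mpg.json' -o -iname '*.avi.json' -o -iname '*.m4v.json' \
                      -o -iname '*.wmv.json' \) -print -delete

    # 2. Wherever a "-modifié" version exists, delete the matching original photo
    find . -type f -name '*-modifié.*' -print0 |
    while IFS= read -r -d '' edited; do
        original="${edited/-modifié/}"    # IMG-...-WA0004-modifié.jpg -> IMG-...-WA0004.jpg
        [ -e "$original" ] && rm -v -- "$original"
    done

Photos without a -modifié counterpart (such as IMG-20180601-WA0020.jpg) are never touched by step 2, and the .json files belonging to photos are left alone.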
734,183 | I know that | is the logical "OR" operator inside a RegExp expression. But what is the equivalent "AND" operator (again, inside a RegExp)? Note: This is not about the multiple expressions' operator of "AND", which is just && . For example, something like /A&B/ to match both A and B . | There is no such operator in any of the regular expression flavors I am familiar with. If you want to match inputs that have both A and B you can write A.*B or B.*A , both of which require them in that particular order; or combine both expressions to accept either order with A.*B|B.*A . Alternatively, do two separate matches. For example, in awk : awk '/A/ && /B/' file or manually with two grep instances: grep A file | grep B You don't really need an AND operator in regular expressions. The idea of a regex is that it describes a string. By definition, you put in the regex the thing you are trying to match. So an OR is needed to allow matching either A or B, but the AND is basically built in to the regular expression: anything you write in a regex needs to be matched, so everything is effectively joined by AND already, making a dedicated AND operator largely pointless. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/734183",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/263508/"
]
} |
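A quick way to convince yourself of the A.*B|B.*A idiom and the two alternatives from the answer above is to run them against a few sample lines; this assumes a grep with -E support and any awk.

    printf '%s\n' 'A then B' 'only B' 'B before A' 'neither' > sample.txt

    grep -E 'A.*B|B.*A' sample.txt    # prints "A then B" and "B before A"
    grep A sample.txt | grep B        # same result, order-independent by construction
    awk '/A/ && /B/' sample.txt       # same result again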
734,449 | We need to download a 182 GB compressed file (uncompressed as a TSV ). However, we only need the first five columns of the file, which equates to approximately 1 GB. Is there some fancy shell magic that can be done to download a subset of a file? Downloading the whole file only to delete 99% of it really kills our server's storage. What is being downloaded: gnomad.genomes.v3.1.2.sites.chr1.vcf.bgz from gnomad.broadinstitute.org . Any alternatives solutions are welcome. I'm interested in solutions that work for non-compressed files as well. The gist is, how do I avoid downloading a massive file when I only need a subset of that file? | You can download the file, filter it, and then write the result to your local disk curl https://storage.googleapis.com/gcp-public-data--gnomad/release/3.1.2/vcf/genomes/gnomad.genomes.v3.1.2.sites.chr1.vcf.bgz | bgzip -d | cut -f1-5 This still requires you to download the full file but only the filtered amount gets written to disk. On Debian the bgzip command is provided by the tabix package. But if that's not installed you can also use gzip 's zcat to read bgzip -compressed files ( curl … | zcat | cut -f1-5 ). I've had some queries about the amount of data storage this pipeline will require. Here is a real run. Notice that on this system I've only got 2GB of available storage in total; nowhere near the 182GB required to download and save the file even compressed: # How much disk space available in my current directory?df -h .Filesystem Size Used Avail Use% Mounted on/dev/root 7.9G 5.6G 2.0G 75% /# Download and filter the file, saving only the resultcurl https://storage.googleapis.com/gcp-public-data--gnomad/release/3.1.2/vcf/genomes/gnomad.genomes.v3.1.2.sites.chr1.vcf.bgz |bgzip -d |cut -f1-5 > bigfile# What did we get, and how much disk space remains?ls -lh bigfile-rw-r--r-- 1 roaima roaima 1.7G Feb 7 05:09 bigfiledf -h .Filesystem Size Used Avail Use% Mounted on/dev/root 7.9G 7.2G 347M 96% / Interestingly, I note that the file is not strictly TSV (tab-separated values) format. Of its 59,160,934 lines, 942 do not contain tab separated data. file bigfilebigfile: Variant Call Format (VCF) version 4.2, ASCII text, with very long lines | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/734449",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/467640/"
]
} |
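If even the roughly 1.7 GB uncompressed result is more than you want to keep, the filtered columns can be recompressed on the fly; a small variation on the answer's pipeline, assuming the same URL and gzip on the receiving host:

    curl https://storage.googleapis.com/gcp-public-data--gnomad/release/3.1.2/vcf/genomes/gnomad.genomes.v3.1.2.sites.chr1.vcf.bgz |
        bgzip -d |
        cut -f1-5 |
        gzip > first_five_columns.tsv.gz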
734,543 | It's well known that redirecting standard output and error to the same file with cmd >out_err.txt 2>out_err.txt can lead to loss of data, as per the example below: work:/tmp$ touch file.txtwork:/tmp$ ls another_file.txtls: cannot access 'another_file.txt': No such file or directory The above is the setup code for the example. An empty file file.txt exists and another_file.txt is not a thing. In the code below, I naively redirect to out_err.txt both the output and the error of listing these files. work:/tmp$ ls file.txt another_file.txt >out_err.txt 2>out_err.txtwork:/tmp$ cat out_err.txt file.txtt access 'another_file.txt': No such file or directory And we see that we lost a few characters in the error stream. However, using >> works in the sense that replicating the example would keep the whole output and the whole error. Why and how does cmd >>out_err.txt 2>>out_err.txt work? | Not sure it's that well known, but it happens because, done like that, the two file handles are completely separate, and have independent read/write positions. Hence they can overwrite each other. (They correspond to two distinct open file descriptions , to use the technical term, which is sadly somewhat easy to confuse with the term "file descriptor".) This only happens with foo > out.txt 2>out.txt , not with foo > out.txt 2>&1 , since the latter copies the file descriptor (referring to the same open file description). When appending, all writes go to the end of the file, as it is at the moment of the write. This is handled by the OS, atomically, so that there's no way for even another process to get in the middle. Hence, the issue from independent read/write positions is defused. (Except it might not work over NFS; that's a filesystem restriction.) In your example, the error message ls: cannot access... is written first, at the start of the file. The write position of the stderr fd is now at the end of the file. Then the regular output of file.txt<newline> is also written, but the write position of the stdout fd is still at the start, so those 9 bytes overwrite part of the error message. With an appending fd, that second write would go to the end, regardless of anything. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/734543",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/499446/"
]
} |
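A self-contained way to see the three behaviours side by side, assuming bash and a helper that writes one line to each stream (the exact content of the first file depends on the order and size of the writes, which is precisely the point):

    demo() { echo "this goes to stdout"; echo "this goes to stderr" >&2; }

    demo > out1.txt 2>out1.txt      # two open file descriptions: the writes can overwrite each other
    demo > out2.txt 2>&1            # one shared open file description: both lines survive
    demo >> out3.txt 2>>out3.txt    # two descriptions, but O_APPEND sends every write to the end: both lines survive

    head out1.txt out2.txt out3.txt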
734,848 | Suppose I have this going: $ find ./src -name '*.txt'./src/file1.txt./src/subdir1/file2.txt./src/subdir1/subsubdir1/file3.txt./src/subdir2/file4.txt I want to exclude the directory being searched, so something like: ./file1.txt instead of ./src/file1.txt ./subdir1/file2.txt instead of ./src/subdir1/file2.txt and so on. Adding -mindepth 1 didn't do anything. Is it possible to do what I'm after purely in find ? | Expanding on what @steeldriver said: $ cd "$(mktemp --directory)" # create temporary directorydirenv: unloading$ mkdir foo bar$ touch foo/1 bar/2$ find foo bar -type f -name '*' -printf '%P\n'12 The %P formatting string is documented as follows by the GNU find manual: %P File's name with the name of the starting-point under which it was found removed. Here, "file's name" means the pathname of the found file, not just the filename. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/734848",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/553983/"
]
} |
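-printf is a GNU find extension; if the find on your system (for example macOS or BSD) lacks it, two portable workarounds give the same effect. A sketch, assuming the search root is ./src as in the question:

    # Run find from inside the directory, so the prefix never appears in the output
    ( cd ./src && find . -name '*.txt' )

    # Or strip the known prefix afterwards
    find ./src -name '*.txt' | sed 's|^\./src/||'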
734,931 | Syscalls (system calls) cause some performance penalty due to the isolation between kernel and user space. Therefore, it sounds like a good idea to reduce syscalls. So what I thought is that we could pack syscalls together into a single one. So, the idea is to place the syscalls and arguments in a simple data structure in memory. Then we could introduce a new syscall to which we pass this data structure. The kernel could then trigger all the functionality in parallel and resume the thread once one (or all) of the syscalls has finished. I think this approach would be a good basis for concurrent programming (asynchronous I/O) and would improve on existing select/poll/epoll solutions by allowing concurrency on any syscall and reducing overall context switches. Why is this not done? | This already exists. On Linux it's implemented by io_uring , available since version 5.1 of the kernel (May 2019): operations are placed on a queue (or rather, ring) and processed without system calls, with their results going to another queue. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/734931",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/358092/"
]
} |
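There is no direct shell interface to io_uring, but if you want to watch it in action without writing C against liburing, the fio benchmark tool ships an io_uring engine; this sketch assumes a reasonably recent fio (3.13 or later) is installed and that /tmp has room for a small test file.

    # Queue up to 32 requests at a time through io_uring instead of one syscall per I/O
    fio --name=uring-demo --ioengine=io_uring --iodepth=32 \
        --rw=randread --bs=4k --size=64M --filename=/tmp/uring-demo.dat

    # Compare against the classic one-syscall-per-read engine
    fio --name=sync-demo --ioengine=sync \
        --rw=randread --bs=4k --size=64M --filename=/tmp/uring-demo.dat

Comparing the two runs gives a feel for the batching effect the question is asking about.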
735,138 | Consider a file file2.txt having the following content: P 89 24 -1.5388040474568784e+01 7.4421775186012660e+00 -1.3143195543234219e+03 1.3168884860257754e+03 8.0419002445999993e+01 44 0 0 -97 0P 122 -4 -1.4869334602986523e+01 5.7316939411954255e+00 -1.3144161801429666e+03 1.3169704096915282e+03 8.0419002445999993e+01 44 0 0 -370 0P 493 -24 -1.4690576431881317e+01 7.3848907323212831e+00 -1.3144620647251766e+03 1.3170224315489374e+03 8.0419002445999993e+01 62 0 0 -499 0E 3 -1 -1.0000000000000000e+00 -1.0000000000000000e+00 -1.0000000000000000e+00 9999 0 970 1 2 0 7 1.7003962000000002e+05 8.5019810000000018e-01 8.5019810000000018e-01 8.5019810000000018e-01 3.0000000000000000e+01 3.8153441026312507e+01 1.0000000000000000e+11E 4 -1 -1.0000000000000000e+00 -1.0000000000000000e+00 -1.0000000000000000e+00 9999 0 818 1 2 0 7 1.7003962000000002e+05 8.5019810000000018e-01 8.5019810000000018e-01 8.5019810000000018e-01 3.0000000000000000e+01 3.2509364886711985e+01 1.0000000000000000e+11P 5 2 0 0 3.7531787088999999e+02 3.8383684055052936e+02 8.0419002445999993e+01 22 0 0 -6 0P 8 24 7.0195398693654170e+00 3.1543502387874696e+01 5.5989200759599044e+01 1.0318077843755555e+02 8.0419002445999993e+01 44 0 0 -50 0P 67 28 5.8271676589304882e+00 3.3476871962084061e+01 5.6723118833601163e+01 1.0411236719963519e+02 8.0419002445999993e+01 44 0 0 -168 0P 219 13 6.0328453988772415e+00 3.3531592253635168e+01 5.6777179460595200e+01 1.0417114266715717e+02 8.0419002445999993e+01 44 0 0 -329 0P 444 -24 6.4646967953734418e+00 3.4909545978243479e+01 5.7879920796889749e+01 1.0525098522544691e+02 8.0419002445999993e+01 62 0 0 -452 0E 5 -1 -1.0000000000000000e+00 -1.0000000000000000e+00 -1.0000000000000000e+00 9999 0 598 1 2 0 7 1.7003962000000002e+05 0 0 8.5019810000000018e-01 3.0000000000000000e+01 6.8997318544430456e+01 1.0000000000000000e+11 I want to extract only the strings P ... 24 ... or P ... -24 ... . This is what I do: cat file2.txt | grep -E '(P [0-9]+ 24 | P [0-9] + -24 |P [0-9][0-9]+ 24 | P [0-9][0-9] + -24 |P [0-9][0-9][0-9] + 24 | P [0-9][0-9][0-9] + -24 |P [0-9][0-9][0-9][0-9]+ 24 | P [0-9][0-9][0-9][0-9] + -24 )' &> file3.txt But the resulting file3.txt contains only the strings P ... 24 . Could you please tell me what I am doing wrong? | .... what am I doing wrong? ... apart from making it far more complicated that it needs to be ... you are trying to match multiple spaces and leading spaces that are not in your strings in all of the cases for -24 and in some of other cases too .... P [0-9]+ 24 | is fine P then then a series of digits [0-9]+ , then and 24 followed by a space | P [0-9] + -24 | here there is a before the P and after the digits you had one or more spaces + followed by another which fails to match because of the additional spaces |P [0-9][0-9]+ 24 | fine again though all matches are already caught in the first pattern and so it is redundant | P [0-9][0-9] + -24 | extra spaces, the same as -24 above ... no match |P [0-9][0-9][0-9] + 24 | extra space before the + so it is looking for 2 or more again... | P [0-9][0-9][0-9] + -24 | leading space before the P and 2 or more before -24 again. |P [0-9][0-9][0-9][0-9]+ 24 | fine but redundant | P [0-9][0-9][0-9][0-9] + -24 leading space before the P and 2 or more before -24 again. While @gillesquenot has a far more elegant solution, yours "works" if you lose the additional spaces... 
grep -E '(P [0-9]+ 24 |P [0-9]+ -24 |P [0-9][0-9]+ 24 |P [0-9][0-9]+ -24 |P [0-9][0-9][0-9] + 24 |P [0-9][0-9][0-9]+ -24 |P [0-9][0-9][0-9][0-9]+ 24 |P [0-9][0-9][0-9][0-9]+ -24 )' And if you have the possibility of multiple spaces: grep -E '^P +[0-9]+ +-?24' Edit: This is a useful resource for seeing what matches and where in any string
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/735138",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128780/"
]
} |
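Since the data is column-oriented, a field-based tool sidesteps the whitespace problems entirely; a short awk sketch, assuming the particle code is always the third whitespace-separated field as in the sample:

    awk '$1 == "P" && ($3 == 24 || $3 == -24)' file2.txt > file3.txt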
735,495 | I use MacOS and I have a date variable in the format 3.Jan.2023,12.Nov.2017,9.Apr.2022,... I need to change it to 03.01.2023,12.11.2017,09.04.2022,... | MacOS date allows you to specify input and output formats for conversions: for inp in "3.Jan.2023" "12.Nov.2017" "9.Apr.2022"; do date -j -f "%d.%b.%Y" "$inp" "+%d.%m.%Y"done According to strptime() : %b The month, using the locale's month names; either the abbreviated or full name may be specified. %d The day of the month [01,31]; leading zeros are permitted but not required. %m The month number [01,12]; leading zeros are permitted but not required. %Y The year, including the century (for example, 1988). Output: 03.01.202312.11.201709.04.2022 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/735495",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/560687/"
]
} |
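On Linux the same conversion can be done with GNU date, which has no -j/-f but accepts free-form input after -d; a sketch assuming bash (for the ${inp//./ } substitution that turns the dots into spaces) and the English month abbreviations shown in the sample:

    for inp in "3.Jan.2023" "12.Nov.2017" "9.Apr.2022"; do
        date -d "${inp//./ }" "+%d.%m.%Y"
    done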
735,505 | I am trying to write a script which has a condition based on a variable appearing in a list : #!/bin/bashLIST=`ls`function listcontains() { [[ $1 =~ (^|[[:space:]])$2($|[[:space:]]) ]] && return 0 || return 1}if [ $(listcontains "${LIST}" "multi.sh") ] ; then echo "Found!"else echo "Failed :("fi$(listcontains "${LIST}" "multi.sh")echo returned $? There is a file named in "multi.sh" in the list so I was expecting "Found!" but the script above reports "Failed :(". The subsequent invocations returns 0. I tried if [ 0 -eq $(listcontains "${LIST}" "multi.sh") ] ; then But then I get an error ./script.sh: line 9: [: 0: unary operator expected Failed :( What am I missing here? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/735505",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9178/"
]
} |
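For the script in the question above, the usual fix is to test the function's exit status directly: the function prints nothing, so [ $(listcontains ...) ] becomes [ ] (always false), and [ 0 -eq $(...) ] becomes [ 0 -eq ], which explains the "unary operator expected" error. A sketch of the corrected script, keeping the original names:

    #!/bin/bash
    LIST=$(ls)

    listcontains() {
        [[ $1 =~ (^|[[:space:]])$2($|[[:space:]]) ]]
    }

    # Use the function's exit status directly instead of wrapping it in [ ... ]
    if listcontains "${LIST}" "multi.sh"; then
        echo "Found!"
    else
        echo "Failed :("
    fi

(For checking whether a file exists in the current directory, [ -e multi.sh ] would be simpler than parsing ls, but the sketch keeps the question's approach.)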
736,052 | I often find myself in the situation where I need to look up the syntax and logic for a configuration file of some program on my computer. While I can do man mosquitto , this will not necessarily yield the help section for the file /etc/mosquitto.conf. What I am searching for is something like man /etc/mosquitto.conf or man ./mosquitto.conf , which should open the exact help I need for a given file. Not just mosquitto, that was only an example. Is there such a mapping somewhere? Is there a program that I can use to find help about specific configuration files, instead of having to search the internet? | TL/DR: There's no centralized repository of information. The most up-to-date source of information for an application / tool / etc. is its documentation. Otherwise there are man pages, info pages, tldr pages, html documentation, literature, internet... Man pages are just one form of software documentation. The fastest way to find out whether a man page exists is just giving the command man foo . Command apropos foo outputs a list of man pages containing info about foo . man7.org hosts The Linux man-pages project , a web repository of man pages. It contains lists of man pages by section , alphabetically and by project . Man pages are viewed on the terminal by just man foo (for example man hosts ), never man /path/foo or man ./foo (for example man /etc/hosts ). Distributions can also host their own man pages online. Not everything has a man page. There's no man page for .bashrc , but some information about it is found in man bash - not specifics on its contents, though. xattr has a man page, xattr.conf doesn't; man xattr.conf outputs No manual entry for xattr.conf . Xattr's man page doesn't mention the conf file either. Some info about it can be seen just by cat /etc/xattr.conf : # Format:# <pattern> <action>## Actions:# permissions - copy when trying to preserve permissions.# skip - do not copy. Other files like .bashrc also contain documentation in the form of comments. Similar projects to man pages are GNU Info and TLDR . While man pages contain references to other man pages, they're static, so following references requires opening another page. GNU Info has internal hyperlinks. TLDR pages are sort of cheatsheets, a community effort to simplify the man pages. It also provides practical examples. They're used just like man - i.e. info xattr and tldr xattr . GUI applications (GNOME and KDE, for example) don't use any of the above. Their end user documentation is provided using HTML, and they can contain viewers like GNOME's Yelp . In the end providing documentation is entirely up to the developers. They can freely choose in which form they provide it - or not to provide it at all. There are a great many of them. Consequently the quality and availability of specific information varies a lot, and creating a single repository is simply impossible. Today the fastest and easiest way to find info is the internet. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/736052",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/465412/"
]
} |
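For configuration files specifically, it helps to know that section 5 of the manual is reserved for file formats, so a targeted lookup often works even when man <program> lands on the command's own page; a few examples, assuming the man-db tools found on most Linux systems:

    man 5 hosts           # file-format page for /etc/hosts (section 1 holds commands)
    man -k mosquitto      # same as apropos: list every installed page mentioning mosquitto
    apropos -s 5 ssh      # restrict the search to section 5, i.e. file formats
    whatis sshd_config    # one-line summary, if such a page exists

Whether a given package ships a section 5 page for its configuration file is still up to its developers, as the answer above notes.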
736,098 | Given the following configuration file: shape { visible: true type: rectangle ... } shape { type: circle visible: isRound === "true" ... } shape { // comment: "visible" not set, default "true" } How to set using bash all the values for visible key (property) to false , without touching the comments, but keeping the old value as a comment?The new content should be: shape { visible: false type: rectangle ... } shape { type: circle visible: false // visible: isRound === "true" // ideally, above, the old value is kept as a comment... ... } shape { visible: false // comment: "visible" not set, default "true" } The file is not just a list of shape structures, may contain other entries too. This should be QML compliant. awk shows: awk --versionGNU Awk 5.2.1, API 3.2Copyright (C) 1989, 1991-2022 Free Software Foundation. Edit I think I can use this PCRE regex : (\s*shape\s*\{\n)(\s*)(.*?)(visible\:.*\n?)?(.*?)(\n\s*\}) with replace expression as: $1$2visible: false // $4\n$2$5$6 or \1\2visible: false // \4\n\2\5\6 but I need a tool to apply it. It is not perfect, still needs to not comment twice. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/736098",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/221551/"
]
} |
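As a rough sketch of one way to apply such a rewrite, perl handles in-place editing and keeps the replacement readable; the simple variant below only rewrites lines that already contain a visible: property (keeping the old value as a trailing // comment) and does not insert the property into shape blocks that lack it, so treat it as a starting point rather than a QML-aware tool. The file name shapes.qml is a placeholder.

    # Rewrite every "visible: <value>" line, preserving the old value as a comment;
    # the (?!false\b) lookahead keeps already-converted lines from being rewritten twice.
    perl -i.bak -pe 's{^(\s*)visible:\s*(?!false\b)(.*)$}{${1}visible: false // ${2}}' shapes.qml

For the full multi-line pattern given in the question, the same perl call can be run with -0777 so the whole file is read as one string and the regex may span lines.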
736,116 | Is there a command to copy and compile a c file to another directory? So if I have Ticker.c in the home directory, I want to copy it and compile it in another directory called Task | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/736116",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/561408/"
]
} |
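There is no single built-in command that copies and compiles in one step, but the two actions chain naturally; a sketch assuming gcc is installed, Ticker.c sits in the home directory and ~/Task already exists:

    cp ~/Ticker.c ~/Task/ && gcc -o ~/Task/Ticker ~/Task/Ticker.c

    # If only the compiled binary needs to end up in Task, gcc can write it there directly
    gcc -o ~/Task/Ticker ~/Ticker.c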
736,505 | I want to match lines that have foo unless the next line contains bar. So given a file containing: 1 foo 1 foo 2baz bar bap only 1 foo 1 would print. I got this to work using a negative lookahead /foo(?!.*\n.*bar)/ on https://regex101.com/r/ZMZsiN/1 but getting this to work on the command line with grep and perl both failed. Any solution using grep or one-liners in perl, sed, awk, or python would be good. Chatgpt failed me. Some attempts: $grep -Pwe 'foo(?!.*\n.*bar)' testfile1 foo 1 foo 2 $perl -wnl -e /'foo(?!\n.*bar)/ and print' testfile1 foo 1 foo 2 $perl -ne 'print if /foo/ && ($_ = <>) !~ /bar/' testfilefoo 2 The last one is based on something chatgpt gave and is close but my perlfu isn't good enough to figure out what's wrong. | grep or perl -n works on one line at a time so the thing the regexp matches on is just the contents of one line (with the line delimiter not even included with grep or perl with -l ). You could use pcregrep (the -P option that GNU grep can be built to support uses PCRE as well) which has a multiline mode with -M . pcregrep -M '\bfoo\b(?!.*\n.*\bbar\b)' Besides pulling more lines into the subject of the match as needed, the multiline mode in pcregrep also enables the m flag (implicit (?m) ) which makes ^ and $ match at the start and end of each line, not just of the subject, and does not enable the s flag, which means . doesn't match newline characters. ( \b is for word boundary ; the -w would not put word boundaries in useful places). With perl -n , you could set the record separator to something impossible for the regexp to match on the whole file: perl -0777 -ne ' print for m{^.*\bfoo\b.*\n(?!.*\bbar\b)}mg' With the standard Unix toolchest, you could use sed , though standard sed has no word boundary operator, so you'd need clumsy workarounds: sed -n '/^\(.*[^[:alnum:]_]\)\{0,1\}foo\([^[:alnum:]_].*\)\{0,1\}$/ { $!N /\n\(.*[^[:alnum:]_]\)\{0,1\}bar\(.*[^[:alnum:]_]\)\{0,1\}.*$/!P D }' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/736505",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86629/"
]
} |
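An awk alternative that needs no PCRE at all: remember the previous line and print it once the next line is known not to contain bar, with the END block covering a final foo line that has no successor. A sketch, assuming any POSIX awk and the same testfile:

    awk '
        prev ~ /foo/ && $0 !~ /bar/ { print prev }
        { prev = $0 }
        END { if (prev ~ /foo/) print prev }
    ' testfile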