source_id
int64 1
74.7M
| question
stringlengths 0
40.2k
| response
stringlengths 0
111k
| metadata
dict |
---|---|---|---|
444,770 | i have a test.tar in the folder /dir1/dir2/ and i want to extract folders from my test.tar into the directory /dir1/dir2/. The structure of my test.tar ist test.tar|tdirX/|tdirY/|tdirZ/ and so on. now i want to extract the folders tdirX - tdir Y,... to /dir1/dir2/ without extracting test.tar/tdirZ FYI: I'm running SunOS 5.8 | If groff and gropdf exists on your Linux system, you should be able to use man -Tpdf man >man.pdf (note the absence of a space between -T and pdf ) On an Ubuntu system, it should be enough to install the groff package to get access to gropdf . The option argument to -T is passed on to groff and groff will use its -T option with the same option argument. So, read the groff manual about -T for more info. On systems using mandoc , the groff utility does not need to be installed for the above command to work since the mandoc utility (called by man ) would convert the manual to PDF by itself. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/444770",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291653/"
]
} |
444,825 | I have several files named like this: This is a test - AB12-1998.avi (the last code is always 2 letters, 2 numbers, dash, 4 digits) What I'd like to do is rename them like this: AB12-1998 - This is a test.avi I'd appreciate any solution you can give me using bash, rename, or any other way as long as it gets the job done. | With the Perl rename (*) : rename 's/^(.*) - (.*)(\..*)$/$2 - $1$3/' *.avi or, if you want to be stricter about the code: rename 's/^(.*) - ([a-zA-Z]{2}\d{2}-\d{4})(\..*)$/$2 - $1$3/' *.avi That should work even with names like foo - bar - AB12-1234.avi , since the first .* greedily matches up to the final <space><dash><space> . (* see: What's with all the renames: prename, rename, file-rename? ) Or similarly in Bash: for f in *.avi ; do if [[ "$f" =~ ^(.*)\ -\ (.*)(\..*)$ ]]; then mv "$f" "${BASH_REMATCH[2]} - ${BASH_REMATCH[1]}${BASH_REMATCH[3]}" fidone Briefly, the regexes break down to ^ start of string( ) capture group.* any amount of anything\. a literal dot$ end of string Most regular characters match themselves, though you need to escape spaces with backslashes in Bash (as above). The contents of the capture groups appear in order in $1 , $2 etc in Perl, and in ${BASH_REMATCH[1]} , ${BASH_REMATCH[2]} etc in Bash. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/444825",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291701/"
]
} |
444,931 | When I was installing my OS, I didn't encrypt. Is there a way to encrypt it now without formatting and without losing any data?I read a few guides how to encrypt and every one says that I need to backup all my data because I will lose it.Is there a way to encrypt it all now without losing data? | Yes, there is a way. The LUKS cryptsetup utility contains the reencrypt command that you can also use to encrypt your existing unencrypted root partition, i.e. without destroying the existing filesystem. That said, before performing such a conversion you should still backup your data. Of course, one should always perform backups on a regular schedule, because of possible hardware failure etc. Thus, this is kind of redundant advice. Switching an existing root filesystem from unencrypted to encrypted requires quite a few steps: backup make sure that the cryptsetup package is installed make sure that your root filesystem has some free space (at least 100 MiB to be on the safe side) identify the partition your root partition is located on: e.g. with df / , lookup the UUID of the filesystem with blkid and store it somewhere boot into a rescue system where you can unmount your root filesystem (e.g. boot from an USB stick which contains - say - Grml ) locate your root partition (e.g. with blkid and look for the UUID) if it's ext4 execute a filesystem check: e2fsck -f /dev/sdXY shrink the filesystem to make some room for the LUKS header, e.g. if it's an ext4 filesystem: resize2fs /dev/sdXY $smallersizeinGiB_G (you need to shrink it by at least 32 MiB) encrypt it: cryptsetup reencrypt --encrypt /dev/sdXY --reduce-device-size 32M open it: cryptsetup open /dev/sdXY root enlarge the filesystem to the maximum: resize2fs /dev/mapper/root mount it to - say - /mnt/root mount the boot filesystem on /mnt/root and bind-mount pseudo filesystems /dev , /sys , /proc under /mnt/root . chroot into your system by: chroot /mnt/root /bin/bash update kernel parameters in /etc/default/grub or some equivalent location, e.g. when your distro uses dracut (which is likely) you need to add rd.luks.uuid=$UUID_OF_LUKS_DEVICE (cf. blkid , note that this UUID is different from the root filesystem one), if you have selinux installed you should add enforcing=0 (and later remove it) because of all the edits if your distribution has selinux enabled, configure a relabeling: touch /mnt/root/.autorelabel regenerate grub config: grub2-mkconfig -o /boot/.../grub...cfg regenerate initramfs (to make sure that cryptsetup support is included): dracut -f /boot/initramfs....img kernelversion exit the chroot unmount everything cryptsetup close root reboot As you see these are many steps, i.e. there is some potential to introduce errors. Thus, arguably it might be simpler to just reinstall and restore your backup (e.g. config files and $HOME ). Also, in my experience as of 2020, cryptsetup reencrypt is relatively slow, thus it may be faster to just cryptsetup luksFormat the device and restore a backup. If you have an XFS filesystem, you can't just shrink it, because XFS doesn't support this, as of 2020. Thus, you would need to fstransform it before being able to shrink it. With a transformed filesystem you have another uuid to take care of. That means either change the UUID of the new filessytem to the UUID of the old one. Or update the UUID of the filesystem in /mnt/root/etc/fstab . With a dracut based distribution you don't need to create a /etc/crypttab , other distribution might require it (also before the initramfs update, because it might need to be included there). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/444931",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291778/"
]
} |
444,935 | In bash script I'm developing I'm trying to execute command and capture in variable(s): stdout stderr status code how to achieve that? The command is tar , if it is of any significance. I tried the most standard approach: TAROUTPUT=$(tar -cf arch.tar /path/to/dir) Based on some work I did (I haven't actually produced tar failure) I get only stdout from this, stderr is not stored to variable. The perfect solution has TAROUTPUT (with both stdout&stderr) and TARSTATUS variables. Thanks in advance. | TAROUTPUT=$(tar -cf arch.tar /path/to/dir 2>&1)this_is_the_tar_exit_code=$? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/444935",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68350/"
]
} |
444,946 | $ ls -l /tmp/test/my\ dir/total 0 I was wondering why the following ways to run the above command fail or succeed? $ abc='ls -l "/tmp/test/my dir"'$ $abcls: cannot access '"/tmp/test/my': No such file or directoryls: cannot access 'dir"': No such file or directory$ "$abc"bash: ls -l "/tmp/test/my dir": No such file or directory$ bash -c $abc'my dir'$ bash -c "$abc"total 0$ eval $abctotal 0$ eval "$abc"total 0 | This has been discussed in a number of questions on unix.SE, I'll try to collect all issues I can come up with here. Below is a description of why and how the various attempts fail, a way to do it properly with a function (for a fixed command), or with shell arrays (Bash/ksh/zsh) or the $@ pseudo-array (POSIX sh), both of which also allow building the command line pieces, if you e.g. only need to vary some optoins and notes about using eval to do this. Some references at the end. For the purposes here, it doesn't matter much if it's only the command arguments or also the command name that is to be stored in a variable. They're processed similarly up to the point where the command is launched, at which point the shell just takes the first word as the name of the command to run. Why it fails The reason you face those problems is the fact that word splitting is quite simple and doesn't lend itself to complex cases, and the fact that quotes expanded from variables don't act as quotes, but are just ordinary characters. (Note that the part about quotes is similar to every other programming language: e.g. char *s = "foo()"; printf("%s\n", s) does not call the function foo() in C, but just prints the string foo() . That's different in macro processors, like m4, the C preprocessor, or Make (to some extent). The shell is a programming language, not a macro processor.) On Unix-like systems, it's the shell that processes quotes and variable expansions on the command line, turning it from a single string into the list of strings that the underlying system call passes to the launched command. The program itself doesn't see the quotes the shell processed. E.g. if given the command ls -l "foo bar" , the shell turns that into the three strings ls , -l and foo bar (removing the quotes), and passes those to ls . (Even the command name is passed, though not all programs use it.) The cases presented in the question: The assignment here assigns the single string ls -l "/tmp/test/my dir" to abc : $ abc='ls -l "/tmp/test/my dir"' Below, $abc is split on whitespace, and ls gets the three arguments -l , "/tmp/test/my and dir" . The quotes here are just data, so there's one at the front of the second argument and another at the back of the third. The option works, but the path gets incorrectly processed as ls sees the quotes as part of the filenames: $ $abcls: cannot access '"/tmp/test/my': No such file or directoryls: cannot access 'dir"': No such file or directory Here, the expansion is quoted, so it's kept as a single word. The shell tries to find a program literally called ls -l "/tmp/test/my dir" , spaces and quotes included. $ "$abc"bash: ls -l "/tmp/test/my dir": No such file or directory And here, $abc is split, and only the first resulting word is taken as the argument to -c , so Bash just runs ls in the current directory. The other words are arguments to bash, and are used to fill $0 , $1 , etc. $ bash -c $abc'my dir' With bash -c "$abc" , and eval "$abc" , there's an additional shell processing step, which does make the quotes work, but also causes all shell expansions to be processed again , so there's a risk of accidentally running e.g. a command substitution from user-provided data, unless you're very careful about quoting. Better ways to do it The two better ways to store a command are a) use a function instead, b) use an array variable (or the positional parameters). Using functions: Simply declare a function with the command inside, and run the function as if it were a command. Expansions in commands within the function are only processed when the command runs, not when it's defined, and you don't need to quote the individual commands. Though this really only helps if you have a fixed command you need to store (or more than one fixed command). # define itmyls() { ls -l "/tmp/test/my dir"}# run itmyls It's also possible to define multiple functions and use a variable to store the name of the function you want to run in the end. Using an array: Arrays allow creating multi-word variables where the individual words contain white space. Here, the individual words are stored as distinct array elements, and the "${array[@]}" expansion expands each element as separate shell words: # define the arraymycmd=(ls -l "/tmp/test/my dir")# expand the array, run the command"${mycmd[@]}" The command is written inside the parentheses exactly as it would be written when running the command. The processing the shell does is the same in both cases, just in one it only saves the resulting list of strings, instead of using it to run a program. The syntax for expanding the array later is slightly horrible, though, and the quotes around it are important. Arrays also allow you to build the command line piece-by-piece. For example: mycmd=(ls) # initial commandif [ "$want_detail" = 1 ]; then mycmd+=(-l) # optional flag, append to arrayfimycmd+=("$targetdir") # the filename"${mycmd[@]}" or keep parts of the command line constant and use the array fill just a part of it, like options or filenames: options=(-x -v)files=(file1 "file name with whitespace")target=/somedirsomecommand "${options[@]}" "${files[@]}" "$target" ( somecommand being a generic placeholder name here, not any real command.) The downside of arrays is that they're not a standard feature, so plain POSIX shells (like dash , the default /bin/sh in Debian/Ubuntu) don't support them (but see below). Bash, ksh and zsh do, however, so it's likely your system has some shell that supports arrays. Using "$@" In shells with no support for named arrays, one can still use the positional parameters (the pseudo-array "$@" ) to hold the arguments of a command. The following should be portable script bits that do the equivalent of the code bits in the previous section. The array is replaced with "$@" , the list of positional parameters. Setting "$@" is done with set , and the double quotes around "$@" are important (these cause the elements of the list to be individually quoted). First, simply storing a command with arguments in "$@" and running it: set -- ls -l "/tmp/test/my dir""$@" Conditionally setting parts of the command line options for a command: set -- lsif [ "$want_detail" = 1 ]; then set -- "$@" -lfiset -- "$@" "$targetdir""$@" Only using "$@" for options and operands: set -- -x -vset -- "$@" file1 "file name with whitespace"set -- "$@" /somedirsomecommand "$@" Of course, "$@" is usually filled with the arguments to the script itself, so you'll have to save them somewhere before re-purposing "$@" . To conditionally pass a single argument, you can also use the alternate value expansion ${var:+word} with some careful quoting. Here, we include -f and the filename only if the filename is nonempty: file="foo bar"somecommand ${file:+-f "$file"} Using eval (be careful here!) eval takes a string and runs it as a command, just like if it was entered on the shell command line. This includes all quote and expansion processing, which is both useful and dangerous. In the simple case, it allows doing just what we want: cmd='ls -l "/tmp/test/my dir"'eval "$cmd" With eval , the quotes are processed, so ls eventually sees just the two arguments -l and /tmp/test/my dir , like we want. eval is also smart enough to concatenate any arguments it gets, so eval $cmd could also work in some cases, but e.g. all runs of whitespace would be changed to single spaces. It's still better to quote the variable there as that will ensure it gets unmodified to eval . However, it's dangerous to include user input in the command string to eval . For example, this seems to work: read -r filenamecmd="ls -ld '$filename'"eval "$cmd"; But if the user gives input that contains single quotes, they can break out of the quoting and run arbitrary commands! E.g. with the input '$(whatever)'.txt , your script happily runs the command substitution. That it could have been rm -rf (or worse) instead. The issue there is that the value of $filename was embedded in the command line that eval runs. It was expanded before eval , which saw e.g. the command ls -l ''$(whatever)'.txt' . You would need to pre-process the input to be safe. If we do it the other way, keeping the filename in the variable, and letting the eval command expand it, it's safer again: read -r filenamecmd='ls -ld "$filename"'eval "$cmd"; Note the outer quotes are now single quotes, so expansions within do not happen. Hence, eval sees the command ls -l "$filename" and expands the filename safely itself. But that's not much different from just storing the command in a function or an array. With functions or arrays, there is no such problem since the words are kept separate for the whole time, and there's no quote or other processing for the contents of filename . read -r filenamecmd=(ls -ld -- "$filename")"${cmd[@]}" Pretty much the only reason to use eval is one where the varying part involves shell syntax elements that can't be brought in via variables (pipelines, redirections, etc.). However, you'll then need to quote/escape everything else on the command line that needs protection from the additional parsing step (see link below). In any case, it's best to avoid embedding input from the user in the eval command! References Word Splitting in BashGuide BashFAQ/050 or "I'm trying to put a command in a variable, but the complex cases always fail!" The question Why does my shell script choke on whitespace or other special characters? , which discusses a number of issues related to quoting and whitespace, including storing commands. Escape a variable for use as content of another script How can I conditionally pass an argument from a POSIX shell script? | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/444946",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
444,962 | I have file with this output, and I am trying to collect useful data from my file. R1#show ip route 192.168.5.130Routing Descriptor Blocks: * 192.168.5.128, from 192.168.5.162, 00:20:16 ago, via Serial0/0/0.2 Route metric is 2172416, traffic share count is 1 Total delay is 20100 microseconds, minimum bandwidth is 1544 Kbit/sec Reliability 255/255, minimum MTU 1500 bytes Loading 1/255, Hops 1 I want to grep match if my above paragraph have word "metric" then it should display the whole paragraph not just that line. Also is there a way I can check condition that if metric==2172416 then return the whole paragraph. I would like to know the simplest and easiest way to do it, since I am going to apply that in different scenarios. Also If I have this in my file, how can fetch just the lines from Apr 11? Can I use wildcard here? CPU0:Apr 11 05:22:04.768 UTC: pim[1182]: %ROUTING-IPV4_PIM-5-INTCHG : CPU0:Apr 11 05:22:04.769 UTC: pim[1182]: %ROUTING-IPV4_PIM-5-NBRCHG : CPU0:Apr 11 05:22:04.769 UTC: pim[1182]: %ROUTING-IPV4_PIM-5-NBRCHG : CPU0:Apr 11 06:09:53.066 UTC: pim[1182]: %ROUTING-IPV4_PIM-5-INTCHG : CPU0:Apr 11 06:09:53.066 UTC: pim[1182]: %ROUTING-IPV4_PIM-5-NBRCHG : CPU0:Apr 11 06:09:56.707 UTC: pim[1182]: %ROUTING-IPV4_PIM-5-NBRCHG : | This has been discussed in a number of questions on unix.SE, I'll try to collect all issues I can come up with here. Below is a description of why and how the various attempts fail, a way to do it properly with a function (for a fixed command), or with shell arrays (Bash/ksh/zsh) or the $@ pseudo-array (POSIX sh), both of which also allow building the command line pieces, if you e.g. only need to vary some optoins and notes about using eval to do this. Some references at the end. For the purposes here, it doesn't matter much if it's only the command arguments or also the command name that is to be stored in a variable. They're processed similarly up to the point where the command is launched, at which point the shell just takes the first word as the name of the command to run. Why it fails The reason you face those problems is the fact that word splitting is quite simple and doesn't lend itself to complex cases, and the fact that quotes expanded from variables don't act as quotes, but are just ordinary characters. (Note that the part about quotes is similar to every other programming language: e.g. char *s = "foo()"; printf("%s\n", s) does not call the function foo() in C, but just prints the string foo() . That's different in macro processors, like m4, the C preprocessor, or Make (to some extent). The shell is a programming language, not a macro processor.) On Unix-like systems, it's the shell that processes quotes and variable expansions on the command line, turning it from a single string into the list of strings that the underlying system call passes to the launched command. The program itself doesn't see the quotes the shell processed. E.g. if given the command ls -l "foo bar" , the shell turns that into the three strings ls , -l and foo bar (removing the quotes), and passes those to ls . (Even the command name is passed, though not all programs use it.) The cases presented in the question: The assignment here assigns the single string ls -l "/tmp/test/my dir" to abc : $ abc='ls -l "/tmp/test/my dir"' Below, $abc is split on whitespace, and ls gets the three arguments -l , "/tmp/test/my and dir" . The quotes here are just data, so there's one at the front of the second argument and another at the back of the third. The option works, but the path gets incorrectly processed as ls sees the quotes as part of the filenames: $ $abcls: cannot access '"/tmp/test/my': No such file or directoryls: cannot access 'dir"': No such file or directory Here, the expansion is quoted, so it's kept as a single word. The shell tries to find a program literally called ls -l "/tmp/test/my dir" , spaces and quotes included. $ "$abc"bash: ls -l "/tmp/test/my dir": No such file or directory And here, $abc is split, and only the first resulting word is taken as the argument to -c , so Bash just runs ls in the current directory. The other words are arguments to bash, and are used to fill $0 , $1 , etc. $ bash -c $abc'my dir' With bash -c "$abc" , and eval "$abc" , there's an additional shell processing step, which does make the quotes work, but also causes all shell expansions to be processed again , so there's a risk of accidentally running e.g. a command substitution from user-provided data, unless you're very careful about quoting. Better ways to do it The two better ways to store a command are a) use a function instead, b) use an array variable (or the positional parameters). Using functions: Simply declare a function with the command inside, and run the function as if it were a command. Expansions in commands within the function are only processed when the command runs, not when it's defined, and you don't need to quote the individual commands. Though this really only helps if you have a fixed command you need to store (or more than one fixed command). # define itmyls() { ls -l "/tmp/test/my dir"}# run itmyls It's also possible to define multiple functions and use a variable to store the name of the function you want to run in the end. Using an array: Arrays allow creating multi-word variables where the individual words contain white space. Here, the individual words are stored as distinct array elements, and the "${array[@]}" expansion expands each element as separate shell words: # define the arraymycmd=(ls -l "/tmp/test/my dir")# expand the array, run the command"${mycmd[@]}" The command is written inside the parentheses exactly as it would be written when running the command. The processing the shell does is the same in both cases, just in one it only saves the resulting list of strings, instead of using it to run a program. The syntax for expanding the array later is slightly horrible, though, and the quotes around it are important. Arrays also allow you to build the command line piece-by-piece. For example: mycmd=(ls) # initial commandif [ "$want_detail" = 1 ]; then mycmd+=(-l) # optional flag, append to arrayfimycmd+=("$targetdir") # the filename"${mycmd[@]}" or keep parts of the command line constant and use the array fill just a part of it, like options or filenames: options=(-x -v)files=(file1 "file name with whitespace")target=/somedirsomecommand "${options[@]}" "${files[@]}" "$target" ( somecommand being a generic placeholder name here, not any real command.) The downside of arrays is that they're not a standard feature, so plain POSIX shells (like dash , the default /bin/sh in Debian/Ubuntu) don't support them (but see below). Bash, ksh and zsh do, however, so it's likely your system has some shell that supports arrays. Using "$@" In shells with no support for named arrays, one can still use the positional parameters (the pseudo-array "$@" ) to hold the arguments of a command. The following should be portable script bits that do the equivalent of the code bits in the previous section. The array is replaced with "$@" , the list of positional parameters. Setting "$@" is done with set , and the double quotes around "$@" are important (these cause the elements of the list to be individually quoted). First, simply storing a command with arguments in "$@" and running it: set -- ls -l "/tmp/test/my dir""$@" Conditionally setting parts of the command line options for a command: set -- lsif [ "$want_detail" = 1 ]; then set -- "$@" -lfiset -- "$@" "$targetdir""$@" Only using "$@" for options and operands: set -- -x -vset -- "$@" file1 "file name with whitespace"set -- "$@" /somedirsomecommand "$@" Of course, "$@" is usually filled with the arguments to the script itself, so you'll have to save them somewhere before re-purposing "$@" . To conditionally pass a single argument, you can also use the alternate value expansion ${var:+word} with some careful quoting. Here, we include -f and the filename only if the filename is nonempty: file="foo bar"somecommand ${file:+-f "$file"} Using eval (be careful here!) eval takes a string and runs it as a command, just like if it was entered on the shell command line. This includes all quote and expansion processing, which is both useful and dangerous. In the simple case, it allows doing just what we want: cmd='ls -l "/tmp/test/my dir"'eval "$cmd" With eval , the quotes are processed, so ls eventually sees just the two arguments -l and /tmp/test/my dir , like we want. eval is also smart enough to concatenate any arguments it gets, so eval $cmd could also work in some cases, but e.g. all runs of whitespace would be changed to single spaces. It's still better to quote the variable there as that will ensure it gets unmodified to eval . However, it's dangerous to include user input in the command string to eval . For example, this seems to work: read -r filenamecmd="ls -ld '$filename'"eval "$cmd"; But if the user gives input that contains single quotes, they can break out of the quoting and run arbitrary commands! E.g. with the input '$(whatever)'.txt , your script happily runs the command substitution. That it could have been rm -rf (or worse) instead. The issue there is that the value of $filename was embedded in the command line that eval runs. It was expanded before eval , which saw e.g. the command ls -l ''$(whatever)'.txt' . You would need to pre-process the input to be safe. If we do it the other way, keeping the filename in the variable, and letting the eval command expand it, it's safer again: read -r filenamecmd='ls -ld "$filename"'eval "$cmd"; Note the outer quotes are now single quotes, so expansions within do not happen. Hence, eval sees the command ls -l "$filename" and expands the filename safely itself. But that's not much different from just storing the command in a function or an array. With functions or arrays, there is no such problem since the words are kept separate for the whole time, and there's no quote or other processing for the contents of filename . read -r filenamecmd=(ls -ld -- "$filename")"${cmd[@]}" Pretty much the only reason to use eval is one where the varying part involves shell syntax elements that can't be brought in via variables (pipelines, redirections, etc.). However, you'll then need to quote/escape everything else on the command line that needs protection from the additional parsing step (see link below). In any case, it's best to avoid embedding input from the user in the eval command! References Word Splitting in BashGuide BashFAQ/050 or "I'm trying to put a command in a variable, but the complex cases always fail!" The question Why does my shell script choke on whitespace or other special characters? , which discusses a number of issues related to quoting and whitespace, including storing commands. Escape a variable for use as content of another script How can I conditionally pass an argument from a POSIX shell script? | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/444962",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291229/"
]
} |
444,970 | I have a file alphanum with these two lines: 123 abcthis is a line I am confused as to why, when I run sed 's/[a-z]*/SUB/' alphanum , I get the following output: SUB123 abcSUB is a line I was expecting: 123 SUBSUB is a line I found a fix (use sed 's/[a-z][a-z]*/SUB/' instead), but I don't understand why it works and mine doesn't. Can you help? | The pattern [a-z]* matches zero or more characters in the range a to z (the actual characters are dependent on the current locale). There are zero such characters at the very start of the string 123 abc (i.e. the pattern matches), and also four of them at the start of this is a line . If you need at least one match, then use [a-z][a-z]* or [a-z]\{1,\} , or enable extended regular expressions with sed -E and use [a-z]+ . To visualize where the pattern matches, add parentheses around each match: $ sed 's/[a-z]*/(&)/' file()123 abc(this) is a line Or, to see all matches on the lines: $ sed 's/[a-z]*/(&)/g' file()1()2()3() (abc)(this) (is) (a) (line) Compare that last result with $ sed -E 's/[a-z]+/(&)/g' file123 (abc)(this) (is) (a) (line) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/444970",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/271929/"
]
} |
444,978 | I'm trying to use openssl to create a cryptographic hash of a file using HMAC-SHA-256. I'm confused as to why I'm seeing a 'no such file or directory' error on the output. The key I'm using is in a file called mykey.txt. This is my command: openssl dgst -sha256 -hmac -hex hexkey:$(cat mykey.txt) -out hmac.txt /bin/ps And the output | -hmac takes the key as an argument ( see manual ), so your command asks for an HMAC using the key -hex . hexkey:... is taken as a filename, since it doesn't start with a dash, and openssl doesn't take options after filenames, so the following -out is also a filename. To get the HMAC with a key given as a hex string, you'll need to use -mac hmac and -macopt hexkey:<key> . Note that using -hmac <key> and -mac hmac together doesn't work, and -macopt requires -mac hmac . Test: openssl dgst -sha256 -hmac abc <<< "message"openssl dgst -sha256 -hmac abc -macopt hexkey:12345678 <<< "message"openssl dgst -sha256 -mac hmac -macopt hexkey:616263 <<< "message"perl -MDigest::HMAC=hmac_hex -MDigest::SHA=sha256 \ -le 'print(hmac_hex("message\n", "abc", \&sha256))' All give the hash 99592e56fcde028fb41882668b0cbfa0119116f9cf111d285f5cedb000cfc45a which agrees with a random online HMAC calculator for message message\n , key abc or 616263 in hex. (Note the newline at the end of message here.) So, it seems you'd probably want openssl dgst -sha256 -mac hmac -macopt hexkey:$(cat mykey.txt) -out hmac.txt /bin/ps Since we're talking about cryptography, which is hard; and OpenSSL, which doesn't always have the most easy-to-use interfaces, I would suggest also verifying everything yourself, at least twice, instead of taking my word for it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/444978",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255605/"
]
} |
444,998 | I don't understand the best way to set fs.inotify.max_user_watches with sysctl . In fact, I don't understand much of what is happening here other than the fact that I need to set the number of files that can be watched by a particular process. I believe that I can see the max number of users by running this command: cat /proc/sys/fs/inotify/max_user_watches My understanding is that some people suggest changing /proc/sys/fs/inotify/max_user_watches by opening /etc/sysctl.conf in an editor and adding this to it: fs.inotify.max_user_watches=524288 Then run sudo sysctl -p to -- presumably -- process the changes made to the file. Others suggest running commands like this: sudo sysctl -w fs.inotify.max_user_instances=1024sudo sysctl -w fs.inotify.max_user_watches=12288 I know that -w stands for write, but what is being written and where? Is it just that this command changes /proc/.../max_user_watches ? Which of the two approaches outlined above is best? I understand that 524288 and 12288 are different numbers, but I don't understand the difference between the effect of running -p and -w . | sysctl -w writes kernel parameter values to the corresponding keys under /proc/sys : sudo sysctl -w fs.inotify.max_user_watches=12288 writes 12288 to /proc/sys/fs/inotify/max_user_watches . (It’s not equivalent, it’s exactly that; interested readers can strace it to see for themselves.) sysctl -p loads settings from a file, either /etc/sysctl.conf (the default), or whatever file is specified after -p . The difference between both approaches, beyond the different sources of the parameters and values they write, is that -w only changes the parameters until the next reboot, whereas values stored in /etc/sysctl.conf will be applied again every time the system boots. My usual approach is to use -w to test values, then once I’m sure the new settings are OK, write them to /etc/sysctl.conf or a file under /etc/sysctl.d (usually /etc/sysctl.d/local.conf ). See the sysctl and sysctl.conf manual pages ( man sysctl and man sysctl.conf on your system) for details. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/444998",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91728/"
]
} |
445,032 | When does history expansion happen? From bash manual Enclosing characters in double quotes (‘"’) preserves the literal value of all characters within the quotes, with the exception of ‘$’, ‘`’, ‘\’, and, when history expansion is enabled, ‘!’. Since double quotes are recognized at parsing stage by the parser,is it correct that history expansion must happen after parsing? If yes, when does it happen with respect to shell expansions such as brace expansion, parameter expansion, filename expansion, etc? But I think that history expansion is provided by the readline ofthe shell, so is processed before lexical analysis and parsing? Justlike auto-completion in shell. Am I missing something? Thanks. | Quoting the bash manual : History expansion is performed immediately after a complete line is read, before the shell breaks it into words. History expansion is the first stage of processing, even before shell parsing, which is why double quotes don’t protect ! : the latter is processed before double quotes. It is handled by the history library, which implements its own parsing, with a few ways of protecting the history operator: Only ‘ \ ’ and ‘ ' ’ may be used to escape the history expansion character, but the history expansion character is also treated as quoted if it immediately precedes the closing double quote in a double-quoted string. By the time the shell’s parser starts handling a string, it’s already been parsed by the history library and history expansion has already taken place. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/445032",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
445,094 | I have this variable : toto=123456 why does touch "$toto.hihi.log" works and creates a file called 123456.hihi.log but touch "$totohihi.log" doesn’t do anything ? | You need touch "${toto}hihi.log" The problem is that the shell cannot know without the braces how many characters are part of the variable name. Thus it treats all legal characters as a part of the name. In this case that is everything before the . ; i.e. the shell uses the non-existing variable $totohihi . In general it helps to use the shell option -x to see what is going on: set -xtouch "$totohihi.log" + touch .log | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/445094",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/211513/"
]
} |
445,325 | I just solved a problem with my Makefile(s). Make trips over every <<< with the error message /bin/sh: 1: Syntax error: redirection unexpected And I would like to know why. (I am using Bash as SHELL ) In my current projects I tried a lot of recipies along the lines of: target: read FOO <<< "XXX"; \ read BAR <<< "YYY"; \ read BAZ <<< "ZZZ"; \ someprog -a param0 -b $$FOO -c param1 -d $${BAR} -e param2 -f $${BAZ} >$@ Trying this will result in an error for every <<< as described at the beginning. My workaround is target.dep: echo "XXX YYY ZZZ" >$@target: %: %.dep read FOO BAR BAZ < $<;\ someprog -a param0 -b $$FOO -c param1 -d $${BAR} -e param2 -f $${BAZ} >$@ which means I put my stuff into temporary files which I then read with < , which works just fine. When I copy paste my make output to a normal bash prompt, every command works just as expected, even with the <<< . I am fairly certain that my problem is, that using the <<< operator, i.e. here strings, break something. Why is that and is there a way to make here strings work in Makefiles? P.S.: Yes, sometimes I feel autotools would be the better choice over make. | /bin/sh: 1: Syntax error: redirection unexpected means you’re not using bash as your shell, in spite of your expectations to the contrary. bash as sh recognises here strings fine (so your Makefile would work on Fedora), but for example dash as sh doesn’t. Unless told otherwise, Make uses /bin/sh as its shell; it ignores your default user shell. Setting SHELL=/bin/bash in your Makefile should fix things for you; at least, it does for me on a system showing the same symptoms as yours. P.S.: Yes, sometimes I feel autotools would be the better choice over make. Autotools and Make don’t address the same problems; they’re complementary, and using Autotools would still mean using Make... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/445325",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19575/"
]
} |
445,340 | I have multiple files with different number or records in each. I want an awk command to print the second last line of them and perform some change. I want something like this:(this one doesn't work of course) awk '( NR == FNR-1 ), $0","' *.txt | awk reads records as they come and has no notion of how far from the end those records are if it has not read them yet (extra records could very well be added after it reads and processes the current record). Contrary to sed , it cannot even tell which is the last record ( sed has the $ address which it actually implements by internally reading one record in advance so it knows which is the last one). You can however do some processing at the end in the special END statement, or with GNU awk , after having processed each input file (in the ENDFILE statement). So you can save the last two records while you're processing them, and then in the END / ENDFILE statement, recall the penultimate one from where you've saved it. For instance: awk '{prevlast = last; last = $0} END {if (NR >= 2) print "penultimate:", prevlast}' < input Or: gawk '{prevlast = last; last = $0} ENDFILE { if (FNR >= 2) print "penultimate for", FILENAME ":", prevlast }' file1 file2 Or to generalise it for the n th from the end: awk -v n=2 '{saved[NR % n] = $0} END {if (NR >= n) print saved[(NR + 1) % n]}' < inputgawk -v n=2 '{saved[FNR % n] = $0} ENDFILE {if (FNR >= n) print saved[(FNR + 1) % n]}' file1 file2 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/445340",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/276761/"
]
} |
445,386 | i play vlc but surprised by this error :Audio output failed:The audio device "default" could not be used:No such file or directory.so i search for it and the run this command : vlc --reset-config --reset-plugins-cache but it produce this error PulseAudio server connection failure: Connection refusedand also there's no sound when i using browserand i don't know what to do hint: my sound was work well but suddenly this happen | It might be because either your pulseaudio driver is broken and/or has permissions set to root. I had solved this problem with the following commands : # clean and reinstall pulseaudiosudo apt-get remove --purge alsa-base pulseaudiosudo apt-get install alsa-base pulseaudiosudo apt-get -f install && sudo apt-get -y autoremove && sudo apt-get autoclean && sudo apt-get clean && sudo sync && echo 3 | sudo tee /proc/sys/vm/drop_caches# fixes user folder permissionssudo chown -R $USER:$USER $HOME/# then rebootsudo reboot Pulseaudio should be started at startup but you can manually start it with : pulseaudio --start If the problem persists, feel free to paste your system log here tail -100f /var/log/syslog | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/445386",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291768/"
]
} |
445,390 | For a while now I've found that when I try to view an HTML file in my browser (from Mutt, v to see all attachments, select the text/html attachment and enter ) my browser opens a new tab at file:///tmp/mutt.html but consistently says File not found Firefox can’t find the file at /tmp/mutt.html. Check the file name for capitalization or other typing errors. Check to see if the file was moved, renamed or deleted. I'm not quite sure how to troubleshoot this. It worked fine until it didn't. And then just to keep me on my toes every 10th or 12th time it randomly does show the HTML mail. For instance, just now, I ... tried to view an HTML email, decided I want to finally troubleshoot this, started composing the first 2/3 of this question, opened a second terminal tab to look at /var/log/ to see if there's an obvious log file to look at ran tail -f /var/log/syslog with tail follow running, went back to try viewing the HTML email again, just to see if anything is written to syslog (though I don't think it would be) and a) nothing was written to syslog, b) the same exact email actually did open just fine in a browser tab. So ... where should I be looking for some indication of why mutt html messages sometimes open just fine in a browser and sometimes /tmp/mutt.html can't be found? | I had some trouble with that too. The mailcap entry for text/html is the point to look at. With chromium it was indeed the needsterminal flag. For firefox the copiousoutput flag did the trick. #text/html; chromium %s; needsterminaltext/html; firefox %s;copiousoutput If you don't already have a custom mailcap, you can create a file at ~/.mailcap for example and add the firefox line to it. Don't forget to specify the path in your .muttrc set mailcap_path = ~/.mailcap | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/445390",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141494/"
]
} |
445,395 | I want to create a final infrastructure to boot a PXE custom kernel image, but through the process I'm stuck creating a middle "live" CD ISO using a previously compiled custom kernel image with the live-build tool. I don't know how/where to specifiy the lb config/live build tool to use my own kernel deb package instead the default amd64-kernel flavour. I think that I have to use the --linux-packages parameter, but I don't really understand how. I can't find any kind of info or example.I have read all the man pages and so on, but I'm stuck. My current auto/lb config: *lb config no auto \ --architectures amd64 \ --distribution stretch \ --system live \ --chroot-filesystem squashfs \ --apt-recommends false \ --apt-indices none \ --memtest none \ --debian-installer false \ --interactive shell \ --bootloaders syslinux \ --bootappend-live "boot=live components hostname=test username=test sudo" \ "${@}"* How can I create a live image with custom compiled kernel? | I had some trouble with that too. The mailcap entry for text/html is the point to look at. With chromium it was indeed the needsterminal flag. For firefox the copiousoutput flag did the trick. #text/html; chromium %s; needsterminaltext/html; firefox %s;copiousoutput If you don't already have a custom mailcap, you can create a file at ~/.mailcap for example and add the firefox line to it. Don't forget to specify the path in your .muttrc set mailcap_path = ~/.mailcap | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/445395",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/292129/"
]
} |
445,430 | I have a program that expects arguments in the following syntax: prog [-f filename | -g filename1 filename2 ] ... Each filename must be prefixed with the -f flag. For example, the following are valid invocations of prog : prog -f a.txt -g b.txt c.txt -f d.txtprog -g a.txt b.txt -g c.txt d.txtprog -f a.txt -f b.txt -f c.txt …but the following are not: prog -f a.txt b.txtprog -f a.txt -g b.txtprog a.txt In my case, I only care about the -f option. I have a lot of files in a directory, all of which end in .txt . They look like this: important-files/├── a.txt├── b.txt├── c.txt├── d.txt└── filename with spaces.txt I would like to avoid needing to list out every file one by one. Normally, I would use a straightforward glob for this: $ prog important-files/*.txt But this doesn’t work, since it produces the following invalid invocation: $ prog important-files/a.txt important-files/b.txt important-files/c.txt important-files/d.txt 'important-files/filename with spaces.txt' …when I really want this invocation: $ prog -f important-files/a.txt -f important-files/b.txt -f important-files/c.txt -f important-files/d.txt -f 'important-files/filename with spaces.txt' …since each filename must be prefixed with -f in order for prog to understand they shouldn’t be interpreted like -g . What is the shortest way to use a glob and prefix each of the files it expands to with a flag? | Using printf and an xargs supporting nul-delimited input: printf -- '-f\0%s\0' important-files/*.txt | xargs -0 prog printf loops the format string over the arguments, so for each filename in the expansion of the glob, it will print -f and the filename separated by the null character. xargs then reads this and converts it into arguments for prog . The -- needed since some implementations of printf have trouble with a leading - in the format string. Alternately, the - can be replaced with \055 , which is standard. Same principle as Rakesh's answer , but using the wildcard expansion directly. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/445430",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63459/"
]
} |
445,450 | I want to create a symlink to a device.When i tried the command ln -s /dev/sr0 /dev/scd0 it looked everything well.But when i restarted the server,i found /dev/scd0 is disappered.How can i create a permanent link? | Modern linux distros use udev device manager, so you need to create a udev rule to achieve this. As a root user create a new file named 99_sr0.rules in /etc/udev/rules.d/ with the following contents KERNEL=="sr0", SYMLINK+="scd0" Reboot your PC or run sudo udevadm control --reload-rules; sudo udevadm trigger to re-run your udev rules and you will see your symlink > ls -l /dev/sr0 /dev/scd0lrwxrwxrwx 1 root root 3 May 22 18:54 /dev/scd0 -> sr0brw-rw----+ 1 root cdrom 11, 0 May 22 18:54 /dev/sr0> | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/445450",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/262683/"
]
} |
445,460 | # touch $$# gzip $$# gzip --test $$.gz# echo $?0# OUT=$(gzip --test $$.gz)# echo $OUT# if [ -z $OUT ] ; then echo $$ ; fi26521# if [ -n $OUT ] ; then echo $$ ; fi26521# from bash(1) -z string True if the length of string is zero. string -n string True if the length of string is non-zero. I'm confused, how is it zero and non-zero at the same time? How does one do check against if key has a value (using bash )? | [ .. ] follows the same rules as all other commands, namely Word Splitting applies. If OUT is empty (or unset), $OUT will expand to nothing, not even an empty argument. So, [ -n $OUT ] expands to [ , -n and ] , and [ tests if -n is not an empty string. It is, so the test returns true. You need to quote $OUT , as almost everywhere else: if [ -n "$OUT" ]; then ... See: When is double-quoting necessary? and Tests and Conditionals on BashGuide. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/445460",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1187/"
]
} |
445,469 | How can I quote a string with single quotes? Eg, I can do: $ printf "%q\n" 'two words'two\ words$ Is there a way to get a single- (or double-) quoted string as output, ie: $ MAGIC 'two words''two words'$ I find the single-quoted version much easier to read. I'd like an answer which works for {ba,z}sh. POSIX shell would be a bonus. | Assuming that: $ value=$'This isn\'t a \n\x1b "correct" test'$ printf '%s\n' "$value"This isn't a"correct" test posix quote () { printf %s\\n "$1" | sed "s/'/'\\\\''/g;1s/^/'/;\$s/\$/'/" ; } Use: $ quote "${value}"'This isn'\''t a"correct" test' From Rich's sh posix tricks This function simply replaces every instance of « ' » (single quote) within the string with « '\'' » (single quote, backslash, single quote, single quote), then puts single quotes at the beginning and end of the string. Since the only character whose meaning is special within single quotes is the single quote character itself, this is totally safe. Trailing newlines are handled correctly, and the single quote at the end doubles as a safety character to prevent command substitution from clobbering the trailing newlines, should one want to do something like: quoted=$(quote "$var") Warning: the ESC (\033 or \x1b or decimal 27) characters above gets (technically) quoted, but is invisible. When sent to a terminal, like other control characters, could even do harm. Only when they are visually presented as $'\033', $'\C-[' or $'\E', they are clearly visible and unambiguous. bash printf '%s\n' "${value@Q}" $'This isn\'t a \n\E "correct" test' zsh printf '%s\n' ${(q)value} This\ isn\'t\ a\ $'\n'$'\033'\ \"correct\"\ test zsh printf '%s\n' ${(qq)value} 'This isn'\''t a"correct" test' zsh printf '%s\n' ${(qqq)value} "This isn't a\"correct\" test" zsh printf '%s\n' ${(qqqq)value} $'This isn\'t a \n\033 "correct" test' zsh printf '%s\n' ${(q-)value} 'This isn'\''t a"correct" test' zsh printf '%s\n' ${(q+)value} $'This isn\'t a \n\C-[ "correct" test' Be careful with some zsh quoted strings: the ESC (\033 or \x1b or decimal 27) characters above are all (technically) quoted, but invisible. When sent to a terminal, like other control characters, could even do harm. Only when they are visually presented as $'\033', $'\C-[' or $'\E', they are clearly visible and unambiguous. From Bash's manual : ${parameter@operator} Q The expansion is a string that is the value of parameter quoted in a format that can be reused as input. From the zshexpn man page : q Quote characters that are special to the shell in the resulting words with backslashes; unprintable or invalid characters are quoted using the $'\NNN' form, with separate quotes for each octet. If this flag is given twice, the resulting words are quoted in single quotes and if it is given three times, the words are quoted in double quotes; in these forms no special handling of unprintable or invalid characters is attempted. If the flag is given four times, the words are quoted in single quotes preceded by a $ . Note that in all three of these forms quoting is done unconditionally, even if this does not change the way the resulting string would be interpreted by the shell. If a q- is given (only a single q may appear), a minimal form of single quoting is used that only quotes the string if needed to protect special characters. Typically this form gives the most readable output. If a q+ is given, an extended form of minimal quoting is used that causes unprintable characters to be rendered using $'...' . This quoting is similar to that used by the output of values by the typeset family of commands. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/445469",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
445,521 | I have a pattern variable with below value: \"something//\\anotherthing' and a file with below contents: \"something//\\anotherthing'\"something//\\anotherthing\"something/\anotherthing'\"something\anotherthing'\\"something\/\/\\\\anotherthing' When I compare a line read from the file against the pattern in the environment with == operator, I get the expected output: patt="$pattern" awk '{print $0, ENVIRON["patt"], ($0 == ENVIRON["patt"]?"YES":"NO") }' OFS="\t" file\"something//\\anotherthing' \"something//\\anotherthing' YES\"something//\\anotherthing \"something//\\anotherthing' NO\"something/\anotherthing' \"something//\\anotherthing' NO\"something\anotherthing' \"something//\\anotherthing' NO\\"something\/\/\\\\anotherthing' \"something//\\anotherthing' NO But when I do the same with the ~ operator, the tests never match.(I expected YES on the first line, as above): patt="$pattern" awk '{print $0, ENVIRON["patt"], ($0 ~ ENVIRON["patt"]?"YES":"NO") }' OFS="\t" file\"something//\\anotherthing' \"something//\\anotherthing' NO\"something//\\anotherthing \"something//\\anotherthing' NO\"something/\anotherthing' \"something//\\anotherthing' NO\"something\anotherthing' \"something//\\anotherthing' NO\\"something\/\/\\\\anotherthing' \"something//\\anotherthing' NO To fix the issue with ~ comparison I need to double escape the escapes: patt="${pattern//\\/\\\\}" awk '{print $0, ENVIRON["patt"], ($0 ~ ENVIRON["patt"]?"YES":"NO") }' OFS="\t" file\"something//\\anotherthing' \\"something//\\\\anotherthing' YES\"something//\\anotherthing \\"something//\\\\anotherthing' NO\"something/\anotherthing' \\"something//\\\\anotherthing' NO\"something\anotherthing' \\"something//\\\\anotherthing' NO\\"something\/\/\\\\anotherthing' \\"something//\\\\anotherthing' NO Note the double escapes in result of printing ENVIRON["patt"] in second column. Question: Where does escape sequence in awk happening when using tilde ~ comparison operator? on $0 (or $1 , $2 , ...) or in ENVIRON["variable"] ? | The ~ operator does pattern matching, treating the right hand operand as an (extended) regular expression, and the left hand one as a string. POSIX says: A regular expression can be matched against a specific field or string by using one of the two regular expression matching operators, '~' and "!~" . These operators shall interpret their right-hand operand as a regular expression and their left-hand operand as a string. So ENVIRON["patt"] is treated as a regular expression, and needs to have all characters that are special in EREs to be escaped, if you don't want them to be have their regular ERE meanings. Note that it's not about using $0 or ENVIRON["name"] , but the left and right sides of the tilde. This would take the input lines (in $0 ) as the regular expression to match against: str=foobar awk 'ENVIRON["str"] ~ $0 { printf "pattern /%s/ matches string \"%s\"\n", $0, ENVIRON["str"] }' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/445521",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72456/"
]
} |
445,528 | I have found some explanations about what "address binding" is. They say that "address binding is an operation of mapping virtual or logical addresses to physical addresses." Is this definition correct? I cannot make sure whether it is correct or not because a university presentation says that converting virtual addresses to physical addresses is performed in execution time. However, address binding says that binding operation can be implemented in compile time, load time or execution time. This shows that there is a contradiction. | The explanation on Quora seems to me to be rather confusing, and mixes up a number of concepts. The term “address binding”, in the context of memory addresses (as opposed to network addresses for example), comes from Leon Presser and John R. White’s 1972 paper on linkers and loaders (see also the ACM entry ), where it is defined as follows: The translation or mapping of a logical into a physical address is called address binding . A quick read could give the impression that this is talking about logical and physical addresses from a memory management perspective, but that’s not the case; in the paper, physical addresses are addresses of “information” in memory, and logical addresses are the symbols used to refer to that information. Thus address binding is what is commonly referred to nowadays as symbol (or pointer) relocation, and as you say, this can happen at compile time (when generating a static binary for example), at load time (when the dynamic linker resolves symbols in a shared library), or at execution time (when the running program resolves symbols manually, e.g. using dlopen ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/445528",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/278582/"
]
} |
445,673 | I am having the following error after running apt update and apt upgrade on my server. W: Possible missing firmware /lib/firmware/e100/d102e_ucode.bin for module e100W: Possible missing firmware /lib/firmware/e100/d101s_ucode.bin for module e100W: Possible missing firmware /lib/firmware/e100/d101m_ucode.bin for module e100W: Possible missing firmware /lib/firmware/rtl_nic/rtl8107e-2.fw for module r816 9W: Possible missing firmware /lib/firmware/rtl_nic/rtl8107e-1.fw for module r816 9W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168h-2.fw for module r816 9W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168h-1.fw for module r816 9W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168g-3.fw for module r816 9W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168g-2.fw for module r816 9W: Possible missing firmware /lib/firmware/rtl_nic/rtl8106e-2.fw for module r816 9W: Possible missing firmware /lib/firmware/rtl_nic/rtl8106e-1.fw for module r816 9W: Possible missing firmware /lib/firmware/rtl_nic/rtl8411-2.fw for module r8169W: Possible missing firmware /lib/firmware/rtl_nic/rtl8411-1.fw for module r8169W: Possible missing firmware /lib/firmware/rtl_nic/rtl8402-1.fw for module r8169W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168f-2.fw for module r816 9W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168f-1.fw for module r816 9W: Possible missing firmware /lib/firmware/rtl_nic/rtl8105e-1.fw for module r816 9W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168e-3.fw for module r816 9W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168e-2.fw for module r816 9W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168e-1.fw for module r816 9W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168d-2.fw for module r816 9W: Possible missing firmware /lib/firmware/rtl_nic/rtl8168d-1.fw for module r816 9W: Possible missing firmware /lib/firmware/phanfw.bin for module netxen_niccp: cannot stat '/etc/udev/rules.d/70-persistent-net.rules': No such file or dir ectorycp: cannot stat '/etc/udev/rules.d/70-persistent-net.rules': No such file or dir ectory How can I solve it? | First of all, if your system is working fine, in particular all your wired and wireless network connectivity, then you don’t need to do anything — those are only warnings. Some modules will work fine without firmware in most cases (the e100 module), others will typically require firmware; the specifics depend on exactly what hardware you have. If you do have networking equipment which doesn’t work properly, then you should install the appropriate firmware. In your case, the packages you need are firmware-misc-nonfree (for the e100 firmware), firmware-netxen (for the netxen_nic firmware), and firmware-realtek (for the r8169 firmware). To install these, you’ll have to enable the non-free repositories ; to do so, edit /etc/apt/sources.list , find the lines which looks like deb ... stretch main (with a URL instead of ... ), and add contrib non-free : deb ... stretch main contrib non-free You can do this automatically by running sed -i.bak 's/stretch[^ ]* main$/& contrib non-free/g' /etc/apt/sources.list as root; this will make a backup of your original file as /etc/apt/sources.list.bak so you can revert if something goes wrong. Then update your indexes and install the missing packages: apt updateapt install firmware-misc-nonfree firmware-netxen firmware-realtek and update your initramfs: update-initramfs -u | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/445673",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/292336/"
]
} |
445,685 | Suppose I have these lines to feed to awk : dolly-cabinet-93-redmurfy-swan-96-whitechizzle-rock-115-green How do I select the line cut by - whose 3rd column is the largest (numerically) and output the line (preferably using awk , but not limited to)? Somehow the solution I have thinks 96 is the largest. | First of all, if your system is working fine, in particular all your wired and wireless network connectivity, then you don’t need to do anything — those are only warnings. Some modules will work fine without firmware in most cases (the e100 module), others will typically require firmware; the specifics depend on exactly what hardware you have. If you do have networking equipment which doesn’t work properly, then you should install the appropriate firmware. In your case, the packages you need are firmware-misc-nonfree (for the e100 firmware), firmware-netxen (for the netxen_nic firmware), and firmware-realtek (for the r8169 firmware). To install these, you’ll have to enable the non-free repositories ; to do so, edit /etc/apt/sources.list , find the lines which looks like deb ... stretch main (with a URL instead of ... ), and add contrib non-free : deb ... stretch main contrib non-free You can do this automatically by running sed -i.bak 's/stretch[^ ]* main$/& contrib non-free/g' /etc/apt/sources.list as root; this will make a backup of your original file as /etc/apt/sources.list.bak so you can revert if something goes wrong. Then update your indexes and install the missing packages: apt updateapt install firmware-misc-nonfree firmware-netxen firmware-realtek and update your initramfs: update-initramfs -u | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/445685",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/219153/"
]
} |
445,782 | systemd-resolved is a daemon that, among other things, acts as a DNS server by listening IP address 127.0.0.53 on the local loopback interface. I would like to let the daemon listen to another interface. My use-case is to expose it to docker containers, so that docker containers share the DNS caching provided by systemd-resolved. I know how to configure the host as a DNS server for docker containers, but at least by default, systemd-resolved rejects these DNS queries because they are not coming from the loopback interface, but from the docker bridge interface. With dnsmasq (a tool similar to systemd-resolved), I did this by adding listen-address=172.17.0.1 to the configuration file . Unfortunately, I couldn't find a systemd-resolved equivalent. Since systemd-resolved is the default at least on Ubuntu 18.04, I would like a solution that works in this configuration. Is there a way to configure which interface systemd-resolved listens on? | Resolved is not intended nor designed for your use-case, but to provide services in the local loopback, thus the listen address is hardcoded. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/445782",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116507/"
]
} |
445,829 | I have researched the heck out of this question and found two pages about the issue but not clarifying it. In the debian-installer during the optional software selection phase you have these options: Debian desktop environment (already ticked by default) ... GNOME (not ticked) ... xfce (not ticked) ... KDE (not ticked) ... Cinnamon (not ticked) ... MATE (not ticked) ... LXDE (not ticked) What does Debian desktop environment actually install? Does it install a GUI (Gnome, my understanding, is the default) or does it just install a handful of programs useful for desktop users but which do not include a GUI? Do you have to tick off Gnome to get the GUI or not? And if not, what is the purpose of the option to tick off Gnome in addition to Debian Desktop Environment? The page concerning Desktop Environments in the Debian Wiki does not clarify the issue. This thread on the Debian User Forums concerns this very issue but has a raft of contradictory answers. | If no specific desktop environment is selected, but the “Debian desktop environment” is, the default which ends up installed is determined by tasksel : on i386 and amd64 , it’s GNOME, on other architectures, it’s XFCE. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/445829",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/288980/"
]
} |
445,832 | In bash, I use arguments that look like paste <(cat file1 | sort) <(cat file2 | sort) or comm <(cat file1 | sort) <(cat file2 | sort) When I check man comm or man paste , the documentation says the args are indeed FILES. Question: Are intermediate temporary files get created (on TEMP filesystem or elsewhere on slower disk) for <(cat file1 | sort) and <(cat file2 | sort) ? What is the name for this <( ) magic? (to lookup its documentation) Is it specific to bash or does it work across other shells? | This is called process substitution. 3.5.6 Process Substitution Process substitution allows a process’s input or output to be referred to using a filename. The process list is run asynchronously, and its input or output appears as a filename. This filename is passed as an argument to the current command as the result of the expansion. If the >(list) form is used, writing to the file will provide input for list. If the <(list) form is used, the file passed as an argument should be read to obtain the output of list. Note that no space may appear between the < or > and the left parenthesis, otherwise the construct would be interpreted as a redirection. Process substitution is supported on systems that support named pipes (FIFOs) or the /dev/fd method of naming open files. It is not just a bash thing as it originally appeared in ksh but it's not in the posix standard. Under the hood, process substitution has two implementations. On systems which support /dev/fd (most Unix-like systems) it works by calling the pipe( ) system call, which returns a file descriptor $fd for a new anonymous pipe, then creating the string /dev/fd/$fd , and substitutes that on the command line. On systems without /dev/fd support, it calls mkfifo with a new temporary filename to create a named pipe, and substitutes this filename on the command line. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/445832",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79401/"
]
} |
445,854 | I have a bash script that takes a handful of files and sets them up for FTP to a site that processes the one of the setup files. We are looking to find away to have the other file go up on the first Monday of the month but I am not sure how to put that in the bash script. I have seen stuff around using crontab but the first part and the last part of the script would be exactly the same and could cause issues if we had 2 different scripts. only putting in a part of the script that I'm looking at making the change to. #!/bin/bash...e_file="/tmp/tmpemail.$(date +%s).txt"file1='/usr/local/filename1'file2='/usr/local/filename2'relayserver='relay-server.example.com'#ftp infoFTP_USER='ftpuser' #not the actual FTP User NameFTP_DEST_PATH='/'...echo -e "Starting Tunnel and SFTP Process"# make ssh tunnel for access to SFTP Sitessh -L 9022:ftp.example.com:22 serviceaccount@$relay_server -Nf >/dev/null 2&>1proc=`ps -ef | grep "ssh -L 9022\:ftp.example.com\:22" | awk '{print $2}'`#checks to see if the tunnel opened correctly then proceeds to push to FTP Siteif [ "${proc}" != "" ]; then #looking for first monday, was thinking of first day but the crontab only runs on monday to friday ifStart=`date '+%d'` if [ $ifStart == 01 ]; then echo -e "File 1 & File2 sent to FTP Site" >> $e_file $SFTP_CMD -oPort=9022 -b /dev/stdin $FTP_USER@localhost << END cd $FTP_DEST_PATH put $file1 put $file2 byeEND else echo -e "file 2 sent to FTP" >> $e_file $SFTP_CMD -oPort=9022 -b /dev/stdin $FTP_USER@localhost << END cd $FTP_DEST_PATH put $file2 byeEND fi echo "killing ssh tunnel - $proc" kill $procelse... I am looking to be pointed in the right direction of getting the if statement for the first Monday of the month where I have to comment located. Any ideas to get around this? Added Note:This Script has to run every weekday of the month to upload the files to be processed. | I do not have time to read all the script but here is the idea:with date command get the name of the day in week: we=$(LC_TIME=C date +%A) ( LC_TIME=C is used to get English name of the day of week) and then get day in the month dm=$(date +%d) and then check if the day is less than 8 and day of week is Monday: if [ "$we" = "Monday" ] && [ "$dm" -lt 8 ]then .....fi | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/445854",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60089/"
]
} |
445,890 | If I run the command ip link | awk '{print $2}' in Ubuntu 18.04, I get this output: lo:00:00:00:00:00:00wlp1s0:2c:6e:85:bf:01:00enp2s0:14:18:77:a3:01:02 I want it formatted like this (without lo ) wlp1s0: 2c:6e:85:bf:01:00enp2s0: 14:18:77:a3:01:02 How do I do this? | You can get the MAC address from /sys/class/net/<dev>/address : $ cat /sys/class/net/enp0s3/address08:00:27:15:dc:fd So, something like: find /sys/class/net -mindepth 1 -maxdepth 1 ! -name lo -printf "%P: " -execdir cat {}/address \; Gives me: enp0s3: 08:00:27:15:dc:fddocker0: 02:42:61:cb:85:33 Or, using ip 's one-line mode, which is convenient for scripting: $ ip -o link | awk '$2 != "lo:" {print $2, $(NF-2)}'enp0s3: 08:00:27:15:dc:fddocker0: 02:42:61:cb:85:33 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/445890",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/266428/"
]
} |
445,906 | I need to install pip3 , but cannot do it without sudo privileges which I don't have. I have tried wget https://bootstrap.pypa.io/get-pip.py but that gets me the other version of pip. | You can install pip3 without root previlege as follow: wget https://bootstrap.pypa.io/get-pip.pypython3 get-pip.py --user pip3 will be installed locally under /home/$USER/.local/bin/ . Check the installed version: $HOME/.local/bin/pip3 -V or PATH=$PATH:$HOME/.local/binpip3 --version sample output: pip 10.0.1 from /home/$USER/.local/lib/python3.6/site-packages/pip (python 3.6) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/445906",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/292495/"
]
} |
446,049 | Never realized that you could do this until just now: : >> file It seems to be functionally similar to: touch file Is there a reason why most resources seem to prefer touch over this shell builtin? | You don't even need to use : ; you can just > file (at least in bash ; other shells may behave differently). In practical terms, there is no real difference here (though the minuscule overhead of calling out to /bin/touch is a thing). touch , however, can also be used to modify the timestamps on a file that already exists without changing or erasing the contents; further, > file will blow out any file that already exists. This can be worked around by instead using >> file . One other difference with touch is that you can have it create (or update the timestamp on) multiple files at once (e.g. touch foo bar baz quux ) with a more succinct syntax than with redirection, where each file needs its own redirection (e.g. >foo >bar >baz >quux ). Using touch : $ touch foo; stat -x foo; sleep 2; touch foo; stat -x foo File: "foo" Size: 0 FileType: Regular File Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)Device: 1,5 Inode: 8597208698 Links: 1Access: Fri May 25 10:55:19 2018Modify: Fri May 25 10:55:19 2018Change: Fri May 25 10:55:19 2018 File: "foo" Size: 0 FileType: Regular File Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)Device: 1,5 Inode: 8597208698 Links: 1Access: Fri May 25 10:55:21 2018Modify: Fri May 25 10:55:21 2018Change: Fri May 25 10:55:21 2018 Using redirection: $ > foo; stat -x foo; sleep 2; >> foo; stat -x foo File: "foo" Size: 0 FileType: Regular File Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)Device: 1,5 Inode: 8597208698 Links: 1Access: Fri May 25 10:55:21 2018Modify: Fri May 25 10:56:25 2018Change: Fri May 25 10:56:25 2018 File: "foo" Size: 0 FileType: Regular File Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)Device: 1,5 Inode: 8597208698 Links: 1Access: Fri May 25 10:55:21 2018Modify: Fri May 25 10:56:25 2018Change: Fri May 25 10:56:25 2018 | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/446049",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/273175/"
]
} |
446,060 | I have a dual boot, and I never use Windows, but because I'm nice I wanted at the beginning to cut my hard drive in two equal parts, Windows on the left part, Linux on the right part. But then Linux ran out of space, so I shrinked Windows, and because I used a LVM partition, I created a new partition and share them on the logical partition. But now linux still runs out of space, and I'm thinking that it is strange to create tons of small LVM partitions, so I'm thiking to move the second LVM partition I created and extend it so that I just keep 2 partitions. Is it possible? Thanks. | You don't even need to use : ; you can just > file (at least in bash ; other shells may behave differently). In practical terms, there is no real difference here (though the minuscule overhead of calling out to /bin/touch is a thing). touch , however, can also be used to modify the timestamps on a file that already exists without changing or erasing the contents; further, > file will blow out any file that already exists. This can be worked around by instead using >> file . One other difference with touch is that you can have it create (or update the timestamp on) multiple files at once (e.g. touch foo bar baz quux ) with a more succinct syntax than with redirection, where each file needs its own redirection (e.g. >foo >bar >baz >quux ). Using touch : $ touch foo; stat -x foo; sleep 2; touch foo; stat -x foo File: "foo" Size: 0 FileType: Regular File Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)Device: 1,5 Inode: 8597208698 Links: 1Access: Fri May 25 10:55:19 2018Modify: Fri May 25 10:55:19 2018Change: Fri May 25 10:55:19 2018 File: "foo" Size: 0 FileType: Regular File Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)Device: 1,5 Inode: 8597208698 Links: 1Access: Fri May 25 10:55:21 2018Modify: Fri May 25 10:55:21 2018Change: Fri May 25 10:55:21 2018 Using redirection: $ > foo; stat -x foo; sleep 2; >> foo; stat -x foo File: "foo" Size: 0 FileType: Regular File Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)Device: 1,5 Inode: 8597208698 Links: 1Access: Fri May 25 10:55:21 2018Modify: Fri May 25 10:56:25 2018Change: Fri May 25 10:56:25 2018 File: "foo" Size: 0 FileType: Regular File Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)Device: 1,5 Inode: 8597208698 Links: 1Access: Fri May 25 10:55:21 2018Modify: Fri May 25 10:56:25 2018Change: Fri May 25 10:56:25 2018 | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/446060",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169695/"
]
} |
446,065 | I'm creating a L3-Switch that modifies packets by redirecting some of them to local app. My goal is to send them further to the same MAC as before. Short "why": zero-conf device to connect with to any ethernet network, portable, does proxying. Switch is organized as ethernet bridge (br-lan) between eth0 and eth1. It is assumed by default that gateway for br-lan clients lies through eth0. Question: Let's say that packet comes from eth1 on the way to eth0 and gets redirected to local app. After that app has output and destination IP of the original packet has changed. L3 tries to route packet to new destination, but it doesn't have any default gateways (And it shouldn't, because it's switch!). Assuming I know the MAC address of default gateway, how to I force packet to go out through eth0 to specific MAC address? Technically I'm not trying to do anything "illegal" in terms of network. I want to kick the packet out of eth0 and all I'm "missing" is destination MAC, but I can retrieve it from the original packet. I know for sure that destination IP isn't local, therefore it would be sent to default gateway anyway using it's MAC address. So it's a question of implementation. I was trying to modify destination MAC at bridge -t NAT OUTPUT by doing this: ebtables -t nat -A OUTPUT -p ipv4 --ip-proto tcp --ip-src 192.168.1.251 -j dnat --to-dst 04:61:e7:d2:e2:09 But that didn't help. (Assuming 04:61:e7:d2:e2:09 is default gateway MAC and 192.168.1.251 is one of the clients just to test this theory) Actual implementation is on OpenWRT, so available packages might be limited. How did I get to that problem: More information on the local app: it's ss-redir from here, binds to 0.0.0.0:port => https://github.com/shadowsocks/shadowsocks-libev Added use cases to the [Device]: Expectation: We have 3 PC-clients connected to a regular switch. After bringing [Device] and connecting it to regular switch and reconnecting PC-clients to [Device], PC-clients gain [Result] without configuring the device. [Result]: 1)From the "outside"(other network nodes except 3 ours and everything else) it should look like every user keeps his IP/MAC pair so admin would be happy. DHCP is static-configured in the office, so IP/MAC pair won't probably change, but admin can change any of that. And device should handle any changes without reconfiguring manually. No new IP/MAC should appear in the network(being not admin-registered). 2)From the "outside" every PC-client should be accessible for all protocols in the network, whatever they are (RDP, NetBIOS for naming resolution, file sharing, or whatever local admin decides to do). 3)They should have internet access via default gateway as always, except proxying tcp via SS for particular destination ipset (which is always through the very same gateway) Under assumption that these use cases require device not having any IP/MAC knowledge of the existing network from the start(because office users won't config anything by themselves), I'm trying to make "proxying bridge" that works like a switch, intercepting packets and sends them out to eth0(WAN) after local app redirection. The problem is the after redirection packet needs to be sent on its way. I'm investigating "auto-reconfig on the fly idea" with a MAC-snat/dnat, but stuck with the problem that packet won't go to eth0 after being generated locally even if I can specify Default Gateway MAC-addr in ebtables as destination. | You don't even need to use : ; you can just > file (at least in bash ; other shells may behave differently). In practical terms, there is no real difference here (though the minuscule overhead of calling out to /bin/touch is a thing). touch , however, can also be used to modify the timestamps on a file that already exists without changing or erasing the contents; further, > file will blow out any file that already exists. This can be worked around by instead using >> file . One other difference with touch is that you can have it create (or update the timestamp on) multiple files at once (e.g. touch foo bar baz quux ) with a more succinct syntax than with redirection, where each file needs its own redirection (e.g. >foo >bar >baz >quux ). Using touch : $ touch foo; stat -x foo; sleep 2; touch foo; stat -x foo File: "foo" Size: 0 FileType: Regular File Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)Device: 1,5 Inode: 8597208698 Links: 1Access: Fri May 25 10:55:19 2018Modify: Fri May 25 10:55:19 2018Change: Fri May 25 10:55:19 2018 File: "foo" Size: 0 FileType: Regular File Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)Device: 1,5 Inode: 8597208698 Links: 1Access: Fri May 25 10:55:21 2018Modify: Fri May 25 10:55:21 2018Change: Fri May 25 10:55:21 2018 Using redirection: $ > foo; stat -x foo; sleep 2; >> foo; stat -x foo File: "foo" Size: 0 FileType: Regular File Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)Device: 1,5 Inode: 8597208698 Links: 1Access: Fri May 25 10:55:21 2018Modify: Fri May 25 10:56:25 2018Change: Fri May 25 10:56:25 2018 File: "foo" Size: 0 FileType: Regular File Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)Device: 1,5 Inode: 8597208698 Links: 1Access: Fri May 25 10:55:21 2018Modify: Fri May 25 10:56:25 2018Change: Fri May 25 10:56:25 2018 | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/446065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/292622/"
]
} |
446,088 | I have an Arduino Uno attached over USB, using the cdc_acm driver. It is available at /dev/ttyACM0 . The convention for the Arduino's serial interface is for the DTR signal to be used for a reset signal—when using the integrated serial-to-USB adapter, the DTR/RTS/DSR/CTS signal; or, when using an RS-232 cable, pins 4 or 5 (and possibly 6 or 8) are wired to the RESET pin. This reset avenue has the important advantage of being, if not truly out-of-band, at least very near -failsafe (due to being implemented via the always-out-of-band serial controller in conjunction with the not-normally-user-controllable watchdog circuit), and while it can be physically disabled (via wiring either a capacitor or a resistor, depending on the model, to the RESET pin), to do so completely ruins this important killswitch and all associated utility. Unfortunately, it seems that , currently, Linux absolutely always sends this signal when any program attaches to an ACM device for any reason, and ( unlike Windows ,) provides no even-vaguely-known-reliable way to prevent this. (Currently both -hupcl , "send a hangup signal when the last process closes the tty" and -clocal , "disable modem control signals" do not prevent this signal from being sent every time the device is opened .) tl;dr: What do I need to do to access /dev/ttyACM0 without sending it a DTR/RTS/DSR/CTS signal (short of blocking the signal on the hardware level)? | When a userland process is opening a serial device like /dev/ttyS0 or /dev/ttyACM0 , linux will raise the DTR/RTS lines by default, and will drop them when closing it. It does that by calling a dtr_rts callback defined by the driver. Unfortunately, there isn't yet any sysctl or similar which allows to disable this annoying behavior (of very little use nowadays), so the only thing that works is to remove that callback from the driver's tty_port_operations structure, and recompile the driver module. You can do that for the cdc-acm driver by commenting out this line : --- drivers/usb/class/cdc-acm.c~+++ drivers/usb/class/cdc-acm.c@@ -1063,7 +1063,7 @@ } static const struct tty_port_operations acm_port_ops = {- .dtr_rts = acm_port_dtr_rts,+ /* .dtr_rts = acm_port_dtr_rts, */ .shutdown = acm_port_shutdown, .activate = acm_port_activate, .destruct = acm_port_destruct, This will not prevent you from using the DTR/RTS lines via serial ioctls like TIOCMSET , TIOCMBIC , TIOCMBIS , which will be handled by the acm_tty_tiocmset() , etc callbacks from the acm_ops structure, as usual. Similar hacks could be used with other drivers; I personally have used this with the PL2303 usb -> serial driver. [The diff is informative; it will not apply directly because this site mangles tabs and whitespaces] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/446088",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26420/"
]
} |
446,099 | From https://askubuntu.com/a/831521/1471 ps -Flww -p THE_PID I was wondering what the purpose for double w is? Is it the same as just one w ? Thanks. | The man ps is saying: w Wide output. Use this option twice for unlimited width.-w Wide output. Use this option twice for unlimited width. either in this way ww or -w -w or any combination of these two. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/446099",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
446,103 | I have an Ubuntu 18.04 server which accepts SSH and HTTP requests from another PC on the LAN but isn't accessible from the other side of my Comcast Gateway. I have a Windows 10 PC physically connected via ethernet to the same Comcast gateway as the server, and from the Windows 10 PC I'm able to use Putty, Filezilla and Chrome (using IP address or DNS domain name in a web browser) to access the server and the website on the server. In that case through LAN, the browser on the Windows 10 PC will even redirect to HTTPS! BUT IT'S ALL A TRICK!!! Using my cell phone (using the Verizon network and not connected to LAN WiFi) I'm unable to access the server with JuiceSSH, or by using the IP address or the domain name in Chrome. DHCP or Static IP settings have no impact on access outside the LAN. Using DHCP on the server I'm still only able to access the IP address from another machine in the LAN. I've disabled UFW.I've disabled the Comcast Gateway firewall. What could be wrong? How can I diagnose the problem? | The man ps is saying: w Wide output. Use this option twice for unlimited width.-w Wide output. Use this option twice for unlimited width. either in this way ww or -w -w or any combination of these two. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/446103",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/292647/"
]
} |
446,131 | #include <stdio.h>#include <unistd.h>#include <sys/wait.h>int main( int argc, char *argv[] ){ FILE *fptr; pid_t pid; fptr = fopen("Shared File.txt", "a"); pid = fork(); if( pid > 0 ){ // parent process int counter = 0; while( counter < 10 ){ fprintf(fptr, "a"); ++counter; } wait(NULL); } else{ int counter = 0; while( counter < 5 ){ fprintf(fptr, "b"); ++counter; } } return 0;} When I execute this code, the file produced by the code contains this message: bbbbbaaaaaaaaaa Whenever I execute this code, I get same message. Why does not the processes write to file in shuffling order ? Why does the operating system try to finish the child process at first ? My expectation about the message is like this: baabbaaabaaabaa There is no continuous transition between the processes. | The scheduling between parent and child has been discussed already (at least) in When child processes are executed and How does fork system call really works . But in this case, there's also the question of buffering within stdio .You're using fprintf() to write to a regular file. By default, stdio buffers output to regular files until enough data is written, to save on system call overhead. On x86 Linux, it usually seems to write in 4096 byte blocks, but you can't count on that, unless you set the buffering manually (see setbuf() and friends). You can see this with a command like strace that shows the system calls a program makes. So, while you can't make any predictions about which process runs first, in this case you can predict that the a s are written consecutively, and the b s as well. You can only get bbbbbaaaaaaaaaa or aaaaaaaaaabbbbb , and which one you get is pretty much up to chance. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/446131",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/278582/"
]
} |
446,152 | I have a bash script that echoes paragraphs of text. I want them to be indented. Example: echo "Something"echo -e "\tfoo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo" Will print something like this: Something foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo But what I want is this: Something foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo I prefer to avoid having to line-break many such paragraphs by hand. I've seen all sorts of techniques using sed and so on, but I need to rely on shell builtins only (should be as simple as possible). UPDATE Put it this way- how do the man pages (for any random command) format all those paragraphs so nicely? Surely they were not line-breaked by hand? And I assume they didn't use anything other than builtins when documenting the basic commands? | The scheduling between parent and child has been discussed already (at least) in When child processes are executed and How does fork system call really works . But in this case, there's also the question of buffering within stdio .You're using fprintf() to write to a regular file. By default, stdio buffers output to regular files until enough data is written, to save on system call overhead. On x86 Linux, it usually seems to write in 4096 byte blocks, but you can't count on that, unless you set the buffering manually (see setbuf() and friends). You can see this with a command like strace that shows the system calls a program makes. So, while you can't make any predictions about which process runs first, in this case you can predict that the a s are written consecutively, and the b s as well. You can only get bbbbbaaaaaaaaaa or aaaaaaaaaabbbbb , and which one you get is pretty much up to chance. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/446152",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291147/"
]
} |
446,193 | This works from the terminal: ls /dev/sda* I want it in a bash script, using a variable. I tried: device="a"ls "/dev/sd"$device"*" But I get the error: ls: cannot access '/dev/sda*': No such file or directory . | ls /dev/sd$device* # orls "/dev/sd$device"* You must not quote the globbing metacharacters if you want globbing to be performed. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/446193",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291147/"
]
} |
446,204 | I am connected to my Debian 9 with Virtualmin by SSH from my PC. I go for +-2 minutes away and after I return, SSH is disconnected...I tried changing ssh config on server and on client... Nothing helped...Where to search for problem? Can it be some settings of networking or maybe router? | Some over-zealous routers like to drop TCP connections that are idle for too long (i.e. don't transmit any data). It might be because they assume the user only uses things like HTTP, where the connection is often closed after a single query is complete. Assuming OpenSSH, use the ClientAliveInterval and ClientAliveCountMax directives in sshd_config , or equivalently ServerAliveInterval and ServerAliveCountMax in the client side config ( ~/.ssh/config or /etc/ssh/ssh_config ) to enable protocol-level keepalive packets. They're actually meant to detect if the remote host has gone away, but since they cause messages to be sent when the connection is otherwise idle, they also work to prevent the connection from being seen as idle by outside devices. *AliveInterval sets the interval (in seconds) after which the client/server sends a query to the remote, and *AliveCountMax sets the number of unanswered queries after which the the client/server drops the connection as inactive. Something like these values should do: ClientAliveInterval 15ClientAliveCountMax 4 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/446204",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/283347/"
]
} |
446,234 | In some tutorials ( Here and Here ) about netplan dhcp4 appear in the following way: network: version: 2 renderer: networkd ethernets: enp2s0: dhcp4: no or dhcp4: yes But in netplan examples and blog ubuntu sometimes appear in the following way: dhcp4: true or dhcp4: false And in other examples it appears as not/yes What is the correct way to set dhcp in Ubuntu 18.04 ( yes/no or true/false )? Thanks | Netplan configuration syntax is YAML, and the dhcp4 setting takes a boolean value. According to http://yaml.org/type/bool.html the acceptable values are y / n , yes / no , true / false and on / off , written either with all lowercase, with an Initial Capital, or with ALL CAPS. So all of the ways you listed are correct. The canonical ("the most correct" if a distinction must be made) form would be lower-case y / n . However, the definition says: A Boolean represents a true/false value. Booleans are formatted as English words (“true”/“false”, “yes”/“no” or “on”/“off”) for readability and may be abbreviated as a single character “y”/“n” or “Y”/“N”. So you can use any of those forms, whichever you find easiest to read. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/446234",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/266428/"
]
} |
446,236 | Among many other modifications I remapped my Caps Lock to Hyper , an ancient modifier key from the Space-cadet keyboards . However I can not find an example how to use it for key-bindings in Tmux, which has Emacs-like key-binding definitions, for example C-k defines Ctrl-k in both of them, so I tried H-k which works perfectly in Emacs, but as it turned out, not in Tmux. The exact binding definition from my .tmux.conf bind-key -T copy-mode H-k send-keys -X -N 30 scroll-down Results in the following error: /home/attila/.tmux.conf:21: unknown key: H-k I know there is a trick to mimic the Hyper key as a simultaneous press of all other modifier keys, however I use xcape , so this is not an option. | Your terminal does not support the "hyper" modifier, let alone tmux. tmux is a TUI application. It only knows what terminals send to it. Terminals, in the POSIX General Terminal Interface paradigm, only send characters; ordinary characters, control characters, escape sequences, and control sequences. There is no concept of raw keystrokes and separately transmitted modifier key information. tmux, like other TUI applications, has no dealing in any such concepts. Some of the control sequences transmitted by terminals and terminal emulators in response to function keys and extended keys can include parameters specifying an instantaneous modifier state. But the DEC VT convention that they generally follow has only ⇧ Level 2 Shift , ⎇ Alt , and ⎈ Control . It does not have a concept of a "hyper" modifier, DEC terminals having no such key. Nor does it even have the concept in the first place of such special control sequences for alphanumeric keys; only for (some) keys on the calculator, editing, cursor, and function keypads. Further reading https://unix.stackexchange.com/a/444270/5132 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/446236",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150252/"
]
} |
446,237 | POSIX defines a text file as: A file that contains characters organized into zero or more lines. The lines do not contain NUL characters and none can exceed {LINE_MAX} bytes in length, including the <newline> character. Although POSIX.1-2017 does not distinguish between text files and binary files (see the ISO C standard), many utilities only produce predictable or meaningful output when operating on text files. The standard utilities that have such restrictions always specify "text files" in their STDIN or INPUT FILES sections. Source: http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap03.html#tag_03_403 However, there are several things I find unclear: Must a text file be a regular file? In the above excerpt it does not explicitly say the file must be a regular file Can a file be considered a text file if contains one character and one character only (i.e., a single character that isn't terminated with a newline)? I know this question may sound nitpicky, but they use the word "characters" instead of "one or more characters". Others may disagree, but if they mean "one or more characters" I think they should explicitly say it In the above excerpt, it makes reference to "lines". I found four definitions with line in their name: "Empty Line", "Display Line", "Incomplete Line" and "Line". Am I supposed to infer that they mean "Line" because of their omission of "Empty", "Display" and "Incomplete"- or are all four of these definitions inclusive as being considered a line in the excerpt above? All questions that come after this block of text depend on inferring that "characters" means "one or more characters": Can I safely infer that if a file is empty, it is not a text file because it does not contain one or more characters? All questions that come after this block of text depend on inferring that in the above excerpt, a line is defined as a "Line", and that the other three definitions containing "Line" in their name should be excluded: Does the "zero" in "zero or more lines" mean that a file can still be considered a text file if it contains one or more characters that are not terminated with newline? Does "zero or more lines" mean that once a single "Line" (0 or more characters plus a terminating newline) comes into play, that it becomes illegal for the last line to be an "Incomplete Line" (one or more non-newline characters at the end of a file)? Does "none [no line] can exceed {LINE_MAX} bytes in length, including the newline character" mean that there a limitation to the number of characters allowed in any given "Line" in a text file (as an aside, the value of LINE_MAX on Ubuntu 18.04 and FreeBSD 11.1 is "2048")? | Must a text file be a regular file? In the above excerpt it does not explicitly say the file must be a regular file No; the excerpt even specifically notes standard input as a potential text file. Other standard utilities, such as make , specifically use the character special file /dev/null as a text file . Can a file be considered a text file if contains one character and one character only (i.e., a single character that isn't terminated with a newline)? That character must be a <newline>, or this isn't a line , and so the file it's in isn't a text file. A file containing exactly byte 0A is a single-line text file. An empty line is a valid line. In the above excerpt, it makes reference to "lines". I found four definitions with line in their name: "Empty Line", "Display Line", "Incomplete Line" and "Line". Am I supposed to infer that they mean "Line" because of their omission of "Empty", "Display" and "Incomplete" It's not really an inference, it's just what it says. The word "line" has been given a contextually-appropriate definition and so that's what it's talking about. Can I safely infer that if a file is empty, it is not a text file because it does not contain one or more characters? An empty file consists of zero (or more) lines and is thus a text file. Does the "zero" in "zero or more lines" mean that a file can still be considered a text file if it contains one or more characters that are not terminated with newline? No, these characters are not organised into lines. Does "zero or more lines" mean that once a single "Line" (0 or more characters plus a terminating newline) comes into play, that it becomes illegal for the last line to be an "Incomplete Line" (one or more non-newline characters at the end of a file)? It's not illegal , it's just not a text file. A utility requiring a text file to be given to it may behave adversely if given that file instead. Does "none [no line] can exceed {LINE_MAX} bytes in length, including the newline character" mean that there a limitation to the number of characters allowed in any given "Line" in a text file Yes. This definition is just trying to set some bounds on what a text-based utility ( for example, grep ) will definitely accept — nothing more. They are also free to accept things more liberally, and quite often they do in practice. They are permitted to use a fixed-size buffer to process a line, to assume a newline appears before it's full, and so on. You may be reading too much into things. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/446237",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/273175/"
]
} |
446,301 | I am going to connect to a VPN using openconnect on CEntOS 7 terminal. I only have one terminal because I am on a SSH session. I need to connect to the VPN using openconnect. I do so like this: openconnect -u username us.myprovider.net I need to run the VPN in the background and then do other things in the foreground. Currently, I start the VPN, I press Ctrl + Z and then press bg to send it to the background. But, this seems to close the VPN connection. How can I do that? | According to the Openconnect documentation , the option you would want to try would be: -b,--backgroundContinue in background after startup | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/446301",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142579/"
]
} |
446,319 | how to print only the properties lines from json file example of json file { "href" : "http://master02:8080/api/v1/clusters/HDP/configurations?type=kafka-env&tag=version1527250007610", "items" : [ { "href" : "http://master02:8080/api/v1/clusters/HDP/configurations?type=kafka-env&tag=version1527250007610", "tag" : "version1527250007610", "type" : "kafka-env", "version" : 8, "Config" : { "cluster_name" : "HDP", "stack_id" : "HDP-2.6" }, "properties" : { "content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi", "is_supported_kafka_ranger" : "true", "kafka_log_dir" : "/var/log/kafka", "kafka_pid_dir" : "/var/run/kafka", "kafka_user" : "kafka", "kafka_user_nofile_limit" : "128000", "kafka_user_nproc_limit" : "65536" } } ] expected output "content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi", "is_supported_kafka_ranger" : "true", "kafka_log_dir" : "/var/log/kafka", "kafka_pid_dir" : "/var/run/kafka", "kafka_user" : "kafka", "kafka_user_nofile_limit" : "128000", "kafka_user_nproc_limit" : "65536" | Jq is the right tool for processing JSON data: jq '.items[].properties | to_entries[] | "\(.key) : \(.value)"' input.json The output: "content : \n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi""is_supported_kafka_ranger : true""kafka_log_dir : /var/log/kafka""kafka_pid_dir : /var/run/kafka""kafka_user : kafka""kafka_user_nofile_limit : 128000""kafka_user_nproc_limit : 65536" In case if it's really mandatory to obtain each key and value double-quoted - use the following modification: jq -r '.items[].properties | to_entries[] | "\"\(.key)\" : \"\(.value | gsub("\n";"\\n"))\","' input.json The output: "content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e "/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi","is_supported_kafka_ranger" : "true","kafka_log_dir" : "/var/log/kafka","kafka_pid_dir" : "/var/run/kafka","kafka_user" : "kafka","kafka_user_nofile_limit" : "128000","kafka_user_nproc_limit" : "65536", | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/446319",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
446,339 | I am trying to create a function which returns 0 or 1 (i.e. true or false) and takes an argument, then create a variable in another which stores the results of that function. Finally check if that variable is 0 or 1 (true or false) Here is a sample of what I am attempting #!/bin/bash_has_string() { if [ $1 == "string" ]; return 0 else return 1 fi}_my_func() { var=$(_has_string "string") if [ $var == "0" ]; then echo "var contains string" else echo "var does not contain string" fi}_my_func I have tried a few variations of this and can not seem to find a way to get it to work. All of my variations basically just return the $var as nothing. Not a 0. Not null. Literally it is just blank. | You confuse output with the exit code. _my_func() { if _has_string 'string'; then You should also quote your variables; and _has_string can be simplified: _has_string() { [ "$1" = 'string' ]} | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/446339",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149742/"
]
} |
446,388 | I've been wondering this for a long time but haven't figured out how to look it up - is this: x=`command -v r2g` the same as this: x="$(command -v r2g)" or is it the same as this: x=$(command -v r2g) ...if it's the latter, should I do this to fix it? x="`command -v r2g`" | All examples are of variable assignment from command substitution, so they are equivalent. As per Gilles's answer, quoting isn't necessary on right hand of assignment to variable, since word splitting doesn't occur there. So all four are OK. If they were standalone, i.e. not in assignment, then you'd need to quote. The $(...) form compared to backticks has advantage that quotes can be nested and broken into multiple lines, which is why this form is generally preferred nowadays. In other words, you can do "$( echo "$var" )" with this form to protect both the inner expansion of $var and the outer expansion of $(...) from word splitting and filename globbing. As shown in POSIX Shell Command Language specs, embedded multiline scripts don't work with backticks (on the left), but do work with $() form (on the right). echo ` echo $(cat <<\eof cat <<\eofa here-doc with ` a here-doc with )eof eof` )echo ` echo $(echo abc # a comment with ` echo abc # a comment with )` )echo ` echo $(echo '`' echo ')'` ) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/446388",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
446,410 | I want to use the tac to reverse a text file character by character. On the info page for coreutils I found an example saying: #Reverse a file character by character tac -r -s 'x\|[^x]' However running tac -r -s seems to open standard input instead of printing the file. What does 'x\|[^x]' mean and what should I be doing? I also noted that the output for tac [file] and tac -r [file] are same and they're the same as cat [file] . Still can't figure out char by char reverse. | To reverse a file character-by-character using tac , use: tac -r -s 'x\|[^x]' This is documented in info tac : # Reverse a file character by character.tac -r -s 'x\|[^x]' -r causes the separator to be treated as a regular expression . -s SEP uses SEP as the separator . x\|[^x] is a regular expression that matches every character (those that are x , and those that are not x ). $ cat testfileabcdefghi$ tac -r -s 'x\|[^x]' testfileihgfedcba%$ tac file is not the same as cat file unless file has only one line. tac -r file is the same as tac file because the default separator is \n , which is the same when treated as a regular expression and not. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/446410",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291388/"
]
} |
446,420 | Say I have a bash function like so: gmx(){ echo "foo";} will this function implicitly return the exit value of the echo command, or is using return necessary? gmx(){ echo "foo"; return $?} I assume that the way bash works, the exit status of the final command of the bash function is the one that gets "returned", but not 100% certain. | return does an explicit return from a shell function or "dot script" (a sourced script). If return is not executed, an implicit return is made at the end of the shell function or dot script. If return is executed without a parameter, it is equivalent of returning the exit status of the most recently executed command. That is how return works in all POSIX shells. For example, gmx () { echo 'foo' return "$?"} is therefore equivalent to gmx () { echo 'foo' return} which is the same as gmx () { echo 'foo'} In general, it is very seldom that you need to use $? at all. It is really only needed if you need to save it for future use, for example if you need to investigate its value multiple times (in which case you would assign its value to a variable and perform a series of tests on that variable). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/446420",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
446,432 | So I am working on an install script for a program that needs the device id from lsusb in it's configuration so I was thinking of doing the following: $usblist=(lsusb)#put the list into a array for each line.#use the array to give the user a selection list usinging whiptail.#from that line strip out the device id and vender id from the selected line. The line looks as follows: Bus 001 Device 004: ID 0665:5161 Cypress Semiconductor USB to Serial So I want only the 9 characters after "ID{space}" | return does an explicit return from a shell function or "dot script" (a sourced script). If return is not executed, an implicit return is made at the end of the shell function or dot script. If return is executed without a parameter, it is equivalent of returning the exit status of the most recently executed command. That is how return works in all POSIX shells. For example, gmx () { echo 'foo' return "$?"} is therefore equivalent to gmx () { echo 'foo' return} which is the same as gmx () { echo 'foo'} In general, it is very seldom that you need to use $? at all. It is really only needed if you need to save it for future use, for example if you need to investigate its value multiple times (in which case you would assign its value to a variable and perform a series of tests on that variable). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/446432",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/292853/"
]
} |
446,436 | I'm currently trying to parse my FIX messages to get 2 columns showing currency (tag55) and price (tag133) but having difficulty using 'awk' as it appears the desired part of the message isn't split into columns (in bold for your reference). Any thoughts on how to accomplish this? FIX log example:03:55:16.128 incoming 20180528-07:55:16.015 8587130 11891 8587030 S **8=FIX.4.29=013535=S49=IUAT2Feed56=FixServer50=IUAT2Feed_Offers34=858713052=20180528-07:55:16.015117=055=NOK/SEK7225=7133=1.0735135=2100000010=159**03:55:16.128 incoming 20180528-07:55:16.015 8587131 11891 8587030 S **8=FIX.4.29=013435=S49=IUAT2Feed56=FixServer50=IUAT2Feed_Offers34=858713152=20180528-07:55:16.015117=055=USD/CNH7225=2133=6.3872135=300000010=110** Desired output: NOK/SEK 1.0735USD/CNH 6.3872 | return does an explicit return from a shell function or "dot script" (a sourced script). If return is not executed, an implicit return is made at the end of the shell function or dot script. If return is executed without a parameter, it is equivalent of returning the exit status of the most recently executed command. That is how return works in all POSIX shells. For example, gmx () { echo 'foo' return "$?"} is therefore equivalent to gmx () { echo 'foo' return} which is the same as gmx () { echo 'foo'} In general, it is very seldom that you need to use $? at all. It is really only needed if you need to save it for future use, for example if you need to investigate its value multiple times (in which case you would assign its value to a variable and perform a series of tests on that variable). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/446436",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/292856/"
]
} |
446,501 | I installed NixOS 18.03 from Ubuntu on another partition following the NixOS manual's " 2.4. Installing from another Linux distribution " section. Everything went fine, but I did a couple idiotic things [?] , namely: Forgot to add the extra GRUB boot loader entry for the Ubuntu installation before nixos-install . Added it as an afterthought after install, and did a reboot (of course, no Ubuntu entry) Did not enable any networking in configuration.nix , and ended up with no network configuration commands after reboot to connect to wifi. The catch 22 is that nixos-rebuild switch requires a network connection, so I couldn't finalize any changes. So my thinking was that I can boot from a NixOS Live CD (17.03),connect to our wifi and somehow rebuild the config of the installation. It is more than possible that I am missing something essential, have incorrect assumptions above etc; fairly new at nix and NixOS. EDIT: I forgot to include how my partitions are set up and what I tried before successfully installing NixOS. Partitions (mountpoints from Ubuntu): sda├─sda1 ntfs Recovery # some Win7 artifact├─sda2 vfat /boot/efi├─sda3 vfat NIXBOOT # boot partition (esp, boot)├─sda4 ext4 onyx # NixOS data├─sda5 swap # Ubuntu swap│ └─cryptswap1 swap [SWAP]├─sda6 ext4 # (Arch install)├─sda7 ext4 / # Ubuntu install├─sda8 swap nixswap └─sda9 ext4 home I didn't want to mess up the Ubuntu boot partition, so I created another one ( /dev/sda3 ). My plan was to later include a menu entry in Ubuntu's GRUB for NixOS, but for now, install, reboot and test booting NixOS from GRUB console ( set root=... , linux ... , initrd ... , boot ) sudo PATH="$PATH" NIX_PATH="$NIX_PATH" `which nixos-install` --root /mnt --no-bootloader After reboot, I couldn't see anything on the NixOS boot partition. Went back to Ubunut, installed without --no-bootloader , remembered to add an entry for Ubuntu and reboot. (It was only after this that I realized that systemd-boot and GRUB are two completely different things...) UPDATE: I was able to get back to Ubuntu by selecting the Ubuntu boot partition as an alternative boot device in BIOS, and the usual GRUB menu came up. I may just redo the install with the right config. | The simplest way to go is to install from the LiveCD. nixos-generate-config will regenerate the hardware config, but if it finds configuration.nix already exists it will leave it alone. And nixos-install is designed such that it can be safely executed as many times as needed. This means you can follow the main installation guide using the filesystem (and configuration) you already created for NixOS and just continue where you left off. Some things to be mindful of: NixOS will install systemd-boot by default on EFI systems. So you'll end up with a new EFI executable along side the ones you already have. nixos-install will also attempt to set systemd-boot as the default boot manager. I believe you can disable this by setting boot.loader.efi.canTouchEfiVariables to false in configuration.nix I recommend installing NixOS with a basic config; For example, setup networking, users, and install a text editor but not much else. The reason is that the LiveCD uses a Nix store which is held in RAM. Your system will be first installed to this RAM-backed Nix store and then copied to disk. Once installed and bootable you can safely proceed with the rest of the configuration. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/446501",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85131/"
]
} |
446,502 | In include/x86_64-linux-gnu/asm/unistd_64.h , I see a system call named tuxcall , #define __NR_tuxcall 184 There is nothing about it in man tuxcall except to say that it's an unimplemented system calls . What did it do? Was it never implemented, or did it do something in antiquity? | tuxcall is the place-holder for the tux system call which was used by user-space tools to communicate with the TUX kernel module, which implemented the TUX web server . This was a web server running entirely in the kernel; it was maintained by Ingo Molnar until improvements in other parts of Linux, notably thread support with NPTL , brought user-space web server performance up to the level attained by TUX. You can still find the TUX 3 patches for Linux 2.6.18 among Ingo’s patches , including the implementation of sys_tux (the system call in question). The user-space portion, which includes the documentation, can be found on the Wayback Machine (thanks hvd !). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/446502",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3285/"
]
} |
446,624 | When using sudo iotop (latest version 0.6-2.el7 ) in a terminal in my newly installed CentOS 7.5, I get the following error message: Traceback (most recent call last): File "/sbin/iotop", line 17, in <module> main() File "/usr/lib/python2.7/site-packages/iotop/ui.py", line 620, in main main_loop() File "/usr/lib/python2.7/site-packages/iotop/ui.py", line 610, in <lambda> main_loop = lambda: run_iotop(options) File "/usr/lib/python2.7/site-packages/iotop/ui.py", line 508, in run_iotop return curses.wrapper(run_iotop_window, options) File "/usr/lib64/python2.7/curses/wrapper.py", line 43, in wrapper return func(stdscr, *args, **kwds) File "/usr/lib/python2.7/site-packages/iotop/ui.py", line 501, in run_iotop_window ui.run() File "/usr/lib/python2.7/site-packages/iotop/ui.py", line 155, in run self.process_list.duration) File "/usr/lib/python2.7/site-packages/iotop/ui.py", line 434, in refresh_display lines = self.get_data() File "/usr/lib/python2.7/site-packages/iotop/ui.py", line 415, in get_data return list(map(format, processes)) File "/usr/lib/python2.7/site-packages/iotop/ui.py", line 388, in format cmdline = p.get_cmdline() File "/usr/lib/python2.7/site-packages/iotop/data.py", line 292, in get_cmdline proc_status = parse_proc_pid_status(self.pid) File "/usr/lib/python2.7/site-packages/iotop/data.py", line 196, in parse_proc_pid_status key, value = line.split(':\t', 1)ValueError: need more than 1 value to unpack Any idea how to fix this problem? | Apparently, recent kernel versions introduced a blank line in /proc/(pid)/status that iotop does not expect: CapBnd: 0000001fffffffffCapAmb: 0000000000000000Seccomp: 0SpeculationStoreBypass: vulnerable As a zeroth approximation of a fix, edit (as root) /usr/lib/python2.7/site-packages/iotop/data.py ca l.195: def parse_proc_pid_status(pid): result_dict = {} try: for line in open('/proc/%d/status' % pid): if not line.strip(): continue key, value = line.split(':\t', 1) result_dict[key] = value.strip() except IOError: pass # No such process return result_dict where the if not line.strip(): continue is new. Beware that python does not have explicit braces, so the indentation of this line should match that of the line below it. (Also see https://bugs.launchpad.net/pkg-website/+bug/1773383 for other fixes for this bug.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/446624",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/287970/"
]
} |
446,693 | Hi I have this sentence and I want to know what does it means please. if [[ -z "$1" ]]; then # --> this is if the value of the parameter $1 is zero PASO=1elif [[ "$1" -gt 1 ]] ; then # but i don't know what this flags mean? .."-gt" LOG "[$(date +%T)] Parametros incorrectos" exit 255else PASO=$1fi What does -gt mean? | $ help testtest: test [expr] Evaluate conditional expression.... arg1 OP arg2 Arithmetic tests. OP is one of -eq, -ne, -lt, -le, -gt, or -ge. Arithmetic binary operators return true if ARG1 is equal, not-equal, less-than, less-than-or-equal, greater-than, or greater-than-or-equal than ARG2. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/446693",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293078/"
]
} |
446,697 | I am trying to extract a gcc-4.9.0.tar.gz downloaded from one of the gcc mirror site. In order to check the md5 signature on it before I gunzip it I did digest -a md5 -v gcc-4.9.0.tar.gz which gave md5 (gcc-4.9.0.tar.gz) = fe8786641134178ecfeee2dc7644a0d8 This matches with the md5.sum in the directory downloaded from the source. Then I did gzip -dc gcc-4.9.0.tar.gz | tar xvf - The extraction began but soon terminated with a tar: directory checksum error I also tried to gunzip and untar separately like this gunzip gcc-4.9.0.tar.gz Then tar -xvf gcc-4.9.0.tar but it also ended with a checksum error. Please How do I resolve this? | You need to use gtar, it is usually preinstalled with package SUNWgtar: gzip -dc gcc-4.9.0.tar.gz | /usr/sfw/bin/gtar -xf -echo $?0 Native Solaris unpatched tar may have problems with files created with GNU tar. See answer of @schily why. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/446697",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293072/"
]
} |
446,708 | I have the following command set used to update all my WordPress sites in my CentOs shared-hosting partition on my hosting provider's platform (via daily cron). The wp commands inside the pushd-popd set, are of the WP-CLI program, which is a Bash extension used for various shell-level actions on WordPress websites. for dir in public_html/*/; do if pushd "$dir"; then wp plugin update --all wp core update wp language core update wp theme update --all popd fidone The directory public_html is the directory in which all website directories are located (each website usually has a database and a main file directory). Given that public_html has some directories which are not WordPress website directories, than, WP-CLI would return errors regarding them. To prevent these errors, I assume I could do: for dir in public_html/*/; do if pushd "$dir"; then wp plugin update --all 2>myErrors.txt wp core update 2>myErrors.txt wp language core update 2>myErrors.txt wp theme update --all 2>myErrors.txt popd fidone Instead writing 2>myErrors.txt four times (or more), is there a way to ensure all errors whatsoever, from every command, will go to the same file, in one line? | The > file operator opens the file for writing but truncates it initially. That means that each new > file causes the content of the file to be replaced. If you'd want the myErrors.txt to contain the error of all the commands, you'd need either to open that file only once, or use > the first time and >> the other times (which opens the file in append mode). Here, if you don't mind the pushd / popd errors to also go to the log file, you can redirect the whole for loop: for dir in public_html/*/; do if pushd "$dir"; then wp plugin update --all wp core update wp language core update wp theme update --all popd fidone 2>myErrors.txt Or you could open the log file on a fd above 2, 3 for instance, and use 2>&3 (or 2>&3 3>&- so as not to pollute commands with fds they don't need) for each command or group of commands you want to redirect to the log file: for dir in public_html/*/; do if pushd "$dir"; then { wp plugin update --all wp core update wp language core update wp theme update --all } 2>&3 3>&- popd fidone 3>myErrors.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/446708",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/273994/"
]
} |
446,748 | I'm using bash shell on CentOS 7. I want to run a MySQL query from a shell script and iterate over each row of results. If there were 4 rows returned, I thought I could capture the four rows in an array like so: query="select p.id, p.ebook_id, es.id FROM ...";echo "$query" > /tmp/query.sqlmysql -u user --password=pass db_id < /tmp/query.sql > /tmp/query.csvlinesIN=`cat /tmp/query.csv | sed 's/\t/,/g'`arraylength=${#linesIN[@]}echo $arraylength However, $arraylength always outputs 1 even though I can see multiple result rows returned. How can I modify the above to correctly create an array of results where each element in the array represents one row from the result set? | With the --batch option , mysql should output the result one record on a line, and columns separated by tabs. You can read the lines to an array with Bash's mapfile and process substitution, or command substitution and array assignment: mapfile results < <( mysql --batch ... < query.sql ) or set -f # disable globbingIFS=$'\n' # set field separator to NL (only)results=( $(mysql --batch ... ) ) (Note that IFS stays modified and globbing disabled after this.) Then, if you want to split the columns of a row to some variables: IFS=$'\t' read -r col1 col2 col2 ... <<< "${results[0]}" Your assignment linesIN=`cat /tmp/query.csv | sed 's/\t/,/g'` is not an array assignment (it's missing the parenthesis). It just assigns the output of the command substitution to a regular string variable. (Any newlines will be embedded there, but it'll still be a single string.) ${#linesIN[@]} still works since in Bash/ksh single-element arrays and scalar variables act the same. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/446748",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166917/"
]
} |
446,847 | I have a script running on Linux that accepts some parameters. I would like to do something like: if [[ $CONDITION == "true" ]]; then script param1 --param2 else script param1 fi I would like to avoid the forking path of the if. Is there a more optimal way to pass the second parameter? | The most expansible and robust way would probably be to use an array to hold the optional parameter(s): params=()if [[ $CONDITION == true ]]; then params+=(--param2)fiscript param1 "${params[@]}" Or in shorthand: [[ $CONDITION == true ]] && params+=(--param2)script param1 "${params[@]}" That avoids repeating the constant part of the command and you can put more than one argument in the array, even the whole command. Note that it's important to do this with an array: if you replace the array with a regular variable ( params="--param2"; script param1 $params ) you'll either have to expand the variable unquoted, with all the problems that brings, or expand it quoted, in which case you'll pass an empty string as argument if the variable is empty. In a simple case like this, the "alternate value" expansion can also be used: cond=xp2="--param2"script param1 ${cond:+"$p2"} Here, if cond is nonempty (regardless of if it's cond=false or cond=0 instead of cond=true ), the value of p2 is expanded. This may be seen as less ugly than arrays, but be careful with the placement of the quotes. See also: How can we run a command stored in a variable? Using shell variables for command options Why does my shell script choke on whitespace or other special characters? | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/446847",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264385/"
]
} |
446,953 | I have a file with many lines, and I want to trim each line to be 80 characters in length. How could I do this? I have already filtered out lines shorter than 80 characters, so now I'm left with a file that has lines 80+ characters in length and I want to trim each line so that all are exactly 80. In other words I want to preserve the first 80 characters in each line and remove the rest of the line. | You can use the cut command: cut -c -80 file With grep : grep -Eo '.{80}' file | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/446953",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293255/"
]
} |
446,999 | how exactly df -h works? If I run df , I get this: Filesystem 1K-blocks Used Available Use% Mounted on/dev/simfs 41943040 7659828 34283212 19% / If I run df -h , I get this: Filesystem Size Used Avail Use% Mounted on/dev/simfs 40G 7.4G 33G 19% / The question is how to get same numbers? 41943040 / 1024 / 1024 = 40 OK, let's divide others by 1024. 7659828 / 1024 / 1024 = 7,304981 Then maybe by 1000? 7659828 / 1000 / 1000 = 7,659828 How df -h got 7.4G? 34283212 / 1024 / 1024 = 32,695, which is ±33G While df is open source, I've cloned the repo and checked the code. That's what I found: for (col = 0; col < ncolumns; col++) { char *cell = NULL; char const *header = _(columns[col]->caption); if (columns[col]->field == SIZE_FIELD && (header_mode == DEFAULT_MODE || (header_mode == OUTPUT_MODE && !(human_output_opts & human_autoscale)))) { char buf[LONGEST_HUMAN_READABLE + 1]; int opts = (human_suppress_point_zero | human_autoscale | human_SI | (human_output_opts & (human_group_digits | human_base_1024 | human_B))); /* Prefer the base that makes the human-readable value more exact, if there is a difference. */ uintmax_t q1000 = output_block_size; uintmax_t q1024 = output_block_size; bool divisible_by_1000; bool divisible_by_1024; do { divisible_by_1000 = q1000 % 1000 == 0; q1000 /= 1000; divisible_by_1024 = q1024 % 1024 == 0; q1024 /= 1024; } while (divisible_by_1000 & divisible_by_1024); if (divisible_by_1000 < divisible_by_1024) opts |= human_base_1024; if (divisible_by_1024 < divisible_by_1000) opts &= ~human_base_1024; if (! (opts & human_base_1024)) opts |= human_B; char *num = human_readable (output_block_size, buf, opts, 1, 1); /* Reset the header back to the default in OUTPUT_MODE. */ header = _("blocks"); /* TRANSLATORS: this is the "1K-blocks" header in "df" output. */ if (asprintf (&cell, _("%s-%s"), num, header) == -1) cell = NULL; } else if (header_mode == POSIX_MODE && columns[col]->field == SIZE_FIELD) { char buf[INT_BUFSIZE_BOUND (uintmax_t)]; char *num = umaxtostr (output_block_size, buf); /* TRANSLATORS: this is the "1024-blocks" header in "df -P". */ if (asprintf (&cell, _("%s-%s"), num, header) == -1) cell = NULL; } else cell = strdup (header); if (!cell) xalloc_die (); hide_problematic_chars (cell); table[nrows - 1][col] = cell; columns[col]->width = MAX (columns[col]->width, mbswidth (cell, 0)); } I don't have experience with this language but as I understand, it tries to check if the value on each column is dividable by 1024 or 1000 and choose whatever is better to render values for the -h option. But I don't get the same value no matter whether I divide by 1000 or 1024. Why? I think I know why. It checks to divide by 1000 or 1024 on each division. if (divisible_by_1000 < divisible_by_1024) opts |= human_base_1024; if (divisible_by_1024 < divisible_by_1000) opts &= ~human_base_1024; if (! (opts & human_base_1024)) opts |= human_B; so let's crack 7659828 / 1024 / 1024 = 7,304981. -h gave answer of 7.4G 7659828 / 1024 = 7480,xxx7659828 / 1000 = 7659,xxx while 7659 is more than 7480, divide by 1024. Still a big number, let's continue: 7659828 / 1024 / 1024 = 7,xxx (7,3049..)7659828 / 1024 / 1000 = 7,xxx (7,4803..) it takes 1000 now and gives 7,48 and I believe somewhere in the code it rounds down so "better say less than more" while you can put in 7.4G of data but you can't put 7.5G. Same story with 33.4G 34283212 / 1024 / 1000 = 33.47... So it becomes 33G. | The code you posted is from the function "get_header" which generates the text in the first row.In your case this applies to the heading "1K-blocks" (call df -B1023 to see the difference). Important to note: "1K" refers to 1024-byte blocks, not to 1000-byte blocks (indicated by "1kB-blocks", see df -B1000 ) The calculation of the numbers in the human readable format is handled by function "human_readable" (human.c:153).In df.c:1571 you can find the options which are used when called with the -h flag: case 'h': human_output_opts = human_autoscale | human_SI | human_base_1024; output_block_size = 1; break; All calculations are done with base 1024 in human readable format ("-h").In addition to the shown human_output_opts, there is a default setting which applies here (see human.h, enum declaration): /* The following three options are mutually exclusive. *//* Round to plus infinity (default). */human_ceiling = 0,/* Round to nearest, ties to even. */human_round_to_nearest = 1,/* Round to minus infinity. */human_floor = 2, As human_output_opts does not include human_round_to_nearest or human_floor, it will use its default value of human_ceiling. All calculated values will therefore be rounded up. To verify the settings, we can try to calculate the human readable format based on the 1K-blocks from df : Size = ceil(41943040/1024/1024) = ceil(40) = 40Used = ceil(7659828/1024/1024) = ceil(7.305) = 7.4Available = ceil(34283212/1024/1024) = ceil(32.695) = 33 Which is the same as the output of df -h . (... and if you prefer 1000-byte format, you can simply call df -H ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/446999",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/200004/"
]
} |
447,005 | I have a directory with filenames of the format $date.txt. I would like to cat 5 latest files from it. Is there a more elegant solution to that, than for f in 2*.txt; do echo $f; done | tail -5 | while read f; do cat $f; done | In ksh93 , bash or zsh : files=( 2*.txt )cat "${files[@]: -5}" This would create an array of the filenames matching the pattern 2*.txt . It would then output the contents of the last five of these. In zsh , you can also specify a range of files as part of its glob qualifiers : cat 2*.txt([-5,-1]) In any POSIX shell, this may also be done through set -- 2*.txtwhile [ "$#" -gt 5 ]; do shift; done # or: [ "$#" -gt 5 ] && shift "$(( $# - 5 ))"cat "$@" This sets the positional parameters to the filenames matching the pattern. It then shifts off the names from the beginning of the list until the list only has five elements in it. cat is then invoked on the remaining filenames. In all of these solutions, the files would be sorted lexicographically. Filenames with spaces or newlines are handled correctly. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/447005",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/272696/"
]
} |
447,120 | I want to add an argument to bash alias script so that i can just run alias -p l='ls -l' and it would get added to my .zshrc permenantly. But i am unable to locate the shell script for alias. I tried whereis alias , but with no luck. even man alias shows alias has no entry. I looked in /usr/bin, /usr/share/local/bin but with no luck. Can anyone pointout the location? EDIT:I have also tried looking in all possible paths by ls $(echo $PATH| tr ':' '\n') | grep alias | alias is a builtin command, so it doesn't show up as script in any file, or as a function. The type command will show this: $ type aliasalias is a shell builtin But you can still override it. A function with the same name will mask the builtin, unless it's explicitly called with the builtin builtin. So, something like this should work: alias() { if [ "$1" = "-p" ]; then echo "-p was given"; shift; fi; builtin alias "$@";} If you want to print the same alias assignment to a file, you need to be careful to get it quoted right, so that it's usable as input to the shell. Something like this might do (added right after the shift in the function), but do test it: printf "alias %q\n" "$@" >> ~/my.alias.file As for the Bash vs. Zsh issue, I think the above works with both, but I'm not an expert on Zsh. Incidentally, you may also want to note that Bash's alias already has a -p option help alias says: Options: -p print all defined aliases in a reusable format I don't know if it's any use, since the default behaviour of alias without arguments is also to print all aliases in a reusable format. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/447120",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293409/"
]
} |
447,177 | I am wondering how I can check the end of the file in while loop.What I am doing here is if there is some word, such as, "pineapple", "apple", "onion", "orange", or etc, I want to find lines including specific words by using grep command and make some comment "window". For example, if I use grep 'a' 'file' , then it will find "pineapple", "apple", and "orange". Then, finally I want to make it printed like "pineapple, window, apple, window, orange, window", something like this. So, I would like to make some condition in while or for loop. Any helps will be really appreciated. edit:Sample inputs applebananapineappleorangemanduricecakemeatjuicemilkoniongreen onion Expected outputs when using grep command --> grep 'a' 'file name' applewindowbananawindowpineapplewindoworangewindowmanduwindowricecakewindowmeatwindow | Using awk To print window after every line that matches a : $ awk '/a/{print; print "window"}' filenameapplewindowbananawindowpineapplewindoworangewindowmanduwindowricecakewindowmeatwindow How it works /a/{...} selects lines that match the regex a . For each such line, the commands in curly braces are executed. print prints the line containing the match print "window" prints window . Using sed $ sed -n '/a/{s/$/\nwindow/; p}' filenameapplewindowbananawindowpineapplewindoworangewindowmanduwindowricecakewindowmeatwindow How it works -n tells sed not to print unless we explicitly as it to. /a/{...} selects lines that match the regex a . For those lines, the commands in curly braces are executed. s/$/\nwindow/ adds a newline and window after the end of the current line. p prints. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/447177",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/292137/"
]
} |
447,191 | I'm using CentOS 7 with bash shell. I thought base64-encoding a binary file would be as simple as [rails@server lib]$ cat mybinary.file | base64 > /tmp/output.base64 However, I notice when I look at the file length, it's not a multiple of four [rails@server lib]$ ls -al /tmp/output.base64 -rw-rw-r-- 1 rails rails 92935 May 31 15:50 /tmp/output.base64 I don't know if what I have done is valid or not, but when I try and decode the file with a JS library I get an error complaining about the fact that the string length is not a multiple of four, so I'm wondering if what I did above is correct or if there's some other way to do it. | $ echo foo |base64 Zm9vCg==$ echo foo |base64 |wc -c9 Note the trailing newline in the output of base64 , it's the ninth character here. For longer input, it'll produce more than one line, as it wraps the output every 76 characters by default. You can disable the wrapping (including the final newline) with base64 -w0 , or by piping the output through tr -d '\n' . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/447191",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166917/"
]
} |
447,197 | On Lubuntu 18.04, I run a shell in lxterminal. Its controlling terminal is the current pseudoterminal slave: $ tty/dev/pts/2 I would like to know what relations are between my current controlling terminal /dev/pts/2 and /dev/tty . /dev/tty acts like my current controlling terminal /dev/pts/2 : $ echo hello > /dev/ttyhello$ cat < /dev/ttyworldworld^C But they seem to be unrelated files, instead of one being a symlinkor hardlink to the other: $ ls -lai /dev/tty /dev/pts/2 5 crw--w---- 1 t tty 136, 2 May 31 16:38 /dev/pts/213 crw-rw-rw- 1 root tty 5, 0 May 31 16:36 /dev/tty For different sessions with different controlling terminals, if /dev/tty is guaranteed to be their controlling terminals. How can it be different controlling terminals, without being a symlink or hardlink? So what are their relations and differences? Any help is much appreciated! This post is originated from an earlier one Do the output of command `tty` and the file `/dev/tty` both refer to the controlling terminal of the current bash process? | The tty manpage in section 4 claims the following: The file /dev/tty is a character file with major number 5 and minor number 0, usually of mode 0666 and owner.group root.tty. It is a synonym for the controlling terminal of a process, if any. In addition to the ioctl(2) requests supported by the device that tty refers to, the ioctl(2) request TIOCNOTTY is supported. TIOCNOTTY Detach the calling process from its controlling terminal. If the process is the session leader, then SIGHUP and SIGCONT signals are sent to the foreground process group and all processes in the current session lose their controlling tty. This ioctl(2) call works only on file descriptors connected to /dev/tty . It is used by daemon processes when they are invoked by a user at a terminal. The process attempts to open /dev/tty . If the open succeeds, it detaches itself from the terminal by using TIOCNOTTY , while if the open fails, it is obviously not attached to a terminal and does not need to detach itself. This would explain in part why /dev/tty isn’t a symlink to the controlling terminal: it would support an additional ioctl , and there might not be a controlling terminal (but a process can always try to access /dev/tty ). However the documentation is incorrect: the additional ioctl isn’t only accessible via /dev/tty (see mosvy’s answer , which also gives a more sensible explanation for the nature of /dev/tty ). /dev/tty can represent different controlling terminals, without being a link, because the driver which implements it determines what the calling process’ controlling terminal is, if any. You can think of this as /dev/tty being the controlling terminal, and thus offering functionality which only makes sense for a controlling terminal, whereas /dev/pts/2 etc. are plain terminals, one of which might happen to be the controlling terminal for a given process. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/447197",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
447,206 | I am using an ERR trap to catch any error in my bash script and output what happened to log. (similar to this question: Trap, ERR, and echoing the error line ) It works as expected. The only problem is, that at some point in my script an exitcode !=0 is expected to happen. How can I make the trap not trigger in this situation? Here is some code: err_report() { echo "errexit on line $(caller)" | tee -a $LOGFILE 1>&2}trap err_report ERR Then later in the script: <some command which occasionally will return a non-zero exit code>if [ $? -eq 0 ]; then <handle stuff>fi Everytime the command returns non-zero my trap is triggered. Can I avoid this only for this part of the code? I checked this question: Correct behavior of EXIT and ERR traps when using `set -eu` but I am not really getting how to apply it to my case - if at all aplicable. | An ERR trap will not trigger if an error code is immediately "caught", which means that you can use if statements and whatnot without having to flip error trapping on and off all the time. However, you cannot use checking $? for flow control, because as the time you get to that check, you already (may) have the uncaught error. If you have a command you expect to fail -- and you do not want those failures to trigger the trap , you simply have to catch the failure. Wrapping them in an if statement is clunky and verbose, but this shorthand works nicely: /bin/false || : # will not trigger an ERR trap However, if you want to do things when a command fails, if will be fine here: if ! /bin/false; then echo "this was not caught by the trap!"fi Or alternatively, else will catch the error state also: if /bin/false; then : # dead codeelse echo "this was not caught by the trap!"fi In sum, set -e and trap "command" ERR only get tripped if there is an error condition which is not immediately and intrinsically accounted for. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/447206",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57561/"
]
} |
447,254 | We can print the last column of each line in a file using $NF if we don't know the last column number. But I facing difficulty is the last column has an empty value. For example, Parsing who command $ whoroot tty1 2018-01-25 09:36root pts/0 2018-05-30 07:39 (192.168.1.134)root pts/1 2018-05-28 23:12 (192.168.1.134)root pts/2 2018-06-01 10:01 (192.168.1.188) Getting result: $ who | awk '{print $NF}'09:36(192.168.1.134)(192.168.1.134)(192.168.1.188) Expected Result (192.168.1.134)(192.168.1.134)(192.168.1.188) Let me know the possibilities of getting expected result in a one-liner. EDIT 1: The above scenario is only an example. I don't like to change the delimiter to achieve the result. EDIT 2: nothing (an empty line of output) from the lines that have fewer fields than the maximum | To only output the last column of rows that have the max number of columns, you could do something like: who | awk ' NF > max {max = NF; output = ""} NF == max {output = output $NF ORS} END {printf "%s", output}' To output one row for every row of input, but empty for rows that don't have the max number of columns: who | awk ' NF > max {max = NF} {n[NR] = NF} NF == max {last[NR] = $NF} END {for (i = 1; i <= NR; i++) print n[i] == max ? last[i] : ""}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/447254",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/270935/"
]
} |
447,277 | I have a script which check the mounted file system against the entry listed under fstab, the issue what I am facing here is to keep the output align. Below is the script output: / is mounted OK/boot is mounted OK/was8 is mounted OK/was8/slogs is mounted OK/was8/cluster is mounted OK/was8/working is mounted OK/was8/app is mounted OK/was8/tools is mounted OK/was8/plugin is mounted OK/was8/coreproduct is mounted OK... I want to keep these line aligned so it should look like this: / is mounted OK/boot is mounted OK/was8 is mounted OK/was8/slogs is mounted OK/was8/cluster is mounted OK/was8/working is mounted OK/was8/app is mounted OK/was8/tools is mounted OK/was8/plugin is mounted OK/was8/coreproduct is mounted OK... I have tried column and xargs unable to get the desire result. Can someone help me with this. | In general, when you're doing the printing, you can set the width in the format string to printf . %-20s would print a string on a field 20 characters (*) wide, unless it overflows. %-20.20s would make it 20 characters and drop any overflowing part. (* Though e.g. Bash's printf actually counts bytes . The difference can be seen with characters like ä in UTF-8.) So, e.g. printf "%-40s %s\n" "$mountpoint is mounted" "$status" would make the first part (at least) 40 characters wide: /was8/coreproduct is mounted OK... Or, if you need to post-process an input like that, you could use Perl or awk: perl -pe 's/(.*) +(\S+)$/ sprintf "%-40s %s", $1, $2 /e' < fileawk '{s=$NF; sub(/ *[^ ]+ *$/, "", $0); printf "%-40s %s\n", $0, s}' < file Both basically separate the last non-whitespace string, and then print the two parts with the first on a fixed-width field. Or, if you don't care about keeping the separation between the fields exactly as they were, a simpler solution commented by @ JJoao would be: awk '{s=$NF; NF-- ; printf "%-40s %s\n", $0, s}' < file That produces the below output. Note that the two-space blank before is mounted is collapsed to one. This happens since awk rebuilds the whole $0 when NF or any of the fields are modified. /was8/coreproduct is mounted OK | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/447277",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293542/"
]
} |
447,305 | I am writing an HTTP server daemon in C (there are reasons why), managing it with systemd unit file. I am rewriting an application designed 20 years ago, around 1995. And the system they use is that they chroot and then setuid, and the standard procedure. Now in my previous work, the usual policy was that you never ever run any process as root. You create a user/group for it and run from there. Of course, the system did run some things as root, but we could achieve all business logic processing without being root. Now for the HTTP daemon, I can run it without root if I don't chroot inside the application. So isn't it more secure for the application to never ever run as root? Isn't it more secure to run it as mydaemon-user from the beginning? Instead of starting it with root, chrooting, then setuid to mydaemon-user? | It seems that others have missed your point, which was not reasons why to use changed roots, which of course you clearly already know, nor what else you can do to place limits on dæmons, when you also clearly know about running under the aegides of unprivileged user accounts; but why to do this stuff inside the application . There's actually a fairly on point example of why. Consider the design of the httpd dæmon program in Daniel J. Bernstein's publicfile package. The first thing that it does is change root to the root directory that it was told to use with a command argument, then drop privileges to the unprivileged user ID and group ID that are passed in two environment variables. Dæmon management toolsets have dedicated tools for things like changing root directory and dropping to unprivileged user and group IDs. Gerrit Pape's runit has chpst . My nosh toolset has chroot and setuidgid-fromenv . Laurent Bercot's s6 has s6-chroot and s6-setuidgid . Wayne Marshall's Perp has runtool and runuid . And so forth. Indeed, they all have M. Bernstein's own daemontools toolset with setuidgid as an antecedent. One would think that one could extract the functionality from httpd and use such dedicated tools. Then, as you envision, no part of the server program ever runs with superuser privileges. The problem is that one as a direct consequence has to do significantly more work to set up the changed root, and this exposes new problems. With Bernstein httpd as it stands, the only files and directories that are in the root directory tree are ones that are to be published to the world. There is nothing else in the tree at all. Moreover, there is no reason for any executable program image file to exist in that tree. But move the root directory change out into a chain-loading program (or systemd), and suddenly the program image file for httpd , any shared libraries that it loads, and any special files in /etc , /run , and /dev that the program loader or C runtime library access during program initialization (which you might find quite surprising if you truss / strace a C or C++ program), also have to be present in the changed root. Otherwise httpd cannot be chained to and won't load/run. Remember that this is a HTTP(S) content server. It can potentially serve up any (world-readable) file in the changed root. This now includes things like your shared libraries, your program loader, and copies of various loader/CRTL configuration files for your operating system. And if by some (accidental) means the content server has access to write stuff, a compromised server can possibly gain write access to the program image for httpd itself, or even your system's program loader. (Remember that you now have two parallel sets of /usr , /lib , /etc , /run , and /dev directories to keep secure.) None of this is the case where httpd changes root and drops privileges itself. So you have traded having a small amount of privileged code, that is fairly easy to audit and that runs right at the start of the httpd program, running with superuser privileges; for having a greatly expanded attack surface of files and directories within the changed root. This is why it is not as simple as doing everything externally to the service program. Notice that this is nonetheless a bare minimum of functionality within httpd itself. All of the code that does things such as look in the operating system's account database for the user ID and group ID to put into those environment variables in the first place is external to the httpd program, in simple standalone auditable commands such as envuidgid . (And of course it is a UCSPI tool, so it contains none of the code to listen on the relevant TCP port(s) or to accept connections, those being the domain of commands such as tcpserver , tcp-socket-listen , tcp-socket-accept , s6-tcpserver4-socketbinder , s6-tcpserver4d , and so on.) Further reading Daniel J. Bernstein (1996). httpd . publicfile . cr.yp.to. httpd . Daniel J. Bernstein's softwares all in one . Softwares. Jonathan de Boyne Pollard. 2016. gopherd . Daniel J. Bernstein's softwares all in one . Softwares. Jonathan de Boyne Pollard. 2017. https://unix.stackexchange.com/a/353698/5132 https://github.com/janmojzis/httpfile/blob/master/droproot.c | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/447305",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293566/"
]
} |
447,307 | I am trying to recursively change the permission of all files and directories in my project. I found a post on the magento forum saying that I can use these commands: find ./ -type f | xargs chmod 644find ./ -type d | xargs chmod 755chmod -Rf 777 varchmod -Rf 777 media It worked for find ./ -type d | xargs chmod 755 . The command find ./ -type f returned a lot of files, but I get chmod: access to 'fileXY.html' not possible: file or directory not found on all files, if I execute find ./ -type f | xargs chmod 644 . How can I solve this? PS: I know that he recommended to use 777 permission for my var and media folder, which is a security risk, but what else should we use? | I’m guessing you’re running into files whose names contain characters which cause xargs to split them up, e.g. whitespace. To resolve that, assuming you’re using a version of find and xargs which support the appropriate options (which originated in the GNU variants, and aren’t specified by POSIX), you should use the following commands instead: find . -type f -print0 | xargs -0 chmod 644find . -type d -print0 | xargs -0 chmod 755 or better yet, chmod -R a-x,a=rX,u+w . which has the advantages of being shorter, using only one process, and being supported by POSIX chmod (see this answer for details). Your question on media and var is rather broad, if you’re interested in specific answers I suggest you ask it as a separate question with more information on what your use of the directories is. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/447307",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124191/"
]
} |
447,334 | I understand there are 12 permission bits of which there are 3 groups of 3 bits for each of user, group, and others, which are RWX respectively. RW are read and write, but for X is search for directories and execute for files. Here is what I don't get: What are the 3 remaining mode bits and are they all stored in the inode? I know the file directory itself is considered a file as well, since all things in UNIX are files (is this true?), but since UNIX systems use ACL to represent the file system, then the file system is a list of filename-inode_number pairs. Where does a file directory store it's own inode number and filename? | stat /bin/su shows on one system: Access: (4755/-rwsr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) There's the octal representation 4755 of all 12 mode bits. The number corresponds to the bits: octal 4 7 5 5bits 100 111 101 101 sst uuu ggg ooo ug rwx rwx rwx Where uuu , ggg and ooo are the permission bits for the user, group and others. The remaining group (the first one in order) contains the setuid ( su ), setgid ( sg ) and sticky ( t ) bits. The setuid and sticky bits are often not mentioned, since they're zero for most files. They're still there for every file, saved along with the others. If we really get down to it, some filesystems and interfaces store the file type along the mode bits, in the still-higher bits. The above only accounts for 12 bits, so with a 16-bit field, there's 4 left over. See, for example, the description of st_mode in stat(2) . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/447334",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293584/"
]
} |
447,429 | I want to limit the change to the separator only to the following echo command not to the shell: $ myarr=(1 2 3)$ echo $( IFS="|"; echo "${myarr[@]}" )1 2 3 Why doesn't the separator work for array expansion? Thanks. | From POSIX, regarding $* : When the expansion occurs in a context where field splitting will not be performed, the initial fields shall be joined to form a single field with the value of each parameter separated by the first character of the IFS variable if IFS contains at least one character, or separated by a <space> if IFS is unset, or with no separation if IFS is set to a null string. To join words with a separator, you will have to use $* , or ${array[*]} in bash : $ set -- word1 word2 word3 "some other thing" word4$ IFS='|'$ echo "$*"word1|word2|word3|some other thing|word4 Or, with an array in bash : $ arr=( word1 word2 word3 "some other thing" word4 )$ IFS='|'$ echo "${arr[*]}"word1|word2|word3|some other thing|word4 With your code: $ myarr=( 1 2 3 )$ echo "$( IFS="|"; echo "${myarr[*]}" )"1|2|3 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/447429",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
447,430 | I’m looking for an “in” operator that works something like this: if [ "$1" in ("cat","dog","mouse") ]; then echo "dollar 1 is either a cat or a dog or a mouse"fi It's obviously a much shorter statement compared to, say, using several "or" tests. | You can use case ... esac $ cat in.sh #!/bin/bashcase "$1" in "cat"|"dog"|"mouse") echo "dollar 1 is either a cat or a dog or a mouse" ;; *) echo "none of the above" ;;esac Ex. $ ./in.sh dogdollar 1 is either a cat or a dog or a mouse$ ./in.sh hamsternone of the above With ksh , bash -O extglob or zsh -o kshglob , you could also use an extended glob pattern: if [[ "$1" = @(cat|dog|mouse) ]]; then echo "dollar 1 is either a cat or a dog or a mouse"else echo "none of the above"fi With bash , ksh93 or zsh , you could also use a regular expression comparison: if [[ "$1" =~ ^(cat|dog|mouse)$ ]]; then echo "dollar 1 is either a cat or a dog or a mouse"else echo "none of the above"fi | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/447430",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65536/"
]
} |
447,453 | When I use the adduser command on fedora, it's not aksing for password or fullname. It's working exactly like useradd. I don't understand why. [hugues@localhost ~]$ sudo adduser user1[hugues@localhost ~]$ sudo useradd user2 And it creates two users in /etc/passwd user1:x:1004:1010::/home/user1:/bin/bash user2:x:1005:1011::/home/user2:/bin/bash | in fedora there is only useradd command, adduser is just a symlink to useradd. you can check that with the following command: ls -ld /usr/sbin/adduser the output of the command: [root@fedora28 ~]# ls -ld /usr/sbin/adduserlrwxrwxrwx. 1 root root 7 Feb 6 05:37 /usr/sbin/adduser -> useradd | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/447453",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293682/"
]
} |
447,525 | In my laptop: $ cat /etc/issue Ubuntu 18.04 LTS \n \l There are two different folders for libraries x86 and x86_64 : ~$ ls -1 / binliblib64sbin... Why for binaries exists only one directory? P.S. I'm also interested in Android but I hope that answer should be the same. | First, why there are separate /lib and /lib64 : The Filesystem Hierarchy Standard mentions that separate /lib and /lib64 exist because: 10.1. There may be one or more variants of the /lib directory on systems which support more than one binary format requiring separate libraries. (...) This is commonly used for 64-bit or 32-bit support on systems which support multiple binary formats, but require libraries of the same name. In this case, /lib32 and /lib64 might be the library directories, and /lib a symlink to one of them. On my Slackware 14.2 for example there are /lib and /lib64 directories for 32-bit and 64-bit libraries respectively even though /lib is not as a symlink as the FHS snippet would suggest: $ ls -l /lib/libc.so.6lrwxrwxrwx 1 root root 12 Aug 11 2016 /lib/libc.so.6 -> libc-2.23.so$ ls -l /lib64/libc.so.6lrwxrwxrwx 1 root root 12 Aug 11 2016 /lib64/libc.so.6 -> libc-2.23.so There are two libc.so.6 libraries in /lib and /lib64 . Each dynamically built ELF binary contains a hardcoded path to the interpreter, in this case either /lib/ld-linux.so.2 or /lib64/ld-linux-x86-64.so.2 : $ file mainmain: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, not stripped$ readelf -a main | grep 'Requesting program interpreter' [Requesting program interpreter: /lib/ld-linux.so.2]$ file ./main64./main64: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, not stripped$ readelf -a main64 | grep 'Requesting program interpreter' [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2] The job of the interpreter is to load necessary shared libraries. Youcan ask a GNU interpreter what libraries it would load without evenrunning a binary using LD_TRACE_LOADED_OBJECTS=1 or a ldd wrapper: $ LD_TRACE_LOADED_OBJECTS=1 ./main linux-gate.so.1 (0xf77a9000) libc.so.6 => /lib/libc.so.6 (0xf760e000) /lib/ld-linux.so.2 (0xf77aa000)$ LD_TRACE_LOADED_OBJECTS=1 ./main64 linux-vdso.so.1 (0x00007ffd535b3000) libc.so.6 => /lib64/libc.so.6 (0x00007f56830b3000) /lib64/ld-linux-x86-64.so.2 (0x00007f568347c000) As you can see a given interpreter knows exactly where to look forlibraries - 32-bit version looks for libraries in /lib and 64-bitversion looks for libraries in /lib64 . FHS standard says the following about /bin : /bin contains commands that may be used by both the system administrator and by users, but which are required when no other filesystems are mounted (e.g. in single user mode). It may also contain commands which are used indirectly by scripts. IMO the reason why there are no separate /bin and /bin64 is that if we hadthe file with the same name in both of these directories we couldn't call one of themindirectly because we'd have to put /bin or /bin64 first in $PATH . However, notice that the above is just the convention - the Linuxkernel does not really care if you have separate /bin and /bin64 .If you want them, you can create them and setup your system accordingly. You also mentioned Android - note that except for running a modifiedLinux kernel it has nothing to do with GNU systems such asUbuntu - no glibc, no bash (by default, you can of course compile and deploy it manually), and also directory structure iscompletely different. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/447525",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72849/"
]
} |
447,561 | When I run systemctl status , I get State: degraded at the top, ● x230 State: degraded Jobs: 0 queued Failed: 1 units Since: Wed 2018-05-30 17:09:49 CDT; 3 days ago .... What's going on, and how do I fix it? | That means some of your services failed to start. You can see them if you run systemctl; without the status argument. They should show something like, loaded failed failed Or you can just list the failed services with systemctl --failed , in my case it shows UNIT LOAD ACTIVE SUB DESCRIPTION ● [email protected] loaded failed failed PostgreSQL Cluster 9.4-mainLOAD = Reflects whether the unit definition was properly loaded.ACTIVE = The high-level unit activation state, i.e. generalization of SUB.SUB = The low-level unit activation state, values depend on unit type. Normally, you'll need to read the journal/log to figure out what to do next about that failing item, by using journalctl -xe . If you just want to reset the units so the system "says" running with a green dot, you can run: systemctl reset-failed | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/447561",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3285/"
]
} |
447,589 | When I turned on my Ubuntu 18.04 yesterday and wanted to start GitKraken, it did not work. After I click its icon I see how the process tries to start in the upper left corner (next to "Activities") but after a few seconds the process seems to die and nothing happens.Trying to launch GitKraken from the console fails too with the following two messages: /snap/gitkraken/58/bin/desktop-launch: line 23: $HOME/.config/user-dirs.dirs: Permission deniedln: failed to create symbolic link '$HOME/snap/gitkraken/58/.config/gtk-2.0/gtkfilechooser.ini': File exists Unfortunately, my Linux skills are too limited to solve this. The only thing I've tried is chmod 777 $HOME/.config/user-dirs.dirs because of the Permossion denied but that did not help. EDIT: as terdon suggested in his comment I've made ls -ld ~/.config/user-dirs.dirs and this is its output: -rwxrwxrwx 1 myusername myusername 633 Mai 6 10:30 /home/mayusername/.config/user-dirs.dirs Then, I made the mv ~/snap/gitkraken/58/.config/gtk-2.0/gtkfilechooser.ini gtkfilechooser.ini.bak command and tried to start GitKraken afterwards. I did not start showing again: /snap/gitkraken/58/bin/desktop-launch: line 23: /home/myusername/.config/user-dirs.dirs: Permission denied The ln: failed to create symbolic link ... error from my initial post did not appear. Exe cuting ll in the directory ~/snap/gitkraken/58/.config/gtk-2.0 gives me the following output: drwxrwxr-x 2 myusername myusername 4096 Jun 3 16:44 ./drwxrwxr-x 8 myusername myusername 4096 Mai 21 12:28 ../lrwxrwxrwx 1 myusername myusername 47 Jun 3 15:45 gtkfilechooser.ini -> /home/myusername/.config/gtk-2.0/gtkfilechooser.ini-rw-r--r-- 1 myusername myusername 198 Jun 3 16:44 gtkfilechooser.ini.bak gtkfilechooser.ini -> /home/myusername/.config/gtk-2.0/gtkfilechooser.ini is red since the file does not exist anymore. Executing the chmod command afterwards did not change anything. GitKraken does not start and outputs the same errors. | SOLVED:Had to install libgnome-keyring: sudo apt install libgnome-keyring0 The UI now comes up and works for me.Still get the following warnings, but it's working: Gtk-Message: 11:19:31.343: Failed to load module "overlay-scrollbar"Gtk-Message: 11:19:31.349: Failed to load module "canberra-gtk-module"Node started time: 1528391971495state: update-not-availableEVENT: Main process loaded at 441 msstate: checking-for-updatestate: update-not-availablestate: checking-for-updatestate: update-not-availableEVENT: Starting initial render of foreground window at 5331 msEVENT: Startup triggers started at 5446 ms | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/447589",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165579/"
]
} |
447,594 | What can explain the examples below and how do I fix this, preferably without heavy quoting acrobatics? I am using the $n to simulate multiple line command strings, just in case it distracts you from the real question. ~$ n=$'\n'; sudo -i echo "line1${n}line2${n}"line1line2~$ but ~$ n=$'\n'; sudo echo "line1${n}line2${n}"line1line2~$ | Running that sudo -i echo $'line1\nline2' under strace shows Bash gets started like this: 9183 execve("/bin/bash", ["-bash", "--login", "-c", "echo line1\\\nline2\\\n"], ... Now, strace presents special characters with backslash-escapes when it displays the strings, so what Bash actually gets as the argument to -c is echo line1 [backslash][newline] line2 [backslash][newline] and for the shell, a backslash at the end of a line marks a continuation line and removes the backslash and the following newline. Without -i , sudo runs echo directly, without going through the shell: 9189 execve("/bin/echo", ["echo", "line1\nline2\n"], ... Here, that's a literal newline going to echo , and echo duly prints that. The idea here must be that sudo tries to add a layer of shell escaping to accommodate for the fact that sh -c takes a single string, while sudo itself takes the command as distinct arguments. Compare the following cases: sudo escapes the space (this is just the name of the command, no arguments!): $ sudo -i 'echo foo'-bash: echo foo: command not found sudo escapes the backslash, so that this actually works (Bash's echo doesn't process the backslash): $ sudo -i echo 'foo\bar'foo\bar Same with a tab: $ sudo -i echo $'foo\tbar'foo bar Here, there's no extra quoting on the backslash, so Bash removes it while processing the shell command line ( b isn't a special character to the shell, and doesn't need quoting. This is basically the same as bash -c 'echo foo"b"ar' ): $ bash -c 'echo foo\bar'foobar The problem is just that you can't escape a newline with a backslash, and sudo doesn't seem to take that into account. In any case, quoting issues like this probably turn quite a bit easier if you store the commands you want in a file, and run that as a script. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/447594",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103363/"
]
} |
447,613 | I've used Debian's Raspberry Pi image builder to create arm64 image, but the problem is it's too barebones. Are there any metapackages that installs useful tools, equivalent to Ubuntu's ubuntu-minimal and ubuntu-server ? Blind search on packages.debian.org proved to be futile. | There are quite a few meta-packages in Debian; whether or not any one of them is appropriate will depend on your exact requirements. Start by looking at the packages produced by tasksel ; those are the meta-packages used by the Debian installer. Most of them are language-related, or desktop-related, but there are a few server-related packages too ( task-print-server , task-ssh-server , and task-web-server ). Each tasksel package corresponds to an entry in the installer, so any package set which can be installed using the installer can also be obtained by installing tasksel packages (or using tasksel itself). The “base” Debian installation is determined by package priorities and the “essential” flag rather than a meta-package (see the definition in Debian Policy ). You’ll always have all essential packages, and you should always have all packages with priority “required”. In your particular case the contents of your image will be determined by the options given to debootstrap ; see its documentation for details. If you don’t specify a --variant , you’ll get a base Debian install, the same as you’d obtain from the installer if you didn’t select any additional packages. Based on your comments, I take it what you’re really looking for is to replicate the set of packages which end up installed by default . A default installation includes more packages than the base system; it also includes what’s known as the standard package set, i.e. all packages with standard “priority”. This includes packages such as bash-completion , file , the Debian documentation, vim-tiny ... There is no corresponding meta-package; to install these packages after debootstrap , install tasksel and run tasksel install standard . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/447613",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8305/"
]
} |
447,622 | How can remove the last comma separator from a file on Linux? Example of file : "is_supported_kafka_ranger" : "true","kafka_log_dir" : "/var/log/kafka","kafka_pid_dir" : "/var/run/kafka","kafka_user" : "kafka","kafka_user_nofile_limit" : "128000","kafka_user_nproc_limit" : "65536", expected results: "is_supported_kafka_ranger" : "true","kafka_log_dir" : "/var/log/kafka","kafka_pid_dir" : "/var/run/kafka","kafka_user" : "kafka","kafka_user_nofile_limit" : "128000","kafka_user_nproc_limit" : "65536" | Using GNU sed : sed -i '$s/,$//' file That is, on the last line ( $ ) substitute ( s ) the comma at the end of the line ( ,$ ) by nothing. The change will be done in-place due to the -i flag. With standard sed : sed '$s/,$//' <file >file.new &&mv file.new file Note: Someone suggested an edit to change "on the last line" to "last on the line" (or something similar). This is wrong. When $ is used to specify an address (a line where to apply an editing command), then it refers to the last line of the stream or file. This is different from using $ in a regular expression. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/447622",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
447,726 | I often have to connect to a server over ssh in an unreliable wifi environment. On the server, I run screen, so if I get disconnected, I can reconnect and resume the screen session, and pick up where I left off, but the loss of connection is still a major time sink: if the connection drops out while I'm on the server, the terminal window tends to freeze. I have to kill that tab, open a new one, ssh to the server again and resume the screen session. I've tried this with running screen on the server and screen locally. Either way it tends to freeze when the connection drops out. Is there any way I can have something similar to screen, or maybe screen itself, that will automatically try to reconnect and keep the session running, so I don't have to keep manually reconnecting? Often when I lose the connection I think it's only for a very brief period - less than a second maybe. I'm using Ubuntu 14.04 LTS, MATE edition. thanks | You could look at using mosh : https://mosh.org/ You could set up a 'jump' server with a reliable internet connection which you use mosh to connect to, then have ssh sessions to each server you manage.The reason I suggest using a jump server is that you may not wish to install mosh on the servers you are managing. Another advantage of mosh is that it is based on UDP rather than TCP, and your session can survive a change of IP address, for example going from WiFi to a mobile internet connection. Just to make it clear, mosh is not a replacement for screen , but rather ssh . It's still a good idea to use screen with it, since mosh itself doesn't provide a way to reconnect to your session if the client dies for some reason. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/447726",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27368/"
]
} |
447,729 | Copying scripts from the previous isntall of Debian for Raspberry. The commant should mount an external NTFS drive to a folder. Before the new install, this command worked: sudo mount -t ntfs /dev/sdb1 -o umask=007,gid=wheel,gid=www-data,rw,nosuid,nodev,relatime,allow_other ~/Ext-hdd/ Now, I get the ntfs: (device sdb1): parse_options(): Unrecognized mount option allow_other. I tried removing this option, but without it I don't have write access to the drive, which is something that I need. I tried tweeking the flags sudo mount -t ntfs /dev/sdb1 -o umask=007,gid=wheel,uid=1001,rw,nosuid,nodev,relatime ~/Ext-hdd/ But I still cannot get write access. | You could look at using mosh : https://mosh.org/ You could set up a 'jump' server with a reliable internet connection which you use mosh to connect to, then have ssh sessions to each server you manage.The reason I suggest using a jump server is that you may not wish to install mosh on the servers you are managing. Another advantage of mosh is that it is based on UDP rather than TCP, and your session can survive a change of IP address, for example going from WiFi to a mobile internet connection. Just to make it clear, mosh is not a replacement for screen , but rather ssh . It's still a good idea to use screen with it, since mosh itself doesn't provide a way to reconnect to your session if the client dies for some reason. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/447729",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/170592/"
]
} |
447,741 | Input: United+States Output: United States I tried many times using sed without success. | By default, sed uses basic regular expressions (BRE), where the plus sign is not special. So you can use it in the s command as you would use a regular character: <<< 'United+States' sed 's/+/ /g' If you want to modify a file with several instances in the same line ( g ) or with several lines sed 's/+/ /g' filename If you use extended regular expressions (ERE, sed -E in versions of sed that support it), then you need to escape the plus: sed -E 's/\+/ /g' ... (See this question for the difference between the regex variants.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/447741",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227199/"
]
} |
447,886 | From the man page, I know you can use raw sockets, but I don’t understand what is meant by “bind to any address for transparent proxying”. I know there’s another capability required to bind to privileged ports, so I know you can’t bind to any port. Is there a way to tell Linux that you’re binding on an address for proxying? | Quoting from this Security SE Answer : CAP_NET_RAW : Any kind of packet can be forged, which includes faking senders, sending malformed packets, etc., this also allows to bind to any address (associated to the ability to fake a sender this allows to impersonate a device, legitimately used for "transparent proxying" as per the manpage but from an attacker point-of-view this term is a synonym for Man-in-The-Middle), | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/447886",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194192/"
]
} |
447,898 | In Program 1 Hello world gets printed just once, but when I remove \n and run it (Program 2), the output gets printed 8 times. Can someone please explain me the significance of \n here and how it affects the fork() ? Program 1 #include <sys/types.h>#include <unistd.h>#include <stdio.h>#include <stdlib.h>int main(){ printf("hello world...\n"); fork(); fork(); fork();} Output 1: hello world... Program 2 #include <sys/types.h>#include <unistd.h>#include <stdio.h>#include <stdlib.h>int main(){ printf("hello world..."); fork(); fork(); fork();} Output 2: hello world... hello world...hello world...hello world...hello world...hello world...hello world...hello world... | When outputting to standard output using the C library's printf() function, the output is usually buffered. The buffer is not flushed until you output a newline, call fflush(stdout) or exit the program (not through calling _exit() though). The standard output stream is by default line-buffered in this way when it's connected to a TTY. When you fork the process in "Program 2", the child processes inherits every part of the parent process, including the unflushed output buffer. This effectively copies the unflushed buffer to each child process. When the process terminates, the buffers are flushed. You start a grand total of eight processes (including the original process), and the unflushed buffer will be flushed at the termination of each individual process. It's eight because at each fork() you get twice the number of processes you had before the fork() (since they are unconditional), and you have three of these (2 3 = 8). | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/447898",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293805/"
]
} |
447,911 | We are running Red Hat Enterprise Linux version 7, and all our machines are virtual machines. Our memory resources are limited and physical RAM costs money so we are thinking of increasing swap instead of adding memory. Is this a good idea? Secondly, when / from which point does the OS start using swap? | No, it’s a bad idea. You shouldn’t think of swap as a mechanism by which you can expand memory; it’s a storage area for parts of memory which don’t have to remain in physical memory, and whose contents don’t exist anywhere else. See Why does Linux need swap space in a VM? for details. If the processes running inside your VMs are running out of memory, you need to determine what their real working set is, both in nominal operation and in the worst case. Then, assuming you can’t reduce their memory usage, you need to configure their memory setups to suit: RAM allocation, swap, and kernel configuration (swappiness etc.). The RAM allocation will have a direct impact on the number of VMs you can run per host, and that should really be your main adjustment variable if you can’t add more memory to your hosts. (That doesn’t help with the cost aspect of course...) Depending on what you need VMs for, another strategy could be to use containers instead since that will allow you to reduce the overhead. Operating systems typically start using swap when they need to allocate memory and they’ve run out of available physical memory, and the least used memory pages currently in physical memory don’t have what’s called a backing store (or rather, their backing store is swap). When a program needs more memory, the kernel will first look for some free memory; then it will look through a hierarchical list of things it can get rid of — cache, buffers, mapped executables, etc. Note that swap can be used even in the absence of “visible” memory pressure: there are always pieces of data stored in memory which aren’t actually used, and are better stored in swap. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/447911",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
447,929 | I am trying to install moreutils on Red Hat Enterprise Linux 7.4, but it is complaining about a dependency on perl(IPC::Run) . Here is the command I'm running and the error message I am receiving: # /bin/yum -d 0 -e 0 -y install moreutilsError: Package: moreutils-0.49-2.el7.x86_64 (epel) Requires: perl(IPC::Run)You could try using --skip-broken to work around the problemYou could try running: rpm -Va --nofiles --nodigest I've tried searching for the package perl-IPC-Run but it does not seem to be available. | It turns out Perl-IPC-Run is in the rhel-7-server-optional-rpms repository which had not been enabled. These are the steps I took to fix the issue: # subscription-manager repos --enable=rhel-7-server-optional-rpmsRepository 'rhel-7-server-optional-rpms' is enabled for this system.# yum search Perl-IPC-Run...perl-IPC-Run.noarch : Perl module for interacting with child processes Now the Perl-IPC-Run package is available and moreutils installs without an error. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/447929",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245552/"
]
} |
447,939 | I am trying to summarise a table of data that changes everyday. I have already summarised the table to only display rows with entries that are larger than 30. However, on some days, there are no entries more than 30 in the original table. When that happens, i do not need the entire section that is empty in the summary. How do i then remove the entire header for those sections? Ideally, if there are no entries in all 5 sections, there should not be any lines printed ( or just a string that says: "None: there is no entry larger than 30" as i was trying to do) Example of a summarised table with 5 sections, summarised_output.txt: =========================================================================================================Month: Jun Counter Name 06/04 18:00 06/04 17:00 06/04 16:00 06/04 15:00=========================================================================================================SYS.SYS.SYS.SYS.SYS.SYS. : 45 45 45 45SYS.SYS.SYS.SYS.SYS.SYS. : 45 45 45 45=========================================================================================================Month: Jun Counter Name 06/05 14:00 06/05 13:00 06/05 12:00 06/05 11:00=========================================================================================================SYS.SYS.SYS.SYS.SYS.SYS. : 45 45 45 45SYS.SYS.SYS.SYS.SYS.SYS. : 45 45 45 45=========================================================================================================Month: Jun Counter Name 06/05 10:00 06/05 09:00 06/05 08:00 06/05 07:00==================================================================================================================================================================================================================Month: Jun Counter Name 06/05 06:00 06/05 05:00 06/05 04:00 06/05 03:00=========================================================================================================SYS.SYS.SYS.SYS.SYS.SYS. : 45 45 45 45SYS.SYS.SYS.SYS.SYS.SYS. : 45 45 45 45=========================================================================================================Month: Jun Counter Name 06/04 18:00 06/04 17:00 06/04 16:00 06/04 15:00=========================================================================================================SYS.SYS.SYS.SYS.SYS.SYS. : 45 45 45 45SYS.SYS.SYS.SYS.SYS.SYS. : 45 45 45 45========================================================================================================= As you can see, the third section is empty because there is no entry in the original_output.txt file higher than 30. But the header is still there. My summary code( worked): awk '$1=="Month:"||$1==""||$1=="Counter"||(index($1, "=")!=0)||$3>=30|| $4>=30 || $5>=30||$6>=30' original_output.txt>>summarised_output.txt My attempt at deleting the header (doesn't work): touch summarised_output_temp.txtawk '{if ($1=="Month:"||$1==""||$1=="Counter"||(index($1, "=")!=0)||$3>=30|| $4>=30 || $5>=30||$6>=30) print $0}' original_output.txt >> summarised_output_temp.txtif (((wc -l < summarised_output_temp.txt)==42))thenecho "None: there is no entry larger than 30" >> summarised_output.txtelsecat output_7_temp.txt>>summarised_output.txtfi The error received for the attempt: line 3: ((: (wc -l output_7_temp.txt | awk {print $1})==42: syntax error: invalid arithmetic operator (error token is ".txt | awk {print $1})==42") | It turns out Perl-IPC-Run is in the rhel-7-server-optional-rpms repository which had not been enabled. These are the steps I took to fix the issue: # subscription-manager repos --enable=rhel-7-server-optional-rpmsRepository 'rhel-7-server-optional-rpms' is enabled for this system.# yum search Perl-IPC-Run...perl-IPC-Run.noarch : Perl module for interacting with child processes Now the Perl-IPC-Run package is available and moreutils installs without an error. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/447939",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/292207/"
]
} |
447,960 | The Arch Linux ZFS wiki page explains grub-compatible pool creation , as does this page about booting Fedora , but I have not been able to create a pool that is readable by Grub. The Arch Linux wiki page about Installing Arch Linux on ZFS highlights certain bugs but doesn't really explain how to overcome them. The linked pages explain that Grub supports a subset of zpool features and cannot read a pool that uses features that it doesn't support. They go on to explain how to configure a suitable pool but I have been unable to make it work. The supported feature subset does not appear to be documented anywhere. I am using a virtual machine to test with Grub 2.02 and Arch Linux kernel 4.16.13-1-ARCH which is the most recent and is compatible with the current zfs-linux package version ( zfs-linux-0.7.9.4.16.13.1-1 ). I am not (yet) trying to make a bootable system, only to prove that Grub can read the zpool. Here is what I have tried: First, like the arch wiki page suggests, by disabling unwanted features: # zpool create \ -o feature@multi_vdev_crash_dump=disabled \ -o feature@large_dnode=disabled \ -o feature@sha512=disabled \ -o feature@skein=disabled \ -o feature@edonr=disabled \ testpool mirror \ /dev/disk/by-id/ata-VBOX_HARDDISK_VB{5f2d4170-647f16b7,f38966d8-57bff7df} which results in these features: testpool feature@async_destroy enabled localtestpool feature@empty_bpobj active localtestpool feature@lz4_compress active localtestpool feature@multi_vdev_crash_dump disabled localtestpool feature@spacemap_histogram active localtestpool feature@enabled_txg active localtestpool feature@hole_birth active localtestpool feature@extensible_dataset active localtestpool feature@embedded_data active localtestpool feature@bookmarks enabled localtestpool feature@filesystem_limits enabled localtestpool feature@large_blocks enabled localtestpool feature@large_dnode disabled localtestpool feature@sha512 disabled localtestpool feature@skein disabled localtestpool feature@edonr disabled localtestpool feature@userobj_accounting active local Then, like the fedora example , by enabling wanted features: zpool create -d \ -o feature@async_destroy=enabled \ -o feature@empty_bpobj=enabled \ -o feature@spacemap_histogram=enabled \ -o feature@enabled_txg=enabled \ -o feature@hole_birth=enabled \ -o feature@bookmarks=enabled \ -o feature@embedded_data=enabled \ -o feature@large_blocks=enabled \ testpool mirror \ /dev/disk/by-id/ata-VBOX_HARDDISK_VB{5f2d4170-647f16b7,f38966d8-57bff7df} which results in these features: # zpool get all testpool | grep featuretestpool feature@async_destroy enabled localtestpool feature@empty_bpobj active localtestpool feature@lz4_compress disabled localtestpool feature@multi_vdev_crash_dump disabled localtestpool feature@spacemap_histogram active localtestpool feature@enabled_txg active localtestpool feature@hole_birth active localtestpool feature@extensible_dataset enabled localtestpool feature@embedded_data active localtestpool feature@bookmarks enabled localtestpool feature@filesystem_limits disabled localtestpool feature@large_blocks enabled localtestpool feature@large_dnode disabled localtestpool feature@sha512 disabled localtestpool feature@skein disabled localtestpool feature@edonr disabled localtestpool feature@userobj_accounting disabled local In each case, I loaded some content: # cp -a /boot /testpool And then, rebooted into Grub: grub> search --set --label testpoolgrub> ls /@/grub> ls /@error: compression algorithm 80 not supported.grub> ls /@/error: compression algorithm inherit not supported. I tried enabling/disabling some features, most notably lz4_compress . I also tried creating a dataset on the pool. Nothing I tried worked inside Grub. I expected to be able to list /boot or /@/boot . Errors encountered include compression algorithm inherit not supported compression algorithm 66 not supported compression algorithm 80 not supported incorrect dnode type How should a ZFS zpool be created in order for it to be readable by Grub? | Grub cannot reliably perform a directory listing of a zpool due to a bug as confirmed via the mailing list : listing directory contents in Grub is broken, I have a patch lyingaround that fixes that specific issue. if you get strange error messages(e.g., something like invalid BP type or compression algorithm), it'slikely this issue. This problem is present in Grub installed from ArchLinux and also Fedora 28. Ubuntu, however, appears to have patched their Grub to fix this (confirmed with Ubuntu 16.10). Grub can, however, read the files required to boot. You can create a zpool that Grub will boot like this: zpool create -m none "$ZPOOL" "$RAIDZ" "${DISKS[@]}" where the variables define the name of the pool, e.g mypool , the raid level, e.g. mirror and the disks, e.g /dev/disk/by-id/... (two disks being required for a mirror). You need to create a dataset zfs create -p "$ZPOOL"/ROOT/archlinux And you need to set the dataset's mount point: zfs set mountpoint=/ "$ZPOOL"/ROOT/archlinux You can then boot it with Grub using commands like: insmod part_gptsearch --set --label mypoollinux /ROOT/archlinux@/boot/vmlinuz-linux zfs=mypool rwinitrd /ROOT/archlinux@/boot/initramfs-linux.imgboot I scripted this to test it in a VirtualBox machine. I installed Arch from the ISO onto a normal ext4 root and then used that to install a new ZFS root onto two mirrored virtual disks. The above is a summary - see the script for full details. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/447960",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9259/"
]
} |
447,965 | When I run this command it works: $ find . -inum 888696 -exec ls '{}' \;Conversation.pst Outlook Data File Outlook Data File.sbd Trash Unsent MessagesInbox.pst Outlook Data File.msf Sent.pst Trash.msf Unsent Messages.msf However, When replacing ls with cd it does not work: $ find . -inum 888696 -exec cd '{}' \;find: ‘cd’: No such file or directory I know cd is a bash built-in, so I tried this which does not work either: $ find . -inum 888696 -exec builtin cd '{}' \;find: ‘builtin’: No such file or directory How can I use cd along with find -exec command? UPDATE The reason I'm trying to use cd with find -exec is that the directory name is a strange one which shows up on my terminal as something like ???? . | The -exec option to find executes an external utility, possibly with some command line option and other arguments. Your Unix does not provide cd as an external utility, only as a shell built-in, so find fails to execute it. At least macOS and Solaris do provide cd as an external utility. There would be little or no use for executing cd in this way, except as a way of testing whether the pathname found by find is a directory into which you would be able to cd . The working directory in your interactive shell (or whatever is calling find ) would not change anyway. Related: Understanding the -exec option of `find` Script to change current directory (cd, pwd) What is the point of the `cd` external command? If you're having issues with a directory's name being strange or extremely difficult to type, and you want to change into that directory, then consider creating a symbolic link to the directory and then cd into it using that link instead: find . -inum 888696 -exec ln -s {} thedir ';' This would create a symbolic link named thedir that would point to the problematic directory. You may then change working directory with cd thedir (if the link exists in the current directory). This avoids modifying the directory in any way. Another idea would be to rename the directory in a similar way with find , but that would not be advisable if another program expects the directory to have that particular name. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/447965",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/158683/"
]
} |
448,079 | I created a new home directory for myself on my SSH server and when I log in my bashrc is never loaded, I always have to type . ~/.bashrc after I log in. How can I save keystrokes so this is done automatically? | You could link your .bash_login - used when you login - to your .bashrc - used for other bash shell sessions: mv -f .bash_login .bash_login.old # Don't worry if this says no such fileln -s .bashrc .bash_login Ensure that the commands in your .bashrc can handle the possibility that they are being run without a terminal being connected. So don't print anything unless there's a terminal attached to stdout , for example. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/448079",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255918/"
]
} |
448,096 | I'm running a Debian 8.2 vm and trying to execute a file called install.sh. I've run the following commands: sh ./install.sh sh install.sh apt-get install install.sh The first two above commands gave me the error "Configuration Absent: Installation Failed". The third command gave me the following output: Reading package lists... DoneBuilding dependency treeReading state information... DoneE: Unable to locate package install.shE: Couldn't find any package by regex 'install.sh' I've run chmod 700 install.sh to make sure the file CAN be executed. And I absolutely can't find anything about this type of error. | You could link your .bash_login - used when you login - to your .bashrc - used for other bash shell sessions: mv -f .bash_login .bash_login.old # Don't worry if this says no such fileln -s .bashrc .bash_login Ensure that the commands in your .bashrc can handle the possibility that they are being run without a terminal being connected. So don't print anything unless there's a terminal attached to stdout , for example. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/448096",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/294186/"
]
} |
448,209 | How can i do this in a single line? tcp dport 53 counter accept comment "accept DNS"udp dport 53 counter accept comment "accept DNS" | With a recent enough nftables , you can just write: meta l4proto {tcp, udp} th dport 53 counter accept comment "accept DNS" Actually, you can do even better: set okports { type inet_proto . inet_service counter elements = { tcp . 22, # SSH tcp . 53, # DNS (TCP) udp . 53 # DNS (UDP)} And then: meta l4proto . th dport @okports accept You can also write domain instead of 53 if you prefer using port/service names (from /etc/services ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/448209",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73778/"
]
} |
448,244 | I use a very simple makefile with TeX: test-makefile: echo '\newcommand{\seance}{seance1}' > seances/seance.tex I run it: $ make test-makefile echo '\newcommand{\seance}{seance1}' > seances/seance.tex My problem is that the file created in the folder named "seances" does not contain the two first characters it should contain: ewcommand{\seance}{seance1} The first line of it being empty. Of course I can protect the first antislash: echo '\\newcommand{\seance}{seance1} , etc. But in the real world it does not work: my real makefiles (I have posted an ECM) don't work. What happens? How can bash/debian misunderstand the beginning of the command? By the way: $ bash --versionGNU bash, version 4.4.19(1)-release (x86_64-pc-linux-gnu) $ cat /etc/debian_versionbuster/sid $ uname -aLinux giljourdan 4.16.0-1-amd64 #1 SMP Debian 4.16.5-1 (2018-04-29) x86_64 GNU/Linux | Your echo is one of those that interprets backslash-escapes . A \n means a newline, so you get exactly that. The latter backslash comes as-is, since \s isn't a valid escape code. Make runs the commands through a shell, using /bin/sh by default, and on Debian that's dash. Dash's builtin echo does process backslashes. Bash's doesn't. (And neither does the external /bin/echo on Debian, not that it matters unless you explicitly run /bin/echo ). Your best bet is to use printf explicitly, it's at least safe in that it always processes backslash-escapes. The below should always do the same thing, the \\n at the start produces a real backslash and an n , the \n later produces a newline to end the line. foo: printf '\\newcommand\n' > foo (or, if you want to avoid processing backslashes, then use printf "%s" '\newcommand' ) See the question Why is printf better than echo? for more details about echo gotchas. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/448244",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/294309/"
]
} |
448,261 | I'm trying to assign Spotify to a specific workspace but with no luck. My i3 config file looks like this #Startup-programs exec firefox exec spotifyassign [class="Spotify"] $ws4 assign [class="Firefox"] $ws2 xprop on Spotify gives me this output _NET_WM_ICON(CARDINAL) = WM_CLASS(STRING) = "spotify", "Spotify"WM_NAME(STRING) = "Spotify"_NET_WM_NAME(UTF8_STRING) = "Spotify"_NET_WM_DESKTOP(CARDINAL) = 0WM_STATE(WM_STATE): window state: Normal icon window: 0x0XdndProxy(WINDOW): window id # 0x1a00002WM_NORMAL_HINTS(WM_SIZE_HINTS): program specified location: 0, 0 window gravity: Static_NET_WM_PID(CARDINAL) = 27058WM_LOCALE_NAME(STRING) = "it_IT.UTF-8"WM_CLIENT_MACHINE(STRING) = "placobravo"WM_PROTOCOLS(ATOM): protocols WM_DELETE_WINDOW, _NET_WM_PING At startup both firefox and spotify loads, but only firefox gets placed in its right workspace, and I really can't get what's going on since I'm using the exactly same syntax. I already tried using a different workspace but it doesn't work. After a bit more of searching I've found a solution in another post https://github.com/i3/i3/issues/2060 | (Taken exactly from question). Simply use for_window [class="Spotify"] move to workspace $ws4 This is also in the the Arch i3 Wiki | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/448261",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
448,262 | I fetch files via SCP from a machine owned by another group. My only access is SCP, I do not have the ability to SSH into their machine. Occasionally, their system is rebooted which causes problems on my end if I don't know about it. I was hoping to SCP some file from their system to find out when it was last booted except I can't seem to find anything appropriate. I tried copying via scp: scp -p remoteSys:/proc . (-p says preserve timestamp) and was told /proc is not a regular file and cannot be copied. When I tried: scp -p remoteSys:/proc/uptime . and I got a zero byte file with the current timestamp. I copied: scp -p remoteSys:/var/log/boot.log . and I got a zero size file with a date that may or may not be the boot date. Does anybody have any good suggestions? Thank you in advance. | scp remote:/var/log/wtmp /tmp/remote.wtmplast -f /tmp/remote.wtmp reboot | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/448262",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/205722/"
]
} |
448,290 | From Search for and remove files safely locate -i nohup.out | xargs -d '\n' -L1 -p rm Each line in the output of locate is treated as a argument by xargs, so are -L1 and -n 1 the same? | From the manual: -L max-lines Use at most max-lines nonblank input lines per command line. Trailing blanks cause an input line to be logically continued on the next input line. Implies -x. -n max-args Use at most max-args arguments per command line. Fewer than max-args arguments will be used if the size (see the -s option) is exceeded, unless the -x option is given, in which case xargs will exit. -d delim Input items are terminated by the specified character. [...] Based on this and my understanding, in your case -L1 and -n1 are made equivalent both by the argument 1 passed and the delimiter changed from blank space to \n (newline) by the argument -d For example, without the -d argument if you were to have a blank space in your locate output, then this line would be splitted into two arguments and hence 2 different use of rm with -n1 , while it would still be treated as one argument and only one command with -L1 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/448290",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
448,413 | I have a unix csv file as pipeline "|" separator . But while I am opening in vi editor there are some extra characters are coming as ~G .But while I am doing cat , I could not see any ~G characters . 453136~G|OORAHASS0343136~G|Generic Box Access~G|NMBLDD~G|/shelf=0/slot=1/port=7~G|20Mbit/s~G|80Mbit/s~G|IS How to remove ~G characters . I have already tried below steps but no luck . sed -e 's/[^ -~]//g' file_in > file_out or grep -c '[^ -~]' file_in or sed -i 's/\~H//g;s/\~G//g' file_in | cat -e rendering them as M-^G suggests they are 0x87 bytes (0207 in octal). As its documentation 1 says, vim renders byte 0x87 as ~G when in locales using single-byte charsets or when the encoding is Unicode and the ESA character is encoded as a valid UTF-8 multibyte sequence, and renders the byte as <87> when the encoding option is Unicode and the character does not form part of a valid UTF-8 sequence. (It renders ^G for 0x7, the ASCII BEL character.) That's G (0x47 in ASCII) with bit 7 (meta) set to 1 and bit 6 set to 0 (control). That byte doesn't form a valid character in UTF-8 and is typically the code for a control character ( ESA ) in the C1 set in ISO8859-x charsets. To get rid of it, you can do: tr -d '\207' < file > file.new With GNU sed and a shell like ksh93/zsh/bash with support for $'...' : sed -i $'s/\207//g' file Your sed 's/[^ -~]//g' would have done it, but only in the C locale. What character ranges match in other locales is pretty random. So: LC_ALL=C sed 's/[^ -~]//g' < file > file.new (note that it would delete all other control characters including tabulation and CR (but not LF) and non-ASCII characters). 0x87 is ‡ in the windows-1252 character set (sometimes improperly refereed to as latin1 or iso8859-1). If you wanted those 0x87 to be converted to ‡ (because for instance those files come from the Windows world and that's what those 0x87 were intended to be) in your locale's charset (assuming it has such a character), you could use: iconv -f windows-1252 < file > file.new 1 Bram Moolenaar (2011-03-22). 'isprint' . "options". VIM Reference Manual . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/448413",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/294134/"
]
} |
448,443 | From findutils' manual: For example constructs such as these two commands # riskyfind -exec sh -c "something {}" \;find -execdir sh -c "something {}" \; are very dangerous. The reason for this is that the ‘{}’ is expanded to a filename which might contain a semicolon or other characters special to the shell. If for example someone creates the file /tmp/foo; rm -rf $HOME then the two commands above could delete someone’s home directory. So for this reason do not run any command which will pass untrusted data (such as the names of fi les) to commands which interpret arguments as commands to be further interpreted (for example ‘sh’). In the case of the shell, there is a clever workaround for this problem: # saferfind -exec sh -c 'something "$@"' sh {} \;find -execdir sh -c 'something "$@"' sh {} \; This approach is not guaranteed to avoid every problem, but it is much safer than substituting data of an attacker’s choice into the text of a shell command. Is the cause of the problem in find -exec sh -c "something {}" \; that the replacement for {} isunquoted and therefore not treated as a single string? In the solution find -exec sh -c 'something "$@"' sh {} \; , first {} is replaced, but since {} is unquoted, doesn't "$@" also have the same problem as the original command? For example, "$@" will be expanded to "/tmp/foo;" , "rm" , "-rf" , and "$HOME" ? why is {} not escaped or quoted? Could you give other examples (still with sh -c , or without it ifapplicable; with or without find which may be not necessary) where the same kind of problem and solution apply, andwhich are minimal examples so that we can focus on the problem andsolution with little distraction as possible? See Ways to provide arguments to a command executed by `bash -c` Thanks. | This isn’t really related to quoting, but rather to argument processing. Consider the risky example: find -exec sh -c "something {}" \; This is parsed by the shell, and split into six words: find , -exec , sh , -c , something {} (no quotes any more), ; . There’s nothing to expand. The shell runs find with those six words as arguments. When find finds something to process, say foo; rm -rf $HOME , it replaces {} with foo; rm -rf $HOME , and runs sh with the arguments sh , -c , and something foo; rm -rf $HOME . sh now sees -c , and as a result parses something foo; rm -rf $HOME ( the first non-option argument ) and executes the result. Now consider the safer variant: find -exec sh -c 'something "$@"' sh {} \; The shell runs find with the arguments find , -exec , sh , -c , something "$@" , sh , {} , ; . Now when find finds foo; rm -rf $HOME , it replaces {} again, and runs sh with the arguments sh , -c , something "$@" , sh , foo; rm -rf $HOME . sh sees -c , and parses something "$@" as the command to run, and sh and foo; rm -rf $HOME as the positional parameters ( starting from $0 ), expands "$@" to foo; rm -rf $HOME as a single value , and runs something with the single argument foo; rm -rf $HOME . You can see this by using printf . Create a new directory, enter it, and run touch "hello; echo pwned" Running the first variant as follows find -exec sh -c "printf \"Argument: %s\n\" {}" \; produces Argument: .Argument: ./hellopwned whereas the second variant, run as find -exec sh -c 'printf "Argument: %s\n" "$@"' sh {} \; produces Argument: .Argument: ./hello; echo pwned | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/448443",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.