source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
605,870 | I'm using WineHQ on Debian 10 and it is running totally fine with many Windows programs. However Wine cannot show the right application icon (it always shows only a generic icon for .exe applications). I wonder if there is a way to restore the right icon or if there is a way to permanently link another icon to this particular program. For example, I'm playing FO2 and it would be cool to link .exe file to the icon which is stored in the same folder and every time I move the game to another directory it will keep the same icon. I had the same problem with .mp3 files where I wanted to change the default audio icon. I solved that with program called Puddletag which merges images into the original .mp3 file. Is there something similar for .exe programs? | sh is simple and commonly available. sh is the tool that is invoked to parse command lines in things like system(cmdline) in many languages. Many OSes including some GNU ones have stopped using bash (the GNU shell) to implement sh for the reason that it has become too bloated to do just that simple thing of parsing command lines and interpreting POSIX sh scripts. Your bash -l -c 'echo /usr/local/conda-meta/*.json' command line is possibly being interpreted by a sh invocation already. So possibly you can just do: printf '%s\n' /usr/local/conda-meta/*.json directly. If not: sh -c 'printf "%s\n" /usr/local/conda-meta/*.json' You could also use find here. find doesn't do globbing but it can report file names that match patterns similar to shell ones. LC_ALL=C find /usr/local/conda-meta/. ! -name . -prune -name '*.json' Or with some find implementations: LC_ALL=C find /usr/local/conda-meta -mindepth 1 -maxdepth 1 -name '*.json' (note that the LC_ALL=C needed here so that * matches any sequence of bytes, not just those that are forming valid characters in the current locale, is a shell construct. If that command line is not interpreted by a shell, you may need to change it to env LC_ALL=C find... ) Some differences with shell globs: the list of files is not sorted hidden files are included (you could add a ! -name '.*' to exclude them) you get no output if there's no matching file. globs have that misfeature that they leave the pattern as-is unexpanded in that case. with the first (standard) variant, files will be output as /usr/local/conda-meta/./file.json . some globs such as x*/y/../*z are not easily translated (also note the differing behaviour with respect to symlinks to directories in that case). In any case, you can't use echo to output arbitrary data. My next question would be: what are you going to do with that output? With echo , you're outputting those file paths separated by SPC characters, and with my printf or find above, delimited by NL characters. Both NL and SPC are perfectly valid characters in file names, so those outputs are not post-processable reliable. You could use '%s\0' instead of '%s\n' (or use find 's -print0 if supported), not suitable for display to a user, but post-processable. In terms of efficiency, comparing Ubuntu 20.04's /bin/sh (dash 0.5.10.2) with its find (GNU find 4.7.0). Startup time: $ time (repeat 1000 sh -c '')( repeat 1000; do; sh -c ''; done; ) 0.91s user 0.66s system 105% cpu 1.483 total$ time (repeat 1000 find . -quit)( repeat 1000; do; find . 
-quit; done; ) 1.35s user 1.25s system 103% cpu 2.507 total Globbing some json files: $ TIMEFMT='%U user %S system %P cpu %*E total'$ time (repeat 1000 sh -c 'printf "%s\n" /usr/share/iso-codes/json/*.json') > /dev/null0.95s user 0.72s system 105% cpu 1.587 total$ time (repeat 1000 find /usr/share/iso-codes/json -mindepth 1 -maxdepth 1 -name '*.json') > /dev/null1.34s user 1.35s system 103% cpu 2.599 total Even bash is hardly slower than find here: $ time (repeat 1000 bash -c 'printf "%s\n" /usr/share/iso-codes/json/*.json') > /dev/null1.53s user 1.36s system 102% cpu 2.808 total Of course YMMV depending on the system, implementation, version of the respective utilities and the libraries they're linked against. Now on the history note, the glob name actually comes from the name of a utility called glob in the very first versions of Unix in the early 70s. It was located in /etc and was invoked by sh as a helper to expand wildcard patterns. You'll find a few projects online to revive that very old shell such as https://etsh.nl/ . More as an exercise in archaeology, you could build the glob utility from there and then be able to do: glob printf '%s\n' '/usr/local/conda-meta/*.json' A few notes of warning though. those are ancient globs, [!x] (let alone [^x] ) is not supported. it's not 8 bit safe. Actually, the 8th bit is used for escaping the glob operators ( $'\xe9*' would match the same thing as i* , $'\xaa*' would match on filenames that start with * ; the shell would set that 8th bit for the quoted characters before invoking glob ) ranges like [a-f] match on byte value rather than collation order (in practice, that's generally an advantage IMO). Non-matching globs result in a No match error (again, probably preferably, that's something that was broken by the Bourne shell in the late 70s). The glob functionality was later moved into the shell starting with the PWB shell and Bourne shell in the late 70s. Later, some fnmatch() and glob() functions were added to the C library to allow that feature to be used from other applications, but I'm not aware of a standard nor common utility that is a bare interface to that function. Even perl used to invoke csh in its early days to expand glob patterns. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/605870",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/429510/"
]
} |
605,969 | Sometimes I need to add more disk to a database; for that, I need to list the disks to see what disks already exist. The problem is that the output is always sorted as 1,10,11,12...2,20,21...3 etc. How can I sort this output the way I want it? A simple sort does not work; I've also tried using sort -t.. -k.. -n . Example of what I need to sort: [root@server1 ~]# oracleasm listdisksDATA1DATA10DATA11DATA12DATA2DATA3DATA4DATA5DATA6DATA7DATA8DATA9FRA1FRA10FRA11FRA2FRA3..OCR1OCR2OCR3.... How I'd like to see the output: DATA1DATA2DATA3DATA4DATA5DATA6DATA7DATA8DATA9DATA10DATA11DATA12FRA1FRA2FRA3....FRA10FRA11..OCR1OCR2OCR3.... | Your best bet is piping to GNU sort , with GNU sort 's --version-sort option enabled so that would be oracleasm listdisks | sort --version-sort From the info page --version-sort’ Sort by version name and number. It behaves like a standard sort, except that each sequence of decimal digits is treated numerically as an index/version number. (*Note Details about version sort::.) On your input it gives me DATA1DATA2DATA3DATA4DATA5DATA6DATA7DATA8DATA9DATA10DATA11DATA12FRA1FRA2FRA3FRA10FRA11OCR1OCR2OCR3 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/605969",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/373074/"
]
} |
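A quick way to try the natural ordering described above without an Oracle ASM setup is to feed sort some sample names; -V is the short form of GNU sort's --version-sort (both are GNU extensions, so this assumes GNU coreutils):

```sh
# Version sort treats each run of digits numerically, so DATA2 sorts before DATA10
printf '%s\n' DATA1 DATA10 DATA11 DATA2 FRA1 FRA10 FRA2 | sort -V
# Output (one name per line): DATA1 DATA2 DATA10 DATA11 FRA1 FRA2 FRA10
```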
606,029 | I am trying to send some hex values to a tty port /dev/ttyS2 . When I use echo or printf \x0A it will print an extra \x0D before the \x0A . i=1 a=$(printf "\xAA\xEE\x0A\x%02x" $i)echo -ne "$a" > /dev/ttyS2 I get this on the terminal/serial port -- note the extra 0D. How can I remove that? | You need to disable newline conversion: stty -F /dev/ttyS2 -onlcr or, for strict POSIX stty : stty -onlcr < /dev/ttyS2 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/606029",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74629/"
]
} |
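Putting the answer above together with the write from the question, a minimal sequence might look like the sketch below; /dev/ttyS2 and the byte values are taken from the question, and the -F form of stty is specific to GNU/Linux:

```sh
# Disable the NL -> CR NL output translation on the serial port
stty -F /dev/ttyS2 -onlcr

# Send the raw bytes (the i=1 case from the question): AA EE 0A 01, with no extra 0D inserted
printf '\xAA\xEE\x0A\x01' > /dev/ttyS2
```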
606,070 | In bash we can iterate over index of an array like this ~$ for i in "${!test[@]}"; do echo $i; done where test is an array, say, ~$ test=(a "b c d" e f) so that the output looks like 0123 However, when I do the same in zsh I get an error: ➜ ~ for i in "${!test[@]}"; do echo $i; donezsh: event not found: test[@] What is going on? What is the proper way of iterating over indices in zsh? | zsh arrays are normal arrays like in most other shells and languages, they are not like in ksh/bash associative arrays with keys limited to positive integers (aka sparse arrays). zsh has a separate variable type for associative arrays (with keys being arbitrary sequences of 0 or more bytes). So the indices for normal arrays are always integers ranging from 1 to the size of the array (assuming ksh compatibility is not enabled in which case array indices start at 0 instead of 1). So: typeset -a arrayarray=(a 'b c' '')for ((i = 1; i <= $#array; i++)) print -r -- $array[i] Though generally, you would loop over the array members, not over their indice: for i ("$array[@]") print -r -- $i (the "$array[@]" syntax, as opposed to $array , preserves the empty elements). Or: print -rC1 -- "$array[@]" to pass all the elements to a command. Now, to loop over the keys of an associative array , the syntax is: typeset -A hashhash=( key1 value1 key2 value2 '' empty empty '')for key ("${(@k)hash}") printf 'key=%s value=%s\n' "$key" "$hash[$key]" (with again @ inside quotes used to preserve empty elements). Though you can also pass both keys and values to commands with: printf 'key=%s value=%s\n' "${(@kv)hash}" For more information on the various array designs in Bourne-like shells, see Test for array support by shell | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/606070",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/428045/"
]
} |
606,120 | When reading from dev/urandom , with say head or dd , it’s of course expected that the output is always random and different. How is this handled by UNIX at a low level? Is the file naturally truncated on reading or instead is the file actually an interface for a symmetric cipher or equivalent and as such “reading” is actually the act of executing the cipher. | /dev/urandom is a character device, not a regular file. Opening it provides an interface to a driver, usually in the kernel, which handles reads; every time a program reads from /dev/urandom , a call is made to the driver, and the driver determines how to provide appropriate content (same as any other character device — /dev/null , /dev/zero ...). On Linux, this is implemented in drivers/char/random.c . It maintains an “entropy pool”, seeded from various sources of random data, and when read, processes the pool data using a ChaCha stream cipher to construct data to return. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/606120",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/352497/"
]
} |
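The "interface to a driver" point above is easy to see from the shell on a Linux system: the node is a character device, and each read returns freshly generated bytes rather than stored file content. A small sketch:

```sh
# The leading "c" in the mode shows a character device (major 1, minor 9 on Linux)
ls -l /dev/urandom

# Two reads of the same "file" return different data, because each read
# is a call into the kernel driver rather than a lookup of stored bytes
head -c 16 /dev/urandom | od -An -tx1
head -c 16 /dev/urandom | od -An -tx1
```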
606,153 | Is it possible to install only x86_64-w64-mingw32-gcc ? I need it for one command and the mingw-w64 installation is over 800MB... I'm in Debian Buster, but the same is true for other Linux flavors I just tried. $ sudo apt-get install mingw-w64 -V --no-install-recommends...The following NEW packages will be installed: binutils-mingw-w64-i686 (2.31.1-11+8.3) binutils-mingw-w64-x86-64 (2.31.1-11+8.3) g++-mingw-w64 (8.3.0-6+21.3~deb10u1) g++-mingw-w64-i686 (8.3.0-6+21.3~deb10u1) g++-mingw-w64-x86-64 (8.3.0-6+21.3~deb10u1) gcc-mingw-w64 (8.3.0-6+21.3~deb10u1) gcc-mingw-w64-base (8.3.0-6+21.3~deb10u1) gcc-mingw-w64-i686 (8.3.0-6+21.3~deb10u1) gcc-mingw-w64-x86-64 (8.3.0-6+21.3~deb10u1) mingw-w64 (6.0.0-3) mingw-w64-common (6.0.0-3) mingw-w64-i686-dev (6.0.0-3) mingw-w64-x86-64-dev (6.0.0-3)0 upgraded, 13 newly installed, 0 to remove and 2 not upgraded.Need to get 137 MB of archives.After this operation, 809 MB of additional disk space will be used.Do you want to continue? [Y/n] | Why is this Mingw-w64 package so large? Because mingw-w64 is a meta-package providing the MinGW-w64 toolchain with a C and C++ compiler targeting all supported targets. Currently this involves four different backends (32- and 64-bit, combined with POSIX and Windows threading models). If you don’t need all that, you can ask apt to only install the compiler you’re interested in, and you’ll end up with a smaller set of packages: apt install gcc-mingw-w64-x86-64 This will install the 64-bit toolchain, without g++ . That’s still around 300MiB... The next version of Debian (and Ubuntu 20.04) provide finer granularity, so you can specify only one of the threading models: apt install gcc-mingw-w64-x86-64-posix | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/606153",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/429766/"
]
} |
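Once gcc-mingw-w64-x86-64 from the answer above is installed, the cross-compiler named in the question is available directly; a minimal smoke test might look like this (hello.c stands in for any small C source file):

```sh
# Build a 64-bit Windows executable on Debian using the MinGW-w64 cross-compiler
x86_64-w64-mingw32-gcc -o hello.exe hello.c

# 'file' should report something like "PE32+ executable (console) x86-64"
file hello.exe
```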
606,169 | I know how to insert a single text to specific interval. But now my problem is, I want to insert different texts/words saved in an add.txt file to a specific intervals of another data.txt file. I want to insert first word from add.txt to a specific position of data.txt , then add second word from add.txt to next specific position and so on. My data.txt contain two columns, but the inserted word must appear as a merged row. Please see the example below of what I need. add.txt 2001-01-01 00:00:00 42 12001-01-02 00:00:00 42 12001-01-03 00:00:00 42 12001-01-04 00:00:00 42 12001-01-05 00:00:00 42 1 data.txt -500 11.822788 -400 12.006394 -350 12.287062 -300 12.793395 -500 11.823597 -400 12.008012 -350 12.287062 -300 12.794204 -500 11.826023 -400 12.011247 -350 12.291915 -300 12.800675 -500 11.827641 -400 12.013674 -350 12.295959 -300 12.805528 -500 11.830067 -400 12.016100 -350 12.300003 -300 12.811998 I want 2001-01-01 00:00:00 42 1 -500 11.822788 -400 12.006394 -350 12.287062 -300 12.7933952001-01-02 00:00:00 42 1 -500 11.823597 -400 12.008012 -350 12.287062 -300 12.7942042001-01-03 00:00:00 42 1 -500 11.826023 -400 12.011247 -350 12.291915 -300 12.8006752001-01-04 00:00:00 42 1 -500 11.827641 -400 12.013674 -350 12.295959 -300 12.8055282001-01-04 00:00:00 42 1 -500 11.830067 -400 12.016100 -350 12.300003 -300 12.811998 I am looking for a simplest solution using awk , sed or something. | $ awk '(FNR-1)%4 == 0 { getline add <"add.txt"; print add }; 1' data.txt2001-01-01 00:00:00 42 1 -500 11.822788 -400 12.006394 -350 12.287062 -300 12.7933952001-01-02 00:00:00 42 1 -500 11.823597 -400 12.008012 -350 12.287062 -300 12.7942042001-01-03 00:00:00 42 1 -500 11.826023 -400 12.011247 -350 12.291915 -300 12.8006752001-01-04 00:00:00 42 1 -500 11.827641 -400 12.013674 -350 12.295959 -300 12.8055282001-01-05 00:00:00 42 1 -500 11.830067 -400 12.016100 -350 12.300003 -300 12.811998 This uses awk to read and output every line of the data.txt file. Before outputting any 4th line, a line is read and outputted from the add.txt file. No check is made to verify that the data read from add.txt is correctly read (if the file is too short, the above code would repeat the last line). Using paste : $ paste -d '\n' add.txt - - - - <data.txt2001-01-01 00:00:00 42 1 -500 11.822788 -400 12.006394 -350 12.287062 -300 12.7933952001-01-02 00:00:00 42 1 -500 11.823597 -400 12.008012 -350 12.287062 -300 12.7942042001-01-03 00:00:00 42 1 -500 11.826023 -400 12.011247 -350 12.291915 -300 12.8006752001-01-04 00:00:00 42 1 -500 11.827641 -400 12.013674 -350 12.295959 -300 12.8055282001-01-05 00:00:00 42 1 -500 11.830067 -400 12.016100 -350 12.300003 -300 12.811998 Here, I ask paste to create records with a line from add.txt as the first field, followed by four lines from data.txt as the next four fields. With -d '\n' I set the character to use as a field delimiter to a newline character. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/606169",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/429787/"
]
} |
606,542 | I'm trying to make a bash script that deals with every file in a directory. All of those file names begin with a dot, so they're hidden. When I try to use a wildcard to grab everything in the directory, the wildcard isn't expanding. My code that loops over it looks like this right now: #!/bin/bashshopt -s extglobfor i in "$(pwd)"/*; do echo "$i"done The output is just /Users/.../* . The wildcard doesn't expand. This is different than some of the other threads because it deals with hidden files specifically. If I add a file like test to the directory, then it works. I get /Users/.../test . I tried running this in the terminal by itself as well and got the same result. How do I get the wildcard to expand for hidden files? | I figured it out! Looking more closely at the documentation for shopt , there's an option called dotglob that can be used to include filenames that begin with a dot! I added shopt -s dotglob to the beginning of my script and it works now. The output now lists every hidden file and directory (except ./ and ../ ). My script now looks like this: #!/bin/bashshopt -s extglobshopt -s dotglobfor i in "$(pwd)"/*; do echo "$i"done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/606542",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/327514/"
]
} |
606,639 | I've no issue with this cut command. wolf@linux:~$ echo ab cd efab cd efwolf@linux:~$ echo ab cd ef | cut -d ' ' -f 1abwolf@linux:~$ echo ab cd ef | cut -d ' ' -f 2cd However, when I try the same command with different input like this, I did not get the output as expected. wolf@linux:~$ ip address show eth0 | grep 'inet ' inet 10.10.10.10/24 brd 10.10.10.255 scope global dynamic eth0wolf@linux:~$ ip address show eth0 | grep 'inet ' | cut -d ' ' -f 1wolf@linux:~$ ip address show eth0 | grep 'inet ' | cut -d ' ' -f 2 What's wrong in the second example? awk doesn't seem to have a problem with the same input, strange. wolf@linux:~$ ip address show eth0 | awk '/inet / {print $1}'inetwolf@linux:~$ ip address show eth0 | awk '/inet / {print $2}'10.10.10.10/24 | cut takes each and every delimiter as meaningful, even if there are consecutive spaces. This is unlike awk , which by default splits on whitespace, but takes multiple whitespace as only one delimiter, and ignores leading whitespace. You could get awk to behave similarly by setting the field separator FS to [ ] : $ ip add show lo | awk -F'[ ]' '$5 == "inet" {print $6}'127.0.0.1/8 That has to be set like that, as a regexp, since a single space by itself in FS marks the default, special operation. See, e.g., what the GNU awk manual has to say about field separators. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/606639",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/409008/"
]
} |
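If cut has to be used anyway, as in the question above, a common workaround is to squeeze the runs of spaces first. Note that the line still begins with one space after squeezing, so the first field is empty and the address lands in field 3; this sketch reuses the pipeline from the question:

```sh
# tr -s ' ' collapses each run of spaces to a single space;
# field 1 is then empty (before the leading space), field 2 is "inet",
# and field 3 is the address
ip address show eth0 | grep 'inet ' | tr -s ' ' | cut -d ' ' -f 3
```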
606,644 | In the following example, there are 4 spaces before inet. wolf@linux:~$ ip address show eth0 | grep 'inet ' inet 10.10.10.10/24 brd 10.10.10.255 scope global dynamic eth0wolf@linux:~$ How do I count the number of spaces like this example. This sample is easy as it only has 4 spaces. What if it has more than that? Hundreds, thousands? Is there an easy way to do this? | You can use tr to delete everything that’s not the character you’re interested in, the wc to count the remaining characters: ip address show eth0 | grep 'inet ' | tr -d -c ' ' | wc -m This scales well to large amounts of text, tr is very efficient. Note however that with some implementations of tr including GNU tr , that only works properly for single-byte characters (such as the space character). If you only want to count leading spaces, you’ll need something a little more powerful than tr : ip address show eth0 | grep 'inet ' | sed 's/[^ ].*$//' | tr -d '\n' | wc -m This deletes every part of each line which is not leading space, then deletes newlines and counts. See How to count the number of a specific character in each line? if you’re interested in counts per line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/606644",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/409008/"
]
} |
606,665 | I'm trying to write the output of strace ls to a file. I know that I need to use > in order to forward output of a command to a file, but it doesn't work. It creates a file but the command prints the output of strace ls to stdout but writes the file name into the file. $ strace ls > ls_sys.txt...strace output...$ cat ls_sys.txtls_sys.txt | By default strace outputs to stderr. By simply typing man strace , you will have the full documentation of strace . In the manual page, it states that the -o option can be used to output it to a file instead of stderr. You can type man [insert command here] for the vast majority of programs, and have all the documentation you will need to effectively use them. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/606665",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/430235/"
]
} |
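Concretely, either of the following captures the trace into the file named in the question above; the first lets strace write the file itself, the second redirects stderr, which is where strace writes by default:

```sh
# Let strace write its output directly to a file
strace -o ls_sys.txt ls

# Same effect by redirecting stderr (strace's default output stream)
strace ls 2> ls_sys.txt
```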
606,667 | I was reading a book published in 2018 titled "Linux Basics for Hackers: Getting Started with Networking, Scripting, and Security in Kali" from no starch press . And this was written there that you can move up as many levels as you want using the corresponding number of double dots separated by spaces: You would use .. to move up one level You would use .. .. for two levels You would use .. .. .. to move up three levels, and so on. So, for example, to move up two levels, enter cd followed by two sets of double dots with a space in between. This is the page from the book: Was that ever working? It is not working in 2020. | This is an error in the book which the publisher addresses in the "Updates" section on the book's "homepage" ( https://nostarch.com/linuxbasicsforhackers#updates ): Updates Page 7 The following text regarding moving up through directory levels is incorrect: You would use .. to move up one level. You would use .. .. to move up two levels. You would use .. .. .. to move up three levels, and so on. This text should read: You would use .. to move up one level. You would use ../.. to move up two levels. You would use ../../.. to move up three levels, and so on. The errata does not mention the example that you also quote, which shows cd .. .. , but this is obviously also wrong. Some shells support a cd command with two arguments, where the second argument replaces whatever matches the first argument in the pathname of the current working directory, and the resulting pathname is changed into. But the pathname of current directory, as found by pwd and in $PWD , would not contain .. , and even if it did, the cd .. .. command would not change directory at all (given the semantics that I just described). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/606667",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/430422/"
]
} |
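For reference, the two-argument cd mentioned at the end of the answer above behaves like a substitution on $PWD in shells that implement it (zsh and ksh, for instance); the directories below are purely hypothetical and the substituted target must actually exist:

```sh
# In zsh or ksh: `cd OLD NEW` replaces OLD with NEW in the current directory name
cd /home/user/project/src    # hypothetical starting point
cd project archive           # now in /home/user/archive/src, if that directory exists
```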
606,702 | Please note this is a FreeBSD question and not a Linux question. Please don't answer with how it would be done on Linux or systemd or any of that. I have a situation where memcached is crashing. It's not that repeatable and I'll eventually figure it out. In the meantime, I need to ensure that memcached is running. If it's not, I need to restart it. It is installed via pkg and starts via /usr/local/etc/rc.d/memcached . There are a few choices. I could write a watchdog script and invoke it every like 10 minutes or something via cron . Kinda ugly, but would work. Main thing here is that I need to go write that script. Calling service memcached status , evaluate the result, maybe call service memcached start . I know how to write that, but it seems clunky. I'd rather just use a mechanism that already exists. I could write a do ... until loop script. Then I could modify /usr/local/etc/rc.d/memcached . But I want to keep files that were installed by the package pristine. I don't want to perpetuate my changes each time I upgrade the package. I drop a script into /usr/local/etc/periodic.d/hourly and have it invoked by periodic(8) . Is there some easy, FreeBSD-native mechanism that I'm not thinking of to keep processes running? Or am I just overthinking it and I should just go write my 8 line script and start calling it from cron ? | What you're looking for is called a supervisor . I don't think FreeBSD comes with one out of the box. But there are some in the ports. I see at least; supervisord is available as a port called py-supervisor (the port has several flavors, install with pkg install py37-supervisor or whatever matches your Python version). daemontools is available as a port . Monit is available as a port . FSCD is available as a port called fsc . I suggest supervisord. Install the package and add a stanza to /usr/local/etc/supervisord.conf : [program:memcached]command=/usr/local/etc/rc.d/memcached To run supervisord at boot time, edit /etc/rc.conf or /etc/rc.conf.local to have the line supervisord_enable="YES" Whichever supervisor you choose, make sure to disable the direct starting of memcached . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/606702",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/430252/"
]
} |
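For completeness, the "8 line script" called from cron that the question above contemplates would look roughly like the sketch below on FreeBSD; a real supervisor is still the better answer, since cron only gives minute-level granularity:

```sh
#!/bin/sh
# memcached_watchdog.sh (hypothetical name) - run from cron, e.g. every 10 minutes.
# Uses the rc script's own status check and restarts the service if it is down.
if ! service memcached status > /dev/null 2>&1; then
    logger "memcached watchdog: service not running, restarting"
    service memcached start
fi
```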
606,892 | Can a process background itself? What would be a Perl and/or C implementation of this? | I just want to answer the literal question here: can a process background itself , as opposed to fork itself, continue execution in the child and exit so that the process waiting for it can resume execution already covered by others here. A note on terminology first here. Backgrounding is usually referring to job control in interactive shells. Like when you run a command with & appended. Or press Ctrl + Z and run bg afterwards. Jobs there are not processes they are process groups . When you run: $ ps -ej | grep -w "$$" & echo "$!"35131 19152 19152 19152 pts/0 00:00:00 zsh 35130 35130 19152 pts/0 00:00:00 ps 35131 35130 19152 pts/0 00:00:00 grep It's the ps | grep job (here implemented via that 35130 process group whose leader is the process running ps but also contains the process running grep ) that is put in background. Backgrounding here means: The shell is not waiting for the termination of that job. You will return to the prompt and will be able to enter more commands while that job is still running. The terminal device driver is told that that process group is not in foreground. As such, processes in that process group don't have the right to read from the terminal (all processes in the group will be interrupted if any process in it tries to read from the terminal) and it won't get affected by ^C / ^Z / ^\ , etc. The shell enters the job in its internal job table and records it as being currently in background. Now the backgrounding wording is sometimes used outside of terminal job control. When you do: cmd1 | cmd2 & pid=$!somecommandwait in a script, there is no job control. If that script is started in a terminal by an interactive shell, it will be itself put in either foreground or background depending on how the script was started like any other command. but that cmd1 | cmd2 in the script will not be put in background by the shell interpreting the script as it's not an interactive shell. If you press ^C, cmd1 and cmd2 will be killed just the same alongside the shell running the script, and suspended when you press ^Z. Instead of saying that cmd1 | cmd2 is started in background, it's more correct to say that they are being run asynchronously . Compared to a job started in background in an interactive shell, only 1 is done. No process group is created to run that pipeline. Now, with that clarified, can a process put itself in background? As it's jobs and not processes that are put in background, the questions for that process would be: am I in a session attached to a terminal? (would job control even make sense?) am I alone in my process group or are there other processes? is my process group in foreground or background already? is my process group ultimately being waited for by an interactive shell? If we can answer yes to all those, that is if we're being called as a simple command (not as part of pipeline or compound command) from an interactive shell in a terminal, then we would need to (1) tell the waiting shell to stop waiting for us, (2) tell the terminal that its process group is no longer the foreground one and (3) tell the shell to update its job table to record the fact that we're now in background. You can't really tell another process to stop waiting for you other than by terminating or being suspended in which case that process will receive a SIGCHLD signal or the wait*() call it's currently doing will return. 
However, you can suspend yourself by sending yourself the SIGTSTP signal (same sent when you press ^Z ) or SIGSTOP (which cannot be intercepted), in which case, all of (1), (2) and (3) will happen automatically, except that the job state will be suspended instead of running in background . Now, since you're suspended, you're no longer running and cannot resume yourself. You could however fork a child process that will resume yourself (by sending SIGCONT to your pid) in a little while prior to suspending yourself. When you resume execution, your shell will receive a SIGCHLD again, and (3) where the shell realises that you're now running in background will happen when it gets to process that signal. As an example, implementing that in sh : $ sh -c 'echo running in foreground; sleep 1 (sleep 1; echo resuming my parent; kill -s CONT "$$") & echo stopping; kill -s STOP "$$" echo resumed sleep 30 echo finished'; echo "$?"running in foregroundstopping147zsh: suspended (signal) sh -c$ resuming my parentresumed$ jobs[1] + running sh -c$ finished[1] + done sh -c It's also possible to suspend your whole job with kill(0, SIGSTOP) ( kill -s STOP 0 in sh ), but is it right for a process to do that, to affect the run flow of processes it hasn't started and it doesn't know about? sh -c 'echo running in foreground perl -MPOSIX -le "setpgid 0,0; # leave the process group before it is suspended sleep 2; print q(resuming the process group of my parent); kill q(CONT), - shift@ARGV " "$(ps -o pgid= -p "$$")" & sleep 1 echo stopping my process group; kill -s STOP 0 echo process group resumed sleep 30 echo finished' | cat | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/606892",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
607,020 | I need to check if my entire files contained only 4 characters; "A", "T", "G" and "C".I used to split the characters using sed and then grep -o and -v to exclude the targeted characters for checking. Is there any simple and straight forward way to do this in linux? Using sed / awk / grep? (There seemed to be suggestion on this related questions but they were including the whole texts in the command. My file size is too big for this.) For example, there are four lines in the input file, with possibility of other characters existing in the line (other than ATGC). I would like to detect the odd characters and show the odd characters together with the number of line they are in, if possible. Input: ATTGTAAGGTAAGTGGATTYTCCGGGRETCTTVGGATCGTTGACCAGTKGCCCGGGCCGGTCCTTTGGTGCGTGGGGCTCTCCCAACCCCCCCACCCTCGACCTGAGCTCAGGCXC Desired Output: 1:Y1:R1:E2:V2:K4:X | -n Prefix each line of output with the 1-based line number. -o Print only the matched parts. [^ATGC] exclude characters. grep -no '[^ATGC]' file | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/607020",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/345181/"
]
} |
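The same character class used above also answers the simpler question "does this file contain anything odd at all?" via grep's exit status, which is handy in scripts (the file name here is just a placeholder):

```sh
# -q suppresses output; the exit status alone says whether a non-ATGC character exists
if grep -q '[^ATGC]' input.txt; then
    echo "input.txt contains characters other than A, T, G and C"
else
    echo "input.txt contains only A, T, G and C"
fi
```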
607,037 | How to use random ip address in Curl request,I'm using this code in Curl But return my local ip from http://ifconfig.me printf "%d.%d.%d.%d\n" "$((RANDOM % 256))" "$((RANDOM % 256))" "$((RANDOM % 256))" "$((RANDOM % 256))" in Curl curl --header 'X-Forwarded-For: printf "%d.%d.%d.%d\n" "$((RANDOM % 256))" "$((RANDOM % 256))" "$((RANDOM % 256))" "$((RANDOM % 256))"' http://ifconfig.me | You're quoting incorrectly. You can try: curl --header "X-Forwarded-For: $(printf "%d.%d.%d.%d" "$((RANDOM % 256))" "$((RANDOM % 256))" "$((RANDOM % 256))" "$((RANDOM % 256))")" You can get rid of the inner quotes and even the printf entirely: curl --header "X-Forwarded-for: $((RANDOM % 256)).$((RANDOM % 256)).$((RANDOM % 256)).$((RANDOM % 256))" However, whether the target site accepts the X-Forwarded-For is another matter entirely. Setting this header does not actually hide your own IP from the target site. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/607037",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/430601/"
]
} |
607,128 | The shell standard output redirects to the last line of the file, is there a way to write it to the first line of the file? Since the content of stdout is unpredictable, I suspect that this may require changing the default behavior of stdout, but I am not sure if it is feasible. Example, redirecting a timestamp to file echo `date` >> test.txt Default save to last line of file Mon Aug 31 00:40:27 UTC 2020Mon Aug 31 00:40:28 UTC 2020Mon Aug 31 00:40:29 UTC 2020Mon Aug 31 00:40:30 UTC 2020 Desired effect, save the output to the first line of the file Mon Aug 31 00:40:30 UTC 2020Mon Aug 31 00:40:29 UTC 2020Mon Aug 31 00:40:28 UTC 2020Mon Aug 31 00:40:27 UTC 2020 Thanks in advance! | To write the date to the beginning instead of the end of file , try: { date; cat file; } >file.new && mv file.new file Discussion Adding new text to the beginning of a file requires the whole file to be rewritten. Adding new text to the end of a file only requires writing the new text. Andy Dalton 's suggestion of just appending to the end of a file like normal and then using tac file to view the file is a good one. echo `date` >> test.txt can be replaced by a simpler and more efficient date >> test.txt . If one is using bash 4.3 or later then, as Charles Duffy points out, printf '%(%c)T\n' -1 >>test.txt is still more efficient. The spaces around the curly braces are essential. This is because { and } are shell reserved words (as opposed to shell keywords which do not require spaces). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/607128",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/374064/"
]
} |
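If the moreutils package happens to be installed, sponge removes the need to name a temporary file explicitly; the whole file is still rewritten, exactly as in the approach above:

```sh
# sponge reads all of its input before opening the output,
# so reading and overwriting the same file in one pipeline is safe
{ date; cat file; } | sponge file
```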
607,221 | With normal syslog I can go to /var/log and run tail -F *log if I am not sure which log something is logged in. Is there an equivalent for systemd ? Background I am trying to debug a server. It crashes without leaving a trace. I am hoping that using the systemd version of tail -f *log that I can see log messages that are logged (but not yet written to disk) when the server crashes. | What you want to use is the journalctl command. For example, if I want to get updated log entries on the service vmware, I would run this (f = follow, u = unit/service name): journalctl -f -u vmware.service Here's how you can get the full system journal. I use this command for my updated system logs (f = follow, x = Add message explanations where available, b = since boot): journalctl -fxb --no-hostname --no-full | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/607221",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2972/"
]
} |
607,333 | The GNU sed manpage says: The -E option switches to using extended regular expressions instead; it has been supported for years by GNU sed, and is now included in POSIX. However, POSIX Issue 7 (2018) sed doesn't list -E as an option . Where can POSIX draft standards be viewed? | Drafts are only available to Austin Group members , but the information is publicly available in the Austin Group bug tracker: sed -E is queued for issue 8 . (Joining the Austin Group only requires signing up to the mailing list .) So the manpage is only slightly ahead of itself... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/607333",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
607,358 | I am moving from iptables to nftables. I have a basic questions about the packet processing order in nftables. Since one can create multiple tables of same type, say inet, and also chains can be created inside each table with different or the same priority, what will be the processing order. For example, if I create following, what will be the order. table inet t1 { chain INPUT { type filter hook input priority 20; policy accept; ... }}table inet t2 { chain INPUT { type filter hook input priority 20; policy accept; ... }} while I understand that chains are hooked to different inputs, I yet to understand the logic behind having different tables. Apology if is a stupid or basic question | The ordering in the example will be undefined , but both chains will be traversed (unless for example the packet gets dropped in the first chain seen). Netfilter and the Network/Routing stack provide the ordering Here's the Packet flow in Netfilter and General Networking schematic: While it was made with iptables in mind, the overall behaviour is the same when applied to nftables with minor differences (eg: no separation between mangle and filter , it's all filter in nftables with the exception of mangle/OUTPUT which should probably be translated into type route hook output , or most of the bridge mingling between ebtables and iptables seen in the lower part UPDATE: doesn't exist with nftables exists but should be avoided by using nftables directly in the bridge family directly, and using kernel >= 5.3 if conntrack features are needed there (and by not using the kernel module br_netfilter at all). Role of tables A table in nftables is not equivalent to a table in iptables : it's something less rigid. In nftables , the table is a container to organise chains, set and other kinds of objets, and limit their scope. Contrary to iptables It's perfectly acceptable and sometimes required to mix different chain types (eg: nat, filter, route) in the same table: for example that's the only way they can access a common set since it's scoped to the table and not global (like would be iptables ' companion ipset ). Then it's also perfectly acceptable to have multiple tables of the same family including again the same kind of chains, for specific handling or to handle specific traffic: there's no risk of altering rules in an other table when changing the contents of this table (though there's still the risk of having clashing effects as an overall result). It helps managing rules. For example the nftlb load-balancer creates tables (in various families) all named nftlb , intended to be managed only by itself and not clashing with other user-defined tables. Ordering between hooks and within hooks In a given family (netdev, bridge, arp, ip, ip6), chains registered to different hooks (ingress, prerouting, input, forward, output, postrouting) are ordered from the hook order provided by Netfilter as seen in the schematic above. Priority's scope is limited to the same hook and doesn't matter here. For example type filter hook prerouting priority 500 still happens before type filter hook forward priority -500 in the case of a forwarded packet. Where applicable, for each possible hook of a given family, each chain will be competing with other chains registered at the same place. The tables play no role here, except defining the family. As long as the priority is different, within a given hook type, a packet will traverse chains within this hook from the lowest priority to the highest. 
If exactly the same priority is used for two chains of the same family and hook type, order becomes undefined. When creating chains, will the current kernel version add the chain before or after a chain with the same priority in the corresponding list structure? Will the next kernel version still keep the same behaviour or will some optimization change this order? It's not documented. Both hooks will still be called, but the order they are called in is undefined. How could this matter? Here's as quote from man page below, just to clarify that a packet can be accepted (or not) multiple times in the same hook: accept Terminate ruleset evaluation and accept the packet. The packet canstill be dropped later by another hook, for instance accept in theforward hook still allows to drop the packet later in the postroutinghook, or another forward base chain that has a higher priority numberand is evaluated afterwards in the processing pipeline. For example if one chain accepts a certain packet, and the other chain drops this same packet, the overall result will always be a drop . But one hook might have done additional actions leading to side effects: for example it could have added the packet's source address in a set and the other chain called next have dropped the packet. If the order is reversed and the packet is dropped first, this "side effect" action will not have happened and the set will not have been updated. So one should avoid using the exact same priority in this case. For other cases, mostly when no drop happens, this would not matter. One should avoid using the same priority unless knowing it won't matter. Relation to other networking subsystems Within a hook, all the integer range is available to choose the order, but some specific thresholds do matter. From nftables ' wiki , Here are the legacy iptables hook values valid for the ip family, which also include other subsystems: NF_IP_PRI_CONNTRACK_DEFRAG (-400) : priority of defragmentation NF_IP_PRI_RAW (-300) : traditional priority of the raw table placed before connection tracking operation NF_IP_PRI_SELINUX_FIRST (-225) : SELinux operations NF_IP_PRI_CONNTRACK (-200) : Connection tracking operations NF_IP_PRI_MANGLE (-150) : mangle operation NF_IP_PRI_NAT_DST (-100) : destination NAT NF_IP_PRI_FILTER (0) : filtering operation, the filter table NF_IP_PRI_SECURITY (50) : Place of security table where secmark can be set for example NF_IP_PRI_NAT_SRC (100) : source NAT NF_IP_PRI_SELINUX_LAST (225) : SELinux at packet exit NF_IP_PRI_CONNTRACK_HELPER (300) : connection tracking at exit Of those only a few really matter: those not coming from iptables . For example (non-exhaustive) in the ip family: NF_IP_PRI_CONNTRACK_DEFRAG (-400) : for a chain to ever see incoming IPv4 fragments, it should register in prerouting at a priority lower than -400. After this only reassembled packets are seen (and rules checking for the presence of fragments never match). NF_IP_PRI_CONNTRACK (-200) : for a chain to act before conntrack or nat it should register in prerouting or in output at a prority lower than -200. Example, register at priority NF_IP_PRI_RAW (-300) (or any other value < -200 but still > -400 if one want to match the port in all cases) to add a notrack statement to prevent conntrack to create a connection entry for this packet. So the nftables equivalent of iptables ' raw/PREROUTING is just filter prerouting with an adequate priority. 
Misc I left other families and special cases out: for example the inet family registers within ip and ip6 families' hooks at the same time. Or the type nat which might behave differently when a NAT rule matches (it might not traverse again other nat chains of the same hook, I'm not completely sure and it might depend on kernel version) and is really dependent on conntrack (eg: prerouting at priority -200) and at least since kernel 4.18 competes only with other nat type chains, not with chains of an other type (it will always be seen at priority -200 for type filter chains). When also using iptables-legacy (or iptables-nft ) all this still applies, and priority choices can matter. NAT rules from both iptables-legacy and nftables shouldn't be mixed with a kernel < 4.18 or undefined behaviour can happen (eg: one chain will handle all the NAT, the other won't be able to, but the first subsystem to register, rather than the lowest priority chain, wins). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/607358",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/359152/"
]
} |
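To make the "undefined order" point above concrete: giving the two chains distinct priority values restores a deterministic traversal order. A hypothetical ruleset built with the nft command, using the same family and hook as the question:

```sh
# Distinct priorities within the same hook give a defined order:
# the chain with the lower value (-10) is traversed before the higher one (10)
nft add table inet t1
nft 'add chain inet t1 input_early { type filter hook input priority -10; policy accept; }'
nft add table inet t2
nft 'add chain inet t2 input_late { type filter hook input priority 10; policy accept; }'
```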
607,448 | In less , is there a way or trick to quickly count the number of matches instead of pressing N repeatedly and counting the matches manually? | I don't think there's a direct method, but you can hack your way around. The following command will pipe everything from the first line on the screen to the end of the file to grep -c ... | less , opening a new instance of less to show the output of grep , which will be the number of lines matching the pattern: g|$ grep -c <pattern> | less When you quit this less , you'll be back to the first less . Other tricks: &pattern and then pipe to wc -l using g|$ like above, to use less 's pattern matching jump a number of matches (e.g., do 10n x times until it fails, then proceed by y single steps to get 10x+y matches). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/607448",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/359768/"
]
} |
607,450 | Is there any difference between bash 's $HOSTNAME and zsh 's $HOST ? If no, is there a historical reason for bash to chose the $HOSTNAME variable when tcsh and zsh use $HOST ? | I don't think there's a direct method, but you can hack your way around. The following command will pipe everything from the first line on the screen to the end of the file to grep -c ... | less , opening a new instance of less to show the output of grep , which will be the number of lines matching the pattern: g|$ grep -c <pattern> | less When you quit this less , you'll be back to the first less . Other tricks: &pattern and then pipe to wc -l using g|$ like above, to use less 's pattern matching jump a number of matches (e.g., do 10n x times until it fails, then proceed by y single steps to get 10x+y matches). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/607450",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
607,485 | The command sudo dd if=/dev/sdb | pigz -c | sudo tee /sdb.img.gz (omitted sudo in the title) prints binary data to console either the output of dd or pigz . I'm wondering why since all output are caught in a pipe | and the last in the chain is redirect to file. So, there's no "leak" to stdout. What am I not getting here? I'm in bash on Ubuntu 20.04 with the shipped versions of the commands. | tee duplicates its input, sending it (in your case) to its standard output and /sdb.img.gz . You can redirect its output to avoid seeing the output on your console: sudo dd if=/dev/sdb | pigz -c | sudo tee /sdb.img.gz > /dev/null I would run pigz directly as root instead, avoiding dd and tee : sudo sh -c 'pigz -c < /dev/sdb > /sdb.img.gz' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/607485",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63502/"
]
} |
607,524 | Softlinks are easily traceable to the original file with readlink etc... but I am having a hard time tracing hardlinks to the original file. $ ll -i /usr/bin/bash /bin/bash 1310813 -rwxr-xr-x 1 root root 1183448 Jun 18 21:14 /bin/bash* 1310813 -rwxr-xr-x 1 root root 1183448 Jun 18 21:14 /usr/bin/bash* ^ above is as expected - cool --> both files point to same inode 1310813 (but the number of links, indicated by ^ , shows to be 1. From Gilles answer the reason for this can be understood) $ find / -samefile /bin/bash 2>/dev/null/usr/bin/bash above is as expected - so no problems. $ find / -samefile /usr/bin/bash 2>/dev/null/usr/bin/bash above is NOT cool. How do I trace the original file or every hardlink using the /usr/bin/bash file as reference? Strange - below did not help either. $ find / -inum 1310813 2>/dev/null/usr/bin/bash | First, there is no original file in the case of hard links; all hard links are equal. However, hard links aren’t involved here, as indicated by the link count of 1 in ls -l ’s output: $ ll -i /usr/bin/bash /bin/bash 1310813 -rwxr-xr-x 1 root root 1183448 Jun 18 21:14 /bin/bash* 1310813 -rwxr-xr-x 1 root root 1183448 Jun 18 21:14 /usr/bin/bash* Your problem arises because of a symlink, the bin symlink which points to usr/bin . To find all the paths in which bash is available , you need to tell find to follow symlinks, using the -L option: $ find -L / -xdev -samefile /usr/bin/bash 2>/dev/null/usr/bin/rbash/usr/bin/bash/bin/rbash/bin/bash I’m using -xdev here because I know your system is installed on a single file system; this avoids descending into /dev , /proc , /run , /sys etc. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/607524",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/243342/"
]
} |
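The link count discussed above can also be read directly with GNU stat; both paths resolve to the same inode because of the /bin -> usr/bin symlink, yet the count stays at 1:

```sh
# %h = number of hard links, %i = inode number, %n = file name
stat -c '%h %i %n' /bin/bash /usr/bin/bash
# Both lines show the same inode and a link count of 1: a single file reached
# through the /bin -> usr/bin symlink, not through a second hard link
```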
607,610 | From my own experience, i noticed that most people use /tmp/ for temp files or to save on disk write, but i don't often see anyone recommending or even using /dev/shm instead. Why is that? | /dev/shm is intended to be used for shared memory segments that live in the file system. There are two types of shared memory: SysV shared memory and POSIX shared memory. POSIX shared memory uses named segments created via shm_open , and these typically live in the file system under /dev/shm , which is usually a tmpfs. /dev/shm is usually mounted nosuid and noexec . /tmp is intended to be used for temporary files. On some systems, it is a tmpfs, but on many systems it is backed by disk. On many systems, it is possible to run programs from /tmp because it is not marked noexec . The system administrator may have sized it appropriately (either larger or smaller) to fit the needs of the particular system. /var/tmp is like /tmp , but the latter is usually cleared on boot while the former is not. The latter should be used in most cases, but if a temporary file needs to persist for a longer period of time, /var/tmp can be used. So there's no explicit reason why you physically cannot use /dev/shm for temporary files, but it isn't a typical use case or its intended purpose, and people won't expect it. If your goal is to write code that is easy to maintain and that works optimally across a variety of systems, it's best to follow the conventions unless you have a compelling reason to do otherwise. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/607610",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/409852/"
]
} |
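For the ordinary "temporary file in a script" case the answer above describes, the conventional pattern uses mktemp, which honours $TMPDIR and falls back to /tmp; a minimal sketch:

```sh
# Create a unique temporary file and guarantee cleanup when the script exits
tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT

printf 'scratch data\n' > "$tmpfile"
```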
607,695 | I am writing a shell script for docker containers. I want to check, before running my script, whether it's a docker container or the host machine, something like this: if $MACHINE=docker; then echo proceedelif $MACHINE=host; then echo 'it's not container' exitfi | You can check if any of the control groups belong to docker: if grep -q docker /proc/1/cgroup; then echo inside docker else echo on host exitfi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/607695",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/205721/"
]
} |
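Another widely used heuristic besides the cgroup check above is the /.dockerenv marker file that Docker creates at the container's root; combining the two makes the test somewhat more robust (note that on cgroup v2 hosts the /proc/1/cgroup paths may not mention docker at all, and neither check is guaranteed by any standard):

```sh
if [ -f /.dockerenv ] || grep -q docker /proc/1/cgroup 2>/dev/null; then
    echo "proceed: running inside a Docker container"
else
    echo "it's not a container"
    exit 1
fi
```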
607,704 | In a shell script I'd like to execute the users shell and continue the script once the shell finishes. It should look something like this: > myScript.shScript ouput> echo "shell started by myScript.sh"shell started by myScript.sh> exitMore script output> It works when I execute a shell in the script: echo "Script output"bashecho "More script output" But I'd like it not to use a fixed shell. The users login shell or the shell he was in before he started myScript.sh whould be fine. Any solution must work not only on Linux-based systems but also Mac OSX | You can check if any of the control groups belong to docker: if grep -q docker /proc/1/cgroup; then echo inside docker else echo on host exitfi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/607704",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/431259/"
]
} |
607,849 | Suppose I have an Ubuntu machine and did sudo apt install docker.io . Did I just put a huge security hole in my machine that will allow anyone with terminal access to escalate into root? I ask that because I can mount any directory as a volume inside a container, escalate to root inside the container, and it seems I can do whatever I want in the mounted volume. All this seems so wrong that I feel I am certainly missing something. Or it is simple as that, and there really is a "sudo" replacement in Ubuntu's repository that doesn't ask for password? | Not quite, you’re allowing anyone in the docker group to escalate into root. This is described in /usr/share/doc/docker.io/README.Debian : As noted in the upstream documentation ( https://docs.docker.io ), Docker willallow non-root users in the "docker" group to access "docker.sock" and thuscommunicate with the daemon. To add yourself to the "docker" group, usesomething like: adduser YOURUSER docker As also noted in the upstream documentation, the "docker" group (and any othermeans of accessing the Docker API) is root-equivalent. If you don't trust auser with root on your box, you shouldn't trust them with Docker either.If you are interested in further information about the security aspects ofDocker, please be sure to read the "Docker Security" article in theupstream documentation: https://docs.docker.com/engine/security/security/ If you’re interested in being able to run (most) OCI containers without requiring root-equivalent privileges, take a look at Podman . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/607849",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60040/"
]
} |
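The root-equivalence described above is easy to demonstrate, which is exactly why membership of the docker group should be handed out as carefully as sudo rights; a sketch assuming the alpine image is available locally or can be pulled:

```sh
# Any user in the docker group can bind-mount the host's / and chroot into it,
# obtaining a root shell over the host's filesystem without any password prompt
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
```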
607,864 | I have these aliases in my ~/.bashrc alias grep='grep --color=auto -H'alias fgrep='fgrep --color=auto -H'alias egrep='egrep --color=auto -H' but they have no effect when I run find ... -exec grep ... , and I always have to provide those options manually. Is there a way to tell find to rely on aliases in the -exec option's arguments? I'm thinking of configuration files, rather than other aliases. Would it be unsafe in some way? | Not quite, you’re allowing anyone in the docker group to escalate into root. This is described in /usr/share/doc/docker.io/README.Debian : As noted in the upstream documentation ( https://docs.docker.io ), Docker willallow non-root users in the "docker" group to access "docker.sock" and thuscommunicate with the daemon. To add yourself to the "docker" group, usesomething like: adduser YOURUSER docker As also noted in the upstream documentation, the "docker" group (and any othermeans of accessing the Docker API) is root-equivalent. If you don't trust auser with root on your box, you shouldn't trust them with Docker either.If you are interested in further information about the security aspects ofDocker, please be sure to read the "Docker Security" article in theupstream documentation: https://docs.docker.com/engine/security/security/ If you’re interested in being able to run (most) OCI containers without requiring root-equivalent privileges, take a look at Podman . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/607864",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164309/"
]
} |
608,116 | I recently installed ubuntu 20.04 and bluetooth seemed to work out-of-the-box. Yesterday, it stopped working with no known reason. I can turn it ON but the settings still show it to be OFF. I tried the following: $ sudo -i$ rfkill list0: phy0: Wireless LAN Soft blocked: no Hard blocked: no3: hci0: Bluetooth Soft blocked: no Hard blocked: no and on running bluetoothctl , Agent registered[bluetooth]# power offNo default controller available[bluetooth]# power onNo default controller available[bluetooth]# exit What could be the problem and how to tackle it ? | I tried various hacks (all at once) and did a restart but I am not sure which led to the bluetooth working right. I ran sudo apt-get updatesudo apt upgradesudo systemctl start bluetoothsudo rfkill unblock bluetooth # rfkill also requires sudo And after the restart, it worked :? | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/608116",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/396778/"
]
} |
608,207 | I use tar -cJvf resultfile.tar.xz files_to_compress to create tar.xz and tar -xzvf resultfile.tar.xz to extract the archive in current directory. How to use multi threading in both cases? I don't want to install any utilities. | tar -c -I 'xz -9 -T0' -f archive.tar.xz [list of files and folders] This compresses a list of files and directories into an .tar.xz archive. It does so by specifying the arguments to be passed to the xz subprocess, which compresses the tar archive. This is done using the -I argument to tar, which tells tar what program to use to compress the tar archive, and what arguments to pass to it. The -9 tells xz to use maximum compression. The -T0 tells xz to use as many threads as you have CPUs. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/608207",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/431721/"
]
} |
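For the extraction half of the question, a minimal sketch along the same lines (assuming GNU tar; multi-threaded decompression additionally needs xz 5.4 or newer and an archive that was created in multi-threaded mode):
tar -x -I 'xz -d -T0' -f archive.tar.xz
Plain tar -xf archive.tar.xz also works, but then tar invokes a single-threaded xz -d; -I (--use-compress-program) is what lets you pass -T0 through, exactly as on the compression side.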
608,215 | I want to switch to the KDE desktop environment. But my distro uses gnome by default. Now if i run the apt full-upgrade command, the default DE that comes with it is Gnome and not KDE. Wouldn't that also put back Gnome and i would have to uninstall Gnome manually again, because the repository contains Gnome? 1.) How do i stop Gnome from installing when i do a apt full-upgrade? (Since i don't want Gnome) 2.) How do i go about managing my KDE package (i.e updating it). Do i also do a apt-mark hold on the KDE package just to "prevent any potential tamper" whenever i do apt full-upgrade? and then just apt-get update && apt-get upgrade KDE commands to update it? | tar -c -I 'xz -9 -T0' -f archive.tar.xz [list of files and folders] This compresses a list of files and directories into an .tar.xz archive. It does so by specifying the arguments to be passed to the xz subprocess, which compresses the tar archive. This is done using the -I argument to tar, which tells tar what program to use to compress the tar archive, and what arguments to pass to it. The -9 tells xz to use maximum compression. The -T0 tells xz to use as many threads as you have CPUs. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/608215",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/431688/"
]
} |
608,260 | I have two data files one.txt and two.txt on a Linux system. I want to convert all positive values to negative in one.txt and vice versa for two.txt , but only on the first column. Please note that data contains zeroes also. one.txt is like this: 10 35.74 8 35.74 6 35.74 4 35.74 2 35.74 0 35.74 two.txt like this -20 35.74 -18 35.74 -16 35.74 -14 35.74 -12 35.74 -0 35.74 I want to change one.txt like this: -10 35.74 -8 35.74 -6 35.74 -4 35.74 -2 35.74 -0 35.74 And similarly, I want to change all negative values to positive in two.txt I tried awk '$1 *= -1' file.txt but it messed-up zero. I need to execute these two problems in two different processes. So, I prefer two different solutions/codes for each instead of doing in one step | A slight variant of your attempt will work: awk '{ $1 *= -1 } 1' file.txt The issue with awk '$1 *= -1' is that $1 *= -1 is taken as a condition; its effect is applied, but then taken as a condition, so the line which was processed is output only if the result is non-zero. Adding braces causes the multiplication to be applied to all lines, and the 1 on its own causes all lines to be printed. To avoid changing empty lines, you can add a condition to the multiplication: awk 'NF { $1 *= -1 } 1' file.txt If you’re using Gawk¹, you can check that the first column is a number before changing it: awk 'typeof($1) == "strnum" { $1 *= -1 } 1' file.txt ¹ and there's no POSIXLY_CORRECT variable in the environment | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/608260",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/429787/"
]
} |
608,267 | Let's say I have the same ASCII text file on Linux and Windows: onetwothree The two files will have \n and \r\n respectively as the EOL character on the two OSes. Does this mean that the file on Linux is smaller? This test, performed on Linux, seems to say yes: $ echo -en 'one\ntwo\nthree\n' | wc --bytes 14$ echo -en 'one\r\ntwo\r\nthree\r\n' | wc --bytes 17 | Your test is correct, albeit strictly speaking limited — it only shows that on Linux, the strings produced by your echo commands occupy respectively 14 and 17 bytes as measured by wc --bytes . Each \n and \r occupies one byte, so each newline incurs a one-byte penalty when using DOS/Windows-style newlines. Strictly speaking, the storage requirements depend on the program you use to write the file; neither Linux nor Windows imposes anything on the contents of the file. It is possible to store files with CRLF newlines on Linux, and files with LF newlines on Windows. To determine the storage requirements of your file on either operating system, you should write it using whatever tool you aim to use, on both operating systems, and measure the file’s size using the operating system’s tools. Note that files typically use storage in multiples of a certain unit of storage, so the variation related to newlines might not have a practical impact. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/608267",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164309/"
]
} |
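To measure the same overhead on a real file instead of an echo pipeline, counting the carriage returns gives the exact number of extra bytes the CRLF convention costs; a small sketch (file is a placeholder name):
wc -c < file                # total size in bytes
tr -cd '\r' < file | wc -c  # number of CR bytes, i.e. the CRLF overhead
Tools such as dos2unix and unix2dos can convert a file between the two conventions if the difference matters.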
608,316 | I was asked to clarify the question. I'm not asking about any specific program's behavior, but I used ffmpeg as an example of the behavior I'm asking about. To restate the question: When a program's stdout is piped to another program, how does that program also produce terminal output. Is stderr the only option in terms of output streams? Original question: I'm a long-time Windows developer slowly learning my way around Linux to support various electronics and programming hobbies. I'm working with some headless Raspberry Pi's, so I only interact with them through an ssh terminal. I had thought terminal output was just stdout , and indeed when I control a child process launched from a .NET Core program via the Process class, the terminal output is what my program receives when it intercepts the stdout stream. But I need to launch bash so that I can pipe ffmpeg to VLC, and I realized when I do this from the terminal "by hand", ffmpeg writes processing details to the terminal while simultaneously piping data to VLC. I had thought a command-line pipe redirects stdout to another program's stdin . I was having trouble with bash (it doesn't seem to pass stdin data from my program to the programs launched with the bash -c "ffmpeg ... | cvlc ..." switch), so I was considering using two Process instances and handling the pipe that way. Then it occurred to me to wonder about the relationship of terminal output versus pipe output, and what's really going on behind the scenes. Edit: When I wrote that, I forgot that I typically capture and output both stdout and stderr using the Process class. Under Windows, stderr is rarely used in my experience, is it perhaps routine in Unix to use stderr for non-error terminal output? Just a guess... | There are two different approaches for a program to send output that is separate from its standard output. One is to output to standard error, and as you suspect, this might be more common on Unix-style environments than on Windows; see Do progress reports/logging information belong on stderr or stdout? for some discussion of this. Standard error can be redirected; see What are the shell's control and redirection operators? The other is to output directly to the terminal the program is running on, if any, by using /dev/tty . See How does `less` take data from stdin while still be able to read commands from user? for a discussion of this (on input, but similar aspects apply to output). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/608316",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/420408/"
]
} |
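A small interactive demonstration of the two mechanisms described above; with stdout redirected away, only the stderr and /dev/tty messages still reach the terminal:
{ echo 'via stdout'; echo 'via stderr' >&2; echo 'via /dev/tty' > /dev/tty; } > /dev/null
Only 'via stderr' and 'via /dev/tty' are printed, which is how a program like ffmpeg can show progress while its stdout is piped into another program. (Run it from a terminal; /dev/tty needs a controlling terminal.)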
608,327 | I often run commands like this: find … -exec grep … --color=always -l {} \+ and sometimes I need to open the matching files in Vim. But what is the most reliable way to do so? One way seems to be vim $(find … -exec grep … {} \+) Another way seems to make use of xargs . Are there advantages/disadvantages/concerns to be aware of for these two methods, and others, if any? | vim $(find path/ -exec grep -l 'pattern' {} +) is an unquoted command substitution, so word splitting will be performed on whitespace on its result, as well as pathname expansion. I.e., if a file a b matches, Vim will incorrectly open a and b . If a file * matches, alas, that will be expanded to every file in the corresponding directory. An appropriate solution is find path/ -type f -exec grep -q 'pattern' {} \; -exec vim {} + Grep runs in quiet mode: Only its return value is used for each file. If 0 , a match was found in that file and the file is passed on to Vim. {} \; means one file will be analysed at a time by Grep. If we used {} + , all files would be passed as arguments to Grep, and a found match in any of those files would result in a 0 exit status, so all those files would be opened in Vim. On the other hand, {} + is used for Vim so that each found file goes into one buffer in a single Vim process. You can try changing them to feel the difference. If you need to speed things up: If 'pattern' is not a regular expression, but only a fixed pattern, add the -F flag to Grep. grep -lZ , Xargs and special shell constructs should also speed up the process if you have those available, see Stéphane Chazelas' answer . And here are other similar use cases with Xargs, Find and Vim. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/608327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164309/"
]
} |
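For completeness, a sketch of the grep -lZ route the answer mentions, assuming GNU grep plus a GNU xargs recent enough to have -o/--open-tty (which reconnects Vim to the terminal):
find path/ -type f -exec grep -lZ 'pattern' {} + | xargs -r0 -o vim
grep -lZ emits the matching file names NUL-delimited, so names containing spaces or newlines survive, and all matches open as buffers in a single Vim process.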
608,483 | Hi, I have a small wrapper script for NetworkManager which works with dmenu. I want it to be as simple as possible and hopefully fully POSIX. I'm using shellcheck and it gives me a "warning" about this line: ...if [ -z "$(echo "$VAR" | grep "pattern")" ] && [ -z "$(grep -w $OtherVar ~/somefile)" ]... It (shellcheck) says that I should use grep -q instead of [ -z ] but after reading (and re-reading) both man pages for bash and grep it doesn't seem like grep -q is actually what I want to use, or is it? And how does grep -q actually compare to [ -z/-n] ? | Putting the condition in [ -z "$(echo "$VAR" | grep "pattern")" ] checks if the output from grep is empty or not. Using grep -q checks if grep matched anything. If you want to know if $var contains the regex $pattern , you can use if echo "$var" | grep -qe "$pattern"; then echo matchfi or if ! echo ... for the inverse case. That's strictly not the same as looking at the output of grep , since you might theoretically have a pattern that matches a zero-character string... (but then it would likely match on any input ever.) Note that there's no [ .. ] there, we're using that echo | grep pipeline directly as the condition. if command checks the exit status of command , which can be [ , or another command. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/608483",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/419050/"
]
} |
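Two related sketches. When the pattern is a simple glob rather than a regex, the echo | grep pipeline can be dropped entirely in favour of a pure-shell, POSIX case test; and for the file check, grep -q can be used directly (negated) instead of capturing output with [ -z ... ]:
case $VAR in
  *pattern*) echo match ;;
  *) echo no match ;;
esac
if ! grep -qw -- "$OtherVar" ~/somefile; then echo "not in somefile"; fi
The negated grep -q form mirrors the original [ -z "$(grep ...)" ] logic (true when nothing matches) without spawning a command substitution.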
608,538 | I know how to use 256 colors for text in terminal: printf "\033[38;5;196mhello\n" But for background color, I seem to be limited to the basic 8 colors only, ie: printf "\033[41mhello\n" How can I use 256 colors for background colors as well ? I mean, the terminal already know the colors, so it should be possible. But what is the syntax? In case it is relevant, I am using terminator as my terminal emulator, and zsh as my shell. | In zsh , you don't need to hardcode escape sequences as it has several builtin ways to set the background and foreground colours. You can use echoti setaf to set the terminal a nsi f oreground colour and echoti setab to set the b ackground one ( setaf and setab being the names of the corresponding t erm i nfo capabilities) Assuming your terminal supports 256 colours (as VTE-based ones such as your gnome-terminator do) and $TERM is correctly set to a value that identifies a terminfo entry with the right escape sequences for that, it should work. $ echoti setab 196 | sed -n l\033[48;5;196m$ Or you can use prompt expansion with print -P or the % parameter expansion flag and: $ print -rP '%K{196}' | sed -n l\033[48;5;196m$ (here sed -n l is used to reveal the corresponding escape sequence that is being sent, $ is just to show where the line ends, it's not part of the output, \033 is GNU sed 's l command's representation of the ESC character (with octal 033 byte value in ASCII)) Some terminals (including VTE-based ones such as your gnome-terminator) also support RGB specifications. On those, you could do $ print -rP '%K{#ffffff}' | sed -n l\033[48;2;255;255;255m$ (here with fffffff for bright white as that's ff the maximum value for all of the red, green and blue components). In that case, zsh hardcodes the xterm-style sequence (see there for the background) as there is no corresponding terminfo capability. Though not standard , that's currently the most widely supported across modern FLOSS terminal emulators. %K sets the background colour, %F for foreground. %k / %f restore the default colour. For terminals that don't support that but do support the 88 or 256 colour palette, zsh also has a zsh/nearcolor module to get you the colour nearest to that RGB specification: $ zmodload zsh/nearcolor$ echoti colors256$ print -rP '%K{#ffffff}' | sed -n l\033[48;5;231m$ (here colour 231 on my 256 colour terminal is the closest one to bright white, it is actually bright white). If you have access to the X11 rgb.txt file, you could also define associative arrays for each of the X11 colour names with something like: typeset -A X11_bg X11_fgwhile read -r r g b c; do [[ $r = [0-9]* ]] || continue printf -v hex %02x $r $g $b X11_fg[$c]=${(%):-%F{#$hex}} X11_bg[$c]=${(%):-%K{#$hex}}done < /etc/X11/rgb.txtX11_bg[default]=${(%):-%k} X11_fg[default]=${(%):-%f} (Debian-like systems have /etc/X11/rgb.txt as part of the x11-common package). To do things like: print -r "$X11_bg[dark olive green]text$X11_bg[default]" For more details, see: man 5 terminfo info zsh echoti info zsh print info zsh "Prompt Expansion" info zsh "The zsh/nearcolor Module" (beware that on some systems, you need to install a zsh-doc package or equivalent for the info pages to become available). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/608538",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155832/"
]
} |
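Outside zsh, the literal answer to the question is the 256-colour background escape 48;5;N, the background counterpart of the 38;5;N foreground sequence already being used; a sketch:
printf '\033[48;5;196mhello\033[0m\n'          # 256-colour background
printf '\033[48;2;255;128;0mhello\033[0m\n'    # 24-bit RGB background, on terminals that support it
The zsh echoti/%K forms above remain preferable in zsh because they go through terminfo instead of hard-coding the escape sequences.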
608,577 | I have a plain text file (not containing source code). I often modify it (adding lines, editing existing lines, or any other possible modification). For any modification, I would like to automatically record: what has been modified (the diff information); the date and time of the modification. (Ideally, I would also like to be able to obtain the version of my file at a specific time, but this is a plus, not essential). This is surely possible with Git, but it's too powerful and complex. I don't want to deal with add , commit messages, push , etc. each time. I would simply like to edit the file with vi (or equivalent), save it, and automatically record the modification as above (its diff and its time). Is there a tool to accomplish this in Linux? Update : Thanks for all the suggestions and the several solutions that have been introduced. I have nothing against git , but I explicitly wished to avoid it (for several reason, last but not least the fact that I don't know it enough). The tool which is closest to the above requirements (no git , no commit messages, little or nothing overhead) is RCS. It is file-based and it is exactly what I was looking for. This even avoids the use of a script, provides the previous versions of the file and avoids the customization for vi . The requirements of the question were precise; many opinions have been given, but the question is not - per se - that much opinion-based. Then, obviously, the same goal can be achieved through a tool or through a script, but this apply in many other cases as well. | Give git a chance I don't see why it is an issue to use a powerful tool. Just write a simple bash script that runs git periodically (via cron or systemd timers); auto-generate commit messages etc. As others highlighted in the comments it is - of course - possible to create a local repository (see here and there for more details). If you prefer to host your own remote repo, you'll need to set up a "Bare Repository." Both git init and git clone accept a --bare argument. Borg backup I can also recommend borg backup . It offers you: Timestamps borg diff (compare between snapshots) Pruning (get rid of older snapshots - say you want a snapshot for the current month every day but otherwise only one per month) Encryption (optional) Compression (optional) and much more... The cool thing is that it is very flexible - it is easy to setup but give you a lot of options if you want so. I once wrote a quick-start guide which might be of help. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/608577",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48707/"
]
} |
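Since the question's update settles on RCS, a minimal sketch of that workflow (notes.txt is a placeholder file name; the RCS subdirectory keeps the ,v history file out of the way):
mkdir -p RCS
ci -l -t-'my notes' -m'checkpoint' notes.txt   # record a revision, keep the file writable
rlog notes.txt                                 # list revisions with date and time
rcsdiff -r1.1 -r1.2 notes.txt                  # diff between two revisions
co -p -r1.1 notes.txt > notes-1.1.txt          # retrieve an older version
Each ci records the timestamp and the diff automatically, which covers the requirements with no remote, no staging area and only a canned log message.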
608,606 | I currently have two bash functions, one which uploads, and another that downloads a file. I would like to create a bash script that allows users to specify which of the two they would like to do. The issue I am having is the upload and download function run no matter what. For example: function upload() { var=$1 #something goes here for upload}function download() { var=$1 #something here for download}main() { case "$1" in -d) download "$2";; -u) upload "$2";; *) "Either -d or -x needs to be selected" esac} I cannot get main() to run only and suppress download and upload until needed. | You need to call the main function too, and pass the script's command line arguments to it: #!/bin/shupload() { echo "upload called with arg $1"}download() { echo "download called with arg $1"}main() { case "$1" in -d) download "$2";; -u) upload "$2";; *) echo "Either -d or -u needs to be selected"; exit 1;; esac}main "$@" No need for the ksh-style function foo declarations here, use foo() instead, as it's standard and more widely supported. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/608606",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250695/"
]
} |
608,643 | I've attached a Raspberry Pi running Ubuntu to my home network with a cable. It is booted up and connected to the network. The Pi has no keyboard, mouse, or monitor. If I know the IP address that was assigned to the robot, I could ssh into it. It turns out that RasPis have a known OUI {Organizationally Unique Identifier} in their MAC addresses. All of their MAC addresses start with b8:27:eb . So if I could get a list of all the MAC addresses on my network I would be golden. But... arp -a | grep "b8:27:eb" should do it. Except that arp -a does not produce an exhaustive and up-to-date list. Any ideas on how I could get an up-to-date list of MAC addresses of computers on the network, or get the IP address of a newly attached Raspberry Pi? Thanks! | Assuming not too large a network range you can force the ARP table to be populated before you look through it. These examples are for a typical home network on 192.168.1.0-255 nmap -sn 192.168.1.0/24 # Ping scanarp -na | grep 'at b8:27:eb:' # Match the RPi devices Otherwise, you could look for devices with an open SSH port, nmap -oG - -p 22 192.168.1.0/24 | grep /open/ Or look at your router's DHCP assignment table to see what addresses it has recently allocated. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/608643",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27971/"
]
} |
608,776 | I have a lot of links like https://content.example.net/skin/frontend/2015/default/fonts/test.ttfhttps://content.example.net/skin/frontend/2015/default/img/test.svghttps://content.example.net/skin/frontend/2015/default/fonts/test.eothttps://content.example.net/skin/forntend/2015/default/js/test.js How can I delete links from a file that contain words in the URL like css, jpg, svg, png, ttf, etc.? Right now I use something like this: cat url.txt | sed '/png/d' | sed '/jpg/d' | sed '/svg/d' | ...etc This takes a lot of time and effort. Can all of this be replaced with one command? | You can use the "OR" syntax for regular expressions: sed -E '/png|jpg|svg/d' url.txt This will delete all lines containing either pattern. If you want to make sure that this pattern is the filename extension, i.e. that the pattern occurs at the end of the line , you can include an anchor into the regular expression: sed -E '/(png|jpg|svg)$/d' url.txt By the way, you never need to cat a file into sed ; it can read them all on its own. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/608776",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/427405/"
]
} |
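The same filter can be written with grep, anchored on the actual extension so that a URL merely containing one of the words elsewhere in its path is not dropped by accident; a sketch:
grep -viE '\.(png|jpe?g|svg|css|ttf|eot|js)$' url.txt > filtered.txt
-v inverts the match, -E enables the alternation, -i makes it case-insensitive; extend the list of extensions as needed.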
608,778 | We are running kubernetes on centos7 on premises from past 3years, Recently our NFS storage device was migrated to different VLAN and there was a change in IP address, now none of pods are functioning properly and waiting for PV. My question is what is best possible way to replace old NFS server IP with new NFS server IP in PV and all PVC without loosing any data? | You can use the "OR" syntax for regular expressions: sed -E '/png|jpg|svg/d' url.txt This will delete all lines containing either pattern. If you want to make sure that this pattern is the filename extension, i.e. that the pattern occurs at the end of the line , you can include an anchor into the regular expression: sed -E '/(png|jpg|svg)$/d' url.txt By the way, you never need to cat a file into sed ; it can read them all on its own. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/608778",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/285043/"
]
} |
608,785 | I have a script that takes two numbers both are 6 digit e.g 220210 and 220221. All I want is a loop to write all numbers between 220210 and 220221 and these 2 numbersinto a file. I know its probably dead simple but its been doing my head in. HP-UX 11.23bash | You can use the "OR" syntax for regular expressions: sed -E '/png|jpg|svg/d' url.txt This will delete all lines containing either pattern. If you want to make sure that this pattern is the filename extension, i.e. that the pattern occurs at the end of the line , you can include an anchor into the regular expression: sed -E '/(png|jpg|svg)$/d' url.txt By the way, you never need to cat a file into sed ; it can read them all on its own. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/608785",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/432244/"
]
} |
608,842 | When I put export GPG_TTY=$(tty) in my .zshrc and restart terminal window and execute echo $GPG_TTY it says not a tty . When I source .zshrc by source ~/.zshrc && echo $GPG_TTY it correctly reports /dev/pts/1 . What could be that my .zshrc fails to find tty when its documentation says that .zshrc is used for interactive shell initialisation? Here is my .zshrc contents: # Enable Powerlevel10k instant prompt. Should stay close to the top of ~/.zshrc.if [[ -r "${XDG_CACHE_HOME:-$HOME/.cache}/p10k-instant-prompt-${(%):-%n}.zsh" ]]; then source "${XDG_CACHE_HOME:-$HOME/.cache}/p10k-instant-prompt-${(%):-%n}.zsh"fiexport ZSH="/home/ashar/.oh-my-zsh"export EDITOR=nvimexport GPG_TTY=$(tty)ZSH_THEME="powerlevel10k/powerlevel10k"plugins=(git zsh-autosuggestions)source $ZSH/oh-my-zsh.sh# To customize prompt, run `p10k configure` or edit ~/.p10k.zsh.[[ ! -f ~/.p10k.zsh ]] || source ~/.p10k.zsh | tty command requires that stdin is attached to a terminal. When using Powerlevel10k , stdin is redirected from /dev/null when Instant Prompt is activated and until Zsh is fully initialized. This is explained in more detail in Powerlevel10k FAQ . To solve this problem you can either move export GPG_TTY=$(tty) to the top of ~/.zshrc so that it executes before Instant Prompt is activated, or (better!) use export GPG_TTY=$TTY . The latter version will work anywhere and it's over 1000 times faster. TTY is a special parameter set by Zsh very early during initialization. It gives you access to the terminal even when stdin might be redirected. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/608842",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/432286/"
]
} |
608,847 | I use Linux Mint, running Cinnamon as my DE. I'm used to switching keyboard layouts using LAlt+LShift.I'm also used to switching windows between workspaces using LCtrl+LAlt+LShift+<direction>. I used to have a configuration that allowed me to do both of those seamlessly - I do not remember having any issue with layouts changing without my will or with workspace hotkeys not working. Unfortunately, a data loss incident has forced me to lose some configs - including this one. Enabling the layout switching hotkeys in Keyboard Settings now makes me lose functionality I have with Ctrl+Alt+Shift. How did I manage to set this up? I would like to do it again. | tty command requires that stdin is attached to a terminal. When using Powerlevel10k , stdin is redirected from /dev/null when Instant Prompt is activated and until Zsh is fully initialized. This is explained in more detail in Powerlevel10k FAQ . To solve this problem you can either move export GPG_TTY=$(tty) to the top of ~/.zshrc so that it executes before Instant Prompt is activated, or (better!) use export GPG_TTY=$TTY . The latter version will work anywhere and it's over 1000 times faster. TTY is a special parameter set by Zsh very early during initialization. It gives you access to the terminal even when stdin might be redirected. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/608847",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/432292/"
]
} |
609,077 | I am running VirtualBox 6.1 on Catalina 10.15.6 and when I start my Ubuntu 20.04 LTS VM it says "VirtualBox VM quit unexpectedly". How can I fix this? The Apple report is below Path: /Applications/VirtualBox.app/Contents/Resources/VirtualBoxVM.app/Contents/MacOS/VirtualBoxVMIdentifier: org.virtualbox.app.VirtualBoxVMVersion: 6.1.14 (6.1.14)Code Type: X86-64 (Native)Parent Process: VBoxSVC [2581]User ID: 501Date/Time: 2020-09-12 11:06:06.400 +0200OS Version: Mac OS X 10.15.6 (19G73)Report Version: 12Bridge OS Version: 4.6 (17P6065)Anonymous UUID: D239C6C6-47E5-4E96-B087-5A70B7918F9DTime Awake Since Boot: 1100 secondsSystem Integrity Protection: enabledCrashed Thread: 34 Dispatch queue: com.apple.root.default-qosException Type: EXC_CRASH (SIGABRT)Exception Codes: 0x0000000000000000, 0x0000000000000000Exception Note: EXC_CORPSE_NOTIFYTermination Reason: Namespace TCC, Code 0x0Thread 0:: Dispatch queue: com.apple.main-thread | Go to Settings > Audio and uncheck "Enable Audio". There seems to be a bug that crashes Ubuntu 20.04 LTS in VirtualBox when it attempts to check audio input permissions with CoreAudio. 00:00:32.681184 CoreAudio: macOS 10.14+ detected, checking audio input permissions (then the system crashes...) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/609077",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/432515/"
]
} |
609,082 | I have two tab-delimited files(fileA.txt and fileB.txt), I have to compare the first column of fileA.txt with the first column of fileB.txt and also I want to print the values present in the second column of fileB.txt in the output file.Below is my fileA.txt idchr1_45796849_A_Tchr1_45796854_C_Tchr1_45797174_T_Achr1_45796852_G_Cchr19_9018540_A_Gchr19_9002576_T_Cchr1_45797487_A_Gchr1_45797153_A_Tchr1_45797750_C_T FileB.txt chr_pos freq.varchr1_45796849_A_T 0.028399811chr1_45796852_G_C 0.019154034chr1_45796854_C_T 0.015872901chr1_45797153_A_T 0.010129176chr1_45797487_A_G 0.012981216chr1_45797750_C_T 0.024949931 following is expected outcome id freq.varchr1_45796849_A_T 0.028399811chr1_45796854_C_T 0.015872901chr1_45797174_T_A chr1_45796852_G_C 0.019154034chr19_9018540_A_G chr19_9002576_T_C chr1_45797487_A_G 0.012981216chr1_45797153_A_T 0.010129176chr1_45797750_C_T 0.024949931 I have referred to awk - comparing 2 columns of 2 files and print common lines but it gives only matching entries | Read fileB.txt first, make the 1st field a key and the 2nd field its value in an array,skipping the header line with FNR>1 ( What are NR and FNR and what does "NR==FNR" imply? ). Then read fileA.txt , print its header for the first line and then print its 1st field followed by the corresponding element in thearray, if any. awk ' FNR==NR && FNR>1{a[$1]=$2} NR!=FNR{ if(FNR>1){print $1,a[$1]} else{print "id", "freq.var"} }' OFS="\t" fileB.txt fileA.txt OFS="\t" sets the output field separator to tab. Since your file is tab delimited, I assume the output file should be tab delimited too. You can pipe that into column -t for alignment. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/609082",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/198676/"
]
} |
609,087 | I want to let a friend to download a file from my mac. Just one file. Is there a way to expose a specific local file from my ubuntu or mac so that my friend could just enter my IP and port in a browser and get the file I've exposed? | Read fileB.txt first, make the 1st field a key and the 2nd field its value in an array,skipping the header line with FNR>1 ( What are NR and FNR and what does "NR==FNR" imply? ). Then read fileA.txt , print its header for the first line and then print its 1st field followed by the corresponding element in thearray, if any. awk ' FNR==NR && FNR>1{a[$1]=$2} NR!=FNR{ if(FNR>1){print $1,a[$1]} else{print "id", "freq.var"} }' OFS="\t" fileB.txt fileA.txt OFS="\t" sets the output field separator to tab. Since your file is tab delimited, I assume the output file should be tab delimited too. You can pipe that into column -t for alignment. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/609087",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126231/"
]
} |
609,471 | I have seen the below pattern used in several places (even on Stack Overflow) as an example for email id validation. \b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b The above is taken from https://www.regular-expressions.info/tutorial.html , which says, quote, "this pattern describes an email address". This pattern does not take into consideration lower-case letters (unless I am missing something). Is there anything further I need to understand about this pattern? If this pattern cannot really be used in production, why is it so popular? | It should not be used in production. For example "email me"@contoso.com is a syntactically valid email address but will not be matched by that naïve RE. See RFC5322 section 3.4.1 for the definitive grammar. Annoyingly perhaps, there is no BRE or ERE that can match that grammar definition, but you can get very close. However, a PCRE will do the trick. See How to validate an email address using a regular expression? on StackOverflow. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/609471",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/243342/"
]
} |
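On the lower-case point specifically: the pattern is normally paired with case-insensitive matching, which is why it lists only A-Z; a sketch of the typical usage with GNU grep:
grep -Eoi '\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b' file.txt
With -i the character classes also match a-z, so lower-case addresses are found; without such a flag the pattern would indeed miss them, which is one more reason to treat it as a rough extraction filter rather than real validation.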
609,474 | I searched but couldn't find anything - I am looking for a breakdown of the file structure of a symlink in bytes, in a ext filesystem. I have tried creating a symlink file and then using hexdump on the symlink, but it complains that it's a directory (the link was to a folder) so it's obviously trying to dump the file/folder the link points to rather than the link itself. | You didn't provide additional details, so this explanation is for the moment centered on the EXT file systems common in Linux. If you look at the "size" of a symlink as provided by e.g. ls -l , you will notice that the size is just as large as the name of the target it is pointing to is long. So, you can infer that the "actual" file contains just the path to the link target as text, and the interpretation as a symbolic link is stored in the filetype metadata (in particular, the flag S_IFLINK in the i_mode field of the inode the link file is attached to, where also the permission bits are stored; see this kernel documentation reference ). In order to improve performance and reduce device IO, if the symlink is shorter than 60 bytes it will be stored in the i_block field in the inode itself (see here ). Since this makes a separate block access unnecessary, these links are called "fast symlinks" as opposed to symlinks pointing to longer paths, which fall back to the "traditional" method of storing the link target as text in an external data block. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/609474",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/432879/"
]
} |
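A way to observe the fast-symlink behaviour directly on an ext4 filesystem, assuming root access; readlink shows the stored target without following the link (which is what hexdump was doing), and debugfs shows where the target text lives (the path given to debugfs is relative to that filesystem's root, and /dev/sdXN is a placeholder for the device):
ln -s /short/target mylink
ls -l mylink        # reported size equals the length of the target string
readlink mylink
sudo debugfs -R 'stat /path/to/mylink' /dev/sdXN
For a short target, debugfs reports a "Fast link dest:" stored inside the inode; for targets of 60 bytes or more it instead shows a data block holding the target text.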
609,574 | For example, I have directory with multiple files created by this way: touch files/{1..10231}_file.txt I want to move them into new directory new_files_dir . The simplest way to do this is: for filename in files/*; do mv "${filename}" -t "new_files_dir"done This script works for 10 seconds on my computer. It is slow. The slowness happens due execution of mv command for every file. ###Edit start### I have understood, that in my example the simplest way will be just mv files/* -t new_files_dir or, if the "Argument list too long": printf '%s\0' files/* | xargs -0 mv -t new_files_dir but aforementioned case is a part of task. The whole task is in this question: Moving large number of files into directories based on file names in linux .So, the files must be moved into corresponding subdirectories, the correspondence of which is based on a number in the filename. This is the cause of for loop usage and other oddities in my code snippets. ###Edit end### There is possibility to speedup this process by passing bunch of files to mv command instead of a single file, like this: batch_num=1000# Counting of files in the directoryshopt -s nullglobfile_list=(files/*)file_num=${#file_list[@]}# Every file's common partsuffix='_file.txt'for((from = 1, to = batch_num; from <= file_num; from += batch_num, to += batch_num)); do if ((to > file_num)); then to="$file_num" fi # Generating filenames by `seq` command and passing them to `xargs` seq -f "files/%.f${suffix}" "$from" "$to" | xargs -n "${batch_num}" mv -t "new_files_dir"done In this case the script works for 0.2 seconds. So, the performance has increased by 50 times. But there is a problem: at any moment the program can refuse to work due "Argument list too long", because I can't guarantee that the bunch of filenames length is less than max allowable length. My idea is to calculate the batch_num : batch_num = "max allowable length" / "longest filename length" and then use this batch_num in xargs . Thus, the question: How can max allowable length be calculated? I have done something: Overall length can be found by this way: $ getconf ARG_MAX 2097152 The environment variables contributes into the argument size too, so probably they should be subtracted from ARG_MAX : $ env | wc -c 3403 Made a method to determine the max number of files of equal sizes by trying different amount of files before the right value is found (binary search is used). function find_max_file_number { right=2000000 left=1 name=$1 while ((left < right)); do mid=$(((left + right) / 2)) if /bin/true $(yes "$name" | head -n "$mid") 2>/dev/null; then left=$((mid + 1)) else right=$((mid - 1)) fi done echo "Number of ${#name} byte(s) filenames:" $((mid - 1)) } find_max_file_number A find_max_file_number AA find_max_file_number AAA Output: Number of 1 byte(s) filenames: 209232 Number of 2 byte(s) filenames: 190006 Number of 3 byte(s) filenames: 174248 But I can't understand the logic/relation behind these results yet. Have tried values from this answer for calculation, but they didn't fit. Wrote a C program to calculate the total size of passed arguments. The result of this program is close, but some non-counted bytes are left: $ ./program {1..91442}_file.txt arg strings size: 1360534 number of pointers to strings 91443 argv size: 1360534 + 91443 * 8 = 2092078 envp size: 3935 Overall (argv_size + env_size + sizeof(argc)): 2092078 + 3935 + 4 = 2096017 ARG_MAX: 2097152 ARG_MAX - overall = 1135 # <--- Enough bytes are # left, but no additional # filenames are permitted. 
$ ./program {1..91443}_file.txt bash: ./program: Argument list too long program.c #include <stdio.h> #include <string.h> #include <unistd.h> int main(int argc, char *argv[], char *envp[]) { size_t chr_ptr_size = sizeof(argv[0]); // The arguments array total size calculation size_t arg_strings_size = 0; size_t str_len = 0; for(int i = 0; i < argc; i++) { str_len = strlen(argv[i]) + 1; arg_strings_size += str_len; // printf("%zu:\t%s\n\n", str_len, argv[i]); } size_t argv_size = arg_strings_size + argc * chr_ptr_size; printf( "arg strings size: %zu\n" "number of pointers to strings %i\n\n" "argv size:\t%zu + %i * %zu = %zu\n", arg_strings_size, argc, arg_strings_size, argc, chr_ptr_size, argv_size ); // The enviroment variables array total size calculation size_t env_size = 0; for (char **env = envp; *env != 0; env++) { char *thisEnv = *env; env_size += strlen(thisEnv) + 1 + sizeof(thisEnv); } printf("envp size:\t%zu\n", env_size); size_t overall = argv_size + env_size + sizeof(argc); printf( "\nOverall (argv_size + env_size + sizeof(argc)):\t" "%zu + %zu + %zu = %zu\n", argv_size, env_size, sizeof(argc), overall); // Find ARG_MAX by system call long arg_max = sysconf(_SC_ARG_MAX); printf("ARG_MAX: %li\n\n", arg_max); printf("ARG_MAX - overall = %li\n", arg_max - (long) overall); return 0; } I have asked a question about the correctness of this program on StackOverflow: The maximum summarized size of argv, envp, argc (command line arguments) is always far from the ARG_MAX limit . | Let xargs do the calculation for you. printf '%s\0' files/* | xargs -0 mv -t new_files_dir | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/609574",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109397/"
]
} |
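If you would rather see the arithmetic than redo it by hand, GNU xargs can report the limits it will actually use; a sketch:
xargs --show-limits < /dev/null
printf '%s\0' files/* | xargs -0 -n 1000 mv -t new_files_dir   # optional explicit batch size
--show-limits prints ARG_MAX, the size of the current environment and the command-buffer size xargs settles on, which is why piping the NUL-separated list into xargs -0 never hits "Argument list too long".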
609,590 | I have a virtual machine that I'm trying to extend from 150GB to 500GB. $ sudo cfdisk /dev/sda cfdisk (util-linux 2.23.2) Disk Drive: /dev/sda Size: 536870912000 bytes, 536.8 GB Heads: 255 Sectors per Track: 63 Cylinders: 65270 Name Flags Part Type FS Type [Label] Size (MB) -------------------------------------------------------------------------- Pri/Log Free Space 1.05* sda1 Boot Primary xfs 536.88* sda2 Primary LVM2_member 536333.00* I've run fdisk ( https://unix.stackexchange.com/a/134813/173008 ) to delete and create a new sda2 partition. I am following this guide ( https://unix.stackexchange.com/a/108229/173008 ), however I am stuck because I can't extend vg0 to use this additional space. $ df -hFilesystem Size Used Avail Use% Mounted on/dev/mapper/vg0-rootvol 146G 123G 16G 90% //dev/sda1 509M 268M 242M 53% /boot$ sudo pvs PV VG Fmt Attr PSize PFree /dev/sda2 vg0 lvm2 a-- <149.50g 0$ sudo vgs VG #PV #LV #SN Attr VSize VFree vg0 1 2 0 wz--n- <149.50g 0$ sudo lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert rootvol vg0 -wi-ao---- <147.50g swap1 vg0 -wi-ao---- 2.00g The problem is there is no "extra space" on /dev/sda2 , and still shows <149.5g , so running the following doesn't work: $ sudo pvresize /dev/sda2 Physical volume "/dev/sda2" changed 1 physical volume(s) resized or updated / 0 physical volume(s) not resized$ sudo lvextend --size +350G /dev/mapper/vg0-rootvol Insufficient free space: 89600 extents needed, but only 0 available I'm stuck and don't know how to fix this. Any help would be appreciated. | Let xargs do the calculation for you. printf '%s\0' files/* | xargs -0 mv -t new_files_dir | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/609590",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/173008/"
]
} |
609,612 | Example of text before changes: First lineSecond lineThird line When you put a comment on a line it will auto tab once that line. This happens every time you switch from ESCAPE mode to INSERT mode for the first comment. Example of text after adding your first comment ( # ) First line #Second lineThird line What is the option to put in .vimrc to disable this behavior?There is no article I could find on google that talks about this topic.Thank you! | This is caused by 0# being part of the 'indentkeys' for the YAML filetype. You can disable this behaviour by adding the following to, say, ~/.vim/after/indent/yaml.vim : set indentkeys-=0# | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/609612",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/323041/"
]
} |
609,618 | bash has a handy file .bash_history in which it saves the history of commands and on the next execution of bash the history is populated with saved commands. Is it possible to save bc command history to a file in the same way and then load it on startup so that bc history is preserved? I tried reading GNU bc manual and it mentions readline and libedit . From ldd /usr/bin/bc I see mine uses readline and readline has write_history and read_history functions. Is this functionality implemented in bc or to do it I'll need to patch bc ? | If you aren't happy with the command line editing features that are built into a program, you can run it through rlwrap . This is a wrapper around a command line processor (a REPL ) that lets you edit each line before it's sent. Rlwrap uses the readline library and saves history separately for each command. Running rlwrap bc won't do anything for you because rlwrap detects that your bc wants to do its own command line editing, so rlwrap turns itself off. Since you do want rlwrap's command line editing features and not the underlying command's, run rlwrap -a bc The command history will be saved in ~/.bc_history . The main downside of relying on rlwrap rather than using the program's own readline integration is that rlwrap can't do any context-sensitive completion. For example, the python toplevel completes known variables and fields, but rlwrap python cannot do that. Since bc doesn't appear to have any custom completion, rlwrap -a bc doesn't lose functionality over bc. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/609618",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/209955/"
]
} |
609,626 | Is there a way to burn a DVD on a Debian/stable system. Here is what brasero is telling me when trying to burn a iso file: For information: % file 7601.17514.101119-1850_x64fre_server_eval_en-us-GRMSXEVAL_EN_DVD.iso 7601.17514.101119-1850_x64fre_server_eval_en-us-GRMSXEVAL_EN_DVD.iso: ISO 9660 CD-ROM filesystem data 'GRMSXEVAL_EN_DVD' (bootable) and % cdrskin --devices cdrskin 1.5.0 : limited cdrecord compatibility wrapper for libburncdrskin: scanning for devices ...cdrskin: ... scanning for devices donecdrskin: Overview of accessible drives (1 found) :-----------------------------------------------------------------------------0 dev='/dev/sr0' rwrw-- : 'HL-DT-ST' 'DVD+-RW GH82N'----------------------------------------------------------------------------- and % dvd+rw-mediainfo /dev/dvdINQUIRY: [HL-DT-ST][DVD+-RW GH82N ][A101]GET [CURRENT] CONFIGURATION: Mounted Media: 2Bh, DVD+R Double Layer Media ID: MBIPG101/R10 Current Write Speed: 8.0x1385=11080KB/s Write Speed #0: 8.0x1385=11080KB/s Write Speed #1: 6.0x1385=8310KB/s Write Speed #2: 4.0x1385=5540KB/s Write Speed #3: 2.4x1385=3324KB/sGET [CURRENT] PERFORMANCE: Write Performance: 4.0x1385=5540KB/s@[0 -> 196607] 6.0x1385=8310KB/s@[196608 -> 385023] 8.0x1385=11079KB/s@[385024 -> 3788799] 6.0x1385=8310KB/s@[3788800 -> 3977215] 4.0x1385=5540KB/s@[3977216 -> 4173823] Speed Descriptor#0: 02/4173823 [email protected]=16629KB/s [email protected]=11080KB/s Speed Descriptor#1: 02/4173823 [email protected]=16629KB/s [email protected]=8310KB/s Speed Descriptor#2: 02/4173823 [email protected]=16629KB/s [email protected]=5540KB/s Speed Descriptor#3: 02/4173823 [email protected]=16629KB/s [email protected]=3324KB/sREAD DVD STRUCTURE[#0h]: Media Book Type: 00h, DVD-ROM book [revision 0] Legacy lead-out at: 2086912*2KB=4273995776DVD+R DOUBLE LAYER BOUNDARY INFORMATION: L0 Data Zone Capacity: 2086912*2KB, can still be setREAD DISC INFORMATION: Disc status: blank Number of Sessions: 1 State of Last Session: empty "Next" Track: 1 Number of Tracks: 1READ TRACK INFORMATION[#1]: Track State: blank Track Start Address: 0*2KB Next Writable Address: 0*2KB Free Blocks: 4173824*2KB Track Size: 4173824*2KB ROM Compatibility LBA: 266240READ CAPACITY: 0*2048=0 I did read: https://wiki.debian.org/BurnCd#Burn_the_image_file_to_CD.2C_DVD.2C_or_BD So if I now try from the command line: % cdrskin -dummy 7601.17514.101119-1850_x64fre_server_eval_en-us-GRMSXEVAL_EN_DVD.iso cdrskin 1.5.0 : limited cdrecord compatibility wrapper for libburncdrskin: scanning for devices ...cdrskin: ... scanning for devices donecdrskin: beginning to burn disccdrskin: NOTE : -dummy mode will prevent actual writingcdrskin: SORRY : Drive offers no suitable write mode with this jobcdrskin: Reason: SAO: simulation of write job not supported by drive and media, cdrskin: Media : blank DVD+R/DLcdrskin: FATAL : burning failed. | If you aren't happy with the command line editing features that are built into a program, you can run it through rlwrap . This is a wrapper around a command line processor (a REPL ) that lets you edit each line before it's sent. Rlwrap uses the readline library and saves history separately for each command. Running rlwrap bc won't do anything for you because rlwrap detects that your bc wants to do its own command line editing, so rlwrap turns itself off. Since you do want rlwrap's command line editing features and not the underlying command's, run rlwrap -a bc The command history will be saved in ~/.bc_history . 
The main downside of relying on rlwrap rather than using the program's own readline integration is that rlwrap can't do any context-sensitive completion. For example, the python toplevel completes known variables and fields, but rlwrap python cannot do that. Since bc doesn't appear to have any custom completion, rlwrap -a bc doesn't lose functionality over bc. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/609626",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32896/"
]
} |
609,630 | How to print a specific value between commas from text file? There are several lines of this type in the file: 0.9999899864,0.6666600108,0.00,0.00,0.00,36988,140920,1,150.00,1500.00,1400.00,1300.00,1,0.50,2.00,0.10,1.00,-0.10,1,123.40,1,0.0,8, I want to print the 7th value, it's 140920 | If you aren't happy with the command line editing features that are built into a program, you can run it through rlwrap . This is a wrapper around a command line processor (a REPL ) that lets you edit each line before it's sent. Rlwrap uses the readline library and saves history separately for each command. Running rlwrap bc won't do anything for you because rlwrap detects that your bc wants to do its own command line editing, so rlwrap turns itself off. Since you do want rlwrap's command line editing features and not the underlying command's, run rlwrap -a bc The command history will be saved in ~/.bc_history . The main downside of relying on rlwrap rather than using the program's own readline integration is that rlwrap can't do any context-sensitive completion. For example, the python toplevel completes known variables and fields, but rlwrap python cannot do that. Since bc doesn't appear to have any custom completion, rlwrap -a bc doesn't lose functionality over bc. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/609630",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/422452/"
]
} |
609,855 | The latest updates of my Debian testing system removed Python version 2, and I only have Python 3 installed, with python3 . There is no longer any command named python . This causes several scripts to fail, including scripts compatible with Python 3. I would be interested in knowing what is the proper way to globally configure python as an alias for python3 . One dirty solution would be to manually do something like sudo ln -s /usr/bin/python{3,} , but I worry that this may not be robust to future APT updates (or if reinstalling Python 2 later). Another option is to set an alias, but then it would only work for my user, not for the entire system. I also note that on Ubuntu there is a package python-is-python3 which does precisely this, but there is no such package on Debian. | It looks like Debian is now shipping python-is-python3 themselves (in Debian 11 and later), so the premise of the question no longer holds and you can just: sudo apt update && sudo apt install python-is-python3 . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/609855",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8446/"
]
} |
610,056 | Is there a way to put little awk scriptoids on the path? For example, I have this really useful collation operation: // collate-csv.awk FNR > 1 || NR == 1 And I can use it in all sorts of great ways: xargs -a $(find * -name *.csv) awk -F',' -f collate-csv.awk | ... The only problem is I don't have a way to call my awk tools from anywhere. With an executable shell script, I can drop it into a bin folder on the path. Is there a mechanism in Linux where I can make these non-executable awk source files available from anywhere I go in the filesystem? (with the qualification that the "mechanism" is not a "why don't you just hit it with a hammer"-style kludge) | In addition to @Roamia's answer, you can use the AWKPATH variable to give a list of directories where to look for collate-csv.awk : AWKPATH=${HOME}/include/awk:/some/other/path; export AWKPATH; xargs -a $(find * -name *.csv) awk -f collate-csv.awk -F',' | ... Please note the .awk extension is not mandatory, just be consistent. A shebang line, e.g. #!/usr/bin/awk -f , is mandatory when the script is used standalone as a script (no awk -f call); to benefit from AWKPATH you will have to use awk -f ( awk knows how to use AWKPATH , bash doesn't). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/610056",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/288274/"
]
} |
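The shebang route mentioned in the answer, for making the scriptoid behave like any other command; a sketch assuming awk lives at /usr/bin/awk and that ~/bin is on PATH:
printf '%s\n' '#!/usr/bin/awk -f' 'FNR > 1 || NR == 1' > ~/bin/collate-csv
chmod +x ~/bin/collate-csv
find . -name '*.csv' -exec collate-csv {} + > merged.csv
Run this way the script needs neither awk -f nor AWKPATH; AWKPATH only comes into play when awk itself is asked to locate the -f file.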
610,133 | On my wired LAN, with 1GBit/s devices, I have two Linux machines (one Haswell, one Skylake Xeon) and when I do a secure copy of a large file, I see 38MB/s. Seeing that this is 3 times below the 1000Mbit/s spec, I wonder if this performance is as expected? Both machines use SSD for storage, both run 64bit Ubuntu. During the transfer, both machines have approximately one core at 30% load. The router that sits between the machines is a TP-Link Archer C7 AC1750. Both machines have Intel(R) Gigabit Ethernet Network devices that are in Full Duplex mode. What is a normal scp transfer speed on 1Gbit LANs? UPDATE Using /dev/zero to rule out disk IO yielded the same results. Using nc yielded slightly higher: 41MiB/s. Paradoxically, UDP nc was slower than TCP nc, at 38MiB/s? Switching to a crossover cable: 112MB/s for scp. CONCLUSION The TP-Link router in between was the weak link in the network, and could not keep up. | It does seem slow from a theoretical standpoint, although I've not seen any transfers much quicker practically on home hardware. Some experiments you might like to try to rule out possible limiting factors: Assess your raw SSH speed by copying from /dev/zero to /dev/null . This rules out an HD bottleneck. ssh remote_host cat /dev/zero | pv > /dev/null Check other unencrypted protocols such as HTTP. HTTP actually sends files as-is with nothing but a header. Sending large files over HTTP is a reasonable measure of TCP speeds. Check that you are not forcing traffic through the router but only its ethernet switch. For example, if your machine has a public IP and a local IP, then scp to/from the local IP. This is because home routers often have to process WAN traffic through their CPU, which creates a bottleneck. Even if both machines are on the LAN, using the public IP can force packets through the CPU as if it was going to the WAN. Similarly, I would use IPv4. Some home routers have a weird behaviour with IPv6 where they ask all local traffic to be forwarded to the router. If at all possible, try with a gigabit crossover cable instead of a router. This should rule out the router. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/610133",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/188363/"
]
} |
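If installing one small utility is acceptable, iperf3 is the usual way to measure the raw TCP path independently of disks and SSH; a sketch (192.168.1.20 is a placeholder for the receiver's LAN address):
iperf3 -s                  # on the receiving machine
iperf3 -c 192.168.1.20     # on the sending machine
Roughly 940 Mbit/s means the network path is fine and the limit is SSH or storage; a few hundred Mbit/s points at the switch/router in the middle, which is what the crossover-cable test confirmed here.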
610,192 | I need to check via bash script if kubernetes is installed. If it is not I start my setup routine.I think it would be best to check if kubectl cluster-info has an output at all. How do I check for a failing command? if command kubectl cluster-info > /dev/null; then # sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main" # ...fi | In sh and compatible shells the exit status from a nonexistent command should be 127 . If a command is not found, the exit status shall be 127 . If the command name is found, but it is not an executable utility, the exit status shall be 126 . ( source ) The command builtin doesn't change much: […] the following exit values shall be returned: 126 The utility specified […] was found but could not be invoked. 127 An error occurred in the command utility or the utility specified […] could not be found. ( source ) Your example modified: command kubectl cluster-info >/dev/null 2>&1if [ "$?" -eq 127 ]; then … There's also type . The type utility shall indicate how each argument would be interpreted if used as a command name. ( source ) POSIX does not specify the exact format of the output. The exit status is only required to tell apart error from success. It's not clear if it's a success to successfully find out the command provided does not exist. However in Bash there is no doubt: The return status is zero if all of the names are found, non-zero if any are not found. There are useful options: If the -t option is used, type prints a single word which is one of alias , function , builtin , file or keyword , if name is an alias, shell function, shell builtin, disk file, or shell reserved word, respectively. If the name is not found, then nothing is printed, and type returns a failure status. If the -p option is used, type either returns the name of the disk file that would be executed, or nothing if -t would not return file . By checking the output and exit status from type -t kubectl and/or type -p kubectl , you can tell something about kubectl without invoking it. Still, finding out in advance that kubectl , when used as a command, would be interpreted as a file to run doesn't mean it's the kubectl you need. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/610192",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/210969/"
]
} |
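A common, slightly simpler variant of the same check uses command -v, so nothing is actually executed while probing; a sketch:
if ! command -v kubectl > /dev/null 2>&1; then
  echo "kubectl not found, running setup..." >&2
  # sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
  # ...
fi
command -v fails when the name is neither a builtin, a function nor an executable on PATH, which answers "is it installed at all"; checking for exit status 127 as above is the right choice when you also want to run kubectl cluster-info and tell "missing" apart from "installed but failing".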
610,226 | Is it possible to have a conditional within /etc/rc.local ?I've checked many Q&As and most people suggest running chmod +x on it, but my problem is different. It actually does work for me without conditionals, but doesn't otherwise. #!/bin/shif [[ -e /usr/src/an-existing-file ]]then echo "seen" >> /etc/rclocalmadethisfi Here's the weird error I see when I run systemctl status rc-local.service : rc.local[481]: /etc/rc.local: 3: /etc/rc.local: [[: not found And here's my rc.local in the exact same location ls -lah /etc/ : -rwxr-xr-x 1 root root 292 Sep 19 09:13 rc.local I'm on Debian 10 Standard. | The [[ ... ]] syntax isn't valid for /bin/sh . Try: if [ -e /usr/src/an-existing-file ]then echo "seen" >> /etc/rclocalmadethisfi Note that sometimes it works because /bin/sh -> /bin/bash or some other shell that supports that syntax, but you can't depend on that being the case (as you see here). You can run ls -l /bin/sh to get to know this info for instance: lrwxrwxrwx 1 root root 4 Jul 18 2019 /bin/sh -> dash | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/610226",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/316753/"
]
} |
610,406 | Does echo -n | ... send an EOF to the pipe? I.e., echo -n | sth Will sth receive an EOF on its stdin? | There is no EOF that is represented as data in a file or stream. It is merely a status associated with the file descriptor. When the echo terminates (which will be almost immediately), the write end of the pipe is closed. The next time sth reads (assuming it has read all data previously written to the file) the pipe status changes to EOF and the read issued by sth returns with the EOF condition. The process can continue with any processing it requires, it just cannot read any more from the pipe. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/610406",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/282382/"
]
} |
610,484 | I have a pipe delimited file a.txt which includes a header row. The first column holds a filename. I would like to split a.txt into several different files - the name of which is determined by the first column. I would also like to have the header row of a.txt repeated at the top of each file . So I have a.txt : filename|count|age1.txt|1|151.txt|2|142.txt|3|141.txt|44|12.txt|1|3 and I want to create 1.txt filename|count|age1.txt|1|151.txt|2|14 and 2.txt filename|count|age2.txt|3|12.txt|1|3 and 41.txt filename|count|age41.txt|44|1 I have a basic split working awk -F\| '{print>$1}' a.txt but I am struggling to work out how to get the header included, could anybody help? Thanks! | The solution would be to store the header in a separate variable and print it on the first occurrence of a new $1 value (=file name): awk -F'|' 'FNR==1{hdr=$0;next} {if (!seen[$1]++) print hdr>$1; print>$1}' a.txt This will store the entire first line of a.txt in a variable hdr but otherwise leave that particular line unprocessed. On all subsequent lines, we first check if the $1 value (=the desired output filename) was already encountered, by looking it up in an array seen which holds an occurrence count of the various $1 values. If the counter is still zero for the current $1 value, output the header to the file indicated by $1 , then increase the counter to suppress header output for all later occurrences. The rest you already figured out yourself. Addendum: If you have more than one input file, which all have a header line, you can simply place them all as arguments to the awk call, as in awk -F'|' ' ... ' a.txt b.txt c.txt ... If, however, only the first file has a header line, you would need to change FNR to NR in the first rule. Caveat As noted by Ed Morton, the simple approach only works if the number of different output files is small (max. around 10). GNU awk will still continue working, but become slower due to automatically closing and opening files in the background as needed; other awk implementations may simply fail due to "too many open files". | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/610484",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/433744/"
]
} |
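Regarding the caveat above about "too many open files": a portable variation is to append to each output file and close it immediately. A sketch (slower, and because it appends you should delete any previous output files before rerunning):
awk -F'|' 'FNR==1 { hdr = $0; next }
           { if (!seen[$1]++) print hdr >> $1
             print >> $1
             close($1) }' a.txt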
610,494 | How do I remove the first 300 million lines from a 700 GB text file on a system with 1 TB disk space total, with 300 GB available? (My system has 2 GB of memory.) The answers I found use sed, tail, head: How do I delete the first n lines of a text file using shell commands? Remove the first n lines of a large text file But I think (please correct me) I cannot use them due to the disk space being limited to 1 TB and they produce a new file and/or have a tmp file during processing. The file contains database records in JSON format. | If you have enough space to compress the file, which should free a significant amount of space allowing you to do other operations, you can try this: gzip file && zcat file.gz | tail -n +300000001 | gzip > newFile.gz That will first gzip the original input file ( file ) to create file.gz . Then, you zcat the newly created file.gz , pipe it through tail -n +300000001 to remove the first 300 million lines, compress the result to save disk space and save it as newFile.gz . The && ensures that you only continue if the gzip operation was successful (it will fail if you run out of space). Note that text files are very compressible. For example, I created a test file using seq 400000000 > file , which prints the numbers from 1 to 400,000,000 and this resulted in a 3.7G file. When I compressed it using the commands above, the compressed file was only 849M and the newFile.gz I created only 213M. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/610494",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160012/"
]
} |
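Before removing the original it may be worth sanity-checking the result of the pipeline above, for example (simple checks, assuming the original line count is known or can be recomputed):
zcat newFile.gz | head -n 1   # should show what used to be line 300,000,001
zcat newFile.gz | wc -l       # should equal the original line count minus 300000000
Only delete file.gz once those checks pass.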
610,502 | I have 100+ jinja template files with 0-k occurrences of "value: ..." string in each of them. The problem is that some of the files are using: value: something some of them: value: 'something' and some of them: value: "some other thing" I need all of these to look the same, to use double quotes. I thought I'd do it with sed: sed -i 's/value: ['"]?(.*)['"]?/value: "\1"/g' *.j2 but as you can see I'm quite horrible with sed and the past 2 hours only made me want to break my keyboard with the nonsense error messages I'm getting, like: unterminated `s' command and such. Sample input: - param: name: Command type: String value: '/bin/echo'- param: name: Args type: String value: Hello World- param: name: Something type: EnvVar value: "PATH" from this I need to get: - param: name: Command type: String value: "/bin/echo"- param: name: Args type: String value: "Hello World"- param: name: Something type: EnvVar value: "PATH" | When you need to use the two forms of quotes ( "' ) in the expression, things get tricky. For one, in your original attempt the shell identifies this 's/value: [' as a quoted string: the latter quote is not preserved. In these cases, rather than having a headache, you can simply put the Sed commands in a file. Its contents won't be subject to the shell manipulation. quotes.sed : # (1) If line matches this regex (value: '), # (2) substitute the first ' with " and# (3) substitute the ' in the end-of-line with "./value: '/{ s/'/"/ s/'$/"/}# (4) If line matches this regex (value: [^"]), # (5) substitute :<space> for :<space>" and# (6) append a " in the end-of-line./value: [^"]/{ s/: /: "/ s/$/"/} $ sed -Ef quotes.sed file- param: name: Command type: String value: "/bin/echo"- param: name: Args type: String value: "Hello World"- param: name: Something type: EnvVar value: "PATH" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/610502",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/433763/"
]
} |
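To run the script above against all the templates in place, something along these lines should work (GNU sed options; the .bak suffix keeps backups while testing and is only a suggestion):
sed -E -i.bak -f quotes.sed ./*.j2
grep -n "value: '" ./*.j2 || echo "no single-quoted values remain"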
610,656 | Reading in arguments from the command line is pretty easy, $1 $2 $3 But! if I want to do a loop that assigns $1 to $arg1 , $2 to $arg2 I don't want to do it by entering arg1=$1; arg2=$2 ,I want to learn how to handle the single $ ..How do I do the loop? to increase the $arg , I just add $arg$nr and count up the $nr , but how do I do with the $1 ? $$nr isn't working.. | The traditional answer, for shells that has arrays, is to use an array: arg=( "$@" ) You then have $1 in ${arg[0]} , $2 in ${arg[1]} etc. To loop over these, use for a in "${arg[@]}"; do # code that uses "$a" goes heredone or, if you still have the data in the list of positional parameters and want to reduce typing, for a do # code that uses "$a" goes heredone Note that the quoting is important to protect the data in the original arguments from being split and/or used as globbing patterns. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/610656",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/382913/"
]
} |
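On the "$$nr isn't working" part of the question: bash can also index the positional parameters directly through indirect expansion, without copying them into an array first. A small sketch:
for (( i = 1; i <= $#; i++ )); do
    printf 'arg%d is %s\n' "$i" "${!i}"
done
Here ${!i} expands to the value of the positional parameter whose number is stored in i — the effect the $$nr attempt was after.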
610,763 | See this answer below for a zsh solution Let's say I have the following in my ~/.bash_aliases to ask for confirmation before suspending the system: function suspend(){ # echo "Please confirm/cancel system suspension:" select confirmation in "confirm" "cancel"; do case ${confirmation} in confirm ) echo "System suspending..." systemctl suspend break;; cancel ) echo "Canceled suspension." break;; esac done} I would like systemctl suspend to be still executed if no answer is given by the user. For example, after 10 seconds without user input, the content of the "confirm" case would be executed. I tried the following, with a backgrounded sleep in a subshell: function suspend(){ # flag_cancel=0 echo "Please confirm/cancel system suspension:" ( sleep 10 && if [ $flag_cancel -eq 0 ]; then echo "System suspending..." systemctl suspend fi & ) select confirmation in "confirm" "cancel"; do case ${confirmation} in confirm ) echo "System suspending..." systemctl suspend break;; cancel ) flag_cancel=1 echo "Canceled suspension." break;; esac done} but a change of the value of flag_cancel is not taken into account, so the command is always executed after the sleep . How to achieve what I want? | Your attempt with sleep would not work as the sleep call and the subsequent test on $flag_cancel happens in a background job. Any change to the variable flag_cancel in the main part of the code would not affect the value of the variable in the backgrounded subshell and the code would unconditionally suspend the system after 10 seconds. Instead, you can use the fact that both read and select times out after $TMOUT seconds in bash . Here's a variation on the theme of your first piece of code: suspend_maybe (){ local PS3='Please confirm/cancel system suspension: ' local TMOUT=10 local do_suspend=true select confirmation in confirm cancel; do case $REPLY in 1) # default case break ;; 2) do_suspend=false break ;; *) echo 'Sorry, try again' >&2 esac done if "$do_suspend"; then echo 'Suspending...' systemctl suspend else echo 'Will not suspend' fi} Changes made: The function is now called suspend_maybe since there is already a built-in suspend utility in bash . The select loop uses PS3 for its prompt. The select loop times out after $TMOUT seconds. We use the digits in the case statement. That way we don't have to type all the strings in twice. The value of $REPLY will be whatever the user types in. We only need the select loop to tell us whether the user wants to cancel the suspension of the system. We treat suspension as the default action. Once we're out of the select loop, we suspend the system unless the user chose to cancel that action. The same thing but with an input loop using read as a drop-in replacement for select : suspend_maybe (){ local PS3='Confirm system suspension [y]/n: ' local TMOUT=10 local do_suspend=true while true; do if ! read -p "$PS3"; then # timeout break fi case $REPLY in [yY]*) # default case break ;; [nN]*) do_suspend=false break ;; *) echo 'Sorry, try again' >&2 esac done if "$do_suspend"; then echo 'Suspending...' systemctl suspend else echo 'Will not suspend' fi} Here, the user may enter any string starting with n or N to cancel the suspension. Letting the read time out, entering a word starting with y or Y , or pressing Ctrl+D , would suspend the system. With the loop above, it is easier to catch the timeout case, in case you want to do anything special when the suspension is happening due to a user not responding to the prompt. 
The break in the if -statement would be triggered whenever the read call fails, which it does upon timing out (or when the input stream is closed). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/610763",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/224511/"
]
} |
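For completeness, the same default-after-10-seconds behaviour can be compressed further with read -t instead of TMOUT — a sketch, less polished than the functions above:
if read -r -t 10 -p 'Suspend system? [Y/n] ' reply && [[ $reply == [nN]* ]]; then
    echo 'Will not suspend'
else
    echo 'Suspending...'
    systemctl suspend
fi
A timeout makes read fail, so the else branch (suspension) is taken, just as in the longer versions.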
610,779 | I've followed the classic procedure to install Windows and Linux in dual boot. First I installed Windows in UEFI mode, then I use a bootable PopOS key to resize the main Windows partition; I created a Linux partition as well as a 500MB /boot/efi partition in the remaining space. My problem is, systemd-boot can't seem to detect the Windows bootloader. When I display the systemd-boot menu, it only lists PopOS as a possible boot option, even though I can launch Windows from my BIOS menu with no problem. When I run bootctl , I get the following output: System: Firmware: UEFI 2.70 (American Megatrends 5.14) Secure Boot: disabled Setup Mode: setupCurrent Boot Loader: Product: systemd-boot 245.4-4ubuntu3.1pop0~1590695674~20.04~eaac747 Features: ✓ Boot counting ✓ Menu timeout control ✓ One-shot menu timeout control ✓ Default entry control ✓ One-shot entry control ✓ Support for XBOOTLDR partition ✓ Support for passing random seed to OS ✓ Boot loader sets ESP partition information ESP: /dev/disk/by-partuuid/585919b8-7f1b-4f94-a0b1-6ff195d07515 File: └─/EFI/SYSTEMD/SYSTEMD-BOOTX64.EFIRandom Seed: Passed to OS: yes System Token: set Exists: yesAvailable Boot Loaders on ESP: ESP: /boot/efi (/dev/disk/by-partuuid/585919b8-7f1b-4f94-a0b1-6ff195d07515) File: └─/EFI/systemd/systemd-bootx64.efi (systemd-boot 245.4-4ubuntu3.1pop0~1590695> File: └─/EFI/BOOT/BOOTX64.EFI (systemd-boot 245.4-4ubuntu3.1pop0~1590695674~20.04~e>Boot Loaders Listed in EFI Variables: Title: Linux Boot Manager ID: 0x0003 Status: active, boot-order Partition: /dev/disk/by-partuuid/585919b8-7f1b-4f94-a0b1-6ff195d07515 File: └─/EFI/SYSTEMD/SYSTEMD-BOOTX64.EFI Title: Windows Boot Manager ID: 0x0000 Status: active, boot-order Partition: /dev/disk/by-partuuid/42f0d8f0-13e0-41cf-bc36-ac80dccc54fd File: └─/EFI/MICROSOFT/BOOT/BOOTMGFW.EFI Title: UEFI OS ID: 0x0009 Status: active, boot-order Partition: /dev/disk/by-partuuid/585919b8-7f1b-4f94-a0b1-6ff195d07515 File: └─/EFI/BOOT/BOOTX64.EFIBoot Loader Entries: $BOOT: /boot/efi (/dev/disk/by-partuuid/585919b8-7f1b-4f94-a0b1-6ff195d07515)Default Boot Loader Entry: title: Pop!_OS id: Pop_OS-current.conf source: /boot/efi/loader/entries/Pop_OS-current.conf linux: /EFI/Pop_OS-3ce60b75-530a-4cad-9e80-5156a8e6bb56/vmlinuz.efi initrd: /EFI/Pop_OS-3ce60b75-530a-4cad-9e80-5156a8e6bb56/initrd.img options: root=UUID=3ce60b75-530a-4cad-9e80-5156a8e6bb56 ro quiet loglevel=0 systemd.sh> Notice the Windows Boot Manager entry under Boot Loaders Listed in EFI Variables . It seems systemd-boot is somewhat aware that my Windows partition exists, it just won't detect it as something that can be booted from. (running bootctl install doesn't seem to change anything) My /boot/efi/ directories look like this: /boot/efi/EFI├── BOOT│ └── BOOTX64.EFI├── Linux├── Pop_OS-3ce60b75-530a-4cad-9e80-5156a8e6bb56│ ├── cmdline│ ├── initrd.img│ └── vmlinuz.efi└── systemd └── systemd-bootx64.efi /boot/efi/loader/entries/└── Pop_OS-current.conf So the directories that should have been populated with the Windows Bootloader somehow aren't. How can I diagnose this problem, and add Windows as a startup option to systemd-boot? 
| Try this (the method has only been tested on a multi-drive system). Find the Windows EFI partition: lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT Create a mount point and mount the Windows EFI partition: sudo mkdir /mnt/win-efi ; sudo mount /dev/sdb1 /mnt/win-efi Copy the contents of the Windows EFI to the Pop EFI: sudo cp -r /mnt/win-efi/EFI/Microsoft /boot/efi/EFI Add a timer to the bootloader: sudo micro /boot/efi/loader/loader.conf and add a new line timeout 5 (or any number of seconds) to loader.conf Reboot: sudo reboot | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/610779",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/418104/"
]
} |
610,786 | I'm using bash shell. If I want to execute the last echo command, I can run history | grep echo and then grab the last echo command from what is displayed and run it. I was wondering, is there a shorter way to do this? I'm happy to use another shell if that allows me to somehow more easily execute the last "echo" command if all I know is the command started with "echo." | Try This method has only been tested on a multi drive system Find Windows EFI Partition lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT Create Path & Mount Windows EFI Partition sudo mkdir /mnt/win-efisudo mount /dev/sdb1 /mnt/win-efi Copy Contents of Windows EFI to POP EFI sudo cp -r /mnt/win-efi/EFI/Microsoft /boot/efi/EFI Add timer to bootloader sudo micro /boot/efi/loader/loader.conf and add a new line timeout 5 or any number of seconds to loader.conf Reboot sudo reboot | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/610786",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166917/"
]
} |
610,812 | I want to have a Linux OS and a Windows 10 OS on the same computer. Is there a way where, when I turn on the computer, I get an option for choosing which OS I want to boot? I want to use Ubuntu for programming and Windows 10 for gaming, since I've heard that a lot of games mainly work on Windows operating systems. What are the benefits of having multiple OSes? Is there a better setup for having more than one operating system? Linux seems pretty interesting to me and I just want to mess with it more. | Yes, of course you can have 2 or even more operating systems installed on the same computer. It's called dual-boot, see this article for example. Apart from installing additional operating systems you can consider running them in a virtual machine - it should be much easier to do than dual-boot, but it takes more RAM as you'd run more than one operating system at the same time. VirtualBox is a popular, free and stable software for running virtual machines and it works on Linux and Windows. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/610812",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/424184/"
]
} |
610,933 | I have a large csv file that starts this way : codeRegion,nomEPCI,codeDepartement,nomCommune,populationTotale,titre,objet,site_web,libelle_objet_social1,libelle_objet_social2,siret,numero_waldec,nombreAnneesExistence,date_creation,nombreAnneesDerniereDeclaration,date_derniere_declaration,position_activite,date_dissolution,codeCommune,adresse_siege_complement,adresse_siege_numero_voie,adresse_siege_type_voie,adresse_siege_libelle_voie,adresse_siege_distribution,adresse_siege_code_postal,nom_declarant,adresse_gestion_complement_association,adresse_gestion_complement_geo,adresse_gestion_libelle_voie,adresse_gestion_distribution_facturation,adresse_gestion_code_postal,adresse_gestion_achemine,adresse_gestion_pays,civilite_declarant,codeEPCI01,CA CAP Excellence,971,Abymes,54049,ABSOLU MAS/KA,"organiser des manifestations sociales et culturelles ainsi que des activités de loisirs","",Action socio-culturelle,"Clubs de loisirs, relations","",W9G2015492,0.5808219178082191,2020-02-05,0.5808219178082191,2020-02-05,A,0001-01-01,97101,62 Résidence les Lavandes,62,RES,Boisripeaux,_Boisripeaux,97139,"","",RÉSIDENCE LES LAVANDES,62 RESIDENCE BOISRIPEAUX,"",97139,LES ABYMES,FRANCE,PM,20001865301,CA CAP Excellence,971,Abymes,54049,AMBITION PLUS DE TERRASSON,proposer des activités sportives et culturelles,"",Action socio-culturelle,"Sports, activités de plein air","",W9G2015457,0.6356164383561644,2020-01-16,0.6356164383561644,2020-01-16,A,0001-01-01,97101,Maison Andreopa Marcel,3,CHEM1,"Route de Terrasson, Rue Albert Léogane",_97139,97139,"","",MAISON ANDREOPA MARCEL,3 CHEMIN ROUTE DE TERRASSO,"",97139,LES ABYMES,FRANCE,PF,20001865301,CA CAP Excellence,971,Abymes,54049,ASSOCIATION SYNDICALE DE LA RESIDENCE MORNE CARUEL,gérer et d'entretenir les espaces communs cette mission ne comporte pas la possibilité daliéner les espaces indivis si ce n'est au profit de la commune,"",Actions de sensibilisation et d'éducation à l'environnement et au développement durable,"","",W9G2015446,14.797260273972602,2005-11-21,14.797260273972602,2005-11-21,A,0001-01-01,97101,residence morne caruel,"","","",_,97139,"",residence morne caruel,"","","",97139,Les Abymes,FRANCE,PM,20001865301,CA CAP Excellence,971,Abymes,54049,ASSOCIATION SPORTIVE DU LYCEE POLYVALENT CHEVALIER DE SAINT-GEORGES,"organiser et développer en prolongement de l'éducation physique et sportive donnée pendant les heures de scolarité, l'initiation et la pratique sportive pour les élèves qui y adhèrent elle représente l'établissement dans les épreuves sportives scolaires","",Activités de plein air (dont saut à l'élastique),"Centres de loisirs, clubs de loisirs multiples","",W9G2002394,19.87123287671233,2000-10-26,4.742465753424658,2015-12-09,A,0001-01-01,97101,"","",BD,des Héros,_Baimbridge,97139,"","",LYCéE POLYVALENT CHEVALIER SAINT-GEORG,BOULEVARD DES HéROS,BAIMBRIDGE,97139,ABYMES,FRANCE,PM,20001865301,CA CAP Excellence,971,Abymes,54049,"LOISIRS, COMPETITIONS, CLUB (LO. CO. 
CLUB)","participer aux competitions cyclistes organiser des manifesta- tions sportives a caractere promotionnel.","",Activités de plein air (dont saut à l'élastique),"Centres de loisirs, clubs de loisirs multiples","",W9G2010818,28.747945205479454,1991-12-13,27.378082191780823,1993-04-26,A,0001-01-01,97101,"","","","",_,97139,"","","","","",97139,LES ABYMES,FRANCE,PM,20001865301,CA CAP Excellence,971,Abymes,54049,ASSOCIATION ETUDIANTE DE TOURISME ET DE LOISIRS.( A.T.O.L.),réunir les anciens élèves ayant eu une formation de tourisme et de loisirs et de promouvoir les actions du brevet de technicien supérieur de tourisme et de loisir,"","Amicales, personnel d’établissements scolaires ou universitaires","Syndicats d'initiative, offices de tourisme, salons du tourisme","",W9G2011048,24.53698630136986,1996-02-27,24.53150684931507,1996-02-29,A,0001-01-01,97101,"","","",Ecole superieure des cadres et techniciens,_Route de la rocade grand-camp,97139,"","","",Ecole superieure des cadres et technic,Route de la rocade grand-camp,97139,LES ABYMES,FRANCE,PM,200018653 It's fifth line of data has a description, between double quotes, containing a new line. It works perfectly with Excel or LibreCalc . I need to retain only the lines that are starting by a specific French region. For example here, the '01' one (Guadeloupe). I execute : # Get the CSV headerhead -n1 associations_touristiques.csv > associations_touristiques_gua.csv# Extract data of region '01'cat associations_touristiques.csv | grep -a '^01,' >> associations_touristiques_gua.csv But my final csv is broken. codeRegion,nomEPCI,codeDepartement,nomCommune,populationTotale,titre,objet,site_web,libelle_objet_social1,libelle_objet_social2,siret,numero_waldec,nombreAnneesExistence,date_creation,nombreAnneesDerniereDeclaration,date_derniere_declaration,position_activite,date_dissolution,codeCommune,adresse_siege_complement,adresse_siege_numero_voie,adresse_siege_type_voie,adresse_siege_libelle_voie,adresse_siege_distribution,adresse_siege_code_postal,nom_declarant,adresse_gestion_complement_association,adresse_gestion_complement_geo,adresse_gestion_libelle_voie,adresse_gestion_distribution_facturation,adresse_gestion_code_postal,adresse_gestion_achemine,adresse_gestion_pays,civilite_declarant,codeEPCI01,CA CAP Excellence,971,Abymes,54049,ABSOLU MAS/KA,"organiser des manifestations sociales et culturelles ainsi que des activités de loisirs","",Action socio-culturelle,"Clubs de loisirs, relations","",W9G2015492,0.5808219178082191,2020-02-05,0.5808219178082191,2020-02-05,A,0001-01-01,97101,62 Résidence les Lavandes,62,RES,Boisripeaux,_Boisripeaux,97139,"","",RÉSIDENCE LES LAVANDES,62 RESIDENCE BOISRIPEAUX,"",97139,LES ABYMES,FRANCE,PM,20001865301,CA CAP Excellence,971,Abymes,54049,AMBITION PLUS DE TERRASSON,proposer des activités sportives et culturelles,"",Action socio-culturelle,"Sports, activités de plein air","",W9G2015457,0.6356164383561644,2020-01-16,0.6356164383561644,2020-01-16,A,0001-01-01,97101,Maison Andreopa Marcel,3,CHEM1,"Route de Terrasson, Rue Albert Léogane",_97139,97139,"","",MAISON ANDREOPA MARCEL,3 CHEMIN ROUTE DE TERRASSO,"",97139,LES ABYMES,FRANCE,PF,20001865301,CA CAP Excellence,971,Abymes,54049,ASSOCIATION SYNDICALE DE LA RESIDENCE MORNE CARUEL,gérer et d'entretenir les espaces communs cette mission ne comporte pas la possibilité daliéner les espaces indivis si ce n'est au profit de la commune,"",Actions de sensibilisation et d'éducation à l'environnement et au développement 
durable,"","",W9G2015446,14.797260273972602,2005-11-21,14.797260273972602,2005-11-21,A,0001-01-01,97101,residence morne caruel,"","","",_,97139,"",residence morne caruel,"","","",97139,Les Abymes,FRANCE,PM,20001865301,CA CAP Excellence,971,Abymes,54049,ASSOCIATION SPORTIVE DU LYCEE POLYVALENT CHEVALIER DE SAINT-GEORGES,"organiser et développer en prolongement de l'éducation physique et sportive donnée pendant les heures de scolarité, l'initiation et la pratique sportive pour les élèves qui y adhèrent elle représente l'établissement dans les épreuves sportives scolaires","",Activités de plein air (dont saut à l'élastique),"Centres de loisirs, clubs de loisirs multiples","",W9G2002394,19.87123287671233,2000-10-26,4.742465753424658,2015-12-09,A,0001-01-01,97101,"","",BD,des Héros,_Baimbridge,97139,"","",LYCéE POLYVALENT CHEVALIER SAINT-GEORG,BOULEVARD DES HéROS,BAIMBRIDGE,97139,ABYMES,FRANCE,PM,20001865301,CA CAP Excellence,971,Abymes,54049,"LOISIRS, COMPETITIONS, CLUB (LO. CO. CLUB)","participer aux competitions cyclistes01,CA CAP Excellence,971,Abymes,54049,ASSOCIATION ETUDIANTE DE TOURISME ET DE LOISIRS.( A.T.O.L.),réunir les anciens élèves ayant eu une formation de tourisme et de loisirs et de promouvoir les actions du brevet de technicien supérieur de tourisme et de loisir,"","Amicales, personnel d’établissements scolaires ou universitaires","Syndicats d'initiative, offices de tourisme, salons du tourisme","",W9G2011048,24.53698630136986,1996-02-27,24.53150684931507,1996-02-29,A,0001-01-01,97101,"","","",Ecole superieure des cadres et techniciens,_Route de la rocade grand-camp,97139,"","","",Ecole superieure des cadres et technic,Route de la rocade grand-camp,97139,LES ABYMES,FRANCE,PM,20001865301,CA CAP Excellence,971,Abymes,54049,ASSOCIATION AMICALE DES AGENTS D'ENTRETIEN DE LA MUNICIPALITE DES ABYMES.,"realiser des rencontres et echanges d'ordre culturel, sportif, social avec toutes categories de personnel communal et autres associations ou groupements.","",Association du personnel d'une entreprise (hors caractère syndical),"Comités de défense et d'animation de quartier, association locale ou municipale","",W9G2012061,31.56986301369863,1989-02-16,31.56986301369863,1989-02-16,A,0001-01-01,97101,"","","","",_,97139,"","","","","",97139,LES ABYMES,FRANCE,PM,200018653 Tthe line 5 is truncated after cyclistes . What the proper way to make cat command not taking into account a newline when it is inside double quotes, and is it needed to change the grep command too after that, and if so : how ? | awk '/^01/||n%2{print;n+=gsub(/"/,"&")}' file For each line, /^01/||n%2 If line begins with 01 or n (initally zero) is odd, print Print it n+=gsub(/"/,"&") increment n by the return value of the gsub function. This replaces every double-quote /"/ with itself "&" . That would be pointless, indeed, but it also returns the number of substitutions made, so it is a way of counting the number of double-quotes in the line. Notice that if the n is odd ( n%2 ) the line does not have a closing double-quote, so it keeps printing until n is even, regardless of whether there is a /^01/ match on the next lines. A side-by-side diff for you: $ diff -yW 30 <(cat file) <(awk '/^01/||n%2{print;n+=gsub(/"/,"&")}' file)04,xde <01,abc" 01,abc"cd cdas" as"02,dsad <03,1ad" <01,as,"as 01,as,"asus" us"02,s <01,a 01,a | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/610933",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/350549/"
]
} |
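Applied to the original file, the same quote-counting idea could keep the header as well as the region-01 records — a sketch adapted from the answer above, not the exact command it gives:
awk 'NR==1 || /^01,/ || n%2 { print; n += gsub(/"/,"&") }' \
    associations_touristiques.csv > associations_touristiques_gua.csv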
610,946 | In order to repeat a character N times, we could use printf . E.g to repeat @ 20 times, we could use something like this: N=20printf '@%.0s' $(seq 1 $N) output: @@@@@@@@@@@@@@@@@@@@ However, there is no newline character at the end of that string. I've tried piping the output to sed : printf '@%.0s' $(seq 1 $N) | sed '$s/$/\n/' Is it possible to achieve the same result with a single printf (adding a newline character at the end of the output) without using sed? | With zsh : printf '%s\n' ${(l[20][@])} (using the l left-padding parameter expansion flag . You could also use the r ight padding one here). Of course, you don't have to use printf . You could also use print or echo here which do add a \n by default. ( printf '%s\n' "$string" can be written print -r -- "$string" or echo -E - "$string" in zsh , though if $string doesn't contain backslashes and doesn't start with - , that can be simplified to print "$string" / echo "$string" ). If the end-goal is to display a list of strings padded to the width of the screen, you'd do: $ lines=(short 'longer text' 'even longer')$ print -rC1 -- ${(ml[$COLUMNS][@][ ])lines}@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ short@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ longer text@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ even longer$ print -rC1 -- ${(mr[$COLUMNS][@][ ])lines}short @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@longer text @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@even longer @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ Where the m flag causes zsh to take into account the display width of each character (like for those double-width characters above (which your browser may not render with exactly double-width, but your terminal should)). print -rC1 -- is like printf '%s\n' or print -rl -- to print one element per line except in the case where no arguments are passed to it (like when lines=() ) in which case it prints nothing instead of an empty line). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/610946",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/267622/"
]
} |
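If zsh is not available, one common bash workaround stays close to a single printf, at the cost of a tr (a sketch; the '*' width specifier works in bash's printf builtin but is not guaranteed by POSIX):
N=20
printf '%*s\n' "$N" '' | tr ' ' '@'
This prints N spaces and a newline, then translates the spaces to @.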
611,331 | This is the standard output of ls -ln | nl wolf@linux:~$ ls -lh | nl 1 total 24 2 -rw-rw-r-- 1 wolf wolf 186 Sep 24 22:18 01.py 3 -rw-rw-r-- 1 wolf wolf 585 Sep 24 22:21 02.py 4 -rw-rw-r-- 1 wolf wolf 933 Sep 24 22:26 03.pywolf@linux:~$ Instead of starting the number from total 24 , would it be possible to start it from the actual files/directory which is the second line? Desired output wolf@linux:~$ ls -lh | nl total 24 1 -rw-rw-r-- 1 wolf wolf 186 Sep 24 22:18 01.py 2 -rw-rw-r-- 1 wolf wolf 585 Sep 24 22:21 02.py 3 -rw-rw-r-- 1 wolf wolf 933 Sep 24 22:26 03.pywolf@linux:~$ | This will make nl start from 0 : $ ls -lh | nl -v 0 0 total 24 1 -rw-rw-r-- 1 wolf wolf 186 Sep 24 22:18 01.py 2 -rw-rw-r-- 1 wolf wolf 585 Sep 24 22:21 02.py 3 -rw-rw-r-- 1 wolf wolf 933 Sep 24 22:26 03.py | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/611331",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/409008/"
]
} |
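If the goal is the exact desired output — the total line left completely unnumbered rather than numbered 0 — a small awk sketch does it:
ls -lh | awk 'NR == 1 { print; next } { printf "%6d\t%s\n", NR - 1, $0 }'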
611,347 | I am looking for a Linux command on RHEL 6.x to search for a particular directory structure under the current directory. I know the command below, which searches for a single directory name; it works for a single directory but not for a directory structure. Working: find /home/dir1/* -name "def" Not working: find /home/dir1/* -name "abc/def" I also tried the command below ( locate abc/def/ ), but it also lists the files inside that directory, and I don't want the files listed. I only want to list the full path of every directory that contains this abc/def directory structure. Can anyone please help me? Thanks in advance. | The -name test only matches the last path component. To match something like abc/def you will need -path : $ mkdir -p somedir/otherdir/abc/def/ghi $ find somedir -path '*/abc/def' somedir/otherdir/abc/def | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/611347",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95117/"
]
} |
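Two refinements that may be useful in practice (the -ipath test is a GNU find extension, so check your find version):
find /home -type d -path '*/abc/def'     # restrict matches to directories
find /home -type d -ipath '*/abc/def'    # same, but case-insensitive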
611,372 | Frequencies -- 1403.6738 1403.6738 1403.6738IR Inten -- 25.0809 25.0809 25.0809 I want to get two columns Frequencies IR Inten1403.6738 25.0809 and so on | The -name test only matches the last path component. To match something like abc/def you will need -path : $ mkdir -p somedir/otherdir/abc/def/ghi$ find somedir -path '*/abc/def'somedir/otherdir/abc/def | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/611372",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/434510/"
]
} |
611,394 | What's the difference between ${1-default_string_value} and ${1-`echo default_string_value`} Why use the second form over the first one? Edit: I have seen the second form used in multiple places, for instance as a relatively wide spread git alias abbrev = !sh -c 'git rev-parse --short ${1-`echo HEAD`}' - My gut feeling is that this second form is used by modifying existing (e.g. googled) snippets that do have to use a command as default value, but I could be overlooking something, hence this question. | The -name test only matches the last path component. To match something like abc/def you will need -path : $ mkdir -p somedir/otherdir/abc/def/ghi$ find somedir -path '*/abc/def'somedir/otherdir/abc/def | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/611394",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140852/"
]
} |
611,407 | I'm using Ubuntu20.04 I used the command sudo apt-get install gnome by mistake and gnome package was installed on my system, how to restore everything to its original state? | I've just tested sudo apt-get install gnome and then sudo apt-get purge gnome --autoremove on a fresh Ubuntu 20.04.1 LTS. It does not remove all of the dependencies which are dragged in by installing gnome because many of them are suggested by other installed packages. Fortunately, on Ubuntu, apt keeps a log of what packages were installed and when. To see the log, it is enough to issue: $ cat /var/log/apt/history.log There you will find the list of packages grouped by install occurrence. Look for Commandline: apt-get install gnome . If you did not install or upgrade recently, it should be the last one. To copy the list of packages for removal, you will want to pre-format them to strip the information in parentheses. You can do it with this useful script : $ perl -pe 's/\(.*?\)(, )?//g' /var/log/apt/history.log Then just copy the list of packages installed with gnome to your sudo apt purge command. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/611407",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/434534/"
]
} |
611,573 | Let's say I have a directory called /tmp/main and inside it I have 100 other directories . I want to run a loop through each directory of those directories, for example to make a file with touch test.txt How do I tell the script to process the first, the second, the third and so on? | A simple loop would work: for dir in /tmp/main/*/; do touch "$dir"/test.txtdone The / at the end of the pattern /tmp/main/*/ guarantees that if the pattern matches anything, it will match a directory. In bash , you may want to set the nullglob shell option with shopt -s nullglob before the loop to ensure that the loop doesn't run at all if the pattern doesn't match anything. Without nullglob set, the loop would still run once with the pattern unexpanded in $dir . Another way to fix that would be to make sure that $dir is actually a directory before calling touch : for dir in /tmp/main/*/; do if [ -d "$dir" ]; then touch "$dir"/test.txt fidone or, equivalently, for dir in /tmp/main/*/; do [ -d "$dir" ] && touch "$dir"/test.txtdone | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/611573",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/382913/"
]
} |
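An equivalent non-loop form, for when the directories are numerous or you are already in a find pipeline (the -mindepth/-maxdepth options are common but not POSIX):
find /tmp/main -mindepth 1 -maxdepth 1 -type d -exec sh -c 'touch "$1"/test.txt' sh {} \;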
611,590 | $ mkdir temp && cd temp$ ls [0-9]ls: cannot access '[0-9]': No such file or directory$ touch \[0-9\]$ ls [0-9]'[0-9]'$ touch 1$ ls 1 '[0-9]'$ ls [0-9]1 I find this behavior very surprising. To me, the [0-9] glob pattern should only match a file whose name consists of a single numeric digit. But it is also sometimes matching a file named [0-9] itself. Is this a bug with glob expansion in bash? Can I disable this, so that [0-9] never matches [0-9] (as it shouldn't)? | You should set the failglob option with shopt -s failglob : $ ls [2-9]ls: cannot access '[2-9]': No such file or directory$ touch '[2-9]'$ ls [2-9][2-9]$ shopt -s failglob$ ls [2-9]bash: no match: [2-9] Bonus question: why did the second-to-last ls print a leading space? Because of the new "user-friendly" default quoting style of GNU ls: $ touch 1$ unset QUOTING_STYLE # use the default$ ls 1 '[2-9]'$ QUOTING_STYLE=literal ls1 [2-9]$ ls --quoting-style=literal1 [2-9] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/611590",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163490/"
]
} |
611,685 | I want to find all directories and subdirectories in my current folder, excluding those that are hidden (or belonging to those that are hidden). The following do not work: find . -type d -name "[!.]*" because I think it only avoids empty hidden folders. Something like this is matched ./.cache/totem | I'm assuming you're classing directories that start with a dot as "hidden". To avoid descending into such directories you should use -prune . find . -mindepth 1 -type d \( -name '.*' -prune -o -print \) This starts in the current directory (we could have specified * here but that presupposes your wildcard is not set to include dot files/directories - for example bash 's dotglob ). It then matches only on directories, but not considering . itself. The section in brackets tells find that if the name matches .* then it's to be pruned, so that neither it nor its descendants are to be considered further; otherwise print its name. If you don't have the (non-POSIX) -mindepth option you could use this alternative. Arguably this is better than the original solution I've suggested but I'm going to leave both in the answer find . -type d \( -name '.?*' -prune -o -print \) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/611685",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/416168/"
]
} |
611,713 | I often need to pop the last positional argument of a bash function or script. By "pop" I mean: "remove it from the list of positional arguments, and (optionally) assign it to a variable." Given how frequently I need this operation, I am a bit surprised that best I have found is what is illustrated by the example below: foo () { local argv=( "$@" ) local last=${argv[$(( ${#argv[@]} - 1 ))]} argv=( ${argv[@]:0:$(( ${#argv[@]} - 1 ))} ) echo "last: $last" echo "rest: ${argv[@]}"} In other words, an epic production featuring a cast of thousands... Is there anything simpler, easier to read? | You can access the last element with ${argv[-1]} (bash 4.2 or above) and remove it from the array with the unset builtin (bash 4.3 or above): last=${argv[-1]}unset 'argv[-1]' The quotes around argv[-1] are required as [...] is a glob operator, so argv[-1] unquoted could expand to argv- and/or argv1 if those files existed in the current directory (or to nothing or cause an error if they didn't with nullglob / failglob enabled). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/611713",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
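For bash releases older than 4.3, where unset 'argv[-1]' is not available, the same pop can be written with an explicit index — a sketch:
idx=$(( ${#argv[@]} - 1 ))
last=${argv[idx]}
unset "argv[idx]"
The quotes around argv[idx] matter for the same globbing reason given above.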
612,322 | Since being corrected many years ago, I switched from backticks to $() for command expansion. But I still prefer the backticks. It is fewer keystrokes and does not involve the Shift key. I understand that the parentheses are preferable because it is less prone to the errors that backticks is prone to, but what is the reason for the rule to never use backticks? | The Bash FAQ gives a number of reasons to prefer parentheses to backticks, but there isn’t a universal rule that you shouldn’t ever use backticks. The main reason to prefer parentheses in my view is that parsing inside $() is consistent with parsing performed outside, which isn’t the case with backticks. This means that you can take a shell command and wrap it with "$()" without much thought; that’s not true if you use backticks instead. This cascades, so wrapping a command which itself contains a substitution is easily done with "$()" , not so with backticks. Ultimately I think it’s a question of habit. If you choose to use backticks for simple cases, parentheses for others, you’ll have to make that choice every time you want to substitute a command. If you choose to always use parentheses, you never have to think about it again. The latter can explain the presence of a “don’t use backticks” rule in certain coding guides: it simplifies development, and removes a source of errors for developers and reviewers. It also explains why using parentheses can be recommended even for one-liners: it’s hard to ingrain a habit for script-writing when it’s not applied everywhere. (As far as keying goes, that depends on the keyboard layout; on my AZERTY keyboard, $() doesn’t involve any shifting, whereas backticks are quite painful to write.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/612322",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/251630/"
]
} |
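A concrete illustration of the parsing-consistency point (the commands are only examples; nothing here depends on what they actually print):
outer=$(dirname "$(which ls)")    # nests naturally; inner double quotes behave as usual
outer=`dirname \`which ls\``      # backticks must be escaped to nest, and quoting rules change inside
The second form degrades with each additional nesting level, which is exactly the error source the "always use $()" rule removes.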
612,333 | I always login a server with a ssh key, so I am not sure what my actual password was anymore. I would need to guess a couple of times. However the server has fail2ban and I don't want to trigger that. Is there any way to check which of my passwords corresponds to the ssh key which is accepted by the server? Is there anyway I can check my password after logging in with the ssh-key without triggering fail2ban? | No, there is no relation between an account password and the ssh key. You can logon to an account with ssh with a key even if it has no password. For the updated question, yes, type: su YOURUSERNAME then try if you remember your password | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/612333",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/224357/"
]
} |
612,335 | I just struggle with the problem that I want to create a directory tree on a remote machine in which all directories have a certain group ownership. Furthermore, I explicitely want to have parent directories generated automatically if not existent yet. So what I tried to do by now was: ssh me@remotemachine "newgrp mygroup && mkdir -p /path/to/my/directory" However, it seems not to work to execute newgrp on the remote machine via SSH. Of course another option might be to create the directory first and then change the group ownership afterwards, but this would require that I knew which parent directories were created automatically by the -p option of mkdir . So is there a way to either log in by SSH as member of a specific group rather than as member of my default group on the remote machine or, alternatively, to get mkdir telling me which parent directories it created automatically? | No, there is no relation between an account password and the ssh key. You can logon to an account with ssh with a key even if it has no password. For the updated question, yes, type: su YOURUSERNAME then try if you remember your password | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/612335",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/435354/"
]
} |
612,416 | I tried to check what my DNS resolver is and I noticed this: user@ubuntu:~$ cat /etc/resolv.conf nameserver 127.0.0.53options edns0 I was expecting 192.168.1.1 , which is my default gateway, my router. I don't understand why it points at 127.0.0.53 . When I hit that ip, apache2 serves me its contents. Could someone clear this up for me? Shouldn't the file point directly at my default gateway which acts as a DNS resolver - or even better directly at my preferred DNS which is 1.1.1.1 ? P.S: When I capture DNS packets with wireshark on port 53 all I see is 192.168.1.1 and not 127.0.0.53 , as it should be. | You are likely running systemd-resolved as a service. systemd-resolved generates two configuration files on the fly, for optional use by DNS client libraries (such as the BIND DNS client library in C libraries): /run/systemd/resolve/stub-resolv.conf tells DNS client libraries to send their queries to 127.0.0.53. This is where the systemd-resolved process listens for DNS queries, which it then forwards on. /run/systemd/resolve/resolv.conf tells DNS client libraries to send their queries to IP addresses that systemd-resolved has obtained on the fly from its configuration files and DNS server information contained in DHCP leases. Effectively, this bypasses the systemd-resolved forwarding step, at the expense of also bypassing all of systemd-resolved 's logic for making complex decisions about what to actually forward to, for any given transaction. In both cases, systemd-resolved configures a search list of domain name suffixes, again derived on the fly from its configuration files and DHCP leases (which it is told about via a mechanism that is beyond the scope of this answer). /etc/resolv.conf can optionally be: a symbolic link to either of these; a symbolic link to a package-supplied static file at /usr/lib/systemd/resolv.conf , which also specifies 127.0.0.53 but no search domains calculated on the fly; some other file entirely. It's likely that you have such a symbolic link.In which case, the thing that knows about the 192.168.1.1 setting, that is (presumably) handed out in DHCP leases by the DHCP server on your LAN, is systemd-resolved , which is forwarding query traffic to it as you have observed.Your DNS client libraries, in your applications programs, are themselves only talking to systemd-resolved . Ironically, although it could be that you haven't captured loopback interface traffic to/from 127.0.0.53 properly, it is more likely that you aren't seeing it because systemd-resolved also (optionally) bypasses the BIND DNS Client in your C libraries and generates no such traffic to be captured. There's an NSS module provided with systemd-resolved , named nss-resolve , that is a plug-in for your C libraries.Previously, your C libraries would have used another plug-in named nss-dns which uses the BIND DNS Client to make queries using the DNS protocol to the server(s) listed in /etc/resolv.conf , applying the domain suffixes listed therein. nss-resolve gets listed ahead of nss-dns in your /etc/nsswitch.conf file, causing your C libraries to not use the BIND DNS Client, or the DNS protocol, to perform name→address lookups at all.Instead, nss-resolve speaks a non-standard and idiosyncratic protocol over the (system-wide) Desktop Bus to systemd-resolved , which again makes back end queries of 192.168.1.1 or whatever your DHCP leases and configuration files say. 
To intercept that you have to monitor the Desktop Bus traffic with dbus-monitor or some such tool.It's not even IP traffic, let alone IP traffic over a loopback network interface. as the Desktop Bus is reached via an AF_LOCAL socket. If you want to use a third-party resolving proxy DNS server at 1.1.1.1, or some other IP address, you have three choices: Configure your DHCP server to hand that out instead of handing out 192.168.1.1. systemd-resolved will learn of that via the DHCP leases and use it. Configure systemd-resolved via its own configuration mechanisms to use that instead of what it is seeing in the DHCP leases. Make your own /etc/resolv.conf file, an actual regular file instead of a symbolic link, list 1.1.1.1 there and remember to turn off nss-resolve so that you go back to using nss-dns and the BIND DNS Client. The systemd-resolved configuration files are a whole bunch of files in various directories that get combined, and how to configure them for the second choice aforementioned is beyond the scope of this answer.Read the resolved.conf (5) manual page for that. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/612416",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/433400/"
]
} |
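A few read-only commands can confirm which of the cases above applies on a given machine (resolvectl is the name on newer systemd releases; older ones ship the same functionality as systemd-resolve):
ls -l /etc/resolv.conf         # is it a symlink, and to which of the generated files?
resolvectl status              # the per-link DNS servers systemd-resolved is really using
resolvectl query example.com   # resolve a name through systemd-resolved explicitly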
612,420 | I have a Table in unix with below format and covert as output. +--------------------------+-------------------------+-+| col_name | type | +--------------------------+-------------------------+-+| Name | String || Date | Fri 29 13:17:2020 |+--------------------------+-------------------------+-+ Output: "col_name","type""Name","String""Date","Fri 29 13:17:2020" Any help would be appreciated. | You are likely running systemd-resolved as a service. systemd-resolved generates two configuration files on the fly, for optional use by DNS client libraries (such as the BIND DNS client library in C libraries): /run/systemd/resolve/stub-resolv.conf tells DNS client libraries to send their queries to 127.0.0.53. This is where the systemd-resolved process listens for DNS queries, which it then forwards on. /run/systemd/resolve/resolv.conf tells DNS client libraries to send their queries to IP addresses that systemd-resolved has obtained on the fly from its configuration files and DNS server information contained in DHCP leases. Effectively, this bypasses the systemd-resolved forwarding step, at the expense of also bypassing all of systemd-resolved 's logic for making complex decisions about what to actually forward to, for any given transaction. In both cases, systemd-resolved configures a search list of domain name suffixes, again derived on the fly from its configuration files and DHCP leases (which it is told about via a mechanism that is beyond the scope of this answer). /etc/resolv.conf can optionally be: a symbolic link to either of these; a symbolic link to a package-supplied static file at /usr/lib/systemd/resolv.conf , which also specifies 127.0.0.53 but no search domains calculated on the fly; some other file entirely. It's likely that you have such a symbolic link.In which case, the thing that knows about the 192.168.1.1 setting, that is (presumably) handed out in DHCP leases by the DHCP server on your LAN, is systemd-resolved , which is forwarding query traffic to it as you have observed.Your DNS client libraries, in your applications programs, are themselves only talking to systemd-resolved . Ironically, although it could be that you haven't captured loopback interface traffic to/from 127.0.0.53 properly, it is more likely that you aren't seeing it because systemd-resolved also (optionally) bypasses the BIND DNS Client in your C libraries and generates no such traffic to be captured. There's an NSS module provided with systemd-resolved , named nss-resolve , that is a plug-in for your C libraries.Previously, your C libraries would have used another plug-in named nss-dns which uses the BIND DNS Client to make queries using the DNS protocol to the server(s) listed in /etc/resolv.conf , applying the domain suffixes listed therein. nss-resolve gets listed ahead of nss-dns in your /etc/nsswitch.conf file, causing your C libraries to not use the BIND DNS Client, or the DNS protocol, to perform name→address lookups at all.Instead, nss-resolve speaks a non-standard and idiosyncratic protocol over the (system-wide) Desktop Bus to systemd-resolved , which again makes back end queries of 192.168.1.1 or whatever your DHCP leases and configuration files say. To intercept that you have to monitor the Desktop Bus traffic with dbus-monitor or some such tool.It's not even IP traffic, let alone IP traffic over a loopback network interface. as the Desktop Bus is reached via an AF_LOCAL socket. 
If you want to use a third-party resolving proxy DNS server at 1.1.1.1, or some other IP address, you have three choices: Configure your DHCP server to hand that out instead of handing out 192.168.1.1. systemd-resolved will learn of that via the DHCP leases and use it. Configure systemd-resolved via its own configuration mechanisms to use that instead of what it is seeing in the DHCP leases. Make your own /etc/resolv.conf file, an actual regular file instead of a symbolic link, list 1.1.1.1 there and remember to turn off nss-resolve so that you go back to using nss-dns and the BIND DNS Client. The systemd-resolved configuration files are a whole bunch of files in various directories that get combined, and how to configure them for the second choice aforementioned is beyond the scope of this answer.Read the resolved.conf (5) manual page for that. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/612420",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/435422/"
]
} |
612,443 | Because of a few different applications I need to use, I need to be able to bypass Google's 2 Factor Authentication pam.d module when an SSH connection is coming from the same network. There is very little information about this online, but there are a few questions on the Stack Network, but none of the solutions worked for me. I am not sure if it is because the solutions are specifically for Linux, or I am just missing something. I am using macOS in all instances here. I am not very familiar with these settings. I do want to require a password, key, & 2FA if I am not on the same local network, but skip the 2FA if I am on the same local network Current Setup: SSH requires a valid key, password, & 2 Factor Auth File Contents Of: /etc/pam.d/sshd auth optional pam_krb5.so use_kcminitauth optional pam_ntlm.so try_first_passauth optional pam_mount.so try_first_passauth required pam_opendirectory.so try_first_passauth required pam_google_authenticator.so nullokaccount required pam_nologin.soaccount required pam_sacl.so sacl_service=sshaccount required pam_opendirectory.sopassword required pam_opendirectory.sosession required pam_launchd.sosession optional pam_mount.so /etc/ssh/ssh_config # Host *# ForwardAgent no# ForwardX11 no# PasswordAuthentication yes# HostbasedAuthentication no GSSAPIAuthentication yes GSSAPIDelegateCredentials no# BatchMode no# CheckHostIP yes# AddressFamily any# ConnectTimeout 0# StrictHostKeyChecking ask# IdentityFile ~/.ssh/id_rsa# IdentityFile ~/.ssh/id_dsa# IdentityFile ~/.ssh/id_ecdsa# IdentityFile ~/.ssh/id_ed25519# Port 22# Protocol 2# Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc# MACs hmac-md5,hmac-sha1,[email protected]# EscapeChar ~# Tunnel no# TunnelDevice any:any# PermitLocalCommand no# VisualHostKey no# ProxyCommand ssh -q -W %h:%p gateway.example.com# RekeyLimit 1G 1hHost * SendEnv LANG LC_* /etc/ssh/sshd_config #Protocol VersionProtocol 2#Port 22#AddressFamily any#ListenAddress 0.0.0.0#ListenAddress ::#HostKey /etc/ssh/ssh_host_rsa_key#HostKey /etc/ssh/ssh_host_ecdsa_key#HostKey /etc/ssh/ssh_host_ed25519_key# Ciphers and keying#RekeyLimit default none# Logging#SyslogFacility AUTH#LogLevel INFO# Authentication:#LoginGraceTime 2m#PermitRootLogin prohibit-password#StrictModes yesMaxAuthTries 3#MaxSessions 10PubkeyAuthentication yesAuthenticationMethods publickey,keyboard-interactive:pam# The default is to check both .ssh/authorized_keys and .ssh/authorized_keys2# but this is overridden so installations will only check .ssh/authorized_keysAuthorizedKeysFile .ssh/authorized_keys#AuthorizedPrincipalsFile none#AuthorizedKeysCommand none#AuthorizedKeysCommandUser nobody# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts#HostbasedAuthentication no# Change to yes if you don't trust ~/.ssh/known_hosts for# HostbasedAuthentication#IgnoreUserKnownHosts no# Don't read the user's ~/.rhosts and ~/.shosts files#IgnoreRhosts yes# To disable tunneled clear text passwords, change to no here!#PasswordAuthentication yesPermitEmptyPasswords no# Change to no to disable s/key passwordsChallengeResponseAuthentication yes# Kerberos optionsKerberosAuthentication yesKerberosOrLocalPasswd yesKerberosTicketCleanup yes#KerberosGetAFSToken no# GSSAPI optionsGSSAPIAuthentication yesGSSAPICleanupCredentials yes# Set this to 'yes' to enable PAM authentication, account processing,# and session processing. If this is enabled, PAM authentication will# be allowed through the ChallengeResponseAuthentication and# PasswordAuthentication. 
Depending on your PAM configuration,# PAM authentication via ChallengeResponseAuthentication may bypass# the setting of "PermitRootLogin without-password".# If you just want the PAM account and session checks to run without# PAM authentication, then enable this but set PasswordAuthentication# and ChallengeResponseAuthentication to 'no'.UsePAM yes#AllowAgentForwarding yes#AllowTcpForwarding yes#GatewayPorts no#X11Forwarding no#X11DisplayOffset 10#X11UseLocalhost yes#PermitTTY yes#PrintMotd yes#PrintLastLog yes#TCPKeepAlive yes#PermitUserEnvironment no#Compression delayedClientAliveInterval 360ClientAliveCountMax 0#UseDNS no#PidFile /var/run/sshd.pid#MaxStartups 10:30:100#PermitTunnel no#ChrootDirectory none#VersionAddendum none# pass locale informationAcceptEnv LANG LC_*# no default banner pathBanner /etc/ssh/banner# override default of no subsystemsSubsystem sftp /usr/libexec/sftp-server# Example of overriding settings on a per-user basis#Match User anoncvs# X11Forwarding no# AllowTcpForwarding no# PermitTTY no# ForceCommand cvs server EDIT: I attempted a few different combinations of the listed solutions to the Stack posts at the links listed below but I could not get the provided solutions to work. I do not know if I am missing something in my configuration, or if it has to do with I'm using macOS, or if maybe the order of what's listed in my sshd file in pam.d is incorrect. SSH - Only require google-authenticator from outside local network https://serverfault.com/questions/799657/ssh-google-authenticator-ignore-whitelist-ips I attempted to add this to the sshd file in pam.d: auth [success=1 default=ignore] pam_access.so accessfile=/etc/security/access.confauth sufficient pam_google_authenticator.so And adding an access.conf file to /etc/security/access.conf: + : ALL : 10.0.1.0/24+ : ALL : LOCAL+ : ALL : 10.0.1.4+ : ALL : 10.0.1.6+ : ALL : 10.0.1.16+ : ALL : 10.0.1.20- : ALL : ALL | You are likely running systemd-resolved as a service. systemd-resolved generates two configuration files on the fly, for optional use by DNS client libraries (such as the BIND DNS client library in C libraries): /run/systemd/resolve/stub-resolv.conf tells DNS client libraries to send their queries to 127.0.0.53. This is where the systemd-resolved process listens for DNS queries, which it then forwards on. /run/systemd/resolve/resolv.conf tells DNS client libraries to send their queries to IP addresses that systemd-resolved has obtained on the fly from its configuration files and DNS server information contained in DHCP leases. Effectively, this bypasses the systemd-resolved forwarding step, at the expense of also bypassing all of systemd-resolved 's logic for making complex decisions about what to actually forward to, for any given transaction. In both cases, systemd-resolved configures a search list of domain name suffixes, again derived on the fly from its configuration files and DHCP leases (which it is told about via a mechanism that is beyond the scope of this answer). /etc/resolv.conf can optionally be: a symbolic link to either of these; a symbolic link to a package-supplied static file at /usr/lib/systemd/resolv.conf , which also specifies 127.0.0.53 but no search domains calculated on the fly; some other file entirely. 
It's likely that you have such a symbolic link.In which case, the thing that knows about the 192.168.1.1 setting, that is (presumably) handed out in DHCP leases by the DHCP server on your LAN, is systemd-resolved , which is forwarding query traffic to it as you have observed.Your DNS client libraries, in your applications programs, are themselves only talking to systemd-resolved . Ironically, although it could be that you haven't captured loopback interface traffic to/from 127.0.0.53 properly, it is more likely that you aren't seeing it because systemd-resolved also (optionally) bypasses the BIND DNS Client in your C libraries and generates no such traffic to be captured. There's an NSS module provided with systemd-resolved , named nss-resolve , that is a plug-in for your C libraries.Previously, your C libraries would have used another plug-in named nss-dns which uses the BIND DNS Client to make queries using the DNS protocol to the server(s) listed in /etc/resolv.conf , applying the domain suffixes listed therein. nss-resolve gets listed ahead of nss-dns in your /etc/nsswitch.conf file, causing your C libraries to not use the BIND DNS Client, or the DNS protocol, to perform name→address lookups at all.Instead, nss-resolve speaks a non-standard and idiosyncratic protocol over the (system-wide) Desktop Bus to systemd-resolved , which again makes back end queries of 192.168.1.1 or whatever your DHCP leases and configuration files say. To intercept that you have to monitor the Desktop Bus traffic with dbus-monitor or some such tool.It's not even IP traffic, let alone IP traffic over a loopback network interface. as the Desktop Bus is reached via an AF_LOCAL socket. If you want to use a third-party resolving proxy DNS server at 1.1.1.1, or some other IP address, you have three choices: Configure your DHCP server to hand that out instead of handing out 192.168.1.1. systemd-resolved will learn of that via the DHCP leases and use it. Configure systemd-resolved via its own configuration mechanisms to use that instead of what it is seeing in the DHCP leases. Make your own /etc/resolv.conf file, an actual regular file instead of a symbolic link, list 1.1.1.1 there and remember to turn off nss-resolve so that you go back to using nss-dns and the BIND DNS Client. The systemd-resolved configuration files are a whole bunch of files in various directories that get combined, and how to configure them for the second choice aforementioned is beyond the scope of this answer.Read the resolved.conf (5) manual page for that. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/612443",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321965/"
]
} |
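A quick way to verify the wiring described in the answer above is to look at the resolv.conf symlink, the learned upstream servers, and the nsswitch ordering. This is only a hedged sketch: it assumes a systemd-resolved host with the resolvectl tool (older releases ship the same functionality as systemd-resolve).
#!/bin/sh
# Where does /etc/resolv.conf actually point?
ls -l /etc/resolv.conf
# Which upstream DNS servers has systemd-resolved learned (e.g. from DHCP)?
resolvectl status 2>/dev/null || systemd-resolve --status
# Is nss-resolve listed ahead of the classic dns module?
grep '^hosts:' /etc/nsswitch.conf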
612,549 | I'm trying to edit fasta headers of multiple files, in order to remove a forward slash and everything after it (as long as 'everything after it' is equal or less than 10 characters). Header lines are marked by a '>'. for i in ./*.fa;do sed -r 's/(>.*)\/.\{,10\}\n/\1\n/' "$i"; done I've also tried for i in ./*.fa;do sed -r 's/(>.*)\/.{,10}\n/\1\n/' "$i"; done but it doesn't seem to be any better. My hunch is that it's the {,10} quantifier which breaks things. I'm not sure though. Help would be much appreciated! For example, if the following was in a file: >header1_some_extra_data_here/1-1000ATGCGGGTACCCCA>code/header2_some_extra_dataAGGTCCCCGGGAAAAA I'd like the following to be the output: >header1_some_extra_data_hereATGCGGGTACCCCA>code/header2_some_extra_dataAGGTCCCCGGGAAAAA | Your sed substitutions will not work as expected because you'll never be able to match a newline in the input data. This is because sed reads your file line by line, i.e. with the newlines as delimiters, and the expression(s) are applied to the lines individually, without the delimiting newlines. Instead, changing your code slightly: for fasta in ./*.fa; do sed 's;^\(>.*\)/.\{0,10\}$;\1;' "$fasta"done The few changes I've done are: Use ; as the delimiter for the s/// command instead of the default / . This allows us to not escape the / in the pattern. Almost any character may be used as the delimiter, but one should probably pick one that does not occur in the pattern or in the replacement text. Use only the standard basic regular expression syntax. In your pattern, (...) is extended regular expression syntax and \{...\} is basic regular expression syntax. I settled on using the basic syntax for portability. This also means dropping the -r option which enables the extended syntax in GNU sed . Anchor the pattern to the start and end of the line with ^ and $ respectively. Don't try to insert a newline in the replacement bit. An alternative and shorter sed expression would be sed '/^>/s;/.\{0,10\}$;;' This applies a substitution to all lines that start with the > character ( /^>/ acts as the "address" for the subsequent s/// command). The substitution simply deletes the / and the bit after it to the end of the line if that bit is 10 characters or fewer long. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/612549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/377508/"
]
} |
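To sanity-check the sed expression from the answer above, one can rebuild the question's sample data and run it; the file name test.fa is illustrative only, not from the post.
# Recreate the question's sample FASTA data in a throwaway file.
printf '%s\n' '>header1_some_extra_data_here/1-1000' 'ATGCGGGTACCCCA' \
    '>code/header2_some_extra_data' 'AGGTCCCCGGGAAAAA' > test.fa
# Strip "/..." (10 characters or fewer) from the end of header lines only.
sed 's;^\(>.*\)/.\{0,10\}$;\1;' test.fa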
612,568 | I'd like to inline the following 2 commands: big_query_that_returns_text > in.txt$ printf '%s\n' "foo" "bar" | grep -f /dev/stdin in.txt that do work by finding foo and bar in in.txt but when I try to printf '%s\n' "foo" "bar" | grep -f /dev/stdin big_query_that_returns_text I receive zsh: argument list too long: grep I also tried var=`big_query_that_returns_text`printf '%s\n' "foo" "bar" | grep -f /dev/stdin $varprintf '%s\n' "foo" "bar" | grep -f /dev/stdin "$var" but I receive the same error. | This is a place for a process substitution : it's a block of code that acts like a file Pipe the big query results to grep's stdin big_query_that_returns_text | grep -f <(printf '%s\n' "foo" "bar") If the command to produce "foo" and "bar" is more complicated, you can help readability with arbitrary newlines inside the process substitution: big_query_that_returns_text \| grep -f <( printf '%s\n' "foo" "bar" ) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/612568",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/435266/"
]
} |
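For shells without <( ... ) process substitution (plain POSIX sh, for instance), a temporary pattern file gives the same result. This is only a sketch; patterns.txt and big_query_that_returns_text are illustrative names.
# Write the fixed strings to a pattern file, then let grep read it with -f.
printf '%s\n' "foo" "bar" > patterns.txt
big_query_that_returns_text | grep -f patterns.txt
rm -f patterns.txt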
612,604 | I have a file that is owned and written by root user. I have another user called systemuser that we have given sudo privileges for every action. I'm able to perform every action by systemuser just like how i do for root user. But I'm trying to null a file owned and written by root and I get permission denied error. >api-server-out-0.log-bash: api-server-out-0.log: Permission denied$ sudo >api-server-out-0.log-bash: api-server-out-0.log: Permission denied Note: a process running with root user is currently writing to this log file. Can you please suggest how can I using `systemuser ? | sudo > api-server-out-0.log won't work. What it actually does, it starts sudo without any parameters and tries to redirect its output under your user account to the specified file which of course will not work since the file is owned by root. What you really want to do is something like sudo dd if=/dev/null of=api-server-out-0.log# orsudo truncate -s 0 api-server-out-0.log# orsudo sh -c 'echo -n "" > api-server-out-0.log'# orsudo sh -c ': > api-server-out-0.log' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/612604",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/392596/"
]
} |
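A further variant, not mentioned in the answer but commonly used for the same problem, keeps the redirection on the unprivileged side and lets tee(1) do the writing under sudo; the log file name is simply the one from the question.
# tee opens the file as root and truncates it; ":" supplies empty input.
: | sudo tee api-server-out-0.log > /dev/null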
612,611 | Is it possible to make a function like function doStuffAt { cd $1 # do stuff} but make it so invoking that function doesn't actually change my pwd, it just changes it for duration of the function? I know I can save the pwd and set it back at the end, but I'm hoping there's a way to just make it happen locally and not have to worry about that. | Yes. Just make the function run its commands in a ( ) subshell instead of a { } group command: doStuffAt() ( cd -- "$1" || exit # the subshell if cd failed. # do stuff) The parentheses ( ( ) ) open a new subshell that will inherit the environment of its parent. The subshell will exit as soon as the commands running it it are done, returning you to the parent shell and the cd will only affect the subshell, therefore your PWD will remain unchanged. Note that the subshell will also copy all shell variables, so you cannot pass information back from the subshell function to the main script via global variables. For more on subshells, have a look at man bash : (list) list is executed in a subshell environment (see COMMANDEXECUTION ENVIRONMENT below). Variable assignments and builtincommands that affectthe shell's environment do not remain in effect after the command completes. The return status is the exit status of list. Compare to: { list; } list is simply executed in the current shell environment. list must be terminated with a newline or semicolon. This is known as a group command.The return status is the exit status of list. Note that unlike the metacharacters ( and ), { and } are reserved words and must occur where a reserved word is permitted to be recognized. Since they do not cause a word break, they must be separated from list by whitespace or another shellmetacharacter. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/612611",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/433825/"
]
} |
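A minimal demonstration of the behaviour described above, assuming /tmp exists as the example target directory:
doStuffAt() (
    cd -- "$1" || exit   # exit only the subshell if cd fails
    pwd                  # runs inside "$1"
)
cd "$HOME"
doStuffAt /tmp           # prints /tmp
pwd                      # still prints the home directory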
612,680 | I have a text file A which contains line numbers which I want to remove from text file B . For example, file A.txt contains lines 145 and file B.txt contains lines ABCDE The resulting file should be: BC Of course, this can be done manually with sed '1d;4d;5d' B.txt but I wonder how to do it without specifying line numbers manually. | You can use awk as well: awk 'NR==FNR { nums[$0]; next } !(FNR in nums)' linenum infile in specific case when 'linenum' file might empty, awk will skip it, so it won't print whole 'infile' lines then, to fix that, use below command: awk 'NR==FNR && FILENAME==ARGV[1]{ nums[$0]; next } !(FNR in nums)' linenum infile or even better (thanks to Stéphane Chazelas ): awk '!firstfile_proceed { nums[$0]; next } !(FNR in nums)' linenum firstfile_proceed=1 infile | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/612680",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77744/"
]
} |
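A tiny end-to-end check of the awk approach, recreating A.txt and B.txt from the question:
printf '%s\n' 1 4 5 > A.txt          # line numbers to drop
printf '%s\n' A B C D E > B.txt      # data file
# While reading the first file, remember the numbers; then print only
# lines of the second file whose line number was not remembered.
awk 'NR==FNR { nums[$0]; next } !(FNR in nums)' A.txt B.txt   # -> B, C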
612,905 | To backup a snapshot of my work, I run a command like tar -czf work.tgz work to create a gzipped tar file, which I can then drop in cloud storage. However, I have just noticed that gzip has a 4 GB size limit, and my work.tgz file is more than 4 GB. Despite that, if I create a gzip tar file on my current computer (running Mac OS X 10.15.4, gzip version is called Apple gzip 287.100.2) I can successfully retrieve it. So gunzip works on a >4GB in my particular case. But I want to be able to create and read these large gzip files on either Mac OS X or Linux, and possibly other systems in the future. My question is: will I be able to untar/gunzip large files anywhere? In other words, how portable is a gzip file which is more than 4 GB in size? Does it matter if I create it on Mac OS, Linux, or something else? A bit of online reading suggests gzip will successfully gzip/gunzip a larger file, but will not correctly record the uncompressed size, because the size is stored as a 32 bit integer. Is that all the limit is? | I have just noticed that gzip has a 4 GB size limit More accurately, the gzip format can’t correctly store uncompressed file sizes over 4GiB; it stores the lower 32 bits of the uncompressed size, and gzip -l misleadingly presents that as the size of the original data. The result is that, up to gzip 1.11 included, gzip -l won’t show the right size for any compressed file whose original size is over 4GiB. Apart from that, there is no limit due to gzip itself, and gzip ped files over 4GiB are portable. The format is specified by RFC 1952 and support for it is widely available. The confusion over the information presented by gzip -l has been fixed in gzip 1.12 ; gzip -l now decompresses the data to determine the real size of the original data, instead of showing the stored size. Will I be able to untar/gunzip large files anywhere? Anywhere that can handle large files, and where spec-compliant implementations of tar and gunzip are available. In other words, how portable is a gzip file which is more than 4 GB in size? The gzip format itself is portable, and gzip files are also portable, regardless of the size of the data they contain. Does it matter if I create it on Mac OS, Linux, or something else? No, a gzip file created on any platform can be uncompressed on any other platform with the required capabilities (in particular, the ability to store large files, in the context of this question). See also Compression Utility Max Files Size Limit | Unix/Linux . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/612905",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/435807/"
]
} |
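Given the 32-bit size field discussed above, a reasonable habit is to verify the stream and measure the real uncompressed size directly rather than trusting gzip -l; this sketch just reuses the work.tgz name from the question.
tar -czf work.tgz work
gzip -t work.tgz && echo "gzip stream OK"   # integrity check
gzip -dc work.tgz | wc -c                   # true uncompressed size, even past 4 GiB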
613,231 | Heyo! I'm currently working on a non-lfs system from scratch with busybox as the star. Now, my login says: (none) login: Hence, my hostname is broken. hostname brings me (none) too. The guide I was following told me to throw the hostname to /etc/HOSTNAME . I've also tried /etc/hostname . No matter what I do, hostname returns (none) - unless I run hostname <thename> or hostname -F /etc/hostname . Now obviously, I don't want this to be done every time somebody freshly installed the distro -- so what is the real default file, if not /etc/hostname ? Thanks in advance! | The hostname commands in common toolsets, including BusyBox, do not fall back to files when querying the hostname.They report solely what the kernel returns to them as the hostname from a system call, which the kernel initializes to a string such as "(none)", changeable by reconfiguring and rebuilding the kernel.(In systemd terminology this is the dynamic hostname , a.k.a. transient hostname ; the one that is actually reported by Linux, the kernel.)There is no "default file". There's usually a single-shot service that runs at system startup, fairly early on, that goes looking in these various files, pulls out the hostname, and initializes the kernel hostname with it.(In systemd terminology this configuration string is the static hostname .)For example: In my toolset I provide an "early" hostname service that runs the toolset's set-dynamic-hostname command after local filesystem mounts and before user login services. The work is divided into stuff that is done (only) when one makes a configuration change, and stuff that is done at (every) system bootstrap: The external configuration import mechanism reads /etc/hostname and /etc/HOSTNAME , amongst other sources (since different operating systems configure this in different ways), and makes an amalgamated rc.conf . The external configuration import mechanism uses the amalgamated rc.conf to configure this service's hostname environment variable. When the service runs, set-dynamic-hostname doesn't need to care about all of the configuration source possibilities and simply takes the environment variable, from the environment configured for the service, and sets the dynamic hostname from it. In systemd this is an initialization action that is hardwired into the code of systemd itself, that runs before service management is even started up. The systemd program itself goes and reads /etc/hostname (and also /proc/cmdline , but not /etc/HOSTNAME nor /etc/default/hostname nor /etc/sysconfig/network ) and passes that to the kernel. In Void Linux there is a startup shell script that reads the static hostname from (only) /etc/hostname , with a fallback to the shell variable read from rc.conf , and sets the dynamic hostname from its value. If you are building a system "from scratch", then you'll have to make a service that does the equivalent.The BusyBox and ToyBox tools for setting the hostname from a file are hostname -F "${filename}" , so you'll have to make a service that runs that command against /etc/hostname or some such file. BusyBox comes with runit's service management toolset, and a simple runit service would be something along the lines of: #!/bin/sh -eexec 2>&1exec hostname -F /etc/hostname Further reading Lennart Poettering et al. (2016). hostnamectl . systemd manual pages. Freedesktop.org. Jonathan de Boyne Pollard (2017). " set-dynamic-hostname ". User commands manual . nosh toolset. Softwares. Jonathan de Boyne Pollard (2017). " rc.conf amalgamation ". nosh Guide . 
Softwares. Jonathan de Boyne Pollard (2015). " external formats ". nosh Guide . Softwares. Rob Landley. hostname . Toybox command list . landley.net. https://unix.stackexchange.com/a/12832/5132 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/613231",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/382366/"
]
} |
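One way to install the runit service shown in the answer on a BusyBox system; the service directory path is an assumption, so adjust it to wherever your runsvdir scans.
mkdir -p /etc/service/sethostname
cat > /etc/service/sethostname/run <<'EOF'
#!/bin/sh -e
exec 2>&1
exec hostname -F /etc/hostname
EOF
chmod +x /etc/service/sethostname/run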
613,352 | I have a file with values written as ranges, and I need to "unmap" them to a plain list. How can this be achieved effectively? Example:
141540000,141569999,1
147280000,147289999,0
The first column is the range start value, the second value is the range end, and the third value is some data which corresponds to each number in the range. Example result I want to achieve:
141540000, 1
141540001, 1
141540002, 1
...
141569998, 1
141569999, 1
147280000, 0
147280001, 0
...
147289999, 0
I suppose the best approach is to use something like sed or awk, but I don't know how to approach a solution. | awk -F, '{for (i=$1;i<=$2;i++) print i ", " $3}' file
For every line we do the for loop and print the number and the last field. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/613352",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/424907/"
]
} |
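Round-trip check of the awk one-liner against the sample ranges from the question; ranges.csv is an illustrative file name.
printf '%s\n' '141540000,141569999,1' '147280000,147289999,0' > ranges.csv
awk -F, '{for (i=$1;i<=$2;i++) print i ", " $3}' ranges.csv | head -n 3
awk -F, '{for (i=$1;i<=$2;i++) print i ", " $3}' ranges.csv | wc -l   # 40000 expanded lines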
613,436 | After following the recommendations on the debian Wiki, adding the i386 architecture and installing steam from apt, still missing libGL.so.1: me@hostname:~$sudo apt-get install --reinstall libgl1-mesa-glx:i386Reading package lists... DoneBuilding dependency tree Reading state information... Done0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 1 not upgraded.Need to get 0 B/50.3 kB of archives.After this operation, 0 B of additional disk space will be used.(Reading database ... 185832 files and directories currently installed.)Preparing to unpack .../libgl1-mesa-glx_20.1.9-1_i386.deb ...Unpacking libgl1-mesa-glx:i386 (20.1.9-1) over (20.1.9-1) ...Setting up libgl1-mesa-glx:i386 (20.1.9-1) ...me@hostname:~$sudo apt-get install --reinstall steamReading package lists... DoneBuilding dependency tree Reading state information... Done0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 1 not upgraded.Need to get 0 B/9,554 B of archives.After this operation, 0 B of additional disk space will be used.(Reading database ... 185832 files and directories currently installed.)Preparing to unpack .../steam_1%3a1.0.0.66_i386.deb ...Unpacking steam:i386 (1:1.0.0.66) over (1:1.0.0.66) ...Setting up steam:i386 (1:1.0.0.66) ...me@hostname:~$steam/home/me/.local/share/Steam/steam.sh: line 114: VERSION_ID: unbound variable/home/me/.local/share/Steam/steam.sh: line 114: VERSION_ID: unbound variableRunning Steam on debian 64-bit/home/me/.local/share/Steam/steam.sh: line 114: VERSION_ID: unbound variableSTEAM_RUNTIME is enabled automaticallyPins up-to-date!Error: You are missing the following 32-bit libraries, and Steam may not run:libGL.so.1Steam client's requirements are satisfied/home/me/.local/share/Steam/ubuntu12_32/steam[2020-10-07 20:44:30] Startup - updater built Sep 3 2020 21:18:09Installing breakpad exception handler for appid(steam)/version(1599174997)Looks like steam didn't shutdown cleanly, scheduling immediate update checkInstalling breakpad exception handler for appid(steam)/version(1599174997)[2020-10-07 20:44:30] Checking for update on startup[2020-10-07 20:44:30] Checking for available updates...[2020-10-07 20:44:30] Downloading manifest: client-download.steampowered.com/client/steam_client_ubuntu12Installing breakpad exception handler for appid(steam)/version(1599174997)[2020-10-07 20:44:30] Download skipped: /client/steam_client_ubuntu12 version 1599174997, installed version 1599174997, downloaded version 0[2020-10-07 20:44:30] Nothing to do[2020-10-07 20:44:30] Verifying installation...[2020-10-07 20:44:30] Performing checksum verification of executable files[2020-10-07 20:44:31] Verification completeFailed to load steamui.so - dlerror(): libGL.so.1: wrong ELF class: ELFCLASS64[2020-10-07 20:44:33] ShutdownInstalling breakpad exception handler for appid(steam)/version(1599174997)Installing breakpad exception handler for appid(steam)/version(1599174997) | Anyone else coming across this error, try installing apt install nvidia-driver-libs:i386 this seems to have resolved it for me. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/613436",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6027/"
]
} |
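For completeness, the usual prerequisites on Debian before the package in the answer can be installed; this assumes an NVIDIA card and that the non-free repository is already enabled.
sudo dpkg --add-architecture i386     # enable 32-bit multiarch
sudo apt update
sudo apt install nvidia-driver-libs:i386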
613,613 | The issue that I'm having is that headphones plugged into the 3.5mm jack on the front of my desktop computer are not always detected. I'm running Ubuntu 20.04 on a custom built computer with a B450 Tomahawk motherboard. I'm certain that the issue is with Ubuntu 20.04 because the issue was not occurring (as far as I'm aware) when the same computer was running 18.04. The steps to create my problem are: Be running computer with headphones disconnected using another audio output. Simply plug headphones into computer but headphones don't appear in sound settings. By suspending and unsuspending the computer, the problem is usually fixed and headphones will now appear in sound settings. Some things that I've tried: Performing a fresh install of PulseAudio after removing all configurations. Unmuting the device in alsamixer as in https://askubuntu.com/questions/1230819/how-to-fix-3-5-mm-audio-jack-not-working-after-upgrading-to-20-04 Overriding the headphone jack using hdajackretask as in https://askubuntu.com/questions/818111/ubuntu-16-04-front-headphone-jack-not-detected Any help would be greatly appreciated! | Open your terminal and run these pulseaudio --kill pulseaudio --start This solved my problem in Ubuntu 20.04. But unfortunately there should be automatic detection, which is not working. Slightly disappointed | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/613613",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/397520/"
]
} |
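A small convenience wrapper around the workaround above, for example in ~/.bashrc; the function name is just a suggestion.
fix_audio() {
    pulseaudio --kill      # stop the per-user daemon
    sleep 1                # give it a moment to exit
    pulseaudio --start     # restart it, re-detecting the jack
}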
613,657 | According to the following tutorials https://linuxize.com/post/regular-expressions-in-grep/ \s Match a space. and https://www.guru99.com/linux-regular-expressions.html Some interval regular expressions are: Expression Description {n} Matches the preceding character appearing 'n' times exactly {n,m} Matches the preceding character appearing 'n' times but not more than m {n, } Matches the preceding character only when it appears 'n'times or more Sample file wolf@linux:~$ cat space.txt0space1 spaces2 spaces3 spaces4 spaceswolf@linux:~$ I just want to grep up to 3 spaces only, minimum 1 space, maximum 3 spacesUnfortunately, it doesn't really work as expected. wolf@linux:~$ cat space.txt | grep -P '\s{1,3}'1 spaces2 spaces3 spaces4 spaceswolf@linux:~$ wolf@linux:~$ cat space.txt | grep -P '\s{3}'3 spaces4 spaceswolf@linux:~$ wolf@linux:~$ cat space.txt | grep -P '\s{3,3}'3 spaces4 spaceswolf@linux:~$ wolf@linux:~$ cat space.txt | grep -P '\s{0,3}'0space1 spaces2 spaces3 spaces4 spaceswolf@linux:~$ Desired Output wolf@linux:~$ cat space.txt | grep -P '\s{0,3}' <- need to fix it here1 spaces2 spaces3 spaceswolf@linux:~$ | you need: grep -P '\S\s{1,3}\S' infile \s matches a whitespace-character, not only a space. \S matches a non-whitespace-character in your attempt, you are not limiting that before &after your matches should not be a whitespace. to filter on space only and avoid using PCRE, you can do: grep '[^ ] \{1,3\}[^ ]' infile or to work on lines having leading/trailing 1~3spaces: grep '\([^ ]\|^\) \{1,3\}\([^ ]\|$\)' infile input data ( cat -e infile ): 0space$1 spaces$2 spaces$3 spaces$4 spaces$ 3spaces$ 4space$3spaces $4spaces $ output: 1 spaces$2 spaces$3 spaces$ 3spaces$3spaces $ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/613657",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/409008/"
]
} |
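Reproducing the desired output with the command from the answer, after rebuilding space.txt from the question's sample file:
printf '%s\n' '0space' '1 spaces' '2  spaces' '3   spaces' '4    spaces' > space.txt
grep -P '\S\s{1,3}\S' space.txt    # prints only the 1-, 2- and 3-space lines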
613,793 | So I have a text file from which I want to remove everything up to the first colon (including the colon). So for example, if this is the input:
0000007ba9ec6950086ce79a8f3a389db4235830:9515rfsvk
000000da2a12da3fbe01a95bddb8ee183c62b94d:letmein2x
000000edf3179a1cf4c354471a897ab7f420bd52:heychudi:rbhai
000000f636f0d7cbc963a62f3a1bc87c9c985a04:cornetti
0000010a15f5b9315ef8e113f139fa413d1f2eb2:3648067PY128
then this should be the output:
9515rfsvk
letmein2x
heychudi:rbhai
cornetti
3648067PY128
Note that the second colon in line 3 remains; only the part from the start of each line to (and including) the first colon should be removed. Is there a quick way to do this with grep or awk? | With cut:
cut -d: -f2- file
-d sets the separator and -f2- means from the second to the last field. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/613793",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260724/"
]
} |
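Spot check of the cut command from the answer; hashes.txt is an illustrative name for the question's input, and the sed line is an equivalent alternative, not something from the post.
cut -d: -f2- hashes.txt          # keep everything after the first ":"
sed 's/^[^:]*://' hashes.txt     # same result without cut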