source_id | question | response | metadata
---|---|---|---|
448,521 | From the bash manual, about tilde expansion: If a word begins with an unquoted tilde character (‘~’), all of the characters up to the first unquoted slash (or all characters, if there is no unquoted slash) are considered a tilde-prefix. I was wondering why ~ is recognized as a tilde-prefix in $ mypath=/program_files:~/home/t (running $ echo $mypath afterwards prints /program_files:/home/t/home/t). What words is mypath=/program_files:~/home/t split into by the lexer of bash? Is ~/home/t recognized exactly as a word? What word separators does the lexer of bash use to break a command into words? Are : and = word separators? Are they also words? Thanks. This originated from the following statement, which I can't understand, in https://unix.stackexchange.com/a/448469/674 : The tilde inside a PATH string is not understood. This is why the POSIX standard requires to expand tilde sequences after a colon in the command line when a shell macro is assigned. | This isn’t the result of word splitting (more accurately, token splitting), it’s the result of tilde expansion in variable assignments: Each variable assignment is checked for unquoted tilde-prefixes immediately following a ‘:’ or the first ‘=’. In these cases, tilde expansion is also performed. When it splits a command into tokens, the word separators bash uses are its metacharacters: A character that, when unquoted, separates words. A metacharacter is a space, tab, newline, or one of the following characters: ‘|’, ‘&’, ‘;’, ‘(’, ‘)’, ‘<’, or ‘>’. mypath=/program_files:~/home/t is a single token from bash’s perspective. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/448521",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
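
A quick way to see the rule quoted in that answer in action: the same `~` is expanded when it follows the `=` or a `:` inside a variable assignment, but not inside an ordinary command argument. A small bash demo (the paths are only illustrative, and the expanded output depends on your `$HOME`):

```bash
#!/bin/bash
# Tilde expansion inside an assignment: performed after '=' and after each ':'
mypath=~/bin:/opt:~/tmp
echo "$mypath"        # e.g. /home/you/bin:/opt:/home/you/tmp

# The same text as a plain command argument is left alone after the ':'
echo a:~/bin          # prints literally: a:~/bin

# Quoting any part of the tilde-prefix suppresses the expansion, even in assignments
mypath="~/bin":/opt
echo "$mypath"        # ~/bin:/opt
```

The question's mypath=/program_files:~/home/t falls under the first case, which is why the ~ after the colon gets expanded.
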
448,595 | I need to install old Debian 5 on my system in order to run some tests. The installation complains regarding bad archive mirror on network. How to solve that problem? Does it means that nobody mirrors the outdated distribution? | At the configure the package manager step you should select (in top of the country mirror list): enter information manually Then select a debian mirror from here , for example: archive.debian.org Then the Debian mirror directory: /debian/ You should ignore the next error saying : security.debian.org couldn't be accessed because there is no security updates for debian Lenny. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/448595",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37277/"
]
} |
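
For a Lenny system that is already installed (rather than the installer prompt described above), the same fix amounts to pointing apt at the archive host. A sketch of the change; the mirror and directory are the ones named in the answer, while the component list and the validity workaround are assumptions about a plain Debian 5 setup:

```sh
# Run as root; replaces the dead mirror entries.
cat > /etc/apt/sources.list <<'EOF'
# Debian 5 (lenny) is only available from the archive; it receives no security updates
deb http://archive.debian.org/debian/ lenny main contrib non-free
EOF

apt-get update
# If apt refuses because the archived Release files have expired, newer apt
# versions accept:  apt-get -o Acquire::Check-Valid-Until=false update
```
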
448,633 | What is the key denoted by ^@ when typed in the terminal? My system is getting spammed by this key, so I have to disable it. | ^@ is not a key, it's the representation of a control character. In that case the NUL character, the one with byte value 0. If n is the byte value of X, then the byte value of ^X will be n ^ 0x40 . You can tell the byte value of X with: printf X | od -An -tu1 or (for single byte characters): printf '%d\n' "'X" So here: $ printf '%s\n' "'@"64$ echo "$((64 ^ 0x40))"0 For ^? : $ printf '%s\n' "'?"63$ echo "$((63 ^ 0x40))"127 (that's the DEL character). Depending on the terminal, you may be able to enter it by pressing Ctrl+Space or Ctrl+@ . On my UK keyboard in xterm on Debian, I get it on Ctrl+2 (shift 2 is " on a UK keyboard, but @ on a US keyboard). The NUL character is ignored by terminals and terminal emulators. It's a padding character which in the olden days would have been used by applications to let give the terminal time between two other control characters when there was no flow control. You'd see that ^@ in a terminal in applications like vim that choose it as the visual representation of a NUL. You would also typically see it as the echo of a NUL character you enter on input. Either by the terminal driver itself when the terminal line discipline is in icanon mode and the echoctl parameter is enabled (generally on by default, see stty -a ), or by line editors in applications (like readline used by bash ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/448633",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13394/"
]
} |
448,642 | From https://unix.stackexchange.com/a/156010/674 Note that the second sh above goes into the inline script's $0 . You should use something relevant there (like sh or find-sh ), not things like _ , - , -- or the empty string as that is used for the shell's error messages: $ find . -name accept_ra -exec sh -c 'echo 0 > "$1"' inline-sh {} \;inline-sh: ./accept_ra: Permission denied What does " _ , - , -- or the empty string is used for the shell's error messages" mean? Why does using inline-sh not work in the example, given that inline-sh is not _ , - , -- or the empty string? Thanks. | The subject of “is used for the shell’s error messages” is “ $0 ”, not “ _ , - , -- or the empty string”. The value given to $0 is used for error messages; so you shouldn’t specify a meaningless value for $0 , otherwise you’ll end up with weird error messages. It might make more sense as Note that the second sh above goes into the inline script's $0 . You should use something relevant there (like sh or find-sh ), not things like _ , - , -- or the empty string, as the value in $0 is used for the shell's error messages: inline-sh does work in the example: it’s used in the error message, which is the whole point of the example. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/448642",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
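
A self-contained way to watch the `$0` behaviour described above, without needing a file that triggers a permission error. Here `find-sh` is just an arbitrary label, and the exact error wording depends on which shell `sh` is:

```sh
# The word after the inline script becomes $0:
sh -c 'echo "this inline script is called: $0"' find-sh
# -> this inline script is called: find-sh

# Shell-generated errors are prefixed with that name, so they can be traced back:
sh -c 'echo hello > /no/such/dir/file' find-sh
# -> find-sh: ... /no/such/dir/file: No such file or directory

# With an empty string for $0 the prefix disappears, which is exactly the problem:
sh -c 'echo hello > /no/such/dir/file' ''
```
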
448,650 | I am writing my first code in bash. I am stuck from 2 hours. This is my code: #!/bin/bashdeclare -i l=0declare -i d=0declare -i s1=0declare -i s2=0declare -i t=0declare -i o=0declare -i p=0for i in demo_room/subject_1/?*do if [ "${i:0:1}" = "l" ]; then ((l++)); echo "l=$l" python motempl_n.py $i $l elif [ "${i:0:1}" = "d" ]; then ((d++)); echo "d=$d" python motempl_n.py $i $d elif [ "${i:0:1}" = "o" ]; then o=$((o+1)); echo "o=$o" python motempl_n.py $i $o elif [ "${i:0:1}" = "p" ]; then p=$((p+1)); python motempl_n.py $i $p elif [ "${i:0:1}" = "t" ]; then t=$((t+1)); python motempl_n.py $i $t elif [ "${i:0:7}" = "slide_1" ]; then s1=$((s1+1)); python motempl_n.py $i $s1 #elif [ "${i:0:7}" == 'slide_2' ] else s2=$((s2+1)); python motempl_n.py $i $s2 fi done So I am having a folder demo_room/subject_1 . In this folder I have 140 avi videos their names are: 20 videos have name: dislike_01 to dislike_20 20 videos have name: like_01 to like_20 20 videos have name: ok_01 to ok_20 20 videos have name: point_01 to point_20 20 videos have name: slide_1_01 to slide_1_20 20 videos have name: slide_2_01 to slide_2_20 20 videos have name: take_a_picture_01 to take_a_picture_1_20 What I want to do is first find the class of the input video then give its occurrence as input to python file. First 20 videos of subject_1 folder are dislike one so this code works fine but the 21st video is like_01 but the parameter it passes to the python code is 21. But it should be 1 because this is first video of like class in the for loop. And each time it prints value of $d . It means each time it goes in 2nd if condition . In the python code I can verify that the name of the video is like_01 but second value passed is 21 . Why? This happens for all the 140 videos. | Your $i will contain the full path ( demo_room/subject_1/file ), so will always start with d . You could do something like: case "${i##*/}" in (d*) ...;; (l*) ...;; (slide1_*) ...;; ...esac Where ${i##*/} is $i with the leading part matching */ removed. Also beware that parameter expansions need to be quoted in sh/bash: python motempl_n.py "$i" "$s2" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/448650",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255251/"
]
} |
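
Putting that fix together with the rest of the question, a trimmed-down version of the script could look roughly like the sketch below. It keeps the question's directory layout, per-class counters and python call, but matches on the file name with the leading directories stripped; the class prefixes are taken from the listing in the question:

```bash
#!/bin/bash
declare -i like=0 dislike=0 ok=0 point=0 take=0 s1=0 s2=0

for i in demo_room/subject_1/*; do
    case "${i##*/}" in            # strip "demo_room/subject_1/" before matching
        like_*)     ((like++));    n=$like ;;
        dislike_*)  ((dislike++)); n=$dislike ;;
        ok_*)       ((ok++));      n=$ok ;;
        point_*)    ((point++));   n=$point ;;
        take_*)     ((take++));    n=$take ;;
        slide_1_*)  ((s1++));      n=$s1 ;;
        slide_2_*)  ((s2++));      n=$s2 ;;
        *)          continue ;;    # skip anything unexpected
    esac
    python motempl_n.py "$i" "$n"
done
```
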
448,677 | $ lswkhtmltox_0.12.5-0.20180604.140.rc~6f77c46~bionic_amd64.deb$ sudo apt install wkhtmltox_0.12.5-0.20180604.140.rc~6f77c46~bionic_amd64.debReading package lists... DoneBuilding dependency tree Reading state information... DoneE: Unable to locate package wkhtmltox_0.12.5-0.20180604.140.rc~6f77c46~bionic_amd64.debE: Couldn't find any package by glob 'wkhtmltox_0.12.5-0.20180604.140.rc~6f77c46~bionic_amd64.deb'E: Couldn't find any package by regex 'wkhtmltox_0.12.5-0.20180604.140.rc~6f77c46~bionic_amd64.deb' I was wondering why the deb file can't be located? Is it because of sudo or apt install ? Thanks. Note that $ sudo apt install ./wkhtmltox_0.12.5-0.20180604.140.rc~6f77c46~bionic_amd64.deb works, but I was asking for the reason of the previous failure. Related How to install a deb file, by dpkg -i or by apt? | Since version 1.1~exp1, apt and apt-get support installing from package files accessible via the file system, and not just from repositories. However, in order to preserve backwards compatibility, the feature only works for package specifiers which are unmistakably files, i.e. which contain / . Anything else is processed as a package name rather than a package file , using the pre-existing mechanisms. Thus sudo apt install wkhtmltox_0.12.5-0.20180604.140.rc~6f77c46~bionic_amd64.deb is handled as a request to install the package named “wkhtmltox_0.12.5-0.20180604.140.rc~6f77c46~bionic_amd64.deb”, and apt goes looking for that in its repositories and fails. But sudo apt install ./wkhtmltox_0.12.5-0.20180604.140.rc~6f77c46~bionic_amd64.deb is handled as a request to install the package contained in the file named “./wkhtmltox_0.12.5-0.20180604.140.rc~6f77c46~bionic_amd64.deb” (along with its dependencies, if necessary). This also works for absolute paths. I can’t find any trace of this in the apt documentation though, apart from the brief mention in the changelog : add support for "apt-get install foo_1.0_all.deb" There is a bug requesting that this feature be documented . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/448677",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
448,682 | Currently, I am trying to parse out all the files inside a directory. I have a find function, but it appears to not be able to parse in file directories with whitespaces into a loop. Here, "$DIR" is the directory I wish to search in. Does anyone have any ideas how I can modify it to work? thanks. for file in $(find "$DIR" -type f)do echo "$file"done | find ... | xargs ... and find ... -exec ... are both better options than this: to use a shell loop to iterate over find results correctly, we must use a while read loop: while IFS= read -d '' -r filename; do echo "$filename"done < <(find "$dir" -type f -print0) A lot to unpack there: we use a Process Substitution to execute the find command and be able to read from the results like it's a file. To read a line of input verbatim, the bash idiom is IFS= read -r line . That allows arbitrary whitespace and backslashes to be read into the variable. read -d '' uses the null byte (as produced by -print0 ) as the end-of-line character instead of newline. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/448682",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250695/"
]
} |
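
For completeness, the two alternatives named at the start of that answer look like this when one command per file is enough; the printf is only a stand-in for real per-file work:

```sh
# Let find run the command itself: no shell loop, no word-splitting worries
find "$dir" -type f -exec printf '%s\n' {} \;

# Or hand a NUL-separated list to xargs (same -print0 extension as above)
find "$dir" -type f -print0 | xargs -0 -n1 printf '%s\n'
```
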
448,690 | I have a file containing a list of DNA sequence names and another containing DNA sequences. They look like this: $ cat list.txtseq1seq3$ cat sequences.txt>seq1AAAAAAAAAA>seq2CCCCCCCCCCCCCCC>seq3TTTTT I want to retrieve only seq1 and seq2 (listed on list.txt) and redirect them to individual files. As you can see, each sequence has different number of lines hence I cannot just say to 'sed' to pick up N number of lines after each match. I want my output like this: $ lsseq1.txtseq2.txt$ cat seq1.txt>seq1AAAAAAAAAA$ cat seq2.txt>seq3TTTTT I am using this: while read listdonames=$(echo $list) sed '/$list/,/>/{/>/q}' "$PWD/sequences.txt" > "$names".dnadone < list.txt However, the output is: $ lsseq1.txtseq2.txt$ cat seq1.txt>seq1AAAAAAAAAA>seq3TTTTT$ cat seq2.txt>seq1AAAAAAAAAA>seq3TTTTT The script is creating individual files but all contain all the matches, not individuals as I need. Thanks in advance. | find ... | xargs ... and find ... -exec ... are both better options than this: to use a shell loop to iterate over find results correctly, we must use a while read loop: while IFS= read -d '' -r filename; do echo "$filename"done < <(find "$dir" -type f -print0) A lot to unpack there: we use a Process Substitution to execute the find command and be able to read from the results like it's a file. To read a line of input verbatim, the bash idiom is IFS= read -r line . That allows arbitrary whitespace and backslashes to be read into the variable. read -d '' uses the null byte (as produced by -print0 ) as the end-of-line character instead of newline. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/448690",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/294643/"
]
} |
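
For the sequence-extraction task in the question above, a minimal awk sketch: it writes each name listed in list.txt into its own file, assuming the FASTA-style layout shown in the question, and reuses the question's `"$names".dna` naming convention:

```sh
while IFS= read -r name; do
    awk -v header=">$name" '
        $0 == header { keep = 1; print; next }   # header line of the wanted record
        /^>/         { keep = 0 }                # any other header ends it
        keep         { print }
    ' sequences.txt > "$name.dna"
done < list.txt
```
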
448,692 | When I run this script, intended to run until killed... # foo.shwhile true; do sleep 1; done ...I'm not able to find it using ps ax : >./foo.sh// In a separate shell:>ps ax | grep foo.sh21110 pts/3 S+ 0:00 grep --color=auto foo.sh ...but if I just add the common " #! " header to the script... #! /usr/bin/bash# foo.shwhile true; do sleep 1; done ...then the script becomes findable by the same ps command... >./foo.sh// In a separate shell:>ps ax | grep foo.sh21319 pts/43 S+ 0:00 /usr/bin/bash ./foo.sh21324 pts/3 S+ 0:00 grep --color=auto foo.sh Why is this so? This may be a related question: I thought " # " was just a comment prefix, and if so " #! /usr/bin/bash " is itself nothing more than a comment. But does " #! " carry some significance greater than as just a comment? | When the current interactive shell is bash , and you run a script with no #! -line, then bash will run the script. The process will show up in the ps ax output as just bash . $ cat foo.sh# foo.shecho "$BASHPID"while true; do sleep 1; done$ ./foo.sh55411 In another terminal: $ ps -p 55411 PID TT STAT TIME COMMAND55411 p2 SN+ 0:00.07 bash Related: Which shell interpreter runs a script with no shebang? The relevant sections form the bash manual: If this execution fails because the file is not in executable format, and the file is not a directory, it is assumed to be a shell script , a file containing shell commands. A subshell is spawned to execute it. This subshell reinitializes itself, so that the effect is as if a new shell had been invoked to handle the script , with the exception that the locations of commands remembered by the parent (see hash below under SHELL BUILTIN COMMANDS) are retained by the child. If the program is a file beginning with #! , the remainder of the first line specifies an interpreter for the program. The shell executes the specified interpreter on operating systems that do not handle this executable format themselves. [...] This means that running ./foo.sh on the command line, when foo.sh does not have a #! -line, is the same as running the commands in the file in a subshell, i.e. as $ ( echo "$BASHPID"; while true; do sleep 1; done ) With a proper #! -line pointing to e.g. /bin/bash , it is as doing $ /bin/bash foo.sh | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/448692",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/214773/"
]
} |
448,708 | I have a string, for example "Icecream123 AirplaneBCD CompanyTL1 ComputerYU1" Let's say I know that my string will contain for sure the substring IceCream but I don't know what follows it. It might be 123 as in my example or it might be something different. While I can use grep to detect if "Icecream" substring exists in my string with the following command echo $string | grep -oF 'Icecream'; Which will print Icecream I want with a command to get it to print the whole substring, which in my example is Icecream123 Of course what follows Icecream is random and not known beforehand so I can't just do $SUBSTRING=$(echo $string | grep -oF 'Icecream')$SUBSTRINGTRAIL=123echo $SUBSTRING$SUBSTRINGTRAIL | If your grep supports perl compatible regular expressions, you could match non-greedily up to the next word boundary: echo "$string" | grep -oP 'Icecream.*?\b' Otherwise, match the longest sequence of non-blank characters: echo "$string" | grep -o 'Icecream[^[:blank:]]*' Or keep everything in the shell and remove the longest trailing sequence of characters starting with a space: echo "${string%% *}" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/448708",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/294653/"
]
} |
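
The last, pure-shell variant above relies on Icecream123 being the first word of the string. If the known prefix can sit anywhere, a similar parameter-expansion trick still works; a sketch using the question's sample data, rearranged:

```sh
string="AirplaneBCD Icecream123 CompanyTL1 ComputerYU1"
rest=${string#*Icecream}       # drop everything up to and including "Icecream"
echo "Icecream${rest%% *}"     # put the prefix back, cut at the next blank
# -> Icecream123
```
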
448,770 | I wish to clone a large disk (a 500GB SSD, for what it's worth), and I am leaning toward using cat , as suggested by Gilles here . But what gave me pause is that I do not really know what cat does upon read errors. I know how dd behaves in these cases, i.e. the command dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync status=progress does not stop for errors on read, and pads the read error with zeroes (the sync option) so that data stays in sync. Unfortunately, it does so by padding the zeroes at the end of the block to be written, so that a single error in an early 512-byte read messes up the whole 64K of data (even worse with larger, faster block sizes). So I am wondering: can I do better/differently with cat ? Or should I just move on to Clonezilla ? | cat stops if it encounters a read or write error. If you’re concerned there might be unreadable sectors on your source drive, you should look at tools such as ddrescue . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/448770",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49626/"
]
} |
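
A typical ddrescue run for that kind of disk-to-disk copy might look like the following; the device names are placeholders, and the map file is what allows an interrupted or error-ridden copy to be resumed or retried:

```sh
# Debian/Ubuntu package the tool as "gddrescue"; the binary is ddrescue.
# First pass: copy sda to sdb, recording progress and bad areas in rescue.map
ddrescue -f /dev/sda /dev/sdb rescue.map

# Optional later pass: retry only the sectors that failed, up to 3 times each
ddrescue -f -r3 /dev/sda /dev/sdb rescue.map
```
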
448,811 | Is it possible to export a gnome-terminal profile to another computer? I create a terminal profile using edit>preferences and save it as "def". I would like to save the configuration in a file and use it another computer. I try to grep "def" within .config/dconf/ and find Binary file dconf/user matches Is it possible to extract the information from the configuration (specially about the colours, takes a lot of time to find the right colurs) and use them in another computer. I am using Fedora 28 with gnome. 4.16.13-300.fc28.x86_64 , gnome-terminal-3.28.2-2.fc28.x86_64 . | You can use dconf(1) to dump and load the gnome-terminal profiles. I got the basic command usage from this source: https://gist.github.com/reavon/0bbe99150810baa5623e5f601aa93afc To export all of your gnome-terminal profiles from one system, and then load them on another, you would issue the following: source system: $ dconf dump /org/gnome/terminal/legacy/profiles:/ > gnome-terminal-profiles.dconf destination system (after transferring the gnome-terminal-profiles.dconf file): $ dconf load /org/gnome/terminal/legacy/profiles:/ < gnome-terminal-profiles.dconf | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/448811",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145150/"
]
} |
448,880 | I have a makefile like this, that kills a process if it's already running, test: client server pgrep myserver && killall myserver /build/bin/myserver --background /build/bin/myclient --server 127.0.0.1 It works if I have a myserver started previously. When it's not, pgrep myserver just fails with non zero status, and Makefile take it as an error, e.g pgrep myserver && killall myservermake: *** [test] Error 1 Any suggestions? | Don't combine killall with pgrep . They don't use the same matching rules, so what pgrep shows may not be what killall kills. Use pkill , which is exactly the same as pgrep except that it kills the matching processes instead of displaying their PIDs. Beware that if you call both pgrep and pkill , there's a race condition: by the time pkill runs, some processes shown by pgrep may have terminated and some new processes may have started. Unless you care about the process IDs, there's no point in calling pgrep ; you can just call pkill directly. pkill returns the status 1 if it doesn't find any process to kill. Either add - at the beginning of the command, to tell make to ignore this error, or change the command to pkill myserver || true which does exactly the same thing as pkill myserver but always returns a success status. test: client server pkill myserver || true /build/bin/myserver --background /build/bin/myclient --server 127.0.0.1 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/448880",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
448,964 | I have a program that depends on a library linked against libboost 1.67, which is installed on the system. When I launch the program, I get an error that libboost_system.so.1.58 does not exist. LD_PRELOAD and LD_LIBRARY_PATH are unset. Running lddtree does not show this library as a dependency, but ldd does. How can I trace where the library is required from? | If on a GNU system, try running your application with: LD_DEBUG=libs your-application See LD_DEBUG=help for more options or man ld.so. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/448964",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/294838/"
]
} |
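
For the libboost case in the question, the trace can be narrowed down to the interesting library. `your-application` stands for the real binary, and the `libs`/`files` category names are the ones documented in man ld.so:

```sh
# "libs" traces the search for each shared library; adding "files" also reports
# which object asked for it, i.e. who still wants libboost_system.so.1.58
LD_DEBUG=libs,files your-application 2>&1 | grep -i libboost_system
```
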
449,067 | My Centos 7 server doesn't resolve domain names properly. From what I see, in modern Linux systems /etc/resolv.conf is often generated with dhclient , dnsmasq or Network Manager . Thus I have a general theoretical question about network stack in modern Linuxes: Who is responsible for reading /etc/resolv.conf ? What players (services or kernel subsystems) are involved in domain name resolution? SHORT ANSWER: Arch linux manual says that high-level configuration of domain name resolution is done in /etc/nsswitch.conf and relies on Name Service Switch glibc API. glibc uses nss-resolve function for sending DNS requests to DNS servers. Normally on modern CentOS systems nss-resolve relies upon systemd-resolved service. If /etc/resolv.conf was generated by something like dhclient-script , systemd-resolved reads it and works in a compatibility mode, emulating behaviour of older systems like BIND DNS client. | DNS client libraries do. C libraries contain DNS clients that wrap up name-to-address lookups in the DNS protocol and hand them over to proxy DNS servers to do all of the grunt work of query resolution. There are a lot of these DNS clients. The one that is in the main C runtime library of your operating system will very likely be the one from ISC's BIND. But there are a whole load of others from Daniel J. Bernstein's dns library through c-ares to adns. Although several of them contain their own native configuration mechanisms, they generally have a BIND library compatibility mode where they read resolv.conf , which is the configuration file for the ISC's BIND C client library. The NSS is layered on top of this, and is configured by nsswitch.conf . One of the things that NSS lookups can invoke internally is the DNS client, and nsswitch.conf is read by the NSS code in the C library to determine whether and where lookups are handed to the DNS client and how to deal with the various responses. (There is a slight complication to this idea caused by the Name Services Cache Dæmon, nscd. But this simply adds an extra upper-layer client in the C library, speaking an idiosyncratic protocol to a local server, which in its turn acts as a DNS client speaking the DNS protocol to a proxy DNS server. systemd-resolved adds similar complications.) systemd-resolved , NetworkManager , connman , dhcpcd , resolvconf , and others adjust the BIND DNS client configuration file to switch DNS clients to talk to different proxy DNS servers on the fly. This is out of scope for this answer, especially since there are plenty of answers on this WWW site already dealing with the byzantine details that such a mechanism involves. The more traditional way of doing things in the Unix world is to run a proxy DNS server either on the machine itself or on a LAN. Hence what the FreeBSD manual says about normally configured systems, where the default action of the DNS client library in the absence of resolv.conf matches what Unix system administrators normally have, which is a proxy DNS server listening on 127.0.0.1. (The FreeBSD manual for resolv.conf is actually doco that also originates from ISC's BIND, and can of course also be found where the BIND DNS client library has been incorporated into other places such as the GNU C library.) Further reading Daniel J. Bernstein. The dns library . cr.yp.to. Jonathan de Boyne Pollard (2017). What DNS name qualification is . Frequently Given Answers. Jonathan de Boyne Pollard (2004). What DNS query resolution is . Frequently Given Answers. Jonathan de Boyne Pollard (2001). 
The Big Picture for "djbdns" . Frequently Given Answers. Jonathan de Boyne Pollard (2000). "content" and "proxy" DNS servers. Frequently Given Answers. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/449067",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23424/"
]
} |
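
To make the layering concrete: the NSS side of a host lookup is steered by a single line in nsswitch.conf, and the BIND-style DNS client below it by resolv.conf. The entries shown in the comments are only a common default and vary by distribution:

```sh
# Which sources, and in what order, the C library consults for host lookups
grep '^hosts:' /etc/nsswitch.conf
#   hosts: files dns           <- /etc/hosts first, then the DNS client

# Which proxy DNS servers that DNS client will ask
cat /etc/resolv.conf
```
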
449,074 | This- $a=`ls` | do something | etc. | etc. | etc...echo "$a" gives this- file1file2file3 But I want to append something to the end of that variable, so I tried a=${a}\nfile4 But I get this- file1file2file3nfile4 # wrong! | In Bash, you can append to a variable using var+=value . But your problem isn't that, but in generating the newline. The easiest way is to use $'..' quoting, which interprets backslash-escapes like \n for newline: a+=$'\nfile4' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/449074",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291147/"
]
} |
449,224 | Suppose I have two resources, named 0 and 1 , that can only be accessed exclusively. Is there any way to recover the "index" of the "parallel processor" that xargs launches in order to use it as a free mutual exclusion service? E.g., consider the following parallelized computation: $ echo {1..8} | xargs -d " " -P 2 -I {} echo "consuming task {}"consuming task 1consuming task 2consuming task 3consuming task 4consuming task 5consuming task 6consuming task 7consuming task 8 My question is whether there exists a magic word, say index , where the output would look like $ echo {1..8} | xargs -d " " -P 2 -I {} echo "consuming task {} with resource index"consuming task 1 with resource 0consuming task 2 with resource 1consuming task 3 with resource 1consuming task 4 with resource 1consuming task 5 with resource 0consuming task 6 with resource 1consuming task 7 with resource 0consuming task 8 with resource 0 where the only guarantee is that there is only ever at most one process using resource 0 and same for 1 . Basically, I'd like to communicate this index down to the child process that would respect the rule to only use the resource it was told to. Of course, it'd be preferable to extend this to more than two resources. Inspecting the docs, xargs probably can't do this. Is there a minimal equivalent solution? Using/cleaning files as fake locks is not preferable. | If you're using GNU xargs , there's --process-slot-var : --process-slot-var = environment-variable-name Set the environment variable environment-variable-name to a unique value in each running child process. Each value is a decimal integer. Values are reused once child processes exit. This can be used in a rudimentary load distribution scheme, for example. So, for example: ~ echo {1..9} | xargs -n2 -P2 --process-slot-var=index sh -c 'echo "$index" "$@" "$$"' _0 1 2 104751 3 4 104761 5 6 104770 7 8 104781 9 10479 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/449224",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/269078/"
]
} |
449,258 | I am trying to print a random n letter word, where I input n from the command line itself, but for some reason my script is giving me the same answer every time when using the same value for n . #!/bin/bash num=$1egrep "^.{$num}$" /usr/share/dict/words | head -n $RANDOM| tail -n 1 I am calling my script like: $ bash var3.sh 5étude # always the same output when using 5 $ bash var3.sh 3zoo # always the same output when using 3 where var3.sh is the name of my script and 5 is the length of the word I want to print randomly. How do I get it to print a truly random word? | It doesn't. But $RANDOM returns big numbers (between 0 and 32767) which, especially for words of limited lengths, shows the same result, as the head portion probably returns all the results of the grep (for 3, there are only 819 matches in my /usr/share/dict/words ). Better solution might be to shuffle the results: egrep "^.{$num}$" /usr/share/dict/words | sort -R | tail -n 1 where -R means --random-sort (a GNU sort extension). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/449258",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/295079/"
]
} |
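
On systems with GNU coreutils, shuf can do the sampling directly, which avoids sorting the whole match list; this is a variant of the script rather than the answer's exact command:

```bash
#!/bin/bash
num=$1
grep -E "^.{$num}$" /usr/share/dict/words | shuf -n 1
```
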
449,373 | I am curious if there is a script friendly way to compute the equivalent of apt list --upgradeable . That produces a nice output with exactly one upgrade candidate per line, very parseable. BUT, apt also warns: WARNING: apt does not have a stable CLI interface. Use with caution in scripts. So I feel I should use venerable apt-get instead. Unfortunately, the output for that looks something like: apt-get -s --no-download dist-upgradeReading package lists... DoneBuilding dependency treeReading state information... DoneCalculating upgrade... DoneThe following NEW packages will be installed: dbus libdbus-1-3The following packages will be upgraded: bash gcc-8-base gpgv libedit2 libgcc1 libprocps7 libpsl5 libselinux1 libsemanage-common libsemanage1 libsepol1 libsqlite3-0 libstdc++6 perl-base procps publicsuffix rsyslog twigpilot-core18 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.... Which is much less parseable. So I was hoping for some way to get apt-get update to print a more succinct list like apt would. | I don't use Ubuntu regularly but how about this: $ apt-get -s --no-download dist-upgrade -V | grep '=>' | awk '{print$1}' It prints one package per line. As described in man apt-get : -V, --verbose-versions Show full versions for upgraded and installed packages. Configuration Item: APT::Get::Show-Versions. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/449373",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101140/"
]
} |
449,412 | At home I have installed Pi-Hole on a Fedora 28 machine, and it is now working properly serving IPv4 addresses via DHCP, acting as the DNS server, and blocking IPv4 addresses as appropriate. However, it seems to be failing to block IPv6 addresses. In the log I see (for example): 2018-06-12 00:11:15 IPv4 v10.events.data.microsoft.com 192.168.1.79 Pi-holed - Whitelist2018-06-12 00:11:15 IPv6 v10.events.data.microsoft.com 192.168.1.79 OK (forwarded) - Blacklist ...There are a good many other such combinations: IPv4 Pi-holed, IPv6 forwarded at the same timestamp to the same FQDN. I know very little about IPv6 at this moment. These are a few of the gaps in my knowledge that I think are contributing to my issue: How do I handle distributing IPv6 addresses in my home LAN? On Pi-Hole's DHCP page, there's a setting to "Enable IPv6 Support", which I've done. Does this mean my Pi-Hole will now serve up IPv6 addresses? On my router, I have various IPv6 settings: IPv6 (I turned it on), DHCPv6 (also on, but makes no difference if it's off it seems), DHCPv6 Prefix Delegation (on, and unmodifyable when DHCPv6 is off). This may be colliding with PiHole, but, I don't know if I can shut off IPv6 or even DHCPv6 on my router, because from what I've read, the concept of having NAT'ed IPv6 addresses inside the LAN is passe'. All devices get a routable IPv6 address these days because of the large address space. I cannot modify the Upstream DNS servers on my PiHole settings page to include, for example, Google's IPv6 servers. I don't know why. IPv6 testing sites show that I can reach them via IPv6. Ultimately, I'm concerned about two things: I can't change the IPv6 DNS settings, and the logs show that IPv6 entries are forwarded. | I got it working. Here's what I did: When I initially set up my Pi-Hole, I only had IPv4 on my system. Thus Pi-Hole only downloaded IPv4-capable blacklists. So I turned IPv6 on on my home router, and enabled DHCPv6. I turned IPv6 on on my Pi-Hole computer, and rebooted. ip -o addr then showed that I had an IPv6 address. Actually, it has a couple of addresses which I don't understand yet. It still didn't block IPv6 domain names. I went into my computer (command line), and edited /etc/pihole/setupVars.conf . There I inserted my IPv6 address at IPV6_ADDRESS=2600:1700:(etc) I also edited /etc/pihole/pihole-FTL.conf , and added AAAA_QUERY_ANALYSIS=yes . I restarted pihole-FTL with: systemctl restart pihole-FTL I went to the Pi-Hole web gui, and turned on DHCPv6 (SLAAC + RA). I turned on the Google IPv6 DNS checkboxes. I rebooted my system. I downloaded the blacklists again. This time it included IPv6 entries. I enjoyed the Internet again. I'm not against ads. I buy stuff that I've seen in ads. I do, however, object to being chased all over the Internet. I do not concur. And I do object to having my precious bandwidth consumed. It's too much, you advertisers. You've gone over the line and I'll be happy to do what I can in my power to ensure I take back a bit of my online experience. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/449412",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94381/"
]
} |
449,438 | I was going through a Unix shell script where I came across this command: sed 's/[^0-9]*//g' Could someone explain this? | The command sed 's/[^0-9]//g' would act like a filter that only allowed digits to pass through. So would sed 's/[^0-9]*//g' But due to the g at the end, the * is not needed (more at the end about this). The regular expression [^0-9] means "any character that is not a digit", and the sed command s/[^0-9]//g means "replace any non-digit character with nothing, then repeat for as many times as possible on every line of input (i.e. not just the first non-digit on each line)". Example: $ echo '1-2 1-2? Is this mic on? Hello world! It is 2018!' | sed 's/[^0-9]//g'12122018 It is the same as the command tr -dc '0-9\n' which also deletes non-digit in its input (and leaves newlines alone too). The difference between [^0-9] and [^0-9]* is that the former matches exactly one non-digit character while the latter matches zero or more non-digit characters. If you want to delete non-digits , you don't want to match empty strings (the "zero" in "zero or more" above), so it makes more sense to match with [^0-9] than it does to match with [^0-9]* . The g flag at the end of the sed command means "globally", i.e. everywhere on the line, not just the first match. Removing this, you will notice that $ echo '123 testing' | sed 's/[^0-9]*//'123 testing matches the empty space in front of 1 , and replaces nothing. A more visual example of this: $ echo '123 testing' | sed 's/[^0-9]*/(&)/'()123 testing ... and with g at the end: $ echo '123 testing' | sed 's/[^0-9]*/(&)/g'()1()2()3( testing) And then we have $ echo '123 testing' | sed 's/[^0-9]//'123testing which matches and replaces the space, which is a non-digit. A more visual example of that: $ echo '123 testing' | sed 's/[^0-9]/(&)/'123( )testing ... and with g at the end: $ echo '123 testing' | sed 's/[^0-9]/(&)/g'123( )(t)(e)(s)(t)(i)(n)(g) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/449438",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/295234/"
]
} |
449,488 | I'm wondering why it is required to put ~/ before .bashrc when opening the bashrc file. To illustrate: I normally open files on my system as follows: vim filename.extension But while in the /home directory if I do the following: vim .bashrc vim will open a new file called .bashrc In order to open my bashrc file I must do as follows: vim ~/.bashrc Why? My current system is Linux Mint 18.3 | The ~ or ~/ refers to the absolute path of your home directory a.k.a. /home/username . Additionally, if you try cd ~ or cd ~/ they will both do the same thing; the shortest option being simply cd . All three options take you to your home directory. NOT /home . Since .bashrc is located in your home directory, you must specify its location by adding the tilde, which allows you to point to home directory from wherever you are and thus access the .bashrc . Of course, this works for any other files and folders located in your ~ , for example: cd ~/myFolder ~/myScript.sh What you were trying to do is open .bashrc , but since vim checks in your current location if the file already exists or not, it will create a new .bashrc file in your current pwd , since there is no current .bashrc where you were trying to open it. In other words, if you were in /home/username/someFolder/someSubFolder , doing the vim .bashrc command will create a new .bashrc file, since there is no already existing .bashrc and you did not point to the right path, which is /home/username/.bashrc (or ~/.bashrc ). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/449488",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/252422/"
]
} |
449,498 | Is it possible to call a function which is declared below in bash? Example if [ "$input" = "yes" ]; then YES_FUNCTIONelif [ "$input" = "no" ]; then NO_FUNCTIONelse exit 0;fiYES_FUNCTION(){ ..... .....}NO_FUNCTION(){ ..... .....} | Like others have said, you can't do that. But if you want to arrange the code into one file so that the main program is at the top of the file, and other functions are defined below, you can do it by having a separate main function. E.g. #!/bin/shmain() { if [ "$1" = yes ]; then do_task_this else do_task_that fi}do_task_this() { ...} do_task_that() { ...} main "$@"; exit When we call main at the end of file, all functions are already defined. Explicitly passing "$@" to main is required to make the command line arguments of the script visible in the function. The explicit exit on the same line as the call to main is not mandatory, but can be used to prevent a running script from getting messed up if the script file is modified. Without it, the shell would try to continue reading commands from the script file after main returns. (see How to read the whole shell script before executing it? ) | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/449498",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/270935/"
]
} |
449,699 | I want to put a pair of parenthesis around a string using a sed oneliner. This is what I have tried echo 1 | sed -e 's/.*/(\0)/' I expect the letter 1 will be matched by the .* pattern. However the output is just (0) I actually wants to get this outcome: (1) I am using BSD sed on OSX | & is the standard way to substitute the whole pattern match. Some sed implementations like GNU's or busybox' support \0 as an alternative but that's not standard nor portable. $ echo 1 | sed -e 's/.*/(&)/'(1) That command encloses the first (possibly empty) sequence of characters , as many as possible in each line inside parenthesis. That may not enclose the full line for those lines that contain bytes not forming valid characters, in which case you may find that: sed 's/^/(/; s/$/)/' Works more reliably to enclose the full line inside parenthesis. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/449699",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7003/"
]
} |
449,741 | The built-in test and [ utilities have the -nt ("newer than") and -ot ("older than") tests in most shells, even when the shell is running in "POSIX mode" (also true for the external utilities of the same names on the systems that I have access to). These tests are for comparing modification timestamps on two files. Their documented semantics vary slightly across implementations (with regards to what happens if one or the other file exists or not), but they are not included in the POSIX spec for the test utility. They were not carried forward into the test utility when the conditional command was removed from the [KornShell] shell because they have not been included in the test utility built into historical implementations of the sh utility. Assuming I'd like to compare the modification timestamp between files in a /bin/sh shell script and then take action depending on whether one file is newer than the other, as in if [ "$sigfile" -nt "$timestamp" ] || [ "$sigfile.tmp" -nt "$timestamp" ]; then return; fi ... what other utility could I use, apart from make (which would make the rest of the script unwieldy to say the least)? Or should I just assume that nobody is ever going to run the script on a "historical implementation of sh", or resign to writing for a specific shell like bash? | POSIXLY: f1=/path/to/file_1; f2=/path/to/file_2; if [ -n "$(find -L "$f1" -prune -newer "$f2")" ]; then printf '%s is newer than %s\n' "$f1" "$f2"; fi Using absolute paths to the files prevents a wrong result when a filename consists only of newline characters (command substitution strips trailing newlines). If you use relative paths, change the find command to: find -L "$f1" -prune -newer "$f2" -exec echo . \; | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/449741",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116858/"
]
} |
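
That test can be wrapped in a small function so the calling code reads almost like the original `[ ... -nt ... ]` version; a sketch built on the answer's find invocation, assuming both files exist:

```sh
# "Is $1 strictly newer than $2?"  Exit status 0 means yes.
newer_than() {
    [ -n "$(find -L "$1" -prune -newer "$2")" ]
}

# Used the same way as the question's snippet (inside the caller's function):
if newer_than "$sigfile" "$timestamp" || newer_than "$sigfile.tmp" "$timestamp"; then
    return
fi
```
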
449,794 | I'm trying to install the nvidia-driver for Debian. I've read everywhere that the correct solution is to run sudo apt install nvidia-driver and the driver should install itself without problems. However this command leaves me with the output Reading package lists... DoneBuilding dependency tree Reading state information... DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: nvidia-driver : Depends: nvidia-driver-libs (= 375.82-1~deb9u1) but it is not going to be installed Depends: nvidia-driver-bin (= 375.82-1~deb9u1) but it is not going to be installed Depends: xserver-xorg-video-nvidia (= 375.82-1~deb9u1) but it is not going to be installed Depends: nvidia-vdpau-driver (= 375.82-1~deb9u1) but it is not going to be installed Depends: nvidia-alternative (= 375.82-1~deb9u1) Depends: nvidia-kernel-dkms (= 375.82-1~deb9u1) or nvidia-kernel-375.82 Recommends: nvidia-settings (>= 375) but it is not going to be installed Recommends: nvidia-persistencedE: Unable to correct problems, you have held broken packages. I've tried installing the missing dependencies (like sudo apt install nvidia-driver-libs ) but this just results in Reading package lists... DoneBuilding dependency tree Reading state information... DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: nvidia-driver-libs : Depends: libgl1-nvidia-glvnd-glx (= 375.82-1~deb9u1) but it is not going to be installed or libgl1-nvidia-glx (= 375.82-1~deb9u1) but it is not going to be installed Depends: nvidia-egl-icd (= 375.82-1~deb9u1) but it is not going to be installed or libegl1-nvidia (= 375.82-1~deb9u1) but it is not going to be installed Recommends: nvidia-driver-libs-i386 Recommends: libopengl0-glvnd-nvidia but it is not going to be installed Recommends: libglx-nvidia0 (= 375.82-1~deb9u1) but it is not going to be installed Recommends: libgles-nvidia1 (= 375.82-1~deb9u1) but it is not going to be installed Recommends: libgles-nvidia2 (= 375.82-1~deb9u1) but it is not going to be installed Recommends: libnvidia-cfg1 (= 375.82-1~deb9u1) but it is not going to be installed Recommends: nvidia-vulkan-icd (= 375.82-1~deb9u1) but it is not going to be installed How do I install the nvidia-driver with apt? | You need to enable the non-free repositories: sudo sed -i.bak 's/stretch[^ ]* main$/& contrib non-free/g' /etc/apt/sources.list Then run apt update and try your installation again. You’ll probably also need to install the kernel headers if you haven’t already: sudo apt install linux-headers-$(uname -r) See the full instructions on the Debian wiki . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/449794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/269183/"
]
} |
449,800 | What I'm trying to accomplish is to move files in my directory that match with the records in my text file based on 2 parameters . For example I have a record in my text file that reads: SPPARK|10416|3308123|3308123|Uphold|Thelma|1930/05/20|| I have a file in my directory that reads: 1123_M1123_UPHOLD_M1123_MESSAGE_SPPARK_348642.pdf So if last name UPHOLD and 4th field M1123 match up to my fields in my text file, then I want to move them to a specified directory. for files in test/* ; do echo $files | awk -F "_" '{print $3,$4}'done | You need to enable the non-free repositories: sudo sed -i.bak 's/stretch[^ ]* main$/& contrib non-free/g' /etc/apt/sources.list Then run apt update and try your installation again. You’ll probably also need to install the kernel headers if you haven’t already: sudo apt install linux-headers-$(uname -r) See the full instructions on the Debian wiki . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/449800",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/285239/"
]
} |
449,803 | Can somebody give me the appropriate methodology in dealing with a broken alsa ? To be honest I don't know what happenned but that the computer went out of battery while sleeping (the pc not me), and then I was not able to get sound back up.alsa-info allowed me to upload the diagnostic . I'd like to point out the fact that I don't have any volume icon on my launcher anymore and I'm using Lubuntu. I tried purging and reinstalling alsa-base but with no effects. I don't want to reinstall the whole system because of that. | You need to enable the non-free repositories: sudo sed -i.bak 's/stretch[^ ]* main$/& contrib non-free/g' /etc/apt/sources.list Then run apt update and try your installation again. You’ll probably also need to install the kernel headers if you haven’t already: sudo apt install linux-headers-$(uname -r) See the full instructions on the Debian wiki . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/449803",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/180856/"
]
} |
449,818 | Im looking for an stunnel 5.x RPM for Centos 6.5 so I can get TLS1.2 support. Ive looked everywhere, but cannot find one. Ive tried downloading and compiling, as per another question on here (title: Stunnel 5.4 on Centos ), and followed all the instructions, but am running into compile errors. The command is: rpmbuild -ta stunnel-5.46.tar.gz Here is an example. client.c:147: warning: expected [error|warning|ignored] after '#pragma GCC diagnostic'client.c:180: warning: expected [error|warning|ignored] after '#pragma GCC diagnostic'client.c:203: warning: expected [error|warning|ignored] after '#pragma GCC diagnostic'/root/rpmbuild/BUILD/stunnel-5.46/src/client.c:487: undefined reference to `OpenSSL_version_num'/root/rpmbuild/BUILD/stunnel-5.46/src/client.c:487: undefined reference to `OpenSSL_version_num'/root/rpmbuild/BUILD/stunnel-5.46/src/client.c:488: undefined reference to `OpenSSL_version_num'/root/rpmbuild/BUILD/stunnel-5.46/src/stunnel.c:897: undefined reference to `OpenSSL_version'/root/rpmbuild/BUILD/stunnel-5.46/src/stunnel.c:899: undefined reference to `OpenSSL_version'/root/rpmbuild/BUILD/stunnel-5.46/src/stunnel.c:900: undefined reference to `OpenSSL_version_num'collect2: ld returned 1 exit statusmake[2]: *** [stunnel] Error 1make[2]: Leaving directory `/root/rpmbuild/BUILD/stunnel-5.46/src'make[1]: *** [all] Error 2make[1]: Leaving directory `/root/rpmbuild/BUILD/stunnel-5.46/src'make: *** [all-recursive] Error 1error: Bad exit status from /var/tmp/rpm-tmp.mbHOf4 (%build) If anyone can help, Id be grateful. regardsRichard | You need to enable the non-free repositories: sudo sed -i.bak 's/stretch[^ ]* main$/& contrib non-free/g' /etc/apt/sources.list Then run apt update and try your installation again. You’ll probably also need to install the kernel headers if you haven’t already: sudo apt install linux-headers-$(uname -r) See the full instructions on the Debian wiki . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/449818",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/295506/"
]
} |
449,841 | So I need to write a bash script that copies all files to a specified directory however I need to rename the files to it's absolute path by replacing the / character with __ . For example if the file zad1.sh is in the directory /home/123456/ the file needs to be renamed to __home__123456__zad1.sh Any ideas on how to do this? | To get the path of your file : realpath <file> Replace in bash: echo "${var//search/replace}" The first two slashes are for making global search. Using just / would only do one replacement. So your code could be path=$(realpath zad1.sh)path_replaced=${path//\//__} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/449841",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293644/"
]
} |
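
Combining the two pieces into the full task from the question (copy every file under a source tree into one target directory, with the slashes of the absolute path turned into `__`) might look like this sketch; the two directories are placeholders, and file names containing newlines are not handled:

```bash
#!/bin/bash
src=/home/123456        # tree to copy from
dest=/tmp/flattened     # existing directory to copy into

find "$src" -type f | while IFS= read -r f; do
    abs=$(realpath "$f")                 # e.g. /home/123456/zad1.sh
    cp -- "$f" "$dest/${abs//\//__}"     # -> __home__123456__zad1.sh
done
```
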
449,853 | I ran into the same problem described Port forwarding using VPN client , but unsuccessfully. I have a OpenVPN access server version 2.5 and a client configured with a site-to-site routing. Both client and server can communicate with each other by using the private IP addresses. On the client, there is an Apache server which listen on port 8081. The goal is to be able to connect to the OpenVPN server public IP, and have it forward the connection to the client, so that the user can access the Apache server behind My current setup is: sysctl -w net.ipv4.ip_forward=1 iptables -t nat -A PREROUTING -d 50.xxx.xxx.xxx -p tcp --dport 8081 -j DNAT --to-dest 192.168.2.86:8081 iptables -t nat -A POSTROUTING -d 192.168.2.86 -p tcp --dport 8081 -j SNAT --to-source 10.0.2.42 Is there something simple I'm doing incorrectly? Thank you. | The issue was related with the iptables rules. By adding the following rules, everything works as expected: iptables -t nat -I PREROUTING 1 -d {SERVER_LOCAL_IP_ADDRESS} -p tcp --dport {CLIENT_PORT} -j DNAT --to-dest {CLIENT_LOCAL_IP_ADDRESS}:{CLIENT_PORT} iptables -t nat -I POSTROUTING 1 -d {CLIENT_LOCAL_IP_ADDRESS} -p tcp --dport {CLIENT_PORT} -j SNAT --to-source {VPN_GATEWAY_IP} iptables -I FORWARD 1 -d {CLIENT_LOCAL_IP_ADDRESS} -p tcp --dport {CLIENT_PORT} -j ACCEPT | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/449853",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/295541/"
]
} |
449,875 | I'm trying to write a make task that will export all variables from my .env file, which looks like this (one assignment per line): A=1 B=2 If I type set -a; . ./.env; set +a in the terminal, it works perfectly. But the same commands in a Makefile target don't work: export.env: set -a . ./.env set +a (three separate recipe lines). Running make export.env succeeds, but printenv | grep A afterwards shows nothing. I need these vars to persist after the make task has finished. | Like any process, make can’t modify the environment of an existing process, it can only control the environment which is passed to processes it starts. So short of stuffing the input buffer, there’s no way to do what you’re trying to do. In addition, make processes each command line in a different shell, so your set -a, . ./.env, and set +a lines are run in separate shells. The effects of . ./.env will only be seen in the shell which runs that command. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/449875",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/295568/"
]
} |
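
Since make cannot reach back into the shell that invoked it, the usual workaround goes the other way around: let the recipe print export statements and have the interactive shell evaluate them. A sketch only; the print-env target and the sed transformation are not part of the original Makefile, and values containing spaces would need extra quoting:

```sh
# In the Makefile (recipe line indented with a tab):
#   print-env:
#   	@sed 's/^/export /' .env
#
# Then, in the interactive shell, apply the output to the *current* process:
eval "$(make --silent print-env)"
printenv | grep '^A='        # now prints A=1
```
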
449,960 | I've set up a new Debian 9 ( stretch ) LXC container on a machine running Proxmox VE, and installed the cifs-utils package.I quickly tested the connection to the SMB server by running smbclient //192.168.0.2/share -U myusername which worked fine. However, the command mount.cifs //192.168.0.2/share /mnt -o user=myusername failed, printing the following error message: mount error(1): Operation not permittedRefer to the mount.cifs(8) manual page (e.g. man mount.cifs) I've made sure that… the owner and group of the shared directory (on the SMB server, which is a FreeBSD machine) are both existent on the client, i.e., inside the container. the owner of the shared directory is a member of the group , both on the server and the client. ( id myusername ) the mountpoint ( /mnt ) exists on the client. What could be the cause of the above-mentioned error? | You're probably running an unprivileged LXC container. The easiest solution is to use a privileged container instead. However, there might be other solutions; take a look e.g. at this thread/post in the proxmox forums. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/449960",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231113/"
]
} |
450,008 | Python packages are frequently hosted in many distribution's repositories. After reading this tutorial, specifically the section titled "Do you really want to do this" I have avoided using pip and preferred to use the system repository, only resorting to pip when I need to install a package not in the repository. However, because this is an inconsistent installation method, would it be better to only use pip? What are the benefits/detractors to using pip over the system's own repository for packages that are available in both places? The link I included states The advantage of always using standard Debian / NeuroDebian packages, is that the packages are carefully tested to be compatible with each other. The Debian packages record dependencies with other libraries so you will always get the libraries you need as part of the install. I use arch. Is this the case with other package-management systems besides apt? | The biggest disadvantage I see with using pip to install Python modules on your system, either as system modules or as user modules, is that your distribution’s package management system won’t know about them. This means that they won’t be used for any other package which needs them, and which you may want to install in the future (or which might start using one of those modules following an upgrade); you’ll then end up with both pip - and distribution-managed versions of the modules, which can cause issues (I ran into yet another instance of this recently). So your question ends up being an all-or-nothing proposition: if you only use pip for Python modules, you can no longer use your distribution’s package manager for anything which wants to use a Python module... The general advice given in the page you linked to is very good: try to use your distribution’s packages as far as possible, only use pip for modules which aren’t packaged, and when you do, do so in your user setup and not system-wide. Use virtual environments as far as possible, in particular for module development. Especially on Arch, you shouldn’t run into issues caused by older modules; even on distributions where that can be a problem, virtual environments deal with it quite readily. It’s always worth considering that a distribution’s library and module packages are packaged primarily for the use of other packages in the distribution; having them around is a nice side-effect for development using those libraries and modules, but that’s not the primary use-case. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/450008",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/288980/"
]
} |
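
The "use virtual environments" advice above boils down to a few commands; nothing here touches the distribution's site-packages (the module name is only an example):

```sh
# Create and enter an isolated environment for one project
python3 -m venv ~/venvs/myproject
source ~/venvs/myproject/bin/activate

# pip now installs into the venv only, never system-wide
pip install requests

# Leave the environment; the system Python stays untouched throughout
deactivate
```
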
450,056 | var="/ax1121/global/config/domains/adf_domain/config/fmwconfig/components/OHS/instances/vmserver1234/" I want the portion "/instances" to be removed and stored in a variable. After removal, it should look as follows var="/ax1121/global/config/domains/adf_domain/config/fmwconfig/components/OHS/vmserver1234/" Thanks in advance | Using bash : var=${var/\/instances/} This uses the parameter substitution ${variable/pattern/replacement} to replace (the first) /instances string in $var with nothing. This could also be written var=${var/'/instances'/} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/450056",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/295633/"
]
} |
450,059 | I created a file using: printf 'this is \n not is \n is is \n this biz' > file2 when I try to remove all \n(newline) it only remove sed's own inserted number's newline sed '=' file2 | sed 'N; s/\n/ /' the output is: 1 this is 2 not is 3 is is 4 this biz and not what I expected which is: 1 this is 2 not is 3 is is 4 this biz I am lost. | Your second sed script, Ns/\n/ / does not work the way you expect because it will read one line, then append the next line to it with an embedded newline inserted by the N command, and then replace that newline with a space (and output). When reading the line after, this result from the first two lines is discarded. Instead, you would have had to use the hold space: H; # append to hold space with a '\n' embedded # for the last line:${ x; # swap in the hold space s/\n//; # delete the first newline (from the H command on the very first line of input) y/\n/ /; # replace all other newlines with spaces p; # print result} This script is running once for each line of input, collecting data in the hold space until we hit the last line. At the last line, we process the collected data and output it. You would run this with sed -n : $ sed '=' <file2 | sed -n 'H; ${ x; s/\n//; y/\n/ /; p; }'1 this is 2 not is 3 is is 4 this biz (no newline at end of output, as there was none at the end of the input). Alternatively, with an explicit loop we may use N . The trick here is to never reach the end of the script until we're ready to print the result. :top; # define label 'top'N; # append next line with a '\n' embedded$!btop; # if not at end, branch to 'top'y/\n/ /; # replace all newlines with spaces # (implicit print) This script only runs (to the end) once and manages the reading of the data itself whereas the previous script was fed data by the built-in read loop in sed (which replaces the pattern space for each line read, which was your issue). It uses the pattern space rather than the hold space to collect the data and processes it when the last line has been read. On the command line: $ sed '=' <file2 | sed ':top; N; $!btop; y/\n/ /' (same output as above) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/450059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/295716/"
]
} |
450,200 | I decided to try lxdm (was using fluxbox and xfce), and discovered that for many programs the url handler was failing, producing this error message; Quite strange as you can see, it's prepending the user directory to the url. The example here is from telegram, but it happens in discord, as well as when executing from the command line; xdg-open https://www.google.com produces a similar error. xdg-settings get default-web-browser output's firefox.desktop which works as a link in both xfce and lxdm. More information; I ran bash -x on it and... $ bash -x /usr/bin/xdg-open http://www.google.com+ check_common_commands http://www.google.com+ '[' 1 -gt 0 ']'+ parm=http://www.google.com+ shift+ case "$parm" in+ '[' 0 -gt 0 ']'+ '[' -z '' ']'+ unset XDG_UTILS_DEBUG_LEVEL+ '[' 0 -lt 1 ']'+ xdg_redirect_output=' > /dev/null 2> /dev/null'+ '[' xhttp://www.google.com '!=' x ']'+ url=+ '[' 1 -gt 0 ']'+ parm=http://www.google.com+ shift+ case "$parm" in+ '[' -n '' ']'+ url=http://www.google.com+ '[' 0 -gt 0 ']'+ '[' -z http://www.google.com ']'+ detectDE+ unset GREP_OPTIONS+ '[' -n LXDE ']'+ case "${XDG_CURRENT_DESKTOP}" in+ DE=lxde+ '[' xlxde = x ']'+ '[' xlxde = x ']'+ '[' xlxde = x ']'+ '[' xlxde = xgnome ']'+ '[' -f /run/user/1000/flatpak-info ']'+ '[' xlxde = x ']'+ DEBUG 2 'Selected DE lxde'+ '[' -z '' ']'+ return 0+ case "${BROWSER}" in+ case "$DE" in+ open_lxde http://www.google.com+ pcmanfm --help -a is_file_url_or_path http://www.google.com++ file_url_to_path http://www.google.com++ local file=http://www.google.com++ echo http://www.google.com++ grep -q '^file:///'++ echo http://www.google.com+ local file=http://www.google.com+ echo http://www.google.com+ grep -q '^/'++ pwd+ file=/home/nesmerrill/.local/share/applications/http://www.google.com+ pcmanfm /home/nesmerrill/.local/share/applications/http://www.google.com+ '[' 0 -eq 0 ']'+ exit_success+ '[' 0 -gt 0 ']'+ exit 0 The important part seems to be pcmanfm --help -a is_file_url_or_path http://www.google.com but, that command if that's how it was used, doesn't seem to do much of anything? $ pcmanfm --help -a is_file_url_or_path http://www.google.comUsage: pcmanfm [OPTION…] [FILE1, FILE2,...] Help Options: -h, --help Show help options --help-all Show all help options --help-gtk Show GTK+ OptionsApplication Options: -p, --profile=PROFILE Name of configuration profile -d, --daemon-mode Run PCManFM as a daemon --no-desktop No function. Just to be compatible with nautilus --desktop Launch desktop manager --desktop-off Turn off desktop manager if it's running --desktop-pref Open desktop preference dialog --one-screen Use --desktop option only for one screen -w, --set-wallpaper=FILE Set desktop wallpaper from image FILE --wallpaper-mode=MODE Set mode of desktop wallpaper. MODE=(color|stretch|fit|crop|center|tile|screen) --show-pref=N Open Preferences dialog on the page N -n, --new-win Open new window -f, --find-files Open a Find Files window --role=ROLE Window role for usage by window manager --display=DISPLAY X display to use | @user310685 got it close - but DEFINITELY WRONG. That fix "works" only when xdg-open is NOT given "naked" file paths (i.e. with no leading "file://" URI scheme and double-slash) or file-schemed URI's (i.e. with the leading "file://"). Those two types of argument should have xdg-open defer to pcmanfm , but they won't. The actual error is not a mistake in the STDERR redirection. Rather, it's that the script writer confused the test "and" operator and the shell's process list "and" connector. 
The one (erroneously) used is "-a"; the correct one is "&&". As reference, I've reproduced the original script line, my fix for that line, and the "horror of horrors" suggestion by @user310685: #ORIG# if pcmanfm --help >/dev/null 2>&1 -a is_file_url_or_path "$1"; then#FIXED# if pcmanfm --help >/dev/null 2>&1 && is_file_url_or_path "$1"; then#HORROR# if pcmanfm --help >/dev/null 2>$1 -a is_file_url_or_path "$1"; then The intention of the if ..; then is given in the script line just above it: # pcmanfm only knows how to handle file:// urls and filepaths, it seems. With this comment in mind, the way to understand the problematic if .. then line is: Test if pcmanfm is runnable (by having it report it's own help, and discarding any STDOUT or STDERR) AND, run the script-function is_file_url_or_path() to then see if the "$1" argument is acceptable to pcmanfm (as per the code comment noted above) If both these conditions hold, then the script flows into a short block that: Calls the script-function file_url_to_path() to strip off any leading "file://" part (as local var file ) If the result is NOT an absolute path (i.e. doesn't start with "/"), then prepend the CWD to the value of file Execute pcmanfm "$file" Why the Original Script Fails: As noted above, the script is (erroneously) using "-a" as a "process list and operator." What actually happens is that the shell runs the command (after STDOUT and STDERR redirections are "pulled out" of the command, which are allowed to be anywhere in the command word sequence after the first word): pcmanfm --help -a is_file_url_or_path "$1" This always succeeds (unless pcmanfm isn't executable on the PATH). All the extra stuff on the command line ( -a .. ) is ignored by pcmanfm running it's --help mode. Thus, the "process as a file or file-URL" code block is always executed. When given an URL (with a scheme part), the file_url_to_path() script-function only removes a leading "file://", truncates any trailing "#..." fragment, and also URI-decodes the argument (i.e. "%XX" are converted to ASCII). NOTE: Unless the argument starts with "file:///", nothing is done. For example, the OP's URL " https://www.google.com " is unchanged by file_url_to_path() since it does not begin with "file:///". BUT later code then considers this argument to be a "relative path" since it clearly doesn't start with "/". Thus, it prepends the CWD as described and then pcmanfm is almost certainly NOT going to find that munged value as an extant path to display. Instead, it shows an error pop-up, as in the OP's question. The Fix: Simple enough: use the correct syntax for a process chain AND-operator: "&&" as shown in the #FIXED# line, above. The HORROR of @user310685's Suggestion: What @user310685 proposes does fix one problem, sort of. What happens is that the shell dutifully does variable expansion and then attempts to execute something like: pcmanfm --help >/dev/null 2>https://www.google.com -a is_file_url_or_path https://www.google.com This, is almost certainly going to produce a shell redirection error (unless the CWD has a folder (in the right place) named "https:" - which it could ). That redirection error spits a message to STDERR, and then the shell moves on. Since this error occured within an if .. else .. fi block, the shell takes the else .. fi part, which is what @user310685 wants. Thus, the problem is solved... BUT AT WHAT COST??? 
There are two problems with this not-quite-right fix: When actually given a path or a file-schemed URL, the wrong code path is executed (the else .. fi part). This is because the intended process chain is really only a single process that (almost) always generates a shell redirection error which is taken as the if .. ; condition as being "false." This not soooo bad, since that else .. fi block merely defers work to another script-function called open_generic() which is designed to handle paths and file-URL's (but not using pcmanfm to do the work, rather some other complex code-path that I didn't analyze but I presume does a fair job). But WAIT! The HORROR ... Look back up at the pcmanfm --help ... expanded script line that the shell attempts. Note the redirection of STDERR. Consider what happens if this is done with a legitimate path, like "/home/user/precious". OMG The attept to probe if pcmanfm is available and then to test if the argument is a file just OVERWROTE THE FILE!!! Bye-bye precious... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/450200",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/295810/"
]
} |
450,226 | Just wondering if this: if [ "$first_arg" == "major" ] || [ "$first_arg" == "minor" ]; then exit 1;fi is the same as this: if [ "$first_arg" == "major" || "$first_arg" == "minor" ]; then exit 1;fi | They're not the same. In fact [ "$first_arg" == "major" || "$first_arg" == "minor" ] is not even a valid expression. This is because [ is a command that's equivalent to test and they can't use the || alternative, which operates on the inter-command level. What could be historically considered correct for alternative is -o , but it's now marked as obsolete by POSIX 1 , which advises to rewrite test "$1" -o "$2" into test "$1" || test "$2" Apart from the test and [ constructs, there's also the "modern" [[ test command, which in turn doesn't accept -o altogether, but instead accepts || . Thus all of these are valid and equivalent: One [[ test: if [[ $first_arg == major || $first_arg == minor ]]; then exit 1;fi Two [[ tests: if [[ $first_arg == major ]] || [[ $first_arg == minor ]]; then exit 1;fi Two [ tests (the standard equivalent): if [ "$first_arg" = major ] || [ "$first_arg" = minor ]; then exit 1;fi Double quotes aroung $first_arg are not necessary inside [[ , as there's no word splitting nor pathname expansion in there. The quotes should be used with [ , however. And there's no point in quoting minor nor major either. Not just here, but with test or [ too. That's because they're simple strings. 1. See APPLICATION USAGE. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/450226",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
450,229 | The official Checkpoint out command line tool from CheckPoint, for setting up a SSL Network Extender VPN is not longer working from the Linux command line. It is also no longer actively supported by CheckPoint. However, there is a promising project, that tries to replicate the Java applet for authentication, that talks with the snx command line utility, called snxconnect . I was trying to put snxconnect text utility to work in Debian Buster, doing: sudo pip install snxvpn and export PYTHONHTTPSVERIFY=0snxconnect -H checkpoint.hostname -U USER However, it was mostly dying either with an HTTP error of: HTTP/1.1 301 Moved Permanently: or: Got HTTP response: HTTP/1.1 302 Found or: Unexpected response, try again. What to do about it? PS. The EndPoint Security VPN official client is working well both in a Mac High Sierra and Windows 10 Pro. | SNX build 800007075 from 2012, used to support the CheckPoint VPN from the Linux command line. So I tested it, and lo and behold, it still works with the latest distributions and kernel(s) 4.x/5.x. So ultimately, my other answer in this thread holds true, if you cannot get hold of SNX build 800007075 or if that specific version of SNX stops working with the current Linux versions (it might happen in a near future) or if you need OTP support. Presently, the solution is then installing this specific last version of SNX that still supports doing the VPN from the command line. To install snx build 800007075, get it from: wget https://starkers.keybase.pub/snx_install_linux30.sh?dl=1 -O snx_install.sh For Debian and Debian-based 64-bit systems like Ubuntu and Linux Mint, you might need to add the 32-bit architecture: sudo dpkg --add-architecture i386sudo apt-get update I had to install the following 32-bit packages: sudo apt-get install libstdc++5:i386 libx11-6:i386 libpam0g:i386 Run then the snx installation script: chmod a+rx snx_install.shsudo ./snx_install.sh` You will have now a /usr/bin/snx 32-bit client binary executable. Check if any dynamic libraries are missing with: sudo ldd /usr/bin/snx You can only proceed to the following points when all the dependencies are satisfied. You might need to run manually first snx -s CheckpointURLFQDN -u USER , before scripting any automatic use, for the signature VPN be saved at /etc/snx/USER.db . Before using it, you create a ~/.snxrc file, using your regular user (not root) with the following contents: server IP_address_of_your_VPNusername YOUR_USERreauth yes For connecting, type snx $ snxCheck Point's Linux SNXbuild 800007075Please enter your password: SNX - connected. Session parameters: Office Mode IP : 10.x.x.xDNS Server : 10.x.x.xSecondary DNS Server: 10.x.x.xDNS Suffix : xxx.xx, xxx.xxTimeout : 24 hours If you understand the security risks of hard coding a VPN password in a script, you also can use it as: echo 'Password' | snx For closing/disconnecting the VPN, while you may stop/kill snx , the better and official way is issuing the command: $snx -d SNX - Disconnecting... done. see also Linux Checkpoint SNX tool configuration issues for some clarifications about which snx version to use. If automating the login and accepting a new signature (and understanding the security implications), I wrote an expect script, which I called the script snx_login.exp ; not very secure, however you can automate your login, calling it with the password as an argument: #!/usr/bin/expectspawn /usr/bin/snx set password [lindex $argv 0] expect " ?assword: "send -- "$password\r" expect {"o:" {send "y\r"exp_continue}eof} PS. 
Beware: snx does not support OTP on its own; if you need OTP you will have to use the snxconnect script from the other answer. PPS @gibies called to my attention that when using an etoken, the password field takes the password plus the appended etoken, not a fixed password. | {

"score": 6,
"source": [
"https://unix.stackexchange.com/questions/450229",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138261/"
]
} |
450,234 | BIND 9.11.3 Ubuntu 18.04 kernel 4.15.0-23 I am running bind9 as my LAN DNS and it is working for all hosts and forwarding to internet through the google DNS IPs Why does my log have many instances of this message. 3-4 entries per minute : named[862]: resolver priming query complete I have run named-checkconf named-checkzone without errors. | This is a known (and fixed in later versions) bug in bind9: https://gitlab.isc.org/isc-projects/bind9/issues/752 Note that bind9 9.11 continues to be supported as the long-term support branch, the fix is in 9.13 and later. A workaround appears to be to run dig +trace on any domain frequently enough (every 8–12 hours) that the cache doesn't expire. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/450234",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/251630/"
]
} |
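The workaround mentioned in the answer (running `dig +trace` often enough that the cache never expires) can be automated. This is a hypothetical cron entry, assuming `dig` is installed at `/usr/bin/dig`; the file name and `example.com` are placeholders.

```bash
# /etc/cron.d/bind9-priming-workaround  (hypothetical file name)
# re-prime the root hints every 8 hours so the cached priming data never expires
0 */8 * * * root /usr/bin/dig +trace example.com > /dev/null 2>&1
```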
450,239 | When attempting to source a file, wouldn't you want an error saying the file doesn't exist so you know what to fix? For example, nvm recommends adding this to your profile/rc: export NVM_DIR="$HOME/.nvm"[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm With above, if nvm.sh doesn't exist, you'll get a "silent error". But if you try . "$NVM_DIR/nvm.sh" , the output will be FILE_PATH: No such file or directory . | In POSIX shells, . is a special builtin, so its failure causes the shell to exit (in some shells like bash , it's only done when in POSIX mode). What qualifies as an error depends on the shell. Not all of them exit upon a syntax error when parsing the file, but most would exit when the sourced file can't be found or opened. I don't know of any that would exit if the last command in the sourced file returned with a non-zero exit status (unless the errexit option is on of course). Here doing: [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" Is a case where you want to source the file if it's there, and don't if it's not (or is empty here with -s ). That is, it should not be considered an error (fatal error in POSIX shells) if the file is not there, that file is considered an optional file. It would still be a (fatal) error if the file was not readable or was a directory or (in some shells) if there was a syntax error while parsing it which would be real error conditions that should be reported. Some would argue that there's a race condition. But the only thing it means would be that the shell would exit with an error if the file is removed in between the [ and . , but I'd argue it's valid to consider it an error that this fixed path file would suddenly vanish while the script is running. On the other hand, command . "$NVM_DIR/nvm.sh" 2> /dev/null where command ¹ removes the special attribute to the . command (so it doesn't exit the shell on error) would not work as: it would hide . 's errors but also the errors of the commands run in the sourced file it would also hide real error conditions like the file having the wrong permissions. Other common syntaxes (see for instance grep -r /etc/default /etc/init* on Debian systems for the init scripts that haven't been converted to systemd yet (where EnvironmentFile=-/etc/default/service is used to specify an optional environment file instead)) include: [ -e "$file" ] && . "$file" Check the file it's there, still source it if it's empty. Still fatal error if it can't be opened (even though it's there, or was there). You may see more variants like [ -f "$file" ] (exists and is a regular file), [ -r "$file" ] (is readable), or combinations of those. [ ! -e "$file" ] || . "$file" A slightly better version. Makes it clearer that the file not existing is an OK case. That also means the $? will reflect the exit status of the last command run in $file (in the previous case, if you get 1 , you don't know whether it's because $file didn't exist or if that command failed). command . "$file" Expect the file to be there, but don't exit if it can't be interpreted. [ ! -e "$file" ] || command . "$file" Combination of the above: it's OK if the file is not there, and for POSIX shells, failures to open (or parse) the file are reported but are not fatal (which may be more desirable for ~/.profile ). ¹ Note: In zsh however, you can't use command like that unless in sh emulation; note that in the Korn shell, source is actually an alias for command . , a non-special variant of . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/450239",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/273817/"
]
} |
450,248 | I'm planning on doing a migration of my Debian installation from one disk to another in the near future. As a part of that, I'm thinking about setting the file systems up differently, for future-proofing as well as for simplifying the setup. My current setup is a one-device RAID1 LVM (I originally intended to set up mirroring of the system disk, but never got around to actually doing that) on a partition on a SSD. That RAID1 in turn holds the ext4 root file system, with /opt plus parts of /usr and /var separated onto ZFS storage. Particularly, /boot is part of the root file system, and I'm booting using old-style MBR using GRUB 2. The idea is to have a large root file system with a *nix-esque file system (probably ext4 to begin with), and to separate out the parts that have special needs. I'd like to leave open the possibility of migrating to UEFI boot later, possibly including a migration to GPT, without needing to move things around. (Backup/repartition/restore is another matter, and will likely be needed for migrating from MBR to GPT, but I'll probably be getting a new disk again before that becomes an issue.) I'd also like to have the option to migrate the root file system to ZFS later, or at least to set up dm-verity for data integrity verification. (Yes, it'll be a bit of a headache to get everything about that right, especially semi-in-place. That'll be a matter for a later day; their only consideration for this question is in terms of later options.) This all seems to make an obvious case for separating / , /boot and the FAT32 /boot/efi (the last of which may initially be empty), in addition to those that I have already separated from the root file system. But are there others? Which system file systems, backed by persistent storage, should be separated from the root file system and why on a modern-day Linux installation? Do any of these file systems need to go onto specific partition locations when using MBR, or are their locations arbitrary? For example, would /boot/efi need to go onto the first primary partition or something like that? | In POSIX shells, . is a special builtin, so its failure causes the shell to exit (in some shells like bash , it's only done when in POSIX mode). What qualifies as an error depends on the shell. Not all of them exit upon a syntax error when parsing the file, but most would exit when the sourced file can't be found or opened. I don't know of any that would exit if the last command in the sourced file returned with a non-zero exit status (unless the errexit option is on of course). Here doing: [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" Is a case where you want to source the file if it's there, and don't if it's not (or is empty here with -s ). That is, it should not be considered an error (fatal error in POSIX shells) if the file is not there, that file is considered an optional file. It would still be a (fatal) error if the file was not readable or was a directory or (in some shells) if there was a syntax error while parsing it which would be real error conditions that should be reported. Some would argue that there's a race condition. But the only thing it means would be that the shell would exit with an error if the file is removed in between the [ and . , but I'd argue it's valid to consider it an error that this fixed path file would suddenly vanish while the script is running. On the other hand, command . "$NVM_DIR/nvm.sh" 2> /dev/null where command ¹ removes the special attribute to the . 
command (so it doesn't exit the shell on error) would not work as: it would hide . 's errors but also the errors of the commands run in the sourced file it would also hide real error conditions like the file having the wrong permissions. Other common syntaxes (see for instance grep -r /etc/default /etc/init* on Debian systems for the init scripts that haven't been converted to systemd yet (where EnvironmentFile=-/etc/default/service is used to specify an optional environment file instead)) include: [ -e "$file" ] && . "$file" Check the file it's there, still source it if it's empty. Still fatal error if it can't be opened (even though it's there, or was there). You may see more variants like [ -f "$file" ] (exists and is a regular file), [ -r "$file" ] (is readable), or combinations of those. [ ! -e "$file" ] || . "$file" A slightly better version. Makes it clearer that the file not existing is an OK case. That also means the $? will reflect the exit status of the last command run in $file (in the previous case, if you get 1 , you don't know whether it's because $file didn't exist or if that command failed). command . "$file" Expect the file to be there, but don't exit if it can't be interpreted. [ ! -e "$file" ] || command . "$file" Combination of the above: it's OK if the file is not there, and for POSIX shells, failures to open (or parse) the file are reported but are not fatal (which may be more desirable for ~/.profile ). ¹ Note: In zsh however, you can't use command like that unless in sh emulation; note that in the Korn shell, source is actually an alias for command . , a non-special variant of . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/450248",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2465/"
]
} |
450,365 | In a shell script, how can I test programmatically whether or not the terminal supports 24-bit or true color? Related: This question is about printing a 24-bit / truecolor test pattern for eyeball verification | This source says to check if $COLORTERM contains 24bit or truecolor . sh: [ "$COLORTERM" = truecolor ] || [ "$COLORTERM" = 24bit ] bash / zsh: [[ $COLORTERM =~ ^(truecolor|24bit)$ ]] | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/450365",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
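To complement the $COLORTERM check above with the eyeball test the question links to, here is a small sketch that prints a red gradient using 24-bit escape sequences. It assumes bash and a VT-style terminal; on a true-colour terminal the bar renders as a smooth ramp, otherwise you see coarse banding or wrong colours.

```bash
#!/usr/bin/env bash
# print a 24-bit red gradient; smooth output suggests truecolor support
for r in $(seq 0 8 255); do
  printf '\e[48;2;%d;0;0m ' "$r"
done
printf '\e[0m\n'
```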
450,369 | I have the following command: python -c 'import crypt; print(crypt.crypt("$Password", crypt.mksalt(crypt.METHOD_SHA512)))' Where $Password is a shell variable. How do I correctly expand it as a variable, and not have it treated as a literal? | Don't, as that would be a code injection vulnerability; also avoid passing passwords in arguments to commands, as they then become public in the output of ps and are sometimes logged in audit logs. Using environment variables is usually better: PASSWORD="$Password" python3 -c 'import os, crypt; print(crypt.crypt(os.getenv("PASSWORD"), crypt.mksalt(crypt.METHOD_SHA512)))' (here using the VAR=value cmd syntax as opposed to export VAR so the environment variable is passed only to that one command invocation). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/450369",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293979/"
]
} |
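As a further hedge against leaking the password, it can be read interactively instead of coming from a pre-set shell variable. This sketch combines bash's read -s with the same environment-variable hand-off used in the answer; the prompt text and variable names are arbitrary.

```bash
#!/usr/bin/env bash
# read the password without echoing it, then pass it via the environment
IFS= read -rs -p 'Password: ' pw; echo
PASSWORD="$pw" python3 -c 'import os, crypt
print(crypt.crypt(os.environ["PASSWORD"], crypt.mksalt(crypt.METHOD_SHA512)))'
```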
450,378 | The find command on Linux has a lot of options compared to the find command on SunOS or Solaris. I want to use the find command like this: find data/ -type f -name "temp*" -printf "%TY-%Tm-%Td %f\n" | sort -r It works perfectly fine on a Linux machine, but the same command doesn't have the option -printf on a SunOS machine. I want to customize my output in the "%TY-%Tm-%Td %f\n" format. Please suggest any alternatives for SunOS. | Note that it has nothing to do with Linux; that -printf predicate is specific to the GNU implementation of find . Linux is not an OS, it's just the kernel found in a number of OSes. While most of those OSes used to use a GNU userland in the past, now the great majority of OSes using Linux are embedded and have basic commands if they have any. The GNU find command, which predates Linux, can be installed on most Unix-like OSes. It was certainly used on Solaris (called SunOS back then) before Linux came out. Nowadays, it's even available as an Oracle package for Solaris. On Solaris 11, that's in file/gnu-findutils , and the command is named gfind (for GNU find , to distinguish it from the system's own find command). Now, if you can't install packages, your best bet is probably to use perl : find data/ -type f -name "temp*" -exec perl -MPOSIX -le ' for (@ARGV) { unless(@s = lstat($_)) { warn "$_: $!\n"; next; } print strftime("%Y-%m-%d", localtime($s[9])) . " $_"; }' {} + | sort -r Here, we're still using find (Solaris implementation) to find the files, but we're using its -exec predicate to pass the list of files to perl . And perl does a lstat() on each to retrieve the file metadata (including the modification time as the 10th element ( $s[9] )), interprets it in the local timezone ( localtime() ) and formats it ( strftime() ) which it then print s alongside the file name ( $_ is the loop variable if none is specified in perl , and $! is the equivalent of stderror(errno) , the error text for the last system call failure). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/450378",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/295951/"
]
} |
450,392 | How to use "rsync" to copy directories and files (except certain types) to destination ?following are the file types to be exempted or not to be synced;*.odb, *.a3db | Note that it has nothing to do with Linux; that -printf predicate is specific to the GNU implementation of find . Linux is not an OS, it's just the kernel found in a number of OSes. While most of those OSes used to use a GNU userland in the past, now the great majority of OSes using Linux are embedded and have basic commands if they have any. The GNU find command, which predates Linux, can be installed on most Unix-like OSes. It was certainly used on Solaris (called SunOS back then) before Linux came out. Nowadays, it's even available as an Oracle package for Solaris. On Solaris 11, that's in file/gnu-findutils , and the command is named gfind (for GNU find , to distinguish it from the system's own find command). Now, if you can't install packages, your best bet is probably to use perl : find data/ -type f -name "temp*" -exec perl -MPOSIX -le ' for (@ARGV) { unless(@s = lstat($_)) { warn "$_: $!\n"; next; } print strftime("%Y-%m-%d", localtime($s[9])) . " $_"; }' {} + | sort -r Here, we're still using find (Solaris implementation) to find the files, but we're using its -exec predicate to pass the list of files to perl . And perl does a lstat() on each to retrieve the file metadata (including the modification time as the 10th element ( $s[9] )), interprets it in the local timezone ( localtime() ) and formats it ( strftime() ) which it then print s alongside the file name ( $_ is the loop variable if none is specified in perl , and $! is the equivalent of stderror(errno) , the error text for the last system call failure). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/450392",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/270188/"
]
} |
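For the rsync exclusion the question asks about, a minimal sketch assuming GNU rsync; the source and destination paths are placeholders.

```bash
# copy everything under /data to the destination, skipping *.odb and *.a3db files
rsync -av --exclude='*.odb' --exclude='*.a3db' /data/ user@host:/backup/data/
```

The patterns are quoted so the local shell does not expand them before rsync sees them.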
450,421 | To test my network, I want to send x MB/s between two hosts. I know that ping can be used to send a good amount of data, but I need a solution where I can set the bandwidth (it does not have to be really precise). $ sendTrafic --throughput 10M 10.0.0.1 Any idea how I can do that? I thought about a script running scappy x times per second, but there should be something better. EDIT: I used the following solution: # On receiving node:iperf -s -u# On sending node:iperf -c <ip> -u -b 10m -t 30 Which configures the first host as a UDP server, and the second one as a UDP client who send 10Mb/s for 30 seconds. Thank everyone for your help. | If you don't want to install iperf (which is not the most reliable tool I've use in the past IMHO), you can use pv and netcat You would first need to install pv and netcat (it's available in most distro). On the receiving site you will need a listening socket on a reachable port: #if you want the output you can remove the redirection or redirect it to a different file.#if you want to listen to a TCP port below 1024 you will need to use rootnc -l 4444 > /dev/null On the sending machine you will use this command : dd if=/dev/urandom bs=1000 count=1000 | pv -L 10M | nc <ip> 4444 dd if=/dev/urandom bs=1000 count=1000 will send blocks of 1000 random characters (1000 Bytes) 1000 time: 1000B * 1000 = 1MB . You can adjust the count to increase the amount of data send. pv -L 10M : will limit the write rate to 10 mebibytes/s (*1024). netcat <ip> 4444 will send the data to the IP on port TCP 4444. You adapt to send more data or even real file using : cat /some/files| pv -L 1M | nc <ip> 4444 and on the other side : nc -l 4444 > /some/destinationfiles | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/450421",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/281450/"
]
} |
450,429 | I wanted to extract 3 words from a variable in 3 different variables in ksh script. I used this line on ksh93 and it worked fine: read A B C <<< $line Got this error for the above command while running it on ksh88: syntax error at line ## : '<' unexpected How to perform the same action on ksh88? | No, here strings are not available on ksh88 and pdksh . On the more recent ksh93 (original AT&T Korn Shell) and mksh (currently actively developed pdksh derivative) it is, however, available. <<< is one of the “modern” shell extensions shared between ksh93 , mksh , GNU bash and zsh . Your specific problem… read A B C <<< $line … can be worked around with this (Korn shell): print -r -- $line |&read -p A B C You can also use this (POSIX shell), it has tmpfile performance penalty though (on the other hand, <<< likely also has that): read A B C <<EOF$lineEOF If you just want to split words, though: set -A arrname -- $line Then use ${arrname[0]} instead of $A and ${arrname[1]} instead of $B . Only it will not stop at splitting at three elements, so if $line is " foo bar baz bla ", $C would contain “baz bla”, whereas ${arrname[2]} has “baz” and ${arrname[3]} has “bla”. If you don’t need your positional parameters, though, you can do set -- $lineA=$1; shiftB=$1; shiftC=$* The shift will cause errors if $line has fewer than three words, though (check $# if you’re not sure, or use [[ $line = *' '*' '[! ] ]] (likely slower though) to check first). Mind that set … $line will also do globbing (thanks Stéphane for reminding us), so you need set -o noglob before (and possibly restore the previous state afterwards, usually set +o noglob ). Full disclosure: I’m the mksh developer. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/450429",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/295951/"
]
} |
450,480 | I performed a git commit command and it gave me the following reply: 7 files changed, 93 insertions(+), 15 deletions(-)mode change 100644 => 100755 assets/internal/fonts/icomoon.svgmode change 100644 => 100755 assets/internal/fonts/icomoon.ttfmode change 100644 => 100755 assets/internal/fonts/icomoon.woff I know files can have user / group / other rwx permissions and those can be expressed as three bytes, like "644" or "755". But why is git showing six bytes here? I've read the following articles but didn't find an answer: Wikipedia's article on "File system permissions" How do I remove files saying “old mode 100755 new mode 100644” from unstaged changes in Git? Unix permissions made easy Chmod permissions (flags) explained: 600, 0600, 700, 777, 100 etc.. | The values shown are the 16-bit file modes as stored by Git , following the layout of POSIX types and modes : 32-bit mode, split into (high to low bits) 4-bit object type valid values in binary are 1000 (regular file), 1010 (symbolic link) and 1110 (gitlink) 3-bit unused 9-bit unix permission. Only 0755 and 0644 are valid for regular files. Symbolic links and gitlinks have value 0 in this field. That file doesn’t mention directories; they are represented using object type 0100. Each digit in the six-digit value is in octal, representing three bits; 16 bits thus need six digits, the first of which only represents one bit: Type|---|Perm bits1000 000 1111011011 0 0 7 5 51000 000 1101001001 0 0 6 4 4 Git doesn’t store arbitrary modes, only a subset of the values are allowed, from the usual POSIX types and modes (in octal, 12 for a symbolic link, 10 for a regular file, 04 for a directory) to which git adds 16 for Git links. The mode is appended, using four octal digits. For files, you’ll only ever see 100755 or 100644 (although 100664 is also technically possible); directories are 040000 (permissions are ignored), symbolic links 120000. The set-user-ID, set-group-ID and sticky bits aren’t supported at all (they would be stored in the unused bits). See also this related answer . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/450480",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28160/"
]
} |
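A small sketch that makes the six-digit values above tangible: list the modes Git actually stores for the current tree, and convert the binary layout from the answer's table back to octal. The repository and file names are whatever you have at hand; bash arithmetic is assumed for the base-2 constants.

```bash
# show the mode column (100644 / 100755 / 040000 / 120000) for the current tree
git ls-tree HEAD

# convert the 16-bit layout to octal: type 1000, unused 000, perm bits
printf '%o\n' "$((2#1000000111101101))"    # prints 100755
printf '%o\n' "$((2#1000000110100100))"    # prints 100644
```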
450,489 | I have a 588Ko file, and I want to extract bytes from 0x7E8D6 to 0x8AD5D.I tried : dd if=file of=result bs=50311 count=1 skip=518358 50311 stands for 0x8AD5D - 0x7E8D6 518358 stands for 0x7E8D6 (from where I want to cut) dd tells me that it can't skip to the specified offset.What can I do? Is there any other utility to do it? | The values shown are the 16-bit file modes as stored by Git , following the layout of POSIX types and modes : 32-bit mode, split into (high to low bits) 4-bit object type valid values in binary are 1000 (regular file), 1010 (symbolic link) and 1110 (gitlink) 3-bit unused 9-bit unix permission. Only 0755 and 0644 are valid for regular files. Symbolic links and gitlinks have value 0 in this field. That file doesn’t mention directories; they are represented using object type 0100. Each digit in the six-digit value is in octal, representing three bits; 16 bits thus need six digits, the first of which only represents one bit: Type|---|Perm bits1000 000 1111011011 0 0 7 5 51000 000 1101001001 0 0 6 4 4 Git doesn’t store arbitrary modes, only a subset of the values are allowed, from the usual POSIX types and modes (in octal, 12 for a symbolic link, 10 for a regular file, 04 for a directory) to which git adds 16 for Git links. The mode is appended, using four octal digits. For files, you’ll only ever see 100755 or 100644 (although 100664 is also technically possible); directories are 040000 (permissions are ignored), symbolic links 120000. The set-user-ID, set-group-ID and sticky bits aren’t supported at all (they would be stored in the unused bits). See also this related answer . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/450489",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/296039/"
]
} |
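For the byte-range extraction asked about above, a few hedged one-liners using the offsets from the question. The first form is portable but slow; the second assumes GNU dd (for the *_bytes flags); the third needs no dd at all.

```bash
# portable but slow: one byte per block
dd if=file of=result bs=1 skip=$((0x7E8D6)) count=$((0x8AD5D - 0x7E8D6))

# GNU dd: interpret skip and count as bytes while keeping a sane block size
dd if=file of=result bs=64K iflag=skip_bytes,count_bytes \
   skip=$((0x7E8D6)) count=$((0x8AD5D - 0x7E8D6))

# or with tail/head (tail -c +N is 1-based)
tail -c +$((0x7E8D6 + 1)) file | head -c $((0x8AD5D - 0x7E8D6)) > result
```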
450,509 | This is from Programming Perl , Fourth Edition . It's about executing a Perl script. Finally, if you are unfortunate enough to be on an ancient Unix system that doesn’t support the magic #! line, or if the path to your interpreter is longer than 32 characters (a built-in limit on many systems), you may be able to work around it like this: #!/bin/sh -- # perl, to stop loopingeval 'exec /usr/bin/perl -S $0 ${1+"$@"}'if 0; Can you explain step by step what's going on here? I'm trying to make it work or encompass, but to no avail. At execution of just the above, I'm getting this: /bin/sh: 0: Illegal option -- | The idea is that the eval command is valid in both shell and in Perl, with the difference that the newline terminates the command in shell, but not in Perl. Instead, Perl reads the following line, which adds the condition if 0 , effectively negating the whole command. (Perl supports a few of such "backwards" structures for shorthand, e.g. you can write next if $_ == 0 instead of if ($_ == 0) { next } , or print for @a instead of for (@a) { print } .) If the script is started by a shell, the shell processes the eval , and replaces it with the Perl interpreter, giving it the script name ( $0 ) and it's arguments ( $@ ) as parameters. Then Perl runs, reads the eval , skips it (because of the if 0 ), and then goes on to execute the rest of the script. That's how it should work. In practice, you get the error because of two things: 1) Perl reads the hashbang line itself, and 2) the way Linux processes the hashbang lines. When Perl runs a script with a hashbang, it doesn't really take it just as a comment. Instead, it interprets any options to Perl given in the hashbang (you can have #!/usr/bin/perl -Wln etc.) but it also checks the interpreter and executes it if the script isn't supposed to be run by Perl! Try e.g. this: $ cat > hello.sh#!/bin/bashecho $BASH_VERSION $ perl hello.sh4.4.12(1)-release That actually runs Bash. So, the comment #perl is there to tell Perl that yes, this is actually supposed to be run by Perl, so that it doesn't start a shell again . However, Linux gives everything after the interpreter name as a single argument, so when you run the script, it runs /bin/sh , with the two arguments -- # perl, to stop looping , and scriptname.pl . The first one starts with a dash, but isn't exactly -- , so both Dash and Bash try to interpret it as options. It's not a valid one, so you get an error, similarly as if you tried to run bash --- , or bash "-- #perl" . On other systems that split the arguments in the hashbang line, -- #perl would give the shell -- , #perl , and it would try to look for a file called #perl . This again wouldn't work. But apparently there are/have been some systems that take # signs as comment markers in the #! line, and/or only pass on the first argument. (See Sven Mascheck's page on the matter .) On those systems, it might work. A somewhat better working one is given in perlrun (adapted): #!/usr/bin/perleval 'exec /usr/bin/perl -S $0 ${1+"$@"}' if $running_under_some_shell;print("Perl $^V\n"); Running that with bash ./script.pl actually runs Perl and prints (e.g.) Perl v5.24.1 . ( $running_under_some_shell is just an undefined variable, which defaults to falsy. if 0 would be cleaner but not as descriptive.) Like it says on your quote, all of this is only required on ancient systems where #! doesn't work properly. In some, it's not supported at all, and asking a shell to run a non-binary always runs it in the shell. 
In others, there's a limit on the path length in the hashbang line, so #!/really/veeeery/long/path/to/perl wouldn't work. Just put the #!/usr/bin/perl hashbang there on any modern system. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/450509",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
450,519 | In Arch Linux, after installing the most recent updates today, I see the following errors in the journal: kernel: FS-Cache: Duplicate cookie detectedkernel: FS-Cache: O-cookie There are about 20 lines in total that are like these. I don't find any info on this via a search. Is this a serious or known problem? My CPU is an Intel Core i7 with an Asus motherboard. I can provide any requested relevant info. However, at this moment, I don't know what I'm looking at, so I am not sure what info is relevant. UPDATE: on a 2nd reboot there are fewer of the messages. Here is the complete output of journalctl -b -p err kernel: FS-Cache: Duplicate cookie detectedkernel: FS-Cache: O-cookie c=000000001e72b895 [p=0000000089da8da7 fl=222 nc=0 na=1]kernel: FS-Cache: O-cookie d=00000000c3a2cbed n=00000000f757123akernel: FS-Cache: O-key=[10] '040002000801c0a805c3'kernel: FS-Cache: N-cookie c=00000000ea48db1d [p=0000000089da8da7 fl=2 nc=0 na=1]kernel: FS-Cache: N-cookie d=00000000c3a2cbed n=000000000f72327ekernel: FS-Cache: N-key=[10] '040002000801c0a805c3' | This appears to be working as intended. The Duplicate cookie detected errors are not indicative of a situation that requires action by the sysadmin. As has been pointed out on the upstream bug report this may well be working as intended https://bugzilla.kernel.org/show_bug.cgi?id=200145#c12 https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ec0328e46d6e5d0f17372eb90ab8e333c2ac7ca9 And: fscache: Maintain a catalogue of allocated cookies Maintain a catalogue of allocated cookies so that cookie collisions can be handled properly. For the moment, this just involves printing a warning and returning a NULL cookie to the caller of fscache_acquire_cookie(), but in future it might make sense to wait for the old cookie to finish being cleaned up. This requires the cookie key to be stored attached to the cookie so that we still have the key available if the netfs relinquishes the cookie. This is done by an earlier patch. The catalogue also renders redundant fscache_netfs_list (used for checking for duplicates), so that can be removed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/450519",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15010/"
]
} |
450,537 | I'm transferring about 9TB across my gigabit LAN. To do so as quickly as possible (i hope) I mounted the destination via NFS on the source and ran rsync across it. Here is my mount options: x.x.x.x:/mnt on /mnt type nfs (rw,noatime,nodiratime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=x.x.x.x,mountvers=3,mountport=56548,mountproto=udp,local_lock=none,addr=x.x.x.x) Here is my rsync command: rsync -avWH --progress ./ /mnt/ looking at nload, what i see, for a single file is speed that spikes up to 900MBps then down to numbers in the KBps range, then back up. Here is a graphic from nload where you can see that the transfer seems to stop, midfile. The files are all typically 5-6GB in size. MTU is 9000; switch is a cisco 3750x with plenty of backplane speed. These are esxi 6.7 guests on 2 different hosts. There are no other guests that contend for network resources. This image is ONE file being sent Basically, I'm hoping there is a setting I have wrong or something I can change to keep the transfer speed somewhat consistent. CPU utilization on the source is approximately 10%, on the dest is approximately 10%. The strange thing is that on the destination, iotop shows 99% i/o (sometimes) from nfsd, the source shows 60-80% IO from rsync. These are 7200RPM WD red drives. w | Unfortunately just about the worst thing you can do is to use rsync across NFS. (Or to any remote filesystem that's mounted into the local system.) This switches off almost all of the efficiency enhancements for which rsync is known. For this much data one of the fastest ways to transfer it between systems may be to dump it across an unencrypted connection without any consideration for what was already on the target system. Once you have at least a partial copy the best option is to use rsync between the two hosts. This allows rsync to run one process on each host to consider and compare differences. (The rsync will completely skip files that have the same size and modification date. For other files the client and server components will perform a rolling checksum to determine which block(s) need still to be transferred.) Fast dump. This example uses no authentication or encryption at all. It does apply compression, though, which you can remove by omitting both -z flags: Run this on the destination machine to start a listening server: cd /path/to/destination && nc -l 50505 | pax -zrv -pe Run this on the source machine to start the sending client: cd /path/to/source && pax -wz . | nc destination_server 50505 Some versions of nc -l may require the port to be specified with a flag, i.e. nc -l -p 50505 . The OpenBSD version on Debian ( nc.openbsd , linked via /etc/alternatives to /bin/nc ) does not. Slower transfer. This example uses rsync over ssh , which provides authentication and encryption. Don't miss off the trailing slash ( / ) on the source path. Omit the -z flag if you don't want compression: rsync -avzP /path/to/source/ destination_server:/path/to/destination You may need to set up SSH certificates to allow login to destination_server as root. Add the -H flag if you need to handle hard links. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/450537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61428/"
]
} |
450,539 | ubuntu 16.04 LTS$ sudo apt install virtualbox$ virtualboxVirtualBox: supR3HardenedMainGetTrustedMain: dlopen("/usr/lib/virtualbox/VirtualBox.so",) failed: /usr/lib/x86_64-linux-gnu/libQt5OpenGL.so.5: undefined symbol: _ZN6QDebug9putStringEPK5QCharm virtualbox is not run. What's wrong and how can i solve this? $ ls -l /usr/lib/x86_64-linux-gnu/libQt5OpenGL.so.5lrwxrwxrwx 1 root root 21 5월 13 2017 /usr/lib/x86_64-linux-gnu/libQt5OpenGL.so.5 -> libQt5OpenGL.so.5.5.1$ apt-cache policy libqt5opengl5libqt5opengl5:설치: 5.5.1+dfsg-16ubuntu7.5후보: 5.5.1+dfsg-16ubuntu7.5버전 테이블:*** 5.5.1+dfsg-16ubuntu7.5 500 500 http://ftp.daum.net/ubuntu xenial-updates/main amd64 Packages 100 /var/lib/dpkg/status 5.5.1+dfsg-16ubuntu7 500 500 http://ftp.daum.net/ubuntu xenial/main amd64 Packages | Unfortunately just about the worst thing you can do is to use rsync across NFS. (Or to any remote filesystem that's mounted into the local system.) This switches off almost all of the efficiency enhancements for which rsync is known. For this much data one of the fastest ways to transfer it between systems may be to dump it across an unencrypted connection without any consideration for what was already on the target system. Once you have at least a partial copy the best option is to use rsync between the two hosts. This allows rsync to run one process on each host to consider and compare differences. (The rsync will completely skip files that have the same size and modification date. For other files the client and server components will perform a rolling checksum to determine which block(s) need still to be transferred.) Fast dump. This example uses no authentication or encryption at all. It does apply compression, though, which you can remove by omitting both -z flags: Run this on the destination machine to start a listening server: cd /path/to/destination && nc -l 50505 | pax -zrv -pe Run this on the source machine to start the sending client: cd /path/to/source && pax -wz . | nc destination_server 50505 Some versions of nc -l may require the port to be specified with a flag, i.e. nc -l -p 50505 . The OpenBSD version on Debian ( nc.openbsd , linked via /etc/alternatives to /bin/nc ) does not. Slower transfer. This example uses rsync over ssh , which provides authentication and encryption. Don't miss off the trailing slash ( / ) on the source path. Omit the -z flag if you don't want compression: rsync -avzP /path/to/source/ destination_server:/path/to/destination You may need to set up SSH certificates to allow login to destination_server as root. Add the -H flag if you need to handle hard links. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/450539",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/279149/"
]
} |
450,564 | I'm trying to make a directory for each file in a directory. mkdir * returns File exists. So I try mkdir *.d and it makes a directory called "*.d". How do I force the wildcard to expand? | A wildcard always expands to existing names. Your command mkdir * fails because the names that * expands to already exists. Your command mkdir *.d "fails" because the *.d does not match any existing names. The pattern is therefore left unexpanded by default 1 and a directory called *.d is created. You may remove this with rmdir '*.d' . To create a directory for each regular file in the current directory, so that the new directories have the same name as the files, but with a .d suffix: for name in ./*; do if [ -f "$name" ]; then # this is a regular file (or a symlink to one), create directory mkdir "$name.d" fidone or, for people that like "one-liners", for n in ./*; do [ -f "$n" ] && mkdir "$n.d"; done In bash , you could also do names=( ./* )mkdir "${names[@]/%/.d}" but this makes no checks for whether the things that the glob expands to are regular files or something else. The initial ./ in the commands above are to protect against filenames that contain an initial dash ( - ) in their filenames. The dash and the characters following it would otherwise be interpreted as options to mkdir . 1 Some shells have a nullglob shell option that causes non-matched shell wildcards to be expanded to an empty string. In bash this is enabled using shopt -s nullglob . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/450564",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260582/"
]
} |
450,572 | I have a file roughly like header_oneparam1param2...data_onedata1data2data3data4...header_twoparam1param2...data_twodata1data2data3data4 I'd like to extract all header blocks with N following lines and all data blocks with M != N following lines, keeping the order in which they appear in the file and discarding everything else. If M == N == 8 I could e.g. do grep -A8 -E "header_|data_" But what if I want to have different context for each pattern? | A wildcard always expands to existing names. Your command mkdir * fails because the names that * expands to already exists. Your command mkdir *.d "fails" because the *.d does not match any existing names. The pattern is therefore left unexpanded by default 1 and a directory called *.d is created. You may remove this with rmdir '*.d' . To create a directory for each regular file in the current directory, so that the new directories have the same name as the files, but with a .d suffix: for name in ./*; do if [ -f "$name" ]; then # this is a regular file (or a symlink to one), create directory mkdir "$name.d" fidone or, for people that like "one-liners", for n in ./*; do [ -f "$n" ] && mkdir "$n.d"; done In bash , you could also do names=( ./* )mkdir "${names[@]/%/.d}" but this makes no checks for whether the things that the glob expands to are regular files or something else. The initial ./ in the commands above are to protect against filenames that contain an initial dash ( - ) in their filenames. The dash and the characters following it would otherwise be interpreted as options to mkdir . 1 Some shells have a nullglob shell option that causes non-matched shell wildcards to be expanded to an empty string. In bash this is enabled using shopt -s nullglob . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/450572",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/289991/"
]
} |
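The differing-context extraction asked about above maps naturally onto a tiny awk state machine. This sketch assumes, purely for illustration, 8 lines after each header_ line and 4 lines after each data_ line, and that the blocks do not overlap.

```bash
awk '/header_/ { n = 8 + 1 }   # the matching line itself plus 8 lines
     /data_/   { n = 4 + 1 }   # the matching line itself plus 4 lines
     n > 0     { print; n-- }' file
```

Adjust the two constants to whatever N and M you need.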
450,647 | If I needed to append a username to the end of line 32 in a file, how would I do so? I can find on Google how to add text to the beginning of a line with sed , but I can't figure out how I would append it to the end, or even the middle, if that were possible. | You can substitute your text for the line end ( $ ) like this: sed -e '32s/$/your_text/' file To insert text in the middle of the line, some information about the line structure would be needed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/450647",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230085/"
]
} |
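Building on the answer above, a hedged sketch of appending a shell variable rather than a literal string. Note the quoting, and that the value must not itself contain / or & ; the user= prefix in the second command is a made-up example of a "middle of the line" insertion point.

```bash
username=alice

# append " alice" to the end of line 32
sed -e "32s/\$/ $username/" file

# insert after a known field on line 32 (hypothetical "user=" prefix)
sed -e "32s/^user=/user=$username/" file
```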
450,696 | I have a list of arguments and three processes: bash_script -> child -> grandchild The list of arguments is aimed at the grandchild. I can modify all three processes. The grandfather script gets one argument for itself. Is the following a proper way to pass the remaining arguments to the grandchild? #!/usr/bin/env bash# This is the grandfather first_arg="$1"shift 1;export MY_ARGS="$@" I "spread" the env variable later, in the child process, as part of the command that calls the grandchild, something like: grandchild --foo "$MY_ARGS" # append $MY_ARGS as arguments to foo | In a script, you should not demote an array to a string. An environment variable and its value is a simple key=value pair where both key and value are strings. Demoting the positional parameters to a simple string (by concatenation) will make it difficult to retain separation between them, and it would be hard to get quoting right when you end up wanting to use them. Instead, pass the positional parameters (command line argument) that you want to pass to the next script on its command line. #!/bin/bashfirst_arg=$1shift# later ..../my_other_script "$@" In the other script: #!/bin/bash# use "$@" herefoo --bar "$@" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/450696",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
450,700 | I want to open a programme in a floating window. I tried exec emacsclient -c ; floating enable , but that made thewindow that was active before float, not the new window. | This is how I did it for my Galculator application: ~/.config/i3/config for_window [class="Galculator" instance="galculator"] floating enable To find out what goes in your class="..." and instance="...", type xprop in terminal, then click on the window you want to float. You will find the info somewhere on the bottom under WM_CLASS(STRING)="instance", "Class". | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/450700",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134690/"
]
} |
450,713 | Two windows, same user, with bash prompts. In window-1 type: $ mkfifo f; exec <f So bash is now attempting to read from file descriptor 0, which is mapped to named pipe f . In window-2 type: $ echo ls > f Now window-1 prints an ls and then the shell dies. Why? Next experiment: open window-1 again with exec <f . In window-2 type: $ exec 3>f$ echo ls >&3 After the first line above, window-1 wakes up and prints a prompt. Why? After the second line above, window-1 prints the ls output and the shell stays alive. Why? In fact, now in window-2, echo ls > f does not close the window-1 shell. The answer must have to do with the existence of the file descriptor 3 from window-2 referencing the named pipe?! | It has to do with the closing of the file descriptor. In your first example, echo writes to its standard output stream which the shell opens to connect it with f , and when it terminates, its descriptor is closed (by the shell). On the receiving end, the shell, which reads input from its standard input stream (connected to f ) reads ls , runs ls and then terminates due to the end-of-file condition on its standard input. The end-of-file condition occurs because all writers to the named pipe (only one in this example) have closed their end of the pipe. In your second example, exec 3>f opens file descriptor 3 for writing to f , then echo writes ls to it. It's the shell that now has the file descriptor opened, not the echo command. The descriptor remains open until you do exec 3>&- . On the receiving end, the shell, which reads input from its standard input stream (connected to f ) reads ls , runs ls and then waits for more input (since the stream is still open). The stream remains open because all writers to it (the shell, via exec 3>f , and echo ) have not closed their end of the pipe ( exec 3>f is still in effect). I have written about echo above as if it was an external command. It's most likely is built into the shell. The effect is the same nonetheless. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/450713",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4896/"
]
} |
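A self-contained sketch of the behaviour described in the answer above; the fifo path is arbitrary and the two halves are meant to be run in two separate terminals.
# sketch: reproduce both cases with a throwaway fifo
mkfifo /tmp/f
# terminal 1 - the reading shell:
#   bash --norc          # use a disposable shell so nothing important exits
#   exec </tmp/f
# terminal 2, case 1 - the only writer closes the pipe, so terminal 1 runs ls and then exits:
#   echo ls > /tmp/f
# terminal 2, case 2 - fd 3 keeps a writer open, so terminal 1 runs ls and stays alive:
#   exec 3>/tmp/f
#   echo ls >&3
#   exec 3>&-            # only now does terminal 1 see end-of-file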
450,714 | Say I have these bash functions in a script: foo(){ my_args_array=("$@") export my_args="${my_args_array[@]}" bar $my_args}bar(){ echo "number of args: $#";}foo a b 'c d e' if I run the above script, I will get: number of args: 5 but what I am looking for is: number of args: 3 so my question is - is there a way to map the value returned by my_args_array[@] , so I can surround each element with single quotes? Or do whatever I need to do to make the env variable string look like the original command line arguments. | It has to do with the closing of the file descriptor. In your first example, echo writes to its standard output stream which the shell opens to connect it with f , and when it terminates, its descriptor is closed (by the shell). On the receiving end, the shell, which reads input from its standard input stream (connected to f ) reads ls , runs ls and then terminates due to the end-of-file condition on its standard input. The end-of-file condition occurs because all writers to the named pipe (only one in this example) have closed their end of the pipe. In your second example, exec 3>f opens file descriptor 3 for writing to f , then echo writes ls to it. It's the shell that now has the file descriptor opened, not the echo command. The descriptor remains open until you do exec 3>&- . On the receiving end, the shell, which reads input from its standard input stream (connected to f ) reads ls , runs ls and then waits for more input (since the stream is still open). The stream remains open because all writers to it (the shell, via exec 3>f , and echo ) have not closed their end of the pipe ( exec 3>f is still in effect). I have written about echo above as if it was an external command. It's most likely is built into the shell. The effect is the same nonetheless. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/450714",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
450,735 | I have a server with only one IP with three VMs running: http-proxy - IP 10.77.77.254 email - IP 10.77.77.101 services - IP 10.77.77.104 On the Host I select with iptables, which port goes to which server. I have set up all email ports like 25, 143,... to the email VM. Port 80 and 443 goes to the http-proxy that decides which domain goes to which VM. I have Php, ruby and rust scripts on both VMS running: the services VM and the email VM. The email VM with postfix and courier works fine as my email server (and more). It can send and receive emails fine. Also scripts on that server like php can send out and receive emails there. There are also some user accounts on the email VM that have their email boxes there. How do I have to set up my other services VM on the same host so scripts on that VM can send out emails too? | In reality the answer for the services VM can be...it depends. If it is applications, you can point them to email:25/TCP. If we are talking about daemons/services, you configure both in the services and http-proxy VMs: in exim, as smarthost email a simple postfix with a relayhost configured to point to the email host. As in, in main.cf : relayhost = email Or, you can configure a lightweight SMTP forwarder as ssmtp , that just forwards emails send by the sendmail compatible API. In ssmtp.conf you define then: hostname=FQDN # full DNS name of your server where `ssmtp` is installedmailhub=email # name or IP address of your central SMTP server sSMTP - Simple SMTP sSMTP is a simple MTA to deliver mail from a computer to a mail hub (SMTP server). sSMTP is simple and lightweight, there are no daemons or anything hogging up CPU; Just sSMTP. Unlike Exim4, sSMTP does not receive mail, expand aliases, or manage a queue. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/450735",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20661/"
]
} |
450,796 | I try to put find inside function and catch an argument passed to this function with the following minimal work example: function DO{ ls $(find . -type f -name "$@" -exec grep -IHl "TODO" {} \;)} But, when I execute DO *.tex , I get “find: paths must precede expression:”. But when I do directly: ls $(find . -type f -name "*.tex" -exec grep -IHl "TODO" {} \;) then I get all TeX files witch contain "TODO". I try many thing in the DO function, such as \"$@\" , '$@' , I change the quotes marks, but the behavior still the same. So, what to do to force find work inside function? | There are a few issues in your code: The *.tex pattern will be expanded when calling the function DO , if it matches any filenames in the current directory. You will have to quote the pattern as either '*.tex' , "*.tex" or \*.tex when calling the function. The ls is not needed. You already have both find and grep that are able to report the pathnames of the found files. -name "$@" only works properly if "$@" contains a single item. It would be better to use -name "$1" . For a solution that allows for multiple patterns, see below. The function may be written DO () { # Allow for multiple patterns to be passed, # construct the appropriate find expression from all passed patterns for pattern do set -- "$@" '-o' '-name' "$pattern" shift done # There's now a -o too many at the start of "$@", remove it shift find . -type f '(' "$@" ')' -exec grep -qF 'TODO' {} ';' -print} Calling this function like DO '*.tex' '*.txt' '*.c' will make it execute find . -type f '(' -name '*.tex' -o -name '*.txt' -o -name '*.c' ')' -exec grep -qF TODO {} ';' -print This would generate a list of pathnames of files with those filename suffixes, if the files contained the string TODO . To use grep rather than find to print the found pathnames, change the -exec ... -print bit to -exec grep -lF 'TODO' {} + . This will be more efficient, especially if you have a large number of filenames matching the given expression(s). In either case, you definitely do not need to use ls . To allow the user to use DO tex txt c your function could be changed into DO () { # Allow for multiple patterns to be passed, # construct the appropriate find expression from all passed patterns for suffix do set -- "$@" '-o' '-name' "*.$suffix" # only this line (and the previous) changed shift done # There's now a -o too many at the start of "$@", remove it shift find . -type f '(' "$@" ')' -exec grep -qF 'TODO' {} ';' -print} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/450796",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56505/"
]
} |
450,810 | Using sed, I'd like to substitute every comma that is outside of double quotes for a pipe. So that this line in .csv file: John,Tonny,"345.3435,23",56th Street Would be converted to: John|Tonny|"345.3435,23"|56th Street Could you help me with the regex for that? | If your sed supports the -E option ( -r in some implementations): sed -Ee :1 -e 's/^(([^",]|"[^"]*")*),/\1|/;t1' < file The :label s/pattern/replacement/t label Is a very common sed idiom. It keeps doing the same substitution in a loop as long as it's successful. Here, we're substituting the leading part of the line made of 0 or more quoted strings or characters other that " and , (captured in \1 ) followed by a , with that \1 capture and a | , so on your sample that means: John,Tonny,"345.3435,23",56th Street -> John|Tonny,"345.3435,23",56th Street John|Tonny,"345.3435,23",56th Street -> John|Tonny|"345.3435,23",56th Street John|Tonny|"345.3435,23",56th Street -> John|Tonny|"345.3435,23"|56th Street and we stop here as the pattern doesn't match any more on that. With perl , you could do it with one substitution with the g flag with: perl -pe 's{("[^"]*"|[^",]+)|,}{$1 // "|"}ge' Here, assuming quotes are balanced in the input, the pattern would match all the input, breaking it up in either: quoted string sequences of characters other than , or " a comma And only when the matched string is a comma (when $1 is not defined in the replacement part), replace it with a | . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/450810",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29487/"
]
} |
450,835 | That's a question I've seen several time for several Linux flavours, so let's try to be exhaustive. What is the method to execute script/command/program before and after user login into its desktop session ? | Introduction To run a program in graphical environement before a user logged in a graphical environment depend on your display manager. A display manager is in charge to provide you a login interface and setup your graphical environment once logged in. the most important are the following: GDM is the GNOME display manager. LightDM is a cross-desktop display manager, can use various front-ends written in any toolkit. LXDM is the LXDE display manager but independent of the LXDE desktop environment. SDDM is a modern display manager for X11 and Wayland aiming to be fast, simple and beautiful. We will review how to setup the execution of command when the display manager popup before any user logged in and how to execute something when someone is finally logged in. If you don't know which one you're running, you can refer to this question : Is there a simple linux command that will tell me what my display manager is? IMPORTANT Before I start, you are going to edit file that except if mention execute command as root . Do not remove existing stuff in those files except if you know what you're doing and be careful in what you put in those file. This could remove your ability to log in. GDM Be careful with GDM, it will run all script as `root`, a different error code than 0 could limit your log in capability and GDM will wait for your script to finish making it irresponsive as long as your command run. For complete explanation [read the documentation][5]. Before Login If you need to run commands before a user logged-in you can edit the file: `/etc/gdm3/Init/Default`. This file is a shell script that will be executed before the display manager is displayed to the user. After Login If you need to execute things once a user has logged in but before its session has been initialize edit the file: `/etc/gdm3/PostLogin/Default`If you want to execute command after the session of session initialization (env, graphical environment, login...) edit the file: `/etc/gdm3/PreSession/Default` LightDM I will talk about lightdm.conf and not about /etc/lightdm.conf.d/*.conf. You can do what you want what is important is to know the options you can use. Be careful with lightDM, you could already have several other script starting you should read precisely your config file before editing it. also the order in which you put those script might influence the way the session load. LightDM works a bit differently from the others you will put options in the main configuration files to indicate script that will be execute. Edit the main lightDM conf file /etc/lightdm/lightdm.conf . You should add first line with [Seat:*] , as indicated here : Later versions of lightdm (15.10 onwards) have replaced the obsolete[SeatDefaults] with [Seat:*] Before Login Add a line `greeter-setup-script=/my/path/to/script` This script will be executed when lightDM shows the login interface. After Login Add a line `session-setup-script=/script/to/start/script` This will run the script as `root` after a user successfully logged in. 
LXDM Before Login If you want to execute command before anyone logged in, you can edit the shell script: `/etc/lxdm/LoginReady` After Login If you want to execute command after someone logged in but as root, you can edit the shell script: `/etc/lxdm/PreLogin` And if you want to run command as the logged in user, you can edit the script: `/etc/lxdm/PostLogin` SDDM Before Login Modify the script located at /usr/share/sddm/scripts/Xsetup . This script is executed before the login screen appears and is mostly used to adjust monitor displays in X11. Not sure what the equivalent would be for wayland After Login sddm will now source the script located at /usr/share/sddm/scripts/Xsession , which in turn will source the user's dotfiles depending on their default shell. For bash shell, it will source ~/.bash_profile (among others), and for zsh, it will source ${ZDOTDIR:-$HOME}/.zprofile (among others). You can take this opportunity to modify those files to also run any other command you need after logging in. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/450835",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53092/"
]
} |
450,844 | I've an image of a bootable 16GB SD card. I've created the image with: cat /dev/sdd | gzip >sdcard.img.gz And I was happy because $ du -h sdcard.img.gz482M sdcard.img.gz 482MB instead of 16GB, yay! Here're the details of the (uncompressed) image: $ du -h sdcard.img15G sdcard.img$ partx -s sdcard.imgNR START END SECTORS SIZE NAME UUID 1 16384 81919 65536 32M 6e1be81b-01 2 81920 3588095 3506176 1.7G 6e1be81b-02 However, now I need to write this image back to the SD card but I don't want to write 14GB of trailing zeros/junk! That'd take ages. How can I create image without copying what's after the last partition? When I already created image of whole SD card, how can I truncate it to not include useless junk? The point is, I don't care about the size the image is taking in the backup, but I care about the size that's transferred back to SD card, because copying to SD card is slow and copying 14GB of useless data is pointless. So compressing the disk image or copying to a sparse aware filesystem as other answers on Internet suggest is not what I'm looking for. | Introduction To run a program in graphical environement before a user logged in a graphical environment depend on your display manager. A display manager is in charge to provide you a login interface and setup your graphical environment once logged in. the most important are the following: GDM is the GNOME display manager. LightDM is a cross-desktop display manager, can use various front-ends written in any toolkit. LXDM is the LXDE display manager but independent of the LXDE desktop environment. SDDM is a modern display manager for X11 and Wayland aiming to be fast, simple and beautiful. We will review how to setup the execution of command when the display manager popup before any user logged in and how to execute something when someone is finally logged in. If you don't know which one you're running, you can refer to this question : Is there a simple linux command that will tell me what my display manager is? IMPORTANT Before I start, you are going to edit file that except if mention execute command as root . Do not remove existing stuff in those files except if you know what you're doing and be careful in what you put in those file. This could remove your ability to log in. GDM Be careful with GDM, it will run all script as `root`, a different error code than 0 could limit your log in capability and GDM will wait for your script to finish making it irresponsive as long as your command run. For complete explanation [read the documentation][5]. Before Login If you need to run commands before a user logged-in you can edit the file: `/etc/gdm3/Init/Default`. This file is a shell script that will be executed before the display manager is displayed to the user. After Login If you need to execute things once a user has logged in but before its session has been initialize edit the file: `/etc/gdm3/PostLogin/Default`If you want to execute command after the session of session initialization (env, graphical environment, login...) edit the file: `/etc/gdm3/PreSession/Default` LightDM I will talk about lightdm.conf and not about /etc/lightdm.conf.d/*.conf. You can do what you want what is important is to know the options you can use. Be careful with lightDM, you could already have several other script starting you should read precisely your config file before editing it. also the order in which you put those script might influence the way the session load. 
LightDM works a bit differently from the others you will put options in the main configuration files to indicate script that will be execute. Edit the main lightDM conf file /etc/lightdm/lightdm.conf . You should add first line with [Seat:*] , as indicated here : Later versions of lightdm (15.10 onwards) have replaced the obsolete[SeatDefaults] with [Seat:*] Before Login Add a line `greeter-setup-script=/my/path/to/script` This script will be executed when lightDM shows the login interface. After Login Add a line `session-setup-script=/script/to/start/script` This will run the script as `root` after a user successfully logged in. LXDM Before Login If you want to execute command before anyone logged in, you can edit the shell script: `/etc/lxdm/LoginReady` After Login If you want to execute command after someone logged in but as root, you can edit the shell script: `/etc/lxdm/PreLogin` And if you want to run command as the logged in user, you can edit the script: `/etc/lxdm/PostLogin` SDDM Before Login Modify the script located at /usr/share/sddm/scripts/Xsetup . This script is executed before the login screen appears and is mostly used to adjust monitor displays in X11. Not sure what the equivalent would be for wayland After Login sddm will now source the script located at /usr/share/sddm/scripts/Xsession , which in turn will source the user's dotfiles depending on their default shell. For bash shell, it will source ~/.bash_profile (among others), and for zsh, it will source ${ZDOTDIR:-$HOME}/.zprofile (among others). You can take this opportunity to modify those files to also run any other command you need after logging in. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/450844",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124070/"
]
} |
450,877 | Brian Kernighan explains in this video the early Bell Labs attraction to small languages/programs being based on memory limitations A big machine would be 64 k-bytes--K, not M or G--and so that meant any individual program could not be very big, and so there was a natural tendency to write small programs, and then the pipe mechanism, basically input output redirection, made it possible to link one program to another. But I don't understand how this could limit memory usage considering the fact that the data has to be stored in RAM to transmit between programs. From Wikipedia : In most Unix-like systems, all processes of a pipeline are started at the same time [emphasis mine] , with their streams appropriately connected, and managed by the scheduler together with all other processes running on the machine. An important aspect of this, setting Unix pipes apart from other pipe implementations, is the concept of buffering: for example a sending program may produce 5000 bytes per second, and a receiving program may only be able to accept 100 bytes per second, but no data is lost. Instead, the output of the sending program is held in the buffer. When the receiving program is ready to read data, then next program in the pipeline reads from the buffer. In Linux, the size of the buffer is 65536 bytes (64KB). An open source third-party filter called bfr is available to provide larger buffers if required. This confuses me even more, as this completely defeats the purpose of small programs (though they would be modular up to a certain scale). The only thing I can think of as a solution to my first question (the memory limitations being problematic dependent upon the size data) would be that large data sets simply weren't computed back then and the real problem pipelines were meant to solve was the amount of memory required by the programs themselves. But given the bolded text in the Wikipedia quote, even this confuses me: as one program is not implemented at a time. All this would make a great deal of sense if temp files were used, but it's my understanding that pipes do not write to disk (unless swap is used). Example: sed 'simplesubstitution' file | sort | uniq > file2 It's clear to me that sed is reading in the file and spitting it out on a line by line basis. But sort , as BK states in the linked video, is a full stop, so the all of the data has to be read into memory (or does it?), then it's passed on to uniq , which (to my mind) would be a one-line-at-a-time program. But between the first and second pipe, all the data has to be in memory, no? | The data doesn’t need to be stored in RAM. Pipes block their writers if the readers aren’t there or can’t keep up; under Linux (and most other implementations, I imagine) there’s some buffering but that’s not required. As mentioned by mtraceur and JdeBP (see the latter’s answer ), early versions of Unix buffered pipes to disk, and this is how they helped limit memory usage: a processing pipeline could be split up into small programs, each of which would process some data, within the limits of the disk buffers. Small programs take less memory, and the use of pipes meant that processing could be serialised: the first program would run, fill its output buffer, be suspended, then the second program would be scheduled, process the buffer, etc. 
Modern systems are orders of magnitude larger than the early Unix systems, and can run many pipes in parallel; but for huge amounts of data you’d still see a similar effect (and variants of this kind of technique are used for “big data” processing). In your example, sed 'simplesubstitution' file | sort | uniq > file2 sed reads data from file as necessary, then writes it as long as sort is ready to read it; if sort isn’t ready, the write blocks. The data does indeed live in memory eventually, but that’s specific to sort , and sort is prepared to deal with any issues (it will use temporary files it the amount of data to sort is too large). You can see the blocking behaviour by running strace seq 1000000 -1 1 | (sleep 120; sort -n) This produces a fair amount of data and pipes it to a process which isn’t ready to read anything for the first two minutes. You’ll see a number of write operations go through, but very quickly seq will stop and wait for the two minutes to elapse, blocked by the kernel (the write system call waits). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/450877",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/288980/"
]
} |
450,891 | I have a text file with following contents $ cat foo.txtsome text [email protected] 07:12 (Asia/Kolkata)again some text over heresome more text againMessagesome text [email protected] 07:12 (Asia/Kolkata)again some text over heresome more text againMessage I would like to get following output $ cat foo.txtsome text [email protected] 8903457923 2018-02-09 07:12 (Asia/Kolkata) again some text over her some more text again Messagesome text [email protected] 8903457923 2018-02-05 07:12 (Asia/Kolkata) again some text over here some more text again Message I guess I can achive this using tr and taking "Message" as a common string.But not sure how to implement this. | If the current line is not "Message", then append the line to the list, joined with OFS; when you see "Message", print the current list (joined by OFS with the current "Message" line): awk '/^Message$/ { print t OFS $0 ORS; t=""; } !/^Message$/ { t=(t ? t OFS $0 : $0) }' < foo.txt The t=(t ? t OFS $0 : $0) part is a ternary operator; it checks to see if t is empty; if it is, then just assign the current line to it; otherwise, append the current value with OFS followed by the current line. Output: some text [email protected] 8903457923 2018-02-09 07:12 (Asia/Kolkata) again some text over here some more text again Messagesome text [email protected] 8903457923 2018-02-05 07:12 (Asia/Kolkata) again some text over here some more text again Message | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/450891",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/277542/"
]
} |
450,944 | Is it possible to format this sample: for i in string1 string2 stringNdo echo $idone to something similar to this: for i in string1string2stringNdo echo $idone EDIT: Sorry for confusion, didn't realize that there was different methods of executing script - sh <scriptname> versus bash <scriptname> and also this thing which I cannot name right now - #!/bin/sh and #!/bin/bash :) | Using arrays in bash can aid readability: this array syntax allows arbitrary whitespace between words. strings=( string1 string2 "string with spaces" stringN)for i in "${strings[@]}"; do echo "$i"done | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/450944",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
450,973 | I mean, if two users have the same name, how does the system know that they're actually different users when it enforces file permissions? This doubt came to my mind while I was considering to rename my home /home/old-arch before reinstalling the system (I have /home on its own partition and I don't format it), so that I could then have a new, pristine /home/arch . I wondered if the new system would give me the old permissions on my files or if it would recognize me as a different arch . | If you force there to exist multiple users with the same username, then there will be multiple entries in /etc/{shadow,passwd} with the same name: $ cat /etc/passwd...a:x:1001:1002::/home/a:/bin/basha:x:1002:1003::/home/b:/bin/bash# cat /etc/shadowa:...:17702:0:99999:7:::a:...:17702:0:99999:7::: If you try to log in as that user, you'll log in as the first match. $ ssh a@<host>Password:$ iduid=1001(a) gid=1002(a) groups=1002(a)$ pwd/home/a There will be no way to log in as the second user with the same name. Note that Linux tracks users by their uid, not by their username. It would be possible, however, to have two different usernames be the same user ID. Consider a different version of /etc/passwd : $ cat /etc/passwd...a:x:1001:1002::/home/a:/bin/bashb:x:1001:1002::/home/b:/bin/bash Note that for both usernames a and b , the third column is 1001 -- that's the uid / user ID. Now, if user a or user b logs in (even with different passwords), they'll both be "user 1001", and show as user a from the OS' perspective. Here too, the first matching entry is the one returned (in most cases): $ ssh a@hostPassword: <a's password>$ iduid=1001(a) gid=1002(a) groups=1002(a)$ ssh b@hostPassword: <b's password>$ iduid=1001(a) gid=1002(a) groups=1002(a) Both a and b are uid 1001 and will have access to the resources available to uid 1001 . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/450973",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121967/"
]
} |
451,007 | If I run this: rsync -r /a/b/c /a/d/e I will get this: /a/d/e/c but I am looking for just: /a/d/e this will not solve this problem: rsync -r /a/b/c/* /a/d/e because the above will skip dot-files (hidden files).How can I solve this one - copying to an existing directory, in this case, folder with name e ? | So close... $ rsync -r /a/b/c/ /a/d/e/ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451007",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
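A small sketch (throwaway paths) showing that the trailing slashes copy the directory contents, dot-files included, without creating an extra level:
mkdir -p /tmp/a/b/c /tmp/a/d/e
touch /tmp/a/b/c/.hidden /tmp/a/b/c/file1
rsync -r /tmp/a/b/c/ /tmp/a/d/e/
ls -A /tmp/a/d/e    # -> .hidden  file1   (no nested "c" directory)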
451,081 | I have this folder structure: foo`----> bar How can I extract the content of bar into foo ? I tried mv -f bar/* . from within foo . -f, --force | dont't ask before overwrite but I get "could not move bar/ajax to foo/ajax because the directory is not empty" How can I solve this? | mv will overwrite files, but it will refuse to overwrite directories . There's no single command that will merge directories and remove the source directories (which is probably what you want with mv ). Even rsync --remove-source-files will leave empty directories. You can use a combination of commands: cp -a dev/. .rm -r dev which copies everything in dev to the current directory and then removes the dev directory. Or: rsync -a --remove-source-files dev/ .find dev -depth -type d -exec rmdir {} \; which uses rsync to move all the files, and then deletes the empty directories left behind. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/451081",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124191/"
]
} |
451,085 | Along side the question " Username is not in the sudoers file. This incident will be reported " that explained the programical aspects of the error and suggested some workarounds, I want to know: what does this error mean? X is not in the sudoers file. This incident will be reported. The former part of the error explains, clearly, the error. But the second part says that "This error will be reported"?! But why? Why the error will be reported and where? To whom? I'm both user and administrator and didn't receive any report :)! | The administrator(s) of a system are likely to want to know when a non-privileged user tries but fails to execute commands using sudo . If this happens, it could be a sign of a curious legitimate user just trying things out, or a hacker trying to do "bad things". Since sudo by itself can not distinguish between these, failed attempts to use sudo are brought to the attention of the admins. Depending on how sudo is configured on your system, any attempt (successful or not) to use sudo will be logged. Successful attempts are logged for audit purposes (to be able to keep track of who did what when), and failed attempts for security. On a fairly vanilla Ubuntu setup that I have, this is logged in /var/log/auth.log . If a user gives the wrong password three times, or if they are not in the sudoers file, an email is sent to root (depending on the configuration of sudo , see below). This is what's meant by "this incident will be reported". The email will have a prominent subject: Subject: *** SECURITY information for thehostname *** The body of the message contains the relevant lines from the logfile, for example thehostname : Jun 22 07:07:44 : nobody : user NOT in sudoers ; TTY=console ; PWD=/some/path ; USER=root ; COMMAND=/bin/ls (Here, the user nobody tried to run ls through sudo as root, but failed since they were not in the sudoers file). No email is sent if (local) mail has not been set up on the system. All of these things are configurable as well, and that local variations in the default configuration may differ between Unix variants. Have a look at the mail_no_user setting (and related mail_* settings) in the sudoers manual (my emphasis below): mail_no_user If set, mail will be sent to the mailto user if the invoking user is not in the sudoers file. This flag is on by default . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/451085",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79615/"
]
} |
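For reference, a sketch of the sudoers settings the answer above refers to; exact defaults vary between distributions, and the file should only ever be edited with visudo.
Defaults mailto = "root"      # recipient of the report mail
Defaults mail_no_user         # mail when the invoking user is not in the sudoers file (on by default)
Defaults mail_badpass         # optionally also mail on incorrect passwords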
451,153 | I have two PGP keys I use to sign and decrypt e-mails in kmail. When doing so, I have to enter the key's password (currently stored in KeePass). Is it possible to save the passwords in my kwallet in a way that automatically unlocks the keys as needed? If so, how can this be achieved? Edit: I have found something similar here , but for SSH keys rather than PGP keys. Maybe that can be adapted? | Unlocking Is it possible to save the passwords in my kwallet in a way thatautomatically unlocks the keys as needed? If so, how can this beachieved? As far as I know this cannot be done in kWallet. Use gpg-agent instead. You can make it's settings as liberal as you like, balanced between security and ease of access. Depending on which distribution you are running, the agent should work out of the box. Perhaps it's even already running in the background? Other key management I never used KeePass , so I don't know its features. However, kGPG might be worth looking at. It is a GUI front end to the system's GnuPG. Specifically, it also allows for low-ish level settings of GPG , including GPG agent. GnuPG Settings Here you can configure which gpg binary and which configuration file and home folder are used. These values are autodetected on first startand should already work. Using the GnuPG agent makes work with GnuPG more comfortable as you donot need to type in your password for every action. It is cached inmemory for a while so any operation that would require a password canimmediately be done. Note that this may allow other people to use yourprivate keys if you leave your session accessible to them. kMail The question also contains the kmail tag, so I will also elaborate on that. You might want to read the PGP configuration section and kmail FAQ, GnuPG section . If you have set up the keys using kGPG above, you don't have to be very worried about all the fat warning and the steps in the top part of the page. Just be informed about them. Integration Integration is actually happening implicit. kGPG just tells GnuPG which keys to create, modify, open and more actions. It lists in its interface what keys are on the system and their trust level etc. But in the background everything is stored in the ~/.gnupg directory in the GnuPG format. (I'm not sure if kGPG invokes GPG or is linked to GPG libraries, but the effect is the same) kMail is just another kind of front end. It invokes the gpg command to access the keys stored in the same directory. For instance for signing, encrypting and decrypting. The gpg-agent is session wide. Meaning, if you unlock a private key in kGPG, it will also be unlocked for kMail and visa versa. Edit I just found kwalletcli , which provides kwallet bindings for pinentry. My distribution does not provide a package, so at this moment I'm unable to try it out. You might have to manually install the package if your distro does not support it as well. Once again, arch wiki comes along and saves the day: Tip: For using /usr/bin/pinentry-kwallet you have to install thekwalletcli package. ~/.gnupg/gpg-agent.conf:#pinentry interface with kdewalletpinentry-program /usr/bin/pinentry-kwallet Alternative If you don't want to or can't install kwalletcli , you might be able to do some scripting using the kwallet-query command. You will have to have knowledge about which wallet to open to obtain the password. See man kwallet-query for more info. However, gpg does not allow password input from STDIN by default, so you will need to configure gpg for it. 
Note on ssh-agent If you get gpg-agent to work properly, you can also use it as an ssh-agent. An example on Kubuntu 22.04 (Jellyfish) of how to use Keybase PGP keys with Git (auth & sign):
# setup Keybase where you're storing PGP keys in the cloud
# https://keybase.io/docs/the_app/install_linux
# Import the public key
keybase pgp export | gpg --import
# Import the private key
keybase pgp export -s | gpg --allow-secret-key-import --import
# show all keys
gpg --list-keys --with-keygrip
gpg --list-secret-keys --with-keygrip
# There should be 3 keys: one main [SC]==PUBKEY_USAGE_SIG&PUBKEY_USAGE_CERT and two subkeys [A]==PUBKEY_USAGE_AUTH && [E]==PUBKEY_USAGE_ENC
# Now you have to edit the main one ([SC] ID) to "trust" it
gpg --edit-key PUT_[SC]_ID_HERE
key 0
trust
5
y
key 1
trust
5
y
key 2
trust
5
y
quit
echo 'enable-ssh-support' >> ~/.gnupg/gpg-agent.conf
echo 'pinentry-program /usr/bin/pinentry-kwallet' >> ~/.gnupg/gpg-agent.conf
gpg -K --with-keygrip
echo 'PUT_[A]_keygrip_ID_HERE' >> ~/.gnupg/sshcontrol
echo 'export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)' >> ~/.bashrc
echo 'gpgconf --launch gpg-agent' >> ~/.bashrc
# setup git configs & set your favorite editor
echo 'export VISUAL="vim"' >> ~/.bashrc
git config --global commit.gpgsign true
gpg --list-secret-keys --keyid-format=long
git config --global user.signingkey [SC]_sec_id
git config --global user.name "stackexchange"
git config --global user.email [email protected]
# reload terminal env & gpg-agent and check everything works
source ~/.bashrc
gpgconf --kill gpg-agent
ssh-add -L
ssh -T [email protected] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451153",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83756/"
]
} |
451,186 | In looking for a good suite of mail/contacts/calendar .etc apps, I tried out KDE's Kontact and ran into the following issue upon starting pretty much any of the bundled applications (kmail, korganiser .etc). The application would display a loading screen like Image 1, and then it would display the error in Image 2, saying The Akonadi personal information management service is not operational. or something to that effect. Image 1: Image 2: Here is my system info as provided by screenfetch : OS: KDE neon 5.12 Kernel: x86_64 Linux 4.13.0-45-generic Uptime: 4h 37m Packages: 2060 Shell: bash 4.3.48 Resolution: 1280x800 DE: KDE 5.47.0 / Plasma 5.13.1 WM: KWin GTK Theme: Breeze [GTK2/3] Icon Theme: breeze Font: Noto Sans Regular CPU: Intel Core2 Duo P8700 @ 2x 2.534GHz [36.0°C] GPU: intel RAM: 1802MiB / 2946MiB I have already tried uninstalling (and reinstalling) the Kontact suite (both with and without the --purge argument to apt and for some reason the "Details" button provided on the error screen appears to do nothing when i click on it. I had already saved the selftest report file to my desktop and then subsequently forgot it was there (redactions mine). | Solution The error logs show that the akonadi verion of the mysql server that these K* applications require is trying to access ~/.local/share/akonadi/db_data/ except db_data doesn't exist, so it throws an error much like touch ~/nonexistent_dir/file.txt would. To solve, simply run the below commands. cd ~/.local/share/akonadi/; mkdir db_data Explaination After doing a lot of digging around on the internet (there was a decent amount of information but most of it was incomplete/unsolved forum threads about similar but not identical issues with Akonadi) I was able to find this general summary of Akonadi from KDE which was an excellent kickstart into my own investigating. After playing around with the commands mentioned in the link, I got the following output (redactions mine): $ akonadictl start$ Connecting to deprecated signal QDBusConnectionInterface::serviceOwnerChanged(QString,QString,QString)mysqld: [ERROR] Could not open required defaults file: /home/[my username]/.config/akonadi/mysqld: [ERROR] Fatal error in defaults handling. Program aborted!org.kde.pim.akonadiserver: database server stopped unexpectedlyorg.kde.pim.akonadiserver: Database process exited unexpectedly during initial connection!org.kde.pim.akonadiserver: executable: "/usr/sbin/mysqld-akonadi"org.kde.pim.akonadiserver: arguments: ("--defaults-file=/home/[my username]/.local/share/akonadi/mysql.conf", "--datadir=/home/[my username]/.local/share/akonadi/db_data/", "--socket=/tmp/akonadi-[my username].UXCgLp/mysql.socket", "--pid-file=/tmp/akonadi-[my username].UXCgLp/mysql.pid")org.kde.pim.akonadiserver: stdout: ""org.kde.pim.akonadiserver: stderr: "mysqld: Can't change dir to '/home/[my username]/.local/share/akonadi/db_data/' (Errcode: 2 - No such file or directory)\n2018-06-21T19:34:18.989616Z 0 [Warning] The syntax '--log_warnings/-W' is deprecated and will be removed in a future release. Please use '--log_error_verbosity' instead.\n2018-06-21T19:34:18.989703Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. 
Please use --explicit_defaults_for_timestamp server option (see documentation for more details).\n2018-06-21T19:34:18.991172Z 0 [Warning] Can't create test file /home/[my username]/.local/share/akonadi/db_data/[my hostname].lower-test\n2018-06-21T19:34:18.992274Z 0 [Note] /usr/sbin/mysqld (mysqld 5.7.22-0ubuntu0.16.04.1) starting as process 11859 ...\n2018-06-21T19:34:19.006500Z 0 [Warning] Can't create test file /home/[my username]/.local/share/akonadi/db_data/[my hostname].lower-test\n2018-06-21T19:34:19.006549Z 0 [Warning] Can't create test file /home/[my username]/.local/share/akonadi/db_data/[my hostname].lower-test\n2018-06-21T19:34:19.006623Z 0 [ERROR] failed to set datadir to /home/[my username]/.local/share/akonadi/db_data/\n2018-06-21T19:34:19.006632Z 0 [ERROR] Aborting\n\n2018-06-21T19:34:19.006658Z 0 [Note] Binlog end\n2018-06-21T19:34:19.006726Z 0 [Note] /usr/sbin/mysqld: Shutdown complete\n\n"org.kde.pim.akonadiserver: exit code: 1org.kde.pim.akonadiserver: process error: "Unknown error"mysqladmin: connect to server at 'localhost' failederror: 'Can't connect to local MySQL server through socket '/tmp/akonadi-[my username].UXCgLp/mysql.socket' (2)'Check that mysqld is running and that the socket: '/tmp/akonadi-[my username].UXCgLp/mysql.socket' exists!org.kde.pim.akonadiserver: Failed to remove runtime connection config fileorg.kde.pim.akonadicontrol: Application 'akonadiserver' exited normally... This produces a couple interesting lines. The problematic one being org.kde.pim.akonadiserver: stderr: "mysqld: Can't change dir to '/home/[my username]/.local/share/akonadi/db_data/' (Errcode: 2 - No such file or directory) . To me this looked like the program was trying to write to a directory to which it didn't have access and was throwing an error, much like touch ~/nonexistent_dir/file.txt would. So I ran cd ~/.local/share/akonadi/; mkdir db_data and retried it. Bam it worked. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451186",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154424/"
]
} |
451,207 | I've created a self-signed certificate for foo.localhost using a Let's Encrypt recommendation using this Makefile: include ../.envconfiguration = csr.cnfcertificate = self-signed.crtkey = self-signed.key.PHONY: allall: $(certificate)$(certificate): $(configuration) openssl req -x509 -out $@ -keyout $(key) -newkey rsa:2048 -nodes -sha256 -subj '/CN=$(HOSTNAME)' -extensions EXT -config $(configuration)$(configuration): printf "[dn]\nCN=$(HOSTNAME)\n[req]\ndistinguished_name = dn\n[EXT]\nsubjectAltName=DNS:$(HOSTNAME)\nkeyUsage=digitalSignature\nextendedKeyUsage=serverAuth" > [email protected]: cleanclean: $(RM) $(configuration) I've then assigned that to a web server. I've verified that the server returns the relevant certificate: $ openssl s_client -showcerts -connect foo.localhost:8443 < /dev/nullCONNECTED(00000003)depth=0 CN = foo.localhostverify error:num=20:unable to get local issuer certificateverify return:1depth=0 CN = foo.localhostverify error:num=21:unable to verify the first certificateverify return:1---Certificate chain 0 s:/CN=foo.localhost i:/CN=foo.localhost-----BEGIN CERTIFICATE-----[…]-----END CERTIFICATE--------Server certificatesubject=/CN=foo.localhostissuer=/CN=foo.localhost---No client certificate CA names sentPeer signing digest: SHA512Server Temp Key: X25519, 253 bits---SSL handshake has read 1330 bytes and written 269 bytesVerification error: unable to verify the first certificate---New, TLSv1.2, Cipher is ECDHE-RSA-AES128-GCM-SHA256Server public key is 2048 bitSecure Renegotiation IS supportedCompression: NONEExpansion: NONENo ALPN negotiatedSSL-Session: Protocol : TLSv1.2 Cipher : ECDHE-RSA-AES128-GCM-SHA256 Session-ID: […] Session-ID-ctx: Master-Key: […] PSK identity: None PSK identity hint: None SRP username: None TLS session ticket: […] Start Time: 1529622990 Timeout : 7200 (sec) Verify return code: 21 (unable to verify the first certificate) Extended master secret: no---DONE How do I make cURL trust it without modifying anything in /etc? --cacert does not work, presumably because there is no CA: $ curl --cacert tls/foo.localhost.crt 'https://foo.localhost:8443/'curl: (60) SSL certificate problem: unable to get local issuer certificateMore details here: https://curl.haxx.se/docs/sslcerts.htmlcurl failed to verify the legitimacy of the server and therefore could notestablish a secure connection to it. To learn more about this situation andhow to fix it, please visit the web page mentioned above. The goal is to enable HTTPS during development: I can't have a completely production-like certificate without a lot of work to enable DNS verification in all development environments. Therefore I have to use a self-signed certificate. I still obviously want to make my development environment as similar as possible to production, so I can't simply ignore any and all certificate issues. curl -k is like catch (Exception e) {} in this case - nothing at all like a browser talking to a web server. In other words, when running curl [something] https://project.local/api/foo I want to be confident that if TLS is configured properly except for having a self-signed certificate the command will succeed and if I have any issues with my TLS configuration except for having a self-signed certificate the command will fail. Using HTTP or --insecure fails the second criterion. | Try -k : curl -k https://yourhost/ It should "accept" self-signed certificates | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/451207",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3645/"
]
} |
451,244 | The command curl "https://tools.keycdn.com/geo.json?host={18.205.6.240}" returns the following JSON document: {"status":"success","description":"Data successfully received.","data":{"geo":{"host":"18.205.6.240","ip":"18.205.6.240","rdns":"ec2-18-205-6-240.compute-1.amazonaws.com","asn":14618,"isp":"AMAZON-AES","country_name":"United States","country_code":"US","region_name":"Virginia","region_code":"VA","city":"Ashburn","postal_code":"20149","continent_name":"North America","continent_code":"NA","latitude":39.0469,"longitude":-77.4903,"metro_code":511,"timezone":"America\/New_York","datetime":"2022-06-17 10:44:39"}}} In this output, I need to extract the country_name . I am not sure how to do that. | $ curl -s 'https://tools.keycdn.com/geo.json?host={18.205.6.240}' | jq -r '.data.geo.country_name'United States The jq expression .data.geo.country_name extracts the given item in the JSON document returned from the endpoint that you access with curl . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451244",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/296612/"
]
} |
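A variation on the answer above in case more than one field is wanted from the same document; this uses jq string interpolation, and the field names simply follow the JSON shown in the question.
curl -s 'https://tools.keycdn.com/geo.json?host={18.205.6.240}' |
  jq -r '.data.geo | "\(.country_name)\t\(.city)\t\(.timezone)"'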
451,253 | Currently I use Fish as my main shell on local and remote hosts. I connect to remote hosts via ssh and sftp. I wanted to open or reuse a remote tmux whenever I connect, automatically, by default; so I added this to my ~/.ssh/config :
Host example.com
RemoteCommand tmux a; or tmux
RequestTTY yes
The problem is that now I cannot connect through sftp , nor can I run a direct command from my local CLI:
➤ ssh example.com ping localhost
Cannot execute command-line and remote command.
➤ sftp example.com
Cannot execute command-line and remote command.
Connection closed
So, my question is: How can I define a default command to be executed when opening a new interactive SSH session, but make it overridable? | Option 1: You can use the Match option (see man ssh_config):
Match Host example.com exec "test $_ = /usr/bin/ssh"
    RemoteCommand tmux a; or tmux
    RequestTTY yes
This will only differentiate between ssh & sftp. Option 2: You create a placeholder config entry for your different command, for example:
Host tmux.example.com
    HostName example.com
    HostKeyAlias example.com
    RemoteCommand tmux a; or tmux
    RequestTTY yes
Afterwards you can still use example.com for your sftp / ping usage. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451253",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50281/"
]
} |
451,263 | For example, when I archive a few gigs of files (using tar), Linux uses quite a lot of disk caching (and some swap) but never cleans it up when the operation has completed. As a result, because there's no free memory Linux will try to swap out something from memory which in its turn creates an additional load on CPU. Of course, I can clean up caches by running echo 1 > /proc/sys/vm/drop_caches but isn't that stupid that I have to do that? Even worse with swap, there's no command to clean up unused swap, I have to disable/enable it completely which I don't think is a safe thing to do at all. UPD: I've run a few tests and found out a few things: The swapped out memory pages during the archive command not related to archived files, it seems it's just a usual swapping out process caused by decreased free memory (because disk caching ate it all) according to swappiness Running swapoff -a is actually safe, meaning swapped pages will move back to memory My current solution is to limit archive command memory usage via cgroups (I run docker container with -m flag). If you don't use docker, there's a project https://github.com/Feh/nocache that might help. The remaining question is when will Linux clean up disk caching and will it at all? If not, is it a good practice to manually clean up disk cache ( echo 1 > /proc/sys/vm/drop_caches )? | Nitpick: the CPU time used by swapping is not usually significant. When the system is slow to respond during swapping, the usual problem is the disk time. (1) Even worse with swap, there's no command to clean up unused swap Disabling and then enabling swap is a valid and safe technique, if you want to trigger and wait for the swapped memory to be read back in. I just want to say "clean up unused swap" is not the right description - it's not something you would ever need to do. The swap usage might look higher than you expected, but that does not mean it is not being used. A page of memory can be stored in both RAM and swap at the same time. There is a good reason for this. When a swap page is read back in, it is not specifically erased, and it is still kept track of. This means if the page needs to be swapped out again, and it has not changed since it was written to swap, the page does not have to be written again. This is also explained at linux-tutorial.info: Memory Management - The Swap Cache If the page in memory is changed or freed, the copy of the page in swap space will be freed automatically. If your system has relatively limited swap space and a lot of RAM, it might need to remove the page from swap space at some point. This happens automatically. (Kernel code: linux-5.0/mm/swap.c:800 ) (2) The remaining question is when will Linux clean up disk caching and will it at all? If not, is it a good practice to manually clean up disk cache (echo 1 > /proc/sys/vm/drop_caches)? Linux cleans up disk cache on demand. Inactive disk cache pages will be evicted when memory is needed. If you change the value of /proc/sys/vm/swappiness , you can alter the bias between reclaiming inactive file cache, and reclaiming inactive "anonymous" (swap-backed) program memory. The default is already biased against swapping. If you want to, you can experiment with tuning down the swappiness value further on your system. 
If you want to think more about what swappiness does, here's an example where it might be desirable to turn it up : Make or force tmpfs to swap before the file cache Since Linux cleans up disk cache on demand, it is not generally recommended to use drop_caches . It is mostly for testing purposes. As per the official documentation : This file is not a means to control the growth of the various kernel caches (inodes, dentries, pagecache, etc...) These objects are automatically reclaimed by the kernel when memory is needed elsewhere on the system. Use of this file can cause performance problems. Since it discards cached objects, it may cost a significant amount of I/O and CPU to recreate the dropped objects, especially if they were under heavy use. Because of this, use outside of a testing or debugging environment is not recommended. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451263",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146366/"
]
} |
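A sketch of how the swappiness value mentioned above is inspected and tuned; the /proc and sysctl paths are standard, but defaults and the preferred persistence location differ between distributions.
cat /proc/sys/vm/swappiness            # current value, commonly 60
sudo sysctl vm.swappiness=10           # bias further against swapping, effective until reboot
echo 'vm.swappiness = 10' | sudo tee /etc/sysctl.d/99-swappiness.conf   # persist (assumed path)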
451,368 | What is the allowed range of characters in Linux network interfaces names? I've searched around but did not find any definition or clarification. Are uppercase characters allowed? Are uppcase and lowercase letters different? | The iproute2 tools do the following checks for a valid interface name : The name must not be empty The name must be less than 16 ( IFNAMSIZ ) characters The name must not contain / or any whitespace characters Using upper-case and lower-case characters are OK and names are case sensitive (e.g. if0 and IF0 are distinct). If you want more flexibility in names, you can set an alias using ip link DEV set alias ... . This will appear in the output of ip link show . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451368",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/288012/"
]
} |
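A small shell sketch that mirrors the iproute2 rules listed in the answer above; it is an illustration, not the kernel's or iproute2's own validation, and the dummy-interface test at the end assumes root and the dummy module.
name='MyIf0'
case $name in
  ''|*[/[:space:]]*) echo "rejected: empty, or contains a slash or whitespace" ;;
  *) if [ "${#name}" -lt 16 ]; then echo "looks valid"; else echo "rejected: 16 characters or more"; fi ;;
esac
# names are case sensitive; to try one out:
# ip link add "$name" type dummy && ip link set "$name" alias "lab uplink"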
451,375 | I have the following files in a directory: GE.AARS_vs_Control16.txt GE.DHX30_vs_Control18.txt GE.DNAJC2_vs_Control18.txt I would like to remove the *_Control<numeric> part and replace it with *_Others such that the files will be renamed as GE.AARS_vs_Others.txt GE.DHX30_vs_Others.txt GE.DNAJC2_vs_Others.txt | for file in /dir/*.txt; do
    mv "$file" "${file%_*}_Others.txt"
done
The ${file%_*} is a form of shell parameter expansion that removes everything from the last _ onwards. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451375",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/190714/"
]
} |
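A dry-run variant of the loop in the answer above can help verify the new names before anything is moved; the directory path is the question's hypothetical /dir.
for file in /dir/*.txt; do
    echo mv "$file" "${file%_*}_Others.txt"
done
# drop the echo once the printed commands look right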
451,413 | For a long time I've been having problems with my Screen Saver not working properly, now it seems it tells me the problem in the XFCE4 Panel's Power Manager Plugin Captain reads /usr/lib/chromium-browser/chromium-browser is currently inhibiting power management It seems specifically that a backgrounded tab of Reuters is doing it https://www.reuters.com/article/us-usa-immigration-children-idUSKBN0EK1VM20140609 How do I disable this most annoying "feature" of chromium-browser When I run xfce4-power-manager --no-daemon --debug , I get in the output, TRACE[xfpm-inhibit.c:405] xfpm_inhibit_inhibit(): Inhibit send application name=/usr/lib/chromium-browser/chromium-browser reason=WebRTC has active PeerConnections sender=:1.628 External links Reddit post of someone having the same problem with chrome using dbus to send inhibit message Google Product Forum post on this problem | This is a verified bug without a patch yet, Launchpad Bug 1600622 One solution that is far from ideal is to block all advertisements with uBlock Origin . Chrome can still inhibit the screensaver, but in this case it seems that whatever is responsible isn't getting through. Specifically the problem seems to be with GoogleTagServices.com which pulls in this script . If that's blocked by uBlock Origin, you're good and you won't have Power Management Inhibition. If that's not blocked, you'll have problems. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451413",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3285/"
]
} |
451,426 | I created user small , added him to group kek and allowed that group to only read files in user home directory. Then I chowned all files to root:kek . However, small still can delete files in his home directory. Commands I ran: useradd -ms /bin/bash smallgroupadd kekusermod -a -G kek smallchown -R root:kek /home/small/*chmod -R g=r /home/small/* Then when I try to remove file: $ ls -ltotal 16-rw-r--r-- 1 root kek 240 Jun 23 06:17 Dockerfile-rw-r--r-- 1 root kek 39 Jun 21 09:17 flag.txt-rw-r--r-- 1 root kek 2336 Jun 22 14:19 server.py-rw-r--r-- 1 root kek 24 Jun 22 08:16 small.py$ rm flag.txt$ ls -ltotal 12-rw-r--r-- 1 root kek 240 Jun 23 06:17 Dockerfile-rw-r--r-- 1 root kek 2336 Jun 22 14:19 server.py-rw-r--r-- 1 root kek 24 Jun 22 08:16 small.py$ whoamismall Why does this happens? | Whether a file can be deleted or not is not a property of the file but of the directory that the file is located in. A user may not delete a file that is located in a directory that they can't write to. Files (and subdirectories) are entries in the directory node. To delete a file, one unlinks it from the directory node and therefore one has to have write permissions to the directory to delete a file in it. The write permissions on a file determines whether one is allowed to change the contents of the file. The write permissions on a directory determines whether one is allowed to change the contents of the directory. Related: Execute vs Read bit. How do directory permissions in Linux work? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451426",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154737/"
]
} |
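A quick way to see this for yourself, as a regular (non-root) user, is to take write permission away from a throwaway directory (a sketch; /tmp/demo is just an example path):

    mkdir -p /tmp/demo && touch /tmp/demo/file
    chmod a-w /tmp/demo      # the directory is no longer writable
    rm /tmp/demo/file        # fails with a permission error
    chmod u+w /tmp/demo      # make the directory writable again
    rm /tmp/demo/file        # now succeeds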
451,428 | I an using Ubuntu 14.04. I have a script that is supposed to run at all times. The easy way would be to use crontab to run another script that checks if script1 is running and if not restart it. I would like to avoid crontab and if possible any su command (I would like to run this without any additional settings as root). Also as root I have a script that cleans (kills) all processes once a day for the user I plan on running script1 from. I want to restart script1 after the cleanup and in between this interval if script1 stops. | Whether a file can be deleted or not is not a property of the file but of the directory that the file is located in. A user may not delete a file that is located in a directory that they can't write to. Files (and subdirectories) are entries in the directory node. To delete a file, one unlinks it from the directory node and therefore one has to have write permissions to the directory to delete a file in it. The write permissions on a file determines whether one is allowed to change the contents of the file. The write permissions on a directory determines whether one is allowed to change the contents of the directory. Related: Execute vs Read bit. How do directory permissions in Linux work? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451428",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/296757/"
]
} |
451,437 | By default, tmux passes over edge of a pane. For example, suppose there are two panes, pane 1 and pane 2 . Suppose you're at pane 1 and you do Ctrl+b → , you're at the pane 2 . If you again do Ctrl+b → , you'll be again at pane 1 . How can I disable that feature so, when I'm navigating from the last pane, I don't go anywhere? | This is a bit of a hack but might be good enough for you. From version 2.3 you can find the x and y co-ordinate of each pane's borders. For example, display -p #{pane_right} for a pane at the right-hand edge of an 80 column terminal would be 79. If you give the command to move right to the next pane, and the new pane's pane_right is, for example, 39, then you have moved left, so you will want to move back to the previous pane with select-pane -l . You can run most tmux commands from a shell script, so create the following file mytmux in your PATH and make it executable ( chmod +x mytmux ): #!/bin/bash# https://unix.stackexchange.com/a/451473/119298restrict(){ case $1 in U) d=-U p=pane_top cmp=-gt ;; D) d=-D p=pane_bottom cmp=-lt ;; L) d=-L p=pane_left cmp=-gt ;; R) d=-R p=pane_right cmp=-lt ;; *) exit 1 ;; esac old=$(tmux display -p "#{$p}") tmux select-pane "$d" new=$(tmux display -p "#{$p}") [ "$new" "$cmp" "$old" ] && tmux select-pane -l exit 0}case $1 in-restrict)shift restrict "${1?direction}" ;;esac then setup the following bindings in your ~/.tmux.conf : bind-key -r -T prefix Up run-shell 'mytmux -restrict U'bind-key -r -T prefix Down run-shell 'mytmux -restrict D'bind-key -r -T prefix Left run-shell 'mytmux -restrict L'bind-key -r -T prefix Right run-shell 'mytmux -restrict R' You will need to extend this if you want to handle multiple sessions, for example. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451437",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/282376/"
]
} |
451,467 | I am currently using a theme based on Green Laguna (actually, Green Laguna itself, but MATE Desktop Environment 1.18.0 thinks something was customized). Caja and many other apps follow the theme style: But Gedit and some other apps do not: What is required to apply theme to all apps? | This is a bit of a hack but might be good enough for you. From version 2.3 you can find the x and y co-ordinate of each pane's borders. For example, display -p #{pane_right} for a pane at the right-hand edge of an 80 column terminal would be 79. If you give the command to move right to the next pane, and the new pane's pane_right is, for example, 39, then you have moved left, so you will want to move back to the previous pane with select-pane -l . You can run most tmux commands from a shell script, so create the following file mytmux in your PATH and make it executable ( chmod +x mytmux ): #!/bin/bash# https://unix.stackexchange.com/a/451473/119298restrict(){ case $1 in U) d=-U p=pane_top cmp=-gt ;; D) d=-D p=pane_bottom cmp=-lt ;; L) d=-L p=pane_left cmp=-gt ;; R) d=-R p=pane_right cmp=-lt ;; *) exit 1 ;; esac old=$(tmux display -p "#{$p}") tmux select-pane "$d" new=$(tmux display -p "#{$p}") [ "$new" "$cmp" "$old" ] && tmux select-pane -l exit 0}case $1 in-restrict)shift restrict "${1?direction}" ;;esac then setup the following bindings in your ~/.tmux.conf : bind-key -r -T prefix Up run-shell 'mytmux -restrict U'bind-key -r -T prefix Down run-shell 'mytmux -restrict D'bind-key -r -T prefix Left run-shell 'mytmux -restrict L'bind-key -r -T prefix Right run-shell 'mytmux -restrict R' You will need to extend this if you want to handle multiple sessions, for example. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451467",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187772/"
]
} |
451,478 | I'm experimenting with different union/overlay filesystem types. I've found unionfs-fuse package in Ubuntu which allowed me to use unionfs mount command as non-root user. But it seems aufs , which is created to provided similar options as unionfs, cannot be used as non-root user. I need to give sudo password for aufs mount. Can I use aufs without giving root password? | This is a bit of a hack but might be good enough for you. From version 2.3 you can find the x and y co-ordinate of each pane's borders. For example, display -p #{pane_right} for a pane at the right-hand edge of an 80 column terminal would be 79. If you give the command to move right to the next pane, and the new pane's pane_right is, for example, 39, then you have moved left, so you will want to move back to the previous pane with select-pane -l . You can run most tmux commands from a shell script, so create the following file mytmux in your PATH and make it executable ( chmod +x mytmux ): #!/bin/bash# https://unix.stackexchange.com/a/451473/119298restrict(){ case $1 in U) d=-U p=pane_top cmp=-gt ;; D) d=-D p=pane_bottom cmp=-lt ;; L) d=-L p=pane_left cmp=-gt ;; R) d=-R p=pane_right cmp=-lt ;; *) exit 1 ;; esac old=$(tmux display -p "#{$p}") tmux select-pane "$d" new=$(tmux display -p "#{$p}") [ "$new" "$cmp" "$old" ] && tmux select-pane -l exit 0}case $1 in-restrict)shift restrict "${1?direction}" ;;esac then setup the following bindings in your ~/.tmux.conf : bind-key -r -T prefix Up run-shell 'mytmux -restrict U'bind-key -r -T prefix Down run-shell 'mytmux -restrict D'bind-key -r -T prefix Left run-shell 'mytmux -restrict L'bind-key -r -T prefix Right run-shell 'mytmux -restrict R' You will need to extend this if you want to handle multiple sessions, for example. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451478",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19288/"
]
} |
451,479 | input json: { "id": "3885", "login": "050111", "lastLoginTime": 1529730115000, "lastLoginFrom": "192.168.66.230"}{ "id": "3898", "login": "050112", "lastLoginTime": null, "lastLoginFrom": null} I want to get output for login, lastLoginTime and lastLoginFrom in tab-delimited format: 050111 1529730115000 192.168.66.230050112 - - with the jq filter below there are no "null" values in the output that I could replace with "-" $ jq -r '.|[.login, .lastLoginTime, .lastLoginFrom]|@tsv' test_json050111 1529730115000 192.168.66.230050112 Is there any other way to get "-" printed for such null values? | Use the alternative operator // , so : $ jq -r '.|[.login, .lastLoginTime // "-" , .lastLoginFrom // "-" ]|@tsv' test_json050111 1529730115000 192.168.66.230050112 - - | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/451479",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/274554/"
]
} |
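If more of the fields can be null, the same idea can be applied to every column at once (a sketch using the same test_json input):

    jq -r '[.login, .lastLoginTime, .lastLoginFrom] | map(. // "-") | @tsv' test_json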
451,486 | Context for the question: According to POSIX specs , ARG_MAX is maximum length of command-line arguments to exec() family of functions. Which lead me to believe that's actual number of arguments, however that clearly didn't work: $ ulimit -s8192$ touch {1..18000}.jpg$ rm *.jpg$ Clearly, this works fine, despite being in length over 8192 items. According to D.W.'s answer , the 8192 is supposedly size in kB. So clearly the previous assumption was wrong. This is where the actual question comes in: How do I figure out the amount of items that actually will get above 8192 kB limit ? In other words, what sort of computation I have to perform to ensure that *.jpg type of glob will result into Argument list too long error ? Please note, this isn't a duplicate of What defines the maximum size of single command argument . I know about getconf ARG_MAX and ulimit -s values, that's not my question. I need to know how to generate enough arguments in size that will be above the limit . In other words, I need to find a way to get the error, not avoid it. | Using getconf ARG_MAX to generate a long list of x and calling an external utility with that as its argument would generate an "Argument list too long" error: $ /bin/echo $( perl -e 'print "x" x $ARGV[0]' "$(getconf ARG_MAX)" )/bin/sh: /bin/echo: Argument list too long The environment and the length of the string /bin/echo will be included in what makes the error occur, so we can try to find the biggest possible number by subtracting these: $ envPATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/X11R6/bin:/usr/local/bin (I started this shell with env -i sh , so there's only the PATH variable in the environment) $ /bin/echo $( perl -e 'print "x" x ($ARGV[0] - length($ENV{"PATH"}) - length("/bin/echo"))' "$(getconf ARG_MAX)" )sh: /bin/echo: Argument list too long Still too long. By how much? i=0while ! /bin/echo $( perl -e 'print "x" x ($ARGV[0] - length($ENV{"PATH"}) - length("/bin/echo") - $ARGV[1])' "$(getconf ARG_MAX)" "$i" )do i=$(( i + 1 ))done This loop exits for i=8 . So there's four bytes that I can't immediately account for (four of the eight must be for the name of the PATH environment variable). These are the null terminators for the four strings PATH , the value of PATH , /bin/echo and the long string of x characters. Note that each argument is null terminated, so the more arguments you have to the command, the shorter the combined length of them can be. Also, just to show the effect of a big environment: $ export BIG=$( perl -e 'print "x" x $ARGV[0]' "$( getconf ARG_MAX )" )$ /bin/echo hellosh: /bin/echo: Argument list too long$ /bin/echosh: /bin/echo: Argument list too long | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451486",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85039/"
]
} |
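An alternative formulation that stays in -r mode and handles a null grp by using the error-suppressing []? form (a sketch against the same test_json; note that the null case prints an empty group list rather than -):

    jq -r '.[] | "name: \(.name)", "groups: \([.grp[]?.name] | join(" "))"' test_json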
451,496 | json input: [ { "name": "cust1", "grp": [ { "id": "46", "name": "BA2" }, { "id": "36", "name": "GA1" }, { "id": "47", "name": "NA1" }, { "id": "37", "name": "TR3" }, { "id": "38", "name": "TS1" } ] }] expected, on output are two lines: name: cust1groups: BA2 GA1 NA1 TR3 TS1 I was trying to build filter without success.. $ jq -r '.[]|"name:", .name, "groups:", (.grp[]|[.name]|@tsv)' test_jsonname:cust1groups:BA2GA1NA1TR3TS1 Update: the solution provided below works fine, but I did not predict case when no groups exists: [ { "name": "cust1", "grp": null }] in such case, the solution provided returns error: $ jq -jr '.[]|"name:", " ",.name, "\n","groups:", (.grp[]|" ",.name),"\n"' test_json2name: cust1jq: error (at test_json2:6): Cannot iterate over null (null) any workaround appreciated. | Use the "join", -j $ jq -jr '.[]|"name:", " ",.name, "\n","groups:", (.grp[]|" ",.name),"\n"' test_jsonname: cust1groups: BA2 GA1 NA1 TR3 TS1 And with a place holder $ jq -jr '.[]|"name:", " ",.name, "\n","groups:", (.grp//[{"name":"-"}]|.[]|" ",.name),"\n"' test_jsonname: cust1groups: - | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/451496",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/274554/"
]
} |
451,554 | In a stat format (at least the one I get from bash on Linux) one can use format modifiers: for instance %010s will force a size field to be at least 10 characters, padded to the left with zeroes (btw is this documented somewhere?) Is there an equivalent trick to restrict the length of a field? I want to drop the decimal part of the second in the %xyz formats. Or will I have to postprocess the output with sed/awk? | Using GNU tools, date -r file +'%F %T %z' This would get the timestamp of last modification of the given file (no subsecond resolution), and use date to reformat this into the same format as stat -c %y file would produce. Example: $ stat -c '%y' file2021-03-17 08:53:39.540802643 +0100 $ date -r file +'%F %T %z'2021-03-17 08:53:39 +0100 One can use printf -like formatting for the %y format specification directly, but not to modify a piece of the string in the middle: $ stat -c '%.19y' file2021-03-17 08:53:39 This truncates the string after 19 characters, which removes the subsecond data, but the time zone info is also left out. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451554",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171560/"
]
} |
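If you do want to keep the timezone and only drop the fractional seconds, a small post-processing step also works (a sketch with GNU sed):

    stat -c '%y' file | sed -E 's/\.[0-9]+ / /'
    # 2021-03-17 08:53:39 +0100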
451,561 | I have been installing ubuntu in VMs (using KVM) for development for quite some time now and I have been facing a problem where the boot loader never seemed to install and just fail. So, either I would install the boot loader manually or just manually partition the disk while installing. What's the best fix for a smoother install? | Using GNU tools, date -r file +'%F %T %z' This would get the timestamp of last modification of the given file (no subsecond resolution), and use date to reformat this into the same format as stat -c %y file would produce. Example: $ stat -c '%y' file2021-03-17 08:53:39.540802643 +0100 $ date -r file +'%F %T %z'2021-03-17 08:53:39 +0100 One can use printf -like formatting for the %y format specification directly, but not to modify a piece of the string in the middle: $ stat -c '%.19y' file2021-03-17 08:53:39 This truncates the string after 19 characters, which removes the subsecond data, but the time zone info is also left out. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451561",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169528/"
]
} |
451,579 | I have thousands of unl files named something like this cbs_cdr_vou_20180624_603_126_239457.unl . I wanted to print all the lines from those files by using following command. but its giving me only file names. I don't need file names, I just need contents from those files. find -type f -name 'cbs_cdr_vou_20180615*.unl' > /home/fifa/cbs/test.txt Current Output: ./cbs_cdr_vou_20180615_603_129_152023.unl./cbs_cdr_vou_20180615_603_128_219001.unl./cbs_cdr_vou_20180615_602_113_215712.unl./cbs_cdr_vou_20180615_602_120_160466.unl./cbs_cdr_vou_20180615_603_125_174428.unl./cbs_cdr_vou_20180615_601_101_152369.unl./cbs_cdr_vou_20180615_603_133_193306.unl Expected output: 8801865252020|200200|20180613100325|;8801837463298|200200|20180613111209|;8801845136955|200200|20180613133708|;8801845205889|200200|20180613141140|;8801837612072|200200|20180613141525|;8801877103875|200200|20180613183008|;8801877167964|200200|20180613191607|;8801845437651|200200|20180613200415|;8801845437651|200200|20180613221625|;8801839460670|200200|20180613235936|; Please note that, for cat command I'm getting error like -bash: /bin/logger: Argument list too long that's why wanted to use find instead of cat command. | The find utility deals with pathnames. If no specific action is mentioned in the find command for the found pathnames, the default action is to output them. You may perform an action on the found pathnames, such as running cat , by adding -exec to the find command: find . -type f -name 'cbs_cdr_vou_20180615*.unl' -exec cat {} + >/home/fifa/cbs/test.txt This would find all regular files in or under the current directory, whose names match the given pattern. For as large batches of these as possible, cat would be called to concatenate the contents of the files. The output would go to /home/fifa/cbs/test.txt . Related: Understanding the -exec option of `find` | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/451579",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131420/"
]
} |
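An equivalent pipeline using xargs, which likewise avoids the "Argument list too long" problem (a sketch; -print0 and -0 need GNU or BSD find/xargs):

    find . -type f -name 'cbs_cdr_vou_20180615*.unl' -print0 |
      xargs -0 cat > /home/fifa/cbs/test.txt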
451,599 | Example task: if a line contains foo , replace it with bar , otherwise append baz to the line. sed -e s/foo/bar/ -e s/$/baz/ doesn't work, as the second command gets executed whether or not the first one matches. Is there a way to tell sed to go to the next line after a match? | You can use the t command without a label to start next cycle on successful substitution $ cat ip.txt a foo 123xyzfore1foo$ sed -e 's/foo/bar/' -e t -e 's/$/baz/' ip.txta bar 123xyzbazforebaz1bar From manual: t label (test) Branch to label only if there has been a successful substitution since the last input line was read or conditional branch was taken. The label may be omitted, in which case the next cycle is started. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/451599",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4830/"
]
} |
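Not sed, but the same "replace, otherwise append" logic is also easy to express in awk, if that reads more naturally (a sketch against the same ip.txt):

    awk '{ if (!sub(/foo/, "bar")) $0 = $0 "baz" } 1' ip.txt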
451,618 | I am looking at logs files that have timestamps and I just want the entries from a specific time period and after. 2018-06-17T13:43:09 For example I want all entries from 18:00:00 on 2018-06-23 and onward. sed -n '2018/-06/-23T18,$p' board3sed: -e expression #1, char 5: unknown command: `/' | You can use the t command without a label to start next cycle on successful substitution $ cat ip.txt a foo 123xyzfore1foo$ sed -e 's/foo/bar/' -e t -e 's/$/baz/' ip.txta bar 123xyzbazforebaz1bar From manual: t label (test) Branch to label only if there has been a successful substitution since the last input line was read or conditional branch was taken. The label may be omitted, in which case the next cycle is started. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/451618",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122887/"
]
} |
451,642 | Say I have a bash function, which is supposed to remove all arguments that start with "-" until it gets to an argument that does not start with "-". gmx(){ local options=( ); while [ "${1:0:1}" == "-" ]; do options+=("${1}") shift 1; done echo "first legit arg: $1" "$@" # omg will be executed here, like `omg --rolo`}gmx -a -f -c omg --rolo this seems to work, but I am wondering if this is a good generic solution to always get 'omg' to be the first "legit" argument. Are there any edge cases that might fail? In other words -a, -f, -c are all arguments to gmx. Whereas omg and everything that follows will be run in a child process. | The official and best way is to use the getopts builtin to parse the command line options. See the man page for more information. One note is important: bash does not support long options. If you would like your scripts to deal with long options, you have two shells that support them: ksh93 and bosh . Both shells support long options the way they are supported by the getopt(3) function in libc on Solaris. See the bosh man page (currently starting at page 43: http://schilytools.sourceforge.net/man/man1/bosh.1.html ) getopts "f:(file)(input-file)o:(output-file)" OPT supports e.g. an option -f with an argument and that option has a long option alias --file and a second alias for this option --input-file ksh93 supports this as well, even though it is not documented in the ksh93 man page. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451642",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
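For the simple flags from the question, a bash getopts version of gmx might look like this (a sketch; it assumes only the boolean options -a, -f and -c):

    gmx() {
      local opt OPTIND=1
      local -a options=()
      while getopts 'afc' opt; do
        case $opt in
          a|f|c) options+=("-$opt") ;;   # collect recognised flags
          *)     return 2 ;;             # unknown option
        esac
      done
      shift "$((OPTIND - 1))"
      echo "first legit arg: $1"
      "$@"
    }
    gmx -a -f -c omg --rolo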
451,686 | One of the big reasons why find makes things awesome if I do this: find -path *ncf or this: find -path ncf* I get no results, but if I do this: find -path *ncf.js I get a matching file. Why is that? Is it a peculiarity of find, or something more grandiose? Does anyone have the home address of the guy who wrote find ? | There are two issues here. You need to escape the * , otherwise it will be processed by the shell (matching files in the current directory, if any): find -path \*ncf.js or find -path '*ncf.js' The behaviour you’re seeing comes from the fact that the globbing expression matches against the file path in its entirety, including the extension. (Use -name to match the filename, which still includes the extension.) This isn’t specific to find , try it with ls in the directory containing your files. Note that you should get into the habit of specifying the start directory, even when it’s . ; not all versions of find use that as the default. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451686",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
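A few contrasting invocations that illustrate the "whole path, extension included" behaviour (a sketch, run from the directory containing the file):

    find . -path '*ncf*'     # matches 'ncf' anywhere in the path
    find . -name '*ncf*'     # matches 'ncf' anywhere in the file name
    find . -name '*ncf'      # no match when the name actually ends in .js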
451,705 | Unfortunately timedatectl set-timezone doesn't update /etc/timezone . How do I get the current timezone as Region/City , eg, given: % timedatectl | grep zone Time zone: Asia/Kuala_Lumpur (+08, +0800) I can get the last part: % date +"%Z %z"+08 +0800 How do I get the Asia/Kuala_Lumpur part without getting all awk -ward? I'm on Linux, but is there also a POSIX way? | In this comment by Stéphane Chazelas , he said: timedatectl is a systemd thing that queries timedated over dbus and timedated derives the name of the timezone (like Europe/London ) by doing a readlink() on /etc/localtime . If /etc/localtime is not a symlink, then that name cannot be derived as those timezone definition files don't contain that information. Based on this and tonioc's comment , I put together the following: #!/bin/bashset -euo pipefailif filename=$(readlink /etc/localtime); then # /etc/localtime is a symlink as expected timezone=${filename#*zoneinfo/} if [[ $timezone = "$filename" || ! $timezone =~ ^[^/]+/[^/]+$ ]]; then # not pointing to expected location or not Region/City >&2 echo "$filename points to an unexpected location" exit 1 fi echo "$timezone"else # compare files by contents # https://stackoverflow.com/questions/12521114/getting-the-canonical-time-zone-name-in-shell-script#comment88637393_12523283 find /usr/share/zoneinfo -type f ! -regex ".*/Etc/.*" -exec \ cmp -s {} /etc/localtime \; -print | sed -e 's@.*/zoneinfo/@@' | head -n1fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451705",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
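Two shorter alternatives, if your environment allows them (sketches: timedatectl show only exists on newer systemd versions, and the readlink approach assumes /etc/localtime is a symlink into /usr/share/zoneinfo):

    timedatectl show --property=Timezone --value
    readlink /etc/localtime | sed 's|.*/zoneinfo/||'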
451,778 | I accidentially destroyed my cd command. I tried to automatically execute ls after cd is called. I found a post saying that I have to execute alias cd='/bin/cd && /bin/ls' , but now I get -bash: /bin/cd: No such file or directory and can't change directoy anymore. | Your system (like many Unix systems) does not have an external cd command (at least not at that path). Even if it had one, the ls would give you the directory listing of the original directory. An external command can never change directory for the calling process (your shell) 1 . Remove the alias from the environment with unalias cd (and also remove its definition from any shell initialization files that you may have added it to). With a shell function, you can get it to work as cd ordinarily does, with an extra invocation of ls at the end if the cd succeeded: cd () { command cd "$@" && ls -lah} or, cd () { command cd "$@" && ls -lah; } This would call the cd command built into your shell with the same command line arguments that you gave the function. If the change of directory was successful, the ls would run. The command command stops the shell from executing the function recursively. The function definition (as written above) would go into your shell's startup file. With bash , this might be ~/.bashrc . The function definition would then be active in the next new interactive shell session . If you want it to be active now , then execute the function definition as-is at the interactive shell prompt, which will define it within your current interactive session. 1 On systems where cd is available as an external command, this command also does not change directory for the calling process. The only real use for such a command is to provide POSIX compliance and for acting as a test of whether changing directory to a particular one would be possible . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/451778",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124191/"
]
} |
451,788 | Short question: How do I connect to a local unix socket (~/test.sock) via ssh? This sockets forwards to an actual ssh server. The obvious does not work and I can't find any documentation: public> ssh /home/username/test.sock"ssh: Could not resolve hostname: /home/username/test.sock: Name of service not known" Long Question: The Problem I try to solve, is to connect from my ( public ) university server to my ( local ) PC, which is behind NAT and not visible to public. The canonical solution is to create a ssh proxy/tunnel to local on public : local> ssh -NR 2222:localhost:22 public But this is not possible, as the administration prohibits creating ports.So I have thought about using UNIX socket instead, which works: local> ssh -NR /home/username/test.sock:localhost:22 public But now, how can I connect to it with ssh? | You should be able to do utilizing socat and ProxyCommand option for ssh. ProxyCommand configures ssh client to use proxy process for communicating with your server. socat establishes two-way communication between STDIN/STDOUT ( socat and ssh client) and your UNIX socket . ssh -o "ProxyCommand socat - UNIX-CLIENT:/home/username/test.sock" foo | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/451788",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94837/"
]
} |
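Spelled out for the socket path from the question, plus a reusable ~/.ssh/config entry (a sketch; the host alias and user name are made up):

    ssh -o 'ProxyCommand socat - UNIX-CLIENT:/home/username/test.sock' username@localhost

    # ~/.ssh/config
    Host local-pc
        User username
        ProxyCommand socat - UNIX-CLIENT:/home/username/test.sock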