[Dataset viewer header: source_id (int64, 1 to 74.7M), question (string, 0 to 40.2k chars), response (string, 0 to 111k chars), metadata (dict).]
573,047
Say I have a variable with a path release/linux/x86, and want the relative path from a different directory (i.e. ../../.. for the current working directory); how would I get that in a shell command (or possibly GNU Make)? Soft link support not required. This question has been heavily modified based on the accepted answer for improved terminology.
The purpose of this isn't clear, but it does exactly what was asked, using GNU realpath:

realpath -m --relative-to=release/linux/x86 .
../../..

realpath -m --relative-to=release///./linux/./x86// .
../../..
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/573047", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157668/" ] }
573,209
I am new to sed and am having some trouble making it work. What I want is this: abc.ztx.com. A 132.123.12.44 ---> abc.ztx.com I used the below pattern, but it doesn't seem to work: echo "abc.ztx.com. A 132.123.12.44" | sed 's/\.\s.+//g' I verified the regex using regex101.com, and the pattern \.\s.+ matches the part . A 132.123.12.44 perfectly. Why is it not working with sed? Appreciate your help. Thank you.
sed uses POSIX basic regular expressions (BRE) by default. \s is a PCRE (Perl-compatible regular expression) escape which is equivalent to the BRE [[:blank:]] (I think, matching spaces and tabs, or possibly [[:space:]] which matches a larger set of whitespace characters). The + is a POSIX extended regular expression (ERE) modifier, which is equivalent to \{1,\} as a BRE. So try sed 's/\.[[:blank:]].*//' instead. You may replace [[:blank:]] by a space character if you don't need to match tabs: sed 's/\. .*//' Note that there is no need to do the substitution with the g flag, as there will only ever be a single match. Also, the .+ that you use could just be replaced by .* instead of .\{1,\} as we don't care whether there are any further characters at all (just delete all of them). Related: Why does my regular expression work in X but not in Y?
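A side note, not part of the original answer: GNU sed specifically does understand \s as an extension, and -E enables EREs so that + works too, which explains why the original pattern may "almost" work elsewhere:

# GNU sed only: -E selects ERE (so + works) and \s is a GNU extension
echo "abc.ztx.com. A 132.123.12.44" | sed -E 's/\.\s.+//'
# abc.ztx.com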
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/573209", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/400456/" ] }
573,235
I'm looking for a way to take a text file and put each line one-at-a-time centered on screen with a certain character width. Sort of like a bare-bones slide show, e.g. seeing the first line until the user presses a key, and then seeing the next line, until all the lines have been viewed. I suspect there is a basic way to do this in bash, but I haven't found an answer yet.
Something like this:

#!/usr/bin/env bash

if [ ! "$#" -eq 1 ]
then
    printf "Usage: %s <file>\n" "$0" >&2
    exit 1
fi

file="$1"

display_center()
{
    clear
    columns="$(tput cols)"
    lines="$(tput lines)"
    down=$((lines / 2))
    printf '\n%.0s' $(seq 1 $down)
    printf "%*s\n" $(( (${#1} + columns) / 2)) "$1"
}

while IFS= read -r line
do
    display_center "$line"
    read -n 1 -s -r </dev/tty
done < "$file"

Name it centered.sh and use it like this: ./centered.sh centered.sh It will print each line from the given file. Press any key to show the next line. Notice that it's not well tested yet, so use with caution, and that it'll always print lines starting from the center of the screen, so it will make long lines appear more at the bottom. The first line, #!/usr/bin/env bash, is a shebang. Additionally, I use env for its features. I tried to avoid Bash and write this script in POSIX shell, but I gave up because read in particular was very problematic. You should keep in mind that even though it may seem that Bash is ubiquitous, it isn't present everywhere by default, for example on BSD or small embedded systems with Busybox. In this part:

if [ ! "$#" -eq 1 ]
then
    printf "Usage: %s <file>\n" "$0" >&2
    exit 1
fi

we check whether the user provided exactly one parameter, and if they didn't we print usage info to standard error and return 1, which signals an error to the parent process. Here, file="$1", we assign the filename parameter that the user passed to a variable, file, that we'll use later. This is the function that actually prints centered text:

display_center()
{
    clear
    columns="$(tput cols)"
    lines="$(tput lines)"
    down=$((lines / 2))
    printf '\n%.0s' $(seq 1 $down)
    printf "%*s\n" $(( (${#1} + columns) / 2)) "$1"
}

There are no function prototypes in Bash, so you can't know in advance how many parameters a function takes - this one takes only one parameter, which is the line to print, and it's dereferenced using $1. This function first clears the screen, then moves down by lines/2 from the top of the screen to reach the center of the screen, and then prints the centered line using the method I borrowed from here. This is the loop that reads the input file passed by the user and calls the display_center() function:

while IFS= read -r line
do
    display_center "$line"
    read -n 1 -s -r </dev/tty
done < "$file"

read is used with -n 1 to read only one character, -s to not echo input coming from a terminal, and -r to prevent mangling backslashes. You can learn more about read in help read. We also read from /dev/tty directly because stdin already points to the file - if we didn't tell read to read from /dev/tty, the script would very quickly print all lines from the file and exit immediately without waiting for the user to press a key.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/573235", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/400487/" ] }
573,377
Just using kubectl as an example, I note that kubectl run --image nginx ... and kubectl run --image=nginx ... both work. For command-line programs in general, is there a rule about whether an equals sign is allowed/required between the option name and the value?
In general, the implementation of how command-line arguments are interpreted is left completely at the discretion of the programmer. That said, in many cases, the value of a "long" option (such as is introduced with --option_name) is specified with an = between the option name and the value (i.e. --option_name=value), whereas for single-letter options it is more customary to separate the flag and value with a space, such as -o value, or use no separation at all (as in -oValue). An example from the man-page of the GNU date utility:

-d, --date=STRING
       display time described by STRING, not 'now'
-f, --file=DATEFILE
       like --date; once for each line of DATEFILE

As you can see, the value would be separated by a space from the option switch when using the "short" form (i.e. -d), but by an = when using the "long" form (i.e. --date). Edit: As pointed out by Stephen Kitt, the GNU coding standard recommends the use of getopt and getopt_long to parse command-line options. The man-page of getopt_long states: A long option may take a parameter, of the form --arg=param or --arg param. So, a program using that function will accept both forms.
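A quick demonstration of this (my addition, not from the original answer), since GNU date accepts all three spellings:

date --date=2020-03-20   # long option, value attached with =
date --date 2020-03-20   # long option, value as a separate argument
date -d 2020-03-20       # short option, value as a separate argument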
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/573377", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/146345/" ] }
573,455
I have this:

curl -L "https://github.com/cmtr/cp-go-api/tarball/$commit_id" | tar x -C "$project_dir/"

I am just trying to download a tarball from github and extract it to an existing directory. The problem is I am getting this error:

Step 10/13 : RUN curl -L "https://github.com/channelmeter/cp-go-api/tarball/$commit_id" | tar x -C "$project_dir/"
 ---> Running in a883449de956
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100     9  100     9    0     0     35      0 --:--:-- --:--:-- --:--:--    35
tar: This does not look like a tar archive
tar: Exiting with failure status due to previous errors
The command '/bin/sh -c curl -L "https://github.com/channelmeter/cp-go-api/tarball/$commit_id" | tar x -C "$project_dir/"' returned a non-zero code: 2

does anyone know why it's not a tar archive? If you go to github.com in the browser and put in this pattern, it will download a tar.gz archive: https://github.com/<org>/<repo>/tarball/<sha> so not sure why it's not working.
So ultimately it's because Github wants credentials. Without 2-factor auth, you can just do this with curl: curl -u username:password https://github.com/<org>/<repo>/tarball/<sha> but if you have 2-factor auth setup, then you need to use a Github access token, and you should use api.github.com instead of github.com, like so: curl -L "https://api.github.com/repos/<org>/<repo>/tarball/$commit_sha?access_token=$github_token" | tar -xz -C "$extract_dir/" the access token thing is documented here: https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
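An aside, not part of the original answer: the token can also be sent as an Authorization header, which keeps it out of URLs and shell history; this form also works against the GitHub API:

curl -L -H "Authorization: token $github_token" \
  "https://api.github.com/repos/<org>/<repo>/tarball/$commit_sha" | tar -xz -C "$extract_dir/"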
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/573455", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/113238/" ] }
573,508
I am trying to use the yocto poky environment. I did the following:

# source environment-setup-cortexa9hf-neon-poky-linux-gnueabi

Now, if I try to compile the program using:

# $(CC) hello.c -o hello.elf

it throws an error since $(CC) isn't defined. However, if I do $CC it works. I am confused about the fundamental difference between $(CC) and $CC.
I'm assuming that you've seen $(CC) in a Makefile where it serves as an expansion of the variable CC, which normally holds the name of the C compiler. The $(...) syntax for variable expansions in Makefiles is used whenever a variable with a multi-character name is expanded, as $CC would otherwise expand to the value of the variable C followed by a literal C ($CC would in effect be the same as $(C)C in a Makefile). In the shell though, due to having a different syntax, $(CC) is a command substitution that would be replaced by the output of running the command CC. If there is no such command on your system, you would see a "command not found" error. It's also possible that you've mistaken $(CC) for ${CC} which, in the shell, is equivalent to $CC under most circumstances. The curly braces are only needed if the variable's expansion is followed immediately by some other string that would otherwise be interpreted as part of the variable's name. An example of the difference may be seen in "$CC_hello" (expands the variable called CC_hello) and "${CC}_hello" (expands the variable CC and appends the string _hello to its value). In all other circumstances, ${CC} is equivalent to $CC. Note that using curly braces is not quoting the expansion, i.e. ${CC} is not the same as "$CC". If you have a shell or environment variable holding the name of the compiler that you're using for compiling C code, and you want to use that variable on the command line, then use "$CC", or just $CC if the variable's value does not contain spaces or shell globbing characters.

$CC -o hello.elf hello.c
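A short demo of the distinction (my addition, for illustration only):

CC=gcc
echo "$CC"          # gcc
echo "${CC}_hello"  # gcc_hello
echo "$CC_hello"    # prints an empty line: expands the (unset) variable CC_hello
echo $(CC)          # "CC: command not found" on stderr, then an empty line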
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/573508", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/398477/" ] }
573,526
Requirement:

Col1 , Col2 , Col3 , Col4
1 , a:/b/c/abc.txt , f:/b/c/abc1.txt , a:/b/c/abc6.txt
2 , b:/bd/a/s/3/d/mon.dat , g:/b/c/abc 2.txt , b:/b/c/ab7.txt
3 , c:/j/h/4/u/y/d/m/ttt.dat , h:/b/c/abc 3.txt , c:/b/c/abc 8.txt
4 , c:/j/h/4/test1.msg , i:/b/c/abc4.txt , d:/b/c/abc 9.txt
5 , d:/iasa/dda/dia/yyy.dat , j:/b/c/abc5.txt , e:/b/c/abc 10.txt

Expected O/P:

Col1 , Col2 , Col3 , Col4
1 , abc.txt , abc1.txt , abc6.txt
2 , mon.dat , abc2.txt , abc7.txt
3 , ttt.dat , abc3.txt , abc8.txt
4 , test1.msg , abc4.txt , abc9.txt
5 , yyy.dat , abc5.txt , abc10.txt
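The matching answer is missing from this record; a minimal awk sketch of one possible approach, assuming the goal is to keep only the part after the last / in columns 2-4 and to drop the stray embedded spaces:

awk -F' *, *' -v OFS=' , ' '
    NR > 1 {                    # leave the header line alone
        for (i = 2; i <= NF; i++) {
            sub(/.*\//, "", $i) # strip everything up to the last /
            gsub(/ /, "", $i)   # drop stray spaces, e.g. "abc 2.txt" -> "abc2.txt"
        }
    }
    { print }' input.txt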
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/573526", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/400587/" ] }
573,541
It seems this package for ebooks https://packages.debian.org/buster/python3-ebooklib is not (yet) packaged for Fedora. How to package this into rpm (locally) or make it available in copr?
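No answer survives in this record. One plausible route, assuming the library is published on PyPI, is pyp2rpm, which generates a spec file that can be built locally or submitted to COPR (exact steps may need adjusting, e.g. running rpmdev-setuptree first to create the build tree):

sudo dnf install pyp2rpm rpmdevtools rpm-build
pyp2rpm ebooklib > python-ebooklib.spec   # generate a spec from the PyPI metadata
spectool -g -R python-ebooklib.spec       # fetch the source tarball into ~/rpmbuild/SOURCES
rpmbuild -ba python-ebooklib.spec         # build the rpm locally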
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/573541", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156074/" ] }
573,544
I have a script I execute in the background and capture all output with a command like this: nohup bash -x test.sh < /dev/null > log.txt 2>&1 & How can I find out how long it took to complete? If I try to use time after bash -x I get an error: /usr/bin/time: /usr/bin/time: cannot execute binary file If I try to use acct it doesn't seem to log the process or I can't find it.
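No answer survives in this record. One hedged approach is to do the timing inside the backgrounded shell instead of invoking time on the pipeline:

nohup bash -c 'start=$(date +%s)
               bash -x test.sh
               echo "elapsed: $(( $(date +%s) - start )) seconds"' </dev/null >log.txt 2>&1 &
# the elapsed time ends up at the bottom of log.txt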
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/573544", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/119404/" ] }
573,597
# When I do just Shift-Insert, I get
~$ 2~
# When I do Ctrl-V, then Shift-Insert, I get
~$ ^[[2;2~

Shift-Insert works well in other situations, like Windows CMD or Git-Bash. In WSL, I can use Ctrl-Shift-V to paste, but prefer Shift-Insert. Is there any work-around?
According to microsoft/WSL: Note that WSL distro's launch in the Windows Console (unless you have taken steps to launch a 3rd party console/terminal). Therefore, please file UI/UX related issues in the Windows Console issue tracker. But the given link for Windows Console points to Windows Terminal: The new Windows Terminal and the original Windows console host, all in the same place! There's no (usable) documentation, so your question has to be answered by pointing into its source-code. The relevant chunk (which you'd like to exercise) is here, in windowio.cpp:

// handle shift-ins paste
if (inputKeyInfo.IsShiftOnly() && ShouldTakeOverKeyboardShortcuts())
{
    if (!bKeyDown)
    {
        return;
    }
    else if (VirtualKeyCode == VK_INSERT &&
             !(pSelection->IsInSelectingState() && pSelection->IsKeyboardMarkSelection()))
    {
        Clipboard::Instance().Paste();
        return;
    }
}

Half of the conditions (to reach that Paste()) appear likely to be met (barring some bug in this program). The ones that aren't apparent:

- ShouldTakeOverKeyboardShortcuts() - but this is used in the ctrl+shift+plus/minus code
- pSelection->IsKeyboardMarkSelection() - we're assuming the mouse was used for selection.

But that's assuming that this HandleKeyEvent method is treating the two different key sequences equally. The ^[[2;2~ comes from another part of the program, in terminalInput.cpp, using a built-in table:

// Sequences to send when a modifier is pressed with any of these keys
// Basically, the 'm' will be replaced with a character indicating which
// modifier keys are pressed.
static constexpr std::array<TermKeyMap, 22> s_modifierKeyMapping{

and applied here:

// If a modifier key was pressed, then we need to try and send the modified sequence.
if (keyEvent.IsModifierPressed() && _searchWithModifier(keyEvent, senderFunc))
{
    return true;
}

From reading the code, it appears that this is all upstream from the windowio.cpp logic, so that combination will never be reached. The developers provided no way to override or modify this behavior. As suggested in a comment by @Rody-Oldenhuis: you can use wsltty, which supports Ctrl+Ins/Shift+Ins out of the box (wsltty is derived from mintty).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/573597", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/387273/" ] }
573,654
I'm writing a bash script with the AWS CLI and shellcheck is giving an error that I think is incorrect. I'd like to try to figure out why it's carping. Here's the code and the error message:

for server in $(${aws} ec2 describe-instances --query 'Reservations[].Instances[][].{Name: Tags[?Key==`Name`].Value[] | [0]}' --filters "Name=tag:Name,Values=${server_name}*" --output text);

^-- SC2016: Expressions don't expand in single quotes, use double quotes for that.

I can't get the code to line up correctly in the SO editor, but the ^-- is pointing at the * in the code, this part: "Name=tag:Name,Values=${server_name}*" The error provides a link to ShellCheck documentation for reference, but when I double check everything, it looks like I'm in compliance. :D I am guessing that the * is throwing things off, and I know that I can get around this by doing shellcheck -e SC2016, but I'm really wondering what might be causing shellcheck to carp. Any ideas?
It is a false positive, but it's not the one you think it is. It has nothing to do with the * , and didn't point there for me. It's upset about `Name` being inside of single quotes. For example, echo '`Name`' produces the same warning, because it thinks that you want the backticks to be evaluated, so it's warning you that they won't be.
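If you would rather silence it for just that line instead of globally with -e, shellcheck supports directive comments placed on the preceding line (shown here on the question's own loop header):

# shellcheck disable=SC2016  # the backticks are JMESPath literals, not command substitution
for server in $(${aws} ec2 describe-instances --query 'Reservations[].Instances[][].{Name: Tags[?Key==`Name`].Value[] | [0]}' --filters "Name=tag:Name,Values=${server_name}*" --output text); do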
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/573654", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68909/" ] }
573,733
I've downloaded my contact list in a .vcf format to my linux machine. I would like to be able to consult it without having to connect to the internet. The search feature is most important. I've got a script with grep and so on but I was hoping someone had already done the work to make things beautiful and readable.
There are a number of console-based tools designed to process vCard files; I know of the following:

- Rolo, a full-screen address-book manager;
- Khard, a console-based CardDAV client (which works fine with locally-stored vCard files);
- mutt_vc_query, a simple querying tool for vCard files (designed for Mutt, but usable standalone).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/573733", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/312803/" ] }
573,760
I have created a systemd service file and placed it in /etc/systemd/system/anfragen-3dkonfig-mapper.service. I ran systemctl daemon-reload, systemctl daemon-reexec and rebooted the system. systemctl enable anfragen-3dkonfig-mapper results in:

Failed to enable unit: Unit file anfragen-3dkonfig-mapper.service does not exist.

systemctl start anfragen-3dkonfig-mapper results in:

Failed to start anfragen-3dkonfig-mapper.service: Unit anfragen-3dkonfig-mapper.service not found.

ls -lh /etc/systemd/system/anfragen-3dkonfig-mapper.service outputs:

-rw-r--r--. 1 root root 440 Mar 19 12:08 /etc/systemd/system/anfragen-3dkonfig-mapper.service

cd /root && systemd-analyze verify anfragen-3dkonfig-mapper.service has an exit code of 0 and prints no output. mount shows:

/dev/sda2 on / type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

There are no other mounts touching /usr or /etc. The contents of the service file are:

[Unit]
Description=Anfragen 3D Konfigurations Mapper Service
After=network.target

[Service]
Restart=always
ExecStartPre=-/usr/bin/podman stop anfragen-3dkonfig-mapper
ExecStartPre=-/usr/bin/podman rm anfragen-3dkonfig-mapper
ExecStart=/usr/bin/podman run --rm --name anfragen-3dkonfig-mapper-app -p 10010:10000 anfragen-3dkonfig-mapper-app:0.0.1
ExecStop=/usr/bin/podman stop anfragen-3dkonfig-mapper

[Install]
WantedBy=multi-user.target

All above commands were run as the root user. Operating System: CentOS Linux release 8.0.1905 (Core). Systemd version: 239. Linux kernel: Linux version 4.18.0-80.11.2.el8_0.x86_64 ([email protected]) (gcc version 8.2.1 20180905 (Red Hat 8.2.1-3) (GCC)). I vaguely remember having a similar problem with another service file some months ago, which just magically started working after a few hours of poking around and renaming the service file back and forth. I'm interested in two things: How does one debug such a problem? What is wrong?
As hinted at by @JdeBP, wrong SELinux file labels are the reason for the behavior. The . character in the output of ls indicates that there is a security context set for the file. So be attentive to the . in the ls output! cd /etc/systemd/system && ls -lhZ some-other-service.service anfragen-3dkonfig-mapper.service prints:

-rw-r--r--. 1 root root unconfined_u:object_r:admin_home_t:s0        440 Mar 19 12:08 anfragen-3dkonfig-mapper.service
-rw-r--r--. 1 root root unconfined_u:object_r:systemd_unit_file_t:s0 457 Feb 24 11:42 some-other-service.service

It can be seen that the other service file has the systemd_unit_file_t label, while the broken service doesn't. This can be fixed with restorecon anfragen-3dkonfig-mapper.service. After this the labels look as follows:

-rw-r--r--. 1 root root unconfined_u:object_r:systemd_unit_file_t:s0 440 Mar 19 12:08 anfragen-3dkonfig-mapper.service
-rw-r--r--. 1 root root unconfined_u:object_r:systemd_unit_file_t:s0 457 Feb 24 11:42 some-other-service.service

systemd now behaves as expected.
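A related debugging step (my addition, assuming the standard SELinux utilities are installed): matchpathcon prints the label a path is supposed to have, so you can spot the mismatch before running restorecon:

matchpathcon /etc/systemd/system/anfragen-3dkonfig-mapper.service
# expected output along these lines:
# /etc/systemd/system/anfragen-3dkonfig-mapper.service  system_u:object_r:systemd_unit_file_t:s0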
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/573760", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/396163/" ] }
573,949
I have some directories:

drwxr-xr-x 10 shamoon staff 320B Mar 20 10:05 dryrun-20200320_140542-1vbczul4
drwxr-xr-x 10 shamoon staff 320B Mar 20 10:06 dryrun-20200320_140605-uze15jta
drwxr-xr-x 10 shamoon staff 320B Mar 20 10:06 dryrun-20200320_140644-193bynci
drwxr-xr-x 13 shamoon staff 416B Mar 20 10:07 dryrun-20200320_140721-fuv399ji
drwxr-xr-x 13 shamoon staff 416B Mar 20 10:08 dryrun-20200320_140810-34dim70r
drwxr-xr-x 14 shamoon staff 448B Mar 20 10:10 dryrun-20200320_140935-138yuidx
drwxr-xr-x 14 shamoon staff 448B Mar 20 10:23 dryrun-20200320_141044-35pfvec6
drwxr-xr-x 14 shamoon staff 448B Mar 20 11:14 dryrun-20200320_151418-14g88zfr
drwxr-xr-x 14 shamoon staff 448B Mar 20 12:11 dryrun-20200320_151800-gf551inz
drwxr-xr-x 14 shamoon staff 448B Mar 20 12:21 dryrun-20200320_161134-wyu9kaxu

I want to set up a symlink to the most recent. Now, there may be more recent directories created, so ideally, the symlink should also automatically update. Is this possible?
This isn't possible to do automatically -- Unix provides no facility for symlinks to dynamically change. However, you can have a program in the background that updates the symlink using inotify and the fact that later files sort as being later with LC_COLLATE=C:

#!/bin/bash -e

export LC_COLLATE=C
shopt -s nullglob

base=/path

while inotifywait -e create \
                  -e moved_to \
                  -e moved_from \
                  -e close_write "$base" > /dev/null; do
    dirs=("$base"/dryrun-[0-9]*/)
    (( ${#dirs[@]} )) && ln -sfn -- "${dirs[-1]}" "$base"/latest
done

And here is the result of it running:

% mkdir dryrun-20200320_140935-138yuidx
% ls -l latest
lrwxrwxrwx 1 cdown cdown 39 Mar 20 16:40 latest -> /path/dryrun-20200320_140935-138yuidx/
% mkdir dryrun-20200320_141044-35pfvec6
% ls -l latest
lrwxrwxrwx 1 cdown cdown 39 Mar 20 16:40 latest -> /path/dryrun-20200320_141044-35pfvec6/
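If a persistent watcher is overkill, the same glob trick can be run on demand or from cron (a sketch under the same naming assumptions as above):

export LC_COLLATE=C
shopt -s nullglob
base=/path
dirs=("$base"/dryrun-[0-9]*/)
(( ${#dirs[@]} )) && ln -sfn -- "${dirs[-1]}" "$base"/latest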
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/573949", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/386283/" ] }
573,958
Does less (or any other lightweight pager I could use as $PAGER) have such a feature? For example, if I type man bash and then enter /incorporates it doesn't find the word, despite it being right there, in the second paragraph:

DESCRIPTION
       Bash is an sh-compatible command language interpreter that executes
       commands read from the standard input or from a file. Bash also incor-
       porates useful features from the Korn and C shells (ksh and csh).

My djvu and pdf viewer "incorporates" such a feature, and probably other document viewers do too. (pdftotext simply re-joins the words, which means that pdftotext file.pdf - | grep pattern may still be more reliable than pdfgrep). IIRC info (the GNU texinfo docs viewer) just doesn't hyphenate the words, in order to avoid this. While not directly actionable (I'm mostly using less with preformatted files), I would also be interested in any groff/mandoc options/tricks that could inhibit the end-of-line hyphenation.
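No direct answer survives in this record. One hedged pointer for the man-page case specifically: man-db's man can turn off the formatting that causes this:

man --nh --nj bash   # --nh disables hyphenation, --nj disables justification (man-db)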
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/573958", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
574,089
It's possible to change the xterm font size by holding ctrl and right-clicking the window. Is it possible to do it without a mouse?
The default keybindings include what's needed:

Shift~Ctrl <KeyPress> KP_Add:larger-vt-font() \n\
Shift Ctrl <KeyPress> KP_Add:smaller-vt-font() \n\
Shift <KeyPress> KP_Subtract:smaller-vt-font() \n\

That is (without any customization needed):

- shift keypad + switches to the next-larger font.
- shift keypad - switches to the next-smaller font.

There are two bindings for KP_Add to make it workable by default on some unusual keyboards. This was originally just for bitmap-fonts (in 1999); TrueType fonts were accommodated in 2008. It is also possible to do this with an escape-sequence, e.g.,

printf '\033]50;#+1\007'

to switch to the next-larger font, and

printf '\033]50;#-1\007'

to switch to the next-smaller font. The fonts.sh script in the sources makes xterm repeatedly shrink/grow, and when interrupted, restores the original font. (The \007 in the printf is a nonprinting control/G in the script to accommodate very old shells).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/574089", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/134690/" ] }
574,106
I have a string with the below pattern:

SrcWorkspaceName=abc_1234;SrcEndVer=1409;Lang=ENU,FRA,NLD

I need the SrcEndVer value, 1409, to be replaced with another number. Here the number is stored in a variable, say Var=1600. So the 1409 value should be replaced with the variable Var, like the output below for example:

SrcWorkspaceName=abc_1234;SrcEndVer=1600;Lang=ENU,FRA,NLD
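No answer survives in this record; a minimal sed sketch, assuming SrcEndVer's value is always numeric:

str='SrcWorkspaceName=abc_1234;SrcEndVer=1409;Lang=ENU,FRA,NLD'
Var=1600
printf '%s\n' "$str" | sed "s/SrcEndVer=[0-9]*/SrcEndVer=$Var/"
# SrcWorkspaceName=abc_1234;SrcEndVer=1600;Lang=ENU,FRA,NLD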
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/574106", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/393706/" ] }
574,134
Let's say I have an input text file (sample_simple.txt) like this:

3 1 10 1 6
9 4 2 4 1
9 2 2 2 1

By using the command awk '$0=$1" "$1*$2" "$3*$4' sample_simple.txt, I get the following output:

3 3 10
9 36 8
9 18 4

Then using the command awk '$1*$2" "$3*$4' sample_simple.txt, nothing changes from the input file:

3 1 10 1 6
9 4 2 4 1
9 2 2 2 1

The only change between the commands is '$0=$1'. Can anyone explain this?
It's not really $0=$1 ; think of it more like $0 = ($1" "$1*$2" "$3*$4) So $0=$1" "$1*$2" "$3*$4 assigns the result of string concatenation $1" "$1*$2" "$3*$4 to variable $0 and performs the default action {print $0} , whereas $1*$2" "$3*$4 concatenates the results of $1*$2 and $3*$4 (with a space " " between) and performs the default action {print $0} because the result is a non-empty string. The value of $0 is not modified.
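An equivalent way to write the first command that avoids assigning to $0 at all (a demo, not from the original answer):

awk '{print $1, $1*$2, $3*$4}' sample_simple.txt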
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/574134", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/401361/" ] }
574,224
This function should exit the calling script:

crash() {
    echo error
    exit 1
}

This works as expected:

echo before
crash
echo after    # execution never reaches here

But this does not:

echo before
x=$(crash)    # nothing is printed, and execution continues
echo after    # this is printed

How do I capture the result of a function, as well as allow it to exit?
This is because $(crash) executes crash in a subshell, so the exit applies to the subshell and not to your script. What is the point of capturing the output in a variable if you won't use it because the script exited anyway?
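A common pattern if you want both the captured output and the abort (my suggestion, not part of the original answer) is to check the substitution's exit status in the caller:

x=$(crash) || exit 1   # the assignment's exit status is crash's exit status
echo "got: $x"         # only reached if crash succeeded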
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/574224", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/291147/" ] }
574,257
I'm new to X11 and want to understand if it is really as dangerous as they say on the Internet. I will explain how I understand this. Any application launched from under the current user has access to the keyboard, mouse, display (e.g. taking a screenshot), and this is not good. But, if we install programs from the official repository (for example, for Debian), which are unlikely to contain keyloggers, etc., then the danger seems exaggerated. Am I wrong? Yes, you can open applications on separate servers (for example, Xephyr), but this is inconvenient, since there is no shared clipboard. Creating a clipboard based on tmp files is also inconvenient.
"Any application launched from under the current user has access to the keyboard, mouse, display (e.g. taking a screenshot), and this is not good."

All the X11 clients on a desktop can access each other in depth, including getting the content of any window, changing it, closing any window, faking key and mouse events to any other client, grabbing any input device, etc. The X11 protocol design is based on the idea that the clients are all TRUSTED and will collaborate, not step on each other's toes (the latter completely broken by modern apps like Firefox, Chrome or Java).

"But, if we install programs from the official repository (for example, for Debian), which are unlikely to contain keyloggers, etc., then the danger seems exaggerated. Am I wrong?"

Programs have bugs, which may be exploited. The X11 server and libraries may not be up-to-date. For instance, any X11 client can crash the X server in the current version of Debian (Buster 10) via innocuous Xkb requests. (That was fixed in the upstream sources, but didn't make it yet into Debian). If it's able to crash it, then there's some probability that it's also able to execute code with the privileges of the X11 server (access to hardware, etc). For the problems with the lax authentication in Xwayland (and the regular Xorg Xserver in Debian), see the notes at the end of this answer.

"Yes, you can open applications on separate servers (for example, Xephyr), but this is inconvenient, since there is no shared clipboard. Creating a clipboard based on tmp files is also inconvenient."

Notice that unless you take extra steps, Xephyr allows any local user to connect to it by default. See this for a discussion about it. Creating a shared clipboard between multiple X11 servers is an interesting problem, which deserves its own Q&A, rather than being mixed with this.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/574257", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/401458/" ] }
574,621
I have a large dataset as given below:

35.7337 408 0.5
35.732 407 0.5
35.7301 406 0.5
35.7281 405 0.5
35.7259 404 0.5
35.7236 403 0.5
35.7212 402 0.5
35.7187 401 0.5
35.7162 400 0.5
35.7136 399 0.5
35.711 398 0.5
35.7085 397 0.5
35.706 396 0.5
35.7036 395 0.5
35.7013 394 0.5
35.6992 393 0.5

Now, I would like to get the max value of column 1 only among the rows whose column 2 is less than 400, and also the max value of column 1 among the rows whose column 2 is greater than 400. There are no negative values in column 2 or column 1. Column 2 == 400 is not required, as the expected outcome will be far away from $2 == 400. So my desired output:

35.7136 (second column value < 400)
35.7337 (second column value > 400)
Use csvsql from csvkit:

csvsql -HS -d' ' --query 'select max(a) from file where b<400' file

For tab-separated content, use -t instead of -d' '. Or awk:

awk '
    $2<400 && $1>max1 {max1=$1}
    $2>400 && $1>max2 {max2=$1}
    END {printf "%s (second column value < 400)\n%s (second column value > 400)\n", max1, max2}' file

If column 1 can be negative, you have to initialize max1 and max2, because while unset, max1 compares as zero in $1>max1.
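A hedged variant of the awk version that also copes with negative values in column 1, by tracking whether a maximum has been seen yet instead of relying on the unset-is-zero behaviour:

awk '$2<400 && (!s1 || $1>max1) {max1=$1; s1=1}
     $2>400 && (!s2 || $1>max2) {max2=$1; s2=1}
     END {print max1; print max2}' file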
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/574621", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/293538/" ] }
574,669
Originally, I was trying to determine why some directories show up differently colored than others when using the ls command. While playing around with this, I have now encountered the problem of not being able to clear the screen inside a tmux terminal:

$ clear
'tmux-256color': unknown terminal type.

This problem only persists in tmux, not the actual terminal itself, and only showed up after trying to fix the initial problem. Also the colors have changed when running vim inside tmux now. Here are some outputs. Outside tmux:

$ echo $TERM; tput colors; tput longname
xterm-256color
256
xterm with 256 colors

Inside tmux:

$ echo $TERM; tput colors; tput longname
tmux-256color
tput: unknown terminal "tmux-256color"
tput: unknown terminal "tmux-256color"

EDIT: my .bashrc file has:

case "$TERM" in
    xterm-color|*-256color) color_prompt=yes;;
esac

my .tmux.conf has:

set -g default-terminal "screen-256color"
Your platform doesn't have tmux-256color, so you will need to either:

1) Use screen-256color instead.
2) See if you can upgrade ncurses or terminfo to a later version with tmux-256color.
3) Copy tmux-256color from another computer which does have it; you can do this by saving it with infocmp -x tmux-256color >saved, then installing it with tic -x saved.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/574669", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/149952/" ] }
574,679
I freshly installed the system on my HW RAID 0 drives, over which is installed SW RAID 1 with LVM. After installation of Debian 10, I've got this problem which I don't know how to resolve. So far I have tried completely wiping the drives, creating my own LVM in a live image (which worked), and trying to force a new UUID. Nothing worked.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/574679", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172216/" ] }
574,690
I'm making an intelligent alias gc that can differentiate git commit/checkout. If gc is called without any arguments or with the -a, -m arguments, then git commit is run. Otherwise, git checkout is run (if there is a b flag with an additional argument). If any other variation of gc is called, I'd prefer it throws an error rather than doing something unexpected. Here is my shell function so far:

gc() {
    args=$@
    commit=false
    checkout=false

    # Check flags
    while getopts ":am:b" opt; do
        case $opt in
            a|m) commit=true ;;
            b) checkout=true ;;
            \?)
                echo "Unknown flags passed."
                return 1
                ;;
        esac
    done
    shift "$((OPTIND-1))"

    # Check number of arguments
    if [[ "$#" == 0 ]]; then
        commit=true
    elif [[ "$#" == 1 ]]; then
        checkout=true
    else
        echo "Too many arguments"
        return 1
    fi

    # Finally run respective command
    if [[ $commit == true && $checkout == true ]]; then
        echo "Unable to ascertain which operation you would like to perform."
        return 1
    elif [[ $commit == true ]]; then
        git commit "$args"
    elif [[ $checkout == true ]]; then
        git checkout "$args"
    else
        echo "Undefined behavior"
        return 1
    fi
}

However, this does not work properly. After a bit of experimenting, I found that assigning $@ to another variable was the root cause, but I was unable to understand why and what exactly was going wrong. Also, as I'm writing shell functions for the first time, please highlight any mistakes I've made.
$@ is an array, assign it to an array:

args=("$@")

Then use it as an array:

elif [[ $commit == true ]]; then
    git commit "${args[@]}"
elif [[ $checkout == true ]]; then
    git checkout "${args[@]}"
else

What's happening in your current code is that all of your separate arguments are being stored as a single string. So if you call:

gc -a "foo bar"

That gets assigned to args as:

args='-a foo bar'

Then instead of executing:

git commit -a "foo bar"

You get:

git commit '-a foo bar'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/574690", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/194607/" ] }
574,702
I have a very simple chat-like tool that runs within a GNU screen session. The screen window is split, the top part is running tail -f file.txt and the bottom part is running a script with the following content:

#!/bin/bash

while : ; do
    read -p "Message: " msg
    ctime=$(date +"%H:%M:%S")
    echo "[${ctime}] User: ${msg}" >> file.txt
done

Very simple, but gets the job done with the requirements I have. There's only one problem: when I press ESC or any of the arrow keys, it inserts an escape sequence, like ^[[D for example. And this messes up the file, resulting in terrible output. So my question is simple: how can I escape the input from read so it's safe to write to the file? I've tried echo "[${ctime}] User: ${msg}" | strings >> file.txt which made it a lot better, there were no big mess-ups anymore (e.g. nothing was overwritten or wrongly put out), but things are still not perfect (e.g. entering te^[[Dst would turn into te\n[Dst, the \n being an actual new line).
How about a slightly different approach? Rather than remove the escape characters and sequences, you can allow users to use them to edit the input line with read -e. If you want, you can take this even further by recording chat message history, like this:

...
read -e -p "Message: " msg
history -s "$msg"
...

With this, if someone makes a typo in a message, they can hit up-arrow, use left- and right-arrow to edit and fix the typo, then hit return to send the corrected message.
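If you also want to strip anything that still slips through (my addition, not from the original answer), bash can delete non-printable characters before the message is written:

msg=${msg//[^[:print:]]/}   # drops raw control bytes such as ESC; the printable
                            # remainder of a sequence (e.g. "[D") would still need handling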
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/574702", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/296862/" ] }
574,837
I wrote a filter / select function that takes a function and a stream as input. It should yield a new array of ( 2 4 ). However, my result is nothing. I suspect it has something to do with the IFS.

# func int -> bool
is_even () { (( $1 % 2 == 0 )) ;}

# func func -> int
filter () {
    local function_to_apply=$1
    local arg

    while read -r arg; do
        $function_to_apply $arg && echo $arg
    done;:
}

# int array
integers=( 1 2 3 4 )

result=$(echo "${integers[*]}" | filter is_even)
declare -p result

Output is the string "":

declare -- result=""

Expected output is an array of ( 2 4 ):

declare -a result='([0]="2" [1]="4")'

Give credit where credit is due: http://www.binaryphile.com/bash/2018/07/26/approach-bash-like-a-developer-part-1-intro.html
You give your function input on a single line. The line will be the string 1 2 3 4, which is what "${integers[*]}" expands to with the default value of $IFS. This entire single line will be read by the first call to read into arg and used (unquoted) in a call to your function. Since $arg is unquoted, the shell will split the string on the spaces and your function will only use that initial 1, which is not even. This means that the echo $arg is not triggered. Instead:

#!/bin/bash

is_even () { (( $1 % 2 == 0 )) ;}

filter () {
    local function_to_apply="$1"
    local arg

    while read -r arg; do
        "$function_to_apply" "$arg" && echo "$arg"
    done
}

integers=( 1 2 3 4 )

result=$(printf '%s\n' "${integers[@]}" | filter is_even)
declare -p result

The main thing here is to print the elements of the array on individual lines to the filter function. This will give you the single string 2\n4 (where \n is a newline). This should be no surprise as you are assigning a string to result. If you want to get an array back, you can do this in a recent release of bash:

#!/bin/bash

is_even () { (( $1 % 2 == 0 )) ;}

filter () {
    local func="$1"
    local -n in_array="$2"
    local -n out_array="$3"
    local element

    out_array=()

    for element in "${in_array[@]}"; do
        if "$func" "$element"; then
            out_array+=( "$element" )
        fi
    done
}

integers=( 1 2 3 4 )
even_ints=()

filter is_even integers even_ints
declare -p even_ints

This is using two name reference variables inside the filter function. The first one is the input array and the second one is the output array. This will give you the output:

declare -a even_ints=([0]="2" [1]="4")

Another way to pass the values to the filter function is obviously to pass them on the command line of the function:

#!/bin/bash

is_even () { (( $1 % 2 == 0 )) ;}

filter () {
    local func="$1"
    local -n out_array="$2"
    shift 2
    local element

    out_array=()

    for element do
        if "$func" "$element"; then
            out_array+=( "$element" )
        fi
    done
}

integers=( 1 2 3 4 )
even_ints=()

filter is_even even_ints "${integers[@]}"
declare -p even_ints
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/574837", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33386/" ] }
574,920
How to get stdout to console and pipe to next command at the same time? I've tried using the read command as suggested here, which worked to get the output of grep from a tail of a log file to a variable and then to a log or email, but I'd still like to get the output to the stdout console as well: https://unix.stackexchange.com/a/365222/346155 I've tried using tee as here: https://unix.stackexchange.com/a/47936/346155 I'm using the --line-buffered flag just in case, from here: https://stackoverflow.com/a/7162898/4240654 I may be missing something simple about the stdin logic, but the case from the first link suggests that Bash may not have this simple capability, and that variables cannot read from a subshell. The fact that echo 'hello' | echo $(</dev/stdin) works suggests it might be possible. Another way to look at it is: how can I get stdout to the console within each pipe segment? That should help to debug a long chain of commands before committing it to a bash script. EDITS: Something like echo 'hello' | echo $(</dev/stdin) >/dev/stdout or echo 'hello' | tee >/dev/stdout | echo 2nd $(</dev/stdin); the latter should output 'hello' twice, but only does so once.
Use tee /dev/tty. Example:

echo "Hello world" | tee /dev/tty | wc -c

tee and /dev/tty are required by POSIX. In general you can use a name like /dev/tty2 or /dev/pts/7. It doesn't have to be the current terminal, as long as you can write to it.

"Another way to look at it is, how can I get stdout to the console within each pipe segment. That should help to debug a long chain of commands, before committing it to a bash script."

I have done something similar conveniently with tmux panes. You can do this with or without tmux. Prepare as many panes as you need, or console windows if you prefer. In each, invoke tty to learn the filename of the terminal. In one of them invoke the pipeline you want to debug, using tee and a different terminal in each step. This pipeline:

command1 | command2 | command3

will become something like:

command1 | tee /dev/pts/4 | command2 | tee /dev/pts/5 | command3

where /dev/pts/4 and /dev/pts/5 are additional terminals (e.g. panes in tmux).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/574920", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/346155/" ] }
574,961
I just installed cmake but I'm getting a "compiler not found" error. In trying to build https://gitlab.com/interception/linux/tools on a new Kubuntu installation, running cmake .. from the tools/build directory returns the error:

CMake Error at CMakeLists.txt:3 (project):
  No CMAKE_CXX_COMPILER could be found.

  Tell CMake where to find the compiler by setting either the environment
  variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path
  to the compiler, or to the compiler name if it is in the PATH.

What's wrong? I assumed cmake would be equipped with its compiler, but maybe it needs to be configured before it can be used?
The "compiler" is a separate package that needs to be installed. One called g++ can be installed on it's own and is also included within a bundle of packages called "build-essential". Thus sudo apt-get install build-essential solves the problem (and sudo apt-get install g++ should also work), allowing cmake .. to work with no configuration necessary.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/574961", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115900/" ] }
574,965
How do I close a port listening on localhost in CentOS 7? So far I have used the below command to find the process id: sudo netstat -tlpn | grep 5601 Then, I used the below command to kill the process, but it starts up with a new process id: sudo kill -SIGTERM 29565 Please help.
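No answer survives in this record. A hedged note: a process that reappears immediately after being killed is usually supervised, so stop it through its service manager instead of killing the PID. Port 5601 is typically Kibana, so assuming a systemd-managed install (the unit name is an assumption):

sudo ss -tlpn | grep 5601      # identify the listening process and its PID
sudo systemctl stop kibana     # stop the supervising service instead of the PID
sudo systemctl disable kibana  # optionally keep it from starting at boot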
The "compiler" is a separate package that needs to be installed. One called g++ can be installed on it's own and is also included within a bundle of packages called "build-essential". Thus sudo apt-get install build-essential solves the problem (and sudo apt-get install g++ should also work), allowing cmake .. to work with no configuration necessary.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/574965", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/401824/" ] }
574,966
I have a space-separated file like this (it has 1775 lines):

head output.fam
0 ALIKE_g_1LTX827_BI_SNP_F01_33250.CEL 0 0 0 -9
0 BURRY_g_3KYJ479_BI_SNP_A12_40182.CEL 0 0 0 -9
0 ABAFT_g_4RWG569_BI_SNP_E12_35136.CEL 0 0 0 -9
0 MILLE_g_5AVC089_BI_SNP_F02_35746.CEL 0 0 0 -9
0 PEDAL_g_8WWR250_BI_SNP_B06_37732.CEL 0 0 0 -9
...

and a comma-separated file phg000008.individualinfo (that has 1838 lines):

#Phen_Sample_ID - individual sample name associated with phenotypes
#Geno_Sample_ID - sample name associates with genotypes
#Ind_id - unique individual name which can be used to match duplicates (in this case same as Phen_Sample_ID)
#Ped_id - Pedigree ID
#Fa_id - Father individual ID
#Ma_id - Mother individual ID
#Sex - coded 1 for Male, 2 for Female
#Ind_QC_flag - value "ALL" indicates released in both Quality Filtered and Complete set
#Genotyping_Plate
#Sample_plate_well_string - This string corresponds to the file within the CEL files distribution
#Genotype_Clustering_Set
#Study-id - dbGaP assigned study id
#Phen_ID,Geno_Sample_ID,Ind_id,Ped_id,Fa_id,Ma_id,Sex,Ind_QC_flag,Genotyping_Plate,Sample_plate_well_string,Genotyping_Clustering_Set,Study_id
G1000,G1000,G1000,fam1000-,0,0,2,ALL,7FDZ321,POSED_g_7FDZ321_BI_SNP_B02_36506,set05,phs000018
G1001,G1001,G1001,fam1001-,G4243,G4205,1,ALL,3KYJ479,BURRY_g_3KYJ479_BI_SNP_H04_40068,set02,phs000018
G2208,G2208,G2208,fam2208-,G3119,G3120,2,ALL,1LTX827,ALIKE_g_1LTX827_BI_SNP_F01_33250,set01,phs000018
G1676,G1676,G1676,fam1676-,G1675,G1674,1,ALL,3KYJ479,BURRY_g_3KYJ479_BI_SNP_A12_40182,set02,phs000018
...

I would like to change my output.fam by looking up the value from the 2nd column of output.fam, say ALIKE_g_1LTX827_BI_SNP_F01_33250.CEL, in phg000008.individualinfo (disregarding the .CEL suffix). If there is a row with that entry, replace that entry in output.fam with the value in the first column of phg000008.individualinfo, and also, for the same line, replace the value of the first column of output.fam with the value in the 4th column of phg000008.individualinfo (excluding the - suffix). So, for example, for two lines, output.fam would look like this:

fam2208 G2208 0 0 0 -9
fam1676 G1676 0 0 0 -9
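The matching answer is missing from this record; a minimal awk sketch of one possible join, assuming the 10th comma-separated field of phg000008.individualinfo equals column 2 of output.fam without its .CEL suffix:

awk '
    NR == FNR {                  # first file: build the lookup tables
        if ($0 ~ /^#/) next      # skip the comment/header lines
        split($0, f, ",")
        sub(/-$/, "", f[4])      # drop the trailing "-" from Ped_id
        id[f[10]] = f[1]
        ped[f[10]] = f[4]
        next
    }
    {                            # second file: space-separated output.fam
        key = $2
        sub(/\.CEL$/, "", key)
        if (key in id) { $1 = ped[key]; $2 = id[key] }
        print
    }' phg000008.individualinfo output.fam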
The "compiler" is a separate package that needs to be installed. One called g++ can be installed on it's own and is also included within a bundle of packages called "build-essential". Thus sudo apt-get install build-essential solves the problem (and sudo apt-get install g++ should also work), allowing cmake .. to work with no configuration necessary.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/574966", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/401868/" ] }
575,312
I have a folder tree of text files and I would like to find instances of a substring and the names of the files they came from. If I do something like: find . | xargs cat | grep 'abc' then I would find instances of the substring, but not the files they originally came from. Thanks for the help!
Running xargs cat like that loses the filenames, and there's no good way to pass them through the pipeline at the same time as the data flows through. But grep -l lists the names of files with matching strings, so you could use that:

find . -type f | xargs grep -l hello

Or just have grep -r recursively dig through the directory, which also avoids the issues xargs has with filenames containing whitespace or quotes:

grep -lre abc .

If you wanted the matching strings too, and not just the filenames, remove the -l to get the usual grep behaviour. With -r, it should print the matching filenames too, even though we only give one path on the command line.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/575312", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/400487/" ] }
576,589
I was trying to look into some reference from O'Reilly about Unix and Bash about the meaning of */ but couldn't find any. We can echo */ and see all the directories. It seems like it means all "directories" only, while * means "all files and directories", but for some reason, many users seem to not know it, and books don't mention it either. Is there a definitive source that talks about what */ means and perhaps its variations?
The POSIX specification is the definitive reference on how unix tools should behave. The section on pathname resolution explains the meaning of trailing slashes in file names: A pathname that contains at least one non-<slash> character and that ends with one or more trailing <slash> characters shall not be resolved successfully unless the last pathname component before the trailing <slash> characters names an existing directory or a directory entry that is to be created for a directory immediately after the pathname is resolved. In other words: foo/ requires foo to be an existing directory, or a directory that the program will create (so mkdir foo/ is permitted). If foo is some other type of file (regular file, named pipe, etc.), then accessing it as foo/ must not work. The sentence above is incomplete: foo/ is actually a valid way to reference a symbolic link to a directory. This is specified below: If a symbolic link is encountered during pathname resolution, the behavior shall depend on whether the pathname component is at the end of the pathname and on the function being performed. If all of the following are true, then pathname resolution is complete:

1. This is the last pathname component of the pathname.
2. The pathname has no trailing <slash>.
3. The function is required to act on the symbolic link itself, or certain arguments direct that the function act on the symbolic link itself.

In all other cases, the system shall prefix the remaining pathname, if any, with the contents of the symbolic link (…). In other words: if foo is a symbolic link, and the program is supposed to follow symbolic links, then foo/ is equivalent to the target of the link, which may be a directory. So if foo is a symbolic link to a directory, foo/ is a valid way to refer to this directory. But if foo is a symbolic link to a regular file or other non-directory, then foo/ is not a valid way to refer to that file. Functions such as open return the error ENOTDIR if given a pathname with a trailing slash that is an existing non-directory: [ENOTDIR] (…) O_CREAT and O_EXCL are not specified, the path argument contains at least one non-<slash> character and ends with one or more trailing <slash> characters, and the last pathname component names an existing file that is neither a directory nor a symbolic link to a directory (…) The impact of a slash in */ is described implicitly in the section on pattern matching in sh. There is no special rule for the / in a pattern (other than the rule that it cannot appear in a bracket expression, which is not relevant here). Therefore a / in a pattern must match a / in a pathname. For example, the pattern */ matches foo/ but not foo. Therefore */ matches directories and symbolic links to directories, but not regular files, symbolic links to regular files, broken symbolic links, named pipes, etc.
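A quick shell illustration of the rule (a demo, not part of the specification text):

mkdir d; touch f; ln -s d ld; ln -s f lf
ls -d d/ ld/   # both resolve: a directory and a symlink to a directory
ls -d f/       # fails with "Not a directory" (ENOTDIR)
ls -d lf/      # fails too: symlink to a non-directory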
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/576589", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19342/" ] }
576,605
INPUT="TEST: www.google.com TEST2: 123444 TEST3 Id: ABCD1234 TEST.txt" My expected output is ABCD1234 I tried OUTPUT=`echo $INPUT | sed 's/^.*TEST3 Id://' | sed 's/\[space].*//'` Got this as output ABCD1234 TEST.txt
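There is no \[space] escape in sed — inside a script, \[space] matches the literal characters [space] , so your second substitution never matches anything. Use an actual space (or the [[:blank:]] character class) instead. A sketch with a single sed call, assuming the marker text TEST3 Id: appears literally as in your sample: OUTPUT=$(echo "$INPUT" | sed 's/.*TEST3 Id: *//; s/[[:blank:]].*//') The first expression deletes everything up to and including TEST3 Id: and any following spaces; the second deletes everything from the first remaining blank onward, leaving ABCD1234 .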
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/576605", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/402653/" ] }
576,614
I use grep to get the output of mysqladmin as sudo mysqladmin ext -i10 | grep 'buffer_pool_pages_flushed' and the output is continuous (every 10 seconds) as | Innodb_buffer_pool_pages_flushed | 265708726 || Innodb_buffer_pool_pages_flushed | 265735665 || Innodb_buffer_pool_pages_flushed | 265751712 || Innodb_buffer_pool_pages_flushed | 265754576 || Innodb_buffer_pool_pages_flushed | 265774380 | How can I adjust the grep command to output the differences between consecutive numbers in the second column, like 26939 (265735665-265708726)16047 (265751712-265735665)2864 (265754576-265751712)19804 (265774380-265754576)
Append: | awk '{if(NR>1){print $4-last,"("$4"-"last")"} last=$4}' Output: 26939 (265735665-265708726)16047 (265751712-265735665)2864 (265754576-265751712)19804 (265774380-265754576)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/576614", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10780/" ] }
576,654
echo is used to produce a new line in Linux. echo -e '\n' 2 lines echo -e '\n\n' 3 lines and so on. e.g. user@linux:~$ echouser@linux:~$ echo -e '\n'user@linux:~$ echo -e '\n\n'user@linux:~$ Instead of typing \n\n , would it be possible to use something like \n * 3 . Desired Output I know echo -e '\n * 3' won't produce 3 lines, but I hope you get what I mean. user@linux:~$ echo -e '\n * 3' # <== I hope you get what I meanuser@linux:~$
If you are using bash or ksh93 or zsh then printf '%.0s\n' {1..3} will produce 3 newlines. The {1..3} expands to 1 2 3 , and then the printf outputs these as zero width strings followed by newline.
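If the count is in a variable, note that brace expansion happens before variable expansion, so {1..$n} will not work; a sketch using seq instead (widely available, though not strictly POSIX): n=3; printf '%.0s\n' $(seq "$n")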
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/576654", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
576,668
I want to check if the size of the newest file is greater than 2 MB: test $(ls -st | head -n2 | tail -n1 | awk '{print $1}') -gt 2097152 && echo "true" Is there a more efficient or elegant way to do this? I tried to further pipe the output of awk to | test {} -gt 2097152 but get bash: test: {}: integer expression expected Then | test {}>"2097152" yields always 'true' so I came up with the construct test $(...) -gt 2097152
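One thing to watch out for: the first column of ls -s is a block count (in 1K or 512-byte units), not bytes, so comparing it against 2097152 doesn't actually test 2 MB. A simpler sketch that asks stat for the byte size of the newest file directly (assuming GNU stat and file names without newlines): newest=$(ls -t | head -n 1); [ "$(stat -c %s -- "$newest")" -gt 2097152 ] && echo "true" On BSD/macOS the equivalent is stat -f %z . Your original error came from piping into test : test reads its operands from its argument list, not from stdin, so the {} placeholder was passed through literally and could not be parsed as an integer.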
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/576668", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/189709/" ] }
576,682
I have pairwise strings like this in my file A BA CB AB CC A I'm looking for a way to count how many commutative pairs I got. I.e A B and B A makes one such pair but B C doesn't (because we miss C B ). I have tried working with awk but I'm just guessing at this point. Thanks in advance.
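A sketch in awk (assuming each line holds exactly two whitespace-separated fields): awk '($2 FS $1) in seen { n++; delete seen[$2 FS $1]; next } { seen[$1 FS $2] } END { print n+0 }' file For every line it checks whether the reversed pair was seen earlier; if so it counts one commutative pair and consumes the stored entry, otherwise it remembers the pair. On your sample this prints 2 ( A B / B A and A C / C A ), while B C stays unmatched.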
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/576682", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/402700/" ] }
576,701
I like to disable all locale specific differences in shell scripts. What is the preferred way to do it? LANG=C or LC_ALL=C
LANG sets the default locale, i.e. the locale used when no more specific setting ( LC_COLLATE , LC_NUMERIC , LC_TIME etc.) is provided; it doesn’t override any setting, it provides the base value. LC_ALL on the other hand overrides all locale settings. Thus to override scripts’ settings, you should set LC_ALL . You can check the effects of your settings by running locale . It shows the calculated values, in quotes, for all locale categories which aren’t explicitly set; in your example, LANG isn’t overriding LC_NUMERIC , it’s providing the default value. If LC_ALL and LC_NUMERIC aren’t set in the environment, the locale is taken from LANG , and locale shows that value for LC_NUMERIC , as indicated by the quotes. See the locales manpage and the POSIX definitions of environment variables for details. See also How does the "locale" program work?
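You can see this interplay directly (a short illustration, output abbreviated; the exact locale names depend on your system): $ LANG=en_US.UTF-8 locale | head -n 3 LANG=en_US.UTF-8 LC_CTYPE="en_US.UTF-8" LC_NUMERIC="en_US.UTF-8" The quoted values are derived from LANG ; running LC_ALL=C locale instead shows every LC_* category forced to C.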
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/576701", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7167/" ] }
576,785
I'm trying to redirect the sound of a program (in my case, ffmpeg) to a source, so I can play the sound in question in Zoom (video conference software). The classic way of doing this would be to select, as a sound source in Zoom, the SinkName.monitor source, since every sink comes with a monitor. But : 1) Zoom doesn't list the monitor sources in its microphone dropdown. I've tried setting the monitor's device.class property to "sound" (instead of "monitor") to trick Zoom into accepting it, to no avail. 2) Zoom seems to refuse to let you set the source yourself from pavucontrol. In the Recording tab, when I try to set myself another source to be listened to by Zoom, no matter what I choose, my choice is ignored. The dropdown option doesn't even change. I've read somewhere that creating ~/.alsoftrc and writing "[pulse]" (new line) "allow-moves=yes" could help, but it didn't do anything for me. Therefore, I'm trying to set my own source, and "redirect sound to it". I created a null sink, a null source, and a loopback, but upon opening pavucontrol, I understood that I probably got it backwards; a loopback appears to be used to redirect a source to a sink, not the other way round (pavucontrol says, under Playing: "Loopback of MySource on [...] MySink", MySink being the value of the dropdown list on the right). Is there a way to achieve what I'm trying to do? Either: a) Modify a monitor so as to make it look like a normal microphone, or b) Redirect the sound of a sink to a source ? Thanks.
Found another way to do it: module-pipe-source. pactl load-module module-pipe-source source_name=virtualmic file=/tmp/virtualmic format=s16le rate=44100 channels=1 Then: ffmpeg -re -i movie.mkv -f v4l2 /dev/video2 -f s16le -ar 44100 -ac 1 - > /tmp/virtualmic (Note that this also fakes a webcam, using the v4l2loopback module)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/576785", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164393/" ] }
576,838
I am currently trying to automate updating the text of a file, titled original_file.txt. Imagine the file looks like the following: common_text### REPLACE EVERYTHING AFTER THIS LINE ###text_that_willbe_removedafter_the_command This file will be updated by removing all text after "Replace everything after this line", and replacing it with the text in the file replacement_file.txt . For the sake of the post, imagine that replacement_file.txt has the following text: testing123this_is_the_replacement_text From what I've been able to find with sed, I can only figure out how to edit the rest of the line after a certain phrase. I want to replace the text in original_file.txt after the replacement phrase with all of the text from replacement_file.txt (I want to keep the replace line text for future updates). original_file.txt should look like this at the end: common_text### REPLACE EVERYTHING AFTER THIS LINE ###testing123this_is_the_replacement_text Thanks in advance!
Using sed : sed -n -e '1,/^### REPLACE EVERYTHING AFTER THIS LINE ###$/{ p; d; }' \ -e 'r replacement_file.txt' \ -e 'q' original_file.txt The three sed blocks do this: The first block prints all lines from line 1 to the line with the special contents. I print these lines explicitly with p and then invoke d to force a new cycle to start (" print; next " in awk ). After the initial lines have been outputted by the first block, the second block outputs the contents of the extra file. The editing script is then terminated. Ordinarily, q in the third block would output the current line before quitting (this would be the line in the example data reading text_that_will ), but since sed is invoked with -n , this default outputting of a line at the end of a cycle is inhibited. The result of the above command, given your data, is common_text### REPLACE EVERYTHING AFTER THIS LINE ###testing123this_is_the_replacement_text To update the original file, you could use sed -i ... , or redirect the output to a new file that you then replace the original with: sed ... original_file.txt >original_file.txt.new &&mv original_file.txt.new original_file.txt
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/576838", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/402841/" ] }
577,050
I have installed Debian 10 remotely on a server from a live netinstal iso USB over KVM, but then I encounter this strange problem: # fdisk -lbash: fdisk: command not found However if I use /sbin/fdisk -l , the command executes with no issues. I'm wondering what has caused this and how can I fix it?
You have to add /sbin to your PATH: vagrant@stretch:~$ PATH="/sbin:$PATH"vagrant@stretch:~$ command -v fdisk/sbin/fdisk And use fdisk with sudo: vagrant@stretch:~$ sudo fdisk -lDisk /dev/sda: 19.8 GiB, 21265121280 bytes, 41533440 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: dosDisk identifier: 0xa0fd0b1aDevice Boot Start End Sectors Size Id Type/dev/sda1 * 2048 39438335 39436288 18.8G 83 Linux/dev/sda2 39440382 41531391 2091010 1021M 5 Extended/dev/sda5 39440384 41531391 2091008 1021M 82 Linux swap / Solaris
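To make the change permanent for your login shell (a sketch, assuming it reads ~/.profile ): echo 'PATH="/sbin:$PATH"' >> ~/.profile Then log out and back in, or run . ~/.profile in the current session.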
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/577050", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/346985/" ] }
577,058
I want to use python2.7 with virtualenv . I recently upgraded from debian 8 to debian 10. I originally had python2.7 and pip installed on debian 8, but maybe something happened during the installation and now I don't have pip . But I do still have python2.7 installed: $ python --versionPython 2.7.16 So I just installed pip like so: $ cd /tmp$ wget https://bootstrap.pypa.io/get-pip.py$ python get-pip.py And now I can see that I have pip installed: $ which pip/home/me/.local/bin/pip$ pip --versionpip 20.0.2 from /home/me/.local/lib/python2.7/site-packages/pip (python 2.7) Firstly, is this where pip should be installed? Under my home directory? I am the only user on this PC, but I'm not sure if pip should be in /usr/share/ or somewhere more public for it to work properly? Should I have used sudo python /tmp/get-pip.py to install pip ? I don't plan to run python as root, but apt always requires root for installations, so maybe installing pip should have too? The documentation did not specify. Anyway, next I tried to update pip to the latest version and install virtualenv : $ pip install -U pipDEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-supportDefaulting to user installation because normal site-packages is not writeableRequirement already up-to-date: pip in ./.local/lib/python2.7/site-packages (20.0.2)$ pip install virtualenvDEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-supportDefaulting to user installation because normal site-packages is not writeableRequirement already satisfied: virtualenv in ./.local/lib/python2.7/site-packages (15.1.0) All seems good. But when I try and check which version of virtualenv I have, it fails: $ virtualenv --versionTraceback (most recent call last): File "/usr/local/bin/virtualenv", line 6, in <module> from virtualenv.__main__ import run_with_catchImportError: No module named __main__ And if I try and use virtualenv it always throws up these errors. So overall, my question is to how correctly install python2.7 , pip and virtualenv on debian 10. I don't mind uninstalling everything and starting again if that is what it takes. As instructed by Stephen Kitt in the answer below, I have tried uninstalling the versions of pip and virtualenv that I previously installed with get-pip.py , however this gives some new errors. I will explain exactly what I have done... First uninstall virtualenv : $ pip uninstall virtualenvDEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. A future version of pip will drop support for Python 2.7. 
More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-supportFound existing installation: virtualenv 15.1.0Uninstalling virtualenv-15.1.0: Would remove: /home/me/.local/bin/virtualenv /home/me/.local/lib/python2.7/site-packages/virtualenv-15.1.0.dist-info/* /home/me/.local/lib/python2.7/site-packages/virtualenv.py /home/me/.local/lib/python2.7/site-packages/virtualenv_support/*Proceed (y/n)? y Successfully uninstalled virtualenv-15.1.0 Seems fine. Then uninstall pip: $ python -m pip uninstall pipDEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-supportFound existing installation: pip 20.0.2Uninstalling pip-20.0.2: Would remove: /home/me/.local/bin/pip /home/me/.local/bin/pip2 /home/me/.local/bin/pip2.7 /home/me/.local/lib/python2.7/site-packages/pip-20.0.2.dist-info/* /home/me/.local/lib/python2.7/site-packages/pip/*Proceed (y/n)? y Successfully uninstalled pip-20.0.2$ pip --versionbash: /home/me/.local/bin/pip: No such file or directory$ ls -a ~/.local/bin. .. chardetect easy_install easy_install-2.7 flake8 pew pipenv pycodestyle pyflakes virtualenv-clone That also seems fine. I'm not sure how bash knows that pip should be /home/me/.local/bin/pip since that file does not exist. Maybe bash has a cache? Anyway, next install pip and virtualenv from the debian 10 repo: $ sudo apt install python-pip virtualenvReading package lists... DoneBuilding dependency tree Reading state information... DoneThe following additional packages will be installed: python3-distutils python3-lib2to3 python3-virtualenvThe following NEW packages will be installed: python-pip python3-distutils python3-lib2to3 python3-virtualenv virtualenv But when I try and check what version of virtualenv I now have, it fails: $ virtualenv --versionTraceback (most recent call last): File "/usr/local/bin/virtualenv", line 6, in <module> from virtualenv.__main__ import run_with_catch File "/usr/local/lib/python2.7/dist-packages/virtualenv/__init__.py", line 3, in <module> from .run import cli_run File "/usr/local/lib/python2.7/dist-packages/virtualenv/run/__init__.py", line 6, in <module> from virtualenv.run.app_data import AppDataAction File "/usr/local/lib/python2.7/dist-packages/virtualenv/run/app_data.py", line 8, in <module> from virtualenv.util.lock import ReentrantFileLock File "/usr/local/lib/python2.7/dist-packages/virtualenv/util/lock.py", line 11, in <module> from virtualenv.util.path import Path File "/usr/local/lib/python2.7/dist-packages/virtualenv/util/path/__init__.py", line 3, in <module> from ._pathlib import Path File "/usr/local/lib/python2.7/dist-packages/virtualenv/util/path/_pathlib/__init__.py", line 42, in <module> from pathlib2 import PathImportError: No module named pathlib2
To avoid messing things up outside the virtualenvs, I recommend using the packaged versions: sudo apt install python-pip virtualenv (along with python3-pip for Python 3 support, if appropriate). You’ll probably need to remove the versions of pip and virtualenv installed in your home directory, and any others on your PATH outside /usr/bin . When setting up your virtualenvs, you can specify Python 2.7: virtualenv -p /usr/bin/python2.7 ... and virtualenv will do the right thing.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/577058", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5451/" ] }
577,120
Consider these shell commands $lsmy_app newlist note and $echo *my_app newlist note and $printf *my_app The first command ls will List information about the files (the current directory by default).The second command echo is a command that outputs the strings it is being passed as arguments. However when I type echo * it is outputting the same as ls . And printf * only giving me the first filename as output Why is echo interpreting the * like this?And why printf , being even stranger: with only outputting the first?
Why echo interprets the * to do the same as ls The answer is simple. It does not. echo does exactly as you say: it iterates through its arguments and outputs them with a space between them. So why do we see behaviour like ls ? This is because the shell substitutes the * with a parameter list matching all files (not starting with a . , unless dotglob is on). Then echo just does its thing. This glob substitution happens for all commands, as it is done by the shell, not by the command. So what about printf ? printf is print formatting. The first argument is the format. If you do printf "%s " * , then it is like echo . If the first file is hello%sworld , then you get: hellofile2world hellofile3world To explore more Try this. It will help you learn what is happening. (I am not suggesting that you use this code for anything real. Just use it for learning.) Try cat /proc/self/cmdline * | less -- It will show, at the start, what the command line looks like (after the shell has done its bit).
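You can make the individual arguments visible with printf (the filenames here are just the ones from the question): $ printf '%s\n' * my_app newlist note Each matched file arrives as a separate argument, printed on its own line; quoting the star ( printf '%s\n' '*' ) suppresses the glob and prints a literal * .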
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/577120", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/369707/" ] }
577,230
When using grep with wildcards as in grep -in github */* for each directory grep shows lots of messages like grep: dir1/dir2: Is a directory How to suppress these messages? Using the flag --exclude-dir does't work to my surprise. I'm using grep (BSD grep) 2.5.1-FreeBSD on MacOS.
-d skip will make grep skip directories: grep -in -d skip github */* According to this macOS man page that option should work for macOS grep. If it turns out that it doesn't work with the macOS grep, you can install the Homebrew macOS package manager and then use Homebrew to install the GNU version of grep , since GNU grep supports -d skip (though in that case you'll have to make the directory containing GNU grep the first in your PATH environment variable).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/577230", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/127813/" ] }
577,231
I am trying to cleanup a server which had a PowerHA configuration. I have stopped cluster ( smitty clstop ) and removed resource groups. How do I remove the caavg_private properly? hdisk5 00cc90476e2a44dd caavg_private active# lsvg -l caavg_privatecaavg_private:LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINTcaalv_private1 boot 1 1 1 closed/syncd N/Acaalv_private2 boot 1 1 1 closed/syncd N/Acaalv_private3 boot 4 4 1 open/syncd N/Apowerha_crlv boot 1 1 1 closed/syncd N/A# clstat -oclstat - HACMP Cluster Status Monitor-------------------------------------Cluster: <ClName> (1591186363)Wed Apr 1 03:57:10 2020State: UP Nodes: 2SubState: STABLENode: Node01 State: UPInterface: Node01 (0) Address: 10.x.x.xState: UPNode: Node02 State: UPInterface: Node02 (0) Address: 10.x.x.xState: UP
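caavg_private is the repository-disk volume group owned by CAA (Cluster Aware AIX), so it can't be removed with the usual varyoffvg / exportvg while the CAA cluster definition still exists, and your clstat output shows the cluster is still up, so first stop cluster services on all nodes. The usual cleanup (a sketch from memory; verify against the IBM documentation for your PowerHA level) is to remove the cluster definition itself: clmgr delete cluster at the PowerHA level, or rmcluster -n <clustername> at the CAA level. That releases the repository disk and removes caavg_private together with its caalv_* logical volumes. If the disk still carries stale headers afterwards, chdev -l hdisk5 -a pv=clear resets the PVID so the disk can be reused.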
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/577231", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204864/" ] }
577,309
suppose I write: find . -regex "[0-9]{4}-[0-9]{2}-[0-9]{2}-(foo|bar).csv.gz" -printf "%f\n" This command looks to me like it should work. I have iterated through the various regextype options and various regex formats, but am unable to get this sort of regex to work through find. Is there something simple I am getting wrong here regarding find and the regular expression parser?
Find's -name doesn't take regular expressions (this was used in the original version of the question). It takes shell globs, and that isn't a valid shell glob. You want to use the -regex test, but also need to tell it to use extended regular expressions or any other flavor that understands the {N} and foo|bar notations. Finally, unlike -name , the -regex test looks at the entire pathname, so you need something like this: $ find . -regextype posix-extended -regex ".*/[0-9]{4}-[0-9]{2}-[0-9]{2}-(foo|bar).csv.gz" -printf "%f\n"5678-34-56-bar.csv.gz1234-12-12-foo.csv.gz If you want to use -name , you could do: find . \( \ -name "[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]-foo.csv.gz" \ -o -name "[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]-bar.csv.gz" \ \) -printf "%f\n"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/577309", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288274/" ] }
577,379
I'm trying to use a Debian network cd to install it, via the advanced graphical installer. I want to use the full-disk encryption option, but I'm trying to install it on an older machine. I think it has about 1GB of RAM. I installed Pop!_OS on it, and it ran fast enough, as I could specify a decent swapfile size, but try as I might, I couldn't get it to find a graphics driver that would give me anything but a 640 screen resolution. (Debian found a great video driver, FWIW). When I use the guided setup for an encrypted whole disk on an LVM volume, it gives me a tiny 1.1 GB encrypted swap partition. It installs fine, and seems to run, but when I start to use the Software Center, pretty soon it just starts to grind and grind on the swapfile. If I try to shrink one of the big LVM partitions, I can't in gparted, since it tells me it is in use. I've tried command-line approaches, but they fail; and are extremely frustrating. If I boot on a Debian Live DVD, first must do sudo apt-get update , which takes a while, and then sudo apt-get crypt-setup and lvm2 in order to mount them. Nothing in the Debian docs told me how, but this Ubuntu page describes one method: Resize encrypted partitions If I pause anytime for very long during this process, the monitor goes dark, and when I press a key, the screen presents me with my nice desktop, only to grind on the live DVD for about 5 minutes, and finally show me a nice, colorful wallpaper, and grind for about 20 minutes or more, before showing me a prompt that it has been locked, and it asks me for a password; which I never knew, but found out that it is "live". If I tried actually carrying out the commands, I think it was on the e2fsck command, or the resize2fs didn't work. I forget the exact error. I tried making the partition smaller, using only 130GB of a 160GB HD, and then expanding it with the instructions from How to enlarge encrypted swap partition? (sic), but it failed on the mkswap command, since the volume was in use. I tried using the live DVD, but gave up in frustration after it locked the screen again. I went back to fight with the graphic installer, but once I told it to use guided full-disk encryption, it insisted on giving me a 1.1 GB swap partition. When I tried to reduce the size of the main LVM partition, it gave me the clever "No modifications can be made to this device ...", "In use by LVM Volume Group XXX". If I try to double-click on the 158 GB ext4 partition, there's nothing there that lets me reduce its size, to make room for a reasonably-sized swap-file. If I try to do a manual partition setup, and try to create partitions like it has with the guided LVM encrypted setup, I can't get them the same way. I think a 30 or 40 GB swapfile for Linux is a lot more realistic - especially since e.g. Linux Performance: Why You Should Almost Always Add Swap Space | Hacker News details how awfully Linux behaves when it is out of swapfile space: it's almost always a hard reboot. Open too many tabs in your browser, or run an application that uses too much data, and there you are. I'm sure it must be possible. I'd hate to think that an encrypted volume on Debian is simply impractical unless one has huge amounts of RAM. I'm sure it could be done from the command line, but I think it would be a longer timewaster than I have been on now (around two weeks on this so far), to set it up. 
I'm sure it's not impossible, but is there a way to set up an encrypted volume on Debian through the graphical installer, with maybe a few commands I can execute afterwards, or from the Debian Live DVD (which as above, lacks so much as a partition manager!!!)? Perhaps a Kali Linux live disk wouldn't give me so much heartache if I tried to use it after-the-fact. Maybe somebody can give me command-line instructions that will do this in Debian. The swapfile should be encrypted, too, of course. Otherwise, it would defeat the point of encryption. EDIT: I tried to manually create the partitions. I created a root partition, and made it bootable, although I'm not sure what size it should be. I suppose I could learn its size from a guided partition. I created a encrypted partition with all the remaining space on the disk. I then created a volume group within it. However, I wasn't able to create a partition within it, much less specify that that's where the bulk of the OS should be installed; nor create a swap partition within the volume group. It says the volume is part of a volume group already. Without a volume group, I was likewise not able to create partitions within the encrypted partition. EDIT 2: The solution was to use manual partition configuration in the graphic installer. I had to create the boot partition outside the encrypted volume, create an encrypted volume with the rest of the disk, make an LVM group in the encrypted volume thus created, and then create the root and other volumes within the volume group. I made a 30GB swap partition since Linux has no well-maintained truly dynamic swapfile manager (although I may try my luck with swapspace ); and Linux is useless once the swap partition is used up - worse than Windows when there is no more space on the disk for the swapfile. Without a huge swap partition, just open a lot of tabs, a really large spreadsheet and a really large log file, and you may be forced to do a hard reboot as the HD grinds and grinds and grinds. I'm sporting a whopping 1GB RAM on a Pentium Dual E2200 on my server computer! It'll make a nice small server in addition to my main desktop one. I chose not to install any desktop, but just the tools and servers; and then upon reboot, I did apt-get updateapt-get install plasma-desktopapt-get install sddm because I don't want the default bloatware. I made sure I can log in as su, since I can do su and log in on the console to install stuff system-wide (i.e., for all users; otherwise, I might be locked out of su access). The biggest problems is that Discover(=Software Center) runs too slow to be usable, and it only has picked up my MBs SPDIF audio output, not my regular audio ones yet. At least it doesn't grind the swap partition a huge amount when attempting to use Discover. However, I can install what I need via apt-get, and Konqueror and other stuff runs fine. Of course, as with many challenging problems, in retrospect, doing this doesn't seem as difficult as when I tried to do it myself without a guide. I guess that'll bring at least this extended round of distro hopping to an end :P.
How to manually partition your Debian install with full disk encryption I am going to outline the steps to take using the netinstall ISO on Virtual Box. These steps should work the same as any of the full Desktop environment installers with Desktops. (Do note that near the end of the netinstall , you can choose a Desktop environment of your choice.) I will also be including a link to the Debian Buster Installation Guide provided by the Debian Installer team. It covers everything needed to get started with Debian. I am going to include screenshots of each step, but will start at the partition disks section. If you have issues with any previous step in the installer, please refer to the installation guide . When it comes to manual partitioning, there are a few ways we can do this, and the choice is yours. Remember to do what makes sense in your environment and always check with the official documentation or the Debian wiki for advice. Step 1: Once you reach the partition disks menu, select Manual. Step 2: Select your drive. In my case, I have a 64 GB VBOX HARDDISK. In your case you could have a 1 TB hard drive, or a 128 GB SSD, or whatever. Pay special attention to what disk you select. You may see your flash drive and other attached disks. Make sure you are selecting the right disk! We will be formatting and encrypting this disk. All contents will be erased! Select continue after selecting the drive you are installing Debian on. Step 3: If you are using an entire disk for your Debian install you will need to format the drive. Select yes to create a new empty partition table. Select continue to move on. Step 4: If you wish to use whole disk encryption, select Configure Encrypted Volumes, and then continue. Step 5: Select Yes to agree to having the partitioning scheme written to disk and then continue. Step 6: Select Create encrypted volumes, then continue. Step 7: Select the devices to be encrypted. In this case it is my 64424MB drive. In your case it will be something different. Make sure you are selecting the correct drive. The encryption process will overwrite the disk. Step 8: I leave everything as the default except that I change the Bootable flag to On . You can customize this to better suit your environment. Step 9: Again, you will be asked if it is okay to write the current partitioning scheme. Select Yes and continue. Step 10: Back at the encrypted volumes menu, select Finish and continue. Step 11: If you selected the erase data option (a default) you will be asked if this is okay. Agree and continue. This process took me about 20 minutes to complete. Step 12: At this point you will create your encryption password. Enter it in twice and continue. Step 13: Now you will be back at the main Partition Disks menu. The next step is to configure the Logical Volume Manager (LVM). Select that and continue. You will then be asked to write the current partitioning scheme before you continue; agree and continue. Step 14: Now we are at the LVM configuration menu. Select Create Volume group and continue. At the next screen you will be prompted to name your new volume group. Choose a name that works best for you. I used vg-1 . In the future you may be installing to a machine that has many volume groups. Just use something that you can recognize as the volume group for this Debian install. Step 15: The next step is to select the partition or disk that your physical volume will be taking up. Select your encrypted volume and continue.
Step 16: After we have configured a physical volume, we need to create logical volumes. Step 17: When creating a logical volume, you need to select a volume group, and give the logical volume a name and a size. This is going to be a boot partition so I have named it and sized it accordingly. Note that for gigabytes you use a G. 1 gigabyte is more than enough for a boot partition. I will cover why I chose each partition size later. Step 18: Here I am showing the LVM configuration for my virtual machine. I like to have a 1 GB or larger boot (you certainly don't need it larger than 1 GB), and separate root and home partitions. In this case, as it is a virtual machine, I have a smaller home than root. If you plan on saving a lot of files, or using this install as your personal or work computer, make sure to size your home to be enough. If this was a 1 TB hard drive I would likely dedicate around 25% of the disk to root, have my swap and boot (appropriately sized), and the rest for home. So, roughly 200+ GB for root, 1-2 GB boot, possibly a 16 GB swap, and then 700+ GB for home. Swap is usually double your RAM, but with an 8 GB or more system you likely do not need swap to be bigger than your RAM. Swapping too much can thrash your disk, and when you use up 16 GB of RAM that really is a lot. You either need more physical RAM for what you are doing or need to figure out what is causing such high RAM usage. Swap was great when systems only had 64 megabytes and hard drives held 2 gigabytes (or 2,000 megabytes). Step 19: Now that we have configured LVM we need to actually configure the partitions on the drive. Back on the main partition disks menu, it should look something like this: Double click or select a partition (in this case boot) and configure it appropriately. As the screenshots show, I am configuring this partition to be an ext4 filesystem, mounted at /boot , and labeled as boot. You likely will also be using an ext4 filesystem. For each of your logical volumes (which you should have labeled!) do the same. Here is what you do for the swap one: Step 20: Now you are back at the main menu; it should look like this: Now you complete the installation process as you normally would. Remember to install GRUB on the drive with your /boot (if you are using only 1 disk, this is the disk your install is on). You can also set up a one to two GB boot partition OUTSIDE the encrypted LVM, either on a flash drive or on the disk but outside of the encrypted area. In Conclusion I have done this install many times over. I am very familiar with the Debian and other similar installers because I used to distro hop every month. You too can learn what works and does not work with a lot of practice. You do not have to have this identical. As you can see, you can size and label things however you want. However, root should be at least 20 GB (more if you install a lot of stuff), boot at least 500 MB, and swap roughly 2 times or equal to your RAM on systems with 8 GB or less. Some people forgo swap altogether, but what works for you is different than for anyone else. So practice this on a VM, or a spare laptop, or, if you are brave, the only computer you own. The disk size, what you are doing with that computer, what kind of computer it is, and what your needs are will determine what partitions you need or do not need. The simplest partitioning scheme would be 2 partitions: a swap partition, and the rest of the disk as / . Read the installation guide to know more about the Debian installation process.
Best of Luck!
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/577379", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/401011/" ] }
577,458
Say I have a window in kitty and press ctrl+shift+enter to open a new window. The new window always uses ~/ as current working directory. I'd like the new window to use the same working directory that the last window used. Is this possible?
In your kitty.conf , instead of using map ctrl+shift+enter new_window , use map ctrl+shift+enter new_window_with_cwd . Couldn't find this in the documentation but the author mentions it in this issue .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/577458", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/110197/" ] }
577,461
Whenever I enter my password, in e.g git , ssh , I want it to be remembered. So I have conifgured ssh-agent I also added AddKeysToAgent yes to ~/.ssh/config If I run ssh-add now every thing is fine. But without it, it doesn't work (I have to enter my password everytime) I know I could add ssh-add to an init script but that's not what I want since I don't need it every day. I thought that's what AddKeysToAgent yes is for? I start ssh-agent as a systemd service with SSH_AUTH_SOCK=$XDG_RUNTIME_DIR/ssh-agent.socket . update For testing I removed the service and SSH_AUTH_SOCK. Rebooted and ran eval $(ssh-agent) then logged in ssh pi@my-ip entered password and did it again. Still the password prompt. So it doesn't seem to be an issue with systemd. update 2 Moving AddKeysToAgent yes to the first line fixed it with eval.
ok so the issue was the format of ~/.ssh/config . It was Host awesomehost.tld User user IdentityFile ~/.ssh/id_rsaAddKeysToAgent yes It needs to be either Host awesomehost.tld User user IdentityFile ~/.ssh/id_rsaHost * AddKeysToAgent yes or AddKeysToAgent yesHost awesomehost.tld User user IdentityFile ~/.ssh/id_rsa
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/577461", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/308214/" ] }
577,603
I would like to do something like this where on Friday, the output is for both conditions that match: #!/bin/bash#!/bin/bashNOW=$(date +"%a")case $NOW in Mon) echo "Mon";; Tue|Wed|Thu|Fri) echo "Tue|Wed|Thu|Fri";; Fri|Sat|Sun) echo "Fri|Sat|Sun";; *) ;;esac As the code above is written, the only output on Friday would be: Tue|Wed|Thu|Fri Desired output on Friday: Tue|Wed|Thu|Fri Fri|Sat|Sun I understand that normally, only the commands corresponding to the first pattern that matches the expression are executed. Is there a way to execute commands for additional matched patterns? EDIT: I am not looking for fall-through behavior , but that's also a nice thing to know about. Thanks steeldriver.
You can use the ;;& conjunction. From man bash : Using ;;& in place of ;; causes the shell to test the next pattern list in the statement, if any, and execute any associated list on a successful match. Ex. given $ cat myscript #!/bin/bashNOW=$(date -d "$1" +"%a")case $NOW in Mon) echo "Mon";; Tue|Wed|Thu|Fri) echo "Tue|Wed|Thu|Fri";;& Fri|Sat|Sun) echo "Fri|Sat|Sun";; *) ;;esac then $ ./myscript thursdayTue|Wed|Thu|Fri$ ./myscript fridayTue|Wed|Thu|FriFri|Sat|Sun$ ./myscript saturdayFri|Sat|Sun For more information (including equivalents in other shells) see Can bash case statements cascade?
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/577603", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15010/" ] }
577,663
Let's say I have the following trivial script, tmp.sh : echo "testing"stat .echo "testing again" Trivial as it is, it has \r\n (that is, CRLF, that is carriage return+line feed) as line endings. Since the webpage will not preserve the line endings, here is a hexdump: $ hexdump -C tmp.sh 00000000 65 63 68 6f 20 22 74 65 73 74 69 6e 67 22 0d 0a |echo "testing"..|00000010 73 74 61 74 20 2e 0d 0a 65 63 68 6f 20 22 74 65 |stat ...echo "te|00000020 73 74 69 6e 67 20 61 67 61 69 6e 22 0d 0a |sting again"..|0000002e Now, it has CRLF line endings, because the script was started and developed on Windows, under MSYS2. So, when I run it on Windows 10 in MSYS2, I get the expected: $ bash tmp.shtesting File: . Size: 0 Blocks: 40 IO Block: 65536 directoryDevice: 8e8b98b6h/2391513270d Inode: 281474976761067 Links: 1Access: (0755/drwxr-xr-x) Uid: (197609/ USER) Gid: (197121/ None)Access: 2020-04-03 10:42:53.210292000 +0200Modify: 2020-04-03 10:42:53.210292000 +0200Change: 2020-04-03 10:42:53.210292000 +0200 Birth: 2019-02-07 13:22:11.496069300 +0100testing again However, if I copy this script to an Ubuntu 18.04 machine, and run it there, I get something else: $ bash tmp.shtestingstat: cannot stat '.'$'\r': No such file or directorytesting again In other scripts with the same line endings, I have also gotten this error in Ubuntu bash: line 6: $'\r': command not found ... likely from an empty line. So, clearly, something in Ubuntu chokes on the carriage returns. I have seen BASH and Carriage Return Behavior : it doesn’t have anything to do with Bash: \r and \n are interpreted by the terminal, not by Bash ... however, I guess that is only for stuff typed verbatim on the command line; here the \r and \n are already typed in the script itself, so it must be that Bash interprets the \r here. Here is the version of Bash in Ubuntu: $ bash --versionGNU bash, version 4.4.20(1)-release (x86_64-pc-linux-gnu) ... and here the version of Bash in MSYS2: $ bash --versionGNU bash, version 4.4.23(2)-release (x86_64-pc-msys) (they don't seem all that much apart ...) Anyways, my question is - is there a way to persuade Bash on Ubuntu/Linux to ignore the \r , rather than trying to interpret it as a (so to speak) "printable character" (in this case, meaning a character that could be a part of a valid command, which bash interprets as such)? EDIT: without having to convert the script itself (so it remains the same, with CRLF line endings, if it is checked in that way, say, in git) EDIT2: I would prefer it this way, because other people I work with might reopen the script in Windows text editor, potentially reintroduce \r\n again into the script and commit it; and then we might end up with an endless stream of commits which might be nothing else than conversions of \r\n to \n polluting the repository. EDIT2: @Kusalananda in comments mentioned dos2unix ( sudo apt install dos2unix ); note that just writing this: $ dos2unix tmp.sh dos2unix: converting file tmp.sh to Unix format... ... will convert the file in-place; to have it output to stdout, one must setup stdin redirection: $ dos2unix <tmp.sh | hexdump -C00000000 65 63 68 6f 20 22 74 65 73 74 69 6e 67 22 0a 73 |echo "testing".s|00000010 74 61 74 20 2e 0a 65 63 68 6f 20 22 74 65 73 74 |tat ..echo "test|00000020 69 6e 67 20 61 67 61 69 6e 22 0a |ing again".|0000002b ... and then, in principle, one could run this on Ubuntu, which seems to work in this case: $ dos2unix <tmp.sh | bashtesting File: . 
Size: 20480 Blocks: 40 IO Block: 4096 directoryDevice: 816h/2070d Inode: 1572865 Links: 27Access: (1777/drwxrwxrwt) Uid: ( 0/ root) Gid: ( 0/ root)Access: 2020-04-03 11:11:00.309160050 +0200Modify: 2020-04-03 11:10:58.349139481 +0200Change: 2020-04-03 11:10:58.349139481 +0200 Birth: -testing again However, - aside from the slightly messy command to remember - this also changes bash semantics, as stdin is no longer a terminal; this may have worked with this trivial example, but see e.g. https://stackoverflow.com/questions/23257247/pipe-a-script-into-bash for example of bigger problems.
As far as I’m aware, there’s no way to tell Bash to accept Windows-style line endings. In situations involving Windows, common practice is to rely on Git’s ability to automatically convert line-endings when committing, using the autocrlf configuration flag. See for example GitHub’s documentation on line endings , which isn’t specific to GitHub. That way files are committed with Unix-style line endings in the repository, and converted as appropriate for each client platform. (The opposite problem isn’t an issue: MSYS2 works fine with Unix-style line endings, on Windows.)
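Concretely, each contributor on Windows/MSYS2 can run git config --global core.autocrlf input so that LF is kept in the work tree, or, better because it travels with the repository, you can commit a .gitattributes file with a line like *.sh text eol=lf so shell scripts are always checked out with Unix line endings regardless of individual client settings.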
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/577663", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8069/" ] }
577,677
Sometimes I need to reproduce issues that appear only in the customer's environment. I could manually set up virtual machines to sufficiently mirror their environment, but it would be very nice to find a semi-automated way to do this. In other words, I'm looking for something that will let me say, "Create an environment that runs this version of Linux, has this version of PHP installed", and so on. Then I hope to be able to log in to that environment and execute the reproduction steps. These environments would obviously be relatively short-lived, since once I've reproduced that particular issue, there's a chance I will never have to recreate the same environment again. That said, it would be nice if the environment configuration was in a format easy to version control, in case it would be needed again. Is there a technology suited to this type of use case? Things I have heard of that may be relevant Proxmox (seems overkill and insufficient on its own) Vagrant (could be insufficient on its own, might also need configuration management like Ansible) Docker (commonly used to run single applications, not recreate full OS environments) Are any of these good fits for this use case? Should I look into other options?
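Vagrant is probably the closest fit for full-OS reproductions: the Vagrantfile (plus any shell or Ansible provisioning it references) is plain text, so it version-controls naturally, and vagrant destroy gives you the short-lived teardown you want. A minimal sketch (the box name is just an example from Vagrant Cloud): vagrant init debian/buster64 vagrant up && vagrant ssh # reproduce the issue, then: vagrant destroy -f Pair it with Ansible or inline shell provisioning to pin the PHP version and other packages to match the customer. Docker is lighter and faster if the bug only depends on the application stack rather than the whole OS (init system, kernel, services); Proxmox is aimed at running a long-lived virtualization host, which is indeed overkill for throwaway reproductions.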
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/577677", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/339447/" ] }
577,726
When awk receives "0" as input, it behaves differently in some cases. Code below: var=$1echo ""; echo -n 'o/p of $1=$1 ==>'; echo $var | awk '$1=$1'echo "";echo -n 'o/p of {$1=$1;print} ==>';echo $var | awk '{$1=$1;print}'echo "";echo -n 'o/p of $1==$1 ==>';echo $var | awk '$1==$1'echo "";echo -n 'o/p of {$1==$1;print} ==>';echo $var | awk '{$1==$1;print}' The output with "0" (number zero) : [root@host ~]# sh /tmp/te.sh 0o/p of $1=$1 ==>o/p of {$1=$1;print} ==>0o/p of $1==$1 ==>0o/p of {$1==$1;print} ==>0[root@GORJALA ~]# The output with "1" (number one) : [root@host ~]# sh /tmp/te.sh 1o/p of $1=$1 ==>1o/p of {$1=$1;print} ==>1o/p of $1==$1 ==>1o/p of {$1==$1;print} ==>1[root@host ~]# Why is there a difference when I use var=0; echo $var | awk '$1=$1' and var=1; echo $var | awk '$1=$1' ? All numbers are working fine other than 0 . Versions: GNU bash, version 4.2.46 GNU Awk 4.0.2 coreutils-8.22-24.el7.x86_64
From The GNU Awk User’s Guide : An assignment is an expression, so it has a value—the same value that is assigned. Thus, ‘z = 1’ is an expression with the value one. So with echo 0 | awk '$1=$1' the pattern evaluates to 0 (false), while with echo 1 | awk '$1=$1' the pattern evaluates to 1 (true) and the default action print is executed
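A common workaround that illustrates the point: concatenate an empty string so the assignment's value becomes a string, and non-empty strings are true in awk: echo 0 | awk '$1=$1""' prints 0 , because the pattern now evaluates to the string "0" rather than the number 0.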
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/577726", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98471/" ] }
577,755
Is POSIX a description of how applications have implemented each specific part of UNIX in the past or is it a prescriptive norm of how UNIX must be implemented ? If descriptive, only the features that are common to all included implementations would be valid. While any implementation have not implemented a feature, that feature is "undefined". If prescriptive: On which theoretical framework is it based? Math? C language? Experience?
It's prescriptive de jure , but mostly descriptive de facto . POSIX is a set of specifications that implementations can be matched against, including both implementations that already exist when the document is published and future implementations. So it's prescriptive. In practice, POSIX started mostly as a common subset of existing implementations. So in this sense, it's mostly descriptive. But POSIX sometimes mandates new behavior. Most commonly, for features that existed in many implementations but with different interfaces (function names, command line options, etc.), POSIX has introduced several functions and utilities, such as pax (a replacement for tar and cpio , which were very different across Unix variants) and various posix_xxx functions . POSIX also introduced new constant and command line options; for example, for ps , “The -A option is equivalent to the BSD -g and the SVID -e . Because the two systems differed, a mnemonic compromise was selected.”. The rationale sections often explains why this or that feature was included, often mentioning which implementations already had a feature, or why a choice was made or not made between incompatible implementations.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/577755", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
577,862
How can I recursively cleanup all empty files and directories in a parent directory? Let’s say I have this directory structure: Parent/ |____Child1/ |______ file11.txt (empty) |______ Dir1/ (empty) |____Child2/ |_______ file21.txt |_______ file22.txt (empty) |____ file1.txt I should end up with this: Parent/ |____Child2/ |_______ file21.txt |____ file1.txt
This is a really simple one liner: find Parent -empty -delete It's fairly self explanatory. Although when I checked I was surprised that it successfully deletes Parent/Child1. Usually you would expect it to process the parent before the child unless you specify -depth . This works because -delete implies -depth . See the GNU find manual : -delete Delete files; true if removal succeeded. If the removal failed, an error message is issued. If -delete fails, find's exit status will be nonzero (when it eventually exits). Use of -delete automatically turns on the -depth option. Note these features are not part of the Posix Standard , but most likely will be there under many Linux Distribution. You may have a specific problem with smaller ones such as Alpine Linux as they are based on Busybox which doesn't support -empty . Other systems that do include non-standard -empty and -delete include BSD and OSX but apparently not AIX .
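A throwaway demonstration on the example tree from the question: mkdir -p Parent/Child1/Dir1 Parent/Child2 touch Parent/Child1/file11.txt Parent/Child2/file22.txt echo data > Parent/Child2/file21.txt; echo data > Parent/file1.txt find Parent -empty -delete find Parent Because children are processed first, Child1 itself becomes empty once file11.txt and Dir1 are gone, so it is deleted in the same pass, leaving exactly Parent , Parent/Child2 , Parent/Child2/file21.txt and Parent/file1.txt .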
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/577862", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/369707/" ] }
577,918
I am trying to write a sed command to substitute excessive spaces in a file. Each word should have only one space between words, but leading spaces and tabs should be left alone. So the file: This is an indented paragraph. The indentation should not be changed.This is the second line of the paragraph. Will become: This is an indented paragraph. The indentation should not be changed.This is the second line of the paragraph. I have tried variations of /^[ \t]*/!s/[ \t]+/ /g Any ideas would be appreciated.
$ sed 's/\>[[:blank:]]\{1,\}/ /g' file This is an indented paragraph. The indentation should not be changed.This is the second line of the paragraph. The expression I used matches one or several [[:blank:]] (spaces or tabs) after a word , and replaces these with a single space. The \> matches the zero-width boundary between a word-character and a non-word-character. This was tested with OpenBSD's native sed , but I think it should work with GNU sed as well. GNU sed also uses \b for matching word boundaries. You could also use sed -E to shorten this to sed -E 's/\>[[:blank:]]+/ /g' file Again, if \> doesn't work for you with GNU sed , use \b instead. Note that although the above sorts out your example text in the correct way, it does not quite work for removing spaces after punctuation, as after the first sentence in This is an indented paragraph. The indentation should not be changed.This is the second line of the paragraph. For that, a slightly more complicated variant would do the trick: $ sed -E 's/([^[:blank:]])[[:blank:]]+/\1 /g' file This is an indented paragraph. The indentation should not be changed.This is the second line of the paragraph. This replaces any non-blank character followed by one or more blank characters with the non-blank character and a single space. Or, using standard sed (and a very tiny optimization in that it will only do the substitution if there are two or more spaces/tabs after the non-space/tab), $ sed 's/\([^[:blank:]]\)[[:blank:]]\{2,\}/\1 /g' file This is an indented paragraph. The indentation should not be changed.This is the second line of the paragraph.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/577918", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/276578/" ] }
577,934
I'm connected to a server through ssh and want to run a process that will take a long time. I connect to the server by using ssh in my laptops terminal, but I want to be able to turn my laptop off but still have the progress running on the server. Since they're two separate computers, it seems like I should be able to do this, but I'm not sure if it's possible through ssh.
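Yes, but not with a plain foreground command: by default the remote shell's children receive SIGHUP when the ssh session ends. The usual approaches are to run the job inside a terminal multiplexer such as tmux or screen (run tmux , start the process, detach with Ctrl-b d , log out, and later run tmux attach from a new ssh session to check on it), or to start it immune to hangup with nohup long_running_command > output.log 2>&1 & and then log out. tmux / screen is generally the nicer option because you can reattach and keep interacting with the running program.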
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/577934", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/377053/" ] }
578,094
We have the following text file , this is configuration file: advertised.host.name: DEPRECATED: only used when advertised.listeners or listeners are not set. Use advertised.listeners instead. Hostname to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, it will use the value for host.name if configured. Otherwise it will use the value returned from java.net.InetAddress.getCanonicalHostName(). Type: string Default: node1 Valid Values: Importance: high Update Mode: read-onlyadvertised.listeners: Listeners to publish to ZooKeeper for clients to use, if different than the listeners config property. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used. Unlike listeners it is not valid to advertise the 0.0.0.0 meta-address. Type: string Default: null Valid Values: Importance: high Update Mode: per-brokeradvertised.port: DEPRECATED: only used when advertised.listeners or listeners are not set. Use advertised.listeners instead. The port to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the port to which the broker binds. If this is not set, it will publish the same port that the broker binds to. Type: int Default: 5500 Valid Values: Importance: high Update Mode: read-onlyauto.create.topics.enable: Enable auto creation of topic on the server Type: boolean Default: true Valid Values: Importance: high Update Mode: read-only... What we want is to convert the file above to be ini file as the following advertised.host.name=node1advertised.listeners=nulladvertised.port=5500auto.create.topics.enable=true... note - each parameter in text file is in the beginning of the file without space while the value is represented by the Default , any suggestion how to convert the text file to ini file with bash or awk or perl/python , etc
With awk : $ awk -F': ' '/^[^\t ]+:/{key=$1; next}; $1 ~ /^[\t ]+Default/{print key "=" $2}' fileadvertised.host.name=node1advertised.listeners=nulladvertised.port=5500auto.create.topics.enable=true
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/578094", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
578,221
I am searching for files in my home directory that were modified in the last two minutes that also contain a certain string. I tried with this command: find -type d -mmin -2 -ls | grep -Ril "mystring" It seems to work but apparently it prints only those files with the given string inside rather than the files last modified 2 minutes ago containing the string. It seems like the first part of the command is not executed.
You had a good attempt with your own suggestion find -type d -mmin -2 -ls | grep -Ril "mystring" This would have identified directories ( -type d ) that had been modified within the last two minutes rather than files ( -type f ). Piping the output of -ls to grep would usually have searched the generated file names for mystring . However, in this case the -R flag changes the behaviour of grep and it ignores your list of filenames, searching instead through every file at and below the current directory. So, let's split the problem into two parts Search for last modified files in the last 2 minutes in your home directory find ~ -type f -mmin -2 [Files] which contain a certain String grep -Fl 'certain String' {files...} Now you need to put them together. The {} is a placeholder for the filenames generated by the find from step 1, and the trailing + indicates that the {} can be repeated multiple times , i.e. several filenames find ~ -type f -mmin -2 -exec grep -Fl 'certain String' {} + Changing the grep to echo grep will show you what is being run by the find command; this can be a useful debugging technique: find ~ -type f -mmin -2 -exec echo grep -Fl 'certain String' {} + Please consider running man find and man grep to find out what the various options are, such as the -F and -l in grep -Fl , as otherwise you're not learning anything from the exercise you've been set; you're just copying an answer.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/578221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/404413/" ] }
578,365
I'm trying to write a command that can simultaneously (i) read from stdin and (ii) read from a pipe. This basic concept works in zsh , but not in bash . The following session illustrates the difference in behavior for the two shells: $ echo bar > bar$ zsh -fzsh-5.8$ echo foo | cat < barfoobarzsh-5.8$ exit$ bash --noprofile --norcbash-5.0$ echo foo | cat < barbar I can see that the above commands give cat two sources of stdin (the pipe and the redirect), so perhaps it's ambiguous how that should be handled. zsh seems to concatenate the two input streams, with the piped input consistently coming first. bash seems to simply drop the piped input. My questions are: Why do the two shells behave differently? Is there any way to force bash to behave like zsh ?
As you have noticed, the MULTIOS shell option in zsh is what makes this possible. There is no similar built in facility in the bash shell. In bash , you would get the same behavior (for this particular example; see Uncle Billy's comment below ) from echo foo | { cat; cat bar; } or echo foo | cat - bar Both of these right hand sides first read their standard inputs before reading bar .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/578365", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/404535/" ] }
578,506
I want to sort a tab-delimited file by a specific field while preserving the header. I'm using awk as described here sort and uniq in awk , but I can't figure out who to tell sort that the field separator is a tab. Toy data: $ echo -e "head_1\thead_2\thead_3" > file.tsv$ echo -e "aaa zzz\tc\t300" >> file.tsv$ echo -e "bbb yyy ooo\ta\t100" >> file.tsv$ echo -e "ccc xxx nnn\tb\t200" >> file.tsv$ column -ts $'\t' file.tsvhead_1 head_2 head_3aaa zzz c 300bbb yyy ooo a 100ccc xxx nnn b 200$ awk -F'\t' 'NR==1; NR>1 { print | "sort -k2" }' file.tsv | column -ts $'\t' head_1 head_2 head_3ccc xxx nnn b 200 ## note these data are sorted bbb yyy ooo a 100 ## based on the xxx/yyy/zzz aaa zzz c 300 ## not the a/b/c When I try to explicitly tell sort the the field separator is a tab, I get this error, which I believe is related to quoting issues: $ awk -F'\t' 'NR==1; NR>1 { print | "sort -k2 -t $'\t'" }' file.tsv | column -ts $'\t'sort: option requires an argument -- 't'Try 'sort --help' for more information.head_1 head_2 head_3 How do I specify the column separator for sort inside `awk? Thanks SE's web interface is doing a better job of syntax highlighting than Notepad++; here are a couple of things I've tried: $ awk -F'\t' 'NR==1; NR>1 { print | "sort -k2 -t $'$'\t''" }' file.tsv | column -ts $'\t'head_1 head_2 head_3aaa zzz c 300bbb yyy ooo a 100ccc xxx nnn b 200$ awk -F'\t' 'NR==1; NR>1 { print | "sort -k2 -t $'\t'" }' file.tsv | column -ts $'\t'sort: option requires an argument -- 't'Try 'sort --help' for more information.head_1 head_2 head_3$ awk -F'\t' 'NR==1; NR>1 { print | "sort -k2 -t "'$'\t''"" }' file.tsv | column -ts $'\t'sort: option requires an argument -- 't'Try 'sort --help' for more information.head_1 head_2 head_3$ awk -F'\t' 'NR==1; NR>1 { print | "sort -k2 -t "'$'\t'' }' file.tsv | column -ts $'\t'sort: option requires an argument -- 't'Try 'sort --help' for more information.head_1 head_2 head_3
choose one of these options: ... | "sort -k2 -t \\\t " ... | "sort -k2 -t \"\t\" " ... | "sort -k2 -t'\''\t'\'' " ... | "sort -k2 -t \047\011\047" ## preferred \011 is the octal ASCII code for the Tab character / \047 for the single quote ' awk -v q="'" ... { print | "sort -k2 -t " q "\t" q }' awk -v tb="'\t'" ... { print | "sort -k2 -t " tb }' awk -v tb=$'\t' ... { print | "sort -k2 -t \"" tb "\"" }' awk -v tb=$'\t' -v q="'" ... { print | "sort -k2 -t " q tb q }' and many more …; read Shell Quoting Issues in awk ; see also Escape Sequences in awk
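To put one of these to work on the file from the question, here is a complete sketch using the \"\t\" form (awk expands \t to a literal tab inside its string, so the sort command actually receives a real tab as the delimiter):

awk -F'\t' 'NR==1; NR>1 { print | "sort -t \"\t\" -k2,2" }' file.tsv

The header is printed immediately by NR==1 , while the sorted body is only flushed when awk exits and closes the pipe, so it always appears below the header.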
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/578506", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/366162/" ] }
578,511
There are 3 or 4 files in the same directory, as below; AAA.360p.mp4AAA.450p.mp4AAA.720p.mp4AAA.1080p.mp4 The filenames of those files are almost the same except for the expression of the frame resolution. (AAA is an example file name, to show that the filenames are the same except for the frame resolution.) And there are several dots within AAA. For example, a filename is like this; Interesting.Comedy.E10.200406.450p.mp4Interesting.Comedy.E10.200406.720p.mp4Interesting.Comedy.E10.200406.1080p.mp4 The sizes of the files are different (file size : 360p < 450p < 720p < 1080p) → this is always true. I'd like to keep only one file, the one that is largest in size, and delete all the other files. The location of the directory is /volume1/video/ The command will only be run on the Synology (using the task scheduler in the control panel). If you explain it to me, please include the directory path for my case as above (because I may not be able to apply the code you recommend otherwise, for lack of understanding. I apologize)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/578511", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/402727/" ] }
578,522
How to sort a zsh array by modification date? files=( ~/a ~/b ~/c )# how to sort files by date? PS: Here is my exact usecase, ( fz is almost fzf ) v () { local files files=() command rg '^>' ~/.viminfo | cut -c3- | while read line do [ -f "${line/\~/$HOME}" ] && files+="$line" done test -f ~/.emacs.d/.cache/recentf && { command rg --only-matching --replace '$1' '^\s*"(.*)"$' ~/.emacs.d/.cache/recentf | while read line do [ -f "$line" ] && files+="$line" done } files="$(<<<"${(F)files}" fz --print0 --query "$*")" || return 1 files="${files//\~/$HOME}" local ve="$ve" test -z "$ve" && ! isSSH && ve=nvim "${ve:-vim}" -p ${(0@)files} : '-o opens in split view, -p in tabs. Use gt, gT, <num>gt to navigate tabs.'}
It's a lot easier if you sort the list when you build it . But if you can't… A classic approach is to add the sort criterion to the data, then sort it, and then remove the added cruft. Build an array containing timestamps and file names, in a way that is unambiguous and with the timestamps in a format that can be sorted lexicographically. Sort the array (using the o parameter expansion flag ), then strip the prefix. You can use the stat module to obtain the file's modification time. zmodload zsh/statfor ((i=1; i<$#files; i++)); do times[$i]=$(stat -g -F %020s%N +mtime -L -- $files[$i]):$files[$i]; donesorted=(${${(o)times}#*:}) The %N format to zstat (to get timestamps at nanosecond resolution) requires zsh ≥5.6. If your zsh is older, remove it and the code will still work, but comparing timestamps at a 1-second resolution. Many filesystem have subsecond resolution, but I don't think you can get it with the zsh stat module in older versions of zsh. If your zsh is too old, you can get more precise timestamps with the stat utility from GNU coreutils . If you have it, you probably have the other GNU coreutils as well, so I'll use those. GNU coreutils are typically present on non-embedded Linux, but might not be on BSD or macOS. On macOS, you can install them using brew . If the GNU coreutils aren't part of the base operating system, you may need to change stat to gstat , sort to gsort and cut to gcut . if (($#files)); then sorted=(${(0@)"$(stat --printf='%040.18Y:%n\0' "$files[@]" | sort -rz | cut -z -d':' -f2-)"})else sorted=()fi An alternative zsh approach is to build a pattern that includes all the files in $files and more. Sort the files matching this pattern, then filter it to include only the desired files. You do need to build the whole pattern for more_files , which may not always be practical. more_files=(~/*(Om))sorted=(${more_files:*files})
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/578522", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/282382/" ] }
578,536
E: You don't have enough free space in /var/cache/apt/archives/.root@kali:~# df -HFilesystem Size Used Avail Use% Mounted onudev 2.0G 0 2.0G 0% /devtmpfs 406M 7.0M 399M 2% /run/dev/sda6 12G 11G 480M 96% /tmpfs 2.1G 78M 2.0G 4% /dev/shmtmpfs 5.3M 0 5.3M 0% /run/locktmpfs 2.1G 0 2.1G 0% /sys/fs/cgroup/dev/sda8 58G 114M 55G 1% /hometmpfs 406M 37k 406M 1% /run/user/0
If you're getting this error in a Docker container - it helped me to do a docker system prune
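A sketch of the commands involved, using standard Docker CLI flags ( -a and --volumes make the prune much more aggressive and delete data, so review what is listed first):

docker system df                   # see what containers/images/volumes use the space
docker system prune                # remove stopped containers, dangling images, unused networks
docker system prune -a --volumes   # also remove all unused images and volumes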
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/578536", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/404715/" ] }
578,537
I'm trying to pipe into something that will return only the first "paragraph" or "section" separated by a blank line. I thought I could use awk or sed to get a range as per some other answers but it doesn't seem to work. $ cat txtPackage: plasma-desktopArchitecture: amd64Version: 4:5.12.9.1-0ubuntu0.1Supported: 3yPackage: plasma-desktopArchitecture: amd64Version: 4:5.12.4-0ubuntu1Supported: 3y$ cat txt |awk '/^Package:/,/^$/'Package: plasma-desktopArchitecture: amd64Version: 4:5.12.9.1-0ubuntu0.1Supported: 3yPackage: plasma-desktopArchitecture: amd64Version: 4:5.12.4-0ubuntu1Supported: 3y Should it not return only the first "section"? (as per: Grep starting from a fixed text, until the first blank line and https://www.unix.com/shell-programming-and-scripting/148692-awk-script-match-pattern-till-blank-line.html ) If I use grep -ve ^$ the blank lines get removed, so there's no special characters. If I try to extract a different part, I get the parts from both "sections": $ cat txt |awk '/^Package:/,/^Version:/'Package: plasma-desktopArchitecture: amd64Version: 4:5.12.9.1-0ubuntu0.1Package: plasma-desktopArchitecture: amd64Version: 4:5.12.4-0ubuntu1 If I use sed -n '/^Package:/,/^$/p' or sed -n '/^Package:/,/^Version:/p' I get the same results as the equivalent awk. How do I get awk or sed to stop after the first occurrence?
This is exactly why awk has a paragraph mode: $ awk -v RS= 'NR==1' filePackage: plasma-desktopArchitecture: amd64Version: 4:5.12.9.1-0ubuntu0.1Supported: 3y and to print the 2nd record is just the obvious change of NR==1 to NR==2 : $ awk -v RS= 'NR==2' filePackage: plasma-desktopArchitecture: amd64Version: 4:5.12.4-0ubuntu1Supported: 3y Never use range expressions btw - they make code for trivial problems very slightly briefer than using a flag but then if your requirements change in the slightest require a complete rewrite or duplicate conditions. So any time you thing you might want to use /begin/,/end/ with sed or awk use /begin/{f=1} f{print} /end/{f=0} with awk instead and that gives you FAR more control on when/how to print the begin/end lines, etc.
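For example, a minimal sketch of that flag-based approach applied to this file, printing only the first record:

awk '/^Package:/{f=1} /^$/ && f{exit} f' file

The flag f is raised at the first Package: line, every line is printed while it is set, and the first blank line after that ends the program.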
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/578537", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109270/" ] }
578,573
How can I organize a command to print the results like this: USER | PACKAGE | CREATORresult - result - resultresult - result - resultresult - result - result I want to fetch these results from a file: /usr/local/users/*/user.conf There are several files of different users, for example: /usr/local/users/user1/user.conf/usr/local/users/user2/user.conf/usr/local/users/user3/user.conf... I'm using the wildcard character, because there are a lot of files, so I need to organize the display in tables. Each file has content like this: username=testpackage=basiccreator=admin I tried this, just to try to fetch one value, but I don't know how to extend the command. grep package /usr/local/users/*/user.conf
Another variant: awk -F'=' ' BEGIN{ print "USER | PACKAGE | CREATOR" } $1=="username" || $1=="package" || $1=="creator"{ printf $2 (++cnt%3 ? " - " : RS) }' /usr/local/users/*/user.conf | column -t Edit This sorts the table by package and allows a different order of the three elements in the user.conf files: { echo "USER | PACKAGE | CREATOR" awk -F'=' ' $1=="username"{ a[0]=$2 } $1=="package" { a[1]=$2 } $1=="creator" { a[2]=$2 } 0 in a && 1 in a && 2 in a{ print a[0] " - " a[1] " - " a[2] delete a } ' /usr/local/users/*/user.conf | sort -k3,3} | column -t
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/578573", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/395064/" ] }
578,595
When writing a long pipe it is usually clearer to separate it in two lines. This long command line: ruby -run -e httpd -- -p 5000 . 2>&1 | tee >(grep -Fq 'WEBrick::HTTPServer#start' && open localhost:5000) Could be divided as: ruby -run -e httpd -- -p 5000 . 2>&1 \ | tee >(grep -Fq 'WEBrick::HTTPServer#start' && open localhost:5000) Or: ruby -run -e httpd -- -p 5000 . 2>&1 | tee >(grep -Fq 'WEBrick::HTTPServer#start' && open localhost:5000) In short: command1 \ | command2 Or: command1 | command2 I realize that this might be a style (opinion) issue, but: Is there a preferred way, and, if so, why?
Ask yourself: what would this do? command1 \ | command2 Can't see the difference? Neither can I, but the shell can. Look closely: there is a space after the \ . This stops the newline from being escaped. Therefore use the other form, as it is safer. Shown here with the same error (a space after the | in this case). But it does not cause a bug. command1 | command2
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/578595", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
578,612
I have a number of files that have the following format: Num name1 name2 value So, for example, the file called in1.sp looks like this: C1 in1 out1 3.9e-12 C2 foo st1/in1 1.2e-14 C3 foo2 in1 8.3e-14 ... and so on. In all lines, one of the name columns contains the filename. I want to remove the entire entry if it contains the filename, even if there is other text. So if above is the input, the desired output would be: C1 out1 3.9e-12 C2 foo 1.2e-14 C3 foo2 8.3e-14 ... Thanks!
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/578612", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/404782/" ] }
578,629
tl;dr I want to change between headphones/amplifier without having to change any configuration. How to do that? In Windows, every audio output was available at the same time, so I could send audio to an external amplifier (connected to the rear socket) and/or to headphones (connected to the front socket) without any configuration change. In Debian, I needed to install pavucontrol and change the "Output Devices", "Built-in Audio Analog Stereo", "Port" to "Headphones (unplugged)" instead of the default "Line Out (plugged in)", to use headphones. With the default selection, sound only goes to the rear socket. After change to "Headphones (unplugged)", sound goes to both rear and front sockets. If sound may go to both, why isn't this the default option in every Linux system?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/578629", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/134202/" ] }
578,703
I have the next problem: Directory Example1 has three files: Example1 , Things and Pictures . Directory Example2 has three files: Example2 , Example3 and Pictures . I need a list showing only the files that match the directory name, this is: Example1 and Example2 . I have tried with diff, find, locate and ls... but I have not achieved anything.
Since there are generally fewer directories than files, let's look for all directories and then test whether they contain the required filename. find . -type d -exec sh -c ' for dirpath do filepath="$dirpath/${dirpath##*/}" [ -f "$filepath" ] && printf "%s\n" "$filepath" done' sh {} + This would print the pathnames of all regular files (and symbolic links to regular files) that is located in a directory that has the same name as the file. The test is done in a short in-line sh -c script that will get a number of directory pathnames as arguments. It iterates over each directory pathname and constructs a file pathname with the name that we're looking for. The ${dirpath##*/} in the code could be replaced by $(basename "$dirpath") . For the given example directory structure, this would output ./Example1/Example1./Example2/Example2 To just test for any name , not just regular files, change the -f test to a -e test.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/578703", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/404882/" ] }
578,706
How can I disable and turn off the fingerprint reader embedded in my Dell XPS 13 (7390)? The device is a HTMIcroelectronics Goodix Fingerprint Device, USB ID 27c6:5385. Powertop warns that it is constantly on (although I never use it) and it is unnecessarily draining the battery of the laptop.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/578706", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14861/" ] }
578,791
I have a problem with my script. Prelude Firstly, I have a list, a 100-line file like this: 100;TEST ONE101;TEST TWO...200;TEST HUNDRED Each line has 2 arguments. For example, the first line's arguments are: "645", "TEST ONE". So the semicolon is a delimiter. I need to put both arguments in two variables. Let's say they will be $id and $name. For each line, the $id and $name values will be different. For example, for the second line $id = "646" and $name = "TEST TWO". After that I need to take the sample file and change predefined keywords to the $id and $name values. The sample file looks like this: xxx is yyy And as a result I want to have 100 files with different content. Each file must contain the $id and $name data from one line. And it must be named by its $name value. Here is my script: #!/bin/bash -xrm -f output/*for i in $(cat list) do id="$(printf "$i" | awk -F ';' '{print $1}')" name="$(printf "$i" | awk -F ';' '{print $2}')" cp sample.xml output/input.tmp sed -i -e "s/xxx/$id/g" output/input.tmp sed -i -e "s/yyy/$name/g" output/input.tmp mv output/input.tmp output/$name.xml done So, I just try to read my list file line by line. For every line I get the two variables and then use them to replace the keywords (xxx and yyy) in the sample file and then save the result. But something went wrong. As a result I have only 1 output file. And the debug output looks bad. Here is the debug window with only 2 lines in my list file. I got only one output file. The file name is just "TEST" and it contains the string: "101 is TEST". Two files were expected: "TEST ONE" and "TEST TWO", and they should contain "100 is TEST ONE" and "101 is TEST TWO". As you can see, the second variable has a space in it ("TEST ONE" for example). I think the issue is related to the space character, but I don't know why. I set the awk -F parameter to ";", so awk must interpret only the semicolon as a separator! What did I do wrong?
If I understand you correctly, you can use a while loop and variable expansion while IFS= read -r line; do id="${line%;*}" name="${line#*;}" cp sample.xml output/input.tmp sed -i -e "s/xxx/$id/g" output/input.tmp sed -i -e "s/yyy/$name/g" output/input.tmp mv output/input.tmp output/"$name".xmldone < file As proposed by @steeldriver , here's a (more elegant) option: while IFS=';' read -r id name; do cp sample.xml output/input.tmp sed -i -e "s/xxx/$id/g" output/input.tmp sed -i -e "s/yyy/$name/g" output/input.tmp mv output/input.tmp output/"$name".xmldone < file
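As a design note, the temporary file can be avoided entirely, since sed accepts several -e expressions in one invocation and can write straight to the target file. A sketch with the same file layout as above:

while IFS=';' read -r id name; do
    sed -e "s/xxx/$id/g" -e "s/yyy/$name/g" sample.xml > output/"$name".xml
done < file

As always with sed , this assumes the values in $id and $name contain no / , & or backslash characters, since those are special on the right-hand side of s/// .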
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/578791", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177068/" ] }
578,792
man bash reads: Special Parameters The shell treats several parameters specially. These parameters may only be referenced; assignment to them is not allowed. Well, shift changes both @ and * (in the same way, sure). And set -- one two three actually assigns whatever I want to @ and * . So am I misinterpreting what the man pages says?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/578792", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164309/" ] }
578,898
I have this file: header: title: hello version: 1.2.3 I want to extract the version number. My original attempt was grep ^\s+version:\s+(\d\.\d\.\d) file.txt but that produced empty output. After suggestions in the comments, I tried grep -P '^\s+version:\s+(\d\.\d\.\d)' file.txt but I get " version: 1.2.3" instead of "1.2.3". What am I doing wrong?
grep uses Posix Basic Regex ( BRE ) by default which does not support your notation. Use grep -E to use Posix Extended Regex ( ERE ) and grep -P to use Perl Compatible Regex ( PCRE ) if available. Your notation works with grep -P : grep -P '^\s+version:\s+(\d\.\d\.\d)' file.txt This works with BRE : grep '^ \+version: \+\([0-9]\.[0-9]\.[0-9]\)' file.txt Output: version: 1.2.3 Note, that the capture group is not necessary here, as grep doesn't do anything with it. If you want the version nr only., use \K and -o option: grep -Po '^\s+version:\s+\K\d\.\d\.\d' file.txt Output: 1.2.3 With BRE , this is not possible, you will need to chain two grep commands: grep 'version: ' file.txt | grep -o '[0-9]\.[0-9]\.[0-9]' or use sed (credits @Kusalananda): sed -n 's/.*version: //p' file.txt
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/578898", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33514/" ] }
578,968
I have 4 tsv (tab separated) files that look like this: file_1: abc 1def 2ghi 3 file_2: abc 2ghi 3 file_3: def 1ghi 2jkl 4 file_4: ghi 3jkl 4 I want to join those files to get 1 tsv file like this: dataset file_1 file_2 file_3 file_4abc 1 2 def 2 4 ghi 3 3 2 3jkl 4 4 I have tried using awk $ awk ' BEGIN{OFS=FS="\t"} FNR==1{f = f "\t" FILENAME} NR==FNR{a[$1] = $2} NR!=FNR{a[$1] = a[$1] "\t" $2} END{printf "dataset%s\n", f; for(i in a) print i, a[i]} ' file_{1..4} This command works, but I got shifted values. Let's say the first and second columns have empty values and the third and fourth columns have the values 4 and 4; the output that I got from that command has the values 4 in the first and second columns, and empty values in the third and fourth columns. So I tried to join my tsv files separately using the awk command that I mentioned: first only file_1 and file_2 to get output_1 , then file_3 and file_4 to get output_2 . After that I used $ join output_1 output_2 to merge output_1 and output_2 , but I only got the values that exist in all 4 files. I lost the data that only exists in one file. I'd very much appreciate any advice. Thank you
$ cat tst.awkBEGIN { FS=OFS="\t" }{ datasets[$1]; fnames[FILENAME]; vals[$1,FILENAME] = $2 }END { printf "%s", "dataset" for (fname in fnames) { printf "%s%s", OFS, fname } print "" for (dataset in datasets) { printf "%s", dataset for (fname in fnames) { printf "%s%s", OFS, vals[dataset,fname] } print "" }}$ tail -n +1 file?==> file1 <==a 1b 2c 3==> file2 <==a 2c 3$ awk -f tst.awk file1 file2dataset file1 file2a 1 2b 2c 3 3 Add as many files to the list as you like.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/578968", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/399575/" ] }
579,068
As far as I can tell from the documentation of systemd , Wants= and WantedBy= perform the same function, except that the former is put in the dependent unit file and vice-versa. (That, and WantedBy= creates the unit.type.wants directory and populates it with symlinks.) From DigitalOcean: Understanding Systemd Units and Unit Files : The WantedBy= directive... allows you to specify a dependency relationship in a similar way to the Wants= directive does in the [Unit] section. The difference is that this directive is included in the ancillary unit allowing the primary unit listed to remain relatively clean. Is it really just about keeping a unit file "clean"? What is the best practice for using these two directives? That is, if service alpha "wants" service beta, when should I use Wants=beta.service in alpha.service and when should I prefer WantedBy=alpha.service in the beta.service ?
Functionally Wants is in the Unit section and WantedBy is in the Install . The init process systemd does not process/use the Install section at all. Instead, a symlink must be created in multi-user.target.wants . Usually, that's done by the utility systemctl which does read the Install section. In summary, WantedBy is affected by systemctl enable / systemctl disable . Logically Consider which of the services should "know" or be "aware" of the other. For example, a common use of WantedBy : [Install]WantedBy=multi-user.target Alternatively, that could be in multi-user.target: [Unit]Wants=nginx.service But that second way doesn't make sense. Logically, nginx.service knows about the system-defined multi-user.target, not the other way around. So in your example, if alpha's author is aware of beta, then alpha Wants beta. If beta's author is aware of alpha then beta is WantedBy alpha. To help you decide, you may consider which service can be installed (say, from a package manager) without the other being present. Config directories As another tool in your box, know that systemd files can also be extended with config directories: /etc/systemd/system/myservice.service.d/extension.conf This allows you to add dependencies where neither service is originally authored to know about the other. I often use this with mounts, where (for example) neither nginx nor the mount need explicit knowledge of the other, but I as the system administrator understand the dependency. So I create nginx.service.d/mymount.conf with Wants=mnt-my.mount .
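For example, a minimal sketch of such a drop-in, created from the shell ( mnt-my.mount is a placeholder for whatever unit name your mount point maps to):

sudo mkdir -p /etc/systemd/system/nginx.service.d
sudo tee /etc/systemd/system/nginx.service.d/mymount.conf <<'EOF'
[Unit]
Wants=mnt-my.mount
After=mnt-my.mount
EOF
sudo systemctl daemon-reload

After= is added alongside Wants= because Wants= only pulls the mount in; it does not by itself order nginx to start after the mount is ready.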
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/579068", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/130124/" ] }
579,087
When I typed ip route show : What does linkdown actually mean? Sometimes I can still see onlink . I also want to know what onlink means. Does it affect the routing priority? For example, the case of two default routes in the picture
linkdown is the status that will show for a route that is in the table and configured to go out through an interface that is in the DOWN state. You can see this by running: ip a and looking for the statuses of the interfaces. On my laptop I have wifi on and the Ethernet adapter unplugged so it shows: wlp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000enp0s25: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000 In my routing table I have a number of routes but I can add a couple garbage ones: sudo ip route add 192.168.123.0/24 dev enp0s25sudo ip route add 192.168.124.0/24 dev wlp3s0 Then my table will show linkdown for the ethernet route: 192.168.123.0/24 dev enp0s25 scope link linkdown 192.168.124.0/24 dev wlp3s0 scope link onlink means that the routing should "pretend that the nexthop is directly attached to this link, even if it does not match any interface prefix". So we can make a fake one of those in the table too: sudo ip route add 192.168.125.0/24 via 192.168.123.111 dev wlp3s0 onlink Which will now show up in the routing table: 192.168.123.0/24 dev enp0s25 scope link linkdown 192.168.124.0/24 dev wlp3s0 scope link 192.168.125.0/24 via 192.168.123.111 dev wlp3s0 onlink You can even get fancy and have both if you onlink to the down interface: 192.168.126.0/24 via 192.168.123.111 dev enp0s25 onlink linkdown
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/579087", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/405256/" ] }
579,088
I am working on a script to run on my Synology DS 1019+ that takes all the subfolders within a certain directory, creates the subfolders in another directory, then creates hard links of all the .mkv files within the subfolders in the newly created subfolders in the other location. However, if the subfolder already exists in the second location, then I want it to just create the hard link. Due to the nature of my files, the folder structure and files within said folders will be different for each instance this script will be used for. Is there a way to make this script loop so that each loop takes the next folder within Location A and performs the mkdir of each subfolder and hard link task (to the files within said subfolder)? Here is what I have currently: #! /bin/bashecho "Enter Movie Collection:"read MOVIE_DIRecho "Enter Bonus Feature Disc: "read BONUS_DIRcd "/volume1/Plex/Movies/$MOVIE_DIR/"for dir in */.; do if [[ ! -e "$dir"/*/ ]]; then mkdir "$dir"/*/ fi if [[ ! -d "$dir"/*/ ]]; then ln /volume1/Plex/"Bonus Feature Discs"/$BONUS_DIR/*/*.mkv -t ./"$dir"/*/. fidone I am not skilled at all when it comes to loops, especially nested loops, so I am not sure where to begin to troubleshoot this. Currently, instead of the folders in Location A being replicated (if not already existing) in Location B, and the files within the folders in Location A being hardlinked, I am getting an empty folder with a name of "_2X68P~X" within Location B's main directories (for example, in my test, instead of "TEST 1" getting a "featurette" folder with a hardlinked TEST.mkv within it, I just get that garbage data folder within "TEST 1" that is empty). I have attempted to use basename as well as dirname, but I have yet to find a way to have it loop so that it cycles through each folder within a directory. I have also attempted to use cd /the/directory/path/ "${PWD##*/}" EDIT: Over the course of the several hours after posting this, I did figure out a solution. The code I had above simply was not logically sound, thanks to too many areas that were not specific, which meant nothing would happen. I ended up having to restart from the ground up. Here is the code that I ended up with, and this code does do the job I need it to do. It might not be the most elegant method for doing this, but it does work well enough based on the tests I've run. #! /bin/bash#Ask the user to input the directories of both the bonus disc's files and where the movies are located that the bonus disc's contents will be needed.echo "Enter the name of the Movie Collection and press [ENTER]: "read MOVIE_DIRecho "Enter the name of the Bonus Feature Disc and press [ENTER]: "read BONUS_DIR#This goes to the location of the bonus disc specified by the end user. I believe this part is necessary for creating the text document below, but it might not be.cd "/volume1/Plex/Bonus Feature Discs/$BONUS_DIR/" || return#This creates a text document that has each directory within the specified Bonus Disc directory as a separate line ls -d -- */ >> /volume1/Plex/"Bonus Feature Discs"/output.txt echo ls -d -- */#This goes to the movie directory specified by the end user. This cd is definitely requiredcd "/volume1/Plex/Movies/$MOVIE_DIR/" || return#the for loop loops through every movie that resides in the movie collection folder for dir in *; do #this while loop reads the text document and will use each line from the document as a variable. while IFS=' ' read -r line; do name="$line" #A directory with the name of the line of the text document will be created in each movie's directory only if it doesn't already exist mkdir -p "$dir/$name" #this will create the hard links to every single video that resides in the folders within the bonus features disc into the corresponding folders for each movie if [[ ! -e "/volume1/Plex/Movies/$MOVIE_DIR/$name" ]]; then ln "/volume1/Plex/Bonus Feature Discs/$BONUS_DIR/$name"*.mkv -t ./"$dir/$name" fi done < "/volume1/Plex/Bonus Feature Discs/output.txt" doneecho "Linking Completed"echo $dir#finally, once all the work is complete, the script will delete the text document that is no longer neededrm "/volume1/Plex/Bonus Feature Discs/output.txt"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/579088", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/405255/" ] }
579,184
I'm trying to upgrade from Fedora 30 to 31 and I've successfully done these two steps: dnf upgrade --refreshdnf install dnf-plugin-system-upgrade However, when I do the next: dnf system-upgrade download --releasever=31 ... I get this: Before you continue ensure that your system is fully upgraded by running "dnf --refresh upgrade". Do you want to continue [y/N]: yAdobe Systems Incorporated 35 kB/s | 2.9 kB 00:00 Fedora Modular 31 - x86_64 23 kB/s | 25 kB 00:01 Fedora Modular 31 - x86_64 - Updates 19 kB/s | 16 kB 00:00 Fedora 31 - x86_64 - Updates 17 kB/s | 18 kB 00:01 Fedora 31 - x86_64 37 kB/s | 25 kB 00:00 google-chrome 18 kB/s | 1.3 kB 00:00 MariaDB 9.7 kB/s | 2.9 kB 00:00 packages-microsoft-com-prod 16 kB/s | 3.0 kB 00:00 PostgreSQL common RPMs for Fedora 31 - x86_64 11 kB/s | 3.0 kB 00:00 PostgreSQL 12 for Fedora 31 - x86_64 3.3 kB/s | 3.8 kB 00:01 RPM Fusion for Fedora 31 - Free - Updates 29 kB/s | 9.1 kB 00:00 RPM Fusion for Fedora 31 - Free 26 kB/s | 9.9 kB 00:00 RPM Fusion for Fedora 31 - Nonfree - Updates 11 kB/s | 9.4 kB 00:00 RPM Fusion for Fedora 31 - Nonfree 21 kB/s | 10 kB 00:00 skype (stable) 6.6 kB/s | 2.9 kB 00:00 teams 4.9 kB/s | 3.0 kB 00:00 Fedora 31 - x86_64 - VirtualBox 247 B/s | 181 B 00:00 Visual Studio Code 19 kB/s | 3.0 kB 00:00 Yarn Repository 25 kB/s | 2.9 kB 00:00 terminate called after throwing an instance of 'libdnf::ModulePackageContainer::EnableMultipleStreamsException' what(): Cannot enable multiple streams for module 'ant'Aborted (core dumped) Is there some way to overcome this problem? Any and all ideas are welcome. I don't mind if I have to disable/remove some of my extra package repos, if that is what it takes ...
It's really weird but I have stumbled on this issue too and found out that you have to disable these repos: fedora-modular.repo fedora-updates-modular.repo fedora-updates-testing-modular.repo Thanks @vonbrand and @dbdemon for the idea.
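For example, a sketch of disabling them with dnf config-manager (from dnf-plugins-core). The repo IDs fedora-modular , updates-modular and updates-testing-modular are the usual IDs defined inside those .repo files — verify yours with dnf repolist --all first:

sudo dnf config-manager --set-disabled fedora-modular updates-modular updates-testing-modular

Alternatively, set enabled=0 by hand in each of those files under /etc/yum.repos.d/ .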
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/579184", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272917/" ] }
579,400
Suppose I have various images that I want to download and I have the links available: https://images.unsplash.com/photo-1548363585-5b1241ee3b85?ixlib=rb-1.2.1&auto=format&fit=crop&w=634&q=80 https://images.unsplash.com/photo-1556648011-e01aca870a81?ixlib=rb-1.2.1&auto=format&fit=crop&w=634&q=80 I don't want to type it in one by one wget https://images.unsplash.com/photo-1548363585-5b1241ee3b85?ixlib=rb-1.2.1&auto=format&fit=crop&w=634&q=80wget https://images.unsplash.com/photo-1556648011-e01aca870a81?ixlib=rb-1.2.1&auto=format&fit=crop&w=634&q=80 How can I achieve this? I have read that saving these links in a .txt file and using a for loop is not a correct way.
If you have the URLs in a file like this: https://images.unsplash.com/photo-1548363585-5b1241ee3b85?ixlib=rb-1.2.1&auto=format&fit=crop&w=634&q=80https://images.unsplash.com/photo-1556648011-e01aca870a81?ixlib=rb-1.2.1&auto=format&fit=crop&w=634&q=80 Then you could run wget --input-file=file to download the images as described by @Kusalananda .
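If you do want a shell loop instead (for example, to rename each download), quoting the URL is what makes it safe — on an unquoted command line the & in the query string would put the command in the background. A sketch:

while IFS= read -r url; do
    wget "$url"
done < urls.txt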
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/579400", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/395980/" ] }
579,480
I'm looking for the commands that will tell me the allocation quantum on drives formatted with ext4 vs btrfs. Background: I am using a backup system that allows users to restore individual files. This system just uses rsync and has no server-side software, backups are not compressed. The result is that I have some 3.6TB of files, most of them small. It appears that for my data set storage is much less efficient on a btrfs volume under LVM than it is on a plain old ext4 volume, and I suspect this has to do with the minimum file size, and thus the block size, but I have been unable to figure out how to get those sizes for comparison purposes. The btrfs wiki says that it uses the "page size" but there's nothing I've found on obtaining that number.
You'll want to look at the data block allocation size, which is the minimum block that any file can allocate. Large files consist of multiple blocks. And there's always some "waste" at the end of large files (or all small files) where the final block isn't filled entirely, and therefore unused. As far as I know, every popular Linux filesystem uses 4K blocks by default because that's the default pagesize of modern CPUs, which means that there's an easy mapping between memory-mapped files and disk blocks. I know for a fact that BTRFS and Ext4 default to the page size (which is 4K on most systems). On ext4, just use tune2fs to check your block size, as follows (change /dev/sda1 to your own device path): [root@centos8 ~]# tune2fs -l /dev/sda1 |grep "^Block size:"Block size: 4096[root@centos8 ~]# On btrfs, use the following command to check your block size (change /dev/mapper/cr_root to your own device path, this example simply uses a typical encrypted BTRFS-on-LUKS path): sudo btrfs inspect-internal dump-super -f /dev/mapper/cr_root | grep "^sectorsize"
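If you have the GNU coreutils, a filesystem-agnostic sketch is also possible: stat can report the fundamental block size of whatever filesystem a given path lives on (replace /mnt/backup with the mount point you are checking):

stat -f -c %S /mnt/backup    # %S = fundamental block size, in bytes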
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/579480", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/290910/" ] }
579,535
I want to avoid having temporary files laying around if my program crashes. UNIX is wonderful in that you can keep a file open - even after you delete it. So if you open the file, immediately delete it, and then do the slow processing, chances are high that even if you program crashes, the user will not have to clean up the file. In shell I often see something similar to: generate-the-file -o the-file[...loads of other stuff that may use stdout or not...]do_slow_processing < the-filerm the-file But if the program crashes before rm the user will have to clean up the-file . In Perl you can do: open(my $filehandle, "<", "the-file") || die;unlink("the-file");while(<$filehandle>) { # Do slow processing stuff here print;}close $filehandle; Then the file is removed as soon as it is opened. Is there a similar construct in shell?
This is tested in csh, tcsh, sh, ksh, zsh, bash, ash, sash: echo foo > the-file(rm the-file; cat) < the-file | do_slow_processing or if you prefer: (rm the-file; do_slow_processing) < the-file Interestingly, it also works for fifos: mkfifo the-fifo(rm the-fifo; cat) < the-fifo | do_slow_processing &echo foo > the-fifo This is because the reader is blocked until something is written.
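A variant that keeps everything in the current shell, using a numbered file descriptor instead of a subshell (plain POSIX sh redirection, no zsh/bash extensions needed):

exec 3< the-file
rm the-file
do_slow_processing <&3
exec 3<&-

The deleted file's data stays reachable through descriptor 3 until it is closed on the last line.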
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/579535", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2972/" ] }
579,576
I am using shell to write a git hook file, to check my commit message before commit. I am a total beginner; this is what I have tried. My commit-msg hook file is like below: #!/bin/shmsg=`head -n 1 $1`if echo $msg | egrep -qv '(Android-\d{3,4}.{20,})'; then echo "[Message Format] Your message is not formatted correctly. Correct message format\n #Ticket Number - Minimum 20 or more Character \n like #Android-123 Bug fixed for login issue" exit 1;fi So whenever I make a commit message like this: git commit -m " #Android-123 I pretty sure this is more than 20 character,but it still failed to commit" I'm pretty sure my commit message has more than 20 characters, and the ticket number, but every time I commit I still get the error I set. I think my logic problem is that I should use if not echo $msg | egrep -qv '(Android-\d{3,4}.{20,})' , so I tried: if ! [[echo $msg | egrep -qv '(Android-\d{3,4}.{20,})']] ; then.... But this gives me this error: .git/hooks/commit-msg: line 5: syntax error near unexpected token !' Question: What am I doing wrong that causes the commit to fail, even though I make a correct commit message? How can I fix it?
Your regular expression uses PCRE syntax like \d , but grep -E (that's what your egrep is, but use grep -E instead, egrep is being deprecated) doesn't understand that. Also, you don't need the parentheses there, you aren't actually capturing anything. If you have GNU grep , you can use grep -P instead: grep -Pqv 'Android-\d{3,4}.{20,}' If not, you will have to replace \d with [0-9] : grep -Eqv 'Android-[0-9]{3,4}.{20,}' However, you don't need (or want) to only take the first line of the file, you can just grep the whole file directly. You also don't need to reverse the match ( -v ), that just complicates things. Here's a simpler, working version of your script, using if ! to negate the condition: #!/bin/shif ! grep -E 'Android-[0-9]{3,4}.{20,}' "$1"; then printf "[Message Format] Your message is not formatted correctly. Correct message format: #Ticket Number - Minimum 20 or more Character like #Android-123 Bug fixed for login issue\n" exit 1;fi
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/579576", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271717/" ] }
579,612
I work on an application that uses a Unix domain socket for IPC. The common way, as far as I know, is to place the socket file inside /var/run . I work with Ubuntu 18.04 and I see that /var/run is a symlink to /run . Unfortunately the folder is writable by root only: ls -Al / drwxr-xr-x 27 root root 800 Apr 12 17:39 run So only root has write access to this folder, and that makes it impossible to use Unix domain sockets as a regular user. First of all, I can't understand why. And how can one use Unix domain sockets as a non-root user? I can use the home folder of course, but I would prefer some correct and common method.
There's nothing wrong with creating the socket in a dotfile or dotdir in the home directory of the user, if the user is not some kind of special, system user. The only problem would be with the home directory shared between multiple machines over nfs, but that could be easily worked around by including the hostname in the name of the socket. On Linux/Ubuntu you could also use "abstract" Unix domain sockets, which don't use any path or inode in the filesystem. Abstract unix sockets are those whose address/path starts with a NUL byte: abstract : an abstract socket address is distinguished (from a pathname socket) by the fact that sun_path[0] is a null byte ( \0 ). The socket's address in this namespace is given by the additional bytes in sun_path that are covered by the specified length of the address structure. (Null bytes in the name have no special significance.) The name has no connection with filesystem pathnames. When the address of an abstract socket is returned, the returned addrlen is greater than sizeof(sa_family_t) (i.e., greater than 2), and the name of the socket is contained in the first (addrlen - sizeof(sa_family_t)) bytes of sun_path . When displayed for or entered by the user, the NUL bytes in a abstract Unix socket address are usually replaced with @ s. Many programs get that horribly wrong, as they don't escape regular @ s in any way and/or assume that only the first byte could be NUL. Unlike regular Unix socket paths, abstract Unix socket names have different semantics, as anybody can bind to them (if the name is not already taken), and anybody can connect to them. Instead of relying on file/directory permission to restrict who can connect to your socket, and assuming that eg. only root could create sockets inside some directory, you should check the peer's credential with getsockopt(SO_PEERCRED) (to get the uid/pid of who connected or bound the peer), or the SCM_CREDENTIALS ancillary message (the get the uid/pid of who sent a message over the socket). This (replacing the usual file permission checks) is also the only sane use of SO_PEERCRED / SCM_CREDENTIALS IMHO.
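To experiment with abstract sockets from the shell, socat supports them on Linux (the name myapp below is arbitrary, and note that there is no socket file to clean up afterwards):

socat ABSTRACT-LISTEN:myapp,fork - &           # server: prints whatever clients send
echo hello | socat - ABSTRACT-CONNECT:myapp    # client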
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/579612", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92830/" ] }
579,730
How can I delete all lines in a text file which have fewer than 'x' letters OR numbers OR symbols? I can't use awk 'length($0)>' as it will include spaces.
Assuming you want to delete lines that contain less than n graphical symbols: awk -v n=5 '{ line = $0; gsub("[^[:graph:]]", "") } length >= n { print line }' This deletes all characters that does not match [[:graph:]] . If the length of the string that remains is greater than or equal to n , the (unmodified) line is printed. The value of n is given on the command line. [[:graph:]] is equivalent to [[:alnum:][:punct:]] , which in turn is the same as [[:alpha:][:digit:][:punct:]] . It is roughly the same as [[:print:]] but does not match spaces. Instead of [^[:graph:]] , you could possibly use [[:blank:]] to delete all tabs or spaces. With sed , following the above awk code almost literally, sed -e 'h; s/[^[:graph:]]//g' \ -e '/.\{5\}/!d; g' or, simplified (only counting non-blank characters), sed -e 'h; s/[[:blank:]]//g' \ -e '/...../!d; g' This first saves the current line into the hold space with h . It then deletes all non-graph characters (or blank characters in the second variation) on the line with s///g . If the line then contains less than 5 characters (change this to whatever number you want, or change the number of dots in the second variation), the line is deleted. Else, the stored line is fetched from the hold space with g and (implicitly) printed.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/579730", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/378209/" ] }
579,786
How can I delete many folders that have more than one - in their names? For example: e97bf913-5759-4fff-bdaf-2f931b53a432/39f953c5-dab0-420e-a650-a50a30f48097/
The pattern *-*-*/ matches directories with two or more hyphens. The * matches any string (zero or more characters). If you want to only match directory names that should not start and end with a hyphen (as in your example), you could use [!-]*-*-*[!-]/ instead. The [!-] matches any character that is not ( ! ) a hyphen. Run ls -d [!-]*-*-*[!-]/ first to see if these are the ones you want to delete. Then run rm -r [!-]*-*-*[!-]/ to delete them recursively. If you should really need to force the deletion, add -f to the command.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/579786", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/299545/" ] }
579,810
test.txt file: Ext Temperatur: 210.0°Avg.Speed(All): 62.89mm/sAvg.Speed(Print): 49.99mm/sAvg.Speed(Travel): 199.84mm/sOverall Time (w/o Acceleration): 11:00:41 (39640.65sec) How can I grab 62.89mm/s and 62.89 as outputs using grep and print with php code? I am using this code but it prints the whole line e.g.: Avg.Speed(All): 62.89mm/s <?php$gcode_new = "grep -P 'Avg.Speed(All):' test.txt";include('Net/SSH2.php');$ssh = new Net_SSH2('XXX.61.XXX.227');if (!$ssh->login('root', 'PASS')) { exit('Login Failed');}$target = $ssh->exec($gcode_new);echo $target ;?> Question Update: How can I copy 11:00:41 from the above data, which is associated with "Overall Time (w/o Acceleration):"? I tried to use the command suggested in the answer, but I receive 11:00:41 (39640.65sec) ; how can I avoid the (39640.65sec) part? $ grep -Po '(?<=Overall Time \(w/o Acceleration\): ).*' testjar.txt
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/579810", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/404886/" ] }
579,813
How can I find files with write permission for the group "others", regardless of any other permissions, with the extension ".sh" (use symbolic format)? I've already tried find / -type f -perm -g=w -name "*.sh"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/579813", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/405978/" ] }
579,829
According to this question: How to exclude dirs in find , the command should be this: find . -type d \( -path dir1 -o -path dir2 -o -path dir3 \) -prune -o -print But if I do find . -type d \( -path "./.cpan" -o -path "./.mozilla" -o "./.cache" \) -prune -o -print it gives: find: paths must precede expression: `./.cache' find: possible unquoted pattern after predicate `-o'? , but I did quote it. Also, should the path given to -path be absolute or relative? I have included the current dir as ./[somefile] , but is that needed for -path ?
You've missed the -path predicate in front of the last option value "./.cache" The path used with -path must begin with the top-level search path used by find . For example find . -path './something/here' find /etc -path '/etc/init.d' You may need to use the * wildcard if you want to match a directory name without specifying its position in the filesystem tree. This example will match all files ( -type f ) somewhere under the directory wizard find . -path '*/wizard/*' -type f -print
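Applied to the command from the question, the fixed version would be: find . -type d \( -path "./.cpan" -o -path "./.mozilla" -o -path "./.cache" \) -prune -o -print , where the only change is the added -path before "./.cache" . Since the search starts at . , each pattern is given relative to it with a ./ prefix; that answers the absolute-vs-relative question as well.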
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/579829", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/378669/" ] }
579,854
I've installed Arch, and LightDM with the awesome window manager. Everything runs okay. If I log in as root, awesome loads and works, but if I log in with the user I've created it attempts to log in, clears the screen, but then reloads the login screen, and here I am stuck in this login loop. I would include my lightdm.log file, but I can't export it because the PC isn't working properly. Thanks!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/579854", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/406008/" ] }
579,868
I am trying to create my own PID 1 init script, to be called from the boot cmdline with init=/myscript. How can I make it work on a real filesystem, with any kernel? When it runs in an initrd, it works fine and can mount things, etc. - but when I use it on my filesystem without an initrd, it fails to mount things, because: mount: only root can do that (effective UID is 1000) When I strace any command that fails, it inevitably issues geteuid32() and that returns 1000. Why? How can I run as euid 0?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/579868", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/406036/" ] }
579,930
I have multiple substitution rules but want to use only the first applicable rule. sed , by default, continues processing the output with the remaining rules. How do I break after the first substitution? Is it possible using just sed ? I could not find an option in the manual. Actual output $ sed "s/something/else/; s/one/two/; s/two/three/;" <<<"one"three Desired operation $ sed "s/something/else/; s/one/two/; s/two/three/;" <<<"one"two
Use the t command after each s command to branch to the end of the script if a substitution was made:
sed -e 's/something/else/;t' \
    -e 's/one/two/;t' \
    -e 's/two/three/;t' <<<"one"
Here, the t command after the last substitution is not needed, but if you generate this code automatically, there is no problem letting it suffix each s , even the last.
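A quick check with the input from the question confirms that the first applicable rule wins: the command above prints two . The first s does not match, the second rewrites one to two , and its t then skips the rule that would have turned two into three .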
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/579930", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/351025/" ] }
579,931
Normally, after a recent fresh install of Linux Mint 19.3 x64 MATE on an Acer Aspire E15 laptop, media keys (using Fn+arrow keys on the built-in keyboard, or dedicated keys on an external keyboard) work fine for changing the volume: A modal pops up showing the current volume level, and disappears after a moment of not adjusting the volume. Also, the default volume control tray icon affects the same volume level when I drag the slider. Now, sometimes we'll plug in an HDMI device that has built-in speakers, and want the audio to go through the speakers. Often the software audio source (browser, for example) is already open, and the only way I've found to switch its output to HDMI without having to restart the browser, is to go into the default Sounds applet: ...and set the analog output to Off, and the HDMI output to HDMI. The problem is, once I've done this (or the other way around—started with HDMI and then switched back to analog stereo) the media keys no longer have an effect on the volume level. (But they do still make a popup showing a level changing...it just doesn't actually affect what's heard!) Also, the volume control tray applet no longer has an effect on what's heard. The slider still visually works, but strangely, seems to have become independent from the popup that the media keys produce. Then, often the tray applet will simply disappear entirely (crashed, I guess.) At that point the only (GUI-based) way to change the volume is to open the Sounds applet pictured above and adjust it from the slider there. My main question is, how can I keep them working after switching audio outputs as described? Or, if there happens to be a way to avoid this problem by using a different method than described for forcing an audio output change, that would be a welcome answer too.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/579931", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34450/" ] }
579,976
I have found a similar topic but could not figure out how to implement it for my own use: grab multiple lines after a matching target line Here's the issue: I'm trying to implement it on a project of my own but can't seem to make it work. I am using Linux, can someone break it down? Basically what I'm trying to do is go through a log bundle and capture specific lines along with their stack\details. Here's an example:
2020-01-20T05:58:19.119Z verbose vpxa[6E21B70] [Originator@6876 sub=PropertyProvider opID=k5cokp1a-928316-auto-jwal-h5:70047736-92-01-84] [CommitChangesAndNotify] Updating cached values
2020-01-20T05:58:19.119Z info vpxa[6E21B70] [Originator@6876 sub=Default opID=k5cokp1a-928316-auto-jwal-h5:70047736-92-01-84] [VpxLRO] -- ERROR task-107599 -- **vm-1178** -- vim.VirtualMachine.reconfigure: vmodl.fault.InvalidArgument:
--> Result:
--> (vmodl.fault.InvalidArgument) {
--> faultCause = (vmodl.MethodFault) null,
--> faultMessage = (vmodl.LocalizableMessage) [
--> (vmodl.LocalizableMessage) {
--> key = "msg.disk.extendFailure",
--> arg = (vmodl.KeyAnyValue) [
--> (vmodl.KeyAnyValue) {
I'll want to capture every line that contains "vm-1178" and all subsequent lines that start with "-->" until the pattern changes, then start looking for vm-1178 until the next time this occurs, etc. Hope it makes sense. Thanks!
Try this, awk '!/^-->/{p=0} /vm-1178/{p=1} p' !/^-->/{p=0} : Set the variable p (for "print") to 0 whenever a line does not begin with --> . /vm-1178/{p=1} : Set p = 1 whenever a line matches /vm-1178/ . p : Print the line whenever p is true (here, 1).
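Running it against a log file (hypothetical name): awk '!/^-->/{p=0} /vm-1178/{p=1} p' vpxa.log prints the vm-1178 line itself plus the --> continuation lines that follow it, then stays silent until the next vm-1178 match. Note that the order of the rules matters: the reset rule comes first, so the matching line itself (which does not start with --> ) is still printed.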
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/579976", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/405941/" ] }
579,984
I am trying to printf some unicode codes that I pipe in like this
echo 0024 0025 |
    xargs -n1 echo |          # one code per line
    xargs printf '\u%s\n'
hoping to get this
$
%
but this is what I get
printf: missing hexadecimal number in escape
After some trial and error, I actually have two smaller problems, and one kind-of makes sense and the other seems like a complete mystery. Problem 1: printf '\u%s\n' 0024 0025 gives me this
-bash: printf: missing unicode digit for \u
\u0024
-bash: printf: missing unicode digit for \u
\u0025
Problem 2:
> # use built-in for $
> printf '\u0024\n'
$
> # use exe for $
> which printf
/usr/bin/printf
> /usr/bin/printf '\u0024\n'
$
> # now use built-in for %
> printf '\u0025\n'
%
> # but look what happens when we use exe for % !!!!
> /usr/bin/printf '\u0025\n'
/usr/bin/printf: invalid universal character name \u0025
(using > for $ so you can see the $ in the output) For some reason some characters work with exe version but some don't even though all work with built-in printf. so here is a work-around that would work if it weren't for problem #2 (but might be quite a bit slower than my original idea)
echo 0024 0025 |
    xargs -n1 echo |          # one item per line
    xargs -I {} printf '\u{}\n'
but due to problem #2, it kind of half works:
$ echo 0024 0025 | xargs -n1 echo | xargs -I {} printf '\u{}\n'
$
printf: invalid universal character name \u0025
($ comes out but % gets error) So I guess my questions are:
-Is there any way of making printf work with the number code so that I can run printf once instead of once per argument with -I ?
-What am I doing wrong that printf built-in doesn't mind, but printf exe doesn't like, but only for % and not for $ ?
To avoid the double-expansion problem ( \u is processed before %s ), you can use %b , at least in Bash printf : printf '%b\n' \\u0024 \\u0025 You can pre-process your input in various ways: set 0024 0025printf '%b\n' "${@/#/\\u}" The standalone printf , as implemented in GNU coreutils , has the following restrictions on Unicode character specifications: printf interprets two character syntaxes introduced in ISO C 99: ‘ \u ’ for 16-bit Unicode (ISO/IEC 10646) characters, specified as four hexadecimal digits hhhh , and ‘ \U ’ for 32-bit Unicode characters, specified as eight hexadecimal digits hhhhhhhh . printf outputs the Unicode characters according to the LC_CTYPE locale. Unicode characters in the ranges U+0000…U+009F, U+D800…U+DFFF cannot be specified by this syntax, except for U+0024 ($), U+0040 (@), and U+0060 (`). This explains why you can’t produce % in this manner.
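For reference, with Bash's builtin printf (which supports \u from Bash 4.2 on and does not have the coreutils restriction), printf '%b\n' \\u0024 \\u0025 prints $ on one line and % on the next. The % works here precisely because the builtin, not /usr/bin/printf , does the interpretation.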
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/579984", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/158192/" ] }
580,011
I've got a huge tree of folders, each with multiple subdirectories going down around 3 levels. Here's an example with just one level:
$ tree
.
|-- AB.txt
|-- CD.txt
|-- destination_folder
|-- spreadsheet.txt
`-- subdirectory
    `-- EF.txt
2 directories, 4 files
I've got a list of filenames I'm interested in, called spreadsheet.txt :
$ cat spreadsheet.txt
AB.txt
CD.txt
EF.txt
I'd like to copy all the files which appear in spreadsheet.txt into a single folder, e.g. destination_folder . Any help very gratefully received! I imagine it will involve find and cp , but I can't seem to work it out.
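One sketch, assuming the lines of spreadsheet.txt are exact file names (no glob characters or stray whitespace):
while IFS= read -r name; do
    find . -name "$name" -exec cp {} destination_folder/ \;
done < spreadsheet.txt
This reads one name per line, searches the whole tree for each (so it also finds subdirectory/EF.txt ), and copies every match into destination_folder . On repeated runs you may want to exclude destination_folder itself from the search, e.g. find . -path ./destination_folder -prune -o -name "$name" -exec cp {} destination_folder/ \;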
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/580011", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/406189/" ] }
580,016
How can I save stdout to one file, stderr to another file, stdout+stderr to a third file and also get stdout + stderr to the terminal like normal for a shell script? I found this elsewhere:
exec > >(tee std_out) 2> >(tee err_out >&2)
ls     # Should go to std_out
fsdfs  # Command not found goes to err_out
Which is really close. If I run bash test.sh 2>&1 | tee output then it works, but I don't have access to how my script is run. It's a CI/CD system. I need to be able to do the "combined output" from inside the script using exec. I'm creating a CI/CD library and I'm unable to know what the clients would use the library for, so I want to account for each use case.
Simply expanding on your approach: exec 2> >(tee -a stderr stdall) 1> >(tee -a stdout stdall) Standard error will be written to the file named stderr , standard output to stdout and both standard error and standard output will also be written to the console (or whatever the two file descriptors are pointing at the time exec is run) and to stdall . tee -a (append) is required to prevent stdall from being overwritten by the second tee that starts writing to it. Note that the order in which redirections are performed is relevant: the second process substitution is affected by the first redirection, i.e. the errors it emitted would be sent to >(tee -a stderr stdall) . You can, of course, redirect the second process substitution's standard error to /dev/null to avoid this side effect. Redirecting standard output before standard error would send every error to stdout and stdall too. Since the commands in Bash's process substitutions are executed asynchronously , there is no way to guarantee that their output will be displayed in the order it was generated. Worse, fragments from standard output and standard error are likely to end up appearing on the same line.
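A minimal way to try it out (a sketch, using the same file names as above):
exec 2> >(tee -a stderr stdall) 1> >(tee -a stdout stdall)
echo "this goes to stdout"
ls /nonexistent    # this goes to stderr
Afterwards stdout contains the echo line, stderr contains the ls error, stdall contains both, and both also appeared on the terminal, modulo the ordering caveat described above.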
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/580016", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184111/" ] }
580,124
I have a simple script involving a for loop from bash that I am trying to get working in zsh. I had assumed that the shebang would ensure a POSIX compliant shell would be used (on my system I have /bin/sh -> dash ) so there wouldn't be any issues. MWE script where ITEMS is actually the output of a command that lists packages e.g. ITEMS=$(pip freeze) :
#!/bin/sh
# ITEMS=$(pip freeze)  # Example of useful command
ITEMS="Item1
Item2
Item3"  # Dummy variable for testing

for ITEM in $ITEMS; do
    echo $ITEM
    echo Complete
done
This is the output when I try to run the script in zsh :
$ source scratch.sh
Item1
Item2
Item3
Complete    # Undesired
$ . ./scratch.sh
Item1
Item2
Item3
Complete    # Undesired
$ bash scratch.sh
Item1
Complete
Item2
Complete
Item3
Complete    # Desired
$ sh scratch.sh
Item1
Complete
Item2
Complete
Item3
Complete    # Desired
When I run it in a bash terminal it works fine. I think I've misunderstood how the shebang is interpreted by zsh ? Can someone please explain to me how it should be used such that when I run source scratch.sh or . ./scratch.sh I have the same output as if I had run sh scratch.sh ? I know I could modify my for loop script to be compliant with zsh and bash natively, but I want to use /bin/sh -> dash so I'm always using a POSIX compliant shell and don't have to worry about bashisms or zshisms. Apologies if this is a basic question, I did search for zsh , posix and shebang but didn't find a similar question.
The shebang only has an effect if you execute the script directly without specifying how to run it ; that is, with something like ./scratch.sh or /path/to/scratch.sh or by putting it in a directory in your PATH and just using scratch.sh . If you run it using some other command, that controls what's done with it (overriding the shebang). If you use bash scratch.sh , it runs in bash ; if you use zsh scratch.sh , it runs in zsh ; if you use sh , it runs in whatever sh is on your system ( dash in your specific case). If you use source scratch.sh or . scratch.sh , it runs in the current shell , whatever that is. That's the entire purpose of the . and source commands. And again, the shebang is ignored here.
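A quick way to confirm which interpreter is actually running the script (a sketch that should work in bash, zsh and dash, since ${var:-default} is POSIX): add echo "bash=${BASH_VERSION:-no} zsh=${ZSH_VERSION:-no}" near the top of scratch.sh . Executed as ./scratch.sh it reports neither (dash), with bash scratch.sh it reports a bash version, and when sourced from an interactive zsh it reports a zsh version.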
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/580124", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/406299/" ] }
580,142
Noob here - I want to run grep -r asdf however, I only want unique matches in my directories (i.e. disregarding any directory, display unique matches only). So I ran grep -r asdf | sort --unique . However - this does not work since the directory names are different ( dir1/a.txt asdf and dir2/a.txt asdf ). I didn't see an option (I tried e.g. grep -riol ) to exclude directories and I guess that barely makes sense for the scope of the function. Can I somehow cut-away the directories and only show the matched filename + match (possibly without a mind/universe-bending regex/sed/...)?
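Assuming the file names themselves contain no colons, you can strip the directory part of each path and de-duplicate: grep -r asdf . | sed 's|^[^:]*/||' | sort -u , where the sed expression deletes everything up to the last / before the first : , leaving filename:match . If you don't need the filename at all, grep -rh asdf . | sort -u is simpler: -h suppresses the file name prefix entirely.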
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/580142", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/401497/" ] }
580,271
I have a script I'm running to fix file ownership and permissions after an rsync. Questions of the optimal way to do my task aside, I wonder, is there a way to run chmod and chown at the same time? In the current iteration of my script, I'm find ing files twice.
find /var/www/mysite -exec chown www-data:www-data {} \;
find /var/www/mysite -type f -exec chmod 775 {} \;
I thought it would be nice if I could change both the permissions and owner/group with a single command. After some googling, I was surprised to learn that such a command, argument, or option doesn't exist. Can I change both the permissions and ownership at the same time, to avoid find ing each file twice? Edit A community edit or post or something suggested that this question is a duplicate of "Change all folder permissions with 1 command" . This question is different because it asks about changing both permissions and ownership at the same time, not just permissions.
You can pass multiple exec commands:
find /var/www/mysite -exec chown www-data:www-data {} \; \
     -type f -exec chmod 775 {} \;
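If your find supports the POSIX -exec ... {} + form (GNU find does), the same one-pass approach spawns far fewer processes, since each utility is invoked with batches of file names instead of once per file: find /var/www/mysite -exec chown www-data:www-data {} + -type f -exec chmod 775 {} + The -exec ... + primary always evaluates true, so the -type f test after it behaves just as in the command above.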
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/580271", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/394/" ] }