Columns: source_id (int64, 1 to 74.7M), question (string, lengths 0 to 40.2k), response (string, lengths 0 to 111k), metadata (dict)
493,662
We have a huge file; this is a partial listing from it:

Topic: Ho_HTR_bvt  Partition: 31  Leader: 1007  Replicas: 1007,1008,1009  Isr: 1009,1007,1008
Topic: Ho_HTR_bvt  Partition: 32  Leader: 1008  Replicas: 1008,1009,1010  Isr: 1010,1009,1008
Topic: Ho_HTR_bvt  Partition: 33  Leader: 1009  Replicas: 1009,1010,1006  Isr: 1009,1010,1006
Topic: Ho_HTR_bvt  Partition: 34  Leader: 1010  Replicas: 1010,1006,1007  Isr: 1006,1007,1010
Topic: Ho_HTR_bvt  Partition: 35  Leader: 1006  Replicas: 1006,1008,1009  Isr: 1006,1009,1008
Topic: Ho_HTR_bvt  Partition: 36  Leader: 1007  Replicas: 1007,1009,1010  Isr: 1010,1007,1009
Topic: Ho_HTR_bvt  Partition: 37  Leader: 1008  Replicas: 1008,1010,1006  Isr: 1006,1010,1008
Topic: Ho_HTR_bvt  Partition: 38  Leader: 1009  Replicas: 1009,1006,1007  Isr: 1007,1009,1006
Topic: Ho_HTR_bvt  Partition: 39  Leader: 1010  Replicas: 1010,1007,1008  Isr: 1010,1007,1008
Topic: Ho_HTR_bvt  Partition: 40  Leader: 1006  Replicas: 1006,1009,1010  Isr: 1006,1010,1009
Topic: Ho_HTR_bvt  Partition: 41  Leader: 1007  Replicas: 1007,1010,1006  Isr: 1006,1007,1010
Topic: Ho_HTR_bvt  Partition: 42  Leader: 1008  Replicas: 1008,1006,1007  Isr: 1006,1007,1008
Topic: Ho_HTR_bvt  Partition: 43  Leader: 1009  Replicas: 1009,1007,1008  Isr: 1009,1007,1008
Topic: Ho_HTR_bvt  Partition: 44  Leader: 1010  Replicas: 1010,1008,1009  Isr: 1010,1009,1008

How do I count the number of occurrences of the string 1007 (or any other word) in the file?
Using your example data:

$ grep -Fo 1007 file | wc -l
19

The grep part of this pipeline will search for the string 1007 (the -F flag is used because we are doing string comparisons, not regular expression matching). It will return each individual instance of the string on a new line due to the -o flag. The number of lines returned is counted by wc -l. If the string occurs twice on a line in the input data, this will count it twice. If the string occurs as a substring of another word, it will be counted too. With awk:

$ awk -v str="1007" '{ c += gsub(str, str) } END { print c }' file
19

This counts the number of times the string occurs using gsub() (this function returns the number of times a substitution is performed, and we apply it to each input line individually) and prints the total count at the end. The string we're interested in is passed on the command line with -v str="1007".
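If you do not want substring matches counted, a small variation (my addition; -w is a GNU grep extension that matches whole words only, so 1007 inside a longer token is skipped):

$ grep -Fwo 1007 file | wc -l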
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/493662", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
493,676
I was looking at another question ( https://stackoverflow.com/q/47845/537980 ) and saw an answer about how much setup this other OS had to do for every process create. I got to wondering: would it be possible to do the setup once, then fork, then do a partial exec to load the variable parts? That is, only part of the process should be replaced. A specific example of "partial" would be: we want to load some execution environment, then exec to replace the loader, but not the environment. So this is taking control of what gets replaced (I know that exec does not replace everything; e.g. it keeps a COW of the file descriptor table). I realise that this may not have any practical use, as fork and exec are relatively cheap on many Unixes.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/493676", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4778/" ] }
493,729
I know that Bash and Zsh support local variables, but there are systems that only have POSIX-compatible shells, and local is undefined in POSIX shells. So I want to ask: which shells support the local keyword for defining local variables? Edit: By shells I mean the default /bin/sh shell.
It's not as simple as supporting local or not. There is a lot of variation in the syntax and in how it's done between shells that have one form or other of local scope. That's why it's very hard to come up with a standard that agrees with all. See http://austingroupbugs.net/bug_view_page.php?bug_id=767 for the POSIX effort on that front.

Local scope was added first in ksh in the early 80s. The syntax to declare a local variable in a function was with typeset:

function f {
  typeset var=value
  set -o noglob # also local to the function
  ...
}

(Function support was added to the Bourne shell later, but with a different syntax (f() command), and ksh added support for that one as well later; the Bourne shell never had local scope (except of course via subshells).)

The local builtin AFAIK was added first to the Almquist shell (used in BSDs, dash, busybox sh) in 1989, but works significantly differently from ksh's typeset. ash derivatives don't support typeset as an alias to local, but you can always define one by hand.

bash and zsh added typeset aliased to local in 1989 and 1991 respectively.

ksh88 added local as an undocumented alias to typeset circa 1990, and pdksh and its derivatives in 1994. posh (based on pdksh) removed typeset (for strict compliance to the Debian Policy that requires local, but not typeset).

POSIX initially objected to specifying typeset on the ground that it was dynamic scoping. So ksh93 (a rewrite of ksh in 1993 by David Korn) switched to static scoping instead. Also in ksh93, as opposed to ksh88, local scoping is only done for functions declared with the ksh syntax (function f {...}), not the Bourne syntax (f() {...}), and the local alias was removed. However the ksh93v- beta and final version from AT&T can be compiled with an experimental "bash" mode (actually enabled by default) that does dynamic scoping (in both forms of functions, including with local and typeset) when ksh93 is invoked as bash. local differs from typeset in that case in that it can only be invoked from within a function. That bash mode will be disabled by default in ksh2020, though the local/declare aliases to typeset will be retained even when the bash mode is not compiled in (though still with static scoping).

yash (written much later) has typeset (à la ksh88), but has only had local as an alias to it since version 2.48 (December 2018).

@Schily (who sadly passed away in 2021) used to maintain a Bourne shell descendant, which has recently been made mostly POSIX compliant, called bosh, that supports local scope since version 2016-07-06 (with local, similar to ash).

So the Bourne-like shells that have some form of local scope for variables today are:

- ksh, all implementations and their derivatives (ksh88, ksh93, pdksh and derivatives like posh, mksh, OpenBSD sh)
- ash and all its derivatives (NetBSD sh, FreeBSD sh, dash, busybox sh)
- bash
- zsh
- yash
- bosh

As far as the sh of different systems goes, note that there are systems where the POSIX sh is in /bin (most), and others where it's not (like Solaris, where it's in /usr/xpg4/bin). For the sh implementation on various systems we have:

- ksh88: most SysV-derived commercial Unices (AIX, HP/UX, Solaris¹...)
- bash: most GNU/Linux systems, Cygwin, macOS
- ash: by default on Debian and most derivatives (including Ubuntu, Linux Mint), though it can be changed by the admin to bash or mksh; NetBSD, FreeBSD and some of their derivatives (not macOS)
- busybox sh: many if not most embedded Linux systems
- pdksh or derivatives: OpenBSD, MirBSD, Android

Now, where they differ:

- typeset (ksh, pdksh, bash, zsh, yash) vs local (ksh88, pdksh, bash, zsh, ash, yash 2.48+).
- the list of supported options.
- static (ksh93, in function f {...} functions) vs dynamic scoping (all other shells). For instance, whether function f { typeset v=1; g; echo "$v"; }; function g { v=2; }; f outputs 1 or 2. See also how the export attribute affects scoping in ksh93.
- whether local/typeset just makes the variable local (ash, bosh), or creates a new instance of the variable (other shells). For instance, whether v=1; f() { local v; echo "${v:-empty}"; }; f outputs 1 or empty (see also the localvar_inherit option in bash 5.0 and above).
- with those that create a new variable, whether the new one inherits the attributes (like export) and/or type, and which ones, from the variable in the parent scope. For instance, whether export V=1; f() { local V=2; printenv V; }; f prints 1, 2 or nothing.
- whether that new variable has an initial value (empty, 0, empty list, depending on type, zsh) or is initially unset.
- whether unset V on a variable in a local scope leaves the variable unset, or just peels one level of scoping (mksh, yash, bash under some circumstances). For instance, whether v=1; f() { local v=2; unset v; echo "$v"; }; f outputs 1 or nothing (see also the localvar_unset option in bash 5.0 and above).
- like for export, whether it's a keyword or only a mere builtin or both, and under what condition it's considered as a keyword.
- like for export, whether the arguments are parsed as normal command arguments or as assignments (and under what condition).
- whether you can declare local a variable that was readonly in the parent scope.
- the interactions with v=value myfunction where myfunction itself declares v as local or not.

That's the ones I'm thinking of just now. Check the Austin Group bug above for more details.

As far as local scoping for shell options (as opposed to variables) goes, shells supporting it are:

- ksh88 (with both function definition syntaxes): done by default, I'm not aware of any way to disable it.
- ash (since 1989): with local -. It makes the $- parameter (which stores the list of options) local.
- ksh93: now only done for function f {...} functions.
- zsh (since 1995): with setopt localoptions. Also with emulate -L for the emulation mode (and its set of options) to be made local to the function.
- bash (since 2016): with local - like in ash, but only for the options managed by set, not the ones managed by shopt.

¹ the POSIX sh on Solaris is /usr/xpg4/bin/sh (though it has many conformance bugs, including those options local to functions). /bin/sh up to Solaris 10 was the Bourne shell (so no local scope), and since Solaris 11 it is ksh93.
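If you just need to check whether a given system's /bin/sh has local at all, a quick empirical probe works; this snippet is my illustration, not part of the original answer:

probe() { local v=1; }   # defining this succeeds everywhere; calling it fails where "local" is missing
if (probe) 2>/dev/null; then
    echo 'this sh supports "local"'
else
    echo 'no "local" here; try typeset'
fi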
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/493729", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/178265/" ] }
493,771
The man page for udev mentions in several places that certain rules options can be used to invoke 'builtin' commands, which are apparently built in to the udev program itself. However, I haven't been able to find any reference documentation that clearly explains what udev builtins are available, what they do, and how they are used. I have searched the web without much success. Does anyone know if there is a reference anywhere that provides details about these builtin commands?
Unfortunately, this information is missing from the manpages, and even knowing how to read them (see below) you will have trouble finding it. However, the beauty of open source is that you have the power to read the sources. If you take a look at the udev-builtin.c source file inside the systemd/udev repository and have basic C language knowledge, you will find the following snippet of code, a structure that maps all existing builtin types:

static const struct udev_builtin *builtins[_UDEV_BUILTIN_MAX] = {
#if HAVE_BLKID
        [UDEV_BUILTIN_BLKID] = &udev_builtin_blkid,
#endif
        [UDEV_BUILTIN_BTRFS] = &udev_builtin_btrfs,
        [UDEV_BUILTIN_HWDB] = &udev_builtin_hwdb,
        [UDEV_BUILTIN_INPUT_ID] = &udev_builtin_input_id,
        [UDEV_BUILTIN_KEYBOARD] = &udev_builtin_keyboard,
#if HAVE_KMOD
        [UDEV_BUILTIN_KMOD] = &udev_builtin_kmod,
#endif
        [UDEV_BUILTIN_NET_ID] = &udev_builtin_net_id,
        [UDEV_BUILTIN_NET_LINK] = &udev_builtin_net_setup_link,
        [UDEV_BUILTIN_PATH_ID] = &udev_builtin_path_id,
        [UDEV_BUILTIN_USB_ID] = &udev_builtin_usb_id,
#if HAVE_ACL
        [UDEV_BUILTIN_UACCESS] = &udev_builtin_uaccess,
#endif
};

This struct holds all built-in types, and they map to source files depending on the type. Example:

udev-builtin-kmod.c - A kernel module loader.
udev-builtin-keyboard.c - A keyboard handler.
udev-builtin-usb_id.c - A USB handler that will set the USB type and initialize the device.

Related: How do I use man pages to learn how to use commands?
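As a practical complement (my addition, not from the original answer): udevadm can run a builtin directly against a device, which is a quick way to see one in action; the network device path below is just an example:

udevadm test-builtin net_id /sys/class/net/eth0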
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/493771", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/257802/" ] }
493,785
I'm installing OpenOCD on my Debian stretch system. When I run ./configure it reports it cannot find libusb.

...
checking for LIBUSB1... no
configure: WARNING: libusb-1.x not found, trying legacy libusb-0.1 as a fallback; consider installing libusb-1.x instead
checking for LIBUSB0... no
...

I have the correct dependencies installed, but I still get the error.

libhidapi-libusb0/stable,now 0.8.0~rc1+git20140818.d17db57+dfsg-1 amd64 [installed,automatic]
libusb-1.0-0/stable,now 2:1.0.21-1 amd64 [installed,automatic]
libusb-1.0-0-dev/stable,now 2:1.0.21-1 amd64 [installed]

What gives?
The error message is unhelpful at best. The OpenOCD README lists pkg-config as a dependency. As soon as pkg-config was installed, the ./configure script was able to find libusb-1.0-0-dev.

...
checking for LIBUSB1... yes
configure: libusb-1.0 header bug workaround: LIBUSB1_CFLAGS changed to "-isystem /usr/include/libusb-1.0"
checking for LIBUSB0... no
...

tl;dr

sudo apt-get install pkg-config
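To confirm that pkg-config can now locate the library (my addition; libusb-1.0 is the module name shipped by the Debian -dev package):

pkg-config --modversion libusb-1.0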
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/493785", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48197/" ] }
493,799
I have a PDF file (a kind of book) which has a table of contents as metadata in the file, but the contents are not listed on any page of the document. I want to print the file with the table of contents, or print the table of contents separately. How can I do that?
pdftk can dump out the "bookmarks" with, e.g., pdftk file.pdf dump_data_utf8; you'll get a bunch of Bookmark* entries buried in the rest of the metadata. grep can give just them:

$ pdftk whatever.pdf dump_data_utf8 | grep ^Bookmark
BookmarkBegin
BookmarkTitle: Cover
BookmarkLevel: 1
BookmarkPageNumber: 1
BookmarkBegin
BookmarkTitle: Agenda
BookmarkLevel: 1
BookmarkPageNumber: 2

The "level" is the indentation level (so a level 2 is indented from a level 1). You can format that into whatever format you want for printing. Here is a Perl script to print it in LaTeX format, which can then be fed to, e.g., pdflatex to get a PDF file (which you could even use pdftk to prepend to your original PDF). Note this is also available at https://gitlab.com/derobert/random-toys/blob/master/pdf/pdftoc-to-latex (which is a good place to send pull requests if you want to improve it):

#!/usr/bin/perl
use 5.024;
use strict;
use warnings qw(all);
use IPC::Run3;
use LaTeX::Encode;
use Encode qw(decode);

my @levels = qw(chapter section subsection subsubsection paragraph subparagraph);
my @counters;
my ($data_enc, $data);

run3 ['pdftk', $ARGV[0], 'dump_data_utf8'], undef, \$data_enc;
$data = decode('UTF-8', $data_enc, Encode::FB_CROAK);

my @latex_bm;
my $bm;
foreach (split(/\n/, $data)) {
    /^Bookmark/ or next;
    if (/^BookmarkBegin$/) {
        add_latex_bm($bm) if $bm;
        $bm = {};
    } elsif (/^BookmarkLevel: (\d+)$/a) {
        ++$counters[$1 - 1];
        $#counters = $1 - 1;
        $bm->{number} = join(q{.}, @counters);
        $bm->{level} = $1 - 1;
    } elsif (/^BookmarkTitle: (.+)$/) {
        $bm->{title} = latex_encode($1);
    } elsif (/^BookmarkPageNumber: (\d+)$/a) {
        $bm->{page} = $1;
    } else {
        die "Unknown Bookmark tag in $_\n";
    }
}
add_latex_bm($bm) if $bm;

print <<LATEX;
\\documentclass{report}
\\begin{document}
${ \join('', @latex_bm) }
\\end{document}
LATEX
exit 0;

sub add_latex_bm {
    my $bm = shift;
    my $level = $levels[$bm->{level}];
    my $number = $bm->{number};
    my $title = $bm->{title};
    my $page = $bm->{page};
    push @latex_bm, <<LINE;
\\contentsline {$level}{\\numberline {$number}$title}{$page}%
LINE
}

Here is how to use this script:

1. Download https://gitlab.com/derobert/random-toys/raw/master/pdf/pdftoc-to-latex?inline=false and save it as pdftoc-to-latex.pl
2. Make it executable by running chmod +x /path/to/pdftoc-to-latex.pl in the terminal
3. Install the LaTeX::Encode Perl package. On Debian Stretch you can do so via sudo apt install liblatex-encode-perl. On other distros you will probably need to do something else.
4. Run the script like this: /path/to/pdftoc-to-latex.pl /path/to/pdf/file.pdf > /path/to/where/you/want/tex/file.tex
5. Compile the resulting tex file to pdf with your favorite LaTeX compiler (e.g., cd /path/to/where/you/want/tex; pdflatex file.tex)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/493799", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/179420/" ] }
493,879
I have a process that creates many files in a known directory, and the only way to tell how far along it is is to type ls manually. Is there a way to make the output of ls update automatically as new files are created, similar to how tail -f works? Because of their names, every new file appears at the end of the list, so I wouldn't have to worry about them appearing in the middle.
You can use the watch command to re-run ls periodically:

watch ls

If the listing is too long, you can add -C to ls:

watch ls -C

Or you can create an explicit loop with while:

while true
do
    clear
    ls
    sleep 60
done
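As a variation (my addition): watch refreshes every 2 seconds by default; -n sets the interval and -d highlights what changed between updates, which makes new files easy to spot:

watch -n 10 -d ls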
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/493879", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/309357/" ] }
493,897
I made a recording with

ffmpeg -f alsa -ac 2 -i plughw:0,0 /tmp/audio.mp4

I then moved /tmp/audio.mp4 to another directory (/root/audio.mp4) without stopping ffmpeg, leading to a broken .mp4 file:

ffplay /root/audio.mp4
[...]
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f3524000b80] moov atom not found
audio.mp4: Invalid data found when processing input

How do I recover and read my .mp4 file?
You can try to use Untrunc to fix the file. It restores a damaged (truncated) mp4, m4v, mov, or 3gp video, provided you have a similar, unbroken video. You may need to compile it from source, but there is another option: use a Docker container, bind the folder with the file into the container, and fix it that way. You can use the included Dockerfile to build and execute the package as a container:

git clone https://github.com/ponchio/untrunc.git
cd untrunc
docker build -t untrunc .
docker run -v ~/Desktop/:/files untrunc /files/filea /files/fileb

(Per Untrunc's usage, filea here is the intact reference video and fileb the broken one.)
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/493897", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/189711/" ] }
493,943
I use the wget command in the background like this:

wget -bq

and it prints Continuing in background, pid 31754. But when I type the command jobs, I don't see my job (although the download is not finished).
When using wget with -b or --background, it puts itself into the background by disassociating from the current shell (by forking off a child process and terminating the parent). Since it's not the shell that puts it in the background as an asynchronous job, it will not show up as a job when you use jobs. To run wget as an asynchronous (background) job in the shell, use

wget ... URL &

If you do this, you may additionally want to redirect output to some file (which wget does automatically with -b), or discard it by redirecting to /dev/null, or use -q or --quiet.
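Putting that together, a concrete sketch (my addition; the URL is a placeholder, and -o sends wget's log to a file):

wget -o download.log 'https://example.com/file' &

jobs will now list the download, and wait will block until it finishes.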
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/493943", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/330992/" ] }
493,948
I'm trying to escape characters like \' but it doesn't work.

$ cp 'A long filen* ./short_filename
Your file does not contain quotes; it is new output behavior of ls. See: Why is 'ls' suddenly wrapping items with spaces in single quotes? You can use

cp "A long file n"* short_filename

The * must be outside the quotes. Or escape all spaces (and other special characters like \, ;, or |, etc.):

cp A\ long\ file\ n* short_filename
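Incidentally (my addition, not from the original answer), if the quoted ls output itself bothers you, GNU ls can be told to print names literally:

ls -N
# or equivalently:
QUOTING_STYLE=literal ls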
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/493948", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/331001/" ] }
493,963
I have the following command:

cat filename

Output:

0040
0042
0048
0043

If I do cat filename | sort -r the output is:

0048
0043
0042
0040

I don't want that; it looks like it's sorting in descending order instead of in reverse order. I want the following output (a true reverse order):

0043
0048
0042
0040

How can I do that?
To reverse a file, use tac : tac filename sort -r doesn’t do what you’re after because it doesn’t reverse, it sorts in reverse order; that’s why you end up with the numbers in decreasing order (although you shouldn’t think of them as numbers here, since the default sort is lexicographic).
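If tac isn't available (it's a GNU tool), equivalents exist; a couple of sketches (my addition), assuming BSD tail or a POSIX-ish sed respectively:

tail -r filename
sed '1!G;h;$!d' filename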
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/493963", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/324252/" ] }
493,980
I want a script that will run another utility over some default paths if no parameters are passed to it; ideally I want this safe for paths that contain spaces. So far I have script.sh:

#!/bin/sh
base=$(dirname "$0")
exec touch "${@:-"$base/aaa" "$base/bbb"}"

If I put this into a folder called "foo bar" and run it as:

foo\ bar/script.sh

I want it to end up doing:

touch foo\ bar/aaa foo\ bar/bbb

i.e. create files "aaa" and "bbb" under "foo bar", the directory in which the script is located. Instead I get the error

touch: cannot touch 'foo bar/aaa foo bar/bbb': No such file or directory

(If I pass parameters to the script it seems to work fine. Presumably removing the outer quotes in the last command would reverse my cases.)
It appears you can't set default parameters in an expansion of ${@:-...}, and "${@:-"$base/aaa" "$base/bbb"}" is expanded as a single string. If you want to set default parameters you might want to do this:

base=$(dirname -- "$0")

# test explicitly for no parameters, and set them.
if [ "$#" -eq 0 ]; then
    set -- "$base/aaa" "$base/bbb"
fi

Then, the "$@" magically quoted parameter substitution can happen unabated:

touch -- "$@"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/493980", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56564/" ] }
494,219
Suppose I have an X application which grabs the keyboard and mouse as its normal mode of operation (e.g. QEMU), but which, due to a bug somewhere, hangs really hard (e.g. gets stuck in Disk sleep). Normally I'd kill an app using kill(1) from a remote terminal, but if the app is in Disk sleep mode, it can't really be killed. I could kill this app's connection to the X server with the xkill utility, but this time I can't do this because the mouse is grabbed, so xkill will fail to run. So, how do I release my keyboard and mouse from a grab by an X client, if I'm willing to sacrifice this client but am unable to kill it by OS means?
Although the most well-known usage mode of xkill is "click to kill", there's an option -id, which can be supplied with the Window Id of the client you want to disconnect from the X server. So, if you can access your X session from a remote terminal/VT, you can use xprop or some other means to get the Id, and pass it to xkill. Suppose that the current active window belongs to the X client that grabbed the keyboard and mouse. Then the following will kill this client's connection to the X server and thus release the keyboard and mouse from the grab:

winid=$(xprop -root _NET_ACTIVE_WINDOW | cut -d# -f2)
xkill -id $winid

This actually worked for me when I tried to get rid of QEMU's grab while QEMU was stuck in Disk sleep.
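If xdotool happens to be installed (my addition, not from the original answer), fetching the active window id is a one-liner:

xkill -id "$(xdotool getactivewindow)"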
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/494219", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27672/" ] }
494,295
I'm trying to get apcupsd to work with my UPS (APC Back-UPS 700VA) on my server running Debian 9 stretch, but it can't connect to the UPS. I get this when I run:

rene@odroidxu4-share:~$ sudo apcaccess status
[sudo] password for rene:
APC      : 001,018,0453
DATE     : 2019-01-13 18:44:10 +0000
HOSTNAME : odroidxu4-share
VERSION  : 3.14.14 (31 May 2016) debian
UPSNAME  : smartups750
CABLE    : USB Cable
DRIVER   : USB UPS Driver
UPSMODE  : Stand Alone
STARTTIME: 2019-01-13 18:44:00 +0000
STATUS   : COMMLOST
MBATTCHG : 5 Percent
MINTIMEL : 3 Minutes
MAXTIME  : 0 Seconds
NUMXFERS : 0
TONBATT  : 0 Seconds
CUMONBATT: 0 Seconds
XOFFBATT : N/A
STATFLAG : 0x05000100
END APC  : 2019-01-13 19:53:54 +0000

The problem is STATUS : COMMLOST (no connection to the UPS), despite the fact that I can see it's connected with:

rene@odroidxu4-share:~$ lsusb
Bus 006 Device 002: ID 0bda:8153 Realtek Semiconductor Corp.
Bus 006 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 005 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 002: ID 152d:0578 JMicron Technology Corp. / JMicron USA Technology Corp.
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 002: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

(Bus 002 Device 002: ID 051d:0002 American Power Conversion Uninterruptible Power Supply)

Content of the conf file (sudo nano /etc/apcupsd/apcupsd.conf):

## apcupsd.conf v1.1 ##
#
#  "apcupsd" POSIX config file
#
# Note that the apcupsd daemon must be restarted in order for changes to
# this configuration file to become active.
#
# ========= General configuration parameters ============
#
# UPSNAME xxx
#   Use this to give your UPS a name in log files and such. This
#   is particulary useful if you have multiple UPSes. This does not
#   set the EEPROM. It should be 8 characters or less.
UPSNAME smartups750

# UPSCABLE <cable>
#   Defines the type of cable connecting the UPS to your computer.
#
#   Possible generic choices for <cable> are:
#     simple, smart, ether, usb
#
#   Or a specific cable model number may be used:
#     940-0119A, 940-0127A, 940-0128A, 940-0020B,
#     940-0020C, 940-0023A, 940-0024B, 940-0024C,
#     940-1524C, 940-0024G, 940-0095A, 940-0095B,
#     940-0095C, 940-0625A, M-04-02-2000
#
UPSCABLE usb

# To get apcupsd to work, in addition to defining the cable
# above, you must also define a UPSTYPE, which corresponds to
# the type of UPS you have (see the Description for more details).
# You must also specify a DEVICE, sometimes referred to as a port.
# For USB UPSes, please leave the DEVICE directive blank. For
# other UPS types, you must specify an appropriate port or address.
#
# UPSTYPE   DEVICE           Description
# apcsmart  /dev/tty**       Newer serial character device, appropriate for
#                            SmartUPS models using a serial cable (not USB).
#
# usb       <BLANK>          Most new UPSes are USB. A blank DEVICE
#                            setting enables autodetection, which is
#                            the best choice for most installations.
#
# net       hostname:port    Network link to a master apcupsd through apcupsd's
#                            Network Information Server. This is used if the
#                            UPS powering your computer is connected to a
#                            different computer for monitoring.
#
# snmp      hostname:port:vendor:community
#                            SNMP network link to an SNMP-enabled UPS device.
#                            Hostname is the ip address or hostname of the UPS
#                            on the network. Vendor can be can be "APC" or
#                            "APC_NOTRAP". "APC_NOTRAP" will disable SNMP trap
#                            catching; you usually want "APC". Port is usually
#                            161. Community is usually "private".
#
# netsnmp   hostname:port:vendor:community
#                            OBSOLETE
#                            Same as SNMP above but requires use of the
#                            net-snmp library. Unless you have a specific need
#                            for this old driver, you should use 'snmp' instead.
#
# dumb      /dev/tty**       Old serial character device for use with
#                            simple-signaling UPSes.
#
# pcnet     ipaddr:username:passphrase:port
#                            PowerChute Network Shutdown protocol which can be
#                            used as an alternative to SNMP with the AP9617
#                            family of smart slot cards. ipaddr is the IP
#                            address of the UPS management card. username and
#                            passphrase are the credentials for which the card
#                            has been configured. port is the port number on
#                            which to listen for messages from the UPS, normally
#                            3052. If this parameter is empty or missing, the
#                            default of 3052 will be used.
#
# modbus    /dev/tty**       Serial device for use with newest SmartUPS models
#                            supporting the MODBUS protocol.
# modbus    <BLANK>          Leave the DEVICE setting blank for MODBUS over USB
#                            or set to the serial number of the UPS to ensure
#                            that apcupsd binds to that particular unit
#                            (helpful if you have more than one USB UPS).
#
UPSTYPE usb
DEVICE

# POLLTIME <int>
#   Interval (in seconds) at which apcupsd polls the UPS for status. This
#   setting applies both to directly-attached UPSes (UPSTYPE apcsmart, usb,
#   dumb) and networked UPSes (UPSTYPE net, snmp). Lowering this setting
#   will improve apcupsd's responsiveness to certain events at the cost of
#   higher CPU utilization. The default of 60 is appropriate for most
#   situations.
POLLTIME 60

# LOCKFILE <path to lockfile>
#   Path for device lock file. This is the directory into which the lock file
#   will be written. The directory must already exist; apcupsd will not create
#   it. The actual name of the lock file is computed from DEVICE.
#   Not used on Win32.
LOCKFILE /var/lock

# SCRIPTDIR <path to script directory>
#   Directory in which apccontrol and event scripts are located.
SCRIPTDIR /etc/apcupsd

# PWRFAILDIR <path to powerfail directory>
#   Directory in which to write the powerfail flag file. This file
#   is created when apcupsd initiates a system shutdown and is
#   checked in the OS halt scripts to determine if a killpower
#   (turning off UPS output power) is required.
PWRFAILDIR /etc/apcupsd

# NOLOGINDIR <path to nologin directory>
#   Directory in which to write the nologin file. The existence
#   of this flag file tells the OS to disallow new logins.
NOLOGINDIR /etc

#
# ======== Configuration parameters used during power failures ==========
#
# The ONBATTERYDELAY is the time in seconds from when a power failure
# is detected until we react to it with an onbattery event.
#
# This means that, apccontrol will be called with the powerout argument
# immediately when a power failure is detected. However, the
# onbattery argument is passed to apccontrol only after the
# ONBATTERYDELAY time. If you don't want to be annoyed by short
# powerfailures, make sure that apccontrol powerout does nothing
# i.e. comment out the wall.
ONBATTERYDELAY 6

#
# Note: BATTERYLEVEL, MINUTES, and TIMEOUT work in conjunction, so
# the first that occurs will cause the initation of a shutdown.
#
# If during a power failure, the remaining battery percentage
# (as reported by the UPS) is below or equal to BATTERYLEVEL,
# apcupsd will initiate a system shutdown.
BATTERYLEVEL 5

# If during a power failure, the remaining runtime in minutes
# (as calculated internally by the UPS) is below or equal to MINUTES,
# apcupsd, will initiate a system shutdown.
MINUTES 3

# If during a power failure, the UPS has run on batteries for TIMEOUT
# many seconds or longer, apcupsd will initiate a system shutdown.
# A value of 0 disables this timer.
#
# Note, if you have a Smart UPS, you will most likely want to disable
# this timer by setting it to zero. That way, you UPS will continue
# on batteries until either the % charge remaing drops to or below BATTERYLEVEL,
# or the remaining battery runtime drops to or below MINUTES. Of course,
# if you are testing, setting this to 60 causes a quick system shutdown
# if you pull the power plug.
# If you have an older dumb UPS, you will want to set this to less than
# the time you know you can run on batteries.
TIMEOUT 0

# Time in seconds between annoying users to signoff prior to
# system shutdown. 0 disables.
ANNOY 300

# Initial delay after power failure before warning users to get
# off the system.
ANNOYDELAY 60

# The condition which determines when users are prevented from
# logging in during a power failure.
# NOLOGON <string> [ disable | timeout | percent | minutes | always ]
NOLOGON disable

# If KILLDELAY is non-zero, apcupsd will continue running after a
# shutdown has been requested, and after the specified time in
# seconds attempt to kill the power. This is for use on systems
# where apcupsd cannot regain control after a shutdown.
# KILLDELAY <seconds>  0 disables
KILLDELAY 0

#
# ==== Configuration statements for Network Information Server ====
#
# NETSERVER [ on | off ] on enables, off disables the network
#  information server. If netstatus is on, a network information
#  server process will be started for serving the STATUS and
#  EVENT data over the network (used by CGI programs).
NETSERVER on

# NISIP <dotted notation ip address>
#  IP address on which NIS server will listen for incoming connections.
#  This is useful if your server is multi-homed (has more than one
#  network interface and IP address). Default value is 0.0.0.0 which
#  means any incoming request will be serviced. Alternatively, you can
#  configure this setting to any specific IP address of your server and
#  NIS will listen for connections only on that interface. Use the
#  loopback address (127.0.0.1) to accept connections only from the
#  local machine.
NISIP 127.0.0.1

# NISPORT <port> default is 3551 as registered with the IANA
#  port to use for sending STATUS and EVENTS data over the network.
#  It is not used unless NETSERVER is on. If you change this port,
#  you will need to change the corresponding value in the cgi directory
#  and rebuild the cgi programs.
NISPORT 3551

# If you want the last few EVENTS to be available over the network
# by the network information server, you must define an EVENTSFILE.
EVENTSFILE /var/log/apcupsd.events

# EVENTSFILEMAX <kilobytes>
#  By default, the size of the EVENTSFILE will be not be allowed to exceed
#  10 kilobytes. When the file grows beyond this limit, older EVENTS will
#  be removed from the beginning of the file (first in first out). The
#  parameter EVENTSFILEMAX can be set to a different kilobyte value, or set
#  to zero to allow the EVENTSFILE to grow without limit.
EVENTSFILEMAX 10

#
# ========== Configuration statements used if sharing =============
#            a UPS with more than one machine
#
# Remaining items are for ShareUPS (APC expansion card) ONLY
#
# UPSCLASS [ standalone | shareslave | sharemaster ]
#   Normally standalone unless you share an UPS using an APC ShareUPS
#   card.
UPSCLASS standalone

# UPSMODE [ disable | share ]
#   Normally disable unless you share an UPS using an APC ShareUPS card.
UPSMODE disable

#
# ===== Configuration statements to control apcupsd system logging ========
#
# Time interval in seconds between writing the STATUS file; 0 disables
STATTIME 0

# Location of STATUS file (written to only if STATTIME is non-zero)
STATFILE /var/log/apcupsd.status

# LOGSTATS [ on | off ] on enables, off disables
#   Note! This generates a lot of output, so if
#         you turn this on, be sure that the
#         file defined in syslog.conf for LOG_NOTICE is a named pipe.
#  You probably do not want this on.
LOGSTATS off

# Time interval in seconds between writing the DATA records to
#   the log file. 0 disables.
DATATIME 0

# FACILITY defines the logging facility (class) for logging to syslog.
#          If not specified, it defaults to "daemon". This is useful
#          if you want to separate the data logged by apcupsd from other
#          programs.
#FACILITY DAEMON

#
# ========== Configuration statements used in updating the UPS EPROM =========
#
#
# These statements are used only by apctest when choosing "Set EEPROM with conf
# file values" from the EEPROM menu. THESE STATEMENTS HAVE NO EFFECT ON APCUPSD.
#
# UPS name, max 8 characters
#UPSNAME UPS_IDEN

# Battery date - 8 characters
#BATTDATE mm/dd/yy

# Sensitivity to line voltage quality (H cause faster transfer to batteries)
# SENSITIVITY H M L        (default = H)
#SENSITIVITY H

# UPS delay after power return (seconds)
# WAKEUP 000 060 180 300   (default = 0)
#WAKEUP 60

# UPS Grace period after request to power off (seconds)
# SLEEP 020 180 300 600    (default = 20)
#SLEEP 180

# Low line voltage causing transfer to batteries
# The permitted values depend on your model as defined by last letter
#  of FIRMWARE or APCMODEL. Some representative values are:
#    D 106 103 100 097
#    M 177 172 168 182
#    A 092 090 088 086
#    I 208 204 200 196     (default = 0 => not valid)
#LOTRANSFER 208

# High line voltage causing transfer to batteries
# The permitted values depend on your model as defined by last letter
#  of FIRMWARE or APCMODEL. Some representative values are:
#    D 127 130 133 136
#    M 229 234 239 224
#    A 108 110 112 114
#    I 253 257 261 265     (default = 0 => not valid)
#HITRANSFER 253

# Battery charge needed to restore power
# RETURNCHARGE 00 15 50 90 (default = 15)
#RETURNCHARGE 15

# Alarm delay
# 0 = zero delay after pwr fail, T = power fail + 30 sec, L = low battery, N = never
# BEEPSTATE 0 T L N        (default = 0)
#BEEPSTATE T

# Low battery warning delay in minutes
# LOWBATT 02 05 07 10      (default = 02)
#LOWBATT 2

# UPS Output voltage when running on batteries
# The permitted values depend on your model as defined by last letter
#  of FIRMWARE or APCMODEL. Some representative values are:
#    D 115
#    M 208
#    A 100
#    I 230 240 220 225     (default = 0 => not valid)
#OUTPUTVOLTS 230

# Self test interval in hours 336=2 weeks, 168=1 week, ON=at power on
# SELFTEST 336 168 ON OFF  (default = 336)
# SELFTEST 336

Does someone have a clue why I can't connect with my APC UPS?
I had the same problem with COMMLOST over a USB connection. After a lot of research I found, by pure chance, that with a USB connection you should change DEVICE /dev/ttys0 to just DEVICE in /etc/apcupsd/apcupsd.conf, with nothing after it. This way apcupsd searches everywhere on the system to find the UPS and connects correctly; no more COMMLOST. After that, restart apcupsd with:

$ sudo /etc/init.d/apcupsd restart

Gilbert
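In other words, matching the configuration shown in the question, the relevant directives for a USB-attached UPS end up as:

UPSCABLE usb
UPSTYPE usb
DEVICE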
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/494295", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/318914/" ] }
494,324
In Linux, /etc/resolv.conf often gets overwritten when we set up DNS, because of the multitude of programs managing the DNS servers. How does one properly set up DNS?
DNS Config Under Linux

DNS usage on Linux is done over a set of routines in the C library that provide access to the Internet Domain Name System (DNS). The resolver configuration file (resolv.conf) contains information that is read by the resolver routines the first time they are invoked by a process. In short, each process requesting DNS will read /etc/resolv.conf via the library. The NSS is layered on top of this, and is configured by /etc/nsswitch.conf.

Linux DNS config is located in the file /etc/resolv.conf, BUT there are a number of programs/services that want to automatically manage and handle the DNS configuration file at /etc/resolv.conf. In some situations you may want to manage this file yourself. Each program/service managing DNS has its own configuration files, like /etc/dnsmasq.conf (for the dnsmasq service), and appends the DNS config on connection change and/or on other events. A quick solution is to lock the DNS config file with chattr +i /etc/resolv.conf, but this is not recommended in certain cases; a better solution is to set up correctly all the programs/services using DNS (dnsmasq/network-manager/resolvconf/etc.).

Getting Back Control Of DNS

Here is an exhaustive list of setups to get back control of resolv.conf and avoid having it overwritten (how to disable/setup DNS from locations other than resolv.conf). Note that resolvconf is an independent program from resolv.conf; also, depending on your system/config, you may not have one or many of the programs listed here.

1. Resolvconf

Config files:

cat /etc/resolvconf/resolv.conf.d/head
nameserver 8.8.4.4

cat /etc/resolvconf/resolv.conf.d/base
nameserver 8.8.4.4

Update the config:

sudo resolvconf -u

Disable resolvconf:

systemctl disable --now resolvconf.service

2. Dnsmasq Service

Config files:

cat /etc/dnsmasq.conf
server=1.1.1.1
server=8.8.4.4

Update the config:

sudo systemctl restart dnsmasq.service

3. Network Manager

Config files: /etc/NetworkManager/*

Disable DNS:

$ cat /etc/NetworkManager/conf.d/no-dns.conf
[main]
dns=none

Enable DNS:

$ cat /etc/NetworkManager/conf.d/dns.conf
[main]
dns=default

[global-dns]
searches=example.com

[global-dns-domain-*]

Use the resolved service:

$ cat /usr/lib/NetworkManager/conf.d/resolved.conf
[main]
dns=systemd-resolved

Use resolvconf:

$ cat /usr/lib/NetworkManager/conf.d/resolvconf.conf
[main]
rc-manager=resolvconf

Update the config:

systemctl restart NetworkManager.service

4. Network Interfaces

Config files:

$ cat /etc/network/interfaces
#nameservers
# or dns-search like so
# dns-search x.y
dns-nameservers 4.4.4.4 8.8.8.8

Update the config: reboot

5. DHCP Client

Config files:

$ cat /etc/dhcp3/dhclient.conf
supersede domain-name-servers <dns_ip_address1>,<dns_ip_address2>;

Update the config: reboot

6. Rdnssd Service

Disable rdnssd:

systemctl disable --now rdnssd.service

7. Resolved Service

Disable resolved:

systemctl disable --now systemd-resolved.service

8. Netconfig

Config files: /etc/sysconfig/network/config

Disable netconfig:

cat /etc/sysconfig/network/config
NETCONFIG_DNS_POLICY=""

Update the config: reboot

Setting The DNS Server

Example of a /etc/resolv.conf configuration:

# Cloudflare
nameserver 1.0.0.1
# Google
#nameserver 8.8.8.8
#nameserver 8.8.4.4
# Cloudflare
#nameserver 1.1.1.1
# Classic config
#nameserver 192.168.1.1
#search lan
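A quick diagnostic before changing anything (my addition, not from the original answer): on many systems /etc/resolv.conf is a symlink, and its target reveals which service currently manages it:

ls -l /etc/resolv.conf
# a target under /run/systemd/resolve/, for example, indicates systemd-resolved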
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/494324", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/120919/" ] }
494,474
I use Xubuntu and found that I can update packages by using apt and apt-get. But I have heard that programmers usually use git in their projects to manage different versions. So why does Xubuntu not use git to handle different versions of software?
apt and apt-get are related to each other, and both are very different from git. apt is the package management tool for Debian-derived Linux distributions (including Ubuntu/Xubuntu). It is used to manage (i.e. download, install, remove, update) the binary software packages that make up the Linux distribution you are using. This is about updating your local system software, as well as adding and removing programs. apt is a command-line counterpart to the Synaptic graphical tool: essentially, they do the same things; however, one is graphical and runs in the X Window System and the other is run from the Linux command line. apt-get is the command that is most commonly used to install or update packages on your computer. apt is less commonly used and differs from apt-get mostly in terms of output formatting. You can use man apt or man apt-get to pull up the manual pages, which will give you more details about the differences between the commands. There are also many pages online that will give you more information about how Synaptic and apt can be used. git, on the other hand, is a version control system for source code, used in software development. Again, you can use man git for more information (if git is installed on your system). However, you shouldn't need to worry much about git if you use Xubuntu and are not involved in developing software yourself.
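For example, the everyday package-update sequence on Xubuntu looks like this:

sudo apt update
sudo apt upgrade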
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/494474", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/331453/" ] }
494,487
Let's say that in a text file, if I do

grep FINAL *.msg

it returns:

FINAL COMPUTATIONS: Elapsed Time: 000:30:55.65; CPU Time: 000:30:26.53
FINAL COMPUTATIONS: Elapsed Time: 000:28:11.77; CPU Time: 000:27:41.36

Now if I do a for loop such as

for line in `grep FINAL *.msg`

then "$line" does not consider "FINAL COMPUTATIONS: Elapsed Time: 000:30:55.65; CPU Time: 000:30:26.53" as a single line. How can I solve this?
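For reference, the usual fix is to pipe grep into a while/read loop instead of iterating with for, which word-splits the output; this sketch is my addition, not from the original thread:

grep FINAL *.msg | while IFS= read -r line; do
    printf '%s\n' "$line"   # "$line" now holds one whole matched line
done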
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/494487", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/284152/" ] }
494,525
find $LOG_PATH -type f -mtime +60 -print -exec rm {} \;

The above command deletes log files. I read the manual for each part of it but didn't understand it well. Can anyone give a simple explanation? Thanks!
Kudos for trying to understand the command using the manual first. I'll try and explain how the command works by referring to each section of the manual located here. The command essentially does the following things:

1) It looks inside a path specified by the $LOG_PATH variable for regular files that have been modified more than 60 days prior.
2) For each valid result, it prints the filename and then executes the rm command on the file.

The detailed breakdown is as follows. The find command has a basic syntax which looks like this (a few advanced options have been omitted for clarity):

find [starting-point...] [expression]

The starting point is a path, such as /home or documents/. The manual says:

GNU find searches the directory tree rooted at each given starting-point by evaluating the given expression from left to right, according to the rules of precedence...

In your case, this starting point is specified by the variable $LOG_PATH. This variable is expected to contain a value that is a valid path.

Now that find knows where to look for files, the next step is to evaluate the expressions given. Again, referring back to the manual:

The part of the command line after the list of starting points is the expression. This is a kind of query specification describing how we match files and what we do with the files that were matched.

For simplicity, we will consider the two types of expressions that appear in your command: tests and actions. Tests return a true or false value, usually on the basis of some property of a file we are considering. Actions have side effects (such as printing something on the standard output) and return either true or false, usually based on whether or not they are successful.

The tests in this case are the -type f and the -mtime +60 expressions. The -type test checks that a file is of a certain type. -type f checks if a file is a regular file. Other variations include -type d to check for directories, and -type l to look for symbolic links.

The -mtime +60 test is a bit more involved. It checks if a file's data/contents were modified more than 60 days ago. There is a complication here: find ignores the fractions involved in calculating the modified time. As a result, a file would actually need to be modified 61*24 hours ago to successfully pass this test. The time is calculated from the time when the command is executed, and is not based on calendar days.

The next expression in your find command is an action: -print. With the -print action, the filename of each file that passes the -type and -mtime tests is printed to standard output (one file per line). This essentially gives you the result of find: a list of files which pass the test conditions you have specified.

The final part of your find command is also an action: -exec. The -exec action runs the specified command on each result of find. In your case, this is the rm command, which removes the file. The curly braces ({}) specify where the name of the file is to be substituted. This results in a command of the form rm /path/to/target/file. The semicolon at the end specifies that the command specified by -exec should be executed once for each matched file. Because the semicolon is a special character for the shell as well, it is escaped by prefixing a backslash.
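With GNU find, the same cleanup can be done without spawning one rm per file by using the built-in -delete action; a sketch (my addition, and note the quoting to survive spaces in the path):

find "$LOG_PATH" -type f -mtime +60 -print -delete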
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/494525", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/308199/" ] }
494,538
I run an awk script with the command awk -f awk_scr ERR.txt. Here is the script:

BEGIN {
    FS = " "
    target = "missing"
}
{
    for (i = 1; i <= NR; i++) {
        for (j = 1; j <= NF; j++) {
            if ($j == target) {
                do {
                    printf $j > "final.txt"
                } while (j == NF)
            }
            if (j == NF) {
                printf "\n"
            }
        }
    }
}

This awk script is meant to trim each line that matches "missing" and then print it to a file, final.txt. The ERR.txt content is here:

npm ERR! peer dep missing: react@^15.0.0, required by [email protected] ERR! peer dep missing: [email protected] - 3, required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected] ERR! missing: [email protected], required by [email protected]

But when I execute the command, there is only a blank stdout.

================ADD MORE CONTENT================

This is what I expect the output to be (I only spent a few minutes on it; in any case, an awk script for this would be worth using in the future):
missing: react@^15.0.0, required by [email protected] missing: [email protected] - 3, required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected] missing: [email protected], required by [email protected]
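For reference, one way to produce output in that trimmed form is grep rather than awk; this sketch is my addition, not from the original thread (it prints everything from the first "missing:" onward on each matching line):

grep -o 'missing:.*' ERR.txt > final.txt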
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/494538", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182443/" ] }
494,546
I have a file that might contain GUIDs (their canonical textual representation ). I want to do an action for each GUID in the file. It might contain any number of GUIDs. I have already a file ready for reading. How do I spot the GUIDS? I know I need to use while read FILENAME An example of my file : GUIDs--------------------------------------cf6e328c-c918-4d2f-80d3-71ecaf09bf7b91d523b0-4926-456e-a9d2-ade713f5b07f(2 rows)// THERE IS AN EMPTY LINE HERE AFTER NUMBER OF ROWS
With the GNU implementation of grep (or compatible):

<your-file grep -Ewo '[[:xdigit:]]{8}(-[[:xdigit:]]{4}){3}-[[:xdigit:]]{12}' |
  while IFS= read -r guid; do
    your-action "$guid"
    sleep 5
  done

Would find those GUIDs wherever they are in the input (and provided they are neither preceded nor followed by word characters ). GNU grep has a -o option that prints the non-empty matches of the regular expression. -w is another non-standard extension coming I believe from SysV to match on whole words only. It matches only if the matched text is between a transition between a non-word and word character and one between a word and non-word character (where word characters are alphanumerics or underscore). That's to guard against matching on things like: aaaaa aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaaaaaa aaaaa The rest is standard POSIX syntax. Note that [[:xdigit:]] matches on ABCDEF as well. You can replace it with [0123456789abcdef] if you want to match only lower case GUIDs.
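A quick way to test the pattern against the sample file from the question (a hypothetical check, using one of the GUIDs shown there):

printf 'GUIDs\n-----\ncf6e328c-c918-4d2f-80d3-71ecaf09bf7b\n(2 rows)\n' |
  grep -Ewo '[[:xdigit:]]{8}(-[[:xdigit:]]{4}){3}-[[:xdigit:]]{12}'

This prints only the GUID itself, ignoring the header, the separator line and the "(2 rows)" footer.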
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/494546", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/331529/" ] }
494,561
I have a directory that has a name like 10.2 (14C92) i want to remove using rmdir -rf (to include their content). I do know how to pass a space with \ but I got the result: -bash: syntax error near unexpected token (' So it seems to me that I somehow have to escape the parenthesis from the command. How am I doing this properly?
You can use \ to escape any single character, ( and ) included as you already do with spaces. While it works, it can be cumbersome if you have lots of characters/spaces to escape. A faster alternative is to use single quotes ( ' ) to escape the whole string , i.e. something like: rm -rf '10.2 (14C92)' Please keep in mind that ' escapes everything , so use it with care if you need, for example, variable expansion inside the quotes. That said, using double quotes works for escaping spaces and parentheses, also: rm -rf "10.2 (14C92)" Also, based on your question, you try to use rmdir . rmdir works only for empty directories and it doesn't have -r and/or -f flags: NAME rmdir - remove empty directories SYNOPSIS rmdir [OPTION]... DIRECTORY... DESCRIPTION Remove the DIRECTORY(ies), if they are empty . You'll want to use rm -rf . If 10.2 (14C92) is indeed empty, a simple rmdir '10.2 (14C92)' would do. Escaping is done by bash itself, so it works for every command. In other words, it's bash which decides what to pass to the command, based on its own parsing rules; the command then acts on the arguments after bash has parsed them.
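The difference between the two quoting styles shows up as soon as a variable is involved (a hypothetical illustration):

dir="10.2 (14C92)"
rm -rf "$dir"    # double quotes: $dir expands, spaces and parentheses still need no escaping
rm -rf '$dir'    # single quotes: tries to remove a directory literally named $dir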
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/494561", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/331547/" ] }
494,637
So I less my file: less myFile.log Then I try to search for a value: /70.5 I've since learned less uses regex, so . is a wildcard. I've tried to escape it with no success.
You can turn off regex mode by hitting Ctrl + R before typing the pattern: ^R Don't interpret regular expression metacharacters; that is, do a simple textual comparison.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/494637", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/218434/" ] }
494,725
I want to clarify that I am not talking about how to escape characters on the shell level of interpretation. As far as I can tell, only two character need to be escaped: % and \ To print a literal % , you must escape it with a preceding % : printf '%%' To print a literal \ you must escape it with a preceding \ : printf '\\' Are there any other instances where I would need to escape a character for it to be interpreted literally?
In the format argument of printf , only the % and \ characters are special (no, " is not special and \" is unspecified per POSIX). But, two important notes. In most printf implementations¹, it's the byte values for \ and % that are special and the POSIX specification could even be interpreted as requiring it as it requires the printf utility to be an interface to the printf(3) C function and not wprintf(3) for instance (like it requires %.3s to truncate to 3 bytes and not 3 characters ). In some character encodings including BIG5 and GB18030, there are hundreds of characters that contain the encoding of backslash, and to escape those for printf , you'd need to insert a \ before each 0x5c byte within the encoding of those characters! For instance in BIG5-HKSCS, as used for instance in the zh_HK.big5hkscs (Hong Kong) locale, all of Ěαжふ㘘㙡䓀䨵䪤么佢俞偅傜兝功吒吭园坼垥塿墦声娉娖娫嫹嬞孀尐岤崤幋廄惝愧揊擺暝枯柦槙檝歿汻沔涂淚滜潿瀙瀵焮燡牾狖獦珢珮琵璞疱癧礒稞穀笋箤糭綅縷罡胐胬脪苒茻莍蓋蔌蕚螏螰許豹贕赨跚踊蹾躡鄃酀酅醆鈾鎪閱鞸餐餤駹騱髏髢髿鱋鱭黠﹏ contain byte 0x5c (which is also the encoding of \ ). With most printf implementations, in that locale, printf 'αb' doesn't output αb but byte 0xa3 (the first byte of the encoding of α ) followed by the BS character (the expansion of \b ).

$ LC_ALL=zh_HK.big5hkscs luit
$ locale charmap
BIG5-HKSCS
$ printf 'αb' | LC_ALL=C od -tx1 -tc
0000000  a3  08
         243  \b
0000002

Best is to avoid using (and even installing / making available) those locales as they cause all sorts of bugs and vulnerabilities of that sort. Some printf implementations support options, and even those that don't are required to support -- as the option delimiter. So printf -- won't output -- but likely report an error about a missing format argument. So if you can't guarantee your format won't start with - , you have to use the -- option delimiter: printf -- "$escaped_format" x y... In any case, if you want to print arbitrary strings, you'd use:

printf '%s\n' "$data" # with terminating newline
printf %s "$data" # without

There's no character that is special in the string passed to %s (though note that with the exception of the printf builtin of zsh , you can't pass the NUL character in any of printf arguments). Note that while the canonical way to enter a literal \ is with \\ and a literal % with %% , on ASCII-based systems, you can also use \134 and \45 and with some printf implementations \x5c , \x25 , or \x{5c} , \x{25} , or (even on non-ASCII systems): \u005c , \u0025 or \u{5c} , \u{25} . ¹ yash 's printf builtin being the only exception I am aware of.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/494725", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273175/" ] }
494,754
Here is output of: sudo iwlist scan 2>/dev/null | grep ESSID | sed 's/.*ESSID:"\(.*\)".*/\1/' 2>/dev/null Yash ShahCoreFragmentCoreFragment_5GCoreFragmentdlinkYash ShahCOMFASTAppbirds_TechnologiesSYMBIOSIS20096641CoreFragment_5GAMBER_AP1REDWING LABS_5G While the same thing written in a script is not working the same. Here is a snippet in which I used above command. for ssid_name in $(sudo iwlist scan 2>/dev/null | grep ESSID | sed 's/.*ESSID:"\(.*\)".*/\1/' 2>/dev/null)do echo "$ssid_name"done I got output like this: YashShahCoreFragmentCoreFragment_5GCoreFragmentYashShahREDWINGLABSCOMFASTAppbirds_TechnologiesSYMBIOSISCoreFragment_5GREDWINGLABS_5G Note : When there is a space in output it take as a new line. I'm working on Ubuntu 18.04.
First, note that it is not one command behaving differently. In your code snippets, standard output is written by two quite different commands. Your first one prints sed 's output, while in your second one: 1) The output of sed is substituted (by the command substitution $(...) ) after in ; 2) The resulting string is expanded into a list of elements; 3) Each item is in turn assigned to ssid_name and printed by echo . Expansion in point 2 is performed according to the value of the internal field separator ( IFS ), which by default is <space><tab><newline> . Thus, your solution works because it lets for split the substituted string on newlines only. As an aside on your answer, note that $'\n' is not portable - even if that syntax ( $'string' ) is supported in many shells. You can nevertheless get the same effect portably by assigning a literal newline, i.e. an opening quote, an actual line break, and a closing quote:

IFS='
'

An alternative construct for processing lines of input is using read in a while loop:

sudo iwlist scan 2>/dev/null | grep ESSID | sed 's/.*ESSID:"\(.*\)".*/\1/' 2>/dev/null |
while IFS= read -r i; do
  printf '%s\n' "$i"
done

Here, too, each line of sed 's output is split according to the value of IFS - which in this example is set to the null string precisely to prevent any effects of word splitting - but lines would be kept separated by read regardless of IFS .
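A compact way to see the default splitting at work (a hypothetical snippet, reusing two SSIDs from the question):

set -- $(printf 'Yash Shah\nCoreFragment\n')
echo $#    # 3: "Yash", "Shah" and "CoreFragment"

IFS='
'
set -- $(printf 'Yash Shah\nCoreFragment\n')
echo $#    # 2: each line now stays intact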
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/494754", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/324691/" ] }
494,760
I tried xtricman⚓ArchVirtual⏺️~ls /proc/self/fd/ -lTotal 0lrwx------ 1 xtricman users 64 1月 16 16:34 0 -> /dev/pts/0lrwx------ 1 xtricman users 64 1月 16 16:34 1 -> /dev/pts/0lrwx------ 1 xtricman users 64 1月 16 16:34 2 -> /dev/pts/0lrwx------ 1 xtricman users 64 1月 16 16:34 3 -> '/home/xtricman/a (deleted)'lr-x------ 1 xtricman users 64 1月 16 16:34 4 -> /proc/1273/fdxtricman⚓ArchVirtual⏺️~ln /proc/self/fd/3 bln: failed to create hard link 'b' => '/proc/self/fd/3': Invalid cross-device link Since the inode is still on the disk, how can I re-create a name for it? What if there's no open file description pointing to that inode but that inode is mmaped? How can I restore it in that case?
You're not supposed to be able to do that (but read below for an interesting exception). If the kernel was to let it happen, then a call like:

fd = open(filename, O_CREAT|O_RDWR, 0666);
unlink(filename);
linkat(fd, "", 0, "/new/path", AT_EMPTY_PATH);

would succeed even when the inode referenced by fd has a link count of 0, when done by a process with CAP_DAC_READ_SEARCH caps. But the kernel is actively preventing it from happening, without regard to the capabilities or privileges of the process doing it.

int vfs_link(struct dentry *old_dentry, ...
{
	...
	/* Make sure we don't allow creating hardlink to an unlinked file */
	if (inode->i_nlink == 0 && !(inode->i_state & I_LINKABLE))
		error = -ENOENT;

This is also documented in the manpage: AT_EMPTY_PATH (since Linux 2.6.39) If oldpath is an empty string, create a link to the file referenced by olddirfd (which may have been obtained using the open(2) O_PATH flag). In this case, olddirfd can refer to any type of file except a directory. This will generally not work if the file has a link count of zero (files created with O_TMPFILE and without O_EXCL are an exception) . The caller must have the CAP_DAC_READ_SEARCH capability in order to use this flag. This flag is Linux-specific; define _GNU_SOURCE to obtain its definition. based on the kernel source, there seems to be no other exception besides O_TMPFILE . O_TMPFILE is documented in the open(2) manpage; below is a small example based on that:

#define _GNU_SOURCE 1
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <err.h>

int main(int ac, char **av)
{
	char path[64];
	int fd;

	if(ac < 3) errx(1, "usage: %s dir newpath", av[0]);
	if((fd = open(av[1], O_TMPFILE|O_RDWR, 0666)) == -1) err(1, "open");
	/*
	 * ...
	 * write stuff to fd and only "realize" the file at the end if
	 * everything has succeeded
	 */
	/* the following line only works with CAP_DAC_READ_SEARCH */
	/* if(linkat(fd, "", 0, av[2], AT_EMPTY_PATH)) err(1, "linkat"); */
	snprintf(path, sizeof path, "/proc/self/fd/%d", fd);
	if(linkat(AT_FDCWD, path, AT_FDCWD, av[2], AT_SYMLINK_FOLLOW)) err(1, "linkat");
	return 0;
}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/494760", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/301641/" ] }
494,786
I use Arch Linux 4.19.15-1-lts #1 SMP Sun Jan 13 13:53:52 CET 2019 x86_64 GNU/Linux . I do also have Nix installed: nix-env (Nix) 2.2 . I've never had any problems until the recent update to version 2.2 . I always do the upgrades/updates with these two steps: $ nix-channel --update...$ nix-env --upgrade... ...but after the recent 2.2 update I can't find a way to make nix-channel --update work anymore. I'm always getting these errors: error: cloning builder process: Operation not permittederror: unable to start build processerror: program '/nix/store/876x7a35qbn3q062b6zcz6va88m0990d-nix-2.2/bin/nix-env' failed with exit code 1 ...even if I do rollback the previous operation(s): $ nix-channel --update unpacking channels...error: cloning builder process: Operation not permittederror: unable to start build processerror: program '/nix/store/876x7a35qbn3q062b6zcz6va88m0990d-nix-2.2/bin/nix-env' failed with exit code 1$ nix-channel --rollback switching from generation 40 to 39$ nix-channel --update unpacking channels...error: cloning builder process: Operation not permittederror: unable to start build processerror: program '/nix/store/876x7a35qbn3q062b6zcz6va88m0990d-nix-2.2/bin/nix-env' failed with exit code 1 This is what I have in the update list: $ nix-channel --list nixpkgs https://nixos.org/channels/nixpkgs-unstable ...and eventually I can't even delete that: $ nix-channel --remove nixpkgs uninstalling 'nixpkgs-19.03pre165281.7d864c6bd63'error: cloning builder process: Operation not permittederror: unable to start build processerror: program '/nix/store/876x7a35qbn3q062b6zcz6va88m0990d-nix-2.2/bin/nix-env' failed with exit code 1 I would like to avoid a reinstall. UPDATE I couldn't wait! O:) I went ahead and removed the current installation...and when I do a fresh install I basically got the same result: $ sh <(curl https://nixos.org/nix/install) --no-daemon % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed100 2476 100 2476 0 0 5417 0 --:--:-- --:--:-- --:--:-- 5406downloading Nix 2.2.1 binary tarball for x86_64-linux from 'https://nixos.org/releases/nix/nix-2.2.1/nix-2.2.1-x86_64-linux.tar.bz2' to '/tmp/nix-binary-tarball-unpack.n5vqvsi4Uq'... % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed100 22.5M 100 22.5M 0 0 4016k 0 0:00:05 0:00:05 --:--:-- 4377kNote: a multi-user installation is possible. See https://nixos.org/nix/manual/#sect-multi-user-installationperforming a single-user installation of Nix...directory /nix does not exist; creating it by running 'mkdir -m 0755 /nix && chown x80486 /nix' using sudo[sudo] password for x80486: copying Nix to /nix/store.................................initialising Nix database...Nix: creating /home/x80486/.nix-profileinstalling 'nix-2.2.1'error: cloning builder process: Operation not permittederror: unable to start build process/tmp/nix-binary-tarball-unpack.n5vqvsi4Uq/unpack/nix-2.2.1-x86_64-linux/install: unable to install Nix into your default profile ...so looks like there is, in general, something going on with Linux (or specifically the distro that use) and Nix.
Following the suggestion in this comment resolves the problem: sysctl kernel.unprivileged_userns_clone=1
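To apply this as root and keep it across reboots, something like the following should do (the drop-in file name is arbitrary):

sudo sysctl -w kernel.unprivileged_userns_clone=1
echo 'kernel.unprivileged_userns_clone=1' | sudo tee /etc/sysctl.d/99-userns.conf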
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/494786", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/280674/" ] }
494,788
How do I omit the group column in the output of ls -hal ? I was using ls -hal|cut -f4 --complement -d ' ' and it works find most of the time, but if I run it in / I get scrambled output: drwxr-xr-x 25 root 4.0K Dec 21 06:08 .drwxr-xr-x 25 root 4.0K Dec 21 06:08 ..drwxr-xr-x root root 4.0K Jan 12 06:17 bindrwxr-xr-x root root 4.0K Jan 16 10:36 bootdrwxrwxr-x root root 4.0K May 1 2018 cdrom-rw------- root root 56M May 1 2018 coredrwxr-xr-x 20 root 4.7K Jan 6 17:54 devdrwxr-xr-x root root 4.0K Oct 9 15:14 .dotnetdrwxr-xr-x 154 root 12K Jan 16 10:36 etcdrwxr-xr-x root root 4.0K Nov 24 19:39 home (I can't figure out why this happens, ls -hal alone gives drwxr-xr-x 25 root root 4.0K Dec 21 06:08 .drwxr-xr-x 25 root root 4.0K Dec 21 06:08 ..drwxr-xr-x 2 root root 4.0K Jan 12 06:17 bindrwxr-xr-x 3 root root 4.0K Jan 16 10:36 bootdrwxrwxr-x 2 root root 4.0K May 1 2018 cdrom-rw------- 1 root root 56M May 1 2018 coredrwxr-xr-x 20 root root 4.7K Jan 6 17:54 devdrwxr-xr-x 154 root root 12K Jan 16 10:36 etc ) I also tried awk '{print $1,$2,$3,$5,$6,$7,$8,$9}' but that always messes up the alignment: drwxr-xr-x 25 root 4.0K Dec 21 06:08 .drwxr-xr-x 25 root 4.0K Dec 21 06:08 ..drwxr-xr-x 2 root 4.0K Jan 12 06:17 bindrwxr-xr-x 3 root 4.0K Jan 16 10:36 bootdrwxrwxr-x 2 root 4.0K May 1 2018 cdrom-rw------- 1 root 56M May 1 2018 coredrwxr-xr-x 20 root 4.7K Jan 6 17:54 dev
There are specific ls options to hide the group column. From ls(1) : -G , --no-group in a long listing, don't print group names -o like -l , but do not list group information So you could use either ls -hao , or ls -halG .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/494788", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106476/" ] }
494,803
I have the following file: ICR1 +ICR1+1+3199 +ICR1+2526+2828 +IRT1 +IRT1+1+1489 +IRT1+713+937 +LSR1 -LSR1+1+1175 -LSR1+366+638 -NME1 +NME1+1+340 +NME1+2+118 +PWR1 -PWR1+1+941 -PWR1+724+939 -Q0017 -Q0017+1+162 -Q0020 -Q0020+1370+1513 -Q0020+1+440 - The first and second columns are tab-separated. I do need to have the following: ICR1 +IRT1 +LSR1 -NME1 +PWR1 -Q0017 -Q0020 - I've tried to use awk with the field separator "+" but it erased the + from the second column as well...
You can set awk's field separator to whitespace or + , and then do the classic associative array based de-duplication:

$ awk -F'[ \t+]' '!seen[$1]++' file
ICR1 +
IRT1 +
LSR1 -
NME1 +
PWR1 -
Q0017 -
Q0020 -
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/494803", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/325709/" ] }
494,889
I have a program that sums a column in a file: awk -v col=2 '{sum+=$col}END{print sum}' input-file However, it has a problem: If you give it a file that doesn't have numeric data, (or if one number is missing) it will interpret it as zero. I want it to produce an error if one of the fields cannot be parsed as a number. Here's an example input: bob 1dave 2alice 3.5foo bar I want it to produce an error because 'bar' is not a number, rather than ignoring the error.
I ended up with this:

awk -v col="$col" 'typeof($col) != "strnum" {
    print "Error on line " NR ": " $col " is not numeric"
    noprint=1
    exit 1
}
{ sum+=$col }
END {
    if(!noprint) print sum
}' "$file"

This uses typeof, which is a GNU awk extension. typeof($col) returns 'strnum' if $col is a valid number, and 'string' or 'unassigned' if it is not. See Can I determine type of an awk variable?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/494889", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32067/" ] }
495,017
I'm trying to get the status of a unit, but only the first 3 lines like this: systemctl --user status resilio-sync --lines=3 I've tried various variations of this with -n 3 etc..., nothing works.And the strange part: it always shows the full log (13 lines), instead of 10 lines which should be the default according to the documentation for systemctl . Trying systemctl status confirms this: it just outputs all 45 lines to the terminal, when it actually should be 10. Am I missing something here? As far as I know I didn't change anything. As a workaround I'm currently using: systemctl --user status resilio-sync | sed -ne '1,3p' but I'd rather like to fix the underlying problem and use the native command.System is Kali Linux (re4son-kernel, sticky fingers) on a Raspberry Pi (easy to blame on this strange setup, but since this is core Linux functionality I don't think it should matter)
The systemctl status command displays the status of the service and the corresponding lines from journalctl ; --lines=3 limits the displayed number of lines from the journal to 3. E.g.: systemctl --user status resilio-sync --lines=0 will display only the status of the resilio-sync service without the journalctl log. -n, --lines= When used with status , controls the number of journal lines to show, counting from the most recent ones. Takes a positive integer argument, or 0 to disable journal output . Defaults to 10. To limit the output further you can instead query single properties:

systemctl is-active resilio-sync
systemctl is-enabled resilio-sync

or run both checks in one go:

systemctl is-active resilio-sync && systemctl is-enabled resilio-sync
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/495017", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/331949/" ] }
495,032
I'm trying to write bash script, which will automatically read all file names in current and custom directories, then apply new created docker image' names to finded yml files via kubectl, then read from two arrays image names and full registry names: declare -a IMG_ARRAY=`docker images | awk '/dev2/ && /latest/' | awk '{print $1}' | sed ':a;N;$!ba;s/\n/ /g'`declare -a IMG_NAME=`docker images | awk '/dev2/ && /latest/' | awk '{print $1}' | awk -F'/' '{print $3}' | cut -f1 -d"." | sed ':a;N;$!ba;s/\n/ /g'`IFS=' ' read -r -a array <<< "$IMG_NAME"for element in "${array[@]}" do kubectl set image deployment/$IMG_NAME $IMG_NAME=$IMG_ARRAY --record kubectl rollout status deployment/$IMG_NAMEdone Both arrays has same number of indexes. My loop should take first indexes from IMG_NAME and put into kubectl commands for every array index. For now it is taking whole array....
Your two declare -a lines don't actually create arrays: a backquoted command substitution assigned without parentheses stores the whole output as one string in element 0. The loop then iterates over element but keeps using $IMG_NAME and $IMG_ARRAY inside the body, which expand to those whole strings; that is why the commands see "the whole array". Split both strings into real arrays and walk them by a shared index instead (a sketch based on your snippet, keeping your variable names):

read -r -a imgs <<< "$IMG_ARRAY"
read -r -a names <<< "$IMG_NAME"
for i in "${!names[@]}"; do
    kubectl set image "deployment/${names[i]}" "${names[i]}=${imgs[i]}" --record
    kubectl rollout status "deployment/${names[i]}"
done

Since both strings were built from the same docker images listing in the same order, index i pairs each image name with its full registry path.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/495032", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37033/" ] }
495,094
I have a command that outputs information about all DIMM slots in blocks like the following: ID SIZE TYPE44 105 SMB_TYPE_MEMDEVICE (type 17) (memory device) Manufacturer: NO DIMM Serial Number: NO DIMM Asset Tag: NO DIMM Location Tag: P1-DIMMD1 Part Number: NO DIMM Physical Memory Array: 43 Memory Error Data: Not Supported Total Width: 0 bits Data Width: 0 bits Size: Not Populated Form Factor: 9 (DIMM) Set: None Rank: Unknown Memory Type: 2 (unknown) Flags: 0x4 SMB_MDF_UNKNOWN (unknown) Speed: Unknown Configured Speed: Unknown Device Locator: P1-DIMMD1 Bank Locator: P0_Node1_Channel0_Dimm0 Minimum Voltage: 1.20V Maximum Voltage: 1.20V Configured Voltage: 1.20V The blocks start with the ID SIZE TYPE header and end with the configured voltage information. The command outputs one of these blocks of data for each DIMM separated by a single blank line each. I would like to be able to get the block of information for a specific DIMM slot based on the Location Tag field, but am unsure how to go about it. I am pretty sure this can be done with awk but only know how to print the match awk '/P1-DIMMD1/' or the line prior to match awk '/P1-DIMMD1/ {print a}{a=$0}' Does anyone know how I could extract this whole block of data if the Location Tag matches my search ( P1-DIMMD1 )?
The following will match the tag given in the tag variable: awk -v tag=P1-DIMMD1 '/ID SIZE TYPE/ { block = $0; output = 0; next } { block = block "\n" $0 } /Location Tag/ { output = ($0 ~ tag) } /Configured Voltage/ && output { print block }' The AWK script is

/ID SIZE TYPE/ {
    block = $0
    output = 0
    next
}
{ block = block "\n" $0 }
/Location Tag/ { output = ($0 ~ tag) }
/Configured Voltage/ && output { print block }

We accumulate a block in the block variable, and output it when we reach the end of the block if we saw the right tag in the process.
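Since the blocks are separated by single blank lines, awk's paragraph mode offers a shorter alternative sketch:

awk -v tag=P1-DIMMD1 'BEGIN { RS = ""; ORS = "\n\n" } $0 ~ ("Location Tag: *" tag "\n")'

With RS set to the empty string, awk reads one whole blank-line-separated block per record, so the entire block is printed whenever its Location Tag line matches.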
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/495094", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237982/" ] }
495,147
I made some researches about the files with bash and sh extensions. Most of the people and resources say that if a file has bash extension, then it contains bash scripts. Likewise, the file with sh extensions contains sh scripts. However, I cannot find the differences between bash and sh scripting. There are some courses and articles which aim to teach the people to write scripts on shell, and all of them has the title shell scripting . In this point, which one does shell scripting correspond to ? Bash Scripting or Sh scripting. What I try to understand is what is the difference between bash and sh scripting.
File names in POSIXland don't have "extensions". A . in a filename is no different from any other character and has no specific meaning other than those that might be attributed to them by meatbags such as ourselves. One could hope that any file with a name ending in .bash would be a script meant to be executed via the bash shell, but there is no guarantee of this. Indeed, it's quite common to give all shell scripts a suffix of .sh no matter which interpreter is intended for their use, as the shebang line should properly specify which shell should be used to execute such a file. sh and bash are two different, but related, shells; two amongst many others such as ksh , csh , zsh , fish , ash , dash , and yet more others. Each shell has its own syntax, capabilities, mannerisms, and foibles; some shells are largely compatible with each other (generally any script written for sh can also be run in bash or many other shells), but some are not.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/495147", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/278582/" ] }
495,161
I set some environment variables in a terminal, and then run my script. How can I pull in the variables in the script? I need to know their values. Simply referring to them as $MY_VAR1 doesn't work; it is empty.
If the variables are truly environment variables (i.e., they've been exported with export ) in the environment that invokes your script, then they would be available in your script. That they aren't suggests that you haven't exported them, or that you run the script from an environment where they simply don't exist even as shell variables. Example:

$ cat script.sh
#!/bin/sh
echo "$hello"

$ sh script.sh

(one empty line of output since hello doesn't exist anywhere)

$ hello="hi there"
$ sh script.sh

(still only an empty line as output as hello is only a shell variable, not an environment variable)

$ export hello
$ sh script.sh
hi there

Alternatively, to set the environment variable just for this script and not in the calling environment:

$ hello="sorry, I'm busy" sh script.sh
sorry, I'm busy

$ env hello="this works too" sh script.sh
this works too
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/495161", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86215/" ] }
495,169
My understanding is that the kernel understands how to communicate with the different hardware in a system via specific device trees. How is it that I can download one version of Ubuntu and I am able to install this on any system where the hardware may vary? The same goes for the BeagleBone embedded boards. There is a default Debian image which can flash to any of the different type of BeagleBone boards which have different peripherals. How does it know which device tree / device tree overlay to use when the same image works for all?
The kernel image itself is generic; what varies is how the hardware gets described to it. On a typical x86 PC there is no device tree at all: the firmware publishes ACPI tables describing the platform, and everything else sits on self-describing, enumerable buses such as PCI and USB, for which the kernel simply loads the matching driver modules at boot. That is why a single Ubuntu image installs on machines with very different hardware. On ARM boards like the BeagleBones, device trees are used, but they are not baked into the kernel: the image ships a set of compiled .dtb files, one per supported board, and the bootloader (U-Boot) decides which one to hand to the kernel. BeagleBone boards carry an ID EEPROM that U-Boot reads to identify the exact model. Device tree overlays are then applied on top of the base tree selected for that board, which is how one image can serve the whole family of boards with differing peripherals.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/495169", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/323971/" ] }
495,258
Have confirmed correct server is configured in /etc/ntp.conf It can ping that server. It definitely has the ntp package /home/admin# dpkg -s ntpPackage: ntpStatus: install ok installed But the daemon is not running /home/admin# ps wax | grep ntp21959 pts/0    S+     0:00 grep ntp Status check /home/admin# ntpstatUnable to talk to NTP daemon. Is it running? I get this when I try to restart it /home/admin# systemctl start ntpdFailed to start ntpd.service: Unit ntpd.service failed to load: No such file or directory. What should try next?
To check the status of ntp you should use: systemctl status ntp After modifying /etc/ntp.conf you should restart the service through: systemctl restart ntp ntpstat reports Unable to talk to NTP daemon. Is it running? , so the daemon is simply not running; you can start the ntp service through: systemctl start ntp To start the service at boot time: systemctl enable ntp Note that on Debian-based systems the unit installed by the ntp package is named ntp , not ntpd , which is why systemctl start ntpd failed with "No such file or directory".
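With reasonably recent systemd versions (220 and later, if memory serves), starting and enabling can be combined into one command:

systemctl enable --now ntp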
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/495258", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/281726/" ] }
495,421
I am trying to mount /boot/config-4.14.90-v8 on to /usr/src/linux/.config . I've tried: rpi-4.14.y:linux Necktwi$ sudo mount -o loop,ro -t vfat /boot/config-4.14.90-v8-g6d68e517b3ec /usr/src/linux/.configmount: /usr/src/linux/.config: cannot mount /dev/loop0 read-only. notice the error cannot mount /dev/loop0 read-only . rootfs is btrfs /boot is vfat /usr/src is nfs (I mounted remote server's /usr/src ) I tried mount --bind but it failed. rpi-4.14.y:linux Necktwi$ sudo mount --bind /boot/config-4.14.90-v8-g6d68e517b3ec /usr/src/linux/.configmount: /usr/src/linux/.config: bind /boot/config-4.14.90-v8-g6d68e517b3ec failed.
If you want to mount a single file, so that the contents of that file are seen on the mount point, then what you want is a bind mount . You can accomplish that with the following command: # mount --bind /boot/config-4.14.90-v8 /usr/src/linux/.config You can use -o ro to make it read-only on the /usr/src/linux/.config path. For more details, look for bind mounts in the man page for mount(8) . Loop devices do something similar, yet different. They mount a filesystem stored into a regular file onto another directory. So if you had a vfat or ext4 etc. filesystem stored into a file, say /vol/myfs.img , you could then mount it into a directory , say /mnt/myfs , using the following command: # mount -o loop /vol/myfs.img /mnt/myfs You can pass it -t vfat etc. to force the filesystem type. Note that the -o loop is usually not needed, since mount will figure that out by you trying to mount a file and will do that for you automatically. Also, mounting a file with -o loop (or automatically detected) is a shortcut to mapping that file to a /dev/loopX device, which you can also do using losetup , and then running the mount command, such as mount /dev/loop0 /mnt/myfs . See the man page for losetup(8) for details on loop devices.
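One detail worth checking when a bind mount fails as in the question: the mount point must already exist, and for a single file it must itself be a file, not a directory. A hedged sketch:

sudo touch /usr/src/linux/.config
sudo mount --bind /boot/config-4.14.90-v8 /usr/src/linux/.config

(On older util-linux versions a read-only bind takes a second step: mount -o remount,ro,bind /usr/src/linux/.config .)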
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/495421", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27711/" ] }
495,422
I encountered a Linux command, builtin cd . What is the difference between the commands builtin cd and cd ? In fact, I made some researches about the difference, but I could not find a remarkable and significant explanation about this.
The cd command is a built-in, so normally builtin cd will do the same thing as cd . But there is a difference if cd is redefined as a function or alias, in which case cd will call the function/alias but builtin cd will still change the directory (in other words, will keep the built-in accessible even if clobbered by a function.) For example:

user:~$ cd () { echo "I won't let you change directories"; }
user:~$ cd mysubdir
I won't let you change directories
user:~$ builtin cd mysubdir
user:~/mysubdir$ unset -f cd # undefine function

Or with an alias:

user:~$ alias cd='echo Trying to cd to'
user:~$ cd mysubdir
Trying to cd to mysubdir
user:~$ builtin cd mysubdir
user:~/mysubdir$ unalias cd # undefine alias

Using builtin is also a good way to define a cd function that does something and changes directory (since calling cd from it would just keep calling the function again in an endless recursion.) For example:

user:~ $ cd () { echo "Changing directory to ${1-home}"; builtin cd "$@"; }
user:~ $ cd mysubdir
Changing directory to mysubdir
user:~/mysubdir $ cd
Changing directory to home
user:~ $ unset -f cd # undefine function
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/495422", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/278582/" ] }
495,449
given a podcast rss feed , how can I download the complete podcast from the commandline ? I am not looking for a full-blown, command line, podcast client. I just need a one time command to download a complete history (all episodes) of a given podcast in mp3. As an example, here is a rss feed containing close to 200 episodes: http://www.internethistorypodcast.com/feed/ How can I download all of them as mp3 files ?
youtube-dl can do it. Just pass the URL of the feed as the only argument.
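Assuming youtube-dl does recognise the feed, as this answer states, the invocation for the feed from the question would simply be:

youtube-dl 'http://www.internethistorypodcast.com/feed/'

Adding -x --audio-format mp3 asks it to extract/keep the audio as mp3 (this relies on ffmpeg being installed).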
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/495449", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }
495,461
Hi my crypttab looks as follows: crypt_device /dev/sda luks,header=/boot/header.img update-initramfs -u -k all works with success, but for some reason cryptsetup will not find the header.img which resides on the usb stick (that also contains the boot partition) during boot. It is stored on /boot/header.img (using luks encryption with detached header, and seperate boot partition on usb, os: lubuntu 18)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/495461", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/332334/" ] }
495,477
I used md5sum with pv to check 4 GiB of files that are in the same directory: md5sum dir/* | pv -s 4g | sort The command completes successfully in about 28 seconds, but pv 's output is all wrong. This is the sort of output that is displayed throughout: 219 B 0:00:07 [ 125 B/s ] [> ] 0% ETA 1668:01:09:02 It's like this without the -s 4g and | sort aswell. I've also tried it with different files. I've tried using pv with cat and the output was fine, so the problem seems to be caused by md5sum .
The pv utility is a "fancy cat ", which means that you may use pv in most situations where you would use cat . Using cat with md5sum , you can compute the MD5 checksum of a single file with cat file | md5sum or, with pv , pv file | md5sum Unfortunately though, this does not allow md5sum to insert the filename into its output properly. Now, fortunately, pv is a really fancy cat , and on some systems (Linux), it's able to watch the data being passed through another process. This is done by using its -d option with the process ID of that other process. This means that you can do things like

md5sum dir/* | sort >sums &
sleep 1
pv -d "$(pgrep -n md5sum)"

This would allow pv to watch the md5sum process. The sleep is there to allow md5sum , which is running in the background, to properly start. pgrep -n md5sum would return the PID of the most recently started md5sum process that you own. pv will exit as soon as the process that it is watching terminates. I've tested this particular way of running pv a few times and it seems to generally work well, but sometimes it seems to stop outputting anything as md5sum switches to the next file. Sometimes, it seems to spawn spurious background tasks in the shell. It would probably be safest to run it as

md5sum dir/* >sums &
sleep 1
pv -W -d "$!"
sort -o sums sums

The -W option will cause pv to wait until there's actual data being transferred, although this does also not always seem to work reliably.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/495477", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85900/" ] }
495,506
I copied thousands of files into an exFAT MicroSD card. The number of files and bytes is identical, but how do I know whether the data is corrupt or not? It would be good if the JackPal Android Terminal also supports the command.
Unmount, eject, and remount the device. Then use diff -r source destination In case you used rsync to do the copy, rsync -n -c might be very convenient, and it is nearly as good as diff . It doesn't do a bit-for-bit comparison though; it uses an MD5 checksum. There are some similar answers with other details at: Verifying a large directory after copy from one hard drive to another
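For example (the paths here are hypothetical), a dry-run checksum comparison with rsync after the copy:

rsync -r -n -c -i /source/photos/ /media/sdcard/photos/

-n changes nothing, -c forces checksum comparison instead of the usual size/mtime check, and -i itemizes every file whose content differs, so empty output means the two trees match.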
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/495506", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270469/" ] }
495,509
My script (should) acts differently, depending on the presence of the data in the input stream. So I can invoke it like this: $ my-script.sh or: $ my-script.sh <<-MARK Data comes... ...data goes.MARK or: $ some-command | my-script.sh where two last cases should read the data, while first case should notice the data is missing and act accordingly. The crucial part (excerpt) of the script is: #!/bin/bashlocal myData;read -d '' -t 0 myData;[ -z "${myData}" ] && { # Notice the lack of the data.} || { # Process the data.} I use read to read input data, then option -d '' to read multiple lines, as this is expected, and the -t 0 to set timeout to zero. Why the timeout? According to help read (typing left unchanged; bold is mine): -t timeout time out and return failure if a complete line of input is not read withint TIMEOUT seconds. The value of the TMOUT variable is the default timeout. TIMEOUT may be a fractional number. If TIMEOUT is 0, read returns success only if input is available on the specified file descriptor . The exit status is greater than 128 if the timeout is exceeded So I in case 2 and 3 it should read the data immediately, as I understand it. Unfortunately it doesn't. As -t can take fractional values (according to above man page), changing read line to: read -d '' -t 0.01 myData; actually reads the data when data is present and skips it (after 10ms timeout) if it is not. But it should also work when TIMEOUT is set to real 0 . Why it actually doesn't? How can this be fixed? And is there, perhaps, alternative solution to the problem of "act differently depending on the presence of the data"? UPDATE Thanks to @Isaac I found a misleading discrepancy between quoted on-line version and my local one (normally I do not have locale set to en_US, so help read gave me translation which I couldn't paste here, and looking up for on-line translation was faster than setting new env---but that caused the whole problem). So for 4.4.12 version of Bash it says: If TIMEOUT is 0, read returns immediately, without trying to read any data , returning success only if input is available on the specified file descriptor. This gives a little bit different impression than "If TIMEOUT is 0, read returns success only if input is available on the specified file descriptor"---for me it implied actually an attempt to read the data. So finally I tested this and it worked perfectly: read -t 0 && read -d '' myData; The meaning: see if there's anything to read and if it succeed, just read it. So as to base question, the correct answer was provided by Isaac. And as to alternative solution I prefer the above "read && read" method.
The two versions of the documentation describe the same behaviour, but the wording you found in bash 4.4 makes it explicit: with -t 0 , read returns immediately, without trying to read any data , succeeding only if input is available on the file descriptor. In other words, read -t 0 is purely an availability test; it never stores anything in myData , which is also why a small non-zero timeout such as -t 0.01 appeared to fix things. The idiomatic pattern is therefore the one from your update: test first, then do the real read: read -t 0 && read -d '' myData
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/495509", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216764/" ] }
495,519
I got this in a file: \033[31mException log And when I do: less -R demo I get no colors: What am I doing wrong?
You need to put the actual escape code in the file. One way to do this would be: echo -e "\033[31mException log\033[0m" > file.txt Then less -R file.txt should be able to interpret the color code.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/495519", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3853/" ] }
495,638
My netbook runs on Debian Linux without X.org. I sometimes need to make a screenshot of the output of scripts. I tried to use a framebuffer device for this purpose: # cat /dev/fb0 > screenshot.raw But the problem is that this .raw file is not a graphic format since it cannot even be opened with GIMP. How it's possible to convert it to .png file, for example?
The format of the raw file you capture is going to depend on the bit depth and resolution. There are a number of tools out there to do this. Debian has the fbcat package. You may need to sudo apt-get install fbcat to install it. fbcat will grab the frame buffer in ppm format, so you can then use ppmtojpeg or similar to convert it to the format you want. There's also a fbgrab wrapper which will save in PNG format.
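A minimal session might look like this (package names as given above for Debian; output file names are arbitrary):

sudo apt-get install fbcat
sudo fbcat > screenshot.ppm    # dump the framebuffer as PPM on stdout
sudo fbgrab screenshot.png     # or grab straight to PNG

If you only have the PPM, tools such as pnmtopng or ppmtojpeg from the netpbm suite can convert it afterwards.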
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/495638", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/267463/" ] }
495,643
I have an UTF-8 file that contains a strange character -- visible to me just as <96> This is how it appears on vi and how it appears on gedit and how it appears under LibreOffice and that makes a series of basic Unix tools misbehave, including: cat file make the character dissapear, and more as well I cannot copy and paste inside vi/vim -- it will not even find itself grep fails to display anything as well, as if the character did not exists. The program file works fine and recognizes it an UTF-8 file. I also know that, because of the nature of the file, it most likely came from a Copy & Paste from the web and the character initially represented an EMDASH. My basic questions are: Is there anything wrong with this file? How can I search for other occurrences of it inside the same file? How can I grep for other files that may contain the same problem/character? The file can be found here: file.txt
This file contains bytes C2 96 , which are the UTF-8 encoding of codepoint U+0096. That codepoint is one of the C1 control characters commonly called SPA "Start of Guarded Area" (or "Protected Area"). That isn't a useful character for any modern system, but it's unlikely to be harmful that it's there. The original source for this was likely a byte 0x96 in some single-byte 8-bit encoding that has been transcoded incorrectly somewhere along the way. Probably this was originally a Windows CP1252 en dash "–", which has byte value 96 in that encoding - most other plausible candidates have the control set at positions 80-9F - which has been translated to UTF-8 as though it was latin-1 ( ISO/IEC 8859-1 ), which is not uncommon. That would lead to the byte being interpreted as the control character and translated accordingly as you've seen. You can fix this file with the iconv tool, which is part of glibc. iconv -f utf-8 -t iso-8859-1 < mwe.txt | iconv -f cp1252 -t utf-8 produces a correct version of your minimal example for me. That works by first converting the UTF-8 to latin-1 (inverting the earlier mistranslation), and then reinterpreting that as cp1252 to convert it back to UTF-8 correctly. It does depend on what else is in the real file, however. If you have characters outside Latin-1 elsewhere it will fail because it can't encode those correctly at the first step. If you don't have iconv, or it doesn't work for the real file, you can replace the bytes directly using sed: LC_ALL=C sed -e $'s/\xc2\x96/\xe2\x80\x93/g' < mwe.txt This replaces C2 96 with the UTF-8 en dash encoding E2 80 93 . You could also replace it with e.g. a hyphen or two by changing \xe2\x80\x93 into -- . You can grep in a similar fashion. We're using LC_ALL=C to make sure we're reading the actual bytes, and not having grep interpret things: LC_ALL=C grep -R $'\xc2\x96' . will list out everywhere under this directory those bytes appear. You may want to limit it to just text files if you have mixed content around, since binary files will include any pair of bytes fairly often.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/495643", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/332501/" ] }
495,657
I have followed instructions in this post How to insert text after a certain string in a file? but I suspect the instructions are not valid for OSX. I want to add some text into a source file using Bash sed '/pattern/a some text here' ${sourceFile} but when I run the command I get "/pattern/a some text here": command a expects \ followed by text edit I have created a new file called infile with a single line pattern and a bash script #!/bin/bashsed '/pattern/a\text to insert' infile running the script echos "pattern" to the console but doesn't insert the text edit I have also tried for the bash script #!/bin/bashsed '/pattern/a\add one line\\\and one more' infile and the terminal echos patternadd one line\and one more but infile still has single line pattern
macOS ships BSD sed , whose a (append) command insists on the classic syntax: a backslash, then a literal newline, then the text to insert. Your scripts already use that form, and they do work: the result, including the inserted line, is what gets echoed to the terminal. The reason infile still contains a single line is that sed writes to standard output and never touches its input file unless asked to edit in place. With BSD sed that is -i followed by an explicit (here empty) backup suffix:

#!/bin/bash
sed -i '' '/pattern/a\
text to insert' infile

The one-line form '/pattern/a some text here' from the post you followed is a GNU sed extension, which is why it produced command a expects \ followed by text on macOS.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/495657", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269365/" ] }
495,740
Suppose I don't know where my Django source is stored, but I know that it contains these directories in this way: django/contrib/admin . How can I use the find command or any more appropriate coreutils command to find where this partial directory path (structure) is available? Example output: /home/me/python/extracted/django/contrib/admin//home/me/env/django/contrib/admin/
You should use -path flag for such purpose find /home/me -path "*django/contrib/admin*"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/495740", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206960/" ] }
495,755
Given the shell script test.sh for var in "$@"do echo "$var"done and the invocation sh test.sh $(echo '"hello " "hi and bye"') I would expect the output hellohi and bye but instead get: "hello""hiandbye" If I change to sh test.sh $(echo "hello " "hi and bye") then I get hellohiandbye Neither of these behaviours are desired, how can I get the desired behaviour? What I want to understand is why sh test.sh $(echo "hello " "hi and bye") and sh test.sh "hello " "hi and bye" are not equivalent?
Your command substitution will generate a string. In the case of $(echo '"hello " "hi and bye"') this string will be "hello " "hi and bye" . The string then undergoes word splitting (and filename globbing, but it doesn't affect this example). The word splitting happens on every character that is the same as one of the characters in $IFS (by default, spaces, tabs and newlines). The words generated by the default value of IFS would be "hello , " , "hi , and , and bye" . These are then given as separate arguments to your script. In your second command, the command substitution is $(echo "hello " "hi and bye") This generates the string hello hi and bye and the word splitting would result in the four words hello , hi , and , and bye . In your last example, you use the two arguments hello and hi and bye directly with your script. These won't undergo word splitting because they are quoted.
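The splitting is easy to see directly with a quick illustration:

for var in $(echo '"hello " "hi and bye"'); do printf '[%s]\n' "$var"; done

prints ["hello] , ["] , ["hi] , [and] and [bye"] : the five words listed above, double quotes included, since quote characters produced by an expansion are just ordinary characters.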
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/495755", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37566/" ] }
495,769
Following on from: unexpected behaviour in shell command substitution I have a command which can take a huge list of arguments, some of which can legitimately contain spaces (and probably other things) I wrote a script which can generate those arguments for me, with quotes, but I must copy and paste the output e.g. ./somecommand<output on stdout with quoting>./othercommand some_args <output from above> I tried to streamline this by simply doing ./othercommand $(./somecommand) and ran into the unexpected behaviour mentioned in question above. The question is -- can command substitution be reliably used to generate the arguments to othercommand given that some arguments require quoting and this cannot be changed?
I wrote a script which can generate those arguments for me, with quotes If the output is properly quoted for the shell, and you trust the output , then you could run eval on it. Assuming you have a shell that supports arrays, it would be best to use one to store the arguments you get. If ./gen_args.sh produces output like 'foo bar' '*' asdf , then we could run eval "args=( $(./gen_args.sh) )" to populate an array called args with the results. That would be the three elements foo bar , * , asdf . We can use "${args[@]}" as usual to expand the array elements individually: $ eval "args=( $(./gen_args.sh) )"$ for var in "${args[@]}"; do printf ":%s:\n" "$var"; done:foo bar::*::asdf: (Note the quotes. "${array[@]}" expands to all elements as distinct arguments unmodified. Without quotes the array elements are subject to word splitting. See e.g. the Arrays page on BashGuide .) However , eval will happily run any shell substitutions, so $HOME in the output would expand to your home directory, and a command substitution would actually run a command in the shell running eval . An output of "$(date >&2)" would create a single empty array element and print the current date on stdout. This is a concern if gen_args.sh gets the data from some untrusted source, like another host over the network, file names created by other users. The output could include arbitrary commands. (If get_args.sh itself was malicious, it wouldn't need to output anything, it could just run the malicious commands directly.) An alternative to shell quoting, which is hard to parse without eval, would be to use some other character as separator in the output of your script. You'd need to pick one that is not needed in the actual arguments. Let's choose # , and have the script output foo bar#*#asdf . Now we can use unquoted command expansion to split the output of the command to the arguments. $ IFS='#' # split on '#' signs$ set -f # disable globbing$ args=( $( ./gen_args3.sh ) ) # assign the values to the array$ for var in "${args[@]}"; do printf ":%s:\n" "$var"; done:foo bar::*::asdf: You'll need to set IFS back later if you depend on word splitting elsewhere in the script ( unset IFS should work to make it the default), and also use set +f if you want to use globbing later. If you're not using Bash or some other shell that has arrays, you could use the positional parameters for that. Replace args=( $(...) ) with set -- $(./gen_args.sh) and use "$@" instead of "${args[@]}" then. (Here, too, you need quotes around "$@" , otherwise the positional parameters are subject to word splitting.)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/495769", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37566/" ] }
495,790
When I display the manual for pwd command, it says that long options like --physical are supported $ man pwdPWD(1) User Commands PWD(1)NAME pwd - print name of current/working directorySYNOPSIS pwd [OPTION]...DESCRIPTION Print the full filename of the current working directory. -L, --logical use PWD from environment, even if it contains symlinks -P, --physical avoid all symlinks However, it fails when I type the following $ pwd --physical-bash: pwd: --: invalid optionpwd: usage: pwd [-LP] Why are long options not working for me? I'm using RHEL 6.4. No alias for pwd is configured. Looks like it's standard pwd: $ which pwd/bin/pwd
bash has a built-in command pwd which is what you are using when you simply type pwd into your shell. To get the pwd described by the manpage, you need to force use of the external command. You can do this by specifying the full path to the executable ( /bin/pwd in your case) or by prefixing the command with env , as in env pwd : env is an external program with no built-ins of its own, so it ends up executing the "real" /bin/pwd . The advantage of the builtin pwd in bash is that bash keeps track of the current directory, so getting the value is at zero cost, whereas the external command needs to search up through the filesystem to determine the path, which is much more IO intensive.
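To see both variants, and that the long options then work, something like this should do:

type -a pwd        # pwd is a shell builtin; pwd is /bin/pwd
/bin/pwd --physical

The builtin understands only the short [-LP] options, which is exactly what the usage line in your error message shows.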
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/495790", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109300/" ] }
495,797
I have a csv file having many rows and its header is: DateTime,CallEndTime,KeywordTagTexts,TotalDuration Position of the the header keys changes, therefore I have calculated positions using sed duration=$(sed -n $'1s/,/\\\n/gp' rawfile.csv | grep -nx 'TotalDuration' | cut -d: -f1);callend=$(sed -n $'1s/,/\\\n/gp' rawfile.csv | grep -nx 'CallEndTime' | cut -d: -f1);callstart=$(sed -n $'1s/,/\\\n/gp' rawfile.csv | grep -nx 'DateTime' | cut -d: -f1); The value in DateTime is "2018-12-18 18:36:55" in date time format and TotalDuration is in seconds. I want to add the value DateTime + TotalDuration to CallEndTime,
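A sketch of one possible approach with GNU awk, whose mktime() and strftime() time functions fit this task; the header-to-field mapping assumes the layout described above, and the whole thing should be treated as untested:

    awk 'BEGIN { FS = OFS = "," }
    NR == 1 { for (i = 1; i <= NF; i++) f[$i] = i }
    NR > 1 {
        # convert "2018-12-18 18:36:55" to "2018 12 18 18 36 55" for mktime()
        begSecs = mktime(gensub(/[":-]/, " ", "g", $(f["DateTime"])))
        endSecs = begSecs + $(f["TotalDuration"])
        $(f["CallEndTime"]) = strftime("%Y-%m-%d %H:%M:%S", endSecs)
    }
    { print }' rawfile.csv

This avoids computing the column positions with sed first: the first record maps each header name to its field number, and every later record converts DateTime to epoch seconds, adds TotalDuration, and writes the result into CallEndTime.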
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/495797", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/332616/" ] }
495,899
One of the tutorials I've been following briefly stated that cd . has no use. When trying to replicate the issue shown by the OP in "Symbolic link recursion - what makes it 'reset'?", I also tried cd ., which showed the same effect the OP described (a growing $PWD variable), which can be countered with cd -P. This makes me wonder: is there any case where one would in fact want to use cd .?
The path of the directory could have changed since the last command was executed, and without cd . the bash and ksh93 shells will rely on the logical working directory described in the post linked in the question, so calling cd . (which makes the shell issue the getcwd() syscall) will ensure your current path is still valid.

Steps to reproduce in bash:

1. In a terminal tab issue mkdir ./dir_no_1; cd ./dir_no_1
2. In a different terminal tab issue mv dir_no_1 dir_no_2
3. In the first terminal tab issue echo $PWD and pwd. Notice that the directory has been externally renamed; the shell's environment has not been updated.
4. Issue cd .; pwd; echo $PWD. Notice the value has been updated.

ksh93, however, does not update the environment information, so cd . in ksh93 may in fact be useless. In /bin/dash on Ubuntu and other Debian-based systems, cd . returns a "dash: 3: cd: can't cd to ." error, however cd -P . works (unlike in ksh93).
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/495899", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85039/" ] }
495,958
I saw sed 's=.*/==' in the context of a sh script and I'm puzzled. I could not find in the sed manual or in a web search (for "sed s=") how s is used other than as s///. Apart from s, I see only one potential command here, = (print the current input line number), but in that case what is the rest doing? Running the command in a shell produces the same output as the input, e.g. for echo 'jkfdsa=335r34', whereas echo 'jkfdsa=335r34' | sed 's/=.*/==/' does a replacement as per the manual. Also, slightly modifying the command to e.g. echo 'jkfdsa=3' | sed 's798=.*/==/' gives "sed: -e expression #1, char 11: unterminated 's' command", so the original should have some correct meaning. What is it?
The = are alternative delimiters. These are used since the pattern contains a / (which is the more commonly used delimiter). Almost any character can be used as an alternative delimiter, so s@.*/@@ or s_.*/__ would have meant the same thing.

With the ordinary delimiter, the sed expression could have been written as s/.*\/// (the literal / that the expression wants to match needs to be escaped here) or, possibly more readable, s/.*[/]// (most characters within a [...] character class are literal 1).

What the sed expression does is to substitute anything that matches .*/ with nothing. This will have the effect of removing everything up to and including the last / character on the line. It will remove up to the last / (not the first) since .* does a greedy match of any sequence of any characters.

Example:

    $ echo 'a/b/c' | sed 's/.*[/]//'
    c

The unterminated 's' command error that you get when testing s798=.*/==/ is due to 7 being used as the delimiter for the s command. The expression s7.*/77 would have worked though.

1 ... apart from the characters that have special meaning within [...] such as ^ (at the start) and - (when not first, second after ^, or last). The characters [ and ] also need special treatment within [...], but that goes outside the scope of this question.

If this is used to get the filename at the end of a path in some string or shell variable, then the basename utility may do a better job of it (and also does the right thing if the path ends with a slash):

    $ basename a/b/c
    c
    $ basename a/b/c/
    c

Likewise, the standard shell parameter substitution ${variable##*/} would, assuming the variable contains no newlines, be equivalent in its effect to passing the value of $variable through the above sed expression in a command substitution, but many times faster. The variable substitution and the basename utility also cope correctly with pathnames containing newlines, which sed would not do (since it processes its input line by line).
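To illustrate the parameter-expansion variant mentioned above (the variable value is just an example):

    $ pathname='a/b/c'
    $ printf '%s\n' "${pathname##*/}"
    c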
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/495958", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/266260/" ] }
496,011
    for (( a=1;a<=$(wc -l sedtest1);a++ ));do echo $a; done

Gives me an error:

    -bash: ((: a<=21 sedtest1: syntax error in expression (error token is "sedtest1")
The output of wc -l sedtest1 will be something like:

    21 sedtest1

So the test will become something like a <= 21 sedtest1, which is invalid. Also, that does mean that the wc command will be run for each iteration of the loop. If the content of the sedtest1 file doesn't change between each iteration, it would be better to save that number of lines first in a variable outside the loop:

    n=$(wc -l < sedtest1)  # using redirection avoids the output containing the filename
    for ((a = 0; a < n; a++)); do echo "$a"; done

I also suspect that you're trying to use a loop to process text in an inefficient and non-shell way. You may want to read "Why is using a shell loop to process text considered bad practice?". Looping over each line of a file in that way is not the right way to go.
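If the intent really is to do something with each line of the file, a sketch of the usual idiom that avoids counting lines altogether:

    while IFS= read -r line; do
        printf '%s\n' "$line"   # replace with the real per-line work
    done < sedtest1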
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/496011", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/323587/" ] }
496,104
Is there a way to open() a file and cause it to shrink? One can, of course, open them in append-mode or seek to the end and write to cause them to grow. However, as far as I know, there is no method to shrink a file via typical unix-style system call interfaces. The only way to do so, as far as I know, is by faking it by creating a new shorter file and rename() it in place of the older one. I just wanted confirmation, because I saw an answer that implied that it was possible to make file editors that worked directly on a file instead of going through the process of making a new one and renaming it in place. I've always thought that the file api in libc and unix-style system call interfaces did not allow for the shrinking of files to ease implementation of filesystems and maybe avoid usage patterns that might contribute to fragmentation.
man -s 2 ftruncate says:

    DESCRIPTION
           The truncate() and ftruncate() functions cause the regular file
           named by path or referenced by fd to be truncated to a size of
           precisely length bytes.
    ...
    CONFORMING TO
           POSIX.1-2001, POSIX.1-2008, 4.4BSD, SVr4 (these calls first
           appeared in 4.2BSD).

It goes on to say that if you use ftruncate you must have opened the file for writing, and if you use truncate the file must be writable.
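For a quick demonstration from the shell, GNU coreutils provides a truncate utility built on the same system call (the file contents here are just an example):

    $ printf 'hello world\n' > file
    $ truncate -s 5 file    # shrink the file to exactly 5 bytes
    $ wc -c < file
    5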
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/496104", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/189744/" ] }
496,174
I have a machine which has a directory that seems corrupt. The output of ls -lah is something like:

    ??????????? ? ?    ?    ?            ? dir_name

This used to be a valid directory in a CentOS 7 VM on SSD. I don't know what happened but now I just want to delete it, but that does not seem possible:

    $ sudo rm -rf dir_name
    rm: cannot remove ‘dir_name’: Is a directory

And stat can't read it either:

    $ stat dir_name
    stat: cannot stat ‘dir_name’: No such device

What's the simplest way to have this directory safely deleted?
You cannot delete corrupted directories. You must unmount the filesystem and perform a fsck, as per man 8 fsck:

    fsck - check and repair a Linux filesystem
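A minimal sketch of that procedure, assuming the affected filesystem is /dev/sdb1 mounted at /mnt/data (both names are placeholders for your setup):

    sudo umount /mnt/data     # the filesystem must not be in use
    sudo fsck -y /dev/sdb1    # -y answers yes to repair prompts
    sudo mount /mnt/data      # remount once the check completes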
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/496174", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/212177/" ] }
496,179
I have an awk script, new.awk:

    BEGIN { FS = OFS = "," }
    NR == 1 {
        for (i = 1; i <= NF; i++)
            f[$i] = i
    }
    NR > 1 {
        begSecs = mktime(gensub(/[":-]/, " ", "g", $(f["DateTime"])))
        endSecs = begSecs + $(f["TotalDuration"])
        $(f["CallEndTime"]) = strftime("%Y-%m-%d %H:%M:%S", endSecs)
    }
    { print }

I am calling this in the shell:

    awk new.awk sample.csv

... but I can see the changes in the terminal. How to make the change in-place in the file, like when using sed -i?
GNU awk (commonly found on Linux systems), since version 4.1.0, can include an "awk source library" with -i or --include on the command line. One of the source libraries that is distributed with GNU awk is one called inplace:

    $ cat file
    hello
    there

    $ awk -i inplace '/hello/ { print "oh,", $0 }' file
    $ cat file
    oh, hello

As you can see, this makes the output of the awk code replace the input file. The line saying there is not kept as the program does not output it.

With an awk script in a file, you would use it like

    awk -i inplace -f script.awk datafile

If the awk variable INPLACE_SUFFIX is set to a string, then the library would make a backup of the original file with that as a filename suffix.

    awk -i inplace -v INPLACE_SUFFIX=.bak -f script.awk datafile

If you have several input files, each file will be individually in-place edited. But you can turn in-place editing off for a file (or a set of files) by using inplace=0 on the command line before that file:

    awk -i inplace -f script.awk file1 file2 inplace=0 file3 inplace=1 file4

In the above command, file3 would not be edited in place.

For a more portable "in-place edit" of a single file, use

    tmpfile=$(mktemp)
    cp file "$tmpfile" &&
    awk '...some program here...' "$tmpfile" >file
    rm "$tmpfile"

This would copy the input file to a temporary location, then apply the awk code on the temporary file while redirecting to the original filename. Doing the operations in this order (running awk on the temporary file, not on the original file) ensures that the file meta-data (permissions and ownership) of the original file is not modified.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/496179", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/332716/" ] }
496,193
I typically use column to convert input into a table, eg:

    $ echo 'a\tb\tc\nd\te\tf' | column -t -s $'\t'
    a  b  c
    d  e  f

However it collapses empty columns, eg:

    $ echo 'a\tb\tc\nd\t\tf' | column -t -s $'\t'
    a  b  c
    d  f

Rather than printing an empty column when there are consecutive delimiters. This is what I would like, using column or otherwise:

    a  b  c
    d     f
If you use GNU column:

    -n      By default, the column command will merge multiple adjacent
            delimiters into a single delimiter when using the -t option;
            this option disables that behavior. This option is a Debian
            GNU/Linux extension.

    printf 'a\tb\tc\nd\t\tf\n' | column -t -n -s $'\t'

Output:

    a  b  c
    d     f

If GNU column is not available, you can use sed to add a space (or something else, e.g. a -) between the tabs:

    printf 'a\tb\tc\nd\t\tf\n' | sed -e ':loop; s/\t\t/\t-\t/; t loop' | column -t -s $'\t'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/496193", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2680/" ] }
496,213
I have an array like this: array=(1 2 7 6) and would like to search for the second largest value, with the output being secondGreatest=6 Is there any way to do this in bash?
    printf '%s\n' "${array[@]}" | sort -n | tail -2 | head -1

Print each value of the array on its own line, sort it, get the last 2 values, remove the last value.

    secondGreatest=$(printf '%s\n' "${array[@]}" | sort -n | tail -2 | head -1)

Set that value to the secondGreatest variable. Glenn Jackman had an excellent point about duplicate numbers that I didn't consider. If you only care about unique values you can use the -u flag of sort:

    secondGreatest=$(printf '%s\n' "${array[@]}" | sort -nu | tail -2 | head -1)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/496213", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/332357/" ] }
496,259
The yash shell has a printf built-in, according to its manual. However, this is what I see in a yash shell with default configuration:

    $ command -v printf
    /usr/bin/printf
    $ type printf
    printf: a regular built-in at /usr/bin/printf

Is printf a built-in in this shell or not? The result is similar for a number of other supposedly built-in utilities that are also available as external commands.

As a comparison, in pdksh (ksh on OpenBSD, where printf is not a built-in):

    $ command -v printf
    /usr/bin/printf
    $ type printf
    printf is /usr/bin/printf

And in bash (where printf is a built-in):

    $ command -v printf
    printf
    $ type printf
    printf is a shell builtin
The yash shell does have, and does use, a built-in version of printf (and other utilities). It just happens to be very pedantically POSIX compliant in the way it formulates the result of the command -v and type commands.

As mosvy comments, the POSIX standard requires that a regular built-in command be available as an external command in $PATH for the built-in version of the command to be executed. This is the relevant text from the standard:

    Command Search and Execution

    If a simple command results in a command name and an optional list of arguments, the following actions shall be performed:

    1. If the command name does not contain any <slash> characters, the first successful step in the following sequence shall occur:

       a. If the command name matches the name of a special built-in utility, that special built-in utility shall be invoked.

       [...]

       e. Otherwise, the command shall be searched for using the PATH environment variable as described in XBD Environment Variables:

          i. If the search is successful:

             a. If the system has implemented the utility as a regular built-in or as a shell function, it shall be invoked at this point in the path search.

             b. Otherwise, the shell executes the utility in a separate utility environment [...]

          [...]

          ii. If the search is unsuccessful, the command shall fail with an exit status of 127 and the shell shall write an error message.

    2. If the command name contains at least one <slash>, [...]

This means that the output of command -v printf signifies that the printf command was found in the search path, while the output of type printf adds to this that the command is a regular built-in. Since the printf command was found in the search path, and since it's a regular built-in in the shell, yash will call its built-in version of the command. If printf was not found in the path, and if the yash shell was running in POSIX-ly correct mode, an error would have been generated instead.

yash prides itself on being a very POSIX compliant shell, and this is also true if we look at what POSIX says about command -v:

    -v  Write a string to standard output that indicates the pathname or
        command that will be used by the shell, in the current shell
        execution environment (see Shell Execution Environment), to invoke
        command_name, but do not invoke command_name.

        Utilities, regular built-in utilities, command_names including a
        <slash> character, and any implementation-defined functions that
        are found using the PATH variable (as described in Command Search
        and Execution), shall be written as absolute pathnames.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/496259", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/116858/" ] }
496,299
I like using the following command to print file names, but it prints them with extensions:

    find . -type f -printf '%f\n'

In my directory there are many files with different extensions. I tried adding --ignore='*.*' both before and after -printf, but it didn't work. For example, I have files myfile1.txt, myfile2.mp3, etc. and I need it to print myfile1, myfile2, etc. How would I do this?
If I understand you correctly (I didn't, see the second part of the answer), you want to avoid listing filenames that contain a dot character. This will do that:

    find . -type f ! -name '*.*'

The filename globbing pattern *.* would match any filename containing at least one dot. The preceding ! negates the sense of the match, which means that the pathnames that get through to the end will be those of files that have no dots in their names.

In really ancient shells, you may want to escape the ! as \! (or update your Unix installation). The lone ! won't invoke bash's history expansion facility.

To print only the filename component of the found pathname, with GNU find:

    find . -type f ! -name '*.*' -printf '%f\n'

With standard find (or GNU find for that matter):

    find . -type f ! -name '*.*' -exec basename {} \;

Before using this in a command substitution in a loop, see "Why is looping over find's output bad practice?".

To list all filenames, and at the same time remove everything after the last dot in the name ("remove the extension"), you may use

    find . -type f -exec sh -c '
        for pathname do
            pathname=$( basename "$pathname" )
            printf "%s\n" "${pathname%.*}"
        done' sh {} +

This would send all found pathnames of all files to a short shell loop. The loop would take each pathname and call basename on it to extract the filename component of the pathname, and then print the resulting string with everything after the last dot removed.

The parameter expansion ${pathname%.*} means "remove the shortest string matching .* (a literal dot followed by arbitrary text) from the end of the value of $pathname". It would have the effect of removing a filename suffix after the last dot in the filename.

For more info about find ... -exec ... {} +, see e.g. "Understanding the -exec option of `find`".
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/496299", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/331503/" ] }
496,341
From this question about whether printf is a built-in for yash , comes this answer that quotes the POSIX standard . The answer points out that the POSIX search sequence is to find an external implementation of the desired command, and then, if the shell has implemented it as a built-in, run the built-in. (For built-ins that aren't special built-ins .) Why does POSIX have this requirement for an external implementation to exist before allowing an internal implementation to be run? It seems... arbitrary, so I am curious.
This is an "as if" rule. Simply put: The behaviour of the shell as users see it should not change if an implementation decides to make a standard external command also available as shell built-in. The contrast that I showed at https://unix.stackexchange.com/a/496291/5132 between the behaviours of (on the one hand) the PD Korn, MirBSD Korn, and Heirloom Bourne shells; (on the other hand) the Z, 93 Korn, Bourne Again, and Debian Almquist shells; and (on the gripping hand) the Watanabe shell highlights this. For the shells that do not have printf as a built-in, removing /usr/bin from PATH makes an invocation of printf stop working. The POSIX conformant behaviour, exhibited by the Watanabe shell in its conformant mode, causes the same result. The behaviour of the shell that has a printf built-in is as if it were invoking an external command. Whereas the behaviour of all of the non-conformant shells does not alter if /usr/bin is removed from PATH , and they do not behave as if they were invoking an external command. What the standard is trying to guarantee to you is that shells can build-in all sorts of normally external commands (or implement them as its own shell functions), and you'll still get the same behaviour from the built-ins as you did with the external commands if you adjust PATH to stop the commands from being found. PATH remains your tool for selecting and controlling what commands you can invoke. (As explained at https://unix.stackexchange.com/a/448799/5132 , years ago people chose the personality of their Unix by changing what was on PATH .) One might opine that making the command always work irrespective of whether it can be found on PATH is in fact the point of making normally external commands built-in. (It's why my nosh toolset just gained a built-in printenv command in version 1.38, in fact. Although this is not a shell.) But the standard is giving you the guarantee that you'll see the same behaviour for regular external commands that are not on PATH from the shell as you will see from other non-shell programs invoking the execvpe() function, and the shell will not magically be able to run (apparently) ordinary external commands that other programs cannot find with the same PATH . Everything works self-consistently from the user's perspective, and PATH is the tool for controlling how it works. Further reading Why are POSIX mandatory utilities not built into the shell? Why is echo a shell built in command? How does bash execute commands
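A rough way to observe the difference yourself; exact messages vary by system, and this assumes /bin/sh on your machine is a shell without a printf built-in (dash and some others do build it in, so treat this as illustrative only):

    $ sh -c 'PATH=; printf "hi\n"'    # no built-in: fails once PATH is emptied
    sh: printf: not found
    $ bash -c 'PATH=; printf "hi\n"'  # bash's built-in still runs regardless
    hi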
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/496341", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173210/" ] }
496,360
When I use i3 or Gnome, each monitor gets a workspace/desktop, but under XMonad my laptop screen and the external monitor get joined as one big screen (as shown by xdpyinfo). How can I configure X to use two screens and not one?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/496360", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/133484/" ] }
496,368
I'm trying to set up a systemd service in order to launch a set of files on a daily basis (different types of journals) from directories based off the date. For example, a todo list for today would be located in: ~/Documents/Journals/2019/1/23/ToDo.md Now the easiest thing to do is to put it in a separate directory, say today, and then have a bash script move it to the appropriate spot after the last modified time is no longer when it was created, or when the size of the file is larger than the template file. But while that would be easier, I was wondering if it would be possible to write a script to return the directory of the file to be piped through the executed command in the service. Something along the lines of: ExecStart=/usr/bin/atom | /Path/To/Script/Todays_Dir Todo.md which would take the file as an argument and return the directory/file path based off the date (the same way the directory and files are being created). Is this possible, or should I just stick to the already proposed solution?
Yes, you can write directly to the target directory. You can't really use a pipe directly as part of an ExecStart= command, since systemd doesn't really implement a full shell. But you can invoke a shell explicitly, which would make it work. For example:

    ExecStart=/bin/sh -c '/usr/bin/atom | /Path/To/Script/Todays_Dir Todo.md'

But it turns out this is a bit awkward, since Todays_Dir would end up having to run cat to write to the full path of its argument. In effect, you don't really need a pipe here, you just need to determine the name of a directory and run atom with the proper redirect. Consider instead just implementing everything in a script and running it directly from the systemd unit. Something like:

    #!/bin/bash
    set -e  # exit on error
    dated_dir=$HOME/Documents/Journals/$(date +%Y/%-m/%-d)
    mkdir -p "${dated_dir}"
    exec atom >"${dated_dir}/ToDo.md"

And then in the systemd unit:

    ExecStart=/Path/To/Script/GenerateMarkdownToTodaysDir.sh

The exec at the end of the shell script makes the shell replace itself with the atom program, so that systemd will end up running atom directly after setup. For this case it's not that important, so it could be omitted (especially if you're interested in doing some kind of post-processing after the atom run.)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/496368", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/332235/" ] }
496,379
My zshrc includes the following function to create a directory and then enter it:

    function mcd () {
        mkdir -p "$*" && cd "$*"
    }

The function itself works fine but I get odd behavior with completion. If I start typing e.g. mcd ~/ and then press Tab, the message _mtools_drives:3: command not found: mtoolstest is inserted at the insertion point and nothing is completed. What I want is for the command to be completed just as mkdir would be: zsh should offer me the names of existing directories. How do I tell zsh that for completion purposes, it should treat mcd the same as mkdir?
The following snippet causes mcd to be completed like mkdir:

    compdefas () {
        if (($+_comps[$1])); then
            compdef $_comps[$1] ${^@[2,-1]}=$1
        fi
    }
    compdefas mkdir mcd

The way it works is to look up the current completion setting for mkdir. The completion code for a function (generally the name of a completion function) is stored in the associative array _comps. Thus compdef $_comps[mkdir] mcd declares that mcd should be completed in the same way that mkdir is completed right now.

The function above adds a few niceties:

- The test for (($+_comps[$1])) ensures that if $1 doesn't have a specified completion method then no completion method is set for the other arguments.
- ${@[2,-1]} is the list of arguments to the function starting with the second one, so you can specify more than one command name to define completions for. It's actually ${^@[2,-1]} so that the text around the array expansion is replicated for each array element.
- =$1 sets the service name to use. This matters only for a few commands whose completion function handles several closely-related commands. For example the completion function _gzip handles both gzip and gunzip as well as pigz and unpigz; compdef _gzip foo makes foo use the default behavior of _gzip while compdef _gzip foo=pigz makes foo use the behavior of _gzip when it completes for pigz.

Turning to your specific case, the default completion for mkdir not only offers directories, but also options, which your function does not support. So you'd actually be better off defining mcd as just completing existing directories. Zsh comes with a helper function for that (an undocumented wrapper around _files):

    compdef _directories mcd

The reason you were getting these bizarre-looking completions for mcd is that it's the name of a command from a once moderately widespread suite of commands, mtools.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/496379", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88560/" ] }
496,381
I have a process that I want to kill remotely. I tried

    ssh remotehost "kill -9 $(ps -aux | grep foo | grep bar | awk '{print $2}')"

but this returns the error

    kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]

However if I run the command within the quotation marks

    kill -9 $(ps -aux | grep foo | grep bar | awk '{print $2}')

on the remote host it works fine. Am I missing something here?
The $(..) command substitution would fail as the $ is expanded by the local shell even before it is passed to the stdin of the ssh command. You either need to escape it or use a here-document. Also the $2 inside the awk command gets interpolated as a command-line argument, so we escape it to defer its expansion until the command is executed remotely.

With escaping:

    ssh remotehost "kill -9 \$(ps -aux | grep foo | grep bar | awk '{print \$2}')"

or with a here-doc:

    ssh remotehost <<'EOF'
    kill -9 $(ps -aux | grep foo | grep bar | awk '{print $2}')
    EOF

Also note that grep .. | grep .. | awk is superfluous. You can do the whole operation with awk alone. Or even better, use pkill to get the process to kill directly by name.
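A sketch of the pkill variant suggested at the end (the pattern is whatever uniquely matches your process's command line):

    ssh remotehost "pkill -9 -f 'foo.*bar'"

The -f flag makes pkill match against the full command line rather than just the process name.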
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/496381", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/222302/" ] }
496,438
Debian 9.7 released today. I wanted to update my pendrive, but as I go to the release page there are offline installers, linked below as Debian 9.7 "Current" and Debian 9.7 "Current - live". I am well aware of the primary difference, but what bugs me is that an image in Debian "Current" is about 3.4 - 4.4 GB, labeled as DVD 1 .. 2 .. 3, while the max file size in "Current - live" is 2.4 GB (KDE). Why?
The contents of the DVDs are quite different (beyond the live v. non-live side of things): the first DVD in the DVD images contains 4566 packages , whereas the KDE live image contains 2604 .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/496438", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/278984/" ] }
496,439
I was trying to add JAVA_HOME to the path variable. I downloaded the Java JDK and did the following:

    nano ~/.bash_profile

added the following lines and saved the file:

    export JAVA_HOME=$(/usr/libexec/java_home)
    export PATH=$JAVA_HOME/bin=$PATH

then ran:

    source ~/.bash_profile

After that I tried to open the file again using:

    nano ~/.bash_profile

It shows:

    -bash: nano: command not found

I tried other commands too, such as brew doctor, curl, vim, java -version etc. All of them show a command not found error. What is the solution for this? How can I restore my system?

Updated: Solution that I used: I ran the following command to set the standard default path that macOS uses in the command line:

    export PATH="/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"
The line

    export PATH=$JAVA_HOME/bin=$PATH

should read

    export PATH="$JAVA_HOME/bin:$PATH"

(note the = changing to : towards the end; I also double-quoted the value for safety in case there are any spaces in any of the pathnames).

You will have to change that using the full path to the nano editor (/usr/bin/nano on macOS):

    /usr/bin/nano ~/.bash_profile

... and then restart your shell/terminal.

Using source on shell startup files is almost never a good idea as that would add to the existing PATH variable (and possibly to others as well) rather than modify a "clean" version of the variable, and it may have other interesting side-effects if things like tmux or screen are automatically started.

You could also temporarily get a sensible value for PATH so that you can repair the file with nano, using

    PATH=$(getconf PATH)
    nano ~/.bash_profile

The getconf PATH command returns a PATH string that is supposed to cover all standard utilities. On macOS, this includes the nano editor.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/496439", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/333186/" ] }
496,451
I'm forced to use a script like below:

    # test.sh
    function my_fun
    {
        echo "Give value for FOO"
        local my_var
        read my_var
        export FOO=$my_var
    }

    # Call my_fun function
    my_fun

by sourcing it from my shell:

    $ source test.sh
    Give value for FOO
    stackexchange
    $ echo $FOO
    stackexchange

I would like to automate the script with expect like shown below:

    $ expect test.exp
    $ echo $FOO
    stackexchange

The amount and names of the environment variables in test.sh is unknown.

Update: Added the my_fun function call to test.sh.
The fundamental problem is that a child process cannot alter the environment of its parent. This is why you need to source that shell script, so the environment variables will persist in your current shell. Expect is designed to spawn child processes. Your current shell cannot be affected by the result of expect test.exp.

However, you can spawn a shell, source that shell script, and then keep that shell around by interacting with it. This is off the top of my head and is untested:

    #!/usr/bin/expect -f
    set timeout -1
    spawn $env(SHELL)
    set myprompt "some pattern that matches your prompt"
    expect -re $myprompt
    send "source test.sh\r"
    expect {
        "Give value for " {
            # provide the same answer for every question:
            send "some value\r"
            exp_continue
        }
        -re $myprompt
    }
    interact

Now you are interacting with the shell you spawned. When you exit that shell, the spawned shell dies, the expect script ends, and you are back in your current shell (without the variables you initialized).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/496451", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144728/" ] }
496,461
I have installed python3.7, however I am not sure how to make it the default python. See below:

    ~/Documents/robosuite$ python3.7
    Python 3.7.1 (default, Oct 22 2018, 11:21:55)
    [GCC 8.2.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>>
    KeyboardInterrupt
    >>>
    ~/Documents/robosuite$ python3
    Python 3.6.7 (default, Oct 22 2018, 11:32:17)
    [GCC 8.2.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>>

I want python3.7 to show up when I use the command python3.
A simple solution is to edit .bashrc and put in this line:

    alias python3=python3.7

Whenever you write python3 it will be replaced with python3.7.

Or you can use the command update-alternatives, which is preferred, i.e.:

    sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 1
    sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.7 2

Here python3.7 will have higher priority than python3.6. Then use:

    sudo update-alternatives --config python3

Press the enter key if you are satisfied with the default choice.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/496461", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/333207/" ] }
496,469
Essentially, I want to know how to run 2 (or more) find commands in one - an "or" search rather than an "and":

    find . -name "*.pem"
    find . -name "*.crt"
find's "or" operator is -o:

    find . -name "*.pem" -o -name "*.crt"

It is short-circuiting, i.e. the second part will only be evaluated if the first part is false: a file which matches *.pem won't be tested against *.crt. -o has lower precedence than "and", whether explicit (-a) or implicit; if you're combining operators you might need to wrap the "or" part with parentheses:

    find . \( -name "*.pem" -o -name "*.crt" \) -print

In my tests this is significantly faster than using a regular expression, as you might expect (regular expressions are more expensive to test than globs, and -regex tests the full path, not only the file name as -name does).
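For instance, combining the "or" group with a further test requires the parentheses so the extra test applies to both name patterns (a sketch; the -mtime window is arbitrary):

    find . \( -name "*.pem" -o -name "*.crt" \) -mtime -7

This lists files matching either pattern that were also modified within the last 7 days.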
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/496469", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/329458/" ] }
496,475
When I open a file with vi, as vi file, I see:

    Current partition replica assignment
    @@@@@{"version":1,"partitions":[{"topic")]
    @@@@@
    Proposed partition reassignment configuration

But when I try to delete these empty lines with

    sed -i 's/^ *//; s/ *$//; /^$/d; /^\s*$/d' file

or

    sed -i '/^$/d' file

or

    sed -i '/^$/d' file

the file still has the empty lines. How do I remove the empty/blank lines?
Those are not empty lines, but lines that are too long to fit on the screen, even after wrapping.

    $ perl -e 'print "foo\n", "bar " x 4096' >/tmp/file
    $ vim /tmp/file
    foo
    @@@...

This is what the standard says:

    In visual mode, if a line from the edit buffer (other than the current line) does not entirely fit into the lines at the bottom of the display that are available for its presentation, the editor may choose not to display any portion of the line. The lines of the display that do not contain text from the edit buffer for this reason shall each consist of a single '@' character.

Also look at vim's documentation about the display option:

    When neither "lastline" nor "truncate" is included, a last line that doesn't fit is replaced with "@" lines.

Do not confuse them with null bytes, which are usually shown as ^@ (notice the caret). Also, this behavior is not universal (it doesn't seem to be implemented in nvi).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/496475", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
496,505
Generally, shell scripts contain the following comment at the first line of the script file: #!/bin/sh. From the research that I made, this is called a "hash bang" and it is a conventional comment. It informs Unix that this file is executed by the Bourne Shell under the directory /bin. My question begins at that point. Up to now I have not seen this comment written as #!/bin/bash. It is always #!/bin/sh. However, Ubuntu distributions do not have the Bourne Shell program. They have the Bourne Again Shell (bash). At that point, is it correct to place the comment #!/bin/sh in shell scripts written on Ubuntu distributions?
#!/bin/sh should work on all Unix and Unix-like distributions. It is generally thought of as the most portable hashbang so long as your script is kept POSIX compliant. The /bin/sh shell is supposed to be a shell that implements the POSIX shell standard, regardless of what the actual shell is that masquerade as the /bin/sh shell. #!/bin/sh is normally just a link now as the Bourne shell is no longer maintained. On many Unix systems /bin/sh will be a link to /bin/ksh or /bin/ash , on many RHEL based Linux systems it will be a link to /bin/bash , however on Ubuntu and many Debian based systems it is a link to /bin/dash . All shells, when invoked as sh , will enter POSIX compatibility mode. The hashbang is an important placeholder though because it allows for much greater portability than other methods, so long as your script is strictly POSIX compliant (repeated to stress importance). Note: When bash is invoked in POSIX mode it will still allow some non-POSIX things like [[ , arrays, and more. Those things may fail on a non- bash system.
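As a small illustration, a script that stays within POSIX sh (the filename test is just an example):

    #!/bin/sh
    # no [[ ]], no arrays - plain POSIX constructs only
    for f in *.txt; do
        [ -e "$f" ] || continue   # nothing matched the pattern
        printf 'found: %s\n' "$f"
    done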
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/496505", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/278582/" ] }
496,542
I'm working with Debian GNU/Linux 7.5 (wheezy). When I check the file /etc/apt/sources.list, the file has these lines:

    # Line commented out by installer because it failed to verify:
    #deb http://security.debian.org/ wheezy/updates main

I tried to replace it with another repo line, but each time I replace it and run the command apt-get update, after a while I get:

    E: Some index files failed to download. They have been ignored, or old ones used instead.

What should I do to solve this problem?
EDIT: You may be able to follow the advice in this linuxquestions thread: uncomment the relevant entries in your sources.list and try to run apt update.

Debian 7 reached EOL as of May 31, 2018. Those repositories are no longer active. You can still receive support for Wheezy but it is going to be a lot less painful and expensive on your part to make moves to migrate to Jessie or even Stretch.

On the Debian Wiki there is advice on what to do now that Debian Wheezy is EOL. You may also want to check out this entry to get more information on what to do.

If you are upgrading to Jessie, your sources.list needs to be changed to the following:

    deb http://deb.debian.org/debian/ jessie main contrib non-free
    deb-src http://deb.debian.org/debian/ jessie main contrib non-free
    deb http://security.debian.org/ jessie/updates main contrib non-free
    deb-src http://security.debian.org/ jessie/updates main contrib non-free
    deb http://deb.debian.org/debian/ jessie-updates main contrib non-free
    deb-src http://deb.debian.org/debian/ jessie-updates main contrib non-free

Then run apt update, apt install apt -t jessie, apt upgrade, and finally apt-get dist-upgrade to update and upgrade to Jessie.

If you absolutely need to stay on Wheezy and cannot change to Jessie, either due to issues outlined here or because of some other limitations on your environment, then you may need to change over to the archive mirrors or look into paid ELTS support. Archive mirrors will look something like this:

    deb http://archive.debian.org/debian/ wheezy main contrib non-free
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/496542", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66743/" ] }
496,602
Why am I seeing: btrfs replace "ERROR: target device smaller than source device" when I have already shrunk the source device filesystem to be smaller than the target via: btrfs filesystem resize <devid>:<small-size> /mountpoint
I encountered this when trying to replace a disk with one slightly smaller. I was getting this error even after resizing the filesystem on the source drive. Since I was using whole disks, there was no option to resize the partition.

The trick turned out to be to pass a devid for the source drive instead of a device path. That seemed to result in btrfs replace checking the actual filesystem size on the source device, and not the size of the device itself.

My initial state:

    # btrfs fi show /mnt/storage
    Label: 'Storage'  uuid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
            Total devices 5 FS bytes used 15.25TiB
            devid    1 size 7.28TiB used 3.84TiB path /dev/sdb
            devid    2 size 7.28TiB used 3.84TiB path /dev/sdc
            devid    3 size 7.28TiB used 3.84TiB path /dev/sdd
            devid    4 size 7.28TiB used 3.84TiB path /dev/sde
            devid    5 size 7.28TiB used 3.84TiB path /dev/sdf

I wanted to replace /dev/sdf with /dev/sdg. Attempt #1:

    # btrfs replace start /dev/sdf /dev/sdg /mnt/storage
    ERROR: target device smaller than source device (required 8001561124864 bytes)

Resizing the filesystem on /dev/sdf (devid 5):

    # blockdev --getsize64 /dev/sdg
    8001546444800
    # btrfs fi res 5:8001546444800 /mnt/storage
    Resize '/mnt/storage' of '5:8001546444800'

Attempt #2:

    # btrfs replace start /dev/sdf /dev/sdg /mnt/storage
    ERROR: target device smaller than source device (required 8001561124864 bytes)

No change. It appears that when specifying the source as a block device, replace only looks at the size of the block device when checking whether there is enough space on the destination. However, perusing the source code, I discovered that replace handles a source devid differently, and actually retrieves the correct size from the filesystem. This led to attempt #3:

    # btrfs replace start 5 /dev/sdg /mnt/storage

This formulation, combined with the preceding resize, allowed the replace operation to start successfully.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/496602", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
496,684
How can I modify the following content of a file:

    cat:persian/young-1
    cat:winter/young-2
    cat:summer/wild-3
    dog:persian/young-1
    dog:winter/young-2
    dog:summer/wild-3

To:

    cat:persian/young-1
    cat:winter/young-2
    cat:summer/wild-3

    dog:persian/young-1
    dog:winter/young-2
    dog:summer/wild-3

It's not specific to dog or cat; it's more of a symbolic representation of whatever the first word/term is.
You could do something like:

    awk -F: 'NR>1 && $1 "" != last {print ""}; {print; last = $1}'

The "" is to force string comparison. Without it, it wouldn't work properly on input like:

    100:foo
    100:bar
    1e2:baz
    1e2:biz

where 100 and 1e2 would be compared as numbers.
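Applied to the sample input above, a run would look like this (illustrative):

    $ awk -F: 'NR>1 && $1 "" != last {print ""}; {print; last = $1}' file
    cat:persian/young-1
    cat:winter/young-2
    cat:summer/wild-3

    dog:persian/young-1
    dog:winter/young-2
    dog:summer/wild-3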
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/496684", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/175715/" ] }
496,705
When I'm turning my Debian Stretch (9) off, chances are that I see something like: So a have a few questions: 1) It seems like a bug that's not been solved yet (it's been around for a few years). By "bug" I mean Linux should turn off faster than Windows; if it doesn't, there's a bug. 2) Since this bug seems hard to isolate and solve, maybe a "Esc to cancel" would solve a big part of the problem. 3) I have programming experience, but not with Linux Kernel and such. Am I advised to try to include "Esc to cancel" myself? If so, which file should I change? May I compile only this file, or something more? EDIT Contents of /etc/gdm3/daemon.conf # GDM configuration storage## See /usr/share/gdm/gdm.schemas for a list of available options.[daemon]# Uncoment the line below to force the login screen to use Xorg#WaylandEnable=false# Enabling automatic login# AutomaticLoginEnable = true# AutomaticLogin = user1# Enabling timed login# TimedLoginEnable = true# TimedLogin = user1# TimedLoginDelay = 10[security][xdmcp][chooser][debug]# Uncomment the line below to turn on debugging# More verbose logs# Additionally lets the X server dump core if it crashes#Enable=true
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/496705", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/134202/" ] }
496,706
I want to rename all files in a folder, including a current time stamp in the name, retaining the original extension, on Solaris, like this:

    test1.txt > test1.date.txt

I tried this, but I lose the first part of the name:

    find * -prune -type f -name '*.txt' -exec mv {} {}.$(date +'%Y%m%d%H%M').txt \;

What am I missing?
    find . -type f -name '*.txt' -exec sh -c '
        now=$( date +%Y%m%d%H%M )
        for pathname do
            mv "$pathname" "${pathname%.txt}.$now.txt"
        done' sh {} +

This would find all pathnames of regular files in or below the current directory, whose names end in .txt. For batches of these, a short shell script is called. This script will first get the timestamp (this is had only once for each batch, for efficiency) and then loop over the given pathnames. For each pathname, it will rename it by removing the final .txt (using ${pathname%.txt}), and adding a dot, the timestamp and .txt.

This has not been tested on Solaris, but uses only standard components and options etc.

To compute the timestamp only once, before calling find, use

    now=$( date +%Y%m%d%H%M ) \
    find ...as before (but without assigning to now)...

or

    env now="$( date +%Y%m%d%H%M )" \
    find ...as before (but without assigning to now)...

(note the line continuations in the above two commands)

If the files are located in a single directory, just run the loop:

    now=$( date +%Y%m%d%H%M )
    for pathname in ./*.txt; do
        mv "$pathname" "${pathname%.txt}.$now.txt"
    done

This assumes that the pattern ./*.txt matches all names that you'd like to rename.

Related: Understanding the -exec option of `find`

Your {}.$(date +'%Y%m%d%H%M').txt does not work portably. It may work with some implementations of find, but an implementation is not required to expand {} to the current pathname if the {} occurs together with another string. The relevant text from the POSIX standard is:

    A utility_name or argument containing only the two characters {} shall be replaced by the current pathname. If a utility_name or argument string contains the two characters {}, but not just the two characters {}, it is implementation-defined whether find replaces those two characters or uses the string without change.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/496706", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/333415/" ] }
496,717
    $ cat contents.txt
    cat-1.15

    cat-1.15

    cat-1.15

    cat-1.18

The above output has blank lines.

    $ cat contents.txt | grep cat

results in the word cat being highlighted, but the resultant text is also merged, eliminating the blank lines:

    cat-1.15
    cat-1.15
    cat-1.15
    cat-1.18

How can I grep to highlight without grep affecting the text structure, so that the only difference is the grep term being highlighted?
With GNU grep this can be accomplished with the -z option.

    -z, --null-data
        Treat input and output data as sequences of lines, each terminated
        by a zero byte (the ASCII NUL character) instead of a newline. Like
        the -Z or --null option, this option can be used with commands like
        sort -z to process arbitrary file names.

Also this is a UUOC. You can specify an input file with grep.

    $ grep --color cat contents.txt
    cat-1.15
    cat-1.15
    cat-1.15
    cat-1.18
    $ grep --color -z cat contents.txt
    cat-1.15

    cat-1.15

    cat-1.15

    cat-1.18
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/496717", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/175715/" ] }
496,843
I do the following:

    $ ./input ""

Output:

    argc 2
    argv[1][0] 0

But if I want to pass (several) empty quotes in a program-based manner:

    $ python -c 'print "\"\""'
    ""

    ./input $(python -c 'print "\"\""')

gives:

    argc 2
    argv[1][0] 22 - (22 hex value for ")

So how can I generate something like

    $ ./input "" "" "" ""

and get the same result as in example 1?
In

    ./input $(cmd)

because $(cmd) is unquoted, that's a split+glob operator. The shell retrieves the output of cmd, removes all the trailing newline characters, then splits that based on the value of the $IFS special parameter, and then performs filename generation (for instance turns *.txt into the list of non-hidden txt files in the current directory) on the resulting words (that latter part not with zsh) and in the case of ksh also performs brace expansion (turns a{b,c} into ab and ac for instance).

The default value of $IFS contains the SPC, TAB and NL characters (also NUL in zsh, other shells either remove the NULs or choke on them). Those (not NUL) also happen to be IFS-whitespace characters¹, which are treated specially when it comes to IFS-splitting.

If the output of cmd is " a b\nc \n", that split+glob operator will generate "a", "b" and "c" arguments to ./input.

With IFS-whitespace characters, it's impossible for split+glob to generate an empty argument because sequences of one or more IFS-whitespace characters are treated as one delimiter. To generate an empty argument, you'd need to choose a separator that is not an IFS-whitespace character. Actually, any non-whitespace character will do (best to also avoid multi-byte characters which are not supported by all shells here).

So for instance if you do:

    IFS=:            # split on ":" which is not an IFS-whitespace character
    set -o noglob    # disable globbing (also brace expansion in ksh)
    ./input $(cmd)

and if cmd outputs a::b\n, then that split+glob operator will result in "a", "" and "b" arguments (note that the "s are not part of the value, I'm just using them here to help show the values).

With a:b:\n, depending on the shell, that will result in "a" and "b" or "a", "b" and "". You can make it consistent across all shells with ./input $(cmd)"" (which also means that for an empty output of cmd (or an output consisting only of newline characters), ./input will receive one empty argument as opposed to no argument at all).

Example:

    cmd() {
        printf 'a b:: c\n'
    }
    input() {
        printf 'I got %d arguments:\n' "$#"
        [ "$#" -eq 0 ] || printf ' - <%s>\n' "$@"
    }
    IFS=:
    set -o noglob
    input $(cmd)

gives:

    I got 3 arguments:
     - <a b>
     - <>
     - < c>

Also note that when you do:

    ./input ""

those "s are part of the shell syntax, they are shell quoting operators. Those " characters are not passed to input.

¹ IFS whitespace characters, per POSIX, being the characters classified as [:space:] in the locale and that happen to be in $IFS, though in ksh88 (on which the POSIX specification is based) and in most shells, that's still limited to SPC, TAB and NL. The only POSIX compliant shell in that regard I found was yash. ksh93 and bash (since 5.0) also include other whitespace (such as CR, FF, VT...), but limited to the single-byte ones (beware that on some systems like Solaris, that includes the non-breaking space, which is single byte in some locales).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/496843", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39130/" ] }
496,868
I'm trying to write a bash script that shows a user's name based on the uid the user provides:

    #!/bin/bash
    read -p "donner l UID" cheruid
    if [ $(grep -w $cheruid) -n ]
    then
        grep -w $cheruid /etc/passwd | cut -d ":" -f "1" | xargs echo "user is : "
    else
        echo "user not found"
    fi

When I execute this the terminal only shows the prompt message then stops working. Am I missing something?
With GNU id , you can do: id -un -- "$cheruid" That will query the account database (whether it's stored in /etc/passwd , LDAP, NIS+, a RDBMS...) for the first user name with that uid. Generally, there's only one user name per uid, but that's not guaranteed, the key in the user account database is the username, not user id. If you want to know all the user names for a given uid, you can do: getent passwd | ID=$cheruid awk -F: '$3 == ENVIRON["ID"] {print $1}' But that may not work for some account databases that are not enumerable (as sometimes the case for large LDAP-based ones).
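A quick usage illustration (the uid and the resulting name are placeholders):

    $ id -un -- 1000
    alice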
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/496868", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/333523/" ] }
496,874
When playing with filesystems and partitions, I realized that when I created an ext filesystem on my USB drive and plugged it into Windows, I was forced to format it. On the other hand, when building a FAT partition on Windows and plugging it into my virtual machine, Linux is perfectly able to read and mount my FAT partition. 1 - Why can't Windows read Linux filesystems? 2 - What's the key difference that allows Linux to do it, yet Windows can't?
Windows can’t read “Linux” file systems (such as Ext4 or XFS) by default because it doesn’t ship with drivers for them. You can install software such as Ext2fsd to gain read access to Ext2/3/4 file systems. Linux can access FAT file systems because the kernel has a FAT file system driver, and most distributions enable it by default. There are cases where Linux distributions won’t be able to access a Windows-formatted USB key by default: large keys are typically formatted using ExFAT, and the Linux kernel doesn’t support that. You would have to install a separate ExFAT driver in this situation. There’s nothing inherent in Windows or Linux which limits their ability to support file systems; it’s really down to the availability of drivers. Linux supports Windows file systems because they are very popular; this then provides a common basis for file exchange, meaning that there is less need for Windows to support Linux file systems.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/496874", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/332329/" ] }
496,881
I have a bash script that has a for loop that iterates a lot of times and sleeps between iterations, potentially for a long time. It then writes results to a file, then terminates. On occasion I get the result I need before many loop iterations are complete. On these occasions I need the script to break out of the for loop while it is either doing stuff, or sleeping, in such a way that it will continue with the rest of the script after the for loop, which is writing a report file of data it's gathered so far. I wish to use a key combination, eg CTRL+Q, to break out of the for loop. Script looks like this:

#!/bin/bash
for (( c=0; c<=$1; c++ ))
do
# SOME STUFF HERE
# data gathering into arrays and other commands here etc

# sleep potentially for a long time
sleep $2
done

#WRITE REPORT OUT TO SCREEN AND FILE HERE
I've not had contact with this kind of task for a while, but I remember something like this used to work:

#!/bin/bash
trap break INT

for (( c=0; c<=$1; c++ ))
do
  # SOME STUFF HERE
  # data gathering into arrays and other commands here etc
  echo loop "$c" before sleep
  # sleep potentially for a long time
  sleep "$2"
  echo loop "$c" after sleep
done

#WRITE REPORT OUT TO SCREEN AND FILE HERE
echo outside

The idea is to use Ctrl-C to break the loop. This signal (SIGINT) is caught by the trap, which breaks the loop and lets the rest of the script follow. Example:

$ ./s 3 1
loop 0 before sleep
loop 0 after sleep
loop 1 before sleep
^C
outside

Let me know if you have any problems with this.
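If it really has to be Ctrl-Q rather than Ctrl-C, one possible extension of the same idea (a sketch, not from the original answer) is to rebind the terminal's interrupt character for the duration of the script. Note that ^Q is normally the XON flow-control character, so flow control has to be disabled first, and the terminal settings should be restored afterwards:

#!/bin/bash
old_stty=$(stty -g)        # save current terminal settings
stty -ixon intr '^q'       # disable XON/XOFF, make Ctrl-Q send SIGINT
trap break INT

for (( c=0; c<=$1; c++ ))
do
  sleep "$2"
done

stty "$old_stty"           # restore the terminal
trap - INT                 # back to the default SIGINT behaviour
#WRITE REPORT OUT TO SCREEN AND FILE HERE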
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/496881", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46470/" ] }
496,982
I created some systemd services which basically work. Location: /etc/systemd/system/multi-user.target.wants/publicapi.service Content:

[Unit]
Description=public api startup script

[Service]
Type=oneshot
RemainAfterExit=yes
EnvironmentFile=-/etc/environment
WorkingDirectory=/home/techops
ExecStart=/home/techops/publicapi start
ExecStop=/home/techops/publicapi stop

[Install]
WantedBy=multi-user.target

When I try to restart the service as techops user in the command line, I get the following output:

==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to start 'publicapi.service'.
Multiple identities can be used for authentication:
 1.  Myself,,, (defaultuser)
 2.  ,,, (techops)
Choose identity to authenticate as (1-2):

I want that only techops can restart services and I want that this prompt does not appear when being logged in as techops. How can I do that? I read that there are different approaches with polkit-1 or sudoers, but I'm unsure.

[UPDATE] 2019-01-27 4:40pm Thanks for this comprehensive answer to Thomas and Perlduck. It helped me to improve my knowledge of systemd. Regarding the approach to start the service without a password prompt, I want to apologize that I did not emphasize the real problem enough: actually, what is most important for me is that no other user than techops should stop or start the service. But at least with the first two approaches I can still run service publicapi stop and I get the prompt

==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===

again. When I choose the defaultuser and know the password, I could stop all the services. I want to deny this user from doing that, even if he has the password. Important background info to better understand why this is the more important part for me: the defaultuser is the only user which is exposed to ssh, but this user cannot do anything else (except changing to other users if you have the password of these other users). But at the moment, he can start or stop the services, and this user must not do this. If someone gets the password of defaultuser and logs in via ssh, then he could stop all the services at the moment. This is what I meant with "I want that only techops can restart services". Sorry that I was not that exact in my initial question. I thought that sudoing the techops user would maybe bypass this problem, but it does not. The problem itself is not to run the command without a password prompt. (I could easily do that as techops user by just executing /home/techops/publicapi start.) The problem itself is to lock out the defaultuser from starting these services. And I hoped that any of the solutions could do that. I started with the approaches of Thomas. The approach with sudo works when I don't want to get asked for the password for the user techops when I execute the commands as explained, e.g.

sudo systemctl start publicapi.service
sudo systemctl stop publicapi.service

The second approach does not work for me yet. I cannot start the service without the password prompt

==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===

and I still can log in as defaultuser when I have the password of this user. With the third approach, the service does not even start at boot anymore, so I'm not sure if this approach is the right one for me at all. I cannot even enable it with systemctl enable publicapi.service, which leads me to the following error:

Failed to enable unit: Unit file mycuisine-publicapi.service does not exist.
The error does not occur when I move all the services back into /etc/systemd/system/ and execute systemctl enable publicapi.service. Then the service starts again at boot. All these approaches will more or less help to bypass the password prompt for the techops user, but when I run service publicapi stop or systemctl stop publicapi as defaultuser, I can stop the services if I have the password. But my target is to lock out defaultuser from starting or stopping services at all.
To achieve that the user techops can control the service publicapi.service without giving a password, you have different possibilities. Which one is suitable for you cannot be answered here, as you have to choose on your own. The classical sudo approach is maybe the most used, as it has been around for a long time. You would have to create e.g. the file as follows. Note that the drop-in directory /etc/sudoers.d is only active when #includedir /etc/sudoers.d is set in /etc/sudoers. But that should be the case if you are using a modern Ubuntu distribution. As root execute:

cat > /etc/sudoers.d/techops << SUDO
techops ALL= NOPASSWD: /bin/systemctl restart publicapi.service
techops ALL= NOPASSWD: /bin/systemctl stop publicapi.service
techops ALL= NOPASSWD: /bin/systemctl start publicapi.service
SUDO

Now you should be able to run the systemctl commands as user techops without giving a password by prepending sudo to the commands.

sudo systemctl start publicapi.service
sudo systemctl stop publicapi.service
sudo systemctl restart publicapi.service

The second method would be to use PolKit (was renamed from PolicyKit) to allow the user techops to control systemd services. Depending on the version of polkit, you can give normal users control over systemd units. To check the polkit version, just run pkaction --version.

With polkit version 0.106 and higher, you can allow users to control specific systemd units. To do so, you could create a rule as root:

cat > /etc/polkit-1/rules.d/10-techops.rules << POLKIT
polkit.addRule(function(action, subject) {
    if (action.id == "org.freedesktop.systemd1.manage-units" &&
        action.lookup("unit") == "publicapi.service" &&
        subject.user == "techops") {
            return polkit.Result.YES;
    }
});
POLKIT

With polkit version 0.105 and lower, you can allow users to control systemd units. This unfortunately includes all systemd units and you might not want to do this. Not sure if there is a way to limit access to specific systemd units with version 0.105 or lower, but maybe someone else can clarify. To enable this, you could create a file as root:

cat > /etc/polkit-1/localauthority/50-local.d/org.freedesktop.systemd1.pkla << POLKIT
[Allow user techops to run systemctl commands]
Identity=unix-user:techops
Action=org.freedesktop.systemd1.manage-units
ResultInactive=no
ResultActive=no
ResultAny=yes
POLKIT

In both cases you can run systemctl [start|stop|restart] publicapi.service as user techops without giving a password. In the latter case (polkit <= 0.105) the user techops could control any systemd unit. A third option would be to make the service a user service, which does not need sudo or polkit configurations. This puts everything under the control of the user, and only works if your actual service that is started with /home/techops/publicapi start can run without root privileges. First you have to enable lingering for the user techops. This is needed to start the user service on boot. As root execute:

loginctl enable-linger techops

Next you have to move the systemd unit file into the techops user directory. As user techops execute the commands as follows.

mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/publicapi.service << UNIT
[Unit]
Description=public api startup script

[Service]
Type=oneshot
RemainAfterExit=yes
EnvironmentFile=-/etc/environment
WorkingDirectory=/home/techops
ExecStart=/home/techops/publicapi start
ExecStop=/home/techops/publicapi stop

[Install]
WantedBy=default.target
UNIT

Note that the WantedBy has to be default.target, as there is no multi-user.target in the user context.
Now reload the configuration and enable the service. Again as user techops execute the commands.

systemctl --user daemon-reload
systemctl --user enable publicapi.service
systemctl --user start publicapi.service

In general you should place your systemd units in /etc/systemd/system/, not directly in /etc/systemd/system/multi-user.target.wants. When you execute systemctl enable publicapi.service, a symbolic link will be created in /etc/systemd/system/multi-user.target.wants or whatever target is specified for that unit. As already mentioned, if the service/process itself can be run without root privileges, you should consider adding User=techops to your unit file to run the process with a non-privileged user account.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/496982", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3214/" ] }
496,994
I try, with no success, to use an awk command inside a for loop. I've got a variable which contains a series of strings that I want to cut with awk to get the data. I know how to do that, but what I really want is to cut the data successively. So I've got this variable:

var="data1,data2,data3"

And here is where I am right now:

for ((i=1; i<=3; i++))
do
    echo $(awk -F, '{print $1}' <<< $var)
done

I try to replace the $1 by the loop $i but without success.
You can accomplish what you're trying to do by using double quotes in the awk script to inject the shell variable into it. You still want to keep one literal $ in it, which you can do by escaping it with backslash:

echo $(awk -F, "{print \$$i}" <<<$var)

This will expand the $i to 1, 2 and 3 in each of the iterations, therefore awk will see $1, $2 and $3, which will make it expand each of the fields. Another possibility is to inject the shell variable as an awk variable using the -v flag:

echo $(awk -F, -v i="$i" '{print $i}' <<<$var)

That assigns the awk variable i to the contents of the shell variable with the same name. Variables in awk don't use a $, which is used for fields, so $i is enough to refer to the i-th field if i is a variable in awk. Assigning an awk variable with -v is generally a safer approach, particularly when it can contain arbitrary sequences of characters; in that case there's less risk that the contents will be executed as awk code against your intentions. But since in your case the variable holds a single integer, that's less of a concern. Yet another option is to use a for loop in awk itself, as sketched below. See awk documentation (or search this site) for more details on how to do that.
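As a concrete sketch of that last suggestion (the whole split done by a single awk invocation, no shell loop needed):

var="data1,data2,data3"
awk -F, '{ for (i = 1; i <= NF; i++) print $i }' <<<"$var"

This prints each comma-separated field on its own line, and avoids re-reading the variable once per field as the shell loop does.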
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/496994", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/333659/" ] }
497,032
I have installed pulseaudio-module-bluetooth using apt.

$ type pulseaudio-module-bluetooth
bash: type: pulseaudio-module-bluetooth: not found
$ which pulseaudio-module-bluetooth
$ whereis pulseaudio-module-bluetooth
pulseaudio-module-bluetooth:

Clearly, I'm looking for the wrong thing. The package does not simply install a command that is the same name as the package. Alright, then. I want to find out what all commands (or executables) this package installed, and their locations. The answers to "How to get information about deb package archive?" tell me how to find the files installed by the package if I have a .deb file. I'm not installing directly from a .deb file, though. I'm using apt. Where is the .deb file that that used? Is there a copy on my system somewhere that I can query with the commands in the answers to that question? Where? If there isn't a local copy on my system, can I get one with apt? How? Is there some handy apt (or similar) command that wraps this up for me, so that I do not have to run dpkg-deb directly? What is it? Can I find the package's file list entirely on-line, without explicitly downloading any .deb files and before installing anything with apt? How?
I think there is an existing answer to your question (which isn’t How to get information about deb package archive?), but I can’t find it. To list the contents of an installed package, use dpkg -L:

dpkg -L pulseaudio-module-bluetooth

If you want to list the contents of a package before installing it, install apt-file, then run apt update, and

apt-file list pulseaudio-module-bluetooth

will list the contents of the package without downloading it or installing it. You can also view the contents of a package from its web page; look for “list of files” links at the bottom of the page.
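For completeness, the one-time setup for the apt-file route looks like this (assuming a Debian/Ubuntu system where apt-file isn't installed yet; with apt-file 3.x the package file lists are fetched by apt update itself):

sudo apt install apt-file
sudo apt update
apt-file list pulseaudio-module-bluetooth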
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/497032", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220462/" ] }
497,063
I just plugged my new external HDD into my Debian Buster system via USB A 3.0 / C 3.1. The disk was sold as LaCie 2.5" Porsche Design P'9227 2TB USB-C. Here is the output of fdisk -l /dev/sdc:

Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: P9227 Slim
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes

I just read some articles about 4k-emulated drives (512e); this one should be the case. I am confused as to how to format it with NTFS. I tried to use my brain, and here is what I came up with: Start sector of the partition should probably start on sector 4096 (?). So I created a partition with gdisk like this:

Device     Start        End    Sectors  Size Type
/dev/sdc1   4096 3907029134 3907025039  1.8T Microsoft basic data

Sector size should probably be forced with the --sector-size option like I did (?) issuing:

mkfs.ntfs --no-indexing --verbose --with-uuid --label EXTERNAL_2TB --quick --sector-size 4096 /dev/sdc1

EDIT1: Windows 10 fully updated did not recognize the partition and asked me to format it. I used my favorite tool for that, and back on Linux here is the output of fdisk -l /dev/sdc:

Device     Start        End    Sectors  Size Type
/dev/sdc1   2048 3907028991 3907026944  1,8T Microsoft basic data

So why it must start at sector 2048, I don't understand.

EDIT2: I don't understand what I am doing wrong in terms of compatibility with Windows. Every time I re-partition it / re-format it and boot Windows and plug the drive in, it just offers me to format it itself. I am quite positive I tried everything from inside gdisk + mkfs.ntfs. I would like to know why I am unable to do the same as Windows does from my Linux CLI. I will answer all questions tomorrow morning as well as comments. I am now running:

pv --progress --timer --eta --rate --average-rate --bytes -s 1953314876k < /dev/zero > /media/vlastimil/LACIE_2TB/zero

with an expected speed of 123 MiB/s.
A physical sector size of 4096 means that the data on the drive is laid out in units of 4096 bytes, i.e. the disk is comprised of sequential "compartments" of 4096 bytes that have to be written atomically. For compatibility reasons, most disks with 4096-byte sectors present themselves as having traditional 512-byte "logical sectors", which means the addressing unit is a 512-byte block. The practical implication of this emulation of a 512-sector drive by an underlying disk with 4096-byte sectors is a potential performance issue. When writing a single 512-byte sector to a 512e disk, the drive must read the whole 4096-byte sector containing the 512-byte sector, modify the sector in RAM (on the disk controller) by replacing the 512-byte sector with the new contents, and finally write the whole 4096-byte sector back to the disk. Things get worse if you are reading or writing a couple of consecutive 512-byte sectors that happen to cross a 4096-byte sector boundary. File systems usually lay out their data structures well, i.e. they are aligned to multiples of at least 4096 bytes, so the bigger sector size normally does not present a problem. This all breaks down, however, if the partition containing the file system itself is not aligned properly. In the case of a 512e disk, the partitions should be aligned so that the first 512-byte logical sector number is a multiple of eight. (That is why the start sector Windows chose is fine as well: 2048 is a multiple of eight, following the common 1 MiB alignment convention, just as your 4096 was.)
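A quick way to check the alignment rule from the last paragraph on a live system (a sketch; the sysfs path assumes the drive still shows up as sdc):

start=$(cat /sys/block/sdc/sdc1/start)   # first logical (512-byte) sector of the partition
if [ $(( start % 8 )) -eq 0 ]; then
    echo "sdc1 starts on a 4096-byte physical boundary"
else
    echo "sdc1 is misaligned"
fi

parted offers a similar check built in: parted /dev/sdc align-check optimal 1.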
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/497063", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
497,094
I installed time but when I use it, I am getting the portable format, not the default format. When installed I think it said GNU time 1.7 or 1.72. Commands like

time --help
time --version

fail with the error "command not found". The TIME environment variable is unset. Why is time behaving like this?
While Jesse_b is correct about how shells generally look up commands, there's an easier fix:

\time --version

Bash, ksh, zsh, and I believe a few other common shells will treat a leading backslash on a command with no path as 'skip to looking into the PATH for this thing.' Also, knowing what the time builtin is, we could also get around this by running

time time --version

After all, the reason for the command not found error rather than a no such option error is because the shell builtin just runs the command that follows and checks how long it took to run when it finishes... which is the same thing that /bin/time does. If you're expecting time to take arguments... are you wanting to find out what time it is? Because that's the date command.
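A quick way to see the name collision for yourself (bash; the second line of output will vary with where the package installed the binary):

$ type -a time
time is a shell keyword
time is /usr/bin/time

command time --version should work as well, since command lookup bypasses shell keywords such as time.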
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/497094", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47542/" ] }
497,141
In the question How to append multiple lines to a file the OP was seeking a way to append multiple lines to a file within the shell. The solution:

cat <<EOT >> test.txt
line 1
line 2
EOT

I want to prepend lines and my attempt looks like:

echo 3 >test.text
cat test.text <<EOT >> test.text
1
2
EOT

But this results in an error:

cat: test.text: input file is output file

EDIT: for clarification, I am following a long server setup guide with instructions to manually edit configuration files. Editing at times involves prepending blocks of text to a file. In automating some of the steps, I want to retain the verbosity of the command by copying the text from the guide as-is and putting it into a bash one-liner. For this reason the multi-line text input using EOT is preferred. EDIT: other answers using sed require backslashes to be appended to the end of each line, but I want to enter multiple lines without modification. Is there a way to prepend multiple lines to a file in a similar fashion as above (ideally without a temporary file and installing moreutils)?
If you need to read in the output of a command, you could use ed as in the linked question, with this variation:

ed -s test.txt <<< $'0r !echo stuff\nw\nq'

This reads the output of the command echo stuff into test.txt after line zero. To insert multi-line text before the 1st line via here-doc you'd run:

ed -s test.txt <<EOT
1i
add some line
and some more
to the beginning
.
w
q
EOT

The dot signals the end of input mode, which means the last solution assumes your text doesn't contain lines consisting of single dots.
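To see it work end to end (a small check, reusing the file name from the question):

printf '3\n' > test.text
ed -s test.text <<EOT
1i
1
2
.
w
q
EOT
cat test.text    # prints 1, 2, 3 on separate lines

Since the here-doc body is taken verbatim, the lines from your setup guide can be pasted in unmodified: no trailing backslashes needed, no temporary file, no moreutils.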
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/497141", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177246/" ] }
497,146
I would like to register a URL scheme (or protocol) handler for my own custom URL protocol, so that clicking on a link with this custom protocol will execute a command on that URL. Which steps do I need to take to add this handler? Example: I want to open URLs like ddg://query%20terms in a new DuckDuckGo browser search. If this protocol already exists, I assume that the steps to override a handler don't differ much from the steps to create a new one. Yes, technically, this is just a URL scheme, not a protocol.
To register a new URL scheme handler with XDG, first create a Desktop Entry which specifies the x-scheme-handler/... MIME type:

[Desktop Entry]
Type=Application
Name=DDG Scheme Handler
Exec=open-ddg.sh %u
StartupNotify=false
MimeType=x-scheme-handler/ddg;

Note that %u passes the URL (e.g. ddg://query%20terms) as a single parameter, according to the Desktop Entry Specification. Once you have created this Desktop Entry and installed it (i.e. put it in the local or system applications directory for XDG, like ~/.local/share/applications/ or /usr/share/applications/), then you must register the application with the MIME type (assuming you had named your Desktop Entry ddg-opener.desktop):

xdg-mime default ddg-opener.desktop x-scheme-handler/ddg

A reference implementation of the open-ddg.sh handler:

#!/usr/bin/env bash
# bash and not just sh because we are using some bash-specific syntax
if [[ "$1" == "ddg:"* ]]; then
    ref=${1#ddg://}
    #ref=$(python -c "import sys, urllib as ul; print ul.unquote_plus(sys.argv[1])" "$ref") # If you want decoding
    xdg-open "https://duckduckgo.com/?q=$ref"
else
    xdg-open "$1" # Just open with the default handler
fi
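Once registered, the handler can be tested straight from a terminal (on some systems the desktop database cache may need a refresh first for the new entry to be picked up):

update-desktop-database ~/.local/share/applications/   # if available
xdg-open "ddg://query%20terms"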
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/497146", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13308/" ] }
497,185
Is there a way in linux to look through a directory tree for only those directories that are the ends of branches (I will call them leaves here), i.e., directories with no subdirectories in them? I looked at this question but it was never properly answered. So if I have a directory tree

root/
├── branch1
│   ├── branch11
│   │   └── branch111 *
│   └── branch12 *
└── branch2
    ├── branch21 *
    └── branch22
        └── branch221 *

can I find only the directories that are the end of their branch (the ones marked with *), so looking only at the number of directories, not at the number of files? In my real case I am looking for the ones with files, but they're a subset of the 'leaves' that I want to find in this example.
To find only those leaf directories that contain non-directory files, you can combine an answer of the referenced question https://unix.stackexchange.com/a/203991/330217 or similar questions https://stackoverflow.com/a/4269862/10622916 or https://serverfault.com/a/530328 with find's ! -empty:

find rootdir -type d -links 2 ! -empty

Checking the hard links with -links 2 should work for traditional UNIX file systems. The -empty condition is not part of the POSIX standard, but should be available on most Linux systems. According to KamilMaciorowski's comment, the traditional link count semantics for directories is not valid for Btrfs. This is confirmed in https://linux-btrfs.vger.kernel.narkive.com/oAoDX89D/btrfs-st-nlink-for-directories which also mentions Mac OS HFS+ as an exception from the traditional behavior. For these file systems a different method is necessary to check for leaf directories.
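For file systems like Btrfs, where the -links 2 trick does not hold, a slower but filesystem-agnostic sketch is to test each candidate directory for subdirectories explicitly:

find rootdir -type d ! -empty \
    ! -exec sh -c 'find "$1" -mindepth 1 -maxdepth 1 -type d -print -quit | grep -q .' sh {} \; \
    -print

The inner find succeeds when the directory has at least one subdirectory, so the negated -exec keeps only leaf directories. -quit is a GNU find extension; drop it (at some cost in speed) on other implementations.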
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/497185", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86288/" ] }
497,201
I was wondering if a GUI login attempt with a non-existing username will be written to any log files? (I'm using Mint with Cinnamon, and Kali with GNOME 3.) I accidentally put my password in as the username and hit enter, and now I want to know if my password is stored anywhere in cleartext, like it would be for an incorrect login via ssh in /var/log/auth.log
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/497201", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/333852/" ] }
497,204
When I try to start X with i3 by typing startx /usr/bin/i3 I get the following error message:

(EE) xf86OpenConsole: Cannot open virtual console 7 (Permission denied)

Launching startx as root or after chowning /dev/tty7 solves it (except for input not working, not even switching terminals), but I don't think that is the proper way to do it. Sway and Weston work flawlessly; why won't X do so?
startx works fine on my system (Fedora). However, one outdated page on the Gentoo Wiki mentions a different way to run it:

startx /usr/bin/i3 -- vt1

1 is the number of the "terminal" you are logged in on. If you are not on terminal 1, then adjust the command accordingly. Apparently this fixed the Permission denied error, and allowed X to start. I don't understand why startx would need this to be passed explicitly. I don't understand how Gentoo could be doing anything differently to Fedora here. Oh well. At least it should stop startx / Xorg from trying to open tty7. That was definitely not the modern way to do things, and it was not working for you.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/497204", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272893/" ] }
497,207
I've been using the following to launch programs from terminals:

program_name >/dev/null 2>&1 &

But recently I came across this webpage where the following method was recommended:

program_name </dev/null &>/dev/null &

Now, I know what the first one means. It means to point stdout to /dev/null and then stderr to stdout, i.e. both stderr and stdout are now pointed to null. But what does the second one mean? And which one is more suitable for launching programs headlessly from a terminal?
There are, by default, three "standard" files open when you run a program: standard input (stdin), standard output (stdout), and standard error (stderr). In Unix, those are associated with "file descriptors" (stdin = 0, stdout = 1, stderr = 2). By default, all three are associated with the device that controls your terminal (the interface through which you see content on screen and type input into the program). The shell gives you the ability to "redirect" file descriptors. The > operator redirects output; the < redirects input. For example:

program_name > /dev/null

which is equivalent to:

program_name 1> /dev/null

redirects the output of file descriptor 1 (stdout) to /dev/null. Similarly,

program_name 2> /dev/null

redirects the output of file descriptor 2 (stderr) to /dev/null. You might want to redirect both stdout and stderr to a single file, so you might think you'd do:

program_name > /dev/null 2> /dev/null

But that doesn't handle interleaving writes to the file descriptors (the details behind this are a different question). To address this, you can do:

program_name > /dev/null 2>&1

which says "redirect writes to file descriptor 1 to /dev/null and redirect writes to file descriptor 2 to the same place as the writes to file descriptor 1 are going". This handles interleaving writes to the file descriptors. That option is so common that some shells include a short-hand that is shorter and functionally equivalent:

program_name &> /dev/null

Finally, you can redirect input in the same way that you redirect output:

program_name < /dev/null

will redirect file descriptor 0 (stdin) from /dev/null, so if the program tries to read input, it'll get EOF. Putting that all together:

program_name </dev/null &>/dev/null &

says (1) run program_name, (2) redirect standard input from /dev/null (</dev/null), (3) redirect both file descriptors 1 and 2 (stdout and stderr) to /dev/null (&>/dev/null), and (4) run the program in the background (&).
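A quick way to convince yourself of the individual pieces (bash; each line uses a compound command that writes one word to each descriptor):

{ echo out; echo err >&2; } > /dev/null      # prints only: err
{ echo out; echo err >&2; } 2> /dev/null     # prints only: out
{ echo out; echo err >&2; } &> /dev/null     # prints nothing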
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/497207", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/282376/" ] }
497,208
I have an SQL command which fetches you an output like

70
138
200

This is stored in a csv and I have added headers to it, but it appears like

A,B,C
70
138
200

I wanted it to look like

A 70
B 138
C 200

or

A70
B138
C200

or

A,B,C
70,138,200
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/497208", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/333863/" ] }
497,212
I have a bash script which loops through a list of filenames in a text file, deduces a date from the filename and then calls a library script (./dropbox_uploader.sh delete [filename]) to conditionally delete a remote file in my Dropbox account using Dropbox's API. The condition is simply whether the file is older than [n] days. n is passed into the script in position $2. $1 is a text file containing the entries. Two local variables, $go and $stay, count the instances of where a file is eligible for deletion, or not. At the end of my script, I fire off an email via the mail program using the values of the 2 variables. My problem is that if I run this from the command line everything works just fine and the email reads: "The dropbox files were purged: 554 files deleted and 310 were kept." However, if the script is run via cron using my user's crontab then it seems that the local variables are both zero / null, and the email is: "The dropbox files were purged: 0 files deleted and 0 were kept." I can run the script again manually after the cron run and the variables are non-zero. I think I'm missing something fundamental about the way cron runs for the user and would be grateful for suggestions. My script:

#!/bin/bash
threshold=$(date -d "$2 days ago" +%s)
go=0
stay=0
rm /home/pi/Dropbox-Uploader/camDeleteLog.txt
echo "deleting older than $2 days ago ..."
while IFS='' read -r line || [[ -n "$line" ]]; do
    line=$(echo $line | xargs) # trim spaces from the filename, just i$
    y=${line:0:4}
    m=${line:5:2}
    d=${line:8:2}
    #echo "Filedate is $y-$m-$d"
    seconds=$(date -d "$y-$m-$d" +%s)
    if ((seconds < threshold))
    then
        echo "Deleting file $go, $line"
        ((go++))
        ###echo "./dropbox_uploader.sh delete \"$line\""
        ./dropbox_uploader.sh delete "$line" >> /home/pi/Dropbox-Upload$
    else
        echo "$line is too new to delete"
        ((stay++))
    fi
done < "$1"
echo "The dropbox files were purged: $go files deleted and $stay were kept." | $

My crontab for user pi:

0 */8 * * * /home/pi/Dropbox-Uploader/moveVideostoDropbox.py # JOB_ID_1
0 0,6,12,18 * * * Dropbox-Uploader/processDBFiles.sh # JOB_ID_2
0 1,7,13,19 * * * Dropbox-Uploader/processByDate.sh camfiles.txt 2 # JOB_ID_3

Job 3 is the one that does the date processing and deleting. Jobs 2 and 3 were originally run together, but I suspected (wrongly) that this was the root of my problem, so split the two and ran them an hour apart as an experiment - Job 2 will never take more than 1 hour to run.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/497212", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/327451/" ] }
497,397
How to do sort -V in alpine linux?

sort: unrecognized option: V
BusyBox v1.28.4 (2018-12-06 15:13:21 UTC) multi-call binary.

Usage: sort [-nrugMcszbdfiokt] [-o FILE] [-k start[.offset][opts][,end[.offset][opts]] [-t CHAR] [FILE]...

Sort lines of text
        -o FILE         Output to FILE
        -c              Check whether input is sorted
        -b              Ignore leading blanks
        -f              Ignore case
        -i              Ignore unprintable characters
        -d              Dictionary order (blank or alphanumeric only)
        -g              General numerical sort
        -M              Sort month
        -n              Sort numbers
        -t CHAR         Field separator
        -k N[,M]        Sort by Nth field
        -r              Reverse sort order
        -s              Stable (don't sort ties alphabetically)
        -u              Suppress duplicate lines
        -z              Lines are terminated by NUL, not newline
With Alpine, you can add GNU sort via the coreutils package:

apk add coreutils
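A quick check that version sort then behaves as expected (assuming the GNU binaries take precedence over the BusyBox applets once coreutils is installed, which is the normal Alpine behaviour):

printf '1.10\n1.2\n1.9\n' | sort -V
# 1.2
# 1.9
# 1.10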
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/497397", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223456/" ] }
497,413
I'm using the sed command and I want to keep colored output from the previous command. The output of ls is colored, but the output of sed is not. I'm using OSX.

ls -la | sed -En '/Desktop/q;p'
On macOS, the ls is not GNU ls and does not accept the --color=always option that Linux users might expect for this functionality. In the macOS version of ls, the colors are controlled by two variables: $CLICOLOR and $CLICOLOR_FORCE. If the former is defined, the terminal specified by $TERM supports color, and the output is to a terminal, then this output will be colored, much like GNU's --color=auto option. If the latter variable is defined as well, the final condition is dropped, behaving like GNU's --color=always. So to have color passed through to sed, you would need something like the following:

CLICOLOR_FORCE=1 ls -la | sed -En '/Desktop/q;p'
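If you want colour on by default for interactive use as well, the first variable can simply be exported once from your shell startup file (e.g. ~/.bash_profile or ~/.zshrc on macOS), while keeping the second for pipelines like the one above:

export CLICOLOR=1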
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/497413", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/334041/" ] }
497,426
I am trying to set up a multi-boot on my machine with Ubuntu (my original OS, on /dev/sda2), Kali Linux and Debian. However, I got stuck halfway through my installation of Debian, and since Ubuntu took a lot of time to boot, I followed the steps of this post to make the boot process faster. But when I rebooted my machine, Ubuntu would only boot in emergency mode... The only thing I was able to notice was that in my /etc/fstab the line associated with my Ubuntu partition was gone. I would gladly post the contents of my fstab here but I don't know how to copy it from the emergency mode to here (I am using my Kali Linux on /dev/sda5 to write this post). Maybe there is a way to restore my fstab, to begin with?

Edit 1: Here is the content of my /etc/fstab:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# /boot/efi was on /dev/sda1 during installation
UUID=95B2-5AED  /boot/efi       vfat    umask=0077      0       1
# /home was on /dev/sda3 during installation
UUID=69d6623e-0bcc-4cef-8b25-e46c98210d44 /home           ext4    defaults        0       2
# swap was on /dev/sda4 during installation
UUID=a8ee0943-0cd9-4dba-b018-ca00fc450e5d none            swap    sw              0       0

And here is the result of blkid | grep UUID:

/dev/sda1: UUID="95B2-5AED" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="f3ead83c-a7ca-453b-8317-a854080d37fc"
/dev/sda2: UUID="7d4d2f18-146c-4d56-b5f3-0dc605eeb9e0" TYPE="ext4" PARTLABEL="Ubuntu" PARTUUID="94d6c9bd-30da-4abf-a784-41e20992fdd4"
/dev/sda3: UUID="69d6623e-0bcc-4cef-8b25-e46c98210d44" TYPE="ext4" PARTLABEL="Home" PARTUUID="dd1299b6-adb1-45c0-99a6-94e922f4964b"
/dev/sda4: UUID="a8ee0943-0cd9-4dba-b018-ca00fc450e5d" TYPE="swap" PARTUUID="228fa2d0-8b0c-4562-bb5a-ebb73bb00f04"
/dev/sda5: UUID="489b70a2-db82-4b0c-bebd-cf19a403ade1" TYPE="ext4" PARTUUID="48ba997c-e595-45c1-93c0-b97e4f7ffbf5"
/dev/sda6: UUID="9068da24-6073-45dc-a18e-29634daa3910" TYPE="ext4" PARTUUID="9033f352-349f-4cee-94bf-c686f462adea"

Edit 2: I ran the e2fsck command on my Ubuntu, home and Debian partitions, and now instead of booting into emergency mode, Ubuntu starts to launch normally, but freezes after some time loading.
Since your Kali installation is working, you can use it to access your Ubuntu installation in a chroot. To do this, run the following commands as root:

mkdir /ubunturoot
mount /dev/sda2 /ubunturoot
mount -o bind /dev /ubunturoot/dev
mount -o bind /dev/pts /ubunturoot/dev/pts
mount -o bind /proc /ubunturoot/proc
mount -o bind /sys /ubunturoot/sys
chroot /ubunturoot

Now your command prompt window (note: this particular shell only!) should be accessing your Ubuntu root filesystem just as if you had logged onto Ubuntu and become root in Ubuntu. Take a look and ensure everything is as it should be. If your Ubuntu /etc/fstab is in error, now you can edit it. Once that is fixed, first make sure the /boot/efi filesystem is mounted in your Ubuntu chroot:

mount /boot/efi

Then run ls /lib/modules to see one or more directories named with kernel version numbers. Use update-initramfs -u -k <kernel version number> to update the initramfs file of the respective Ubuntu kernel. (Since you are now really running Kali's kernel, you must explicitly specify the version number of Ubuntu's kernel: trying to update the default kernel would result in an error message since Ubuntu's and Kali's kernel versions are unlikely to match.) Then check /etc/default/grub for boot options mentioning filesystem UUIDs or other things that may have changed on your OS installations. Fix as necessary, then run update-grub to update the configuration file of Ubuntu's GRUB bootloader. Once you've fixed all the problems you've found, undo the temporary chroot environment manually:

umount /boot/efi
exit  # out of the chroot environment, back to Kali native view of the filesystem
umount /ubunturoot/sys
umount /ubunturoot/proc
umount /ubunturoot/dev/pts
umount /ubunturoot/dev
umount /ubunturoot
rmdir /ubunturoot
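For reference, the missing root line in the Ubuntu /etc/fstab would normally look something like this (the UUID comes from the blkid output in the question, /dev/sda2; the mount options are an assumption based on Ubuntu's usual installer defaults, so adjust as needed):

# / was on /dev/sda2 during installation
UUID=7d4d2f18-146c-4d56-b5f3-0dc605eeb9e0 /               ext4    errors=remount-ro 0       1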
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/497426", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/307418/" ] }