418,784
What are the min and max values of the following exit codes in Linux:
- The exit code returned from a binary executable (for example: a C program).
- The exit code returned from a bash script (when calling exit).
- The exit code returned from a function (when calling return).
I think this is between 0 and 255.
The number passed to the _exit() / exit_group() system call (sometimes referred as the exit code to avoid the ambiguity with exit status which is also referring to an encoding of either the exit code or signal number and additional info depending on whether the process was killed or exited normally) is of type int , so on Unix-like systems like Linux, typically a 32bit integer with values from -2147483648 (-2 31 ) to 2147483647 (2 31 -1). However, on all systems, when the parent process (or the child subreaper or init if the parent died) uses the wait() , waitpid() , wait3() , wait4() system calls to retrieve it, only the lower 8 bits of it are available (values 0 to 255 (2 8 -1)). When using the waitid() API (or a signal handler on SIGCHLD), on most systems (and as POSIX now more clearly requires in the 2016 edition of the standard (see _exit() specification )), the full number is available (in the si_status field of the returned structure). That is not the case on Linux yet though which also truncates the number to 8 bits with the waitid() API, though that's likely to change in the future. Generally, you'd want to only use values 0 (generally meaning success) to 125 only, as many shells use values above 128 in their $? representation of the exit status to encode the signal number of a process being killed and 126 and 127 for special conditions. You may want to use 126 to 255 on exit() to mean the same thing as they do for the shell's $? (like when a script does ret=$?; ...; exit "$ret" ). Using values outside 0 -> 255 is generally not useful. You'd generally only do that if you know the parent will use the waitid() API on systems that don't truncate and you happen to have a need for the 32bit range of values. Note that if you do a exit(2048) for instance, that will be seen as success by parents using the traditional wait*() APIs. More info at: Default exit code when process is terminated? That Q&A should hopefully answer most of your other questions and clarify what is meant by exit status . I'll add a few more things: A process cannot terminate unless it's killed or calls the _exit() / exit_group() system calls. When you return from main() in C , the libc calls that system call with the return value. Most languages have a exit() function that wraps that system call, and the value they take, if any is generally passed as is to the system call. (note that those generally do more things like the clean-up done by C's exit() function that flushes the stdio buffers, runs the atexit() hooks...) That's the case of at least: $ strace -e exit_group awk 'BEGIN{exit(1234)}'exit_group(1234) = ?$ strace -e exit_group mawk 'BEGIN{exit(1234)}'exit_group(1234) = ?$ strace -e exit_group busybox awk 'BEGIN{exit(1234)}'exit_group(1234) = ?$ echo | strace -e exit_group sed 'Q1234'exit_group(1234) = ?$ strace -e exit_group perl -e 'exit(1234)'exit_group(1234) = ?$ strace -e exit_group python -c 'exit(1234)'exit_group(1234) = ?$ strace -e exit_group expect -c 'exit 1234'exit_group(1234) = ?$ strace -e exit_group php -r 'exit(1234);'exit_group(1234) = ?$ strace -e exit_group zsh -c 'exit 1234'exit_group(1234) You occasionaly see some that complain when you use a value outside of 0-255: $ echo 'm4exit(1234)' | strace -e exit_group m4m4:stdin:1: exit status out of range: `1234'exit_group(1) = ? 
Some shells complain when you use a negative value:

$ strace -e exit_group dash -c 'exit -1234'
dash: 1: exit: Illegal number: -1234
exit_group(2)                           = ?
$ strace -e exit_group yash -c 'exit -- -1234'
exit: `-1234' is not a valid integer
exit_group(2)                           = ?

POSIX leaves the behaviour undefined if the value passed to the exit special builtin is outside 0->255. Some shells show some unexpected behaviours if you do: bash (and mksh but not pdksh on which it is based) takes it upon itself to truncate the value to 8 bits:

$ strace -e exit_group bash -c 'exit 1234'
exit_group(210)                         = ?

So in those shells, if you do want to exit with a value outside of 0-255, you have to do something like:

exec zsh -c 'exit -- -12345'
exec perl -e 'exit(-12345)'

That is, execute another command in the same process that can call the system call with the value you want. As mentioned at that other Q&A, ksh93 has the weirdest behaviour for exit values from 257 to 256+max_signal_number where instead of calling exit_group(), it kills itself with the corresponding signal¹.

$ ksh -c 'exit "$((256 + $(kill -l STOP)))"'
zsh: suspended (signal)  ksh -c 'exit "$((256 + $(kill -l STOP)))"'

and otherwise truncates the number like bash / mksh.

¹ That's likely to change in the next version though. Now that the development of ksh93 has been taken over as a community effort outside of AT&T, that behaviour, even though encouraged somehow by POSIX, is being reverted.
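To see the 8-bit truncation from the parent's side, here is a quick illustration (not from the original answer; any Bourne-like shell on Linux should behave the same, since it retrieves the status with the traditional wait*() APIs):

$ perl -e 'exit(1234)'; echo "$?"
210
$ perl -e 'exit(2048)'; echo "$?"
0

1234 mod 256 is 210, and 2048 is a multiple of 256, so it is reported as 0 — i.e. success — which is exactly the pitfall mentioned above.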
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/418784", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271801/" ] }
418,796
I want to run bunch of commands simultaneously and when all of them finished, run another bunch of commands.some thing like thiscommand1 & command2echo "command 1 and 2 finished"command 3 & command 4
command1 &
command2 &
wait
echo 'command1 and command2 have finished'
command3 &
command4 &
wait
echo 'command3 and command4 have finished'

The call to wait will pause the script until all backgrounded tasks have finished executing. Alternatively (just "for your information"), depending on whether you want command 1 and 2 to run concurrently or not (equivalently for command 3 and 4):

( command1; command2 ) &
echo 'command1 and command2 are running'
wait
echo 'command1 and command2 have finished'

In the above case, command1 and command2 will run in the background, but not concurrently with each other. Doing

command1 & command2
wait

is equivalent to

command1 &
command2
wait

... which will work, but command2 will not be running in the background and wait will not be called until command2 has finished executing.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/418796", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271812/" ] }
418,809
In my bash script I try to use a number as an input variable for a for loop. I run the script as ./script.sh InputFolder/ Number_of_iterations. The script should work inside the given folder and run a for loop as many times as the Number_of_iterations variable is set to. But somehow I can't set the variable as an integer. This is an example of my loop in the script:

for i in {1..$(($2))}
do
  echo "Welcome $i times"
done

I have tried already the double brackets $(($...)) option as well as the double quotations "..." , but the output I keep getting is

Welcome {1..5} times

which makes me think this is not an integer. I would appreciate any help in reading the input parameter as an integer into the script.
You can do this two ways:

With ksh93-compatible shells (ksh93, zsh, bash):

for (( i=1; i<=$2; i++ ))
do
  echo "Welcome $i times"
done

Here we set i to 1 and loop, incrementing it as long as it is less than or equal to $2, outputting:

Welcome 1 times
Welcome 2 times

With POSIX shells on GNU systems:

for i in $(seq "$2")
do
  echo "Welcome $i times"
done

The seq command (GNU specific) will output numbers from 1 to the number specified in $2 on separate lines. Assuming you've not modified $IFS (which by default contains the line delimiter character), the command substitution will split that into as many elements for for to loop on.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/418809", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271406/" ] }
418,820
I have a Debian stretch where I'm running transmission-daemon as a service. I keep my seeded files on an external USB hard disk drive mounted on /mnt/external-disk . This disk has an ext4 filesystem, and I mapped it in /etc/fstab by uuid. The problem is: When the service transmission-daemon starts at boot it doesn't check if the external filesystem is already mounted so it doesn't find the files on it, and I get a data error and the torrent files are not seeded, but the service starts. To resolve this problem I checked the systemd documentation, and I found what was missing: The line RequiresMountsFor= in the [Unit] section of the transmission-daemon.service file is located in the tree below /lib/systemd/ .After I added that line with the path of the mountpoint /mnt/external-disk the problem disappeared and the service was working fine.If I rebooted the machine, the service was working, and the files were seeded. This worked until I had a apt-get dist-upgrade where the package transmission-daemon was involved and after it stopped.So I checked the transmission-daemon.service , and I found the modification I made was missing. I added the line RequiresMountsFor= another time with the proper path, and the problem was fixed again. My question is: How can I make this modification persistent?
You should override the unit with a unit in /etc . The easiest way to do this is to use systemctl edit : sudo systemctl edit transmission-daemon will open an editor and allow you to create a override snippet. An override snippet ensures that future changes to the package’s unit (in /lib ) are taken into account: the reference will be the package’s unit, with your overrides applied on top. All you need to use this in your case is a .conf file in /etc/systemd/system/transmission-daemon.service.d/ , containing only the section and RequiresMountsFor line. systemctl edit will do this for you, creating an override.conf file in the appropriate location. Alternatively, you can copy the full /lib/systemd/system/transmission-daemon.service unit to /etc/systemd/system and edit that. Again, systemctl edit can take care of this for you, with the --full option. Look for “Example 2. Overriding vendor settings” in the systemd.unit documentation for details.
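For this particular case the drop-in created by systemctl edit can be as small as this (a sketch; the mount point is the one from the question):

# /etc/systemd/system/transmission-daemon.service.d/override.conf
[Unit]
RequiresMountsFor=/mnt/external-disk

After saving, systemd reloads its configuration (systemctl edit does this for you), and because the snippet lives under /etc it survives future upgrades of the transmission-daemon package.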
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/418820", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172889/" ] }
418,850
I encountered a problem with Nautilus for which I did not find any solution other than downloading the source code, making some changes and compiling it on my own. So now I have two versions of nautilus, the official version from the repositories and mine with a few changes. I would like to keep both. What would be a good way to tell applications to use my own compiled version of Nautilus when starting Nautilus from within the application? (e.g. opening the Downloads folder with firefox) I figured out that firefox calls /usr/bin/nautilus so I could just replace this with a symlink to my own program. However, I believe this symlink will be overwritten as soon as I install an update for Nautilus. Is there anything else I could do?
I would fix the packaged version of Nautilus, which might seem daunting at first but is easy enough — although it doesn’t survive package upgrades so it does require some discipline. (See Wouter’s answer for details.) The simplest approach in your situation is to add a diversion: sudo dpkg-divert --divert /usr/bin/nautilus.original --rename /usr/bin/nautilus This will instruct dpkg to rename /usr/bin/nautilus to /usr/bin/nautilus.original whenever a package tries to install it. Then you can add your own symlink, and it will remain untouched even when the Nautilus package is upgraded. To remove it, run sudo dpkg-divert --rename --remove /usr/bin/nautilus You can apply the same technique for any other file you need to replace in a similar fashion, apart from some configuration files which aren’t handled correctly when diverted.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/418850", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
418,857
I really love the extension "auto-move windows", but it is rather limited: the "Add Rule" Button shows a list which only contains certain applications -- those that installed through apt, I suppose. (But then, why whould System Monitor be missing then?) What I want is to add any application I made a .desktop file for to that list. Is this possible somehow?
I would fix the packaged version of Nautilus, which might seem daunting at first but is easy enough — although it doesn’t survive package upgrades so it does require some discipline. (See Wouter’s answer for details.) The simplest approach in your situation is to add a diversion: sudo dpkg-divert --divert /usr/bin/nautilus.original --rename /usr/bin/nautilus This will instruct dpkg to rename /usr/bin/nautilus to /usr/bin/nautilus.original whenever a package tries to install it. Then you can add your own symlink, and it will remain untouched even when the Nautilus package is upgraded. To remove it, run sudo dpkg-divert --rename --remove /usr/bin/nautilus You can apply the same technique for any other file you need to replace in a similar fashion, apart from some configuration files which aren’t handled correctly when diverted.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/418857", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233767/" ] }
418,901
Applications like lynx browser, htop etc and many others accept position dependent mouse clicks in bash over ssh shell. I know that ssh is a command line interface. Then how does it accepts mouse clicks?
IMHO, the simplest way to write such a TUI application is to use ncurses . "New Curses" is a library that abstracts the design of the TUI from the details of the underlying device. All the software you cited use ncurses to render their interface. When you click on a terminal emulator (e.g. xterm, gnome-term, etc), the terminal emulator translates the click in a sequence of ANSI Escape codes . These sequences are read and translated in events by the ncurses library. You can find an example on Stack Overflow: Mouse movement events in NCurses
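You can watch those escape sequences yourself without writing any code — a quick sketch, assuming an xterm-compatible terminal emulator (the mechanism is identical over ssh, since only bytes travel on the connection):

printf '\033[?1000h'   # ask the terminal to report mouse button presses
cat -v                 # click around: each press shows up as ^[[M followed by three bytes encoding button, column and row; Ctrl-D to quit
printf '\033[?1000l'   # switch mouse reporting back off

ncurses does the same enabling/decoding for the application, just through its mousemask()/getmouse() API instead of by hand.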
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/418901", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/260657/" ] }
419,017
I'm dabbling in traps in Bash again. I've just noticed the RETURN trap doesn't fire up for functions.

$ trap 'echo ok' RETURN
$ f () { echo ko; }
$ f
ko
$ . x
ok
$ cat x
$

As you can see it goes off as expected for sourcing the empty file x. Bash's man has it so: If a sigspec is RETURN, the command arg is executed each time a shell function or a script executed with the . or source builtins finishes executing. What am I missing then? I have GNU bash, version 4.4.12(1)-release (x86_64-pc-linux-gnu).
As I understand this, there's an exception to the doc snippet in my question. The snippet was: If a sigspec is RETURN, the command arg is executed each time a shell function or a script executed with the . or source builtins finishes executing. The exception is described here: All other aspects of the shell execution environment are identical between a function and its caller with these exceptions: the DEBUG and RETURN traps (see the description of the trap builtin under SHELL BUILTIN COMMANDS below) are not inherited unless the function has been given the trace attribute (see the description of the declare builtin below) or the -o functrace shell option has been enabled with the set builtin (in which case all functions inherit the DEBUG and RETURN traps), and the ERR trap is not inherited unless the -o errtrace shell option has been enabled.

As for functrace , it can be turned on with the typeset 's -t :

-t    Give each name the trace attribute. Traced functions inherit the DEBUG and RETURN traps from the calling shell. The trace attribute has no special meaning for variables.

Also set -o functrace does the trick. Here's an illustration.

$ trap 'echo ko' RETURN
$ f () { echo ok; }
$ cat y
f
$ . y
ok
ko
$ set -o functrace
$ . y
ok
ko
ko

As for declare , it's the -t option again:

-t    Give each name the trace attribute. Traced functions inherit the DEBUG and RETURN traps from the calling shell. The trace attribute has no special meaning for variables.

Also extdebug enables function tracing, as in ikkachu's answer .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419017", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
419,026
Given this loop:

while sleep 10s ; do
    something-that-runs-forever
done

When I press Ctrl+C the whole while-loop gets interrupted. What I want to do is to interrupt the "something"-process, let 10 seconds pass, and then restart "something". How do I make Ctrl+C only affect "something", and not the while-loop? EDIT: "interrupt" as in SIGINT. Kill. Abort. Terminate. Not "interrupt" as in "pause".
It should work if you just trap SIGINT to something. Like : ( true ).

#!/bin/sh
trap ":" INT
while sleep 10s ; do
    something-that-runs-forever
done

Interrupting the something... doesn't make the shell exit now, since it ignores the signal. However, if you ^C the sleep process, it will exit with a failure, and the loop stops due to that. Move the sleep to the inside of the loop or add something like || true to prevent that. Note that if you use trap "" INT to ignore the signal completely (instead of assigning a command to it), it's also ignored in the child process, so then you can't interrupt something... either. This is explicitly mentioned in at least Bash's manual: If arg is the null string, then the signal specified by each sigspec is ignored by the shell and commands it invokes. [...] Signals ignored upon entry to the shell cannot be trapped or reset.
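Following that suggestion, one arrangement that keeps the loop alive even when the ^C lands on the sleep (a sketch, using the placeholder command from the question):

#!/bin/sh
trap ":" INT
while true; do
    something-that-runs-forever   # ^C terminates this; the shell itself ignores SIGINT
    sleep 10s || true             # a ^C during the sleep no longer ends the loop
done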
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/419026", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11865/" ] }
419,028
I'm looking for some interfaces to monitor the use of the CPU and the temperature, i have already installed lm-sensors for temp and htop for CPU but i want something that shows them always in real-time in the bar at the top of the screen (the one which says time, battery% ecc.. sorry i don't know how it is called) so that i shouldn't always run the mentioned command from the terminal. I have Ubuntu 16.04.
The software is called psensor. Linux: https://wpitchoune.net/psensor/ Specific for Ubuntu: https://wpitchoune.net/psensor/ubuntu.html There is an option to display the info on the toolbar, as well as in a stand-alone window.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419028", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/268372/" ] }
419,036
I have a file abc.txt in that file having 29 records In these records we need to remove some of the lines which are having the url based on the URL http://163.172.47.140:55555/ For example: - 163.12372.473.1440 35010 2018-01-18 01:03:13 +0000 POST http://163.172.47.140:55555/?oip=163.172.47.140 HTTP/1.1 200 147 -est_useragent - - test_refe test_useragent - - test_referer text/json 323
The software is called psensor. Linux: https://wpitchoune.net/psensor/ Specific for Ubuntu: https://wpitchoune.net/psensor/ubuntu.html There is an option to display the info on the toolbar, as well as in a stand-alone window.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419036", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271508/" ] }
419,122
I have changed my Ubuntu super password by recovery mode;after that, I can't run my sudo command in normal user . I have attempted to crack my previous password in Recovery mode; I followed this link to crack my password . $sudo ---In global mode throws me the below error: sudo: /usr/local/bin/sudo must be owned by uid 0 and have the setuid bit set $ ls -l sudo gives: -r-sr-xr-x 1 root root 136808 May 29 2017 sudo /usr/local/bin$ ./sudo ---> I need this /usr/local/bin ./sudo isn't working -- it throws the below error: sudo: ./sudo must be owned by uid 0 and have the setuid bit set /usr/bin$ ./sudo --> working fine usage: sudo -h | -K | -k | -V I need to access my sudo command from the terminal from anywhere.
You shouldn’t have a /usr/local/bin/sudo , that’s what’s breaking things (not the password change). Move it out of the way: /usr/bin/sudo mv /usr/local/bin/sudo{,2} and then tell your shell about it: hash -r That will restore the sudo functionality you’re used to.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419122", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272054/" ] }
419,143
I have a file input.txt which contains multiple filename in the below format. FILENAME_DATE_LINENUMBER , the input.txt contains many such filenames. The filename itself has precisely 5 underscore . FILE_NAME_1.DAT_20180123_4FILE_NAME_2.DAT_20180123_5FILE_NAME_3.DAT_20180123_6FILE_NAME_4.DAT_20180123_7 All files are present in sub directory as input.txt . I want to parse input.txt , iterate through each filename and print FILENAME and the specified line number ( from the FILENAME ) to output.txt I understand that sed or awk will be used , and below command can do the job. awk 'FNR==LINENUMBER {print FILENAME, $0}' *.txt >output.txt But how can i iterate through the file input.txt and find the FILENAME and extract LINENUMBER from FILENAME to output.txt The FILENAME specified in input.txt can in one of the sub directories where input.txt is located. There can be only one file with FILENAME in input.txt inside one of the sub directory ( one level ) form the input.txt location. DIR├── input.txt│ ├── DIR1│ │ ├── FILE_NAME_1.DAT│ ├── DIR2│ │ ├── FILE_NAME_2.DAT│ ├── DIR3│ │ ├── FILE_NAME_3.DAT In output.txt it should be printed as FILENAMELINE( Extracted from FILENAME present in input.txt )
You shouldn’t have a /usr/local/bin/sudo , that’s what’s breaking things (not the password change). Move it out of the way: /usr/bin/sudo mv /usr/local/bin/sudo{,2} and then tell your shell about it: hash -r That will restore the sudo functionality you’re used to.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419143", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/131294/" ] }
419,148
I know that Bash has word splitting but zsh doesn't, and I'm not familiar with others(csh, tcsh, ksh, etc), but I was wondering if it is a part of any standard. In other words, does sh have word splitting, or is it a Bash-only feature? If I wanted to write a portable shell script, would I have to account for word splitting, or is it something nonstandard that is added by other shells?
Implicit word splitting, i.e. word splitting on an unquoted variable expansion ( $foo as opposed to "$foo" ), is something that all POSIX compliant shells do, and more generally all sh shells. They also perform globbing on the result. This is why you need double quotes around variable substitutions . The same goes for command substitutions. POSIX calls these field splitting and pathname expansion . Zsh deviates from the standard sh behavior. It doesn't perform word splitting on unquoted variable substitutions (but it does perform word splitting on unquoted command substitutions), and it doesn't perform globbing on unquoted substitutions at all. (Zsh has those features, of course, but they're explicit: $=foo to do word splitting and $~foo to do globbing.) Zsh is not an sh-compatible shell; it's fairly close, but not compatible, and the reduced implicit splitting is one of the main deviations. Zsh has a compatibility mode (which is entered automatically if the zsh executable is called sh or ksh ) in which it does perform implicit word splitting and globbing like sh, among other things. Bash and ksh are both sh-compatible shells. Bash does have a few incompatibilities with POSIX, but you have to dig a lot deeper to find them. On important issues like implicit splitting, it's compatible. (T)csh is a completely different family of shells. Its syntax is vastly different from sh. It's also pretty much dead, so don't worry about it.
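A small illustration of the difference you have to account for in portable scripts (a sketch; the glob only matters if the current directory contains matching files):

var='one two   *'
printf '<%s>\n' $var     # sh/bash/ksh: field splitting + globbing — several arguments, * expanded
printf '<%s>\n' "$var"   # all sh-like shells: a single argument, value kept verbatim

In zsh (outside its sh emulation mode) the unquoted form also produces a single argument, which is exactly the deviation described above.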
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419148", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/177660/" ] }
419,162
I have many csv files. The original design was supposed to have five columns. I just found out that the middle column of the csv file has a string with arbitrary number of commas in it and it is not quoted properly. This leads to rows with arbitrary number of columns. How do I get just the first two and last two columns of these csv files? Since the number of commas can change from row to row I need a way to specify first two and last two columns.
awk -F, '{print $1, $2, $(NF-1), $NF}' < input More generally (per the Question's title), to print the first and last n columns of the input -- without checking to see whether that means printing some columns twice -- awk -v n=2 '{ for(i=1; i <= n && i <= NF; i++) printf "%s%s", $i, OFS for(i=NF-n+1; i <= NF && i >= 1; i++) printf "%s%s", $i, OFS printf "%s", ORS }' < input (using -F as needed for the delimiter)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/419162", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272095/" ] }
419,198
I can't edit the .bashrc of the fs I'm connecting to, so I have to use the ~/.ssh/config file. Here is my current config:

Host my-ssh
    HostName <ip>
    User <user>
    IdentityFile <file location>
    Compression yes
    RemoteCommand cd <path to folder>

When I run ssh my-ssh nothing happens. The connection seems to automatically close. If I remove the RemoteCommand line it connects without an issue. Is this something on the server config? It's an EC2 instance, either CentOS or RHEL, and bash is the shell.
If you specify a remote command, then the ssh connection is going to close as soon as the remote command exits. A cd command will exit almost immediately. A common way to do what you want is: RemoteCommand cd /some/path && bash (Substitute your desired shell in place of "bash"). This cd's to the path and then invokes a subshell if the cd operation succeeded. The ssh connection will close when the subshell exits. You will also want to force ssh to allocate a PTY for the session: RequestTTY yes If you don't, then ssh won't request one by default, and you'll get a non-interactive shell session. Notably, you won't get a command prompt.
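Putting the answer together with the config from the question, the Host block could end up looking something like this (a sketch; the placeholders are the asker's, and bash is assumed as the remote shell):

Host my-ssh
    HostName <ip>
    User <user>
    IdentityFile <file location>
    Compression yes
    RequestTTY yes
    RemoteCommand cd <path to folder> && bash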
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419198", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/254951/" ] }
419,207
I am trying to have an infinite scp between two hosts but of course there is no such file large enough for this. I tried scp -l 512 192.168.1.1:/dev/zero /dev/null But scp says /dev/zero not a regular file. I need a consistent traffic between two hosts so I can try something on my router/firewall and I really need it to run for a long time. Any suggestions? It does not have to be scp but I need to be able to specify the speed. Thanks
The scp tool expects to copy a file. You can use ssh to transport an unending stream of bytes, and you can rate-limit with something like pv . The pertinent section of the man page for pv writes, -L RATE , --rate-limit RATE Limit the transfer to a maximum of RATE bytes per second. A suffix of K , M , G , or T can be added to denote Kilobytes (*1024), Megabytes, and so on. A suitable solution would be something like this, which rate-limits at approximately 10Mb/s (remember that 1MB/s is approximately 10Mb/s, after accounting for padding, network headers, etc.): pv --rate-limit 1M </dev/zero | ssh [email protected] 'cat >/dev/null' If you want bidirectional traffic flow, remove the quotes from the 'cat >/dev/null' .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419207", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39569/" ] }
419,284
Is there a pkill-like tool which does this: send signal to all matching processes wait N seconds until the processes have terminated if all processes have terminated, nice: exit if some processes have not terminated send SIGKILL For me it is important that the tool waits until the processes have really terminated. I know that it is quite easy to do this in my favourite scripting language, but in this case it would be very nice if I use a tool which already exists. This needs to run on SuSE-Linux and Ubuntu.
There is no “standard” command which provides the behaviour you’re after. However, on Debian and derivatives, you can use start-stop-daemon ’s --stop action with the --retry option: start-stop-daemon --stop --oknodo --retry 15 -n daemontokill will send SIGTERM to all processes named daemontokill , wait up to 15s for them to stop, then send SIGKILL to all remaining processes (from the initial selection), and wait another 15s for them to die. It will exit with status 0 if there was nothing to kill or all the processes stopped, 2 if some processes were still around after the second timeout. There are a number of options to match processes in various ways, see the documentation (linked above) for details. You can also provide a more detailed schedule with varying timeouts. start-stop-daemon is part of the dpkg package so it’s always available on Debian systems (and derivatives). Some non- .deb distributions make the package available too; for example, openSUSE Leap 42 has it. It’s quite straightforward to build on other platforms: git clone https://salsa.debian.org/dpkg-team/dpkg.git cd dpkgautoreconf -fi && ./configure && make You’ll need autoconf , automake , libtool , gettext . Once the build is finished you’ll find start-stop-daemon in the utils directory.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419284", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22068/" ] }
419,307
I know a question about the differences between Posix and SUS has already been asked and answered beautifully. Anyway, the answers seemed to suggest the possibility that SUS "encompasses more than Posix", and there are certain things in SUS that are not included in Posix. An answer specifically addressed the XSI (XOPEN) option group as the only difference, but added that SUS seems to not care so much about it anymore. Now I'm wondering if there is any other difference, or they are just named differently for historical reasons? Moreover, wikipedia seems to suggest that there is a difference and that Posix is the core of SUS : Very few BSD and Linux-based operating systems are submitted for compliance with the Single UNIX Specification, although system developers generally aim for compliance with POSIX standards, which form the core of the Single UNIX Specification.
There is no other difference. The SUSv4, 2016 edition site states that it is Technically identical to IEEE Std 1003.1, 2016 Edition and ISO/IEC 9945:2009 including ISO/IEC 9945:2009/Cor 1:2013(E) and ISO/IEC 9945:2009/Cor 2:2017(E) with the addition of X/Open Curses. IEEE Std 1003.1 is POSIX. You can also verify this by looking at the table of contents : XBD, XSH, XCU, and XRAT are the four sections of POSIX, leaving only XCURSES in SUSv4 but not in POSIX. All of POSIX is in SUSv4, so POSIX is a subset of SUSv4.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419307", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/215663/" ] }
419,321
I've cloned two vSphere VMs off of an Ubuntu 17.10 template. After boot, they both claim the same IP and fight for it (ssh connections break off as the IP switches between them). The hostnames and MAC addresses are different between the two machines. dhclient correctly claims two separate IPs, but the resolver in use is systemd-networkd .
systemd-networkd uses a different method to generate the DUID than dhclient . dhclient by default uses the link-layer address while systemd-networkd uses the contents of /etc/machine-id . Since the VMs were cloned, they have the same machine-id and the DHCP server returns the same IP for both. To fix, replace the contents of one or both of /etc/machine-id . This can be anything, but deleting the file and running systemd-machine-id-setup will create a random machine-id in the same way done on machine setup.
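A minimal sketch of that fix on one of the clones (the new id is used once the network is reconfigured, so either reboot or restart systemd-networkd afterwards):

sudo rm /etc/machine-id
sudo systemd-machine-id-setup            # writes a fresh, random machine-id
sudo systemctl restart systemd-networkd  # or reboot, so a new DUID is derived and a new lease requested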
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/419321", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4934/" ] }
419,341
I'm using pv for sending files via ssh. I can change the limit of an "active pv" at under 100M without any problem. When I set an active pv process to 100M or 1G or higher I can't change the rate anymore... BUT! if I change 5-10 times 1M to 2M, 2M to 1M, pv can sometimes be set to a new rate. I couldn't find any solution for the problem. Any idea? Examples:

pv -R "15778" -f -F "%p***%t***%e***%r***%b" -L 1M
pv -R "15778" -f -F "%p***%t***%e***%r***%b" -L 1G
pv -R "15778" -f -F "%p***%t***%e***%r***%b" -L 1M (not working anymore)
This is caused by accounting in pv , which effectively means its rate-limiting is read-limited rather than write-limited. Looking at the source code shows that rate-limiting is driven by a “target”, which is the amount remaining to send. If rate-limiting is on, once per rate limit evaluation cycle, the target is increased by however much we’re supposed to send according to the rate limit; the target is then decreased by however much is actually written. This means that if you set the rate limit to a value larger than the actual write capacity, the target will keep going up; reducing the rate limit won’t then have any effect until pv has caught up with its target (including what it’s allowed to write according to the new rate limit). To see this in action, start a basic pv :

pv /dev/zero /dev/null

Then control that:

pv -R 32605 -L 1M; sleep 10; pv -R 32605 -L 1G; sleep 1; pv -R 32605 -L 1M

You’ll see the impact of the target calculations by varying the duration of the second sleep... Because of the write limitation, this only causes an issue when you set the rate limit to a value greater than the write capacity. In a little more detail, here’s how the accounting works with a flow initially limited to 1M, then to 1G for 5s, then back to 1M, on a connection capable of transmitting 400M:

Time  Rate  Target  Sent   Remaining
1     1M    1M      1M     0
2     1G    1G      400M   600M
3     1G    1.6G    400M   1.2G
4     1G    2.2G    400M   1.8G
5     1G    2.8G    400M   2.4G
6     1G    3.4G    400M   3G
7     1M    3001M   400M   2601M
8     1M    2602M   400M   2202M
9     1M    2203M   400M   1803M
10    1M    1804M   400M   1404M
11    1M    1405M   400M   1005M
12    1M    1006M   400M   606M
13    1M    607M    400M   207M
14    1M    208M    208M   0
15    1M    1M      1M     0

It takes 7s for the rate limit to be applied again. The longer the time spent with a high rate limit, the longer it takes for the reduced rate limit to be enforced... The fix for this is quite straightforward, if you can recompile pv : in loop.c , change line 154 to target = (from target += ), resulting in

    || (cur_time.tv_sec == next_ratecheck.tv_sec
        && cur_time.tv_usec >= next_ratecheck.tv_usec)) {
    target =
        ((long double) (state->rate_limit)) / (long double) (1000000 / RATE_GRANULARITY);

Once that’s done, rate limit reductions are applied immediately (well, within one rate-limit cycle).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419341", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199043/" ] }
419,343
Hi friends could u pls help me to rewrite the method with nested if statements? Thanks much. isDiskMounted(){ if [ -d "/folder1" ] && [ -d "/folder2" ] && [ -d "/folder3" ] && [ -d "/folder4" ];then echo "true" else echo "false" fi} I try to write like this; isDiskMounted() { if [ -d "/folder1" ]; then echo "/folder1 klasoru bulundu" if [ -d "/folder2" ]; then echo "/folder2 klasoru bulundu" if [ -d "/folder3" ]; then echo "/folder3 klasoru bulundu" if [ -d "/folder4" ]; then echo "/folder4 klasoru bulundu" fi fi fi echo "true" else echo "false" fi }
Purely from a code review point-of-view, I'd write that function like this: func(){ for d in /folder1 /folder2 /folder3 /folder4 ; do if ! [ -d "$d" ] ; then echo "$d does not exist (or is not a directory)" return 1 fi done echo "all dirs exist"} The loop might make it more straightforward to add new directories to the list, or pass them as arguments to the function. (But if your aim is to check that something is mounted, like the function name implies, testing that directories exist doesn't do that much good.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419343", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272259/" ] }
419,374
I'm trying to restart services after a yum update on RHEL 7.4. I could restart every service using systemctl, but needs-restarting from yum utils tells me that I should also restart systemd itself: # needs-restarting1 : /usr/lib/systemd/systemd --system --deserialize 21 Can I restart systemd without rebooting the server, and how? I found a few mentions of systemctl daemon-reload , but this doesn't make it disappear from the needs-restarting list.
To restart the daemon, run systemctl daemon-reexec This is documented in the systemctl manpage : Reexecute the systemd manager. This will serialize the manager state, reexecute the process and deserialize the state again. This command is of little use except for debugging and package upgrades. Sometimes, it might be helpful as a heavy-weight daemon-reload . While the daemon is being reexecuted, all sockets systemd listening on behalf of user configuration will stay accessible. Unfortunately needs-restarting can’t determine that systemd has actually restarted. systemd execs itself to restart, which doesn’t reset the process’s start time; but needs-restarting compares the executable’s modification time with the process’s start time to determine whether a process needs to be restarted (among other things), and as a result it always considers that systemd needs to be restarted... To determine whether systemd really needs to be restarted, you can check the output of lsof -p1 | grep deleted : systemd uses a library, libsystemd-shared , which is shipped in the same package and is thus upgraded along with the daemon, so if systemd needs to be restarted you’ll see it using a deleted version of the library. If lsof shows no deleted files, systemd doesn’t need to be restarted. (Thanks to Jeff Schaller for the hint!)
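So in practice, after a yum update the sequence could look like this (a sketch):

sudo systemctl daemon-reexec    # re-execute PID 1 in place, no reboot needed
sudo lsof -p 1 | grep deleted   # empty output: systemd no longer maps any deleted (i.e. upgraded) files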
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/419374", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30018/" ] }
419,377
I want to add a line as a first line of many files, unless the first line of a file is a shebang, in which case it should be the second.
There are many ways to do it... e.g.

sed '1!b
/^#!/a\
one_line_text
//!i\
one_line_text' infile

Note that backslashes (if any) in your line have to be escaped (e.g. \ becomes \\ ). This won't edit empty files. Also, this won't edit the file in-place. Consult your sed manual to see if it supports -i to edit the file in-place (and check the syntax of that option).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419377", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29181/" ] }
419,406
How to prevent chrome to take more than for example 4GB of ram. From time to time he decides to take something like 7GB (with 8GB RAM total) and makes my computer unusable. Do you have any help. PS: I even didn't have more than 10 tabs opened.Edit: maybe I did ... something like 15. Anyway I want chrome to freeze or shutdown not to freeze the whole system.
I believe you would want to use something like cgroups to limit resource usage for an individual process. So you might want to do something like this:

cgcreate -g memory,cpu:chromegroup
cgset -r memory.limit_in_bytes=2048 chromegroup

to create chromegroup and restrict the memory usage for the group to 2048 bytes,

cgclassify -g memory,cpu:chromegroup $(pidof chrome)

to move the current chrome processes into the group and restrict their memory usage to the set limit, or just launch chrome within the group like

cgexec -g memory,cpu:chromegroup chrome

However, it's pretty insane that chrome is using that much memory in the first place. Try purging / reinstalling / recompiling first to see if that doesn't fix the issue, because it really should not be using that much memory to begin with, and this solution is only a band-aid over the real problem.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419406", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206961/" ] }
419,415
I have recently purchased myself an HP rack server for use as a personal fileserver. This server currently lives under my bed as I have nowhere else to put it. For those not aware (as I was not fully) this server is VERY LOUD . I need to be able to access my files a lot of the time during the day, and due to the situation of my server, turning it off every night at the wall (it likes to suddenly spring into action for no apparent reason) isn't really an option. I would really like if the server could remain powered on all the time, but when not in use enter a sleep state such that the fans turn off, if nothing else, over LAN. The server also runs Debian. If this kind of setup can't happen for whatever reason, I could settle for the machine shutting down at a certain time of day (or night) and starting up again in the morning, or something to that effect. I have very little idea about how to go about such a task, other than to use wake/sleep-on-LAN.
I believe you would want to use something like cgroups to limit resource usage for a individual process. So you might want to do something like this except with cgcreate -g memory,cpu:chromegroupcgset -r memory.limit_in_bytes=2048 chromegroup to create chromegroup and restrict the memory usage for the group to 2048 bytes cgclassify -g memory,cpu:chromegroup $(pidof chrome) to move the current chrome processes into the group and restrict their memory usage to the set limit or just launch chrome within the group like cgexec -g memory,cpu:chromegroup chrome However, it's pretty insane that chrome is using that much memory in the first place. Try purging reinstalling / recompiling first to see if that doesn't fix the issue, because it really should not be using that much memory to begin with, and this solution is only a band-aid over the real problem.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419415", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272298/" ] }
419,422
I'm running Fedora 27, and my university uses a network authenication portal so GNOME pops up a hotspot login screen. I would like to disable this screen, and just have it open it in firefox, because my login data is already there. How do I change this setting? I've checked the settings app and there are no settings to change it. Unless there is a better way to get past the captive portal. I saw mention of the WHISPr protocol. The captive portal my university uses is Cisco Meraki.
For disabling it, in Ubuntu it is (no idea if it applies to Fedora):

Open Settings
Select Privacy
Turn ‘network connectivity checking’ off

The offending file in Fedora is however /usr/libexec/gnome-shell-portal-helper ; you may replace it with a bash script that does nothing; after that you can login once and save the login credentials in Firefox or a Firefox add-on. Cisco Meraki does indeed support the WISPr protocol and it could be an interesting avenue to pursue for automating the login process via a script or program.
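A hedged sketch of the Fedora approach described above (replacing a packaged file like this is a hack, and a gnome-shell update may well put the original back):

sudo mv /usr/libexec/gnome-shell-portal-helper /usr/libexec/gnome-shell-portal-helper.orig
printf '#!/bin/sh\nexit 0\n' | sudo tee /usr/libexec/gnome-shell-portal-helper >/dev/null
sudo chmod +x /usr/libexec/gnome-shell-portal-helper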
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419422", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115135/" ] }
419,464
If I am having a line as: There are seven pencil I want to print this as: Ther a svn pcil What is the bash shell command for this? Clarification: the goal is to remove all, at least twice occuring letters, except their first occurence.
Based on sed's classic syntax s/replace-this/with-that/g , where g means global replace = all occurrences, someone can use 2g instead of g , which means global replacement but starting after the second occurrence (this is a GNU sed extension). Example that removes only e:

$ echo $a
there are seven pencil
$ echo $a | sed 's/e//2g'
ther ar svn pncil

To remove all duplicate letters we can make a trick like this:

$ sed -f <(printf 's/%s//2g\n' {a..z}) <<<"$a"
ther a svn pcil

Unfortunately this will not work: sed 's/[a-z]//2g'

The above trick uses process substitution <( ) which can be used as a file. In my solution the process substitution is treated like a sed script file, fed to sed by the -f option = read sed commands from a file.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419464", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272345/" ] }
419,518
I've got a front end machine with about 1k persistent, very low-bandwidth TCP connections. It's a bit memory constrained so I'm trying to figure out where a few hundred MBs are going. TCP buffers are one possible culprit, but I can't make a dent in these questions: Where is the memory reported? Is it part of the buff/cache item in top , or is it part of the process's RES metric? If I want to reduce it on a per-process level, how do I ensure that my reductions are having the desired effect? Do the buffers continue to take up some memory even when there's minimal traffic flowing, or do they grow dynamically, with the buffer sizes merely being the maximum allowable size? I realize one possible answer is "trust the kernel to do this for you," but I want to rule out TCP buffers as a source of memory pressure. Investigation: Question 1 This page writes, "the 'buffers' memory is memory used by Linux to buffer network and disk connections." This implies that they're not part of the RES metric in top . To find the actual memory usage, /proc/net/sockstat is the most promising: sockets: used 3640TCP: inuse 48 orphan 49 tw 63 alloc 2620 mem 248UDP: inuse 6 mem 10UDPLITE: inuse 0RAW: inuse 0FRAG: inuse 0 memory 0 This is the best explanation I could find, but mem isn't addressed there. It is addressed here , but 248*4k ~= 1MB, or about 1/1000 the system-wide max, which seems like an absurdly low number for a server with hundreds of persistent connections and sustained .2-.3Mbit/sec network traffic. Of course, the system memory limits themselves are: $ grep . /proc/sys/net/ipv4/tcp*mem/proc/sys/net/ipv4/tcp_mem:140631 187510 281262/proc/sys/net/ipv4/tcp_rmem:4096 87380 6291456/proc/sys/net/ipv4/tcp_wmem:4096 16384 4194304 tcp_mem 's third parameter is the system-wide maximum number of 4k pages dedicated to TCP buffers; if the total of buffer size ever surpasses this value, the kernel will start dropping packets. For non-exotic workloads there's no need to tune this value. Next up is /proc/meminfo , and its mysterious Buffers and Cached items. I looked at several sources but couldn't find any that claimed it accounted for TCP buffers. ...MemAvailable: 8298852 kBBuffers: 192440 kBCached: 2094680 kBSwapCached: 34560 kB... Investigation: Questions 2-3 To inspect TCP buffer sizes at the process level, we've got quite a few options, but none of them seem to provide the actual allocated memory instead of the current queue size or maximum. There's ss -m --info : State Recv-Q Send-QESTAB 0 0... <snip> ....skmem:(r0,rb1062000,t0,tb2626560,f0,w0,o0,bl0) ...<snip> rcv_space:43690 So we have Recv-Q and Send-Q , the current buffer usage r and t , which are explained in this excellent post , but it's unclear how they're different from Recv-Q and Send-Q Something called rb , which looks suspiciously like some sort of max buffer size, but for which I couldn't find any documentation rcv_space , which this page claims isn't the actual buffer size; for that you need to call getsockopt This answer suggests lsof , but the size/off seems to be reporting the same buffer usage as ss : COMMAND PID TID USER FD TYPE DEVICE SIZE/OFF NODE NAMEsslocal 4032 michael 82u IPv4 1733921 0t0 TCP localhost:socks->localhost:59594 (ESTABLISHED) And then these answers suggest that lsof can't return the actual buffer size. It does provide a kernel module that should do the trick, but it only seems to work on sockets whose buffer sizes have been fixed with setsockopt ; if not, SO_SNDBUF and SO_RCVBUF aren't included.
/proc/net/sockstat , specifically the mem field, is where to look. This value is is reported in kernel pages and corresponds directly to /proc/sys/net/ipv4/tcp_mem . At the individual socket level, memory is allocated in kernel space only until the user space code reads it, at which time the kernel memory is freed (see here ). sk_buff->truesize is the sum of both the amount of data buffered, as well as the socket structure itself (see here , and the patch which corrected for memory alignment is talked about here ) I suspect that the mem field of /proc/net/sockstat is calculated simply by summing sk_buff->truesize for all sockets, but I'm not familiar enough with the kernel source to know where to look for that. By way of confirmation, this feature request from the netdata monitoring system includes a lot of good discussion and relevant links as well, and it backs up this interpretation of /proc/net/sockstat . This post on the "out of socket memory" error contains some more general discussion of different memory issues.
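A small sketch for turning that mem field into bytes (it is counted in kernel pages, so the page size is read from getconf rather than assumed to be 4 KiB; the field layout is the one shown in the question):

pages=$(awk '/^TCP:/ {print $NF}' /proc/net/sockstat)
echo "TCP socket buffer memory: $(( pages * $(getconf PAGESIZE) )) bytes"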
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419518", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272386/" ] }
419,539
Currently, I have to allow each and every port I want to connect from & to localhost ( iptables -A INPUT -i lo -p tcp -m tcp --dport 8888 -j ACCEPT ), which is a) a little bit annoying each time, and b) often impossible with services which change their local port. I'm testing this with netcat on debian: nc -vv -l -s 127.0.0.1 -p 8888 is listening on the 127.0.0.1 interface at port 8888. Trying to connect nc 127.0.0.1 8888 results in a connection refused, until I manually add the rule above.How can I make this work? I tried to allow all traffic on the loopback interface, to no avail: -A INPUT -i lo -j ACCEPT-A OUTPUT -o lo -j ACCEPT and also to allow everything from 127.0.0.1 : -A INPUT -s 127.0.0.1 -j ACCEPT Shouldn't that work, since netstat is listing it as coming from 127.0.0.1: tcp 0 0 127.0.0.1:8888 0.0.0.0:* LISTEN 15338/nc Here are my current rules: # Generated by iptables-save v1.6.0 on Thu Jan 25 08:01:28 2018*nat:PREROUTING ACCEPT [0:0]:INPUT ACCEPT [0:0]:OUTPUT ACCEPT [0:0]:POSTROUTING ACCEPT [0:0]-A POSTROUTING -s 172.20.0.0/24 -j MASQUERADECOMMIT# Completed on Thu Jan 25 08:01:28 2018# Generated by iptables-save v1.6.0 on Thu Jan 25 08:01:28 2018*filter:INPUT DROP [0:0]:FORWARD DROP [0:0]:OUTPUT DROP [0:0]:port-scan - [0:0]-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT-A INPUT -p tcp -m tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT-A INPUT -p tcp -m tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT-A INPUT -i eth0 -p udp -m udp --dport 67:68 -j ACCEPT-A INPUT -p tcp -m tcp --dport 587 -j ACCEPT-A INPUT -p tcp -m tcp --dport 25 -j ACCEPT-A INPUT -p tcp -m tcp --dport 2368 -j ACCEPT-A INPUT -i eth0 -p tcp -m tcp --dport 143 -m state --state NEW,ESTABLISHED -j ACCEPT-A INPUT -i eth0 -p tcp -m tcp --dport 993 -m state --state NEW,ESTABLISHED -j ACCEPT-A INPUT -i eth0 -p udp -m udp --dport 9987 -j ACCEPT-A INPUT -i lo -p tcp -m tcp --dport 3306 -j ACCEPT-A INPUT -p tcp -m tcp --dport 1666 -j ACCEPT-A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT-A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT-A INPUT -p tcp -m tcp --dport 80 -m limit --limit 25/min --limit-burst 100 -j ACCEPT-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7-A INPUT -j REJECT --reject-with icmp-port-unreachable-A INPUT -i eth0 -p udp -m udp --sport 53 -j ACCEPT-A INPUT -i eth0 -p tcp -m tcp --sport 53 -j ACCEPT-A INPUT -s 127.0.0.1/32 -j ACCEPT-A INPUT -i lo -j ACCEPT-A INPUT -j DROP-A FORWARD -i wintap0 -j ACCEPT-A FORWARD -i eth0 -o eth1 -j ACCEPT-A FORWARD -j REJECT --reject-with icmp-port-unreachable-A OUTPUT -j ACCEPT-A OUTPUT -o lo -j ACCEPT-A OUTPUT -o eth0 -j ACCEPT-A OUTPUT -o tun0 -j ACCEPT-A port-scan -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK RST -m limit --limit 1/sec -j RETURN-A port-scan -j DROP Note: I obviously, don't want the ports visible to the outside world, only internally.
The problem is almost certainly this line:

-A INPUT -j REJECT --reject-with icmp-port-unreachable

As it comes before your rules that allow traffic to lo and has no filter, the last four rules are never looked at:

-A INPUT -j REJECT --reject-with icmp-port-unreachable
-A INPUT -i eth0 -p udp -m udp --sport 53 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --sport 53 -j ACCEPT
-A INPUT -s 127.0.0.1/32 -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -j DROP

You don't have to worry about people being able to connect to a service that is listening only on the lo interface (127.0.0.1/8): that is an internal-only interface.
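If you want to fix it on the live ruleset rather than by re-ordering the saved file, inserting the loopback rule at the very top is enough (a sketch; the late duplicates of it further down then become harmless no-ops):

iptables -I INPUT 1 -i lo -j ACCEPT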
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419539", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23994/" ] }
419,632
I am writing a shell script that checks to see if a parameter matches a string. There are around 20 of them and more may need to be added in the future. Currently the way I have it written it's hard to read and would be cumbersome to update. I'm not very familiar with shell scripting so I'm not sure of the best way to simplify this and make it easier to manage. if [ $4 =="CRITICAL" ] && [[ $2 == "foo" || $2 == "bar" || $2 == "foo" || $2 == "bar" || $2 == "foo" || $2 == "bar" || $2 == "foo" || $2 == "bar" || $2 == "foo" || $2 == "bar" || ]] VARIABLE=1fi Foo and bar would all be different strings in the above script.
if [[ $4 == CRITICAL && $2 =~ ^(a|b|c|d|e|f|g)$ ]]; then
    VARIABLE=1
fi

BTW, unquoted variables and positional parameters are safe to use inside [[ ... ]] , but not in [ ... ] . In other words, your [ $4 == "CRITICAL" ] should be [ "$4" == "CRITICAL" ] . Also, CRITICAL doesn't need to be quoted at all above. It's a fixed string, with no spaces or shell metacharacters. If it was a fixed string that did need quoting for any reason, it's best to use single quotes. Single quotes are for fixed strings, double-quotes are for when you want to interpolate variables, command substitutions, etc. into a string.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419632", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/266661/" ] }
419,697
After finding out that several common commands (such as read ) are actually Bash builtins (and when running them at the prompt I'm actually running a two-line shell script which just forwards to the builtin), I was looking to see if the same is true for true and false . Well, they are definitely binaries. sh-4.2$ which true/usr/bin/truesh-4.2$ which false/usr/bin/falsesh-4.2$ file /usr/bin/true/usr/bin/true: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=2697339d3c1923506e10af65aa3120b12295277e, strippedsh-4.2$ file /usr/bin/false/usr/bin/false: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=b160fa513fcc13537d7293f05e40444fe5843640, strippedsh-4.2$ However, what I found most surprising was their size. I expected them to be only a few bytes each, as true is basically just exit 0 and false is exit 1 . sh-4.2$ truesh-4.2$ echo $?0sh-4.2$ falsesh-4.2$ echo $?1sh-4.2$ However I found to my surprise that both files are over 28KB in size. sh-4.2$ stat /usr/bin/true File: '/usr/bin/true' Size: 28920 Blocks: 64 IO Block: 4096 regular fileDevice: fd2ch/64812d Inode: 530320 Links: 1 Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)Access: 2018-01-25 19:46:32.703463708 +0000Modify: 2016-06-30 09:44:27.000000000 +0100Change: 2017-12-22 09:43:17.447563336 +0000 Birth: -sh-4.2$ stat /usr/bin/false File: '/usr/bin/false' Size: 28920 Blocks: 64 IO Block: 4096 regular fileDevice: fd2ch/64812d Inode: 530697 Links: 1 Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)Access: 2018-01-25 20:06:27.210764704 +0000Modify: 2016-06-30 09:44:27.000000000 +0100Change: 2017-12-22 09:43:18.148561245 +0000 Birth: -sh-4.2$ So my question is: Why are they so big? What's in the executable other than the return code? PS: I am using RHEL 7.4
In the past, /bin/true and /bin/false in the shell were actually scripts. For instance, in a PDP/11 Unix System 7:

$ ls -la /bin/true /bin/false
-rwxr-xr-x 1 bin 7 Jun 8 1979 /bin/false
-rwxr-xr-x 1 bin 0 Jun 8 1979 /bin/true
$
$ cat /bin/false
exit 1
$
$ cat /bin/true
$

Nowadays, at least in bash, the true and false commands are implemented as shell built-in commands. Thus no executable binary files are invoked by default, both when using the false and true directives on the bash command line and inside shell scripts. From the bash source, builtins/mkbuiltins.c:

char *posix_builtins[] =
  {
    "alias", "bg", "cd", "command", "false", "fc", "fg", "getopts",
    "jobs", "kill", "newgrp", "pwd", "read", "true", "umask",
    "unalias", "wait",
    (char *)NULL
  };

Also per @meuh's comments:

$ command -V true false
true is a shell builtin
false is a shell builtin

So it can be said with a high degree of certainty that the true and false executable files exist mainly for being called from other programs. From now on, the answer will focus on the /bin/true binary from the coreutils package in Debian 9 / 64 bits (/usr/bin/true when running RedHat; RedHat and Debian both use the coreutils package, and the compiled version of the latter was analysed as it was more at hand). As can be seen in the source file false.c, /bin/false is compiled with (almost) the same source code as /bin/true, just returning EXIT_FAILURE (1) instead, so this answer applies to both binaries:

#define EXIT_STATUS EXIT_FAILURE
#include "true.c"

As can also be confirmed by both executables having the same size:

$ ls -l /bin/true /bin/false
-rwxr-xr-x 1 root root 31464 Feb 22 2017 /bin/false
-rwxr-xr-x 1 root root 31464 Feb 22 2017 /bin/true

Alas, the direct answer to the question "why are true and false so large?" could be: because there are no longer such pressing reasons to care about their top performance. They are not essential to bash performance, not being used anymore by bash (scripting). Similar comments apply to their size; 26KB, for the kind of hardware we have nowadays, is insignificant. Space is not at a premium for the typical server/desktop anymore, and they do not even bother to use the same binary for false and true, as it is just deployed twice in distributions using coreutils. Focusing, however, on the real spirit of the question: why does something that should be so simple and small get so large? The real distribution of the sections of /bin/true is as these charts show; the main code+data amounts to roughly 3KB out of a 26KB binary, which amounts to 12% of the size of /bin/true. The true utility did indeed gain more cruft code over the years, most notably the standard support for --version and --help. However, that is not the (only) main justification for it being so big; rather, while being dynamically linked (using shared libs), it also has part of a generic library commonly used by coreutils binaries linked in as a static library. The metadata for building an ELF executable file also accounts for a significant part of the binary, it being a relatively small file by today's standards. The rest of the answer explains how we got to build the following charts detailing the composition of the /bin/true executable binary file and how we arrived at that conclusion. As @Maks says, the binary was compiled from C; as per my comment also, it is also confirmed it is from coreutils.
We are pointing directly to the author(s)' git, https://github.com/wertarbyte/coreutils/blob/master/src/true.c , instead of the gnu git as @Maks did (same sources, different repositories - this repository was selected as it has the full source of the coreutils libraries). We can see the various building blocks of the /bin/true binary here (Debian 9 - 64 bits from coreutils):

$ file /bin/true
/bin/true: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=9ae82394864538fa7b23b7f87b259ea2a20889c4, stripped

$ size /bin/true
   text    data     bss     dec     hex filename
  24583    1160     416   26159    662f true

Of those:

text (usually code) is around 24KB
data (initialised variables, mostly strings) is around 1KB
bss (uninitialized data) is around 0.5KB

Of the 24KB, around 1KB is for fixing up the 58 external functions. That still leaves roughly 23KB for the rest of the code. We will show below that the actual main file - the main()+usage() code - is around 1KB compiled, and explain what the other 22KB are used for. Drilling further down into the binary with readelf -S true, we can see that while the binary is 26159 bytes, the actual compiled code is 13017 bytes, and the rest is assorted data/initialisation code. However, true.c is not the whole story, and 13KB seems pretty excessive if it were only that file; we can see functions called in main() that are not listed among the external functions seen in the ELF with objdump -T true; functions that are present at:

https://github.com/coreutils/gnulib/blob/master/lib/progname.c
https://github.com/coreutils/gnulib/blob/master/lib/closeout.c
https://github.com/coreutils/gnulib/blob/master/lib/version-etc.c

Those extra functions not linked externally in main() are:

set_program_name()
close_stdout()
version_etc()

So my first suspicion was partly correct: whilst the binary is using dynamic libraries, /bin/true is big because it has some static libraries included with it (but that is not the only cause). Compiling C code is not usually so inefficient as to leave that much space unaccounted for, hence my initial suspicion that something was amiss. The extra space, almost 90% of the size of the binary, is indeed extra libraries/ELF metadata. While using Hopper to disassemble/decompile the binary to understand where the functions are, it can be seen that the compiled binary code of the true.c/usage() function is actually 833 bytes, and that of the true.c/main() function is 225 bytes, which together is slightly less than 1KB. The logic for the version functions, which is buried in the static libraries, is around 1KB. The actual compiled main()+usage()+version()+strings+vars only use up around 3KB to 3.5KB. It is indeed ironic that such small and humble utilities have become bigger in size for the reasons explained above. Related question: Understanding what a Linux binary is doing

true.c main() with the offending function calls:

int
main (int argc, char **argv)
{
  /* Recognize --help or --version only if it's the only command-line argument.
*/ if (argc == 2) { initialize_main (&argc, &argv); set_program_name (argv[0]); <----------- setlocale (LC_ALL, ""); bindtextdomain (PACKAGE, LOCALEDIR); textdomain (PACKAGE); atexit (close_stdout); <----- if (STREQ (argv[1], "--help")) usage (EXIT_STATUS); if (STREQ (argv[1], "--version")) version_etc (stdout, PROGRAM_NAME, PACKAGE_NAME, Version, AUTHORS, <------ (char *) NULL); } exit (EXIT_STATUS);} The decimal size of the various sections of the binary: $ size -A -t true true :section size addr.interp 28 568.note.ABI-tag 32 596.note.gnu.build-id 36 628.gnu.hash 60 664.dynsym 1416 728.dynstr 676 2144.gnu.version 118 2820.gnu.version_r 96 2944.rela.dyn 624 3040.rela.plt 1104 3664.init 23 4768.plt 752 4800.plt.got 8 5552.text 13017 5568.fini 9 18588.rodata 3104 18624.eh_frame_hdr 572 21728.eh_frame 2908 22304.init_array 8 2125160.fini_array 8 2125168.jcr 8 2125176.data.rel.ro 88 2125184.dynamic 480 2125272.got 48 2125752.got.plt 392 2125824.data 128 2126240.bss 416 2126368.gnu_debuglink 52 0Total 26211 Output of readelf -S true $ readelf -S trueThere are 30 section headers, starting at offset 0x7368:Section Headers: [Nr] Name Type Address Offset Size EntSize Flags Link Info Align [ 0] NULL 0000000000000000 00000000 0000000000000000 0000000000000000 0 0 0 [ 1] .interp PROGBITS 0000000000000238 00000238 000000000000001c 0000000000000000 A 0 0 1 [ 2] .note.ABI-tag NOTE 0000000000000254 00000254 0000000000000020 0000000000000000 A 0 0 4 [ 3] .note.gnu.build-i NOTE 0000000000000274 00000274 0000000000000024 0000000000000000 A 0 0 4 [ 4] .gnu.hash GNU_HASH 0000000000000298 00000298 000000000000003c 0000000000000000 A 5 0 8 [ 5] .dynsym DYNSYM 00000000000002d8 000002d8 0000000000000588 0000000000000018 A 6 1 8 [ 6] .dynstr STRTAB 0000000000000860 00000860 00000000000002a4 0000000000000000 A 0 0 1 [ 7] .gnu.version VERSYM 0000000000000b04 00000b04 0000000000000076 0000000000000002 A 5 0 2 [ 8] .gnu.version_r VERNEED 0000000000000b80 00000b80 0000000000000060 0000000000000000 A 6 1 8 [ 9] .rela.dyn RELA 0000000000000be0 00000be0 0000000000000270 0000000000000018 A 5 0 8 [10] .rela.plt RELA 0000000000000e50 00000e50 0000000000000450 0000000000000018 AI 5 25 8 [11] .init PROGBITS 00000000000012a0 000012a0 0000000000000017 0000000000000000 AX 0 0 4 [12] .plt PROGBITS 00000000000012c0 000012c0 00000000000002f0 0000000000000010 AX 0 0 16 [13] .plt.got PROGBITS 00000000000015b0 000015b0 0000000000000008 0000000000000000 AX 0 0 8 [14] .text PROGBITS 00000000000015c0 000015c0 00000000000032d9 0000000000000000 AX 0 0 16 [15] .fini PROGBITS 000000000000489c 0000489c 0000000000000009 0000000000000000 AX 0 0 4 [16] .rodata PROGBITS 00000000000048c0 000048c0 0000000000000c20 0000000000000000 A 0 0 32 [17] .eh_frame_hdr PROGBITS 00000000000054e0 000054e0 000000000000023c 0000000000000000 A 0 0 4 [18] .eh_frame PROGBITS 0000000000005720 00005720 0000000000000b5c 0000000000000000 A 0 0 8 [19] .init_array INIT_ARRAY 0000000000206d68 00006d68 0000000000000008 0000000000000008 WA 0 0 8 [20] .fini_array FINI_ARRAY 0000000000206d70 00006d70 0000000000000008 0000000000000008 WA 0 0 8 [21] .jcr PROGBITS 0000000000206d78 00006d78 0000000000000008 0000000000000000 WA 0 0 8 [22] .data.rel.ro PROGBITS 0000000000206d80 00006d80 0000000000000058 0000000000000000 WA 0 0 32 [23] .dynamic DYNAMIC 0000000000206dd8 00006dd8 00000000000001e0 0000000000000010 WA 6 0 8 [24] .got PROGBITS 0000000000206fb8 00006fb8 0000000000000030 0000000000000008 WA 0 0 8 [25] .got.plt PROGBITS 0000000000207000 00007000 0000000000000188 
0000000000000008 WA 0 0 8 [26] .data PROGBITS 00000000002071a0 000071a0 0000000000000080 0000000000000000 WA 0 0 32 [27] .bss NOBITS 0000000000207220 00007220 00000000000001a0 0000000000000000 WA 0 0 32 [28] .gnu_debuglink PROGBITS 0000000000000000 00007220 0000000000000034 0000000000000000 0 0 1 [29] .shstrtab STRTAB 0000000000000000 00007254 000000000000010f 0000000000000000 0 0 1Key to Flags: W (write), A (alloc), X (execute), M (merge), S (strings), I (info), L (link order), O (extra OS processing required), G (group), T (TLS), C (compressed), x (unknown), o (OS specific), E (exclude), l (large), p (processor specific) Output of objdump -T true (external functions dynamically linked on run-time) $ objdump -T truetrue: file format elf64-x86-64DYNAMIC SYMBOL TABLE:0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __uflow0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 getenv0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 free0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 abort0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __errno_location0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strncmp0000000000000000 w D *UND* 0000000000000000 _ITM_deregisterTMCloneTable0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 _exit0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __fpending0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 textdomain0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fclose0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 bindtextdomain0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 dcgettext0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __ctype_get_mb_cur_max0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strlen0000000000000000 DF *UND* 0000000000000000 GLIBC_2.4 __stack_chk_fail0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 mbrtowc0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strrchr0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 lseek0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 memset0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fscanf0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 close0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __libc_start_main0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 memcmp0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fputs_unlocked0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 calloc0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 strcmp0000000000000000 w D *UND* 0000000000000000 __gmon_start__0000000000000000 DF *UND* 0000000000000000 GLIBC_2.14 memcpy0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fileno0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 malloc0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fflush0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 nl_langinfo0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 ungetc0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 __freading0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 realloc0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fdopen0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 setlocale0000000000000000 DF *UND* 0000000000000000 GLIBC_2.3.4 __printf_chk0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 error0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 open0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fseeko0000000000000000 w D *UND* 0000000000000000 _Jv_RegisterClasses0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 
__cxa_atexit0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 exit0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 fwrite0000000000000000 DF *UND* 0000000000000000 GLIBC_2.3.4 __fprintf_chk0000000000000000 w D *UND* 0000000000000000 _ITM_registerTMCloneTable0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 mbsinit0000000000000000 DF *UND* 0000000000000000 GLIBC_2.2.5 iswprint0000000000000000 w DF *UND* 0000000000000000 GLIBC_2.2.5 __cxa_finalize0000000000000000 DF *UND* 0000000000000000 GLIBC_2.3 __ctype_b_loc0000000000207228 g DO .bss 0000000000000008 GLIBC_2.2.5 stdout0000000000207220 g DO .bss 0000000000000008 GLIBC_2.2.5 __progname0000000000207230 w DO .bss 0000000000000008 GLIBC_2.2.5 program_invocation_name0000000000207230 g DO .bss 0000000000000008 GLIBC_2.2.5 __progname_full0000000000207220 w DO .bss 0000000000000008 GLIBC_2.2.5 program_invocation_short_name0000000000207240 g DO .bss 0000000000000008 GLIBC_2.2.5 stderr
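For comparison, a rough experiment you can run yourself (paths and exact numbers will differ per system and toolchain; this is only a sketch, not part of the original analysis): compile a one-line C program that just returns 0 and inspect it with the same tools used above.

    printf 'int main(void){ return 0; }\n' > mytrue.c
    cc -O2 -o mytrue mytrue.c        # dynamically linked, like coreutils' true
    size mytrue                      # compare text/data/bss with /bin/true
    ldd mytrue                       # typically only libc, no gnulib helpers
    cc -O2 -static -o mytrue-static mytrue.c
    ls -l mytrue mytrue-static       # static linking makes the size jump dramatically

Even the minimal program carries ELF headers, startup code and dynamic-linking metadata, which is why it will not be just a few bytes either.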
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/419697", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/254388/" ] }
419,761
I was surprised today to find apparently how difficult it is to go from a webp animation to gif animation. My GIMP 2.8.22 and ImageMagick 7.0.7-21 on linux 4.14.13-1-ARCH don't seem to support the format, and the only tool available in repos seem to be libwebp 0.4.1 which includes a decode tool that lets you extract individual frames to some image formats, none of them being gif (It's a licensing problem maybe?) Anyway, I used the following script: #!/bin/bashDELAY=${DELAY:-10}LOOP=${LOOP:-0}r=`realpath $1`d=`dirname $r`pushd $d > /dev/nullf=`basename $r`n=`webpinfo -summary $f | grep frames | sed -e 's/.* \([0-9]*\)$/\1/'`pfx=`echo -n $f | sed -e 's/^\(.*\).webp$/\1/'`if [ -z $pfx ]; then pfx=$ffiecho "converting $n frames from $f working dir $dfile stem '$pfx'"for ((i=0; i<$n; i++)); do webpmux -get frame $i $f -o $pfx.$i.webp dwebp $pfx.$i.webp -o $pfx.$i.pngdoneconvert $pfx.*.png -delay $DELAY -loop $LOOP $pfx.gifrm $pfx.[0-9]*.png $pfx.[0-9]*.webppopd > /dev/null Which creates a gif animation from the extracted frames of the file supplied in the first argument. I tried it on this file and the resulting file was kind of artifacty. Is it proper form to post in this forum for suggestions of improvement of the procedure/invocations? And: If there are custom tools for this conversion, please share your knowledge! :)
Running into the same issue myself, I found that using Python and its Pillow library might be the easiest way. Just import it, let it load the image file, and directly save it again with appropriate options. from PIL import Imageim = Image.open('your_file.webp')im.save('your_file.gif', 'gif', save_all=True, optimize=True, background=0) Tested with Python3.8 and Pillow 8.0.1. You might have to install or upgrade the library first, using e.g. python3 -m pip install --user --upgrade Pillow All on one line to batch convert all *.webp files in the current folder to *.gif with the same name: for f in *.webp;do echo "$f";python3 -c "from PIL import Image;Image.open('$f').save('${f%.webp}.gif','gif',save_all=True,optimize=True,background=0)";done Note: This answer was inspired by Stack Overflow .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419761", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/125207/" ] }
419,762
I am trying to add a public key to a server but I don't want to restart the sshd service for it to take effect. The reason is that restarting the ssh service seems to be disruptive for other users who could use the ssh service at that time. Most documentation suggest to add a public key to $HOME/.ssh/authorized_keys and then to restart the sshd service ( systemctl restart sshd ). The OS of interest is Linux. My questions are: Is the restart of sshd needed? If sshd is restarted, is there a service outage at that time? Is there a way to set up passwordless auth using ssh without needing to restart the sshd service after adding new public keys to $HOME/.ssh/authorized_keys ?
Is the restart of sshd needed? Not usually. Linux distributions usually ship with a default configuration that allows public key authentication, so you usually don't even have to edit configuration to enable it, and so restarting is unnecessary. Even in the case that you had to do something with sshd_config , you'd only have to restart it only once after editing that file, not for each edit after of the authorized keys file. Note that you don't even have to restart sshd. From man sshd : sshd rereads its configuration file when it receives a hangup signal, SIGHUP, by executing itself with the name and options it was started with, e.g. /usr/sbin/sshd . And the typical systemd service for sshd recognizes this, so you can do systemctl reload sshd instead. If sshd is restarted, is there a service outage at that time? Depends on your definition of service outage. A simple restart of sshd will not kill existing ssh connections, but new connections wouldn't be accepted until sshd finishes restarting.
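A minimal sequence for question 3, assuming the default sshd_config already permits public key authentication (as it usually does): just append the key on the server and fix permissions. sshd reads authorized_keys at connection time, so nothing needs restarting or reloading for the new key to work.

    # on the server, as the target user
    mkdir -p ~/.ssh && chmod 700 ~/.ssh
    cat newkey.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys

    # only needed if you actually changed /etc/ssh/sshd_config:
    sudo systemctl reload sshd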
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/419762", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/159513/" ] }
419,798
I have a large amount of folders and files. I need to parse it and find only those with the extension xmp to finally remove them. How can I achieve this and keep track of the name of the removed files? To find: I know I can use find /path -name "*.xmp" But how can I run two commands on the output? keep file path and name in removelist.txt and remove it.
With GNU find's -fprint and -delete actions:

find . -name "*.xmp" -fprint "removelist.txt" -delete

-fprint file - print the full file name into file file. If file does not exist when find is run, it is created; if it does exist, it is truncated.
-delete - delete files
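If you would rather review the list before anything is deleted, a two-step variant works too (this sketch assumes GNU findutils and file names without embedded newlines):

    find /path -name '*.xmp' -type f > removelist.txt   # record first, delete nothing yet
    # inspect removelist.txt, then:
    xargs -d '\n' rm -v -- < removelist.txt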
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419798", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/195932/" ] }
419,842
I have some C source code which was originally developed on Windows. Now I want to work on it in Linux. There are tons of include directives that should be changed to Linux format, e.g: #include "..\includes\common.h" I am looking for a command-line to go through all .h and .c files, find the include directives and replace any backslash with a forward slash.
find + GNU sed solution:

find . -type f -name "*.[ch]" -exec sed -i '/^#include / s|\\|/|g' {} +

"*.[ch]" - wildcard to find files with extension .c or .h
-i : GNU sed extension to edit the files in-place without backup. FreeBSD/macOS sed have a similar extension where the syntax is -i '' instead.
/^#include / - only act on lines which start with the pattern #include
s|\\|/|g - substitute all backslashes \ with forward slashes / (\ escaped with a backslash for literal representation).
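To preview the effect on a single file before editing anything in place (the path src/main.c below is just a placeholder):

    sed '/^#include / s|\\|/|g' src/main.c | diff -u src/main.c -

An empty diff means nothing would change; otherwise you see exactly which include lines would be rewritten.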
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/419842", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/122146/" ] }
420,011
I'm not going for complicated tools like AppArmor complain mode, I need easy tools to tell me which files are accessed by a specific program.
Per Chris Down, you can use strace -p to examine an already running process, to see what files it opens from now until the time you terminate strace or the process itself finishes. If you want to see files opened for the entire duration of a process, right from the start, use strace with the executable name. Adding -f ensures that any forked sub-processes also get reported. Example # strace -e open -f /bin/idopen("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3open("/lib64/libselinux.so.1", O_RDONLY|O_CLOEXEC) = 3open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3open("/lib64/libpcre.so.1", O_RDONLY|O_CLOEXEC) = 3open("/lib64/libdl.so.2", O_RDONLY|O_CLOEXEC) = 3open("/lib64/libpthread.so.0", O_RDONLY|O_CLOEXEC) = 3open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3open("/proc/thread-self/attr/current", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/proc/self/task/1581/attr/current", O_RDONLY|O_CLOEXEC) = 3open("/usr/share/locale/locale.alias", O_RDONLY|O_CLOEXEC) = 3open("/usr/share/locale/en_US.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)open("/usr/share/locale/en_US.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)open("/usr/share/locale/en_US/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)open("/usr/share/locale/en.UTF-8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)open("/usr/share/locale/en.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)open("/usr/share/locale/en/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)open("/etc/nsswitch.conf", O_RDONLY|O_CLOEXEC) = 3open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3open("/lib64/libnss_files.so.2", O_RDONLY|O_CLOEXEC) = 3open("/etc/passwd", O_RDONLY|O_CLOEXEC) = 3open("/etc/group", O_RDONLY|O_CLOEXEC) = 3open("/etc/group", O_RDONLY|O_CLOEXEC) = 3uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023+++ exited with 0 +++# Using lsof to see what files a process currently has open # lsof -p $(pidof NetworkManager)COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAMENetworkMa 722 root cwd DIR 253,0 224 64 /NetworkMa 722 root rtd DIR 253,0 224 64 /NetworkMa 722 root txt REG 253,0 2618520 288243 /usr/sbin/NetworkManagerNetworkMa 722 root mem REG 253,0 27776 34560 /usr/lib64/libnss_dns-2.17.so[...]# If you have SystemTap, you can monitor the entire host for files being opened. [root@localhost tmp]# cat mon#!/usr/bin/env stapprobe syscall.open { printf ("pid %d program %s opened %s\n", pid(), execname(), filename) }# ./monpid 14813 program touch opened "/etc/ld.so.cache"pid 14813 program touch opened "/lib64/libc.so.6"pid 14813 program touch opened 0x7f7a8c6ec8d0pid 14813 program touch opened "foo2"[...]#
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/420011", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272242/" ] }
420,015
I came across with these two lines, and though I've been trying to figure out what they do, I'm still in doubt about their meaning in the code.The piece of code I am talking about is: my $mapped_from = ($num_phones_in == 60)? = $1 : $2;my $mapped_to = ($num_phones_out == 48)? = $2 : $3; I don't really understand what a variable between parentheses followed by a question mark do ()?. And also I don't know what those two numbers with dollar sign (as variables) separated by colon mean. To give you more details about the code, in this part I'm working with a file that look like this: ah X /au u aU Where the columns have 60, 48 and 39 lines respectively. I would really appreciate if someone could give me a clue since I am a bit lost.
my $mapped_from = ($num_phones_in == 60)? = $1 : $2; That's a syntax error. The test ? val_true : val_false is the "ternary operator" , an inline form of an if-else statement. If test is true, it evaluates to the val_true part, and if test is false, it evaluates to the val_false part. The question mark has nothing to do with the parenthesis. But = $1 isn't a valid expression. Without the extra = , ($num_phones_in == 60)? $1 : $2; would check if $num_phones_in is sixty, and return $1 or $2 accordingly. $1 and $2 (etc.) are variables that refer to the contents of the capture groups in the previous regex. If you did "afoob" =~ /a(.*)b/ , then $1 would contain foo . See Variables related to regular expressions in perlvar .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/420015", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272793/" ] }
420,066
I recently noticed I am only getting 100Mbit/s of througput on my gigabit home network. When looking into it with ethtool I found my ArchLinux Box was using 100baseT/Half as link speed instead of 1000baseT/Full which its NIC and the switch connected to it support.I am not sure why but the NIC seems to not be advertising its link-modes according to ethtool : Settings for enp0s31f6: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Advertised link modes: Not reported Advertised pause frame use: No Advertised auto-negotiation: No Speed: 100Mb/s Duplex: Half Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: off MDI-X: on (auto) Supports Wake-on: pumbg Wake-on: g Current message level: 0x00000007 (7) drv probe link Link detected: yes When enabling auto-negotioation explicitly by running ethtool --change enp0s31f6 autoneg on it seems to advertise all its modes to the switch and uses 1000baseT/Full . That only works most of the time and for a while though. When I unplug the cable and pluggin it back in switches autoneg off most of the time , but not always. Also, sometimes setting autoneg to on immediately disables it again.Rebooting also disables it again. Note that auto-negotiation does not get disabled when unplugging but when replugging. dsmeg logs this when autoneg was enabled and I plug in a cable: [153692.029252] e1000e: enp0s31f6 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx[153699.577779] e1000e: enp0s31f6 NIC Link is Up 100 Mbps Half Duplex, Flow Control: None[153699.577782] e1000e 0000:00:1f.6 enp0s31f6: 10/100 speed: disabling TSO I am using the intel NIC of my asrock motherboard (from ~2015) and an unmanaged switch (Netgear GS208).
After hours of searching I found the solution in the most obvious place: NetworkManager seems to somehow have disabled autonegotiation right in the settings for my ethernet port: The weird part is that even after knowing NetworkManager can change the ethernet link-mode I cannot find even a single source online detailing that functionality. The only way according to the google search results I found is setting it via ethtool .
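If you prefer to fix it from the command line rather than the GUI, NetworkManager exposes the same setting as connection properties. The profile name below is an example and this is only a sketch (not verified on every NetworkManager version):

    nmcli connection show                                    # find the wired profile name
    nmcli -f 802-3-ethernet connection show "Wired connection 1"
    nmcli connection modify "Wired connection 1" 802-3-ethernet.auto-negotiate yes \
        802-3-ethernet.speed 0 802-3-ethernet.duplex ""
    nmcli connection up "Wired connection 1"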
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/420066", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/129982/" ] }
420,071
I have trouble executing binary file, both from the GUI and the command line. I am running Ubuntu 17.10 . Here are the logs : julien@julien-PC:~/JEUX/ROMS/Logiciels/snes9x-1.53$ lsdata docs snes9x-gtkjulien@julien-PC:~/JEUX/ROMS/Logiciels/snes9x-1.53$ ./snes9x-gtk bash: ./snes9x-gtk: Aucun fichier ou dossier de ce type PS : The last line is in French but it means "no file or directory of this type" . I also have this issue with the Super Meat Boy installer I have downloaded from Humble Bundle. UPDATE : Using file , I have : julien@julien-PC:~/JEUX/ROMS/Logiciels/snes9x-1.53$ file ./snes9x-gtk ./snes9x-gtk: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.9, not stripped I tried the command /lib/ld-linux.so.2 ./snes9x-gtk (because it is the interpreter) and it was not found. After some research on Internet, I found it in the package lib32z1 , and after installing it, when I retried the command, I get error while loading shared libraries: libX11.so.6: cannot open shared object file: No such file or directory . By using the command ldd I have as output : julien@julien-PC:~/JEUX/ROMS/Logiciels/snes9x-1.53$ ldd ./snes9x-gtk linux-gate.so.1 => (0xf7f82000) libX11.so.6 => not found libdl.so.2 => /lib32/libdl.so.2 (0xf7f5b000) libXext.so.6 => not found libGL.so.1 => not found [...] libm.so.6 => /lib32/libm.so.6 (0xf7e54000) libgcc_s.so.1 => not found libc.so.6 => /lib32/libc.so.6 (0xf7c81000) /lib/ld-linux.so.2 (0xf7f84000) There is a lot of missing dependencies... I tried to fix both libX11 and libXext, but I had issues : I assumed libX11 was in package libx11-6 but after trying to install it, it says that it is already installed. Same for libXext and package libxext-6 . Do you have any suggestions ? Thanks.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/420071", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272824/" ] }
420,205
Creating custom menu entry, got stuck on this command: exec tail -n +3 $0 Tried it in terminal, got weird result, cannot understand, what this command exactly does and why grub needs it. Could you explain, please?
tail -n +3 prints its input, starting at line 3 (man page). $0 is the name of the script in a shell script (Bash special parameters) and exec (Bash builtins) replaces the script with the command. You probably have something like this (like in /etc/grub.d/40_custom on my system):

#!/bin/sh
exec tail -n +3 $0
foobar

When you run the script, it replaces itself with tail reading the script itself, so the rest of the script gets copied to its output. I think grub has a bunch of scripts to create its config, they're probably executed as grubscript.sh >> grub-config-file or something to that effect. The scripts could use any logic they need to produce the output, but the exec tail trick allows you to just dump some fixed lines in the output without changing the logic the script is started with. In addition to that magic incantation, Debian's /etc/grub.d/40_custom also includes a comment telling the user to Simply type the menu entries you want to add after this comment.
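So a complete custom file would typically look something like this (the disk, kernel and initrd paths below are placeholders, not taken from any real system):

    #!/bin/sh
    exec tail -n +3 $0
    # Everything from here on is copied verbatim into grub.cfg by grub-mkconfig.
    menuentry "My custom entry (example)" {
        set root='hd0,msdos1'
        linux /vmlinuz root=/dev/sda1
        initrd /initrd.img
    }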
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/420205", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272727/" ] }
420,211
Which distribution is the one in the picture below. More precisely, in which distribution can I find that top bar with the navigation numbers on the left ?
Some random distro that happens to be running i3 window manager. https://i3wm.org/ Per i3wm site the window manager is distributed in Debian, Arch, Gentoo, Ubuntu, FreeBSD, NetBSD, OpenBSD, OpenSUSE, Megeia, Fedora, Exherbo, PiBang and Slackware.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/420211", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272913/" ] }
420,219
I read up on `which`, but I still cannot really get the difference. I am running zsh 5.4.2 on 64-bit debian-buster. Both which and whence are shell-builtins. Can people point out where whence would be more appropriate than which and vice-versa?

/home/shirish> zsh --version
zsh 5.4.2 (x86_64-debian-linux-gnu)
/home/shirish> type -a which
which is a shell builtin
which is /usr/bin/which
which is /bin/which
/home/shirish> type -a whence
whence is a shell builtin
which was a csh command (well a csh script that read your ~/.cshrc ), whence was the Korn shell's answer to csh 's which , type the Bourne shell one, command -v/V the POSIX one... zsh implements ksh 's whence with a few extensions, but also provides a which alias for the csh junkies and type / command -v/V for POSIX compliance which are just the same command but with different default behaviour. which is whence -c ( c for csh ) type is whence -v (more verbose whence ) where is whence -ca POSIX command -v is like whence POSIX command -V is like whence -v You'll find some more information (though in a bit of a messy way, sorry) at Why not use "which"? What to use then?
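Put differently, these zsh commands all answer the same question ("what would ls run?"); only the output style and heritage differ, using the flag equivalences listed above:

    whence -v ls     # ksh-style, verbose  - same as `type ls`
    whence -c ls     # csh-style output    - same as `which ls`
    whence -ca ls    # all matches in path - same as `where ls`
    command -V ls    # POSIX spelling of `whence -v`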
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/420219", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
420,221
lsb_release & uname [wellbye@AY130622174524343529Z:~]lsb_release -a LSB Version: core-9.20160110ubuntu0.2-amd64:core-9.20160110ubuntu0.2-noarch:security-9.20160110ubuntu0.2-amd64:security-9.20160110ubuntu0.2-noarch Distributor ID: Ubuntu Description: Ubuntu 16.04.3 LTS Release: 16.04 Codename: xenial[wellbye@AY130622174524343529Z:~]uname -a Linux AY130622174524343529Z 4.4.0-105-generic #128-Ubuntu SMP Thu Dec 14 12:42:11 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux /etc/fstab : UUID=e2048966-750b-4795-a9a2-7b477d6681bf / ext4 errors=remount-ro 0 1 /dev/xvdb1 /newdisk ext3 rw,user,noauto,exec,utf8 0 0 sudo mount -a has neither effect nor error, but manually mounting works: sudo mount -t auto /dev/xvdb1 /newdisk/ what's the problem with fstab ?
I see you have the noauto flag set. This means "don't mount with the -a flag" From man 5 fstab noauto do not mount when "mount -a" is given (e.g., at boot time)
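So, if you do want the filesystem picked up by mount -a (and at boot), the same line with noauto removed and your other options kept as-is would be:

    /dev/xvdb1  /newdisk  ext3  rw,user,exec,utf8  0  0

Alternatively, keep noauto and mount it on demand with just mount /newdisk - with the entry present in fstab, naming the mount point is enough.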
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/420221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247962/" ] }
420,242
Let's say my IP is 198.51.100.27 and my friend's IP is 203.0.113.11. We both are connected to internet behind a standard consumer ISP router. How can we send a few bytes to each other without using a 3rd party server? (and without having to do router port forwarding configuration) I've heard about netcat or ncat but I'm not sure how I could use it to send "hello world" to my friend, and how he would see this message in his terminal Should he do: ncat -C 198.51.100.27 80 # this IP is mine and me: ncat -l 203.0.113.11 80 < echo "hello world" # this IP is my friend's IP ? I'm even not sure if netcat / ncat is the right tool for this. I also looked at chownat / pwnat but I couldn't figure out how to use it in such a simple example: just sending / receiving "hello world". Notes: I don't want to connect to my friend's computer via SSH or SFTP. I just want to send him "hello world" or send short text messages to each other. I don't have a precise goal other than just understanding how bytes can be sent directly peer-to-peer without a centralized server. So if I can just send him one or two bytes, such that he sees them in his console, that will be fine!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/420242", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59989/" ] }
420,250
OS - Fedora 27 Kernel - 4.14.14-300.fc27.x86_64 Whenever I plug in any usb drive or try to unmount an already plugged in usb drive I get logged out from gnome session. After logging back in again, I can access the drive and work normally.Why is this happening? Please inform me if I need to attach an kind of log.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/420250", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272940/" ] }
420,328
We can use the syntax ${var##pattern} and ${var%%pattern} to extract the last and first section of an IPv4 address: IP=109.96.77.15echo IP: $IPecho 'Extract the first section using ${var%%pattern}: ' ${IP%%.*}echo 'Extract the last section using ${var##pattern}: ' ${IP##*.} How we can extract the second or third section of an IPv4 address using parameter expansion? Here is my solution: I use an array and change the IFS variable. :~/bin$ IP=109.96.77.15:~/bin$ IFS=. read -a ArrIP<<<"$IP":~/bin$ echo ${ArrIP[1]} 96:~/bin$ printf "%s\n" "${ArrIP[@]}" 109 96 77 15 Also I have written some solutionsusing the awk , sed , and cut commands. Now, my question is: Is there a simpler solution based on parameter expansion which does not use array and IFS changing?
Assuming the default value of IFS you extract each octet into it's own variable with: read A B C D <<<"${IP//./ }" Or into an array with: A=(${IP//./ })
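For example, to pull out the second and third octets with the array form (bash arrays are zero-indexed):

    IP=109.96.77.15
    A=(${IP//./ })
    echo "second octet: ${A[1]}"   # 96
    echo "third octet:  ${A[2]}"   # 77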
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/420328", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64110/" ] }
420,365
Suppose I have a folder containing files with names like

file1.txt
file2.txt
file3.txt

etc. I would like to run a command on each of them, like so:

mycommand file1.txt -o file1-processed.txt
mycommand file2.txt -o file2-processed.txt
mycommand file3.txt -o file3-processed.txt

etc. There are several similar questions on this site - the difference is that I want to insert the -processed text into the middle of the file name, before the extension. It seems like find should be the tool for the job. If it wasn't for the -o flag I could do

find *.txt -exec mycommand "{}" ";"

However, the {} syntax gives the whole file name, e.g. file1.txt etc., so I can't add the "-processed" in between the filename and its extension. A similar problem exists with using a simple bash for loop. Is there a simple way to accomplish this task, using find or otherwise?
If all the files to be processed are in the same folder, you don't need to use find , and can make do with native shell globbing. for foo in *.txt ; do mycommand "${foo}" -o "${foo%.txt}-processed.txt"done The shell idiom ${foo%bar} removes the smallest suffix string matching the pattern bar from the value of foo , in this case the .txt extension, so we can replace it with the suffix you want.
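If you later do need to recurse into sub-folders, the same parameter expansion works inside a find ... -exec sh -c wrapper (mycommand as in the question; this is just a sketch of the recursive variant):

    find . -type f -name '*.txt' -exec sh -c '
      for f do
        mycommand "$f" -o "${f%.txt}-processed.txt"
      done' sh {} +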
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/420365", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273047/" ] }
420,452
I am running Linux Mint 18.3 with Gnome 3.18 as my desktop. I have been building a loading screen for an application I have installed (Mycroft AI). I have the animation, I have it pop up on loading, i have it closing as soon as it finishes loading. What I DO NOT have is a loading screen with no title bar (what I have is in the screen shot below). As you can see, i still have the title bar. How do I remove it? The fewer apps I have to install to get this to work, the better. Thanks in advance!
Title bars / window decorations are usually specific to the window manager in use. GNOME doesn't support a built-in method to launch a window/program without decorations, unlike window managers such as Openbox . A solution that works within GTK across any window manager is to use GTK's gtk_window_set_decorated() , with more information here .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/420452", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/261787/" ] }
420,513
Will the executable of a small, extremely simple program, such as the one shown below, that is compiled on one flavor of Linux run on a different flavor? Or would it need to be recompiled? Does machine architecture matter in a case such as this? int main(){ return (99);}
It depends. Something compiled for IA-32 (Intel 32-bit) may run on amd64 as Linux on Intel retains backwards compatibility with 32-bit applications (with suitable software installed). Here's your code compiled on RedHat 7.3 32-bit system (circa 2002, gcc version 2.96) and then the binary copied over to and run on a Centos 7.4 64-bit system (circa 2017): -bash-4.2$ file codecode: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped-bash-4.2$ ./code-bash: ./code: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory-bash-4.2$ sudo yum -y install glibc.i686...-bash-4.2$ ./code ; echo $?99 Ancient RedHat 7.3 to Centos 7.4 (essentially RedHat Enterprise Linux 7.4) is staying in the same "distribution" family, so will likely have better portability than going from some random "Linux from scratch" install from 2002 to some other random Linux distribution in 2018. Something compiled for amd64 would not run on 32-bit only releases of Linux (old hardware does not know about new hardware). This is also true for new software compiled on modern systems intended to be run on ancient old things, as libraries and even system calls may not be backwards portable, so may require compilation tricks, or obtaining an old compiler and so forth, or possibly instead compiling on the old system. (This is a good reason to keep virtual machines of ancient old things around.) Architecture does matter; amd64 (or IA-32) is vastly different from ARM or MIPS so the binary from one of those would not be expected to run on another. At the assembly level the main section of your code on IA-32 compiles via gcc -S code.c to main: pushl %ebp movl %esp,%ebp movl $99,%eax popl %ebp ret which an amd64 system can deal with (on a Linux system--OpenBSD by contrast on amd64 does not support 32-bit binaries; backwards compatibility with old archs does give attackers wiggle room, e.g. CVE-2014-8866 and friends). Meanwhile on a big-endian MIPS system main instead compiles to: main: .frame $fp,8,$31 .mask 0x40000000,-4 .fmask 0x00000000,0 .set noreorder .set nomacro addiu $sp,$sp,-8 sw $fp,4($sp) move $fp,$sp li $2,99 move $sp,$fp lw $fp,4($sp) addiu $sp,$sp,8 j $31 nop which an Intel processor will have no idea what to do with, and likewise for the Intel assembly on MIPS. You could possibly use QEMU or some other emulator to run foreign code (perhaps very, very slowly). However! Your code is very simple code, so will have fewer portability issues than anything else; programs typically make use of libraries that have changed over time (glibc, openssl, ...); for those one may also need to install older versions of various libraries (RedHat for example typically puts "compat" somewhere in the package name for such) compat-glibc.x86_64 1:2.12-4.el7.centos or possibly worry about ABI changes (Application Binary Interface) for way old things that use glibc, or more recently changes due to C++11 or other C++ releases. One could also compile static (greatly increasing the binary size on disk) to try to avoid library issues, though whether some old binary did this depends on whether the old Linux distribution was compiling most everything dynamic (RedHat: yes) or not. On the other hand, things like patchelf can rejigger dynamic (ELF, but probably not a.out format) binaries to use other libraries. However! Being able to run a program is one thing, and actually doing something useful with it another. 
Old 32-bit Intel binaries may have security issues if they depend on a version of OpenSSL that has some horrible and not-backported security problem in it, or the program may not be able to negotiate at all with modern web servers (as the modern servers reject the old protocols and ciphers of the old program), or SSH protocol version 1 is no longer supported, or ...
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/420513", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/272846/" ] }
420,519
i have two variables (txt and a line number) i want to insert my txt in the x line card=$(shuf -n1 shuffle.txt)i=$(shuf -i1-52 -n1) 'card' is my txt : a card randomly selected in a shuffle 'deck'and i want to insert it at a random line (i)
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/420519", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273172/" ] }
420,539
df command can be used to list all mounted folder spaces. The output includes local disk, remote disk. Is there a way for me to get the disk usage only for the local disk? I'd like to filter out other types of mounted points.
$ df -lh would do the work. Where [from man page], -l, --local - limit listing to local file systems -h, --human-readable - print sizes in human readable format (e.g., 1K 234M 2G)
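With GNU df you can also narrow the listing by filesystem type, instead of or in addition to -l:

    df -lh                   # local filesystems only
    df -h -t ext4 -t xfs     # only the listed filesystem types
    df -h -x tmpfs -x nfs4   # everything except the listed types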
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/420539", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/168679/" ] }
420,606
For testing purposes ,I would like to run a command check every 15 minutes past the hour.I'm a little bit confused about the correct timeframe syntax of the crontab : Is this correct : */15 * * * * or this one : 15 * * * * I think its the first one ,since the second will run 15 minutes (once) after 1 hour passed.Any ideas?
The first one will (with most common cron implementations) run the command every 15 minutes and is equivalent to 0,15,30,45 * * * * . The second one will run 15 minutes past the hour, every hour. This is described in the crontab(5) manual on your system ( man 5 crontab ).
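As crontab entries the two variants look like this (the script path is a placeholder):

    # m     h  dom mon dow  command
    */15    *  *   *   *    /path/to/check.sh    # at :00, :15, :30 and :45 of every hour
    15      *  *   *   *    /path/to/check.sh    # once per hour, at 15 minutes past

You can confirm what is installed with crontab -l.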
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/420606", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269970/" ] }
420,609
I wanted to check available serial ports. How the script should like if I want to : Check available devices from ttyUSBx If anyone device is plugged -> run first program After that, if plugged devices are more than 1 run second program
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/420609", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270609/" ] }
420,640
Using NetworkManager on Arch Linux on a MacBookPro14,3, I am unable to connect to any wireless network. I've tried connecting to a number of different WiFi networks (home, mobile hotspot, work) all with the same result. I've tried doing this with both nmcli and nmtui . Example: $ nmcli dev wifi connect <SSID> password <password>Error: Connection activation failed: (7) Secrets were required, but not provided. Looking at logs with journalctl shows: wpa_supplicant[PID]: wlp3s0: CTRL-EVENT-ASSOC-REJECT bssid=00:00:00:00:00:00 status_code=16 and NetworkManager[PID]: <info> [TIMESTAMP] device (wlp3s0): state change: need-auth -> failed (reason 'no-secrets', sys-iface-state: 'managed') The Macbook has a Broadcom BCM43602 with driver brcmfmac. NetworkManager and wpa_supplicant are installed and enabled.
It seems that NetworkManager automatically reuses an existing connection. In case your existing connection does not have any secrets stored, the new connection attempt will not update the existing connection and fail due to missing secrets. So in my case these steps helped: nmcli con delete <SSID> Then reconnect using nmcli dev wifi connect <SSID> password <password>
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/420640", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/202042/" ] }
420,655
Accidentially, I found out that wc counts differently depending on how it gets the input from bash: $ s='hello'$ wc -m <<<"$s"6$ wc -c <<<"$s"6$ printf '%s' "$s" | wc -m5$ printf '%s' "$s" | wc -c5 Is this - IMHO confusing - behaviour documented somewhere? What does wc count here - is this an assumed newline?
The difference is caused by a newline added to the here string. See the Bash manual : The result is supplied as a single string, with a newline appended, to the command on its standard input (or file descriptor n if n is specified). wc is counting in the same way, but its input is different.
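You can make the extra byte visible with od; the here-string input ends in a newline, the printf version does not:

    s='hello'
    printf '%s' "$s" | od -c    # h   e   l   l   o
    od -c <<<"$s"               # h   e   l   l   o  \n   <- the appended newline that wc counts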
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/420655", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123669/" ] }
420,667
How to check if numbers from list file are increasing? Example list1:

658
659
663

will get "OK". Example list2:

658
664
663

will get "FAIL". Example list3:

23
24
25
26

will get "OK".
You can use sort -nc filename to validate if the file is in incremental order or not (containing numbers only). sort -n -c filename >/dev/null 2>&1 && echo "OK" || echo "FAIL" Or in short (note the upper -C " like -c, but do not report first bad line "), also using -u option to check for a strictly ascending order as well as -g option to have more number formats to be supported (like +2 , 0x10 , 1.2e+3 , infinity , ... ) suggested by @StéphaneChazelas : sort -guC filename && echo "OK" || echo "FAIL" Note: if you don't want report "FAIL" on the repeated same numbers, omit the -u option at above.
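An equivalent single-pass check written in awk (assumes one number per line and, like the -u variant above, treats a repeated number as a failure):

    awk 'NR > 1 && $1 <= prev { exit 1 } { prev = $1 }' filename && echo "OK" || echo "FAIL"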
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/420667", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/246468/" ] }
420,678
I just installed Debian 9 on my laptop, however Wifi isn't working and I'm not sure if my graphic card is either. I'm sure that it's just a lack of drivers, but I've never actually had to update drivers on Linux before. So how do I do that?
In Debian, if you run into issues with wi-fi or graphics support, there are usually two things to check. The first is the kernel, especially for graphics. You can usually get a newer kernel by looking at backports ; currently, the Debian 9 kernel is 4.9.320 , but 4.19.118 is available as a backport with better support for newer graphics adapters. This is particularly true for Intel GPUs. To install a kernel from backports, proceed as follows: Add the backports repository: echo deb http://http.debian.net/debian stretch-backports main contrib non-free > /etc/apt/sources.list.d/stretch-backports.list On Debian 10, replace stretch with buster ; on Debian 11, with bullseye . (Yes, we’re adding contrib and non-free , they’ll come in handy later...) Install the backported kernel: apt update apt -t stretch-backports install linux-image-amd64 This won’t replace your current kernel, so you can always boot that instead if things go wrong. Again, use buster-backports instead of stretch-backports on Debian 10, and bullseye-backports on Debian 11. The second is firmware. Most GPUs, and many wi-fi adapters, need firmware, which might not be installed by default (because source code is generally not available, so it’s not made available as part of Debian, but instead in the non-free repositories). To install the available firmware, from backports too (might as well): apt install -t stretch-backports firmware-linux As before, use buster-backports instead of stretch-backports on Debian 10, and bullseye-backports on Debian 11. In both cases, reboot to pick up the new or improved hardware support...
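After rebooting, a few quick checks help confirm whether the new kernel and firmware are actually in use (nothing here changes the system):

    uname -r                  # confirm you booted the backported kernel
    dmesg | grep -i firmware  # look for "failed to load" complaints
    lspci -k                  # shows which kernel driver/module is bound to each device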
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/420678", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273303/" ] }
420,718
I thought [ 1 -eq $1 ] && echo "yes" || echo "no" Acts like if [ 1 -eq $1 ]; then echo "yes"else echo "no"fi But, when I run this script ( nocmd is a non-existing command) #!/bin/bash[ 1 -eq $1 ] && nocmd "yes" || echo "no" I get a strange output for parameter '1': me@ubuntu:/tmp$ ./ddd.sh 0nome@ubuntu:/tmp$ ./ddd.sh 1./sh.sh: line 3: nocmd: command not foundno Looks like it acts like: if [ 1 -eq $1 ]; then nocmd "yes" if [ $? -ne 0 ]; then echo "no" fielse echo "no"fi Is it fine? Am I missing something?
Except for the overall exit status, it acts like:

if ! {
  [ 1 -eq $1 ] && nocmd "yes"
}
then
  echo no
fi

In:

A || B

B is executed iff A fails. That's an OR operator. In your case A is [ 1 -eq $1 ] && nocmd "yes", where nocmd is run iff [ succeeds (an AND operator), in which case the exit status of A will be that of nocmd. In other words, echo no will be executed if either [ or nocmd "yes" fails (bearing in mind that nocmd is only run if [ succeeds). Those x && y || z are dirty hacks. They are best avoided for that very reason. Use an if/then/else construct if you do want if/then/else logic. Use x && y || z only if you want z to be run unless both x and y succeeded. Even in:

cmd && echo OK || echo >&2 KO

the echo OK could fail under some pathological conditions (like stdout going to a file on a full filesystem), and echo >&2 KO could end up being executed as well.

$ bash -c 'true && echo OK || echo KO >&2' > /dev/full
bash: line 0: echo: write error: No space left on device
KO
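To see the difference concretely (safe to paste into a shell, nothing is modified):

    # the middle command failing triggers the || branch even though the test succeeded
    [ 1 -eq 1 ] && false || echo "no"                 # prints: no

    # with if/else, the else branch depends only on the test itself
    if [ 1 -eq 1 ]; then false; else echo "no"; fi    # prints nothing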
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/420718", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208735/" ] }
420,768
I have got this script, but it is not working. It is because it is failing to evaluate the number comparison in the if statement, I think. #!/bin/bash{ read __ WIDTH; read __ HEIGHT; read __ __ BORDER_WIDTH; } < <(xwininfo -id "$(xdotool getactivewindow)" | grep -o -e 'Height:.*' -e 'Width:.*' -e 'Border width:.*')echo "Height: $HEIGHT, Width: $WIDTH, Border width: $BORDER_WIDTH"x = 1920if($WIDTH == x)then wmctrl -r :ACTIVE: -b toggle,maximized_vert,maximized_horz else xdotool key Ctrl+F12fi How can I fix this?
There are several issues with the script: bash tests are either done with test , [ .. ] or [[ .. ]] ; ( .. ) means sub-shell. Assignments are made without spaces, x = 1920 will call the command x with the parameters = and 1920 . Use x=1920 instead. Variable names need to be prefixed with a dollar sign when you use them. So == x is bad and == $x is good. (Except within arithmetic evaluations or expansions: (( ... )) or $(( ... )) , thanks to comment by Kusalananda ). Numbers should be compared with -eq , = is for string comparison. In your case it should also work since the numbers are likely to be stored identically, but it's better to use the conceptually correct operator. == is a non-standard equivalent to = . You should get used to double quoting variables everywhere when possible, which prevents globbing for instance. I'll just fix the lines starting from x = 1920 , the fixed version is:
x=1920
if [ "$WIDTH" -eq "$x" ]
then
    wmctrl -r :ACTIVE: -b toggle,maximized_vert,maximized_horz
else
    xdotool key Ctrl+F12
fi
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/420768", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/186352/" ] }
420,786
There are many good answers on how to tunnel VNC traffic using SSH . When doing something like... ssh user@host -L 5900:localhost:5900 x11vnc ...you can connect to the SSH tunnel on localhost:5900 (on the client side). But isn't host:5900 also open for attackers? How can I make x11vnc listen only to the traffic coming from the SSH tunnel? I'd prefer something temporary and not messing around with iptables or so. I think the -listen parameter is not what I need, because it listens to the interface with the given IP address: -listen ipaddr listen for connections only on network interface with addr ipaddr. '-listen localhost' and hostname work too. ...copied from here .
Turns out that -listen is what I need. By listening to the device with addr localhost it listens only to the loopback device: ssh user@host -L 5900:localhost:5900 x11vnc -listen localhost
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/420786", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9023/" ] }
420,814
Per the IPv6 standard, Linux assigns IPv6 link local addresses to interfaces. These interfaces are always assigned /64 addresses. Is this correct? I would think they should be /10. Why are they assigned /64 addresses?
The address space allocated to link-local addresses is fe80::/10, but the next 54 bits are defined to be all zeroes, so the effective range is fe80::/64. Which puts it in line with the usual custom for IPv6 addresses. RFC 4291 : 2.5.6. Link-Local IPv6 Unicast Addresses Link-Local addresses are for use on a single link. Link-Local addresses have the following format: | 10 | | bits | 54 bits | 64 bits | +----------+-------------------------+----------------------------+ |1111111010| 0 | interface ID | +----------+-------------------------+----------------------------+
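You can see this directly on a Linux box; the link-local address that the kernel auto-configures on an interface is reported with a /64 prefix length (interface name and address below are just examples, output trimmed):
$ ip -6 addr show dev eth0 scope link
    inet6 fe80::a00:27ff:fe4e:66a1/64 scope link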
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/420814", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8068/" ] }
420,891
I have some long log files. I can view the last lines with tail -n 50 file.txt , but sometimes I need to edit those last lines. How do I jump straight to the end of a file when viewing it with nano ?
Open the file with nano file.txt . Now type Ctrl + _ and then Ctrl + V
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/420891", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273498/" ] }
420,894
How to create a script that will create a new user with a blank password in Solaris 10?
Open the file with nano file.txt . Now type Ctrl + _ and then Ctrl + V
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/420894", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273500/" ] }
420,905
I am trying to install a program for my user because I don't have sudo privileges. I tried to install dos2unix package as follows: apt-get source dos2unix ./configure --prefix=$HOME/myappsmakemake install But I get the following error: E: You must put some 'source' URIs in your sources.list As I cannot edit sources.list, is there a way to make apt-get read another file?
You can use another sources.list , and as muru pointed out, it’s as simple as apt -o "Dir::Etc::sourcelist=/path/to/your/sources.list" source dos2unix The documentation suggests that this isn’t possible except in a configuration file, but it turns out the documentation is wrong (see the revision history for the configuration file variant). Alternatively, you could clone the package source directly, if the package is maintained in a revision control system. apt showsrc dos2unix shows Vcs-Git: https://anonscm.debian.org/git/collab-maint/dos2unix.git so if you have git installed you can clone that. debcheckout , in the devscripts package, can automate that for you, but you probably don’t have that installed... See How to know the source repository of a package in debian? for details.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/420905", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273512/" ] }
420,906
Printing the memory limit of an unlimited docker container I get the value 9223372036854771712 which is 0x7FFFFFFFFFFFF000 (this is the same value as on the XUbuntu host machine). I couldn't find a reference that this is the Docker or Linux default value indicating an unlimited memory resource. Where does this value come from? Is it different between container virtualizations or Linux distributions/bitnesses?
The value comes from the cgroup setup in the memory management layer; by default, it’s set to PAGE_COUNTER_MAX , which is LONG_MAX / PAGE_SIZE on 64-bit platforms, and multiplied by PAGE_SIZE again when read . This confirms ilkkachu ’s explanation : the value is the maximum 64-bit signed integer, rounded to the nearest page (by dropping the last bits).
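You can reproduce the number with plain shell arithmetic, assuming LONG_MAX for a 64-bit long (9223372036854775807) and a 4096-byte page size; the integer division drops the low bits and the multiplication restores the page-aligned value:
$ echo $(( 9223372036854775807 / 4096 * 4096 ))
9223372036854771712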
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/420906", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/134763/" ] }
420,927
I am trying to batch-rename a bunch of files in my shell, and even though there is plenty of material about it on the internet, I cannot seem to find a solution for my specific case. I have a bunch of files that have (what appears to be) a "timestamp-id": abc_128390.pngabc_138493.pngabc_159084.png... that I'd like to exchange for a counter: abc_001.pngabc_002.pngabc_003.png... My (plenty) naïve approach would be something like: mv abc_*.png abc_{001..123}.png Also, I could not figure out a way to make it work with a for -loop. FWIW, unfortunately rename is not available on this particular system. Any advice would be greatly appreciated!
I can't think of a solution that handles incrementing the counter in a more clever way, but this should work:
i=0
for fi in abc_??????.png; do
    mv "$fi" abc_$i.png
    i=$((i+1))
done
It should be safe to use abc_*.png because it is expanded before the first mv is ever executed, but it can be useful to be very specific in that you only want files with a six-character timestamp at the end.
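If you want the zero-padded, 1-based counter shown in the question ( abc_001.png and so on), printf can do the padding; this is just a variation of the same loop and makes the same assumption that the files expand in the order you want them numbered:
i=1
for f in abc_??????.png; do
    mv "$f" "abc_$(printf '%03d' "$i").png"
    i=$((i+1))
done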
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/420927", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/257337/" ] }
420,930
Currently I hard coded the pane of which to send the keys to, I wonder how can I send the command to the right pane of the pane I'm currently focused on? Current: bind b send-keys -t 2 'make' Enter Would like to have: bind b send-keys -t toTheRightPane 'make' Enter
I can't think of a solution that handles incrementing the counter in a more clever way, but this should work: i=0for fi in abc_??????.png; do mv "$fi" abc_$i.png i=$((i+1))done It should be safe to use abc_*.png because it is expanded before the first mv is ever executed, but it can be useful to be very specific in that you only want files with a six-character timestamp at the end.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/420930", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270029/" ] }
420,934
The four letters should be in alphabetical order.For example, inux and ianauax are in the output, but ixnux and naiauax are not. I can only use grep to accomplish this task. I tried grep 'i\w*n\w*u\w*x\w*' but i failed because ixnux is in the output but it shouldn't be in the output ( ixnux is not a word which 'i','n','u','x' are in alphabet order)
I can't think of a solution that handles incrementing the counter in a more clever way, but this should work: i=0for fi in abc_??????.png; do mv "$fi" abc_$i.png i=$((i+1))done It should be safe to use abc_*.png because it is expanded before the first mv is ever executed, but it can be useful to be very specific in that you only want files with a six-character timestamp at the end.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/420934", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273529/" ] }
420,965
I'm trying to extract the local IP address using a cross-platform command. Until today, I was using this command: ip route get 1 | awk '{print $NF;exit}' But on Fedora 27 is not working because the output of ip route get 1 is: 0.0.0.1 via 192.168.1.1 dev en1 src 192.168.0.229 uid 1000 cache And I'm getting 1000 as the IP address. In all other systems that I have tried, the output has been always: 0.0.0.1 via 192.168.1.1 dev en1 src 192.168.0.229 I also tried using this command with same result: ip route get 255.255.255.255 | sed -n '/src/ s/.*src //p'
To print the address coming just after src (assuming all the relevant parts stay on the same line...): ip route get 1 | sed 's/^.*src \([^ ]*\).*$/\1/;q'
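An alternative that does not depend on how many fields precede src (still assuming everything of interest is on one line) is to walk the fields in awk and print whatever follows it:
ip route get 1 | awk '{for (i = 1; i < NF; i++) if ($i == "src") {print $(i+1); exit}}'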
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/420965", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/120489/" ] }
421,020
According to this and this , a subshell is started by using parentheses (…) . ( echo "Hello" ) According to this , this and this , a process is forked when the command is ended with a & echo "Hello" & The POSIX specification uses the word subshell in this page but doesn't define it and, also, on the same page, doesn't define "child process" . Both are using the kernel fork() function, correct? What is the exact difference, then, that makes some forks a "sub-shell" and other forks a "child process"?
In the POSIX terminology, a subshell environment is linked to the notion of Shell Execution Environment . A subshell environment is a separate shell execution environment created as a duplicate of the parent environment. That execution environment includes things like opened files, umask, working directory, shell variables/functions/aliases... Changes to that subshell environment do not affect the parent environment. Traditionally in the Bourne shell or ksh88 on which the POSIX specification is based, that was done by forking a child process. The areas where POSIX requires or allows commands to run in a subshell environment are those where traditionally ksh88 forked a child shell process. It doesn't however force implementations to use a child process for that. A shell can choose instead to implement that separate execution environment any way it likes. For instance, ksh93 does it by saving the attributes of the parent execution environment and restoring them upon termination of the subshell environment in contexts where forking can be avoided (as an optimisation, as forking is quite expensive on most systems). For instance, in:
cd /foo; pwd
(cd /bar; pwd)
pwd
POSIX does require the cd /bar to run in a separate environment and that to output something like:
/foo
/bar
/foo
It doesn't require it to run in a separate process. For instance, if stdout becomes a broken pipe, pwd run in the subshell environment could very well have the SIGPIPE sent to the one and only shell process. Most shells including bash will implement it by evaluating the code inside (...) in a child process (while the parent process waits for its termination), but ksh93 will instead, upon running the code inside (...) , all in the same process: remember it is in a subshell environment; upon cd , save the previous working directory (typically on a file descriptor opened with O_CLOEXEC), save the value of the OLDPWD, PWD variables and anything that cd might modify and then do the chdir("/bar") ; upon returning from the subshell, the current working directory is restored (with a fchdir() on that saved fd), and so is everything else that the subshell may have modified. There are contexts where a child process can't be avoided. ksh93 doesn't fork in:
var=$(subshell)
(subshell)
But does in:
{ subshell; } &
{ subshell; } | other command
That is, the cases where things have to run in separate processes so they can run concurrently. ksh93 optimisations go further than that. For instance, while in var=$(pwd) most shells would fork a process, have the child run the pwd command with its stdout redirected to a pipe, pwd write the current working directory to that pipe, and the parent process read the result at the other end of the pipe, ksh93 virtualises all that by neither requiring the fork nor the pipe. A fork and pipe would only be used for non-builtin commands. Note that there are contexts other than subshells for which shells fork a child process. For instance, to run a command that is stored in a separate executable (and that is not a script intended for the same shell interpreter), a shell would have to fork a process to run that command in it as otherwise it wouldn't be able to run more commands after that command returns.
In: /bin/echo "$((n += 1))" That is not a subshell, the command will be evaluated in the current shell execution environment, the n variable of the current shell execution environment will be incremented, but the shell will fork a child process to execute that /bin/echo command in it with the expansion of $((n += 1)) as argument. Many shells implement an optimisation in that they don't fork a child process to run that external command if it's the last command of a script or a subshell (for those subshells that are implemented as child processes). ( bash however only does it if that command is the only command of the subshell). What that means is that, with those shells, if the last command in the subshell is an external command, the subshell doesn't cause an extra process to be spawned. If you compare: a=1; /bin/echo "$a"; a=2; /bin/echo "$a" with a=1; /bin/echo "$a"; (a=2; /bin/echo "$a") there will be the same number of processes created, only in the second case, the second fork is done earlier so that the a=2 is run in a subshell environment.
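If you want to observe the fork that bash performs for (...) , you can compare $$ (which always reports the original shell's PID) with $BASHPID (which reports the PID of the process actually evaluating the code); this is bash-specific, and as discussed above ksh93 may not fork at all in the same situation:
$ echo "$$ $BASHPID"; ( echo "$$ $BASHPID" )
12345 12345
12345 12346
The PIDs shown are of course made up; the point is that the second number changes inside the parentheses while the first does not.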
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421020", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
421,065
For example, this is the list.txt
Joe 3
Jack 1
Ulysses 6
Fox 2
Cassidy 1
Jones 6
Kevin 7
Then the output should be 5 because there are 5 different values in the 2nd column. How should I finish this by only using sort cut wc uniq ? I have an idea: first use sort -k2n to sort the second column in increasing order and then use uniq to eliminate the second-column duplicated rows, and then the result would be like
Cassidy 1
Fox 2
Joe 3
Jones 6
Kevin 7
and then I use cut -d ' ' -f2 to list all the numbers like 1 2 3 6 7 and then I use wc -l to count the number of distinct values and this will return 5 . What should I do in the uniq part to eliminate the duplicated rows with the same number? Is there a simple way to accomplish this?
I would start with cut since you only care about uniqueness in column 2:
cut -d' ' -f2 list.txt
results in:
3
1
6
2
1
6
7
Now you want unique values; uniq will do that, but only if it's sorted. If you're going to sort, go ahead and use sort's -u flag:
cut -d' ' -f2 list.txt | sort -u
Results in:
1
2
3
6
7
and now you can use wc to count the number of lines of output:
cut -d' ' -f2 list.txt | sort -u | wc -l
which gives you: 5
Note that we're relying on a specific format for the list.txt file -- no spaces in people's names!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421065", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273529/" ] }
421,066
I'm using Kubuntu 17.10. Always after login, the notification below pops up. When I click it, it asks for my password and wants to install or remove packages – but without telling me what packages . I already searched the internet but couldn't find a way to identify what packages are needed. The standard apt upgrade is not affecting this popup. What program is causing this popup, and how can I see which packages it wants to install? Thanks!
The missing packages can be seen if installing the full language support in Terminal: sudo apt install $(check-language-support)
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/421066", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273159/" ] }
421,068
I am setting up a samba share, and of two shares that are identical, one is accessible and one is not. While looking into what might solve the problem I saw this command. I cannot find what it does though. What does "chcon -t samba_share_t /path/to/share" do? To elaborate on this question: why would I ever need to run this command on one share but not the other? Both shares were created the same way, by the same user and on the same computer.
It changes the SELinux context of the specified path to samba_share_t . This would be necessary if you have SELinux in enforcing mode on your system and the path being referred to was not previously designated as a Samba share (via SELinux labeling).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421068", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270255/" ] }
421,077
QNX4 operating system using Korn Shell. This is in a .profile file. export VARDIR=//1/usr/pvcs What does the //1/ represent?
For the most part, multiple slashes are equivalent to a single slash . There's one exception: paths beginning with exactly two slashes ( //foo/… , as opposed to /foo/… or ///foo/… ) have a different meaning on some Unix variants. The meaning is often to access a remote resource with a path like //hostname/dir1/dir2/dir3/file . (Windows does this too, with \\hostname\dir1\dir2\dir3\file .) QNX is one of those variants. On QNX4 with the FLEET distributed processing protocol, // followed by a number refers to that node. So //1/usr/pvcs on any node refers to the file /usr/pvcs on node 1. (Source: the QNX6 manual , I can't find official QNX4 documentation online.)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/421077", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/154371/" ] }
421,083
How to calculate a percentage of a number? For example, we set number=248 and we want to know what 80% of $number is, so how do we calculate it in bash? Expected output: 198 (exactly it is 198.4, but we want to round down with floor).
bash cannot do floating point math, but you can fake it for things like this if you don't need a lot of precision:
$ number=248
$ echo $(( number*80/100 ))
198
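For completeness: the truncation of integer division is already the floor you asked for (with non-negative numbers). If you ever want round-to-nearest instead, one common trick is to add half of the divisor before dividing; here is a case where the two differ (85% of 248 is 210.8):
$ number=248
$ echo $(( number*85/100 ))            # floor
210
$ echo $(( (number*85 + 50) / 100 ))   # round to nearest
211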
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/421083", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
421,132
I have tried very hard to figure this one out before actually posting here, but I can't seem to find any other examples of people solving this particular issue. I am running Ubuntu 17.10 I have written a custom function to handle tab completion for one of my scripts. The intention is that when I type in the name of my script and hit [TAB] that it list all of the files in /opt/use that end in ".use" (without displaying the ".use" or the leading path). I seem to have made that work so far. The files in /opt/use are as follows: blender-2.79.usechrome.useclarisse-3.6.useunity-2017.3.0p2.use The code for the function and the completion is: _use () { files=( `compgen -f -X "!*.use" /opt/use/` ) output=() for file in "${files[@]}"; do fileBaseName=`basename $file .use` output+=("$fileBaseName") done COMPREPLY=( ${output[@]} )}complete -F _use use Please don't judge me too harshly, I'm a graphic artist, not a programmer. :) Also, my "use" script is actually an alias for the following command: source /opt/scripts/use.sh Now when I type: use[SPACE][TAB][TAB] I successfully get a list of the files in /opt/use that end in ".use". So far so good. For example, I type "use[SPACE][TAB][TAB]", this is what it looks like: bvz@bvz-xps15-linux:~$ use blender-2.79 chrome clarisse-3.6 unity-2017.3.0p2 My first question is why I have to hit [TAB] twice? The first time just beeps. The second time it shows me my options. That isn't an issue for me, I just wonder if it is a clue as to my problem, which is this: If I type enough letters to be completely unique, the tab completion does not actually "complete" the line. It just leaves the entry exactly as I typed it and re-shows me the list of files in /opt/use. So, for example, if I type: use clar[TAB][TAB] instead of filling out the line to read: use clarisse-3.6 which is what I would expect (and what I am after) it leaves the line as: use clar and shows me underneath the full list of possible completions. Here is a sample: bvz@bvz-xps15-linux:~$ use clar[TAB][TAB]blender-2.79 chrome clarisse-3.6 unity-2017.3.0p2 bvz@bvz-xps15-linux:~$ use clar Notice how it didn't actually complete the line to read "clarisse-3.6" even though that is the only possible completion. Can anyone enlighten me as to what I have done wrong? Also, if this is a duplicate I apologize. I have looked around for several days but can't find any examples where someone has even run into this issue, much less solved it. Thanks
Thanks to Muru for pointing me to the solution. Apparently my issue was that every time my function was called it always returned the full list of possible matches. I needed to only return those possible completions that match what the user has typed so far. Once I return only a single item, then bash will auto-complete the rest of the line. This is the function that works. Before adding each possible completion, I do a quick check to see if the completion starts with the word that was typed originally. I do this using regular expressions. I also had to test to see if they haven't entered anything yet - in which case I always return everything. I also changed it to make some more variables "local". _use () { local word="${COMP_WORDS[$COMP_CWORD]}" local files=( `compgen -f -X "!*.use" /opt/use/` ) local output=() for file in "${files[@]}"; do fileBaseName=`basename $file .use` if [[ $fileBaseName =~ ^$word.* ]] || [ "$word" == "" ]; then output+=("$fileBaseName") fi done COMPREPLY=( ${output[@]} )}
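For what it's worth, the manual filtering loop can usually be delegated to compgen -W , which only returns the words matching what has been typed so far. A condensed sketch of the same completion under that approach (it assumes the names in /opt/use contain no whitespace):
_use () {
    local word=${COMP_WORDS[COMP_CWORD]}
    local names
    names=$(cd /opt/use && ls *.use 2>/dev/null | sed 's/\.use$//')
    COMPREPLY=( $(compgen -W "$names" -- "$word") )
}
complete -F _use use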
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421132", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273643/" ] }
421,158
How to use pseudo-arrays in POSIX shell script? I want to replace an array of 10 integers in a Bash script with something similar in a POSIX shell script. I managed to come across Rich’s sh (POSIX shell) tricks , in the section Working with arrays . What I tried:
save_pseudo_array(){
  for i do
    printf %s\\n "$i" | sed "s/'/'\\\\''/g;1s/^/'/;\$s/\$/' \\\\/"
  done
  echo " "
}
coords=$(save_pseudo_array "$@")
set -- 1895 955 1104 691 1131 660 1145 570 1199 381
eval "set -- $coords"
I don't comprehend it, that's the problem; if anyone could shed some light on it, much appreciated.
The basic idea is to use set to re-create the experience of working with indexed values from an array. So when you want to work with an array, you instead run set with the values; that's set -- 1895 955 1104 691 1131 660 1145 570 1199 381 Then you can use $1 , $2 , for etc. to work with the given values. All that's not much use if you need multiple arrays though. That's where the save and eval trick comes in: Rich's save function ¹ processes the current positional parameters and outputs a string, with appropriate quoting, which can then be used with eval to restore the stored values. Thus you run coords=$(save "$@") to save the current working array into coords , then create a new array, work with that, and when you need to work with coords again, you eval it: eval "set -- $coords" To understand the example you have to consider that you're working with two arrays here, the one with values set previously, and which you store in coords , and the array containing 1895, 955 etc. The snippet itself doesn't make all that much sense on its own, you'd have some processing between the set and eval lines. If you need to return to the 1895, 955 array later, you'd save that first before restoring coords :
newarray=$(save "$@")
eval "set -- $coords"
That way you can restore $newarray later. ¹ Defined as
save () {
  for i do printf %s\\n "$i" | sed "s/'/'\\\\''/g;1s/^/'/;\$s/\$/' \\\\/" ; done
  echo " "
}
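A tiny self-contained illustration of juggling two such pseudo-arrays with the save function from the footnote (the values are arbitrary):
set -- a b 'c d'            # first pseudo-array
first=$(save "$@")
set -- 1 2 3                # the positional parameters now hold a second one
second=$(save "$@")
eval "set -- $first"        # bring the first one back
printf '<%s>\n' "$@"        # prints <a>, <b> and <c d>, one per line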
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421158", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
421,219
I have two files: file_a and file_b. I want to redirect ( > ) all content of file_a into file_b. pseudo code is file_a > file_b . how to do that? I feel I should use cat.
Nitpick: In Unix, you can redirect output or input streams, but you can't redirect files . As RomanPerekhrest suggested in comments to the question: cat file_a >file_b This redirects the standard output stream (or just "the output") of cat into file_b . The output of cat will be the contents of file_a . This has the same effect (disregarding edge-case differences regarding permissions and ownership) as cp file_a file_b There are many other ways of duplicating a file's complete contents into another file, including trivial examples that apply non-modifying filters on text files, such as dd if=file_a of=file_bawk '1' file_a >file_bsed '' file_a >file_b etc. All the above examples will overwrite the previous contents of file_b . To append to file_b , replace > in the examples above that uses > with >> , e.g. cat file_a >>file_b To append the contents of file_a to that of file_b and store that in a third file: cat file_b file_a >file_c cat will output the contents of each of its file arguments after each other, in order, and the result will be redirected into the new file_c file.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421219", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273733/" ] }
421,283
I have a file that looks like this:
1 7.8e-12
1 7.8e-12
1 1.0e-11
2 9.3e-13
2 3.5e-12
2 3.5e-10
2 3.1e-9
3 3.0e-11
3 3.0e-11
3 1.7e-08
For every value in column one, I want to select "all rows" having the minimum value in column 2, grouped by column one. So the desired output is:
1 7.8e-12
1 7.8e-12
2 9.3e-13
3 3.0e-11
3 3.0e-11
Any idea how to do this?
One approach would be to sort in ascending order, then note the first col2 value for each col1 and print if the current col2 value is equal to it:
sort -k1,1n -k2,2g file | awk '!a[$1] {a[$1] = $2} $2 == a[$1]'
1 7.8e-12
1 7.8e-12
2 9.3e-13
3 3.0e-11
3 3.0e-11
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421283", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216256/" ] }
421,291
I use the expression "overriding wrapper" to refer to a function foo that overrides some original function falls, and calls this original function (or a copy of its) during its execution. I have found Stack Exchange threads about this (like this one ), but in my case I have the additional requirement that both the original foo as well as the overriding foo are meant to be accessible through FPATH , and autoloaded. (The overriding version presumably would appear earlier in the search sequence, thus shadowing the original version.) Is there a way to do this? FWIW, in the particular scenario I'm dealing with, the overriding foo just assigns some none-standard values to some global variables that the original refers to for doing its thing.
One approach would be to sort in ascending order, then note the first col2 value for each col1 and print if the current col2 value is equal to it: sort -k1,1n -k2,2g file | awk '!a[$1] {a[$1] = $2} $2 == a[$1]'1 7.8e-121 7.8e-122 9.3e-133 3.0e-113 3.0e-11
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421291", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10618/" ] }
421,354
I have an epoch time in cell H2 which has a value of 1517335200000 . I am trying to convert it to a human readable format which should return 30/01/2018 6:00:00 PM in GMT. I have tried to convert it with the formula H2/86400+25569 which I got from the OpenOffice forum. The formula returns the value 17587319 . When I change the number format in LibreOffice Calc to Date , it returns the value of 06/09/-15484 . That's not the value I want. So, how can I get the value in dd/mm/yyyy hh:mm:ss format?
If H2 contains the number to transform ( 1517335200000 ). Make H3 contain the formula: = H2/1000/(60*60*24) + 25569 Which will return the number 43130.75. Change format of cell H3 to date. Either: Press Shift - Ctrl - 3 Select Format --> Number Format --> Date Select Format --> Cells (a window opens) --> Numbers - Date - Format Change format of the H3 cell to the required date format: Select Format --> Cells (a panel opens) --> Numbers - Date - Format (select one) Expand width of cell if not wide enough to show the desired format (hint: three # appear). Why: Epoch time is in seconds since 1/1/1970. Calc internal time is in days since 12/30/1899. So, to get a correct result in H3: Get the correct number (last formula): H3 = H2/(60*60*24) + ( Difference to 1/1/1970 since 12/30/1899 in days )H3 = H2/86400 + ( DATE (1970,1,1) - DATE(1899,12,30) )H3 = H2/86400 + 25569 But the epoch value you are giving is too big, it is three zeros bigger than it should. Should be 1517335200 instead of 1517335200000. It seems to be given in milliseconds. So, divide by 1000. With that change, the formula gives: H3 = H2/1000/86400+25569 = 43130.75 Change the format of H3 to date and time (Format --> Cells --> Numbers --> Date --> Date and time) and you will see: 01/30/2018 18:00:00 in H3. Of course, since Unix epoch time is always based on UTC (+0 meridian), the result above needs to be shifted as many hours as the local Time zone is distant from UTC. So, to get the local time, if the Time zone is Pacific standard time GMT-8, we need to add (-8) hours. The formula for H3 with the local time zone (-8) in H4 would be: H3 = H2/1000/86400 + 25569 + H4/24 = 43130.416666 And presented as: 01/30/2018 10:00:00 if the format of H3 is set to such time format.
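If you want to sanity-check the arithmetic outside of Calc, GNU date can convert the same epoch value (in seconds, so with the milliseconds dropped; -u keeps the result in UTC):
$ date -u -d @1517335200 '+%d/%m/%Y %H:%M:%S'
30/01/2018 18:00:00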
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/421354", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199183/" ] }
421,366
I just upgraded from Debian 8 to 9. Apparently, it breaks everything database related. I now can't start MySQL, the error is: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2 "No such file or directory") Service status: systemctl status mariadb.service Shows errors: ● mariadb.service - MariaDB database server Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled) Active: failed (Result: exit-code) since Fri 2018-02-02 16:55:27 AEST; 35s ago Process: 6859 ExecStartPost=/etc/mysql/debian-start (code=exited, status=203/EXEC) Process: 6832 ExecStart=/usr/sbin/mysqld $MYSQLD_OPTS $_WSREP_NEW_CLUSTER $_WSREP_START_POSITION (code=exited, status=0/SUCCESS) Process: 6821 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`/usr/bin/galera_recovery`; [ $? -eq 0 ] && systemctl set-environment _WSREP_START_POSITION=$VAR || exit 1 (code=exited, status=0/SUCCESS) Process: 6815 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS) Process: 6814 ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/run/mysqld (code=exited, status=0/SUCCESS) Main PID: 6832 (code=exited, status=0/SUCCESS) Status: "MariaDB server is down"2 02 16:55:25 doraemoe mysqld[6832]: 2018-02-02 16:55:25 140495303359232 [Note] /usr/sbin/mysqld: Normal shutdown2 02 16:55:25 doraemoe mysqld[6832]: 2018-02-02 16:55:25 140495303359232 [Note] Event Scheduler: Purging the queue. 0 events2 02 16:55:25 doraemoe mysqld[6832]: 2018-02-02 16:55:25 140494621349632 [Note] InnoDB: FTS optimize thread exiting.2 02 16:55:25 doraemoe mysqld[6832]: 2018-02-02 16:55:25 140495303359232 [Note] InnoDB: Starting shutdown...2 02 16:55:26 doraemoe mysqld[6832]: 2018-02-02 16:55:26 140495303359232 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool2 02 16:55:27 doraemoe mysqld[6832]: 2018-02-02 16:55:27 140495303359232 [Note] InnoDB: Shutdown completed; log sequence number 16168892 02 16:55:27 doraemoe mysqld[6832]: 2018-02-02 16:55:27 140495303359232 [Note] /usr/sbin/mysqld: Shutdown complete2 02 16:55:27 doraemoe systemd[1]: Failed to start MariaDB database server.2 02 16:55:27 doraemoe systemd[1]: mariadb.service: Unit entered failed state.2 02 16:55:27 doraemoe systemd[1]: mariadb.service: Failed with result 'exit-code'. Another way to get some information: journalctl -xe` that maybe useful: `2 02 16:42:39 doraemoe systemd[5505]: mariadb.service: Failed at step EXEC spawning /etc/mysql/debian-start: No such file or directory How do I fix it? Edit :Here is the complete output of journalctl -u mariadb.service Logs begin at Fri 2018-02-02 16:36:33 AEST, end at Fri 2018-02-02 19:05:01 AEST. 2月 02 16:36:35 doraemoe systemd[1]: Starting MariaDB database server...2月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856107147840 [Note] /usr/sbin/mysqld (mysqld 10.1.26-MariaDB-0+deb9u1) starting as process 3534 ...2月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856107147840 [Note] InnoDB: innodb_empty_free_list_algorithm has been changed to legacy because of small buffer pool size. 
In order to use backoff, increase buffer pool at least up to 20MB.2月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856107147840 [Note] InnoDB: Using mutexes to ref count buffer pool pages2月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856107147840 [Note] InnoDB: The InnoDB memory heap is disabled2月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856107147840 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins2月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856107147840 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier2月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856107147840 [Note] InnoDB: Compressed tables use zlib 1.2.82月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856107147840 [Note] InnoDB: Using Linux native AIO2月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856107147840 [Note] InnoDB: Using SSE crc32 instructions2月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856107147840 [Note] InnoDB: Initializing buffer pool, size = 128.0M2月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856107147840 [Note] InnoDB: Completed initialization of buffer pool2月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856107147840 [Note] InnoDB: Highest supported file format is Barracuda.2月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856107147840 [Note] InnoDB: 128 rollback segment(s) are active.2月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856107147840 [Note] InnoDB: Waiting for purge to start2月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856107147840 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.36-82.1 started; log sequence number 16168592月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856107147840 [Note] Plugin 'FEEDBACK' is disabled.2月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856107147840 [Note] Server socket created on IP: '::'.2月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139855396824832 [Note] InnoDB: Dumping buffer pool(s) not yet started2月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856107147840 [Note] /usr/sbin/mysqld: ready for connections.2月 02 16:36:36 doraemoe mysqld[3534]: Version: '10.1.26-MariaDB-0+deb9u1' socket: '/var/run/mysqld/mysqld.sock' port: 3306 Debian 9.12月 02 16:36:36 doraemoe systemd[3810]: mariadb.service: Failed at step EXEC spawning /etc/mysql/debian-start: No such file or directory2月 02 16:36:36 doraemoe systemd[1]: mariadb.service: Control process exited, code=exited status=2032月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856106375936 [Note] /usr/sbin/mysqld: Normal shutdown2月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856106375936 [Note] Event Scheduler: Purging the queue. 
0 events2月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139855422002944 [Note] InnoDB: FTS optimize thread exiting.2月 02 16:36:36 doraemoe mysqld[3534]: 2018-02-02 16:36:36 139856106375936 [Note] InnoDB: Starting shutdown...2月 02 16:36:37 doraemoe mysqld[3534]: 2018-02-02 16:36:37 139856106375936 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool2月 02 16:36:39 doraemoe mysqld[3534]: 2018-02-02 16:36:39 139856106375936 [Note] InnoDB: Shutdown completed; log sequence number 16168692月 02 16:36:39 doraemoe mysqld[3534]: 2018-02-02 16:36:39 139856106375936 [Note] /usr/sbin/mysqld: Shutdown complete2月 02 16:36:39 doraemoe systemd[1]: Failed to start MariaDB database server.2月 02 16:36:39 doraemoe systemd[1]: mariadb.service: Unit entered failed state.2月 02 16:36:39 doraemoe systemd[1]: mariadb.service: Failed with result 'exit-code'.2月 02 16:42:39 doraemoe systemd[1]: Starting MariaDB database server...2月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597204075072 [Note] /usr/sbin/mysqld (mysqld 10.1.26-MariaDB-0+deb9u1) starting as process 5478 ...2月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597204075072 [Note] InnoDB: innodb_empty_free_list_algorithm has been changed to legacy because of small buffer pool size. In order to use backoff, increase buffer pool at least up to 20MB.2月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597204075072 [Note] InnoDB: Using mutexes to ref count buffer pool pages2月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597204075072 [Note] InnoDB: The InnoDB memory heap is disabled2月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597204075072 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins2月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597204075072 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier2月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597204075072 [Note] InnoDB: Compressed tables use zlib 1.2.82月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597204075072 [Note] InnoDB: Using Linux native AIO2月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597204075072 [Note] InnoDB: Using SSE crc32 instructions2月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597204075072 [Note] InnoDB: Initializing buffer pool, size = 128.0M2月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597204075072 [Note] InnoDB: Completed initialization of buffer pool2月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597204075072 [Note] InnoDB: Highest supported file format is Barracuda.2月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597204075072 [Note] InnoDB: 128 rollback segment(s) are active.2月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597204075072 [Note] InnoDB: Waiting for purge to start2月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597204075072 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.36-82.1 started; log sequence number 16168692月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597204075072 [Note] Plugin 'FEEDBACK' is disabled.2月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140596496787200 [Note] InnoDB: Dumping buffer pool(s) not yet started2月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597204075072 [Note] Server socket created on IP: '::'.2月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597204075072 [Note] /usr/sbin/mysqld: ready for connections.2月 02 16:42:39 doraemoe 
mysqld[5478]: Version: '10.1.26-MariaDB-0+deb9u1' socket: '/var/run/mysqld/mysqld.sock' port: 3306 Debian 9.12月 02 16:42:39 doraemoe systemd[5505]: mariadb.service: Failed at step EXEC spawning /etc/mysql/debian-start: No such file or directory2月 02 16:42:39 doraemoe systemd[1]: mariadb.service: Control process exited, code=exited status=2032月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597203303168 [Note] /usr/sbin/mysqld: Normal shutdown2月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597203303168 [Note] Event Scheduler: Purging the queue. 0 events2月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140596521965312 [Note] InnoDB: FTS optimize thread exiting.2月 02 16:42:39 doraemoe mysqld[5478]: 2018-02-02 16:42:39 140597203303168 [Note] InnoDB: Starting shutdown...2月 02 16:42:40 doraemoe mysqld[5478]: 2018-02-02 16:42:40 140597203303168 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool2月 02 16:42:42 doraemoe mysqld[5478]: 2018-02-02 16:42:42 140597203303168 [Note] InnoDB: Shutdown completed; log sequence number 16168792月 02 16:42:42 doraemoe mysqld[5478]: 2018-02-02 16:42:42 140597203303168 [Note] /usr/sbin/mysqld: Shutdown complete2月 02 16:42:42 doraemoe systemd[1]: Failed to start MariaDB database server.2月 02 16:42:42 doraemoe systemd[1]: mariadb.service: Unit entered failed state.2月 02 16:42:42 doraemoe systemd[1]: mariadb.service: Failed with result 'exit-code'.2月 02 16:55:24 doraemoe systemd[1]: Starting MariaDB database server...2月 02 16:55:24 doraemoe mysqld[6832]: 2018-02-02 16:55:24 140495304131136 [Note] /usr/sbin/mysqld (mysqld 10.1.26-MariaDB-0+deb9u1) starting as process 6832 ...2月 02 16:55:24 doraemoe mysqld[6832]: 2018-02-02 16:55:24 140495304131136 [Note] InnoDB: innodb_empty_free_list_algorithm has been changed to legacy because of small buffer pool size. 
In order to use backoff, increase buffer pool at least up to 20MB.2月 02 16:55:24 doraemoe mysqld[6832]: 2018-02-02 16:55:24 140495304131136 [Note] InnoDB: Using mutexes to ref count buffer pool pages2月 02 16:55:24 doraemoe mysqld[6832]: 2018-02-02 16:55:24 140495304131136 [Note] InnoDB: The InnoDB memory heap is disabled2月 02 16:55:24 doraemoe mysqld[6832]: 2018-02-02 16:55:24 140495304131136 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins2月 02 16:55:24 doraemoe mysqld[6832]: 2018-02-02 16:55:24 140495304131136 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier2月 02 16:55:24 doraemoe mysqld[6832]: 2018-02-02 16:55:24 140495304131136 [Note] InnoDB: Compressed tables use zlib 1.2.82月 02 16:55:24 doraemoe mysqld[6832]: 2018-02-02 16:55:24 140495304131136 [Note] InnoDB: Using Linux native AIO2月 02 16:55:24 doraemoe mysqld[6832]: 2018-02-02 16:55:24 140495304131136 [Note] InnoDB: Using SSE crc32 instructions2月 02 16:55:24 doraemoe mysqld[6832]: 2018-02-02 16:55:24 140495304131136 [Note] InnoDB: Initializing buffer pool, size = 128.0M2月 02 16:55:24 doraemoe mysqld[6832]: 2018-02-02 16:55:24 140495304131136 [Note] InnoDB: Completed initialization of buffer pool2月 02 16:55:24 doraemoe mysqld[6832]: 2018-02-02 16:55:24 140495304131136 [Note] InnoDB: Highest supported file format is Barracuda.2月 02 16:55:25 doraemoe mysqld[6832]: 2018-02-02 16:55:25 140495304131136 [Note] InnoDB: 128 rollback segment(s) are active.2月 02 16:55:25 doraemoe mysqld[6832]: 2018-02-02 16:55:25 140495304131136 [Note] InnoDB: Waiting for purge to start2月 02 16:55:25 doraemoe mysqld[6832]: 2018-02-02 16:55:25 140495304131136 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.36-82.1 started; log sequence number 16168792月 02 16:55:25 doraemoe mysqld[6832]: 2018-02-02 16:55:25 140495304131136 [Note] Plugin 'FEEDBACK' is disabled.2月 02 16:55:25 doraemoe mysqld[6832]: 2018-02-02 16:55:25 140494596171520 [Note] InnoDB: Dumping buffer pool(s) not yet started2月 02 16:55:25 doraemoe mysqld[6832]: 2018-02-02 16:55:25 140495304131136 [Note] Server socket created on IP: '::'.2月 02 16:55:25 doraemoe mysqld[6832]: 2018-02-02 16:55:25 140495304131136 [Note] /usr/sbin/mysqld: ready for connections.2月 02 16:55:25 doraemoe mysqld[6832]: Version: '10.1.26-MariaDB-0+deb9u1' socket: '/var/run/mysqld/mysqld.sock' port: 3306 Debian 9.12月 02 16:55:25 doraemoe systemd[6859]: mariadb.service: Failed at step EXEC spawning /etc/mysql/debian-start: No such file or directory2月 02 16:55:25 doraemoe systemd[1]: mariadb.service: Control process exited, code=exited status=2032月 02 16:55:25 doraemoe mysqld[6832]: 2018-02-02 16:55:25 140495303359232 [Note] /usr/sbin/mysqld: Normal shutdown2月 02 16:55:25 doraemoe mysqld[6832]: 2018-02-02 16:55:25 140495303359232 [Note] Event Scheduler: Purging the queue. 
0 events2月 02 16:55:25 doraemoe mysqld[6832]: 2018-02-02 16:55:25 140494621349632 [Note] InnoDB: FTS optimize thread exiting.2月 02 16:55:25 doraemoe mysqld[6832]: 2018-02-02 16:55:25 140495303359232 [Note] InnoDB: Starting shutdown...2月 02 16:55:26 doraemoe mysqld[6832]: 2018-02-02 16:55:26 140495303359232 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool2月 02 16:55:27 doraemoe mysqld[6832]: 2018-02-02 16:55:27 140495303359232 [Note] InnoDB: Shutdown completed; log sequence number 16168892月 02 16:55:27 doraemoe mysqld[6832]: 2018-02-02 16:55:27 140495303359232 [Note] /usr/sbin/mysqld: Shutdown complete2月 02 16:55:27 doraemoe systemd[1]: Failed to start MariaDB database server.2月 02 16:55:27 doraemoe systemd[1]: mariadb.service: Unit entered failed state.2月 02 16:55:27 doraemoe systemd[1]: mariadb.service: Failed with result 'exit-code'. Run /etc/init.d/mysql start results: [....] Starting mysql (via systemctl): mysql.serviceJob for mariadb.service failed because the control process exited with error code.See "systemctl status mariadb.service" and "journalctl -xe" for details. failed! I can't find a file named mysql.err and ps -aux|grep -i maria yields no result.
For some reason you don't have the file /etc/mysql/debian-start , which is the error given: 16:55:25 doraemoe systemd[6859]: mariadb.service: Failed at step EXEC spawning /etc/mysql/debian-start: No such file or directory I have checked my installation of mariadb and that file is part of the mariadb-server-10.1 package. Check whether there is /etc/mysql/debian-start.dpkg-dist or similar, if so rename that to /etc/mysql/debian-start . Otherwise, check what exact package name you have installed: dpkg -S /usr/bin/mysqld_safemariadb-server-10.1: /usr/bin/mysqld_safe Download the package .deb (e.g. via this page ), unpack it: dpkg-deb --extract mariadb-server-10.1_10.1.26-0+deb9u1_amd64.deb /tmp/mariadb-server (substitute the name of the file you downloaded) and then copy the file: cp /tmp/mariadb-server/etc/mysql/debian-start /etc/mysql/ You should then be able to start it; unless there are more files missing. It's strange, that this file should be missing so something went wrong at some stage.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/421366", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273853/" ] }
421,404
I have a line with colon separated values that I want to process in awk. Lines are handled differently if variable $4 contains variable $3 at the beginning. So I wrote the expression: $4 ~ /^$3/ , but unfortunately this does not work, it never matches. What's wrong, how can I use a variable in a regular expression pattern? This is the full example:
green="$(tput setaf 2)"
red="$(tput setaf 1)"
yellow="$(tput setaf 3)"
normal="$(tput sgr0)"
stacks=$(docker stack ls --format='{{.Name}}')
for stack in ${stacks}; do
    status=$(docker stack ps --filter="desired-state=running" --format="{{.Name}}:{{.Node}}:{{.DesiredState}}:{{.CurrentState}}:{{.Error}}" ${stack})
    if test -z "$status"; then
        echo "${red}$stack$: disabled${normal}"
    else
        awk -F: '
            $4 ~ /^$3/ {print "GOOD '"${green}"'" $1 ": " $4 "'"${normal}"'"}
            $4 !~ /^$3/ {print "BAD '"${yellow}"'" $1 ": " $3 " ≠ " $4 $5 "'"${normal}"'"}
        ' <<<${status}
    fi
done
Result is always BAD , e.g. here, line: bind_bind.1:urknall:Running:Running 18 hours ago: should print GOOD , but prints: BAD bind_bind.1: Running ≠ Running 18 hours ago
/^$3/ is a regular expression that is guaranteed to never match as it matches on records that have 3 after the end of the record (the $ regular expression anchor operator matches at the end of the subject, not to be confused with the $ awk operator that is used to dereference fields by number). To test whether the third field occurs at the beginning of the fourth field, one could do either a regular expression match with match() , which will return the start position of the match (or 0 if no match was found): awk -F ':' 'match($4, $3) == 1 { ..."GOOD"... ; next } { ..."BAD"... }' or, for a string comparison, awk -F ':' 'substr($4, 1, length($3)) == $3 { ..."GOOD"... ; next } { ..."BAD"... }'
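As an aside, awk can also build the regular expression dynamically by concatenating strings, which keeps the pattern-match form from the question; the caveat is that any regex metacharacters in $3 would then be interpreted (harmless here where the states are plain words, and one reason the substr() comparison above is the safer choice):
awk -F ':' '$4 ~ "^"$3 { ..."GOOD"... ; next } { ..."BAD"... }'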
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421404", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/193535/" ] }
421,433
Every day we receive an e-mail from e.g. [email protected] with an attachment, the filename is e.g. report.xlsx How can I save the file with the received date? e.g. 20180131_report.xlsx and how can I filter on the subject or the sender? My ~/.procmailrc : :0*^content-Type:{ :fw | ripmime --overwrite --no-nameless -i - -d /dir/to/save/attachment}
If your Procmail or the receiving MTA is configured to put in a From_ line before the message proper, this pseudo-header generally already contains the date. You'll need to parse it, which is a drag, so unless this is a system where you really need to optimize for perfomance (hundreds of matches per second on this condition?) the absolutely easiest solution is to call date +%Y%m%d . To match on either of two unrelated headers, just put them both in a regex with | : :0* ^Content-type:* ^From:(.*\<)?foo@example\.tld|^Subject: Your daily report| ripmime --overwrite --no-nameless -i - -d /dir/to/save/attachment/$(date +%Y%m%d)_report.xslx (Bug here; see update below.) The fw flags don't make sense in this context so I took them out (and actually I'm not sure the Content-type: condition makes a lot of sense either; most messages will have it anyway, these days). If you have more complex conditions you want to combine, you can use a fundamental principle from logic called de Morgan's laws. There is no direct syntax in Procmail to say "this condition or that condition", but you can refactor this to "not ((not this condition) and (not that condition))." :0* ! this condition* ! that condition{ } # nothing happens here:0E # else{ LOG="at least one of them matched" } Or simply use scoring; :0* 1^0 this condition* 1^0 that conditon{ LOG="at least one of them matched" } Update: It looks like ripmime doesn't actually support (extracting or) naming an individual attachment. The easiest solution is perhaps a cron job which renames the latest arrival a bit before midnight (or if you know when it arrives, a bit after the latest time you expect it): 55 23 * * * cd /dir/to/save/attachment && mv report.xslx "$(date +%%Y%%m%%d)"_report.xslx Notice how (peculiarly) you need to double any percent signs in a cron command! You would obviously revert the Procmail recipe above to simply have ripmime save to /dir/to/save/attachment Alternatively, I would rename the attachment right after it arrives, perhaps while also tightening the conditions considerably. The following includes a fair amount of guesswork as to how exactly the message which delivers the attachment is encoded -- it could choose between a number of different content types, MIME structures, MIME header conventions, etc, so it probably doesn't work without some tweaking. :0* ^From:(.*\<)?foo@example\.tld* ^Subject: Your daily report* HB ?? ^Content-type: application/(octet-stream|vnd\.openxmlformats-officedocument\.spreadsheetml\.sheet|vnd\.ms-here-be-dragons-xslx); filename="?report.xslx| ( cd dir/to/save/attachment; \ ripmime --overwrite --no-nameless -i - -d . && \ mv report.xslx $(date +%Y%m%d)"_report.xslx ) The Content-type: header might not contain the filename; it could (and these days should) be specified in Content-Disposition: but many senders put it in both places for backwards compatibility. The filename should properly be RFC2231-encoded which means a number of optional fields could be populated where I have conveniently assumed they will be empty, like they were when ASCII filenames were the only game in town. Notice also how I require the sender and the subject to match now. The HB ?? says (imprecisely) to look for a match either in the main message headers, or somewhere in the body. Properly speaking, the match should be in the headers of a MIME body part in the latter case, but Procmail has no easy way to specify this.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421433", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/192320/" ] }
421,451
I have this script that extracts one frame from a lot of mp4 videos and store them as png for i in *.mp4; do [ -f "$i" ] || break number=$(echo "$i" | grep -o -E '[0-9]+') name=frame"$number".png ffmpeg -i "$i" -ss 00:01:00.000 -vframes 1 "$name"done the videos are named like video1.mp4 , video2.mp4 , etc. Because the loop takes videos out of order, I extract the number from the video to generate the output file name. This is working great but the file names of the png files are like this: frame1?4.png , frame3?4.png , frame3?4.png ... why the ?4 ? ah, I see now that the 4 comes from the 4 on mp4. How do I get just the first number?
That ? is just a placeholder character output by ls to represent the newline character that you have embedded in your file name. echo foo3bar.mp4 | grep -Eo '[0-9]+' would output 3 and 4 on two separate lines, as there are two sequences of 1 or more decimal digits in that file name. Instead you could do:
for file in *[0-9]*.mp4; do
  n=${file%.*}            # remove the extension
  n=${n%"${n##*[0-9]}"}   # remove anything after the last digit
  n=${n##*[!0-9]}         # remove everything up to the last non-digit.
  ffmpeg...
done
Which would extract the right-most sequence of decimal digits in the filename stripped of its extension and avoid running several commands for each file. Note that that code only uses standard POSIX sh syntax, so you don't need bash to run it. With bash or other ksh93 -like shells, if you know the root name of the file only contains one sequence of digits, you can also use:
n=${file%.*}        # remove the extension
n=${n//[^0-9]}      # remove all non-digit characters
With bash -specific features: [[ $file =~ ([0-9]+).*\. ]] && n=${BASH_REMATCH[1]} Would extract the left-most sequence of digits in the root name. With zsh : n=${(SM)${file:r}##<->}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421451", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45335/" ] }
421,460
Using https://regex101.com/ I built a regular expression to return the first occurrence of an IP address in a string. RegExp: (?:\d{1,3}\.)+(?:\d{1,3}) RegExp including delimiters: /(?:\d{1,3}\.)+(?:\d{1,3})/ With the following test string: eu-west 140.243.64.99 It returns a full match of: 140.243.64.99 No matter what I try with anchors etc, the following bash script will not work with the regular expression generated. temp="eu-west 140.243.64.99 "regexp="(?:\d{1,3}\.)+(?:\d{1,3})"if [[ $temp =~ $regexp ]]; then echo "found a match"else echo "No IP address returned"fi
\d is a nonstandard way for saying "any digit". I think it comes from Perl, and a lot of other languages and utilities support Perl-compatible REs (PCRE), too. (and e.g. GNU grep 2.27 in Debian stretch supports the similar \w for word characters even in normal mode.) Bash doesn't support \d , though, so you need to explicitly use [0-9] or [[:digit:]] . Same for the non-capturing group (?:..) , use just (..) instead. This should print match : temp="eu-west 140.243.64.99 "regexp="([0-9]{1,3}\.)+([0-9]{1,3})"[[ $temp =~ $regexp ]] && echo match
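If you also want to extract the matched address rather than merely test for it, bash puts the match in the BASH_REMATCH array after a successful [[ ... =~ ... ]] ; building on the same snippet:
temp="eu-west 140.243.64.99 "
regexp="([0-9]{1,3}\.)+([0-9]{1,3})"
if [[ $temp =~ $regexp ]]; then
    echo "found: ${BASH_REMATCH[0]}"    # prints: found: 140.243.64.99
fi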
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/421460", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273907/" ] }
421,491
As I understand it, the hosts file is one of several system facilities that assist in addressing network nodes in a computer network. But what should be inside it? When I install Ubuntu, by default 127.0.0.1 localhost will be there. Why? How does /etc/hosts work in the case of JVM systems like Cassandra? When is DNS used as the alternative? I guess not on a single computer.
The file /etc/hosts started in the old days of DARPA as the resolution file for all the hosts connected to the internet (before DNS existed). It has the maximum priority, meaning this file is preferred ahead of any other name system. 1 However, as a single file, it doesn't scale well: the size of the file becomes too big very soon. That is why the DNS system was developed, a hierarchical distributed name system. It allows any host to find the numerical address of some other host efficiently. The very old concept of the /etc/hosts file is very simple, just an address and a host name: 127.0.0.1 localhost for each line. That is a simple list of pairs of address-host. 2 Its primary present-day use is to bypass DNS resolution. A match found in the /etc/hosts file will be used before any DNS entry. In fact, if the name searched (like localhost ) is found in the file, no DNS resolution will be performed at all. 1 Well, the order of name resolution is actually defined in /etc/nsswitch.conf , which usually has this entry: hosts: files dns which means "try files ( /etc/hosts ); and if it fails, try DNS." But that order could be changed or expanded. 2 (in present days) The hosts file contains lines of text consisting of an IP address in the first text field followed by one or more host names. Each field is separated by white space – tabs are often preferred for historical reasons, but spaces are also used. Comment lines may be included; they are indicated by an octothorpe (#) in the first position of such lines. Entirely blank lines in the file are ignored. For example, a typical hosts file may contain the following: 127.0.0.1 localhost loopback::1 localhost localhost6 ipv6-localhost ipv6-loopback mycomputer.local192.168.0.8 mycomputer.lan10.0.0.27 mycomputer.lan This example contains entries for the loopback addresses of the system and their host names, the first line is a typical default content of the hosts file. The second line has several additional (probably only valid in local systems) names. The example illustrates that an IP address may have multiple host names (localhost and loopback), and that a host name may be mapped to both IPv4 and IPv6 IP addresses, as shown on the first and second lines respectively. One name ( mycomputer.lan ) may resolve to several addresses ( 192.168.0.8 10.0.0.27 ). However, in that case, which one is used depends on the routes (and their priorities) set for the computer. Some older OSes had no way to report a list of addresses for a given name.
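A convenient way to watch the files-then-DNS order from the command line is getent, which resolves names through the same NSS configuration; the host names below are only examples:
grep '^hosts:' /etc/nsswitch.conf    # shows the lookup order, e.g. "hosts: files dns"
getent hosts localhost               # answered from /etc/hosts, no DNS query needed
getent hosts www.example.com         # not in the file, so it falls through to DNS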
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/421491", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143955/" ] }
421,556
Say for example I've got this C function: void f(int *x, int *y){ (*x) = (*x) * (*y);} When saved to f.c , compiling with gcc -c f.c produces f.o . objdump -d f.o gives this: f.o: file format elf64-x86-64Disassembly of section .text:0000000000000000 <f>: 0: 55 push %rbp 1: 48 89 e5 mov %rsp,%rbp 4: 48 89 7d f8 mov %rdi,-0x8(%rbp) 8: 48 89 75 f0 mov %rsi,-0x10(%rbp) c: 48 8b 45 f8 mov -0x8(%rbp),%rax 10: 8b 10 mov (%rax),%edx 12: 48 8b 45 f0 mov -0x10(%rbp),%rax 16: 8b 00 mov (%rax),%eax 18: 0f af d0 imul %eax,%edx 1b: 48 8b 45 f8 mov -0x8(%rbp),%rax 1f: 89 10 mov %edx,(%rax) 21: 5d pop %rbp 22: c3 retq I'd like it to output something more like this: 55 48 89 e5 48 89 7d f8 48 89 75 f0 48 8b 45 f8 8b 10 48 8b 45 f0 8b 00 0f af d0 48 8b 45 f8 89 10 5d c3 I.e., just the hexadecimal values of the function. Is there some objdump flag to do this? Otherwise, what tools can I use (e.g. awk, sed, cut, etc) to get this desired output?
You can extract the byte values in the text segment with: $ objcopy -O binary -j .text f.o fo The -O binary option: objcopy can be used to generate a raw binary file by using an output target of binary (e.g., use -O binary). When objcopy generates a raw binary file, it will essentially produce a memory dump of the contents of the input object file. All symbols and relocation information will be discarded. The memory dump will start at the load address of the lowest section copied into the output file. The -j .text option: -j sectionpattern --only-section=sectionpattern Copy only the indicated sections from the input file to the output file. This option may be given more than once. Note that using this option inappropriately may make the output file unusable. Wildcard characters are accepted in sectionpattern. The end result is a file ( fo ) with the binary values of only the .text section, that is the executable code without symbols or relocation information. And then print the hex values of the fo file: $ od -An -t x1 fo 55 48 89 e5 48 89 7d f8 48 89 75 f0 48 8b 45 f8 8b 10 48 8b 45 f0 8b 00 0f af d0 48 8b 45 f8 89 10 90 5d c3
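If you would rather avoid the intermediate file and post-process objdump output directly (as the question hints with awk/sed/cut), a rough sketch is below; it assumes the GNU objdump layout in which the address, the hex bytes, and the mnemonic are separated by tabs, which may differ between binutils versions:
objdump -d f.o \
  | awk -F'\t' '/^[[:space:]]*[0-9a-f]+:/ { gsub(/ +$/, "", $2); printf "%s ", $2 } END { print "" }'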
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421556", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/194482/" ] }
421,633
I have a project composed of about 20 small .sh files. I name these "small" because generally, no file has more than 20 lines of code. I took a modular approach because it keeps me loyal to the Unix philosophy and makes it easier for me to maintain the project. At the start of each .sh file, I put #!/bin/bash . Simply put, I understand script declarations have two purposes: They help the user recall what shell is needed to execute the file (say, after some years without using the file). They ensure that the script runs only with a certain shell (Bash in that case) to prevent unexpected behavior in case another shell was used. When a project starts to grow from, say, 5 files to 20 files, or from 20 files to 50 files (not this case, but just to demonstrate), we have 20 or 50 lines of script declarations. I admit, even though it might be funny to some, it feels a bit redundant to me to use 20 or 50 instead of, say, just 1 per project (maybe in the main file of the project). Is there a way to avoid this alleged redundancy of 20 or 50 or a much greater number of lines of script declarations by using some "global" script declaration, in some main file?
Even though your project may now be solely consisting of 50 Bash scripts, it will sooner or later start accumulating scripts written in other languages such as Perl or Python (for the benefits that these scripting languages have that Bash does not have). Without a proper #! -line in each script, it would be extremely difficult to use the various scripts without also knowing what interpreter to use. It doesn't matter if every single script is executed from other scripts , this only moves the difficulty from the end users to the developers. Neither of these two groups of people should need to know what language a script was written in to be able to use it. Shell scripts executed without a #! -line and without an explicit interpreter are executed in different ways depending on what shell invokes them (see e.g. the question Which shell interpreter runs a script with no shebang? and especially Stéphane's answer ), which is not what you want in a production environment (you want consistent behaviour, and possibly even portability). Scripts executed with an explicit interpreter will be run by that interpreter regardless of what the #! -line says. This will cause problems further down the line if you decide to re-implement, say, a Bash script in Python or any other language. You should spend those extra keystrokes and always add a #! -line to each and every script. In some environments, there are multi-paragraph boilerplate legal texts in each script in each project. Be very happy that it's only a #! -line that feels "redundant" in your project.
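If the concern is mostly the repeated typing, a small one-off check (sketch below) can at least make sure none of the project's scripts silently loses its #! line over time:
for f in ./*.sh; do
    head -n 1 "$f" | grep -q '^#!' || printf 'missing shebang: %s\n' "$f"
done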
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/421633", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273994/" ] }
421,637
Here is the route path from my home to sina.com.cn . traceroute -n sina.com.cntraceroute to sina.com.cn (202.108.33.60), 30 hops max, 60 byte packets 1 192.168.31.1 0.476 ms 0.587 ms 0.695 ms 2 140.0.5.1 2.557 ms 2.699 ms 3.065 ms 3 221.11.155.65 4.501 ms * 221.11.165.9 5.045 ms 4 * 221.11.156.18 26.480 ms 221.11.165.233 22.950 ms 5 219.158.9.97 14.176 ms * 219.158.19.149 21.472 ms 6 219.158.9.97 18.142 ms 219.158.8.81 44.856 ms 52.539 ms 7 124.65.194.190 53.162 ms 219.158.8.81 50.614 ms 124.65.194.190 47.266 ms 8 124.65.194.190 50.760 ms 61.148.143.26 49.351 ms 53.515 ms 9 210.74.176.138 43.056 ms 43.286 ms 61.148.143.26 53.712 ms10 202.108.33.60 46.385 ms 210.74.176.138 42.896 ms 46.931 ms 192.168.31.1 is my home router. 140.0.5.1 is my public IP, which the ISP provides. curl ifconfig.me140.0.5.1 In the third line, it says 3 221.11.155.65 4.501 ms * 221.11.165.9 5.045 ms Why are there two IP addresses, 221.11.155.65 and 221.11.165.9? What does that mean? Does the packet jump from 140.0.5.1 to 221.11.155.65, and then jump from 221.11.155.65 to 221.11.165.9?
From the traceroute(8) manual on OpenBSD: Three probes (the exact number can be changed using the -q option) are sent and a line is printed showing the TTL or hop limit, address of the gateway, and round trip time of each probe. If the probe answers come from different gateways, the address of each responding system will be printed. The Linux manual will have similar wording. The multiple IP addresses that you see are the gateways responding to the individual probes at specific hop limits. In your case, the three probes resulted in replies that, at hop limit 3, came back to you from the gateways at 221.11.155.65 and at 221.11.165.9. So, the answer is: No, the packet does not jump between the two hosts listed on that line, there are three probes sent and they take two different routes from 140.0.5.1.
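If you would rather see a single responder per hop, the -q option mentioned in the manual reduces the number of probes; for example, one probe per hop:
traceroute -n -q 1 sina.com.cn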
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421637", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/243284/" ] }
421,729
I am using GNU bash 4.3.48 . Consider the following two commands that only differ by a single dollar sign. Command 1: echo "(echo " * ")" Command 2: echo "$(echo " * ")" The output of them are, respectively (echo test.txt ppcg.sh ) and * So obviously in the first case the * is globbed, which means that the first quotation mark goes with the second to form a pair, and the third and the fourth form another pair. In the second case the * is not globbed, and there are exactly two extra spaces in the output, one before the asterisk and one after, meaning that the second quotation mark goes with the third one, and the first goes with the fourth. Are there any other cases besides the $() construction that the quotation marks are not matched with the next one, but are nested instead? Is this behavior well documented and if yes, where can I find the corresponding document?
Any of the nesting constructs that can be interpolated inside strings can have further strings inside them: they are parsed like a new script, up to the closing marker, and can even be nested multiple levels deep. All bar one of those starts with a $ . All of them are documented in a combination of the Bash manual and POSIX shell command language specification. There are a few cases of these constructs: Command substitution with $( ... ) , as you've found. POSIX specifies this behaviour : With the $(command) form, all characters following the open parenthesis to the matching closing parenthesis constitute the command. Any valid shell script can be used for command ... Quotes are part of valid shell scripts, so they're allowed with their normal meaning. Command substitution using ` , too. The "word" element of advanced parameter substitution instances such as ${parameter:-word} . The definition of "word" is : A sequence of characters treated as a unit by the shell - which includes quoted text and even mixed quotes a"b"c'd'e - though the actual behaviour of the expansions is a little more liberal than that, and for example ${x:-hello world} works too. Arithmetic expansion with $(( ... )) , although it is largely useless there (but you can nest command substitution or variable expansions, too, and then have quotes usefully inside those). POSIX states that : The expression shall be treated as if it were in double-quotes, except that a double-quote inside the expression is not treated specially. The shell shall expand all tokens in the expression for parameter expansion, command substitution, and quote removal. so this behaviour is explicitly required. That means echo "abc $((4 "*" 5))" does arithmetic, rather than globbing. Note though that old-style $[ ... ] arithmetic expansion is not treated the same way: quotes will be an error if they appear, regardless of if the expansion is quoted or not. This form isn't documented at all any more, and isn't meant to be used anyway. Locale-specific translation with $"..." , which actually uses the " as a core element. $" is treated as a single unit. There's one further nesting case you may not expect, not involving quotes, which is with brace expansion : {a,b{c,d},e} expands to "a bc bd e". ${x:-a{b,c}d} does not nest, however; it is treated as a parameter substitution giving " a{b,c ", followed by " d} ". That is also documented : When braces are used, the matching ending brace is the first ‘}’ not escaped by a backslash or within a quoted string, and not within an embedded arithmetic expansion, command substitution, or parameter expansion. As a general rule, all delimited constructs parse their bodies independently of the surrounding context (and exceptions are treated as bugs ). In essence, on seeing $( the command-substitution code just asks the parser to consume what it can from the body as though it's a new program, and then checks that the expected terminating marker (an unescaped ) or )) or } ) appears once the sub-parser runs out of things it can consume. If you think about the functioning of a recursive-descent parser , that's just a simple recursion to the base case. It's actually easier to do than the other way, once you've got string interpolation at all. Regardless of the underlying parsing technique, shells supporting these constructs give the same result. You can nest quoting as deeply as you like through these constructs and it will work as expected. 
Nothing will get confused by seeing a quote in the middle; instead, that quote will start a new quoted string in the interior context.
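A deliberately nested example makes this easy to see; run in bash, the inner double quotes do not terminate the outer ones, and the * stays literal at every level:
outer="x $(printf '%s' "inner $(printf '%s' "innermost * still literal")") y"
printf '%s\n' "$outer"    # prints: x inner innermost * still literal y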
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/421729", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/259023/" ] }
421,737
In a parameter expansion: Is it always better (or not worse) to double quote a parameter expansion than not? Are there cases where double quoting is not suggested? When is it necessary to add braces around a parameter name? When should we use double quotes around a parameter expansion instead of braces around the parameter name? When the other way around? When does either of the two work? Thanks.
Quote It is always better to quote parameter expansions when you want to keep the expanded value from being split into several words and affected by the value of IFS. For example: $ IFS=" elr"$ var="Hello World"$ printf '<%s> ' $var; echo<H> <> <> <o> <Wo> <> <d>$ printf '<%s> ' "$var"; echo<Hello World> However, there are some very limited instances that require an unquoted expansion to actually get the splitting done: $ IFS=$' \t\n'$ var="one two three"$ array=($var)$ declare -p arraydeclare -a array=([0]="one" [1]="two" [2]="three") Links on the subject: When is double-quoting necessary? Gilles Stéphane Chazelas Braces Braces are always required when the characters following the variable name should not be treated as part of that name: $ var=one$ echo "The value of var is $varvalue"The value of var is$ echo "The value of var is ${var}value"The value of var is onevalue From LESS="+/which is not to be interpreted as part" man bash ${parameter} The braces are required … when parameter is followed by a character which is not to be interpreted as part of its name. Additionally, braces are required when dealing with any double-digit positional parameter. $ set -- one two t33 f44 f55 s66 s77 e88 n99 t10 e11 t12$ echo "$11 ${11} $12 ${12}"one1 e11 one2 t12 Read the manual: LESS="+/enclosed in braces" man bash When a positional parameter consisting of more than a single digit is expanded, it must be enclosed in braces Or LESS="+/with more than one digit" man bash ${parameter} The value of parameter is substituted. The braces are required when parameter is a positional parameter with more than one digit, … Quotes vs Braces when shall we use double quote around parameter expansion instead of braces around parameter name? When the other way around? When does either of the two work? There is no rule for "shall", only the open possibility of using either: $ var=One$ echo "ThisIs${var}Var"ThisIsOneVar$ echo "ThisIs""$var""Var"ThisIsOneVar$ echo 'ThisIs'"$var"'Var'ThisIsOneVar$ echo 'ThisIs'"${var}"'Var'ThisIsOneVar All the expansions are entirely equivalent; use any one that you like better.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421737", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
421,750
I see a lot of people online referencing arch/x86/entry/syscalls/syscall_64.tbl for the syscall table; that works fine. But a lot of others reference /include/uapi/asm-generic/unistd.h , which is commonly found in the headers package. How come syscall_64.tbl shows 0 common read sys_read (the right answer), while unistd.h shows #define __NR_io_setup 0__SC_COMP(__NR_io_setup, sys_io_setup, compat_sys_io_setup) And then it shows __NR_read as #define __NR_read 63__SYSCALL(__NR_read, sys_read) Why is that 63, and not 1? How do I make sense out of /include/uapi/asm-generic/unistd.h ? Still, in /usr/include/asm/ there is /usr/include/asm/unistd_x32.h#define __NR_read (__X32_SYSCALL_BIT + 0)#define __NR_write (__X32_SYSCALL_BIT + 1)#define __NR_open (__X32_SYSCALL_BIT + 2)#define __NR_close (__X32_SYSCALL_BIT + 3)#define __NR_stat (__X32_SYSCALL_BIT + 4)/usr/include/asm/unistd_64.h#define __NR_read 0#define __NR_write 1#define __NR_open 2#define __NR_close 3#define __NR_stat 4/usr/include/asm/unistd_32.h#define __NR_restart_syscall 0#define __NR_exit 1 #define __NR_fork 2 #define __NR_read 3 #define __NR_write 4 Could someone tell me the difference between these unistd files, explain how unistd.h works, and say what the best method is for finding the syscall table?
When I’m investigating this kind of thing, I find it useful to ask the compiler directly (see Printing out standard C/GCC predefined macros in terminal for details): printf SYS_read | gcc -include sys/syscall.h -E - This shows that the headers involved (on Debian) are /usr/include/x86_64-linux-gnu/sys/syscall.h , /usr/include/x86_64-linux-gnu/asm/unistd.h , /usr/include/x86_64-linux-gnu/asm/unistd_64.h , and /usr/include/x86_64-linux-gnu/bits/syscall.h , and prints the system call number for read , which is 0 on x86-64. You can find the system call numbers for other architectures if you have the appropriate system headers installed (in a cross-compiler environment). For 32-bit x86 it’s quite easy: printf SYS_read | gcc -include sys/syscall.h -m32 -E - which involves /usr/include/asm/unistd_32.h among other header files, and prints the number 3. So from the userspace perspective, 32-bit x86 system calls are defined in asm/unistd_32.h , 64-bit x86 system calls in asm/unistd_64.h . asm/unistd_x32.h is used for the x32 ABI . uapi/asm-generic/unistd.h lists the default system calls, which are used on architectures which don’t have an architecture-specific system call table. In the kernel the references are slightly different, and are architecture-specific (again, for architectures which don’t use the generic system call table). This is where files such as arch/x86/entry/syscalls/syscall_64.tbl come in (and they ultimately end up producing the header files which are used in user space, unistd_64.h etc.). You’ll find a lot more detail about system calls in the pair of LWN articles on the topic, Anatomy of a system call part 1 and Anatomy of a system call part 2 .
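The same preprocessor trick extends to listing several syscall numbers in one go; a rough sketch (the names chosen are arbitrary, and the grep simply strips preprocessor line markers and blank lines from gcc's -E output):
for s in read write close getpid; do
    printf '%-8s ' "$s"
    printf 'SYS_%s\n' "$s" | gcc -include sys/syscall.h -E - \
        | grep -v -e '^#' -e '^$' | tail -n 1
done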
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/421750", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3285/" ] }
421,778
The site http://lynx.isc.org/ is not loading. Is https://lynx.invisible-island.net/ now the official homepage for "lynx", the text-based web browser?
The homepage for Lynx has moved more than once, as discussed in this development page : However, things change. Paul Vixie left ISC in mid-2013 to form a new company. At the time, that did not affect Lynx—from ISC's standpoint Lynx was just a box in a rack of servers. For the last four years of Lynx's stay at ISC, I did all of the software maintenance for the project. Still, a box in a rack costs money for electricity. Late in 2015, ISC shifted away from this style of project support, to reduce costs. I expanded my website to incorporate Lynx (roughly doubling the size of the site). Old site: http://lynx.isc.org/ftp://lynx.isc.org/ New site: https://lynx.invisible-island.net/ftp://ftp.invisible-island.net/lynx/ This new site is still the current homepage as of February 2018.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/421778", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/261470/" ] }
421,793
How might I search two directories for files with the same name, size, type, etc., and remove them from one of those directories?
Using fdupes : fdupes --delete dir1 dir2 fdupes will not test on filename or file type, but will test on file size and contents (which implicitly includes file type). Example: $ mkdir dir1 dir2$ touch dir{1,2}/{a,b,c}$ tree.|-- dir1| |-- a| |-- b| `-- c`-- dir2 |-- a |-- b `-- c2 directories, 6 files$ fdupes --delete dir1 dir2[1] dir1/a[2] dir1/b[3] dir1/c[4] dir2/a[5] dir2/b[6] dir2/cSet 1 of 1, preserve files [1 - 6, all]: 1 [+] dir1/a [-] dir1/b [-] dir1/c [-] dir2/a [-] dir2/b [-] dir2/c$ tree.|-- dir1| `-- a`-- dir22 directories, 1 file
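If you need to run it unattended, fdupes also has a --noprompt mode that, together with --delete, keeps the first file in each duplicate set, which in practice tends to be the copy under the first directory you list; check the behaviour on a throwaway tree before trusting it with real data:
fdupes --delete --noprompt dir1 dir2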
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421793", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273564/" ] }
421,801
From what I understand, devices connected to different controllers should show up under different USB busses. However, when I connect a keyboard to the xHCI controller, it is still listed under one of the EHCI busses. See the >>>> markers in the listings: $ lspci | grep -i usb>>>> 00:14.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI (rev 04)00:1a.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 (rev 04)00:1d.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 (rev 04)$ lspci -vs 00:14.000:14.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI (rev 04) (prog-if 30 [XHCI])Subsystem: ASUSTeK Computer Inc. 8 Series/C220 Series Chipset Family USB xHCIFlags: bus master, medium devsel, latency 0, IRQ 27Memory at ef920000 (64-bit, non-prefetchable) [size=64K]Capabilities: [70] Power Management version 2Capabilities: [80] MSI: Enable+ Count=1/8 Maskable- 64bit+Kernel driver in use: xhci_hcd So I do indeed have an xHCI controller. It is a separate physical port on the motherboard. $lsusbBus 002 Device 002: ID 8087:8000 Intel Corp. Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hubBus 001 Device 002: ID 8087:8008 Intel Corp. Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub>>>> Bus 004 Device 002: ID 174c:3074 ASMedia Technology Inc. ASM1074 SuperSpeed hub>>>> Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hubBus 003 Device 014: ID 046d:c03d Logitech, Inc. M-BT96a Pilot Optical MouseBus 003 Device 015: ID 195d:2030 Itron Technology iONE Bus 003 Device 013: ID 05e3:0608 Genesys Logic, Inc. HubBus 003 Device 012: ID 0424:2228 Standard Microsystems Corp. 9-in-2 Card ReaderBus 003 Device 011: ID 0424:2602 Standard Microsystems Corp. USB 2.0 HubBus 003 Device 010: ID 0424:2512 Standard Microsystems Corp. USB 2.0 HubBus 003 Device 003: ID 174c:2074 ASMedia Technology Inc. ASM1074 High-Speed hub>>>> Bus 003 Device 016: ID 03f0:0024 Hewlett-Packard KU-0316 KeyboardBus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub The "superspeed" 3.0 hub on bus 004 should be the xHCI controller. The keyboard, however, is attached to bus 003: $lsusb -t/: Bus 04.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/6p, 5000M |__ Port 3: Dev 2, If 0, Class=Hub, Driver=hub/4p, 5000M/: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/14p, 480M>>>>|__ Port 1: Dev 16, If 0, Class=Human Interface Device, Driver=usbhid, 1.5M |__ Port 3: Dev 3, If 0, Class=Hub, Driver=hub/4p, 480M |__ Port 2: Dev 10, If 0, Class=Hub, Driver=hub/2p, 480M |__ Port 1: Dev 11, If 0, Class=Hub, Driver=hub/4p, 480M |__ Port 1: Dev 12, If 0, Class=Mass Storage, Driver=usb-storage, 480M |__ Port 3: Dev 13, If 0, Class=Hub, Driver=hub/4p, 480M |__ Port 2: Dev 15, If 0, Class=Human Interface Device, Driver=usbhid, 12M |__ Port 2: Dev 15, If 1, Class=Human Interface Device, Driver=usbhid, 12M |__ Port 2: Dev 15, If 2, Class=Human Interface Device, Driver=usbhid, 12M |__ Port 4: Dev 14, If 0, Class=Human Interface Device, Driver=usbhid, 1.5M/: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=ehci-pci/2p, 480M |__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/8p, 480M/: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=ehci-pci/2p, 480M |__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/6p, 480M In fact, no matter how I connect devices to physical controllers, they always show up under the same bus. Does anyone have a clue what might be going on? 
System Processor: Intel(R) Core(TM) i7-4771 CPU @ 3.50GHzOS: Debian GNU/Linux testing (buster) with ACS patch, IOMMU enabled.Kernel: Linux 4.10.0-acs+ (x86_64)Version: #3 SMP PREEMPT Sun Feb 26 00:03:48 CET 2017Processor: Intel(R) Core(TM) i7-4771 CPU @ 3.50GHz : 3900.00 MHzBoard: Asus Z87-PROBIOS: AMI version 1707, VT-d/x enabled
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421801", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274142/" ] }
421,803
I'm using GUI Emacs. My background color remains the same no matter which custom theme I load. It may not matter, but when I load a different theme, I always get the message: message [sml] sml/theme set to automatic in the minibuffer. Possibly pertinent elisp is: (require 'powerline) . . .(setq sml/theme 'powerline)(sml/setup) I use a slightly modified version of solarized-light as my theme. In my .emacs file I have: (load-theme 'my-solarized-light 1);; (set-background-color "#fffff0") ;; not necessary because theme was customized The only difference between solarized-light and my-solarized-light is that I've set the background color to #FFFFF0 instead of #FDF6E3. One problem I have is that I can't remember how I did that. Near the top of my .emacs file, under custom-set-variables , is '(custom-enabled-themes (quote (my-solarized-light))) How do I make "load-theme" work correctly again?
I had the same problem. I had modified some font settings through describe-face which had set values in custom-set-faces in my .spacemacs file. (custom-set-faces ;; custom-set-faces was added by Custom. ;; If you edit it by hand, you could mess it up, so be careful. ;; Your init file should contain only one such instance. ;; If there is more than one, they won't work right. '(org-table ((t (:background "black" :foreground "#586e75" :weight bold))))) This seemed to then be applying a background colour to all themes. Removing this customisation and restarting spacemacs solved the problem.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/421803", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98407/" ] }
421,808
This is how I download various master branches from GitHub, and I aim to have a prettier script (and maybe a more reliable one?). wget -P ~/ https://github.com/user/repository/archive/master.zipunzip ~/master.zipmv ~/*-master ~/dir-name Can this be shortened to one line somehow, maybe with tar and a pipe? Please address the issues of downloading directly to the home directory ~/ and having a certain name for the directory (is mv really needed?).
The shortest way that seems to be what you want would be git clone https://github.com/user/repository --depth 1 --branch=master ~/dir-name . This will only copy the master branch, it will copy as little extra information as possible, and it will store it in ~/dir-name .
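If you would rather keep the tarball-and-pipe flavour of the original script (no .git directory at all), something like the following sketch works against GitHub's archive URLs; note that --strip-components is a GNU tar option, and the URL layout is simply GitHub's current convention for branch archives:
mkdir -p ~/dir-name &&
  curl -L https://github.com/user/repository/archive/master.tar.gz \
  | tar -xz -f - -C ~/dir-name --strip-components=1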
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/421808", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274154/" ] }
421,821
I cannot update my Kali Linux, when trying to execute apt-get update I get this error message: # apt-get updateGet:1 http://kali.mirror.garr.it/mirrors/kali kali-rolling InRelease [30.5 kB]Err:1 http://kali.mirror.garr.it/mirrors/kali kali-rolling InRelease The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository <[email protected]>Reading package lists... DoneW: GPG error: http://kali.mirror.garr.it/mirrors/kali kali-rolling InRelease: The following signatures were invalid: EXPKEYSIG ED444FF07D8D0BF6 Kali Linux Repository <[email protected]>E: The repository 'http://kali.mirror.garr.it/mirrors/kali kali-rolling InRelease' is not signed.N: Updating from such a repository can't be done securely, and is therefore disabled by default.N: See apt-secure(8) manpage for repository creation and user configuration details. If you need my kernel version: # uname -a4.13.0-kali1-amd64 #1 SMP Debian 4.13.10-1kali2 (2017-11-08) x86_64 GNU/Linux How can I fix this?
Add the gpg key: gpg --keyserver keyserver.ubuntu.com --recv-key 7D8D0BF6 Check the fingerprint: gpg --fingerprint 7D8D0BF6 Sample output: pub rsa4096 2012-03-05 [SC] [expires: 2021-02-03] 44C6 513A 8E4F B3D3 0875 F758 ED44 4FF0 7D8D 0BF6uid [ unknown] Kali Linux Repository <[email protected]>sub rsa4096 2012-03-05 [E] [expires: 2021-02-03] then : gpg -a --export 7D8D0BF6 | apt-key add -apt update Debian : SecureApt update : 8 Feb , 2018. Answer from the official documentation : Note that if you haven’t updated your Kali installation in some time (tsk2), you will likely receive a GPG error about the repository key being expired ( ED444FF07D8D0BF6 ). Fortunately, this issue is quickly resolved by running the following as root: wget -q -O - https://archive.kali.org/archive-key.asc | apt-key add - Kali docs: how to deal with APT complaining about Kali's expired key The easiest solution is to retrieve the latest key and store it in a place where apt will find it: sudo wget https://archive.kali.org/archive-key.asc -O /etc/apt/trusted.gpg.d/kali-archive-keyring.asc
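After importing the key by any of these routes, you can confirm that apt actually sees it (the grep pattern just narrows the listing down to the Kali entry):
apt-key list 2>/dev/null | grep -B 2 -A 1 'Kali Linux Repository'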
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/421821", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274166/" ] }