source_id (int64, 1 to 4.64M) | question (string, 0 to 28.4k) | response (string, 0 to 28.8k) | metadata (dict)
---|---|---|---|
387,010 | I have written a script that notifies me when a value is not within a given range. All values "out of range" are logged in a set of per day files. Every line is timestamped in a proprietary reverse way:
yyyymmddHHMMSS Now, I would like to refine the script and receive notifications only when at least 60 minutes have passed since the last notification for the given out-of-range value. I have already solved printing the logs in reverse order with: for i in $(ls -t /var/log/logfolder/*); do zcat $i|tac|grep \!\!\!|grep --color KEYFORVALUE; done that results in: ...
20170817041001 - WARNING: KEYFORVALUE=252.36 is not between 225 and 245 (!!!)
20170817040001 - WARNING: KEYFORVALUE=254.35 is not between 225 and 245 (!!!)
20170817035001 - WARNING: KEYFORVALUE=254.55 is not between 225 and 245 (!!!)
20170817034001 - WARNING: KEYFORVALUE=254.58 is not between 225 and 245 (!!!)
20170817033001 - WARNING: KEYFORVALUE=255.32 is not between 225 and 245 (!!!)
20170817032001 - WARNING: KEYFORVALUE=254.99 is not between 225 and 245 (!!!)
20170817031001 - WARNING: KEYFORVALUE=255.95 is not between 225 and 245 (!!!)
20170817030001 - WARNING: KEYFORVALUE=255.43 is not between 225 and 245 (!!!)
20170817025001 - WARNING: KEYFORVALUE=255.26 is not between 225 and 245 (!!!)
20170817024001 - WARNING: KEYFORVALUE=255.42 is not between 225 and 245 (!!!)
20170817012001 - WARNING: KEYFORVALUE=252.04 is not between 225 and 245 (!!!)
... Anyway, I'm stuck at calculating the number of seconds between two of those timestamps, for instance: 20170817040001
20160312000101 What should I do in order to calculate the time elapsed between two timestamps? | This will give you the date in seconds (since the UNIX epoch) date --date '2017-08-17 04:00:01' +%s # "1502938801" And this will give you the date as a readable string from a number of seconds date --date '@1502938801' # "17 Aug 2017 04:00:01" So all that's needed is to convert your date/timestamp into a format that GNU date can understand, use maths to determine the difference, and output the result datetime1=20170817040001
datetime2=20160312000101
# bash string manipulation
datestamp1="${datetime1:0:4}-${datetime1:4:2}-${datetime1:6:2} ${datetime1:8:2}:${datetime1:10:2}:${datetime1:12:2}"
datestamp2="${datetime2:0:4}-${datetime2:4:2}-${datetime2:6:2} ${datetime2:8:2}:${datetime2:10:2}:${datetime2:12:2}"
# otherwise use sed
# datestamp1=$(echo "$datetime1" | sed -nr 's/(....)(..)(..)(..)(..)(..)/\1-\2-\3 \4:\5:\6/p')
# datestamp2=$(echo "$datetime2" | sed -nr 's/(....)(..)(..)(..)(..)(..)/\1-\2-\3 \4:\5:\6/p')
seconds1=$(date --date "$datestamp1" +%s)
seconds2=$(date --date "$datestamp2" +%s)
delta=$((seconds1 - seconds2))
echo "$delta seconds" # "45197940 seconds" We've not provided timezone information here so it assumes local timezone. Your values for the seconds from the datetime will probably be different to mine. (If your values are UTC then you can use date --utc .) | {
"source": [
"https://unix.stackexchange.com/questions/387010",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105218/"
]
} |
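As a follow-up to the answer above, here is a minimal sketch of how the computed delta could drive the "notify only after 60 minutes" requirement from the question. The state-file path and the notification command are assumptions for illustration, not part of the original answer:
state_file=/tmp/last_notify_KEYFORVALUE          # assumed location for the last-notification timestamp
now_epoch=$(date +%s)
last_epoch=$(cat "$state_file" 2>/dev/null || echo 0)
if [ $((now_epoch - last_epoch)) -ge 3600 ]; then
    echo "sending notification"                  # placeholder for the real notification command
    echo "$now_epoch" > "$state_file"
fi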
387,076 | I'd like to be able to use xargs to execute multiple parameters in different parts of a command. For example, the following: echo {1..8} | xargs -n2 | xargs -I v1 -I v2 echo the number v1 comes before v2 I would hope that it would return the number 1 comes before 2
the number 3 comes before 4 ... etc Is this achievable? I suspect that my multiple use of -I is incorrect. | I believe that you can’t use -I that way.
But you can get the effect / behavior you want by saying: echo {1..8} | xargs -n2 sh -c 'echo "the number $1 comes before $2"' sh This, essentially, creates an ad hoc one-line shell script,
which xargs executes via sh -c .
The two values that xargs parses out of the input
are passed to this “script”.
The shell then assigns those values to $1 and $2 ,
which you can then reference in the “script”. | {
"source": [
"https://unix.stackexchange.com/questions/387076",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20889/"
]
} |
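A note on the answer above: the trailing sh becomes $0 of the ad hoc script, so the two values xargs supplies start at $1. As a hedged alternative without xargs (assuming bash for the brace expansion, as in the question), a plain read loop over pairs produces the same output:
printf '%s\n' {1..8} | while read -r a && read -r b; do
    echo "the number $a comes before $b"         # $a and $b hold one pair per iteration
done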
387,586 | When I do tail -f filename , how can I quit without using Ctrl+c to kill the process? What I want is a normal way to quit, like q in top . I am just curious about the question, because I feel that killing the process is not a good way to quit something. | As said in the comments, Ctrl-C does not kill the tail process, which is done by sending either a SIGTERM or SIGKILL signal (the infamous -9 ...); it merely sends a SIGINT which tells tail to end the forward mode and exit. FYI, there's a better tool: less +F filename In less , you can press Ctrl-C to end forward mode and scroll through the file, then press F to go back to forward mode again. Note that less +F is advocated by many as a better alternative to tail -f . For differences and caveats between the two tools, read this answer: Is `tail -f` more efficient than `less +F`? | {
"source": [
"https://unix.stackexchange.com/questions/387586",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/235470/"
]
} |
387,600 | dmesg shows lots of messages from serial8250: $ dmesg | grep -i serial
[ 0.884481] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
[ 6.584431] systemd[1]: Created slice system-serial\x2dgetty.slice.
[633232.317222] serial8250: too much work for irq4
[633232.453355] serial8250: too much work for irq4
[633248.378343] serial8250: too much work for irq4
... I have not seen this message before. What does it generally mean? Should I be worried? (From my research, it is not distribution specific, but in case it is relevant, I see the messages on an EC2 instance running Ubuntu 16.04.) | There is nothing wrong with your kernel or device drivers. The problem is with your machine hardware. The problem is that it is impossible hardware. This is an error in several virtualization platforms (including at least XEN, QEMU, and VirtualBox) that has been plaguing people for at least a decade. The problem is that the UART hardware that is emulated by various brands of virtual machine behaves impossibly, sending characters at an impossibly fast line speed. To the kernel, this is indistinguishable from faulty real UART hardware that is continually raising an interrupt for an empty output buffer/full input buffer. (Such faulty real hardwares exist, and you will find embedded Linux people also discussing this problem here and there.) The kernel pushes the data out/pulls the data in, and the UART is immediately raising an interrupt saying that it is ready for more. H. Peter Anvin provided a patch to fix QEMU in 2008. You'll need to ask Amazon when EC2 is going to catch up. Further reading Alan Cox (2008-01-12). Re: [PATCH] serial: remove "too much work for irq" printk . Linux Kernel Mailing List. H. Peter Anvin (2008-02-07). Re: 2.6.24 says "serial8250: too much work for irq4" a lot. . Linux Kernel Mailing List. Casey Dahlin (2009-05-15). 'serial8250: too much work for irq4' message when viewing serial console on SMP full-virtualized xen domU . 501026. Red Hat Bugzilla. Sibiao Luo (2013-07-21). guest kernel will print many "serial8250: too much work for irq3" when using kvm with isa-serial . 986761. Red Hat Bugzilla. schinkelm (2008-12-16). serial port in linux guest gives "serial8250: too much work for irq4" . 2752. VirtualBox bugs. Marc PF (2015-09-05). EC2 instance becomes unresponsive . AWS Developer Forums. | {
"source": [
"https://unix.stackexchange.com/questions/387600",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29509/"
]
} |
387,640 | It is written in the linux kernel Makefile that clean - Remove most generated files but keep the config and
enough build support to build external modules
mrproper - Remove all generated files + config + various backup files And it is stated on the arch docs that To finalise the preparation, ensure that the kernel tree is absolutely clean; $ make clean && make mrproper So if make mrproper does a more thorough remove, why is the make clean used? | Cleaning is done on three levels, as described in a comment in the Linux kernel Makefile : ###
# Cleaning is done on three levels.
# make clean Delete most generated files
# Leave enough to build external modules
# make mrproper Delete the current configuration, and all generated files
# make distclean Remove editor backup files, patch leftover files and the like According to the Makefile, the mrproper target depends on the clean target (see line 1421 ). Additionally, the distclean target depends on mrproper . Executing make mrproper will therefore be enough as it would also remove the same things as what the clean target would do (and more). The mrproper target was added in 1993 (Linux 0.97.7) and has always depended on the clean target. This means that it was never necessary to use both targets as in make clean && make mrproper . Historic reference: https://archive.org/details/git-history-of-linux | {
"source": [
"https://unix.stackexchange.com/questions/387640",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100597/"
]
} |
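To check this yourself without deleting anything, make's standard dry-run mode can be used on the kernel tree; this only prints the commands each target would run and is a generic make feature, not something specific to the answer above:
make -n clean       # show what "clean" would remove, without executing it
make -n mrproper    # same for "mrproper"; the clean steps appear here too, since mrproper depends on clean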
387,656 | For example, we want to count all quote ( " ) characters; we just worry if files have more quotes than they should. For example: cluster-env,"manage_dirs_on_root","true"
cluster-env,"one_dir_per_partition","false"
cluster-env,"override_uid","true"
cluster-env,"recovery_enabled","false" expected results: 16 | You can combine tr (translate or delete characters) with wc (count words, lines, characters): tr -cd '"' < yourfile.cfg | wc -c -d elete all characters in the c omplement of " , and then count the c haracters (bytes). Some versions of wc may support the -m or --chars flag which will better suit non-ASCII character counts. | {
"source": [
"https://unix.stackexchange.com/questions/387656",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
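For reference, the same count can be obtained without tr; these are hedged alternatives to the pipeline above, assuming the same yourfile.cfg and, for the first one, a grep with -o support (GNU or BSD grep):
grep -o '"' yourfile.cfg | wc -l                              # print each quote on its own line, then count the lines
awk -F'"' 'NF {n += NF - 1} END {print n+0}' yourfile.cfg     # quotes per line = number of field separators; sum them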
387,675 | I have been struggling for the past couple of days attempting to hook up my 1920x1080 external monitor to my 3200x1800 laptop. When I run xrandr , it outputs: Screen 0: minimum 320 x 200, current 5120 x 1800, maximum 8192 x 8192
eDP-1 connected 3200x1800+1920+0 (normal left inverted right x axis y axis) 294mm x 165mm
3200x1800 59.98*+ 47.99
2048x1536 60.00
1920x1440 60.00
1856x1392 60.01
1792x1344 60.01
1920x1200 59.95
1920x1080 59.93
1600x1200 60.00
1680x1050 59.95 59.88
1600x1024 60.17
1400x1050 59.98
1280x1024 60.02
1440x900 59.89
1280x960 60.00
1360x768 59.80 59.96
1152x864 60.00
1024x768 60.04 60.00
960x720 60.00
928x696 60.05
896x672 60.01
960x600 60.00
960x540 59.99
800x600 60.00 60.32 56.25
840x525 60.01 59.88
800x512 60.17
700x525 59.98
640x512 60.02
720x450 59.89
640x480 60.00 59.94
680x384 59.80 59.96
576x432 60.06
512x384 60.00
400x300 60.32 56.34
320x240 60.05
DP-1 connected primary 1920x1080+0+720 (normal left inverted right x axis y axis) 527mm x 296mm
1920x1080 60.00 + 50.00 59.94
1920x1080i 60.00* 50.00 59.94
1600x1200 60.00
1600x900 60.00
1280x1024 75.02 60.02
1152x864 75.00
1280x720 60.00 50.00 59.94
1024x768 75.03 60.00
800x600 75.00 60.32
720x576 50.00
720x480 60.00 59.94
640x480 75.00 60.00 59.94
720x400 70.08
HDMI-1 disconnected (normal left inverted right x axis y axis)
DP-2 disconnected (normal left inverted right x axis y axis)
HDMI-2 disconnected (normal left inverted right x axis y axis) So, I figured if I run, xrandr --output DP-1 --mode 1920x1080 , then the display would show on the external monitor... I was wrong: the monitor claimed to have no signal. I followed this comment which allowed the monitor to detect the HDMI signal, but I could only use a resolution lower than 1024x768 . I played around a bit more, and the monitor detected 1920x1080i as well, but the borders around the screen were cutoff. I did some research and figured out about something called overscan and used xrandr --output DP-1 --set underscan on , but that caused the following output: X Error of failed request: BadName (named color or font does not exist)
Major opcode of failed request: 140 (RANDR)
Minor opcode of failed request: 11 (RRQueryOutputProperty)
Serial number of failed request: 38
Current serial number in output stream: 38 I also tried to add a new mode via xrandr and cvt and also tried changing the display settings via the settings panel in Ubuntu. There does not seem to be a problem with the monitor because it works fine when I boot Windows 10. Is there anything else I could try? Machine: Dell XPS 13 9350 (no hardware changes) OS: Ubuntu 16.04 LTS External Monitor: Dell S2415H | You can combine tr (translate or delete characters) with wc (count words, lines, characters): tr -cd '"' < yourfile.cfg | wc -c The -d deletes all characters in the complement ( -c ) of " , and wc -c then counts the characters (bytes). Some versions of wc may support the -m or --chars flag which will better suit non-ASCII character counts. | {
"source": [
"https://unix.stackexchange.com/questions/387675",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/247647/"
]
} |
387,847 | I've got the following script: #!/bin/bash
echo "We are $$"
trap "echo HUP" SIGHUP
cat # wait indefinitely When I send SIGHUP (using kill -HUP pid ), nothing happens. If I change the script slightly: #!/bin/bash
echo "We are $$"
trap "kill -- -$BASHPID" EXIT # add this
trap "echo HUP" SIGHUP
cat # wait indefinitely ...then the script does the echo HUP thing right as it exits (when I press Ctrl+C): roger@roger-pc:~ $ ./hupper.sh
We are 6233
^CHUP What's going on? How should I send a signal (it doesn't necessarily have to be SIGHUP ) to this script? | The Bash manual states: If bash is waiting for a command to complete and receives a signal for
which a trap has been set, the trap will not be executed until the
command completes. That means that despite the signal is received by bash when you send it, your trap on SIGHUP will be called only when cat ends. If this behavior is undesirable, then either use bash builtins (e.g. read + printf in a loop instead of cat ) or use background jobs (see Stéphane's answer ). | {
"source": [
"https://unix.stackexchange.com/questions/387847",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46851/"
]
} |
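A minimal sketch of the background-job workaround mentioned at the end of the answer above (an illustration of the idea, not the referenced answer verbatim): running the blocking command asynchronously and waiting for it lets bash run the trap as soon as the signal arrives, because the wait builtin returns immediately when a trapped signal is received.
#!/bin/bash
echo "We are $$"
trap "echo HUP" SIGHUP
cat &            # run the blocking command in the background instead of the foreground
wait "$!"        # wait returns as soon as a trapped signal arrives, so "HUP" is printed right away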
388,165 | Can sed make something like : 12345 become : 1&2&3&4&5 ? | With GNU sed : sed 's/./\&&/2g' ( s ubstitute every ( g ) character ( . ) with the same ( & ) preceded with & ( \& ) but only starting from the second occurrence ( 2 )). Portably: sed 's/./\&&/g;s/&//' (replace every occurrence, but then remove the first & which we don't want). With some awk implementations (not POSIX as the behaviour is unspecified for an empty FS): awk -F '' -v OFS="&" '{$1=$1;print}' (with gawk and a few other awk implementations, an empty field separator splits the records into its character constituents . The output field separator ( OFS ) is set to & . We assign a value to $1 (itself) to force the record to be regenerated with the new field separator before printing it, NF=NF also works and is slightly more efficient in many awk implementations but the behaviour when you do that is currently unspecified by POSIX). perl : perl -F -lape '$_=join"&",@F' ( -pe runs the code for every line, and prints the result ( $_ ); -l strips and re-adds line endings automatically; -a populates @F with input split on the delimiter set in -F , which here is an empty string. The result is to split every character into @F , then join them up with '&', and print the line.) Alternatively: perl -pe 's/(?<=.)./&$&/g' (replace every character provided it's preceded by another character (look-behind regexp operator (?<=...)) Using zsh shell operators: in=12345
out=${(j:&:)${(s::)in}} (again, split on an empty field separator using the s:: parameter expansion flag, and join with & ) Or: out=${in///&} out=${out#?} (replace every occurrence of nothing (so before every character) with & using the ${var//pattern/replacement} ksh operator (though in ksh an empty pattern means something else, and yet something else, I'm not sure what in bash ), and remove the first one with the POSIX ${var#pattern} stripping operator). Using ksh93 shell operators: in=12345
out=${in//~(P:.(?=.))/\0&} ( ~(P:perl-like-RE) being a ksh93 glob operator to use perl-like regular expressions (different from perl's or PCRE's though), (?=.) being the look-ahead operator: replace a character provided it's followed by another character with itself ( \0 ) and & ) Or: out=${in//?/&\0}; out=${out#?} (replace every character ( ? ) with & and itself ( \0 ), and we remove the superfluous one) Using bash shell operators: shopt -s extglob
in=12345
out=${in//@()/&}; out=${out#?} (same as zsh 's, except that you need @() there (a ksh glob operator for which you need extglob in bash )). | {
"source": [
"https://unix.stackexchange.com/questions/388165",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
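For a quick sanity check of the GNU sed variant above, the expected result from the question can be reproduced directly (output shown as a comment):
echo 12345 | sed 's/./\&&/2g'    # prints: 1&2&3&4&5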
388,166 | Recently I had a file that reported a file size of 33P bytes in my 500GB SSD, more details here . That was through ls and cp would report that there was no enough space. In my short knowledge and poor understanding of VFS, I would believe that the (SATA) drivers talk to the disk and it moves its way through the VFS until it makes it to the inodes (assumption from the description on section 8.6 Inodes here ) and then the kernel somehow pass it to user space. In the end, I like to know how ls and cp know the size, but I would also like to know how a file could report the wrong size and if it were to happen again in the future, where to look for answers. | With GNU sed : sed 's/./\&&/2g' ( s ubstitute every ( g ) character ( . ) with the same ( & ) preceded with & ( \& ) but only starting from the second occurrence ( 2 )). Portably: sed 's/./\&&/g;s/&//' (replace every occurrence, but then remove the first & which we don't want). With some awk implementations (not POSIX as the behaviour is unspecified for an empty FS): awk -F '' -v OFS="&" '{$1=$1;print}' (with gawk and a few other awk implementations, an empty field separator splits the records into its character constituents . The output field separator ( OFS ) is set to & . We assign a value to $1 (itself) to force the record to be regenerated with the new field separator before printing it, NF=NF also works and is slightly more efficient in many awk implementations but the behaviour when you do that is currently unspecified by POSIX). perl : perl -F -lape '$_=join"&",@F' ( -pe runs the code for every line, and prints the result ( $_ ); -l strips and re-adds line endings automatically; -a populates @F with input split on the delimiter set in -F , which here is an empty string. The result is to split every character into @F , then join them up with '&', and print the line.) Alternatively: perl -pe 's/(?<=.)./&$&/g' (replace every character provided it's preceded by another character (look-behind regexp operator (?<=...)) Using zsh shell operators: in=12345
out=${(j:&:)${(s::)in}} (again, split on an empty field separator using the s:: parameter expansion flag, and join with & ) Or: out=${in///&} out=${out#?} (replace every occurrence of nothing (so before every character) with & using the ${var//pattern/replacement} ksh operator (though in ksh an empty pattern means something else, and yet something else, I'm not sure what in bash ), and remove the first one with the POSIX ${var#pattern} stripping operator). Using ksh93 shell operators: in=12345
out=${in//~(P:.(?=.))/\0&} ( ~(P:perl-like-RE) being a ksh93 glob operator to use perl-like regular expressions (different from perl's or PCRE's though), (?=.) being the look-ahead operator: replace a character provided it's followed by another character with itself ( \0 ) and & ) Or: out=${in//?/&\0}; out=${out#?} (replace every character ( ? ) with & and itself ( \0 ), and we remove the superflous one) Using bash shell operators: shopt -s extglob
in=12345
out=${in//@()/&}; out=${out#?} (same as zsh 's, except that you need @() there (a ksh glob operator for which you need extglob in bash )). | {
"source": [
"https://unix.stackexchange.com/questions/388166",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167520/"
]
} |
388,475 | I read that I should quote variables in bash, e.g. "$foo" instead of $foo. However, while writing a script, I found a case where it works without quotes but not with them: wget_options='--mirror --no-host-directories'
local_root="$1" # ./testdir received from command line
remote_root="$2" # ftp://XXX received from command line
relative_path="$3" # /XXX received from command line This one works: wget $wget_options --directory_prefix="$local_root" "$remote_root$relative_path" This one does not (note the double quotes around $wget_options): wget "$wget_options" --directory_prefix="$local_root" "$remote_root$relative_path" What is the reason for this? Is the first line the good version; or should I suspect that there is
a hidden error somewhere that causes this behavior? In general, where do I find good documentation to understand how bash and its quoting works? During writing this script I feel that I started to work on a trial-and-error base instead of understanding the rules. | Basically, you should double quote variable expansions to protect them from word splitting (and filename generation). However, in your example, wget_options='--mirror --no-host-directories'
wget $wget_options --directory_prefix="$local_root" "$remote_root$relative_path" word splitting is exactly what you want . With "$wget_options" (quoted), wget doesn't know what to do with the single argument --mirror --no-host-directories and complains wget: unknown option -- mirror --no-host-directories For wget to see the two options --mirror and --no-host-directories as separate, word splitting has to occur. There are more robust ways of doing this. If you are using bash or any other shell that uses arrays like bash do, see glenn jackman's answer . Gilles' answer additionally describes an alternative solution for plainer shells such as the standard /bin/sh . Both essentially store each option as a separate element in an array. Related question with good answers: Why does my shell script choke on whitespace or other special characters? Double quoting variable expansions is a good rule of thumb. Do that . Then be aware of the very few cases where you shouldn't do that. These will present themselves to you through diagnostic messages, such as the above error message. There are also a few cases where you don't need to quote variable expansions. But it's easier to continue using double quotes anyway as it doesn't make much difference. One such case is variable=$other_variable Another one is case $variable in
...) ... ;;
esac | {
"source": [
"https://unix.stackexchange.com/questions/388475",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245573/"
]
} |
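A brief sketch of the array-based approach the answer above points to (bash-specific; each option is stored as a separate array element, so no word splitting of unquoted expansions is needed). The option values are taken from the question; everything else is illustrative:
wget_options=(--mirror --no-host-directories)           # one option per array element
wget "${wget_options[@]}" "$remote_root$relative_path"  # expands to exactly one argument per element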
388,586 | Is there any difference between Requires vs Wants in target files? [Unit]
Description=Graphical Interface
Documentation=man:systemd.special(7)
Requires=multi-user.target
Wants=display-manager.service Thanks | As heemayl noted in the comment, the man page answers your question.
From the web: Wants= A weaker version of Requires=. Units listed in this option will be started if the configuring unit is. However, if the listed units fail to start or cannot be added to the transaction, this has no impact on the validity of the transaction as a whole. This is the recommended way to hook start-up of one unit to the start-up of another unit. And Requires= Configures requirement dependencies on other units. If this unit gets activated, the units listed here will be activated as well. If one of the other units gets deactivated or its activation fails, this unit will be deactivated. This option may be specified more than once or multiple space-separated units may be specified in one option in which case requirement dependencies for all listed names will be created. Note that requirement dependencies do not influence the order in which services are started or stopped. This has to be configured independently with the After= or Before= options. If a unit foo.service requires a unit bar.service as configured with Requires= and no ordering is configured with After= or Before=, then both units will be started simultaneously and without any delay between them if foo.service is activated. Often, it is a better choice to use Wants= instead of Requires= in order to achieve a system that is more robust when dealing with failing services. Note that this dependency type does not imply that the other unit always has to be in active state when this unit is running. Specifically: failing condition checks (such as ConditionPathExists=, ConditionPathExists=, … — see below) do not cause the start job of a unit with a Requires= dependency on it to fail. Also, some unit types may deactivate on their own (for example, a service process may decide to exit cleanly, or a device may be unplugged by the user), which is not propagated to units having a Requires= dependency. Use the BindsTo= dependency type together with After= to ensure that a unit may never be in active state without a specific other unit also in active state (see below). From the freedesktop.org page Your service will only start if the multi-user.target has been reached (I don't know what happens if you try to add it to that target?), and systemd will try to start the display-manager.service together with your service.
If display-manager.service fails for whatever reason, your service will still be started (so if you really need the display-manager, use Requires= for that).
If the multi-user.target is not reached, however, your service will not be launched. What is your service? Is it a kiosk system? Intuitively I'd suppose you want to add your service to the multi-user.target (so it's launched at startup), and have it strictly depend on the display-manager.service via Requires=display-manager.service . But that's just wild guessing now. | {
"source": [
"https://unix.stackexchange.com/questions/388586",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65498/"
]
} |
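To see how such dependencies are actually wired on a running system, systemctl can print the relevant unit properties; graphical.target is used here only as an example target:
systemctl show -p Requires,Wants,After graphical.target   # print these dependency properties of the target
systemctl list-dependencies graphical.target              # tree of units the target pulls in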
388,844 | I can run ENV_VAR=value command to run command with a specific value for ENV_VAR . What is the equivalent to unset ENV_VAR for command ? | Other than with the -i option, which wipes the whole environment, POSIX env doesn't provide any way to unset a variable. However, with a few env implementations (including GNU's, busybox ' and FreeBSD's at least), you can do: env -u ENV_VAR command Which would work at removing every instance of an ENV_VAR variable from the environment (note though that it doesn't work for the environment variable with an empty name ( env -u '' either gives an error or is ineffective depending on the implementation even though all accept env '=value' , probably a limitation incurred by the unsetenv() C function which POSIX requires to return an error for the empty string, while there's no such limitation for putenv() )). Portably (in POSIX shells), you can do: (unset -v ENV_VAR; exec command) (note that with some shells, using exec can change which command is run: runs the one in the filesystem instead of a function or builtin for instance (and would bypass alias expansion obviously), like env above. You'd want to omit it in those cases). But that won't work for environment variables that have a name that is not mappable to a shell variable (note that some shells like mksh would strip those variables from the environment on startup anyway), or variables that have been marked read-only. -v is for the Bourne shell and bash whose unset without -v could unset a ENV_VAR function if there was no variable by that name. Most other shells wouldn't unset functions unless you pass the -f option. (unlikely to make a difference in practice). (Also beware of the bug/misfeature of bash / mksh / yash whose unset , under some circumstance may not unset the variable, but reveal the variable in an outer scope ) If perl is available, you could do: perl -e 'delete $ENV{shift@ARGV}; exec @ARGV or die$!' ENV_VAR command Which will work even for the environment variable with an empty name. Now all those won't work if you want to change an environment variable for a builtin or function and don't want them to run in a subshell, like in bash : env -u LANG printf -v var %.3f 1.2 # would run /usr/bin/printf instead
(unset -v LANG; printf -v var %.3f 1.2) # changes to $var lost afterwards (here unsetting LANG as a misguided approach at making sure . is used and understood as the decimal separator. It would be better to use LC_ALL=C printf... for that) With some shells, you can instead create a local scope for the variable using a function: without() {
local "$1" # $1 variable declared as initially unset in bash¹
shift
"$@"
}
without LANG printf -v var %.3f 1.2 With zsh , you can also use an anonymous function: (){local ENV_VAR; command} That approach won't work in some shells (like the ones based on the Almquist shell), whose local don't declare the variable as initially unset (but inherit the value and attributes). In those, you can do local ENV_VAR; unset ENV_VAR , but don't do that in mksh or yash ( typeset instead of local for that one) as that wouldn't work, the unset would only be cancelling the local . ¹ Also beware that in bash , the local ENV_VAR (even though unset) would retain the export attribute. So if command was a function that assigns a value to ENV_VAR , the variable would be available in the environment of commands called afterwards. unset ENV_VAR would clear that attribute. Or you could use local +x ENV_VAR which would also ensure a clean slate (unless that variable has been declared read-only, but then there's nothing you can do about it in bash ). | {
"source": [
"https://unix.stackexchange.com/questions/388844",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72868/"
]
} |
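Whichever method from the answer above is used, it is easy to confirm that the variable really is absent in the child's environment; a small check, assuming a POSIX sh and an env that supports -u (GNU, busybox, FreeBSD):
export ENV_VAR=hello
env -u ENV_VAR sh -c 'echo "ENV_VAR is ${ENV_VAR-unset}"'        # prints: ENV_VAR is unset
(unset -v ENV_VAR; sh -c 'echo "ENV_VAR is ${ENV_VAR-unset}"')   # same result with the portable subshell form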
388,875 | How can I determine or set the size limit of /etc/hosts ? How many lines can it have? | Problematical effects include slow hostname resolution (unless the OS somehow converts the linear list into a faster-to-search structure?) and the potential for surprising interaction with shell tab completion well before any meaningful file size is reached. For example! If one places 500,000 host entries in /etc/hosts # perl -E 'for (1..500000) { say "127.0.0.10 $_.science" }' >> /etc/hosts for science, the default hostname tab completion in ZSH takes about ~25 seconds on my system to return a completion prompt (granted, this is on a laptop from 2009 with a 5400 RPM disk, but still). | {
"source": [
"https://unix.stackexchange.com/questions/388875",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37153/"
]
} |
388,892 | I wrote the following command in order to match $a with $b, but when the value includes "-", then I get an error. How can I avoid that? # a="-Xmx5324m"
# b="-Xmx5324m"
#
#
# echo "$a" | grep -Fxc "$b"
grep: conflicting matchers specified | Place -- before your pattern: echo "$a" | grep -Fxc -- "$b" -- specifies end of command options for many commands/shell built-ins, after which the remaining arguments are treated as positional arguments. | {
"source": [
"https://unix.stackexchange.com/questions/388892",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
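An alternative to -- that also works with grep specifically is the -e option, which explicitly marks the next argument as a pattern:
echo "$a" | grep -Fxc -e "$b"   # -e tells grep that "$b" is the pattern, even when it starts with a dash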
389,014 | So I wanted to call two background ssh processes: ssh -D localhost:8087 -fN aws-gateway-vpc1
ssh -D localhost:8088 -fN aws-gateway-vpc2 These gateways don't have the benefit of letting me set an authorized_keys file, so I must be prompted for my interactive password. That is why I'm using the -f flag and not the shell's & which will only background the process after I authenticate interactively. In this scenario I appear to be unable to use the $! bash variable to get the pid of the recently [self] backgrounded process. What other options do I have to find the correct pid to kill later if interrupted? | Finding the pid by grepping might be error prone. Alternative option would be to use ControlPath and ControlMaster options of SSH. This way you will be able to have your ssh command listen on a control socket and wait for commands from subsequent ssh calls. Try this ssh -D localhost:8087 -S /tmp/.ssh-aws-gateway-vpc1 -M -fN aws-gateway-vpc1
# (...)
# later, when you want to terminate ssh connection
ssh -S /tmp/.ssh-aws-gateway-vpc1 -O exit aws-gateway-vpc1 The exit command lets you kill the process without knowing the PID. If you do need the PID for anything, you can use the check command to show it: $ ssh -S /tmp/.ssh-aws-gateway-vpc1 -O check aws-gateway-vpc1
Master running (pid=1234) | {
"source": [
"https://unix.stackexchange.com/questions/389014",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126165/"
]
} |
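If both tunnels from the question are opened this way, they can be torn down together with a small loop; this assumes the second socket follows the same naming pattern as the one shown in the answer:
for host in aws-gateway-vpc1 aws-gateway-vpc2; do
    ssh -S "/tmp/.ssh-$host" -O exit "$host"   # ask each control master to close its connection
done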
389,255 | In a text processing field is there a way to know if a tab is 8 characters in length (the default length) or less? For example, if I have a sample file with tab delimiter and the content of a field fit in less than one tab (≤7), and if I have a tab after that, then that tab will be only ‘tab size – field size’ in length. Is there a way to get the total length of tabs on a line? I'm not looking for the number of tabs (i.e. 10 tabs should not return 10) but the character length of those tabs. For the following input data (tab delimited between fields and only one tab): field0 field00 field000 last-field
fld1 fld11 fld001 last-fld
fd2 fld3 last-fld I expect to count length of tabs in each line, so 11
9
9 | The TAB character is a control character which when sent to a terminal¹ makes the terminal's cursor move to the next tab-stop. By default, in most terminals, the tab stops are 8 columns apart, but that's configurable. You can also have tab stops at irregular intervals: $ tabs 3 9 11; printf '\tx\ty\tz\n'
x y z Only the terminal knows how many columns to the right a TAB will move the cursor. You can get that information by querying the cursor position from the terminal before and after the tab has been sent. If you want to make that calculation by hand for a given line and assuming that line is printed at the first column of the screen, you'll need to: know where the tab-stops are² know the display width of every character know the width of the screen decide whether you want to handle other control characters like \r (which moves the cursor to the first column) or \b that moves the cursor back...) It can be simplified if you assume the tab stops are every 8 columns, the line fits in the screen and there are no other control characters or characters (or non-characters) that your terminal cannot display properly. With GNU wc , if the line is stored in $line : width=$(printf %s "$line" | wc -L)
width_without_tabs=$(printf %s "$line" | tr -d '\t' | wc -L)
width_of_tabs=$((width - width_without_tabs)) wc -L gives the width of the widest line in its input. It does that by using wcwidth(3) to determine the width of characters and assuming the tab stops are every 8 columns. For non-GNU systems, and with the same assumptions, see @Kusalananda's approach . It's even better as it lets you specify the tab stops but unfortunately currently doesn't work with GNU expand (at least) when the input contains multi-byte characters or 0-width (like combining characters) or double-width characters. ¹ note though that if you do stty tab3 , the tty device line discipline will take over the tab processing (convert TAB to spaces based on its own idea of where the cursor might be before sending to the terminal) and implement tab stops every 8 columns. Testing on Linux, it seems to handle properly CR, LF and BS characters as well as multibyte UTF-8 ones (provided iutf8 is also on) but that's about it. It assumes all other non-control characters (including zero-width, double-width characters) have a width of 1, it (obviously) doesn't handle escape sequences, doesn't wrap properly... That's probably intended for terminals that can't do tab processing. In any case, the tty line discipline does need to know where the cursor is and uses those heuristics above, because when using the icanon line editor (like when you enter text for applications like cat that don't implement their own line editor), when you press Tab Backspace , the line discipline needs to know how many BS characters to send to erase that Tab character for display. If you change where the tab stops are (like with tabs 12 ), you'll notice that Tabs are not erased properly. Same if you enter double-width characters before pressing Tab Backspace . ² For that, you could send tab characters and query the cursor position after each one. Something like: tabs=$(
saved_settings=$(stty -g)
stty -icanon min 1 time 0 -echo
gawk -vRS=R -F';' -vORS= < /dev/tty '
function out(s) {print s > "/dev/tty"; fflush("/dev/tty")}
BEGIN{out("\r\t\33[6n")}
$NF <= prev {out("\r"); exit}
{print sep ($NF - 1); sep=","; prev = $NF; out("\t\33[6n")}'
stty "$saved_settings"
) Then, you can use that as expand -t "$tabs" using @Kusalananda's solution. | {
"source": [
"https://unix.stackexchange.com/questions/389255",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72456/"
]
} |
389,256 | I have an input file with non-fixed column number on which I would like to do some arithmetic calculations on: input.txt
ID1 4651455 234 4651765 392 4652423 470
ID2 16181020 176 16184958 869 16185889 347 16187777 231 The input file has tab-separated fields has always a unique ID in column $1 (not duplicated). Not all rows have the same number of columns. What I would like to achieve is a tab-separated file as follows: output1.txt
ID1 76 266
ID2 3762 62 1541 Basically it would print the $1 of the original file, then it would start from the second even column of the file ( $4 ) and subtract to its value the previous two columns ( $4 - $3 - $2 ) then do the same with all the even columns of the input file (e.g., $6 - $5 - $4 ; $8 - $7 - $6 ; ...). In my knowledge, this can be done with awk print , but I only know how to deal with it when my file has a fixed number of columns in every row. An even more ideal output for my needs would be the following: output2.txt
ID1 234 76 392 266 470
ID2 176 3762 869 62 347 1541 231 Basically it would print the $1 of the original file, then interleave printing the odd columns from the input file to the columns as in output1.txt . | The TAB character is a control character which when sent to a terminal¹ makes the terminal's cursor move to the next tab-stop. By default, in most terminals, the tab stops are 8 columns apart, but that's configurable. You can also have tab stops at irregular intervals: $ tabs 3 9 11; printf '\tx\ty\tz\n'
x y z Only the terminal knows how many columns to the right a TAB will move the cursor. You can get that information by querying the cursor position from the terminal before and after the tab has been sent. If you want to make that calculation by hand for a given line and assuming that line is printed at the first column of the screen, you'll need to: know where the tab-stops are² know the display width of every character know the width of the screen decide whether you want to handle other control characters like \r (which moves the cursor to the first column) or \b that moves the cursor back...) It can be simplified if you assume the tab stops are every 8 columns, the line fits in the screen and there are no other control characters or characters (or non-characters) that your terminal cannot display properly. With GNU wc , if the line is stored in $line : width=$(printf %s "$line" | wc -L)
width_without_tabs=$(printf %s "$line" | tr -d '\t' | wc -L)
width_of_tabs=$((width - width_without_tabs)) wc -L gives the width of the widest line in its input. It does that by using wcwidth(3) to determine the width of characters and assuming the tab stops are every 8 columns. For non-GNU systems, and with the same assumptions, see @Kusalananda's approach . It's even better as it lets you specify the tab stops but unfortunately currently doesn't work with GNU expand (at least) when the input contains multi-byte characters or 0-width (like combining characters) or double-width characters. ¹ note though that if you do stty tab3 , the tty device line discipline will take over the tab processing (convert TAB to spaces based on its own idea of where the cursor might be before sending to the terminal) and implement tab stops every 8 columns. Testing on Linux, it seems to handle properly CR, LF and BS characters as well as multibyte UTF-8 ones (provided iutf8 is also on) but that's about it. It assumes all other non-control characters (including zero-width, double-width characters) have a width of 1, it (obviously) doesn't handle escape sequences, doesn't wrap properly... That's probably intended for terminals that can't do tab processing. In any case, the tty line discipline does need to know where the cursor is and uses those heuristics above, because when using the icanon line editor (like when you enter text for applications like cat that don't implement their own line editor), when you press Tab Backspace , the line discipline needs to know how many BS characters to send to erase that Tab character for display. If you change where the tab stops are (like with tabs 12 ), you'll notice that Tabs are not erased properly. Same if you enter double-width characters before pressing Tab Backspace . ² For that, you could send tab characters and query the cursor position after each one. Something like: tabs=$(
saved_settings=$(stty -g)
stty -icanon min 1 time 0 -echo
gawk -vRS=R -F';' -vORS= < /dev/tty '
function out(s) {print s > "/dev/tty"; fflush("/dev/tty")}
BEGIN{out("\r\t\33[6n")}
$NF <= prev {out("\r"); exit}
{print sep ($NF - 1); sep=","; prev = $NF; out("\t\33[6n")}'
stty "$saved_settings"
) Then, you can use that as expand -t "$tabs" using @Kusalananda's solution. | {
"source": [
"https://unix.stackexchange.com/questions/389256",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/246707/"
]
} |
389,263 | Is it possible to corrupt a RHEL7 system (specifically an XFS filesystem) by completely filling it with data? I can imagine bad things happening if disk writes fill to complete but I would also hope that there are protections in place. Would it matter whether / fills up or another partition? | The TAB character is a control character which when sent to a terminal¹ makes the terminal's cursor move to the next tab-stop. By default, in most terminals, the tab stops are 8 columns apart, but that's configurable. You can also have tab stops at irregular intervals: $ tabs 3 9 11; printf '\tx\ty\tz\n'
x y z Only the terminal knows how many columns to the right a TAB will move the cursor. You can get that information by querying the cursor position from the terminal before and after the tab has been sent. If you want to make that calculation by hand for a given line and assuming that line is printed at the first column of the screen, you'll need to: know where the tab-stops are² know the display width of every character know the width of the screen decide whether you want to handle other control characters like \r (which moves the cursor to the first column) or \b that moves the cursor back...) It can be simplified if you assume the tab stops are every 8 columns, the line fits in the screen and there are no other control characters or characters (or non-characters) that your terminal cannot display properly. With GNU wc , if the line is stored in $line : width=$(printf %s "$line" | wc -L)
width_without_tabs=$(printf %s "$line" | tr -d '\t' | wc -L)
width_of_tabs=$((width - width_without_tabs)) wc -L gives the width of the widest line in its input. It does that by using wcwidth(3) to determine the width of characters and assuming the tab stops are every 8 columns. For non-GNU systems, and with the same assumptions, see @Kusalananda's approach . It's even better as it lets you specify the tab stops but unfortunately currently doesn't work with GNU expand (at least) when the input contains multi-byte characters or 0-width (like combining characters) or double-width characters. ¹ note though that if you do stty tab3 , the tty device line discipline will take over the tab processing (convert TAB to spaces based on its own idea of where the cursor might be before sending to the terminal) and implement tab stops every 8 columns. Testing on Linux, it seems to handle properly CR, LF and BS characters as well as multibyte UTF-8 ones (provided iutf8 is also on) but that's about it. It assumes all other non-control characters (including zero-width, double-width characters) have a width of 1, it (obviously) doesn't handle escape sequences, doesn't wrap properly... That's probably intended for terminals that can't do tab processing. In any case, the tty line discipline does need to know where the cursor is and uses those heuristics above, because when using the icanon line editor (like when you enter text for applications like cat that don't implement their own line editor), when you press Tab Backspace , the line discipline needs to know how many BS characters to send to erase that Tab character for display. If you change where the tab stops are (like with tabs 12 ), you'll notice that Tabs are not erased properly. Same if you enter double-width characters before pressing Tab Backspace . ² For that, you could send tab characters and query the cursor position after each one. Something like: tabs=$(
saved_settings=$(stty -g)
stty -icanon min 1 time 0 -echo
gawk -vRS=R -F';' -vORS= < /dev/tty '
function out(s) {print s > "/dev/tty"; fflush("/dev/tty")}
BEGIN{out("\r\t\33[6n")}
$NF <= prev {out("\r"); exit}
{print sep ($NF - 1); sep=","; prev = $NF; out("\t\33[6n")}'
stty "$saved_settings"
) Then, you can use that as expand -t "$tabs" using @Kusalananda's solution. | {
"source": [
"https://unix.stackexchange.com/questions/389263",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248845/"
]
} |
389,383 | Here's my source: #!/bin/bash
echo "Running script to free general cached memory!"
echo "";
echo "Script must be run as root!";
echo "";
echo "Clearing swap!";
swapoff -a && swapon -a;
echo "";
echo "Clear inodes and page file!";
echo 1 > /proc/sys/vm/drop_caches;
echo ""; It clears caches and stuff, and it echoes that it needs to be run as root in the terminal. I basically just want the script to cease running if it detects it's not being executed as root. Example: "Running script to free general cached memory!"
"Warning: script must be run as root or with elevated privileges!"
"Error: script not running as root or with sudo! Exiting..." If run with elevated privileges, it just runs as normal. Any ideas? Thanks! | #!/bin/sh
if [ "$(id -u)" -ne 0 ]; then
echo 'This script must be run by root' >&2
exit 1
fi
cat <<HEADER
Host: $(hostname)
Time at start: $(date)
Running cache maintenance...
HEADER
swapoff -a && swapon -a
echo 1 >/proc/sys/vm/drop_caches
cat <<FOOTER
Cache maintenance done.
Time at end: $(date)
FOOTER The root user has UID 0 (regardless of the name of the "root" account). If the effective UID returned by id -u is not zero, the user is not executing the script with root privileges. Use id -ru to test against the real ID (the UID of the user invoking the script). Don't use $EUID in the script as this may be modified by an unprivileged user: $ bash -c 'echo $EUID'
1000
$ EUID=0 bash -c 'echo $EUID'
0 If a user did this, it would obviously not lead to privilege escalation, but may lead to commands in the script not being able to do what they are supposed to do and files being created with the wrong owner etc. | {
"source": [
"https://unix.stackexchange.com/questions/389383",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248942/"
]
} |
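A common variation, if the script should elevate itself instead of aborting, is to re-execute under sudo; this is a hedged sketch of an alternative behaviour, not part of the answer above:
if [ "$(id -u)" -ne 0 ]; then
    exec sudo -- "$0" "$@"   # replace the current process with a sudo-elevated copy of this script, keeping its arguments
fi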
389,520 | I've enabled compression (mounted with compress=lzo ) for my btrfs partition and used it for a while. I'm curious about how much benefit the compression brought me and am interested in the saved space value (sum of all file sizes) - (actual used space) . Is there any straightforward way to get this value, or would I have to write a script that sums up e.g. df output and compres it to btrfs filesystem df output? | In Debian/Ubuntu: apt install btrfs-compsize
compsize /mnt/btrfs-partition In Fedora: dnf install compsize
compsize /mnt/btrfs-partition output is like this: Processed 123574 files, 1399139 regular extents (1399139 refs), 69614 inline.
Type Perc Disk Usage Uncompressed Referenced
TOTAL 73% 211G 289G 289G
none 100% 174G 174G 174G
lzo 32% 37G 115G 115G It requires root ( sudo ) to work at all (otherwise SEARCH_V2: Operation not permitted ). It can be used on any directory (totalling the subtree), not just the whole filesystem from the mountpoint. On a system with zstd, but some old files still compressed with lzo, there will be rows for each of them. (The Perc column is the disk_size / uncompressed_size for that row, not how much of the total is compressed that way. Smaller is better.) | {
"source": [
"https://unix.stackexchange.com/questions/389520",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9266/"
]
} |
389,717 | Makefile my_test:
ifdef $(toto)
@echo 'toto is defined'
else
@echo 'no toto around'
endif Expected behavior $ make my_test
no toto around
$ make my_test toto
toto is defined Current behavior $ make my_test
no toto around
$ make my_test toto
no toto around
make: *** No rule to make target `toto'. Stop. When I run make my_test I get the else text as expected no toto around . However make my_test toto
no toto around
make: *** No rule to make target `toto'. Stop. Makefile version $ make -v
GNU Make 3.81 SLE version $ cat /etc/*release
VERSION_ID="11.4"
PRETTY_NAME="SUSE Linux Enterprise Server 11 SP4" PS The point is to make make my_test verbose if toto , if toto not given then the command will run silently | You need to remove the dollar around toto, and also pass toto from the command line differently Command line make toto=1 my_test Makefile my_test:
ifdef toto
@echo 'toto is defined'
else
@echo 'no toto around'
endif | {
"source": [
"https://unix.stackexchange.com/questions/389717",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142331/"
]
} |
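One detail worth noting about the answer above: GNU make's ifdef treats a variable with an empty value as undefined, so only a non-empty assignment flips the branch. A quick way to see all three cases:
make my_test            # prints: no toto around
make toto=1 my_test     # prints: toto is defined
make toto= my_test      # still prints: no toto around, because the empty value counts as undefined for ifdef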
389,731 | I have output, like the following, from postgres database
datname | size
----
template1 | 6314 kB
template0 | 6201 kB
postgres | 7938 kB
misago | 6370 kB
(4 rows) I want only these 6314, 6201, and 7938 values from output.
How can I do this? awk, grep or sed are preferable. | You need to remove the dollar around toto, and also pass toto from the command line differently Command line make toto=1 my_test Makefile my_test:
ifdef toto
@echo 'toto is defined'
else
@echo 'no toto around'
endif | {
"source": [
"https://unix.stackexchange.com/questions/389731",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231557/"
]
} |
389,879 | I would like to set up wpa_supplicant and openvpn to run as non-root user, like the recommended setup for wireshark . I can't find any documentation for what +eip in this example means: sudo setcap cap_net_raw,cap_net_admin,cap_dac_override+eip /usr/bin/dumpcap | The way capabilities work in Linux is documented in man 7 capabilities . Processes' capabilities in the effective set are against which permission checks are done. File capabilities are used during an execv call (which happens when you want to run another program 1 ) to calculate the new capability sets for the process. Files have two sets for capabilities, permitted and inheritable and effective bit . Processes have three capability sets: effective , permitted and inheritable . There is also a bounding set, which limits which capabilities may be added later to a process' inherited set and affects how capabilities are calculated during a call to execv . Capabilities can only be dropped from the bounding set , not added. Permissions checks for a process are checked against the process' effective set . A process can raise its capabilities from the permitted to the effective set (using capget and capset syscalls, the recommended APIs are respectively cap_get_proc and cap_set_proc ). Inheritable and bounding sets and file capabilities come into play during an execv syscall. During execv , new effective and permitted sets are calculated and the inherited and bounding sets stay unchanged. The algorithm is described in the capabilities man page: P'(permitted) = (P(inheritable) & F(inheritable)) |
(F(permitted) & cap_bset)
P'(effective) = F(effective) ? P'(permitted) : 0
P'(inheritable) = P(inheritable) [i.e., unchanged] Where P is the old capability set, P' is the capability set after execv and F is the file capability set. If a capability is in both processes' inheritable set and the file's inheritable set (intersection/logical AND), it is added to the permitted set . The file's permitted set is added (union/logical OR) to it (if it is within the bounding set). If the effective bit in file capabilities is set, all permitted capabilities are raised to effective after execv . Capabilities in kernel are actually set for threads, but regarding file capabilities this distinction is usually relevant only if the process alters its own capabilities. In your example capabilities cap_net_raw , cap_net_admin and cap_dac_override are added to the inherited and permitted sets and the effective bit is set. When your binary is executed, the process will have those capabilities in the effective and permitted sets if they are not limited by a bounding set. [1] For the fork syscall, all the capabilities and the bounding set are copied from parent process. Changes in uid also have their own semantics for how capabilities are set in the effective and permitted sets. | {
"source": [
"https://unix.stackexchange.com/questions/389879",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/249275/"
]
} |
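To relate the theory above to what is actually set on a system, the libcap userspace tools (assumed to be installed) can show file and process capabilities; the dumpcap path is the one from the question:
getcap /usr/bin/dumpcap            # show the file's capability sets and flags
grep Cap /proc/self/status         # current process' CapInh/CapPrm/CapEff/CapBnd values as hex masks
capsh --decode=0000000000003000    # decode a hex mask into names; 0x3000 is cap_net_admin,cap_net_raw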
389,881 | Whenever I open a new instance of a terminal, the history is empty. Why is that? Do I need to set something up? In bash there's no need for this, though. | Bash and zsh have different defaults. Zsh doesn't save the history to a file by default. When you run zsh without a configuration file, it displays a configuration interface. In this configuration interface, select (1) Configure settings for history, i.e. command lines remembered
and saved by the shell. (Recommended.) then review the proposed settings and select # (0) Remember edits and return to main menu (does not save file yet) Repeat for the other submenus for (2) completion, (3) keybindings and (4) options, then select (0) Exit, saving the new settings. They will take effect immediately. from the main menu. The recommended history-related settings are HISTFILE=~/.histfile
HISTSIZE=1000
SAVEHIST=1000
setopt appendhistory I would use a different name for the history file, to indicate it's zsh's history file. And 1000 lines can be increased on a modern system. HISTFILE=~/.zsh_history
HISTSIZE=10000
SAVEHIST=10000
setopt appendhistory These lines go into ~/.zshrc , by the way. | {
"source": [
"https://unix.stackexchange.com/questions/389881",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
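Optionally, one of the following can be added to ~/.zshrc as well; they are alternatives to each other and go beyond the defaults quoted above:
setopt INC_APPEND_HISTORY   # write each command to $HISTFILE as soon as it is entered
setopt SHARE_HISTORY        # or instead: write immediately and also import history from other running zsh sessions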
389,969 | I've compiled, linked, and created a program in C++; now I have foobar.out. I want to be able to put it into the bin directory and use it like system-wide commands, e.g. ssh, echo, bash, cd... How can I achieve that? | There are two ways of allowing you to run the binary without specifying its path (not including creating aliases or shell functions to execute it with an absolute path for you): Copy it to a directory that is in your $PATH . Add the directory where it is to your $PATH . To copy the file to a directory in your path, for example /usr/local/bin ( where locally managed software should go ), you must have superuser privileges, which usually means using sudo : $ sudo cp -i mybinary /usr/local/bin
"source": [
"https://unix.stackexchange.com/questions/389969",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/249332/"
]
} |
390,135 | I was thinking that it might be advantageous to have a user with permissions higher than the root user. You see, I would like to keep all of the activities and almost all existing root user privileges exactly as they are now. However, I would like the ability to deny privileges to root on an extremely isolated case by case basis. One of the advantages of this would allow me to prevent certain unwanted files from being installed during updates. This is just an example of one possible advantage. Because apt-get updates are run by root or with sudo privileges, apt-get has the ability to replace certain unwanted files during updates. If I could deny these privileges to these individual particular files, I could set them as a simlink to /dev/null or possibly have a blank placeholder file that could have permissions that would deny the file from being replaced during the update. Additionally, I can't help but be reminded about a line which was said in an interview with one of the Ubuntu creators when the guy said something about how users better trust "us" (referring to the Ubuntu devs) "because we have root" which was a reference to how system updates are performed with root permission. Simply altering the installation procedure to say work around this problem is absolutely not what I am interested here. Now that my mind has a taste for the idea of having the power to deny root access, I would like to figure out a way to make this happen just for the sake of doing it. I just thought about this and have not spent any time on the idea so far and I'm fairly confident that this could be figured out. However, I am curious to know if this has already been done or if this is possibly not a new idea or concept. Basically, it seems like there should be some way to have a super super-user which would have permission beyond that of the system by only one degree. Note: Although I feel the accepted answer fits the criteria the most, I really like the answer by @CR. also. I would like to create an actual user higher on the tree (me) but I guess I'll just have to sit down one day when I have the time to figure it out. Additionally, I'm not trying to pick on Ubuntu here; I wouldn't use it as my main distro if I felt negative about it. | The "user" you want is called LSM: Linux security module. The most well known are SELinux and AppArmor. By this you can prevent certain binaries (and their child processes) from doing certain stuff (even if their UID is root ). But you may allow these operations to getty and its child processes so that you can do it manually. | {
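As a very rough AppArmor sketch of the idea (file names invented, not a complete or tested profile), a deny rule refuses the write even when the confined program runs as root:
# /etc/apparmor.d/usr.bin.apt-get -- fragment for illustration only
/usr/bin/apt-get {
  #include <abstractions/base>
  deny /etc/myprotected.conf w,   # apt-get, even as root, cannot replace this file
}
A real profile needs many more rules before the confined program still works; see the AppArmor documentation for the full syntax.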
"source": [
"https://unix.stackexchange.com/questions/390135",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70847/"
]
} |
390,307 | On a fresh installation of Debian 9 Stretch on a desktop PC when booting the ...
Failed to start Raise network interfaces
... error occurs. The (cable) LAN connection works but the (USB) WiFi is not working properly (detecting the WiFi networks but failing to connect). Previously, on the same hardware, Debian 8 Jessie was installed and worked fine without any errors. The issues seem to be connected to the recent predictable network interface names changes. Found users A , B , C , D , and E had similar symptoms. However, they had upgraded Ubuntu systems (without a clean install). Additionally, the proposed solutions suggest disabling the assignment of fixed/predictable/unique names . I would prefer to keep the new naming scheme/standard, and eventually to find and eliminate the reason why it is not working properly. Found also users F and G with the same problem -- without a solution. I would be very thankful for any hint. Also, I'm happy to answer your questions if you need more in-depth details. Below you find some detailed system output. $ sudo systemctl status networking.service
● networking.service - Raise network interfaces
Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2017-09-04 17:21:42 IST; 1h 27min ago
Docs: man:interfaces(5)
Process: 534 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=1/FAILURE)
Process: 444 ExecStartPre=/bin/sh -c [ "$CONFIGURE_INTERFACES" != "no" ] && [ -n "$(ifquery --read-environment --list --exclude=lo)" ] && udevadm settle (code=exited, status=0/SUCCESS)
Main PID: 534 (code=exited, status=1/FAILURE)
Sep 04 17:21:42 XXX ifup[534]: than a configuration issue please read the section on submitting
Sep 04 17:21:42 XXX ifup[534]: bugs on either our web page at www.isc.org or in the README file
Sep 04 17:21:42 XXX ifup[534]: before submitting a bug. These pages explain the proper
Sep 04 17:21:42 XXX ifup[534]: process and the information we find helpful for debugging..
Sep 04 17:21:42 XXX ifup[534]: exiting.
Sep 04 17:21:42 XXX ifup[534]: ifup: failed to bring up eth0
Sep 04 17:21:42 XXX systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
Sep 04 17:21:42 XXX systemd[1]: Failed to start Raise network interfaces.
Sep 04 17:21:42 XXX systemd[1]: networking.service: Unit entered failed state.
Sep 04 17:21:42 XXX systemd[1]: networking.service: Failed with result 'exit-code'.
$ cat /etc/network/interfaces.d/setup
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp EDIT2start: $ sudo ifconfig
[sudo] password for XXX:
enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.178.31 netmask 255.255.255.0 broadcast 192.168.178.255
inet6 xxxx::xxx:xxxx:xxxx:xxxx prefixlen 64 scopeid 0x20<link>
ether xx:xx:xx:xx:xx:xx txqueuelen 1000 (Ethernet)
RX packets 765 bytes 523923 (511.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 803 bytes 101736 (99.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 17
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1 (Local Loopback)
RX packets 50 bytes 3720 (3.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 50 bytes 3720 (3.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
wlxf4f26d1b7521: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether xx:xx:xx:xx:xx:xx txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 EDIT2end. $ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
3: wlxf4f26d1b7521: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DORMANT group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff EDITstart: $ lsusb
...
Bus 001 Device 004: ID 0cf3:9271 Atheros Communications, Inc. AR9271 802.11n
...
$ sudo cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback EDITend. EDIT3start: $ sudo systemctl status networking.service
● networking.service - Raise network interfaces
Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
Active: active (exited) since Tue 2017-09-05 10:29:16 IST; 44min ago
Docs: man:interfaces(5)
Process: 565 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=0/SUCCESS)
Process: 438 ExecStartPre=/bin/sh -c [ "$CONFIGURE_INTERFACES" != "no" ] && [ -n "$(ifquery --read-environment --list --exclude=lo)" ] && udevadm settle (code=exited, status=0/SUCCESS)
Main PID: 565 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 4915)
CGroup: /system.slice/networking.service
Sep 05 10:26:56 sdd9 systemd[1]: Starting Raise network interfaces...
Sep 05 10:26:56 sdd9 ifup[565]: ifup: waiting for lock on /run/network/ifstate.enp3s0
Sep 05 10:29:16 sdd9 systemd[1]: Started Raise network interfaces. EDIT3end. | Remove the /etc/network/interfaces.d/setup file then edit your /etc/network/interfaces as follows : auto lo
iface lo inet loopback (friendly edit: GAD3R suggested there should be nothing else in the file. It appears that entries can also be ignored if a line starts with # followed by a space) Save and reboot The man interfaces : INCLUDING OTHER FILES Lines beginning with "source" are used to include stanzas from other
files, so configuration can be split into many files. The word "source"
is followed by the path of file to be sourced. Shell wildcards can be
used. (See wordexp(3) for details.) In your case you are using the /etc/network/interfaces.d/setup to configure the network instead of /etc/network/interfaces Lines beginning with "allow-" are used to identify interfaces that
should be brought up automatically by various subsystems. This may be
done using a command such as "ifup --allow=hotplug eth0 eth1", which
will only bring up eth0 or eth1 if it is listed in an "allow-hotplug"
line. Note that "allow-auto" and "auto" are synonyms. (Interfaces
marked "allow-hotplug" are brought up when udev detects them. This can
either be during boot if the interface is already present, or at a
later time, for example when plugging in a USB network card. Please
note that this does not have anything to do with detecting a network
cable being plugged in.) | {
"source": [
"https://unix.stackexchange.com/questions/390307",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58310/"
]
} |
390,518 | Within the output of top, there are two fields, marked "buff/cache" and "avail Mem" in the memory and swap usage lines: What do these two fields mean? I've tried Googling them, but the results only bring up generic articles on top, and they don't explain what these fields signify. | top ’s manpage doesn’t describe the fields, but free ’s does: buffers Memory used by kernel buffers ( Buffers in /proc/meminfo ) cache Memory used by the page cache and slabs ( Cached and SReclaimable in /proc/meminfo ) buff/cache Sum of buffers and cache available Estimation of how much memory is available for starting new
applications, without swapping. Unlike the data provided by
the cache or free fields, this field takes into account page
cache and also that not all reclaimable memory slabs will be
reclaimed due to items being in use ( MemAvailable in /proc/meminfo , available on kernels 3.14, emulated on kernels
2.6.27+, otherwise the same as free) Basically, “buff/cache” counts memory used for data that’s on disk or should end up there soon, and as a result is potentially usable (the corresponding memory can be made available immediately, if it hasn’t been modified since it was read, or given enough time, if it has); “available” measures the amount of memory which can be allocated and used without causing more swapping (see How can I get the amount of available memory portably across distributions? for a lot more detail on that). | {
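To look at the underlying numbers yourself (the -w flag needs a reasonably recent procps-ng):
free -hw                                                              # buffers and cache as separate, human-readable columns
grep -E '^(Buffers|Cached|SReclaimable|MemAvailable)' /proc/meminfo   # the raw fields that top and free read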
"source": [
"https://unix.stackexchange.com/questions/390518",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
390,574 | If I watch a video with mpv, it closes after the video ends. How can I configure it such that it doesn't close, for example just freezes the last image of the movie, so that I can seek back and forth without restarting the video. | You'd use mpv --keep-open=yes , which you can find in the mpv manpage . It allows three values: no (close/advance to next at end of video, the default), yes (advance if there is a next video, otherwise pause), and always (always pause at end of video, even if there is a next video). You should also be able to put keep-open=yes in your ~/.config/mpv/mpv.conf or ~/.mpv/config (whichever you're using) | {
"source": [
"https://unix.stackexchange.com/questions/390574",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5289/"
]
} |
391,223 | I use Cygwin on my laptop (DOS). I have a collection of scripts from my colleagues, and my own. I am not an IT person, not knowledgeable in Unix. I am following my colleagues' syntax and able to manage a few simple things. The scripts worked well on my old laptop. I just changed laptop and installed Cygwin. When I run my scripts, they do not work. Here is one example of the error message I get: line 1: $':\r': command not found
line 5: syntax error near unexpected token `$'\r''
line 5: `fi Here are the top 5 lines of my script :
iter=1
if [ -f iter.txt ]
then rm ./iter.txt
fi Can someone please explain how I can get around this problem? | You have Windows-style line endings. The no-op command : is instead read as :<carriage return> , displayed as :\r or more fully as $':\r' . Run dos2unix scriptname and you should be fine. If you don't have dos2unix , the following should work almost anywhere (and I tested on MobaXterm on Windows): vi -b filename Then in vi , type: :%s/\r$//
:x You're good to go. In vim , which is what you are using on Cygwin for vi , there are multiple ways of doing this. Another one involves the fileformat setting, which can take the values dos or unix . Either explicitly change it after loading the file with set fileformat=unix or explicitly force the file format when writing out the file with :w +fileformat=unix For more on this, see the many questions and answers here covering this subject, including: Remove ^M character from log files Why is vim creating files with DOS line endings? How to add a carriage return before every newline? | {
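If neither dos2unix nor vim is at hand, plain sed or tr will strip the carriage returns just as well (script name assumed):
sed -i 's/\r$//' yourscript.sh                                      # GNU sed, edits the file in place
tr -d '\r' < yourscript.sh > fixed.sh && mv fixed.sh yourscript.sh  # POSIX-friendly alternative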
"source": [
"https://unix.stackexchange.com/questions/391223",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250338/"
]
} |
391,344 | When I run gpg --with-fingerprints --with-colons keyfile.key , I get a machine parsable output on stdout containing the key fingerprint for the key inside the keyfile (which is exactly what I want), plus the following error on stderr: gpg: WARNING: no command supplied. Trying to guess what you mean ... So GnuPG is guessing the command correctly, but for my life I can't figure out what command it is guessing. I have tried almost all of the commands listed on the man page. I'm using GnuPG 2.2. Does anybody know the correct command to read a key file and show information about the key? Edit : Ideally the mechanism would be able to read the keyfile from stdin, such as cat keyfile.key | gpg --some-command I should have mentioned this earlier but so many commands for gpg work with stdin I didn't even consider it a relevant constraint. | The good folks at the [email protected] mailing list had the answer: For versions >= 2.1.23: cat keyfile.key | gpg --with-colons --import-options show-only --import For versions >= 2.1.13 but < 2.1.23: cat keyfile.key | gpg --with-colons --import-options import-show --dry-run --import | {
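If all you want from that machine-readable output is the fingerprint itself, field 10 of the fpr record holds it, so something along these lines should do (sketch):
gpg --with-colons --import-options show-only --import < keyfile.key |
  awk -F: '/^fpr:/ { print $10; exit }'    # first fpr record = primary key fingerprint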
"source": [
"https://unix.stackexchange.com/questions/391344",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128786/"
]
} |
391,456 | When you want to redirect both stdout and stderr to the same file, you can do it by using command 1>file.txt 2>&1 , or command &>file.txt .
But why is the behavior of command 1>file.txt 2>file.txt different from the above two commands? The following is a verification command. $ cat redirect.sh
#!/bin/bash
{ echo -e "output\noutput" && echo -e "error" 1>&2; } 1>file.txt 2>&1
{ echo -e "output\noutput" && echo -e "error" 1>&2; } 1>file1.txt 2>file1.txt
{ echo -e "error" 1>&2 && echo -e "output\noutput"; } 1>file2.txt 2>file2.txt
{ echo -e "output" && echo -e "error\nerror" 1>&2; } 1>file3.txt 2>file3.txt
{ echo -e "error\nerror" 1>&2 && echo -e "output"; } 1>file4.txt 2>file4.txt
$ ./redirect.sh
$ echo "---file.txt---"; cat file.txt;\
echo "---file1.txt---"; cat file1.txt; \
echo "---file2.txt---"; cat file2.txt; \
echo "---file3.txt---"; cat file3.txt; \
echo "---file4.txt----"; cat file4.txt;
---file.txt---
output
output
error
---file1.txt---
error
output
---file2.txt---
output
output
---file3.txt---
error
error
---file4.txt----
output
rror As far as the results are seen, it looks like the second echo string overwrites the first echo string when you run command 1>file.txt 2>file.txt , but I do not know why it does. (Is there a reference somewhere?) | You need to know two things: An open file descriptor known to the application-mode side of a process references an internal kernel object known as a file description , which is an instance of an open file. There can be multiple file descriptions per file, and multiple file descriptors sharing a file description. The current file position is an attribute of a file description . So if multiple file descriptors map to a single file description, they all share the same current file position, and a change to the file position enacted using one such file descriptor affects all of the other such file descriptors. Such changes are enacted by processes calling the read() / readv() , write() / writev() , lseek() , and suchlike system calls. The echo command calls write() / writev() of course. So what happens is this: command 1>file.txt 2>&1 only creates one file description, because the shell only opens a file once. The shell makes both the standard output and standard error file descriptors map to that single file description. It duplicates standard output onto standard error. So a write via either file descriptor will move the shared current file position: each write goes after the previous write on the common file description. And as you can see the results of the echo commands do not overwrite one another. command 1>file.txt 2>file.txt creates two file descriptions, because the shell opens the same file twice, in response to the two explicit redirections. The standard output and standard error file descriptors map to two different file descriptions, which then in turn map to the same single file. The two file descriptions have entirely independent current file positions, and each write goes immediately after the previous write on the same file description. And as you can see the result is that what is written via one can overwrite what is written via the other, in various different ways according to what order you execute the writes in. Further reading What is an open file description? What exactly is a file offset in lsof output? | {
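One way to watch this difference happen is to trace the shell's system calls (strace assumed; the exact calls vary a little between shells and libc versions):
strace -f -e trace=openat,dup2 sh -c ': >both.txt 2>&1'          # one openat() of both.txt, then dup2(1, 2): one file description
strace -f -e trace=openat,dup2 sh -c ': >twice.txt 2>twice.txt'  # two independent openat() calls: two file descriptions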
"source": [
"https://unix.stackexchange.com/questions/391456",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/219904/"
]
} |
391,629 | I couldn't find the answer to this anywhere. How can I know who renamed a directory? ls -al shows only the name of the user who created that directory. | That is not information that is normally recorded, unless you made special arrangements to that effect (like via some audit system). The service through which the user has renamed the directory (like over FTP, SFTP, WebDAV, samba...) may have logs that can help. You can try and check those logs, the last , lastcomm , audit , authentication logs around the time the folder was renamed. If you're the administrator, you can look at the history file of the shells of the users that had the permissions to rename it (if the directory was renamed from /A/dir to /B/newdir , it's whoever had write access to both /A and /B (assuming /A didn't have the t bit in its permissions and /A/dir and /B are on the same filesystem)). | {
"source": [
"https://unix.stackexchange.com/questions/391629",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250647/"
]
} |
392,050 | I know that I can run a command with an environment variable like this: FOO=bar mycommand I know that I can run commands in a subshell like this: (firstcommand && secondcommand) But can I somehow combine those two? FOO=bar (firstcommand && secondcommand) gives: sh: syntax error: unexpected "(" at least in busybox shell (ash). Edit: Kusalananda suggested FOO=bar sh -c 'first && second' which is indeed a solution. However, I am also interested in alternative answers because I like the subshell syntax because it doesn't require fiddling around with escaping of quotes. | One way: FOO=bar sh -c 'first && second' This sets the FOO environment variable for the single sh command. To set multiple environment variables: FOO=bar BAZ=quux sh -c 'first && second' Another way to do this is to create the variable and export it inside a subshell. Doing the export inside the subshell ensures that the outer shell does not get the variable in its environment: ( export FOO=bar; first && second ) Summarizing the (now deleted) comments: The export is needed to create an environment variable (as opposed to a shell variable). The thing with environment variables is that they get inherited by child processes. If first and second are external utilities (or scripts) that look at their environment, they would not see the FOO variable without the export . | {
"source": [
"https://unix.stackexchange.com/questions/392050",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30206/"
]
} |
392,284 | My machine has an SSD, where I installed the system and an HDD, which I use as a storage for large and/or infrequently used files. Both are encrypted, but I chose to use the same passphrase for them. SSD is mounted at / and HDD at /usr/hdd (individual users each have a directory on it and can symlink as they like from home directory). When the system is booted, it immediately asks for passphrase for the SSD, and just a couple seconds later for the one for HDD (it is auto-mounted). Given that both passphrases are the same, is there a way to configure the system to ask just once? | Debian based distributions: Debian and Ubuntu ship a password caching script decrypt_keyctl with cryptsetup package. decrypt_keyctl script provides the same password to multiple encrypted LUKS targets, saving you from typing it multiple times. It can be enabled in crypttab with keyscript=decrypt_keyctl option. The same password is used for targets which have the same identifier in keyfile field . On boot password for each identifier is asked once. An example crypttab : <target> <source> <keyfile> <options>
part1_crypt /dev/disk/... crypt_disks luks,keyscript=decrypt_keyctl
part2_crypt /dev/disk/... crypt_disks luks,keyscript=decrypt_keyctl The decrypt_keyctl script depends on the keyutils package (which is only suggested, and therefore not necessarily installed). After you've updated your cryptab , you will also have to update initramfs to apply the changes. Use update-initramfs -u . Full readme for decrypt_keyctl is located in /usr/share/doc/cryptsetup/README.keyctl Unfortunately, this currently doesn't work on Debian systems using systemd init due to a bug (other init systems should be unaffected). With this bug you're asked a second time for the password by systemd, making it impossible to unlock remotely via ssh. Debian crypttab man page suggests as a workaround to use initramfs option to force processing in initramfs stage of boot. So to circumvent this bug an example for /etc/crypttab in Debian <target> <source> <keyfile> <options>
part1_crypt /dev/disk/... crypt_disks luks,initramfs,keyscript=decrypt_keyctl
part2_crypt /dev/disk/... crypt_disks luks,initramfs,keyscript=decrypt_keyctl Distributions which do not provide the decrypt_keyctl script: If decrypt_keyctl isn't provided by your distribution, the device can be unlocked using a keyfile stored in the encrypted root file system. This works when the root file system can be unlocked and mounted before any other encrypted devices. LUKS supports multiple key slots. This allows you to alternatively unlock the device using a password if the key file is unavailable/lost. Generate the key with random data and set its permissions to owner readable only to avoid leaking it. Note that the key file needs to be on the root partition which is unlocked first. dd if=/dev/urandom of=<path to key file> bs=1024 count=1
chmod u=rw,g=,o= <path to key file> Add the key to your LUKS device cryptsetup luksAddKey <path to encrypted device> <path to key file> Configure crypttab to use the key file. First line should be the root device, since devices are unlocked in same order as listed in crypttab . Use absolute paths for key files. <target> <source> <keyfile> <options>
root_crypt /dev/disk/... none luks
part1_crypt /dev/disk/... <path to key file> luks | {
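You can check afterwards that the key file really landed in its own key slot (device path as in the examples above):
sudo cryptsetup luksDump /dev/disk/... | grep -i 'key slot'   # LUKS1 lists each slot as ENABLED or DISABLED; LUKS2 prints a Keyslots: section instead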
"source": [
"https://unix.stackexchange.com/questions/392284",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
392,393 | When I move a single file with spaces in the filename it works like this: $ mv "file with spaces.txt" "new_place/file with spaces.txt" Now I have a list of files which may contain spaces and I want to move them. For example: $ echo "file with spaces.txt" > file_list.txt
$ for file in $(cat file_list.txt); do mv "$file" "new_place/$file"; done;
mv: cannot stat 'file': No such file or directory
mv: cannot stat 'with': No such file or directory
mv: cannot stat 'spaces.txt': No such file or directory Why does the first example work, but the second one does not? How can I make it work? | Never, ever use for foo in $(cat bar) . This is a classic mistake, commonly known as bash pitfall number 1 . You should instead use: while IFS= read -r file; do mv -- "$file" "new_place/$file"; done < file_list.txt When you run the for loop, bash will apply word splitting to what it reads, meaning that a strange blue cloud will be read as a , strange , blue and cloud : $ cat files
a strange blue cloud.txt
$ for file in $(cat files); do echo "$file"; done
a
strange
blue
cloud.txt Compare to: $ while IFS= read -r file; do echo "$file"; done < files
a strange blue cloud.txt Or even, if you insist on the UUoC : $ cat files | while IFS= read -r file; do echo "$file"; done
a strange blue cloud.txt So, the while loop will read over its input and use the read to assign each line to a variable. The IFS= sets the input field separator to NULL * , and the -r option of read stops it from interpreting backslash escapes (so that \t is treated as backslash + t and not as a tab). The -- after the mv means "treat everything after the -- as an argument and not an option", which lets you deal with file names starting with - correctly. * This isn't necessary here, strictly speaking, the only benefit in this scenario is that it keeps read from removing any leading or trailing whitespace, but it is a good habit to get into for when you need to deal with filenames containing newline characters, or in general, when you need to be able to deal with arbitrary file names. | {
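If the list of names is generated on the fly rather than read from a text file, a NUL-delimited stream copes even with newlines in file names (bash and GNU find assumed):
find . -maxdepth 1 -type f -name '*.txt' -print0 |
  while IFS= read -r -d '' file; do
    mv -- "$file" new_place/
  done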
"source": [
"https://unix.stackexchange.com/questions/392393",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/251281/"
]
} |
392,512 | I'm an OpenBSD user. In the OpenBSD FAQ it says: OpenBSD is a complete system, intended to be kept in sync. It is not a kernel plus utilities that can be upgraded separately from each other. When you upgrade a system, you do so in one go; the kernel and the base system is replaced. Then you go and update your 3rd party packages . If compiling from source , you recompile the kernel and boot it. Then you rebuild the base system, and then the packages that you've got installed. If more than a couple of weeks/months have past since you last rebuilt everything, you first install a snapshot and rebuild from there (if you're following the most current CVS branch). Having an out of sync kernel, base system and/or 3rd party packages is a potential source of issues and more or less disqualifies you from getting any serious help from the official mailing lists. I'm quite okay with this. In fact, this is one of the reasons I use OpenBSD. It makes the system a consistent unit and it makes it easy for me to form a mental overview of it. What's it like on Linux? Most Linuxes that I'm aware of don't have a "base system" in the same sense as the BSDs, but rather a collection of packages assembled by the distribution provider. Further software is then added to this by a local administrator in such a way that the boundary between what was there from the start and what was added later is, at best, blurry. Does Linux (in general) not have a strong kernel to userspace coupling? The kernel is updated, as far as I know, like any other software package, and it confuses me slightly that this is at all possible. Add to this the fact that some even compile custom kernels (which is discouraged on OpenBSD), and have a multitude of various kernel versions listed in their boot menus. Who or what guarantees that the various subsystems of a Linux system are able to cooperate with each other even though they are updated independently from each other? The reason I'm asking is because another user on this site asked me whether replacing the kernel in his Linux system with a newer version "would be doable". Coming from the OpenBSD side of things, I couldn't say that yes, this would be guaranteed to not break the system. I use "Linux" above as a shorthand for "Linux distribution", kernel + utilities. | Linus Torvalds has a very strong opinion against kernel changes resulting in userspace regressions (see the question " The Linux kernel: breaking user space " for details). Interface between userspace and kernel is provided by system calls. Newer kernels can have more system calls, and changes in exiting ones when those changes do not break existing applications. When a system call interface has a flag parameter, new kernels often expose the new functionality with a new bit flag. This way kernel maintains backwards compatibility to old applications. When it has not been possible to alter existing interface without breaking userspace, additional system calls have been added that provide the extended functionality. This is why there are three versions of dup and two versions of umount system call. The policy of having a stable userspace is the reason why kernel updates rarely cause issues in userspace applications and you do not generally expect issues after upgrading the kernel. However same API stability is not guaranteed for kernel interfaces and other implementation details . Sysfs (on /sys ) and procsfs (on /proc/ ) expose kernel implementation details on runtime configuration, hardware, network, processes etc. 
which are used by low-level applications. It is possible for those interfaces to change in an incompatible way between kernel versions if there is a good reason to. Changes still try to minimize incompatibilities if possible and there are rules for how applications can use the interfaces in a way least likely to cause issues. The impact is also limited because non-low-level applications shouldn't be using these interfaces. @PeterCordes pointed out that if a change in procfs or sysfs breaks an application used by your distribution's init scripts, you could have a problem. This depends somewhat on how your distribution updates the kernel (long term support or mainline) and even then the issues are relatively rare as distributions usually ship the updated tools at the same time. @StephenKitt added that upgraded userspace might require a newer version of the kernel, in which case the system might not be able to boot with the old kernel and that distribution release notes mention this when appropriate. | {
"source": [
"https://unix.stackexchange.com/questions/392512",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116858/"
]
} |
392,951 | I wonder why there is an error using an asynchronous command within a loop? $ for i in {1..8}; do sleep 100 & ; done
bash: syntax error near unexpected token `;' If I write it as $ for i in {1..8}; do
> sleep 100 &
> done this works fine. How can I write it in one line without error? | Drop the ; : for i in {1..8}; do sleep 100 & done & separates commands , so the ; is extraneous (and the shell expects something between & and ; ). | {
"source": [
"https://unix.stackexchange.com/questions/392951",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
393,069 | 1 #!/bin/bash
2 # query2.sh
3
4 numbers=(53 8 12 9 784 69 8 7 1)
5 i=4
6
7 echo ${numbers[@]} # <--- this echoes "53 8 12 9 784 69 8 7 1" to stdout.
8 echo ${numbers[i]} # <--- this echoes "784" to stdout.
9
10 unset numbers[i]
11
12 echo ${numbers[@]} # <--- this echoes "53 8 12 9 69 8 7 1" to stdout.
13 echo ${numbers[i]} # <--- stdout is blank. Why, in line 13, is the stdout blank, considering that the array seems to have been updated judging by line 12's stdout? And therefore, what should I do to get the intended answer, "69"? | unset removes an element. It doesn't renumber the remaining elements. We can use declare -p to see exactly what happens to numbers : $ unset "numbers[i]"
$ declare -p numbers
declare -a numbers=([0]="53" [1]="8" [2]="12" [3]="9" [5]="69" [6]="8" [7]="7" [8]="1") Observe the numbers no longer has an element 4 . Another example Observe: $ a=()
$ a[1]="element 1"
$ a[22]="element 22"
$ declare -p a
declare -a a=([1]="element 1" [22]="element 22") Array a has no elements 2 through 21. Bash does not require that array indices be consecutive. Suggested method to force a renumbering of the indices Let's start with the numbers array with the missing element 4 : $ declare -p numbers
declare -a numbers=([0]="53" [1]="8" [2]="12" [3]="9" [5]="69" [6]="8" [7]="7" [8]="1") If we would like the indices to change, then: $ numbers=("${numbers[@]}")
$ declare -p numbers
declare -a numbers=([0]="53" [1]="8" [2]="12" [3]="9" [4]="69" [5]="8" [6]="7" [7]="1") There is now an element number 4 and it has value 69 . Alternate method to remove an element & renumber array in one step Again, let's define numbers : $ numbers=(53 8 12 9 784 69 8 7 1) As suggested by Toby Speight in the comments, a method to remove the fifth element (at index 4) and renumber the remaining elements all in one step: $ numbers=("${numbers[@]:0:4}" "${numbers[@]:5}")
$ declare -p numbers
declare -a numbers=([0]="53" [1]="8" [2]="12" [3]="9" [4]="69" [5]="8" [6]="7" [7]="1") As you can see, the fifth element was removed and all remaining elements were renumbered. ${numbers[@]:0:4} slices array numbers : it takes the first four elements starting with element 0. Similarly, ${numbers[@]:5} slice array numbers : it takes all elements starting with element 5 and continuing to the end of the array. Obtaining the indices of an array The values of an array can be obtained with ${a[@]} . To find the indices (or keys ) that correspond to those values, use ${!a[@]} . For example, consider again our array numbers with the missing element 4 : $ declare -p numbers
declare -a numbers=([0]="53" [1]="8" [2]="12" [3]="9" [5]="69" [6]="8" [7]="7" [8]="1") To see which indices are assigned: $ echo "${!numbers[@]}"
0 1 2 3 5 6 7 8 Again, 4 is missing from the list of indices. Documentation From man bash : The unset builtin is used to destroy arrays. unset name[subscript] destroys the array element at index subscript .
Negative subscripts to indexed arrays are interpreted as described above. Care must be taken to avoid unwanted side effects caused by pathname expansion. unset name , where name is an array, or unset name[subscript] , where subscript is * or @ , removes the entire
array. | {
"source": [
"https://unix.stackexchange.com/questions/393069",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/238486/"
]
} |
393,091 | I don't know why I can't use env array variable inside a script ? In my ~/.bashrc or ~/.profile export HELLO="ee"
export HELLOO=(aaa bbbb ccc) in a shell : > echo $HELLO
ee
> echo $HELLOO
aaa
> echo ${HELLOO[@]}
aaa bbbb ccc in a script : #!/usr/bin/env bash
echo $HELLO
echo $HELLOO
echo ${HELLOO[@]}
---
# Return
ee Why ? | A bash array can not be an environment variable as environment variables may only be key-value string pairs. You may do as the shell does with its $PATH variable, which essentially is an array of paths; turn the array into a string, delimited with some particular character not otherwise present in the values of the array: $ arr=( aa bb cc "some string" )
$ arr=$( printf '%s:' "${arr[@]}" )
$ printf '%s\n' "$arr"
aa:bb:cc:some string: Or neater, arr=( aa bb cc "some string" )
arr=$( IFS=:; printf '%s' "${arr[*]}" )
export arr The expansion of ${arr[*]} will be the elements of the arr array separated by the first character of IFS , here set to : . Note that if doing it this way, the elements of the string will be separated (not delimited ) by : , which means that you would not be able to distinguish an empty element at the end, if there was one. An alternative to passing values to a script using environment variables is (obviously?) to use the command line arguments: arr=( aa bb cc )
./some_script "${arr[@]}" The script would then access the passed arguments either one by one by using the positional parameters $1 , $2 , $3 etc, or by the use of $@ : printf 'First I got "%s"\n' "$1"
printf 'Then I got "%s"\n' "$2"
printf 'Lastly there was "%s"\n' "$3"
for opt in "$@"; do
printf 'I have "%s"\n' "$opt"
done | {
"source": [
"https://unix.stackexchange.com/questions/393091",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/202577/"
]
} |
393,598 | I've seen cases like that with faulty storage devices, with faults in remote storage (SAN, NAS), I think I've even seen something similar caused by mount permissions. But it's the first time I see this happening on the same filesystem as my home directory. What kind of permissions are kicking in here? Definitely not mounts (I'm on the same ext4 filesystem), not SELinux, not ACLs. Then what? I do not recall how this directory was created. It's likely it got created by some kind of software. For me the weirdest part is that the directory is not even allowed to see its or its parent's info (last command). I'm using Linux Mint Sarah. user01@MyPC ~/somedirectory $ ls -l ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:
ls: negaliu pasiekti './deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:/workspace': Permission denied
viso 0
d????????? ? ? ? ? ? workspace user01@MyPC ~/somedirectory $ ls -ld ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:
drw-r--r-- 3 user01 user01 4096 Rgs 27 2016 ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D: user01@MyPC ~/somedirectory $ sudo file ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:
./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:: directory user01@MyPC ~/somedirectory $ sudo ls -l ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:
viso 4
drwxr-xr-x 3 user01 user01 4096 Rgs 27 2016 workspace user01@MyPC ~/somedirectory $ sudo stat ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:
File: './deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:'
Size: 4096 Blocks: 8 IO Block: 4096 aplankas
Device: 807h/2055d Inode: 3937216 Links: 3
Access: (0644/drw-r--r--) Uid: ( 1000/ user01) Gid: ( 1000/ user01)
Access: 2017-09-21 12:57:33.990819052 +0300
Modify: 2016-09-27 11:18:38.309775066 +0300
Change: 2017-03-13 14:56:40.960468954 +0200
Birth: - user01@MyPC ~/somedirectory $ sudo getfacl ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:
# file: deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:
# owner: user01
# group: user01
user::rw-
group::r--
other::r-- user01@MyPC ~/somedirectory $ stat ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:
File: './deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:'
Size: 4096 Blocks: 8 IO Block: 4096 aplankas
Device: 807h/2055d Inode: 3937216 Links: 3
Access: (0644/drw-r--r--) Uid: ( 1000/ user01) Gid: ( 1000/ user01)
Access: 2017-09-21 12:57:33.990819052 +0300
Modify: 2016-09-27 11:18:38.309775066 +0300
Change: 2017-03-13 14:56:40.960468954 +0200
Birth: - user01@MyPC ~/somedirectory $ stat ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:/workspace
stat: nepavyksta patikrinti './deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:/workspace': Permission denied user01@MyPC ~/somedirectory $ sudo stat ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:/workspace
File: './deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:/workspace'
Size: 4096 Blocks: 8 IO Block: 4096 aplankas
Device: 807h/2055d Inode: 3937217 Links: 3
Access: (0755/drwxr-xr-x) Uid: ( 1000/ user01) Gid: ( 1000/ user01)
Access: 2017-09-21 12:58:46.845727190 +0300
Modify: 2016-09-27 11:18:38.309775066 +0300
Change: 2016-12-02 13:56:08.298109826 +0200
Birth: - user01@MyPC ~/somedirectory $ stat .
File: '.'
Size: 4096 Blocks: 8 IO Block: 4096 aplankas
Device: 807h/2055d Inode: 3278479 Links: 23
Access: (0755/drwxr-xr-x) Uid: ( 1000/ user01) Gid: ( 1000/ user01)
Access: 2017-09-21 09:46:22.102269130 +0300
Modify: 2017-09-20 17:33:04.564009275 +0300
Change: 2017-09-20 17:33:04.564009275 +0300
Birth: - user01@MyPC ~/somedirectory $ ll ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:/
ls: negaliu pasiekti './deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:/workspace': Permission denied
ls: negaliu pasiekti './deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:/.': Permission denied
ls: negaliu pasiekti './deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:/..': Permission denied
viso 0
d????????? ? ? ? ? ? ./
d????????? ? ? ? ? ? ../
d????????? ? ? ? ? ? workspace/ Attributes: user01@MyPC ~/somedirectory $ sudo lsattr ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:/
-------------e-- ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:/workspace
user01@MyPC ~/somedirectory $ sudo lsattr ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D\:/workspace
-------------e-- ./deploy_dir/liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/bin/D:/workspace/directory2 | On files read suffices to check the permissions. You need read AND execute on folders to ls them. chmod -R a+X ./deploy_dir Capital X to only set execute on folders (and files that already have execute bit set). | {
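A small throwaway demonstration of why the execute bit matters on directories (uses a scratch directory, safe to try):
mkdir -p demo/sub && touch demo/sub/f
chmod u=r demo/sub    # read permission but no execute
ls demo/sub           # the names may still be listed...
ls -l demo/sub        # ...but stat() on the entries fails: "Permission denied" and d????????? style output
chmod -R u+rwx demo   # restore access so the scratch directory can be removed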
"source": [
"https://unix.stackexchange.com/questions/393598",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95079/"
]
} |
393,948 | OS: Kernel 2.6.x Utilities: From busybox 1.2x A command outputs multiple lines of text. string1 text1: "asdfs asdf adfas"
string2 text2: "iojksdfa kdfj adsfj;"
string3 text3: "skidslk sadfj"
string4 text4: "lkpird sdfd"
string5 text5: "alskjfdsd safsd" Goal: I need to search for the line that contains "text4: " (no quotes) and then extract all characters after that string to the end of the line. Desired Output: "lkpird sdfd" (with quotes) Currently I have ... command | grep 'text4:' | awk -F': ' '{print $3}' Is there a simpler way to write this ? | Using sed $ command | sed -n 's/.*text4://p'
"lkpird sdfd" -n tells sed not to print unless we explicitly ask it to. s/.*text4:// tells sed to remove any text from the beginning of the line to the final occurrence of text4: . If such a line is found, then the p tells sed to print it. Using grep -P $ command | grep -oP '(?<=text4:).*'
"lkpird sdfd" -o tells grep to print only the matching part. (?<=text4:).* matches any text that follows text4: but does not include the text4: . The -P option requires GNU grep. Thus, it will not work with busybox's builtin grep , nor with the default grep on BSD/Mac OSX systems. Using awk The original grep-awk solution can be simplified: $ command | awk -F': ' '/text4: /{print $2}'
"lkpird sdfd" Using awk (alternate) $ command | awk '/text4:/{sub(/.*text4:/, ""); print}'
"lkpird sdfd" /text4:/ selects lines that contain text4: . sub(/.*text4:/, "") tells awk to remove all text from the beginning of the line to the last occurrence of text4: on the line. print tells awk to print those lines. | {
"source": [
"https://unix.stackexchange.com/questions/393948",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227235/"
]
} |
394,065 | When I scan documents that are landscape-oriented, the output PDF files are portrait and so all the PDF viewers display the scanned documents in portrait. From the command line, how do you rotate a PDF file 90 degrees? I tried searching and found a bunch of solutions but I had trouble finding what looked like an authoritative solution[1] that uses a stable and robust Linux/Unix tool. footnote [1] For example, here is a sampling of some of the haphazard solutions I found: "just use Adobe Acrobat Pro to rotate the file and then save the file" "use pdfjam" "use PDFtk" "use ${PROGRAM_NAME} from Poppler" "use ImageMagick's convert"
-- but then all the comments were very negative and stating "the image quality is ruined" "open the file in a PDF viewer, then rotate, then print using a PDF printer like cutePDF or PDF printer or etc" "use ${PROGRAM_NAME}", then I searched for "${PROGRAM_NAME}" and there is something about "Fedora removed ${PROGRAM_NAME} because of licensing issues" | Use PDFtk. For rotating clockwise: pdftk input.pdf cat 1-endeast output output.pdf For rotating anti-clockwise: pdftk input.pdf cat 1-endwest output output.pdf Regarding the installation of PDFtk on Fedora, I found these links: Pdftk substitute for Fedora 21 and 22 Pdftk not available? Install pdftk on Fedora using the Snap Store | {
"source": [
"https://unix.stackexchange.com/questions/394065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5510/"
]
} |
394,143 | There's a shortcut on Discord that enables you to switch between guilds easily. It's Ctrl + Alt + Up and Ctrl + Alt + Down . The problem is that Gnome uses this shortcut for changing workspaces. I have two monitors so I don't use additional workspaces very often so I opened settings and looked for the shortcut so that I can disable it. I found that apparently the shortcut to switch workspaces up and down is Super + Page Up and Super + Page Down and I couldn't find the Ctrl + Alt + Up or down shortcut anywhere else. It seems almost as if this shortcut isn't possible to change but I'm sure that's not the case, though I have no idea how to do that. | In general this can happen because the OS (window system) has priority and intercepts this shortcut and stops propagation to your desired application.
Solution: Removing the shortcuts using dconf-editor : Open a terminal sudo apt-get install dconf-tools (or dconf-editor ) Now run dconf-editor in dconf-editor go to: /org/gnome/desktop/wm/keybindings/ Find switch-to-workspace-down , put ['disabled'] instead of default same for switch-to-workspace-up quit dconf-editor and you are done I always have this problem when I want to use some Eclipse IDE shortcuts: https://bugs.eclipse.org/bugs/show_bug.cgi?id=321094 | {
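The same change can be made from a terminal without dconf-editor, using GNOME's gsettings tool (the ['disabled'] value is the one the instructions above describe):
gsettings set org.gnome.desktop.wm.keybindings switch-to-workspace-down "['disabled']"
gsettings set org.gnome.desktop.wm.keybindings switch-to-workspace-up "['disabled']"
gsettings reset org.gnome.desktop.wm.keybindings switch-to-workspace-down   # undo later if you change your mind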
"source": [
"https://unix.stackexchange.com/questions/394143",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/252642/"
]
} |
394,169 | Is it risky to rename folder with 180GB with the mv command? We have a folder /data that contain 180GB. We want to rename the /data folder to /BD_FILES with the mv command. Is it safe to do that? | Changing the name on a folder is safe, if it stays within the same file system. If it is a mount point ( /data kinda looks like it could be a mount point to me, check this with mount ), then you need to do something other than just a simple mv since mv /data /BD_FILES would move the data to the root partition (which may not be what you want to happen). You should unmount the filesystem, rename the now empty directory, update /etc/fstab with the new location for this filesystem, and then remount the filesystem at the renamed location. In other words, umount /data mv /data /BD_FILES (assuming /BD_FILES doesn't already exist, in that case, move it out of the way first) update /etc/fstab , changing the mount point from /data to /BD_FILES mount /BD_FILES This does not involve copying any files around, it just changes the name of the directory that acts as the mount point for the filesystem. If the renaming of the directory involves moving it to a new file system (which would be the case if /data is on one disk while /BD_FILES is on another disk, a common thing to do if you're moving things to a bigger partition, for example), I'd recommend copying the data while leaving the original intact until you can check that the copy is ok. You may do this with rsync -a /data/ /BD_FILES/ for example, but see the rsync manual for what this does and does not do (it does not preserve hard links, for example). Once the folder is renamed, you also need to make sure that existing procedures (programs and users using the folder, backups etc.) are aware of the name change. | {
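For the mount-point case, the /etc/fstab edit is only the second column; something like this, with a made-up device and default options:
# before
UUID=0a1b2c3d-...  /data      ext4  defaults  0  2
# after
UUID=0a1b2c3d-...  /BD_FILES  ext4  defaults  0  2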
"source": [
"https://unix.stackexchange.com/questions/394169",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
394,421 | I would like to install kubectl version 1.2.4 on a machine. The Kubernetes documentation recommends using snap for installation on Ubuntu. snap install --help is not very useful, the one promising parameter --revision= doesn't work: $ sudo snap install --revision=1.2.4 kubectl
error: cannot decode request body into snap instruction: invalid snap revision: "\"1.2.4\"" I suspect that --revision expects a SHA rather than a semver. The apt-get convention of using package=1.2.3 also doesn't work: $ sudo snap install kubectl=1.2.4
error: snap "kubectl=1.2.4" not found The usage documentation seems silent on the question. Anybody know? | you can run snap info kubectl which gives you a list of kubectl versions. Then you can install your preferred version with --channel like this sudo snap install kubectl --channel=1.6/stable --classic or if you want to upgrade / downgrade to specific version: sudo snap refresh kubectl --channel=1.6/stable --classic It seems that version 1.2.4 Is not available in snap, in that case you can download the executable https://storage.googleapis.com/kubernetes-release/release/v1.2.4/bin/linux/amd64/kubectl | {
"source": [
"https://unix.stackexchange.com/questions/394421",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4143/"
]
} |
394,461 | I have a scenario where I have to switch to the different user and after that, I need to execute the some Linux command. my command is something like this ( echo myPassword | sudo -S su hduser ) && bash /usr/local/hadoop/sbin/start-dfs.sh but with this command, I switch to the user and the next command got triggered on the previous user. Is there any I can accomplish this using shell script | Try. sudo -H -u TARGET_USER bash -c 'bash /usr/local/hadoop/sbin/start-dfs.sh' see man sudo : -H The -H (HOME) option requests that the security policy set the HOME
environment variable to the home directory of the target user (root by
default) as specified by the password database. Depending on the
policy, this may be the default behavior. -u user The -u (user) option causes sudo to run the specified command as a
user other than root. To specify a uid instead of a user name, use #uid.
When running commands as a uid, many shells require that the '#' be
escaped with a backslash ('\'). Security policies may restrict uids to
those listed in the password database. The sudoers policy allows uids
that are not in the password database as long as the targetpw option is
not set. Other security policies may not support this. | {
"source": [
"https://unix.stackexchange.com/questions/394461",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/207576/"
]
} |
394,490 | How to cut till first delimiter / and get remaining part of strings? Ex: pandi/sha/Dev/bin/boot I want to cut pandi , so the output like sha/Dev/bin/boot | Simply with cut command: echo "pandi/sha/Dev/bin/boot" | cut -d'/' -f2-
sha/Dev/bin/boot -d'/' - field delimiter -f2- - a range of fields to output ( -f<from>-<to> ; in our case: from 2 to the last) | {
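Pure shell parameter expansion can do the same without starting an external process (POSIX sh and bash):
s='pandi/sha/Dev/bin/boot'
echo "${s#*/}"    # removes everything up to and including the first /, giving sha/Dev/bin/boot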
"source": [
"https://unix.stackexchange.com/questions/394490",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248795/"
]
} |
394,917 | I am using tune2fs, but it gives data in blocks, and I can't get the exact value of total size of the partition. I have also used fdisk -l /dev/mmcblk0p1 , but the size am getting from here is also a different value. How can I find the exact partition size? | The command is: blockdev --getsize64 /dev/mmcblk0p1 It gives the result in bytes, as a 64-bit integer. It queries the byte size of a block device , as the kernel see its size. The reason, why fdisk -l /dev/mmcblk0p1 didn't work, was that fdisk does some total different thing: it reads in the partition table (= first sector) of the block device, and prints what it found . It doesn't check anything, only says what is in the partition table. It doesn't even bother if the partition table is damaged, or the block device doesn't have one: it will print a warning that the checksum is not okay, but it still prints what is finds, even if the values are clearly non-sense. This is what happened in your case: /dev/mmcblk0p1 does not have a partition table. As the name of the device shows, it is already the first partition of the physical disk /dev/mmcblk0 . This disk contains a partition table, had you queried it with fdisk -l /dev/mmcblk0 , it had worked (assuming it had an msdos partition table). | {
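Related commands that report the same kernel-side sizes (the values naturally depend on your device):
blockdev --getsize64 /dev/mmcblk0p1    # size in bytes
lsblk -b -o NAME,SIZE /dev/mmcblk0     # bytes for the whole disk and each partition
cat /sys/class/block/mmcblk0p1/size    # size in 512-byte sectors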
"source": [
"https://unix.stackexchange.com/questions/394917",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/253236/"
]
} |
395,086 | I am trying to create 50 directories (dir-01..dir-50). And I want to create 50 files (01.txt..50.txt) inside each of the 50 directories. For example:
dir-01/01.txt..50.txt
dir-02/02.txt..50.txt
etc... I am able to create the directories, but I am having trouble with creating the files inside each. I am also trying to compress all these afterwards into a tar file. This is where I am at so far: for i in {1..50};
do mkdir dir-$i;
done;
for j in {1..50};
do touch $j.txt.dir-*;
done;
tar -cf final.tar dir-{1..50} I know that second loop is wrong, but I am unsure how to proceed. Any advice is appreciated. This seems to work, but I am unsure if it is correct in syntax or format: for i in {1..50}; do
mkdir "dir-$i";
for j in {1..50}; do
touch "./dir-$i/$j.txt";
done;
done;
tar -cf final.tar dir-{1..50} | With zsh or bash or yash -o braceexpand : $ mkdir dir-{01..50}
$ touch dir-{01..50}/file{01..50}.txt
$ ls dir-45
file01.txt file09.txt file17.txt file25.txt file33.txt file41.txt file49.txt
file02.txt file10.txt file18.txt file26.txt file34.txt file42.txt file50.txt
file03.txt file11.txt file19.txt file27.txt file35.txt file43.txt
file04.txt file12.txt file20.txt file28.txt file36.txt file44.txt
file05.txt file13.txt file21.txt file29.txt file37.txt file45.txt
file06.txt file14.txt file22.txt file30.txt file38.txt file46.txt
file07.txt file15.txt file23.txt file31.txt file39.txt file47.txt
file08.txt file16.txt file24.txt file32.txt file40.txt file48.txt
$ tar -cf archive.tar dir-{01..50} With ksh93 : $ mkdir dir-{01..50%02d}
$ touch dir-{01..50%02d}/file{01..50%02d}.txt
$ tar -cf archive.tar dir-{01..50%02d} The ksh93 brace expansion takes a printf() -style format string that can be used to create the zero-filled numbers. With a POSIX sh : i=0
while [ "$(( i += 1 ))" -le 50 ]; do
zi=$( printf '%02d' "$i" )
mkdir "dir-$zi"
j=0
while [ "$(( j += 1 ))" -le 50 ]; do
zj=$( printf '%02d' "$j" )
touch "dir-$zi/file$zj.txt"
done
done
tar -cf archive.tar dir-* # assuming only the folders we just created exist An alternative for just creating your tar archive without creating so many files, in bash : mkdir dir-01
touch dir-01/file{01..50}.txt
tar -cf archive.tar dir-01
for i in {02..50}; do
mv "dir-$(( i - 1 ))" "dir-$i"
tar -uf archive.tar "dir-$i"
done This just creates one of the directories and adds it to the archive.
Since all files in all 50 directories are identical in name and contents, it then renames the directory and appends it to the archive in successive iterations to add the other 49 directories. | {
"source": [
"https://unix.stackexchange.com/questions/395086",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/253375/"
]
} |
395,284 | I found malware on my ec2 instance which was continuously mining bitcoin and using my instance processing power. I successfully identified the process, but was unable to remove and kill it. I ran this command watch "ps aux | sort -nrk 3,3 | head -n 5" It shows the top five process running on my instance, from which I found there is a process name ' bashd ' which was consuming 30% of cpu. The process is bashd -a cryptonight -o stratum+tcp://get.bi-chi.com:3333 -u 47EAoaBc5TWDZKVaAYvQ7Y4ZfoJMFathAR882gabJ43wHEfxEp81vfJ3J3j6FQGJxJNQTAwvmJYS2Ei8dbkKcwfPFst8FhG -p x I killed this process by using the kill -9 process_id command. After 5 seconds, the process started again. | If you did not put the software there and/or if you think your cloud instance is compromised: Take it off-line, delete it, and rebuild it from scratch (but read the link below first). It does not belong to you anymore, you can not trust it any longer . See "How to deal with a compromised server" on ServerFault for further information about what to do and how to behave when getting a machine compromised. In addition to the things to do and think about in the list(s) linked to above, be aware that depending on who you are and where you are, you may have a legal obligation to report it to either a local/central IT security team/person within your organization and/or to authorities (possibly even within a certain time frame). In Sweden (since December 2015), for example, any state agency (e.g. universities) are obliged to report IT-related incidents within 24 hours. Your organization will have documented procedures for how to go about doing this. | {
"source": [
"https://unix.stackexchange.com/questions/395284",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/253538/"
]
} |
395,291 | I am new to shell scripting and I am trying to sequentially number the headers in a fasta file. The sequences in my fasta file look like this: >Rodentia sp.
MALWILLPLLALLILWGPDPAQAFVNQHLCGSHLVEALYILVCGERGFFYTPMSRREVED
PQVGQVELGAGPGAGSEQTLALEVARQARIVQQCTSGICSLYQENYCN
>Ovis aries
MALWTRLVPLLALLALWAPAPAHAFVNQHLCGSHLVEALYLVCGERGFFYTPKARREVEG
PQVGALELAGGPGAGGLEGPPQKRGIVEQCCAGVCSLYQLENYCN I want to use awk in my shell script so that the headers are sequentially numbered, by inserting a number starting from 1 to n (where n is the number of sequences) after the ">", so that the sequences look like this: > 1 Rodentia sp.
MALWILLPLLALLILWGPDPAQAFVNQHLCGSHLVEALYILVCGERGFFYTPMSRREVED
PQVGQVELGAGPGAGSEQTLALEVARQARIVQQCTSGICSLYQENYCN
> 2 Ovis aries
MALWTRLVPLLALLALWAPAPAHAFVNQHLCGSHLVEALYLVCGERGFFYTPKARREVEG
PQVGALELAGGPGAGGLEGPPQKRGIVEQCCAGVCSLYQLENYCN I tried using the sub function in awk, to do this, replacing every instance of ">" with "> [a number]". awk '/>/{sub(">", "> ++i ")}1' file However, I don't understand how to increment variables using the sub function in awk. I would like to know if there is a way to do this using the sub function. I understand how sub works, but I don't know how to declare the variable to be incremented properly. I declared i to be 1 at the beginning of my shell script: i=1 However, the output I get from the sub function is: > ++$i Rodentia sp.
> ++$i Ovis aries How can a declare a variable properly so that I can use the awk sub function to number the headers? | If you did not put the software there and/or if you think your cloud instance is compromised: Take it off-line, delete it, and rebuild it from scratch (but read the link below first). It does not belong to you anymore, you can not trust it any longer . See "How to deal with a compromised server" on ServerFault for further information about what to do and how to behave when getting a machine compromised. In addition to the things to do and think about in the list(s) linked to above, be aware that depending on who you are and where you are, you may have a legal obligation to report it to either a local/central IT security team/person within your organization and/or to authorities (possibly even within a certain time frame). In Sweden (since December 2015), for example, any state agency (e.g. universities) are obliged to report IT-related incidents within 24 hours. Your organization will have documented procedures for how to go about doing this. | {
"source": [
"https://unix.stackexchange.com/questions/395291",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/253541/"
]
} |
395,297 | I have a directory as follows -rw-r--r-- 1 ualaoip2 mcm1 1073233 Sep 30 12:40 database.260.4-0.tar.gz
-rw-r--r-- 1 ualaoip2 mcm1 502373963 Sep 30 12:40 database.260.4-1.tar.gz
-rw-r--r-- 1 ualaoip2 mcm1 880379753 Sep 30 12:40 database.260.4-2.tar.gz
drwxr-xr-x 2 ualaoip2 mcm1 4096 Sep 30 13:41 db0file
drwxr-xr-x 2 ualaoip2 mcm1 4096 Sep 30 13:41 db1file
drwxr-xr-x 2 ualaoip2 mcm1 4096 Sep 30 13:41 db2file and I want to move the file database...0 into folder0 &c... What's the best way of doing this? I tried various variants of for i in $(ls fi*) do; mv $i ./folder$i but they renamed things and overwrote lots of stuff I didn't want! I tried using variants of find . -maxdepth 1 -type d -printf '%f\n' | sort /* why is it not sorted? but couldn't get rid of the . for the current directory. I used mkdir db{0..7} to create the files - is this the best way? I would appreciate a couple of words of explanation with the answer - not just a monkey see, monkey do! :-) | If you did not put the software there and/or if you think your cloud instance is compromised: Take it off-line, delete it, and rebuild it from scratch (but read the link below first). It does not belong to you anymore, you can not trust it any longer . See "How to deal with a compromised server" on ServerFault for further information about what to do and how to behave when getting a machine compromised. In addition to the things to do and think about in the list(s) linked to above, be aware that depending on who you are and where you are, you may have a legal obligation to report it to either a local/central IT security team/person within your organization and/or to authorities (possibly even within a certain time frame). In Sweden (since December 2015), for example, any state agency (e.g. universities) are obliged to report IT-related incidents within 24 hours. Your organization will have documented procedures for how to go about doing this. | {
"source": [
"https://unix.stackexchange.com/questions/395297",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99494/"
]
} |
395,428 | If I rename images via exiv to the exif date time, I do the following: find . -iname \*jpg -exec exiv2 -v -t -r '%Y_%m_%d__%H_%M_%S' rename {} \; Now it might happen that pictures have exactly the same timestamp (including seconds). How can I make the filename unique automatically? The command should be stable in the sense that if I execute it on the same directory structure again (perhaps after adding new pictures), the pictures already renamed shouldn't change and if pictures with already existing filenames are added the new filenames should be unique as well. My first attempt was just to leave the original basename in the resulting filename, but then the command wouldn't be stable in the sense above. | You may want to try jhead instead which does that out-of-the-box (with a , b ... z suffixes allowing up to 27 files with the same date) and doesn't have the stability issue mentioned by @meuh: find . -iname '*jpg' -exec jhead -n%Y_%m_%d__%H_%M_%S {} + Or using exiftool (example in man page): exiftool -ext jpg '-FileName<CreateDate' -d %Y_%m_%d__%H_%M_%S%%-c.%%e . (here with %-c being a numerical suffix starting with - ) | {
"source": [
"https://unix.stackexchange.com/questions/395428",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5289/"
]
} |
395,933 | I am having trouble grasping how to properly check from a bash script if the current time is between 23:00 and 06:30.
I am trying to run an infinite loop to check the time now, and to do something if the time range is between 11pm and 6:30 am.
Here's what I have written so far, which doesn't work the next day: fireup()
{
local starttime=$(date --date="23:00" +"%s")
local endtime=$(date --date="06:30" +"%s")
while :; do
local currenttime=$(date +%s)
if [ "$currenttime" -ge "$starttime" -a "$currenttime" -ge "$endtime" ]; then
do_something
else
do_something_else
fi
test "$?" -gt 128 && break
local currenttime=$(date +%s)
done &
} What am I doing wrong? | If all you need is to check if HH:MM is between 23:00 and 06:30, then don't use Unix timestamps. Just check the HH:MM values directly: fireup()
{
while :; do
currenttime=$(date +%H:%M)
if [[ "$currenttime" > "23:00" ]] || [[ "$currenttime" < "06:30" ]]; then
do_something
else
do_something_else
fi
test "$?" -gt 128 && break
done &
} Notes: Time in HH:MM will be in lexicographic order, so you can directly compare them as strings. Avoid using -a or -o in [ ] , use || and && instead. Since this is bash, prefer [[ ]] over [ ] , it makes life easier.
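To see why the plain string comparison is safe, here is a small added illustration (it relies on both operands being zero-padded HH:MM strings):
$ [[ "06:29" < "06:30" ]] && echo "still inside the window"
$ [[ "23:45" > "23:00" ]] && echo "inside the window again"
Inside [[ ]] the < and > operators compare strings lexicographically, which matches chronological order for this fixed-width format. | {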
"source": [
"https://unix.stackexchange.com/questions/395933",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123334/"
]
} |
395,939 | I need to mass-rename files by prepending their parent directory name to them, without using the rename command. e.g. /tmp/2017-09-22/cyber.gz
/tmp/2017-09-23/cyber.gz
/tmp/2017-09-24/cyber.tar Also, the renamed files have to be copied into /tmp/archive without affecting the original files above, so the result looks like this: /tmp/archive/2017-09-22_cyber.gz
/tmp/archive/2017-09-23_cyber.gz
/tmp/archive/2017-09-24_cyber.tar | If all you need is to check if HH:MM is between 23:00 and 06:30, then don't use Unix timestamps. Just check the HH:MM values directly: fireup()
{
while :; do
currenttime=$(date +%H:%M)
if [[ "$currenttime" > "23:00" ]] || [[ "$currenttime" < "06:30" ]]; then
do_something
else
do_something_else
fi
test "$?" -gt 128 && break
done &
} Notes: Time in HH:MM will be in lexicographic order, so you can directly compare them as strings. Avoid using -a or -o in [ ] , use || and && instead. Since this is bash, prefer [[ ]] over [ ] , it makes life easier. | {
"source": [
"https://unix.stackexchange.com/questions/395939",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/254002/"
]
} |
396,223 | My script: while true; do
date
echo -e "${YELLOW}Network check${NC}\n\n"
while read hostname
do
ping -c 1 "$hostname" > /dev/null 2>&1 &&
echo -e "Network $hostname : ${GREEN}Online${NC}" ||
echo -e "${GRAY}Network $hostname${NC} : ${RED}Offline${NC}"
done < list.txt
sleep 30
clear
done Is outputting info like this: Network 10.x.xx.xxx : Online
Network 10.x.xx.xxx : Offline
Network 10.x.xx.xxx : Offline
Network 10.x.xx.xxx : Offline
Network 10.x.xx.x : Online
Network 139.xxx.x.x : Online
Network 208.xx.xxx.xxx : Online
Network 193.xxx.xxx.x : Online which I'd like to clean up to get something like this: Network 10.x.xx.xxx : Online
Network 10.x.xx.xxx : Offline
Network 10.x.xx.xxx : Offline
Network 10.x.xx.x : Online
Network 139.xxx.x.x : Online
Network 208.xx.xxx.xxx : Online
Network 193.xxx.xxx.x : Online
Network 193.xxx.xxx.xxx : Offline | Simply with column command: yourscript.sh | column -t The output: Network 10.x.xx.xxx : Online
Network 10.x.xx.xxx : Offline
Network 10.x.xx.xxx : Offline
Network 10.x.xx.xxx : Offline
Network 10.x.xx.x : Online
Network 139.xxx.x.x : Online
Network 208.xx.xxx.xxx : Online
Network 193.xxx.xxx.x : Online | {
"source": [
"https://unix.stackexchange.com/questions/396223",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/254201/"
]
} |
396,382 | I would like to do 2 things: 1) Revert the interfaces to the old classic names: eth0 instead of ens33. 2) Rename the interfaces the way I want, so that for example I can call interface eth0 wan0, or assign eth1, eth2 and so on to the MAC addresses I want. | Assuming that you have just installed Debian 9 (Stretch). 1) To revert to the old interface names, do: nano /etc/default/grub and edit the line GRUB_CMDLINE_LINUX="" to GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0" then run grub-mkconfig to apply the change to the bootloader: grub-mkconfig -o /boot/grub/grub.cfg You need a reboot after that. 2) For renaming the interfaces: For just a temporary modification take a look at the @xhienne answer. For a permanent modification, start by creating / editing the /etc/udev/rules.d/70-persistent-net.rules file: nano /etc/udev/rules.d/70-persistent-net.rules and insert lines like:
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:30:50:48:a1", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
# interface with MAC address "00:0c:30:50:48:ab" will be assigned "eth1"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:30:50:48:ab", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1" If you want to assign for example a name like wan0 to eth0 you can use given my example: # interface with MAC address "00:0c:30:50:48:a1" will be assigned "eth0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:30:50:48:a1", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="wan0" After the next reboot, or after a service networking restart , you should see the changes applied. EXTRA: Remember that after all these modifications you have to edit your /etc/network/interfaces file, replacing the old interface names with the new ones! EXTRA: If you want to know what MAC address your interfaces have, just run ip addr show and look at the link/ section.
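For example, a minimal /etc/network/interfaces stanza for the renamed interface might look like this (illustrative only; adapt the addressing method to your setup):
auto wan0
iface wan0 inet dhcp
Replace dhcp with a static configuration if that is what the interface used before. | {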
"source": [
"https://unix.stackexchange.com/questions/396382",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143935/"
]
} |
396,526 | If I want to perform some commands given variables aren't set I'm using: if [[ -z "$a" || -z "$v" ]]
then
echo "a or b are not set"
fi Yet the same syntax doesn't work with -v , I have to use: if [[ -v a && -v b ]]
then
echo "a & b are set"
fi What is the history behind this? I don't understand why the syntax wouldn't be the same. I've read that -v is a somewhat recent addition to bash (4.2) ? | Test operators -v and -z are just not the same. Operator -z tells if a string is empty. So it is true that [[ -z "$a" ]] will give a good approximation of "variable a is unset",
but not a perfect one: the expression will yield true if a is set to the empty string
rather than unset; the enclosing script will fail if a is unset and the option nounset is enabled. On the other hand, -v a will be exactly "variable a is set", even
in edge cases. It should be clear that passing $a rather than a to -v would not be right, as it would expand that possibly-unset
variable before the test operator sees it; so it has to be part of
that operator's task to inspect that variable, pointed to by its name,
and tell whether it is set.
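A short added illustration of the difference (it requires bash 4.2 or newer for -v ):
$ unset a; b=''
$ [[ -z $a ]] && echo 'a looks empty'   # true: unset and empty look the same to -z
$ [[ -z $b ]] && echo 'b looks empty'   # true: b is set but empty
$ [[ -v a ]] || echo 'a is unset'       # -v tells the two cases apart
$ [[ -v b ]] && echo 'b is set'         # true even though b is empty
So -z answers "is the value empty?" while -v answers "does the variable exist at all?". | {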
"source": [
"https://unix.stackexchange.com/questions/396526",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
396,630 | My problem: I'm writing a bash script and in it I'd like to check if a given service is running. I know how to do this manually, with $ service [service_name] status . But (especially since the move to systemd) that prints a whole bunch of text that's a little messy to parse. I assumed there's a command made for scripts with simple output or a return value I can check. But Googling around only yields a ton of "Oh, just ps aux | grep -v grep | grep [service_name] " results. That can't be the best practice, is it? What if another instance of that command is running, but not one started by the SysV init script? Or should I just shut up and get my hands dirty with a little pgrep? | systemctl has an is-active subcommand for this: systemctl is-active --quiet service will exit with status zero if service is active, non-zero otherwise, making it ideal for scripts: systemctl is-active --quiet service && echo Service is running If you omit --quiet it will also output the current status to its standard output. As pointed out by don_crissti , some units can be active even though nothing is running to provide the service: units marked as “RemainAfterExit” are considered active if they exit successfully, the idea being that they provide a service which doesn’t need a daemon ( e.g. they configure some aspect of the system). Units involving daemons will however only be active if the daemon is still running. | {
"source": [
"https://unix.stackexchange.com/questions/396630",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109544/"
]
} |
397,269 | In a multiple monitor set-up, is there a way to transfer entire workspaces (as opposed to single applications) to a different monitor? | You can define a binding in your i3 config. Note: windows are called "containers", and monitors are called "outputs". move workspace to output left|right|down|up|current|primary|<output> Here's what I use in my config: # move focused workspace between monitors
bindsym $mod+Ctrl+greater move workspace to output right
bindsym $mod+Ctrl+less move workspace to output left Strangely, I'd expect the $mod+Ctrl+greater to require me to hit Ctrl and Shift at the same time, since you need to press Shift to type < and > . However, pressing just mod, Ctrl, and , works, which is very nice. Note, you can also set a keybinding to send things to a specific monitor by its name. | {
"source": [
"https://unix.stackexchange.com/questions/397269",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/161652/"
]
} |
397,524 | For example, I can do the following touch a or touch ./a Then when I do ls I can view both, so what exactly is the ./ for? | The dot-slash, ./ , is a relative path to something in the current directory. The dot is the current directory and the slash is a path delimiter. When you give the command touch ./a you say "run the touch utility with the argument ./a ", and touch will create (or update the timestamp for) the file a in the current directory. There is no difference between touch a and touch ./a as both commands will act on the thing called a in the current directory. In a similar way, touch ../a will act on the a in the directory above the current directory as .. refers to "one directory further up in the hierarchy". . and .. are two special directory names that are present in every directory on Unix systems. It's useful to be able to put ./ in front of a filename sometimes, as when you're trying to create or delete, or just work with, a file with a dash as the first character in its filename. For example, touch -a file will not create a file called -a file , and neither would touch '-a file' But, touch ./'-a file' would. | {
"source": [
"https://unix.stackexchange.com/questions/397524",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230247/"
]
} |
397,586 | Is there any way I can print the variable name along with its value? j=jjj
k=kkk
l=lll
for i in j k l
do
....
done Expected output (each variable on a separate line): j = jjj
k = kkk
l = lll Can any one suggest a way to get the above result? | A simple way in Bash: j="jjj"
k="kkk"
l="lll"
for i in j k l; do echo "$i = ${!i}"; done The output: j = jjj
k = kkk
l = lll ${!i} - Bash variable expansion/indirection (gets the value of the variable name held by $i )
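As an extra note (not part of the original answer), declare -p prints each named variable together with its value in declare syntax, which avoids the indirection entirely:
$ declare -p j k l
declare -- j="jjj"
declare -- k="kkk"
declare -- l="lll"
The output format differs from the loop above, but it is handy for quick debugging. | {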
"source": [
"https://unix.stackexchange.com/questions/397586",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255194/"
]
} |
397,589 | Installing it via Packages -> Alt-F -> ownlcloud -> install worked, but I got an empty page accessing https://mynas:8443/owncloud (only for testing: http://mynas:8080/owncloud ) (which forwarded to .../index.php ). | A simple way in Bash: j="jjj"
k="kkk"
l="lll"
for i in j k l; do echo "$i = ${!i}"; done The output: j = jjj
k = kkk
l = lll ${!i} - Bash variable expansion/indirection (gets the value of the variable name held by $i ) | {
"source": [
"https://unix.stackexchange.com/questions/397589",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141555/"
]
} |
397,655 | How do I find matching data between two files in a shell script, and store the duplicated data in another file? #!/bin/bash
file1="/home/vekomy/santhosh/bigfiles.txt"
file2="/home/vekomy/santhosh/bigfile2.txt"
while read -r $file1; do
while read -r $file2 ;do
if [$file1==$file2] ; then
echo "two files are same"
else
echo "two files content different"
fi
done
done I have written this code but it didn't work. How should it be written? | To just test whether two files are the same, use cmp -s : #!/bin/bash
file1="/home/vekomy/santhosh/bigfiles.txt"
file2="/home/vekomy/santhosh/bigfile2.txt"
if cmp -s "$file1" "$file2"; then
printf 'The file "%s" is the same as "%s"\n' "$file1" "$file2"
else
printf 'The file "%s" is different from "%s"\n' "$file1" "$file2"
fi The -s flag to cmp will make the utility "silent". The exit status of cmp will be zero when comparing two files that are identical. This is used in the code above to print out a message about whether the two files are identical or not. If your two input files contain lists of pathnames of files that you wish to compare, then use a double loop like so: #!/bin/bash
filelist1="/home/vekomy/santhosh/bigfiles.txt"
filelist2="/home/vekomy/santhosh/bigfile2.txt"
mapfile -t files1 <"$filelist1"
while IFS= read -r file2; do
for file1 in "${files1[@]}"; do
if cmp -s "$file1" "$file2"; then
printf 'The file "%s" is the same as "%s"\n' "$file1" "$file2"
fi
done
done <"$filelist2" | tee file-comparison.out Here, the result is produced on both the terminal and in the file file-comparison.out . It is assumed that no pathname in the two input files contain any embedded newlines. The code first reads all pathnames from one of the files into an array, files1 , using mapfile . I do this to avoid having to read that file more than once, as we will have to go through all those pathnames for each pathname in the other file. You will notice that instead of reading from $filelist1 in the inner loop, I just iterate over the names in the files1 array. | {
"source": [
"https://unix.stackexchange.com/questions/397655",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255262/"
]
} |
397,656 | I was wondering what the fastest way to run a script is. I've been reading that there is a difference in speed between showing the output of the script on the terminal, redirecting it to a file, or sending it to /dev/null . So if the output is not important, which of these makes the script run faster, even if the gain is minimal? bash ./myscript.sh
-or-
bash ./myscript.sh > myfile.log
-or-
bash ./myscript.sh > /dev/null | Terminals these days are slower than they used to be, mainly because graphics cards no longer care about 2D acceleration. So indeed, printing to a terminal can slow down a script, particularly when scrolling is involved. Consequently ./script.sh is slower than ./script.sh >script.log , which in turn is slower than ./script.sh >/dev/null , because each step involves less work. However, whether this makes enough of a difference for any practical purpose depends on how much output your script produces and how fast. If your script writes 3 lines and exits, or if it prints 3 pages every few hours, you probably don't need to bother with redirections. Edit: Some quick (and completely broken) benchmarks: In a Linux console, 240x75: $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done)
real 3m52.053s
user 0m0.617s
sys 3m51.442s In an xterm , 260x78: $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done)
real 0m1.367s
user 0m0.507s
sys 0m0.104s Redirect to a file, on a Samsung SSD 850 PRO 512GB disk: $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done >file)
real 0m0.532s
user 0m0.464s
sys 0m0.068s Redirect to /dev/null : $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done >/dev/null)
real 0m0.448s
user 0m0.432s
sys 0m0.016s | {
"source": [
"https://unix.stackexchange.com/questions/397656",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/217253/"
]
} |
397,668 | I've installed RHEL v5 on my PC. The installation was successful, but after that I came across 2 lines on boot-up, the first one was setting clock with OK message and the second one was starting udev with OK message . After that a black screen showed up. I searched the internet and came to know that the systems which do have integrated graphics card will not load during boot-up, so the solution I found was to do the nomodeset option on GRUB, but I am very new to Linux so I don't know how to do this. | Terminals these days are slower than they used to be, mainly because graphics cards no longer care about 2D acceleration. So indeed, printing to a terminal can slow down a script, particularly when scrolling is involved. Consequently ./script.sh is slower than ./script.sh >script.log , which in turn is slower than /script.sh >/dev/null , because the latter involve less work. However whether this makes enough of a difference for any practical purpose depends on how much output your script produces and how fast. If your script writes 3 lines and exits, or if it prints 3 pages every few hours, you probably don't need to bother with redirections. Edit: Some quick (and completely broken) benchmarks: In a Linux console, 240x75: $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done)
real 3m52.053s
user 0m0.617s
sys 3m51.442s In an xterm , 260x78: $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done)
real 0m1.367s
user 0m0.507s
sys 0m0.104s Redirect to a file, on a Samsung SSD 850 PRO 512GB disk: $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done >file)
real 0m0.532s
user 0m0.464s
sys 0m0.068s Redirect to /dev/null : $ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done >/dev/null)
real 0m0.448s
user 0m0.432s
sys 0m0.016s | {
"source": [
"https://unix.stackexchange.com/questions/397668",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255271/"
]
} |
397,677 | Let's suppose Mary is a directory. Is the following path ~/Mary relative? | No, it's not relative. It's a full path, with ~ being an alias. Relative paths describe a path in relation to your current directory location. However, ~/Mary is exactly the same, no matter which directory you're currently in. Assuming you were currently logged in as Bob and also in the directory /home/Bob , then ../Mary would be an example of a relative path to /home/Mary . If you were currently in /etc/something then ~/Mary would still be /home/Bob/Mary but ../Mary would now be /etc/Mary . Note that Bash handles ~ in particular ways, and that it doesn't always translate to $HOME . For further reading, see Why doesn't the tilde (~) expand inside double quotes? The POSIX standard on tilde expansion The Bash manual on tilde expansion | {
"source": [
"https://unix.stackexchange.com/questions/397677",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255278/"
]
} |
397,747 | I got two files: file1 with about 10 000 lines and file2 with a few hundred lines. I want to check whether all lines of file2 occur in file1. That is: ∀ line ℓ ∈ file2 : ℓ ∈ file1 Should anyone not know what these symbols mean or what "check whether all lines of file2 occur in file1" means: Several equivalent lines in either files don't influence whether the check returns that the files meet the requirement or don't. How do I do this? | comm -13 <(sort -u file_1) <(sort -u file_2) This command will output lines unique to file_2 . So, if output is empty, then all file_2 lines are contained in the file_1 . From comm's man: With no options, produce three-column output. Column one contains
lines unique to FILE1, column two contains lines unique to FILE2, and
column three contains lines common to both files.
-1 suppress column 1 (lines unique to FILE1)
-2 suppress column 2 (lines unique to FILE2)
-3 suppress column 3 (lines that appear in both files) | {
"source": [
"https://unix.stackexchange.com/questions/397747",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147785/"
]
} |
398,142 | I have the following code that I run on my Terminal. LC_ALL=C && grep -F -f genename2.txt hg38.hgnc.bed > hg38.hgnc.goi.bed This doesn't give me the common lines between the two files. What am I missing there? | Use comm -12 file1 file2 to get common lines in both files. You may also need your files to be sorted for comm to work as expected. comm -12 <(sort file1) <(sort file2) From man comm : -1 suppress column 1 (lines unique to FILE1)
-2 suppress column 2 (lines unique to FILE2) Or, using grep, you need to add the -x option to match whole lines only; the -F option tells grep to treat the pattern as a fixed string rather than a regex. grep -Fxf file1 file2 Or using awk : awk 'NR==FNR{seen[$0]=1; next} seen[$0]' file1 file2 This reads each whole line of file1 into an array called seen , keyed by the whole line (in awk, $0 represents the current line). The condition NR==FNR runs the first block only for the first input, file1 : FNR is the line number within the current input file and resets for each file, while NR keeps counting across all inputs, so the two are only equal while the first file is being read. The next statement tells awk to skip the rest of the program for those lines. The bare seen[$0] therefore only runs for file2 , and for each of its lines it looks the line up in the array and prints it if it is there, i.e. if it also appeared in file1 . Another simple option is using sort and uniq : sort file1 file2|uniq -d This sorts both files together and uniq -d prints only the duplicated lines. BUT this is only reliable when neither file contains duplicate lines itself; otherwise a line repeated within a single file would also be reported. The following variant de-duplicates each file first: uniq -d <(sort <(sort -u file1) <(sort -u file2)) | {
"source": [
"https://unix.stackexchange.com/questions/398142",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/199494/"
]
} |
398,413 | I have a text file containing tweets and I'm required to count the number of times a word is mentioned in the tweet. For example, the file contains: Apple iPhone X is going to worth a fortune
The iPhone X is Apple's latest flagship iPhone. How will it pit against it's competitors? And let's say I want to count how many times the word iPhone is mentioned in the file. So here's what I've tried. cut -f 1 Tweet_Data | grep -i "iPhone" | wc -l it certainly works but I'm confused about the 'wc' command in unix. What is the difference if I try something like: cut -f 1 Tweet_Data | grep -c "iPhone" where -c is used instead? Both of these yield different results in a large file full of tweets and I'm confused on how it works. Which method is the correct way of counting the occurrence? | Given such a requirement, I would use a GNU grep (for the -o option ), then pass it through wc to count the total number of occurrences: $ grep -o -i iphone Tweet_Data | wc -l
3 Plain grep -c on the data will count the number of lines that match, not the total number of words that match. Using the -o option tells grep to output each match on its own line, no matter how many times the match was found in the original line. wc -l tells the wc utility to count the number of lines. After grep puts each match in its own line, this is the total number of occurrences of the word in the input. If GNU grep is not available (or desired), you could transform the input with tr so that each word is on its own line, then use grep -c to count: $ tr '[:space:]' '[\n*]' < Tweet_Data | grep -i -c iphone
3
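One refinement worth noting (an added remark, not part of the original answer): grep -o iphone also counts substrings such as "iPhones". If only whole-word occurrences should be counted, GNU grep's -w option restricts matches to word boundaries:
$ grep -o -i -w iphone Tweet_Data | wc -l
Whether that is desirable depends on how the assignment defines a mention. | {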
"source": [
"https://unix.stackexchange.com/questions/398413",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255849/"
]
} |
398,540 | I have installed MySQL on my Arch Linux server. I moved the data directory to a place under /home, where my RAID volume is mounted. I noticed that mysqld will not start in this configuration by default since the systemd unit contains the setting ProtectHome=true . I want to override just this setting. I don't want to re-specify the ExecStart or similar commands, in case they change when the package is upgraded. I tried making a simple file at /etc/systemd/system called mysqld.service and added only these lines: [Service]
ProtectHome=false This doesn't work as it looks like the service in /etc replaces , not overrides, the system service. Is there a way to override settings in systemd unit files this way without directly modifying the files in /usr/lib/systemd/system? (which is what I have done for now as a temporary fix, although that will end up reverted if the package is updated) | systemctl edit will create a drop-in file where you can override most of the settings, but these files have some specifics worth mentioning: Note that for drop-in files, if one wants to remove entries from a setting that is parsed as a list (and is not a dependency), such as AssertPathExists= (or e.g. ExecStart= in service units), one needs to first clear the list before re-adding all entries except the one that is to be removed. #/etc/systemd/system/httpd.service.d/local.conf
[Unit]
AssertPathExists=
AssertPathExists=/srv/www Dependencies ( After= , etc.) cannot be reset to an empty list, so dependencies can only be added in drop-ins. If you want to remove dependencies, you have to override the entire unit. To override the entire unit, use systemctl edit --full , this will make a copy in /etc if there is none yet and let you edit it. See also Systemd delete overrides
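For the mysqld case in the question, a minimal drop-in would be (assuming the unit really is named mysqld.service on your system):
# /etc/systemd/system/mysqld.service.d/override.conf
[Service]
ProtectHome=false
Create it with systemctl edit mysqld.service (recent systemd reloads the unit files for you; otherwise run systemctl daemon-reload ), then restart the service with systemctl restart mysqld.service . | {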
"source": [
"https://unix.stackexchange.com/questions/398540",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65884/"
]
} |
398,543 | I accidentally overwrote the /bin/bash file with a dumb script that I intented to put inside the /bin folder. How do I get the contents of that file back? Is there a way I can find the contents on the web, and just copy them back in? What are my options here, considering that terminal gives an error talking about "Too many Symbolic Links?" I'm still a newcomer to this kind of thing, and I appreciate all the help I can get. Edit: I forgot to mention I'm on Kali 2.2 Rolling, which is pretty much debian with some added features. Edit 2: I also restarted the machine, as I didn't realize my mistake until a few days ago. That makes this quite a bit harder. | bash is a shell, probably your system shell, so now weird things happen, while parts of the shell are still in memory. Once you log out or reboot, you,ll be in deeper trouble. So the first thing should be to change your shell to something safe. See what shells you have installed cat /etc/shells Then change your shell to one of the other shells listed there, for example chsh -s /bin/dash Update, because you already rebooted: You are lucky that nowadays the boot process doesn't rely on bash , so your system boots, you just can't get a command line. But you can start an editor to edit /etc/passwd and change the shell in the root line from /bin/bash to /bin/dash . Log out and log in again. Just don't make any other change in that file, or you may mess up your system completely. Then try to reinstall bash with apt-get --reinstall install bash If everything succeeded you can chsh back to bash . Finally: I think, kali is a highly specialized distribution, probably not suited for people who accidently overwrite their shell. As this sentence was called rude and harsh, I should add that I wrote it out of my own experience. When I was younger, I did ruin my system because nobody told me to avoid messing around as root. | {
"source": [
"https://unix.stackexchange.com/questions/398543",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245476/"
]
} |
398,905 | I have installed TightVNCServer on Raspbian (the September 2.017 version) for my Raspberry Pi 2 B+ : luis@Frambuesio:~$ vncserver -name Frambuesio -geometry 1280x1024 -depth 16
New 'Frambuesio' desktop at :1 on machine Frambuesio
Starting applications specified in /etc/X11/Xvnc-session
Log file is /home/luis/.vnc/Frambuesio:1.log
Use xtigervncviewer -SecurityTypes VncAuth -passwd /home/luis/.vnc/passwd :1 to connect to the VNC server.
luis@Frambuesio:~$ netstat -ano | grep "5901"
tcp 0 0 127.0.0.1:5901 0.0.0.0:* LISTEN off (0.00/0/0)
tcp6 0 0 ::1:5901 :::* LISTEN off (0.00/0/0) But my VNC Viewer (from RealVNC on a remote Windows machine) receives the message " Connection refused " when trying to connect, and the port doesn't seem to be listening: luis@Hipatio:~$ sudo nmap Frambuesio- -p 5900,5901,5902
[sudo] password for luis:
Starting Nmap 7.01 ( https://nmap.org ) at 2017-10-18 16:58 CEST
Nmap scan report for Frambuesio- (192.168.11.142)
Host is up (0.00050s latency).
PORT STATE SERVICE
5900/tcp closed vnc
5901/tcp closed vnc-1
5902/tcp closed vnc-2
MAC Address: B8:27:EB:7D:7C:B0 (Raspberry Pi Foundation)
Nmap done: 1 IP address (1 host up) scanned in 0.67 seconds If I try from Ubuntu 16.04.3 on another Raspberry Pi everything goes all right (note the different netstat results): luis@Zarzaparrillo:~$ vncserver -name Zarzaparrillo -geometry 1280x1024 -depth 16
New 'Zarzaparrillo' desktop is Zarzaparrillo:1
Starting applications specified in /home/luis/.vnc/xstartup
Log file is /home/luis/.vnc/Zarzaparrillo:1.log
luis@Zarzaparrillo:~$ netstat -ano | grep 5901
tcp6 0 0 :::5901 :::* LISTEN off (0.00/0/0) Same results with VNC4Server . I have read the official Raspberry papers , consisting on installing the realvnc-vnc-server package. But the RealVNC program installs a ton of extra packages and is not open source , even when it is free for educative purposes. I would prefer some GNU's more open policies for my VNC, as long as it could be used in an enterprise production environment. My workaround for now consists on using X11vnc to serve the display on another port: luis@Frambuesio:~$ vncserver -name Frambuesio -geometry 1280x1024 -depth 16
[... on another terminal: ]
luis@Frambuesio:~$ sudo x11vnc -display :1 -passwd anypassword -auth guess -forever ... and now the X11vnc program makes display :1 available. Note that, as long as the port 5901 TCP is occupied, X11VNC uses the 5900 TCP (aka :0 port ): The VNC desktop is: Frambuesio:0
PORT=5900 Note the netstat output, now in a working condition: luis@Frambuesio:~$ netstat -ano | grep 5900
tcp 0 0 0.0.0.0:5900 0.0.0.0:* LISTEN off (0.00/0/0)
tcp6 0 0 :::5900 :::* LISTEN off (0.00/0/0)
luis@Frambuesio:~$ netstat -ano | grep 5901
tcp 0 0 127.0.0.1:5901 0.0.0.0:* LISTEN off (0.00/0/0)
tcp6 0 0 ::1:5901 :::* LISTEN off (0.00/0/0) Why are my VNC servers failing and how could I solve this? | The problem seems to be just a default argument on VNCServer with the improper (for your case) option. From vncserver command line help: [-localhost yes|no] Only accept VNC connections from localhost This should solve your problem: vncserver -localhost no Interpreting the same last example in the original question, note the 0.0.0.0:5900 meaning "listening connections from anywhere at 5900 TCP": luis@Frambuesio:~$ netstat -ano | grep 5900
tcp 0 0 0.0.0.0:5900 0.0.0.0:* LISTEN off (0.00/0/0)
tcp6 0 0 :::5900 :::* LISTEN off (0.00/0/0) Meanwhile, note the 127.0.0.1:5901 meaning "listening connections from localhost at 5901 TCP" luis@Frambuesio:~$ netstat -ano | grep 5901
tcp 0 0 127.0.0.1:5901 0.0.0.0:* LISTEN off (0.00/0/0)
tcp6 0 0 ::1:5901 :::* LISTEN off (0.00/0/0) | {
"source": [
"https://unix.stackexchange.com/questions/398905",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57439/"
]
} |
398,921 | I have a directory containing files with names rho_0.txt
rho_5000.txt
rho_10000.txt
rho_150000.txt
rho_200000.txt and so on. I would like to delete all those that are a multiple of 5000. I tried the following: printf 'rho_%d.txt\n' $(seq 5000 10000 25000) | rm , but that gave me the response rm: missing operand . Is there another way to do this? | You don't need a loop or extra commands where you have Bash Shell Brace Expansion . rm -f rho_{0..200000..5000}.txt Explanation : {start..end..step} . The -f makes rm ignore non-existent files without prompting. P.S. To be safe and check which files will be deleted, do a test first with: ls -1 rho_{0..200000..5000}.txt | {
"source": [
"https://unix.stackexchange.com/questions/398921",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45002/"
]
} |
398,944 | I'm learning about fork() and exec() commands. It seems like fork() and exec() are usually called together. ( fork() creates a new child process, and exec() replaces the current process image with a new one.) However, in what scenarios might you call each function on its own? Are there scenarios like these? | Sure! A common pattern in "wrapper" programs is to do various things and then replace itself with some other program with only an exec call (no fork) #!/bin/sh
export BLAH_API_KEY=blub
...
exec /the/thus/wrapped/program "$@" A real-life example of this is GIT_SSH (though git(1) does also offer GIT_SSH_COMMAND if you do not want to do the above wrapper program method). Fork-only is used when spawning a bunch of typically worker processes (e.g. Apache httpd in fork mode (though fork-only better suits processes that need to burn up the CPU and not those that twiddle their thumbs waiting for network I/O to happen)) or for privilege separation used by sshd and other programs on OpenBSD (no exec) $ doas pkg_add pstree
...
$ pstree | grep sshd
|-+= 70995 root /usr/sbin/sshd
| \-+= 28571 root sshd: jhqdoe [priv] (sshd)
| \-+- 14625 jhqdoe sshd: jhqdoe@ttyp6 (sshd) The root sshd has on client connect forked off a copy of itself (28571) and then another copy (14625) for the privilege separation.
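The same two halves can also be seen from the shell (an illustrative sketch; do_heavy_work stands in for whatever the worker actually does):
( do_heavy_work ) &             # fork without exec: the background subshell is a child running the same shell program
exec tail -f /var/log/syslog    # exec without fork: the current process is replaced, nothing after this line runs
Subshells and & give you fork-only behaviour, while the exec builtin gives you exec-only behaviour. | {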
"source": [
"https://unix.stackexchange.com/questions/398944",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/256229/"
]
} |
399,027 | This error arose when I added the GNS3 repository and tried to use this command: #sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys F88F6D313016330404F710FC9A2FD067A2E3EF7B the error is: gpg: keyserver receive failed: Server indicated a failure | Behind a firewall you should use port 80 instead of the default port 11371 : sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9A2FD067A2E3EF7B Sample output: Executing: /tmp/apt-key-gpghome.mTGQWBR2AG/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv 9A2FD067A2E3EF7B
gpg: key 9A2FD067A2E3EF7B: "Launchpad PPA for GNS3" not changed
gpg: Total number processed: 1
gpg: unchanged: 1 | {
"source": [
"https://unix.stackexchange.com/questions/399027",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/254922/"
]
} |
399,619 | When booting a kernel in an embedded device, you need to supply a device tree to the Linux kernel, while booting a kernel on a regular x86 pc doesn't require a device tree -- why? As I understand, on an x86 pc the kernel "probes" for hardware (correct me if I'm wrong), so why can't the kernel probe for hardware in and embedded system? | Peripherals are connected to the main processor via a bus . Some bus protocols support enumeration (also called discovery), i.e. the main processor can ask “what devices are connected to this bus?” and the devices reply with some information about their type, manufacturer, model and configuration in a standardized format. With that information, the operating system can report the list of available devices and decide which device driver to use for each of them. Some bus protocols don't support enumeration, and then the main processor has no way to find out what devices are connected other than guessing. All modern PC buses support enumeration, in particular PCI (the original as well as its extensions and successors such as AGP and PCIe), over which most internal peripherals are connected, USB (all versions), over which most external peripherals are connected, as well as Firewire , SCSI , all modern versions of ATA/SATA , etc. Modern monitor connections also support discovery of the connected monitor ( HDMI , DisplayPort , DVI , VGA with EDID ). So on a PC, the operating system can discover the connected peripherals by enumerating the PCI bus, and enumerating the USB bus when it finds a USB controller on the PCI bus, etc. Note that the OS has to assume the existence of the PCI bus and the way to probe it; this is standardized on the PC architecture (“PC architecture” doesn't just mean an x86 processor: to be a (modern) PC, a computer also has to have a PCI bus and has to boot in a certain way). Many embedded systems use less fancy buses that don't support enumeration. This was true on PC up to the mid-1990s, before PCI overtook ISA . Most ARM systems, in particular, have buses that don't support enumeration. This is also the case with some embedded x86 systems that don't follow the PC architecture. Without enumeration, the operating system has to be told what devices are present and how to access them. The device tree is a standard format to represent this information. The main reason PC buses support discovery is that they're designed to allow a modular architecture where devices can be added and removed, e.g. adding an extension card into a PC or connecting a cable on an external port. Embedded systems typically have a fixed set of devices¹, and an operating system that's pre-loaded by the manufacturer and doesn't get replaced, so enumeration is not necessary. ¹ If there's an external bus such as USB, USB peripherals are auto-discovered, they wouldn't be mentioned in the device tree. | {
"source": [
"https://unix.stackexchange.com/questions/399619",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153502/"
]
} |
399,690 | I am wondering whether there is a general way of passing multiple options to an executable via the shebang line ( #! ). I use NixOS, and the first part of the shebang in any script I write is usually /usr/bin/env . The problem I encounter then is that everything that comes after is interpreted as a single file or directory by the system. Suppose, for example, that I want to write a script to be executed by bash in posix mode. The naive way of writing the shebang would be: #!/usr/bin/env bash --posix but trying to execute the resulting script produces the following error: /usr/bin/env: ‘bash --posix’: No such file or directory I am aware of this post , but I was wondering whether there was a more general and cleaner solution. EDIT : I know that for Guile scripts, there is a way to achieve what I want, documented in Section 4.3.4 of the manual: #!/usr/bin/env sh
exec guile -l fact -e '(@ (fac) main)' -s "$0" "$@"
!# The trick, here, is that the second line (starting with exec ) is interpreted as code by sh but, being in the #! ... !# block, as a comment, and thus ignored, by the Guile interpreter. Would it not be possible to generalize this method to any interpreter? Second EDIT : After playing around a little bit, it seems that, for interpreters that can read their input from stdin , the following method would work: #!/usr/bin/env sh
sed '1,2d' "$0" | bash --verbose --posix /dev/stdin; exit; It's probably not optimal, though, as the sh process lives until the interpreter has finished its job. Any feedback or suggestion would be appreciated. | There is no general solution, at least not if you need to support Linux, because the Linux kernel treats everything following the first “word” in the shebang line as a single argument . I’m not sure what NixOS’s constraints are, but typically I would just write your shebang as #!/bin/bash --posix or, where possible, set options in the script : set -o posix Alternatively, you can have the script restart itself with the appropriate shell invocation: #!/bin/sh -
if [ "$1" != "--really" ]; then exec bash --posix -- "$0" --really "$@"; fi
shift
# Processing continues This approach can be generalised to other languages, as long as you find a way for the first couple of lines (which are interpreted by the shell) to be ignored by the target language. GNU coreutils ' env provides a workaround since version 8.30, see unode 's answer for details. (This is available in Debian 10 and later, RHEL 8 and later, Ubuntu 19.04 and later, etc.)
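With that newer env the original goal can be written directly in the shebang (this assumes GNU coreutils 8.30 or later):
#!/usr/bin/env -S bash --posix
The -S option makes env split the rest of the line into separate arguments instead of passing it as a single one. | {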
"source": [
"https://unix.stackexchange.com/questions/399690",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/212582/"
]
} |
399,980 | Suppose I have the following trivial example in my history : ...
76 cd ~
77 ./generator.sh out.file
78 cp out.file ~/out/
79 ./out/cleaner.sh .
80 ls -alnh /out
... If I wanted to execute commands 77 , 78 , and 79 in one command, does there exist a shortcut for this? I've tried !77 !78 !79 , which will simply place them all on a single line to execute. | EDIT: You can do this in POSIX-compliant fashion with the fix command tool fc : fc 77 79 This will open your editor (probably vi ) with commands 77 through 79 in the buffer. When you save and exit ( :x ), the commands will be run. If you don't want to edit them and you're VERY SURE you know which commands you're calling, you can use: fc -e true 77 79 This uses true as an "editor" to edit the commands with, so it just exits without making any changes and the commands are run as-is. ORIGINAL ANSWER: You can use: history -p \!{77..79} | bash This assumes that you're not using any aliases or functions or any variables that are only present in the current execution environment, as of course those won't be available in the new shell being started. A better solution (thanks to Michael Hoffman for reminding me in the comments) is: eval "$(history -p \!{77..79})" One of the very, very few cases where eval is actually appropriate! Also see: Is there any way to execute commands from history? | {
"source": [
"https://unix.stackexchange.com/questions/399980",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79419/"
]
} |
399,986 | I'm running xfce on Arch without a DM. Using xorg-xinit to startx. Per default, after startup, I get a login prompt on tty1 and all is good. However, I'd like to change the default behavior to be dropped at a login prompt on tty6 (or whatever) without having to manually Ctrl+Alt+F6. I've spent a bunch of time reading various sources, Arch wiki, man pages, http://0pointer.de/blog/projects/systemd-docs.html , etc. However, I'm still not getting it. I've tried both manually adding and deleting the files, /etc/systemd/system/getty.target.wants/[email protected] and [email protected]. Alternatively also used systemctl to enable and disable them. As a test, also editing last line of /usr/lib/systemd/system/[email protected] DefaultInstance=tty1 to DefaultInstance=tty7, and combinations of all the above. Would have created in /etc/systemd/system if it worked. I asked on the Arch forums and got one very general reply, mostly crickets chirping. Is what I'm trying to do frowned upon for some reason? I ended up just creating a service file in /etc/systemd/system that calls a bash one liner with chvt in it. This gives me what I wanted, but now I can't scroll the boot messages I have setup to not clear on tty1. This solution also seems like a bad add on hack. What would be the proper way to do this? | EDIT: You can do this in POSIX-compliant fashion with the fix command tool fc : fc 77 79 This will open your editor (probably vi ) with commands 77 through 79 in the buffer. When you save and exit ( :x ), the commands will be run. If you don't want to edit them and you're VERY SURE you know which commands you're calling, you can use: fc -e true 77 79 This uses true as an "editor" to edit the commands with, so it just exits without making any changes and the commands are run as-is. ORIGINAL ANSWER: You can use: history -p \!{77..79} | bash This assumes that you're not using any aliases or functions or any variables that are only present in the current execution environment, as of course those won't be available in the new shell being started. A better solution (thanks to Michael Hoffman for reminding me in the comments) is: eval "$(history -p \!{77..79})" One of the very, very few cases where eval is actually appropriate! Also see: Is there any way to execute commands from history? | {
"source": [
"https://unix.stackexchange.com/questions/399986",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134593/"
]
} |
400,231 | I'm just wondering what is the equivalent of apt-get upgrade
apt upgrade
yum update with OpenWRT or LEDE? | There is no single command or argument, but you can easily do it. To upgrade all of the packages, LEDE recommends: opkg list-upgradable | cut -f 1 -d ' ' | xargs -r opkg upgrade There are other, less efficient ways where people use AWK and such. An important caveat often follows with extensive use of LEDE / OpenWRT's opkg : Since OpenWrt firmware stores the base system in a compressed read-only partition, any update to base system packages will be written in the read-write partition and therefore use more space than it would if it was just overwriting the older version in the compressed base system partition. It's recommended to check the available space in internal flash memory and the space requirements for updates of base system packages.
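A quick pre-flight check before upgrading (an illustrative addition; /overlay is the usual writable mount point on OpenWrt/LEDE, but verify it on your device):
df -h /overlay
opkg list-upgradable
If the overlay is nearly full, upgrading base-system packages can exhaust the flash, so check first. | {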
"source": [
"https://unix.stackexchange.com/questions/400231",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3285/"
]
} |
400,235 | Elsewhere I have seen a cd function as below: cd()
{
builtin cd "$@"
} why is it recommended to use $@ instead of $1 ? I created a test directory "r st" and called the script containing this function and it worked either way $ . cdtest.sh "r st" but $ . cdtest.sh r st failed whether I used "$@" or "$1" | Because, according to bash(1) , cd takes arguments cd [-L|[-P [-e]] [-@]] [dir]
Change the current directory to dir. if dir is not supplied,
... so therefore the directory actually may not be in $1 as that could instead be an option such as -L or another flag. How bad is this? $ cd -L /var/tmp
$ pwd
/var/tmp
$ cd() { builtin cd "$1"; }
$ cd -L /var/tmp
$ pwd
/home/jhqdoe
$ Things could go very awry if you end up not where you expect using cd "$1" … | {
"source": [
"https://unix.stackexchange.com/questions/400235",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194688/"
]
} |
400,351 | I'm using a Debian 9 image on a virtual machine. The ping command is not installed. When I run: sudo apt-get install ping It asks me: Package ping is a virtual package provided by:
iputils-ping 3:20161105-1
inetutils-ping 2:1.9.4-2+b1
You should explicitly select one to install. Why is there two ping utilities? What are the differences between them? Is there some guidelines to choose one version over the other? What are the implications of this choice? Will all scripts and programs be compatible with both versions? | iputils ’s ping supports quite a few more features than inetutils ’ ping , e.g. IPv6 (which inetutils implements in a separate binary, ping6 ), broadcast pings, quality of service bits... The linked manpages provide details. iputils ’ ping supports all the options available on inetutils ’ ping , so scripts written for the latter will work fine with the former. The reverse is not true: scripts using iputils -specific options won’t work with inetutils . As far as why both exist, inetutils is the GNU networking utilities , targeting a variety of operating systems and providing lots of different networking tools; iputils is Linux-specific and includes fewer utilities. So typically you’d combine both to obtain complete coverage and support for Linux-specific features, on Linux, and only use inetutils on non-Linux systems. | {
"source": [
"https://unix.stackexchange.com/questions/400351",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74926/"
]
} |
400,447 | I don't do enough scripting to remember, without looking up, whether double or single quotes result in a Unix variable being substituted. I definitely understand what is going on. My question is does anyone have a memory trick for making the correct quoting rule stick in my head? | Single quotes are simple quotes, with a single standard: every character is literal. Double quotes have a double standard: some characters are literal, others are still interpreted unless there's a backslash before them. Single quotes work alone: backslash inside single quotes is not special. Double quotes pair up with backslash: backslash inside double quotes makes the next character non-special. | {
"source": [
"https://unix.stackexchange.com/questions/400447",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57899/"
]
} |
400,549 | Background: One of my colleagues who doesn't come from a Linux background asked me about using ./ before some commands and not others, so I explained to him how PATH works and how binaries are chosen to be run. His response was that it was dumb and he just wanted to not need to type ./ before commands. Question: Is there a way to easily modify the behavior of the shell such that $PWD is always the first item on PATH ? | If you really want to, you can do this by prepending . to your path: export PATH=".:$PATH" However, that’s a bad idea, because it means your shell will pick any command in the current directory in preference to others. If someone (or some program) drops a malicious ls command in a directory you use frequently, you’re in for trouble... | {
"source": [
"https://unix.stackexchange.com/questions/400549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167515/"
]
} |
400,772 | I am tasked with automating a gpg decryption using cron (or any Ubuntu Server compatible job scheduling tool). Since it has to be automated I used --passphrase but it ends up in the shell history so it is visible in the process list. How can I go about automating decryption while maintaining good (preferably great) security standards? An example will be highly appreciated. | Store the passphrase in a file which is only readable by the cron job’s user, and use the --passphrase-file option to tell gpg to read the passphrase there. This will ensure that the passphrase isn’t visible in process information in memory. The level of security will be determined by the level of access to the file storing the passphrase (as well as the level of access to the file containing the key), including anywhere its contents end up copied to (so take care with backups), and off-line accessibility (pulling the disk out of the server). Whether this level of security is sufficient will depend on your access controls to the server holding the file, physically and in software, and on the scenarios you’re trying to mitigate. If you want great security standards, you need to use a hardware security module instead of storing your key (and passphrase) locally. This won’t prevent the key from being used in situ , but it will prevent it from being copied and used elsewhere. | {
"source": [
"https://unix.stackexchange.com/questions/400772",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/257605/"
]
} |
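A minimal sketch of the setup described in the answer above — the user, file names and schedule are made-up placeholders, and the --pinentry-mode loopback option is generally needed with GnuPG 2.1+ so that --passphrase-file is honoured non-interactively:
# one-time setup: a passphrase file only the cron user can read
install -m 600 /dev/null /home/backup/.gpg-pass
printf '%s\n' 'the-passphrase' > /home/backup/.gpg-pass
# crontab entry for that user (crontab -e), e.g. every day at 03:00
0 3 * * * gpg --batch --quiet --pinentry-mode loopback --passphrase-file /home/backup/.gpg-pass -o /home/backup/data.csv -d /home/backup/data.csv.gpg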
400,849 | I want to print the value of /dev/stdin, /dev/stdout and /dev/stderr. Here is my simple script : #!/bin/bash
echo your stdin is : $(</dev/stdin)
echo your stdout is : $(</dev/stdout)
echo your stderr is : $(</dev/stderr) i use the following pipes : [root@localhost home]# ls | ./myscript.sh
[root@localhost home]# testerr | ./myscript.sh only $(</dev/stdin) seems to work , I've also found on some others questions people using : "${1-/dev/stdin}" tried it without success. | stdin , stdout , and stderr are streams attached to file descriptors 0, 1, and 2 respectively of a process. At the prompt of an interactive shell in a terminal or terminal emulator, all those 3 file descriptors would refer to the same open file description which would have been obtained by opening a terminal or pseudo-terminal device file (something like /dev/pts/0 ) in read+write mode. If from that interactive shell, you start your script without using any redirection, your script will inherit those file descriptors. On Linux, /dev/stdin , /dev/stdout , /dev/stderr are symbolic links to /proc/self/fd/0 , /proc/self/fd/1 , /proc/self/fd/2 respectively, themselves special symlinks to the actual file that is open on those file descriptors. They are not stdin, stdout, stderr, they are special files that identify what files stdin, stdout, stderr go to (note that it's different in other systems than Linux that have those special files). reading something from stdin means reading from file descriptor 0 (which will point somewhere within the file referenced by /dev/stdin ). But in $(</dev/stdin) , the shell is not reading from stdin, it opens a new file descriptor for reading on the same file as the one open on stdin (so reading from the start of the file, not where stdin currently points to). Except in the special case of terminal devices open in read+write mode, stdout and stderr are usually not open for reading. They are meant to be streams that you write to . So reading from the file descriptor 1 will generally not work. On Linux, opening /dev/stdout or /dev/stderr for reading (as in $(</dev/stdout) ) would work and would let you read from the file where stdout goes to (and if stdout was a pipe, that would read from the other end of the pipe, and if it was a socket, it would fail as you can't open a socket). In our case of the script run without redirection at the prompt of an interactive shell in a terminal, all of /dev/stdin, /dev/stdout and /dev/stderr will be that /dev/pts/x terminal device file. Reading from those special files returns what is sent by the terminal (what you type on the keyboard). Writing to them will send the text to the terminal (for display). echo $(</dev/stdin)
echo $(</dev/stderr) will be the same. To expand $(</dev/stdin) , the shell will open that /dev/pts/0 and read what you type until you press ^D on an empty line. They will then pass the expansion (what you typed stripped of the trailing newlines and subject to split+glob) to echo which will then output it on stdout (for display). However in: echo $(</dev/stdout) in bash ( and bash only ), it's important to realise that inside $(...) , stdout has been redirected. It is now a pipe. In the case of bash , a child shell process is reading the content of the file (here /dev/stdout ) and writing it to the pipe, while the parent reads from the other end to make up the expansion. In this case when that child bash process opens /dev/stdout , it is actually opening the reading end of the pipe. Nothing will ever come from that, it's a deadlock situation. If you wanted to read from the file pointed-to by the scripts stdout, you'd work around it with: { echo content of file on stdout: "$(</dev/fd/3)"; } 3<&1 That would duplicate the fd 1 onto the fd 3, so /dev/fd/3 would point to the same file as /dev/stdout. With a script like: #! /bin/bash -
printf 'content of file on stdin: %s\n' "$(</dev/stdin)"
{ printf 'content of file on stdout: %s\n' "$(</dev/fd/3)"; } 3<&1
printf 'content of file on stderr: %s\n' "$(</dev/stderr)" When run as: echo bar > err
echo foo | myscript > out 2>> err You'd see in out afterwards: content of file on stdin: foo
content of file on stdout: content of file on stdin: foo
content of file on stderr: bar If as opposed to reading from /dev/stdin , /dev/stdout , /dev/stderr , you wanted to read from stdin, stdout and stderr (which would make even less sense), you'd do: #! /bin/sh -
printf 'what I read from stdin: %s\n' "$(cat)"
{ printf 'what I read from stdout: %s\n' "$(cat <&3)"; } 3<&1
printf 'what I read from stderr: %s\n' "$(cat <&2)" If you started that second script again as: echo bar > err
echo foo | myscript > out 2>> err You'd see in out : what I read from stdin: foo
what I read from stdout:
what I read from stderr: and in err : bar
cat: -: Bad file descriptor
cat: -: Bad file descriptor For stdout and stderr, cat fails because the file descriptors were open for writing only, not reading, the the expansion of $(cat <&3) and $(cat <&2) is empty. If you called it as: echo out > out
echo err > err
echo foo | myscript 1<> out 2<> err (where <> opens in read+write mode without truncation), you'd see in out : what I read from stdin: foo
what I read from stdout:
what I read from stderr: err and in err : err You'll notice that nothing was read from stdout, because the previous printf had overwritten the content of out with what I read from stdin: foo\n and left the stdout position within that file just after. If you had primed out with some larger text, like: echo 'This is longer than "what I read from stdin": foo' > out Then you'd get in out : what I read from stdin: foo
read from stdin": foo
what I read from stdout: read from stdin": foo
what I read from stderr: err See how the $(cat <&3) has read what was left after the first printf and doing so also moved the stdout position past it so that the next printf outputs what was read after. | {
"source": [
"https://unix.stackexchange.com/questions/400849",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/201418/"
]
} |
400,893 | I know this question is similar to " Udev : renaming my network interface ", but I do not consider it a duplicate because my interface is not named via a udev rule, and none of the other answers in that question worked for me. So I have one WiFi adapter on this laptop machine, and I would like to rename the interface from wlp5s0 to wlan0: root@aj-laptop:/etc/udev/rules.d# iwconfig
wlp5s0 IEEE 802.11 ESSID:off/any
Mode:Managed Access Point: Not-Associated Tx-Power=off
Retry short limit:7 RTS thr:off Fragment thr:off
Encryption key:off
Power Management:on
eth0 no wireless extensions.
lo no wireless extensions.
root@aj-laptop:/etc/udev/rules.d# ifconfig wlp5s0
wlp5s0: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether 00:80:34:1f:d8:3f txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 However, there are no rules for this interface in 70-persistent-net.rules or any of the other files in the /etc/udev/rules.d/ directory. Is there any way that I can rename this interface? | Choose a solution: ip link set wlp5s0 name wlan0 - not permanent create yourself a udev rule file in /etc/udev/rules.d - permanent add the net.ifnames=0 kernel parameter to grub.cfg - permanent, if
your distro won't overwrite it. | {
"source": [
"https://unix.stackexchange.com/questions/400893",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197044/"
]
} |
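For the udev-rule option in the answer above, a rule along these lines is the usual approach — the file name is arbitrary, and the MAC address is the one shown in the question's ifconfig output; treat it as a sketch to adapt rather than a tested recipe:
# /etc/udev/rules.d/70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:80:34:1f:d8:3f", NAME="wlan0"
The new name takes effect the next time the interface is detected, typically after a reboot.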
401,547 | While trying to receive keys in my Debian Stretch server, I get this error: sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
Executing: /tmp/apt-key-gpghome.4B7hWtn7Rm/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
gpg: failed to start the dirmngr '/usr/bin/dirmngr': No such file or directory
gpg: connecting dirmngr at '/tmp/apt-key-gpghome.4B7hWtn7Rm/S.dirmngr' failed: No such file or directory
gpg: keyserver receive failed: No dirmngr | Installing the package dirmngr fixed the error. user@debian-server:~$ sudo apt-get install dirmngr Retrying : user@debian-server:~$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
Executing: /tmp/apt-key-gpghome.haKuPppywi/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
gpg: key A6A19B38D3D831EF: public key "Xamarin Public Jenkins (auto-signing) <[email protected]>" imported
gpg: Total number processed: 1
gpg: imported: 1 | {
"source": [
"https://unix.stackexchange.com/questions/401547",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237568/"
]
} |
401,561 | The OpenShift Origin Client Tools allow to forward ports (example command: oc port-forward postgresql-1-a7hrv 5432 ). However, my database backups are fetched from a FreeBSD box. Apparently the oc tools are not available on *BSD and I'd rather use standard commands anyway. How can I do an oc port-forward -equivalent on FreeBSD and access the according database? | Installing the package dirmngr fixed the error. user@debian-server:~$ sudo apt-get install dirmngr Retrying : user@debian-server:~$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
Executing: /tmp/apt-key-gpghome.haKuPppywi/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
gpg: key A6A19B38D3D831EF: public key "Xamarin Public Jenkins (auto-signing) <[email protected]>" imported
gpg: Total number processed: 1
gpg: imported: 1 | {
"source": [
"https://unix.stackexchange.com/questions/401561",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12169/"
]
} |
401,621 | Cron doesn't use the path of the user whose crontab it is and, instead, has its own. It can easily be changed by adding PATH=/foo/bar at the beginning of the crontab, and the classic workaround is to always use absolute paths to commands run by cron, but where is cron's default PATH defined? I created a crontab with the following contents on my Arch system (cronie 1.5.1-1) and also tested on an Ubuntu 16.04.3 LTS box with the same results: $ crontab -l
* * * * * echo "$PATH" > /home/terdon/fff That printed: $ cat fff
/usr/bin:/bin But why? The default system-wide path is set in /etc/profile , but that includes other directories: $ grep PATH= /etc/profile
PATH="/usr/local/sbin:/usr/local/bin:/usr/bin" There is nothing else relevant in /etc/environment or /etc/profile.d , the other files I thought might possibly be read by cron: $ grep PATH= /etc/profile.d/* /etc/environment
/etc/profile.d/jre.sh:export PATH=${PATH}:/usr/lib/jvm/default/bin
/etc/profile.d/mozilla-common.sh:export MOZ_PLUGIN_PATH="/usr/lib/mozilla/plugins"
/etc/profile.d/perlbin.sh:[ -d /usr/bin/site_perl ] && PATH=$PATH:/usr/bin/site_perl
/etc/profile.d/perlbin.sh:[ -d /usr/lib/perl5/site_perl/bin ] && PATH=$PATH:/usr/lib/perl5/site_perl/bin
/etc/profile.d/perlbin.sh:[ -d /usr/bin/vendor_perl ] && PATH=$PATH:/usr/bin/vendor_perl
/etc/profile.d/perlbin.sh:[ -d /usr/lib/perl5/vendor_perl/bin ] && PATH=$PATH:/usr/lib/perl5/vendor_perl/bin
/etc/profile.d/perlbin.sh:[ -d /usr/bin/core_perl ] && PATH=$PATH:/usr/bin/core_perl There is also nothing relevant in any of the files in /etc/skel , unsurprisingly, nor is it being set in any /etc/cron* file: $ grep PATH /etc/cron* /etc/cron*/*
grep: /etc/cron.d: Is a directory
grep: /etc/cron.daily: Is a directory
grep: /etc/cron.hourly: Is a directory
grep: /etc/cron.monthly: Is a directory
grep: /etc/cron.weekly: Is a directory
/etc/cron.d/0hourly:PATH=/sbin:/bin:/usr/sbin:/usr/bin So, where is cron's default PATH for user crontabs being set? Is it hardcoded in cron itself? Doesn't it read some sort of configuration file for this? | It’s hard-coded in the source code (that link points to the current Debian cron — given the variety of cron implementations, it’s hard to choose one, but other implementations are likely similar): #ifndef _PATH_DEFPATH
# define _PATH_DEFPATH "/usr/bin:/bin"
#endif cron doesn’t read default paths from a configuration file; I imagine the reasoning there is that it supports specifying paths already using PATH= in any cronjob, so there’s no need to be able to specify a default elsewhere. (The hard-coded default is used if nothing else specified a path in a job entry .) | {
"source": [
"https://unix.stackexchange.com/questions/401621",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22222/"
]
} |
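The practical consequence of the answer above is that you override PATH per crontab rather than reconfigure cron itself; for instance, the test job from the question would then see a fuller path (the directory list here is only an example):
# crontab -e
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
* * * * * echo "$PATH" > /home/terdon/fff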
401,759 | I just installed Debian 9.2.1 on an old laptop as a cheap server. The computer is not physically accessed by anyone other than myself, so I would like to automatically login upon startup so that if I have to use the laptop itself rather than SSH, I don't have to bother logging in. I have no graphical environments installed, so none of those methods would work, and I've tried multiple solutions such as https://superuser.com/questions/969923/automatic-root-login-in-debian-8-0-console-only However all it did was result in no login prompt being given at all... So I reinstalled Debian.
What can I do to automatically log in without a graphical environment? Thanks! | Edit your /etc/systemd/logind.conf , change #NAutoVTs=6 to NAutoVTs=1 Create a /etc/systemd/system/[email protected]/override.conf with: systemctl edit getty@tty1 Paste the following lines: [Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin root --noclear %I 38400 linux enable the [email protected] then reboot systemctl enable [email protected]
reboot Arch Linux docs: getty | {
"source": [
"https://unix.stackexchange.com/questions/401759",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/258425/"
]
} |
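Put together, the drop-in created by systemctl edit getty@tty1 in the answer above ends up containing exactly this (assuming root is the account to log in automatically, as in that answer):
# /etc/systemd/system/[email protected]/override.conf
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin root --noclear %I 38400 linux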
401,840 | I was searching for a way to convert hexadecimal via command line and found there is a very easy method echo $((0x63)) . It's working great but I'm a little confused as to what is happening here. I know $(...) is normally a sub-shell, where the contents are evaluated before the outer command. Is it still a sub-shell in this situation? I'm thinking not as that would mean the sub-shell is just evaluating (0x63) which isn't a command. Can someone break down the command for me? | $(...) is a command substitution (not just a subshell), but $((...)) is an arithmetic expansion. When you use $((...)) , the ... will be interpreted as an arithmetic expression. This means, amongst other things, that a hexadecimal string will be interpreted as a number and converted to decimal. The whole expression will then be replaced by the numeric value that the expression evaluates to. Like parameter expansion and command substitution, $((...)) should be quoted as to not be affected by the shell's word splitting and filename globbing. echo "$(( 0x63 ))" As a side note, variables occurring in an arithmetic expression do not need their $ : $ x=030; y=30; z=0x30
$ echo "$(( x + y + x ))"
78 | {
"source": [
"https://unix.stackexchange.com/questions/401840",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
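A few related one-liners, relying only on standard bash and printf behaviour, in case arithmetic expansion is not the most convenient form:
printf '%d\n' 0x63     # 99 — printf also converts hex to decimal
printf '0x%x\n' 99     # 0x63 — and back again
echo "$(( 16#63 ))"    # 99 — shell arithmetic with an explicit base prefix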
401,934 | I want to perform some action only if my shell is "connected" to a terminal, i.e. only if my standard input comes from a terminal's input and my standard output (and standard error? maybe that doesn't matter) gets printed/echoed to a terminal. How can I do that, without relying on GNU/Linux specifics (like /proc/self ) directly? | isatty is a function for checking this , and the -t flag of the test command makes that accessible from a shell script: -t file_descriptor True if file descriptor number file_descriptor is open and is associated with a terminal. False if file_descriptor is not a valid file descriptor number, or if file descriptor number file_descriptor is not open, or if it is open but is not associated with a terminal. You can check if FD 0 (standard input) is a TTY with: test -t 0 You can do the same for FDs 1 and 2 to check the output and error streams, or all of them: test -t 0 -a -t 1 -a -t 2 The command returns 0 (succeeds) if the descriptors are hooked up to a terminal, and is false otherwise. test is also available as the [ command for a "bracket test": if [ -t 0 ] ; then ... is an idiomatic way to write this conditional. | {
"source": [
"https://unix.stackexchange.com/questions/401934",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34868/"
]
} |
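A small sketch of how the test from the answer above is typically used near the top of a script (the messages are placeholders):
#!/bin/bash
if [ -t 0 ] && [ -t 1 ]; then
    echo "stdin and stdout are attached to a terminal"
else
    echo "input or output is piped/redirected; skipping interactive behaviour"
fi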
402,746 | I am trying to log in to my DSL router, because I'm having trouble with command-line mail. I'm hoping to be able to reconfigure the router. When I give the ssh command, this is what happens: $ ssh [email protected]
Unable to negotiate with 10.255.252.1 port 22: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1 so then I looked at this stackexchange post , and modified my command to this, but I get a different problem, this time with the ciphers. $ ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 [email protected]
Unable to negotiate with 10.255.252.1 port 22: no matching cipher found. Their offer: 3des-cbc so is there a command to offer 3des-cbc encryption? I'm not sure about 3des, like whether I want to add it permanently to my system. Is there a command to allow the 3des-cbc cipher? What is the problem here? It's not asking for password. | This particular error happens while the encrypted channel is being set up. If your system and the remote system don't share at least one cipher, there is no cipher to agree on and no encrypted channel is possible. Usually SSH servers will offer a small handful of different ciphers in order to cater to different clients; I'm not sure why your server would be configured to only allow 3DES-CBC. Now, 3DES-CBC isn't terrible. It's slow, and it provides less security than some other algorithms, but it's not immediately breakable as long as the keys are selected properly. CBC itself has some issues when ciphertext can be modified in transit, but I strongly suspect that the resultant corruption would be rejected by SSH's HMAC, reducing impact. Bottom line, there are worse choices than 3DES-CBC, and there are better ones. However, always tread carefully when overriding security-related defaults, including cipher and key exchange algorithm choices. Those defaults are the defaults for a reason; some pretty smart people spent some brain power considering the options and determined that what was chosen as the defaults provide the best overall security versus performance trade-off. As you found out, you can use -c ... (or -oCiphers=... ) to specify which cipher to offer from the client side. In this case adding -c 3des-cbc allows only 3DES-CBC from the client. Since this matches a cipher that the server offers, an encrypted channel can be established and the connection proceeds to the authentication phase. You can also add this to your personal ~/.ssh/config . To avoid making a global change to solve a local problem, you can put it in a Host stanza. For example, if your SSH config currently says (dummy example): Port 9922 specifying a global default port of 9922 instead of the default 22, you can add a host stanza for the host that needs special configuration, and a global host stanza for the default case. That would become something like... Host 10.255.252.1
Ciphers 3des-cbc
KexAlgorithms +diffie-hellman-group1-sha1
Host *
Port 9922 The indentation is optional, but I find it greatly enhances readability. Blank lines and lines starting with # are ignored. If you always (or mostly) log in as the same user on that system, you can also specify that username: Host 10.255.252.1
Ciphers 3des-cbc
KexAlgorithms +diffie-hellman-group1-sha1
User enduser
Host *
Port 9922 You don't need to add a Host * stanza if there was nothing in your ~/.ssh/config to begin with, as in that case only compiled-in or system-wide defaults (typically from /etc/ssh/ssh_config) would be used. At this point, the ssh command line to connect to this host reduces to simply $ ssh 10.255.252.1 and all other users on your system, and connections to all other hosts from your system, are unaffected by the changes. | {
"source": [
"https://unix.stackexchange.com/questions/402746",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41051/"
]
} |
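For a one-off connection, the same settings can be passed on the command line instead of being stored in ~/.ssh/config, combining the key-exchange option from the question with the cipher discussed in the answer above:
ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 -c 3des-cbc [email protected]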
402,750 | I have a script that processes a folder, counting the files as it goes. i=1
find tmp -type f | while read x
do
i=$(($i + 1))
echo $i
done
echo $i However, $i is always 1 , how do I resolve this? | In your example the while-loop is executed in a subshell, so changes to the variable inside the while-loop won't affect the external variable. This is because you're using the loop with a pipe, which automatically causes it to run in a subshell. Here is an alternative solution using a while loop: i=1
while read x; do
i=$(($i + 1))
echo $i
done <<<$(find tmp -type f)
echo $i And here is the same approach using a for-loop: i=1
for x in $(find tmp -type f);
do
i=$(($i + 1))
echo $i
done
echo $i For more information see the following posts: A variable modified inside a while loop is not remembered Bash Script: While-Loop Subshell Dilemma Also look at the following chapter from the Advanced Bash Scripting Guide: Chapter 23. Process Substitution | {
"source": [
"https://unix.stackexchange.com/questions/402750",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
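Another common bash-specific variant of the fix above keeps the while read loop but feeds it through a redirection from process substitution, so the loop runs in the current shell and the variable survives:
i=1
while IFS= read -r x; do
    i=$((i + 1))
done < <(find tmp -type f)
echo "$i"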
402,999 | I cloned a disk (SSD) and put the cloned disk into another machine. Now both systems have the same value in /etc/machine-id . Is it a problem to simply edit /etc/machine-id to change the value? Can I do this while the system is running (or do I need to boot from a Live USB)? Is systemd-machine-id-setup a better alternative? The naive use of systemd-machine-id-setup doesn't work. I tried these steps: nano /etc/machine-id (to remove the existing value)
systemd-machine-id-setup
> Initializing machine ID from D-Bus machine ID.
cat /etc/machine-id The new value is the same as the old value. | Although systemd-machine-id-setup and systemd-firstboot are great for systems using systemd, /etc/machine-id is not a systemd file, despite the tag. It is also used on systems that do not use systemd. So as an alternative, you can use the dbus-uuidgen tool: rm -f /etc/machine-id and then dbus-uuidgen --ensure=/etc/machine-id As mentioned by Stephen Kitt, Debian systems may have both a /etc/machine-id and a /var/lib/dbus/machine-id file. If both exist as regular files, their contents should match, so there, also remove /var/lib/dbus/machine-id : rm /var/lib/dbus/machine-id and re-create it: dbus-uuidgen --ensure This last command implicitly uses /var/lib/dbus/machine-id as the file name and will copy the machine ID from the already-newly-generated /etc/machine-id . The dbus-uuidgen invocation may or may not already be part of the regular boot sequence. If it is part of the boot sequence, then removing the file and rebooting should be enough. If you need to run dbus-uuidgen yourself, pay attention to the warning in the man page: If you try to change an existing machine-id on a running system, it will probably result in bad things happening. Don't try to change this file. Also, don't make it the same on two different systems; it needs to be different anytime there are two different kernels running. So after doing this, definitely don't continue using the system without rebooting. As an extra precaution, you may instead reboot first into rescue mode (or as you suggested, boot from a live USB stick), but from my experience, that is not necessary. Bad things may happen, but the bad things that do happen are fixed by the reboot anyway. | {
"source": [
"https://unix.stackexchange.com/questions/402999",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15010/"
]
} |
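Condensed into a single sequence (the same commands as in the answer above, in order, followed immediately by the reboot it recommends):
rm -f /etc/machine-id /var/lib/dbus/machine-id
dbus-uuidgen --ensure=/etc/machine-id
dbus-uuidgen --ensure   # recreates /var/lib/dbus/machine-id from the new /etc/machine-id
reboot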
403,783 | The echo one; echo two > >(cat); echo three; command gives unexpected output. I read this: How process substitution is implemented in bash? and many other articles about process substitution on the internet, but don't understand why it behaves this way. Expected output: one
two
three Real output: prompt$ echo one; echo two > >(cat); echo three;
one
three
prompt$ two Also, these two commands should be equivalent from my point of view, but they aren't: ##### first command - the pipe is used.
prompt$ seq 1 5 | cat
1
2
3
4
5
##### second command - the process substitution and redirection are used.
prompt$ seq 1 5 > >(cat)
prompt$ 1
2
3
4
5 Why I think, they should be the same? Because, both connects the seq output to the cat input through the anonymous pipe - Wikipedia, Process substitution . Question: Why it behaves this way? Where is my error? The comprehensive answer is desired (with explanation of how the bash does it under the hood). | Yes, in bash like in ksh (where the feature comes from), the processes inside the process substitution are not waited for (before running the next command in the script). for a <(...) one, that's usually fine as in: cmd1 <(cmd2) the shell will be waiting for cmd1 and cmd1 will be typically waiting for cmd2 by virtue of it reading until end-of-file on the pipe that is substituted, and that end-of-file typically happens when cmd2 dies. That's the same reason several shells (not bash ) don't bother waiting for cmd2 in cmd2 | cmd1 . For cmd1 >(cmd2) , however, that's generally not the case, as it's more cmd2 that typically waits for cmd1 there so will generally exit after. That's fixed in zsh that waits for cmd2 there (but not if you write it as cmd1 > >(cmd2) and cmd1 is not builtin, use {cmd1} > >(cmd2) instead as documented ). ksh doesn't wait by default, but lets you wait for it with the wait builtin (it also makes the pid available in $! , though that doesn't help if you do cmd1 >(cmd2) >(cmd3) ) rc (with the cmd1 >{cmd2} syntax), same as ksh except you can get the pids of all the background processes with $apids . es (also with cmd1 >{cmd2} ) waits for cmd2 like in zsh , and also waits for cmd2 in <{cmd2} process redirections. bash does make the pid of cmd2 (or more exactly of the subshell as it does run cmd2 in a child process of that subshell even though it's the last command there) available in $! , but doesn't let you wait for it. If you do have to use bash , you can work around the problem by using a command that will wait for both commands with: { { cmd1 >(cmd2); } 3>&1 >&4 4>&- | cat; } 4>&1 That makes both cmd1 and cmd2 have their fd 3 open to a pipe. cat will wait for end-of-file at the other end, so will typically only exit when both cmd1 and cmd2 are dead. And the shell will wait for that cat command. You could see that as a net to catch the termination of all background processes (you can use it for other things started in background like with & , coprocs or even commands that background themselves provided they don't close all their file descriptors like daemons typically do). Note that thanks to that wasted subshell process mentioned above, it works even if cmd2 closes its fd 3 (commands usually don't do that, but some like sudo or ssh do). Future versions of bash may eventually do the optimisation there like in other shells. Then you'd need something like: { { cmd1 >(sudo cmd2; exit); } 3>&1 >&4 4>&- | cat; } 4>&1 To make sure there's still an extra shell process with that fd 3 open waiting for that sudo command. Note that cat won't read anything (since the processes don't write on their fd 3). It's just there for synchronisation. It will do just one read() system call that will return with nothing at the end. You can actually avoid running cat by using a command substitution to do the pipe synchronisation: { unused=$( { cmd1 >(cmd2); } 3>&1 >&4 4>&-); } 4>&1 This time, it's the shell instead of cat that is reading from the pipe whose other end is open on fd 3 of cmd1 and cmd2 . We're using a variable assignment so the exit status of cmd1 is available in $? . 
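Applied to the question's own echo example, the cat-based synchronisation above would look something like this — a sketch, the point being only that echo three cannot run until the substituted cat has exited:
echo one
{ { echo two > >(cat); } 3>&1 >&4 4>&- | cat; } 4>&1
echo three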
Or you could do the process substitution by hand, and then you could even use your system's sh as that would become standard shell syntax: { cmd1 /dev/fd/3 3>&1 >&4 4>&- | cmd2 4>&-; } 4>&1 though note as noted earlier that not all sh implementations would wait for cmd1 after cmd2 has finished (though that's better than the other way round). That time, $? contains the exit status of cmd2 ; though bash and zsh make cmd1 's exit status available in ${PIPESTATUS[0]} and $pipestatus[1] respectively (see also the pipefail option in a few shells so $? can report the failure of pipe components other than the last) Note that yash has similar issues with its process redirection feature. cmd1 >(cmd2) would be written cmd1 /dev/fd/3 3>(cmd2) there. But cmd2 is not waited for and you can't use wait to wait for it either and its pid is not made available in the $! variable either. You'd use the same work arounds as for bash . | {
"source": [
"https://unix.stackexchange.com/questions/403783",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109397/"
]
} |