source_id | question | response | metadata
---|---|---|---
406,630 | I'm watching a log and want to detect once a program has had 3 failed attempts:
tail -f file.log | grep --line-buffered program\ failed\ string
If the line count from grep hits 3 I want to return an error. How can I do this? | awk is a great tool for scanning streams. I assumed it should display all lines of the log until it exits, contrary to your example with grep, which displays only error lines.
tail -f file.log | awk '
BEGIN { count = 0 }
{
    print($0)
    if ($0 ~ /program failed/) {
        count++
        if (count == 3) {
            exit(1)
        }
    }
}'
You can move the awk code to tail.awk and call tail -f file.log | awk -f tail.awk if you prefer. Equivalently, in a more compact form:
tail -f file.log | awk '1; /program failed/ && ++count == 3 { exit 1 }' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406630",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
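A quick way to sanity-check the compact form without a live log is to feed it a synthetic stream; the failure string below is just a stand-in for the real one:

```sh
# Three simulated failures; awk should stop with status 1 after the third,
# so the final "ok" line is never printed.
printf '%s\n' ok "program failed" ok "program failed" "program failed" ok |
  awk '1; /program failed/ && ++count == 3 { exit 1 }'
echo "exit status: $?"
```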
406,645 | When you do: $ whois stackoverflow.com does your Linux first do a DNS query, find the IP of stackoverflow.com, and then ask the information directly there? Or does it ask a "root" whois server (is the IP of the "root whois server" hardcoded in a Linux distribution, in a similar fashion to /etc/bind/db.root ?), which then delegates to another whois server who gives the information? What is the connection flow? my computer doing `whois ...` ---> root whois server ---> another whois server ---> information or my computer doing `whois ...` ---> DNS server (?) ---> ... ? | If you’re using Marco d’Itri’s whois , you can add the --verbose option to see what it’s doing. For stackoverflow.com, it starts by asking whois.verisign-grs.com (see its list of WHOIS servers ), which gives it a number of pieces of information, including the fact that Stack Overflow’s registrar is Name.com, and its WHOIS server is whois.name.com; so it then proceeds to ask whois.name.com. The protocol is documented in RFC 3912 . The whois manpage also has useful pointers. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406645",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59989/"
]
} |
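Because RFC 3912 is nothing more than a plain-text query terminated by CRLF on TCP port 43, the first hop of that referral flow can be reproduced by hand; this sketch assumes a netcat binary is installed:

```sh
# Ask the .com registry's WHOIS server directly (first hop of the chain)
printf 'stackoverflow.com\r\n' | nc whois.verisign-grs.com 43

# Its reply names the registrar's server (whois.name.com), queried next:
printf 'stackoverflow.com\r\n' | nc whois.name.com 43
```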
406,652 | After I renamed my username in Linux Mint as described in this article, my Dropbox daemon always crashes. The renaming process I followed: Ctrl + Alt + L, Ctrl + Alt + T
~$ exec sudo -i
~$ killall -u oldname
~$ id oldname
~$ usermod -l newname oldname
~$ groupmod -n newname oldname
~$ usermod -d /home/newname -m newname
~$ usermod -c "New_real_name" newname
~$ id newname
~$ reboot
In my case:
~$ exec sudo -i
~$ killall -u user
~$ id user
~$ usermod -l yuza user
~$ groupmod -n yuza user
~$ usermod -d /home/yuza -m yuza
~$ usermod -c "Orthonym" yuza
~$ id yuza
~$ reboot
I reinstalled the Dropbox daemon, and every time I try to start the daemon it crashes and produces a different dropbox_error####.txt file.
~$ sudo apt remove dropbox
~$ sudo apt install dropbox
~$ dropbox status
Dropbox isn't running!
~$ sudo dropbox start -i
Pop-up: Error report: dropbox_error7_MXjP.txt
bn.BUILD_KEY: Dropbox
bn.VERSION: 39.4.49
bn.DROPBOXEXT_VERSION: failed
bn.is_frozen: True
machine_id: failed
pid: 8571
ppid: 8570
ppid exe: '/usr/bin/python2.7'
uid: 1000
user_info: pwd.struct_passwd(pw_name='yuza', pw_passwd='x', pw_uid=1000, pw_gid=1000, pw_gecos='Orthonym', pw_dir='/home/yuza', pw_shell='/bin/bash')
effective_user_info: pwd.struct_passwd(pw_name='yuza', pw_passwd='x', pw_uid=1000, pw_gid=1000, pw_gecos='Orthonym', pw_dir='/home/yuza', pw_shell='/bin/bash')
euid: 1000
gid: 1000
egid: 1000
group_info: grp.struct_group(gr_name='yuza', gr_passwd='x', gr_gid=1000, gr_mem=[])
effective_group_info: grp.struct_group(gr_name='yuza', gr_passwd='x', gr_gid=1000, gr_mem=[])
LD_LIBRARY_PATH: None
cwd: '/home/yuza' real_path='/home/yuza' mode=040755 uid=1000 gid=1000 parent mode=040755 uid=0 gid=0
HOME: u'/home/yuza'
appdata: u'/home/user/.dropbox/instance1' real_path=u'/home/user/.dropbox/instance1' not found parent not found
dropbox_path: u'/home/yuza/Dropbox' real_path=u'/home/yuza/Dropbox' not found parent mode=040755 uid=1000 gid=1000
sys_executable: '/home/yuza/.dropbox-dist/dropbox-lnx.x86_64-39.4.49/dropbox' real_path='/home/yuza/.dropbox-dist/dropbox-lnx.x86_64-39.4.49/dropbox' mode=0100755 uid=1000 gid=1000 parent mode=040755 uid=1000 gid=1000
trace.__file__: '/home/yuza/.dropbox-dist/dropbox-lnx.x86_64-39.4.49/python-packages-27.zip/dropbox/client/ui/common/boot_error.pyc' real_path='/home/yuza/.dropbox-dist/dropbox-lnx.x86_64-39.4.49/python-packages-27.zip/dropbox/client/ui/common/boot_error.pyc' not found parent not found
tempdir: '/tmp' real_path='/tmp' mode=041777 uid=0 gid=0 parent mode=040755 uid=0 gid=0
Traceback (most recent call last):
  File "dropbox/client/main.pyc", line 6196, in main_startup
  File "dropbox/client/main.pyc", line 2412, in run
  File "dropbox/client/main.pyc", line 1453, in startup_low
  File "dropbox/client/main.pyc", line 1035, in safe_makedirs
  File "os.pyc", line 150, in makedirs
  File "os.pyc", line 150, in makedirs
  File "os.pyc", line 157, in makedirs
OSError: [Errno 13] Permission denied: '/home/user'
Has anyone an idea how to solve this mess? I am grateful for any help, links, references and hints! | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406652",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245933/"
]
} |
406,653 | I'm watching wpa_supplicant so I can kill my script if the password is wrong. I background the whole code block below. I can see the echo run, but the exit doesn't seem to stop my main script.
(sudo stdbuf -o0 wpa_supplicant -Dwext -i$wifi -cwifi.conf 2>&1 \
    | grep -m 1 "pre-shared key may be incorrect" \
    && echo I see this \
    && exit) &
I suspect the exit here is just killing a thread which has been backgrounded? Is that the case? If so, how can I kill the parent here? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406653",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
406,667 | I know that grep -rhI "# Active" > out.txt will output any line containing # Active within the searched directory, but I want the entire .txt file contents. For example, given example.txt:
Line1
Line2
Line3
# Active
Line4
Line5
etc
So if I grep for # Active I want it to output not just the line containing # Active within these .txt files, but all the other lines too. Example output.txt:
Line1
Line2
Line3
# Active
Line4
Line5
etc | For non-GNU versions of grep , which are unlikely to have -z , or if portability is required...
grep -q pattern file && cat file
-q suppresses any output but, per usual, exit status is set based on whether or not a pattern match was found. With a pattern match grep returns the success code 0 which is equivalent to true and that allows the cat command to be executed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406667",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/262419/"
]
} |
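To apply the same test across a directory tree (matching the recursive grep -rhI from the question), a loop sketch; the directory name and marker are placeholders:

```sh
# Print the full contents of every .txt file that contains the marker line
find ./searchdir -type f -name '*.txt' | while IFS= read -r f; do
    grep -q '# Active' "$f" && cat "$f"
done
```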
406,674 | In Bash I run:
alias myalias='echo foo
echo bar
echo baz'
myalias
which returns:
foo
bar
baz
But:
ssh localhost "shopt -s expand_aliases &>/dev/null;
alias myalias='echo foo
echo bar
echo baz'
myalias"
returns:
foo
Why? | I think you found a bug in Bash. This bug is specific to option -c . Remote running has nothing to do with your problem with the multi-line alias. You can try it in your local bash. But not in a bash script or interactive bash; try it with the -c option, like this
bash -c "shopt -s expand_aliases &>/dev/null;
alias myalias='echo foo
echo bar
echo baz'
myalias"
Same output as your problem. Only foo is printed. To get the right (expected) output, you have to at least add one more line after myalias , as @cuonglm suggested.
bash -c "shopt -s expand_aliases &>/dev/null;
alias myalias='echo foo
echo bar
echo baz'
myalias
:"
Why would it happen this way? Why does one more line after myalias help? I just want to say that this doesn't make sense. No document in Bash explains or mentions this case, not a little bit. It is not supposed to run this way. This is a bug. After reading the code, you'll be sure of this point. Go back to the first problematic command. This time don't change anything, just re-compile bash with "ONESHOT" undefined , and you'll get the right (expected) output. Yes, you heard right: the command has two different behaviors just because of different compile-time configuration. Whether ONESHOT is defined or not leads to two completely different routes in the Bash code for -c "command" . If ONESHOT is undefined, -c "command" runs the normal code route, which is the code route for almost all bash executions, such as interactive commands and bash scripts. But if ONESHOT is defined, -c "command" runs another particular route which is specially designed for it only, to improve its performance by avoiding a fork. For this case, the normal and mostly used way gives the right output, while the particular way can't. I think the inconsistent behavior is not what the Bash authors want. As to which behavior is right, I tend to think the normal way is right. Some details about this bug: The following piece of code is related to the bug. It is from function parse_and_execute() in file builtins/evalstring.c
while (*(bash_input.location.string))
{
    ...
}
This while loop runs line by line, handling one line per iteration. After reading myalias , the last line in the command (see above), the condition in while becomes false. myalias is expanded to three lines of echo, but only one echo is handled in this loop; the other two echo s would be handled in the next loop, but... there is no next loop. If you add one more line after myalias , then after reading myalias the condition in while remains true, so the other two echo s get a chance to run in the next loop. The last line after myalias is handled after all the echo s expanded by myalias are handled. UPDATE: I forgot to say the version of Bash involved in this issue, which is GNU bash, version 4.4.12(1)-release (x86_64-pc-linux-gnu) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406674",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2972/"
]
} |
406,685 | I am trying to verify the chain for flo2cash.com. openssl gives me this:
openssl s_client -showcerts -connect flo2cash.com:443
CONNECTED(00000003)
depth=3 C = SE, O = AddTrust AB, OU = AddTrust External TTP Network, CN = AddTrust External CA Root
depth=2 C = GB, ST = Greater Manchester, L = Salford, O = COMODO CA Limited, CN = COMODO RSA Certification Authority
depth=1 C = GB, ST = Greater Manchester, L = Salford, O = COMODO CA Limited, CN = COMODO RSA Domain Validation Secure Server CA
depth=0 OU = Domain Control Validated, OU = Hosted by FreeParking Ltd, OU = COMODO SSL, CN = flo2cash.com
indicating that the root is "AddTrust External CA Root". Both Chrome and Firefox only show 3 levels in the chain, with the cert rooted at "COMODO RSA Certification Authority". If I check the last certificate in the chain I get this:
subject= /C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Certification Authority
issuer= /C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root
notBefore=May 30 10:48:38 2000 GMT
notAfter=May 30 10:48:38 2020 GMT
This certificate is really old. This is what is in the CA bundles that I can find on my machine (Fedora 25 patched to the latest), and also the browsers:
subject= /C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Certification Authority
issuer= /C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Certification Authority
notBefore=Jan 19 00:00:00 2010 GMT
notAfter=Jan 18 23:59:59 2038 GMT
That old cert must be coming from somewhere. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406685",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/262431/"
]
} |
406,689 | I want to implement something like this Q/A but for a sub-shell. Here is a minimal example of what I'm trying:
(subshell=$BASHPID
(kill $subshell & wait $subshell 2>/dev/null) &
sleep 600)
echo subshell done
How can I make it so only subshell done returns instead of:
./test.sh: line 4: 5439 Terminated ( subshell=$BASHPID; ( kill $subshell && wait $subshell 2> /dev/null ) & sleep 600 )
subshell done
Edit: I may be wrong on the terminology here; by subshell I mean the process within the first set of brackets. Update: I want to post the snippet from the actual program for context; the above is a simplification:
# If the subshell below is killed or returns an error, the connected variable won't be set
(if [ -n "$2" ]; then
    # code to set up wpa configuration here
    # If the wifi key is wrong, kill the subshell
    subshell=$BASHPID
    (sudo stdbuf -o0 wpa_supplicant -Dwext -i$wifi -c/etc/wpa_supplicant/wpa_supplicant.conf 2>&1 \
        | grep -m 1 "pre-shared key may be incorrect" \
        && kill -s PIPE "$subshell") &
    # More code which does the setup necessary for wifi
) && connected=true
# later json will be returned based on whether connected is set | Note: wait $subshell won't work as $subshell is not a child of the process you're running wait in. Anyway, you're not waiting for the process doing the wait, so it doesn't matter much. kill $subshell is going to kill the subshell but not sleep if the subshell had managed to start it by the time kill was run. You could however run sleep in the same process with exec; you can use SIGPIPE instead of SIGTERM to avoid the message; leaving a variable unquoted in list contexts has a very special meaning in bash . So having said all that, you can do:
(
  subshell=$BASHPID
  kill -s PIPE "$subshell" &
  sleep 600
)
echo subshell done
(replace sleep 600 with exec sleep 600 if you want the kill to kill sleep and not just the subshell, which in this case might not even have had time to run sleep by the time you kill it). In any case, I'm not sure what you want to achieve with that. sleep 600 & would be a more reliable way to start sleep in the background if that's what you wanted to do (or (sleep 600 &) if you wanted to hide that sleep process from the main shell). Now with your actual sudo stdbuf -o0 wpa_supplicant -Dwext -i"$wifi" -c/etc/wpa_supplicant/wpa_supplicant.conf command, note that sudo does spawn a child process to run the command (if only because it may need to log its status or perform some PAM session tasks afterwards). stdbuf will however execute wpa_supplicant in the same process, so in the end you'll have three processes (in addition to the rest of the script) in wpa_supplicant 's ancestry:
1. the subshell
2. sudo as a child of 1
3. wpa_supplicant (which was earlier running stdbuf) as a child of 2
If you kill 1, that doesn't automatically kill 2. If you kill 2 however, unless it's with a signal like SIGKILL that can't be intercepted, that will kill 3, as sudo happens to forward the signals it receives to the command it runs. In any case, that's not the subshell you'd want to kill here; it's 3, or at least 2. Now, if it's running as root and the rest of the script is not, you won't be able to kill it so easily. You'd need the kill to be done as root , so you'd need:
sudo WIFI="$wifi" bash -c '
  (echo "$BASHPID" &&
   exec stdbuf -o0 wpa_supplicant -Dwext -i"$WIFI" -c/etc/wpa_supplicant/wpa_supplicant.conf 2>&1
  ) | {
    read pid &&
      grep -m1 "pre-shared key may be incorrect" &&
      kill -s PIPE "$pid"
  }'
That way, wpa_supplicant will be running in the same $BASHPID process as the subshell, as we're making use of exec .
We get the pid through the pipe and run kill as root. Note that if you're ready to wait a little longer,
sudo stdbuf -o0 wpa_supplicant -Dwext -i"$wifi" -c/etc/wpa_supplicant/wpa_supplicant.conf 2>&1 | grep -m1 "pre-shared key may be incorrect"
would have wpa_supplicant killed automatically with a SIGPIPE (by the system, so no permission issue) the next time it writes something to that pipe after grep is gone. Some shell implementations would not wait for sudo after grep has returned (leaving it running in the background until it gets SIGPIPEd), and with bash , you can also do that using the grep ... <(sudo ...) syntax, where bash doesn't wait for sudo either after grep has returned. More at Grep slow to exit after finding match? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406689",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
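As a minimal, self-contained illustration of the echo-the-PID-through-the-pipe pattern from the last snippet (with sleep standing in for wpa_supplicant, so no sudo is involved):

```sh
( echo "$BASHPID" && exec sleep 600 ) | {
    read -r pid
    echo "worker pid: $pid"
    kill -s PIPE "$pid"   # kills the sleep too, since exec kept the same PID
}
```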
406,695 | There is some development I need to do on some remote box. Fortunately, I have shell access, but I need to go through a gateway that has AllowTcpForwarding set to false. I took a peek at the docs and it says:
AllowTcpForwarding
Specifies whether TCP forwarding is permitted. The default is ''yes''. Note that disabling TCP forwarding does not improve security unless users are also denied shell access, as they can always install their own forwarders.
How would I go about installing (or building) my own forwarder? My goal here is to set up a remote interpreter using PyCharm via SSH, binding it to some local port, with data fed through ssh, then through the gateway, and then to the development box where the code is actually run. I imagine I could somehow utilize nc or some other unix utility that'll help get the job done. I know I can ssh to my remote box by doing:
ssh -t user1@gateway ssh user2@devbox
But obviously this option isn't available in PyCharm. I'll have to be able to open some local port such that ssh -p 12345 localhost (or a variant) will connect me to user2@devbox. This will allow me to configure the remote interpreter to use port 12345 on localhost to connect to the remote box. | As long as one can execute socat locally and on gateway (or even just bash and cat on gateway , see the last example!) and is allowed to not use a pty, to be 8-bit clean, it's possible to establish a tunnel through ssh. Here are 4 examples, each improving upon the previous:
Basic example, working once (having it fork would require one ssh connection per tunnel, not good). Having to escape the : for socat to accept the exec command:
term1: $ socat tcp-listen:12345,reuseaddr exec:'ssh user1@gateway exec socat - tcp\:devbox\:22',nofork
term2: $ ssh -p 12345 user2@localhost
term1: user1@gateway's password:
term2: user2@localhost's password:
Reversing the first and second addresses makes the socket immediately available; socat has to stay in charge, so no nofork :
term1: $ socat exec:'ssh user1@gateway exec socat - tcp\:devbox\:22' tcp-listen:12345,reuseaddr
user1@gateway's password:
term2: $ ssh -p 12345 user2@localhost
user2@localhost's password:
Using a ControlMaster ssh allows forking while using only a single ssh connection to the gateway, thus giving a behaviour similar to the usual port forwarding:
term1: $ ssh -N -o ControlMaster=yes -o ControlPath=~/mysshcontrolsocket user1@gateway
user1@gateway's password:
term2: $ socat tcp-listen:12345,reuseaddr,fork exec:'ssh -o ControlPath=~/mysshcontrolsocket user1@gateway exec socat - tcp\:devbox\:22'
term3: $ ssh -p 12345 user2@localhost
user2@localhost's password:
Having only bash and cat available on gateway : By using bash 's built-in tcp redirection, and two half-duplex cat commands (for a full-duplex result), one doesn't even need a remote socat or netcat. Handling of multiple layers of nested and escaped quotes was a bit awkward and can perhaps be done better, or simplified by the use of a remote bash script. Care has to be taken to have the forked cat for output only:
term1 (no change): $ ssh -N -o ControlMaster=yes -o ControlPath=~/mysshcontrolsocket user1@gateway
user1@gateway's password:
term2: $ socat tcp-listen:12345,reuseaddr,fork 'exec:ssh -T -o ControlPath=~/mysshcontrolsocket user1@gateway '\''exec bash -c \'\''"exec 2>/dev/null 8<>/dev/tcp/devbox/22; cat <&8 & cat >&8"\'\'\'
term3: $ ssh -p 12345 user2@localhost
user2@localhost's password: | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406695",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/262441/"
]
} |
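Once one of the listeners above is bound to local port 12345, the development box can be addressed like any ordinary SSH host, which is what an IDE such as PyCharm needs; a client-side config sketch (the host alias is arbitrary):

```
# ~/.ssh/config
Host devbox-tunnel
    HostName localhost
    Port 12345
    User user2
```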
406,703 | Append one file,
011C0201.WAV
011C0202.WAV
011C0203.WAV
011C0204.WAV
011C0205.WAV
after another file,
52 601
39 608
56 1016
39 416
65 335
so that the result is the following, divided by tabs:
011C0201.WAV    52_601_011C0201
011C0202.WAV    39_608_011C0202
011C0203.WAV    56_1016_011C0203
011C0204.WAV    39_416_011C0204
011C0205.WAV    65_335_011C0205
Here is what I do:
awk 'NR==FNR { start=$1; end=$2; next}{ print $0 start end }' WSJ_310P_PC_16k.epd WSJ_310P_PC_16k.spt > tmp
But it is not working. What am I doing wrong? | How about paste + awk ?
$ paste one another | awk '{print $1, $2 "_" $3 "_" substr($1,1,length($1)-4)}' OFS='\t'
011C0201.WAV    52_601_011C0201
011C0202.WAV    39_608_011C0202
011C0203.WAV    56_1016_011C0203
011C0204.WAV    39_416_011C0204
011C0205.WAV    65_335_011C0205
If you prefer to do it entirely in awk :
awk 'NR==FNR {a[FNR]=$0; next} {print a[FNR], $1 "_" $2 "_" substr(a[FNR],1,length(a[FNR])-4)}' OFS='\t' one another | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406703",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95080/"
]
} |
406,763 | When I try to run pkill -f remotely via ssh, and try to discard the possible error code (to go on with the rest of my script even if no process is found), || true does not behave as I expect.
$ pkill asdf || true
$ echo $?
0
$ pkill -f asdf || true
$ echo $?
0
$ ssh [email protected] "pkill asdf || true"
$ echo $?
0
$ ssh [email protected] "pkill -f asdf || true"
255
I suppose that it is ssh that returns 255, not the command between quotes, but why? | Your supposition that it’s ssh itself that returns the 255 exit status is correct. The ssh man page states that: ssh exits with the exit status of the remote command or with 255 if an error occurred. If you were to simply run ssh [email protected] "pkill -f asdf" , you’d most likely get an exit status of 1 , corresponding to the pkill status for “ No processes matched ”. The challenging part is to understand why an error occurs with SSH when you run ssh [email protected] "pkill -f asdf || true"
SSH remote commands: The SSH server launches a shell to run remote command(s). Here’s an example of this in action:
$ ssh server "ps -elf | tail -5"
4 S root 35323 1024 12 80 0 - 43170 poll_s 12:01 ? 00:00:00 sshd: anthony [priv]
5 S anthony 35329 35323 0 80 0 - 43170 poll_s 12:01 ? 00:00:00 sshd: anthony@notty
0 S anthony 35330 35329 0 80 0 - 28283 do_wai 12:01 ? 00:00:00 bash -c ps -elf | tail -5
0 R anthony 35341 35330 0 80 0 - 40340 - 12:01 ? 00:00:00 ps -elf
0 S anthony 35342 35330 0 80 0 - 26985 pipe_w 12:01 ? 00:00:00 tail -5
Note that the default shell is bash and that the remote command is not a simple command but a pipeline , “a sequence of one or more commands separated by the control operator | ”. The Bash shell is clever enough to realise that if the command being passed to it by the -c option is a simple command , it can optimise by not actually forking a new process, i.e., it directly exec s the simple command instead of going through the extra step of fork ing before it exec s. Here’s an example of what happens when you run a remote simple command ( ps -elf in this case):
$ ssh server "ps -elf" | tail -5
1 S root 34740 2 0 80 0 - 0 worker 11:49 ? 00:00:00 [kworker/0:1]
1 S root 34762 2 0 80 0 - 0 worker 11:50 ? 00:00:00 [kworker/0:3]
4 S root 34824 1024 31 80 0 - 43170 poll_s 11:51 ? 00:00:00 sshd: anthony [priv]
5 S anthony 34829 34824 0 80 0 - 43170 poll_s 11:51 ? 00:00:00 sshd: anthony@notty
0 R anthony 34830 34829 0 80 0 - 40340 - 11:51 ? 00:00:00 ps -elf
I’ve come across this behaviour before but I couldn’t find a better reference other than this AskUbuntu answer.
pkill behaviour: Since pkill -f asdf || true is not a simple command (it’s a command list ), the above optimisation can not occur, so when you run ssh [email protected] "pkill -f asdf || true" , the sshd process forks and execs bash -c "pkill -f asdf || true" . As ctx’s answer points out, pkill won’t kill its own process. However, it will kill any other process whose command line matches the -f pattern. The bash -c command matches this pattern so it kills this process – its own parent (as it happens). The SSH server then sees that the shell process it started in order to run the remote commands was killed unexpectedly, so it reports an error to the SSH client. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/406763",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38106/"
]
} |
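A common workaround for this (a standard pgrep/pkill idiom, not part of the answer above) is to write the pattern so it cannot match the shell that carries it, e.g. with a character class:

```sh
# The regex [a]sdf matches "asdf" but not the literal text "[a]sdf" that
# appears in the remote bash -c command line, so the shell survives.
ssh user@host "pkill -f '[a]sdf' || true"
```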
406,832 | Can someone explain when does rsync use port 22 on the remote host and when does it use port 873? Can it be set to always port 22, since I heard it has to use port 873 if it is run as a daemon? Can someone explain in simple terms. | For context, the rsync documentation says There are two different ways for rsync to contact a remote system: using a remote-shell program as the transport (such as ssh or rsh) or contacting an rsync daemon directly via TCP. The remote-shell transport is used whenever the source or destination path contains a single colon (:) separator after a host specification. Contacting an rsync daemon directly happens when the source or destination path contains a double colon (::) separator after a host specification, OR when an rsync:// URL is specified. Port 22 is the SSH port; it’s used when you tell rsync to connect via SSH, with a single colon (the “remote-shell” case above). Port 873 is the rsync dæmon port; it’s used when rsync is used with a double colon or a rsync:// URL. Most of the time you’ll be using SSH; using the dæmon requires specific setup. If you only ever want to use port 22, all you need to do is always specify a single colon in the remote host descriptor. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/406832",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/262569/"
]
} |
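Concretely, the two transports look like this on the command line (host, paths, and the daemon module name are placeholders; the daemon form only works against a configured rsync daemon):

```sh
# Single colon: remote-shell transport over SSH (TCP port 22 by default)
rsync -av /src/ user@host:/dest/

# Double colon or rsync:// URL: rsync daemon, TCP port 873
rsync -av /src/ user@host::module/
rsync -av /src/ rsync://host/module/
```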
406,863 | I have a shell script in which I want to add a shebang.Given a variable defined as follows: SHEBANG="#!/bin/sh" My question is if I can use that variable in another script like this: $SHEBANG# other stuff... | No. The shebang line is processed by your kernel or system library* and doesn't process shell variables. If you wanted to use an environment variable as the interpreter for some reason, you could create a "trampoline" executable (not script!) that re-spawned $SHEBANG "$@" . You'd need to omit the #! from the variable in this case; #! is only useful when it is literally the first two bytes of a file. Another option is to make use of the fact that scripts without shebang lines will (generally) be executed as shell scripts with either sh or the calling shell . If there's a comment syntax for your shebang target that overlaps usefully with shell syntax, you could make it work. This strategy is or used to be common with Lisp interpreters. In the case of your example, where the interpreter is also sh , this isn't viable. You can push things even further on that line if you can rely on bash being present: a script whose first line is exec /bin/bash -c 'exec "$SHEBANG" <(tail -n +2 "$0")' "$0" "$@" will work for most interpreters; Bash is used as an intermediate step so that process substitution can be used to have tail skip the first line for us, so that we don't end up in an infinite loop. Some interpreters require that the file be seekable, and they won't accept a pipeline like this. It's worth considering whether this is really something you want to do. Aside from any security concerns, it's just not very useful: few scripts can have their interpreters changed out like that, and where they can you'd almost certainly be better off with a more restricted piece of logic that spawned the correct one for the situation (perhaps a shell script in between, that calls the interpreter with the path to the real script as an argument). * Not always; in certain circumstances some shells will read it themselves, but they still won't do variable expansion. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406863",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/262552/"
]
} |
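To see the exec trick from the answer in context: put it on the first line of a file with no #! at all, and everything after that line is handed to whatever $SHEBANG names. A sketch (interpreter path and file name are arbitrary, and the output shown is what should be expected, not a captured run):

```sh
$ cat demo.script
exec /bin/bash -c 'exec "$SHEBANG" <(tail -n +2 "$0")' "$0" "$@"
echo "running under: $SHEBANG"
$ SHEBANG=/bin/sh sh demo.script
running under: /bin/sh
```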
406,936 | The zswap documentation says: Zswap seeks to be simple in its policies. Sysfs attributes allow for one user-controlled policy:
* max_pool_percent - The maximum percentage of memory that the compressed pool can occupy.
This specifies the maximum percentage of memory the compressed pool can occupy. How do I find out:
The current percentage of memory occupied by the compressed pool
How much of this pool is in use
Compression ratios, hit rates, and other useful info | Current statistics:
# grep -R . /sys/kernel/debug/zswap/
Compression ratio:
# cd /sys/kernel/debug/zswap
# perl -E "say $(cat stored_pages) * 4096 / $(cat pool_total_size)"
Current settings:
$ grep -R . /sys/module/zswap | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/406936",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
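If perl isn't available, the same ratio can be computed with awk (run as root, e.g. in a sudo -i shell, since the debugfs files are root-only):

```sh
cd /sys/kernel/debug/zswap
awk -v sp="$(cat stored_pages)" -v ps="$(cat pool_total_size)" \
    'BEGIN { if (ps > 0) printf "compression ratio: %.2f\n", sp * 4096 / ps }'
```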
406,978 | I have several machines on my LAN, most of them Ubuntu based. The router has DD-WRT (v3.0-r33675M kongac Release: 11/03/17) firmware. I have set DHCP to serve network settings for all my computers. The router has been set to use 9.9.9.9 as its DNS server. Now I want to verify my computers are using quad9 for DNS, but I am unable to do so. My computers see only the router and are not aware which DNS it is using. For example, the command (in Ubuntu)
sudo netstat -l --inet -n -v -p | grep :53 | grep -i udp
gives
udp 0 0 127.0.1.1:53 0.0.0.0:*
So I cannot verify I am using quad9 in this manner. The router does not recognize this command, so I cannot verify the DNS setting that way. I have tried the things in this post on the computer and on the DD-WRT command line, but none help: What DNS servers am I using? How can I properly verify I am using quad9 for DNS? | You can use tcpdump to see where the DNS traffic goes:
# tcpdump -i eth0 -n udp port 53 or tcp port 53
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
16:09:02.961122 IP 192.168.115.15.49623 > 192.168.115.5.53: 6115+ A? www.heise.de. (30)
16:09:02.983664 IP 192.168.115.5.53 > 192.168.115.15.49623: 6115 1/13/14 A 193.99.144.85 (493) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406978",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155370/"
]
} |
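Note that on a LAN client this capture will only ever show queries going to the router itself; to confirm the upstream really is 9.9.9.9, the capture has to run on the router. If the DD-WRT build ships tcpdump, a sketch (the WAN interface name varies per device):

```sh
# On the router: watch forwarded queries leaving for quad9
tcpdump -i vlan2 -n 'port 53 and host 9.9.9.9'
```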
407,116 | How can I preview PDFs as images in ranger ? By default it uses pdftotext (in the scope.sh preview configuration file), but I would like to use pdfimages , pdftoppm , or another graphical solution instead. The ArchWiki suggests a method using pdftoppm , but it appears out of date (it does not function as-is, and does not follow the structure of surrounding code). | Ranger supports this (disabled by default) since v1.9.0 ( see commit ab8fd9e ). To enable this, update your scope.sh to the latest version. Note that this will overwrite your previewing configuration file:
ranger --copy-config=scope
Then find and uncomment the following in ~/.config/ranger/scope.sh :
# application/pdf)
#     pdftoppm -f 1 -l 1 \
#              -scale-to-x 1920 \
#              -scale-to-y -1 \
#              -singlefile \
#              -jpeg -tiffcompression jpeg \
#              -- "${FILE_PATH}" "${IMAGE_CACHE_PATH%.*}" \
#         && exit 6 || exit 1;; | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/407116",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191530/"
]
} |
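scope.sh's exit code 6 tells ranger the preview is an image, so image display itself must also be switched on; a sketch of the relevant rc.conf lines (the method depends on your terminal, w3m shown as an example):

```
# ~/.config/ranger/rc.conf
set preview_images true
set preview_images_method w3m
```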
407,142 | I want to learn C# on a Debian system. What do I have to install? Is there something like an interactive prompt where I can try running snippets of code? I have to learn C# and I have a dual-boot computer, but don't feel like powering off Linux and booting Windows just for learning C#. | What do I have to install?
apt install mono-mcs
and optionally apt install monodevelop if you want something more like an IDE. mcs is the compiler. You can run the compiled program with mono prog.exe (or as ./prog.exe with binfmt_misc support enabled, which I believe Debian will do by default). Is there something like a prompt where I can try running snippets of code?
apt install mono-csharp-shell
and then
$ csharp
Mono C# Shell, type "help;" for help
Enter statements below.
csharp> Console.WriteLine("Hello world!")
Hello world!
csharp> | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/407142",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63603/"
]
} |
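A minimal end-to-end check after installing mono-mcs (the file name is arbitrary):

```sh
cat > hello.cs <<'EOF'
class Hello {
    static void Main() {
        System.Console.WriteLine("Hello from C# on Debian!");
    }
}
EOF
mcs hello.cs    # produces hello.exe
mono hello.exe
```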
407,230 | Say one creates a dynamically named variable in zsh , thus:
name="hello"
typeset $name=42
echo ${(P)${name}}  # Prints the variable $hello, which is 42
Now, suppose one wants to increment or change the said variable, but without knowing its direct name, i.e. I'd expect something similar to the following to work:
(( ${(P)${name}} = ${(P)${name}} + 1 ))  # Set $hello to 43?
The above doesn't work - what will? | $ name=hello
$ hello=42
$ (($name++))
$ echo $hello
43
Just like in any Korn-like shell. Or POSIXly:
$ name=hello
$ hello=42
$ : "$(($name += 1))"
$ echo "$hello"
43
The point is that all parameter expansions, command substitutions and arithmetic expansions are done inside arithmetic expressions prior to the arithmetic expression being evaluated. ((something)) is similar to let "something" . So in (($name++)) (like let "$name++" ), that's first expanded to hello++ and that's evaluated as the ++ operator applied to the hello variable. POSIX sh has no ((...)) operator but it has the $((...)) arithmetic expansion operator. It doesn't have ++ (though it allows implementations to have one as an extension instead of requiring it to be a combination of unary and/or binary + operators), but it has += . By using : "$((...))" where : is the null command, we get something similar to ksh's ((...)) . Though a strict equivalent would be [ "$((...))" -ne 0 ] , as ((expression)) returns false when the expression resolves to 0. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/407230",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17639/"
]
} |
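The expansion composes with any arithmetic operator, not just ++; for instance, stepping the indirectly named variable by an arbitrary amount works the same way in zsh, bash, and ksh:

```sh
name=hello hello=42
step=5
: "$(($name += step))"   # expands to: hello += step, then evaluates
echo "$hello"            # 47
```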
407,233 | Consider a custom command-line program (e.g. loopHelloWorld) which detects Ctrl+C for a nice shutdown. How do I pipe the output without losing the Ctrl+C?
$ loopHelloWorld
- Ctrl+C to nicely shutdown
But with a pipe, the pipe kills the software without a nice shutdown:
$ loopHelloWorld |
  while IFS= read -r line; do
    echo "$line"
  done
Example:
ping example.com |
  while IFS= read -r line; do
    echo "$line"
  done | Ctrl+C causes a SIGINT to be sent to all processes in the pipeline (as they're all run in the same process group that corresponds to that foreground job of your interactive shell). So in:
loopHelloWorld |
  while IFS= read -r line; do
    echo "$line"
  done
both the process running loopHelloWorld and the one running the subshell that runs the while loop will get the SIGINT . If loopHelloWorld writes the Ctrl+C to nicely shutdown message on its stdout, it will also be written to the pipe. If that's after the subshell at the other end has already died, then loopHelloWorld will also receive a SIGPIPE, which you'd need to handle. Here, you should write that message to stderr as it's not your command's normal output (doesn't apply to the ping example though). Then it wouldn't go through the pipe. Or you could have the subshell running the while loop ignore the SIGINT so it keeps reading the loopHelloWorld output after the SIGINT:
loopHelloWorld | (
  trap '' INT
  while IFS= read -r line; do
    printf '%s\n' "$line"
  done
)
That would however cause the exit status of the pipeline to be 0 when you press Ctrl+C . Another option for that specific example would be to use zsh or ksh93 instead of bash . In those shells, the while loop would run in the main shell process, so it would not be affected by SIGINT. That wouldn't help for loopHelloWorld | cat though, where cat and loopHelloWorld run in the foreground process group. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/407233",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142142/"
]
} |
407,328 | I know this question has been asked before, but I do not accept the answer, "you can clearly see custom additions". When I add PPAs (which I have not done in years), I hit a key on my keyboard labeled "Enter" which allows me to add an empty line before the new entry (I would even add an explanatory comment, but I am a tech writer, so ....). I like my sources.conf clean and neat. /etc/apt/sources.d means I have half a dozen files to parse instead of just one. AFAIK, there is "absolutely" no advantage in having one configuration file vs 6 (for the sake of argument, maybe you have 3 or even 2, doesn't matter ... 1 still beats 2). Can somebody please come up with a rational advantage; "you can clearly see custom additions" is a poor man's excuse. I must add, I love change, however, ONLY when there are benefits introduced by the change. Edit after first response:
"It allows new installations that need their own repos to not have to search a flat file to ensure that it is not adding duplicate entries." Now they have to search a directory for dupes instead of a flat file. Unless they assume admins don't change things ...
"It allows a system administrator to easily disable (by renaming) or remove (by deleting) a repository set without having to edit a monolithic file." The admin has to grep the directory to find the appropriate file to rename; before, he would search ONE file and comment out a line, a sed one-liner for "almost" any admin.
"It allows a package maintainer to give a simple command to update repository locations without having to worry about inadvertently changing the configuration for unrelated repositories." I do not understand this one; I "assume" the package maintainer knows the URL of his repository. Again, he has to sed a directory instead of a single file. | Having each repository (or collection of repositories) in its own file makes it simpler to manage, both by hand and programmatically:
It allows new installations that need their own repos to not have to search a flat file to ensure that it is not adding duplicate entries.
It allows a system administrator to easily disable (by renaming) or remove (by deleting) a repository set without having to edit a monolithic file.
It allows a package maintainer to give a simple command to update repository locations without having to worry about inadvertently changing the configuration for unrelated repositories. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/407328",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81145/"
]
} |
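As a concrete example of the second point, disabling a repository under the split layout is a pure file operation, since apt only reads files ending in .list (or .sources on newer releases) from that directory; the file name here is hypothetical:

```sh
# Disable the repo (apt ignores the renamed file), then re-read sources
sudo mv /etc/apt/sources.list.d/example-vendor.list \
        /etc/apt/sources.list.d/example-vendor.list.disabled
sudo apt update
```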
407,332 | What companies sell computers without pre-installed proprietary operating systems like Mac OS X or Windows? I only know of System76 , but are there others? Does Asus? | HP makes a number of machines that are Canonical (Ubuntu) certified . I know that on some of their workstations, you can get Ubuntu and Redhat distros (See the HP z840 ). Another good place to look might be the Free Software Foundation (FSF). I know they do some hardware certification . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/407332",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37153/"
]
} |
407,346 | I need to run a lot of similar commands in the quickest possible time, using all available resources. For example, my case is processing images; when I'm using the following command:
for INPUT in *.jpg do; some_command; done
the commands are executed one by one and do not use all the available resources. But on the other hand, executing
for INPUT in *.jpg do; some_command &; done
makes the machine run out of resources in a very short time. I know about at 's batch command, but I'm not sure if I can use that in my case. Correct me if I am wrong. So I was thinking about putting the commands in some kind of queue and executing just a part of them at once. I don't know how to do that in a quick way and that's the problem. I'm sure someone has run into a similar problem before. Please advise. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/407346",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137527/"
]
} |
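A standard tool for exactly this queue-with-bounded-parallelism pattern is xargs -P (GNU xargs assumed); a sketch, with some_command taking one image per invocation:

```sh
# Keep at most one job per CPU core running until all images are processed
printf '%s\0' *.jpg | xargs -0 -n 1 -P "$(nproc)" some_command
```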
407,385 | At first this was a bit funny, like playing "Bash Roulette"...but now it's getting old lol Any command in my terminal that exits with non-zero code closes my terminal window I was told that perhaps I have set -e set in some bash script somewhere that my terminal sources. I have checked .bash_profile / .bashrc / .profile and it doesn't look like set -e is in there. Would there be any other obvious culprits? | Alright, so indeed, it was a wayward set -e that caused my trouble. The way I found the set -e was using bash -lx The best thing to do would be to use: bash -lx > lx.log 2>&1 then open that log file and do a search for set ... once you find that wayward set -e you can remove that line and your problem should be gone! (Machine restart might be a good idea tho). In my case, the set -e was in a file that .bash_profile sources, but the line was not in .bash_profile itself. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/407385",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
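If the log approach doesn't pinpoint the file, grepping the usual startup files (and anything under /etc/profile.d that they source) is a quick cross-check:

```sh
grep -n 'set -e' ~/.bash_profile ~/.bashrc ~/.profile \
     /etc/profile /etc/bash.bashrc /etc/profile.d/*.sh 2>/dev/null
```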
407,394 | I have run ssh-copy-id root@c199 successfully before, and I can log in with ssh root@c199 without a password prompt. Now I want to auto-login as another user, ufo (the remote machine has this user). ssh-copy-id ufo@c199 asks me to enter a password:
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ufo@c199's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'ufo@c199'"
and check to make sure that only the key(s) you wanted were added.
But logging in with ssh ufo@c199 still prompts for password input. I log in to the remote CentOS machine via ssh from msys2 (on Windows), and I found there are many identical lines like
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCs7RTfvn83Rxdmvgfh+F4kUlM5FzIUb9rRHaqq11xKIW1gztn/+G4tr+OWl4o6GTW2Z361hIiugy8DPtMATN66nTTDUYO0sSvw2BrQfDY4iIENdLpkkHO8KQVGpQE+8tDkaZfD6EQLVtl0uvDE3D77tfcnBLODXgZPQsUSlssMi+pxDbSVjjKgrPhM1G/L9OTrEHKWDhF+ZBgY1RuLl7ZEdoATbhJaK4FFb9hNn/2CSibVfLts8HJGYQXIQRX/RBzaDZp47sKZvq302ewkkVorNY+c9mmoze6mi8Ip2zEQOMi6S9zM/yRiD0XZrbmzYfNkoXA03WTmMR/DynVvX2nV /c/Users/xxxx/.ssh/id_rsa
in CentOS's /home/ufo/.ssh/authorized_keys . I have changed the .ssh user folder's permissions to 700 and the authorized_keys file to 644. Same ssh key: ssh root@c199 logs in without a prompt, but ssh ufo@c199 prompts for password input. UPDATE: ssh ufo@c199 -vv output:
....
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:zmCg5vHhBAMd5P4ei82+KsVg072KXbC63C44P0w3zbU
debug1: Host 'c199' is known and matches the ECDSA host key.
debug1: Found key in /c/Users/xxxxx/.ssh/known_hosts:35
debug2: set_newkeys: mode 1
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug2: set_newkeys: mode 0
debug1: rekey after 134217728 blocks
debug2: key: /c/Users/xxxxx/.ssh/id_rsa (0x60006bec0), agent
debug2: key: /c/Users/xxxxx/.ssh/id_dsa (0x0)
debug2: key: /c/Users/xxxxx/.ssh/id_ecdsa (0x0)
debug2: key: /c/Users/xxxxx/.ssh/id_ed25519 (0x0)
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /c/Users/xxxxx/.ssh/id_rsa
debug2: we sent a publickey packet, wait for reply
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Trying private key: /c/Users/xxxxx/.ssh/id_dsa
debug1: Trying private key: /c/Users/xxxxx/.ssh/id_ecdsa
debug1: Trying private key: /c/Users/xxxxx/.ssh/id_ed25519
debug2: we did not send a packet, disable method
debug1: Next authentication method: password | Thanks to https://unix.stackexchange.com/a/55481/106419 , which told me how to debug ssh. To enable sshd debugging and see what happens:
systemctl stop sshd
/usr/sbin/sshd -d -p 22
I found:
Authentication refused: bad ownership or modes for directory /home/ufo
Everyone only said:
/home/ufo/.ssh permissions are correct: 700
/home/ufo/.ssh/authorized_keys permissions are correct: 600/644
But sshd also checks the user's home folder itself! No one mentioned this!
sudo chmod 700 /home/ufo
solved the problem. Summary: you need to ensure that
/home/ufo permissions are 700
/home/ufo/.ssh permissions are 700
/home/ufo/.ssh/authorized_keys permissions are 600
(Change ufo to your home folder name.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/407394",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106419/"
]
} |
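The same fix as a copy-paste block (the user name is the example's):

```sh
sudo chmod 700 /home/ufo /home/ufo/.ssh
sudo chmod 600 /home/ufo/.ssh/authorized_keys
```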
407,426 | A few days ago, I dragged some Apple Music songs to my MP3 player. When I played them, there was no sound. I googled for a solution, but people all said that the files are DRM-encrypted on Apple Music to prevent piracy, and I could find no more information. Can anyone help me? | Apple Music files have not been encrypted for a decade! So, unless you bought them ten years ago, they are not encrypted; your MP3 player simply does not support the format (AAC). You can tell by the extension:
m4p -> encrypted
m4a -> standard AAC or Apple Lossless
You can convert AAC to MP3; however, you will get a slight loss of quality. You could use ffmpeg :
ffmpeg -i inputfile.m4a -c:a libmp3lame -ac 2 -b:a 320k outputfile.mp3
To remove DRM: you can simply burn DRM-encumbered files to a CD and rip the CD to remove the DRM. The burn-rip is the easiest method, yet you get quality loss. There are multiple other ways you can remove the DRM, such as using Audacity; you can google for the exact steps. I do not know how this is possible, but there are also commercial software solutions that at best do exactly what Audacity does. Audacity is GPL software that is free of charge. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/407426",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/263079/"
]
} |
407,432 | I'm trying to get a USB-connected Brother HL-2240 printer to work on Linux (Ubuntu Xenial with cups 2.1.3-4ubuntu0.3). I turned debugging up to maximum and the cups error log very verbosely tells me everything succeeded. The page log simply lists the job as succeeded. I generated the PCL file manually, ran /usr/lib/cups/backend/usb under strace, and it said it succeeded, with no apparent errors in the ioctls (lots of USBDEVFS_REAPURBNDELAY => EAGAIN , but that seems to be some sort of spinlock). But nothing prints. The printer physically works fine. I can print a test page by holding down the "go" button on the printer. I've tried this with and without usblp. I don't have android-udev (one source thought that might be relevant). I tried re-installing cups. It worked ages ago. I think I might have been on Precise Pangolin at the time. Yes, that's a long time to go without printing, and there may have been other relevant things in that time too. I don't know for sure that the PCL documents I'm generating are correct. Is there a way to test those? Or a source of known-good documents for this printer? But mostly, does anyone have any ideas how to fix this? (I was going to post both the error_log and strace output here, but they're way too long. I've looked them over pretty thoroughly, but if there are strange things to look for, please suggest them.) Edit to add: I'm pretty sure it's finding the right printer because of lines in the log like: D [28/Nov/2017:00:06:11 -0500] [Job 19] envp[23]="DEVICE_URI=usb://Brother/HL-2240%20series?serial=B3N746940" That's the same serial number as in dmesg . Also, when I invoke /usb directly: export DEVICE_URI=usb://Brother/HL-2240%20series?serial=B3N746940/usr/lib/cups/backend/usb 25 dspeyer hello 1 "" < /etc/hosts I get DEBUG: Loading USB quirks from "/usr/share/cups/usb".DEBUG: Loaded 131 quirks.DEBUG: Printing on printer with URI: usb://Brother/HL-2240%20series?serial=B3N746940DEBUG: libusb_get_device_list=13STATE: +connecting-to-deviceSTATE: -connecting-to-deviceDEBUG2: Printer found with device ID: MFG:Brother;CMD:PJL,HBP;MDL:HL-2240 series;CLS:PRINTER;CID:Brother Laser Type1; Device URI: usb://Brother/HL-2240%20series?serial=B3N746940DEBUG: Device protocol: 2INFO: Sending data to printer.DEBUG: Read 195 bytes of print data...DEBUG: Wrote 195 bytes of print data...DEBUG: Sent 195 bytes...DEBUG: Waiting for read thread to exit... (And similar things if I use a PCL file instead of a text file, but longer.) If I use any other DEVICE_URI, I get error messages. And a strace on the usb command contains: ioctl(10, USBDEVFS_GET_CAPABILITIES, 0xe4c198) = 0write(2, "STATE: +connecting-to-device\n", 29STATE: +connecting-to-device) = 29ioctl(10, USBDEVFS_GETDRIVER, 0xbf941308) = -1 ENODATA (No data available)timerfd_settime(9, TFD_TIMER_ABSTIME, {it_interval={0, 0}, it_value={3607344, 967184000}}, NULL) = 0ioctl(10, USBDEVFS_SUBMITURB, 0xe65ea0) = 0 Which indicates pretty clearly data is going over USB. | I had this issue with a Brother HL-L2320D. I was doing a few things incorrectly. This post helped: Re: Printer brand recommendations | lists.debian.org I was being too clever and trying to install the printer directly through the CUPS web interface, using the .ppd file and the CUPS filter. The CUPS filter actually invokes the LPD filter, so both are necessary. I ended up just installing the Debian packages Brother provided ( hll2320dlpr-3.2.0-1.i386.deb and hll2320dcupswrapper-3.2.0-1.i386.deb ). I needed support for 32-bit binaries. 
The suggestion here of the Ubuntu package gcc-multilib worked for me. As for what causes the silent failure mode, I think it's various pieces of the filter pipeline failing without correctly reporting the failure back to CUPS; the printer gets sent either an empty file or an invalid one, and CUPS sees a success. The top-level filters are Perl scripts that call other scripts and binaries with the system function or backticks, without checking the exit codes. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/407432",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47282/"
]
} |
407,447 | I have Debian 9 Stretch installed on my PC, and when I want to listen to some music I can't hear it well. I discovered in the audio settings that the A2DP profile was there, but nothing happens when I select it. Is there a way to force the A2DP connection when the headset is connected to the PC? I already paired it, btw. Help is appreciated. Thanks. | I am using a SoundBuds Curve headset in Debian 9, and have had the same problem: I was unable to switch from the HSP/HFP profile to the A2DP profile. What fixed the issue for me was editing /etc/bluetooth/main.conf . First, add the following lines under the [General] tag (copied from audio.conf):
# Automatically connect both A2DP and HFP/HSP profiles for incoming
# connections. Some headsets that support both profiles will only connect the
# other one automatically so the default setting of true is usually a good
# idea.
AutoConnect=true
Next, enable support for multiple profiles, which can be found a few lines below in main.conf:
# Enables Multi Profile Specification support. This allows to specify if
# system supports only Multiple Profiles Single Device (MPSD) configuration
# or both Multiple Profiles Single Device (MPSD) and Multiple Profiles Multiple
# Devices (MPMD) configurations.
# Possible values: "off", "single", "multiple"
MultiProfile = multiple | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/407447",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/262901/"
]
} |
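After saving main.conf, restart the Bluetooth daemon (systemd assumed) and reconnect the headset so the new profile settings are picked up:

```sh
sudo systemctl restart bluetooth
```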
407,473 | After an update with pacman -Syuq :
# pacman -Sc
pacman: error while loading shared libraries: libicuuc.so.59: cannot open shared object file: No such file or directory
# find / -name libicuuc.so.* 2>/dev/null
/usr/lib/libicuuc.so.60.1
/usr/lib/libicuuc.so.60
Arch Linux on a Pi version 1:
# uname -an
4.9.62-1-ARCH #1 SMP Fri Nov 17 13:42:55 UTC 2017 armv6l GNU/Linux | Previously proposed solutions were not relevant or did not work for me. For some reason upgrading the icu package from 59.1-2 to 60.1-1 made linkage break, and many programs (including pacman) failed with this error afterwards. No interrupted pacman on my side. If you still have the previous package in your cache, you can try this, which worked for me: Locate the cached version of the package (for example, mine was /var/cache/pacman/pkg/icu-59.1-2-x86_64.pkg.tar.xz ). Extract it:
mkdir -p ~/pkg/tmp && tar xJvf /var/cache/pacman/pkg/icu-59.1-2-x86_64.pkg.tar.xz -C ~/pkg/tmp
Copy the libs to your lib folder:
sudo cp ~/pkg/tmp/usr/lib/libicu*.59 /usr/lib/
Proceed with the update:
sudo pacman -Syu
You can now remove the files you just extracted. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/407473",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/263133/"
]
} |
407,501 | I'm running Ubuntu 16.04 LTS and want to use an Apple keyboard (wired). Now I'm facing some problems with making my custom settings permanent. I'm using the 'English international, AltGr dead keys' layout. The keyboard has some keys swapped, and I set the settings manually in the /sys/module/hid_apple/parameters/ folder. I set fnmode to 2 , iso_layout to 0 and swap_opt_cmd to 1 . After this everything works as intended. But after rebooting, the settings are back to the default and I have to reset everything manually after each reboot. For now I wrote a little shell script which does it, but that's not the best way, I think. How can I make these settings persistent? | It appears that you can create /etc/modprobe.d/hid_apple.conf and add the entries you need fixing in there, such as: options hid_apple fnmode=2 NB: This assumes the hid_apple module is already being loaded.
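A minimal sketch of that file covering all three parameters from the question:
# /etc/modprobe.d/hid_apple.conf
options hid_apple fnmode=2 iso_layout=0 swap_opt_cmd=1
If the module is loaded from the initramfs, as it typically is on Ubuntu, you may also need to regenerate it and reboot (an assumption on my part):
sudo update-initramfs -u | {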
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/407501",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/205129/"
]
} |
407,517 | I found some good information about wireless tools in this Q/A. Apparently it was introduced to the Linux kernel in 1997 by Jean Tourrilhes, sponsored by Hewlett Packard. Edit: It seems WE (Wireless Extensions) was added to the kernel by Tourrilhes, not the wireless tools themselves. The tools are available on most distros as the primary way to communicate with WE. You can see WE in the kernel at /proc/net/wireless . The last version released was v29, yet Ubuntu 14 & 16 seem to contain the v30 beta ( iwconfig -v ). I'm curious about what happened to this package. Why did the "beta" version 30 become the de facto standard version used? Did HP stop funding Jean Tourrilhes so development stopped? Or maybe it was decided that it was stable enough to stop development, but if that was the case why would 30 still be a beta? I found this Github page but it seems to be for historical reference only. Version History | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/407517",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
407,532 | I would like to list all of the files in a folder called foldername that have the extension test , atest or btest . My immediate thought was to run ls ./foldername/*.{a,b,}test This works fine unless there is nothing with the extension atest , in which case I get the error zsh: no matches found: ./foldername/*.atest . Is there any way I can simply ignore this error and print the files that do exist? I need this to work in both zsh and Bash. | It may be best to do this with find : find ./foldername -maxdepth 1 -name '*.atest' -o -name '*.btest' -o -name '*.test'
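If you'd rather keep it a pure glob, both shells can be told to drop non-matching patterns instead of raising an error. A sketch (caveat: if none of the patterns match, ls ends up with no arguments and lists the current directory):
# zsh: the (N) qualifier applies nullglob per pattern
ls ./foldername/*.{a,b,}test(N)
# bash
shopt -s nullglob
ls ./foldername/*.{a,b,}test | {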
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/407532",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228295/"
]
} |
407,547 | There are two flavors of netcat : netcat-openbsd and netcat-traditional . How do I know which flavor of netcat I'm using? I've tried man nc but it doesn't say what flavor it is. | First of all, you can install multiple flavors on your machine, so the answer depends on how many flavors you've installed and what command you type. netcat-traditional and netcat-openbsd are available to install via the apt package manager in Ubuntu. In my case I also built from source to install the GNU netcat flavor from its official website. For the "openbsd" flavor, you can find the binary's name and location with dpkg -L <package-name> (search for the equivalent of dpkg -L if your package manager is not apt):
$ dpkg -L netcat-openbsd | grep /bin
/bin/
/bin/nc.openbsd
Then use type -a to confirm that the binary name nc.openbsd is searchable in $PATH and interpreted as a command:
$ type -a nc.openbsd
nc.openbsd is /bin/nc.openbsd
nc.openbsd is /bin/nc.openbsd
It's the same for the "traditional" flavor:
$ dpkg -L netcat-traditional | grep /bin
/bin/
/bin/nc.traditional
$ type -a nc.traditional
nc.traditional is /bin/nc.traditional
nc.traditional is /bin/nc.traditional
That means I can issue the command nc.openbsd to run the netcat-openbsd tool, and the command nc.traditional to run the netcat-traditional tool. (It may be confusing that the commands contain '.' while the package names contain '-'.) It seems there are 3 flavors to install via apt :
$ apt-cache search netcat --names-only
netcat-openbsd - TCP/IP swiss army knife
netcat - TCP/IP swiss army knife -- transitional package
netcat-traditional - TCP/IP swiss army knife
But netcat is actually only a dummy package:
$ apt-cache show netcat | grep Description-en -A 2
Description-en: TCP/IP swiss army knife -- transitional package
 This is a "dummy" package that depends on lenny's default version of netcat, to ease upgrades. It may be safely removed.
So you can only install netcat-openbsd and netcat-traditional via apt if you want:
sudo apt-get install netcat-openbsd
sudo apt-get install netcat-traditional
What about the commands nc and netcat ? They can be tied to multiple flavors searchable via $PATH ; the first one found runs when you type nc or netcat . Again, you can use type -a to check; the first line takes priority:
$ type -a nc
nc is /usr/local/bin/nc
nc is /bin/nc
nc is /usr/local/bin/nc
nc is /bin/nc
$ type -a netcat
netcat is /usr/local/bin/netcat
netcat is /bin/netcat
netcat is /usr/local/bin/netcat
netcat is /bin/netcat
You can use realpath to figure out their resolved paths:
$ realpath /usr/local/bin/netcat
/usr/local/bin/netcat
$ realpath /bin/netcat
/bin/nc.openbsd
$ realpath /usr/local/bin/nc
/usr/local/bin/netcat
$ realpath /bin/nc
/bin/nc.openbsd
Of the four, only 2 paths are unique on my system; one is "GNU", and the other one is "openbsd":
$ /usr/local/bin/netcat --version | head -1
netcat (The GNU Netcat) 0.7.1
$ /bin/nc.openbsd -h |& head -1
OpenBSD netcat (Debian patchlevel 1.130-3)
That means if I type nc OR netcat , it will execute /usr/local/bin/netcat , which is "GNU Netcat". You can try update-alternatives to adjust the resolved symlink path:
$ realpath /bin/nc
/bin/nc.openbsd
$ realpath /bin/netcat
/bin/nc.openbsd
$ sudo update-alternatives --config nc
There are 2 choices for the alternative nc (providing /bin/nc).
 Selection Path Priority Status
------------------------------------------------------------
 0 /bin/nc.openbsd 50 auto mode
* 1 /bin/nc.openbsd 50 manual mode
 2 /bin/nc.traditional 10 manual mode
Press <enter> to keep the current choice[*], or type selection number: 2
update-alternatives: using /bin/nc.traditional to provide /bin/nc (nc) in manual mode
$ realpath /bin/nc
/bin/nc.traditional
$ realpath /bin/netcat
/bin/nc.traditional
It changed the resolved symlinks¹ of both /bin/nc and /bin/netcat to /bin/nc.traditional , but it still doesn't change the flavor if I type nc OR netcat , since /usr/local/bin/ still has higher precedence over /bin in my $PATH :
$ /bin/nc -h |& head -1
[v1.10-41]
$ nc -h |& head -1
GNU netcat 0.7.1, a rewrite of the famous networking tool.
$ type -a nc | head -1
nc is /usr/local/bin/nc
Note that there are more flavors of netcat, e.g. ncat , socat , sbd , netcat6 , pnetcat , and cryptcat . ¹ The actual symlinks updated were /etc/alternatives/nc and /etc/alternatives/netcat , to which /bin/nc and /bin/netcat were already symlinked respectively. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/407547",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/263210/"
]
} |
407,634 | This site presents the xargs command with a -J option, which makes it possible to place the standard input at a desired position among the command's arguments: find . -name '*.ext' -print0 | xargs -J % -0 rsync -aP % user@host:dir/ but in the GNU xargs man page this option is not present. What is the way to do this, for commands that accept it? | I am not sure this is what you were expecting, but in the BSD world (such as macOS) -I and -J differ in how they pass the multiple "lines" to the command. Example:
$ ls
file1 file2 file3
$ find . -type f -print0 | xargs -0 -I % rm %
rm file1
rm file2
rm file3
$ find . -type f -print0 | xargs -0 -J % rm %
rm file1 file2 file3
So with -I , xargs will run the command for each element passed to it individually. With -J , xargs will execute the command once, concatenating all the elements and passing them as arguments all together. Some commands such as rm or mkdir can take multiple arguments and act on them the same way as if you passed a single argument and ran them multiple times. But some apps may behave differently depending on how you pass arguments to them. Take tar , for instance. You may create a tar file and then add files to it, or you may create a tar file by adding all the files to it in one go. $ find . -iname "*.txt" -or -iname "*.pdf" -print0 | xargs -0 -J % tar cjvf documents.tar.bz2 %
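GNU xargs has no -J , but a commonly used workaround is a small sh wrapper that places the collected arguments where you need them (a sketch; the trailing sh becomes $0 of the inner shell):
find . -name '*.ext' -print0 |
  xargs -0 sh -c 'rsync -aP "$@" user@host:dir/' sh | {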
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/407634",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106710/"
]
} |
407,647 | On RHEL 7 or CentOS 7, the systemctl or systemd command works fine. I know it won't work in RHEL 6 or CentOS 6. Can you tell me the alternative command for starting/stopping a service, for example: systemctl start iptables.service ? | In earlier versions of RHEL, use the service command as explained in the documentation here . # service service_name start Therefore, in your case: # service iptables start You can replace start with restart , stop , status . List all services with: # service --status-all
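For completeness, enabling or disabling a service at boot (the systemctl enable / disable counterpart) is handled by chkconfig on RHEL 6:
# chkconfig iptables on
# chkconfig iptables off
# chkconfig --list | {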
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/407647",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/261580/"
]
} |
407,649 | I recently installed Zorin OS on my Lenovo Yoga 2. I completely got rid of Windows and am not dual booting. I am now trying to get back to Windows 10. I created a Windows 10 USB drive, but I can't get the computer to boot from it. When I change the UEFI boot order, it still boots into Zorin, and when I go back to the UEFI, it has changed the boot order back. I also created another Zorin USB drive, thinking I would boot from it, then format the Zorin partition so it would have to boot from USB. Same thing, it just won't boot from USB. Is there a way to trigger booting from USB from within Zorin? If not, any ideas on how to get rid of Zorin some other way? | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/407649",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/263288/"
]
} |
407,740 | During hive installation we get the following errors:
/usr/bin/yum install hive_2_6_0_3_8
Installing : hive_2_6_0_3_8-1.2.1000.2.6.0.3-8.noarch 1/1
Error unpacking rpm package hive_2_6_0_3_8-1.2.1000.2.6.0.3-8.noarch
error: unpacking of archive failed on file /usr/hdp/2.6.0.3-8/hive/conf: cpio: rename
Verifying : hive_2_6_0_3_8-1.2.1000.2.6.0.3-8.noarch 1/1
Failed:
hive_2_6_0_3_8.noarch 0:1.2.1000.2.6.0.3-8
Complete!
What exactly is the problem here?
ls -ltd /usr/hdp/2.6.0.3-8/hive/conf
drwxr-xr-x. 3 root root 24 Nov 26 14:16 /usr/hdp/2.6.0.3-8/hive/conf
ls -ltr /usr/hdp/2.6.0.3-8/hive/conf
total 0
drwxr-xr-x. 2 hive hadoop 6 Nov 26 14:16 conf.server
rpm -qa | grep hive | grep 1000
hive_2_6_0_3_8-jdbc-1.2.1000.2.6.0.3-8.noarch | In the package hive_2_6_0_3_8-1.2.1000.2.6.0.3-8.noarch, /usr/hdp/2.6.0.3-8/hive/conf is a regular file, while on your system it is a directory. Cpio (and therefore rpm) cannot convert a directory to a file (or vice versa). Just remove (or move away) the directory /usr/hdp/2.6.0.3-8/hive/conf and try again.
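A hedged way to apply that, keeping a backup of the directory instead of deleting it outright:
mv /usr/hdp/2.6.0.3-8/hive/conf /usr/hdp/2.6.0.3-8/hive/conf.bak
/usr/bin/yum install hive_2_6_0_3_8 | {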
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/407740",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
407,798 | My application:
#!/bin/sh
#
# D2GS
#
# Go to the directory
cd ~
# Run the applications
if ! ps aux | pgrep "D2GS"; then
    wine "C:/D2GS/D2GS.exe" >& /dev/null &
fi
Gives the error: ./d2gs.sh: 14: ./d2gs.sh: Syntax error: Bad fd number Which is strange, since when I start wine "C:/D2GS/D2GS.exe" >& /dev/null & it runs without any issues. The reason I want to start it from a shell script is that I want to crontab it every minute. | >& is not syntax supported by sh . You're explicitly using sh as the shell in that script, and in sh (dash on Debian-like systems) the word after >& must be a file descriptor number, hence the "Bad fd number" error. You need to rewrite that line as: wine "C:/D2GS/D2GS.exe" > /dev/null 2>&1 &
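Alternatively, if you want to keep the >& shorthand, make the script actually run under bash by changing its first line:
#!/bin/bash | {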
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/407798",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/263427/"
]
} |
407,812 | I have a directory of files with similar names, but with incrementing digits as a suffix. I want to remove the lower-suffixed files and only keep the files with the highest suffix. Below is an example file listing:
1k_02.txt
1k_03.txt
1l_02.txt
1l_03.txt
1l_04.txt
2a_05.txt
2a_06.txt
4c_03.txt
4c_04.txt
The above list needs to be reduced to the files below:
1k_03.txt
1l_04.txt
2a_06.txt
4c_04.txt
I don't even know where to start with this, but if possible I would like a single bash command. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/407812",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/263438/"
]
} |
407,888 | How can I use sed to delete the line two (or any number of lines) after a match, without deleting the match or the lines in between? For example:
Enter command below
> login
Command valid
Enter your password at the below prompt
Remember to make sure no-one is looking at your screen
> mysecretword
Password accepted
I want to only delete the line "> mysecretword" on account of it being two lines after the line "Enter your password at the below prompt". I can't use absolute line number positions, because the match could appear any number of lines after the start of the file. Searching online I find a lot of solutions like sed '/Enter your password.*/,+3d' filex , but this would also remove the "Enter your password..." line and the following line, which is not what I want. I only want to remove one line, which is a certain number of lines after a match. How can I do this with sed (or indeed any other commonly-available tool)? | Perhaps
sed '/^Enter your password/ {
n
n
d
}'
or equivalently:
$ sed '/^Enter your password/ {n;n;d;}' file
Enter command below
> login
Command valid
Enter your password at the below prompt
Remember to make sure no-one is looking at your screen
Password accepted
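Since any commonly-available tool is fair game, here is an awk sketch of the same idea; it remembers the line number of the match and skips the line two past it:
awk '/^Enter your password/ { m = NR } !(m && NR == m + 2)' file | {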
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/407888",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/254388/"
]
} |
407,917 | In the following situation:
ls -alh
total 0
drwxrwx--- 1 user http 20 Nov 30 08:08 .
drwxrws--- 1 user http 310 Nov 30 08:07 ..
drwx------ 1 http http 10 Nov 30 08:08 empty-subdir
drwx------ 1 http http 12 Nov 30 08:08 non-empty-subdir
where two subdirectories (not owned by me) exist, which I list as:
sudo ls empty-subdir -alh
total 0
drwx------ 1 http http 10 Nov 30 08:08 .
drwxrwx--- 1 user http 20 Nov 30 08:08 ..
sudo ls non-empty-subdir -alh
total 0
drwx------ 1 http http 12 Nov 30 08:08 .
drwxrwx--- 1 user http 20 Nov 30 08:08 ..
drwx------ 1 http http 0 Nov 30 08:08 subdir
The difference between the two subdirectories being that the non-empty non-empty-subdir contains a folder. My question is whether it is by design that, when trying to remove the subdirectories with rm -rf , I get these results:
$ rm empty-subdir -rf
$ rm non-empty-subdir -rf
rm: cannot remove 'non-empty-subdir': Permission denied
$ ls -alh
total 0
drwxrwx---+ 1 user http 10 Nov 30 08:14 .
drwxrws---+ 1 user http 310 Nov 30 08:07 ..
drwx------+ 1 http http 12 Nov 30 08:08 non-empty-subdir
It seems that the user with write permissions to a directory is allowed to remove an entry for a file, or an empty subdirectory of some other user, but not a non-empty subdirectory. An ideal answer to this question would provide information such as: a confirmation that the outlined behaviour is reproducible on other machines (and not mere quirks of my screwed up box); a rationale to explain that behaviour (e.g. are there use cases?); an overview of whether there are differences between systems (BSD, Linux....). Update: With respect to the comment by Ipor Sircer, I did retest the scenario without any ACL features and it is the same. I therefore modified the question to remove the + es from the listings, so as not to give rise to the idea that the behaviour might be related to ACLs. | One can only remove a directory (with the rmdir() system call) if it's empty. rm -r dir removes the directory and all the files in it, starting with the leaves of the directory tree and walking its way up to the root ( dir ). To remove a file (with rmdir() for directories and unlink() for other types of files, or *at() variants), what matters is not the permission of the file itself but those of the directory you're removing the file from (beware: the t bit in the permissions, as on /tmp , adds further complications to that). First of all, you're not really removing the file , you're unlinking it from a directory (and when it's the last link that you're removing, the file ends up being deleted as a consequence), that is, you're modifying the directory, so you need modifying (write) permissions to that directory. The reason you can't remove non-empty-dir is that you can't unlink subdir from it first, as you don't have the right to modify non-empty-dir . You would have the right to unlink non-empty-dir from your home directory as you have write/modification permission to that one, only you can't remove a directory that is not empty. In your case, as noted by @PeterCordes in comments, the rmdir() system call fails with an ENOTEMPTY (Directory not empty) error code, but since you don't have read permission to the directory, rm cannot even find out which files and directories (including subdir ) it would need to unlink from it to be able to empty it (not that it could unlink them if it knew, as it doesn't have write permissions).
You can also get into situations where rm could remove a directory if only it could find out which files are in it, like in the case of a write-only directory:
$ mkdir dir
$ touch dir/file
$ chmod a=,u=wx dir
$ ls -ld dir
d-wx------ 2 me me 4096 Nov 30 19:43 dir/
$ rm -rf dir
rm: cannot remove 'dir': Permission denied
Still, I am able to remove it as I happen to know it only contains one file, file :
$ rm dir/file
$ rmdir dir
$
Also note that with modern Unices you could rename that non-empty-dir , but on some like Linux or FreeBSD (but not Solaris), not move it to a different directory, even if you also had write permission to that directory, as (I think and for Linux, as suggested by the comment for the relevant code ) doing so would involve modifying non-empty-dir (the .. entry in it would point to a different directory). One could argue that removing your empty-dir also involves removing the .. and . entries in it, so modifying it, but still, the system lets you do that.
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/407917",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24394/"
]
} |
407,921 | I have a keyboard shortcut that is mapped to running the command gnome-terminal (I'm on Fedora 27). I wish that this key would open a new terminal if no terminal is open, but open a new tab on the existing terminal window if a terminal window is already open. What would be the command to do that? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/407921",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237402/"
]
} |
407,934 | I have a file with blank lines at the end of the file. Can I use grep to count the number of blank lines at the end of the file, with the file name being passed as a variable in the script? | If the blank lines are only at the end: grep -c '^$' myFile or: grep -cx '' myFile
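If the file may also contain blank lines elsewhere, here is a sketch that counts only the trailing ones (it assumes GNU tac is available, and that $file holds the file name):
tac "$file" | awk '/./ { exit } { n++ } END { print n + 0 }' | {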
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/407934",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/263122/"
]
} |
408,006 | The command echo $(echo \\\z) is from the book. I don't understand why it outputs \z ; I think it should output z | \ is a quoting operator for the shell. When outside other kinds of quotes, \x is like 'x' (except when x is the newline character). So: echo \\\x is like: echo '\''x' Or echo '\x' So echo is invoked with two arguments, one that contains echo (its name) and one that contains \x . Per POSIX , the behaviour of echo when any of its operands (arguments other than the command name) contain a backslash or its first operand is -n is unspecified. You'll find that the behaviour varies even outside of that. Per the UNIX specification (with XSI applying at the link above), the behaviour of echo '\z' is also unspecified as \z is not one of the standard escape sequences. With a UNIX compliant echo , like the echo builtin of bash -o posix -O xpg_echo , you can however be guaranteed that: echo '\\z' same as: echo \\\\z or: echo \\\\\z will output \z<newline> . Other than that, you won't get any guarantee from POSIX. Implementations could output z or \z or no such escape sequence: \z or a NUL character... In practice however, I don't know of any implementation that understands \z as an escape sequence, so I find that all the ones I tried output \z<newline> there. If you want a portable and reliable way to output text, don't use echo , use printf instead .
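For instance, this is guaranteed to print \z followed by a newline on any POSIX system:
printf '%s\n' '\z' | {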
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/408006",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145001/"
]
} |
408,012 | I'm trying to reuse an old Asus EEE "a-la-RaspberryPi", as a small, single task unit. Since I am familiar with Fedora, I have installed Fedora 27 on it via VNC (the default graphical installer is too big for the EEE screen), but I did not install any desktop environment (I don't need it, plus even LXDE would have required more disk space than the entire size of the EEE). During the installation I was able to connect to the Internet wirelessly (after having configured from the installer the SSID and PWD for my home network). However, with the installed system, I am not : [mac@octoserver ~]$ nmclienp3s0: connected to enp3s0 "Qualcomm Atheros Attansic L2 Fast Ethernet" ethernet (atl2), 00:1F:C6:ED:3B:D9, hw, mtu 1500 ip4 default inet4 192.168.0.131/24 inet6 fe80::26b6:a207:c3f7:8c89/64lo: unmanaged "lo" loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536wlp1s0: unmanaged "Qualcomm Atheros AR242x / AR542x Wireless Network Adapter (PCI-Express) (AW-GE780 802.11bg Wireless Mini PCIe Card)" wifi (ath5k), 00:15:AF:92:4E:2E, plugin missing, hw, mtu 1500DNS configuration: servers: 192.168.0.1 interface: enp3s0 The plugin missing is in bright red in the console, but I haven't been able to google or dnf anything up on what that is supposed to be... ath5k seems to be an enabled module in my kernel, and I could not find any "plugin" package for nmcli in the repos... Also confusing for me is the following: [mac@octoserver ~]$ nmcli radio wifienabled[mac@octoserver ~]$ nmcli device wifi list[mac@octoserver ~]$ The radio [of a device, I guess] is ON but there is no device?! Any idea on what is going on? For completeness: [mac@octoserver ~]$ lspci00:00.0 Host bridge: Intel Corporation Mobile 915GM/PM/GMS/910GML Express Processor to DRAM Controller (rev 04)00:02.0 VGA compatible controller: Intel Corporation Mobile 915GM/GMS/910GML Express Graphics Controller (rev 04)00:02.1 Display controller: Intel Corporation Mobile 915GM/GMS/910GML Express Graphics Controller (rev 04)00:1b.0 Audio device: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) High Definition Audio Controller (rev 04)00:1c.0 PCI bridge: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) PCI Express Port 1 (rev 04)00:1c.1 PCI bridge: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) PCI Express Port 2 (rev 04)00:1c.2 PCI bridge: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) PCI Express Port 3 (rev 04)00:1d.0 USB controller: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) USB UHCI #1 (rev 04)00:1d.1 USB controller: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) USB UHCI #2 (rev 04)00:1d.2 USB controller: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) USB UHCI #3 (rev 04)00:1d.3 USB controller: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) USB UHCI #4 (rev 04)00:1d.7 USB controller: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) USB2 EHCI Controller (rev 04)00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev d4)00:1f.0 ISA bridge: Intel Corporation 82801FBM (ICH6M) LPC Interface Bridge (rev 04)00:1f.2 IDE interface: Intel Corporation 82801FBM (ICH6M) SATA Controller (rev 04)00:1f.3 SMBus: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) SMBus Controller (rev 04)01:00.0 Ethernet controller: Qualcomm Atheros AR242x / AR542x Wireless Network Adapter (PCI-Express) (rev 01)03:00.0 Ethernet controller: Qualcomm Atheros Attansic L2 Fast Ethernet (rev a0) | You are probably missing the NetworkManager-wifi package. 
# dnf install NetworkManager-wifi To apply changes, restart NetworkManager.service : # systemctl restart NetworkManager | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/408012",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128447/"
]
} |
408,026 | Is it possible to recover files that have been rm'ed from an XFS filesystem? How can I recover any files assuming they haven't been overwritten? Edit: The existing questions regarding this topic are all assuming an ext{2,3,4} file system. I am looking for an XFS solution. | I deleted a python file that I knew had a specific, fairly unique string in it. So I did the following:
$ sudo strings -td /dev/mapper/vg01-lv_opt | grep "class Team("
8648365228 class Team(object):
26133651864 class Team(Account):
26134147482 class Team(Account):
I now had the offsets in the lvol where that string was. I then did a dd around that area to recover the text: sudo dd if=/dev/mapper/vg01-lv_opt of=/tmp/recover skip=26134140000 count=1M bs=1 ...then I brought that smaller file into vi and trimmed around the file, and voila! I had my content back.
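One precaution worth taking before any of this: stop new writes from recycling the freed blocks, for example by remounting the filesystem read-only (assuming nothing has it open for writing):
sudo mount -o remount,ro /dev/mapper/vg01-lv_opt | {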
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/408026",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139546/"
]
} |
408,038 | I am using Fedora on a computer with a touch screen. When I was using Fedora 26 or older, the on-screen keyboard always popped up when I was using the touch screen (for example, when I selected the gedit window, the on-screen keyboard showed, and I had to close it manually). I found that caribou is the keyboard which annoys me, and this answer helped me (by disabling caribou). However, after upgrading to Fedora 27 (actually I installed it from scratch), disabling caribou no longer works, and the keyboard pops up whenever I am using the touch screen in gedit (and other applications). What should I do to disable it? | You may use a GNOME shell extension called " Block Caribou ". It claims to block caribou (the on screen keyboard) from popping up when you use a touchscreen in GNOME shell v3.26. Edit: for later versions, and if Block Caribou doesn't seem to help, try the newer shell extension Block Caribou 36 . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/408038",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/235621/"
]
} |
408,072 | I don't necessarily want the answer, but if someone could point me to some literature or examples, I would like to figure it out. When I run the script I receive an error: Syntax error near unexpected token fi I have deduced that my problem is in my if statement by making my if statements comments and adding echo "$NAME" , which displays the names in /etc/ . When I make changes, remove the # from if and fi and add # to wc -c "$NAME" , I receive the syntax error I listed above. I have added ; between ] then. I have also moved then to the next line with no resolution.
#!/bin/bash
for NAME in /etc/*
do
    if [ -r "$NAME" -af "$NAME" ]
    then
        wc -c "$NAME"
    fi
done | Keywords like if , then , else , fi , for , case and so on need to be in a place where the shell expects a command name. Otherwise they are treated as ordinary words. For example, echo if just prints if , it doesn't start a conditional instruction. Thus, in the line if [ -r "$NAME" -af "$NAME" ] then the word then is an argument of the command [ (which it would complain about if it ever got to run). The shell keeps looking for the then , and finds a fi in command position. Since there's an if that's still looking for its then , the fi is unexpected, there's a syntax error. You need to put a command terminator before then so that it's recognized as a keyword. The most common command terminator is a line break, but before then , it's common to use a semicolon (which has exactly the same meaning as a line break). if [ -r "$NAME" -af "$NAME" ]; then or
if [ -r "$NAME" -af "$NAME" ]
then
Once you fix that you'll get another error from the command [ because it doesn't understand -af . You presumably meant if [ -r "$NAME" -a -f "$NAME" ]; then Although the test commands look like options, you can't bundle them like this. They're operators of the [ command and they need to each be a separate word (as do [ and ] ). By the way, although [ -r "$NAME" -a -f "$NAME" ] works, I recommend writing either [ -r "$NAME" ] && [ -f "$NAME" ] or [[ -r $NAME && -f $NAME ]] It's best to keep [ … ] conditionals simple because the [ command can't distinguish operators from operand easily. If $NAME looks like an operator and appears in a position where the operator is valid, it could be parsed as an operator. This won't happen in the simple cases seen in this answer, but more complex cases can be risky. Writing this with separate calls to [ and using the shell's logical operators avoids this problem. The second syntax uses the [[ … ]] conditional construct which exists in bash (and ksh and zsh, but not plain sh). This construct is special syntax, whereas [ is parsed like any other command, thus you can use things like && inside and you don't need to quote variables except in arguments to some string operators ( = , == , != , =~ ) (see When is double-quoting necessary? for details).
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/408072",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/263688/"
]
} |
408,154 | I was trying to edit the crontab in the terminal, and I accidentally typed crontab -r instead of crontab -e . Who would figure such a dangerous command would sit right next to the letter to edit the crontab? Moreover, I am still trying to figure out how crontab -r does not ask you for confirmation. Regardless of my lack of credibility as to how this is possible, my question is: am I able to recover the lost crontab? | You can find your cron jobs in the log if they have executed before. Check /var/log/cron . You do not have any recovery option other than third-party recovery tools.
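To pull the old job lines back out of the log, something along these lines should work (log paths vary by distro; /var/log/cron on RHEL-type systems, /var/log/syslog on Debian/Ubuntu):
# RHEL/CentOS
sudo grep "($USER) CMD" /var/log/cron
# Debian/Ubuntu
sudo grep CRON /var/log/syslog | {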
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/408154",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136407/"
]
} |
408,187 | TL;DR: Is it possible to rotate by 90 degrees the feed of a built-in screen webcam? I have 2 screen monitors, one in landscape and one in portrait mode. The webcam is on the "portrait" monitor; the "landscape" monitor doesn't have a rotation capability (so I cannot switch them). The display on the second screen is rotated using xrandr, via arandr. However, the webcam feed is still filming as if there was no physical rotation, which is a problem for videoconferencing. I would like a way to tweak the video feed at driver level so that I can use it in other applications. I have tried to use v4l2-ctl but I cannot find a "rotate" feature (while there are many configuration options for contrast/hue/etc.). My distribution is Archlinux but I don't think it's relevant here. The portrait screen is a philips 271P4Q. Lsusb output for the integrated webcam: Bus 001 Device 005: ID 04ca:7054 Lite-On Technology Corp. If there's nothing to do yet, I would also like to know whom I can report this to, to improve the situation (Xorg developers? Linux kernel devs?). Thank you for any input on this. | You might be able to do this as described here . Install and modprobe the v4l2loopback module (you may need to compile it) to create a new video device, then copy the webcam video stream to it via ffmpeg : ffmpeg -f v4l2 -i /dev/video0 -vf transpose=1 -f v4l2 /dev/video1
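Putting the steps together (the video_nr parameter is an assumption that pins the loopback device to /dev/video1 so it matches the ffmpeg command; transpose=1 rotates 90 degrees clockwise, transpose=2 counter-clockwise):
sudo modprobe v4l2loopback video_nr=1
ffmpeg -f v4l2 -i /dev/video0 -vf transpose=1 -f v4l2 /dev/video1 | {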
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/408187",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/263801/"
]
} |
408,192 | I find that under my root directory, there are some directories that have the same inode number:
$ ls -aid */ .*/
2 home/ 2 tmp/ 2 usr/ 2 var/ 2 ./ 2 ../ 1 sys/ 1 proc/
I only know that the directories' names are kept in the parent directory, and their data is kept in the inode of the directories themselves. I'm confused here. This is what I think when I trace the pathname /home/user1. First I get into the inode 2 which is the root directory which contains the directory lists. Then I find the name home paired with inode 2. So I go back to the disk to find inode 2? And I get the name user1 here? | They're on different devices. If we look at the output of stat , we can also see the device the file is on:
# stat / | grep Inode
Device: 801h/2049d Inode: 2 Links: 24
# stat /opt | grep Inode
Device: 803h/2051d Inode: 2 Links: 5
So those two are on separate devices/filesystems. Inode numbers are only unique within a filesystem so there is nothing unusual here. On ext2/3/4 inode 2 is also always the root directory , so we know they are the roots of their respective filesystems. The combination of device number + inode is likely to be unique over the whole system. (There are filesystems that don't have inodes in the traditional sense, but I think they still have to fake some sort of a unique identifier in their place anyway.) The device numbers there appear to be the same as those shown on the device nodes, so /dev/sda1 holds the filesystem where / is on:
# ls -l /dev/sda1
brw-rw---- 1 root disk 8, 1 Sep 21 10:45 /dev/sda1
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/408192",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/249654/"
]
} |
408,214 | This is what I need to happen: start process A in the background, wait for x seconds, start process B in the foreground. How can I make the wait happen? I'm seeing that 'sleep' seems to halt everything, and I don't actually want to 'wait' for process A to finish entirely. I've seen some time-based loops but I'm wondering if there's something cleaner. | Unless I'm misunderstanding your question, it can simply be achieved with this short script:
#!/bin/bash
process_a &
sleep x
process_b
(and add an extra wait at the end if you want your script to wait for process_a to finish before exiting). You can even do this as a one-liner, without the need for a script (as suggested by @BaardKopperud): process_a & sleep x ; process_b
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/408214",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/263824/"
]
} |
408,224 | I'm in a situation where I have a 10/100/1000-capable PHY cabled only to support 10/100. The default behaviour is to use the autonegociation to find the best mode.At the other end, using a gigabit capable router ends in a non-working interface. I guess autonegociation never converges.I've heard some people tried with a 100Mbps switch and it works fine. I'm able to get it working using ethtool but this is quite frustrating : ethtool -s eth1 duplex full speed 100 autoneg off What I'd like to do is to keep the autonegociation but withdraw 1000baseT/Full from the choices so that it ends up running seemlessly in 100Mbps. Any way to achieve that using ethtool or kernel options ? (didn't find a thing on my 2.6.32 kernel ...) (Let's just say some strange dude comes to me with a 10Mbps switch, I need this eth to work with this switch from another century) | The thing with autonegotiation is that if you turn it off from one end, the other side can detect the speed but not the duplex mode, which defaults to half. Then you get a duplex mismatch, which is almost the same as the link not working. So if you disable autonegotiation on one end, you practically have to disable it on the other end too. (Then there's the thing that autonegotiation doesn't actually test the cable, just what the endpoints can do. This can result in a gigabit link over a cable that only has two pairs, and cannot support 1000Base-T.) But ethtool seems capable of telling the driver what speed/duplex modes to advertise. ethtool -s eth1 advertise 0x0f would allow all 10/100 modes but not 1G. advertise N Sets the speed and duplex advertised by autonegotiation. The argument is a hexadecimal value using one or a combination of the following values: 0x001 10baseT Half 0x002 10baseT Full 0x004 100baseT Half 0x008 100baseT Full 0x010 1000baseT Half (not supported by IEEE standards) 0x020 1000baseT Full | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/408224",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72643/"
]
} |
408,237 | I was converting a timestamp here , when I read the following: It should also be pointed out (thanks to the comments from visitors to this site) that this point in time technically does not change no matter where you are located on the globe. This is very useful to computer systems for tracking and sorting dated information in dynamic and distributed applications both online and client side. I could not understand what exactly this means: is the unix timestamp an absolute measure? That is: suppose I have a client in the USA and this client connects to a server located in Russia. Is the Unix timestamp exactly the same at the same moment for both the client and the server? I'm a little confused... | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/408237",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/263839/"
]
} |
408,242 | I have a bunch of devices all running a similar cron job. Currently I'm setting the cron minute and hour to a random number (that way they don't all run at once). $random_minute $random_hour * * * sudo /bin/script I want to keep this pattern of making each device random, but I also have a script which needs to be run every 6 hours. How can I combine something like the above with */6 ? | There aren't that many hours in the day, so why not just 17 3,9,15,21 * * * sudo /bin/script to run at 03:17 and every 6 hours hence? The alternatives would involve adding a sleep to the program itself: 0 */6 * * * (sleep 11820; sudo /bin/script) or running the script more often (say, hourly), and having the script just exit if the actual job was executed within the last < 6 hours.
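If you generate each device's crontab line from a script, here is a sketch that keeps the per-device randomness while still hitting every 6 hours:
min=$((RANDOM % 60))
hr=$((RANDOM % 6))
echo "$min $hr,$((hr+6)),$((hr+12)),$((hr+18)) * * * sudo /bin/script" | {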
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/408242",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
408,308 | I noticed when using virsh that VMs are referred to as "domains". Why are they called domains instead of Virtual Machines?
$ virsh
virsh # help
... Domain Monitoring (help keyword 'monitor'):
 domblkerror Show errors on block devices
 domblkinfo domain block device size information
 domblklist list all domain blocks
 domblkstat get device block stats for a domain
 domcontrol domain control interface state
 domif-getlink get link state of a virtual interface
 domifaddr Get network interfaces' addresses for a running domain
 domiflist list all domain virtual interfaces
 domifstat get network interface stats for a domain
 dominfo domain information
 dommemstat get memory statistics for a domain
 domstate domain state
 domstats get statistics about one or multiple domains
 domtime domain time
 list list domains
...
virsh # list --all
 Id Name State
----------------------------------------------------
 - centos_vagrant_test_test_vm shut off
 - collectd01 shut off
 - grafana01 shut off
 - influxdb01 shut off
 - JobDBWin7_Stable shut off
 - OpenWRT_Red shut off | They're not kvm exclusive terminology (xen also refers to machines as domains). A hypervisor is a rough equivalent to domain zero, or dom0 , which is the first system initialized on the kernel and has special privileges. Other domains started later are called domU and are the equivalent to a guest system or virtual machine. The reason is probably that both are very similar as they are executed on the kernel that handles them.
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/408308",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3570/"
]
} |
408,338 | When I log in to an SSH server/host I get asked whether the hash of its public key is correct, like this:
# ssh 1.2.3.4
The authenticity of host '[1.2.3.4]:22 ([[1.2.3.4]:22)' can't be established.
RSA key fingerprint is SHA256:CxIuAEc3SZThY9XobrjJIHN61OTItAU0Emz0v/+15wY.
Are you sure you want to continue connecting (yes/no)? no
Host key verification failed.
In order to be able to compare, I used this command on the SSH server previously and saved the results to a file on the client:
# ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub
2048 f6:bf:4d:d4:bd:d6:f3:da:29:a3:c3:42:96:26:4a:41 /etc/ssh/ssh_host_rsa_key.pub (RSA)
For some great reason (no doubt) one of these commands uses a different (newer?) way of displaying the hash, thereby helping man-in-the-middle attackers enormously because it requires a non-trivial conversion to compare these. How do I compare these two hashes, or better: force one command to use the other's format? The -E option to ssh-keygen is not available on the server. | ssh
# ssh -o "FingerprintHash sha256" testhost
The authenticity of host 'testhost (256.257.258.259)' can't be established.
ECDSA key fingerprint is SHA256:pYYzsM9jP1Gwn1K9xXjKL2t0HLrasCxBQdvg/mNkuLg.
# ssh -o "FingerprintHash md5" testhost
The authenticity of host 'testhost (256.257.258.259)' can't be established.
ECDSA key fingerprint is MD5:de:31:72:30:d0:e2:72:5b:5a:1c:b8:39:bf:57:d6:4a.
ssh-keyscan & ssh-keygen
Another approach is to download the public key to a system which supports both MD5 and SHA256 hashes:
# ssh-keyscan testhost >testhost.ssh-keyscan
# cat testhost.ssh-keyscan
testhost ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItb...
testhost ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0U...
testhost ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMKHh...
# ssh-keygen -lf testhost.ssh-keyscan -E sha256
256 SHA256:pYYzsM9jP1Gwn1K9xXjKL2t0HLrasCxBQdvg/mNkuLg testhost (ECDSA)
2048 SHA256:bj+7fjKSRldiv1LXOCTudb6piun2G01LYwq/OMToWSs testhost (RSA)
256 SHA256:hZ4KFg6D+99tO3xRyl5HpA8XymkGuEPDVyoszIw3Uko testhost (ED25519)
# ssh-keygen -lf testhost.ssh-keyscan -E md5
256 MD5:de:31:72:30:d0:e2:72:5b:5a:1c:b8:39:bf:57:d6:4a testhost (ECDSA)
2048 MD5:d5:6b:eb:71:7b:2e:b8:85:7f:e1:56:f3:be:49:3d:2e testhost (RSA)
256 MD5:e6:16:94:b5:16:19:40:41:26:e9:f8:f5:f7:e7:04:03 testhost (ED25519)
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/408338",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115138/"
]
} |
408,346 | sudo apt-get install pppoe will download the pppoe package and install it. Is it possible to just download the pppoe package and not install it with the apt-get command?
wget http://ftp.us.debian.org/debian/pool/main/p/ppp/ppp_2.4.7-1+4_amd64.deb
ppp_2.4.7-1+4_amd64.deb is in the current directory now.
cd /tmp
sudo apt-get install -d ppp
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed: ppp
0 upgraded, 1 newly installed, 0 to remove and 95 not upgraded.
Need to get 0 B/346 kB of archives.
After this operation, 949 kB of additional disk space will be used.
Download complete and in download only mode
There is no ppp_2.4.7-1+4_amd64.deb or ppp-related package in /tmp.
sudo find /tmp -name ppp*
Nothing found. Where is the package ppp after running cd /tmp; sudo apt-get install -d ppp ? | Use --download-only : sudo apt-get install --download-only pppoe This will download pppoe and any dependencies you need, and place them in /var/cache/apt/archives . That way a subsequent apt-get install pppoe will be able to complete without any extra downloads.
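If you instead want just the single .deb dropped into the current directory (without dependencies), apt-get has a separate download verb:
apt-get download pppoe | {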
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/408346",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102745/"
]
} |
408,355 | I am currently trying to run an R script via a shell script. Here is the R script:
test = rnorm(1:100, 1000)
write.csv(test, 'test.csv')
And here is the bash script which calls the R one:
#!/bin/bash -l
#SBATCH --partition=compute
#SBATCH --job-name=test
#SBATCH --mail-type=ALL
#SBATCH [email protected]
#SBATCH --time=00:10:00
#SBATCH --nodes=1
#SBATCH --tasks-per-node=12
#SBATCH --account=myaccount
module purge
module load R
${HOME}/test.R
I think I did everything correctly, but the output returns the following error:
/mydirectory/test.R: line 3: syntax error near unexpected token `('
/mydirectory/test.R: line 3: `test = rnorm(1:100, 1000)'
Why did I get this error? | The problem is that the shell is trying to run your ${HOME}/test.R with the bash interpreter, which cannot understand the R syntax it hits at line 3. Explicitly use the R interpreter you want test.R to run with. Set the interpreter for your R script in test.R as:
#!/usr/bin/env Rscript
test = rnorm(1:100, 1000)
write.csv(test, 'test.csv')
(the module purge and module load R lines belong in the sbatch script, where the shell interprets them). This way, with the interpreter set, you can now run it from the shell script as Rscript ${HOME}/test.R Remember, logging into the R shell and running the commands on it is not the same as trying them out in a shell script; the R shell is different from the bash shell. You need a way to run the commands from the command line directly, without logging into R, and use that same approach in the shell script.
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/408355",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/263944/"
]
} |
408,413 | [4.13.12-1-ARCH with gnome3 and gdm on Xorg] I have already set my VISUAL and EDITOR env-vars to vim . Similarly, I tried SYSTEMD_EDITOR="vim"; export SYSTEMD_EDITOR in my ~/.bashrc, to no avail. When modifying unit files in Arch (systemd) via $ sudo systemctl edit _unit_ I find myself staring at nano . Life is too short and I want vim by all means. How do I do this? | First method: you can add this line to ~/.bashrc :
export SYSTEMD_EDITOR=vim
And then sudo visudo and add this line:
Defaults env_keep += "SYSTEMD_EDITOR"
Start a new bash session to take effect, then run sudo systemctl edit <foo> as usual. The second method is to use update-alternatives : Install your desired editor , e.g. vim.gtk3 :
$ which editor
editor is /usr/bin/editor
$ sudo update-alternatives --install "$(which editor)" editor "$(which vim.gtk3)" 15
Then choose your desired editor :
$ sudo update-alternatives --config editor
There are 7 choices for the alternative editor (providing /usr/bin/editor).
 Selection Path Priority Status
------------------------------------------------------------
 0 /usr/bin/vim.gtk3 50 auto mode
 1 /bin/ed -100 manual mode
* 2 /bin/nano 40 manual mode
 3 /usr/bin/code 0 manual mode
 4 /usr/bin/gedit 5 manual mode
 5 /usr/bin/vim.basic 30 manual mode
 6 /usr/bin/vim.gtk3 50 manual mode
 7 /usr/bin/vim.tiny 15 manual mode
Press <enter> to keep the current choice[*], or type selection number: 6
update-alternatives: using /usr/bin/vim.gtk3 to provide /usr/bin/editor (editor) in manual mode
The third method is to set EDITOR directly at runtime:
sudo EDITOR=vim systemctl edit <foo>
The precedence is: first method > third method > second method. Don't try to set a "GUI" editor such as gedit , because of Why don't gksu/gksudo or launching a graphical application with sudo work with Wayland? and Gedit uses 100% of the CPU while editing files
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/408413",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72707/"
]
} |
408,424 | Some processes seem to eat up the disk space on Linux (Ubuntu 16). I upgraded the disk of a laptop to 400 Gb one month ago. Now I have around 5 Gb of free space. I spent many hours reading different posts on this forum and trying different commands. For example:
sudo du -x -d1 -h /var | sort -hr
297G /var
296G /var/lib
207M /var/cache
154M /var/dell
118M /var/log
59M /var/opt
18M /var/backups
17M /var/tmp
7,9M /var/crash
92K /var/spool
20K /var/www
4,0K /var/snap
4,0K /var/metrics
4,0K /var/mail
4,0K /var/local
I tried to use du , find and Disk Usage Analyzer, but haven't found the issue:
find . -size +1G
find: ‘./.ssh/typos_ssh_keys/id_rsa.pub’: Permission denied
find: ‘./.ssh/typos_ssh_keys/id_rsa’: Permission denied
find: ‘./.local/share/Trash/expunged/3448374582/work/Catalina’: Permission denied
find: ‘./.local/share/Trash/expunged/3448374582/conf/Catalina’: Permission denied
find: ‘./.local/share/jupyter/runtime’: Permission denied
find: ‘./.dbus’: Permission denied
find: ‘./.cache/dconf’: Permission denied
find: ‘./.gvfs’: Permission denied
I was reading that there might be some logs that consume a lot of space, but I have not found such logs. Any help will be highly appreciated. | | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/408424",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264003/"
]
} |
408,425 | :~/$ uname -aLinux hostname 4.9.0-4-rt-amd64 #1 SMP PREEMPT RT Debian 4.9.51-1 (2017-09-28) x86_64 GNU/Linux I think that I have a clear dependency tree without broken packages. :~/$ sudo apt-get checkReading package lists... DoneBuilding dependency tree Reading state information... Done However when I try to install npm , apt-get wants to remove libssl-dev : :~$ sudo apt-get install npmReading package lists... DoneBuilding dependency tree Reading state information... DoneThe following packages were automatically installed and are no longer required: libldns2 libssl-docUse 'sudo apt autoremove' to remove them.The following additional packages will be installed: gyp libjs-inherits libjs-node-uuid libssl1.0-dev libuv1-dev node-abbrev node-ansi node-ansi-color-table node-archy node-async node-balanced-match node-block-stream node-brace-expansion node-builtin-modules node-combined-stream node-concat-map node-cookie-jar node-delayed-stream node-forever-agent node-form-data node-fs.realpath node-fstream node-fstream-ignore node-github-url-from-git node-glob node-graceful-fs node-gyp node-hosted-git-info node-inflight node-inherits node-ini node-is-builtin-module node-isexe node-json-stringify-safe node-lockfile node-lru-cache node-mime node-minimatch node-mkdirp node-mute-stream node-node-uuid node-nopt node-normalize-package-data node-npmlog node-once node-osenv node-path-is-absolute node-pseudomap node-qs node-read node-read-package-json node-request node-retry node-rimraf node-semver node-sha node-slide node-spdx-correct node-spdx-expression-parse node-spdx-license-ids node-tar node-tunnel-agent node-underscore node-validate-npm-package-license node-which node-wrappy node-yallist nodejs nodejs-dev nodejs-docSuggested packages: node-hawk node-aws-sign node-oauth-sign node-http-signature debhelperThe following packages will be REMOVED: libldns-dev libssl-devThe following NEW packages will be installed: gyp libjs-inherits libjs-node-uuid libssl1.0-dev libuv1-dev node-abbrev node-ansi node-ansi-color-table node-archy node-async node-balanced-match node-block-stream node-brace-expansion node-builtin-modules node-combined-stream node-concat-map node-cookie-jar node-delayed-stream node-forever-agent node-form-data node-fs.realpath node-fstream node-fstream-ignore node-github-url-from-git node-glob node-graceful-fs node-gyp node-hosted-git-info node-inflight node-inherits node-ini node-is-builtin-module node-isexe node-json-stringify-safe node-lockfile node-lru-cache node-mime node-minimatch node-mkdirp node-mute-stream node-node-uuid node-nopt node-normalize-package-data node-npmlog node-once node-osenv node-path-is-absolute node-pseudomap node-qs node-read node-read-package-json node-request node-retry node-rimraf node-semver node-sha node-slide node-spdx-correct node-spdx-expression-parse node-spdx-license-ids node-tar node-tunnel-agent node-underscore node-validate-npm-package-license node-which node-wrappy node-yallist nodejs nodejs-dev nodejs-doc npm0 upgraded, 71 newly installed, 2 to remove and 32 not upgraded.Need to get 7,517 kB of archives.After this operation, 25.1 MB of additional disk space will be used.Do you want to continue? 
[Y/n] My sources.list: $ cat /etc/apt/sources.list
deb http://ftp.us.debian.org/debian sid main contrib non-free
deb-src http://ftp.us.debian.org/debian sid main contrib non-free
# 3rd party
# deb http://www.deb-multimedia.org unstable main non-free
deb [arch=amd64] https://download.docker.com/linux/debian stretch stable
# deb-src [arch=amd64] https://download.docker.com/linux/debian stretch stable
deb http://http.us.debian.org/debian/ stretch main non-free contrib
Any ideas why? Do you think I should report a bug? | You're not doing anything wrong: npm depends on node-gyp, which depends on nodejs-dev, which depends on libssl1.0-dev, which conflicts with libssl-dev. Thus it is currently not possible to have npm and libssl-dev installed simultaneously in Debian unstable; this is filed as bug #850660. There's not much you can do about it, apart from subscribing to the bug to be informed of any change to the situation...
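For reference, a sketch of how to trace such a conflict yourself with apt-cache (the package names are the ones from the chain above):
apt-cache depends npm | grep Depends            # lists npm's direct dependencies, including node-gyp
apt-cache depends nodejs-dev | grep Depends     # shows the dependency on libssl1.0-dev
apt-cache show libssl1.0-dev | grep Conflicts   # shows the conflict with libssl-dev
| {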
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/408425",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/213080/"
]
} |
408,429 | If I run "zypper up" or "apt-get update", the repository data is fetched in linear order. The big question: why? Why can't we speed up the update process by starting all the repository data downloads in parallel? I am not talking about package updates, just repo info. | In my view it is because it is not necessary. Currently the typical update processes (apt, yum, etc.) are not bandwidth limited in general. The fraction of time of the update process that is spent downloading repository files or packages is either not significant (seconds) or may not be significantly improved by adding parallelisation [since if bandwidth is a problem, parallelisation may make it worse]. There are other limitations. Apt, for example, does not even support two simultaneous operations, whereas yum or emerge does. These limitations may exist to limit complexity, or simply because they're not really an issue for everyday users and sysadmins. Complexity and performance require effort, which means the effort needs to be justified before it is spent. However, if you really want to pursue this, it is generally possible: https://github.com/ilikenwf/apt-fast http://www.ethicalhackx.com/speed-apt-get-update-parallel-downloads/
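If you want to check the claim above on your own machine, timing a refresh is enough to see whether the metadata fetch is even worth parallelising:
time sudo apt-get update     # Debian/Ubuntu; wall-clock time is printed at the end
time sudo zypper refresh     # openSUSE equivalent
| {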
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/408429",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/261470/"
]
} |
408,431 | Please consider this code:
mysql -u root -p << MYSQL
    drop user '${test}'@'localhost';
    drop database ${test};
    create user '${test}'@'localhost' identified by '${psw}';
    create database ${test};
    GRANT ALL PRIVILEGES ON ${test}.* TO ${test}@localhost;
MYSQL
The code outputs the following error for the first command drop user '${test}'@'localhost'; :
ERROR 1396 (HY000): Operation DROP USER failed for '${test}'@'localhost'
My question: how could I make the program ignore this error if it appears, so that execution doesn't break? For example, how could I make sure the program behaves this way: "Drop if you already have a user by the value of ${test}, and if you don't, just continue to create it". Of course, the same question goes for the second drop command dealing with the DB instance. | Use the conditional forms of those statements. DROP DATABASE IF EXISTS and CREATE DATABASE IF NOT EXISTS have been available for a long time, and since MySQL 5.7 (MariaDB 10.1.3) there are also DROP USER IF EXISTS and CREATE USER IF NOT EXISTS. With IF EXISTS, dropping a user or database that isn't there produces only a warning instead of an error, so the rest of the heredoc keeps running, which is exactly the "drop it if it's there, otherwise just carry on and create it" behaviour you describe. If your server is older than 5.7 and you can't use IF EXISTS on the user statements, the mysql client's --force (-f) option makes it continue executing after SQL errors instead of aborting.
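A sketch of the heredoc rewritten this way (assuming MySQL 5.7+ or MariaDB 10.1.3+ for the user statements; drop --force if you prefer unrelated errors to still abort the batch):
mysql -u root -p --force << MYSQL
    drop user if exists '${test}'@'localhost';
    drop database if exists ${test};
    create user if not exists '${test}'@'localhost' identified by '${psw}';
    create database if not exists ${test};
    grant all privileges on ${test}.* to '${test}'@'localhost';
MYSQL
| {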
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/408431",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/258010/"
]
} |
408,436 | I have a big text file (263 lines) that contains lines something like these:
image_name.jpg: *lots of spaces* JPEG image data, JFIF standard 1.01, resolution (DPI), density 96x96, segment length 16, baseline, precision 8, 1024x768, frames 3 \n
image_name.jpg: *lots of spaces* JPEG image data, JFIF standard 1.01, aspect ratio, density 1x1, segment length 16, comment: "CREATOR: gd-jpeg v1.0 (using IJG JPEG v62), quality = 70", progressive, precision 8, 960x540, frames 3 \n
image_name.png: *lots of spaces* PNG image data, 752 x 760, 8-bit/color RGBA, non-interlaced \n
How can I remove all the text between : and \n at once? | With cut:
cut -d: -f1 file
With sed:
sed -e 's/:.*//' file
With awk:
awk -F: '{print $1}' file
With GNU grep or many BSD greps (but not POSIX grep):
grep -o '^[^:]*' file
cut is the shortest one. If you want to modify the file in-place, your sed may have an option -i that does so - but how exactly that works depends on your platform. Otherwise, > file2 && mv file2 file on the end of any of them will work. Alternatively, with ed , in-place everywhere: printf ',s/:.*/\nw\n' | ed file
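If you do edit in place, keeping a backup first is cheap insurance; to the best of my knowledge this form of -i works with both GNU sed and BSD/macOS sed:
sed -i.bak 's/:.*//' file   # rewrites file, keeps the original as file.bak
diff file.bak file          # eyeball the change
rm file.bak                 # drop the backup once you're happy
| {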
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/408436",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174627/"
]
} |
408,536 | I have 4 files. I need to check that all the files have an equal number of lines. If the numbers of lines differ, I need to detect this and output, for example:
#file1 - 10 lines, file2 - 9 lines, file3 - 10 lines, file4 - 10 lines
Line are miss matched
Number of lines 10 = 9 = 10 = 10
If they are equal, I want to merge the files line by line as follows. Files:
#file1
10
12
11
#file2
Arun
kamal
babu
#file3
300
200
400
#file4
spot1
spot4
spot5
Output:
Set1
10
Arun
300
spot1

Set2
12
kamal
200
spot4

Set3
11
babu
400
spot5
My code:
#id_name=`cat file2`
echo $id_name
id_list=`cat file1`
echo $id_list
#id_count=`cat file3`
echo $id_count
id_spot=`cat spot_list`
echo $id_spot
SS=`cat id_list | wc -l`
DS=`cat id_name | wc -l`
SF=`cat id_count | wc -l`
DF=`cat id_spot | wc -l`
if [ $SS == $DS == $SF == $DF ]
 then
   echo " Line are matched"
   echo " Total line $SS"
   for i j in $id_list $id_name
   do
     for a b in $id_count $id_spot
     do
       k = 1
       echo " Set$k" $i $j $a $b
     done
   done
else
   echo " Line are Miss matched"
   echo " Total line $SS = $DS = $SF = $DF"
fi | With a really straightforward approach:
#!/usr/bin/env bash

SS=$(wc -l < file1)
DS=$(wc -l < file2)
SF=$(wc -l < file3)
DF=$(wc -l < file4)

if [[ $SS -eq $DS && $DS -eq $SF && $SF -eq $DF ]]; then
    echo "Lines are matched"
    echo "Total number of lines: $SS"
    num=1
    while (( num <= SS )); do
        echo "Set$num"
        tail -n +$num file1 | head -n 1
        tail -n +$num file2 | head -n 1
        tail -n +$num file3 | head -n 1
        tail -n +$num file4 | head -n 1
        ((num++))
        echo
    done
else
    echo "Line are miss matched"
    echo "Number of lines $SS = $DS = $SF = $DF"
fi
It is not very efficient as it calls tail 4*number_of_lines times, but it is straightforward. Another approach is to replace the while loop with awk:
awk '{
    printf("\nSet%s\n", NR)
    print
    if( getline < "file2" ) print
    if( getline < "file3" ) print
    if( getline < "file4" ) print
}' file1
To join files line by line, the paste command is very useful. You can use this instead of the while loop:
paste -d$'\n' file1 file2 file3 file4
Or maybe a little less obvious:
{ cat -n file1 ; cat -n file2 ; cat -n file3; cat -n file4; } | sort -n | cut -f2-
That will output the lines but with no formatting (no Set1, Set2, newlines, ...), so you have to format it afterwards with awk, for example:
awk '{
    if ((NR-1)%4 == 0)
        printf("\nSet%s\n", (NR+3)/4)
    print
}' < <(paste -d$'\n' file1 file2 file3 file4)
Some final notes: Do not use uppercase variables as they could collide with environment and internal shell variables. Do not use echo "$var" | cmd or cat file | cmd when you can redirect input: cmd <<< "$var" or cmd < file . You can have only one variable name in a for loop: for i in ... is valid, whereas for i j in ... is not. It is better to use [[ ]] instead of [ ] for testing, see this answer. There are a lot of ways to do this; it's up to you which approach you choose, but be aware of the efficiency differences. Results of time , tested on files with 10000 lines:
#first approach
real 0m45.387s
user 0m5.904s
sys 0m3.836s
#second approach - significantly faster
real 0m0.086s
user 0m0.024s
sys 0m0.040s
#third approach - very close to second approach
real 0m0.074s
user 0m0.016s
sys 0m0.036s
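Before any merging, the four counts can also be compared in one shot; this sketch simply asks whether they collapse to a single unique value:
if [ "$( { wc -l < file1; wc -l < file2; wc -l < file3; wc -l < file4; } | sort -u | wc -l )" -eq 1 ]; then
    echo "Lines are matched"
else
    echo "Line are miss matched"
fi
| {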
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/408536",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240776/"
]
} |
408,543 | What is the best practice for returning many values from a bash function? Example 1, function script:
function mysqlquery {
  local dbserver='localhost'
  local dbuser='user'
  local dbpass='pass'
  local db='mydb'
  mysql -h "$dbserver" -u "$dbuser" -p "$dbpass" --skip-column-names --raw -e "$*" "$db"
  if [ $? -ne 0 ]; then
    return 1
  fi
}
Source script:
for XY in $(mysqlquery "select XY from ABC where DEF = 123" 2>/dev/null); do
  dosomethingwith $XY
done
if mysqlquery "select XY from ABC where DEF = 123" 2>/dev/null; then
  echo true
fi
Example 2, function script:
function mysqlquery {
  local dbserver='localhost'
  local dbuser='user'
  local dbpass='pass'
  local db='mydb'
  result=$(mysql -h "$dbserver" -u "$dbuser" -p "$dbpass" -e "$*" "$db" 2>/dev/null)
  if [ $? -ne 0 -o -z "$result" ]; then
    return 1
  fi
}
Source script:
result=$(mysqlquery "select XY from ABC where DEF = 123" 2>/dev/null)
for XY in $result; do
  dosomethingwith $XY
done
if mysqlquery "select XY from ABC where DEF = 123" 2>/dev/null; then
  echo true
fi
Or are there more approaches to returning multiple pieces of information (much more than a single int value)? | Yes, bash 's return can only return numbers, and only integers between 0 and 255. For a shell that can return anything (lists of things), you can look at es :
$ es -c "fn f {return (a 'b c' d \$*)}; printf '%s\n' <={f x y}"
a
b c
d
x
y
Now, in Korn-like shells like bash , you can always return the data in a pre-agreed variable. And that variable can be of any type supported by the shell. For bash , that can be scalar, sparse arrays (associative arrays with keys restricted to positive integers) or associative arrays with non-empty keys (neither key nor values can contain NUL characters). See also zsh with normal arrays and associative arrays without those restrictions. The equivalent of the f es function above could be done with:
f() {
  reply=(a 'b c' d "$@")
}
f
printf '%s\n' "${reply[@]}"
Now, mysql queries generally return tables, that is two-dimensional arrays. The only shell that I know that has multi-dimensional arrays is ksh93 (like bash it doesn't support NUL characters in its variables though). ksh also supports compound variables that would be handy to return tables with their headers. It also supports passing variables by reference. So, there, you can do:
function f {
  typeset -n var=$1
  var=(
    (foo bar baz)
    (1 2 3)
  )
}
f reply
printf '%s\n' "${reply[0][1]}" "${reply[1][2]}"
Or:
function f {
  typeset -n var=$1
  var=(
    (firstname=John lastname=Smith)
    (firstname=Alice lastname=Doe)
  )
}
f reply
printf '%s\n' "${reply[0].lastname}"
Now, to take the output of mysql and store that in some variables, we need to parse that output, which is text with columns of the table separated by TAB characters and rows separated by NL, plus some encoding for the values to allow them to contain both NL and TAB. Without --raw , mysql would output a NL as \n , a TAB as \t , a backslash as \\ and a NUL as \0 . ksh93 also has read -C that can read text formatted as a variable definition (not very different from using eval though), so you can do:
function mysql_to_narray {
  awk -F '\t' -v q="'" '
    function quote(s) {
      gsub(/\\n/, "\n", s)
      gsub(/\\t/, "\t", s)
      gsub(/\\\\/, "\\", s)
      gsub(q, q "\\" q q, s)
      return q s q
    }
    BEGIN{print "("}
    {
      print "("
      for (i = 1; i <= NF; i++) print " " quote($i)
      print ")"
    }
    END {print ")"}'
}
function query {
  typeset -n var=$1
  typeset db=$2
  shift 2
  typeset -i n=0
  typeset IFS=' '
  typeset credentials=/path/to/file.my # not password on the command line!
  set -o pipefail
  mysql --defaults-extra-file="$credentials" --batch \
    --skip-column-names -e "$*" "$db" |
    mysql_to_narray |
    read -C var
}
To be used as
query myvar mydb 'select * from mytable' || exit
printf '%s\n' "${myvar[0][0]}"
...
Or for a compound variable:
function mysql_to_array_of_compounds {
  awk -F '\t' -v q="'" '
    function quote(s) {
      gsub(/\\n/, "\n", s)
      gsub(/\\t/, "\t", s)
      gsub(/\\\\/, "\\", s)
      gsub(q, q "\\" q q, s)
      return q s q
    }
    BEGIN{print "("}
    NR == 1 {
      for (i = 1; i <= NF; i++) header[i] = $i
      next
    }
    {
      print "("
      for (i = 1; i <= NF; i++) print " " header[i] "=" quote($i)
      print ")"
    }
    END {print ")"}'
}
function query {
  typeset -n var=$1
  typeset db=$2
  shift 2
  typeset -i n=0
  typeset IFS=' '
  typeset credentials=/path/to/file.my # not password on the command line!
  set -o pipefail
  mysql --defaults-extra-file="$credentials" --batch \
    -e "$*" "$db" |
    mysql_to_array_of_compounds |
    read -C var
}
To be used as:
query myvar mydb 'select "First Name" as firstname, "Last Name" as lastname from mytable' || exit
printf '%s\n' "${myvar[0].firstname}"
Note that the header names ( firstname , lastname above) have to be valid shell identifiers. In bash or zsh or yash (though beware array indices start at 1 in zsh and yash and only zsh can store NUL characters), you could always return one array per column, by having awk generate the code to define them:
query() {
  typeset db="$1"
  shift
  typeset IFS=' '
  typeset credentials=/path/to/file.my # not password on the command line!
  set -o pipefail
  typeset output
  output=$(
    mysql --defaults-extra-file="$credentials" --batch \
      -e "$*" "$db" |
      awk -F '\t' -v q="'" '
        function quote(s) {
          gsub(/\\n/, "\n", s)
          gsub(/\\t/, "\t", s)
          gsub(/\\\\/, "\\", s)
          gsub(q, q "\\" q q, s)
          return q s q
        }
        NR == 1 {
          for (n = 1; n <= NF; n++) column[n] = $n "=("
          next
        }
        {
          for (i = 1; i < n; i++) column[i] = column[i] " " quote($i)
        }
        END {
          for (i = 1; i < n; i++) print column[i] ") "
        }'
  ) || return
  eval "$output"
}
To be used as:
query mydb 'select "First Name" as firstname, "Last Name" as lastname from mytable' || exit
printf '%s\n' "${firstname[1]}"
Add a set -o localoptions with zsh or local - with bash 4.4+ before the set -o pipefail for the setting of that option to be local to the function like with the ksh93 approach. Note that in all the above, we're not converting back the \0 s to real NULs as bash or ksh93 would choke on them. You may want to do it if using zsh to be able to work with BLOBs, but note that the gsub(/\\0/, "\0", s) would not work with all awk implementations. In any case, here, I'd use more advanced languages than a shell like perl or python to do this kind of thing.
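If you only need bash (4.3 or newer), the ksh-style pass-by-reference part of the above translates fairly directly to declare -n / local -n; a minimal sketch:
f() {
    local -n out=$1          # nameref: "out" aliases the caller's variable
    out=(a 'b c' d)          # bash arrays are one-dimensional only
}

f reply
printf '%s\n' "${reply[@]}"  # prints a, b c, d, one per line
| {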
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/408543",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/192320/"
]
} |
408,544 | The syntax for the trap statement is straightforward: trap [COMMANDS] [SIGNALS]
trap -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO       30) SIGPWR
31) SIGSYS      34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX
trap "ls" debug
Desktop Templates Documents
Which number from 1 till 64 is the debug's signal number? | None of them: DEBUG has no signal number. trap -l lists the real signals that the kernel can deliver to a process, but DEBUG is not a signal at all. It is one of bash's special trap conditions, handled entirely inside the shell, alongside EXIT (which bash also accepts as the number 0), ERR and RETURN. Signal and condition names given to trap are case-insensitive, which is why the lowercase debug worked. A DEBUG trap runs before every simple command (and before a few other constructs; see the trap entry in the bash manual), which is why your trap "ls" debug printed the directory listing: the ls ran just before the next command executed. Because DEBUG is not a real signal, you cannot send it with kill; you can only set it, list it with trap -p DEBUG, or remove it with trap - DEBUG.
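A quick interactive demonstration of what the DEBUG condition actually does (plain bash, nothing else assumed):
trap 'echo "about to run: $BASH_COMMAND"' DEBUG
echo hello       # the trap message prints first, then "hello"
trap -p DEBUG    # show the currently installed DEBUG trap
trap - DEBUG     # remove it again
| {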
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/408544",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/243284/"
]
} |
408,582 | How can I turn off Hardware Acceleration in Linux, also known as Direct Rendering? I wish to turn this off, as it messes with some applications like OBS Studio, which can't handle capturing of hardware-accelerated output from other applications since it's enabled for the entire system. Certain apps can turn it on and off, but can't do this for the desktop and other apps. When adding a source to capture from in OBS, it just shows a blank capture image; for example, if I wanted to record my desktop, it'll just show it as a blank capture input. It doesn't work if I want to capture a web browser like Google Chrome, unless it's a single window with no tabs, and hardware acceleration is turned off in its settings.
Graphics: Card-1: Intel 3rd Gen Core processor Graphics Controller bus-ID: 00:02.0
Card-2: NVIDIA GF108M [GeForce GT 630M] bus-ID: 01:00.0
Display Server: X.Org 1.15.1 driver: nvidia Resolution: [email protected]
GLX Renderer: GeForce GT 630M/PCIe/SSE2 GLX Version: 4.5.0 NVIDIA 384.90 Direct Rendering: Yes | You can configure Xorg to disable OpenGL / GLX. For a first try, you can run a second X session: switch to tty2, log in and type: startx -- :2 vt2 -extension GLX To permanently disable hardware acceleration, create a file: /etc/X11/xorg.conf.d/disable-gpu.conf with the content:
Section "Extensions"
    Option "GLX" "Disable"
EndSection
Note that Xwayland in Wayland compositors like Gnome3-Wayland will ignore settings in xorg.conf.d .
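To verify the result, glxinfo (packaged as mesa-utils on Debian/Ubuntu, if memory serves) reports whether direct rendering is active:
glxinfo | grep "direct rendering"   # normally prints "direct rendering: Yes";
                                    # with GLX disabled, glxinfo should fail with an error instead
| {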
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/408582",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
408,804 | I use Ubuntu server 16.04 (xenial) and desire to use the utility at in my current session to do something 1 minute from now (say, an echo), without giving a specific date and time - just 1 minute ahead of the current time. This failed: echo 'hi' | at 1m The reason I choose at over sleep is that sleep ties up the current session and is therefore more suitable for delaying commands in another session, rather than the one we work with most of the time. AFAIR, at doesn't behave this way and won't tie up my session. Update_1: by Pied Piper's answer, I've tried: (sleep 1m; echo 'hi') & I have a problem with this method: the "hi" stream is printed inside my primary prompt and also adds an empty secondary prompt ( _ ) right under the primary prompt that contains it, see:
USER@:~# (sleep 1m; echo 'hi') &
[1] 22731
USER@:~# hi
^C
[1]+ Done
Update_2: by Peter Corde's answer I tried: (sleep 2 && echo -e '\nhi' && kill -WINCH $$ &) This works properly in Bash 4.4, but seemingly not in some older versions (see comments in the answer). I personally use Bash 4.3 in my own environments. | The correct at usage is at <timespec> , where <timespec> is the time specification. You can also use the -f option to specify the file containing the commands to execute. If neither -f nor a redirection is used, at will read the commands to execute from stdin (standard input). In your example, to execute "some command" (I chose "some command" to be "echo hi" in the example below) one minute right after you hit enter (now), the intended <timespec> to be used is now + 1 minute . So, in order to obtain the result you want with a one-liner solution, please use the command below: echo 'echo hi' | at now + 1 minute This will run the echo hi command after one minute. The problem here is that the echo hi command will be executed by the at command in a dummy tty and the output will be sent to the user's mail box (if it is correctly configured - which requires a more extensive explanation on how to configure a mail box on a unix machine and is not part of the scope of this question). The bottom line is that you won't be able to directly see the result of echo hi in the same terminal where you issued the command. The easiest alternative is to redirect it to "some file", for example: echo 'echo hi > /tmp/at.out' | at now + 1 minute In the above example, "some file" is /tmp/at.out and the expected result is that the file at.out under /tmp is created after one minute and contains the text 'hi'. In addition to the now + 1 minute example above, many other time specifications can be used, even some complex ones. The at command is smart enough to interpret pretty human-readable ones like the examples below: now + 1 day , 13:25 , midnight , noon , ... Please refer to at's man page for more about the possible time specifications, as the possible variations are pretty extensive and may include dates, relative or absolute times, etc.
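Once a job is queued, at also gives you the tools to inspect and cancel it; a short sketch (the job number 4 below is hypothetical; use whatever atq prints):
echo 'echo hi > /tmp/at.out' | at now + 1 minute
atq        # list queued jobs with their numbers and run times
at -c 4    # dump the environment and commands of job 4
atrm 4     # remove job 4 before it runs
| {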
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/408804",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/258010/"
]
} |
408,859 | I am having some strange behaviour with zsh ( 5.4.2_1 installed with homebrew ) on macOS not using the first occurrence of an executable in the path. Here is the scenario: echo $PATH returns: /usr/local/Cellar/zplug/HEAD-9fdb388/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin brew is in both /usr/local/Cellar/zplug/HEAD-9fdb388/bin and /usr/local/bin . This is confirmed by running which -a brew which returns:
/usr/local/Cellar/zplug/HEAD-9fdb388/bin/brew
/usr/local/bin/brew
But when I run which brew it returns: /usr/local/bin/brew and brew does run /usr/local/bin/brew rather than /usr/local/Cellar/zplug/HEAD-9fdb388/bin/brew How can this happen when brew is earlier in the path? Help appreciated. | which -a cmd looks for all regular files named cmd that you have execute permission for in the directories in $path (in addition to aliases, functions, builtins...), while which cmd returns the command that zsh would run ( which is a builtin in zsh, as in tcsh, but unlike most other shells). zsh , like most other shells, remembers the paths of executables in a hash table so as not to have to look them up in all the directories in $path each time you invoke them. That hash table (exposed in the $commands associative array in zsh ) can be manipulated with the hash command (a standard POSIX shell command). If you have run the brew command (or which/type/whence brew , or used command completion or anything that would have primed that hash/cache) before it was added to /usr/local/Cellar/zplug/HEAD-9fdb388/bin or before /usr/local/Cellar/zplug/HEAD-9fdb388/bin was added to $path , zsh would have remembered its path and stored it as $commands[brew]=/usr/local/bin/brew . In that case, you can use hash -r (as in the Bourne shell) or rehash (as in csh) to have zsh forget the remembered commands (invalidate that cache ), so it can look it up next time and find it in the new location.
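Putting that together, a short zsh session sketch ($commands comes from the zsh/parameter module, which interactive zsh loads on demand as far as I know):
print -r -- $commands[brew]   # where zsh currently remembers brew
hash -r                       # flush the remembered paths (rehash also works)
which brew                    # now resolves via $path again
| {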
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/408859",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264351/"
]
} |
408,867 | I wrote a little Bash script that uses spfquery to check my domain's email SPF record, whether it passes for all the provider's IP addresses:
#!/bin/bash
# RED="\033[1;31m"
GREEN="\033[1;32m"
NOCOLOR="\033[0m"
email="[my email address]" # deleted for bots not to hound me
declare -a ips=("88.86.120.212" "88.86.120.223" "88.86.120.250" "88.86.120.213" "88.86.120.103" "46.234.104.23" "46.234.104.24")
echo -e "\n\n"
for ip in "${ips[@]}"
do
    echo -e "${GREEN}$ip${NOCOLOR}"
    spfquery -sender $email -ip $ip -helo kolbaba.stable.cz
    echo -e "\n\n"
done
Notice that RED is commented out. That's because I would like results starting with any of these: fail, softfail, neutral, unknown, error, none (i.e. not with pass) to be colorized in red. But how to do this is a mystery to me. | You would want to check the exit code of spfquery, then have an if/else to see if it was a pass or not. Something like this:
#!/bin/bash
RED="\033[1;31m"
GREEN="\033[1;32m"
NOCOLOR="\033[0m"
email="[email protected]"
declare -a ips=("88.86.120.212" "88.86.120.223" "88.86.120.250" "88.86.120.213" "88.86.120.103" "46.234.104.23" "46.234.104.24")
echo -e "\n\n"
for ip in "${ips[@]}"
do
    spfquery -sender $email -ip $ip -helo kolbaba.stable.cz
    exit_status=$?
    if [ $exit_status -eq 0 ]; then
        echo -e "${GREEN}$ip${NOCOLOR}"
    else
        echo -e "${RED}$ip${NOCOLOR}"
    fi
    echo -e "\n\n"
done
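The intermediate variable isn't strictly needed: the command's exit status can drive the if directly. A minimal variant (same assumption as above, i.e. spfquery exits 0 on pass):
if spfquery -sender "$email" -ip "$ip" -helo kolbaba.stable.cz; then
    echo -e "${GREEN}$ip${NOCOLOR}"
else
    echo -e "${RED}$ip${NOCOLOR}"
fi
| {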
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/408867",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
408,929 | I have a systemd service:
[Unit]
Description=My application

[Service]
ExecStart=/bin/java myapp.jar
Type=simple
User=photo
There is an option: StandardOutput= but I don't understand how to use it to write to a file. https://www.freedesktop.org/software/systemd/man/systemd.exec.html I was expecting to put a file path somewhere, but the documentation talks about sockets and file descriptors. It seems it needs more configuration than just that keyword. Where do I put the file path? I can't find any examples of that use. Thanks | Use:
[Unit]
Description=My application

[Service]
ExecStart=/usr/bin/java -jar myapp.jar
Type=simple
User=photo
StandardOutput=file:/var/log/logfile
as documented here: https://www.freedesktop.org/software/systemd/man/systemd.exec.html#StandardOutput= Note that this way the log file's contents will be overwritten each time the service restarts. The StandardOutput/StandardError systemd directives do not support appending to files. If you want to keep the log file between service restarts and just append new logged lines to it, use instead:
[Unit]
Description=My application

[Service]
ExecStart=/usr/bin/sh -c 'exec /usr/bin/java -jar myapp.jar >> /var/log/logfile 2>&1'
Type=simple
User=photo
exec means that the shell program will be substituted with the /usr/bin/java program after setting up the redirections, without forking. So there will be no difference from running /usr/bin/java directly in ExecStart= .
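For what it's worth, newer systemd (v240 and later, if I remember the release right; check with systemctl --version) added a native append mode, which makes the shell wrapper unnecessary:
[Service]
StandardOutput=append:/var/log/logfile
StandardError=append:/var/log/logfile
| {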
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/408929",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64186/"
]
} |
408,980 | I like to use shell style shortcuts in insert and command-line mode, such as CTRL-K to delete to end of line. This mapping works in insert mode: inoremap <C-K> <C-O>D But I can't figure out an equivalent for command-line mode. Any ideas? I'm using Vrapper in case it matters. | alternative approach The :help command-line-window is one of the lesser-known features of Vim. You can enter it with <C-f> by default when you're already on the commandline, or q: from normal mode. As in any other Vim buffer, you can edit the current or previous command-lines using Vim commands, and press <Enter> to execute and close it. In it, you can use D just like anywhere else. mapping If you want that functionality directly in the command-line itself, you can define this simple mapping: cnoremap <C-k> <C-\>e(strpart(getcmdline(), 0, getcmdpos() - 1))<CR> Note that your suggested left-hand side clobbers the useful digraphs entry. plugin My CmdlineSpecialEdits plugin has (among many others) a CTRL-G D mapping that removes all characters between the cursor position and the end of the line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/408980",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22172/"
]
} |
409,053 | I'm trying to process 'top' output to set CPU performance data. When I just grep the output, it is colorized (screenshots omitted): the numbers are bold, and it adds a lot of unnecessary data to the output. I tried the "strip the color codes" answer but it does not work. I want to avoid the other, much more complex answers in that question for the sake of performance. I've also tried to disable colors by switching the term mode, but no luck. So how can I disable the color output? PS: I found how to get the data: if I awk only the numbers it works, but I still wonder if there is any way to disable color here. | Here's one way to disable colored output in top : Step 1: Run top Step 2: Press the z key to toggle the color mode Step 3: Press the W key to save the new setting For reference, look at the top man page, specifically Section 4: Interactive Commands. There you will find the following descriptions of these two interactive commands:
W :Write-the-Configuration-File
This will save all of your options and toggles plus the current display mode and delay time. By issuing this command just before quitting top, you will be able to restart later in exactly that same state.
z :Color/Monochrome toggle
Switches the `current' window between your last used color scheme and the older form of black-on-white or white-on-black. This command will alter both the summary area and task area but does not affect the state of the `x', `y' or `b' toggles.
Also see these related posts: Set default color for top Setting the TOPCOLORS environment variable
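If the end goal is feeding the numbers to a script, it may be simpler to bypass the interactive display entirely: batch mode prints plain text with no escape sequences at all.
top -b -n 1 | head -n 12                 # -b batch mode, -n 1 a single iteration
top -b -n 1 | awk '/^%Cpu/ {print $2}'   # e.g. grab the user-CPU figure
                                         # (summary-line layout varies between top versions)
| {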
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409053",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13161/"
]
} |
409,065 | The command echo {1..3}-{1,2} prints 1-1 1-2 2-1 2-2 3-1 3-2 . I understand the way those curly braces can be used. But what actually are they? Is it the job of sh / bash to parse/expand them and deliver the expanded version to the executed program? If so, what other tricks can it do and is there a specification? Also, is there a name for it? Is ls *.txt handled in a similar way internally? Is there a way to achieve an n-times repetition of an argument? Like (not working, of course, only a concept): cat test.pdf{*3} ⇒ cat test.pdf test.pdf test.pdf ? | They are called brace expansion . It is one of several expansions done by bash , zsh and ksh , filename expansion *.txt being another one of them. Brace expansion is not covered by the POSIX standard and is thus not portable. You can read more on this in the bash manual. On @Arrow's suggestion: in order to get cat test.pdf test.pdf test.pdf with brace expansion alone, you would have to use this "hack":
#cat test.pdf test.pdf
cat test.pdf{,}
#cat test.pdf test.pdf test.pdf
cat test.pdf{,,}
#cat test.pdf test.pdf test.pdf test.pdf
cat test.pdf{,,,}
Some common uses:
for index in {1..10}; do
    echo "$index"
done
touch test_file_{a..e}.txt
Or another "hack" to print a string 10 times:
printf -- "mystring\n%0.s" {1..10}
Be aware that brace expansion in bash is done before parameter expansion, therefore a common mistake is:
num=10
for index in {1..$num}; do
    echo "$index"
done
(the ksh93 shell copes with this though)
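To make that last example work in bash you need an expansion that happens after $num is expanded, or arithmetic evaluation; two common fixes:
num=10
for index in $(seq 1 "$num"); do   # seq runs after parameter expansion
    echo "$index"
done

for ((index = 1; index <= num; index++)); do   # bash/ksh C-style loop
    echo "$index"
done
| {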
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21505/"
]
} |
409,072 | I use Ubuntu server 16.04 (xenial) and want to execute a command once per boot, for a given number of system boots, then have it removed automatically. The solution seems to include 2 stages: add the command at the end of /etc/bash.bashrc, and make it be deleted after x number of bash.bashrc executions, somehow. I already did stage 1 of the solution by adding the following command at the end of bash.bashrc: echo "Welcome!" Is there a way to do the rest in Bash? Update - here's what I've tried, which failed:
cat <<-"EOF" > /opt/bootmsg.sh
    #!/bin/bash
    echo welcome!
    count='0'
    ((count++))
    sed -i -e "s/^count='[01234]'$/count='${count}'/" "$0"
    if ((count>=5)); then rm -- "$0"; fi
EOF
chmod +x /opt/bootmsg.sh

# Add in the end of bash.bashrc:
# [ -f /opt/bootmsg.sh ] && /opt/bootmsg.sh
If you figured out what's bad with the code, please publish an answer with a fixed version of that code and an explanation of what I did wrong. | Summary The main problem is finding a way to keep track of how many times the script has been run - a way that persists between successive executions of the script. You can't do this with environment variables, because they do not retain their value after the script terminates. The most obvious way to do this is to store this number in a file; the script can read this value from the file and then write the updated value back to the file. If you want to avoid using a second file to store this count information then you can have the script update itself (e.g. using sed ). I gave example solutions illustrating each of these two approaches. Your solution attempt tried to update an environment variable and use that to keep track of state, but the environment variable doesn't persist between executions of the script, which is why your solution failed. DISCLAIMER: I gave an example of a single-file solution because it was explicitly asked for, but I would personally prefer a solution that didn't involve a self-modifying script. As a general rule, self-modifying scripts tend to be less stable, harder to debug, and more difficult to understand. Solution 1: Using a File to Store the Count Here is a solution which uses a file to keep track of the remaining number of reboots desired:
#!/usr/bin/env bash
# /opt/welcome.sh

# Read number of remaining reboots from file
REBOOTS="$(head -1 /var/reboots)"

# If the number of reboots is positive then
# execute the command and decrement the count
if [[ "${REBOOTS}" -ge 0 ]]; then
    # Run some commands
    echo "Doing some stuff... ($(( ${REBOOTS} - 1 )) left)"
fi

# Decrement the reboot count
REBOOTS="$(( ${REBOOTS} - 1 ))"

# Update the file
echo "${REBOOTS}" > /var/reboots

# If we've run out of reboots then delete the files
if [[ ! "${REBOOTS}" -gt 0 ]]; then
    rm /var/reboots
    rm -- "$0"
fi
And here's an example of what this particular script would look like in action:
user@host:~$ echo 3 > /var/reboots
user@host:~$ bash /opt/welcome.sh
Doing some stuff... (2 left)
user@host:~$ bash /opt/welcome.sh
Doing some stuff... (1 left)
user@host:~$ bash /opt/welcome.sh
Doing some stuff... (0 left)
user@host:~$ bash /opt/welcome.sh
bash: /opt/welcome.sh: No such file or directory
Solution 2: Using a Self-Modifying Script Alternatively, you could also try embedding the count variable in the script itself and updating it with sed , e.g.:
#!/usr/bin/env bash
# /opt/welcome.sh

# Number of remaining reboots
declare REBOOTS=3

# If the number of reboots is positive then
# execute the command and decrement the count
if [[ "${REBOOTS}" -ge 0 ]]; then
    # Run some commands
    echo "Doing some stuff... ($(( ${REBOOTS} - 1 )) left)"
fi

# Decrement the reboot count
REBOOTS="$(( ${REBOOTS} - 1 ))"

# Update the script
sed -i -e "s/^declare REBOOTS.*\$/declare REBOOTS=${REBOOTS}/" "$0"

# If we've run out of reboots then delete the script
if [[ ! "${REBOOTS}" -gt 0 ]]; then
    rm -- "$0"
fi
This should have the same effect without the additional file. Analysis of Failed Solution Attempt UPDATE: You added the following solution attempt to your question:
cat <<-"WELCOME" > /opt/welcome.sh
    #!/bin/bash
    echo='welcome'
    count='0'
    ((count+1))
    if ((count>=5))
    then
        rm -- "$0"
    fi
WELCOME
chmod +x /opt/welcome.sh

# Add in the end of bash.bashrc:
# [ -f /opt/welcome.sh ] && /opt/welcome.sh
You're asking why this solution doesn't work. It looks to me like the actual script you're trying to run is this:
#!/bin/bash
echo='welcome'
count='0'
((count+1))
if ((count>=5))
then
    rm -- "$0"
fi
The first (superficial) problem that I come across when I try to run the above code is that you're missing a semi-colon after your conditional expression ((count>=5)) and after the body rm -- "$0", i.e. you probably intended for your if statement to look like the following: if ((count>=5)); then rm -- "$0"; fi After making these changes the script will execute, but it won't have any effect. To see why, let's just run through each line in turn. echo='welcome' This line creates a variable echo which stores the string welcome . Note that this command produces no output. If you want to print the string welcome then you'll have to use the echo command, not an environment variable named "echo", e.g. echo welcome . count='0' This line creates a variable count which stores the value 0 . Note that this implies that count will be equal to 0 on every iteration of the script. ((count+1)) This line evaluates an arithmetic expression involving the count variable. Notice that this has no effect at all. If you wanted to increment the count variable then you would do something like ((count++)) instead. Also note that even if you had incremented the value of count properly, this change would not persist once the script terminates. Furthermore, even if you did make the change persist, it would be over-written by the previous line ( count=0 ). if ((count>=5)); then rm -- "$0"; fi This line will delete the script file if the count variable is greater than or equal to 5 . However since count will only ever be equal to 0 , that will never happen. The fundamental problem with your solution attempt is that it doesn't address the issue of how to have the value of count persist between executions of the script: count is reset to 0 on every execution. The most obvious way to have a value persist between iterations of the script is to read that value from a file and then write the updated value back to that file - hence my first solution. If you want to restrict yourself to a single file, then you can do essentially the same thing by storing the value on a special line in that file (a line that is easily distinguishable so that it can be identified programmatically) and then have the script modify itself after every iteration in order to update the value on that line - hence my second solution. Minimally Modified, Corrected Solution Attempt Since you've added that you want to get your specific solution attempt to work as a stand-alone (self-modifying) file, here is a modified version of your script which incorporates the smallest number of changes required to make it function properly:
#!/bin/bash
echo welcome
count='0'
((count++))
sed -i -e "s/^count='[01234]'$/count='${count}'/" "$0"
if ((count>=5)); then rm -- "$0"; fi
If you save this to /opt/welcome.sh (as indicated in your post) then you could test it like this:
user@host:~$ bash /opt/welcome.sh
welcome
user@host:~$ bash /opt/welcome.sh
welcome
user@host:~$ bash /opt/welcome.sh
welcome
user@host:~$ bash /opt/welcome.sh
welcome
user@host:~$ bash /opt/welcome.sh
welcome
user@host:~$ bash /opt/welcome.sh
bash: /opt/welcome.sh: No such file or directory
Additional Comments Additionally, you say that you want to run the script on reboot, but you call it from your .bashrc file, which will probably run every time you open a new shell session. There are many different ways to run a script at boot time - many of which depend on your specific OS. For further information you might consult the following documentation: Designing Integrated High Quality Linux Applications - Chapter 9. Starting Your Software Automatically on Boot Final solution After an in-depth discussion in the comments, it became clear that what you really wanted was a script that displays reminders for a changing list of tasks. You wanted the tasks to be displayed whenever you log in for the first time after a reboot. You also wanted tasks to disappear after 5 reboots. We came up with an alternative solution. The new solution is a multi-user solution which can work for multiple users simultaneously. It uses two system-wide scripts and two per-user data files: ~/.tasks A per-user file that stores a list of colon-separated pairs of the form count:description - one for each task. ~/.remind-tasks A per-user status file that keeps track of whether or not a task reminder has been displayed since the last boot. /usr/local/bin/update-task-counts.sh A shell script that updates a .tasks file by decrementing all of the counts and removing tasks which have count 0. /usr/local/bin/print-tasks.sh A shell script which checks and updates the reminder-flag file and prints all of the task descriptions. Here is an example ~/.tasks file:
5:This is task one.
3:This is task two.
1:Yet another task.
The first task on this list should be displayed a total of 5 times, the second task a total of 3 times, and the last task just once. We also need a script that reads and updates this file. Here is a script that will do just that:
#!/usr/bin/env bash
# /usr/local/bin/update-task-counts.sh

# Set the location of the task file
TASKFILE="${HOME}/.tasks"

# Set the location of the reminder flag file
REMINDFILE="${HOME}/.remind-tasks"

# Set a flag so that we know we need to print the task messages
echo 1 > "${REMINDFILE}"

# If there is no task file, then exit
if [[ ! -f "${TASKFILE}" ]]; then
    exit 0
fi

# Create a temporary file
TEMPFILE="$(mktemp)"

# Loop through the lines of the current tasks-file
while read line; do
    # Extract the description and the remaining count for each task
    COUNT="${line/:*/}"
    DESC="${line/*:/}"

    # Decrement the count
    ((COUNT--))

    # If the count is non-negative then add it to the temporary file
    if [[ "${COUNT}" -ge 0 ]]; then
        echo "${COUNT}:${DESC}" >> "${TEMPFILE}"
    fi
done < "${TASKFILE}"

# Update the tasks file (by replacing it with the temporary file)
mv "${TEMPFILE}" "${TASKFILE}"
When you run this script it will iterate through each line of the task file, decrement the count for each task, and then update the task file so that it only contains tasks with positive counts. Then we need a script that will print the tasks in the task list:
#!/usr/bin/env bash
# /usr/local/bin/print-tasks.sh

# Set the location of the task file
TASKFILE="${HOME}/.tasks"

# Set the location of the reminder flag file
REMINDFILE="${HOME}/.remind-tasks"

# If there is no task file, then exit
if [[ ! -f "${TASKFILE}" ]]; then
    exit 0
fi

# If the reminder flag isn't set, then exit
FLAG="$(head -1 ${REMINDFILE})"
if [[ ! "${FLAG}" -eq 1 ]]; then
    exit
fi

# Loop through the lines of the current tasks-file
while read line; do
    # Extract the description for each task
    DESC="${line/*:/}"

    # Display the task description
    echo "${DESC}"
done < "${TASKFILE}"

# Update the flag file so we know not to display the task list multiple times
echo 0 > "${REMINDFILE}"
The last thing to do is make sure that these two scripts are called at the appropriate times. To get the update-task-counts.sh script to run on reboot, we can call it from the user's crontab, i.e. add the following line to your crontab (e.g. using crontab -e ): @reboot /bin/bash /usr/local/bin/update-task-counts.sh For further discussion regarding this cron technique, see the following post: How to run a script at boot time for normal user? In order to get the print-tasks.sh script to run when the user enters a shell session for the first time, we can call it from the user's bash profile, i.e. add the following line to ~/.bash_profile : bash /usr/local/bin/print-tasks.sh Now let's run these scripts with our example ~/.tasks file:
5:This is task one.
3:This is task two.
1:Yet another task.
Here is how we enable the reminder without running update-task-counts.sh :
user@host:~$ echo 1 > ~/.remind-tasks
To manually test the print-tasks.sh script, we can just run it twice:
user@host:~$ bash /usr/local/bin/print-tasks.sh
This is task one.
This is task two.
Yet another task.
user@host:~$ bash /usr/local/bin/print-tasks.sh
user@host:~$
Notice that it only prints the first time it's called. In order to manually test the interaction between print-tasks.sh and update-task-counts.sh, we run them both together. Here is what it looks like if we manually run the above scripts with this file:
user@host:~$ bash update-task-counts.sh
user@host:~$ bash print-tasks.sh
This is task one.
This is task two.
Yet another task.
user@host:~$ bash update-task-counts.sh
user@host:~$ bash print-tasks.sh
This is task one.
This is task two.
user@host:~$ bash update-task-counts.sh
user@host:~$ bash print-tasks.sh
This is task one.
This is task two.
user@host:~$ bash update-task-counts.sh
user@host:~$ bash print-tasks.sh
This is task one.
user@host:~$ bash update-task-counts.sh
user@host:~$ bash print-tasks.sh
This is task one.
user@host:~$ bash update-task-counts.sh
user@host:~$ bash print-tasks.sh
user@host:~$
That should do it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409072",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/258010/"
]
} |
409,092 | I have a directory called "music" and two groups: music, singers. I executed chgrp music music I am looking to grant read and write access to the /music directory for the singers group, but I don't want to change the ownership. Is this possible? I am a bit confused with chown and chgrp. I believe chown changes ownership for a single user and not a group, and chgrp changes the group? Is there a way to set group ownership, but grant permissions to one other specified group? | There is no way to do this using traditional permissions, but it can be done using the getfacl and setfacl commands (typically available via the acl package). getfacl allows you to read the ACL entries for a directory or file. setfacl allows changing the ACL entries for a directory or file. In your example, you would want to run something like the following commands:
setfacl -m g:music:rwx /path/to/music
setfacl -m g:singers:rwx /path/to/music
-m to modify, g to modify the group ACL(s), music and singers the group names, rwx the traditional permissions you want applied, /path/to/music the path to the directory you want to modify ACLs for. To make this apply by default to any new files/directories created within the music folder, you want to add the -d flag for default, and to apply recursively to all existing files/directories, you want to add the -R flag for recursive, as sketched below.
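Putting the -R and -d flags from the last paragraph together (same path as above; rwX instead of rwx keeps plain files non-executable):
setfacl -R -m g:singers:rwX /path/to/music      # existing files and subdirectories
setfacl -R -d -m g:singers:rwX /path/to/music   # default ACL for anything created later
getfacl /path/to/music                          # verify the resulting entries
| {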
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409092",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/212507/"
]
} |
409,132 | I have come to know that we can use the watch command to monitor the output of another command, and that it prints the output every 2 seconds by default, which is really useful. I started to use it to monitor the output of nvidia-smi , for example. But now I do not know how to quit the program (stop monitoring the output of nvidia-smi ). I tried to press q and there is no response. | From man watch : By default, watch will run until interrupted. The key words are "until interrupted": watch keeps running until something interrupts it, for example (but not limited to) one of the following: The user (you) presses CTRL + C in the terminal. The system restarts. The process is sent a kill request.
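So the usual way out is simply Ctrl+C. If your watch comes from procps-ng it also has an option to exit on its own when the output changes, which can be handy in scripts (check man watch for -g/--chgexit):
watch -n 2 nvidia-smi   # refresh every 2 seconds; press Ctrl+C to quit
watch -g free           # exits by itself as soon as the output differs
| {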
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409132",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/221410/"
]
} |
409,203 | Is it possible to only show the amount of milliseconds when pinging instead of the whole result page? I want to check if my servers are online, so I want to return "OK xyz ms" or "FAIL". I am currently doing this like so:
#!/bin/sh
ergebnis=$(ping -qc1 google.com)
ok=$?
avg=$(echo -e "$ergebnis" | tail -n1 | awk '{print $4}' | cut -f 2 -d "/")
if [ $ok -eq 0 ]
then
    echo "OK $avg ms"
else
    echo "FAIL"
fi
However, this uses quite a few pipes, and since I am running this command pretty often to monitor my servers, I am wondering if there is a "smarter" approach. I am also afraid my pipes might not work properly when the ping command fails. | Another awk variation: ping -qc1 google.com 2>&1 | awk -F'/' 'END{ print (/^rtt/? "OK "$5" ms":"FAIL") }' -F'/' - treat slash / as field separator Example output: OK 47.090 ms
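Wrapped in a loop, the same one-liner checks several servers (hosts below are placeholders; 192.0.2.10 is from the documentation address range):
for host in google.com 192.0.2.10; do
    printf '%s: ' "$host"
    ping -qc1 "$host" 2>&1 |
        awk -F'/' 'END{ print (/^rtt/ ? "OK " $5 " ms" : "FAIL") }'
done
| {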
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409203",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/216263/"
]
} |
409,210 | tcpdump -i eth0 -C 5 -W 1 -w <file name>& I use the command above to capture packets to a 5MB pcap file on an Ubuntu machine. Once the pcap file reaches the maximum size (5MB), the file gets rotated and starts again from 0KB. I need to know whether we can stop tcpdump from rotating the file once it reaches its max size and drop the packets from then on. | As far as I know, tcpdump itself cannot do that: -C always rotates, and with -W 1 the rotation simply overwrites the single file from the beginning, so there is no built-in "fill the file once, then keep dropping packets" mode. What you can do instead is stop the capture once the size limit is reached. Two practical options: use dumpcap from the Wireshark suite, which has a built-in autostop condition for exactly this, or keep tcpdump but let a small wrapper watch the file size and kill the capture when it reaches 5 MB. A sketch of both is below.
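Both variants as sketches (interface and paths taken from your command; note dumpcap's filesize is given in kB, and GNU stat is assumed in the wrapper):
# Option 1: dumpcap stops on its own at the size limit
dumpcap -i eth0 -a filesize:5120 -w /tmp/capture.pcap

# Option 2: wrap tcpdump and kill it at ~5 MB
tcpdump -i eth0 -w /tmp/capture.pcap &
pid=$!
while [ "$(stat -c %s /tmp/capture.pcap 2>/dev/null || echo 0)" -lt 5242880 ]; do
    sleep 1
done
kill "$pid"
| {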
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409210",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264602/"
]
} |
409,225 | This answer reveals that one can copy all files - including hidden ones - from directory src into directory dest like so:
mkdir dest
cp -r src/. dest
There is no explanation in the answer or its comments as to why this actually works, and nobody seems to find documentation on this either. I tried out a few things. First, the normal case:
$ mkdir src src/src_dir dest && touch src/src_file src/.dotfile dest/dest_file
$ cp -r src dest
$ ls -A dest
dest_file src
Then, with /. at the end:
$ mkdir src src/src_dir dest && touch src/src_file src/.dotfile dest/dest_file
$ cp -r src/. dest
$ ls -A dest
dest_file .dotfile src_dir src_file
So, this behaves similarly to * , but also copies hidden files.
$ mkdir src src/src_dir dest && touch src/src_file src/.dotfile dest/dest_file
$ cp -r src/* dest
$ ls -A dest
dest_file src_dir src_file
. and .. are proper hard-links as explained here, just like the directory entry itself. Where does this behaviour come from, and where is it documented? | The behaviour is a logical result of the documented algorithm for cp -R . See POSIX , step 2f: The files in the directory source_file shall be copied to the directory dest_file , taking the four steps (1 to 4) listed here with the files as source_files . . and .. are directories, respectively the current directory, and the parent directory. Neither are special as far as the shell is concerned, so neither are concerned by expansion, and the directory will be copied including hidden files. * , on the other hand, will be expanded to a list of files, and this is where hidden files are filtered out. src/. is the current directory inside src , which is src itself; src/src_dir/.. is src_dir ’s parent directory, which is again src . So from outside src , if src is a directory, specifying src/. or src/src_dir/.. as the source file for cp are equivalent, and copy the contents of src , including hidden files. The point of specifying src/. is that it will fail if src is not a directory (or symbolic link to a directory), whereas src wouldn’t. It will also copy the contents of src only, without copying src itself; this matches the documentation too: If target exists and names an existing directory, the name of the corresponding destination path for each file in the file hierarchy shall be the concatenation of target , a single slash character if target did not end in a slash, and the pathname of the file relative to the directory containing source_file . So cp -R src/. dest copies the contents of src to dest/. (the source file is . in src ), whereas cp -R src dest copies the contents of src to dest/src (the source file is src ). Another way to think of this is to compare copying src/src_dir and src/. , rather than comparing src/. and src . . behaves just like src_dir in the former case.
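One more experiment shows the "fails if not a directory" point from the last paragraph (same layout as the examples above; the exact error text will vary by cp implementation):
$ cp -r src/src_file/. dest
cp: cannot stat 'src/src_file/.': Not a directory
$ cp -r src/src_file dest      # a plain file copies fine without the trailing /.
| {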
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/409225",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67771/"
]
} |
409,354 | I'm trying to write a systemd service file for redis. Here's my file:
[Unit]
PartOf=smp-data-services.target
Description=Redis persistent key-value database
After=network.target

[Service]
ExecStart=/opt/eg/share/redis/bin/redis-server
ExecStop=/opt/eg/share/redis/bin/redis-cli
Restart=on-failure
User=eg
Group=eg

[Install]
WantedBy=multi-user.target
No matter what I do, I keep getting:
# systemctl daemon-reload
systemd: redis.service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. Refusing.
I can start redis on the command line with no issue like this: /opt/eg/share/redis/bin/redis-server I've read that redis' daemonized forking process is non-standard, and I should avoid Type=forking or oneshot. | In the [Service] section, you should clear the ExecStart command first. An empty ExecStart= resets the command list; the "more than one ExecStart" error typically means another copy of the unit or a drop-in for redis.service already defines one:
[Unit]
PartOf=smp-data-services.target
Description=Redis persistent key-value database
After=network.target

[Service]
ExecStart=
ExecStart=/opt/eg/share/redis/bin/redis-server
ExecStop=/opt/eg/share/redis/bin/redis-cli
Restart=on-failure
User=eg
Group=eg

[Install]
WantedBy=multi-user.target
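To see where a duplicate definition comes from, systemctl can print the unit together with every fragment and drop-in that applies to it:
systemctl cat redis.service     # shows all files contributing to the unit
systemctl daemon-reload         # reload after editing
systemctl restart redis.service
| {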
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409354",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264710/"
]
} |
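To find where the extra ExecStart= comes from, it helps to look at the unit as systemd actually sees it — systemctl cat prints the merged unit file together with any drop-in fragments:

$ systemctl cat redis.service    # shows the unit plus any /etc/systemd/system/redis.service.d/*.conf overrides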
409,360 | I have the following output file from a venn program.
[1, 2],106
[1, 3],2556
[2, 3],5207
[1, 2, 3],232
[2],7566
[3],8840
[1],5320
I need a command to get each line result number as variable argument in a script like this:
$area1=here it must come the results after [1],
$area2=here it must come the results after [2],
$area3=here it must come the results after [3],
$n12=here it must come the results after [1, 2],
$n13=here it must come the results after [1, 3],
$n23=here it must come the results after [2, 3],
$n123=here it must come the results after [1, 2, 3],
these results will then be used in the script below to draw a venn diagram.
cat << catfile >> $prefix1-VennDiagram.R
library(VennDiagram);
venn.diagram( x = list( "$sample1" = c(1:$area1, $(($area1+1)):$(($area1+$n12)), $(($area1+$n12+1)):$(($area1+$n12+$n123)), $(($area1+$n12+$n123+1)):$(($area1+$n12+$n123+$n13))), "$sample2" = c($(($area1+$n12+$n123+$n13+1)):$(($area1+$n12+$n123+$n13+$area2)), $(($area1+1)):$(($area1+$n12)), $(($area1+$n12+1)):$(($area1+$n12+$n123)), $(($area1+$n12+$n123+$n13+$area2+1)):$(($area1+$n12+$n123+$n13+$area2+$n23))), "$sample3" = c($(($area1+$n12+$n123+$n13+$area2+$n23+1)):$(($area1+$n12+$n123+$n13+$area2+$n23+$area3)), $(($area1+$n12+1)):$(($area1+$n12+$n123)), $(($area1+$n12+$n123+1)):$(($area1+$n12+$n123+$n13)), $(($area1+$n12+$n123+$n13+$area2+1)):$(($area1+$n12+$n123+$n13+$area2+$n23))) ), filename = "$prefix1-VennDiagram.tiff", col = "transparent", fill = c("red", "blue", "green"), alpha = 0.5, label.col = c("darkred", "white", "darkblue", "white", "white", "white", "darkgreen"), cex = 2.5, fontfamily = "arial", fontface = "bold", cat.default.pos = "text", cat.col = c("darkred", "darkblue", "darkgreen"), cat.cex = 2.0, cat.fontfamily = "arial", cat.fontface = "italic", cat.dist = c(0.06, 0.06, 0.03), cat.pos = 0 );
catfile
Rscript $prefix1-VennDiagram.R
exit | Each line of the file is a fixed bracketed key, a comma, and the count, so every value can be extracted by stripping the key prefix with sed inside a command substitution — note that, unlike in your sketch, a shell assignment takes no $ on the left-hand side. A runnable sketch follows this entry. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409360",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264712/"
]
} |
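A minimal sketch of the extraction described above. Assumptions: the venn output is saved in a file called venn.txt (substitute your real filename), and the variable names match the R template:

# sed -n 's/PREFIX//p' prints only the lines where the prefix matched, with the prefix removed
area1=$(sed -n 's/^\[1\],//p' venn.txt)
area2=$(sed -n 's/^\[2\],//p' venn.txt)
area3=$(sed -n 's/^\[3\],//p' venn.txt)
n12=$(sed -n 's/^\[1, 2\],//p' venn.txt)
n13=$(sed -n 's/^\[1, 3\],//p' venn.txt)
n23=$(sed -n 's/^\[2, 3\],//p' venn.txt)
n123=$(sed -n 's/^\[1, 2, 3\],//p' venn.txt)
echo "area1=$area1 n12=$n12 n123=$n123"    # quick sanity check

Each bracketed key only matches its own line (e.g. ^\[1\], cannot match [1, 2],106), so the assignments are unambiguous.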
409,363 | I have read the man pages of Curl , but I can't understand what those parameters (k, i and X) mean. I see them used in a REST API call, but can someone please explain what those three parameters do? It's not clear in the documentation. Thank you in advance. | -k, --insecure : If you run curl against a website that uses a self-signed SSL certificate, curl will give you an error because it cannot verify the certificate. In that case, you can use the -k or --insecure flag to skip certificate validation. Example:
[root@arif]$ curl --head https://xxx.xxx.xxx.xxx/login
curl: (60) Peer's Certificate issuer is not recognized. More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). If the default bundle file isn't adequate, you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option.
[root@arif]$ curl -k --head https://xxx.xxx.xxx.xxx/login
HTTP/1.1 302 Moved Temporarily
Date: Thu, 07 Dec 2017 04:53:44 GMT
Transfer-Encoding: chunked
Location: https://xxx.xxx.xxx.xxx/login
X-FRAME-OPTIONS: SAMEORIGIN
Set-Cookie: JSESSIONID=xxxxxxxxxxx; path=/; HttpOnly
-i, --include : This flag includes the HTTP response headers in the output. HTTP headers typically contain the server name, date, content type, etc. Example:
[root@arif]$ curl https://google.com
<HTML><HEAD><meta http-equiv="content-type" content="text/html charset=utf-8"> <TITLE>301 Moved</TITLE></HEAD><BODY> <H1>301 Moved</H1> The document has moved <A HREF="https://www.google.com/">here</A>. </BODY></HTML>
[root@arif]$ curl -i https://google.com
HTTP/1.1 301 Moved Permanently
Location: https://www.google.com/
Content-Type: text/html; charset=UTF-8
Date: Thu, 07 Dec 2017 05:13:44 GMT
Expires: Sat, 06 Jan 2018 05:13:44 GMT
Cache-Control: public, max-age=2592000
Server: gws
Content-Length: 220
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Alt-Svc: hq=":443"; ma=2592000; quic=51303431; quic=51303339; quic=51303338; quic=51303337; quic=51303335,quic=":443"; ma=2592000; v="41,39,38,37,35"
<HTML><HEAD><meta http-equiv="content-.....
-X, --request : This flag is used to send a custom request method to the server. Most of the time we use GET, HEAD, and POST, but if you need a specific method such as PUT or DELETE, you can set it with this flag. The following example sends a DELETE request to google.com:
[root@arif]$ curl -X DELETE google.com
..........................
<p><b>405.</b> <ins>That’s an error.</ins><p>The request method <code>DELETE</code> is inappropriate for the URL<code>/</code>. <ins>That’s all we know.</ins> | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/409363",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264718/"
]
} |
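The three flags compose naturally. A hypothetical example (illustrative endpoint and payload, not taken from the original answer) that skips certificate checks, prints the response headers, and sends a PUT:

curl -k -i -X PUT -d 'name=value' https://example.com/api/resource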
409,407 | I want to list all the users’ directories on the machine. Usually, I will do: ls -l /home But I use it in a script that will be deployed on others’ machines and maybe on those machines they don't call it home (e.g. myHome). So I want to generalize it to ls -l ~ . But it just lists my user’s home directory instead of all users’ home directories (basically I want to get a list of the users’ names on the machine). How can I generalize it? | Many systems have a getent command to list or query the content of the Name Service databases like passwd , group , services , protocols ...
getent passwd | cut -d: -f6
Would list the home directories (the 6th colon-delimited field) of all the users in databases that can be enumerated. The user name itself is in the first field, so for the list of user names:
getent passwd | cut -d: -f1
(note that it doesn't mean those users can log in to the system or that their home directory has been created, but that they are known to the system, i.e. they can be translated to a user id). For databases that can't be enumerated, you can try and query each possible user id individually:
getent passwd {0..65535} | cut -d: -f1,6
(here assuming uids stop at 65535 (some systems support more) and a shell that supports zsh's {x..y} form of brace expansion). But you wouldn't want to do that often on systems where the user database is networked (and there's limited local caching) like LDAP, NIS+, SQL... as that could imply a lot of network traffic (and load on the directory server) to make all those queries. That also means that if there are several users sharing the same uid, you'll only get one entry for each uid, so you'll miss the others. If you don't have getent , you could resort to perl :
perl -le 'while (@e = getpwent) {print $e[7]}'
for getent passwd ( $e[0] for the user names), or:
perl -le 'for ($i=0;$i<65536;++$i) { if (@e = getpwuid $i) {print "$e[0]: $e[7]"}}'
for getent passwd {0..65535} with the same caveats. In shells, you can use ~user to get the home directory of user , but in most shells, that only works for a limited set of user names (the list of allowed characters in user names supported for that ~ expansion operator varies from shell to shell) and with several shells (including bash ), ~$user won't work (you'd need to resort to eval when the name of the user is stored in a variable there). And you'd still have to find a way to get the list of user names. Some shells have builtin support to get that list of usernames. bash : compgen -u would return the list of users in databases that can be enumerated. zsh : the $userdirs associative array maps user names to their home directory (also limited to databases that can be enumerated, but if you do a ~user expansion for a user that is in a non-enumerable database, an entry will be added to $userdirs ). So you can do:
printf '%s => %s\n' "${(kv@)userdirs}"
to list users with their home directory. That only works when zsh is interactive though. tcsh , fish and yash are three other shells that can complete user names (for instance when completing ~<Tab> arguments), but it doesn't look like they let you obtain that list of user names programmatically. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/409407",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/251897/"
]
} |
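To narrow the list down to regular human accounts, a common convention — only a convention, since the UID floor varies between distributions — is to keep UIDs from 1000 up to (but not including) the nobody account:

getent passwd | awk -F: '$3 >= 1000 && $3 < 65534 {print $1 " => " $6}'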
409,433 | Skype always started and logged in automatically after PC start. But today it didn't happen. Instead, a Skype login window popped up and I was asked to enter my credentials. I entered them, the login window disappeared, but nothing else happened. I didn't see the usual things after that like the main Skype window, contacts etc. I thought that I was hacked, but no, as I could log in with the same credentials on the Skype webpage. Then I repeated the procedure, but started Skype from a terminal, hoping to see some errors, but there was no output at all. The skype command just exited. Is my Skype version still supported (4.3.0.37)? I've heard that Skype had reduced the number of supported Linux versions this summer... My OS: Ubuntu 14.04.5 LTS My Skype version is | As you can see here Microsoft decided to force people to upgrade to newer Skype versions, and the old ones ceased to work a few weeks ago. This is probably related to Skype moving from peer-to-peer to being cloud based (mentioned by @Rsf ). As announced earlier this year, the old Skype for Linux v4.3 is at its end-of-life and will be decommissioned in the upcoming weeks. You will be automatically signed out of Skype until you update. Please, update to the new Skype 8.x, which is ready for you with lots of improvements at Skype.com. In case you hit any issues, please check known issues, system requirements, or post your questions directly to this forum. All your feedback will be greatly appreciated. Kind regards, The Skype Team ( source ) To get the newer version you should download the latest .deb file from Microsoft and install it with gdebi or dpkg (it shouldn't make a difference, but gdebi handles dependencies a bit more safely). Be aware that the new Skype will create a file in /etc/apt/sources.list.d/ and will do so on every update if a file with the exact name Skype chose for it does not exist. Renaming the file in /etc/apt/sources.list.d/ therefore only invites trouble in the form of duplicated source list entries. You will also want to remove any pre-existing Skype files in your sources.list.d directory. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/409433",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163956/"
]
} |
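For reference, the upgrade itself is two commands. The .deb filename below is the name Microsoft has been using for the client download — verify it against the file you actually downloaded:

sudo dpkg -i skypeforlinux-64.deb
sudo apt-get install -f    # pulls in any missing dependencies dpkg reported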
409,453 | Having used Linux for years, I suddenly found that I don't know how to use the man command properly. It is well known that one can type q to quit a man page, / to search, and others. I don't know, however, where are these documented. I tried man man man -a man info man and Googling, with no luck. Please point me to the right position to look for this information, even if it is right in man man and I happen to have overlooked it, it's perfectly fine, just let me know. Are these commands different for different OS or for different distributions of Linux? | It is indeed right in the manual page for man , under the "Controlling formatted output" subheading and repeated later on in the "ENVIRONMENT" section for good measure: By default, man uses pager -s . The manual page explains how there is a hierarchy of environment variables and command-line options ( PAGER , MANPAGER , and --pager ) for overriding the default. This is how it reads on systems such as Debian Linux. On systems such as Oracle Linux, in contrast, the man-db package has been built with a different default, which is however still reflected right there in the manual page in the same places: By default, man uses less -s . The man-db package attempts to auto-detect, at compile time, which default pager to build-in to the command, and document in its manual page, out of less , more , and pager . On systems such as Debian Linux, the pager command is part of the "alternatives" system and can map to one of several actual commands: jdebp % update-alternatives --list pager/bin/less/bin/more/usr/bin/pg/usr/bin/w3mjdebp % So one consults their respective manual pages for how to drive them from the keyboard, according to which alternative has been chosen. Usefully, the Debian alternatives system keeps the manual page in synch with the chosen command, so reading this manual page is quite straightforward: man pager | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/409453",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259023/"
]
} |
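The override hierarchy from the manual page is easy to exercise; for example, to force a specific pager just for man regardless of the compiled-in default (flags shown are one possible choice):

export MANPAGER='less -is'    # -i: case-insensitive search, -s: squeeze blank lines
man man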
409,462 | The following shell command was expected to print only odd lines of the input stream: echo -e "aaa\nbbb\nccc\nddd\n" | (while true; do head -n 1; head -n 1 >/dev/null; done) But instead it just prints the first line: aaa . The same doesn't happen when it is used with -c ( --bytes ) option: echo 12345678901234567890 | (while true; do head -c 5; head -c 5 >/dev/null; done) This command outputs 1234512345 as expected. But this works only in the coreutils implementation of the head utility. The busybox implementation still eats extra characters, so the output is just 12345 . I guess this specific way of implementation is done for optimization purposes. You can't know where the line ends, so you don't know how many characters you need to read. The only way not to consume extra characters from the input stream is to read the stream byte by byte. But reading from the stream one byte at a time may be slow. So I guess head reads the input stream to a big enough buffer and then counts lines in that buffer. The same can't be said for the case when --bytes option is used. In this case you know how many bytes you need to read. So you may read exactly this number of bytes and not more than that. The corelibs implementation uses this opportunity, but the busybox one does not, it still reads more byte than required into a buffer. It is probably done to simplify the implementation. So the question. Is it correct for the head utility to consume more characters from the input stream than it was asked? Is there some kind of standard for Unix utilities? And if there is, does it specify this behavior? PS You have to press Ctrl+C to stop the commands above. The Unix utilities do not fail on reading beyond EOF . If you don't want to press, you may use a more complex command: echo 12345678901234567890 | (while true; do head -c 5; head -c 5 | [ `wc -c` -eq 0 ] && break >/dev/null; done) which I didn't use for simplicity. | Is it correct for the head utility to consume more characters from the input stream than it was asked? Yes, it’s allowed (see below). Is there some kind of standard for Unix utilities? Yes, POSIX volume 3, Shell & Utilities . And if there is, does it specify this behavior? It does, in its introduction: When a standard utility reads a seekable input file and terminates without an error before it reaches end-of-file, the utility shall ensure that the file offset in the open file description is properly positioned just past the last byte processed by the utility. For files that are not seekable, the state of the file offset in the open file description for that file is unspecified. head is one of the standard utilities , so a POSIX-conforming implementation has to implement the behaviour described above. GNU head does try to leave the file descriptor in the correct position, but it’s impossible to seek on pipes, so in your test it fails to restore the position. You can see this using strace : $ echo -e "aaa\nbbb\nccc\nddd\n" | strace head -n 1...read(0, "aaa\nbbb\nccc\nddd\n\n", 8192) = 17lseek(0, -13, SEEK_CUR) = -1 ESPIPE (Illegal seek)... The read returns 17 bytes (all the available input), head processes four of those and then tries to move back 13 bytes, but it can’t. (You can also see here that GNU head uses an 8 KiB buffer.) When you tell head to count bytes (which is non-standard), it knows how many bytes to read, so it can (if implemented that way) limit its read accordingly. 
This is why your head -c 5 test works: GNU head only reads five bytes and therefore doesn’t need to seek to restore the file descriptor’s position. If you write the document to a file, and use that instead, you’ll get the behaviour you’re after: $ echo -e "aaa\nbbb\nccc\nddd\n" > file$ < file (while true; do head -n 1; head -n 1 >/dev/null; done)aaaccc | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/409462",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152606/"
]
} |
409,528 | I have an alias in my bash_profile that is currently one really long line like this: alias staging_server='MY_ENV1="http://example.com" MY_ENV2="http://example2.com" MY_ENV3="http://example3.com" MY_ENV4="http://example4.com" start_server -p 1234' Is there a way to split this up with newlines using a function or alias to make it more legible? Something like this (which doesn't seem to work)? alias staging_server=' \ MY_ENV1="http://example.com" \ MY_ENV2="http://example2.com" \ MY_ENV3="http://example3.com" \ MY_ENV4="http://example4.com" \ start_server -p 1234' I'd like to avoid exporting these as I don't want them as the defaults. | The alias actually seems to work for me like that (provided that there's no whitespace after the backslashes). But a function might be nicer and at least makes it easier to use single-quotes: staging_server() { MY_ENV1='http://example.com' \ MY_ENV2="..." \ start_server -p 1234 "$@"} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409528",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264830/"
]
} |
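One property of the function form worth noting: the VAR=value words prefixing start_server set those variables only in that command's environment, so nothing leaks into the calling shell. A quick check (assuming start_server returns or is backgrounded so the second line runs):

staging_server            # server runs with MY_ENV1..MY_ENV4 set
echo "${MY_ENV1-unset}"   # prints "unset" back in the interactive shell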
409,609 | I believe this should be simple but I can't get it to work properly. These are the commands I can run on command line: cd /home/debian/ap# Start a virtualenvsource venv-ap/bin/activate# This needs to happen inside the virtualenv and takes ~20 secondscrossbar start# Outside the virtualenv, perhaps in a different command line windowpython3 /home/debian/myscript.py These commands have to be done in this order. Due to the virtualenv, the non-executable for crossbar, and the separate python script afterwards, I haven't been able to figure out the best way to get this to work. My current work-in-progress: [Unit]Description=Start CBAfter=network.target[Service]Type=simpleUser=debianExecStartPre=source /home/debian/ap/venv-ap/bin/activateExecStart=cd /home/debian/ap/ && crossbar startRestart=always[Install]WantedBy=multi-user.target | This doesn't work because source is a shell command, so systemd's ExecStart= or ExecStartPre= won't understand them directly... (BTW, the same is true for cd and the && .) You could achieve that by running a shell explicitly and running all your commands together there: ExecStart=/bin/sh -c 'cd /home/debian/ap/ && source venv-ap/bin/activate && crossbar start' But a better approach is, instead of sourcing the "activate" script, to use the python executable in the bin/ of your virtualenv directly. If you look at virtualenv's usage document , you'll notice it says: ENV/bin is created, where executables live - noticeably a new python . Thus running a script with #! /path/to/ENV/bin/python would run that script under this virtualenv’s python. In other words, assuming crossbar is the Python script you want to run that requires the venv-ap virtualenv, simply begin crossbar with: #!/home/debian/ap/venv-ap/bin/python And it will automatically use the virtualenv whenever invoked. Also possible, invoking the Python interpreter from the virtualenv directly, with: ExecStart=/home/debian/ap/venv-ap/bin/python /path/to/crossbar start (Also, regarding running in a specific directory, setting WorkingDirectory=/home/debian/ap is better than using a cd command. You don't need a shell that way, and systemd can do better error handling for you.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/409609",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146761/"
]
} |
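Putting the answer's advice together, the unit from the question reduces to something like the sketch below. Assumption: crossbar was pip-installed into the virtualenv, so its console script in venv-ap/bin already carries the venv python in its shebang:

[Service]
Type=simple
User=debian
WorkingDirectory=/home/debian/ap
ExecStart=/home/debian/ap/venv-ap/bin/crossbar start
Restart=always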
409,677 | Is there any way to read total running time of a linux system from BIOS or CPU? I've searched BIOS informations by dmidecode. But it gives release date which is not proper for my question. Then I've checked out /proc . But it holds uptime values just from last reboot. Maybe, writing these uptime values for every boot could be an option. Then I've checked dumpe2fs . It gives total running time of a particular hard drive. It's useless for me because hdd could be changed while my application is running. Except these above, how can I read or calculate the total runtime of my system ? Where can I read from ? | This isn’t something the firmware tracks, as far as I’m aware. Even BMCs don’t measure total uptime. This won’t help with past uptime from previous boots, but you can start recording uptimes now, by installing a tool such as uptimed and setting it up so that it never discards values (set LOG_MAXIMUM_ENTRIES to 0 in uptimed.conf ). That will measure operating system uptime, not total CPU “on” time, but it should be close enough... Once you’ve got uptimed running, you can run uprecords to view the totals, for example up 1492 days, 02:57:18 | since Sat Sep 7 00:50:06 2013 down 61 days, 08:11:24 | since Sat Sep 7 00:50:06 2013 %up 96.051 | since Sat Sep 7 00:50:06 2013 As pointed out by quixotic , you’ll be able to get some idea of historical uptime by looking at your logs. If you’re running systemd, you can view the boots which have been logged using journalctl --list-boots . Log rotation means that this is likely to miss quite a lot of uptime though. As pointed out by JdeBP , last reboot might give you a longer list of boots with the associated uptime. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409677",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264940/"
]
} |
409,686 | OS: OpenSuse Leap 42.2, Kernel 4.4.76-1-default I have the following problem: When I attach my USB Storage Device during runtime it gets the device node name /dev/sdk assigned which is fine. If I restart the machine with the external USB Drive plugged in, the OS decides to attach the USB Storage Device to /dev/sdb which is supposed to be an internal Hard Drive. I need a way to force the external USB drive to get a device node name that is higher or equal than /dev/sdk From this Thread I learned so far: According to udev manual, there's no way to change the names of files in the /dev/ directory: NAME The name to use for a network interface. The name of a device node cannot be changed by udev, only additional symlinks can be created. This means for me that I cannot use a udev rule to force the Kernel / Driver to assign my USB device a certain device node name like /dev/sdk . I created a symlink using this Thread and the Symlink works. Now I have a static alias usbDevice -> sdb1 / usbDevice -> sdk1 that refers to the external USB Drive regardless if it is mounted on /dev/sdk or /dev/sdb . But this does not solve my Issue, because it does not change the actual node name of my USB device. Does anybody know how I can: Force this specific USB Device to get the node name assigned that is higher or equal than /dev/sdk Maybe by telling the OS that it should look for USB devices after all other devices are attached to the /dev folder. I do not mind what device node name the external USB device gets assigned as long as it is not replacing one of my already used device nodes. Can anybody give me a hint where to start ? Systemd service ? | As the udev manual excerpt you quoted says, device node names cannot be changed — the sdX letters are handed out by the kernel in the order the disks are discovered, and there is no supported interface for pinning a disk to /dev/sdk or reserving a range of letters for USB devices. The robust approach is the one you already set up: treat the node name as unstable and use a persistent identifier everywhere instead — your udev symlink, or the ready-made links under /dev/disk/by-id/, /dev/disk/by-uuid/ and /dev/disk/by-path/, which follow the device wherever the kernel puts it. If you really want USB disks enumerated after the internal ones, one fragile workaround is to stop the usb-storage module from loading automatically and load it late in boot, so the internal controllers claim the low letters first; a sketch follows this entry. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409686",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264799/"
]
} |
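A sketch of the delayed-probe workaround (assumptions: usb-storage is built as a module, not into the kernel, and the file paths and unit name here are illustrative):

# /etc/modprobe.d/delay-usb-storage.conf — prevent automatic loading
blacklist usb-storage

# /etc/systemd/system/usb-storage-late.service — load it once local disks are up
[Unit]
Description=Load usb-storage after local file systems
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/sbin/modprobe usb-storage

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable usb-storage-late.service. Even then, letters can still shift when hardware changes, so keep using the persistent symlinks for anything that matters.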
409,695 | I have this error in a script iptable .
./iptables-firewall.sh line 4: syntax error near unexpected token 'iptables'
./iptables-firewall.sh line 4: iptables -A OUTPUT -p udp -s $SERVER_IP --sport 1024:65535 -d $ip --dport 53 -m state -- state NEW, ESTABLISHED -j ACCEPT
On this code:
SERVER_IP="ppp"
DNS_SERVER="8.8.8.8"
for ip in $DNS_SERVER do
iptables -A OUTPUT -p udp -s $SERVER_IP --sport 1024:65535 -d $ip --dport 53 -m state -- state NEW, ESTABLISHED -j ACCEPT
iptables -A INPUT -p udp -s $ip --sport 53 -d $SERVER_IP --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
done | The do of a for loop must start a new command — put it on a line of its own (or after a ;). Where it stands now, the shell cannot tell whether do is the keyword or just another word in the in list, so it treats it as a list item and then chokes on the next line. Just write:
SERVER_IP="ppp"
DNS_SERVER="8.8.8.8"
for ip in $DNS_SERVER
do
iptables -A OUTPUT -p udp -s $SERVER_IP --sport 1024:65535 -d $ip --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p udp -s $ip --sport 53 -d $SERVER_IP --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
done
An alternative is to end the statement with a semicolon:
for ip in $DNS_SERVER ; do ...
(Note a second, unrelated problem: iptables writes the state match as --state NEW,ESTABLISHED — one option, comma-separated, no spaces — so the original -- state NEW, ESTABLISHED would be rejected even once the shell error is fixed. The corrected version above already uses the right form.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409695",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264952/"
]
} |
409,712 | How do I print all complete lines of a file? By 'complete' I mean only the ones that end with a newline character. grep treats EOF as a line delimiter, so grep '^.*$' file will print the last line even if there's no newline at the end of the file. The whole problem comes from parsing log files: we need to somehow be sure that the last entry was completely logged - i.e. that it ends with a newline. | A simple approach is to use perl :
perl -ne '/\n/ && print' file
If you just want to check that the last character of a file is a newline, you can do:
tail -c1 file | grep -q '^$' && echo yes || echo no
(Current POSIX does specify tail -c , so this is more portable than it once was, though very old or stripped-down implementations may still lack the option.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409712",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264957/"
]
} |
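The same check can be written without grep, using the fact that command substitution strips trailing newlines — the result is empty exactly when the file ends with one:

[ -z "$(tail -c1 file)" ] && echo yes || echo no
# caveat: this also prints "yes" for a completely empty file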
409,731 | I have a list of lines in a Bash script as follows if [ ! -z "$clone01" ]; then git clone "$clone01"; fiif [ ! -z "$clone02" ]; then git clone "$clone02"; fiif [ ! -z "$clone03" ]; then git clone "$clone03"; fi# $clone01 .... through to ... $clone60if [ ! -z "$clone60" ]; then git clone "$clone60"; fi the leading zero at the end of the variable, when the number is less than 10, is important. I have tried various substitutions and loops etc.This code is very repetitive, and there are 60 lines of it. How can I create this code dynamically and make it part of my executed script? What is the optimal approach to this problem? | Ok, don't do that, it's ugly. Either put the URLs in an array and loop over it: urls=( http://this.git http://that.git )for url in "${urls[@]}" ; do git clone "$url"done or put them in a file, one per line, and loop reading the lines. Here, it might be useful to guard for empty lines, just like you did. We could also ignore lines starting with # as comments: while read -r url ; do if [ -z "$url" ] || [ "${url:0:1}" = "#" ]; then continue; fi git clone "$url"done < file.with.urls If you want the line counter too, it's easy to add with arithmetic expansions . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409731",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46470/"
]
} |
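The line counter the answer alludes to, spelled out — a running number maintained with an arithmetic expansion inside the read loop:

n=0
while read -r url ; do
  n=$((n+1))
  if [ -z "$url" ] || [ "${url:0:1}" = "#" ]; then continue; fi
  printf 'cloning %d: %s\n' "$n" "$url"
  git clone "$url"
done < file.with.urls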
409,791 | I need to encrypt and be able to decrypt files with openssl , currently I do this simply with: openssl enc -aes-256-cbc -salt -in "$input_filename" -out "$output_filename" and the decryption with: openssl enc -aes-256-cbc -d -salt -in "$input_filename" -out "$output_filename" But with large files, I would like to see progress. I tried different variations of the following (decryption): pv "$input_filename" | openssl enc -aes-256-cbc -d -salt | pv > "$output_filename" But this fails to ask me for a password. I am unsure as to how to go about it? EDIT1: I found this tar over openssl : https://stackoverflow.com/a/24704457/1997354 While it could be extremely helpful, I don't get it much. EDIT2: Regarding the named pipe: It almost works. Except for the blinking progress, which I can't show you obviously and the final result looking like this: enter aes-256-cbc decryption password:1.25GiB 0:00:16 [75.9MiB/s] [==============================================================================================================================================================================================>] 100% 1.25GiB 0:00:10 [ 126MiB/s] [ <=> ] | You should try openssl enc -aes-256-cbc -d -salt -in "$input_filename" | pv -W >> "$output_filename" From the Manual : -W, --wait : Wait until the first byte has been transferred before showing any progress information or calculating any ETAs. Useful if the program you are piping to or from requires extra information before it starts, eg piping data into gpg(1) or mcrypt(1) which require a passphrase before data can be processed. which is exactly your case. If you need to see the progress bar, for the reason clearly explained by Weijun Zhou in a comment below, you can reverse the order of the commands in the pipe: pv -W "$input_filename" | openssl enc -aes-256-cbc -d -salt -out "$output_filename" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409791",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
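When pv sits after openssl it cannot know the total size, so no ETA is shown. If you want one there, the expected byte count can be passed explicitly — GNU stat shown; the plaintext is approximately the size of the ciphertext, so this is a close estimate:

openssl enc -aes-256-cbc -d -salt -in "$input_filename" | pv -W -s "$(stat -c %s "$input_filename")" >> "$output_filename"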
409,800 | I'm working on a Makefile setup to create a debian package: .PHONY: packagepackage: $(PACKAGE_TARGETS)build/%.changes: ../%.changes mkdir -p build for f in `<$< awk 'BEGIN { files = 0; } { if (files) print $$5; } /^Files:/ { files = 1; }'`; do \ test -f build/$$f || mv ../$$f build/; \ done mv $< $@../%.changes: dpkg-buildpackage -rfakeroot -b -uc Everything was working fine with the build until I made a very small commit. Which adds an executable file at /src/bin/app-cli . Then references that executable in the debian install file debian/app.install : src/bin/app-cli usr/local/bin For some reason making that change caused the build system to generate the error: dh_usrlocal: debian/app/usr/local/bin/app-cli is not a directoryrmdir: failed to remove 'debian/app/usr/local/bin': Directory not emptydh_usrlocal: rmdir debian/app/usr/local/bin returned exit code 1make[1]: *** [binary] Error 1make[1]: Leaving directory `/var/lib/jenkins/data/workspace/app-VBJPWWOTACEEMDGNNWMFS4QU6K4EPG5PJRESMEIZPZL4GAUCKWVQ'dpkg-buildpackage: error: fakeroot debian/rules binary gave error exit status 2make: *** [../app_0.0.1-13-master_amd64.changes] Error 2 Does anyone know this new file is being treated as directory by Debian Helper? | The dh_usrlocal manpage might help you understand what’s going on. Debian packages are allowed to install directories under /usr/local , but the best way to do so is to use maintainer scripts. dh_usrlocal automates this by transforming directories under /usr/local into maintainer script snippets. It assumes these directories are empty, since Debian packages aren’t allowed to install files under /usr/local . In your package, dh_install follows your instructions and copies app-cli to /usr/local/bin (inside your package). dh_usrlocal then runs, sees the directory, creates the appropriate maintainer script, then tries to delete the directory (so it isn’t shipped as such in the package) — and fails since it contains a file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409800",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
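The practical consequence, for anyone hitting the same error: Debian policy forbids packages from shipping files under /usr/local, so the cleanest fix is to point the install line at a path under /usr instead, e.g. in debian/app.install:

src/bin/app-cli usr/bin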
409,827 | Normally, pipelines in Unix are used to connect two commands and use the output of the first command as the input of the second command. However, I recently came up with the idea (which may not be new, but I didn't find much by Googling) of using a pipeline to run several commands in parallel, like this: command1 | command2 This will invoke command1 and command2 in parallel if command2 does not read from standard input and command1 does not write to standard output. A minimal example to illustrate this is (please run it in an interactive shell):
ls . -R 1>&2|ls . -R
My question is, are there any downsides to using a pipeline to parallelize the execution of two commands in this way? Is there anything that I have missed in this idea? Thank you very much in advance. | Command pipelines already run in parallel. With the command: command1 | command2 Both command1 and command2 are started. If command2 is scheduled and the pipe is empty, it blocks waiting to read. If command1 tries to write to the pipe and it's full, command1 blocks until there's room to write. Otherwise, both command1 and command2 execute in parallel, writing to and reading from the pipe. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409827",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259023/"
]
} |
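A quick way to observe the concurrency: both sides of the pipeline sleep for two seconds, yet the whole pipeline finishes in about two seconds rather than four:

time sh -c 'sleep 2 | sleep 2'    # real ≈ 0m2.0s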
409,833 | I'm writing a function called restore that will copy a file from a backup directory to the current directory. I now need to create a hard link to restore so that it can be called as purge . How would I implement it so that I could use the if statement if [ "$0" = "purge" ] for when restore is called as purge? Here is my code, although I will shorten it since I have tested it (it works):
restore(){
if [ "$1" = "-p" ] || [ "$0" = "purge" ]; then
while [ ! ]
do
#Purge code, etc...
done
elif [ "$1" != "-p" ]; then
select fname in $(find /$HOME/Backup -name "$1*" | sed 's!.*/!!' | sed 's!$1!!') quit
do
#If restore is called with an argument code...
done
local newfname=$(echo "$fname"|sed -E 's/[0-9]{11}$//')
cp -i "/$HOME/Backup/$fname" "$newfname"
exit 0
fi
while [ ! ]
do
fname=""
select fname in $(ls /$HOME/Backup) quit
do
#Restore with no arguments code...
done
local newfname=$(echo "$fname"|sed -E 's/[0-9]{11}$//')
cp -i "/$HOME/Backup/$fname" "$newfname"
done
}
Calling restore with the -p option is the same as invoking restore as purge. So how would I implement the code so that restore can be invoked by using purge? It is supposed to be a script rather than a function. I made a hard link to Restore.sh named Purge.sh , but when I call it using ./Purge.sh it still runs the standard Restore code. How can I determine if Restore is called by the hard link file? | Create the hard link to restore.sh :
ln restore.sh link_to_restore.sh
Then have restore.sh branch on the name it was invoked under, $0 :
#!/bin/bash
if [ "$0" = "./link_to_restore.sh" ]; then
 echo foo
elif [ "$0" = "./restore.sh" ]; then
 echo bar
fi
Testing:
$ ./restore.sh
bar
$ ./link_to_restore.sh
foo
(A more robust variant that does not depend on the ./ invocation style is sketched after this entry.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409833",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255371/"
]
} |
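Comparing $0 against a literal like ./restore.sh only matches one invocation style — it fails for /full/path/restore.sh or for plain restore.sh found via $PATH. A sketch of a sturdier dispatch, stripping the directory part first (names match the Restore.sh/Purge.sh pair from the question):

#!/bin/bash
case "$(basename -- "$0")" in
  Purge.sh)   echo "purge mode"   ;;  # reached via the hard link
  Restore.sh) echo "restore mode" ;;
  *)          echo "unknown invocation name: $0" >&2; exit 1 ;;
esac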
409,870 | I've just read about iptables-persistent and I'm completely lost w.r.t. the design. I'm not the only one , who didn't understand how it works, but actually it seems to be way beyond my imagination. I imagined something like crontab -e : You edit a set of rules and they get persisted and applied when the editor gets closed. They get stored somewhere and I as a user have no idea where. Don't tell me; it's perfect this way. Is there such a tool? Why does iptables-persistent work in this hard to follow way? | In general, you can edit the active iptables rules for IPv4 with a text editor by using the iptables-save command to write the rules to a file and then using the iptables-restore command to reload the new rules after you're done, e.g.: user@host:~$ iptables-save > rules.v4user@host:~$ vim rules.v4user@host:~$ iptables-restore rules.v4 For IPv6 you would use the analogous commands ip6tables-save and ip6tables-restore , i.e.: user@host:~$ ip6tables-save > rules.v6user@host:~$ vim rules.v6user@host:~$ ip6tables-restore rules.v6 The iptables-persistent service checks in the following locations: /etc/iptables/rules.v4/etc/iptables/rules.v6 So to apply your rules and have them persist you would follow the same steps as above, but edit the iptables-persistent files instead, e.g.: user@host:~$ iptables-save > /etc/iptables/rules.v4user@host:~$ vim /etc/iptables/rules.v4user@host:~$ iptables-restore /etc/iptables/rules.v4 I don't know of an interactive command for editing iptables rules like what you're describing, but it should be pretty easy to roll your own. Here is a simple example: #!/usr/bin/env bash# iptables-e.sh# Create a temporary file to store the new rulesTEMPFILE=$(mktemp)# Save the current rules to a fileiptables-save > "${TEMPFILE}"# Edit the rules interactively with a text editor"${EDITOR}" "${TEMPFILE}" # Try to load the rules and update the persistent rules if no errors occuriptables-restore "${TEMPFILE}" && cat "${TEMPFILE}" > /etc/iptables/rules.v4 This actually isn't too much different from how crontab -e works, which just automatically saves the active crontab to a file in the /var/spool/cron/crontabs directory, which is what causes the crontab to be persistent. See the following post for further discussion of this subject: Where is the user crontab stored? You might also be interested in the following script: iptables wizard I can't vouch for it though. I've never used it. It's just the only thing I found by searching for interactive iptables editing. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409870",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5116/"
]
} |
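On installations where the package ships the netfilter-persistent wrapper (newer Debian and Ubuntu releases), the save step in the script above can be replaced by a single command that writes both rule files:

sudo netfilter-persistent save    # saves to /etc/iptables/rules.v4 and rules.v6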