260,533
What command can be used to determine the used encryption on a LUKS partition (all the relevant information, initialization vector, generation scheme, mode of operation and block cipher primitive)?
If the decrypted volume is /dev/mapper/crypto then you can get the information with:

dmsetup table crypto
0 104853504 crypt aes-cbc-essiv:sha256 000[...]000 0 254:2 4096

If the encrypted volume is /dev/storage2/crypto then you get the information with:

cryptsetup luksDump /dev/storage2/crypto
LUKS header information for /dev/storage2/crypto
Version:        1
Cipher name:    aes
Cipher mode:    cbc-essiv:sha256
Hash spec:      sha256
[...]
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/260533", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148018/" ] }
260,563
I am using zsh and oh-my-zsh on Arch Linux. I am not able to make a directory using mkdir:

edward@ArchLinux ~ $ sudo mkdir -p /samba/raspberry
[sudo] password for edward:
sudo: nocorrect: command not found

I know it has something to do with the auto-correction feature of zsh and an alias that is defined, but I can't figure it out.
I have the alias alias sudo='sudo ' defined in a file which I sourced at the end of my ~/.zshrc, and it overwrote alias sudo='nocorrect sudo', which is defined in .oh-my-zsh/lib/correction.zsh. The alias sudo='nocorrect sudo' is needed so that zsh's auto-correction leaves sudo'ed commands alone. More: How to disable autocorrection for sudo [command] in zsh? But at the same time I need alias sudo='sudo ' (note the trailing space) so that aliases of commands following sudo are expanded. More: Load aliases from .bashrc file while using sudo. Please note that alias sudo='sudo ' works for zsh too. So I can either have zsh's auto-correction behaviour or have aliases (of other commands) work with sudo, so for now I have disabled zsh's auto-correction feature. (Hope I am clear and not confusing.)
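A possible way out, not in the posts above but worth sketching: zsh lets a single alias carry both behaviours, since nocorrect is a precommand modifier and a trailing space makes the word after the alias eligible for alias expansion too. Assuming it is defined after oh-my-zsh's own alias in ~/.zshrc, something like this should keep both features:

alias sudo='nocorrect sudo '   # nocorrect disables spelling correction; the trailing space lets the next word expand as an alias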
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/260563", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52733/" ] }
260,627
I have a bash script that's getting quite long. It would be nice if I could list all the functions in it. Even better would be listing the name of each function and any documentation about its usage, e.g. parameters.
The usual way is to use declare -f, which prints a very long list of functions in an interactive bash shell. But inside a script, as most external functions are not defined, the list will be short and useful. So:

declare -f

will list the functions (with their definitions), and:

declare -F

will print a list of function names only. There is also a (not so easy to use) extdebug option: if it is set, declare -F will additionally print the line number and source file of each function's definition. But extdebug needs to be set at script loading time (as all lines need to be known to be able to list them).
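For the "documentation" part of the question, which declare does not cover, one hedged sketch is to rely on a comment placed on the line just before each definition and pull both out with grep. This assumes the name() { ... } definition form and a one-line comment convention; functions defined with the function keyword would need a different pattern:

grep -B1 -E '^[A-Za-z_][A-Za-z0-9_]*[[:space:]]*\(\)' myscript.sh
# prints each "funcname()" line preceded by the comment line directly above it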
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/260627", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136941/" ] }
260,630
I have a list of directories and subdirectories that contain large csv files. There are about 500 million lines in these files; each is a record. I would like to know:

How many lines are in each file.
How many lines are in each directory.
How many lines in total.

Most importantly, I need this in 'human readable format', e.g. 12,345,678 rather than 12345678. It would be nice to learn how to do this in 3 ways: plain vanilla bash tools, awk etc., and perl (or python).
How many lines are in each file.

Use wc, originally for word count, I believe, but it can do lines, words, characters, bytes, and the longest line length. The -l option tells it to count lines:

wc -l <filename>

This will output the number of lines in <filename>:

$ wc -l /dir/file.txt
32724 /dir/file.txt

You can also pipe data to wc:

$ cat /dir/file.txt | wc -l
32724
$ curl google.com --silent | wc -l
63

How many lines are in each directory.

Try:

find . -name '*.pl' | xargs wc -l

another one-liner:

( find ./ -name '*.pl' -print0 | xargs -0 cat ) | wc -l

BTW, wc counts newline characters, not lines. When the last line in a file does not end with a newline, it will not be counted. You may use grep -c ^ instead; full example:

#this example prints the line count for all found files
total=0
while read -r FILE; do
    #you see, use grep instead of wc, for proper counting
    count=$(grep -c ^ < "$FILE")
    echo "$FILE has $count lines"
    let total=total+count #in bash, you can convert this for another shell
done < <(find /path -type f -name "*.php")
echo TOTAL LINES COUNTED: $total

(The loop reads from process substitution rather than a pipe so that $total survives until the final echo.)

How many lines in total.

Not sure that I understood your request correctly, e.g. this will output results in the following format, showing the number of lines for each file:

# wc -l `find /path/to/directory/ -type f`
 103 /dir/a.php
 378 /dir/b/c.xml
 132 /dir/d/e.xml
 613 total

Alternatively, to output just the total number of newline characters without the file-by-file counts, the following command can prove useful:

# find /path/to/directory/ -type f -exec wc -l {} \; | awk '{total += $1} END{print total}'
 613

Most importantly, I need this in 'human readable format' e.g. 12,345,678 rather than 12345678

Bash's printf builtin supports locale-aware digit grouping via the ' flag, so in a locale that groups with commas:

printf "%'d\n" 12345678
12,345,678

As always, there are many different methods that could be used to achieve the same results mentioned here.
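The question also asks for awk and perl variants. Here is a hedged sketch of each for the grand total; the perl one also inserts the thousands separators using the classic perlfaq comma recipe (paths and globs are illustrative):

find /path -type f -name '*.csv' -print0 | xargs -0 cat | awk 'END{print NR}'
find /path -type f -name '*.csv' -print0 | xargs -0 cat \
  | perl -ne 'END { $n = $.; 1 while $n =~ s/^(\d+)(\d{3})/$1,$2/; print "$n\n" }'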
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/260630", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136941/" ] }
260,694
When I was not near my computer some guy got on it and set an alias for ls in my root folder. He set it to 'yes NeverGonnaGiveYouUp'. So now when I'm in my root folder and type ls I get an infinite loop of NeverGonnaGiveYouUp. It's driving me nuts and I don't know how to get rid of it. I've already tried unalias and unalias -a, but those just remove it temporarily: once I close the shell and reopen it, it comes back. How do I get rid of this crap?
If unalias removes the issue (even temporarily) we have confirmation it is an alias. It could be "brute forced" out by adding an unalias ls to ~/.bashrc:

echo "unalias ls" >> ~/.bashrc

That will get executed every time .bashrc is read and will remove the alias. That will buy you some peace, but it will not resolve the actual issue: some file still contains the code that re-creates the alias. You need to find which file contains the problem. If using bash:

grep "NeverGonnaGiveYouUp" /etc/profile /etc/bash.bashrc \
    ~/.bashrc ~/.bash_profile ~/.profile \
    /root/.bashrc /root/.bash_profile /root/.profile

That's a good list of the files that could hold the definition. If nothing shows up in that search, or you use some other shell, let us know to help further.
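If the definition still doesn't turn up in those files, a hedged fallback is to grep recursively through /etc and your home directory; this can be slow and noisy, and the string is only an example of what to search for:

grep -rn "NeverGonnaGiveYouUp" /etc ~ 2>/dev/null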
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/260694", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155462/" ] }
260,720
I see that in Bash > 4.0 variable expansion is used to lowercase a variable. For example echo ${variable,,} Reading the man page I really don't get why the shell is converting the string to lowercase. A sequence expression takes the form {x..y[..incr]}, where x and y are either integers or single characters, and incr, an optional increment, is an integer. When integers are supplied, the expression expands to each number between x and y, inclusive. Supplied integers may be prefixed with β€˜0’ to force each term to have the same width. When either x or y begins with a zero, the shell attempts to force all generated terms to contain the same number of digits, zero-padding where necessary. When characters are supplied, the expression expands to each character lexicographically between x and y, inclusive, using the default C locale. Note that both x and y must be of the same type. When the increment is supplied, it is used as the difference between each term. The default increment is 1 or -1 as appropriate. Why is the variable converted to lowercase?
You're reading the wrong section of the documentation; look at shell parameter expansion instead.

${parameter^pattern}
${parameter^^pattern}
${parameter,pattern}
${parameter,,pattern}

This expansion modifies the case of alphabetic characters in parameter. The pattern is expanded to produce a pattern just as in filename expansion. Each character in the expanded value of parameter is tested against pattern, and, if it matches the pattern, its case is converted. The pattern should not attempt to match more than one character. The '^' operator converts lowercase letters matching pattern to uppercase; the ',' operator converts matching uppercase letters to lowercase. The '^^' and ',,' expansions convert each matched character in the expanded value; the '^' and ',' expansions match and convert only the first character in the expanded value. If pattern is omitted, it is treated like a '?', which matches every character. If parameter is '@' or '*', the case modification operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with '@' or '*', the case modification operation is applied to each member of the array in turn, and the expansion is the resultant list.
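A quick illustration of those operators (bash 4 or later; the variable is just an example):

var="hello World"
echo "${var,,}"   # hello world  - all characters lowercased
echo "${var^^}"   # HELLO WORLD  - all characters uppercased
echo "${var^}"    # Hello World  - only the first character uppercased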
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/260720", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10616/" ] }
260,813
At my company, when I log into some servers, my last login and a huge banner are displayed:

me@my-laptop$ ssh the-server
Last login: Mon Feb 8 18:54:36 2016 from my-laptop.company.com
*************************************************************************
**   C O M P A N Y   I N F O R M A T I O N   S Y S T E M S            **
**   !WARNING! Your connection has been logged !WARNING!              **
**
**  This system is for the use of authorized personnel only.
**  Individuals using this computer system without authorization,
**  or in excess of their authority as determined by the Company
**  Code of Ethics and Acceptable Use Policy, are subject to having all
**  of their activities on this system monitored, recorded and/or
**  terminated by system personnel.
**  If such monitoring reveals possible evidence of criminal activity,
**  Company may provide said evidence to law enforcement officials,
**  in compliance with its confidentiality obligations and all
**  applicable national laws/regulations with regards to data privacy.
**
**  This device is maintained by Company Department
**  [email protected]
*************************************************************************
me@the-server$

Of course, I don't want this huge banner displayed every time I log in, but I would like to keep the last login time and host displayed. If I use touch ~/.hushlogin, the banner is not displayed but I also lose the last login information. In fact, nothing at all is displayed:

ssh the-server
me@the-server$

How do I remove the banner but keep the last login time and host, like this:

ssh the-server
Last login: Mon Feb 8 18:54:36 2016 from my-laptop.company.com
me@the-server$
One way would be to add the following to ~/.ssh/rc, which contains commands to be run when you ssh into the machine:

lastlog -u $USER | perl -lane 'END{print "Last login: @F[3..6] $F[8] from $F[2]"}'

The command gets the time of your last login from lastlog and then formats it so that it looks like the original version. You can now touch ~/.hushlogin and you will still see that message.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/260813", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37426/" ] }
260,823
A program I am working on is crashing and generating core dumps. I think that the issue is related to the arguments that it's being called with, some of which are automatically generated by another (rather complicated) program. So I've tried using gdb to debug, or file core.MyApplication.1234 to get the arguments. However, the command is pretty lengthy, and the output looks something like: core.MyApplication.1234: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from './MyApplication -view -mwip localhost -mwnp 12345 -mwlp 12346 -mwti 12347 -Debu' (I did change the names for this example, but you get the idea.) I know for a fact that there were several more arguments after this, but in the core files the command is always truncated at 80 characters. Both gdb and file report this. Looking at the output of objdump I'm not sure the rest was even written into the core dump, because it appears to cut off after "-Debu" too. I am running this on RHEL6. I found this thread from 2007 describing a solution for Solaris systems using pargs , but that's not a valid command on my system, and the Red Hat "equivalents" I've found only work on running processes, not a core file. How can I recover the entire command used to run the program? Is that even possible?
The data is there (at least up to 999 entries totalling at least 6885 bytes of numbered blahs):

> cat segfault.c
int main(int argc, char *argv[])
{
    char *s = "hello world";
    *s = 'H';
}
> cc -g -o segfault segfault.c
> limit coredumpsize 9999999
> ./segfault `perl -le 'print "blah$_" for 1..999'`
Segmentation fault (core dumped)
> strings core.12231 | grep -c blah
1000

Then with some quick altagoobingleduckgoing of gdb, and assuming debug symbols, this can be recovered via something like:

> gdb ./segfault core.12231
...
(gdb) p argc
$1 = 1000
(gdb) x/1000s *argv
...

Another option would be to use a simple shell wrapper that logs "$@" somewhere then execs the proper program with the given arguments.
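The wrapper idea from the last sentence could look something like this; the names and paths are hypothetical, the usual trick being to rename the real binary and drop the wrapper in its place:

#!/bin/sh
# log the full argument list, one argument per line, then hand over to the real program
printf '%s\n' "$@" >> /tmp/MyApplication.args
exec /path/to/MyApplication.real "$@"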
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/260823", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136788/" ] }
260,906
I believe Ctrl-C can be trapped in bash scripts. Is it also possible to trap it inside an awk script in order to handle that event? For example, for aborting processing, but printing the results of what has been processed already, instead of just silently quitting?
I'm not aware of any awk implementation that has support for that. You could write an extension for gawk for that, but here, I'd rather switch to another language. perl makes it easy to convert awk scripts with its a2p script. For instance, if you have an awk script like:

{count[$0]++}
END {
  for (i in count) printf "%5d %s\n", count[i], i
}

a2p on it will give you something like:

#!/usr/bin/perl
eval 'exec /usr/bin/perl -S $0 ${1+"$@"}'
        if $running_under_some_shell;
                        # this emulates #! processing on NIH machines.
                        # (remove #! line above if indigestible)

eval '$'.$1.'$2;' while $ARGV[0] =~ /^([A-Za-z_0-9]+=)(.*)/ && shift;
                        # process any FOO=bar switches

while (<>) {
    chomp;      # strip record separator
    $count{$_}++;
}

foreach $i (keys %count) {
    printf "%5d %s\n", $count{$i}, $i;
}

Which you can edit to add your signal handling (and remove that processing of var=value arguments which we don't want here, and the part intended for systems that don't support #!):

#!/usr/bin/perl
sub report {
  foreach $i (keys %count) {
    printf "%5d %s\n", $count{$i}, $i;
  }
}
$SIG{INT} = sub {
  print STDERR "Interrupted\n";
  report;
  $SIG{INT} = 'DEFAULT';
  kill('INT', $$); # report dying of SIGINT.
};

while (<>) {
  chomp;      # strip record separator
  $count{$_}++;
}
report;

Another alternative could be to interrupt the feeding of data to awk, and have awk ignore the SIGINT. Instead of:

awk '{count[$0]++};END{for (i in count) printf "%5d %s\n", count[i], i}' file

do:

cat file | (
  trap '' INT
  awk '{count[$0]++};END{for (i in count) printf "%5d %s\n", count[i], i}'
)

Ctrl+C will then kill cat but not awk. awk will still keep on processing the remaining input still in the pipe. To detect the Ctrl+C in awk, you could do:

(cat file && echo cat terminated normally) | (
  trap '' INT
  awk '{count[$0]++}
    END{
      if ($0 == "cat terminated normally")
        delete count[$0]
      else
        print "Interrupted"
      for (i in count) printf "%5d %s\n", count[i], i}')
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/260906", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17350/" ] }
260,911
I'm currently working on a homework assignment and it is basically saying, "Find the single line with "Ju" but does not contain the letter "w" in that line" I believe that I have to use grep. However I'm not sure if I can just add this to a file. grep "Ju" | grep -v "w" maybe? grep "alpha" | grep -v "beta" > file-name
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/260911", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/154544/" ] }
260,940
I've just installed haproxy on my test server. Is there a way of making it write its logs to a local file, rather than syslog? This is only for testing so I don't want to start opening ports / cluttering up syslog with all my test data. Unfortunately, the only information I can find all revolves around logging to a syslog server. I tried using:

log /home/user/ha.log local0

in my config. But that told me:

[ALERT] 039/095022 (9528) : sendto logger #1 failed: No such file or directory (errno=2)

when I restarted. So I created the file with touch /home/user/ha.log and restarted, at which point I got:

[ALERT] 039/095055 (9593) : sendto logger #1 failed: Connection refused (errno=111)

Is this possible, or am I going to have to configure syslog etc. to see my test data?
HAProxy simply doesn't support logging to files. As stated in the documentation ( https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#3.1-log ), the "log" statement takes an address as its first parameter. If that's a file, it's a unix socket, and HAProxy will speak the syslog format to that socket. HAProxy is designed like this because its responsibility is to proxy requests, not to write files; it delegates the writing of log files to syslog. If you don't want to mess with your machine, you can for example install logstash and run:

logstash -e 'input { unix { path => "/tmp/haproxy_log.sock" } } output { stdout { } }'

and add:

log /tmp/haproxy_log.sock local0

in your haproxy.cfg to test it.
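If what you really want is a plain file on disk, the usual pattern is to log to the local syslog daemon and let it write the file. A hedged sketch for rsyslog follows; the file names and the loopback UDP listener are assumptions to adapt to your setup:

# /etc/rsyslog.d/49-haproxy.conf
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
local0.*    /var/log/haproxy.log

# and in haproxy.cfg:
#   log 127.0.0.1 local0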
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/260940", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102428/" ] }
260,941
I am trying to install Anaconda on my Linux machine. Right or wrong, at the end of the instructions they say to add this line to the file .bashrc in your home directory:

export PATH="/home/username/anaconda/bin:$PATH"

I do not know much about how the PATH in bash works. But I already have another PATH line in my .bashrc file:

export PATH="/usr/local/share/rsi/idl/bin:$PATH"

How am I supposed to add the new path?
This should be it (all the paths you want in ${PATH}, separated by colons):

export PATH="/usr/local/share/rsi/idl/bin:/home/username/anaconda/bin:$PATH"
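Equivalently, you could simply keep the two lines you already have, one after the other: each prepends its directory to whatever $PATH holds at that point, so both directories end up in the search path:

export PATH="/usr/local/share/rsi/idl/bin:$PATH"
export PATH="/home/username/anaconda/bin:$PATH"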
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/260941", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/58092/" ] }
260,973
I have a really strange issue with systemd . When I issue a systemctl restart it will start the new process before the previous one finishes. This can be seen in the log, where the final shutdown message ("closing log") is logged after the startup message ("opening log"). Is there any way to add a delay between the stop and the start of process?
In your systemd service files, you can set the RestartSec option to add a delay before a restart. See the example below:

[Service]
Restart=always
RestartSec=30

Check this link for more examples.
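Note that RestartSec only applies to restarts that systemd itself triggers through Restart=. If the delay also has to cover a manual systemctl restart, a commonly used (if somewhat blunt) sketch is to make start-up itself wait:

[Service]
# assumption: /bin/sleep is the sleep binary's path on your distro
ExecStartPre=/bin/sleep 5
Restart=always
RestartSec=30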
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/260973", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2671/" ] }
260,981
I am reading about pulseaudio, how it works and how I can configure it. I am encountering two keywords a lot: SINK , SOURCE. At first I thought SINK meant OUTPUT and SOURCE meant INPUT , but it seems that this is not the case. Could someone explain what SINK and SOURCE mean in simple English?
As per the project description : PulseAudio clients can send audio to "sinks" and receive audio from "sources". So sinks are outputs (audio goes there), sources are inputs (audio comes from there).
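To list the ones on your own system, the stock pactl tool can enumerate them:

pactl list short sinks
pactl list short sources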
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/260981", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59565/" ] }
261,034
I use the following command in order to delete only the files that start with DBG and are older than two days, but this syntax does not print the files that were deleted:

find /tmp -type f -mtime +2 -name "DBG*" -exec rm {} \;

How can I add to this find syntax so that it prints the deleted files?
Just use the -print flag:

find /tmp -type f -mtime +2 -name "DBG*" -exec rm {} \; -print

or, if rm supports the -v option, let rm do it all:

find /tmp -type f -mtime +2 -name "DBG*" -exec rm -v {} +

or if your find supports -delete:

find /tmp -type f -mtime +2 -name "DBG*" -delete -print

(note that the first two have a race condition that could allow one to delete DBG* files anywhere on the file system)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261034", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155492/" ] }
261,036
I need to collect information about the network usage of each process. Nethogs presents the data I need in real time, and I am trying to save the output to a file in order to parse it and plot the data. The white bar is messing up the output, so I used:

sudo nethogs wlan0 | perl -pe 's/\x1b.*?[mGKH]//g'

Now it is better, but the DEV and SENT columns are merged. One more thing: I need to add a timestamp per flush.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261036", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155725/" ] }
261,044
Is there a way to apply changes to .Xdefaults (for example the font size) to all running terminals in a session? I can apply them to new terminals by loading xrdb -load .Xdefaults, but this doesn't apply to already running terminals. If it matters, I am using urxvt (in daemon mode) as terminal and xmonad as window manager on Ubuntu 15.10. Just for the font sizes I had the idea that one could use the fontsize perl extension to inject a font-size change into each open terminal, but I don't know how to do this.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261044", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5289/" ] }
261,087
My long-lasting previous installation somehow tied VLC to the GTK file dialog. I didn't even do anything special, except installing VLC. After an update to VLC 2.2.1 the file dialog was replaced by the Qt one and I don't see any obvious way to get back to GTK. When I mark "vlc-qt" for deinstallation, the entire vlc package is marked for removal as well. openSUSE 13.2
VLC media player has been using the Qt interface for quite a long time. VLC, however, has an option to override the window style, which will also change the file dialog. In VLC media player, do the following steps:

Go to Tools > Preferences (or press Ctrl + P)
In the first tab, under Interface Settings - Look and feel, look for "Force window style:" with the drop-down menu and change the selection from System's default to GTK+
Finally, click Save to apply the changes.

Then, go to Media > Open File... (or press Ctrl + O) to confirm that the file dialog now uses the GTK+ window style. That's all. Tested with VLC 2.2.1 in Debian 8 Xfce (Xfce 4.10).

Force style for Qt5 in Debian/Ubuntu

Previously, for Debian 9 (testing) and Ubuntu 16.04 (xenial) and older, users had to additionally install the libqt5libqgtk2 package from the repository. For newer releases, that is now provided by qt5-gtk-platformtheme or qt5-gtk2-platformtheme, and either one will be installed automatically by recommends.

Debian Testing (stretch) -- needed libqt5libqgtk2
Debian Old Stable (stretch) and newer
Ubuntu 15.10 (wily) until 16.04 (xenial) -- needed libqt5libqgtk2
Ubuntu 18.04 (bionic) and newer

Tested with VLC 2.2.2 in Xubuntu 16.04 (Xfce 4.12). I did not test in Debian, but it reportedly works according to this post on Ask Ubuntu. Later, I observed that the qt5-gtk-platformtheme package was installed by default for VLC 3.0.9 in Xubuntu 20.04.

Force style for Qt5 in other distributions

The package above is not available in the repositories of other distributions, including openSUSE, according to this search result from software.opensuse.org. As an alternative, the Arch Wiki notes that the QT_STYLE_OVERRIDE environment variable will force a specific style on Qt applications. Therefore, the line QT_STYLE_OVERRIDE=gtk2 or QT_STYLE_OVERRIDE=GTK+ may be added in one of the following locations:

~/.profile (reportedly works in Linux Mint, suggested in this post on Unix.SE)
~/.bashrc (suggested in this post on Ask Ubuntu)
~/.xsession or ~/.xinitrc (suggested in this post on a FreeBSD forum)
~/.xsessionrc (suggested for OpenBox in this post on the CrunchBang Linux forum)

Without installing the Qt5 package, I have tried exporting the line in each of the above configuration files one at a time, except for the last one. However, none of these worked for VLC in Xubuntu 16.04. At the moment, I can't verify whether the environment variable actually works or not.
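For completeness, the environment-variable route from the last section would look like this in a shell startup file; as noted above it did not work in every test, and the effect depends on the Qt version and platform theme in use:

export QT_STYLE_OVERRIDE=gtk2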
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261087", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5884/" ] }
261,107
I've been searching around with no luck, so I'm just asking to make sure. Is it possible to import an external config file? Example with the ~/.ssh/config file:

Host *
    IdentityFile ~/.ssh/id_rsa_servicekey
Include ~/.sshconfig.local
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261107", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57910/" ] }
261,162
Say I have the following:

for i in $@; do
  echo ${i+1}
done

and I run this as $ test.sh 3 5 8 4 ; it outputs

1
1
1
1

Why wouldn't ${i+1} work? I am trying to access the next argument for a list of command line arguments.
Each character in shell may have a special meaning. The code ${i+1} does not mean "add 1 to i". To find what it means, execute this command:

LESS=+/'\{parameter\:\+word\}' man bash

And read:

${parameter:+word}
    Use Alternate Value. If parameter is null or unset, nothing is substituted, otherwise the expansion of word is substituted.

And a little way above:

    Omitting the colon results in a test only for a parameter that is unset.

As $i has a value set by the loop for i in $@; the "Alternate Value" is substituted and 1 is printed. If you want to add 1 to the value of the arguments, do this:

for i
do
    echo "$((i+1))"
done

There is no need for the in "$@" (and get used to quoting all expansions).

$ ./test.sh 3 5 8 4
4
6
9
5

Next argument.

But that is not "the next argument" either. The core issue is with your loop: you are using the value of the arguments in the loop, not an index to the arguments. You need to loop over an index i of the arguments, not the value i of each argument. Something like:

for (( i=1; i<=$#; i++)); do
    echo "${i}"
done

That will print an index, as this shows:

$ ./test.sh 3 5 8 4
1
2
3
4

Indirection

How do we access the argument at position $i? With indirection:

for (( i=1; i<=$#; i++)); do
    echo "${!i}"
done

See the simple ! added? Now it runs like this:

$ ./test.sh 3 5 8 4
3
5
8
4

Final solution.

And to print both the present argument and the next, use this:

for (( i=1; i<=$#; i++)); do
    j=$((i+1))
    echo "${!i} ${!j}"
done

No, there is no simpler way than to calculate the value in the variable $j.

$ ./test.sh 3 5 8 4
3 5
5 8
8 4
4

That works for text also:

$ ./test.sh sa jwe yqs ldfgt
sa jwe
jwe yqs
yqs ldfgt
ldfgt
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261162", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145794/" ] }
261,183
In a directory I have X.txt, Y.txt, Z.txt files. I want to move these filenames into a single file, like below:

Out_file.txt
X.txt
Y.txt
Z.txt

Any unix command to achieve this?
ls >> Out_file.txt

When you are in the folder concerned, of course...
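A hedged alternative that avoids appending and uses the shell's own globbing; note that if Out_file.txt already exists and matches the pattern, it will be listed too:

printf '%s\n' *.txt > Out_file.txt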
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261183", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155819/" ] }
261,194
Here's an example of a bash script that redirects all output to a file (and shows output on the screen too):

# writing to it
exec > >(tee --ignore-interrupts ./log)
exec 2>&1
echo "here is a message"

# reading from it again
cat ./log

Now I don't want to create the file ./log. I want to keep all the stdout in memory, and be able to read from it again later in the script. Also I'm in a situation where there may be no root filesystem mounted. I tried to do this with process substitution instead of ./log, but I can't seem to make sense of how to pass the file descriptor created by process substitution to a subsequent command in order to read what I just wrote. Here's another example:

# can I make ./log a temporary file descriptor, an in-memory buffer?
exec 3>./log 4<./log

# writes
echo >&3 "hello"

# reads
cat <&4
If you want to globally redirect everything that happens to be written (like you do now), it's tricky, but can be hacked together. I strongly recommend that, if it's possible, you just do it by normal piping: wrap everything you do in a subshell. In this case:

( echo "this is the message"
  other stuff ) | cat

or just write everything into a variable with "$()" syntax.

The next way is to use what you did, but write to a tmpfs or /dev/shm if they are available. That's pretty straightforward, but you have to know what ram-based filesystems are in place (and set them up if possible). Another way is to create a fifo with mkfifo. In both cases, you need to clean up after yourself.

EDIT: I have a very ugly hack, but I bet someone can improve it.

#!/bin/bash
exec 3>&1
exec > >( tee >( ( tac && echo _ ) | tac | (read && cat > ./log) ) )
echo "lol"
sleep 5
echo "lol"
echo "finished writing"
exec >&-
exec >&3
exec 3>&-
echo "stdout now reopen"
sleep 1 #wait if the file is still being written asynchronously
cat ./log

How it works: first, you have a tee so you can see what's going on. This in turn outputs to another process substitution. There, you have the trick tac|tac which (because tac needs the entire input to start outputting) waits for the entire stream to finish before going on. The last piece is a subshell that actually outputs this into a file. Of course, the final shell would, immediately upon instantiation, create the output file in the filesystem if that was the only line. So something that also waits for the input to finally come has to be done first, to delay file creation. I do this by outputting a dummy line first with echo, and then reading and discarding it. The read blocks until you close the file descriptor, signalling to tac that its time has come. Hence the closing of the stdout file descriptor at the end. I also saved the original stdout before opening the process substitution, in order to restore it at the end (to use cat once more). There's a sleep 5 in there, so I could check with ls that the file really wasn't created too early. The final sleep is trickier... The subshell is asynchronous, and if there is a lot of output, you are waiting for both tacs to do their thing before the file is really there. So reasonably, you'll probably need to do something else to check whether the thing really is finished. For instance, && touch sentinel at the end of the last subshell, and then while [ ! -f sentinel ]; do sleep 1; done && rm sentinel before you finally use the file. All in all, two process substitutions and in the inner one another two subshells and 2 pipes. It's one of the ugliest things I've ever written... but it should create the file only when you close stdout, which means it's well controlled and can be done when your filesystems are ready.
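A hedged sketch of the tmpfs variant mentioned above, assuming /dev/shm is mounted (it usually is on Linux); the log then lives in RAM and no root filesystem is touched:

#!/bin/bash
log=$(mktemp /dev/shm/log.XXXXXX)   # RAM-backed temp file
exec > >(tee "$log") 2>&1
echo "here is a message"
# ... later in the script:
cat "$log"
rm -f "$log"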
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261194", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56970/" ] }
261,200
I heard that I should never use --nodeps option when I do a rpm -e command. Why does this option exist then?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261200", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/103808/" ] }
261,211
I want to delete folders using a regexp in a Mac terminal.

0129_0140 (no delete)
0140_0140 (delete)
0150_0160 (no delete)
0170_0170 (delete)

I just want to delete folders such as 0140_0140, 0170_0170 (where the two halves of the name are identical). (Added) I want to delete the nonempty folders, recursively.
Non-recursive

With ksh93 (on OS/X available as ksh):

rmdir {4}(\d)_\1

(beware it could delete a directory called {4}(\d)_\1 if there's no file matching that pattern).

With zsh (on OS/X available as zsh):

setopt extendedglob
rmdir [0-9](#c4)_[0-9]##(/e:'[[ ${REPLY%_*} = ${REPLY#*_} ]]':)

(that one also has the benefit of only considering files of type directory, using the / glob qualifier above).

With bash or other POSIX shell (like the sh of most systems including OS/X):

set -- [0-9][0-9][0-9][0-9]_[0-9][0-9][0-9][0-9]
for f do
  [ "${f#*_}" = "${f%_*}" ] && set -- "$@" "$f"
  shift
done
rmdir "$@"

(beware it could delete a directory called [0-9][0-9][0-9][0-9]_[0-9][0-9][0-9][0-9] if there are no XXXX_XXXX files in the current directory).

Using find and grep:

find . ! -name . -prune -type d -name '[0-9][0-9][0-9][0-9]_[0-9][0-9][0-9][0-9]' |
  grep -x '\./\(.*\)_\1' |
  xargs rmdir

With BSD find (as found on OS/X):

find . -maxdepth 1 -regex './\([0-9]\{4\}\)_\1' -type d -delete

With GNU find (as typically not found on OS/X unless installed via macports/homebrew/fink...):

find . -maxdepth 1 -regextype grep -regex './\([0-9]\{4\}\)_\1' -type d -delete

Recursively:

ksh93:

set -o globstar
rmdir -- **/{4}(\d)_\1

(beware that it won't remove 1111_1111 in case there's a 1111_1111/2222_2222, as it will try to remove the 1111_1111 one first, which it can't as there's a 2222_2222 dir in it; ksh93 doesn't have the od glob qualifier (for depth-first order) of zsh)

zsh:

setopt extendedglob
rmdir -- **/[0-9](#c4)_[0-9]##(Dod/e@'[[ ${${REPLY:t}%_*} = ${REPLY##*_} ]]'@)

BSD find:

LC_ALL=C find . -regex '.*/\([0-9]\{4\}\)_\1' -type d -delete

GNU find:

LC_ALL=C find . -regextype grep -regex '.*/\([0-9]\{4\}\)_\1' -type d -delete
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/261211", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155844/" ] }
261,247
The standard files/tools that report memory seem to have different formats on different Linux distributions. For example, on Arch and Ubuntu.

Arch

$ free
              total        used        free      shared  buff/cache   available
Mem:        8169312     3870392     2648348       97884     1650572     4110336
Swap:      16777212      389588    16387624
$ head /proc/meminfo
MemTotal:        8169312 kB
MemFree:         2625668 kB
MemAvailable:    4088520 kB
Buffers:          239688 kB
Cached:          1224520 kB
SwapCached:        17452 kB
Active:          4074548 kB
Inactive:        1035716 kB
Active(anon):    3247948 kB
Inactive(anon):   497684 kB

Ubuntu

$ free
             total       used       free     shared    buffers     cached
Mem:      80642828   69076080   11566748    3063796     150688   58358264
-/+ buffers/cache:   10567128   70075700
Swap:     20971516    5828472   15143044
$ head /proc/meminfo
MemTotal:       80642828 kB
MemFree:        11565936 kB
Buffers:          150688 kB
Cached:         58358264 kB
SwapCached:      2173912 kB
Active:         27305364 kB
Inactive:       40004480 kB
Active(anon):    7584320 kB
Inactive(anon):  4280400 kB
Active(file):   19721044 kB

So, how can I portably (across Linux distros only) and reliably get the amount of memory, excluding swap, that is available for my software to use at a particular time? Presumably that's what's shown as "available" and "MemAvailable" in the output of free and cat /proc/meminfo on Arch, but how do I get the same in Ubuntu or another distribution?
MemAvailable is included in /proc/meminfo since version 3.14 of the kernel; it was added by commit 34e431b0a . That's the determining factor in the output variations you show. The commit message indicates how to estimate available memory without MemAvailable : Currently, the amount of memory that is available for a new workload, without pushing the system into swap, can be estimated from MemFree , Active(file) , Inactive(file) , and SReclaimable , as well as the "low" watermarks from /proc/zoneinfo . The low watermarks are the level beneath which the system will swap. So in the absence of MemAvailable you can at least add up the values given for MemFree , Active(file) , Inactive(file) and SReclaimable (whichever are present in /proc/meminfo ), and subtract the low watermarks from /proc/zoneinfo . The latter also lists the number of free pages per zone, that might be useful as a comparison... The complete algorithm is given in the patch to meminfo.c and seems reasonably easy to adapt: sum the low watermarks across all zones; take the identified free memory ( MemFree ); subtract the low watermark (we need to avoid touching that to avoid swapping); add the amount of memory we can use from the page cache (sum of Active(file) and Inactive(file) ): that's the amount of memory used by the page cache, minus either half the page cache, or the low watermark, whichever is smaller; add the amount of memory we can reclaim ( SReclaimable ), following the same algorithm. So, putting all this together, you can get the memory available for a new process with: awk -v low=$(grep low /proc/zoneinfo | awk '{k+=$2}END{print k}') \ '{a[$1]=$2} END{ print a["MemFree:"]+a["Active(file):"]+a["Inactive(file):"]+a["SReclaimable:"]-(12*low); }' /proc/meminfo
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/261247", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22222/" ] }
261,283
I am trying to make a shell script that runs on the widest variety of *nix systems. I could write a bash script that is backwards compatible with old versions of the shell, but what if bash isn't on the system at all? For example, is there another commonly used shell for embedded hardware? Or what did we use before bash was cool? I'd like a simple script that helps me automate a wide variety of *nix systems, both old and new. More and more *nix systems are lumped into the "Internet of Things", and I want to write some highly-compatible home-brew scripts that can also work with these devices. (Which is why I started with Bash.)
If you're looking for portability, you can bet that /bin/sh will be on most every system, though different platforms will implement it differently (e.g., on Ubuntu it links to dash and on Fedora it links to bash ). Something will be there though. If you use that and write your scripts in POSIX compliant ways you'll have your best shot at being portable.
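As a hedged illustration of what "POSIX compliant ways" means in practice, stick to the features sh guarantees (no arrays, no [[ ]], no ${var,,} case conversion):

#!/bin/sh
# portable argument loop and test syntax; /etc/os-release is only read if it exists
for arg in "$@"; do
    printf 'arg: %s\n' "$arg"
done
if [ -r /etc/os-release ]; then
    . /etc/os-release
    printf 'running on: %s\n' "${NAME:-unknown}"
fi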
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261283", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155902/" ] }
261,305
To make it short, doing something like:

-bash$ function tt { echo $0; }
-bash$ tt

$0 will return -bash, but how can I get the name of the function that was called, i.e. tt in this example, instead?
In bash, use the FUNCNAME array:

tt() {
    printf '%s\n' "$FUNCNAME"
}

With some ksh implementations:

tt() { printf '%s\n' "$0"; }

In ksh93:

tt() { printf '%s\n' "${.sh.fun}"; }

From ksh93d and above, you can also use $0 inside a function to get the function name, but you must define the function using the function name { ...; } form. In zsh, you can use the funcstack array:

tt() { print -rl -- $funcstack[1]; }

or $0 inside the function. In fish:

function tt
    printf '%s\n' "$_"
end
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/261305", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22049/" ] }
261,360
I am testing my Debian server with some Nmap port scanning. My Debian is a virtual machine running on a bridged connection. Classic port scanning using TCP SYN requests works fine and detects port 80 as open (which is correct):

nmap -p 80 192.168.1.166

Starting Nmap 6.47 ( http://nmap.org ) at 2016-02-10 21:36 CET
Nmap scan report for 192.168.1.166
Host is up (0.00014s latency).
PORT   STATE SERVICE
80/tcp open  http
MAC Address: xx:xx:xx:xx:xx:xx (Cadmus Computer Systems)
Nmap done: 1 IP address (1 host up) scanned in 0.51 seconds

But when running a UDP port scan, it fails and my Debian server answers with an ICMP "Port unreachable" error:

nmap -sU -p 80 192.168.1.166

Starting Nmap 6.47 ( http://nmap.org ) at 2016-02-10 21:39 CET
Nmap scan report for 192.168.1.166
Host is up (0.00030s latency).
PORT   STATE  SERVICE
80/udp closed http
MAC Address: xx:xx:xx:xx:xx:xx (Cadmus Computer Systems)
Nmap done: 1 IP address (1 host up) scanned in 0.52 seconds

Wireshark record:

How is that possible? My port 80 is open, so how come Debian answers with an ICMP "Port unreachable" error? Is that a security issue?
Although TCP and UDP are both part of the TCP/IP suite, sit at the same TCP/IP or OSI layer, and are both one layer above IP, they are different protocols.

http://www.cyberciti.biz/faq/key-differences-between-tcp-and-udp-protocols/

Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are two of the core protocols of the Internet Protocol suite. Both TCP and UDP work at the transport layer of the TCP/IP model and both have a very different usage. TCP is a connection-oriented protocol. UDP is a connectionless protocol.

(source: ml-ip.com)

Some services do indeed answer on TCP and UDP ports at the same time, as is the case with DNS and NTP services; however, that is certainly not the case with web servers, which normally only answer by default on port 80/TCP (and do not work/listen on UDP at all).

You can list the UDP listening ports on a Linux system with:

$ sudo netstat -anlpu
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
udp        0      0 0.0.0.0:1900            0.0.0.0:*                           15760/minidlnad
udp        0      0 0.0.0.0:5000            0.0.0.0:*                           32138/asterisk
udp        0      0 0.0.0.0:4500            0.0.0.0:*                           1592/charon
udp        0      0 0.0.0.0:4520            0.0.0.0:*                           32138/asterisk
udp        0      0 0.0.0.0:5060            0.0.0.0:*                           32138/asterisk
udp        0      0 0.0.0.0:4569            0.0.0.0:*                           32138/asterisk
udp        0      0 0.0.0.0:500             0.0.0.0:*                           1592/charon
udp        0      0 192.168.201.1:53        0.0.0.0:*                           30868/named
udp        0      0 127.0.0.1:53            0.0.0.0:*                           30868/named
udp        0      0 0.0.0.0:67              0.0.0.0:*                           2055/dhcpd
udp        0      0 0.0.0.0:14403           0.0.0.0:*                           1041/dhclient
udp    17920      0 0.0.0.0:68              0.0.0.0:*                           1592/charon
udp        0      0 0.0.0.0:68              0.0.0.0:*                           1041/dhclient
udp        0      0 0.0.0.0:56417           0.0.0.0:*                           2055/dhcpd
udp        0      0 192.168.201.1:123       0.0.0.0:*                           1859/ntpd
udp        0      0 127.0.0.1:123           0.0.0.0:*                           1859/ntpd
udp        0      0 192.168.201.255:137     0.0.0.0:*                           1777/nmbd
udp        0      0 192.168.201.1:137       0.0.0.0:*                           1777/nmbd
udp        0      0 0.0.0.0:137             0.0.0.0:*                           1777/nmbd
udp        0      0 192.168.201.255:138     0.0.0.0:*                           1777/nmbd
udp        0      0 192.168.201.1:138       0.0.0.0:*                           1777/nmbd
udp        0      0 0.0.0.0:138             0.0.0.0:*                           1777/nmbd
udp        0      0 192.168.201.1:17566     0.0.0.0:*                           15760/minidlnad

And the TCP listening ports with the command:

$ sudo netstat -anlpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:5060            0.0.0.0:*               LISTEN      32138/asterisk
tcp        0      0 192.168.201.1:8200      0.0.0.0:*               LISTEN      15760/minidlnad
tcp        0      0 192.168.201.1:139       0.0.0.0:*               LISTEN      2092/smbd
tcp        0      0 0.0.0.0:2000            0.0.0.0:*               LISTEN      32138/asterisk
tcp        0      0 192.168.201.1:80        0.0.0.0:*               LISTEN      7781/nginx
tcp        0      0 192.168.201.1:53        0.0.0.0:*               LISTEN      30868/named
tcp        0      0 127.0.0.1:53            0.0.0.0:*               LISTEN      30868/named
tcp        0      0 192.168.201.1:22        0.0.0.0:*               LISTEN      2023/sshd
tcp        0      0 0.0.0.0:8888            0.0.0.0:*               LISTEN      1919/perl
tcp        0      0 127.0.0.1:953           0.0.0.0:*               LISTEN      30868/named
tcp        0      0 192.168.201.1:445       0.0.0.0:*               LISTEN      2092/smbd
tcp        0    224 192.168.201.1:22        192.168.201.12:56820    ESTABLISHED 16523/sshd: rui [pr

Now, normally nmap sends a SYN to the port being scanned, and per the TCP protocol, if a daemon/service is bound to the port, it will answer with a SYN+ACK, and nmap will show it as open.

TCP/IP connection negotiation: 3-way handshake

To establish a connection, TCP uses a three-way handshake. Before a client attempts to connect with a server, the server must first bind to and listen at a port to open it up for connections: this is called a passive open. Once the passive open is established, a client may initiate an active open. To establish a connection, the three-way (or 3-step) handshake occurs:

SYN: The active open is performed by the client sending a SYN to the server. The client sets the segment's sequence number to a random value A.
SYN-ACK: In response, the server replies with a SYN-ACK.
ACK: Finally, the client sends an ACK back to the server.

However, if a service is not running there, TCP/IP defines that the kernel will send back an ICMP "Port unreachable" message for UDP services, and a TCP RST for TCP services.

ICMP Destination unreachable

Destination unreachable is generated by the host or its inbound gateway[3] to inform the client that the destination is unreachable for some reason. A Destination Unreachable message may be generated as a result of a TCP, UDP or another ICMP transmission. Unreachable TCP ports notably respond with TCP RST rather than a Destination Unreachable type 3 as might be expected.

So indeed, your UDP scan of port 80/UDP simply receives an ICMP unreachable message back because there is no service listening on that combination of protocol/port.

As for security considerations, those ICMP destination unreachable messages can certainly be blocked, if you define firewall/iptables rules that DROP all messages by default, and only allow in the ports that your machine serves to the outside. That way, nmap scans of all the open ports, especially across a network, will be slower, and the servers will use fewer resources. As an additional advantage, if a daemon/service opens additional ports, or a new service is added by mistake, it won't be serving requests until it is expressly allowed by new firewall rules.

Please do note that if instead of using DROP in iptables you use REJECT rules, the kernel won't ignore the scanning/TCP/IP negotiation tries, and will answer with ICMP messages of Destination unreachable, code 13: "Communication administratively prohibited (administrative filtering prevents packet from being forwarded)".

Block all ports except SSH/HTTP in ipchains and iptables
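A hedged sketch of the default-DROP firewall described above; the rules are illustrative only, so adapt the ports to what your machine actually serves and be careful not to lock yourself out of SSH:

iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT   # SSH
iptables -A INPUT -p tcp --dport 80 -j ACCEPT   # HTTP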
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/261360", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115216/" ] }
261,371
I'm running XFCE 4.12 on top of Gentoo with a 4.2.0 kernel. The Play/Pause button on my keyboard used to work as a global hotkey for VLC. Now VLC won't even recognize the key. It does see "Alt + Media Play Pause" but not the key alone. Is there a way to see if and what program might be capturing that key? When I run

xdotool key "XF86LogGrabInfo"

the tail of /var/log/Xorg.0.log reads

[ 10138.690] (II) Printing all currently active device grabs:
[ 10138.690] (II) End list of active device grabs
To find out which app/program grabbed your key, use the debug keysym XF86LogGrabInfo. Use xdotool to press the key plus XF86LogGrabInfo at the same time, e.g. in a terminal run:

KEY=XF86AudioPlay
xdotool keydown ${KEY}; xdotool key XF86LogGrabInfo; xdotool keyup ${KEY}

Then check for output with:

tail /var/log/Xorg.0.log

Note that with GNOME 3/gdm and systemd this is no longer logged to Xorg.0.log (it's instead logged to the journal). In that case you could run journalctl -f and then in another terminal run the xdotool commands. Switch back to the first terminal and you'll see something like:

/usr/lib/gdm/gdm-x-session[629]: Active grab 0x40c0a58e (xi2) on device 'Virtual core keyboard' (3):
/usr/lib/gdm/gdm-x-session[629]:       client pid 708 /usr/bin/gnome-shell
/usr/lib/gdm/gdm-x-session[629]:           at 32595124 (from passive grab) (device frozen, state 6)
/usr/lib/gdm/gdm-x-session[629]:             xi2 event mask for device 3: 0xc000
/usr/lib/gdm/gdm-x-session[629]:           passive grab type 2, detail 0xac, activating key 172

In the above example the program (the client) that grabbed the key is gnome-shell.

How do I figure out what the keys are called?

Check out the manpage of xdotool using man xdotool or an online version, as it lists a number of the special keys, for instance "alt+r", "Control_L+J", "ctrl+alt+n", "BackSpace". The LinuxQuestions wiki also has a list of X keysyms one could use. To make things a bit easier, xdotool also has aliases for some of these, such that pressing Shift-Alt-Tab would for instance just be shift+alt+Tab. To verify that this does indeed click that key combination, you could send the input to xev, which is a program that will print whatever key or mouse events it gets to the console. Just do

sleep 2; xdotool keydown ${KEY}

and switch to the xev window before two seconds have passed to see the keys being clicked on that window. It should then output a series of events, such as these:

PropertyNotify event, serial 168, synthetic NO, window 0x1e00001,
    atom 0x13e (_GTK_EDGE_CONSTRAINTS), time 4390512, state PropertyNewValue

MappingNotify event, serial 168, synthetic NO, window 0x0,
    request MappingKeyboard, first_keycode 8, count 248

KeyPress event, serial 168, synthetic NO, window 0x1e00001,
    root 0x163, subw 0x0, time 4390719, (882,657), root:(1000,771),
    state 0x0, keycode 64 (keysym 0xffe9, Alt_L), same_screen YES,
    XLookupString gives 0 bytes:
    XmbLookupString gives 0 bytes:
    XFilterEvent returns: False

KeyPress event, serial 169, synthetic NO, window 0x1e00001,
    root 0x163, subw 0x0, time 4390738, (882,657), root:(1000,771),
    state 0x8, keycode 23 (keysym 0xff09, Tab), same_screen YES,
    XLookupString gives 1 bytes: (09) " "
    XmbLookupString gives 1 bytes: (09) " "
    XFilterEvent returns: False
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/261371", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2049/" ] }
261,402
The Bash manual says: Command substitution, commands grouped with parentheses, and asynchronous commands are invoked in a subshell environment that is a duplicate of the shell environment, except that traps caught by the shell are reset to the values that the shell inherited from its parent at invocation. In this example, b isn't an environment variable, so b doesn't exist in the subshell created by command substitution. Then why is c assigned the value of b by command substitution? Is it because the parameter expansion happens for $b in the shell process before creating a subshell to execute echo 1 ? $ b=1$ c=$(echo $b)$ echo $c1
No, the subshell was created first. A shell execution environment contains shell parameters set by variable assignments and environment variables. A subshell environment is created by duplicating the shell environment, so it contains all the variables of the current shell environment. See the example: $ b=1$ c=$(b=2; echo "$b")$ echo "$c"2 The output is 2 instead of 1 . A subshell environment created by command substitution is different from a shell environment created by calling the shell executable. When you call the shell as: $ bash -c : the current shell uses execve() to create a new shell process, something like: execve("/bin/bash", ["bash", "-c", ":"], [/* 64 vars */]) = 0 the last argument passed to execve contains all the environment variables. That's why you need to export the variables to push them into the environment variables, which will be included in subsequently executed commands: $ a=; export a$ strace -e execve bash -c :execve("/bin/bash", ["bash", "-c", ":"], [/* 65 vars */]) = 0+++ exited with 0 +++ Notice the environment variables change from 64 to 65. And variables which are not exported will not be passed to the new shell environment: $ a=; b=; export a$ strace -e execve bash -c :execve("/bin/bash", ["bash", "-c", ":"], [/* 65 vars */]) = 0+++ exited with 0 +++ Notice the environment variables are still 65. In command substitution, the shell uses fork() to create a new shell process, which just copies the current shell environment - which contains both the variables set and the environment variables.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261402", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
261,417
When I use the command rm -rf , I want to make sure a prompt always appears before deleting a file, so I tried adding this to ~/.bashrc alias 'rm -rf'='rm -rfi' But it doesn't work. How can I fix this?
Confirmation is a weak way to achieve the result you want: not deleting files you didn't want to delete. I can ask you to confirm 10 times in a row, but since you just asked me to delete mispeled.txt, you will not realize your error until after you have confirmed it. Better to use trash or a similar command on your system that sends files to the (recoverable) "recycle-bin". There is an RPM build of the trash-cli package at rpmfind.net but I can't vouch for that version. When in doubt build it yourself from the source code . As noted in the comments it is a bad idea to alias rm at all, because it will come back to bite you when you are in a shell that has no protective alias and your brain is accustomed to having a "safe" rm .
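If you do go the trash-cli route, it installs a small set of commands (names may differ slightly between versions of the package) used roughly like this:
trash-put mispeled.txt
trash-list
trash-empty 30
The first moves the file into the recoverable trash instead of unlinking it, the second shows what is in the trash together with the original paths, and the last purges entries older than 30 days.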
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261417", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148322/" ] }
261,427
I am a Git Bash user who is now switching to Debian. Here was my Git Bash's look: Here is my look on Debian: My .bashrc file is completely EMPTY. I have nothing in there currently. I've been researching color in Linux for hours . Trust me, I've exhausted my options. I don't want to use a custom program, download packages, run a script, or use a wrapper for Terminal. I just want my terminal colored how it looks in the GitBash image OR how Command Prompt displays color. I don't want to change the text background, I want to change my terminal background. I personally would like Black, not grey. Thank you.
In your Terminal, click Edit > Profile Preferences > Colors See the Text and Background Color Uncheck the Use colors from system theme And set the Built-in schemes: to: Gray on black
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/261427", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156002/" ] }
261,430
I'm trying to configure my RPi with Raspbian Jessie to use autofs to mount at boot a NFS share from my QNAP NAS. The manual mount with mount -v -t nfs server://share /mnt/share works and also the autofs service works if I manually start it with sudo service autofs start after starting also rpcbind and nfs-common services first. Now I want that all the 3 services involved ( rpcbind , nfs-common and autofs ) start automatically at boot. Since Raspbian Jessie uses systemd , what should I do to add to the boot the rpcbind and nfs-common services, that should start before autofs ? Should I use init.d and so sudo update-rc.d rpcbind enable sudo update-rc.d nfs-common enable or do I have to create a systemd unit file?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/261430", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56537/" ] }
261,439
I have openssl 1.0.1e installed but it seems to be buggy based on this But when I list out the updates for the system it doesn't list out 1.0.1q as suggested in the above link. Any idea how to install through yum or by compiling? Installed Packages openssl.x86_64 1:1.0.1e-51.el7_2.2 @updates
You don't need to upgrade or compile anything. The document you reference states that you should update from 1.0.1 to 1.0.1q because of CVE-2015-3194, CVE-2015-3195 and CVE-2015-3196. However, if you run: rpm -q --changelog openssl | grep CVE-2015-319 you should get: - fix CVE-2015-3194 - certificate verify crash with missing PSS parameter- fix CVE-2015-3195 - X509_ATTRIBUTE memory leak- fix CVE-2015-3196 - race condition when handling PSK identity hint which means that these fixes have been retrospectively applied to your version of OpenSSL. Distros don't update their versions on every upstream release, as these releases are relatively new and untested. Instead they cherry-pick the patches that they will apply. Usually, this means security patches or regression fixes only.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261439", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109203/" ] }
261,442
I have a requirement to identify sequence gaps in a set of files. The sequence starts at FILENAME_0001 and ends at FILENAME_9999. After this the sequence is restarted from 0001. To implement a proper sequence check I used ls -rt to pick the files in order of modified time and then compared each with the previous file's sequence number. If the previous file was 9999 I check whether the next one is 0001 (to accommodate the sequence reset). Recently I came across a scenario where files were listed in the below order: FILENAME_0001 FILENAME_0002 FILENAME_0005 FILENAME_0003 FILENAME_0004 FILENAME_0006 FILENAME_0007 This was because files 3, 4 & 5 had the same modified time to the second. Only the millisecond was different. So I am guessing ls -rt considers timestamps only up to the second. Could someone suggest a workaround?
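One possible workaround, assuming GNU find and sort are available, is to sort on the full fractional modification time instead of relying on ls -rt:
find . -maxdepth 1 -name 'FILENAME_*' -printf '%T@ %p\n' | sort -n | awk '{print $2}'
Here %T@ prints the mtime as seconds since the epoch including the fractional part, so files that differ only by milliseconds still come out in the right order (provided the filesystem records sub-second timestamps).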
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261442", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79106/" ] }
261,510
The sample input is 123456789 The expected output is 123---45---678---9
One way: cat -s file | sed 's/^$/---/' From the man page of cat : -s, --squeeze-blank never more than one single blank line Once cat has squeezed the blank lines, sed replaces each remaining blank line with a ---
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261510", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155304/" ] }
261,521
I am not sure whether this constitutes a bug - so, I dare to try it here...When attempting to install (with dnf ) versions of the package python-dns , I get the following error: unpacking of archive failed on file /usr/lib/python2.7/site-packages/dnspython-1.12.0-py2.7.egg-info: cpio: rename I run 4.3.4-300.fc23.x86_64 and have tried installing python-dns-1.12.0-2.fc23.noarch as well as python-dns-1.12.0GIT465785f-1.fc23.noarch . The question is open, I am afraid: ideally I would learn how to solve the error; but I would also settle for advise where else I should post the question. added information as reaction to comments I used the command "sudo dnf install python-dns"to install the package.python-dns-1.12.0GIT465785f-1.fc23.noarch came from the default fedora repository "Fedora 23 - x86_64".python-dns-1.12.0-2.fc23.noarch came from http://koji.fedoraproject.org/koji/buildinfo?buildID=659336
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261521", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156074/" ] }
261,531
I am just fooling around on my terminal (Gnome terminal). I was wondering is there a way to send output of one terminal to another without having to make a new file or pipe. for example: on first terminal I run ls and want its output to be displayed on second terminal (with or without using any command on second)
If both terminals belong to the same user, you can send your output to the virtual device that is used as the particular terminal's tty. So you can use the output from w , which includes the TTY information, and write directly to that device. ls > /dev/pts/7 (If the device mentioned by w was pts/7) Another option is to use the number of a process that is connected to that device. Send your output to /proc/<process number>/fd/1 . ls > /proc/5555/fd/1 Assuming the process number that you found that runs in that terminal is 5555. Note that this direct write is only allowed if the user that attempts to write is the same user that owns the other terminal .
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/261531", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52733/" ] }
261,569
Does a normal * user have permissions to write anywhere other than his own home dir? (no sudo and those privilege escalation tools) I say normal because I do not know of more categories than root and normal . Let's say the involved user installed the system and does the administrative things with sudo <command> . I use Ubuntu, by the way. Thanks.
Yes. The normal/unprivileged user can write to /tmp and /var/tmp , for legitimate reasons. Also, if the user or group permissions of a given file/directory include those of the user, he or she can write to those files or directories as well. Having said that, providing write capability to operating system files and directories to a normal user is, to use an analogy, shooting one's self in the foot. There is a lot to say about this but this is not the place. If you are curious about why, I suggest searching for and reading articles about "UNIX/Linux system administration best practices".
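You can see this directly with ls -ld /tmp /var/tmp , which typically shows permissions like drwxrwxrwt : everyone may create files there, and the trailing t (the sticky bit) means only a file's owner (or root) can remove or rename entries.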
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261569", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144279/" ] }
261,638
There are these two names: a subshell and a child-shell . Yes, a child process will be started by any of these: sh -c 'echo "Hello"'( echo "hello" )echo "$(echo "hello")echo "hello" | cat Are all equivalent and share the same name? Do all share the same properties? POSIX has this definition : A shell execution environment consists of .... But the last paragraph of the above link has this: A subshell environment shall be created as a duplicate of the shell environment, except that signal traps that are not being ignored shall be set to the default action. And especially: Command substitution, commands that are grouped with parentheses, and asynchronous lists shall be executed in a subshell environment. Additionally, each command of a multi-command pipeline is in a subshell environment; .... The sh -c 'echo "Hello"' is not included there, should that be called a subshell also?
A subshell duplicates the existing shell. It has the same variables¹, the same functions, the same options, etc. Under the hood, a subshell is created with the fork system call²; the child process goes on to do what is expected of it while the parent waits (e.g., $(…) ) or goes on with its life (e.g., … & ) or otherwise does what is expected of it (e.g., … | … ). sh -c … does not create a subshell. It launches another program. That program happens to be a shell, but that's just a coincidence. The program may even be a different shell (e.g., if you run sh -c … from bash, and sh is dash), i.e., a completely different program that just happens to have significant similarities in its behavior. Under the hood, launching an external command ( sh or any other) calls the fork system call and then the execve system call to replace the shell program in the subprocess by another program (here sh ). ¹ Including $$ , but excluding some shell-specific variables such as bash and mksh's BASHPID . ² At least, that's the traditional and usual implementation. Shells can optimize the fork away if they can mimic the behavior otherwise. See What is the exact difference between a "subshell" and a "child process"? Relevant man pages: fork(2) , execve(2) .
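A quick way to see the difference from inside bash (using BASHPID, which, unlike $$, always reflects the actual process):
echo "$$ $BASHPID"; ( echo "$$ $BASHPID" ); sh -c 'echo $$'
The parenthesised subshell prints the same $$ but a new BASHPID, while sh -c reports a completely new $$ of its own, because it is a separate program started with fork + execve.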
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/261638", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
261,672
Does the command pwd in a shell script output the directory the shell script is in?
There are three independent "directories" at play here: your current shell's current working directory, the shell script's current working directory, and the directory containing the shell script. To demonstrate that they are independent, you can write a shell script, saved to /tmp/pwd.sh, containing: #!/bin/shpwdcd /var pwd You can then change your pwd (#1 above) to /: cd / and execute the script: /tmp/pwd.sh which starts off by demonstrating your existing pwd (#1), then changes it to /var and shows it again (#2). Neither of those pwd 's were "/tmp", the directory that contains /tmp/pwd.sh (#3).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/261672", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/151927/" ] }
261,687
As most people here know, when using bash at the command prompt if you partially type a file name a command or an option to a command etc, bash will complete the word if there is exactly one match. When there is more than one match, you need to hit <Tab> twice and bash will generate a list of possible matches. I would like to configure bash to simply provide those options on the first <Tab> . Is this possible without writing a script? i.e. a shell option? man bash has a section "programmable completion" but I couldn't make out if there is an option to enable "single tab completion".
Put this in your ~/.inputrc : set show-all-if-ambiguous on For additional credit, add: set completion-ignore-case on All of the options are in the GNU manual ...
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/261687", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106525/" ] }
261,693
How can I bulk replace the suffix for many files? I have a lot of files like NameSomthing-min.png NameSomthing1-min.png NameSomthing2-min.png I would like to change all them to NameSomthing.png NameSomthing1.png NameSomthing2.png i.e., remove the characters -min from the name.Β How would I do this?
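For what it's worth, a plain shell loop handles this kind of suffix rewrite; the echo lets you preview the renames before committing to them:
for f in *-min.png; do echo mv -- "$f" "${f%-min.png}.png"; done
Run it in the directory containing the files and drop the echo once the printed commands look right; ${f%-min.png} strips the trailing -min.png so the plain .png extension can be re-attached.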
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/261693", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156169/" ] }
261,721
I am using pgrep for a bunch of things, however I can't get pgrep to list if the process is defunct. Running ps adds to the end of the item <defunct> but pgrep does not, is there anyway to do this?
pgrep is not able to filter a process based on its state. Try: ps axo pid,stat | awk '$2 ~ /^Z/ { print $1 }'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261721", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/96935/" ] }
261,723
I am running Gentoo Hardened using kernel version 4.1.7-hardened-r1. When I first set up my system I was able to emerge Chromium without a hitch. However, I recently issued emerge --sync followed by a world update, and now Chromium will not update with this error. rockshooter /etc/portage # emerge -aNDu --with-bdeps=y @worldThese are the packages that would be merged:Calculating dependencies... done!WARNING: One or more updates/rebuilds have been skipped due to a dependency conflict:dev-libs/libxml2:2 (dev-libs/libxml2-2.9.2-r4:2/2::gentoo, ebuild scheduled for merge) conflicts with dev-libs/libxml2:=[icu] required by (www-client/chromium-48.0.2564.82:0/0::gentoo, installed) ^^^ dev-libs/libxml2:2/2=[icu] required by (www-client/chromium-48.0.2564.82:0/0::gentoo, installed) ^^^Nothing to merge; quitting. Prior to setting up Gentoo I made a test on a VM and got that common error where Chromium, libxml, qt-webkit and ICU tend to not play well on Portage, so I thought this was just going to be a matter of globally setting the icu USE flag. However... it turns out that not only I'm not seeing qt-webkit being part of the conflict, but I also do have USE="icu" set on my make.conf. CFLAGS="-O2 -pipe -march=native"CXXFLAGS="${CFLAGS}"ACCEPT_LICENSE="-* @FREE CC-Sampling-Plus-1.0"ACCEPT_KEYWORDS="amd64"FEATURES="webrsync-gpg ccache parallel-fetch userfetch"PORTAGE_GPG_DIR="/var/lib/gentoo/gkeys/keyrings/gentoo/release"CCACHE_SIZE="4G"CHOST="x86_64-pc-linux-gnu"CPU_FLAGS_X86="aes avx fma3 fma4 mmx mmxext popcnt sse sse2 sse3 sse4_1 sse4_2 sse4a ssse3 xop"USE="${CPU_FLAGS_X86} gif jpeg png tiff apng java alsa libressl icu"LINGUAS="en es es_LA fr de" Now I'm stumped because I have no idea of how to fix this update blocker. I do have USE="icu" set on make.conf and I'm not seeing qt-webkit being part of the conflict -- any idea of what's going on?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261723", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47085/" ] }
261,801
In Xubuntu 14.04, I tried to use both ip and ifconfig to handle a network interface, but they gave the same result. $ sudo ifconfig wlan0 down$ sudo ip link set wlan0 down both correcly put down the interface and the connectivity does not work; but then $ sudo ifconfig wlan0 up$ sudo ip link set wlan up did not restore the connectivity! This is the output of ip link show after putting the interface down: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000link/ether <my_MAC_address> brd ff:ff:ff:ff:ff:ffinet 192.168.1.29/24 brd 192.168.1.255 scope global wlan0 valid_lft forever preferred_lft forever and this is the output after putting the interface up: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000link/ether <my_MAC_address> brd ff:ff:ff:ff:ff:ffinet 192.168.1.29/24 brd 192.168.1.255 scope global wlan0 valid_lft forever preferred_lft forever So it has no carrier and I can't access the web, but it has an IP! 1) Why? Shouldn't the up command restore the previous situation? I had to turn off and on the physical switch of the wireless board to browse again the web. I also tried with dhclient -r wlan0 and dhclient wlan0 , but the result was that neither the physical switch was useful and I had to restart the whole system. 2) Even after putting the interface down, the GUI connectivity icon was active and a connection to the wireless Access-Point was normally shown (even if no webpages were actually available). Why?
I think that ifconfig does not handle wireless stuff like ESSID, channel and key. Take a look at iwconfig instead. http://manpages.ubuntu.com/manpages/vivid/en/man8/iwconfig.8.html -EDIT- You can also use the "NetworkManager command line" nmcli : https://askubuntu.com/questions/461825/connect-to-wifi-from-command-line
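For example, something along these lines scans and connects when NetworkManager is managing the interface (the exact syntax differs a little between nmcli versions):
nmcli dev wifi list
nmcli dev wifi connect "MySSID" password "mypassphrase"
nmcli dev status
Here MySSID and mypassphrase are placeholders for your own network's name and passphrase.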
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/261801", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48707/" ] }
261,809
I'm very new to using UNIX/Bash. I'm currently outputting the product of a random number generator to a text file in a subdirectory using the following: ./generate > ~/workspace/pset3/find/output/output.txt my current directory in this case would be find . Is there a way to type the path such that I can briefly specify a sub-directory of the current directory without typing the full path each time?
the reference to a subdirectory of the current directory would be ./subdir/filename or simply subdir/filename . in your example, if you are in ~/workspace/pset3/find and address the output.txt file, you can reference it as ./output/output.txt or output/output.txt
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261809", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156242/" ] }
261,831
I have some configuration in file config and would like to cat that file. However, sometimes config doesn't exist. In this case, I would like to have my command output a default value. Perhaps something that worked like this: $ ls$ cat config || echo 4242$ echo 73 > config$ cat config || echo 4273
Your construct is fine. You could even do something like cat config || cat defaultconfig If you use some random command (like the ./get_config_from_web in comments), you'll have to make sure the command does give a sensible return status. That can be tricky: shell scripts just return the result of the last command executed, so you'd have to do an explicit exit if you want something else as the result.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261831", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4143/" ] }
261,853
Currently the Ethernet ports in the building I work in are down, but the Wi-Fi works. I have a Wi-Fi-enabled laptop ( Ubuntu 14.04 LTS (Trusty Tahr)) and a non-Wi-Fi enabled workstaion ( DebianΒ 8 (Jessie)) with only an Ethernet plug. Is it possible to connect the two via an Ethernet cable and be able to get network connectivity on the workstation?
Yes, you can do this, and it's not even that hard. I have a laptop with a wireless card, and an ethernet port. I plugged a Raspberry Pi running Arch Linux into it, via a "crossover" ethernet cable. That's one special thing you might need - not all ethernet cards can do a machine-to-machine direct connection. The other tricky part is IP addressing. It's best to illustrate this. Here's my little set-up script. Again, enp9s0 is the laptop's ethernet port, and wlp12s0 is the laptop's wireless device. #!/bin/bash/usr/bin/ip link set dev enp9s0 up/usr/bin/ip addr add 172.16.1.1/24 dev enp9s0sleep 10modprobe iptable_natecho 1 > /proc/sys/net/ipv4/ip_forwardiptables -t nat -A POSTROUTING -s 172.16.1.0/24 -j MASQUERADEiptables -A FORWARD -o enp9s0 -i wlp12s0 -s 172.16.1.0/24 -m conntrack --ctstate NEW -j ACCEPTiptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPTdhcpd -cf /etc/dhcpd.enp9s0.conf enp9s0 The script sets a static IP address for the ethernet card, 172.16.1.1, then sets up NAT by loading a kernel module. It turns on IP routing (on the laptop), then does some iptables semi-magic to get packets routed from the wireless card out the ethernet, and vice versa. I have dhcpd running on the ethernet port to give out IP addresses because that's what the Raspberry Pi wants, but you could do a static address on your workstation, along with static routing, DNS server, and NTP server. The file /etc/dhcpd.enp9s0.conf looks like this, just in case you go down that route: option domain-name "subnet";option domain-name-servers 10.0.0.3;option routers 172.16.1.1;option ntp-servers 10.0.0.3;default-lease-time 14440;ddns-update-style none;deny bootp;shared-network intranet { subnet 172.16.1.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; pool { range 172.16.1.50 172.16.1.200; } }} The IP address choice is pretty critical. I used 172.16.1.0/24 for the ethernet cable coming out of the laptop. The wireless card on the laptop ends up with an address in 192.168.1.0/24 . You need to look at what IP address the laptop's wireless has, and choose some other subnet for the ethernet card. Further, you need to choose one of the "bogon" or "non-routable" networks. In my example, 172.16.1.0/24 is from the official non-routable ranges of IP addresses, as is 192.168.1.0/24, and so is the 10.0.0.3 address dhcpd.enp9s0.conf gives out for a DNS server and NTP server. You'll have to use your head to figure out what's appropriate for your setup.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261853", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156275/" ] }
261,855
I have installed Debian 8 64-bit on my VPS. To find out whether the installed OS is 32- or 64-bit I use uname -a , and for OS information I use lsb_release -a . But on this Debian installation the uname -a command works while the lsb_release -a command does not.
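On a minimal Debian 8 install lsb_release is often simply not installed; as far as I know it is shipped in the lsb-release package, and the same information is available from other places too:
apt-get install lsb-release
cat /etc/os-release
cat /etc/debian_version
uname -m
The last command reports the architecture (x86_64 for a 64-bit system).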
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261855", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156276/" ] }
261,864
I am on Arch Linux and I'm trying to make a cron job that fires every minute. So I use: $ crontab -e And add the script in: * * * * * Rscript /srv/shiny-system/cron/CPU.R~~"/tmp/crontab.8VZ7vq" 1 line, 47 characters (I have no idea what that "/tmp/crontab.8VZ7vq" is!) But it is not working - CPU.R is not running every minute. What should I do then in Arch Linux to run the cron job? I have looked into these wiki guides below but I am still lost: https://wiki.archlinux.org/index.php/Cron https://wiki.archlinux.org/index.php/Systemd/Timers Edit I found some hints from here regarding crond . [xxx@localhost ~]$ systemctl status crond● crond.service Loaded: not-found (Reason: No such file or directory) Active: inactive (dead)[xxx@localhost ~]$ sudo systemctl start crond[sudo] password for xxx: Failed to start crond.service: Unit crond.service failed to load: No such file or directory. What does this mean? Where should I put this crond.service and what script should I put in it?
There is no crond.service on Arch Linux. As the Arch Wiki makes perfectly clear: There are many cron implementations, but none of them are installed by default as the base system uses systemd/Timers instead. Consequently, if you want to use cron, you have to choose which of the many implementations you will install, and then start that specific service. You don't just randomly type systemctl enable nonexistent.service and then wonder why it isn't running... If you want cronie, then you install cronie and start it with: pacman -Syu croniesystemctl enable --now cronie.service The Arch documentation is generally very clear; if you read the pages you linked to more carefully, you should find out what you need.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/261864", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156282/" ] }
261,869
I'm trying to use sed to remove the rest of a line after: HTTP1.1" 200 I can't figure out how to get sed to understand I want that whole thing as a string to match, including the double quote and the space. An example for good measure, I want to turn: "GET /images/loading.gif HTTP/1.1" 200 10819 "https://... into "GET /images/loading.gif HTTP/1.1" 200
You have to quote strings with spaces when you use sed (or most other tools) from the commandline. And since you already use the double quote, you have to go for single quotes: echo '"GET /images/loading.gif HTTP/1.1" 200 10819 "https://...' | \ sed 's|HTTP/1.1" 200.*|HTTP/1.1" 200|' gives: "GET /images/loading.gif HTTP/1.1" 200
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261869", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156285/" ] }
261,967
I'm using Redhat Linux 6.5 and would like to see the disk latencies for the used disks. Using iostat I get the columns await and svctm ( including %util ).But according to the man page of iostat the columns svctm are obsolete and should not be used any more . So what can I use to see the disk latencies for my disks.
You can use iostat -x and check for the await column - per device it shows the total time spent waiting plus the actual handling of the request by the disk. The units here are milli-seconds. $ iostat -yzx 5Linux 2.6.32-642.13.1.el6.x86_64 (vagrant1) 04/01/2017 _x86_64_ (1 CPU)avg-cpu: %user %nice %system %iowait %steal %idle 0.00 0.00 0.20 0.00 0.00 99.80Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await r_await w_await svctm %utilsda 0.00 0.20 0.00 2.20 0.00 19.24 8.73 0.00 2.00 0.00 2.00 0.45 0.10dm-0 0.00 0.00 0.00 2.40 0.00 19.24 8.00 0.01 2.17 0.00 2.17 0.42 0.10 You can also use sar -d . Again the await column shows avg request latency in ms. $ sar -d04:50:01 PM DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util05:00:01 PM dev8-0 0.12 0.00 0.92 7.67 0.00 1.21 0.90 0.0105:00:01 PM dev253-0 0.12 0.00 0.92 8.00 0.00 1.45 0.94 0.0105:00:01 PM dev253-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.0005:10:01 PM dev8-0 0.14 0.00 1.07 7.90 0.00 1.05 0.80 0.0105:10:01 PM dev253-0 0.13 0.00 1.07 8.00 0.00 1.18 0.81 0.0105:10:01 PM dev253-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.0005:20:01 PM dev8-0 0.11 0.00 0.79 7.26 0.00 1.52 1.05 0.0105:20:01 PM dev253-0 0.10 0.00 0.79 8.00 0.00 2.19 1.15 0.0105:20:01 PM dev253-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.0005:30:01 PM dev8-0 0.12 0.00 0.97 7.89 0.00 1.22 0.93 0.0105:30:01 PM dev253-0 0.12 0.00 0.97 8.00 0.00 1.42 0.95 0.0105:30:01 PM dev253-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.0005:40:01 PM dev8-0 0.12 0.00 0.84 7.20 0.00 0.96 0.77 0.0105:40:01 PM dev253-0 0.11 0.00 0.84 8.00 0.00 1.19 0.86 0.0105:40:01 PM dev253-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.0005:50:01 PM dev8-0 0.11 0.00 0.84 7.75 0.00 1.31 0.94 0.0105:50:01 PM dev253-0 0.11 0.00 0.84 8.00 0.00 2.03 0.97 0.0105:50:01 PM dev253-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261967", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/154053/" ] }
261,974
I'm having trouble figuring out how, if I'm in, let's say, the directory /home/test/test2 but I want to find the number of files in the /home directory, I would do it. I know how to do it if it was the other way around, like in your home directory, to list files in /home/test/test2 you would do: ls /home/test/test2 | wc -l but how would I do it if I was in the test2 directory and wanted to find the number of files in the home directory? Thanks
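For instance, since ls accepts an absolute path, the same pipeline works from any working directory:
ls /home | wc -l
or, to count regular files in all subdirectories of /home as well:
find /home -type f | wc -l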
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/261974", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155767/" ] }
262,018
I am new to using the shell, and wanted to create a directory in $HOME where I can put all my python scripts, set a path to that directory, so that I can go into any folder on my Mac and execute those scripts on certain files, without the script having to be contained inside the same directory as the file that would serve as the input to those scripts. I have read around and added this to my .zshrc file: export PATH="$HOME/python_functions/bin:$PATH" Then I added a script called sleep_plotter.py to python_functions/bin , which is where I am planning to put all my future scripts as well. However, when I navigate to the folder that contains the text file I want to use as input to that script, and type python sleep_plotter.py 113testCtM113.txt , the last argument being the text file input to my script, I get the following error message: python: can't open file 'sleep_plotter.py': [Errno 2] No such file or directory But when I call the path using echo $PATH , I see this: /Users/myname/python_functions/bin: From this, I gathered that python is looking in that directory when I execute a Python command, so it should be able to run sleep_plotter.py even when I am in a different folder that doesn't contain this file. I am using Mac OSX 10.11.2, zsh, and Anaconda 2.3.0.
PATH variable defines the directories which are searched when executing commands. However when you execute python sleep_plotter.py 113testCtM113.txt , sleep_plotter.py is an argument to the python program (command). Shell uses PATH to find python , but not its arguments. You can add an executable attribute to your script: $ chmod +x /Users/myname/python_functions/bin/sleep_plotter.py Add a shebang sequence to the top (first line) of your Python script: #!/usr/bin/env python And run the script directly as a command: $ sleep_plotter.py 113testCtM113.txt
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/262018", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156361/" ] }
262,042
I've got a bunch of XML files under a directory tree which I would like to move to corresponding folders with the same name within that same directory tree. Here is sample structure (in shell): touch foo.xml bar.xml "[ foo ].xml" "( bar ).xml"mkdir -p foo bar "foo/[ foo ]" "bar/( bar )" So my approach here is: find . -name "*.xml" -exec sh -c ' DST=$( find . -type d -name "$(basename "{}" .xml)" -print -quit ) [ -d "$DST" ] && mv -v "{}" "$DST/"' ';' which gives the following output: β€˜./( bar ).xml’ -> β€˜./bar/( bar )/( bar ).xml’mv: β€˜./bar/( bar )/( bar ).xml’ and β€˜./bar/( bar )/( bar ).xml’ are the same fileβ€˜./bar.xml’ -> β€˜./bar/bar.xmlβ€™β€˜./foo.xml’ -> β€˜./foo/foo.xml’ But the file with square brackets ( [ foo ].xml ) hasn't been moved as if it had been ignored. I've checked and basename (e.g. basename "[ foo ].xml" ".xml" ) converts the file correctly, however find has problems with brackets. For example: find . -name '[ foo ].xml' won't find the file correctly. However, when escaping the brackets ( '\[ foo \].xml' ), it works fine, but it doesn't solve the problem, because it's part of the script and I don't know which files having those special (shell?) characters. Tested with both BSD and GNU find . Is there any universal way of escaping the filenames when using with find 's -name parameter, so I can correct my command to support files with the metacharacters?
It's so much easier with zsh globs here: for f (**/*.xml(.)) (mv -v -- $f **/$f:r:t(/[1])) Or if you want to include hidden xml files and look inside hidden directories like find would: for f (**/*.xml(.D)) (mv -v -- $f **/$f:r:t(D/[1])) But beware that files called .xml , ..xml or ...xml would become a problem, so you may want to exclude them: setopt extendedglobfor f (**/(^(|.|..)).xml(.D)) (mv -v -- $f **/$f:r:t(D/[1])) With GNU tools, another approach to avoid having to scan the whole directory tree for each file would be to scan it once and look for all directories and xml files, record where they are and do the moving in the end: (export LC_ALL=Cfind . -mindepth 1 -name '*.xml' ! -name .xml ! \ -name ..xml ! -name ...xml -type f -printf 'F/%P\0' -o \ -type d -printf 'D/%P\0' | awk -v RS='\0' -F / ' { if ($1 == "F") { root = $NF sub(/\.xml$/, "", root) F[root] = substr($0, 3) } else D[$NF] = substr($0, 3) } END { for (f in F) if (f in D) printf "%s\0%s\0", F[f], D[f] }' | xargs -r0n2 mv -v --) Your approach has a number of problems if you want to allow any arbitrary file name: embedding {} in the shell code is always wrong. What if there's a file called $(rm -rf "$HOME").xml for instance? The correct way is to pass those {} as argument to the in-line shell script ( -exec sh -c 'use as "$1"...' sh {} \; ). With GNU find (implied here as you're using -quit ), *.xml would only match files consisting of a sequence of valid characters followed by .xml , so that excludes file names that contain invalid characters in the current locale (for instance file names in the wrong charset). The fix for that is to fix the locale to C where every byte is a valid character (that means error messages will be displayed in English though). If any of those xml files are of type directory or symlink, that would cause problems (affect the scanning of directories, or break symlinks when moved). You may want to add a -type f to only move regular files. Command substitution ( $(...) ) strips all trailing newline characters. That would cause problems with a file called foo␀.xml for instance. Working around that is possible but a pain: base=$(basename "$1" .xml; echo .); base=${base%??} . You can at least replace basename with the ${var#pattern} operators. And avoid command substitution if possible. your problem with file names containing wildcard characters ( ? , [ , * and backslash; they are not special to the shell, but to the pattern matching ( fnmatch() ) done by find which happens to be very similar to shell pattern matching). You'd need to escape them with a backslash. the problem with .xml , ..xml , ...xml mentioned above. So, if we address all of the above, we end up with something like: LC_ALL=C find . -type f -name '*.xml' ! -name .xml ! -name ..xml \ ! -name ...xml -exec sh -c ' for file do base=${file##*/} base=${base%.xml} escaped_base=$(printf "%s\n" "$base" | sed "s/[[*?\\\\]/\\\\&/g"; echo .) escaped_base=${escaped_base%??} find . -name "$escaped_base" -type d -exec mv -v "$file" {\} \; -quit done' sh {} + Phew... Now, it's not all. With -exec ... {} + , we run as few sh as possible. If we're lucky, we'll run only one, but if not, after the first sh invocation, we'll have moved a number of xml files around, and then find will continue looking for more, and may very well find the files we have moved in the first round again (and most probably try to move them where they are). Other than that, it's basically the same approach as the zsh ones. 
A few other notable differences: with the zsh one, the file list is sorted (by directory name and file name), so the destination directory is more or less consistent and predictable. With find , it's based on the raw order of files in directories. with zsh , you'll get an error message if no matching directory to move the file to is found, not with the find approach above. With find , you'll get error messages if some directories cannot be traversed, not with the zsh one. A last note of warning. If the reason you get some files with dodgy file names is because the directory tree is writable by an adversary, then beware than none of the solutions above are safe if the adversary may rename files under the feet of that command. For instance, if you're using LXDE, the attacker could make a malicious foo/lxde-rc.xml , create a lxde-rc folder, detect when you're running your command and replace that lxde-rc with a symlink to your ~/.config/openbox/ during the race window (which can be made as large as necessary in many ways) between find finding that lxde-rc and mv doing the rename("foo/lxde-rc.xml", "lxde-rc/lxde-rc.xml") ( foo could also be changed to that symlink making you move your lxde-rc.xml elsewhere). Working around that is probably impossible using standard or even GNU utilities, you'd need to write it in a proper programming language, doing some safe directory traversal and using renameat() system calls. All the solutions above will also fail if the directory tree is deep enough that the limit on the length of the paths given to the rename() system call done by mv is reached (causing rename() to fail with ENAMETOOLONG ). A solution using renameat() would also work around the problem.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/262042", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21471/" ] }
262,044
I have a script that open up terminal and open up 5 tabs, execute a certain command, and go to a certain working directory. #!/bin/shgnome-terminal --tab --title="Zookeeper" --profile Hold -e "sh -c '/home/kafka_2.11-0.8.2.2/bin/zookeeper-server-start.sh /home/kafka_2.11-0.8.2.2/config/zookeeper.properties'" --tab --title="Kafka" --profile Hold -e "sh -c 'sleep 5; /home/kafka_2.11-0.8.2.2/bin/kafka-server-start.sh /home/kafka_2.11-0.8.2.2/config/server.properties'" --tab --title="APP-Binaries" --profile Hold --working-directory="/home/app-binaries" --tab --title="APP-DB" --profile Hold --working-directory="/home/prod/db" Having everything in one line is hard to maintain.How do I make it better so it is easy to read ? I've tried #!/bin/shTab=""Tab+=("--tab --title='Zookeeper' --profile Hold -e 'sh -c /home/kafka_2.11-0.8.2.2/bin/zookeeper-server-start.sh /home/kafka_2.11-0.8.2.2/config/zookeeper.properties'")Tab+=( "--tab --title='Kafka' --profile Hold -e 'sh -c 'sleep 5; /home/kafka_2.11-0.8.2.2/bin/kafka-server-start.sh /home/kafka_2.11-0.8.2.2/config/server.properties'")Tab+=(" --tab --title='APP-Binaries' --profile Hold --working-directory='/home/app-binaries'")Tab+=(" --tab --title='APP-DB' --profile Hold --working-directory='/home/prod/db'") # echo "${Tab[@]}" gnome-terminal "${Tab[@]}" exit 0 So far it is not working yet! I'm open to any suggestions that you guys may have for me. I'm just looking to learn and improve it.
You can use \ to split long commands over multiple lines. Example: #!/bin/bashecho "Hello World!"echo \"Hello World!" running this script results in $ ./test.sh Hello World!Hello World! In your case you can use something like #!/bin/bash gnome-terminal \--tab --title="Zookeeper" --profile Hold -e "sh -c '/home/benu/Downloads/kafka_2.11-0.8.2.2/bin/zookeeper-server-start.sh /home/benu/Downloads/kafka_2.11-0.8.2.2/config/zookeeper.properties'" \--tab --title="Kafka" --profile Hold -e "sh -c 'sleep 5; /home/benu/Downloads/kafka_2.11-0.8.2.2/bin/kafka-server-start.sh /home/benu/Downloads/kafka_2.11-0.8.2.2/config/server.properties'" \--tab --title="SSC" --profile Hold -e "sh -c 'sleep 15; cd ~/gitnewssc/benu-ssc-binaries; ./startSSC.sh'" --working-directory="/home/benu/gitnewssc/benu-ssc-binaries" \--tab --title="SSC-Binaries" --profile Hold --working-directory="/home/benu/gitnewssc/benu-ssc-binaries" \--tab --title="SSC-DB" --profile Hold --working-directory="/home/benu/SSC-V2/ssc-db"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/262044", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118753/" ] }
262,098
Is it possible to hook a script execution on each process creation? Essentially the equivalent of inotifywait to monitor disk activity but applied to the process table. It would be to allow to do an action upon spawning of the processes, for example logging it, cgset it, other. I can see the challenge that it would recursively apply on the new processes. But instead of polling the process table as fast as possible to catch changes which would be vulnerable to race conditions, is there a better way. Thanks
First, process creation is rarely a useful event to log and it's irrelevant for security (except for resource limiting). I think you mean to hook the execution of programs, which is done by execve , not fork . Second, the use cases you cite are usually best served by using existing mechanism made for that purpose, rather than rolling your own. For logging, BSD process accounting provides a small amount of information, and is available on most Unix variants; on Linux, install the GNU accounting utilities (install the package from your distribution). For more sophisticated logging on Linux, you can use the audit subsystem (the auditctl man page has examples; as I explained above the system call you'll want to log is execve ). If you want to apply security restrictions to certain programs, use a security framework such as SELinux or AppArmor . If you want to run a specific program in a container, or with certain settings, move the executable and put a wrapper script in its place that sets the settings you want and calls the original executable. If you want to modify the way one specific program calls other programs, without affecting how other programs behave, there are two cases: either the program is potentially hostile or not. If the program is potentially hostile, run it in a dedicated virtual machine. If the program is cooperative, the most obvious angle of attack is to run it with a different PATH . If the program uses absolute paths that aren't easy to configure, on a non-antique Linux system, you can run it in a separate mount namespace (see also kernel: Namespaces support ). If you really need fine control, you can load a library that overrides some library calls by invoking the program with LD_PRELOAD=my_override_library.so theprogram . See Redirect a file descriptor before execution for an example. Note that in addition to execve , you'll need to override all the C library functions that call execve internally, because LD_PRELOAD doesn't affect internal C library calls. You can get more precise control by running the program under ptrace ; this allows you to override a system call even if it's made by a C library function, but it's harder to set up (I don't know of any easy way to do it).
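For the audit route, a rule of roughly this shape (64-bit syscall ABI assumed; see auditctl(8) for the exact fields) records every execve under a searchable key:
auditctl -a always,exit -F arch=b64 -S execve -k program-exec
ausearch -k program-exec -i
The key name program-exec is arbitrary; ausearch then pulls the matching records back out of the audit log.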
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/262098", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1278/" ] }
262,129
I am trying to configure my bash ~/.inputrc to these settings (Note: ← , β†’ mean the left and right arrow keys) Ctrl + ← - should jump back a word Ctrl + β†’ - should jump forward a word Currently I have this in my ~/.inputrc and it doesn't work. Ctrl + arrow produces nothing. "\eC-5C":forward-word"\eC-5D":backward-word I'm sure my escape sequence is wrong. What are the correct escape sequences for the Ctrl + arrow combinations? terminal: tmux inside gnome-terminal
Gnome-terminal (more properly VTE ) imitates some version of xterm's escape sequences. How closely it does this, depends on the version of VTE. The relevant xterm documentation is in the PC-Style Function Keys section of XTerm Control Sequences . What you are looking for is a string like \e[1;5D (for control left-arrow), where the 5 denotes the control modifier. In ncurses, you can see these strings using infocmp -x , as the values for kUP5 , kDN5 , kLFT5 and kRIT5 . For example: kDN5=\E[1;5B, kLFT5=\E[1;5D, kRIT5=\E[1;5C, kUP5=\E[1;5A,
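Putting that together, and assuming your terminal really emits the xterm-style sequences above (you can check by pressing Ctrl-V followed by the key combination at a shell prompt), the ~/.inputrc entries would look like:
"\e[1;5C": forward-word
"\e[1;5D": backward-word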
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/262129", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106525/" ] }
262,177
For an assignment I need to implement my own version of the ps command, but I'm not sure where it gets its information from. Where do I look to find all process information?
On Linux, the ps command works by reading files in the proc filesystem . The directory /proc/ PID contains various files that provide information about process PID . The content of these files is generated on the fly by the kernel when a process reads them. You can find documentation about the entries in /proc in the proc(5) man page and in the kernel documentation . You can find this out by yourself by observing what the ps command does with strace , a command that lists the system calls made by a process. % strace -e open psopen("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3open("/lib/x86_64-linux-gnu/libprocps.so.3", O_RDONLY|O_CLOEXEC) = 3open("/lib/x86_64-linux-gnu/libdl.so.2", O_RDONLY|O_CLOEXEC) = 3open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3open("/sys/devices/system/cpu/online", O_RDONLY|O_CLOEXEC) = 3open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3open("/proc/self/stat", O_RDONLY) = 3open("/proc/uptime", O_RDONLY) = 3open("/proc/sys/kernel/pid_max", O_RDONLY) = 4open("/proc/meminfo", O_RDONLY) = 4open("/proc/1/stat", O_RDONLY) = 6open("/proc/1/status", O_RDONLY) = 6open("/proc/2/stat", O_RDONLY) = 6open("/proc/2/status", O_RDONLY) = 6open("/proc/3/stat", O_RDONLY) = 6open("/proc/3/status", O_RDONLY) = 6…% strace -e open ps…open("/proc/1/stat", O_RDONLY) = 6open("/proc/1/status", O_RDONLY) = 6open("/proc/1/cmdline", O_RDONLY) = 6…
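As a small illustration of the same idea, even a shell loop can produce a crude pid/command listing from those files (Linux-specific; /proc/PID/comm holds the short command name):
for d in /proc/[0-9]*; do printf '%s\t%s\n' "${d#/proc/}" "$(cat "$d/comm" 2>/dev/null)"; done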
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/262177", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148764/" ] }
262,185
I have a file with ANSI colors. test.txt: \e[0;31mExample\e[0m I would like to display the content of this file in a terminal, like cat does, but I would like to display the colors as well.
I was looking for a solution to this exact bash question. I nearly missed @Thomas Dickey's comment which provided me with the most elegant solution. echo -e $(cat test.txt) Some things which did not work for me are (apparently you can't pipe things to echo) cat test.txt | echo -e or less -R test.txt Another issue I had was that echo -e didn't print newlines and contiguous whitespace within the file nicely. To print those, I used the following. echo -ne $(cat test.txt | sed 's/$/\\n/' | sed 's/ /\\a /g') This works for a test.txt file containing \e[0;31mExa mple\e[0m\e[0;31mExample line2\e[0m
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/262185", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/119603/" ] }
263,250
I'm not sure if this is the right community to ask about my problem as I'm actually trying to launch docker within cygwin environment on windows . After Docker Toolbox install I'm trying to launch docker version in my cygwin shell and getting: $ docker versionCould not read CA certificate "\\cygdrive\\c\\Users\\Alexey\\.docker\\machine\\machines\\default\\ca.pem": open \cygdrive\c\Users\Alexey\.docker\machine\machines\default\ca.pem: The system cannot find the path specified. However, the actual file /cygdrive/c/Users/Alexey/.docker/machine/machines/default/ca.pem is there, the problem seems to be in wrong slashes (windows vs UNIX) in the path to the certificate file. But I can't figure out where to fix it. Here are the env variables set in ~/.bash_profile: export DOCKER_HOST=tcp://192.168.99.100:2376export DOCKER_MACHINE_NAME=defaultexport DOCKER_TLS_VERIFY=1export DOCKER_CERT_PATH=/cygdrive/c/Users/Alexey/.docker/machine/machines/defaultexport TERM=cygwin UPDATE Alexey@Alexey-PC ~$ echo $DOCKER_CERT_PATH/cygdrive/c/Users/Alexey/.docker/machine/machines/default/Alexey@Alexey-PC ~$ docker versionCould not read CA certificate "\\cygdrive\\c\\Users\\Alexey\\.docker\\machine\\machines\\default\\ca.pem": open \cygdrive\c\Users\Alexey\.docker\machine\machines\default\ca.pem: The system cannot find the path specified. SOLUTION as proposed below by @cloverhap we need to set DOCKER_CERT_PATH environment variable, but it should contain windows path, not cygwin and moreover, the backslashes should be escaped, so the solution is to add this: export DOCKER_CERT_PATH=C:\\Users\\%USERNAME%\\.docker\\machine\\machines\\default to .bash_profile
On my cygwin environment the docker cert path is actually set as below and docker seems to work fine. DOCKER_CERT_PATH=C:\Users\user\.docker\machine\machines\default The following does indeed give an error DOCKER_CERT_PATH=/cygdrive/c/Users/user/.docker/machine/machines/default$ docker versionCould not read CA certificate "\\cygdrive\\c\\Users\\user\\.docker\\machine\\machines\\default\\ca.pem": open \cygdrive\c\Users\user\.docker\machine\machines\default\ca.pem: The system cannot find the path specified. So try changing your DOCKER_CERT_PATH to regular Windows path format. export DOCKER_CERT_PATH=C:\\Users\\Alexey\\.docker\\machine\\machines\\default My docker version is 1.10.1, if the results are any different.
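An untested sketch of a way to avoid hard-coding the path: Cygwin ships cygpath, which converts a POSIX path to its Windows form, so something along these lines in .bash_profile may give the same result (the exact escaping the docker client expects is an assumption here).
# hypothetical: let cygpath build the Windows-style path for the docker client
export DOCKER_CERT_PATH="$(cygpath -w "$HOME/.docker/machine/machines/default")"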
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/263250", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124841/" ] }
263,274
I work on two computers with one USB headset. I want to listen to both by piping the non-Linux computers' output into the Linux computer's line in (blue audio jack) and mixing the signal into the Linux computer's headset output using PulseAudio. pavucontrol shows a "Built-in Audio Analog Stereo" Input Device which allows me to pick ports like "Line In" (selected), "Front Microphone", "Rear Microphone". I can see the device's volume meter reacting to audio playback on the non-Linux machine. How do I make PulseAudio play that audio signal into my choice of Output Device?
1. Load the loopback module pacmd load-module module-loopback latency_msec=5 creates a playback and a recording device. 2. Configure the devices in pavucontrol In pavucontrol, in the Recording tab, set the "Loopback" device's from input device to the device which receives the line in signal. In the Playback tab, set the "Loopback" device's on output device to the device through which you want to hear the line in signal. 3. Troubleshooting If the audio signal has issues, remove the module with pacmd unload-module module-loopback and retry with a higher latency_msec= value. Additional Notes A modern mid-range computer might easily be able to manage lower latency with the latency_msec=1 option: pacmd load-module module-loopback latency_msec=1 This answer was made possible by this forum post . Thanks!
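As a possible follow-up (not from the original answer): a module loaded with pacmd is gone after PulseAudio restarts, so to make the loopback persistent the same line can be appended to PulseAudio's startup script, for example:
# untested sketch: load the loopback automatically at every PulseAudio start
echo 'load-module module-loopback latency_msec=5' | sudo tee -a /etc/pulse/default.pa
A per-user ~/.config/pulse/default.pa works too, but note that it replaces the system file, so it usually needs to start with .include /etc/pulse/default.pa.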
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/263274", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79702/" ] }
263,287
On a CentOS 7 server, I am trying to install version 5.x of Node.js , but for some reason, yum keeps trying to install version 0.x and returning an error when it cannot find version 0.x at the 5.x download site. The error indicates that yum is concatenating a version 0.x file name with a version 5.x url. I assume this points to something wrong in the config for yum on the CentOS 7 machine. What specific changes to the below need to be made to install version 5.x? The root error message is: Error downloading packages: nodejs-0.10.42-1nodesource.el7.centos.x86_64: [Errno 256] No more mirrors to try. The publisher page from which my code below originated can be viewed at the following link . Also, some complication may be resulting from an earlier attempt following instructions at this other link . And to explore the possible remnants of the earlier attempt, I am currently running the following command and waiting for the results: grep -rnw '/path/to/somewhere/' -e "pattern" Here is the terminal output for setting the nodesource location: [root@localhost tmp]# curl --silent --location https://rpm.nodesource.com/setup_5.x | bash -## Installing the NodeSource Node.js 5.x repo...## Inspecting system...+ rpm -q --whatprovides redhat-release || rpm -q --whatprovides centos-release || rpm -q --whatprovides cloudlinux-release || rpm -q --whatprovides sl-release+ uname -m## Confirming "el7-x86_64" is supported...+ curl -sLf -o /dev/null 'https://rpm.nodesource.com/pub_5.x/el/7/x86_64/nodesource-release-el7-1.noarch.rpm'## Downloading release setup RPM...+ mktemp+ curl -sL -o '/tmp/tmp.sH82u4Gpap' 'https://rpm.nodesource.com/pub_5.x/el/7/x86_64/nodesource-release-el7-1.noarch.rpm'## Installing release setup RPM...+ rpm -i --nosignature --force '/tmp/tmp.sH82u4Gpap'## Cleaning up...+ rm -f '/tmp/tmp.sH82u4Gpap'## Checking for existing installations...+ rpm -qa 'node|npm' | grep -v nodesource## Run `yum install -y nodejs` (as root) to install Node.js 5.x and npm.## You may also need development tools to build native addons:## `yum install -y gcc-c++ make` Here is a listing of the contents of the /tmp folder after the above command: [root@localhost tmp]# ls -altotal 8drwxrwxrwt. 13 root root 320 Feb 14 06:13 .dr-xr-xr-x. 19 root root 4096 Jan 29 20:54 ..drwx------. 2 user user 60 Feb 13 20:05 .esd-1000drwxrwxrwt. 2 root root 40 Feb 13 20:04 .font-unixprw-------. 1 root root 0 Feb 13 20:05 hogsuspenddrwxrwxrwt. 2 root root 80 Feb 13 20:05 .ICE-unixsrwxrwxrwx. 1 mongod mongod 0 Feb 13 20:04 mongodb-27017.sockdrwx------. 2 user user 40 Dec 31 1969 orbit-userdrwx------. 2 user user 60 Feb 13 20:05 ssh-AmQyH8IIEC2mdrwx------. 3 root root 60 Feb 13 20:05 systemd-private-74534ca9946043cc88dbe52a38b4344d-colord.service-hDR3Cddrwx------. 3 root root 60 Feb 13 20:04 systemd-private-74534ca9946043cc88dbe52a38b4344d-rtkit-daemon.service-ZAQmPkdrwxrwxrwt. 2 root root 40 Feb 13 20:04 .Test-unixdrwx------. 2 user user 40 Feb 13 20:08 tracker-extract-files.1000-r--r--r--. 1 root root 11 Feb 13 20:05 .X0-lockdrwxrwxrwt. 2 root root 60 Feb 13 20:05 .X11-unixdrwxrwxrwt. 
2 root root 40 Feb 13 20:04 .XIM-unix Here are the results of trying to install nodejs using yum : [root@localhost tmp]# yum install -y nodejsLoaded plugins: fastestmirror, langpacksLoading mirror speeds from cached hostfile * base: mirror.lax.hugeserver.com * epel: mirror.sfo12.us.leaseweb.net * extras: mirror.keystealth.org * updates: mirror.supremebytes.comResolving Dependencies--> Running transaction check---> Package nodejs.x86_64 0:0.10.42-1nodesource.el7.centos will be installed--> Finished Dependency ResolutionDependencies Resolved================================================================================================================================================================================ Package Arch Version Repository Size================================================================================================================================================================================Installing: nodejs x86_64 0.10.42-1nodesource.el7.centos nodesource 4.5 MTransaction Summary================================================================================================================================================================================Install 1 PackageTotal download size: 4.5 MInstalled size: 16 MDownloading packages:No Presto metadata available for nodesourcenodejs-0.10.42-1nodesource.el7 FAILED https://rpm.nodesource.com/pub_5.x/el/7/x86_64/nodejs-0.10.42-1nodesource.el7.centos.x86_64.rpm: [Errno 14] HTTPS Error 404 - Not Found ] 0.0 B/s | 0 B --:--:-- ETA Trying other mirror.To address this issue please refer to the below knowledge base article https://access.redhat.com/articles/1320623If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/Error downloading packages: nodejs-0.10.42-1nodesource.el7.centos.x86_64: [Errno 256] No more mirrors to try.[root@localhost tmp]# For the record, gedit /etc/yum.repos.d/nodesource-el.repo shows the following: [nodesource]name=Node.js Packages for Enterprise Linux 7 - $basearchbaseurl=https://rpm.nodesource.com/pub_5.x/el/7/$basearchfailovermethod=priorityenabled=1gpgcheck=1gpgkey=file:///etc/pki/rpm-gpg/NODESOURCE-GPG-SIGNING-KEY-EL[nodesource-source]name=Node.js for Enterprise Linux 7 - $basearch - Sourcebaseurl=https://rpm.nodesource.com/pub_5.x/el/7/SRPMSfailovermethod=priorityenabled=0gpgkey=file:///etc/pki/rpm-gpg/NODESOURCE-GPG-SIGNING-KEY-ELgpgcheck=1 I suspect that the problem might be resultimg from having run this other command previously: curl --silent --location rpm.nodesource.com/setup | bash - I am guessing that the underlying problem is how yum persists the results of those curl --silent --location ... | bash - commands CONTROL CASE: On a different, completely fresh installation of CentOS 7 on a different Virtual Machine, the following three simple commands successfully installed the correct current version 5.x of nodejs: # cd /tmp# curl --silent --location https://rpm.nodesource.com/setup_5.x | bash -# yum install -y nodejs # node --versionv5.6.0 These results from the control case indicate that the problem is in how yum is configured in the machine that is having the problem. So what specific changes need to be made to the machine with the problem so that yum is configured to generate the correct download url? It is not reasonable to port everything to a different VM. Surely this is just a line or two in a yum config somewhere that can be changed to resolve this problem.
This appears to have been a cache issue, though it's unclear what went wrong. After some conversation with the poster in chat, running yum clean all fixed the issue. The poster noted the following: [root@localhost yum]# ls /var/cache/yum/x86_64/7/nodesource/packages nodejs-0.10.42-1nodesource.el7.centos.x86_64.rpm [root@localhost yum]# yum clean all[root@localhost yum]# ls /var/cache/yum/x86_64/7/nodesource/packages [root@localhost yum]# yum install -y nodejs.... much terminal output during successful install[root@localhost yum]# node --versionv5.6.0 So the yum clean all deleted the obsolete package that had been stored in the cache. I don't have sufficient knowledge or experience of Red Hat based distributions to say what went wrong here, so will refrain from commenting further.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/263287", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92670/" ] }
263,302
I have setup an SSH connection to run a program on a remote server. The program prints debug information on the terminal every 10 seconds. If I leave the SSH windows open for a long time (say 10 hours), does the SSH connection become inactive? How is activity defined in an SSH session? Should I type/run command every x seconds to keep the session alive?
You should not have to do anything special: SSH does not terminate a connection because of inactivity, so there is no inactivity period defined within SSH. However, one of the devices on the network route between you and your server might lose track of the connection; to prevent that, activity from one side is usually enough, but not always (there are "arrival confirmation packets", acknowledgements, going back to the server from the client when using TCP). Traffic from both sides is no guarantee of continued connectivity: if you have a DSL modem and your provider decides that you get a new IP address every day, the connection will be broken. You can have your client send some packets on a regular basis by inserting ServerAliveInterval 5 in /etc/ssh/ssh_config or your ~/.ssh/config to have some client-to-server traffic every 5 seconds in addition to the traffic coming from the server every 10 seconds. The simpler "solution" to a DSL modem resetting the connection is running your server-side software in tmux or screen : if the connection is broken, you just SSH into the server again, issue tmux attach or screen -r and you can continue to view the uninterrupted server program. Using tmux / screen is especially useful if you are not so much worried about losing the connection, but about the consequence that the server program stops if you do. For that you can also redirect the output of the original program to a file and use tail -f , but that doesn't allow you to easily interact with the server program (if that would be necessary). tmux and screen (in their basic forms) are easy to use, but there are some side-effects like not being able to scroll back using the sliders on your graphical terminal (you have to use keyboard shortcuts to have the multiplexer scroll back in its buffer for that). A more flexible solution to devices breaking your connection is to use mosh . This uses SSH to exchange some secret info and then allows for reattachment even if one or both of the IP addresses change. This is, however, more difficult to set up and get to work.
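For illustration, a minimal client-side keepalive entry in ~/.ssh/config might look like the following (the interval and count values are just examples):
# send an application-level keepalive every 60 s; give up after 3 missed replies
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3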
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/263302", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23151/" ] }
263,309
I'm playing with btrfs, which allows cp --reflink to copy-on-write. Other programs, such as lxc-clone , may use this feature as well. My question is, how to tell if a file is a CoW of another? Like for hardlink, I can tell from the inode number.
Good question. Looks like there aren't currently any easy high-level ways to tell. One problem is that a file may only share part of the data via Copy-on-Write. This is called a physical extent, and some or all of the physical extents may be shared between CoW files. There is nothing analogous to an inode which, when compared between files, would tell you that the files share the same physical extents. (Edit: see my other answer ). The low level answer is that you can ask the kernel which physical extents are used for the file using the FS_IOC_FIEMAP ioctl , which is documented in Documentation/filesystems/fiemap.txt . In principle, if all of the physical extents are the same, then the file must be sharing the same underlying storage. Few things implement a way to look at this information at a higher level. I found some go code here . Apparently the filefrag utility is supposed to show the extents with -v. In addition, btrfs-debug-tree shows this information. I would exercise caution however, since these things may have had little use in the wild for this purpose, you could find bugs giving you wrong answers, so beware relying on this data for deciding on operations which could cause data corruption. Some related questions: How to find out if a file on btrfs is copy-on-write? How to find data copies of a given file in Btrfs filesystem?
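As a rough illustration of the low-level approach (a sketch, not a guaranteed method): filefrag -v prints each file's physical extents from the same FIEMAP data, so two reflinked copies should report the same physical_offset values, and recent filefrag versions may additionally flag such extents as shared. The file names below are placeholders.
# compare the physical_offset columns of the two files
filefrag -v original.img
filefrag -v copy.img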
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/263309", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44598/" ] }
263,312
I wrote a script to change permissions on all files in a directory: #!/bin/bashfiles=`find "$1"`for f in $files; do chown "$2" "$f" chmod 600 "$2"done Obviously, the second argument to chmod should be "$f" instead of "$2" . However, when I ran the script (on a small directory) I also forgot to include the second argument, which should have been "dave:dave" . Now, all the files in the directory are completely messed up: ~ $ ll Documents/ ls: cannot access Documents/wiki.txt: Permission deniedls: cannot access Documents/todo.txt: Permission deniedls: cannot access Documents/modules.txt: Permission deniedls: cannot access Documents/packages.txt: Permission deniedtotal 0-????????? ? ? ? ? ? modules.txt-????????? ? ? ? ? ? packages.txt-????????? ? ? ? ? ? todo.txt-????????? ? ? ? ? ? wiki.txt Running sudo chown dave:dave Documents/* and sudo chmod 600 Documents/* throws no errors, but the files remain unchanged. I know I can sudo cat each file into a new file, but I'm curious how to fix the permissions on the original files.
In addition to the answers given in the comments, you should also note that your script will break on any filenames with spaces in them. You can do all of this in a single command using find , rather than trying to parse a list of filenames output from find . Much more robust; handles filenames regardless of special characters or whitespace. find "$1" -type f -exec chown "$2" {} \; -exec chmod 600 {} \; Note that if the chown fails on a given file, the chmod will not be run on that file. That's probably the behavior you want anyway. Since you already ran an erroneous command that removed execute permissions from your "Documents" directory, you need to add back execute permissions: chmod u+x Documents If there are more directories that erroneously had execute permissions removed, you should be able to fix them with: find Documents -type d -exec chmod u+x {} \; I don't think you'll need this, though, as once execute permissions were removed from "Documents" then none of its subdirectories would be accessible, so execute permissions wouldn't have been removed from them.
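If the tree is large, a variant using -exec ... + batches many files per chown/chmod invocation and is usually faster; note, however, that with + the chmod is no longer skipped for files whose chown failed, so the behaviour differs slightly from the command above:
find "$1" -type f -exec chown "$2" {} + -exec chmod 600 {} +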
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/263312", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22172/" ] }
263,342
I created a directory in my home directory. and I set its permission as follows: uhmwk.1.4$ chmod 655 doguhmwk.1.4$ ls -ltotal 4drw-r-sr-x 2 s9 s9 4096 Feb 14 21:57 dog why is the group permission "r-s" when I set it to read and execute and it should be "r-x"? Please help
It means that directory setgid is set and the execute bit is set too. This basically means that files created by other users in this directory will have the group of the directory owner. Man page says that... chmod preserves a directory's set-user-ID and set-group-ID bits unless you explicitly specify otherwise. You can set or clear the bits with symbolic modes like u+s and g-s, and you can set (but not clear) the bits with a numeric mode. So... If these directory mode bits have been set in the past they will remain there until you explicitily remove them.
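For example, to drop the setgid bit explicitly and keep the intended 655 permissions (the symbolic g-s form is the reliable way, since, as quoted above, a plain numeric mode may leave the bit in place):
chmod g-s dog     # clear the setgid bit
chmod 655 dog     # re-apply the intended mode
ls -ld dog        # should now show drw-r-xr-x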
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/263342", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156639/" ] }
263,369
I have a systemd service (for heka) which causes me some headaches. The problem is that "start" returns successfully even if the heka daemon dies shortly after starting. This is happening if the configuration files are wrong, for example: the process will start, it will verify the configuration and die if it's not happy about what it finds. Systemd returns successfully in this case. Is there any way to force systemd to check the program status after it is initializing? Maybe to sleep n seconds after the process has started? This is the script: [Unit] Description=Heka event/metric/log collection and routing daemon After=network.target auditd.service ConditionPathExists=!/etc/heka/hekad_not_to_be_run [Service] EnvironmentFile=-/etc/default/heka Type=simple PIDFile=/var/run/hekad.pid ExecStart=/usr/bin/hekad -config=/etc/heka ExecReload=/bin/kill -HUP $MAINPID KillMode=process Restart=on-failure StandardError=inherit [Install] WantedBy=multi-user.target Alias=heka.service
You can chain multiple ExecStartPost commands together. And you can run them even if the main ExecStart failed by prefixing the command with - ( systemd.service: Type= ). Something like this: ExecStart=-/usr/bin/hekad -config=/etc/hekaExecStartPost=/bin/sleep 3ExecStartPost=/bin/kill -0 $MAINPID &>/dev/null This ensures that you still have the MAINPID to use when stopping or restarting the service, for instance.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/263369", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40485/" ] }
263,373
I have two network interfaces: eth0 (10.0.0.0) and usb0 (umts usb-modem) me@ThinkCentre-A50:~$ route -nKernel IP routing tableDestination Gateway Genmask Flags Metric Ref Use Iface10.41.250.0 0.0.0.0 255.255.255.128 U 1 0 0 eth0192.168.42.0 0.0.0.0 255.255.255.0 U 1 0 0 usb0 How can I use both networks simultaneously. Go Internet (www) via usb0, and connect to the local network via the eth0?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/263373", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156584/" ] }
263,472
I am using the ls command in bash, trying to find all files or directory of length n. Let's say n=5 My command is: ls ????? But this would also include characters that are non letters such as period. For example, the following files would match: ab.cd abd.c I only want to match files that have 5 letter or number names: five1five2 five3 But not abc.d ab.cd a.bcd How can I modify my command? Answer found: ls [a-zA-Z0-9][a-zA-Z0-9][a-zA-Z0-9][a-zA-Z0-9][a-zA-Z0-9][a-zA-Z0-9][a-zA-Z0-9] I found the answer but how can I make this less ugly?
Note that it's not ls that interprets those globs. Those globs are expanded by your shell into a list of file names that is passed as arguments to ls . Different shells have different globbing capabilities. bash has a few extensions over standard globs (borrowed from ksh88 and enabled with shopt -s extglob ) but is still limited compared to shells like zsh or ksh93 . With zsh : setopt extendedglobls -d [[:alnum:]](#c5) ksh93 : ls -d {5}([[:alnum:]]) or: ls -d {5}(\w) # (\w includes underscore in addition to alnums) or, if you wanted to use extended regular expressions: ls -d ~(E)^[[:alnum:]]{5}$ With bash or other POSIX shells which don't have equivalent globbing operators, you'd need to do: ls -d [[:alnum:]][[:alnum:]][[:alnum:]][[:alnum:]][[:alnum:]] Note that [[:alnum:]] includes any alphabetic character in the current locale (not only latin alphabets let alone the English one) and 0123456789 (and possibly other types of digits). If you want the letters in the English alphabet, name characters individually: c='[0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ]'unset -v IFSls -d $c$c$c$c$c Or use the C locale: (export LC_ALL=Cls -d [[:alnum:]][[:alnum:]][[:alnum:]][[:alnum:]][[:alnum:]])
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/263472", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156639/" ] }
263,511
I have Debian Jessie and have added backports (according to these instructions ): echo "deb http://http.debian.net/debian jessie-backports main contrib non-free" | sudo tee /etc/apt/sources.list.d/backports.list (I did this to get a newer kernel, as I needed it for some hardware in my laptop.) The instructions say that nothing should happen, unless I explicitly ask for a backported package. e.g. apt-get -t jessie-backports install "package" . However I now seem to have a whole load of my system from backports, and one package has been uninstalled, because it depends on an exact version of something that was updated to backports. So my question: How do I first stop it, so that no more backports are installed? How do I remove the existing backports? Note: this gets a list of installed packages that are from backports (and in a format that can be passed to apt-get install ; for some reason putting sudo apt-get install in place of echo at the end of the pipeline does not work ): cat /var/log/dpkg.log.1 |grep -v linux | grep -v xserver | grep -v firmware | grep "status installed" | grep bpo | cut -d" " -f 5 | cut -d: -f 1 | xargs -i{} -n1 bash -c "dpkg-query -s {} >/dev/null && echo {}" | sed -r -e "s~.*~\0/jessie~" | xargs echo Caution: Some of the packages are automatically installed, so if you reinstall them all, then these automatically installed packages will be marked as manually installed, and thus not removed when no longer needed. Anyone have any ideas as to how to solve this?
Try adding the following to either /etc/apt/apt.conf or a file under /etc/apt/apt.conf.d : APT::Default-Release "jessie"; To remove the existing backports, you'll need to get a list of which ones were installed, and what version they replaced. Fortunately, this information can be extracted very easily from /var/log/dpkg.log e.g. grep ' upgrade ' /var/log/dpkg.log will give you many lines like the following: 2016-02-15 11:06:32 upgrade python-numpy:amd64 1:1.11.0~b2-1 1:1.11.0~b3-1 This says that at 11:06am on 15th Feb, I upgraded python-numpy from version 1:1.11.0~b2-1 to version 1:1.11.0~b3-1 If I wanted to downgrade to the previous version, then I would run: apt-get install python-numpy=1:1.11.0~b2-1 NOTE: in this particular case, it probably won't work because I run debian sid aka unstable so the old version is probably no longer available in the deb repository. If you're running jessie and are re-installing a jessie version of a package as a downgrade to the jessie-backports version, it will work as expected. Similarly, if a package has been removed you can find it and its exact version by grepping for remove in /var/log/dpkg.log . Bulk downgrading of many packages can be largely automated using standard tools like awk and grep . For example, If you know that the jessie-backports upgrades you installed were all done on a particular day (e.g. 2016-02-15), then you can downgrade to the previous versions with something like: apt-get -d -u install $(awk '/2016-02-15 ..:..:.. upgrade / {print $4 "=" $5}' /var/log/dpkg.log) (line-feed and indentation added to avoid horizontal scroll-bar) NOTE the use of the -d ( --download-only ) option. Re-run the command and remove that option after you've verified that the apt-get install will do what you want, and ONLY what you want. I would also recommend running only the awk portion of that command by itself first so you can see a list of exactly which packages and versions will be re-installed.
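As an additional, illustrative safeguard (an assumption, not part of the original answer): an explicit pin in /etc/apt/preferences.d/ makes the intent visible; backports repositories normally default to priority 100 already, and choosing a lower value also stops upgrades of backports packages that are already installed.
# /etc/apt/preferences.d/backports (sketch)
Package: *
Pin: release a=jessie-backports
Pin-Priority: 100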
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/263511", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4778/" ] }
263,527
Is there some way to find out all the files on a given system that weren't installed via RPM? I understand that I can brute force this myself using something like rpmquery -f in a script that loops through all files in the file system, however I was wondering if there is some standard way to do this for RPM based systems (specifically Fedora, which I use at home). Since this for Fedora, it is fine to use yum or dnf to figure this out. If there is no standard way to do it, does anyone know of some pre-existing scripts to do this? I don't want to re-invent the wheel if I don't need to. P.S. There is another question similar to this , but it is about Gentoo and Portage, so it isn't totally relevant.
a bit late to the party, but hopefully someone will find this useful: find /usr/ -exec /bin/sh -c 'rpm -qf -- "$1" >/dev/null 2>&1 || echo "$1"' find_sh {} \; This command crawls over the file system, and runs rpm -qf on it. rpm -qf prints the corresponding package for a file, and luckily has a return value of 0 if it finds one and 1 otherwise. If you're brave, you can tie the output to | xargs rm -f , but personally I wouldn't be so brave. Turns out there's a lot of stuff in /usr that's not really owned by anything.
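A rough variant of the same idea that batches many paths per rpm invocation (usually much faster), relying on rpm's "is not owned by any package" message for unowned files; the 2>&1 is there in case that message goes to stderr on your rpm version.
find /usr -type f -print0 | xargs -0 rpm -qf 2>&1 | grep 'is not owned'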
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/263527", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28898/" ] }
263,531
I have a folder /srv/beta on an Ubuntu 14.04 server used to upload source code. How can I give all users in the dev-team full permissions on only this folder (e.g. editing with vim, uploading, ...)? Thanks...
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/263531", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/151643/" ] }
263,615
I am running Windows 10 and am starting to learn how to boot from USB devices. I have a 16GB USB (USB 3.0) drive and I want to do the following: Make the 16GB USB drive run Debian Linux. Keep Windows 10 on my C: drive. Not partition my hard drive or set up a dual boot. Run the OS from my USB drive. Let all of my files and programs be saved to the USB (so I don't think that a live OS would be suitable). It should work as though it was a dual boot as in the way files are saved. Make it work on any computer it is plugged in to (assuming the BIOS is compatible). I already know how to boot from a USB in my BIOS but I am unsure as to where to get an ISO file and how to install it to the USB.
To create a bootable USB, you can follow the steps below: STEP 1 Go to the website of the OS you wish to install, and find an iso image to download. In your case, since you want to run a Debian OS, here is a link to its iso options: https://www.debian.org/distrib/netinst Choose an iso image from the options, and click on it. This should automatically start the image download. While file is downloading, go to second step. STEP 2 Get a utility program to format and create bootable USB flash drives. Some have already been suggested, so I will just link you to my favourite: https://rufus.akeo.ie/ Download the utility and go to third step. STEP 3 By this stage, if your iso image has not yet finished downloading, then wait until it does. Now that you have both the utility and the iso image downloaded: Plug in your USB drive Open Rufus (to write your USB) Select the iso image you just downloaded to write on the USB, and fill out the other options accordingly (eg. selecting your USB drive etc) Click on the option for starting the write process (with Rufus, it is the "Start" button) Once Rufus finishes, simply reboot, booting from your USB, which should start up your Debian OS.
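For completeness, if you also have access to an existing Linux system and prefer not to use a GUI tool, the same image can be written with dd; the iso filename below is a placeholder and /dev/sdX must be replaced with the USB stick's device, since this overwrites it entirely.
# status=progress needs a reasonably recent GNU coreutils
sudo dd if=debian-8-netinst-amd64.iso of=/dev/sdX bs=4M status=progress && sync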
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/263615", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156732/" ] }
263,668
I have a string of the format [0-9]+\.[0-9]+\.[0-9] . I need to extract the first, second, and third numbers separately. As I understand it, capture groups should be capable of this. I should be able to use sed "s/\([0-9]*\)/\1/g to get the first number, sed "s/\([0-9]*\)/\2/g to get the second number, and sed "s/\([0-9]*\)/\3/g to get the third number. In each case, though, I am getting the whole string. Why is this happening?
We can't give you a full answer without an example of your input but I can tell you that your understanding of capture groups is wrong. You don't use them sequentially, they only refer to the regex on the left hand side of the same substitution operator. If you capture, for example, /(foo)(bar)(baz)/ , then foo will be \1 , bar will be \2 and baz will be \3 . You can't do s/(foo)/\1/; s/(bar)/\2/ , because in the second s/// call, there is only one captured group, so \2 will not be defined. So, to capture your three groups of digits, you would need to do: sed 's/\([0-9]*\)\.\([0-9]*\)\.\([0-9]*\)/\1 : \2 : \3/' Or, the more readable: sed -E 's/([0-9]*)\.([0-9]*)\.([0-9]*)/\1 : \2 : \3/'
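Applied to the version-like strings from the question, each field can then be pulled out with its own substitution, for example:
v='12.34.5'   # sample input in the question's [0-9]+\.[0-9]+\.[0-9] format
echo "$v" | sed -E 's/([0-9]+)\.([0-9]+)\.([0-9]+)/\1/'   # -> 12
echo "$v" | sed -E 's/([0-9]+)\.([0-9]+)\.([0-9]+)/\2/'   # -> 34
echo "$v" | sed -E 's/([0-9]+)\.([0-9]+)\.([0-9]+)/\3/'   # -> 5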
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/263668", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89807/" ] }
263,677
I have one zfs pool containing several zvols and datasets of which some are also nested.All datasets and zvols are periodically snapshotted by zfs-auto-snapshot.All datasets and zvols also have some manually created snapshots. I have setup a remote pool on which due to lack of time, initial copying over local high speed network via zfs send -R did not complete (some datasets are missing, some datasets have outdated or missing snapshots). Now the pool is physically remote over a slow speed connection and I need to periodically sync the remote pool with local pool, meaning data present in local pool must be copied to remote pool, data gone from local pool must be deleted from remote pool, and data present in remote pool but not in local pool must be deleted from remote pool, by data meaning 'zvols', 'datasets' or 'snapshots'. If I was doing this between two regular filesystems using rsync, it would be "-axPHAX --delete" (that's what I actually do to backup some systems). How do I setup a synchronizing task so the remote pool zvols & datasets (including their snapshots) can be in sync with local zvols,datasets&snapshots? I would like to avoid transferring over ssh, because of low throughput performance of ssh; I'd prefer mbuffer or iscsi instead.
Disclaimer: As I've never used zvols, I cannot say if they are any different in replication than normal filesystems or snapshots. I assume they are, but do not take my word for it. Your question is actually multiple questions, I try to answer them separately: How to replicate/mirror complete pool to remote location You need to split the task into two parts: first, the initial replication has to be complete, afterwards incremental replication is possible, as long as you do not mess with your replication snapshots . To enable incremental replication, you need to preserve the last replication snapshots, everything before that can be deleted. If you delete the previous snapshot, zfs recv will complain and abort the replication. In this case you have to start all over again, so try not to do this. If you just need the correct options, they are: zfs send : -R : send everything under the given pool or dataset (recursive replication, needed all the time, includes -p ). Also, when receiving, all deleted source snapshots are deleted on the destination. -I : include all intermediate snapshots between the last replication snapshot and the current replication snapshot (needed only with incremental sends) zfs recv : -F : expand target pool, including deletion of existing datasets that are deleted on the source -d : discard the name of the source pool and replace it with the destination pool name (the rest of the filesystem paths will be preserved, and if needed also created) -u : do not mount filesystem on destination If you prefer a complete example, here is a small script: #!/bin/sh# Setup/variables:# Each snapshot name must be unique, timestamp is a good choice.# You can also use Solaris date, but I don't know the correct syntax.snapshot_string=DO_NOT_DELETE_remote_replication_timestamp=$(/usr/gnu/bin/date '+%Y%m%d%H%M%S')source_pool=tankdestination_pool=tanknew_snap="$source_pool"@"$snapshot_string""$timestamp"destination_host=remotehostname# Initial send:# Create first recursive snapshot of the whole pool.zfs snapshot -r "$new_snap"# Initial replication via SSH.zfs send -R "$new_snap" | ssh "$destination_host" zfs recv -Fdu "$destination_pool"# Incremental sends:# Get old snapshot name.old_snap=$(zfs list -H -o name -t snapshot -r "$source_pool" | grep "$source_pool"@"$snapshot_string" | tail --lines=1)# Create new recursive snapshot of the whole pool.zfs snapshot -r "$new_snap"# Incremental replication via SSH.zfs send -R -I "$old_snap" "$new_snap" | ssh "$destination_host" zfs recv -Fdu "$destination_pool"# Delete older snaps on the local source (grep -v inverts the selection)delete_from=$(zfs list -H -o name -t snapshot -r "$source_pool" | grep "$snapshot_string" | grep -v "$timestamp")for snap in $delete_from; do zfs destroy "$snap"done Use something faster than SSH If you have a sufficiently secured connection, for example IPSec or OpenVPN tunnel and a separate VLAN that only exists between sender and receiver, you may switch from SSH to unencrypted alternatives like mbuffer as described here , or you could use SSH with weak/no encryption and disabled compression, which is detailed here . There also was a website about recomiling SSH to be much faster, but unfortunately I don't remember the URL - I'll edit it later if I find it. For very large datasets and slow connections, it may also be useful to to the first transmission via hard disk (use encrypted disk to store zpool and transmit it in sealed package via courier, mail or in person). 
As the method of transmission does not matter for send/recv, you can pipe everything to the disk, export the pool, send the disk to its destination, import the pool and then transmit all incremental sends via SSH. The problem with messed up snapshots As stated earlier, if you delete/modify your replication snapshots, you will receive the error message cannot send 'pool/fs@name': not an earlier snapshot from the same fs which means either your command was wrong or you are in an inconsistent state where you must remove the snapshots and start all over. This has several negative implications: You cannot delete a replication snapshot until the new replication snapshot was successfully transferred. As these replication snapshots include the state of all other (older) snapshots, empty space of deleted files and snapshots will only be reclaimed if the replication finishes. This may lead to temporary or permanent space problems on your pool which you can only fix by restarting or finishing the complete replication procedure. You will have many additional snapshots, which slows down the list command (except on Oracle Solaris 11, where this was fixed). You may need to protect the snapshots against (accidental) removal, except by the script itself. There exists a possible solution to those problems, but I have not tried it myself. You could use zfs bookmark , a new feature in OpenSolaris/illumos created specifically for this task. This would free you of snapshot management. The only downside is that at present, it only works for single datasets, not recursively. You would have to save a list of all your old and new datasets and then loop over them, bookmarking, sending and receiving them, and then updating the list (or small database, if you prefer). If you try the bookmark route, I would be interested to hear how it worked out for you!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/263677", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42673/" ] }
263,684
In my C++ program where intensive disk, network I/O and even CPU computation occur, I am using memory mapped region as an array. With very small data, it works fine. However when I ran the program with very huge data my application crashes. (I absolutely understand that mmap region's size should not be a concern because OS will deal with all the I/O and buffering) I don't want to blame on Linux for it, but I'd like to know whether there is any case 'mmap' becomes unstable and can make the OS crashes? When OS crashes, in the screen I can see the kernel panic message related with some blah blah 'write_back' ... (I will add the msg here as soon as I reproduce the problem) // The program uses MPI network operations over memory mapped region (Intel MPI with Infiniband's RDMA enabled) where RDMA possibly bypasses OS kernel and directly writes some data into memory. I investigated the callstack and found some kernel codes: ( http://lxr.free-electrons.com/source/fs/ext4/inode.c#L2313 ) I guess the errors comes from 'BUG_ON' trap in #L2386 BUG_ON(PageWriteback(page)); kernel's ver is 3.19.0 ( https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.19.tar.xz )
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/263684", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/146221/" ] }
263,703
I have a file with a lot of lines like this /item/pubDate=Sun, 23 Feb 2014 00:55:04 +010 If I execute this echo "/item/pubDate=Sun, 23 Feb 2014 00:55:04 +010" | grep -Po "(?<=\=).*"Sun, 23 Feb 2014 00:55:04 +010 I get the correct date (all in one line). Now I want to try this with a lot of dates in a xml file. I use this and it's ok. xml2 < date_list | egrep "pubDate" | grep -Po "(?<=\=).*"Fri, 22 Jan 2016 17:56:29 +0100Sun, 13 Dec 2015 18:33:02 +0100Wed, 18 Nov 2015 15:27:43 +0100... But now I want to use the date in a bash program and I get this output for fecha in $(xml2 < podcast | egrep "pubDate" | grep -Po "(?<=\=).*"); do echo $fecha; done Fri, 22 Jan 2016 17:56:29 +0100 Sun, 13 Dec 2015 18:33:02 +0100 Wed, 18 Nov 2015 15:27:43 +0100 I want the date output in one line (in variable fecha) how the first and second examples but I don't know how to do it.
Do it this way instead: while IFS= read -r fecha; do echo $fechadone < <(xml2 < podcast | egrep "pubDate" | grep -Po "(?<=\=).*") Bash will separate "words" to loop through by characters in the Internal Field Separator ( $IFS ). You can temporarily disable this behavior by setting IFS to nothing for the duration of the read command. The pattern above will always loop line-by-line. <(command) makes the output of a command look like a real file, which we then redirect into our read loop. $ while IFS= read -r line; do echo $line; done < <(cat ./test.input)Fri, 22 Jan 2016 17:56:29 +0100Sun, 13 Dec 2015 18:33:02 +0100Wed, 18 Nov 2015 15:27:43 +0100
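An equivalent sketch that first collects all the dates into an array (bash 4+), in case they need to be reused later in the script:
mapfile -t fechas < <(xml2 < podcast | egrep "pubDate" | grep -Po "(?<=\=).*")
for fecha in "${fechas[@]}"; do
    printf '%s\n' "$fecha"
done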
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/263703", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156117/" ] }
263,761
A lot of Linux programs state that the config file(s) location is distribution dependent. I was wondering how the different distributions do this. Do they actually modify the source code? Is there build parameters that sets these locations? I have searched for this but cannot find any information. I know it's out there, I just can't seem to find it. What is the "Linux way" in regards to this?
It depends on the distribution and the original ('upstream') source. With most autoconf- and automake-using packages, it is possible to specify the directory where the configuration files will be looked for using the --sysconfdir parameter. Other build systems (e.g., CMake) have similar options. If the source package uses one of those build systems, then the packager can easily specify the right parameters, and no patches are required. Even if they don't (e.g., because the upstream source uses some home-grown build system), it's often still possible to specify some build configuration to move the config files to a particular location without having to patch the upstream source. If that isn't the case, then often the distribution will indeed have to add patches to the source to make it move files in what they consider to be the 'right' location. In most cases, distribution packagers will then write a patch which will allow the source to be configured in the above sense, so that they can send the patch to the upstream maintainers, and don't have to keep maintaining/updating it. This is the case for configuration file locations, but also for other things, like the bin / sbin executables (the interpretation of what is a system administrator's command differs between distributions), the location where documentation is installed, and so on. Side note: if you maintain some free software, please make it easy for packagers to talk to you. Otherwise we have to maintain such patches for no particularly good reason...
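For illustration only (the exact flags depend on the project's build system), a packager would typically configure the build along these lines:
# autotools
./configure --prefix=/usr --sysconfdir=/etc
# CMake, for projects that use the GNUInstallDirs module
cmake -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc .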
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/263761", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156853/" ] }
263,778
When I typed ./startup.sh i am getting: Using CATALINA_BASE: /home/ashok/apache-tomcat-7.0.56Using CATALINA_HOME: /home/ashok/apache-tomcat-7.0.56Using CATALINA_TMPDIR: /home/ashok/apache-tomcat-7.0.56/tempUsing JRE_HOME: /usr/java/jdk1.7.0_05/bin/javaUsing CLASSPATH: /home/ashok/apache-tomcat-7.0.56/bin/bootstrap.jar:/home/ashok/apache-tomcat-7.0.56/bin/tomcat-juli.jar/home/ashok/apache-tomcat-7.0.56/bin/catalina.sh: line 319: /usr/java/jdk1.7.0_05/bin/java/bin/java: No such file or directory/home/ashok/apache-tomcat-7.0.56/bin/catalina.sh: line 319: exec: /usr/java/jdk1.7.0_05/bin/java/bin/java: cannot execute: No such file or directory
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/263778", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156874/" ] }
263,801
I tried to update my OS Debian jessie using the terminal and i get an error : β€œE: The method driver /usr/lib/apt/methods/https could not be found.” error? My sources.list : deb http://httpredir.debian.org/debian/ jessie maindeb-src http://httpredir.debian.org/debian/ jessie maindeb http://security.debian.org/ jessie/updates maindeb-src http://security.debian.org/ jessie/updates main# jessie-updates, previously known as 'volatile'deb http://httpredir.debian.org/debian/ jessie-updates maindeb-src http://httpredir.debian.org/debian/ jessie-updates maindeb http://ftp.de.debian.org/debian jessie main How to fix apt-get update and aptitude update ?
Sounds like you may have added some https sources. Since there are no https sources in your sources.list , it would be something in /etc/apt/sources.list.d/ . You may also be dealing with a proxy that always redirects to https. You can add support for https apt sources by installing a couple of packages: apt-get install apt-transport-https ca-certificates If your apt-get is too broken to do this, you can download the package directly and install it with dpkg -i . Any additional dependencies of that package can be tracked down and fetched similarly ( dpkg will let you know if anything is missing). If it still doesn't work, you might try editing the source entry to use http instead of https, or just remove it and start over following the source maintainer's instructions.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/263801", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153195/" ] }
263,869
I have trouble understanding a weird behavior: vi seems to add a newline (ASCII: LF, as it is a Unix ( AIX ) system) at the end of the file, when I did NOT specifically type it. I edit the file as such in vi (taking care to not input a newline at the end): # vi foo ## Which I will finish on the char "9" and not input a last newline, then `:wq`123456789123456789123456789123456789~~ ## When I save, the cursor is just above the last "9", and no newline was added. I expect vi to save it "as is", so to have 39 bytes: 10 ASCII characters on each of the first three lines (numbers 1 to 9, followed by a newline (LF on my system)) and only 9 on the last line (characters 1 to 9, no terminating newline/LF). But it appears when I save it it is 40 bytes (instead of 39), and od shows a terminating LF : # wc foo 4 4 40 foo ## I expected 39 here! as I didn't add the last newline# od -a toto0000000 1 2 3 4 5 6 7 8 9 lf 1 2 3 4 5 60000020 7 8 9 lf 1 2 3 4 5 6 7 8 9 lf 1 20000040 3 4 5 6 7 8 9 lf0000050 ## An "lf" terminates the file?? Did vi add it silently? If I create the file with a printf doing exactly what I did inside vi, it works as expected: # ## I create a file with NO newline at the end:# printf "123456789\n123456789\n123456789\n123456789" > foo2# wc foo2 ## This one is as expected: 39 bytes, exactly as I was trying to do above with vi. 3 4 39 foo ## As expected, as I didn't add the last newline ## Note that for wc, there are only three lines! ## (So wc -l doesn't count lines; it counts the [newline] chars... Which is rather odd.)# root@SPU0WMY1:~ ## od -a foo20000000 1 2 3 4 5 6 7 8 9 lf 1 2 3 4 5 60000020 7 8 9 lf 1 2 3 4 5 6 7 8 9 lf 1 20000040 3 4 5 6 7 8 90000047 ## As expected, no added LF. Both files (foo (40 characters) and foo2 (39 characters) appear exactly the same if I re-open them with vi... And if I open foo2 (39 characters, no terminating newline) in vi and just do :wq without editing it whatsoever , it says it writes 40 chars, and the linefeed appears! I can't have access to a more recent vi (I do this on AIX, vi (not Vim ) version 3.10 I think? (no "-version" or other means of knowing it)). # strings /usr/bin/vi | grep -i 'version.*[0-9]'@(#) Version 3.10 Is it normal for vi (and perhaps not in more recent version? Or Vim?) to silently add a newline at the end of a file? (I thought the ~ indicated that the previous line did NOT end with a newline.) -- Edit: some additional updates and a bit of a summary, with a big thanks to the answers below : vi silently add a trailing newline at the moment it writes a file that lacked it (unless file is empty). it only does so at the writing time! (ie, until you :w, you can use :e to verify that the file is still as you openened it... (ie: it still shows "filename" [Last line is not complete] N line, M character). When you save, a newline is silently added, without a specific warning (it does say how many bytes it saves, but this is in most cases not enough to know a newline was added) (thanks to @jiliagre for talking to me about the opening vi message, it helped me to find a way to know when the change really occurs) This (silent correction) is POSIX behavior! (see @barefoot-io answer for references)
POSIX requires this behavior, so it's not in any way unusual. From the POSIX vi manual : INPUT FILES See the INPUT FILES section of the ex command for a description of the input files supported by the vi command. Following the trail to the POSIX ex manual : INPUT FILES Input files shall be text files or files that would be text files except for an incomplete last line that is not longer than {LINE_MAX}-1 bytes in length and contains no NUL characters. By default, any incomplete last line shall be treated as if it had a trailing <newline>. The editing of other forms of files may optionally be allowed by ex implementations. The OUTPUT FILES section of the vi manual also redirects to ex: OUTPUT FILES The output from ex shall be text files. A pair of POSIX definitions: 3.397 Text File A file that contains characters organized into zero or more lines. The lines do not contain NUL characters and none can exceed {LINE_MAX} bytes in length, including the <newline> character. Although POSIX.1-2008 does not distinguish between text files and binary files (see the ISO C standard), many utilities only produce predictable or meaningful output when operating on text files. The standard utilities that have such restrictions always specify "text files" in their STDIN or INPUT FILES sections. 3.206 Line A sequence of zero or more non- <newline> characters plus a terminating <newline> character. These definitions in the context of these manual page excerpts mean that while a conformant ex/vi implementation must accept a malformed text file if that file's only deformity is an absent final newline, when writing that file's buffer the result must be a valid text file. While this post has referenced the 2013 edition of the POSIX standard, the relevant stipulations also appear in the much older 1997 edition . Lastly, if you find ex's newline appension unwelcome, you will feel profoundly violated by Seventh Edition UNIX's (1979) intolerant ed. From the manual : When reading a file, ed discards ASCII NUL characters and all characters after the last newline. It refuses to read files containing non-ASCII characters.
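For anyone who actually needs the file without the final newline (neither trick applies to the old AIX vi from the question, so treat this as a side note): Vim can be told not to append it, and with GNU coreutils the extra byte can be stripped after saving.
:set binary noeol   " inside Vim, before :w — write without the trailing newline
truncate -s -1 foo  # GNU coreutils, after the fact: drop the file's last byte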
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/263869", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27616/" ] }
263,883
I'm trying to search for files using find , and put those files into a Bash array so that I can do other operations on them (e.g. ls or grep them). But I can't figure out why readarray isn't reading the find output as it's piped into it. Say I have two files in the current directory, file1.txt and file2.txt . So the find output is as follows: $ find . -name "file*"./file1.txt./file2.txt So I want to pipe that into an array whose two elements are the strings "./file1.txt" and "./file2.txt" (without quotes, obviously). I've tried this, among a few other things: $ declare -a FILES$ find . -name "file*" | readarray FILES$ echo "${FILES[@]}"; echo "${#FILES[@]}"0 As you can see from the echo output, my array is empty. So what exactly am I doing wrong here? Why is readarray not reading find 's output as its standard input and putting those strings into the array?
When using a pipeline, bash runs the commands in subshells¹. Therefore, the array is populated, but in a subshell, so the parent shell has no access to it. You also likely want the -t option so as not to store the line delimiters in the array members, as they are not part of the file names. Use process substitution: readarray -t FILES < <(find .) Note that it doesn't work for files with newlines in their paths. Unless you can guarantee it won't be the case, you'd want to use NUL delimited records instead of newline delimited ones: readarray -td '' FILES < <(find . -print0) (the -d option was added in bash 4.4) ¹ except for the last pipe component when using the lastpipe option, but that's only for non-interactive invocations of bash .
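Once the array is populated, remember to quote the expansions when you use it; a minimal example with the FILES name from the question: for f in "${FILES[@]}"; do printf '%s\n' "$f"; done prints each path on its own line, and "${#FILES[@]}" gives the number of elements.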
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/263883", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153578/" ] }
263,904
I have a large file that's a couple hundred lines long. This file is partitioned into many parts by a specific identifier, let's say 'ABC'. This line 'ABC' appears 6 times, so I want 6 output files. I'm familiar with split and awk but can't seem to create a command line that will do what I've described. Any ideas? Here's an example ABCline 1line 2line 3ABCline 1line 2ABCline1 In this example I'd like three files, where ABC is the first line of each new file and each file ends before the next ABC is encountered.
Using csplit csplit -z somefile /ABC/ '{*}' The output files will be xx00 , xx01 , ... by default but you can change the format and numbering if desired - see man csplit
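If you want friendlier names than xx00, xx01, ..., GNU csplit also accepts a prefix and a suffix format (the names here are just an illustration): csplit -z -f part_ -b '%02d.txt' somefile /ABC/ '{*}' would produce part_00.txt, part_01.txt, and so on.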
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/263904", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88039/" ] }
264,004
I have run into some issues when it came down to installing Steam on Debian. Both the console and Apper come up with missing dependencies, and the package won't install.
Better yet, just install the Steam package provided in the non-free repository hosted on Debian's infrastructure: add i386 sudo dpkg --add-architecture i386 edit /etc/apt/sources.list to enable contrib and non-free ; the jessie line should look something like (the URL will be different) deb http://ftp.fr.debian.org/debian jessie main contrib non-free (replace with stretch for Debian 9, or buster for Debian 10) update apt 's caches sudo apt-get update install Steam sudo apt-get install steam:i386 install the appropriate 3D libraries ( libgl1-mesa-glx:i386 for Mesa, libgl1-fglrx-glx:i386 for fglrx on AMD GPUs, or libgl1-nvidia-glx:i386 for the NVIDIA binary driver; note that fglrx is no longer available in Debian 9 and later): sudo apt-get install libgl1-mesa-glx:i386 Steam will update itself as necessary.
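If apt still complains about dependencies after these steps, it is worth confirming that the i386 architecture really was added before the apt-get update: dpkg --print-foreign-architectures should list i386.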
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/264004", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157052/" ] }
264,056
When re-partitioning a USB flash drive on CentOS 6.x, I got the following error. Disk /dev/sdb: 31.5 GB, 31466323968 bytes255 heads, 63 sectors/track, 3825 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x0e693bd9 Device Boot Start End Blocks Id System/dev/sdb1 * 1 3826 30727808 c W95 FAT32 (LBA)[root@csc ~]# fdisk /dev/sdbWARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u').Command (m for help): dSelected partition 1Command (m for help): 11: unknown commandCommand action a toggle a bootable flag b edit bsd disklabel c toggle the dos compatibility flag d delete a partition l list known partition types m print this menu n add a new partition o create a new empty DOS partition table p print the partition table q quit without saving changes s create a new empty Sun disklabel t change a partition's system id u change display/entry units v verify the partition table w write table to disk and exit x extra functionality (experts only)Command (m for help): d No partition is defined yet!Command (m for help): nCommand action e extended p primary partition (1-4)pPartition number (1-4): 1First cylinder (1-3825, default 1): Using default value 1Last cylinder, +cylinders or +size{K,M,G} (1-3825, default 3825): Using default value 3825Command (m for help): Command (m for help): Command (m for help): tSelected partition 1Hex code (type L to list codes): 86Changed system type of partition 1 to 86 (NTFS volume set)Command (m for help): wThe partition table has been altered!Calling ioctl() to re-read partition table.WARNING: Re-reading the partition table failed with error 16: Device or resource busy.The kernel still uses the old table. The new table will be used atthe next reboot or after you run partprobe(8) or kpartx(8)Syncing disks.
Looks like this device is mounted. Run umount /dev/sdb1 and try again.
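If umount reports that the device is busy, something is still using it (for example a shell whose current directory is inside the mount point); lsof /dev/sdb1 or fuser -vm /dev/sdb1 should show which process. And, as the fdisk output itself suggests, running partprobe /dev/sdb after writing the new table makes the kernel re-read it without a reboot.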
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/264056", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26379/" ] }
264,062
Is there a way for the root user to schedule a task with cron so that it is not visible using the crontab command (i.e. crontab -l), either for the root user or for normal users?
If you want to schedule a task using cron , an alternative to crontab in many distributions is to add a file to /etc/cron.d , in the traditional system crontab format (the variant which specifies the user). Tasks defined in this way do not show up in crontab -l 's output. For example, on Debian, amavisd-new 's Spamassassin maintenance is scheduled by /etc/cron.d/amavisd-new , which contains ## SpamAssassin maintenance for amavisd-new## m h dom mon dow user command18 */3 * * * amavis test -e /usr/sbin/amavisd-new-cronjob && /usr/sbin/amavisd-new-cronjob sa-sync24 1 * * * amavis test -e /usr/sbin/amavisd-new-cronjob && /usr/sbin/amavisd-new-cronjob sa-clean
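As a minimal sketch (the job name and script path are hypothetical): a file /etc/cron.d/mytask containing the single line 30 2 * * * root /usr/local/bin/nightly-task would run that script as root at 02:30 every day, and it would not appear in crontab -l for any user. Note that, at least on Debian-based systems, file names in /etc/cron.d may only contain letters, digits, underscores and hyphens; a name containing a dot is silently ignored.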
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/264062", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156920/" ] }
264,092
We created 6 logical volumes in a volume group. The configuration looks like this: two disks, each 250 GB. On the 1st disk we use 21 GB for other purposes (not LVM-partitioned). The remaining 229 GB on the 1st disk and 250 GB on the 2nd disk participate in LVM partitioning. That remaining 229 GB (disk 1) + 250 GB (disk 2) is configured as a single PV. The whole PV is configured as a single VG. Within the VG we split it into 6 LVs. Of the six, 2 LVs are used as raw partitions (no filesystem). We write some data (cache data) into the 2 raw LVs. In one scenario we do vgremove (which removes all LVs and the volume group from the physical volume) and later recreate the PV, the VG and all 6 LVs. We find that the data in one of the raw logical volumes still exists; it seems the data was not wiped. Question: will vgremove (which removes all LVs and the volume group from the physical volume) wipe out data that is in a raw partition? How does the data persist?
If you want to schedule a task using cron , an alternative to crontab in many distributions is to add a file to /etc/cron.d , in the traditional system crontab format (the variant which specifies the user). Tasks defined in this way do not show up in crontab -l 's output. For example, on Debian, amavisd-new 's Spamassassin maintenance is scheduled by /etc/cron.d/amavisd-new , which contains ## SpamAssassin maintenance for amavisd-new## m h dom mon dow user command18 */3 * * * amavis test -e /usr/sbin/amavisd-new-cronjob && /usr/sbin/amavisd-new-cronjob sa-sync24 1 * * * amavis test -e /usr/sbin/amavisd-new-cronjob && /usr/sbin/amavisd-new-cronjob sa-clean
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/264092", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157110/" ] }
264,102
I have installed bash completion using yum install --enablerepo=epel bash-completion . While it works for some basic commands (git & yum), I am missing a large part of the completers. My /etc/bash_completion.d contains the following: bash_completion.d]$ ls git iprutils redefine_filedir yum yummain.py yum-utils.bash However, I know there is bash_completion for e.g. make (which is installed) and a lot more, compare e.g. to the sample output here . How can I get the missing completer scripts? (Preferably with yum, so I do not have to update them manually) If it matters: tab completion works, but I am not sourcing anything in my .bashrc. It just started working after installing the package. UPDATE: After checking the version of bash-completion I have installed, as @fduff suggested, I saw the following: $ yum list installed | grep completion bash-completion.noarch 1:2.1-6.el7 @base However, trying to uninstall it and forcing CentOS to install bash_completion from the epel repository with sudo yum install --enablerepo=epel bash-completion --disablerepo=base yielded package not found . Further checking showed that the new package which is now in @base puts the completion files into /usr/share/bash-completion/completions , however I am still missing some, e.g. ssh and sudo (kind of sucks that sudo command [tab] does not complete while command [tab] does), and furthermore I still can't find the bit for make (which should list the targets that are in Makefile ) UPDATE2: The changelog states: Fri Nov 01 2013 Petr Stodulka - 2.1-6 Install only available completions (#810343 - comment 15) without "tar" and remove the other. Fri Sep 13 2013 Roman Rakus - 2.1-5 Added one more missing conditional Resolves: #1007839 Fri Sep 13 2013 Roman Rakus - 2.1-4 Added conditionals to not add completions for some commands; the packages has their own completions Resolves: #1007839 Thus reinstalling sudo after I had bash_completion installed made the sudo completion work; however, I had no such luck with make. QUESTION : How to enable make bash completion in CentOS 7?
You might want to try bash-completion-extras . It was briefly only in epel-testing, but has been released into epel. Right now, you should be able to run: yum --enablerepo=epel install bash-completion-extras ...to get bash-completion-extras.
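After installing, a quick way to check in a fresh shell whether a completion for make actually got registered: complete -p make prints the completion spec if one is loaded, and an error otherwise.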
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/264102", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24024/" ] }
264,117
I wanted to write a little bash function such that I can tell bash, import os or from sys import stdout and it will spawn a new Python interpreter with the module imported. The latter from function looks like this: from () { echo "from $@" | xxd python3 -i -c "from $@"} If I call this: $ from sys import stdout00000000: 6672 6f6d 2073 7973 2069 6d70 6f72 7420 from sys import 00000010: 7374 646f 7574 0a stdout. File "<string>", line 1 from sys ^SyntaxError: invalid syntax>>> The bytes in from sys are 66 72 6f 6d 20 73 79 73 20f r o m s y s There's no EOF in there, yet the Python interpreter is behaving as if it read EOF. There is a newline at the end of the stream, which is to be expected. from 's sister, which imports a whole Python module, looks like this; it works around the problem by sanitising and processing the string, and by failing on non-existent modules: import () { ARGS=$@ ARGS=$(python3 -c "import re;print(', '.join(re.findall(r'([\w]+)[\s|,]*', '$ARGS')))") echo -ne '\0x04' | python3 -i python3 -c "import $ARGS" &> /dev/null if [ $? != 0 ]; then echo "sorry, junk module in list" else echo "imported $ARGS" python3 -i -c "import $ARGS" fi} That solves the problem of an unexplained EOF in the stream, but I would like to understand why Python thinks there is an EOF.
The table in this Stack Overflow answer (which got it from the Bash Hackers Wiki ) explains how the different Bash variables are expanded. You're doing python -i -c "from $@" , which turns into python -i -c "from sys" "import" "stdout" , and -c only takes a single argument, so it's running the command from sys . You want to use $* , which will expand into python -i -c "from sys import stdout" (assuming $IFS is unset or starts with a space).
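You can see the difference with a small experiment using the same words as in the question: set -- sys import stdout; printf '[%s]\n' "from $@" prints three separate arguments, [from sys], [import] and [stdout], whereas printf '[%s]\n' "from $*" prints the single argument [from sys import stdout].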
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/264117", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136107/" ] }
264,169
This post is basically a follow-up to an earlier question of mine. From the answer to that question I realized that not only I don't quite understand the whole concept of a "subshell", but more generally, I don't understand the relationship between fork -ing and children processes. I used to think that when process X executes a fork , a new process Y is created whose parent is X , but according to the answer to that question, [a] subshell is not a completely new process, but a fork of the existing process. The implication here is that a "fork" is not (or does not result in) "a completely new process." I'm now very confused, too confused, in fact, to formulate a coherent question to directly dispel my confusion. I can however formulate a question that may lead to enlightenment indirectly. Since, according to zshall(1) , $ZDOTDIR/.zshenv gets sourced whenever a new instance of zsh starts, then any command in $ZDOTDIR/.zshenv that results in the creation of a "a completely new [zsh] process" would result in an infinite regress. On the other hand, including either of the following lines in a $ZDOTDIR/.zshenv file does not result in an infinite regress: echo $(date; printenv; echo $$) > /dev/null #1(date; printenv; echo $$) #2 The only way I found to induce an infinite regress by the mechanism described above was to include a line like the following 1 in the $ZDOTDIR/.zshenv file: $SHELL -c 'date; printenv; echo $$' #3 My questions are: what difference between the commands marked #1 , #2 above and the one marked #3 accounts from this difference in behavior? if the shells that get created in #1 and #2 are called "subshells", what are those like the one generated by #3 called? is it possible to rationalize (and maybe generalize) the empirical/anecdotal findings described above in terms of the "theory" (for lack of a better word) of Unix processes? The motivation for the last question is to be able to determine ahead of time (i.e. without resorting to experimentation) what commands would lead to an infinite regress if they were included in $ZDOTDIR/.zshenv ? 1 The particular sequence of commands date; printenv; echo $$ that I used in the various examples above is not too important. They happen to be commands whose output was potentially helpful towards interpreting the results of my "experiments". (I did, however, want these sequences to consist of more than one command, for the reason explained here .)
Since, according to zshall(1), $ZDOTDIR/.zshenv gets sourced whenever a new instance of zsh starts If you focus on the word "starts" here you'll have a better time of things. The effect of fork() is to create another process that begins from exactly where the current process already is . It's cloning an existing process, with the only difference being the return value of fork . The documentation is using "starts" to mean entering the program from the beginning. Your example #3 runs $SHELL -c 'date; printenv; echo $$' , starting an entirely new process from the beginning. It will go through the ordinary startup behaviour. You can illustrate that by, for example, swapping in another shell: run bash -c ' ... ' instead of zsh -c ' ... ' . There's nothing special about using $SHELL here. Examples #1 and #2 run subshells. The shell fork s itself and executes your commands inside that child process, then carries on with its own execution when the child is done. The answer to your question #1 is the above: example 3 runs an entirely new shell from the start, while the other two run subshells. The startup behaviour includes loading .zshenv . The reason they call this behaviour out specifically, which is probably what leads to your confusion, is that this file (unlike some others) loads in both interactive and non-interactive shells. To your question #2: if the shells that get created in #1 and #2 are called "subshells", what are those like the one generated by #3 called? If you want a name you could call it a "child shell", but really it's nothing. It's no different than any other process you start from the shell, be it the same shell, a different shell, or cat . To your question #3: is it possible to rationalize (and maybe generalize) the empirical/anecdotal findings described above in terms of the "theory" (for lack of a better word) of Unix processes? fork makes a new process, with a new PID, that starts running in parallel from exactly where this one left off. exec replaces the currently-executing code with a new program loaded from somewhere, running from the beginning. When you spawn a new program, you first fork yourself and then exec that program in the child. That is the fundamental theory of processes that applies everywhere, inside and outside of shells. Subshells are fork s, and every non-builtin command you run leads to both a fork and an exec . Note that $$ expands to the PID of the parent shell in any POSIX-compatible shell , so you may not be getting the output you expect regardless. Note also that zsh aggressively optimises subshell execution anyway, and commonly exec s the last command, or doesn't spawn the subshell at all if all the commands are safe without it. One useful command for testing your intuitions is: strace -e trace=process -f $SHELL -c ' ... ' That will print to standard error all process-related events (and no others) for the command ... you run in a new shell. You can see what does and does not run in a new process, and where exec s occur. Another possibly-useful command is pstree -h , which will print out and highlight the tree of parent processes of the current process. You can see how many layers deep you are in the output.
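A quick way to see this from zsh itself: print $$; (print $$) prints the same PID twice, because the subshell is a fork and $$ keeps reporting the original shell, while zsh -c 'print $$' prints a new PID, because that is a freshly started zsh, which is also why it reads .zshenv again.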
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/264169", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10618/" ] }
264,182
51 0 0 5 0 0.0 0.0 0 1 2 3 4 5 6 7 8 1, 2 0.998 0.567 3, 2 Rs12345 0.7 0.2 3, 2 Rs31256 0.56 0.311 3, 2 Rs25691 0 0 012.1313010310 0.1213212 0.21213313210.0121654564 0.254564564 0.25678646 0.02154 0.2485674354 0.2434 The resulting output should look like this 3, 2 Rs12345 0.7 0.2 3, 2 Rs31256 0.56 0.311 3, 2 Rs25691 I used sed to try to achieve the desired result: sed -i -e '1,5d;/0 0/,$d' filename This did not work. I am dealing with multiple files like this that differ in the number of lines. Therefore, I will have to somehow get rid of the "0 0" and every line after that (at the very end of the data).
Since, according to zshall(1), $ZDOTDIR/.zshenv gets sourced whenever a new instance of zsh starts If you focus on the word "starts" here you'll have a better time of things. The effect of fork() is to create another process that begins from exactly where the current process already is . It's cloning an existing process, with the only difference being the return value of fork . The documentation is using "starts" to mean entering the program from the beginning. Your example #3 runs $SHELL -c 'date; printenv; echo $$' , starting an entirely new process from the beginning. It will go through the ordinary startup behaviour. You can illustrate that by, for example, swapping in another shell: run bash -c ' ... ' instead of zsh -c ' ... ' . There's nothing special about using $SHELL here. Examples #1 and #2 run subshells. The shell fork s itself and executes your commands inside that child process, then carries on with its own execution when the child is done. The answer to your question #1 is the above: example 3 runs an entirely new shell from the start, while the other two run subshells. The startup behaviour includes loading .zshenv . The reason they call this behaviour out specifically, which is probably what leads to your confusion, is that this file (unlike some others) loads in both interactive and non-interactive shells. To your question #2: if the shells that get created in #1 and #2 are called "subshells", what are those like the one generated by #3 called? If you want a name you could call it a "child shell", but really it's nothing. It's no different than any other process you start from the shell, be it the same shell, a different shell, or cat . To your question #3: is it possible to rationalize (and maybe generalize) the empirical/anecdotal findings described above in terms of the "theory" (for lack of a better word) of Unix processes? fork makes a new process, with a new PID, that starts running in parallel from exactly where this one left off. exec replaces the currently-executing code with a new program loaded from somewhere, running from the beginning. When you spawn a new program, you first fork yourself and then exec that program in the child. That is the fundamental theory of processes that applies everywhere, inside and outside of shells. Subshells are fork s, and every non-builtin command you run leads to both a fork and an exec . Note that $$ expands to the PID of the parent shell in any POSIX-compatible shell , so you may not be getting the output you expect regardless. Note also that zsh aggressively optimises subshell execution anyway, and commonly exec s the last command, or doesn't spawn the subshell at all if all the commands are safe without it. One useful command for testing your intuitions is: strace -e trace=process -f $SHELL -c ' ... ' That will print to standard error all process-related events (and no others) for the command ... you run in a new shell. You can see what does and does not run in a new process, and where exec s occur. Another possibly-useful command is pstree -h , which will print out and highlight the tree of parent processes of the current process. You can see how many layers deep you are in the output.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/264182", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157185/" ] }
264,202
I have a problem with a simple read. I read a list of XML items and then I work with them. At some point I need to ask whether I'm sure and capture the response in a variable. My problem is that if I ask inside the "while read linea" loop, the "read -p ..." is ignored and I can't answer the question. xml2 < list | egrep "item" | egrep "url|pubDate|title" | while read linea; do case 1 in $(($x<= 1))) ... ;; $(($x<= 2))) ... ;; $(($x<= 3))) .... if [ $DIFERENCIA -lt $num_dias ]; then ... read -p “Are you sure: ” sure ... fi ... ;; *) let x=1 ;; esac done Thanks
Use this one instead: read -p "Are you sure: " sure </dev/tty Inside the while read linea loop, standard input is the pipe from xml2, so a plain read would silently consume the next line from that pipe instead of waiting for your answer; redirecting from /dev/tty makes read take its input from the terminal. Also note that the quotes around the prompt should be ASCII 0x22 ("), not the Unicode characters U+201C (“) and U+201D (”) used in your script.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/264202", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156117/" ] }
264,237
I would like to execute a script as a normal user, and within it execute a command that shuts off Apache (which needs a root password). I was wondering if it is possible to run the script with sudo but have it execute some of the commands as a specific user and a specific command as root. How can I achieve this?
sudo -u <username> <command>
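As a minimal sketch inside such a script (the user, paths and service name are just placeholders, and the invoking account is assumed to have the necessary sudo rights): sudo -u deploy rsync -a /srv/app/ /srv/app-backup/ runs one step as the user deploy, while sudo service httpd stop runs the Apache shutdown as root.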
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/264237", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91570/" ] }
264,329
I have used pstree to find the name of the parent terminal emulator of a running shell script, using something similar to the following: pstree -s $PPID | awk -F '---' '{print $6}' This works on my current system. I tested it in mate-terminal and xterm, but I am not sure whether this will work on other Linux systems/platforms and other terminals. Is there a better/tidier (more portable) way of achieving this?
ps -o comm= -p "$(($(ps -o ppid= -p "$(($(ps -o sid= -p "$$")))")))" May give you good results. It gives the name of the process that is the parent of the session leader. For processes started within a terminal emulator, that would generally be the process running that terminal emulator (unless things like screen , expect , tmux ... are being used (though note that screen and tmux are terminal emulators), or new sessions are started explicitly with setsid , start-stop-daemon ...) Or breaking it down into individual steps using variables (which can also help make the script more self explanatory): sid=$(ps -o sid= -p "$$")sid_as_integer=$((sid)) # strips blanks if anysession_leader_parent=$(ps -o ppid= -p "$sid_as_integer")session_leader_parent_as_integer=$((session_leader_parent))emulator=$(ps -o comm= -p "$session_leader_parent_as_integer") The stripping of whitespace around numbers here is done using $((...)) arithmetic expansion. You could also doing it using the split+glob operator (assuming an unmodified $IFS ) or as suggested by @ack in comments using xargs : ps -o sid= -p "$$" | xargs ps -o ppid= -p | xargs ps -o comm= -p You could also try parsing wtmp where terminal emulators usually log an entry with their pid associated with the pseudo-terminal device. This works for me on a Debian system provided expect/screen/tmux... are not involved: ps -o comm= -p "$( dump-utmp -r /var/log/wtmp | awk -v tty="$(ps -o tty= -p "$$")" -F ' *\\| *' ' $2 == tty {print $5;exit}')" (using dump-utmp from GNU acct ).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/264329", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28650/" ] }
264,344
I have a large file which has special characters in it. There is a multi-line block of text in there that I want to replace with sed . This: text = "\ ------ ------\n\n\ This message was automatically generated by email software\n\ The delivery of your message has not been affected.\n\n\ ------ ------\n\n" Needs to turn into this: text = "" I tried the following code, but no luck: sed -i '/ text = "*/ {N; s/ text = .*affected.\./ text = ""/g}' /etc/exim.conf It does not replace anything and does not display any error messages. I have been playing with it, but nothing I try works.
Perl to the rescue: perl -i~ -0777 -pe 's/text = "[^"]+"/text = ""/g' input-file -i~ will edit the file "in place", leaving a backup copy -0777 reads the whole file at once, not line by line The substitution s/// works similarly as in sed (i.e. it matches text = " followed by anything but double quotes many times up to a double quote), but in this case, it works on the whole file.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/264344", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54880/" ] }
264,355
We have 2 Apache web servers which are replicated with the rsync command; when the primary host is down, we manually change the IP and bring the secondary up. Now we are trying to find a way to build an automatic switchover/failover environment. Firstly, when I say failover, I mean that when the primary website is down, the secondary website should start and act as the primary. Switchover means that when we do a manual switch (testing the environment), the website code should be synced. For switching, an IP failover is a must; on each server I have 2 NICs available for failover purposes. How do I sync the code after a switchover or failover, and vice versa? For example: if my primary is working fine and I have deployed some new code on the live (primary) server, then if I have sync in place it will push the changes to the secondary. But after switching the secondary to live and updating it with new code, how do I sync back? Do I need to create a cron job on both servers, or is there a simpler way to do the replication? CentOS 6.7, httpd-2.2.25-1.el6.x86_64. Is there any solution for doing this?
Perl to the rescue: perl -i~ -0777 -pe 's/text = "[^"]+"/text = ""/g' input-file -i~ will edit the file "in place", leaving a backup copy -0777 reads the whole file at once, not line by line The substitution s/// works similarly as in sed (i.e. it matches text = " followed by anything but double quotes many times up to a double quote), but in this case, it works on the whole file.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/264355", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
264,357
I was trying to find out the total size of all files which are owned by a particular user. While doing so, I get different sizes when executing different commands. Which command is correct for finding out the total size of all files owned by that particular user? $ find . -type f -user silviya|ls -lh|head -1 total 68K$ find . -type f -user agalya|wc -c284$ find . -type f -user agalya|du -sk120 . What is the reason for this variation?
In: find . -type f -user silviya|ls -lh|head -1 you're piping the output of find to ls , but ls doesn't read its input. It takes the list of files to list as arguments. In the absence of arguments like here, it lists the non-hidden files in the current directory. So here, you get the disk usage of all the non-hidden files (of any type) in the current directory (with the size of a given file counted for each of its hard links). In: find . -type f -user agalya|wc -c You're counting the number of bytes in the output of find , so that's the size of the file paths (and newline delimiters), not their disk usage nor file size. In: find . -type f -user agalya|du -sk Like ls , du takes the file list as arguments, not from its input. So here, you get the disk usage of all the files and directories in the current directory (recursively). To get the disk usage of all regular files owned by agalya , with GNU utilities, you'd do: find . -type f -user agalya -print0 | du -hc --files0-from=- | tail -n 1 --files0-from tells du (GNU du only) to take the file list from standard input (represented by - here). -c gives the cumulative size (note that hard links of a same file are counted only once). To get the file apparent size as opposed to disk usage, add the --apparent-size option to du (again, GNU specific). Add the -l option (also GNU-specific) to count hard links several times.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/264357", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157293/" ] }
264,393
I have a remote machine running Debian 8 (Jessie) with lightdm installed. I want it to start in no-GUI mode, but I don't want to remove all the X-related stuff, so that I can still run X applications through SSH with the -X parameter. So how do I disable X server autostart without removing it? I tried systemctl stop lightdm ; it stops lightdm, but it runs again after reboot. I also tried systemctl disable lightdm , but it basically does nothing. It renames lightdm's scripts in the /etc/rc*.d directories, but it still starts after reboot, so what am I doing wrong? And I can't just use update-rc.d lightdm stop , because it's deprecated and doesn't work.
The disable didn't work because the Debian /etc/X11/default-display-manager logic is winding up overriding it. In order to make text boot the default under systemd (regardless of which distro, really): systemctl set-default multi-user.target To change back to booting to the GUI, systemctl set-default graphical.target I confirmed those work on my Jessie VM and Slashback confirmed it on Stretch, too. PS: You don't actually need the X server on your machine to run X clients over ssh. The X server is only needed where the display (monitor) is.
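To check which target is currently the default, systemctl get-default prints it; and if you just want to drop the running system to text mode once, without changing the default, systemctl isolate multi-user.target does that.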
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/264393", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/120537/" ] }