415,970
What is the difference between the two commands below? 1. openssl genpkey -algorithm RSA 2. openssl genrsa In the documentation the difference is "Private Key" versus "RSA Private Key". So what is the difference between a "Private Key with algorithm RSA" and an "RSA Private Key"?
The genpkey command can create other types of private keys - DSA, DH, EC and maybe GOST - whereas genrsa , as its name implies, only generates RSA keys. There are equivalent gendh and gendsa commands. However, the OpenSSL documentation states that these gen* commands have been superseded by the generic genpkey command. In the case of your examples, both generate RSA private keys. openssl genrsa -out genrsa.key 2048 and openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out genpkey.key will both generate a 2048-bit RSA key with the exponent set to 65537. Simply cat the resulting files to see that they are both PEM-format private keys, although openssl genrsa encloses its key in BEGIN RSA PRIVATE KEY and END RSA PRIVATE KEY while openssl genpkey omits the RSA . The former is PKCS#1 format, while the latter is PKCS#8 . Running openssl rsa -text -in <filename> against both shows that they are RSA private keys with the same publicExponent . The newer genpkey command has the option to change this using -pkeyopt rsa_keygen_pubexp:value while the genrsa command doesn't have this option.
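If you want to see the header difference for yourself, the two formats can be converted into each other (a quick sketch, assuming the genrsa.key and genpkey.key files from above exist): $ openssl pkey -in genrsa.key -out genrsa-pkcs8.key rewrites the PKCS#1 key as an unencrypted PKCS#8 "BEGIN PRIVATE KEY" file, and $ openssl rsa -in genpkey.key -out genpkey-pkcs1.key goes the other way, producing a traditional "BEGIN RSA PRIVATE KEY" file. The key material itself is unchanged either way.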
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/415970", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269727/" ] }
415,990
Say I have a script doing: some-command "$var1" "$var2" ... And, in the event that var1 is empty, I'd rather that it be replaced with nothing instead of the empty string, so that the command executed is: some-command "$var2" ... and not: some-command '' "$var2" ... Is there a simpler way than testing the variable and conditionally including it? if [ -n "$1" ]; then some-command "$var1" "$var2" ... # or some variant using arrays to build the command, e.g. args+=("$var1") else some-command "$var2" ... fi Is there a parameter substitution that can expand to nothing in bash, zsh, or the like? I might still want to use globbing in the rest of the arguments, so disabling that and unquoting the variable is not an option.
POSIX-compliant shells and Bash have ${parameter:+word} : If parameter is unset or null, null shall be substituted; otherwise, the expansion of word (or an empty string if word is omitted) shall be substituted. So you can just do: ${var1:+"$var1"} and have var1 be checked, and "$var1" be used if it's set and non-empty (with the ordinary double-quoting rules). Otherwise it expands to nothing. Note that only the inner part is quoted here, not the whole thing. The same also works in zsh. You have to repeat the variable, so it's not ideal, but it works out exactly as you wanted. If you want a set-but-empty variable to expand to an empty argument, use ${var1+"$var1"} instead.
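Applied to the question's example, the whole call becomes (a minimal sketch): some-command ${var1:+"$var1"} "$var2" ; if var1 is empty or unset the first argument simply disappears, otherwise it is passed as a single, properly quoted argument.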
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/415990", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70524/" ] }
416,011
Is there a simple way to separate a very large number into groups of thousands with printf, awk, or sed? So 10000000000000 becomes 10 000 000 000 000. Thanks
With some printf implementations (including GNU printf and the printf builtin of ksh93 , zsh , bash and lksh (but not dash nor yash ) on GNU systems) and assuming your system has a French (of France or Canada at least), or Swedish or Slovenian or Macedonian or Kyrgyz locale (and a few more, that is, those that have space as the thousands separator): $ LC_ALL=fr_FR locale -k thousands_sep prints thousands_sep=" " , and $ LC_ALL=fr_FR printf "%'d\n" 10000000000 prints 10 000 000 000 . It also works with some awk implementations: $ LC_ALL=fr_FR awk 'BEGIN{printf "%'\''d\n", 1e10}' prints 10 000 000 000 . You can use LC_NUMERIC instead of LC_ALL if you know LC_ALL is otherwise not set.
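If no locale with a space separator is available, a locale-independent sed loop can do the grouping as well (a sketch, using the classic repeat-until-no-change idiom): echo 10000000000000 | sed -e :a -e 's/\(.*[0-9]\)\([0-9]\{3\}\)/\1 \2/;ta' prints 10 000 000 000 000 ; each pass inserts one space before the last ungrouped block of three digits and the ta loop repeats until nothing matches.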
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416011", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269766/" ] }
416,018
How can I recursively replace a string in all folder and file names with a different string? I am running Red Hat 6 and I can find them with: find . -name \*string\* I've managed to do it for strings within files: find . -type f -exec sed -i 's/string1/string2/g' {} + but how could I replace all the file names in a similar way?
Using find and rename : find . -type f -exec rename 's/string1/string2/g' {} +
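Note that this assumes the Perl rename utility; on Red Hat the default /usr/bin/rename is the util-linux version, whose syntax is rename string1 string2 file... rather than a s/// expression. Since the question asks about folders too, one hedged way to cover directories as well is to walk depth-first and rename only the basename of each match, for example find . -depth -name '*string1*' -execdir rename 's/string1/string2/g' {} + (Perl rename again; -execdir makes the substitution apply to ./name only, so already-visited children are not broken by renaming their parent afterwards).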
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416018", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/266156/" ] }
416,047
I use the tput command in my bash script in order to color the text, as in tput setaf 2 . When I run the script from PuTTY or the console everything is OK, but when some external WIN application engine runs the script via SSH we get the following error from tput, repeated several times: tput: No value for $TERM and no -T specified Please advise what needs to be set (an environment variable or something else) in my bash script in order to use the tput command.
When connecting via ssh , environment variables may (or may not) be passed to the remote application. Also a "WIN application engine" could very well not set TERM at all. If TERM is putty (or xterm , for that matter), these two have the same effect: tput setaf 2 and tput -T putty setaf 2 since the control sequences used for setaf are the same. Likewise, if TERM is linux , tput setaf 2 and tput -T linux setaf 2 are the same. The setaf capability is used for setting the foreground (text) to a particular value using ANSI (x3.64) escape sequences. Most of the terminals you are using do that — though some do not recognize any of those escape sequences. Since the application was not mentioned, you will have to experiment to see if the "WIN application engine" recognizes those escape sequences. If it does, it probably uses the same ANSI escapes, so you could just do tput -T xterm setaf 2 (There are other differences between putty, linux and xterm, of course).
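A practical guard in the script itself (a sketch, not from the original answer): fall back to a default terminal type when TERM is missing, and swallow tput errors so the script still runs uncoloured, e.g. : "${TERM:=xterm}"; export TERM followed by green=$(tput setaf 2 2>/dev/null || true) and reset=$(tput sgr0 2>/dev/null || true) ; this assumes the remote side understands ordinary ANSI colour sequences.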
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416047", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
416,068
How do I change the timestamp of a directory and all the sub-folders within that directory to reflect the modification times of the contained files? For example with this directory structure: [Jan 9] root├── [Jan 3] file1├── [Jan 7] file2├── [Jan 6] sub1│   ├── [Jan 2] file3│   └── [Jan 1] file4└── [Jan 4] sub2 └── [Jan 8] file5 Here is a one liner to generate that: mkdir -p root/sub1 root/sub2 && touch -d '2018-01-08' root/sub2/file5 && touch -d '2018-01-04' root/sub2/ && touch -d '2018-01-01' root/sub1/file4 && touch -d '2018-01-02' root/sub1/file3 && touch -d '2018-01-06' root/sub1/ && touch -d '2018-01-07' root/file2 && touch -d '2018-01-03' root/file1 && touch -d '2018-01-09' root/ It can be listed with tree -D I'd like to change the timestamps on the three directories to be: [Jan 8] root├── [Jan 3] file1├── [Jan 7] file2├── [Jan 2] sub1│   ├── [Jan 2] file3│   └── [Jan 1] file4└── [Jan 8] sub2 └── [Jan 8] file5 Note: The current timestamps on the directories are completely ignored and the new time stamps are set only based on the contents. Time stamps bubble up to multiple levels of parent directories. The reason that I'm doing this is for a directory that gets copied with rsync. The directory is checked into git and could get rsynced from any place that has the repository checked out. To ensure that rsync is consistent and idempotent from the various places, I need to ensure that the time stamps and permissions of everything are in a known state. I already have a script that sets the timestamps of files based on when they were committed to git. I also have a script that sets the permissions on all files and directories to a known state. The only portion that I'm struggling with is bubbling time stamps from the files up to parent directories. I would like one line or short script that I can run from the command line to set directory timestamps based on the timestamps of their contents.
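One possible approach, offered only as a hedged sketch (assuming GNU find, stat and touch; not necessarily what the original answerer suggested): process directories depth-first and stamp each one with the newest modification time found among its immediate contents, so the times bubble up level by level: find root -depth -type d -exec sh -c 'for d do newest=$(find "$d" -mindepth 1 -maxdepth 1 -exec stat -c %Y {} + | sort -n | tail -n 1); [ -n "$newest" ] && touch -d "@$newest" "$d"; done' sh {} + ; on the example tree this sets sub1 to Jan 2, sub2 to Jan 8 and root to Jan 8, matching the desired output, and empty directories are left untouched.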
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416068", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31848/" ] }
416,124
I tried using this command to compute number of lines changed between two files: diff -U 0 file1 file2 | grep ^@ | wc -l My problem with this command is that if one file has only one line, and the other file has 100 lines, the output is still just 1. What command would give me the total number of lines changed, including the total extra lines in one file?
Looking for lines starting with @ gives you the number of blocks of changes that diff found. They would often be more than one line. As it happens, there's a tool to count the statistics of a diff: diffstat ( web site , man page ). Count insertions and deletions: $ diff -u test1 test2 | diffstat outputs "test2 | 3 +--" and "1 file changed, 1 insertion(+), 2 deletions(-)". Combine insertions and deletions in the same block into single "modification" operations: $ diff -u test1 test2 | diffstat -m outputs "test2 | 2 -!" and "1 file changed, 1 deletion(-), 1 modification(!)". Also, you could use diffstat -t to get a tabular output of just the numbers of modified lines. The test files: test1 contains the four lines a, b, c, d and test2 contains a, x, d.
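If installing diffstat is not an option, plain diff output can be counted directly (a sketch): diff file1 file2 | grep -c '^[<>]' counts every deleted ( < ) and added ( > ) line, so a one-line file compared against a 100-line file yields a number around 100 rather than 1.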
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416124", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269842/" ] }
416,130
When rsyncing a folder with a million files, a million lines are output to stdout/stderr, containing the file names. (I'm using Travis CI, and this trips it up because their log files can be at most 4 MB.) How can I tell rsync not to "tell me the file names it's processing"? I still want to hear about hiccups/errors, I just don't want a listing of the files it transferred. My command is: sudo rsync -avh --no-specials --exclude="foo/" src/ dst/
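A hedged sketch of one fix (not necessarily the answer from the original thread): the per-file listing comes from the -v in -avh , so dropping it silences the file list while errors still go to stderr: sudo rsync -ah --no-specials --exclude="foo/" src/ dst/ . If you still want a short summary, adding --stats prints totals without listing individual files.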
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416130", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180441/" ] }
416,137
How can I run a script on startup on an Ubuntu Server 17.10 machine? I think the method was changed in 17.10.
Put the script in the appropriate user's cron table (i.e. the crontab ) with a schedule of @reboot . A user can edit their cron table with crontab -e . An example which will run /path/to/script.sh at startup: @reboot /path/to/script.sh If you need to run it as root, don't use @reboot sudo /path/to/script.sh ; use sudo crontab -eu root to edit root's crontab. See also: crontab(1), cron(8), crontab(8)
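If the script's output matters (cron otherwise tries to mail it), a commonly used variant is to redirect it to a log, e.g. @reboot /path/to/script.sh >>/var/log/myscript-boot.log 2>&1 (path and log name are just examples); crontab -l afterwards confirms the entry was saved.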
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/416137", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/268924/" ] }
416,180
I have installed Ubuntu 17.10 on my notebook. However, I cannot connect to wi-fi because there is a "No Wi-Fi Adapter Found" message. I don't have any idea what to do next. My notebook : Asus X555LN-XX507H Network Adapter : Broadcom 802.11n BCM43142 (14e4:4365) (This is a follow-on from my earlier post, https://unix.stackexchange.com/questions/415639/kali-linux-no-wifi-adapter-found , where I was advised to try an easier system than Kali.)
Just connect a USB cable (e.g. to a phone) and enable USB tethering to get a temporary internet connection, open a terminal with Ctrl+Alt+T and type: sudo apt-get install --reinstall bcmwl-kernel-source Then reboot.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/416180", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269490/" ] }
416,185
I'm installing ripgrep on Ubuntu. It doesn't exist in the official repository or in private PPAs, so I'm following the project's instructions to install it as an out-of-tree package: https://github.com/BurntSushi/ripgrep#installation . I managed to learn that these packages should live under /usr/local : /usr/local/bin for binaries, /usr/local/share/doc/<package_name> for documentation, /usr/local/share/man for manual pages. What about bash completion? I understand that this is a little less standardized than those other categories and may be specific to each Bash installation. What is the way to do it in Ubuntu?
From what I could gather after a lot of reading, especially their official documentation , /usr/share/doc/bash-completion/README.Debian , and the sources themselves: /etc/bash_completion.d/ is the legacy dir, where completions are eagerly loaded, i.e., all sourced at once when the main script runs. Only a few packages still install there; most have already migrated to the new standard. /usr/share/bash-completion/completions/ is the directory for completion files that are dynamically loaded on demand by bash-completion . They are only sourced when needed (and, most importantly, if needed), saving a ton of resources by not loading a bazillion completions for commands you'll never use. Regarding this comment of yours about /etc : it would be mixed with apt controlled packages... I was hoping for a separate path Files at /etc are not exactly "apt controlled", at least not in the same way /usr/bin or /usr/share are. [1] They are configuration files , and they are meant to be modified by the local admin (you). Packages create them at install time, usually containing the default values, and then you're supposed to modify them to your needs. Some packages go one step further to ease maintenance and instead of (or alongside) config files they also create an (empty) configuration directory for "drop-in", extra customization files. So instead of modifying a config file to change some settings, a difficult and error-prone task using tools like grep , sed and awk , you just create (or delete) "configuration snippet" files that override the desired settings. Take a look at all /etc/{,*/}*.d/ directories in your system. Does your package want to install a custom repository, like Google Chrome? Don't edit /etc/apt/sources.list , drop a file at /etc/apt/sources.list.d/ . New services? /etc/systemd/system/ . System settings? /etc/sysctl.d/ (btw, see its README). Much, much easier, and no more stepping on each other's toes. Some dirs in /usr/share behave like this too: your .desktop files go to /usr/share/applications/ , and new mimetypes to /usr/share/mime/packages/ , and so on. The difference from /etc is that you, the admin, are not supposed to modify or delete files that you didn't create or install yourself. And that's exactly the case with /usr/share/bash-completion/completions/ . That said, I have good news and bad news for you: the bad news is... if you're trying to install the completion in their github repository , that's not a bash completion, but a zsh completion file! And no, bash and zsh are completely incompatible regarding completions. Don't worry, because the good news is: ripgrep is available in the apt repositories since Ubuntu 18.10! Yay! :D sudo apt install ripgrep And guess what? It contains a bash completion file ! Installed, of course, at /usr/share/bash-completion/completions/rg . Time to rejoice, pal! [1] : To be honest, you could say they are "apt monitored ", not controlled. Read Everything you need to know about conffiles: configuration files managed by dpkg for more details.
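For a tool you install by hand under /usr/local (as in the question), a hedged suggestion that is worth checking against the bash-completion version you actually have: recent bash-completion releases look for on-demand completions under XDG data directories, so ~/.local/share/bash-completion/completions/rg (per user) or /usr/local/share/bash-completion/completions/rg (system-wide, when XDG_DATA_DIRS has its default value of /usr/local/share:/usr/share) are the usual out-of-package locations that don't touch anything the package manager owns.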
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/416185", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18514/" ] }
416,234
I have read the following in this question : bash supports a --posix switch, which makes it more POSIX-compliant. It also tries to mimic POSIX if invoked as sh . The above quote assumes that /bin/sh is a link that points to /bin/bash . But I don't quite understand what is meant by "invoked as sh" . Say that I have the following script, called "script.sh", containing the shebang line #!/bin/bash followed by echo "Hello World" . Please tell me in each of the following cases whether the script will be run in normal bash mode or in POSIX mode (assume that I have executed the following commands in a terminal that is running bash ): 1. sh script.sh 2. bash script.sh 3. ./script.sh Now say that I have another script, called "script2.sh", which is like the above script but without the shebang, containing only echo "Hello World" . Please tell me in each of the following cases whether the script will be run in normal bash mode or in POSIX mode (assume that I have executed the following commands in a terminal that is running bash ): 4. sh script2.sh 5. bash script2.sh 6. ./script2.sh
Only cases 1 & 4 will run in POSIX mode (assuming that sh is bash and not some other implementation of sh). Any case that explicitly calls bash without --posix will not, whether from the shebang or not. Any case that explicitly calls sh will. The shebang is only used when no shell was explicitly started for the script already. Case 6, if your terminal is running bash , will not run in POSIX mode and Bash will invoke it using itself. If your terminal were running zsh instead, case 6 would also run in POSIX mode. POSIX is ambiguous about exactly what should happen in that case , and Bash and zsh made different choices there. Bash invokes the script using itself, while zsh uses sh (whatever that happens to be). Other shells also vary on that point. One simple way to tell what mode you're in is to make your script body: kill -SIGHUP which will fail with an error in POSIX mode , but give usage instructions for kill outside of it. This is an easy distinction and it works through a long range of Bash versions going back as far as you're likely to encounter.
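Another quick check from inside a running bash (a minimal sketch; it obviously only applies when the interpreter really is bash): shopt -qo posix && echo 'POSIX mode' || echo 'default mode' ; putting that line in the test script shows directly which mode each of the six invocations ends up in.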
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416234", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/269904/" ] }
416,277
I find myself doing <command> --help | grep <feature> very often, every day. I was wondering if it is possible to make something like ^^ that expands to "--help | grep" so that I can do this: ls ^^ size and it would execute the following: ls --help | grep size
With zsh , you'd use a global alias: $ alias -g '^^=--help|grep --color -i' after which $ ls ^^ size prints the matching help lines: --block-size=SIZE scale sizes by SIZE before printing them; e.g., '--block-size=M' prints sizes in units of 1,048,576 bytes; see SIZE format below -h, --human-readable with -l and/or -s, print human readable sizes -s, --size print the allocated size of each file, in blocks -S sort by file size, largest first --sort=WORD sort by WORD instead of name: none (-U), size (-S), -T, --tabsize=COLS assume tab stops at each COLS instead of 8 The SIZE argument is an integer and optional unit (example: 10K is 10*1024) With bash , you may be able to use history expansion, which happens early enough in the shell syntax parsing that it can work at substituting a pipe. Prime the history with the text you want to substitute and a special character you're unlikely to use otherwise (like £ here, which happens to be on my keyboard): $ --help $(: £)|grep , which itself fails ( bash: --help: command not found plus grep 's usage message) but is now in the history. Then use history expansion to retrieve it: $ ls !?£? size , which the shell echoes as the expansion ls --help $(: £)|grep size and then prints the same kind of matching help lines as above. Or you could have readline expand --help|grep upon some key or key sequence press. For that to apply to bash only (and not other applications like gdb using readline), you can use the bind bash builtin command, which is bash 's API for configuring readline , for instance in your ~/.bashrc : bind '"^^": "--help|grep "' Or add to your ~/.inputrc (readline's configuration file) the three lines $if Bash , "^^": "--help|grep " , and $endif (there are other shells like rc or es that use readline and where doing that binding could make sense, but AFAICT they do not set the rl_readline_name variable before invoking readline, so you won't be able to add $if statements for them; they would show as other , like all applications that use readline without telling it their application name). Note that you need to enter the second ^ within half a second (by default) after the first one for the substitution to occur.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/416277", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/231067/" ] }
416,401
I was solving a challenge where I found a data file with no file extension. The file command shows that it is a data file (application/octet-stream) . The hd command shows GNP. in the last line, so if I reverse this file I will get a .PNG file. I searched everywhere but I didn't find a solution explaining how to reverse the content of a binary file.
With xxd (shipped with vim ) and tac (from GNU coreutils; tail -r on some systems): < file.gnp xxd -p -c1 | tac | xxd -p -r > file.png Here xxd -p -c1 dumps the file as plain hex, one byte per line, tac reverses the order of those lines, and xxd -p -r converts the hex back into binary, giving you the byte-reversed file.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/416401", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/255251/" ] }
416,453
I'm using tmux and Vim as my development IDE. I have 3 panes open in tmux: one for editing source code, one for debugging and one as a display console. From Vim I would like to run the make command and send all the build information emitted by it to the display console pane. How would I do that?
You can run any shell command from one pane and display its output in another pane with the run-shell command. For example: tmux run-shell -t 2 "echo hello" ...and "hello" will be printed to pane number 2. You can see pane numbers with prefix + q . From vim you should be able to do: :!tmux run-shell -t 2 "make ..." Add -b to run the command in the background. Update: Addressing a couple things that @SLN brought up in this comment ... tmux puts the output pane into copy-mode , the same mode it is in when you do scrolling, so break out of it however you normally do ( Ctrl + C is one way). Note: you'll know you're in this mode if you see something like [12/34] (i.e. page-num/total-pages) in the upper-right corner of the pane. As for Vim requiring you to hit Enter (or Ctrl + L ) after make or other command completes, this is just how Vim works with external commands ( :!cmd ). I'm not aware of any way to avoid this but I believe you can hit Enter before the command finishes and it will return as soon as it's done. (This might be system dependent.) Update 2: I do know a workaround for the second item. If you use a mapping to run the external command you can embed an exit key. Here's an example where I'm just doing ls as my shell command: nnoremap <leader>ls :!ls<CR><C-L> From Normal mode I hit \ls and the ls command will run but then the console output will close and return me to vim right away. Perhaps you can adapt to this whatever your command is.
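Tying this back to the make use case, a hedged sketch of a Vim mapping (assuming pane 2 is the display console and the Makefile lives next to the current file): nnoremap <leader>m :!tmux run-shell -b -t 2 "cd %:p:h && make"<CR><C-L> ; here %:p:h is expanded by Vim to the current file's directory, and -b keeps Vim from blocking while make runs.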
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416453", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270029/" ] }
416,457
Is there a command that installs all the unmet build dependencies that dpkg-checkbuilddeps would list? I tried to sed the output and give it to apt-get install , but it seems very hacky and for some reason didn't work in some environments. sudo apt-get install --yes $(dpkg-checkbuilddeps | sed 's/([^)]*)//g' | sed 's/dpkg-checkbuilddeps:\serror:\sUnmet build dependencies://g') Is there a better way?
I use mk-build-deps from the devscripts package for this (you’ll also need equivs ). mk-build-deps will build a package depending on all the build-dependencies in the debian/control control file; that package can then be installed using apt , which will also install all the missing dependencies. The advantage of this approach is that uninstalling the dependency package, once you’ve finished with it, will also identify any build-dependencies which could also be uninstalled. To reduce manual steps, the following command can be used: mk-build-deps --install --root-cmd sudo --remove The end result of this is that all the build dependencies are installed, but not the newly-generated build-dependency package itself: it’s installed ( --install ), along with all its dependencies, and then removed ( --remove ), but the dependencies are left in place.
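A minimal end-to-end sequence (a sketch, assuming you are in the unpacked source tree containing debian/control ): sudo apt-get install devscripts equivs followed by mk-build-deps --install --root-cmd sudo --remove . Later, removing the generated build-deps package (plus whatever apt then marks as no longer needed) cleans the build dependencies back out.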
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/416457", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126773/" ] }
416,475
I have a directory ~/tmp/foo/ that's populated with subdirectories, files, and symbolic links. $ tree ~/tmp/foo/ shows: tmp/foo/ ├── eggs │ ├── baz │ │ └── link3.txt -> /home/me/file3.txt │ └── link2.txt -> /home/me/file2.txt ├── hello.txt ├── link1.txt -> /home/me/file1.txt └── spam ├── link4.txt -> /home/me/file4.txt └── link5.txt -> /home/me/file5.txt (3 directories, 6 files). I want to recursively copy all the symbolic links under ~/tmp/foo/ as files (as if I'd used cp -rH ) to another (nonempty) directory ~/bar/ . Is there a simple way to do this? I've tried the following: find ~/tmp/foo/ -type l -print | rsync -avzL --files-from=- ~/tmp/foo/ ~/tmp/bar/ But this fails.
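A hedged sketch of one way to do it (not necessarily the original answer): rsync's --files-from expects paths relative to the source directory, while find here emits absolute paths, which is probably why the attempt fails. Generating the list relative to the source should work: cd ~/tmp/foo && find . -type l -print | rsync -avL --files-from=- . ~/tmp/bar/ ; with -L the listed symlinks are copied as the files they point to, and --files-from recreates their relative directory structure under the destination.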
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/416475", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92703/" ] }
416,571
I have the following example script: #!/bin/bash then ARGUMENTS="-executors 1 -description \"The Host\" " Call 1, java -jar swarm-client.jar $ARGUMENTS , fails with the error: parameter Host" is not allowed. Call 2, eval java -jar swarm-client.jar $ARGUMENTS , works fine with eval. In $ARGUMENTS , I have a quoted argument. I do not understand why grouping of arguments by escaped quotes is not working in call 1. I do not understand why eval is necessary to resolve the quoting problem. I think I do not understand the process and the order of command evaluation in the shell. Can you explain it to me?
You don't pass quoted arguments to a command, you pass arguments. When you enter: cmd arg1 arg2 the shell parses that line in its own syntax, where space is a word delimiter, and calls cmd with cmd , arg1 and arg2 as arguments. Note : cmd does not receive any space character in its arguments, the spaces are just operators in the shell language syntax. Like when in C, you write func("foo", "bar") : at run time, func receives two pointer arguments, it does not see any of the ( or , or " or space characters. Also part of the shell syntax is quoting. " is used to be able to have words that contain characters that are otherwise part of the shell syntax. When you do: cmd "arg 1" arg2 cmd receives cmd , arg 1 and arg2 as arguments. It does not see any " character. Those " are used to prevent the space from being treated as a word separator in the shell syntax. Now, when you do: cmd $VAR it's not the same as doing: cmd the content of the variable If it were, you'd have trouble with: VAR='foo; reboot' followed by echo $VAR for instance. In Bourne-like shells, the content of $VAR is not passed verbatim as a single argument to cmd either (unfortunately; it's been fixed in some other shells like rc , es , fish and to a lesser extent zsh ). Instead, it's subject to splitting and globbing ( split+glob ) and the resulting words are passed to cmd . The splitting is done based on the characters in the special $IFS variable, by default space, tab and newline. For your $ARGUMENTS , which contains -executors 1 -description "The Host" , that splits into -executors , 1 , -description , "The and Host" . Since none of those words contain wildcard characters, the glob part doesn't apply, so it's those words that are passed to cmd . Here, you could use the split+glob operator, and use as separator for the splitting part a character that does not appear in those words: ARGUMENTS='-executors|1|-description|The Host'; IFS='|'; cmd $ARGUMENTS Or better, for shells that support them (like bash ), use arrays , where you can have a variable that contains all those arguments. eval is to evaluate shell code. So the other option is to have ARGUMENTS contain shell code (text in the shell syntax as opposed to a list of arguments), and have that passed to eval for interpretation. But remember to quote the variable to avoid the split+glob operator: eval "cmd $ARGUMENTS"
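A minimal bash-array version of the question's example (a sketch): args=(-executors 1 -description "The Host") followed by java -jar swarm-client.jar "${args[@]}" ; each array element becomes exactly one argument, so "The Host" survives as a single parameter without any eval.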
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416571", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50532/" ] }
416,617
When scripting, I usually write my ifs with the following syntax, as it is easier for me to understand that what comes next is not true: if [ ! "$1" = "$2" ]; then Others say that the way below is better: if [ "$1" != "$2" ]; then The thing is, when I ask why and whether there are any differences, no one seems to have an answer. So, are there any differences between the two syntaxes? Is one of them safer than the other? Or is it just a matter of preference/habit?
Beside the cosmetic/preference arguments, one reason could be that there are more implementations where [ ! "$a" = "$b" ] fails in corner cases than with [ "$a" != "$b" ] . Both cases should be safe if implementations follow the POSIX algorithm , but even today (early 2018 as of writing), there are still implementations that fail. For instance, with a='(' b=')' : $ (a='(' b=')'; busybox test "$a" != "$b"; echo "$?") prints 0 , while $ (a='(' b=')'; busybox test ! "$a" = "$b"; echo "$?") prints 1 . With dash versions prior to 0.5.9, like the 0.5.8 found as sh on Ubuntu 16.04 for instance: $ a='(' b=')' dash -c '[ "$a" != "$b" ]; echo "$?"' prints 0 , while $ a='(' b=')' dash -c '[ ! "$a" = "$b" ]; echo "$?"' prints 1 (fixed in 0.5.9, see https://www.mail-archive.com/[email protected]/msg00911.html ). Those implementations treat [ ! "(" = ")" ] as [ ! "(" "text" ")" ] , that is [ ! "text" ] (test whether "text" is the null string), while POSIX mandates it to be [ ! "x" = "y" ] (test "x" and "y" for equality). Those implementations fail because they perform the wrong test in that case. Note that there's yet another form: ! [ "$a" = "$b" ] That one requires a POSIX shell (it won't work with the old Bourne shell). Note that several implementations have had problems with [ "$a" = "$b" ] (and [ "$a" != "$b" ] ) as well, and some still do, like the [ builtin of /bin/sh on Solaris 10 (a Bourne shell, the POSIX shell being in /usr/xpg4/bin/sh ). That's why you see things like: [ "x$a" != "x$b" ] in scripts trying to be portable to old systems.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/416617", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/264448/" ] }
416,633
I am just a little bit confused here. When you are asked to give a user sudo access to the machine, should I just add the user to the wheel group? # usermod -aG wheel bob Or let's say there is no wheel group, or it was deleted for some reason; how can I grant bob sudo access to the machine then? When I ran # which sudo I got the result /usr/bin/sudo , so can I use the following line: bob ALL=/usr/bin/sudo But then I switched to user bob and tried to execute # sudo iptables -L and it gives me this error message: Sorry, user bob is not allowed to execute '/sbin/iptables -L' as root So I am not sure how to give a user sudo access to the machine if the wheel group is not there. And as far as I understand, bob ALL=ALL ALL basically gives bob the same power as root, which is not good, right? Another question I have is how to make all users on the system able to execute the last command. Do I have to create a group and then add all users to this group, or is there another way?
When wheel group membership gives a user full root access through sudo, it is normally configured like this in the /etc/sudoers file: %wheel ALL=(ALL) ALL Meaning: "any members of group wheel on ALL hosts can sudo to ALL user accounts to run ALL commands." So it's exactly the same as your "bad" line: bob ALL=(ALL) ALL If you want to give a user (or a group) full access to a specific other user account and nothing else, you can do it this way: user ALL=(targetuser) ALL or %group ALL=(targetuser) ALL Then, the user(s) can do $ sudo -u targetuser command to quickly execute individual commands as the target user, or $ sudo -iu targetuser to get a shell as the target user, with the exact same environment the target user would get when logging in directly. For historical reasons, some people reflexively use sudo su - targetuser for the second purpose. This would require giving the user(s) in question at least access to run the su - targetuser command as root, and it will be more difficult to piece together from the logs what the user actually did. This command was useful back when sudo did not have the -i option, but I think that option has been there for about 15 years by now.
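To address the question's original attempt more directly, a hedged sketch of sudoers lines (edit them with visudo): bob ALL=(root) /sbin/iptables lets bob run just iptables as root (the failing bob ALL=/usr/bin/sudo only allowed him to run the sudo binary itself, which is not what you want), and a line such as ALL ALL=(root) /sbin/iptables would allow every user on the system to run that one command.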
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416633", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100193/" ] }
416,640
By default systemd drops to an emergency shell at the slightest error. For example, if one of the mounts in fstab fails for some reason, the system becomes unbootable immediately. I manage dozens of diverse production systems and I've found this behavior very damaging. (Actually I think it's a major design failure, but that's a personal opinion.) I'd like to increase the system's boot resilience. Optimally the system should always boot up; missing drivers, mounts, etc. shouldn't drop to an emergency shell (just show a warning instead) unless the given error would render console login absolutely impossible. Whatever can run, should run. I know systemd automatically generates *.mount files from /etc/fstab and I could use the nofail option with a small x-systemd.device-timeout (or define the relevant .mount files myself). However, that wouldn't solve my problem: I want to make the system more resilient, "patching" fstab every time is not very convenient, and I'm not sure how many other possible "problems" exist which would render my system unbootable just because some developer somewhere thought it was important enough. In short, I'd like to regain control over my machine and not let systemd decide which problem is serious enough to crash the boot process. Is it possible?
It is literally only mount failures, that's all you would need to change. So the letter of your request would be trivial to answer. Create a drop-in file at /etc/systemd/system/local-fs.target.d/nofail.conf containing the two lines [Unit] and OnFailure= (an empty assignment, which clears OnFailure by setting it to nothing). I believe this will add no new problem, beyond those that linux sysvinit already suffered by allowing this partial failure scenario. However you also pointed out the question of how long systemd should wait for the specified block devices to become available. I can see no way to configure this, without providing a replacement for the fstab generator as a whole. https://www.freedesktop.org/software/systemd/man/systemd.generator.html If you dump a large amount of less widely-used code here, it seems unlikely to increase system resilience. I think the closest solution would be to patch the existing fstab generator. It's not massively complex, I suspect you could get away with it / keep up with any significant changes. Technically, if your distribution had a self-contained mountall sysvinit script, you could try hooking that in. But that will significantly change the boot process - it's actually more of a fork. I would not recommend that approach. https://unix.stackexchange.com/a/393711/29483 If you search through the unit files, there are only a very few ways for the boot to fall back to emergency.target . It's usually when a .mount unit for a local filesystem fails, causing local-fs.target to fail. Or when your initramfs fails to mount the root filesystem, if your initramfs uses systemd. local-fs.target has OnFailure=emergency.target . And it gets failed because units for local filesystems are automatically added to the Requires list of local-fs.target (unless they have DefaultDependencies=no ). $ systemctl show --property Requires local-fs.target prints something like Requires=-.mount home.mount boot.mount boot-efi.mount
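After creating the drop-in, systemctl daemon-reload makes systemd pick it up, and systemctl show --property OnFailure local-fs.target should then print an empty OnFailure= value (a quick sanity check, assuming a current systemd).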
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/416640", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98961/" ] }
416,656
How can I convert underscore separated words to "camelCase"? Here is what I'm trying: echo "remote_available_packages" | sed -r 's/([a-z]+)_([a-z])([a-z]+)/\1\U\2\L\3/' But it returns remoteAvailable_packages without changing the p in packages .
This does that (in GNU sed, whose \U uppercases the replacement text that follows it): echo "remote_available_packages" | sed -E 's/_(.)/\U\1/g' The g flag is what makes it handle every underscore; the original attempt stopped after "remoteAvailable" because its substitution was only applied once.
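If GNU sed is not available, perl offers the same idea portably (a sketch): echo "remote_available_packages" | perl -pe 's/_(.)/\u$1/g' ; \u uppercases just the captured character.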
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416656", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16792/" ] }
416,663
I have a 4-line input file and I need to modify the file to combine alternate lines. I want to perform the operation in place. Input (one name per line): Tom, Nathan, Jack, Polo. Desired output (two lines): "Tom Jack" and "Nathan Polo". One way is to collect the odd-numbered lines and flip them, cut the even-numbered lines, and combine both files to get the final output, but I am looking for a simpler solution.
Given an INPUT file containing the four lines Tom, Nathan, Jack, Polo, then $ pr -s -T -2 < INPUT prints "Tom Jack" and "Nathan Polo" (paginate with single tab spacing between columns, no headers, two columns); or $ paste -d ' ' - - < INPUT | rs -T prints the same (paste then transpose).
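Since the question asks for an in-place change, a hedged way to finish the job is to write to a temporary file and rename it over the original, e.g. pr -s -T -2 < INPUT > INPUT.tmp && mv INPUT.tmp INPUT (the .tmp name is just an example).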
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416663", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115560/" ] }
416,693
I am trying to read the temperature from my Avalon which has the Avalon Firmware: 20170603 I'm using the command: cgminer-api -o stats It brings me a lot of information that I don´t need. root@OpenWrt:/etc# cgminer-api -o statsSTATUS=S,When=1482343577,Code=70,Msg=CGMiner stats,Description=cgminer 4.10.0|STATS=0,ID=AV70,Elapsed=30789,Calls=0,Wait=0.000000,Max=0.000000,Min=99999999.000000,MM ID1=Ver[7411612-6cf14b0] DNA[01313edbc5efabe3] Elapsed[30792] MW[340560 340538 340538 340538] LW[1362174] MH[180 238 259 216] HW[893] DH[2.749%] Temp[30] TMax[77] Fan[5430] FanR[90%] Vi[1201 1201 1202 1202] Vo[4438 4406 4443 4438] GHSmm[8063.47] WU[108083.46] Freq[715.86] PG[15] Led[0] MW0[1170 1302 1206 1250 1197 1312 1331 1262 1300 1216 1230 1281 1265 1273 1327 1291 1232 1231 1267 1292 1286 1203] MW1[1312 1189 1237 1251 1212 1247 1264 1275 1196 1256 1283 1257 1190 1247 1243 1282 1330 1315 1292 1273 1261 1271] MW2[1213 1262 1310 1202 1285 1220 1291 1267 1309 1307 1164 1212 1290 1289 1308 1174 1230 1276 1252 1189 1192 1242] MW3[1302 1275 1209 1307 1217 1294 1328 1273 1237 1256 1227 1239 1268 1242 1308 1314 1296 1314 1331 1324 1297 1190] TA[88] ECHU[512 0 0 0] ECMM[0] FM[1] CRC[974 0 0 0] PAIRS[0 0 0] PVT_T[4-70/0-76/72 0-69/11-76/70 2-70/0-77/74 20-67/0-75/70],MM ID2=Ver[7411612-6cf14b0] DNA[0132c3d0691693b9] Elapsed[30791] MW[340551 340551 340538 340538] LW[1362178] MH[2067 188 222 215] HW[2692] DH[3.629%] Temp[29] TMax[80] Fan[5490] FanR[90%] Vi[1204 1202 1201 1201] Vo[4461 4447 4420 4443] GHSmm[7887.76] WU[103670.36] Freq[700.26] PG[15] Led[0] MW0[1264 1270 1229 1313 1296 1184 1239 1237 1266 1247 1252 1242 1202 1266 1266 1317 1255 1272 1309 1230 1301 1243] MW1[1155 1159 1213 1196 1214 1154 1152 1213 1180 1180 1152 1193 1118 1122 1159 1173 1185 1193 1180 1161 1170 1175] MW2[1269 1138 1285 1180 1256 1210 1170 1299 1223 1185 1164 1132 1140 1225 1246 1173 1237 1212 1192 1284 1215 1205] MW3[762 1268 1187 1271 1277 1150 1202 1208 1172 1170 1176 1249 1177 1154 1197 1250 1176 1227 1268 1218 1262 1251] TA[88] ECHU[0 512 0 0] ECMM[0] FM[1] CRC[0 0 0 0] PAIRS[0 0 0] PVT_T[0-68/10-80/70 19-67/0-76/70 0-70/11-78/72 19-68/0-77/71],MM Count=2,Smart Speed=1,Connecter=AUC,AUC VER=AUC-20151208,AUC I2C Speed=400000,AUC I2C XDelay=19200,AUC Sensor=15483,AUC Temperature=28.17,Connection Overloaded=false,Voltage Offset=0,Nonce Mask=29,USB Pipe=0,USB Delay=r0 0.000000 w0 0.000000,USB tmo=0 0|STATS=1,ID=POOL0,Elapsed=30789,Calls=0,Wait=0.000000,Max=0.000000,Min=99999999.000000,Pool Calls=0,Pool Attempts=0,Pool Wait=0.000000,Pool Max=0.000000,Pool Min=99999999.000000,Pool Av=0.000000,Work Had Roll Time=false,Work Can Roll=false,Work Had Expire=false,Work Roll Time=0,Work Diff=65536.00000000,Min Diff=1.00000000,Max Diff=131072.00000000,Min Diff Count=12,Max Diff Count=18313,Times Sent=1531,Bytes Sent=228345,Times Recv=2668,Bytes Recv=1379612,Net Bytes Sent=228345,Net Bytes Recv=1379612|STATS=2,ID=POOL1,Elapsed=30789,Calls=0,Wait=0.000000,Max=0.000000,Min=99999999.000000,Pool Calls=0,Pool Attempts=0,Pool Wait=0.000000,Pool Max=0.000000,Pool Min=99999999.000000,Pool Av=0.000000,Work Had Roll Time=false,Work Can Roll=false,Work Had Expire=false,Work Roll Time=0,Work Diff=16384.00000000,Min Diff=4096.00000000,Max Diff=16384.00000000,Min Diff Count=374,Max Diff Count=993,Times Sent=109,Bytes Sent=12038,Times Recv=119,Bytes Recv=12214,Net Bytes Sent=12038,Net Bytes Recv=12214|STATS=3,ID=POOL2,Elapsed=30789,Calls=0,Wait=0.000000,Max=0.000000,Min=99999999.000000,Pool Calls=0,Pool Attempts=0,Pool Wait=0.000000,Pool Max=0.000000,Pool 
Min=99999999.000000,Pool Av=0.000000,Work Had Roll Time=false,Work Can Roll=false,Work Had Expire=false,Work Roll Time=0,Work Diff=0.00000000,Min Diff=0.00000000,Max Diff=0.00000000,Min Diff Count=0,Max Diff Count=0,Times Sent=2,Bytes Sent=151,Times Recv=3,Bytes Recv=244,Net Bytes Sent=151,Net Bytes Recv=244| But I just need this values: Temp[29] TMax[80] Fan[5490] Temp[29] TMax[80] Fan[5490] I tried with this two commands but they didn´t work 1.- cgminer-api stats | grep "^ *\[temp_avg]"2.- cgminer-api stats | grep temp
Try: $ grep -oE 'Temp[^F]*Fan\[[[:digit:]]+\]' text which prints Temp[30] TMax[77] Fan[5430] and Temp[29] TMax[80] Fan[5490] (here text is a file holding the output shown in the question). How it works: -o tells grep to print only the matching text and not the rest of the line. -E tells grep to use extended regular expressions. (The default basic regular expressions are archaic.) Temp[^F]*Fan\[[[:digit:]]+\] : this regex matches any string that starts with Temp , followed by any number of characters that don't include F , followed by Fan , followed by a literal [ , followed by one or more digits, followed by a literal ] .
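Fed from the command in the question rather than a saved file, the same extraction would be (a sketch): cgminer-api -o stats | grep -oE 'Temp[^F]*Fan\[[[:digit:]]+\]' ; this prints one line per MM ID with just the Temp, TMax and Fan fields.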
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416693", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270279/" ] }
416,715
Is there a package manager for busybox devices? After all, Busybox utilities are quite restricted. I suppose one would have to compile it for the specific device. Suppose that you have a device running a Linux kernel and using a Busybox binary for all tools and such. Now suppose that you want to install some software on the device. Busybox doesn't have a package manager integrated, unless you count rpm as one. So you have to install that first. How would you do it? The OS in question is Linux.
BusyBox is what is called a multicall binary, meaning it is one binary that provides multiple utility functions. If called as a shell it runs as a shell; if called as the ls command it runs the ls command. It acts as a replacement for many standard tools used on Linux and Unix-like systems with a small memory footprint. It replaces the functionality of other software like GNU coreutils, util-linux, iproute, etc., and it is usually targeted at the requirements of a specific embedded system. Therefore, if the desire is to have a package manager for a collection of separately installed utilities, that is exactly the model BusyBox replaces and is designed not to be. So you can simply use the full suite of tools that busybox replaces instead. You select which utilities are included in busybox when you configure and compile it; it is not intended to be reconfigured after the fact. https://www.busybox.net/FAQ.html#build_system
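The build-time selection mentioned above uses the kernel-style kconfig system, so a typical sequence when building BusyBox from source looks like (a sketch): make defconfig to start from the default feature set, make menuconfig to toggle individual applets on or off, then make and make install.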
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416715", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/265849/" ] }
416,725
I have read that POSIX compliant operating systems (for example: Linux) must have the sh shell. But is it required for sh to be in the /bin directory, or can it be in any directory?
POSIX only mandates the /dev and /tmp directories to exist , and the /dev/null , /dev/tty , and /dev/console files. The standard utilities must exist, but there is no particular location specified. There may not be a /bin at all, and if there is it may not contain a sh , and if it does that may not be a POSIX sh . You can get a valid PATH variable that includes the POSIX tools, including sh , with the getconf command : $ PATH=$(getconf PATH)$ sh This can be useful on, for example, Solaris, where the default sh is not POSIX-compatible , but a compliant sh is provided and accessible in that way (because Solaris is a certified Unix ). getconf PATH will include /usr/xpg4/bin at the front, which contains POSIX sh and a number of other required tools ( including useless ones like cd ).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/416725", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270294/" ] }
416,759
The man curl entry for curl -O says: If you want the file saved in a different directory, make sure you change current working directory before you invoke curl with the -O, --remote-name flag! Why isn't it possible to download the file to a specific directory, so that one doesn't have to do cd /example first? I know it's possible with wget this way: wget -P /example https://example.com/file But I do wonder why it seemingly isn't with curl .
This is because it is how curl is designed. You can nonetheless achieve writing to output directory via shell redirection. Example: $ curl http://example.com/file > /path/to/destination/file
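A couple of hedged additions: -o also accepts a full path, so curl -o /example/file https://example.com/file writes straight into the target directory (add --create-dirs if the directory may not exist yet), and newer curl releases (7.73.0 and later, if I recall correctly) add an --output-dir option that can be combined with -O to keep the remote file name.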
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416759", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/258010/" ] }
416,760
I have been using bash most of the time and have just started playing with other shells. I started with dash and tried to find out its version number, but the usual methods like -v or $version do not work. I can understand that --version is not going to work as that is GNU specific. I searched the net and found the following answers, all resorting to the package management system. How to tell the version number of dash? (the user is using CentOS, so the answer depends on the Redhat Package Management system) and How to find the version of the dash shell on Ubuntu bin? (This is from the AskUbuntu site so the answer depends on the Advanced Package Tool) So is there a way to find out the version of dash without resorting to the package management system? I will be very surprised if there is no simple way because I have always believed that querying the version of a piece of software is one of the most fundamental functions. If that is really the case, I would be happy to hear some explanation, e.g. what the philosophy behind such a design is. I will be happy to accept either a simple way to get the dash version without resorting to the package management system or a convincing explanation (a historical perspective is also OK) on why I cannot do it as the accepted answer.
I have always believed that querying the version of a software is one of the most fundamental functions. It isn't. It is a good idea that we had to learn. Many years ago, we didn't get a kernel, a package manager, and a package repository. We got an operating system . That had a version, and implicitly all of the operating system's component programs were associated with the operating system's version. This was as true for BSD as it was for PC-DOS. The AT&T world at the start of the 1980s gave us the what program and the idea of embedded version strings put into binaries by the source code control system. For a while one could use that to find out the versions of things, albeit that often it was the versions of individual source files in a program rather than for the program as a whole. (I myself put this mechanism into all of my version 1 Command-Line Utilities for DOS and for OS/2, alongside a 16-bit WHAT program.) One still can today with a few OpenBSD binaries, where $ what /bin/sh reports /bin/sh followed by PD KSH v5.2.14 99/07/13.2 , and a few FreeBSD binaries, where % what /bin/tcsh reports /bin/tcsh: followed by Copyright (c) 1991 The Regents of the University of California. But this is not the case with even most other programs on OpenBSD and FreeBSD any more, and certainly not with the Almquist shell on FreeBSD, where % what /bin/sh prints only /bin/sh: with no version string, nor with the Debian Almquist shell, where % what /bin/dash prints only /bin/dash: . In 1988, Digital Research gave the world the idea that tools took a /? option to ask for option help, which Microsoft copied from DR-DOS into version 5.0 of its MS-DOS in 1991 and IBM into OS/2 in 1992. This idea, widely touted by word of mouth, on Fidonet, and in computer magazines at the time as a very good thing, found its way into the GNU coding conventions as a --help option, to which was added a --version option. But this was not widespread for non-GNU tools in the Unix world, nor indeed widespread at the time that the Almquist shell was written in 1989, as the GNU convention did not even appear until the 1990s. The Bourne Again shell (first published 1989) nowadays supports --version . This was likewise added to the MirBSD Korn shell (the original Korn shell being first published in 1983, remember), the TENEX C shell (1983), and the Z shell (1990), which also nowadays all support --version . This mechanism has not, however, been added to the Almquist shell, even by the Debian people when they made their Debian Almquist shell decades later. From within the shells themselves, many shells (including several of the Korn variants, the Z shell, the Bourne Again shell, and the TENEX C shell) have a <shellname>_VERSION (such as BASH_VERSION , KSH_VERSION or ZSH_VERSION ) or version shell variable that can be accessed in a shell script. Again, the Almquist shell does not. Further reading: Mike Miller (2014-10-06). consider providing a DASH_VERSION variable . Debian bug #764172. https://unix.stackexchange.com/a/257764/5132
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416760", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/259023/" ] }
416,767
I just backed up the microSD card from my Raspberry Pi on my PC running a Linux distro using this command: dd if=/dev/sdx of=file.bin bs=16M The microSD card is only 3/4 full so I suppose there's a few gigs of null bytes at the end of the tremendous file. I am very sure I don't need that. How can I strip those null bytes from the end efficiently so that I can later restore it with this command? cat file.bin /dev/zero | dd of=/dev/sdx bs=16M
To create a backup copy of a disk while saving space, use gzip : gzip </dev/sda >/path/to/sda.gz When you want to restore the disk from backup, use: gunzip -c /path/to/sda.gz >/dev/sda This will likely save much more space than merely stripping trailing NUL bytes. Removing trailing NUL bytes If you really want to remove trailing NUL bytes and you have GNU sed, you might try: sed '$ s/\x00*$//' /dev/sda >/path/to/sda.stripped This might run into a problem if a large disk's data exceeds some internal limit of sed. While GNU sed has no built-in limit on data size, the GNU sed manual explains that system memory limitations may prevent processing of large files: GNU sed has no built-in limit on line length; as long as it can malloc() more (virtual) memory, you can feed or construct lines as long as you like. However, recursion is used to handle subpatterns and indefinite repetition. This means that the available stack space may limit the size of the buffer that can be processed by certain patterns.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416767", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211239/" ] }
416,786
How do I append multiple lines to a file, if those lines don't exist in that file yet? For example, to add multiple global aliases to /etc/bash.bashrc I use a here-document: cat <<-"BASHRC" >> /etc/bash.bashrc with the body alias rss="/etc/init.d/php*-fpm restart && systemctl restart nginx.service" and alias brc="nano /etc/bash.bashrc" followed by the closing BASHRC delimiter. I was criticized because this operation doesn't include a way to check whether the lines are already there, and if I mistakenly re-execute the here-document I could cause redundancy, as well as conflicts.
A simple shell script to add lines from the file newdata to datafile ; it should be straightforward to change newdata to a here-doc. This is really not very efficient since it calls grep for every (new) input line: target=datafile; while IFS= read -r line ; do if ! grep -Fqxe "$line" "$target" ; then printf "%s\n" "$line" >> "$target" ; fi ; done < newdata For each line, we use grep to see if it already exists in the target file: -F for fixed-string match (no regexes), -x for full-line match, and -q to suppress output of matched lines. grep returns a falsy error code if it doesn't find a matching line, so we append to the target file if the negated result is truthy. More efficiently, in awk (this relies on awk being able to handle arbitrary lines as keys of an array): $ awk 'FNR == NR { lines[$0] = 1; next } ! ($0 in lines) {print}' datafile newdata The first part FNR == NR { lines[$0] = 1; next } loads all lines of the first input file as keys into the (associative) array lines . The second part ! ($0 in lines) {print} runs on the following input lines, and prints a line if it's not in the array, i.e. the "new" lines. The resulting output contains only the new lines, so it needs to be appended to the original file, e.g. with sponge (from moreutils): $ awk 'FNR == NR { lines[$0] = 1; next } ! ($0 in lines) {print}' datafile newdata | sponge -a datafile Or we could have awk append the lines to the target file itself, it just requires passing the file name to awk : $ target=datafile $ awk -vtarget="$target" 'FNR == NR { lines[$0] = 1; next } ! ($0 in lines) {print >> target}' "$target" newdata To use a here-doc with awk , we'll need to add - (stdin) as an explicit source file, in addition to setting the redirection, so awk ... "$target" - <<EOF
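Putting that together with the question's aliases, a hedged, complete example (a sketch using the paths from the question; running it a second time appends nothing, which is the idempotence being asked for):
target=/etc/bash.bashrc
awk -v target="$target" 'FNR == NR { lines[$0] = 1; next } ! ($0 in lines) { print >> target }' "$target" - <<'EOF'
alias rss="/etc/init.d/php*-fpm restart && systemctl restart nginx.service"
alias brc="nano /etc/bash.bashrc"
EOF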
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416786", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/258010/" ] }
416,820
I used to think deleting my bash history was enough to clear my bash history, but yesterday my cat was messing around the right side of my keyboard and when I got back into my computer I saw something I typed a month ago, then I started to press all the keys like crazy looking for what could've triggered it. Turns out UPARROW key shows my bash history even after deleting .bash_history. How can I delete my bash history for real?
In some cases (some bash versions), doing a: $ history -c; history -w Or simply $ history -cw Will clear history in memory (up and down arrow will have no commands to list) and then write that to the $HISTFILE file (if the $HISTFILE gets truncated by the running bash instance). Sometimes bash chooses not to truncate the $HISTFILE file even with the histappend option unset and $HISTFILESIZE set to 0. In such cases, the nuke option always works: history -c; >$HISTFILE That clears the history list of commands recorded in memory and all commands previously recorded to the file. That will ensure that the running shell has no recorded history either in memory or on disk, however, other running instances of bash (where history is active) may have a full copy of commands read from $HISTFILE when bash was started (or when a history -r is executed). If it is also required that nothing else (no new commands) from the present session be written to the history file, then unset HISTFILE will prevent any such logging.
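A sketch combining the steps above into one sequence for the current session:
history -c          # drop the in-memory history list
history -w          # write the (now empty) list out to $HISTFILE
>"$HISTFILE"        # belt and braces: truncate the file explicitly
unset HISTFILE      # stop this session from recording anything further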
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/416820", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270184/" ] }
416,860
Why does the same version of htop have so different layout of the CPU meters? How to switch between the layouts? Layout 1 htop --versionhtop 2.0.2 - (C) 2004-2018 Hisham MuhammadReleased under the GNU GPL. Layout 2 htop --versionhtop 2.0.2 - (C) 2004-2017 Hisham MuhammadReleased under the GNU GPL.
OK, that was easy. Although it is not particularly well explained in the man page, after some tinkering in the setup screen I found the answer. Press F2 and, using Enter and the arrow keys, set up the meters as follows for Layout 1; for Layout 2, set them up like this:
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416860", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17765/" ] }
416,877
I know what's ugoa (owner, group, others, all) or rwx (read/write/execute) or 4,2,1 or - , f , d , l , and I tried to read in man chmod to understand what's a capital X in chmod but there wasn't an entry for it. I then read in this article in posix/chmod but was stuck in this passage: Set the executable bit only if the target a) is a directory b) has already at least one executable bit set for any one of user, group, others. I also read in this article that gives this code example: chmod -R u=rwX,g=rX,o=rX testdir/ I understand there is a recursive permission on the testdir/ , in regards to the owner (u), group (g), and others (o) but I admit I still miss the intention of the capital X. Maybe a didactic phrasing here could shed some light on this (the main reason I publish this here is because I didn't find an SE session on this). Update Sorry all, I missed that in the man. I didn't imagine the X would appear before the list of arguments and I thought the search returns x instead of X, my bad.
The manpage says: execute/search only if the file is a directory or already has execute permission for some user ( X ) POSIX says: The perm symbol X shall represent the execute/search portion of the file mode bits if the file is a directory or if the current (unmodified) file mode bits have at least one of the execute bits (S_IXUSR, S_IXGRP, or S_IXOTH) set. It shall be ignored if the file is not a directory and none of the execute bits are set in the current file mode bits. This is a conditional permission flag: chmod looks at whatever it is currently processing, and if it’s a directory, or if it has any execute bit set in its current permissions (owner, group or other), it acts as if the requested permission was x , otherwise it ignores it. The condition is verified at the time chmod applies the specific X instruction, so you can clear execute bits in the same run with a-x,a=rwX to only set the executable bit on directories. You can see whether a file has an execute bit set by looking at the “access” part of stat ’s output, or the first column of ls -l . Execute bits are represented by x . -rwxr-xr-x is common for executables and indicates that the executable bit is set for the owner, group and other users; -rw-r--r-- is common for other files and indicates that the executable bit is not set (but the read bit is set for everyone, and the write bit for the owner). See Understanding UNIX permissions and their attributes which has much more detail. Thus in your example, u=rwX sets the owner permissions to read and write in all cases, and for directories and executable files, execute; likewise for group ( g=rX ) and other ( o=rX ), read, and execute for directories and executable files. The intent of this operator is to allow the user to give chmod a variety of files and directories, and get the correct execute permissions (assuming none of the files had an invalid execute bit set). It avoids having to distinguish between files and directories (as in the traditional find . -type f -exec chmod 644 {} + and find . -type d -exec chmod 755 {} + commands), and attempts to deal with executables in a sensible way. (Note that macOS chmod apparently only supports X for + operations.)
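A quick demonstration of the conditional behaviour (file names are arbitrary; assumes a typical umask so that chmod +x actually sets an execute bit):
mkdir dir; touch plain tool; chmod +x tool
chmod u=rwX,go=rX dir plain tool
ls -ld dir plain tool
# dir:   drwxr-xr-x  (directory, so X applies)
# plain: -rw-r--r--  (no execute bit was set, so X is ignored)
# tool:  -rwxr-xr-x  (already executable, so X applies)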
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/416877", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/258010/" ] }
416,886
For resizing LVM2 partition, one needs to perform the following 2 commands: # lvextend -L+1G /dev/myvg/homevol# resize2fs /dev/myvg/homevol However, when I perform lvextend , I see that the changes are already applied to the partition (as shown in Gnome Disks). So why do I still need to do resize2fs ?
The lvextend command (without the --resizefs option) only makes the LVM-side arrangements to enlarge the block device that is the logical volume. No matter what the filesystem type (or even whether or not there is a filesystem at all) on the LV, these operations are always similar. If the LV contains an ext2/3/4 filesystem, the next step is to update the filesystem metadata to make the filesystem aware that it has more space available, and to create/extend the necessary metadata structures to manage the added space. In the case of ext2/3/4 filesystems, this involves at least: creating new inodes for the added space extending the block allocation data structures so that the filesystem can tell whether any block of the added space is in use or free potentially moving some data blocks around if they are in the way of the previously-mentioned data structure extension This part is specific to the filesystem type, although the ext2/3/4 filesystem types are similar enough that they can all be resized with a single resize2fs tool. For XFS filesystems, you would use the xfs_growfs tool instead. Other filesystems may have their own extension tools. And if the logical volume did not contain a filesystem but instead something like a "raw" database or an Oracle ASM volume, yet another procedure would need to be applied. Each filesystem has different internal workings and so the conditions for extending a filesystem will be different for each. It took a while until a common API was designed for filesystem extension; that made it possible to implement the fsadm resize command, which provides a unified syntax for extending several filesystem types. The --resizefs option of lvextend just uses the fsadm resize command. In a nutshell: After lvextend , LVM-level tools such as lvs , vgs , lvdisplay and vgdisplay will see the updated size, but the filesystem and any tools operating on it, like df , won't see it yet.
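For reference, a sketch of the one-step form and of the XFS equivalent (volume names and the mount point are placeholders):
# ext2/3/4 on LVM: extend the LV and grow the filesystem in one go
lvextend --resizefs -L +1G /dev/myvg/homevol
# XFS: extend the LV, then grow the (mounted) filesystem by its mount point
lvextend -L +1G /dev/myvg/homevol
xfs_growfs /home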
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/416886", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199779/" ] }
416,896
Consider these wget codes: wget -P ~/ https://raw.githubusercontent.com/user/repo/branch/papj.shwget -P ~/ https://raw.githubusercontent.com/user/repo/branch/nixta.sh Is there any elegant way to unite different terminals of the same basic URL as above, into one line instead 2 or more? Pseudocode: wget -P ~/ https://raw.githubusercontent.com/user/repo/branch/papj.sh||nixta.sh
As wget accepts several URLs at once this can be done using brace expansion in bash : wget -P ~/ https://raw.githubusercontent.com/user/repo/branch/{papj.sh,nixta.sh} (or even wget -P ~/ https://raw.githubusercontent.com/user/repo/branch/{papj,nixta}.sh but this only works for well-suited names of course).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416896", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/258010/" ] }
416,906
I installed centos 7 on virtualbox on my local pc. I configured in the vm 2 adapter cards one as a bridge Adapter and the other as host only adapter : Right now I have 2 network adapters that are configured with a valid ip address : enp0s3 : 192.168.1.95/24 (bridged) enp0s8 : 192.168.56.102.24 (host-only) The configuration for enp0s8 : TYPE="Ethernet"BOOTPROTO="static"DEFROUTE="yes"IPV4_FAILURE_FATAL="no"NAME="enp0s8"DEVICE="enp0s8"ONBOOT="yes"PEERDNS="yes"PEERROUTES="yes"IPADDR=192.168.56.102NETMASK=255.255.255.0 When I ping google it works which means that the first network card is working fine. However, when I'm trying to connect via putty (ssh) to the vm It fails.. Any idea what else I can check ?
As wget accepts several URLs at once this can be done using brace expansion in bash : wget -P ~/ https://raw.githubusercontent.com/user/repo/branch/{papj.sh,nixta.sh} (or even wget -P ~/ https://raw.githubusercontent.com/user/repo/branch/{papj,nixta}.sh but this only works for well-suited names of course).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416906", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270390/" ] }
416,912
I'm trying to remove this dkms module but am running into trouble. I run sudo dkms uninstall rtl8812au/4.3.14 and I get Error! The module/version combo: rtl8812au-4.3.14is not located in the DKMS tree. However, when I run dkms status , I get 8188eu, 1.0, 4.13.0-26-generic, x86_64: installedbcmwl, 6.30.223.271+bdcom, 4.13.0-26-generic, x86_64: installedmt7610u_sta, 1.0, 4.13.0-26-generic, x86_64: installed (WARNING! Diff between built and installed module!)rtl8812au, 4.3.8.12175.20140902+dfsg: added and when I go into the Makefile.dkms in the following folder, y9@y9-aspire:~/rtl8812AU_8821AU_linux$ lsclean core ifcfg-wlan0 Makefile README.mdcontrib dkms.conf include Makefile.dkms runwpaCONTRIBUTORS.md fetch.sh Kconfig os_dep wlan0dhcpcontributors.sh hal LICENSE platform I see modname := rtl8812auDKMS := dkmsmodver := 4.3.14 I just want to know how I can clear my dkms modules. Thank you.
In case normal operations have gone wrong, you can always delete DKMS add-ons by hand, with sudo or as root. Normally the module sources are installed by make install under /var/lib/dkms/ in a directory with the corresponding name, probably named rtl...something . Just delete that directory. You also have to delete the corresponding compiled module file under /lib/modules/KERNEL_VERSION/updates/dkms/ where KERNEL_VERSION is your current kernel. The file should be called rtl...something.ko or similar. Once that is done, you can either try to rmmod the module or, failing that, reboot. No module and no corresponding DKMS entry remain in the system. While not critical, the module dependencies also need to be updated after deleting the module. Run: sudo /sbin/depmod -a
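A sketch of the manual cleanup for the module in the question (the exact directory and .ko file names vary between builds, so check what actually exists before deleting):
sudo rm -rf /var/lib/dkms/rtl8812au
sudo rm -f /lib/modules/"$(uname -r)"/updates/dkms/rtl8812au.ko
sudo rmmod rtl8812au || true      # or reboot if the module will not unload
sudo /sbin/depmod -a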
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/416912", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/258724/" ] }
416,918
I'm working on a script that will run a checksum process which outputs to STDOUT, which I then want to grep for lines matching OK, or FAILED and do different things with those matches (i.e. output to terminal and log). I've watched a ton of Youtube videos and read a ton about redirection, but I just can't seem to wrap my head around how exactly redirection works. What I'm trying to do is chain STDOUT to multiple greps without them gobbling up the non-matched text. Here's a concept of what I'm trying using cat instead of md5sum with a text file of animal names on each line (DOG, CAT, PONY, RHINO, DEER, FOX): { cat test.txt 3>&1 | tee /dev/fd/3 | grep DOG; } 3> results.txt This does what I expect. What I understand here is I'm doing a cat on the file, and then opening fd3 which points to whatever is written to STDOUT(fd1). Since the grep will gobble up everything from fd1, I tee the STDOUT of cat explicitly to fd3 and then pipe the STDOUT to grep. Grep will print out the line matching DOG, and then all the text written to fd3 from cat will get pushed to a results.txt file. Now, to chain another grep to look for other text I have to point the fd3 data back into STDOUT, tee it explicitly back to fd3 and then pipe STDOUT to a new grep. { { cat test.txt 3>&1 | tee /dev/fd/3 | grep DOG; } 3>&1 | tee /dev/fd/3 | grep PONY; } 3> results.txt The first problem here is the STDOUT from the first grep is being pushed into fd3 a second time instead of printing to the terminal. So now my results.txt is getting duplicates and I never got anything printed to the screen for the first grep. This is where my understanding of redirections is falling apart. I sort-of get what's happening but I can't figure out a simple solution. I want to grep STDOUT, print the results to screen, and pass the original text to another grep, and maybe a third, fourth, etc without modifying the original text I'm passing to each GREP, and without each subsequent grep eating up the previous' match that should print to screen. I could probably do this by storing a variable and calling it on multiple lines of greps, but then I have to wait for the entire first command to complete. In the case of the application I'm working on, I want to see realtime results during a checksum, not just a blank screen for an hour until the whole process is complete. Any clarification on what I'm doing wrong would be super helpful, thanks! EDIT I understand this exact use of cat is pointless, I just used it to demonstrate the concept. In the script I'll be applying the concept to, the first command is actually: md5sum -c checksum.md5 Which will read a checksum file, re-hash the the source and output to STDOUT a pass/fail line. I then want to grep this stream and send the results to separate logs and/or terminal output - but cat seemed like a simpler way to demonstrate the problem as this can be applied to filtering any command and grepping the stream, such as find, md5, ls, etc.
You can do what you ask for better with process substitution. Staying as close to your original command as possible: cat test.txt | tee >(grep DOG) >(grep PONY) >results.txt Removing the useless use of cat: <test.txt tee >(grep DOG) >(grep PONY) >results.txt Or: tee >(grep DOG) >(grep PONY) <test.txt >results.txt
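Applied to the md5sum check from the question, a sketch along the same lines (the ': OK' / ': FAILED' line format is what GNU md5sum -c prints; log file names are just examples):
md5sum -c checksum.md5 | tee >(grep ': OK$' >passed.log) >(grep -v ': OK$' >failed.log)
# everything still streams to the terminal in real time via tee's stdout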
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416918", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270391/" ] }
416,929
I have a text file with the following format: Item A,10-20Item B,21-30Item C,31-60Item D,61-120 how can I search the file with any number and I get the line returned that includes the number given by the range in the second field. so lets say I search for 33 I get Item C if I search for 100 I get Item D and so on... (my question does not focus on the field separation but rather on matching the line within the range, so if I would get the whole line displayed this would be fine)
You can better do what you ask for with process substitution: Being as close as your original command as possible: cat test.txt | tee >(grep DOG) >(grep PONY) >results.txt Removing the useless use of cat: <test.txt tee >(grep DOG) >(grep PONY) >results.txt Or: tee >(grep DOG) >(grep PONY) <test.txt >results.txt
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416929", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240990/" ] }
416,945
At the command line I often use "simple" commands like mv foo/bar baz/bar but I don't know what to call all the parts of this: ┌1┐ ┌──2───┐git checkout master│ └──────3──────┘└───────4─────────┘ I (think I) know that 1 is a command and 2 's an argument, and I'd probably call 3 an argument list (is that correct?). However, I don't know what to call 4 . How are more complex "commands" labelled? find transcripts/?.? -name '*.txt' | parallel -- sh -c 'echo $1 $2' {} {/} I'd appreciate an answer that breaks down what to call 1,2,3,4 and what to call each part of e.g. this "command" above. It would be great to learn also about other things that are unique/surprising that I haven't included here.
The common names for each part is as follows: ┌1┐ ┌──2───┐git checkout master│ └──────3──────┘└───────4─────────┘ Command name (first word or token of command line that is not a redirection or variable assignment and after aliases have been expanded). Token, word, or argument to the command. From man bash: word: A sequence of characters considered as a single unit by the shell. Also known as a token. Generally: Arguments Command line. The concatenation of two simple commands with a | is a pipe sequence or pipeline: ┌─1┐ ┌──────2──────┐ ┌─2─┐ ┌──2──┐ ┌──1───┐ ┌2┐┌2┐┌2┐┌────2─────┐ ┌2┐ ┌2┐find transcripts/?.? -name '*.txt' | parallel -- sh -c 'echo $1 $2' {} {/}│ └────────────3──────────────┘ └────────────3──────────────┘└───────────────────────────────────4─────────────────────────────────────┘ Mind that there are redirection and variable assignments also: ┌──5──┐ ┌1┐ ┌─2─┐ ┌─2─┐ ┌───6──┐ ┌1┐ ┌─5─┐<infile tee file1 file2 | LC_ALL=C cat >file└─────────7───────────┘ └───────7────────┘└─────────────────────4────────────────────┘ Where (beside the numbers from above): redirection. Variable assignment. Simple command. This is not an exaustive list of all the element a command line could have. Such a list is too complex for this short answer.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/416945", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172877/" ] }
416,958
I found out the floating point multiplication in mit-scheme is not accurate, for example, 1 ]=> (* 1991.0 0.1) will produce ;Value: 199.10000000000002 Could you please help explain the appearance of the weird trailing number “2”?
This quote is from memory and so probably not quite right but it conveys the essence of the problem: "Operating on floating point numbers is like moving piles of sand: every time you do it, you lose a little sand and you get a bit of dirt" (from Kernighan and Plauger's "Elements of programming style" IIRC). Every programming language has that problem.
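The same effect is easy to reproduce outside Scheme; for example, asking awk (which also uses IEEE 754 doubles) for enough digits exposes the rounding:
awk 'BEGIN { printf "%.17g\n", 1991.0 * 0.1 }'
# 199.10000000000002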
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416958", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112863/" ] }
416,993
I would like to extract version number from this string: <a href="/url/version/tree/1.0.1alpha11" class="css-truncate"> Note that ' /url/version/tree/ ' may change (ex: from /url/version/tree/ to /url/version2/tree1/) and version may change too (ex: from 1.01alpha11 to 2.0stable ) Ideas/suggestions?
This quote is from memory and so probably not quite right but it conveys the essence of the problem: "Operating on floating point numbers is like moving piles of sand: every time you do it, you lose a little sand and you get a bit of dirt" (from Kernighan and Plauger's "Elements of programming style" IIRC). Every programming language has that problem.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/416993", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270450/" ] }
417,041
I struggle to understand the effects of the following command: yes | tee hello | head On my laptop, the number of lines in 'hello' is of the order of 36000, much higher than the 10 lines displayed on standard output. My questions are: When does yes , and, more generally, a command in a pipe, stop? Why is there a mismatch between the two numbers above. Is it because tee does not pass the lines one by one to the next command in the pipe?
:> yes | strace tee output | head[...]read(0, "y\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\n"..., 8192) = 8192write(1, "y\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\n"..., 8192) = 8192write(3, "y\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\n"..., 8192) = 8192read(0, "y\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\n"..., 8192) = 8192write(1, "y\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\ny\n"..., 8192) = -1 EPIPE (Broken pipe)--- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=5202, si_uid=1000} ---+++ killed by SIGPIPE +++ From man 2 write : EPIPE fd is connected to a pipe or socket whose reading end is closed. When this happens the writing process will also receive a SIGPIPE signal. So the processes die right to left. head exits on its own, tee gets killed when it tries to write to the pipeline the first time after head has exited. The same happens with yes after tee has died. tee can write to the pipeline until the buffers are full. But it can write as much as it likes to a file. It seems that my version of tee writes the same block to stdout and the file. head has 8K in its (i.e. the kernel's) read buffer. It reads all of it but prints only the first 10 lines because that's its job.
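You can see the same thing without strace by just counting how much tee managed to write to the file before the SIGPIPE arrived (the exact count depends on pipe buffer sizes, so treat the number as indicative only):
yes | tee hello | head >/dev/null
wc -l hello     # tens of thousands of lines, while head only consumed 10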
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/417041", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/212582/" ] }
417,052
Apologies, this title is not the most elegant I've ever devised. But I assume a lot of people will have wondered this, and my question may be a dupe... all I can say is I haven't found it. When I say "scrolling" up, I mean using the "up arrow" key on the keyboard, which obviously scrolls you up through the history, starting at the most recent command. So you find a command maybe 30 commands back... and you run it. And then you want to run the command which originally came after it... is there is a snappy way of doing this? Or how do those fluent in BASH do this?
Running the command with Ctrl + o instead of Enter will run a command from history and then queue up the next one instead of returning to the front of the bash history.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/417052", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220752/" ] }
417,101
Setup On some networks I'm able to use nslookup to resolve a domain name that is pointed to a private ip address: @work> nslookup my192.ddns.netServer: 10.1.2.3Address: 10.1.2.3#53Non-authoritative answer:Name: my192.ddns.netAddress: 192.168.20.20 However, on my home network this same query fails: @home> nslookup my192.ddns.netServer: 192.168.0.1Address: 192.168.0.1#53Non-authoritative answer:*** Can't find my192.ddns.net: No answer What Works I've found that if I change the A record for my192.ddns.net so that it points to a public IP range it will work fine: @home> nslookup my192.ddns.netServer: 192.168.0.1Address: 192.168.0.1#53Non-authoritative answer:Name: my192.ddns.netAddress: 172.217.12.238 At home, if I specify the DNS server for nslookup, or set my laptop's DNS servers to Google's nslookup works as expected: @home> nslookup my192.ddns.net 8.8.8.8Server: 8.8.8.8Address: 8.8.8.8#53Non-authoritative answer:Name: my192.ddns.netAddress: 192.168.20.20 But I'd like to continue to use my home router as my primary DNS so that it can resolve local network names. I'd just like it not to fail when trying to do lookups for DNS records that point to private range addresses (eg: 192.168.20.20 ) Home Network I run LEDE (formerly OpenWRT ) on my home router, which run dnsmasq . I've looked over the documentation for DNS and have even setup the system so that the DNS server it uses to resolve the address is Google's ( 8.8.8.8 ) - but it still fails and I can't seem to figure out why. Question What's happening here and how can I fix it?
This is a feature of dnsmasq . The dnsmasq people call it "rebind protection", and you can see it both in the dnsmasq manual as the --stop-dns-rebind command-line option and in the LEDE doco as the rebind_protection option. It defaults to being on. Either turn it off, or add the domain that you desire to work to the set of whitelisted rebind_domain domains. The attack that it purports to prevent is one where an attacker, who has taken advantage of the fact that your WWW browser will automatically download and run attacker-supplied programs from the world at large, makes xyr domain name seem to rapidly alternate between an external IP address and one internal to your LAN, allowing your machine to become a conduit between another machine on your LAN with that IP address and some attacker-run content servers.
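A sketch of the whitelist approach rather than turning the protection off, first in LEDE's /etc/config/dhcp (dnsmasq section) and then in a plain dnsmasq.conf — the domain is a placeholder for your dynamic-DNS domain:
# /etc/config/dhcp (LEDE/OpenWrt)
	option rebind_protection '1'
	list rebind_domain 'ddns.net'
# /etc/dnsmasq.conf (stock dnsmasq)
stop-dns-rebind
rebind-domain-ok=/ddns.net/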
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/417101", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
417,187
My dash script takes a parameter in the form of hostname:port , i.e.: myhost:1234 Whereas port is optional, i.e.: myhost I need to read the host and port into separate variables. In the first case, I can do: HOST=${1%%:*}PORT=${1##*:} But that does not work in second case, when port was omitted; echo ${1##*:} simply returns hostname, instead of an empty string. In Bash, I could do: IFS=: read A B <<< asdf:111 But that does not work in dash . Can I split string on : in dash, without invoking external programs ( awk , tr , etc.)?
Just do: case $1 in (*:*) host=${1%:*} port=${1##*:};; (*) host=$1 port=$default_port;;esac You may want to change the case $1 to case ${1##*[]]} to account for values of $1 like [::1] (an IPv6 address without port part). To split, you can use the split+glob operator (leave a parameter expansion unquoted) as that's what it's for after all: set -o noglob # disable glob partIFS=: # split on colonset -- $1 # split+globhost=$1 port=${2:-$default_port} (though that won't allow hostnames that contain a colon (like for that IPv6 address above)). That split+glob operator gets in the way and causes so much harm the rest of the time that it would seem only fair that it be used whenever it's needed (though, I'll agree it's very cumbersome to use especially considering that POSIX sh has no support for local scope, neither for variables ( $IFS here) nor for options ( noglob here) (though ash and derivatives like dash are some of the ones that do (together with AT&T implementations of ksh , zsh and bash 4.4 and above)). Note that IFS=: read A B <<< "$1" has a few issues of its own: you forgot the -r which means backslash will undergo some special processing. it would split [::1]:443 into [ and :1]:443 instead of [ and the empty string (for which you'd need IFS=: read -r A B rest_ignored or [::1] and 443 (for which you can't use that approach) it strips everything past the first occurrence of a newline character, so it can't be used with arbitrary strings (unless you use -d '' in zsh or bash and the data doesn't contain NUL characters, but then note that herestrings (or heredocs) do add an extra newline character!) in zsh (where the syntax comes from) and bash , here strings are implemented using temporary files, so it's generally less efficient than using ${x#y} or split+glob operators.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/417187", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }
417,276
So parse a text file and print out line 1 and line 14, and then do nothing with lines 15-46, and then print out line 47 and line 60, etc until the end of the file. So basically every 46 lines, print out the 1st and 14th line, repeatedly for every 46 lines until EOF
Since you have awk in your tags, I will provide a solution with awk : awk '(NR%46==1||NR%46==14){print}' file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/417276", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270641/" ] }
417,277
1) We have a process.log file in which we have lot of text data and in between we have some XML data published. 2) There of thousands of different XML published in the logs along with other text data. 3) Now i need to select only the XML files which are published after Outgoing XML: value 4) Also the XML file which must be selected and copied to a new file should be the one which matches the value in the ALERTID tag . 5) The ALERTID value will be provided in the script input. So in our case mGMjhgHgffHhhFdH1u4 will be provided in the input and we need to select the full XML file published for this alertid. Starting tag is from <xml version..> and ending tag is </Alert> 5) So i need to select the relevant Outgoing XML file in a new file based on a particular ALERTID so it can be replayed in different environments. Format of the log file is like below: Info Jan 11 17:30:26.12122 The process is not responding to heartbeatsDebug Jan 11 17:30:26.12123 Incoming XML :<xml version "1.0" encoding ="UTF-8"?><Alert trigger = "true" ><Alerttype>orderReject</Alerttype><AlertID>ghghfsjUtYuu78T1</AlertID><Order>uusingas</Order><Quantity>1254</Quanity></Alert> (CreateInitEventHandler. C:356)Debug Jan 11 17:30:26.12199 The process is going down with warningsDebug Jan 11 17:30:26.148199 Outgoing XML: <xml version "1.0" encoding ="UTF-8"?><Alert trigger = "true" ><Alerttype>orderheld</Alerttype><AlertID>mGMjhgHgffHhhFdH1u4</AlertID><Order>uwiofhdf</Order><Quantity>7651</Quanity></Alert>(CreateEventHandler. C:723)Debug Jan 11 17:30:26.13214 The process has restarted and thread openedDebug Jan 11 17:30:26.13215 The heartbeat is recieved from alertlistener process Now the requirement is to take AlertID in the input, scan the process log and extract the matching outgoing XML in a separate file. Using awk i am able to extract all the outgoing xml files but not sure how to extract the one related to a particular AlertID. Also i cannot install/use any new XML parser as per the company policy.This needs to be achieved using shell/perl/awk/sed Eg: awk '/Outgoing/{p=1; s=$0} P & & /<\/Alert>/ {print $0 FS s; s="" ;p=0}p' 1.log>2.log
Since you have awk in your tags, I will provide a solution with awk : awk '(NR%46==1||NR%46==14){print}' file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/417277", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270253/" ] }
417,284
I am wondering whether there is a simple way to operate on certain lines with preassigned line numbers. Let's say I want to output the 1st, 7th, 14th and 16th lines of a file, I can simply do sed -n '1p;7p;14p;16p' input_file but this gets more complicated when the operation is not just printing, and I don't want to write the same long command 4 times (and yes, I know I can construct this long sed command by substituting the same bash variable 4 times, but that's not ideal enough ...) , i.e. sed -n '1{long_command};7{long_command};14{long_command};16{long_command}' input_file Is there a way to do the operation on these specific lines of my file? I am expecting something like, sed -n '1,7,14,16p' which certainly will not work in the current form. Any help will be appreciated. "No, it is not possible." with explanations is also an answer that I will accept.
You can use branches: sed ' 1b1 7b1 14b1 16b1 # for the rest to be left alone, branch off (or delete them with "d"): b :1 long_command' (note that you can also add some 20,25b1 line ranges, or /re/b1 to include lines that match the re ). Or you could use awk : awk 'NR == 1 || NR == 7 || ... {stuff}' Or using a hash: awk -v l=1,7,14,16 ' BEGIN{split(l, a, ","); for (i in a) lines[a[i]]} NR in lines {stuff}' (or BEGIN{lines[1]lines[7]lines[14]lines[16]} if there aren't too many)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/417284", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/259023/" ] }
417,292
I was recently looking at some code that confused me because it works and I didn't expect it to. The code reduces to this example #!/bin/bashfor var;do echo "$var"done When run with command line arguments is prints them $ ./test a b cabc It is this, that is (to me) unexpected. Why does this not result in an error because var is undefined ? Is using this considered 'good practice' ?
This is the default behavior, yes. It is documented in the help of the for keyword: terdon@tpad ~ $ help forfor: for NAME [in WORDS ... ] ; do COMMANDS; done Execute commands for each member in a list. The `for' loop executes a sequence of commands for each member in a list of items. If `in WORDS ...;' is not present, then `in "$@"' is assumed. For each element in WORDS, NAME is set to that element, and the COMMANDS are executed. Exit Status: Returns the status of the last command executed. So, when you don't give it a list to iterate over, it will default to iterating over $@ , the array of positional parameters ( a , b and c in your example). And this behavior is defined by POSIX so yes, it is considered "good practice" as far as that goes.
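A quick way to convince yourself that the two spellings are equivalent:
bash -c 'for var; do echo "$var"; done' _ a b c
bash -c 'for var in "$@"; do echo "$var"; done' _ a b c
# both print a, b and c on separate lines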
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/417292", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270650/" ] }
417,323
What is the main difference between the directory cron.d (as in /etc/cron.d/ ) and crontab ? As far as I understand, one could create a file like /etc/cron.d/my_non_crontab_cronjobs and put whatever one wants inside it, just as one would put them in crontab via crontab -e . So what is the main difference between the two?
The differences are documented in detail in the cron(8) manpage in Debian. The main difference is that /etc/cron.d is populated with separate files, whereas crontab manages one file per user; it’s thus easier to manage the contents of /etc/cron.d using scripts (for automated installation and updates), and easier to manage crontab using an editor (for end users really). Other important differences are that not all distributions support /etc/cron.d , and that the files in /etc/cron.d have to meet a certain number of requirements (beyond being valid cron jobs): they must be owned by root, and must conform to run-parts ’ naming conventions ( no dots , only letters, digits, underscores, and hyphens). If you’re considering using /etc/cron.d , it’s usually worth considering one of /etc/cron.hourly , /etc/cron.daily , /etc/cron.weekly , or /etc/cron.monthly instead.
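A minimal example of a file in /etc/cron.d, mainly to show the extra user field that a personal crontab entry does not have (name, schedule and path are placeholders):
# /etc/cron.d/my-backup  -- owned by root, no dots in the file name
17 3 * * *   root   /usr/local/bin/backup.sh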
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/417323", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/258010/" ] }
417,405
When I install sshpass on alpine linux it will install and the doc will show up if you run it without arguments, but using any argument (valid or invalid) returns sshpass: Failed to run command: No such file or directory . It's pathed and even when using an absolute path it has the same behavior. I want to use this with ansible, but it won't even work directly. I can't seem to find any information online about this functioning or not functioning for other people, but I used other people's containers and my own and I couldn't get it to function on either. https://pkgs.alpinelinux.org/package/v3.3/main/x86/sshpass $ docker run -it --rm williamyeh/ansible:alpine3 ash/ # sshpassUsage: sshpass [-f|-d|-p|-e] [-hV] command parameters -f filename Take password to use from file -d number Use number as file descriptor for getting password -p password Provide password as argument (security unwise) -e Password is passed as env-var "SSHPASS" With no parameters - password will be taken from stdin -P prompt Which string should sshpass search for to detect a password prompt -v Be verbose about what you're doing -h Show help (this screen) -V Print version informationAt most one of -f, -d, -p or -e should be used/ # sshpass hisshpass: Failed to run command: No such file or directory/ # which sshpass/usr/bin/sshpass/ # /usr/bin/sshpassUsage: sshpass [-f|-d|-p|-e] [-hV] command parameters -f filename Take password to use from file -d number Use number as file descriptor for getting password -p password Provide password as argument (security unwise) -e Password is passed as env-var "SSHPASS" With no parameters - password will be taken from stdin -P prompt Which string should sshpass search for to detect a password prompt -v Be verbose about what you're doing -h Show help (this screen) -V Print version informationAt most one of -f, -d, -p or -e should be used/ # /usr/bin/sshpass anyinputsshpass: Failed to run command: No such file or directory It's worth mentioning that the underlying ssh executable works and I can connect to the host that way.
SSHpass was working fine, but the alpine container python:3.6-alpine doesn't have openssh installed. This error message is confusing as it doesn't mention that the ssh component is failing. This can be fixed by running apk add --update openssh . This was resolved by changing the line in the Dockerfile from RUN apk add --update --no-cache sshpass to RUN apk add --update --no-cache openssh sshpass .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/417405", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270720/" ] }
417,406
I would like to search \G string including the backslash with grep . $ echo "\G\\G" > /tmp/test$ grep '\G' /tmp/test \G\G$ grep '\\G' /tmp/test \G\G$ grep "\\G" /tmp/test \G\G$ grep "\G" /tmp/test \G\G Also see the screenshot below for the matches in red: I was wondering why only '\\G' works? Is it because of bash only, or because of both bash and grep? Thanks.
This has to do with what grep sees versus what the shell sees. So, both. If you want to see what grep sees with various forms of quoting, use, for example: printf '<%s>\n' G "G" 'G' \G "\G" '\G' \\G "\\G" '\\G' This will demonstrate what the shell does with the various types of quoting. If grep sees just a G , it will search for (and highlight, with your settings) just the G matches. If grep sees a single backslash followed by a G , it will (in your implementation and probably all current implementations) consider that the backslash removes any special meaning from the character G . But there isn't any special meaning to G , so the result will be the same as if you just pass G . If grep sees two backslashes, the first removes the special meaning from the second. So when grep sees \\G , it searches for a literal backslash followed by a G . That's what you want. If you use the -F flag to grep , to search for a fixed string, you can pass just \G and get the same result. (That is, if you pass \G so that grep sees \G , which will require that you escape the backslash in some way so the shell doesn't remove it.)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/417406", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
417,426
I submitted lots of SLURM job script with debug time limit (I forgot to change the time for actual run). Now they are all submitted at the same time, so they all start with job ID 197xxxxx. Now, I can do squeue -u $USER | grep 197 | awk '{print $1}' to print the job ID's I want to delete. But how do I use scancel command on all these ID's. The output from the above shell command would look like 197266641972666319726662197266611972666019726659197266581972665719726656197266551972665419726653197266521972665119726650
squeue -u $USER | grep 197 | awk '{print $1}' | xargs -n 1 scancel Check the documentation for xargs for details. If scancel accepts multiple job ids (it should), you may omit the -n 1 part.
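If your squeue supports output formatting (standard Slurm options: -h suppresses the header, -o "%i" prints only the job ID), a slightly tidier sketch of the same idea:
squeue -h -u "$USER" -o "%i" | grep '^197' | xargs scancel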
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/417426", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115661/" ] }
417,428
I want to copy over .jpg and .png files with scp , but there files with different extensions in the same folder I'm copying from. I am doing the following: scp [email protected]:/folder/*.{jpg,png} . I am asked to enter my password for each extension type. Is there a way to do this in such a way that I enter my password only once?
Just replace it with: scp [email protected]:'/folder/*.{jpg,png}' . Please note the pair of single quotes. In your case, your local shell is evaluating the expression, turning it really into: scp [email protected]:/folder/*.jpg [email protected]:/folder/*.png . hence the two passwords asked. In this solution, the pair of single quotes protects it from evaluation by the local shell, so it's the remote shell called by (the remote) scp which is evaluating the expression.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/417428", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90334/" ] }
417,465
I need to think of a command that I'll execute a different command on uneven days.
As a normal user, run crontab -e to edit your crontab. In that crontab enter: 00 12 1-31/2 * * /path/to/the/command_for_odd_days00 12 2-30/2 * * /path/to/the/command_for_even_days For those commands to be run at 12:00 (noon) every day. If you're administrator on the machine, you can instead create a: /etc/cron.d/myservice file, with a similar content, except that you need to specify which user the commands should run as. 00 12 1-31/2 * * someuser /path/to/the/command_for_odd_days00 12 2-30/2 * * someuser /path/to/the/command_for_even_days Run man 5 crontab to learn more about the format of those crontabs. The 1-31/2 syntax (for days between 1 and 31, every two days) should be recognised by most modern cron implementations including all those available on your Ubuntu system. If you come across an ancient system where it's not supported, you can replace it with 1,3,5,7,...,29,31 .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/417465", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270774/" ] }
417,499
We run the script script1 from the script script_main script_main : #!/bin/bash/tmp/script1echo $?sleep 2echo script ended script1 : #!/bin/bashexit 1 As is obvious, script1 exits with exit code 1but the main script will continue until end. My question: is it possible when script1 does exit 1 , then also the main_script will be stopped immediately?
The most straightforward way would be to explicitly have the first script exit if the other script failed. Put the script execution in a conditional, like otherscript.sh || exit 1 or if ! otherscript.sh ; then ret=$? echo "otherscript failed with exit code $ret. exit." >&2 exit 1fi This would allow the main script to do any cleanup it wants, or just try some other solution, possibly depending on what the child's exit code was. In the first one, we could use just || exit , to pass the child's exit code along. If you want to have the main script exit when any command it starts fails, then set -e . set -eotherscript.shecho "this will not run if otherscript fails" set -e doesn't apply to programs that run inside conditional constructs so we can still test for particular codes or tail them with || true to ignore the failure. But with exit points everywhere now, we could use trap somecmd EXIT to do any cleanup before the shell exits, regardless of where it happens. Having the inner script force the main script to exit is also possible, but a bit unfriendly, you wouldn't expect a usual application to do it. But, if you want to, just having the inner script plain old shoot its parent process is one way: $ cat mainscript.sh#!/bin/bash./otherscript.shecho "$0 exiting normally"$ cat otherscript.sh#!/bin/bashecho "$0 kill $PPID"kill $PPID$ bash mainscript.sh./otherscript.sh kill 11825Terminated Here, if you run otherscript.sh from an interactive shell, it will try to shoot the interactive session. All shells I tested seem to ignore the SIGTERM in this case when running interactively, though. In any case, the main shell could again trap the SIGTERM . Instead of shooting just the parent process, you could use kill 0 to kill all processes in the process group otherscript.sh runs in. If started from a noninteractive shell, that would usually include the parent.
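A sketch of script_main combining set -e with an EXIT trap, so it stops as soon as script1 fails but still gets a chance to clean up:
#!/bin/bash
set -e
cleanup() { echo "script_main exiting (last status: $?)" >&2; }
trap cleanup EXIT
/tmp/script1
echo "only reached if script1 exited with status 0"
sleep 2
echo script ended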
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/417499", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
417,638
We are recommended that, in bash scripts, we shouldn't parse the output of ls The webpage linked to above recommends code like: # Good!for f in *; do [[ -e $f ]] || continue ...done or to populate an array with filenames myfiles=( ~/* ) But when I look at these examples, and knowing how bourne shells deal with unquoted strings in general, my sense is that if I used this code on filenames containing spaces the the glob would explode every - space separated - word on the files. e.g. If I have the following directory $ ls -1 a file I downloaded from the net.pdfapple.shhello world.txt and I run this for file in * do printf "%s\n" "$file"done I'm expecting unquoted string behaviour. e.g. afileIdownloadedfromthenet.pdfapplehelloworld But what I get is the correct behaviour a file I downloaded from the net.pdfapplehello world similar for arrays myfiles=( * )declare -p myfilesdeclare -a myfiles='([0]="a file I downloaded from the net.pdf" [1]="apple" [2]="hello world")' My question is why is this? Is it because glob expansion is done after word splitting? (so that these types of file operations produce correct output?)
Filename expansion happens after word splitting https://www.gnu.org/software/bash/manual/html_node/Shell-Expansions.html#Shell-Expansions The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, arithmetic expansion, and command substitution (done in a left-to-right fashion); word splitting; and filename expansion.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/417638", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106525/" ] }
417,639
WP-CLI is a Bash expansion that allows you to automate different operations in the popular CMS WordPress. Given my environment setup script is already quite long (by means of lines), I've setted the WP-CLI installation directives in one line: curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar && chmod +x wp-cli.phar && mv wp-cli.phar /usr/local/bin/wp Is there a way to further shorten it? Maybe with curl's ability to print files' content to stdout and the content's ability to be piped into stdin ? This might save the chmod and the mv via permission preserving and redirection ( > )?
Filename expansion happens after word splitting https://www.gnu.org/software/bash/manual/html_node/Shell-Expansions.html#Shell-Expansions The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, arithmetic expansion, and command substitution (done in a left-to-right fashion); word splitting; and filename expansion.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/417639", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/258010/" ] }
417,645
Problem: Running Ubuntu 17.10 I have been trying to resolv (hehe) this issue for about a week now and despite countless Google searches and about 20 different attempts, I can not stop dnsmasq from periodically causing my CPU to spike for about a minute with the following offenders: systemd-resolved systemd-journald dnsmasq Monitoring journalctl -f I see this every time it happens: maximum number of concurrent dns queries reached (150) Accompanied/preceded by a crazy loop of requests to some domain (usually ubuntu connection check) like the following: query[A] connectivity-check.ubuntu.com from 127.0.0.1forwarded connectivity-check.ubuntu.com to 127.0.1.1forwarded connectivity-check.ubuntu.com to 127.0.0.53query[A] connectivity-check.ubuntu.com from 127.0.0.1forwarded connectivity-check.ubuntu.com to 127.0.0.53query[AAAA] connectivity-check.ubuntu.com from 127.0.0.1forwarded connectivity-check.ubuntu.com to 127.0.0.53query[AAAA] connectivity-check.ubuntu.com from 127.0.0.1forwarded connectivity-check.ubuntu.com to 127.0.0.53query[A] connectivity-check.ubuntu.com from 127.0.0.1forwarded connectivity-check.ubuntu.com to 127.0.0.53query[AAAA] connectivity-check.ubuntu.com from 127.0.0.1forwarded connectivity-check.ubuntu.com to 127.0.0.53 I've found that changing my /etc/resolv.conf to use nameserver 127.0.0.53 causes the spike to dissipate almost instantaneously. However, as that file is updated regularly by Network Manager, I have to do this about once an hour. Configuration: /etc/resolv.conf # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN# 127.0.0.53 is the systemd-resolved stub resolver.# run "systemd-resolve --status" to see details about the actual nameservers.nameserver 127.0.0.1search fios-router.home /etc/NetworkManager/NetworkManager.conf [main]plugins=ifupdown,keyfile[ifupdown]managed=false[device]wifi.scan-rand-mac-address=no /etc/dnsmasq.conf // All default except this at the very end for my wildcard DNSaddress=/asmar.d/127.0.0.1 /run/dnsmasq/resolv.conf nameserver 127.0.0.53 /run/resolvconf/interfaces: lo.dnsmasq : nameserver 127.0.0.1 systemd-resolved : nameserver 127.0.0.53 /etc/resolvconf/interface-order: # interface-order(5)lo.inet6lo.inetlo.@(dnsmasq|pdnsd)lo.!(pdns|pdns-recursor)lotun*tap*hso*em+([0-9])?(_+([0-9]))*p+([0-9])p+([0-9])?(_+([0-9]))*@(br|eth)*([^.]).inet6@(br|eth)*([^.]).ip6.@(dhclient|dhcpcd|pump|udhcpc)@(br|eth)*([^.]).inet@(br|eth)*([^.]).@(dhclient|dhcpcd|pump|udhcpc)@(br|eth)*@(ath|wifi|wlan)*([^.]).inet6@(ath|wifi|wlan)*([^.]).ip6.@(dhclient|dhcpcd|pump|udhcpc)@(ath|wifi|wlan)*([^.]).inet@(ath|wifi|wlan)*([^.]).@(dhclient|dhcpcd|pump|udhcpc)@(ath|wifi|wlan)*ppp** systemd-resolve --status : Global DNS Servers: 127.0.0.1 DNSSEC NTA: 10.in-addr.arpa 16.172.in-addr.arpa 168.192.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa corp d.f.ip6.arpa home internal intranet lan local private testLink 5 (br-b1f5461ac410) Current Scopes: none LLMNR setting: yesMulticastDNS setting: no DNSSEC setting: no DNSSEC supported: noLink 4 (docker0) Current Scopes: none LLMNR setting: yesMulticastDNS setting: no DNSSEC setting: no DNSSEC supported: noLink 3 (wlp62s0) Current Scopes: none LLMNR setting: yesMulticastDNS 
setting: no DNSSEC setting: no DNSSEC supported: noLink 2 (enp61s0) Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6 LLMNR setting: yesMulticastDNS setting: no DNSSEC setting: no DNSSEC supported: no DNS Servers: 8.8.8.8 8.8.4.4 ::1 Questions: How can I resolve this issue while still using my wildcard domain name? Optional : How can I achieve this while using Google DNS? Please do not recommend upping the concurrent dns queries. That is not a solution. SOLVED! See telcoM's DNS crash course (the accepted answer) that led me to the solution See my follow-up & final solution as I experimented with the knowledge gained from that answer
It looks like you may have a dnsmasq process on 127.0.0.1 and a systemd-resolved process on 127.0.0.53 passing queries back and forth between each other, causing a loop. Even dnsmasq alone might be capable of looping, as by default it looks into /etc/resolv.conf to find the real DNS servers to use for the names it does not have information for. Your DNS configuration probably has quite a few layers: first, there is the DNS server information you get from your ISP by DHCP or similar. then, there is NetworkManager , which could be configured to override the information and use dnsmasq instead, but isn't currently configured that way. instead, NetworkManager is configured to use the resolvconf tool to update the real /etc/resolv.conf . And dnsmasq may include a drop-in configuration for resolvconf to override any DNS services received by DHCP and use 127.0.0.1 instead while dnsmasq is running. systemd-resolved may also include a drop-in configuration for resolvconf , but is apparently getting overridden by dnsmasq . What I don't yet understand is where the 127.0.1.1 and 127.0.0.53 come from. Are they perhaps mentioned in the dnsmasq default configuration in Ubuntu? As it says in the comment of /etc/resolv.conf , run this command to see more information on the systemd-resolved configuration: systemd-resolve --status Also check the contents of the /run/resolvconf/interface/ directory: that is where the resolvconf tool collects all the DNS server information it gets from various sources. The /etc/resolvconf/interface-order will determine the order in which each source is checked, until either a loopback address is encountered or 3 DNS servers have been listed for the real /etc/resolv.conf . Since you are using dnsmasq to set up a wildcard domain, you'll want to keep 127.0.0.1 in your /etc/resolv.conf - but you'll want to configure dnsmasq not to use that file, and instead get the DNS servers it should use from somewhere else. If /run/NetworkManager/resolv.conf contains those DNS servers you get from your ISP by DHCP, you can easily use that for dnsmasq by adding this line to its configuration: resolv-file=/run/NetworkManager/resolv.conf This tells dnsmasq where to get DNS information for the things it doesn't already know about. So if you want to use Google DNS, you could configure dnsmasq with resolv-file=/etc/google-dns-resolv.conf and put the DNS configuration lines for Google DNS in the usual format into /etc/google-dns-resolv.conf .
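A sketch of the relevant /etc/dnsmasq.conf fragment, keeping the wildcard domain from the question while pinning the upstream servers (address, resolv-file, no-resolv and server are all standard dnsmasq options):
address=/asmar.d/127.0.0.1
# take upstream servers from NetworkManager's file instead of /etc/resolv.conf
resolv-file=/run/NetworkManager/resolv.conf
# or ignore resolv files entirely and query Google DNS directly:
# no-resolv
# server=8.8.8.8
# server=8.8.4.4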
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/417645", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/265441/" ] }
417,648
Suppose I have set of files fSet: f1, f2, f3, f4, f5, ........... f100 and I set have words wSet: w1, w2, w3, w4, w5, ...........w100 How I may list files which has one or more word from wSet? and further how may I may make a report which not only list files but mentions respective containing words? and further how I may handle if word may have special symbol like ip address?
It looks like you may have dnsmasq process in 127.0.0.1 and systemd-resolved process in 127.0.0.53 passing queries back and forth between each other, causing a loop. Even dnsmasq alone might be capable of looping, as by default it looks into /etc/resolv.conf to find the real DNS servers to use for the names it does not have information for. Your DNS configuration probably has quite many layers: first, there is the DNS server information you get from your ISP by DHCP or similar. then, there is NetworkManager , which could be configured to override the information and use dnsmasq instead, but isn't currently configured that way. instead, NetworkManager is configured to use the resolvconf tool to update the real /etc/resolv.conf . And dnsmasq may include a drop-in configuration for resolvconf to override any DNS services received by DHCP and use 127.0.0.1 instead while dnsmasq is running. systemd-resolved may also include a drop-in configuration for resolvconf , but is apparently getting overridden by dnsmasq . What I don't yet understand is where the 127.0.1.1 and 127.0.0.53 come from. Are they perhaps mentioned in dnsmasq default configuration in Ubuntu? As it says in the comment of /etc/resolv.conf , run this command to see more information on systemd-resolved configuration: systemd-resolve --status Also check the contents of the /run/resolvconf/interface/ directory: that is where the resolvconf tool collects all the DNS server information it gets from various sources. The /etc/resolvconf/interface-order will determine the order in which each source is checked, until either a loopback address is encountered or 3 DNS servers have been listed for real /etc/resolv.conf . Since you are using dnsmasq to set up a wildcard domain, you'll want to keep 127.0.0.1 in your /etc/resolv.conf - but you'll want to configure dnsmasq to not use that file, but instead get the DNS servers it should use from somewhere else. If /run/NetworkManager/resolv.conf contains those DNS servers you get from your ISP by DHCP, you can easily use that for dnsmasq by adding this line to its configuration: resolv-file=/run/NetworkManager/resolv.conf This tells dnsmasq where to get DNS information for those things it don't already know about. So if you want to use Google DNS, you could configure dnsmasq with resolv-file=/etc/google-dns-resolv.conf and put the DNS configuration lines for Google DNS in the usual format to /etc/google-dns-resolv.conf .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/417648", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205994/" ] }
417,672
I'm trying to disable some CPUs of my server. I've found this link: https://www.cyberciti.biz/faq/debian-rhel-centos-redhat-suse-hotplug-cpu/linux-turn-on-off-cpu-core-commands/ , which offers me a method as below: Here is what numactl --hardware gave me: I want to disable all CPUs from 16 to 63, so I write a script named opCPUs.sh as below: #!/bin/bashfor i in {16..63}; do if [[ "$1" == "enable" ]]; then echo 1 > /sys/devices/system/cpu/cpu$i/online elif [[ "$1" == "disable" ]]; then echo 0 > /sys/devices/system/cpu/cpu$i/online else echo 'illegal parameter' fidonegrep "processor" /proc/cpuinfo Then I execute it: ./opCPUs.sh disable and I can see the result of grep in the script: It seems to work. Now I think all of processes should be in CPU 0 - 15 because others have been disabled. So I use the existing processes dbus to verify as below: ps -Lo psr $(pgrep dbus) I get this: The psr tells me in which CPU the process is running, right? If so, I have disabled CPU 60, CPU 52 etc, why they are still here?
Besides @Yves' answer, you actually are able to use the isolcpus kernel parameter. To disable the 4th CPU/core (CPU 3) with Debian or Ubuntu: In /etc/default/grub add isolcpus=3 to GRUB_CMDLINE_LINUX_DEFAULT GRUB_CMDLINE_LINUX_DEFAULT="quiet splash isolcpus=3" Run sudo update-grub Reboot the server. isolcpus — Isolate CPUs from the kernel scheduler. Synopsis isolcpus= cpu_number [, cpu_number ,...] Description Remove the specified CPUs, as defined by the cpu_number values, from the general kernel SMP balancing and scheduler algorithms. The only way to move a process onto or off an "isolated" CPU is via the CPU affinity syscalls. cpu_number begins at 0, so the maximum value is 1 less than the number of CPUs on the system. This option is the preferred way to isolate CPUs. The alternative, manually setting the CPU mask of all tasks in the system, can cause problems and suboptimal load balancer performance. Interestingly enough, one use of this kernel parameter is setting aside a CPU and later pinning a process to it via CPU affinity, thus making sure there are no other user processes running on that CPU. In addition, it can also make the server more stable, giving a guarantee that a particular process with a very high load will have its own CPUs to play with. I have seen Meru doing that with their Linux based controllers before becoming aware of this setup. The associated command to then assign a process to the fourth CPU (CPU 3) is: sudo taskset -cp 3 PID taskset is used to set or retrieve the CPU affinity of a running process given its PID or to launch a new COMMAND with a given CPU affinity. CPU affinity is a scheduler property that "bonds" a process to a given set of CPUs on the system. The Linux scheduler will honor the given CPU affinity and the process will not run on any other CPUs. Note that the Linux scheduler also supports natural CPU affinity: the scheduler attempts to keep processes on the same CPU as long as practical for performance reasons. Therefore, forcing a specific CPU affinity is useful only in certain applications. SUMMARY There are several techniques applied to this question: setting isolcpus=4 in grub and rebooting disables the 5th CPU/CPU 4 permanently for user-land processes; echo 0 > /sys/devices/system/cpu/cpu4/online disables the 5th CPU/CPU 4, which will still keep working for the processes that have already been assigned to it, but no new processes will be assigned to CPU 4 anymore; taskset -c 3 ./MyShell.sh will force MyShell.sh to be assigned to the 4th CPU/CPU 3, whereas the 4th CPU can still accept other user-land processes if isolcpus is not excluding it from doing that. PS. Anecdotally, my best example of using isolcpus / taskset in the field was an SSL frontend for a very busy site that kept going unstable every couple of weeks, where Ansible/ssh would not let me in remotely anymore. I applied the techniques discussed above, and it kept working in a very stable fashion ever since.
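A quick way to verify such a setup (assuming a reasonably recent kernel that exposes the sysfs file, and using a placeholder PID) is:
$ cat /sys/devices/system/cpu/isolated     # lists the CPUs isolated via isolcpus
$ taskset -cp 1234                         # prints the current CPU affinity list of PID 1234
$ ps -o psr= -p 1234                       # shows which CPU that process last ran on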
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/417672", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145824/" ] }
417,676
Using Ubuntu 14 I have a Linux machine where there are two interfaces:eth1: 172.16.20.1ppp0: 192.168.0.2 ppp0 is connected to a device which has a PPP interface (192.168.0.1) and a WAN interface (172.16.20.2). I can verify that this device can reach 172.16.20.1 The problem I am having is if I send a packet using Python on the same machine: client.py import socketcl = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)cl.sendto("Hello", ("172.16.20.1", 5005)) server.py import socketsrv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)srv.bind(("", 5005))while True: data, addr = srv.recvfrom(2048) print("Message: ", data) the script works fine but I cannot see the packet on Wireshark coming out of eth1 (I can only see it when I choose to capture on the lo interface). I assume the OS has detected the packet is for one of its local interface and does not send it through the 192.168.0.2 socket created. When I add the following rules to prevent this from happening: sudo ip route del table local 172.16.20.1 dev eth1sudo ip route add table local 172.16.20.1 dev ppp0sudo ip route flush cache What happens is: I can see the packets on Wireshark now arriving at eth1, the source address is the address of the WAN (172.16.20.2) I cannot see any output from server.py after restarting the program. Ignoring the ppp0 interface and using two ethx interfaces: If I try to run the program in two (client and server) separate machines (without applying the rules), I can see the packets arriving at eth1 in Wireshark, and the output on server.py. If I try to run the program in two separate machines AND I apply the rules above for the ppp0 connection (I have not removed it), I can no longer see any output from server.py but can still see packets arriving on Wireshark. My knowledge of the TCP/IP stack is not good, but it looks like the link layer is no longer forwarding to the application layer?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/417676", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/252696/" ] }
417,724
We have the following example file: tcpmux 1/tcp # TCP port service multiplexertcpmux 1/udp # TCP port service multiplexerrje 5/tcp # Remote Job Entryrje 5/udp # Remote Job Entryecho 7/tcpecho 7/udpdiscard 9/tcp sink nulldiscard 9/udp sink nullsystat 11/tcp userssystat 11/udp usersdaytime 13/tcpdaytime 13/udpqotd 17/tcp quoteqotd 17/udp quotemsp 18/tcp # Message send protocol (historic)msp 18/udp # Message send protocol (historic)chargen 19/tcp ttytst sourcechargen 19/udp ttytst source How may we append the following lines to the beginning of the file? # The latest IANA port assignments can be gotten from# http://www.iana.org/assignments/port-numbers# The Well Known Ports are those from 0 through 1023.# The Registered Ports are those from 1024 through 49151# The Dynamic and/or Private Ports are those from 49152 through 65535## Each line describes one service, and is of the form:## service-name port/protocol [aliases ...] [# comment] So that the file will look like: # The latest IANA port assignments can be gotten from# http://www.iana.org/assignments/port-numbers# The Well Known Ports are those from 0 through 1023.# The Registered Ports are those from 1024 through 49151# The Dynamic and/or Private Ports are those from 49152 through 65535## Each line describes one service, and is of the form:## service-name port/protocol [aliases ...] [# comment]tcpmux 1/tcp # TCP port service multiplexertcpmux 1/udp # TCP port service multiplexerrje 5/tcp # Remote Job Entryrje 5/udp # Remote Job Entryecho 7/tcpecho 7/udpdiscard 9/tcp sink nulldiscard 9/udp sink nullsystat 11/tcp userssystat 11/udp usersdaytime 13/tcpdaytime 13/udpqotd 17/tcp quoteqotd 17/udp quotemsp 18/tcp # Message send protocol (historic)msp 18/udp # Message send protocol (historic)chargen 19/tcp ttytst sourcechargen 19/udp ttytst source The simple solution is to copy the original file to file.bck , append the new lines to the file, and append file.bck to the file. But this isn't an elegant solution.
Relatively elegant solution using POSIX specified file editor ex —at least elegant in the sense that this will handle any arbitrary contents rather than depending on a specific format (trailing backslashes) or a specific absence of format. printf '0r headerfile\nx\n' | ex file-with-contents This will open file-with-contents in ex , read in the full contents of the headerfile at the very top, and then save the modified buffer back to file-with-contents . If performance is a SEVERE concern and the files are huge this may not be the right way for you, but (a) there is no performant general way to prepend data to a file and (b) I don't expect you will be editing your /etc/services file that often. A slightly cleaner syntax (the way I would actually code this): printf '%s\n' '0r headerfile' x | ex file-with-contents A more complicated, but convergent, bit of code that will check whether the beginning of services EXACTLY matches the entirety of header , byte for byte, and IF NOT will then prepend the entire contents of header to services and save the changes, follows. This is fully POSIX compliant. dd if=services bs=1 count="$(wc -c < header)" 2>/dev/null | cmp -s - header || printf '%s\n' '0r header' x | ex services A much simpler version, using GNU cmp 's "-n" option: cmp -sn "$(wc -c <header)" header services || printf '%s\n' '0r header' x | ex services Of course, neither of these is smart enough to check for PARTIAL matches, but that's getting far beyond the ability of a simple one liner, since guesswork would be intrinsically involved.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/417724", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
417,815
rpm has a -i ( --install ) option to install a package rpm has a -U ( --upgrade ) option that will install or upgrade a package The Red Hat documentation indicates that rpm -i is perfectly acceptable. However, all the documentation I've ever seen has recommended using -U , even if the package is being installed for the first time. Why is rpm -U commonly preferred over rpm -i ?
Most documentation suggests -U over -i because -i may fail if the package was already installed, or already had an earlier version installed; while -U will succeed even if "upgrading" from the package not being installed at all. When giving a how-to, as in the case of documentation, it's generally a better idea to give commands with a lower likelihood of a failure state.
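For example, with a made-up package name:
$ sudo rpm -i foo-2.0-1.x86_64.rpm    # may fail if foo is already installed (any version)
$ sudo rpm -U foo-2.0-1.x86_64.rpm    # installs foo if absent, or upgrades an older installed version
which is why how-tos usually show the -U form.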
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/417815", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39263/" ] }
417,832
I am updating bash on our embedded platform from 4.1.9 to the latest (4.4.12), and I am seeing a behaviour change in this simple scenario of passing escaped arguments into a script. Script /tmp/printarg: #! /bin/shecho "ARG |$*|" And I invoke the script like this: bash -c "/tmp/printarg \\"abc\\"" I've tried this on several platforms (native x86_64 Linux) running bash 4.3.42, as well as several embedded platforms (ARM and PPC) running bash 4.1.9 and 4.2.37, and all of these platforms report what I would expect: 38$ bash -c "/tmp/printarg \\"abc\\""ARG |abc| But, when I run this using bash 4.4.12 (native X86 or embedded platforms), I get this: $ bash -c "/tmp/printarg \\"abc\\""ARG |abc\| <<< trailing backslash And if I add a space in the command line between the second escaped quote and the ending quote, then I no longer see the extra backslash: $ bash -c "/tmp/printarg \\"abc\\" "ARG |abc | <<< trailing space, but backslash is gone This feels like a regression. Any thoughts? I also did try enabling the various compat options (compat40, compat41, compat42, compat43) with change.
bash -c "/tmp/printargs \\"abc\\"" Does not escape what you think it does. A backslash-backslash is an escaped backslash, handled by the calling shell — so that is the same as running: /tmp/printargs \abc\ because the double-quotes are not escaped. You could have just written: bash -c '/tmp/printargs \abc\' I'm guessing you actually wanted: bash -c "/tmp/printargs \"abc\"" which escapes the double quotes, passing a quoted "abc" to the bash -c. (I'm guessing the different behavior you're seeing is different versions of bash handling the escaped nothing at end of input differently.) Perl version of printargs (slightly improved behavior): #!/usr/bin/perluse feature qw(say);for (my $i = 0; $i < @ARGV; ++$i) { say "$i: |$ARGV[$i]|";}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/417832", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271044/" ] }
417,838
I have been following this install for LibreNMS https://www.linuxhelp.com/how-to-install-librenms-in-centos/ . Everything has been fine until I finally started the httpd service. It spits out this error. I have a virtual host configured. This is the exact Error message: Could not reliably determine the server's fully qualified domain name using localhost.localdomain. Set the server name directive globally. I'll display the virtualhost in the httpd config file below. NameVirtualHost *:80<VirtualHost *:80>DocumentRoot /opt/librenms/html/ServerName linuxhelp1.comCustomLog /opt/librenms/logs/access_log combinedErrorLog /opt/librenms/logs/error_logAllowEncodedSlashes On<Directory "/opt/librenms/html/">AllowOverride AllOptions FollowSymLinks MultiViews</Directory></VirtualHost>
bash -c "/tmp/printargs \\"abc\\"" Does not escape what you think it does. A backslash-backslash is an escaped backslash, handled by the calling shell — so that is the same as running: /tmp/printargs \abc\ because the double-quotes are not escaped. You could have just written: bash -c '/tmp/printargs \abc\' I'm guessing you actually wanted: bash -c "/tmp/printargs \"abc\"" which escapes the double quotes, passing a quoted "abc" to the bash -c. (I'm guessing the different behavior you're seeing is different versions of bash handling the escaped nothing at end of input differently.) Perl version of printargs (slightly improved behavior): #!/usr/bin/perluse feature qw(say);for (my $i = 0; $i < @ARGV; ++$i) { say "$i: |$ARGV[$i]|";}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/417838", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271049/" ] }
417,855
Linux VM running nginx (or any other light-weight daemon with stable resource usage). The VM is allocated 2GB of memory, with 200-300MB used by the OS and services and the rest for file cache and buffers. In one specific use-case I expect an easy 500MB overhead. Q: Why would this setup need swap space? The standard answer of "To prevent memory exhaustion" doesn't make sense to me here for 2 reasons: 1: the demand for memory is well established and does not need to support an unexpected or sudden significant increase. 2: Swap only delays an OOM situation in any case. The same thing can be accomplished by assigning more memory to the VM in the first place, especially since it's thin provisioned and nobody will miss out on it as long as it's unused. The other common answer, to support hibernation, doesn't apply to a server in a VM. I see no reason for swap on such a server; am I missing something?
You shouldn’t think of “having swap” or not, you should consider your overall memory allocation strategy and determine whether or not swap is necessary. There are two main aspects to this. The primary purpose of swap nowadays isn’t to extend physical memory, it’s to provide a backing store for otherwise non-reclaimable pages ( e.g. memory allocations and anonymous mmap s). If you run without swap, you force the kernel to keep anonymous memory in physical memory, which reduces its ability to cope with varying memory needs. Obviously if you know your workload always fits in the available physical memory, this shouldn't be an issue. The second aspect to consider is the kernel’s overcommit strategy. By default, memory allocations mostly succeed, regardless of the memory load. If you’re trying to control your workload though, it’s often helpful to run in checking mode ( /proc/sys/vm/overcommit_memory set to 2); then commits are limited to the sum of swap, and physical memory not allocated for huge pages adjusted by the overcommit ratio (which is 50% by default). If you run without swap, nothing can ever allocate more than half the physical memory by default; adding swap increases that limit linearly, with less risk than increasing the overcommit ratio. (This often trips people up when attempting to run large-ish JVMs on typical server setups.) I mentioned there are two main aspects, which are described above, but it occurs to me that there can be another point to consider in some cases (it’s really a variant of the first point): tmpfs file systems can easily get your system in trouble if there’s no swap... For more on all this, I recommend reading the proc(5) manpage ’s sections on overcommit, and Chris Down’s recent blog post in defence of swap .
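If you want to experiment with the strict accounting mode described above, the knobs look roughly like this (the ratio value is only an example):
$ cat /proc/sys/vm/overcommit_memory      # 0 = heuristic (default), 2 = strict commit accounting
$ sudo sysctl vm.overcommit_memory=2
$ sudo sysctl vm.overcommit_ratio=80      # percentage of RAM counted towards the commit limit
$ grep -i commit /proc/meminfo            # CommitLimit and Committed_AS show the effect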
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/417855", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/227660/" ] }
417,906
I am running a fresh install of CentOS 7 GNOME so I could RDP from Windows.  I followed the “Connect to GNOME desktop environment via XRDP” instructions , but when I connect I get an additional login that says authentication is required to create a color profile . How do I remove this additional login? In an attempt to solve this problem I tried a solution at “Griffon's IT Library” , but it did not work because the link is a lot more than just a solution to this problem. I pasted the solution below. When you log in to your system via a remote session, you will see this message popping up.  You can simply cancel and you will be able to proceed till the next time you login and start a new session. To avoid this prompt, we will need to change the polkit configuration. Using admin privileges, create a file called 02-allow-colord.conf under the following directory /etc/polkit-1/localauthority.conf.d/ The file should contains [sic] the following instructions and you should not be prompted anymore with such authentication request while remoting into your system polkit.addRule(function(action, subject) { if ((action.id == “org.freedesktop.color-manager.create-device” || action.id == “org.freedesktop.color-manager.create-profile” || action.id == “org.freedesktop.color-manager.delete-device” || action.id == “org.freedesktop.color-manager.delete-profile” || action.id == “org.freedesktop.color-manager.modify-device” || action.id == “org.freedesktop.color-manager.modify-profile”) && subject.isInGroup(“{group}”)) { return polkit.Result.YES; }});
I had the same problem and found a different work-around here: https://github.com/TurboVNC/turbovnc/issues/47#issuecomment-412005377 This variant is claimed to work independent of authentication scheme (e.g. LDAP). Create /etc/polkit-1/localauthority/50-local.d/color.pkla (note: .pkla extension is required) with the following contents: [Allow colord for all users]Identity=unix-user:*Action=org.freedesktop.color-manager.create-device;org.freedesktop.color-manager.create-profile;org.freedesktop.color-manager.delete-device;org.freedesktop.color-manager.delete-profile;org.freedesktop.color-manager.modify-device;org.freedesktop.color-manager.modify-profile;org.freedesktop.packagekit.system-sources-refreshResultAny=yesResultInactive=yesResultActive=yes Worked for me. update See next comment in linked github thread...18.04 users may want to try the above answer but with the following changes: [Allow colord for all users]Identity=unix-user:*Action=org.freedesktop.color-manager.create-device;org.freedesktop.color-manager.create-profile;org.freedesktop.color-manager.delete-device;org.freedesktop.color-manager.delete-profile;org.freedesktop.color-manager.modify-device;org.freedesktop.color-manager.modify-profile;org.freedesktop.packagekit.system-sources-refreshResultAny=noResultInactive=noResultActive=yes
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/417906", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270255/" ] }
417,956
I disabled most of my entries in /proc/acpi/wakeup/ to make sure only the power button and the laptop lid can resume my system, not the mouse or keyboard. The problem is: every time I reboot, the settings are reset for some reason. Is there a way to make these changes permanent? There are some workarounds out there that just put the commands into a script hooked to some wakeup routine, but is there really no other solution? I'm using a Debian/Gnome Windows 10 dual boot laptop.
For a USB mouse or keyboard, you can use a udev rule to make the setting permanent. First, look up the USB vendor ID of your mouse/keyboard using lsusb . For my mouse, it's 046d : Bus 001 Device 006: ID 046d :c52b Logitech, Inc. Unifying Receiver Then create a "rules" file like my /etc/udev/rules.d/logitech.rules , only replace "046d" with the vendor ID of your own device: ACTION=="add", SUBSYSTEM=="usb", DRIVERS=="usb", ATTRS{idVendor}=="046d", ATTR{power/wakeup}="disabled"
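After adding the rule, reloading udev and re-triggering events should apply it without a reboot (this assumes a systemd/udev based system):
$ sudo udevadm control --reload-rules
$ sudo udevadm trigger
$ grep . /sys/bus/usb/devices/*/power/wakeup    # check the resulting wakeup settings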
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/417956", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233767/" ] }
418,058
Input: Note: 2 columns separated by a tab, regular spaces separating words in column 2. 1 the mouse is dead2 hit the wall3 winter lasts forever Wanted output: 1 the1 mouse1 is1 dead2 hit2 the2 wall3 winter3 lasts3 forever Is awk the way to go for this?
Well, the first field is $1 , NF holds the number of fields on the line, we can access the fields with $i where i is a variable, and loops work almost like in C. So: $ awk '{for (i = 2; i <= NF; i++) printf "%s\t%s\n", $1, $i} ' < blah1 the1 mouse... (This doesn't differentiate between space and tab as field separator.)
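If you want to honour the tab between the two columns explicitly (so only the spaces inside column 2 separate words), a variant along these lines should also work:
$ awk -F'\t' '{n = split($2, w, " "); for (i = 1; i <= n; i++) printf "%s\t%s\n", $1, w[i]}' < blah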
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/418058", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83889/" ] }
418,060
I have a file named /tmp/urlFile where each line represents a URL. I am trying to read from the file as follows: cat "/tmp/urlFile" | while read urldo echo $urldone If the last line doesn't end with a newline character, that line won't be read. I was wondering why. Is it possible to read all the lines, regardless of whether they end with a newline or not?
You'd do: while IFS= read -r url || [ -n "$url" ]; do printf '%s\n' "$url"done < url.list (effectively, that loop adds back the missing newline on the last (non-)line). See also: Why is using a shell loop to process text considered bad practice? Understand "IFS= read -r line"? Why is printf better than echo?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/418060", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
418,117
For example, for managing a disk partition for another system where the user exists. I know I can simply create a user temporarily but I find this question interesting.
Yes, you can chown to a numerical UID that does not have a corresponding user.
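For example (UID/GID 1005 is arbitrary and does not need to exist in /etc/passwd on this system):
# chown 1005:1005 /mnt/otherdisk/somefile
# ls -ln /mnt/otherdisk/somefile     # -n prints the numeric UID/GID instead of names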
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/418117", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136914/" ] }
418,131
I'm running Pop_OS on a System 76 laptop. It's running Gnome and for some reason, after re-installing the OS on a new drive (the original SSD borked on me), the font in the notifications is HUGE! We're talking 72pt here! Anyways, after a couple hours of looking around the interwebs and poking around the system, I've found nothing! A possible cause is a Gnome extension that I installed and then removed. I've tried removing the extensions I installed. I've also tried adding and re-removing the extensions that I tried. No luck. Here is an image of what I'm dealing with. I'd just like to reset the notifications back to default.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/418131", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271287/" ] }
418,141
After updating my Arch Linux system yesterday, I started getting an error similar to this: Failed to set locale. Fix your system. This specific error comes from trying to run snapper. However, here is another related error: bsdcpio: Failed to set default locale perl gives a similar warning which I will paste below. It is not limited to any specific application; it appears to be a system-wide issue. I did not get these errors prior to yesterday's update. Furthermore, I do not get the errors in a virtual console. I only get them when inside X (KDE). For example I get the error above if I run a snapper ls command in konsole, but I do not get any error if I run the same snapper ls command in a virtual console. My other Arch systems, which are nearly identical, do not have this issue. My first attempts at troubleshooting were as follows. check /etc/locale.conf run locale-gen check output of locale see if snapper runs without an error I see no errors in locale.conf but running local-gen does not resolve the issue. Here is the relevant output: # localectl list-localesen_US.utf8# grep -v "^#" /etc/locale.confLANG=en_US.UTF-8LC_CTYPE="en_US.UTF-8"LC_NUMERIC="en_US.UTF-8"LC_TIME="en_US.UTF-8"LC_COLLATE="en_US.UTF-8"LC_MONETARY="en_US.UTF-8"LC_MESSAGES="en_US.UTF-8"LC_PAPER="en_US.UTF-8"LC_NAME="en_US.UTF-8"LC_ADDRESS="en_US.UTF-8"LC_TELEPHONE="en_US.UTF-8"LC_MEASUREMENT="en_US.UTF-8"LC_IDENTIFICATION="en_US.UTF-8"LC_ALL=# localelocale: Cannot set LC_ALL to default locale: No such file or directoryLANG=en_US.UTF-8LC_CTYPE=en_US.UTF-8LC_NUMERIC=en_US.UTF-8LC_TIME=en_GB.UTF-8LC_COLLATE=en_US.UTF-8LC_MONETARY=en_US.UTF-8LC_MESSAGES=en_US.UTF-8LC_PAPER=en_US.UTF-8LC_NAME=en_US.UTF-8LC_ADDRESS=en_US.UTF-8LC_TELEPHONE=en_US.UTF-8LC_MEASUREMENT=en_US.UTF-8LC_IDENTIFICATION=en_US.UTF-8LC_ALL=# locale-genGenerating locales...en_US.UTF-8... doneGeneration complete.# localelocale: Cannot set LC_ALL to default locale: No such file or directoryLANG=en_US.UTF-8LC_CTYPE=en_US.UTF-8LC_NUMERIC=en_US.UTF-8LC_TIME=en_GB.UTF-8LC_COLLATE=en_US.UTF-8LC_MONETARY=en_US.UTF-8LC_MESSAGES=en_US.UTF-8LC_PAPER=en_US.UTF-8LC_NAME=en_US.UTF-8LC_ADDRESS=en_US.UTF-8LC_TELEPHONE=en_US.UTF-8LC_MEASUREMENT=en_US.UTF-8LC_IDENTIFICATION=en_US.UTF-8LC_ALL=# locale -aCen_US.utf8POSIX Here's perl's warning: perl: warning: Setting locale failed.perl: warning: Please check that your locale settings: LANGUAGE = "", LC_ALL = (unset), LC_MEASUREMENT = "en_US.UTF-8", LC_PAPER = "en_US.UTF-8", LC_MONETARY = "en_US.UTF-8", LC_NAME = "en_US.UTF-8", LC_COLLATE = "en_US.UTF-8", LC_CTYPE = "en_US.UTF-8", LC_ADDRESS = "en_US.UTF-8", LC_NUMERIC = "en_US.UTF-8", LC_MESSAGES = "en_US.UTF-8", LC_TELEPHONE = "en_US.UTF-8", LC_IDENTIFICATION = "en_US.UTF-8", LC_TIME = "en_GB.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system.perl: warning: Falling back to a fallback locale ("en_US.UTF-8"). The following line appears when I run locale inside Konsole (in X), but not when I run locale in a virtual console: locale: Cannot set LC_ALL to default locale: No such file or directory I can run the snapper ls command in a virtual console without errors. As far as I know, Arch doesn't have a /etc/default/locale . That file is not present on any of my Arch machines. Rebooting the system did not help.
One of your locale settings (namely, LC_TIME ) is set to a locale that you have not generated (namely, en_GB.UTF-8 ). The error will go away if you enable that locale in /etc/locale.gen and regenerate the locales. Since the setting differs from that set in /etc/locale.conf , you may have placed an override in one of your startup scripts. Since the error does not occur in a virtual console, I suspect .xinitrc or .xprofile . However, if you are using a full desktop environment, those often have their own settings, including locale settings.
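On Arch the fix would look something like this (uncomment or add the missing locale, regenerate, then re-check):
# echo 'en_GB.UTF-8 UTF-8' >> /etc/locale.gen    # or uncomment the existing en_GB line
# locale-gen
$ locale    # the "Cannot set LC_ALL" message should be gone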
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/418141", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15010/" ] }
418,145
The sudoers file seems to have an error: sudosudo: >>> /etc/sudoers: syntax error near line 56 <<<sudo: parse error in /etc/sudoers near line 56sudo: no valid sudoers sources found, quittingsudo: unable to initialize policy plugin I can connect to EC2 via SSH as ec2-user but cannot edit the sudoers file in order to fix the error. Tried 'visudo': visudovisudo: /etc/sudoers: Permission deniedvisudo: /etc/sudoers: Permission denied Tried 'pkexec visudo': pkexec visudoError executing command as another user: No authentication agent was found. What can I do at this point in order to fix the /etc/sudoers file? Thanks!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/418145", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271301/" ] }
418,172
When I type commands on the command line, I see the symbol < after typing 75 characters. /developer/home/aravind.sreeram> klklkjlkjljlkjlkjlkjlkjlkj < I've tried stty cols 200 but it did not work. Can someone please tell me how I can see the full command beyond 75 characters?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/418172", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/69047/" ] }
418,181
I have a string built with multiple option:value pairs separated with | . The option:value can be one of the two: [[:alnum:]]{3}:all or [[:alnum:]]{3}:FQDN where FQDN is the DNS name of a host, for example: 647:all|1bc:all|d1f:all|vf4:www.host.com|vk4:all|k22:www.another.com|bbd:all|opo:all How do I build the regex testing that this string matches the rule?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/418181", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/243694/" ] }
418,195
As described here , Samba servers on Linux provide the user's /home directory as a shared folder automatically. How can I prevent this behavior? In the following, the directory containing the home folders is shared using the users share name. Each user's home directory is created as a subdirectory on the \\server\users\ share, such as \\server\users\user_name . This is the same format used in a Microsoft Windows environment and requires no additional work to set up. I only want to share an explicitly declared shared folder, but not the whole /home/username directory of my username. How can I adjust this?
Per @Nasir Riley's answer - That will keep the share from showing to anyone browsing the server for shares. However, the share is still available if you know that it exists. It would be much better to simply remove the [homes] share from the smb.conf file completely, or if you think you may want it in the future comment it out, and restart the samba service.
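A minimal smb.conf along those lines (share name, path and user are just placeholders) would drop the [homes] section entirely and declare only the share you want:
[global]
   workgroup = WORKGROUP

[projects]
   path = /srv/samba/projects
   read only = no
   valid users = myuser
followed by restarting Samba, e.g. sudo systemctl restart smbd on Debian/Ubuntu or sudo systemctl restart smb on RHEL-like systems.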
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/418195", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/241507/" ] }
418,244
case "$1" inall) echo "$1" ;;[a-z][a-z][a-z][a-z][a-z][a-z]) echo "$1" ;;*) printf 'Invalid: %s\n' "$3" exit 1 ;;esac With this the only input accepted is all, and 6 characters. It won't accept 4 characters or more than 6. What I want to do here is to only allow characters, not digits or symbols, but of unlimited length. What is the correct syntax? Thanks
You can do this with the standard pattern match by looking for any of the non-allowed characters, and rejecting the input if you find any. Or you can use extended globs ( extglob ) or regexes and explicitly make sure the whole string consists of characters that are allowed. #/bin/bashshopt -s extglob globasciirangescase "$1" in *([a-zA-Z])) echo "case ok" ;; esac[[ "$1" = *([a-zA-Z]) ]] && echo " [[ ok"[[ "$1" =~ ^[a-zA-Z]*$ ]] && echo "rege ok" globasciiranges prevents [a-z] from matching accented letters, but the regex match doesn't obey it. With the regex, you'd need to set LC_COLLATE=C to prevent matching them. All of those allow the empty string. To prevent that, change the asterisks to plusses ( * to + ).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/418244", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/119404/" ] }
418,265
When I try to use sudo apt-get update, I get the following error: E: The repository 'http://de.archive.ubuntu.com/ubuntu zesty Release' does no longer have a Release file.N: Updating from such a repository can't be done securely, and is therefore disabled by default.N: See apt-secure(8) manpage for repository creation and user configuration details. what can I do to update Ubuntu correctly? I'm on Ubuntu 17.04
Ubuntu 17.04 (Zesty) has reached the end of its life (see the releases page on the Ubuntu wiki for details), so it’s no longer available from the repositories. You have two options: upgrade to 17.10 (this is the better solution); replace de.archive.ubuntu.com in /etc/apt/sources.list with old-releases.ubuntu.com . The second option will allow apt-get update to finish, but you won’t get any new updates. In particular, you won’t get updates addressing Meltdown and Spectre .
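The second option boils down to something like the following (adjust the mirror name if yours differs):
$ sudo sed -i 's/de\.archive\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
$ sudo sed -i 's/security\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
$ sudo apt-get update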
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/418265", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271413/" ] }
418,271
If I have a delimited file with many lines and columns ( data.txt ): 346 dfd asw 34565 sd wdew 34667 ffg wew 23473 sa as 21533 jhf qwe 54 and another file with line numbers that I want to extract ( positions.txt ) 358 How do I use the positions.txt file to extract those positions from data.txt ? This is the result I would expect for this example: 667 ffg wew 23533 jhf qwe 54
Simply with awk : awk 'NR==FNR{ pos[$1]; next }FNR in pos' positions.txt data.txt NR==FNR{ ... } - processing the 1st input file (i.e. positions.txt ): pos[$1] - accumulating positions(record numbers) set as pos array keys next - jump to next record FNR in pos - while processing the 2nd input file data.txt ( FNR indicates how many records have been read from the current input file). Print record only if current record number FNR is in array of positions pos (search on keys) Sample output: 667 ffg wew 23533 jhf qwe 54...
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/418271", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165231/" ] }
418,275
I'm interested to know which process in Linux actually gets information from the Network layer on the receiving side, applies all the TCP-related logic(TCP-level error check, segments acknowledgement, etc.) and puts it into receive buffer of the waiting connection? On the other hand, which process receives information which was sent to the socket by the host application in order to process it and send to the network layer? Maybe I don't understand this process correctly... Please, help
In terms of code, it is code that exists in kernel space that actually handles the implementation of TCP upward from the NIC drivers. The Linux kernel is aware of your network hardware and abstracts it into a set of link adapters. The TCP/UDP/IP stack is then aware of these "link" devices and is further abstracted to Linux/Unix level concepts such as sockets. Processes access this functionality through system calls to the kernel. While the concept of a process in Linux is isolated or gated from the kernel, it is technically true that each process is able to access this functionality through system calls. This means that when data is received on the NIC, it's the kernel handling TCP. When an application receives data out of the buffer, that process is handling TCP, although only in a gated way: through its initiation of a system call that runs in kernel space/memory. Because Linux is preemptive, even calls into kernel space are part of how the kernel keeps track of that process's share of time, so you might technically consider TCP to be a part of every process. But if you consider only code that belongs to that process's memory space (user space applications), then only the kernel handles TCP. Keep in mind that Linux/Unix incorporates some socket functionality which abstracts TCP/IP into libraries that are linked in when compiling an application and thus would be in its memory space, such as the memory structures used to represent IP addresses.
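One way to see this boundary in practice is to trace only the network-related system calls a process makes; everything below those calls runs in kernel context (example command, output omitted):
$ strace -f -e trace=network curl -s http://example.com > /dev/null
The socket(), connect(), sendto()/recvfrom() calls shown by strace are the only points where the process touches the TCP machinery.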
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/418275", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271573/" ] }
418,286
In bash, I want to delete files starting with a # . I tried rm #* , but got message: rm: missing operand . So how to achieve this?
The octothorpe ( # ), or pound sign, is a comment character, described in the POSIX grammar here as saying: If the current character is a '#', it and all subsequent characters up to, but excluding, the next <newline> shall be discarded as a comment. The <newline> that ends the line is not considered part of the comment. So you need to quote or escape the pound sign so that it is not interpreted as a comment: rm '#'* or rm "#"* or rm \#*
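An alternative is to make sure the # is not at the start of a word, since the comment rule only applies there; prefixing the current directory does that:
$ rm ./#*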
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/418286", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271429/" ] }
418,341
I have some code similar to this: while read -r col1 col2 col3 col4 col5 col6 col7 col8 TRASH; do echo -e "${col1}\n${col2}\n${col3}\n${col4}\n${col5}\n${col6}\n"done< <(ll | tail -n+2 | head -2) (I'm not actually using ls / ll but I believe this redacted example displays the same issue I am having) The problem is I need a conditional statement if ll | tail -n+2 | head -2 fails so I'm trying to create a mapfile instead and then read through it in a script. The mapfile gets created properly but I don't know how to redirect it in order to be properly read. code if ! mapfile -t TEST_ARR < <(ll | tail -n+2 | head -2); then exit 1fiwhile read -r col1 col2 col3 col4 col5 col6 col7 col8 TRASH; do echo -e "${col1}\n${col2}\n${col3}\n${col4}\n${col5}\n${col6}\n"done<<<"${TEST_ARR[@]}" mapfile contents declare -a TEST_ARR=( [0]="drwxr-xr-x@ 38 wheel 1.2K Dec 7 07:10 ./" [1]="drwxr-xr-x 33 wheel 1.0K Jan 18 07:05 ../") output $ while read -r col1 col2 col3 col4 col5 col6 col7 col8 TRASH; do> echo -e "${col1}\n${col2}\n${col3}\n${col4}\n${col5}\n${col6}\n"> done<<<"${TEST_ARR[@]}"[email protected] String redirect is clearly wrong in this case but I'm not sure how else I can redirect my array.
It seems to me that you're wanting to loop through your array, reading the elements into columns: for ele in "${TEST_ARR[@]}"do read -r col1 col2 col3 col4 col5 col6 col7 col8 TRASH <<< "$ele" echo -e "${col1}\n${col2}\n${col3}\n${col4}\n${col5}\n${col6}\n"done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/418341", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237982/" ] }
418,383
What are the standard ownership settings for files in the .gnupg folder? After doing sudo chown u:u * mine now looks like this: drwx------ 2 u u 4,0K jan 18 22:53 crls.ddrwx------ 2 u u 4,0K jan 18 22:33 openpgp-revocs.ddrwx------ 2 u u 4,0K jan 18 22:33 private-keys-v1.d-rw------- 1 u u 0 sep 28 02:12 pubring.gpg-rw-rw-r-- 1 u u 2,4K jan 18 22:33 pubring.kbx-rw------- 1 u u 32 jan 18 22:28 pubring.kbx~-rw------- 1 u u 600 jan 19 22:15 random_seed-rw------- 1 u u 0 sep 28 02:13 secring.gpgsrwxrwxr-x 1 u u 0 jan 20 10:20 S.gpg-agent-rw------- 1 u u 1,3K jan 18 23:47 trustdb.gpg However, before that, originally at least pubring.gpg , secring.gpg and random_seed were owned by root.
The .gnupg directory and its contents should be owned by the user whose keys are stored therein and who will be using them. There is in principle no problem with a root-owned .gnupg directory in your home directory, if root is the only user that you use GnuPG as (in that case one could argue that the directory should live in /root or that you should do things differently). I can see nothing wrong with the file permissions in the file listing that you have posted. The .gnupg folder itself should additionally be inaccessible by anyone other than the owner and user of the keys. The reason why the files may initially have been owned by root could be because GnuPG was initially run as root or by a process executing as root (maybe some package manager software or similar). GnuPG does permission checks and will warn you if any of the files have unsafe permissions. These warnings may be turned off (don't do that): --no-permission-warning Suppress the warning about unsafe file and home directory ( --homedir ) permissions. Note that the permission checks that GnuPG performs are not intended to be authoritative, but rather they simply warn about certain common permission problems. Do not assume that the lack of a warning means that your system is secure. Note that the warning for unsafe --homedir permissions cannot be suppressed in the gpg.conf file, as this would allow an attacker to place an unsafe gpg.conf file in place, and use this file to suppress warnings about itself. The --homedir permissions warning may only be suppressed on the command line. The --homedir directory referred to above is the .gnupg directory, usually at $HOME/.gnupg unless changed by using --homedir or setting GNUPGHOME . Additionally, the file storing the secret keys will be changed to read/write only by default by GnuPG, unless this behaviour is turned off (don't do that either): --preserve-permissions Don't change the permissions of a secret keyring back to user read/write only. Use this option only if you really know what you are doing. This applies to GnuPG 2.2.3, and the excerpts above are from the gpg2 manual on an OpenBSD system.
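If ownership or permissions have already been disturbed (as with the root-owned files mentioned in the question), resetting them to what GnuPG expects can be done roughly like this, run for the user who owns the keys (sudo is only needed while some files still belong to root):
$ sudo chown -R "$USER":"$USER" ~/.gnupg
$ find ~/.gnupg -type d -exec chmod 700 {} +
$ find ~/.gnupg -type f -exec chmod 600 {} +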
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/418383", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36186/" ] }
418,398
I was working on a Bash script to help partition a hard drive correctly, and I came across a strange problem where I had to append a number to a variable. It took me a while to get the outcome right since I'm not very experienced with Bash but I managed to make something temporary. Here is an example of what I was building: #!/bin/bashdevName="/dev/sda"devTarget=$devName\3echo "$devTarget" The variable devName was /dev/sda , but I also needed a variable set for the 3rd partition later on in the script so I used the \ symbol, which let me add the number 3 to the /dev/sda to make the output /dev/sda3 Even though the \ symbol worked, I was just wondering if this is the right way to do something like this. I used this because in Python it's used to ignore the following character for quotations and such, but I wanted to do the opposite in this situation, so surprisingly it worked. If this isn't the best way to go about adding to variables, could someone please show an example of the best way to do this in Bash.
The safe way to interpolate a variable into a string is to use ${var_name} . For example: #!/bin/bashdevName="/dev/sda"devTarget="${devName}3"echo "$devTarget" This way Bash has no doubts of what the variable to interpolate is. Btw, interesting how devTarget=$devName\3 works. I'm not sure why.
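As a side note on why the backslash version happens to work: the backslash is not a valid character in a variable name, so the shell stops reading the name at devName, and an unquoted \3 is simply an escaped 3, i.e. the literal character 3. A quick way to see the difference between quoting and escaping here:
$ devName=/dev/sda
$ echo "$devName\3" "$devName"\3
/dev/sda\3 /dev/sda3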
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/418398", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
418,400
I have a machine dual booted with Arch Linux and Ubuntu (16.04). I have recently started using the Kakoune text editor , and noticed that its startup time is drastically different depending on which OS I am using. However I believe the underlying problem is not due to kakoune directly. On startup, kakoune runs a bunch of shell scripts to enable integration with x11 and tmux, git, syntax highlighting/colorschemes, etc. This can be disabled to just load the 'vanilla' editor using the -n flag. The command: kak -e q will start kakoune, run all startup scripts and exit immediately. On Arch: time kak -e q takes 1 second time kak -n -e q (no shell scripts) finishes in under 20 millis . On Ubuntu: time kak -e q takes about 450 millis time kak -n -e q is again under 20 millis After trimming the fat and removing some of the startup scripts I did see an improvement on both OS's proportional to the amount removed. I ran some benchmarks with UnixBench and found that the major differences between the two systems are seen in the 'process creation' and 'shell scripts' tests. The shells scripts test measures the number of times per minute a process can start and reap a set of one, two, four and eight concurrent copies of a shell scripts where the shell script applies a series of transformation to a data file. Here is the relevant output. Units in 'loops per second' more is better: Process creation (1 parallel copy of tests)Arch: 3,822Ubuntu: 5,297Process creation (4 parallel copies of tests)Arch: 18,935Ubuntu: 30,341Shell Scripts (1 concurrent) (1 parallel copy of tests)Arch: 972Ubuntu: 5,141Shell Scripts (1 concurrent) (4 parallel copies of tests)Arch: 7,697Ubuntu: 24,942Shell Scripts (8 concurrent) (1 parallel copy of tests)Arch: 807Ubuntu: 2,257Shell Scripts (8 concurrent) (4 parallel copies of tests)Arch: 1,289Ubuntu: 3,001 As you can see the Ubuntu system performs much better. I have tested using different login shells, terminal emulators, recompiling kakoune, removing unneeded software to clean up the disk, etc. I am certain this is the bottleneck. My question is: what can I do to further investigate this and improve performance of the Arch Linux system to match Ubuntu? Should I look into tuning the kernel? Additional notes: both systems use the same type of filesystem (ext4) I tend to use the Archlinux system more, and have noticed performance degrading over time Arch is on /dev/sda1 and is ~200GB. Ubuntu is on /dev/sda2, ~500GB. 1TB HDD. Arch uname -a : Linux ark 4.14.13-1-ARCH #1 SMP PREEMPT Wed Jan 10 11:14:50 UTC 2018 x86_64 GNU/Linux Ubuntu uname -a : Linux sierra 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux Thanks
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/418400", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271503/" ] }
418,401
I'm installing Debian 9 on an HP ProLiant DL180. When I boot from a USB drive, it opens grub2 and when I type boot it gives an error : you need to load kernel first .
From grub-rescue type set then hit the Tab , it will help you to set the first parameters , e,g.: set prefix=(hd0,gpt2)/boot/grubset root=(hd0,gpt2)insmod normalnormal you need to load kernel first To load the kernel forward with the following commands: insmod linuxlinux /vmlinuz root=/dev/sda2initrd /initrd.imgboot Change /dev/sda2 with your root partition , change gpt2 with msdos if you don't have a GUID partition table. To correctly set your boot parameters, see Ubuntu documentation : search and set
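Once the system is booted this way, it is usually worth reinstalling GRUB and regenerating its configuration so the fix survives a reboot (the device name is an example, and on UEFI systems the grub-install invocation differs):
# grub-install /dev/sda
# update-grub                              # Debian/Ubuntu wrapper
# grub-mkconfig -o /boot/grub/grub.cfg     # equivalent where update-grub is absent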
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/418401", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/215550/" ] }
418,424
I want to configure lamp stack for my ubuntu distro, but I have some troubles. After sudo apt-get install lamp-server^ I get: Reading package lists... DoneBuilding dependency tree Reading state information... DoneNote, selecting 'libhttp-message-perl' for task 'lamp-server'Note, selecting 'libencode-locale-perl' for task 'lamp-server'Note, selecting 'php7.0-cli' for task 'lamp-server'Note, selecting 'mysql-client-5.7' for task 'lamp-server'Note, selecting 'libapache2-mod-php' for task 'lamp-server'Note, selecting 'rename' for task 'lamp-server'Note, selecting 'mysql-server-5.7' for task 'lamp-server'Note, selecting 'php-common' for task 'lamp-server'Note, selecting 'libaprutil1' for task 'lamp-server'Note, selecting 'mysql-server' for task 'lamp-server'Note, selecting 'php7.0-opcache' for task 'lamp-server'Note, selecting 'libcgi-fast-perl' for task 'lamp-server'Note, selecting 'libwrap0' for task 'lamp-server'Note, selecting 'libhttp-date-perl' for task 'lamp-server'Note, selecting 'perl-modules-5.22' for task 'lamp-server'Note, selecting 'liblwp-mediatypes-perl' for task 'lamp-server'Note, selecting 'libfcgi-perl' for task 'lamp-server'Note, selecting 'libcgi-pm-perl' for task 'lamp-server'Note, selecting 'libaprutil1-dbd-sqlite3' for task 'lamp-server'Note, selecting 'php7.0-common' for task 'lamp-server'Note, selecting 'libaio1' for task 'lamp-server'Note, selecting 'libio-html-perl' for task 'lamp-server'Note, selecting 'ssl-cert' for task 'lamp-server'Note, selecting 'apache2-data' for task 'lamp-server'Note, selecting 'libperl5.22' for task 'lamp-server'Note, selecting 'libapr1' for task 'lamp-server'Note, selecting 'libaprutil1-ldap' for task 'lamp-server'Note, selecting 'libhtml-tagset-perl' for task 'lamp-server'Note, selecting 'mysql-client-core-5.7' for task 'lamp-server'Note, selecting 'php7.0-json' for task 'lamp-server'Note, selecting 'php7.0-readline' for task 'lamp-server'Note, selecting 'tcpd' for task 'lamp-server'Note, selecting 'liblua5.1-0' for task 'lamp-server'Note, selecting 'mysql-common' for task 'lamp-server'Note, selecting 'libhtml-template-perl' for task 'lamp-server'Note, selecting 'libtimedate-perl' for task 'lamp-server'Note, selecting 'apache2-bin' for task 'lamp-server'Note, selecting 'perl' for task 'lamp-server'Note, selecting 'apache2' for task 'lamp-server'Note, selecting 'php-mysql' for task 'lamp-server'Note, selecting 'apache2-utils' for task 'lamp-server'Note, selecting 'libhtml-parser-perl' for task 'lamp-server'Note, selecting 'libapache2-mod-php7.0' for task 'lamp-server'Note, selecting 'liburi-perl' for task 'lamp-server'Note, selecting 'mysql-server-core-5.7' for task 'lamp-server'Note, selecting 'php7.0-mysql' for task 'lamp-server'libaio1 is already the newest version (0.3.110-2).libapache2-mod-php is already the newest version (1:7.0+35ubuntu6).libapr1 is already the newest version (1.5.2-3).libaprutil1 is already the newest version (1.5.4-1build1).libaprutil1-dbd-sqlite3 is already the newest version (1.5.4-1build1).libaprutil1-ldap is already the newest version (1.5.4-1build1).libcgi-fast-perl is already the newest version (1:2.10-1).libcgi-pm-perl is already the newest version (4.26-1).libencode-locale-perl is already the newest version (1.05-1).libfcgi-perl is already the newest version (0.77-1build1).libhtml-parser-perl is already the newest version (3.72-1).libhtml-tagset-perl is already the newest version (3.20-2).libhtml-template-perl is already the newest version (2.95-2).libhttp-date-perl is already the newest version 
(6.02-1).libhttp-message-perl is already the newest version (6.11-1).libio-html-perl is already the newest version (1.001-1).liblua5.1-0 is already the newest version (5.1.5-8ubuntu1).liblwp-mediatypes-perl is already the newest version (6.02-1).libtimedate-perl is already the newest version (2.3000-2).liburi-perl is already the newest version (1.71-1).libwrap0 is already the newest version (7.6.q-25).php-common is already the newest version (1:35ubuntu6).php-mysql is already the newest version (1:7.0+35ubuntu6).rename is already the newest version (0.20-4).ssl-cert is already the newest version (1.0.37).tcpd is already the newest version (7.6.q-25).apache2 is already the newest version (2.4.18-2ubuntu3.5).apache2-bin is already the newest version (2.4.18-2ubuntu3.5).apache2-data is already the newest version (2.4.18-2ubuntu3.5).apache2-utils is already the newest version (2.4.18-2ubuntu3.5).libapache2-mod-php7.0 is already the newest version (7.0.22-0ubuntu0.16.04.1).libperl5.22 is already the newest version (5.22.1-9ubuntu0.2).mysql-client-5.7 is already the newest version (5.7.20-0ubuntu0.16.04.1).mysql-client-core-5.7 is already the newest version (5.7.20-0ubuntu0.16.04.1).mysql-common is already the newest version (5.7.20-0ubuntu0.16.04.1).mysql-server is already the newest version (5.7.20-0ubuntu0.16.04.1).mysql-server-5.7 is already the newest version (5.7.20-0ubuntu0.16.04.1).mysql-server-core-5.7 is already the newest version (5.7.20-0ubuntu0.16.04.1).perl is already the newest version (5.22.1-9ubuntu0.2).perl-modules-5.22 is already the newest version (5.22.1-9ubuntu0.2).php7.0-cli is already the newest version (7.0.22-0ubuntu0.16.04.1).php7.0-common is already the newest version (7.0.22-0ubuntu0.16.04.1).php7.0-json is already the newest version (7.0.22-0ubuntu0.16.04.1).php7.0-mysql is already the newest version (7.0.22-0ubuntu0.16.04.1).php7.0-opcache is already the newest version (7.0.22-0ubuntu0.16.04.1).php7.0-readline is already the newest version (7.0.22-0ubuntu0.16.04.1).0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.3 not fully installed or removed.After this operation, 0 B of additional disk space will be used.Do you want to continue? [Y/n] ySetting up mysql-server-5.7 (5.7.20-0ubuntu0.16.04.1) ...Renaming removed key_buffer and myisam-recover options (if present)Job for mysql.service failed because the control process exited with error code. 
See "systemctl status mysql.service" and "journalctl -xe" for details.invoke-rc.d: initscript mysql, action "start" failed.● mysql.service - MySQL Community Server Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled) Active: activating (auto-restart) (Result: exit-code) since sob 2018-01-20 10:55:17 CET; 17ms ago Process: 4551 ExecStartPost=/usr/share/mysql/mysql-systemd-start post (code=exited, status=0/SUCCESS) Process: 4550 ExecStart=/usr/sbin/mysqld (code=exited, status=1/FAILURE) Process: 4542 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS) Main PID: 4550 (code=exited, status=1/FAILURE)sty 20 10:55:17 len-machine systemd[1]: Failed to start MySQL Community Server.sty 20 10:55:17 len-machine systemd[1]: mysql.service: Unit entered failed s....sty 20 10:55:17 len-machine systemd[1]: mysql.service: Failed with result 'e....Hint: Some lines were ellipsized, use -l to show in full.dpkg: error processing package mysql-server-5.7 (--configure): subprocess installed post-installation script returned error exit status 1Setting up oracle-java8-installer (8u151-1~webupd8~0) ...Using wget settings from /var/cache/oracle-jdk8-installer/wgetrcDownloading Oracle Java 8...--2018-01-20 10:55:18-- http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gzResolving download.oracle.com (download.oracle.com)... 104.104.142.192Connecting to download.oracle.com (download.oracle.com)|104.104.142.192|:80... connected.HTTP request sent, awaiting response... 302 Moved TemporarilyLocation: https://edelivery.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz [following]--2018-01-20 10:55:18-- https://edelivery.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gzResolving edelivery.oracle.com (edelivery.oracle.com)... 2a02:26f0:d8:39a::2d3e, 2a02:26f0:d8:389::2d3e, 104.81.108.164Connecting to edelivery.oracle.com (edelivery.oracle.com)|2a02:26f0:d8:39a::2d3e|:443... connected.HTTP request sent, awaiting response... 302 Moved TemporarilyLocation: http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz?AuthParam=1516442239_54c9d78d4d9e3a8f11df3af6b410580b [following]--2018-01-20 10:55:19-- http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz?AuthParam=1516442239_54c9d78d4d9e3a8f11df3af6b410580bConnecting to download.oracle.com (download.oracle.com)|104.104.142.192|:80... connected.HTTP request sent, awaiting response... 404 Not Found2018-01-20 10:55:20 ERROR 404: Not Found.download failedOracle JDK 8 is NOT installed.dpkg: error processing package oracle-java8-installer (--configure): subprocess installed post-installation script returned error exit status 1dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.7; however: Package mysql-server-5.7 is not configured yet.dpkg: error processing package mysql-server (--configure): dependency problems - leaving unconfiguredNo apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: mysql-server-5.7 oracle-java8-installer mysql-serverE: Sub-process /usr/bin/dpkg returned an error code (1) I don't have an idea what is going on. Do you some tips how to solve that?
From grub-rescue, type set and then hit the Tab key; it will help you to set the first parameters, e.g.:

set prefix=(hd0,gpt2)/boot/grub
set root=(hd0,gpt2)
insmod normal
normal

You need to load the kernel first. To load the kernel, go on with the following commands:

insmod linux
linux /vmlinuz root=/dev/sda2
initrd /initrd.img
boot

Change /dev/sda2 to your root partition, and use msdos2 instead of gpt2 if you don't have a GUID partition table. To correctly set your boot parameters, see the Ubuntu documentation: search and set.
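If the system then boots normally, a common follow-up (a sketch only; it assumes the boot drive is /dev/sda, so adjust to your disk) is to reinstall GRUB and regenerate its configuration so the fix survives the next reboot:

$ sudo grub-install /dev/sda
$ sudo update-grub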
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/418424", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/190580/" ] }
418,429
If I have two files (with single columns), one like so (file1) 34678992102180blue23454 And the second file (file2) 23566769102200 How do I find elements that are common in both files (intersection)? The expected output in this example is 67102 Note that number of items (lines) in each file differs. Numbers and strings may be mixed. They may not be necessarily sorted. Each item only appears once. UPDATE: Time check based on some of the answers below. # generate some data>shuf -n2000000 -i1-2352452 > file1>shuf -n2000000 -i1-2352452 > file2#@ilkkachu>time (join <(sort "file1") <(sort "file2") > out1)real 0m15.391suser 0m14.896ssys 0m0.205s>head out111010010001000001#@Hauke>time (grep -Fxf "file1" "file2" > out2)real 0m7.652suser 0m7.131ssys 0m0.316s>head out2104786787265213704631890721807745#@Roman>time (comm -12 <(sort "file1") <(sort "file2") > out3)real 0m13.533suser 0m13.140ssys 0m0.195s>head out311010010001000001#@ilkkachu>time (awk 'NR==FNR { lines[$0]=1; next } $0 in lines' "file1" "file2" > out4)real 0m4.587suser 0m4.262ssys 0m0.195s>head out4104786787265213704631890721807745#@Cyrus >time (sort file1 file2 | uniq -d > out8)real 0m16.106suser 0m15.629ssys 0m0.225s>head out811010010001000001#@Sundeep>time (awk 'BEGIN{while( (getline k < "file1")>0 ){a[k]}} $0 in a' file2 > out5)real 0m4.213suser 0m3.936ssys 0m0.179s>head out5104786787265213704631890721807745#@Sundeep>time (perl -ne 'BEGIN{ $h{$_}=1 while <STDIN> } print if $h{$_}' <file1 file2 > out6)real 0m3.467suser 0m3.180ssys 0m0.175s>head out6104786787265213704631890721807745 The perl version was the fastest followed by awk. All output files had the same number of rows. For the sake of comparison, I have sorted the output numerically so that the output is identical. #@ilkkachu>time (join <(sort "file1") <(sort "file2") | sort -k1n > out1)real 0m17.953suser 0m5.306ssys 0m0.138s#@Hauke>time (grep -Fxf "file1" "file2" | sort -k1n > out2)real 0m12.477suser 0m11.725ssys 0m0.419s#@Roman>time (comm -12 <(sort "file1") <(sort "file2") | sort -k1n > out3)real 0m16.273suser 0m3.572ssys 0m0.102s#@ilkkachu>time (awk 'NR==FNR { lines[$0]=1; next } $0 in lines' "file1" "file2" | sort -k1n > out4)real 0m8.732suser 0m8.320ssys 0m0.261s#@Cyrus >time (sort file1 file2 | uniq -d > out8)real 0m19.382suser 0m18.726ssys 0m0.295s#@Sundeep>time (awk 'BEGIN{while( (getline k < "file1")>0 ){a[k]}} $0 in a' file2 | sort -k1n > out5)real 0m8.758suser 0m8.315ssys 0m0.255s#@Sundeep>time (perl -ne 'BEGIN{ $h{$_}=1 while <STDIN> } print if $h{$_}' <file1 file2 | sort -k1n > out6)real 0m7.732suser 0m7.300ssys 0m0.310s>head out112345 All outputs are now identical.
In awk, this loads the first file fully in memory:

$ awk 'NR==FNR { lines[$0]=1; next } $0 in lines' file1 file2
67
102

Or, if you want to keep track of how many times a given line appears:

$ awk 'NR==FNR { lines[$0] += 1; next } lines[$0] {print; lines[$0] -= 1}' file1 file2

join could do that too, though it does require the input files to be sorted, so you need to do that first, and doing it loses the original ordering:

$ join <(sort file1) <(sort file2)
102
67
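As a small usage variant (a sketch building on the same idiom, not something from the answer above): negate the membership test to get the complement, i.e. the lines of file2 that are not in file1:

$ awk 'NR==FNR { lines[$0]=1; next } !($0 in lines)' file1 file2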
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/418429", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165231/" ] }
418,451
I'd like to specify the column-separator

ps -o "%a|%p"      # separator |

and the column-width

ps -o cmd:50,pid   # width 50 for cmd

in one command. Is this possible? It is not really about the column-width as such: I'd like to have the full-length command even if it is not the last column.
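A rough sketch of one way to combine a custom separator with an untruncated command column (an illustration only, assuming procps ps; it post-processes the output rather than relying on a ps option): put the command last so it is never cut off, then rewrite the separator with sed:

$ ps -eo pid,args --no-headers | sed 's/^ *\([0-9]*\) /\1|/'

This prints pid|full-command-line for every process.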
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/418451", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/215038/" ] }
418,480
I keep getting this error whenever I run sudo apt update : E: The repository 'http://archive.ubuntu.com/ubuntu wily Release' does not have a Release file.N: Updating from such a repository can't be done securely, and is therefore disabled by default.N: See apt-secure(8) manpage for repository creation and user configuration details. I'm running Ubuntu 16.04. lsb_release -aDistributor ID: UbuntuDescription: Ubuntu 16.04.3 LTSRelease: 16.04Codename: xenial Contents of /etc/apt/sources.list : # deb cdrom:[Ubuntu 16.04.1 LTS _Xenial Xerus_ - Release amd64 (20160719)]/ xenial main restricted# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to# newer versions of the distribution.deb http://dz.archive.ubuntu.com/ubuntu/ xenial main restricted# deb-src http://dz.archive.ubuntu.com/ubuntu/ xenial main restricted## Major bug fix updates produced after the final release of the## distribution.deb http://dz.archive.ubuntu.com/ubuntu/ xenial-updates main restricted# deb-src http://dz.archive.ubuntu.com/ubuntu/ xenial-updates main restricted## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu## team, and may not be under a free licence. Please satisfy yourself as to## your rights to use the software. Also, please note that software in## universe WILL NOT receive any review or updates from the Ubuntu security## team.deb http://dz.archive.ubuntu.com/ubuntu/ xenial universe# deb-src http://dz.archive.ubuntu.com/ubuntu/ xenial universedeb http://dz.archive.ubuntu.com/ubuntu/ xenial-updates universe# deb-src http://dz.archive.ubuntu.com/ubuntu/ xenial-updates universe## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu## security team.deb http://dz.archive.ubuntu.com/ubuntu/ xenial multiverse# deb-src http://dz.archive.ubuntu.com/ubuntu/ xenial multiversedeb http://dz.archive.ubuntu.com/ubuntu/ xenial-updates multiverse# deb-src http://dz.archive.ubuntu.com/ubuntu/ xenial-updates multiverse## N.B. 
software from this repository may not have been tested as## extensively as that contained in the main release, although it includes## newer versions of some applications which may provide useful features.## Also, please note that software in backports WILL NOT receive any review## or updates from the Ubuntu security team.deb http://dz.archive.ubuntu.com/ubuntu/ xenial-backports main restricted universe multiverse# deb-src http://dz.archive.ubuntu.com/ubuntu/ xenial-backports main restricted universe multiverse## Uncomment the following two lines to add software from Canonical's## 'partner' repository.## This software is not part of Ubuntu, but is offered by Canonical and the## respective vendors as a service to Ubuntu users.# deb http://archive.canonical.com/ubuntu xenial partnerdeb-src http://archive.canonical.com/ubuntu xenial partnerdeb http://security.ubuntu.com/ubuntu xenial-security main restricted# deb-src http://security.ubuntu.com/ubuntu xenial-security main restricteddeb http://security.ubuntu.com/ubuntu xenial-security universe# deb-src http://security.ubuntu.com/ubuntu xenial-security universedeb http://security.ubuntu.com/ubuntu xenial-security multiverse# deb-src http://security.ubuntu.com/ubuntu xenial-security multiversedeb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable# deb-src [arch=amd64] https://download.docker.com/linux/ubuntu xenial stabledeb http://deb.torproject.org/torproject.org stretch maindeb-src http://deb.torproject.org/torproject.org stretch main Contents of /etc/apt/sources.list.d/* : ### THIS FILE IS AUTOMATICALLY CONFIGURED #### You may comment out this entry, but any other modifications may be lost.deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main### THIS FILE IS AUTOMATICALLY CONFIGURED #### You may comment out this entry, but any other modifications may be lost.deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable maindeb http://ppa.launchpad.net/js-reynaud/kicad-4/ubuntu xenial main# deb-src http://ppa.launchpad.net/js-reynaud/kicad-4/ubuntu xenial maindeb http://ppa.launchpad.net/js-reynaud/kicad-4/ubuntu xenial main# deb-src http://ppa.launchpad.net/js-reynaud/kicad-4/ubuntu xenial maindeb http://ppa.launchpad.net/kivy-team/kivy/ubuntu xenial main# deb-src http://ppa.launchpad.net/kivy-team/kivy/ubuntu xenial main# deb-src http://ppa.launchpad.net/kivy-team/kivy/ubuntu xenial maindeb http://ppa.launchpad.net/kivy-team/kivy/ubuntu xenial main# deb-src http://ppa.launchpad.net/kivy-team/kivy/ubuntu xenial main# deb-src http://ppa.launchpad.net/kivy-team/kivy/ubuntu xenial maindeb http://ppa.launchpad.net/linuxgndu/sqlitebrowser-testing/ubuntu xenial main# deb-src http://ppa.launchpad.net/linuxgndu/sqlitebrowser-testing/ubuntu xenial maindeb http://ppa.launchpad.net/linuxgndu/sqlitebrowser-testing/ubuntu xenial main# deb-src http://ppa.launchpad.net/linuxgndu/sqlitebrowser-testing/ubuntu xenial maindeb http://ppa.launchpad.net/overcoder/hexchat/ubuntu xenial main# deb-src http://ppa.launchpad.net/overcoder/hexchat/ubuntu xenial maindeb http://ppa.launchpad.net/overcoder/hexchat/ubuntu xenial main# deb-src http://ppa.launchpad.net/overcoder/hexchat/ubuntu xenial maindeb https://dl.bintray.com/sbt/debian /deb https://dl.bintray.com/sbt/debian /deb [arch=amd64] https://repo.skype.com/deb stable main### TeamViewer DEB repository list### NOTE: Manual changes to this file### - prevent it from being updated by TeamViewer package updates### - will be lost after using the 'teamviewer repo' command### The original 
file can be restored with this command:### cp /opt/teamviewer/tv_bin/script/teamviewer.list /etc/apt/sources.list.d/teamviewer.list### which has the same effect as 'teamviewer repo default'### NOTE: It is preferred to use the following commands to edit this file:### teamviewer repo - show current repository configuration### teamviewer repo default - restore default configuration### teamviewer repo disable - disable the repository### teamviewer repo main [stable] - make all TeamViewer packages available (default)### teamviewer repo tv13 [stable] - make TeamViewer 13 packages available### stable omit preview and beta releases### Choose stable main to receive updates for TeamViewer 13 and upcoming major releases### Choose preview main to receive early updates for TeamViewer 13 and to receive major beta releases### Choose stable tv13 to receive updates for TeamViewer 13### Choose preview tv13 to receive early updates for TeamViewer 13deb http://linux.teamviewer.com/deb stable maindeb http://linux.teamviewer.com/deb preview main# deb http://linux.teamviewer.com/deb stable tv13# deb http://linux.teamviewer.com/deb preview tv13deb http://ppa.launchpad.net/webupd8team/tor-browser/ubuntu xenial main# deb-src http://ppa.launchpad.net/webupd8team/tor-browser/ubuntu xenial maindeb http://ppa.launchpad.net/webupd8team/tor-browser/ubuntu xenial main# deb-src http://ppa.launchpad.net/webupd8team/tor-browser/ubuntu xenial maindeb http://archive.ubuntu.com/ubuntu wily main universedeb http://archive.ubuntu.com/ubuntu wily main universedeb http://ppa.launchpad.net/wireshark-dev/stable/ubuntu xenial main# deb-src http://ppa.launchpad.net/wireshark-dev/stable/ubuntu xenial maindeb http://ppa.launchpad.net/wireshark-dev/stable/ubuntu xenial main# deb-src http://ppa.launchpad.net/wireshark-dev/stable/ubuntu xenial main# channel for the xenial (16.04) partner channel# #:description:This channel contains the partner software for xenialdeb http://archive.canonical.com/ubuntu xenial partner# channel for the xenial (16.04) partner channel# #:description:This channel contains the partner software for xenialdeb http://archive.canonical.com/ubuntu xenial partner
You’re getting that error because Wily (15.10) has reached the end of its life and has therefore been archived. If you’re really running 16.04, you don’t need Wily repositories and you can remove them from your /etc/apt/sources.list file, or whichever file in /etc/apt/sources.list.d refers to Wily. This will avoid the error you’re getting when running apt-get update. Alternatively, you can use old-releases.ubuntu.com instead of archive.ubuntu.com if you absolutely need the Wily repositories for some reason. If you’re still running 15.10, however, you should upgrade to 16.04.
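As a concrete sketch (the file name under /etc/apt/sources.list.d/ is a placeholder; use whatever the grep below reports on your system), you can locate the Wily lines and comment them out:

$ grep -rn wily /etc/apt/sources.list /etc/apt/sources.list.d/
$ sudo sed -i '/wily/s/^/# /' /etc/apt/sources.list.d/SOME-FILE.list
$ sudo apt-get update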
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/418480", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271575/" ] }
418,509
I SSHed to my server and ran wget -r -np zzz.aaa/bbb/ccc and it started working. Then my Internet connection (at my home) got interrupted, and I got worried, assuming that wget had been HUPped because the ssh connection was lost and therefore the terminal had died. But then I SSHed to my server again and realized that it was still running, putting the output in wget.log and downloading stuff. Can someone please explain to me what might have happened here? This is what ps gives me:

  PID %CPU %MEM    VSZ    RSS TTY      STAT START   TIME COMMAND
32283  0.6 29.4 179824 147088 ?        S    14:00   1:53 wget -r -np zzz.aaa/bbb/ccc

What does the question mark ? in the TTY column mean?
Programs (and scripts) can choose to ignore most signals, except a few like KILL. The HUP signal can be caught and ignored if the software so wishes. This is from src/main.c of the wget sources (version 1.19.2):

/* Hangup signal handler.  When wget receives SIGHUP or SIGUSR1, it
   will proceed operation as usual, trying to write into a log file.
   If that is impossible, the output will be turned off.  */

A bit further down, the signal handler is installed:

  /* Setup the signal handler to redirect output when hangup is
     received.  */
  if (signal(SIGHUP, SIG_IGN) != SIG_IGN)
    signal(SIGHUP, redirect_output_signal);

So it looks like wget is not ignoring the HUP signal; instead it chooses to continue processing with its output redirected to the log file. Requested in comments: the meaning of the ? in the TTY column of the ps output in the question is that the wget process is no longer associated with a terminal/TTY. The TTY went away when the SSH connection went down.
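wget happens to install its own SIGHUP handler, as shown above; for programs that don't, the usual precaution (a generic sketch, not something from this question's setup) is to start them immune to hangups or detach them afterwards:

$ nohup wget -r -np zzz.aaa/bbb/ccc > wget.log 2>&1 &

or, for a job already running in an interactive bash session:

$ disown -h %1

nohup makes the child ignore SIGHUP, and disown -h tells bash not to send SIGHUP to that job when the shell itself receives one.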
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/418509", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/231067/" ] }
418,570
I have a file that I want to create from the command line, using the cat tool (or something similar). The text in question is multi-line and is in a certain format (YAML), which I want to maintain. Is there a way to write such a file using a one-line command?
$ cat > test.yaml << EOF
Line 1
Line 2
Line 3
EOF
$ cat test.yaml

The > redirection is what creates the test.yaml file; everything between << EOF and the closing EOF line is written into it.
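One extra detail worth noting for YAML (a sketch; test.yaml and the keys are just example names, not from the answer): quote the here-document delimiter if the text contains $ signs or backticks that must be written literally; indentation inside the body is preserved either way:

$ cat > test.yaml << 'EOF'
key: value
path: $HOME/data
nested:
  item: 1
EOF

With the quoted 'EOF', $HOME is written to the file as-is instead of being expanded by the shell.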
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/418570", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271555/" ] }
418,582
I set MaxAuthTries to 1 on my Linux machine. Then I tried to ssh into my Linux machine from a different machine on my local network but it failed saying "Too many authentication failures". I'm assuming this is because I had some failures earlier while I was setting things up, and they still count towards the total. The man page says:

MaxAuthTries
    Specifies the maximum number of authentication attempts permitted per connection. Once the number of failures reaches half this value, additional failures are logged. The default is 6.

What is considered a connection? Does that mean you only get MaxAuthTries from a certain IP address? Is this referring to a TCP connection? How can I kill the connection so I can make a new one and try to ssh again?

https://linux.die.net/man/5/sshd_config
In the case of SSH, a connection is one established connection to sshd's TCP port (usually port 22). Once sshd stops accepting further authentication attempts, it closes the connection, and at this point the connection is done. Before a user gets to make an authentication attempt, the SSH protocol requires the negotiation of encryption and other protocol options, establishment of session keys and exchange of host keys. So each new connection requires a non-trivial bit of work: a storm of SSH connection attempts from multiple sources could certainly be used to DoS a server.

An authentication attempt is one attempt of any authentication method that is currently enabled in the sshd configuration. For example:

- if the client offers an SSH key for authentication, each offered key counts as one attempt;
- if the Kerberos/GSSAPI authentication method is enabled, seeing if the client can be authenticated with it counts as one attempt;
- each password typed into the password authentication prompt obviously counts as one.

The first two can cause the situation you're experiencing: if you set MaxAuthTries to one and Kerberos/GSSAPI authentication is enabled, it may eat up the single attempt before you even get to try password authentication. Likewise, if your SSH client has an authentication key available, but you haven't added your public key to the destination system's ~/.ssh/authorized_keys for the destination user, the public-key authentication attempt will eat up your single attempt and you won't even get to try password authentication. pam_unix, the PAM library that normally handles password authentication, enforces a delay of two seconds after a failed authentication attempt by default.

If your primary threat is password-guessing worms and bots on other compromised systems in the internet, reducing MaxAuthTries may be a bad move: since a bot won't tire, it will always reconnect and try again. Each attempt requires you to spend some CPU capacity on SSH protocol negotiations. You'll want to primarily ensure that the bot won't succeed, and secondarily that the bot will waste as much of its time as possible on that one existing connection, with minimum cost to you. Allowing multiple authentication attempts over one connection but answering... very... slowly... will do exactly that. This is also why sshd will request a password from the client even if password authentication is completely disabled: the prompt is completely fake, and the client will be rejected no matter what password is entered. But the client has no way to know that for sure. Of course, if you allow too many authentication attempts over one connection, the bot may eventually terminate the connection from its side, if the bot programmer has implemented a timeout to limit the effectiveness of such a "tar-pit defense".
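As a practical aside (these are standard OpenSSH client options, shown as a sketch rather than anything from the configuration above; user@server and the key path are placeholders), you can keep a stray key offer or GSSAPI negotiation from consuming your single attempt by telling the client exactly which method to use:

$ ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no -o GSSAPIAuthentication=no user@server

or, when key authentication is what you want, offer only the intended key:

$ ssh -o IdentitiesOnly=yes -i ~/.ssh/id_ed25519 user@server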
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/418582", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/259474/" ] }
418,616
Why does this simple Python script not print the real Red Hat version?

version = os.system( ' cat /etc/redhat-release | awk \'{print $7}\' ' )
print ("my version is " ,version)

When I run it I get:

7.2
('my version is ', 0)

Why do I get 0 instead of 7.2, and how can I actually get the version (7.2) from the command instead of using os.system?
os.system() just runs the process; it doesn't capture the output. The documentation says: "If command generates any output, it will be sent to the interpreter standard output stream." The return value is the exit code of the process: "On Unix, the return value is the exit status of the process encoded in the format specified for wait()." You'll need to use something like subprocess.check_output() or subprocess.Popen() directly to capture the output:

>>> arch = subprocess.check_output("uname -a | awk '{print $9}'", shell=True)
>>> arch
'x86_64\n'
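Applied to the script in the question, a minimal sketch (it assumes /etc/redhat-release has the version in its 7th field, as the question's awk command does):

import subprocess

version = subprocess.check_output(
    "awk '{print $7}' /etc/redhat-release", shell=True
).decode().strip()
print("my version is", version)

check_output() returns the command's standard output (as bytes on Python 3), so decoding it and stripping the trailing newline gives the bare version string.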
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/418616", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/246468/" ] }
418,621
The following variables are used to get the positional parameters: $1, $2, $3, etc., as well as $@ and $#. But they are used for both the positional parameters of the script and the positional parameters of a function. When I use these variables inside a function, they give me the positional parameters of the function. Is there a way to get the positional parameters of the script from inside a function?
No, not directly, since the function parameters mask them. But in Bash or ksh, you could just assign the script's arguments to a separate array, and use that.

#!/bin/bash
ARGV=("$@")

foo() {
    echo "number of args: ${#ARGV[@]}"
    echo "second arg: ${ARGV[1]}"
}

foo x y z

Note that the numbering for the array starts at zero, so $1 goes to ${ARGV[0]} etc.
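Another common pattern (just an illustrative sketch, not something the answer above relies on; the function name is arbitrary) is to forward the script's arguments explicitly when calling the function, so the function's own $1, $2, ... match the script's:

#!/bin/bash

show_args() {
    # these are the script's arguments only because the call below forwards them
    echo "script got $# args; first arg: $1"
}

show_args "$@"

This avoids the extra array, but you have to remember to pass "$@" at every call site.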
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/418621", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271680/" ] }
418,765
I got two NICs on the server side, eth0 ? 192.168.8.140 and eth1 ? 192.168.8.142. The client sends data to 192.168.8.142, and I expect iftop to show the traffic for eth1, but it does not. All networks go through eth0, so how can I test the two NICs? Why does all the traffic go through eth0 instead of eth1? I expected I could get 1 Gbit/s per interface. What's wrong with my setup or configuration? Server ifconfig eth0 Link encap:Ethernet HWaddr 00:00:00:19:26:B0 inet addr:192.168.8.140 Bcast:0.0.0.0 Mask:255.255.252.0 inet6 addr: 0000::0000:0000:fe19:26b0/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:45287446 errors:0 dropped:123343 overruns:2989 frame:0 TX packets:3907747 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:66881007720 (62.2 GiB) TX bytes:261053436 (248.9 MiB) Memory:f7e00000-f7efffffeth1 Link encap:Ethernet HWaddr 00:00:00:19:26:B1 inet addr:192.168.8.142 Bcast:0.0.0.0 Mask:255.255.255.255 inet6 addr: 0000::0000:0000:fe19:26b1/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:19358 errors:0 dropped:511 overruns:0 frame:0 TX packets:14 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1772275 (1.6 MiB) TX bytes:1068 (1.0 KiB) Memory:f7c00000-f7cfffff Server side # Listen for incomming from 192.168.8.142nc -v -v -n -k -l 192.168.8.142 8000 | pv > /dev/nullListening on [192.168.8.142] (family 0, port 8000)Connection from 192.168.8.135 58785 received! Client # Send to 192.168.8.142time yes | pv |nc -s 192.168.8.135 -4 -v -v -n 192.168.8.142 8000 >/dev/nullConnection to 192.168.8.142 8000 port [tcp/*] succeeded! Server side $ iftop -i eth0interface: eth0IP address is: 192.168.8.140TX: cumm: 6.34MB peak: 2.31Mb rates: 2.15Mb 2.18Mb 2.11MbRX: 2.55GB 955Mb 874Mb 892Mb 872MbTOTAL: 2.56GB 958Mb 877Mb 895Mb 874Mb$ iftop -i eth1interface: eth1IP address is: 192.168.8.142TX: cumm: 0B peak: 0b rates: 0b 0b 0bRX: 4.51KB 3.49Kb 3.49Kb 2.93Kb 2.25KbTOTAL: 4.51KB 3.49Kb 3.49Kb 2.93Kb 2.25Kb$ ip link show eth02: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 00:00:00:19:26:b0 brd ff:ff:ff:ff:ff:ff$ ip link show eth13: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 00:00:00:19:26:b1 brd ff:ff:ff:ff:ff:ff
There are two possible design models for a TCP/IP network stack: a strong host model and a weak host model. You're expecting behavior that would match the strong host model. Linux is designed to use the weak host model. In general the weak host model is more common as it reduces the complexity of the routing code and thus might offer better performance. Otherwise the two host models are just different design principles: neither is inherently better than the other. Basically, the weak host model means that outgoing traffic will be sent out the first interface listed in the routing table that matches the IP address of the destination (or selected gateway, if the destination is not reachable directly), without regard to the source IP address . This is basically why it's generally inadvisable to use two separate physical interfaces if you need two IP addresses on the same network segment. Instead assign two IP addresses for one interface (IP aliases: e.g. eth1 = 192.168.8.142 and eth1:0 = 192.168.8.140). If you need more bandwidth than a single interface can provide, bond (or team, if applicable) two or more interfaces together, and then run both IPs on the bond/team. By tweaking a number of sysctl settings and using the "advanced routing" functionality to set up independent routing tables for each NIC, it is possible to make Linux behave like a strong-host-model system. But that is a very special configuration, and I would recommend thinking twice before implementing it. See the answers at Linux Source Routing, Strong End System Model / Strong Host Model? if you really need it.
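For the single-interface, two-address setup recommended above, a minimal sketch with iproute2 (assuming the /22 netmask shown in the question's ifconfig output and that eth1 is the interface you keep) would be:

$ sudo ip addr add 192.168.8.140/22 dev eth1

and then remove that address from eth0 (or take eth0 down) so the same address is not configured on two interfaces at once.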
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/418765", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47774/" ] }
418,769
The sed command is not replacing anything when run from a bash script under Debian, but it works on the command line. Works on the command line:

sed -i 's|/dev/disk/by-label/SR6D4|/dev/disk/by-label/SR4D4|g' /etc/my/config.xml

Not working in the bash script; it does not replace the string:

from="SR6D4"
to="SR4D4"
path_from="/dev/disk/by-label/${from}"
path_to="/dev/disk/by-label/${to}"

echo "sed -i 's|${path_from}|${path_to}|g' $file"
sed -i 's|${path_from}|${path_to}|g' $file

Why doesn't it replace the string in the file when sed is run from the bash script?
Shell variables won't get expanded in single quotes. (It's kind of what single quotes are for.)

sed -i "s|${path_from}|${path_to}|g" "$file"

should work better. (I've added double quotes around $file just in case your filename ever contains spaces.)
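Putting the fix into a complete script (a sketch; the $file assignment is assumed to be the config path from the question's first example, so adjust it to your setup):

#!/bin/bash
from="SR6D4"
to="SR4D4"
path_from="/dev/disk/by-label/${from}"
path_to="/dev/disk/by-label/${to}"
file="/etc/my/config.xml"

# double quotes let the shell expand the variables before sed sees the expression
sed -i "s|${path_from}|${path_to}|g" "$file"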
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/418769", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208148/" ] }