If I execute the following simple script:

#!/bin/bash
printf "%-20s %s\n" "Früchte und Gemüse" "foo"
printf "%-20s %s\n" "Milchprodukte" "bar"
printf "%-20s %s\n" "12345678901234567890" "baz"

It prints:

Früchte und Gemüse foo
Milchprodukte        bar
12345678901234567890 baz

that is, text with umlauts (such as ü) is "shrunk" by one character per umlaut. Certainly, I have some wrong setting somewhere, but I am not able to figure out which one that could be. This occurs if the file's encoding is UTF-8. If I change its encoding to latin-1, the alignment is correct, but the umlauts are rendered wrong:

Fr�chte und Gem�se   foo
Milchprodukte        bar
12345678901234567890 baz
POSIX requires printf's %-20s to count those 20 in terms of bytes, not characters, even though that makes little sense as printf is to print text, formatted (see discussion at the Austin Group (POSIX) and bash mailing lists). The printf builtin of bash and most other POSIX shells honour that. zsh ignores that silly requirement (even in sh emulation) so printf works as you'd expect there. Same for the printf builtin of fish (not a POSIX-like shell).

The ü character (U+00FC), when encoded in UTF-8, is made of two bytes (0xc3 and 0xbc), which explains the discrepancy.

$ printf %s 'Früchte und Gemüse' | wc -mcL
     18      20      18

That string is made of 18 characters, is 18 columns wide (-L being a GNU wc extension to report the display width of the widest line in the input) but is encoded on 20 bytes. In zsh or fish, the text would be aligned correctly.

Now, there are also characters that have 0-width (like combining characters such as U+0308, the combining diaeresis) or have double-width like in many Asiatic scripts (not to mention control characters like Tab), and even zsh wouldn't align those properly. Example, in zsh:

$ printf '%3s|\n' u ü $'u\u308' $'\u1100'
  u|
  ü|
 ü|
  ᄀ|

In bash:

$ printf '%3s|\n' u ü $'u\u308' $'\u1100'
  u|
 ü|
ü|
ᄀ|

ksh93 has a %Ls format specification to count the width in terms of display width.

$ printf '%3Ls|\n' u ü $'u\u308' $'\u1100'
  u|
  ü|
  ü|
 ᄀ|

That still doesn't work if the text contains control characters like TAB (how could it? printf would have to know how far apart the tab stops are in the output device and what position it starts printing at). It does work by accident with backspace characters (like in roff output where X (bold X) is written as X\bX) though, as ksh93 considers all control characters as having a width of -1.

Other options

In zsh, you can use its padding parameter expansion flags (l for left-padding, r for right-padding), which when combined with the m flag consider the display width of characters (as opposed to the number of characters in the string):

$ () { printf '%s|\n' "${(ml[3])@}"; } u ü $'u\u308' $'\u1100'
  u|
  ü|
  ü|
 ᄀ|

With expand:

printf '%s\t|\n' u ü $'u\u308' $'\u1100' | expand -t3

That works with some expand implementations (not GNU's though).

On GNU systems, you could use GNU awk whose printf counts in chars (not bytes, not display-widths, so still not OK for the 0-width or 2-width characters, but OK for your sample):

gawk 'BEGIN {for (i = 1; i < ARGC; i++) printf "%-3s|\n", ARGV[i]}' u ü $'u\u308' $'\u1100'

If the output goes to a terminal, you can also use cursor positioning escape sequences. Like:

forward21=$(tput cuf 21)
printf '%s\r%s%s\n' \
  "Früchte und Gemüse" "$forward21" "foo" \
  "Milchprodukte" "$forward21" "bar" \
  "12345678901234567890" "$forward21" "baz"
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/350240", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6479/" ] }
350,246
I'm working on a bash script to copy files from a single USB drive to multiple others. I'm currently using rsync, which copies from the source to a single destination, going through all of the output drives in a loop one at a time:

for line in $(cat output_drives_list); do
    rsync -ah --progress --delete mountpoints/SOURCE/ mountpoints/$line/
done

I'm trying to optimize the process to get maximum use of the USB bandwidth, avoiding the bottleneck of a single drive's write speed. Is it possible to do something like rsync, but with multiple output directories, that will write to all output drives at once, but read only once from the input? I guess that some of this is already taken care of by the system cache, but that only optimizes reads. If I run multiple rsync processes in parallel, this might optimize the write speed, but I'm also afraid it'll butcher the read speed. Do I need to care about single-read when copying in parallel?
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/350246", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67203/" ] }
350,315
I have to sort the following list with a shell script and make the latest version appear on the bottom or top. How would I do that with shell tools only?

release-5.0.0.rc1
release-5.0.0.rc2
release-5.0.0
release-5.0.1
release-5.0.10
release-5.0.11
release-5.0.13
release-5.0.14
release-5.0.15
release-5.0.16
release-5.0.17
release-5.0.18
release-5.0.19
release-5.0.2
release-5.0.20
release-5.0.21
release-5.0.22
release-5.0.23
release-5.0.24
release-5.0.25
release-5.0.26
release-5.0.27
release-5.0.28
release-5.0.29
release-5.0.3
GNU sort has -V that can mostly deal with a list like that (details):

-V, --version-sort
       natural sort of (version) numbers within text

$ cat vers
release-5.0.19
release-5.0.19~pre1
release-5.0.19-bigbugfix
release-5.0.2
release-5.0.20
$ sort -V vers
release-5.0.2
release-5.0.19~pre1
release-5.0.19
release-5.0.19-bigbugfix
release-5.0.20

However, those .rc* versions could be a bit of a problem, since they probably should be sorted before the corresponding non-rc version, if there happened to be both, that is. Some versioning systems (like Debian's) use suffixes starting with a tilde (~) to mark pre-releases, and they sort before the version without a suffix, which sorts before versions with other suffixes. Apparently this is supported by at least the sort on my system, as shown above (sort (GNU coreutils) 8.23).

To sort the example list, you could use the following:

perl -pe 's/\.(?=rc)/~/' < versions.txt | sort -V | perl -pe 's/~/./' > versions-sorted.txt
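As a quick check, running that pipeline over the question's list (saved here as versions.txt, an assumed name) should put the rc pre-releases first and the numeric releases in natural order:

$ perl -pe 's/\.(?=rc)/~/' < versions.txt | sort -V | perl -pe 's/~/./'
release-5.0.0.rc1
release-5.0.0.rc2
release-5.0.0
release-5.0.1
release-5.0.2
release-5.0.3
...
release-5.0.29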
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/350315", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/120583/" ] }
350,347
I installed Debian 8 and I would like to install the Java JRE and JDK. I used this method and it works, but I am afraid because it's a script hosted in a repository. I would like to understand why it doesn't work when I put the extract of this JRE in the /usr/java/jre1.8.0_73 directory as per the documentation. I added the path variable with

PATH=/usr/local/jdk1.8.0/bin:$PATH
export PATH

as explained in this doc, but it doesn't work. Even if I try to install OpenJDK, the package isn't found. I don't understand why it's so complicated to install Java on Debian; it is very simple on Ubuntu. I would like someone to give me step-by-step instructions to install it.
You’ll find OpenJDK 8 in Jessie backports (thanks to Willian Paixao for reminding me):

echo deb http://http.debian.net/debian jessie-backports main > /etc/apt/sources.list.d/jessie-backports.list
apt update

will enable that, then

apt install -t jessie-backports openjdk-8-jdk

will install the JDK, or

apt install -t jessie-backports openjdk-8-jre

will install the JRE. If you want Oracle’s JVM, see my answer to Linux Mint Petra (16) Java Update from JRE 7 to JRE 8 breaks Graphics System?, it’s quite simple too.
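Once installed, you can check which Java is active and, if several are installed, switch between them with the standard Debian alternatives tooling (not specific to backports):

java -version
sudo update-alternatives --config java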
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/350347", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
350,352
I have 2 similar files (dos.txt and unix.txt) with the text The quick brown fox jumps\n over the lazy dog. that differ by line endings. When I search for a word at the end of line, the output from dos.txt is empty:

$ grep -E 'jumps^M?$' dos.txt unix.txt
unix.txt:The quick brown fox jumps

Grep finds something but doesn't print it. The actual output from grep looks like this:

$ grep -E --color=always 'jumps^M?$' dos.txt unix.txt | cat -v
^[[35m^[[Kdos.txt^[[m^[[K^[[36m^[[K:^[[m^[[KThe ... ^[[01;31m^[[Kjumps^M^[[m^[[K
^[[35m^[[Kunix.txt^[[m^[[K^[[36m^[[K:^[[m^[[KThe ... ^[[01;31m^[[Kjumps^[[m^[[K

So it looks like the only difference is that ^M is inside the colored output, and it causes the whole line to disappear. How can I fix this (without converting dos files using dos2unix or similar tools)?
After some searching for the ^[[K escape sequence, reading half of a book about the VT100 terminal and checking man grep, I have found that setting the environment variable GREP_COLORS to ne gives the desired output:

$ export GREP_COLORS=ne
$ grep -E --color=always 'jumps^M?$' dos.txt unix.txt
dos.txt:The quick brown fox jumps
unix.txt:The quick brown fox jumps
$ grep -E --color=always 'jumps^M?$' dos.txt unix.txt | cat -v
^[[35mdos.txt^[[m^[[36m:^[[mThe ... ^[[01;31mjumps^M^[[m
^[[35munix.txt^[[m^[[36m:^[[mThe ... ^[[01;31mjumps^[[m

From the grep man page:

ne Boolean value that prevents clearing to the end of line using Erase in Line (EL) to Right (\33[K) each time a colorized item ends. This is needed on terminals on which EL is not supported. It is otherwise useful on terminals for which the back_color_erase (bce) boolean terminfo capability does not apply, when the chosen highlight colors do not affect the background, or when EL is too slow or causes too much flicker. The default is false (i.e., the capability is omitted).

In my case it works well even if I set the highlight color to something that changes the background:

export GREP_COLORS="ne:mt=41;38"

Now the interesting question is why ^[[K produces this blank line. Character ^M means carriage return without going to the next line:

$ echo -e "start^Mend"
endrt

^[[K clears the line from the cursor to the right, and then the rest of the line is written:

$ echo -e "start\033[Kend"
startend

However, when you put ^M before ^[[K it removes the content:

$ echo -e "start^M\033[Kend"
end

After writing start, the cursor goes to the beginning of the line, then ^[[K removes everything and the rest of the line is written. In the case of grep output, the first line writes everything up to the word jumps, then goes back to the beginning of the line (^M), writes the harmless ^[[m sequence and a ^J that goes to a new line. This is why ^[[K after ^M clears the whole line.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/350352", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183430/" ] }
350,381
I'm using the bash shell. I'm trying to write a script that will read a properties file and then do some replacements in another file based on the key-value pairs it reads in the file. So I have

#!/bin/bash
file = "/tmp/countries.properties"
while IFS='=' read -r key value
do
    echo "${key} ${value}"
    sed -ie 's/:iso=>"${key}"/:iso=>"${key}",:alpha_iso=>"${value}"/g' /tmp/country.rb
done < "$file"

but when I go to run the file, I get a "No such file or directory" error, despite the fact my file exists (I did an ls after to verify it).

localhost:myproject davea$ sh /tmp/script.sh
=: cannot open `=' (No such file or directory)
/tmp/countries.properties: ASCII text
/tmp/script.sh: line 9: : No such file or directory
localhost:myproject davea$
localhost:myproject davea$ ls /tmp/countries.properties
/tmp/countries.properties

What else do I need to do to read in my properties file successfully?
The errors are right there:

=: cannot open `=' (No such file or directory)

Something is trying to open a file called =, but it doesn't exist.

/tmp/script.sh: line 9: : No such file or directory

This would usually have the file name before the last colon, but since it's empty, it seems something is trying to open a file with an empty name.

Consider the line:

file = "/tmp/countries.properties"

That runs the command file with arguments = and /tmp/countries.properties. (The shell doesn't care what the arguments to a command are; there might be something that uses the equals sign as an argument.) Now, file just so happens to be a program used for identifying the types of files, and it does just that. First trying to open =, resulting in an error, and then opening /tmp/countries.properties, telling you what it is:

/tmp/countries.properties: ASCII text

The other No such file or directory comes from the redirection < $file. Since the variable isn't assigned a value, the redirection isn't going to work.

An assignment in shell requires that there be no white space around the = sign, so:

file=/tmp/countries.properties

Also, here:

sed -ie 's/:iso=>"${key}"/:iso=>"${key}",:alpha_iso=>"${value}"/g'

Variables aren't expanded within single quotes, and you have those around the whole second argument, so sed will get a literal ${key} and not the contents of the variable. Either end the single quotes to expand the variables, or just use double quotes for the whole string:

sed -ie 's/:iso=>"'${key}'"/:iso=>"'${key}'",:alpha_iso=>"'${value}'"/g'
sed -ie "s/:iso=>\"${key}\"/:iso=>\"${key}\",:alpha_iso=>\"${value}\"/g"
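Putting both fixes together, a working version of the question's script might look like this (a sketch; the paths and the sed expression are taken from the question, and -i -e is spelled separately since GNU sed would read the attached -ie as an in-place backup suffix of "e"):

#!/bin/bash
file=/tmp/countries.properties          # no spaces around '='
while IFS='=' read -r key value
do
    echo "${key} ${value}"
    # double quotes so ${key} and ${value} are expanded
    sed -i -e "s/:iso=>\"${key}\"/:iso=>\"${key}\",:alpha_iso=>\"${value}\"/g" /tmp/country.rb
done < "$file"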
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/350381", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166917/" ] }
350,391
How can I count the unique log lines in a text file only until the first "-" and print the line with the count?

org.springframework. - initialization started
org.springframework. - initialization started
pushAttemptLogger - initialization started
pushAttemptLogger - initialization started

example result

org.springframework. 2
pushAttemptLogger 2

reviewed: https://stackoverflow.com/questions/6712437/find-duplicate-lines-in-a-file-and-count-how-many-time-each-line-was-duplicated
cut -f1 -d'-' inputfile | sort | uniq -c

cut -f1 -d'-' will treat the file as dash-delimited and return only the first column of each line. sort is necessary for uniq to work properly. uniq -c collapses repeated lines in the sorted input and prefixes each with its count.
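Run against the four sample lines from the question (saved as inputfile, the name used above), the output would look something like:

$ cut -f1 -d'-' inputfile | sort | uniq -c
      2 org.springframework.
      2 pushAttemptLogger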
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/350391", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213876/" ] }
350,438
Doing emerge -avuDN --with-bdeps y --keep-going @world takes a whole lot of time and often fails. Is there a way to print a list of all upgradeable packages on a Gentoo system?
eix is your best option for this. eix --installed --upgrade will print all installed packages where the best version is not the present version (for each slot). Does come at the cost that you need to keep the eix database up to date after each sync.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/350438", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7362/" ] }
350,459
After creating a few snapshots in an LXD container using lxc snapshot, I cannot find a way to list those snapshots. lxc list lists only containers, not the snapshots of each container. How can I list the names of all snapshots of a container? Thanks.
You can list the snapshots for a container named example with: lxc info example --verbose
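If you want just the snapshot section for scripting, you can trim the output down; this sketch assumes lxc info prints a "Snapshots:" section at the end of its output, which is the layout on the LXD versions I've seen:

lxc info example | sed -n '/^Snapshots:/,$p'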
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/350459", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220158/" ] }
350,553
Imagine we have a long command sleep 10 (for instance). We would like to execute it on another server using python (the ssh library paramiko, to be specific). I need a 1-line command that starts the command, prints the PID of the command, then waits for it to finish and finally returns the exit status of the command. What I've tried so far:

bash -c "echo $BASHPID ; exec sleep 10" ; echo $?

This prints the PID of the shell, then calls exec on the long command, and then waits and prints the exit status. The problem is $BASHPID prints the PID of the outer bash shell (possibly indicating calling bash -c cmd doesn't actually spawn a full new shell?). If I call the above command in three lines, it works.

bash
echo $BASHPID ; exec sleep 10
echo $?

Any attempt using subshells hasn't worked for me either.

echo $(echo $BASHPID ; exec sleep 10) ; echo $?

This works, but the subshell pipes all of its output to echo, and echo prints it all at once, meaning the $BASHPID doesn't get printed until after the sleep finishes. The PID must be printed immediately.
When you run, for example:

bash -c "echo $BASHPID ; exec sleep 10"

or

echo $(echo $BASHPID ; exec sleep 10)

your current shell is interpolating the $BASHPID variable before the second bash (or subshell) sees it. The solution is to prevent the expansion of those variables by the current shell:

bash -c 'echo $BASHPID ; exec sleep 10'
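So a single-quoted version of the original one-liner does exactly what the question asks (the PID value shown is of course illustrative):

$ bash -c 'echo $BASHPID ; exec sleep 10' ; echo $?
12345
0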
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/350553", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220215/" ] }
350,558
I am trying to get all the processes for which the value corresponding to the STAT column is X. I have done this using awk:

ps -aux | awk {'if ($8 == "S") print $8" "$11'}

However, I would like to do it without using a program other than ps. Is there a way?
ps has limited filtering capabilities, and even Linux's ps with its myriad options can't filter by status. So you will need an external filtering tool. You can simplify the set of options: -ax is equivalent to the portable -e to display all processes, and you can use -o instead of counting and selecting columns with awk.

ps -e -o stat,command | grep '^S '
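Since STAT often carries extra flag characters (Ss, S<, S+, ...), matching on just the first letter is usually what you want; an awk variant of the same filter (equivalent in spirit, not from the original answer, using headerless -o specifiers):

ps -e -o stat=,comm= | awk '$1 ~ /^S/'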
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/350558", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220222/" ] }
350,569
I have a text file which has one word on each line, and I would like to build bigrams and count the repetitions (statistics) of each bigram. My approach:

cat TEXTEN1.txt | tr '*\n' '*? *\n'

I would like to produce two columns, but this solution fails.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/350569", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220233/" ] }
350,610
Simple question. With rar:

rar X file.tar -p"mypass"

With 7z:

7z X file.7z -p"mypass"

Vim can encrypt a file using :X, and every time you want to open the file you must use the password. The question is: is it possible to pass the password as an argument, like rar and 7z? A thing like this:

vim filex.enc.txt -P"mypass"
With --cmd you can give Vim a command to run before reading the file on the command line (as if it was part of your ~/.vimrc file). By setting the key option to the value of the encryption key in this way, you may give the encryption key on the command line:

$ vim --cmd "set key=mysecretkey" myencryptedfile
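Note that the key then ends up in your shell history and, while Vim starts, in the process list. One way to at least keep it out of your history is to read it from a file only you can access (~/.vimkey is a made-up name for this sketch):

vim --cmd "set key=$(cat ~/.vimkey)" myencryptedfile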
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/350610", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80389/" ] }
350,625
I was reading up on chmod and its octal modes. I saw that 1 is execute only. What is a valid use case for an execute-only permission? To execute a file, one typically would want read and execute permission.

$ echo 'echo foo' > say_foo
$ chmod 100 ./say_foo
$ ./say_foo
bash: ./say_foo: Permission denied
$ chmod 500 ./say_foo
$ ./say_foo
foo
Shell scripts require the read permission to be executed, but binary files do not:

$ cat hello.cpp
#include <iostream>
int main() {
    std::cout << "Hello, world!" << std::endl;
    return 0;
}
$ g++ -o hello hello.cpp
$ chmod 100 hello
$ ./hello
Hello, world!
$ file hello
hello: executable, regular file, no read permission

Displaying the contents of a file and executing them are two different things. With shell scripts, these things are related because they are "executed" by "reading" them into a new shell (or the current one), if you'll forgive the simplification. This is why you need to be able to read them. Binaries don't use that mechanism.

For directories, the execute permission is a little different; it means you can do things to files within that directory (e.g. read or execute them). So let's say you have a set of tools in /tools that you want people to be able to use, but only if they know about them. chmod 711 /tools. Then executable things in /tools can be run explicitly (e.g. /tools/mytool), but ls /tools/ will be denied. Similarly, documents could be stored in /private-docs which could be read if and only if the file names are known.
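A quick illustration of that /tools setup (the names are made up for the example):

# chmod 711 /tools
$ ls /tools
ls: cannot open directory '/tools': Permission denied
$ /tools/mytool      # still works if you already know the name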
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/350625", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181811/" ] }
350,768
Running Xubuntu 16.04 with Xfce, I'm trying to use SSH keys with passphrases. I would like to add my passphrased key to my ssh-agent, but I don't know why I can't add it. I don't have gnome-keyring enabled or anything alike in my startup. ssh-add privatekey adds the key, but when I try to ssh again it just prints the error two times. Some fixes say to disable the gnome keyring on startup, but I've already had it disabled. This all occurred when I replaced the ssh keys for my raspberrypi.

> OpenSSH_7.2p2 Ubuntu-4ubuntu2.1, OpenSSL 1.0.2g 1 Mar 2016
debug1: Reading configuration data /home/potato/.ssh/config
debug1: /home/potato/.ssh/config line 1: Applying options for paj
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to 111.229.105 [111.229.105] port 22253.
debug1: Connection established.
debug1: identity file /home/potato/.ssh/hplaptop_to_pi type 1
debug1: key_load_public: No such file or directory
debug1: identity file /home/potato/.ssh/hplaptop_to_pi-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.1
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.7p1 Raspbian-5+deb8u3
debug1: match: OpenSSH_6.7p1 Raspbian-5+deb8u3 pat OpenSSH* compat 0x04000000
debug1: Authenticating to 111.229.105:22253 as 'pi'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: [email protected]
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: none
debug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:zrjeaaHD8TjzsdsdssssA2fXnG3gxp2U
debug1: Host '[111.229.105]:22253' is known and matches the ECDSA host key.
debug1: Found key in /home/potato/.ssh/known_hosts:2
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /home/potato/.ssh/hplaptop_to_pi
debug1: Server accepts key: pkalg ssh-rsa blen 535
sign_and_send_pubkey: signing failed: agent refused operation
debug1: Offering RSA public key: potato@potato-HP-tomato
debug1: Authentications that can continue: publickey
debug1: Offering RSA public key: potato@hplaptop
debug1: Authentications that can continue: publickey
debug1: Offering RSA public key: potato@hplaptop
debug1: Server accepts key: pkalg ssh-rsa blen 535
sign_and_send_pubkey: signing failed: agent refused operation
debug1: Offering RSA public key: rsa-key-20141222
debug1: Authentications that can continue: publickey
debug1: Offering RSA public key: potato@potatolaptop
debug1: Authentications that can continue: publickey
debug1: No more authentication methods to try.
Permission denied (publickey).
So after hours of mindless googling and help, the problem was uncovered. I was generating my SSH keys with ssh-keygen and added an additional argument, -o, which generated the keys in the new OpenSSH format. The problem was that my gnome-keyring did not support such keys, as the keys had the Ed25519 signature scheme. Gnome-keyring does not support that as of 3.20. I reverted to RSA and no more problems!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/350768", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/198922/" ] }
350,842
It's possible to dump the available / defined macros when authoring an RPM spec file by using:

rpm --showrc

or

rpm --eval %dump

or by including the %dump builtin macro in the spec file itself and examining the output from the RPM build process (the build output contains macro definitions). In either case, some of the lines are prefixed with "-14" or "-11". The lines without either appear to be the body of multi-line definitions. What is the significance of the "-14" (or less common "-11") in this output? More importantly, I'm interested in knowing where this is documented.

Sample Output:

-14: __autoconf	autoconf
-14: __autoheader	autoheader
-14: __automake	automake
-11= _target_cpu	x86_64
-11= _target_os	linux

References:

rpm.org macros
Fedoraproject.org Wiki Packaging RPM Macros
RPM macros have an associated level, which is the recursion depth. When returning from a recursive expansion, macros at that level are automatically undefined. Macros with a level <= 0 are always defined (in some sense global). Negative valued levels were originally used to mark where macros were defined: from rpm internally, or from reading a configuration file. In practice, nothing in RPM has ever used or needed the macro level, but that is what the "-14" means. Note also the change from ":" to "=" in the --showrc output, which tells which macros were defined or used.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/350842", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/69122/" ] }
350,891
I'm trying to exclude all vim backup files (hidden *.swp files) from the sync. The pattern in my exclude file looks like this:

**.swp

My rsync call:

rsync -ravu --exclude=~/sync/exclude.txt /home/username/Documents/ remotehost:/home/username/Documents/

The file sits in a subdirectory of the sync root. It doesn't work: rsync copies the vim backup file as well. I also tried:

*.swp

What am I doing wrong?
You're using --exclude (which expects a pattern) rather than --exclude-from (which expects the name of a file containing patterns). You also do not need -r ( --recursive ) with -a ( --archive ) as -a enables recursive syncing. In fact, -a is the same as -rlptgoD according to the manual.
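Applied to the command from the question, that gives something like this (paths taken from the question):

rsync -au --exclude-from="$HOME/sync/exclude.txt" /home/username/Documents/ remotehost:/home/username/Documents/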
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/350891", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220455/" ] }
350,926
sudo pkg install VBoxSolarisAdditions.pkg
pkg install: The following pattern(s) did not match any allowable packages. Try using a different matching pattern, or refreshing publisher information:

sudo pkg install ./VBoxSolarisAdditions.pkg
pkg install: Illegal FMRI './VBoxSolarisAdditions.pkg': Invalid Package Name: ./VBoxSolarisAdditions.pkg

sudo pkg set-publisher -p /media/VBOXADDITIONS_4.3.38_106717/
pkg set-publisher: file protocol error: code: 22 reason: The path '/media/VBOXADDITIONS_4.3.38_106717' does not contain a valid package repository. Repository URL: 'file:///media/VBOXADDITIONS_4.3.38_106717'.

sudo pkg set-publisher -p /media/VBOXADDITIONS_4.3.38_106717/VBoxSolarisAdditions.pkg
pkg set-publisher: file protocol error: code: 22 reason: Archive /media/VBOXADDITIONS_4.3.38_106717/VBoxSolarisAdditions.pkg is missing, unsupported, or corrupt. Repository URL: 'file:///media/VBOXADDITIONS_4.3.38_106717/VBoxSolarisAdditions.pkg'.

Am I doing something wrong?
From the VirtualBox online manual:

4.2.3.1. Installing the Solaris Guest Additions

The VirtualBox Guest Additions for Solaris are provided on the same ISO CD-ROM as the Additions for Windows and Linux described above. They also come with an installation program guiding you through the setup process. Installation involves the following steps:

Mount the VBoxGuestAdditions.iso file as your Solaris guest's virtual CD-ROM drive, exactly the same way as described for a Windows guest in Section 4.2.1.1, "Installation". In case the CD-ROM drive on the guest doesn't get mounted (observed on some versions of Solaris 10), execute as root:

svcadm restart volfs

Change to the directory where your CD-ROM drive is mounted and execute as root:

pkgadd -G -d ./VBoxSolarisAdditions.pkg

Choose "1" and confirm installation of the Guest Additions package. After the installation is complete, re-login to the X server on your guest to activate the X11 Guest Additions.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/350926", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74688/" ] }
350,960
I have a bunch of folders which are labelled in this way: conf1, conf2, ... But the order in the home directory is like:

conf1
conf10
conf100
conf101
...
conf2
conf20
conf200
conf201
...

Because each folder contains a file named "distance.txt", I would like to be able to print the content of the distance.txt file from each single folder, but in order, going from folder 1 → 2 → 3 ... to the final folder 272. I tried several attempts, but every time the final file contains the whole set of values in the wrong order; this is the piece of code I set up:

ls -v | for d in ./*/; do (cd "$d" && cat distance.txt >> /path/to/folder/d.txt ); done

As you can see, I tried to "order" the folders with the command ls -v and then to couple the cycle to iteratively save each file. Can you kindly help me?
For such a relatively small set of folders you could use a numerical loop:

for n in {1..272}
do
    d="conf$n"
    test -d "$d" && cat "$d/distance.txt" >> /path/to/folder/d.txt
done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/350960", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/214237/" ] }
351,005
How do I export and migrate NetworkManager settings to a new system? Use cases are:

reinstalling a machine
moving network configuration from a laptop to a desktop system (or vice versa)

All settings should be migrated; that includes:

default and custom network connections
wifi connections with passwords
VLAN configurations
VPN configurations (with keys if possible)

I checked the Arch wiki and there is nothing on migration, so I'm asking you guys and gals here.
Each connection configured in NetworkManager is stored in a file in /etc/NetworkManager/system-connections Usually, you can copy needed files from a machine to another (by root, of course). Warning : some configuration file could reference external resources. E.g. in one of my openvpn files I have a line like cert=/home/andcoz/somedir/somefile.crt . You need to copy any referred file.
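A sketch of the copy with rsync (newhost is a placeholder; NetworkManager insists that these files be owned by root with mode 600, so make sure the permissions survive the transfer):

rsync -a /etc/NetworkManager/system-connections/ root@newhost:/etc/NetworkManager/system-connections/
ssh root@newhost 'chmod 600 /etc/NetworkManager/system-connections/*'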
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/351005", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23211/" ] }
351,083
I want colored output of grep... but:

Strategy 1: GREP_OPTIONS. This is deprecated. See http://www.gnu.org/software/grep/manual/html_node/Environment-Variables.html

Strategy 2: GREP_COLORS looks like a solution at first sight, but it does something different.

Strategy 3: alias. This does not work for find ... | xargs grep, since xargs does not evaluate aliases.

Strategy 4: Write a simple wrapper script. No, I think this is too dirty and makes more trouble than it solves.

Strategy 5: Patch the source code.

Strategy 6: Contact the grep developers, ask for a replacement for GREP_OPTIONS.

Strategy NICE-and-EASY: ... this is missing. I have no clue.

How do I solve this?
Some of the reasons OP has stated are not grounded in fact (i.e. a lack of understanding of how shell scripting works and of how a simple wrapper script does not impact performance). In this answer I demonstrate that strategy 4 is actually a good solution for a number of reasons (easy to implement, low overhead, flexible for any use-case, etc.):

On most distributions, grep is installed in /bin (typical) or /usr/bin (OpenSUSE, maybe others), and the default PATH contains /usr/local/bin before /bin or /usr/bin. This means that you can create /usr/local/bin/grep with

#!/bin/sh
exec /bin/grep --color=auto "$@"

where /bin/sh is a POSIX-compatible shell provided by your distribution, usually bash or dash. If grep is in /usr/bin, then make that

#!/bin/sh
exec /usr/bin/grep --color=auto "$@"

Performance overhead of a wrapper script is minimal

The overhead of this script is minimal. The exec statement means that the script interpreter is replaced by the grep binary; this means that the shell does not remain in memory while grep is being executed. Thus, the only overhead is one extra execution of the script interpreter, i.e. a small latency in wall clock time. The latency is roughly constant (varies only depending on whether grep and sh are already in page cache or not, and on how much I/O bandwidth is available), and does not depend on how long grep executes or how much data it processes.

So, how long is that latency, i.e. the overhead added by the wrapper script? To find out, create the above script, and run

time /bin/grep --version
time /usr/local/bin/grep --version

On my machine, the former takes 0.005s real time (across a large number of runs), whereas the latter takes 0.006s real time. Thus, the overhead of using the wrapper on my machine is 0.001s (or less) per invocation. This is insignificant.

I also fail to see anything "dirty" about this, because many common applications and utilities use the same approach. To see the list of such on your machine in /bin and /usr/bin, just run

file /bin/* /usr/bin/* | sed -ne 's/:.*shell script.*$//p'

On my machine, the above output includes egrep, fgrep, zgrep, which, 7z, chromium-browser, ldd, and xfig, which I use quite often. Unless you consider your entire distribution "dirty" for relying on wrapper scripts, you have no reason to consider such wrapper scripts "dirty".

Possible problems caused by putting a wrapper script on your PATH

If only human users (as opposed to scripts) are to use the version of grep that defaults to color support when output goes to a terminal, then the wrapper script can be named colorgrep or cgrep or whatever the OP sees fit. This avoids all possible compatibility issues, because the behaviour of grep does not change at all.

Enabling grep options with a wrapper script, but in a way that avoids any new problems

We can easily rewrite the wrapper script to support a custom GREP_OPTS variable even though GREP_OPTIONS is not supported any more (it is already deprecated). This way users can simply add export GREP_OPTS=--color=auto or similar to their profile. /usr/local/bin/grep is then

#!/bin/sh
exec /bin/grep $GREP_OPTS "$@"

Note that there are no quotes around $GREP_OPTS, so that users can specify more than one option. On my system, executing time /usr/local/bin/grep --version with GREP_OPTS empty, or with GREP_OPTS=--color=auto, is just as fast as the previous version of the wrapper script; i.e., it typically takes one millisecond longer to execute than plain grep.

This last version is the one I'd personally recommend for use.

In summary, OP's strategy 4:

is already recommended by the grep developers
is trivial to implement (two lines)
has insignificant overhead (one millisecond extra latency per invocation on this particular laptop; easily verified on each machine)
can be implemented as a wrapper script that adds GREP_OPTS support (to replace the deprecated/unsupported GREP_OPTIONS)
can be implemented (as colorgrep / cgrep) in a way that does not affect scripts or existing users at all

Because this technique is widely used in Linux distributions already, it is a common technique and not "dirty". If implemented as a separate wrapper (colorgrep / cgrep), it cannot create new problems since it does not affect grep behaviour at all. If implemented as a wrapper script that adds GREP_OPTS support, using GREP_OPTS=--color=auto has exactly the same risks (wrt. problems with existing scripts) that upstream adding a default --color=auto would. Thus, the comment that this "creates more problems than it solves" is completely incorrect: no additional problems are created.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/351083", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22068/" ] }
351,115
I use screen as my window manager through putty. Screen has been great, but I need a way to increase my buffer when I run commands. I have no buffer when I scroll up; no stdout is saved beyond my window size on any terminal. How can I increase this? I can't seem to find an option in the commands; Ctrl + a ? doesn't seem to have what I am looking for.
Do Ctrl + a : then enter

scrollback 1234

which sets your buffer to 1234 lines. You enter scrollback mode ("copy mode") with Ctrl + a Esc, then move in vi style, and leave copy mode with another Esc.
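To make a larger buffer the default for every new window, you can set it in your ~/.screenrc instead (defscrollback applies to windows created after the setting is read; the value is just an example):

defscrollback 10000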
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/351115", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/195962/" ] }
351,119
I have an awk command. I need to use the i variable, but my command does not work when I do. My input looks like:

"fechaName": "1","firstName": "gdrgo", "xxxxx": "John", "xxxxx": "John", "xxxxx": "John", "xxxxx": "John", "xxxxx": "John", "lastName": "222",dfg
"fechaName": "2","xxxxx": "John", "firstName": "beto", "xxxxx": "John", "xxxxx": "John", "xxxxx": "John", "lastName": "111","xxxxx": "John",
"fechaName": "4","xxxxx": "John", "firstName": "beto", "xxxxx": "John", "xxxxx": "John", "xxxxx": "John", "lastName": "111","xxxxx": "John",
"fechaName": "4","xxxxx": "John", "xxxxx": "John", "firstName": "beto2", "xxxxx": "John","lastName": "555", "xxxxx": "John","xxxxx": "John",
"fechaName": "5","xxxxx": "John", "xxxxx": "John", "firstName": "beto2", "xxxxx": "John","lastName": "444", "xxxxx": "John","xxxxx": "John",
"fechaName": "4","firstName": "gdrgo", "xxxxx": "John", "xxxxx": "John", "xxxxx": "John", "xxxxx": "John", "xxxxx": "John", "lastName": "222",dfg
"fechaName": "7","xxxxx": "John", "xxxxx": "John", "firstName": "beto2", "xxxxx": "John","lastName": "444", "xxxxx": "John","xxxxx": "John",

When I use 5 instead of i, it works:

awk -v OFS='"' -v FS='Name": "' '{ for( i=2; i<=7; i++ ) if( match($2, /5"/) ) print $0 }' sumacomando

This is my command:

awk -v OFS='"' -v FS='Name": "' '{ for( i=2; i<=7; i++ ) if( match($2, /**i**"/) ) print $0 }' sumacomando
awk -v OFS='"' -v FS='Name": "' '{ for( i=2; i<=7; i++ ) if( match($2, /i"/) ) print $0 }' sumacomando
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/351119", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220612/" ] }
351,123
In the process of learning/understanding Linux (difficult, but enjoying it), I have written a very short shell script that uses wget to pull an index.html file from a website.

#!/bin/bash
# Script to wget website mywebsite and put it in /home/pi/bin
index=$(wget www.mywebsite.com)

And this works when I enter the command wget_test on the command line. It outputs a .html file into /home/pi/bin. I have started trying to do this via cron so I can do it at a specific time. I entered the following by using crontab -e:

23 13 * * * /home/pi/bin/wget_test

In this example I wanted the script to run at 13:23 and to output a .html file to /home/pi/bin, but nothing is happening.
This line

index=$(wget www.mywebsite.com)

will set the variable $index to nothing. This is because (by default) wget doesn't write anything to stdout, so there's nothing to put into the variable. What wget does do is write a file to the current directory.

Cron jobs run from your $HOME directory, so if you want to write a file to your $HOME/bin directory you need to do one of two things:

Write wget -O bin/index.html www.mywebsite.com
Write cd bin; wget www.mywebsite.com

Incidentally, one's ~/bin directory is usually where personal scripts and programs would be stored, so it might be better to think of somewhere else to write a file regularly retrieved from a website.
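Putting that together, a revised script and crontab entry might look like this (the log path is just a suggestion, but redirecting output makes cron failures much easier to diagnose):

#!/bin/bash
# Script to wget website mywebsite and put it in /home/pi/bin
wget -O /home/pi/bin/index.html www.mywebsite.com

23 13 * * * /home/pi/bin/wget_test >> /home/pi/wget_test.log 2>&1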
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/351123", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/218629/" ] }
351,210
I need my script to do something to every file in the current directory, excluding any subdirectories. For example, in the current path there are 5 files, but 1 of them is a folder (a subdirectory). My script should run a command given as arguments when running said script; i.e. bash script wc -w should give the word count of each file in the current directory, but not any of the folders, so that the output never has any of the "/sub/dir: Is a directory" lines. My current script:

#!/bin/bash
dir=`pwd`
for file in $dir/*
do
    $* $file
done

I just need to exclude directories from the loop, but I don't know how.
#!/bin/bash -
for file in "$dir"/*
do
    if [ ! -d "$file" ]; then
        "$@" "$file"
    fi
done

Note that it also excludes files that are of type symlink where the symlink resolves to a file of type directory (which is probably what you want).

Alternative (from comments), check only for regular files:

for file in "$dir"/*
do
    if [ -f "$file" ]; then
        "$@" "$file"
    fi
done
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/351210", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220674/" ] }
351,215
I am trying to make an alias for mv so that it keeps its normal behaviour in normal folders and is replaced by git mv inside git repositories. I tried many ways. The if statement works; only the command git mv will not run correctly.

alias mv='"$(if [ x`git rev-parse --show-toplevel 2> /dev/null` = x ]; echo mv; else echo "git mv"; fi)"'
I would use a function for that, like so:

gitmv(){
    # If in a git repo, call git mv; otherwise call mv
    if [ x`git rev-parse --show-toplevel 2> /dev/null` = x ]; then
        mv "$@"
    else
        git mv "$@"
    fi
}

Edit:

alias mv=gitmv
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/351215", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78496/" ] }
351,263
It's a well-known fact that if one wants to execute a script in a shell, then the script needs to have execute permissions:

$ ls -l
total 4
-rw-r--r-- 1 user user 19 Mar 14 01:08 hw
$ ./hw
bash: ./hw: Permission denied
$ /home/user/hw
bash: /home/user/hw: Permission denied
$

However, it is possible to execute this script with bash <scriptname>, sh <scriptname>, etc.:

$ bash hw
Hello, World!
$

This means that basically one can execute a script file even if it only has read permissions. This may be a silly question, but what is the point of giving execute permissions to a script file? Is it solely because in order for a program to run it needs to have execute permissions, but it actually doesn't add security or any other benefits?
Yes, you can use bash /path/to/script, but scripts can have different interpreters. It's possible your script was written to work with ksh, zsh, or maybe even awk or expect. Thus you have to know what interpreter to use to call the script with. By instead making a script with a shebang line (that #!/bin/bash at the top) executable, the user no longer needs to know what interpreter to use. It also allows you to put the script in $PATH and call it like a normal program.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/351263", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33060/" ] }
351,286
I have a tar.gz file from which I have to extract only the files into the current directory, without recreating the whole directory structure. For example, the tar.gz contains the files below:

/u01/app/oracle/file1
...
/u01/app/oracle/file10
/u01/testdata/file1
...
/u01/testdata/file5

The tar.gz is present in /u02. So when extracting, I want file1 through file10 to end up directly under /u02; instead, the whole directory structure is getting created under /u02.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/351286", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176232/" ] }
351,331
Is there a way to execute a command with arguments in Linux without whitespace?

cat file.txt

needs to be:

cat(somereplacementforthiswhitespace)file.txt
If only there was a variable whose value is a space… Or more generally, contains a space.

cat${IFS}file.txt

The default value of IFS is space, tab, newline. All of these characters are whitespace. If you need a single space, you can use ${IFS%??}.

More precisely, the reason this works has to do with how word splitting works. Critically, it's applied after substituting the value of variables. And word splitting treats each character in the value of IFS as a separator, so by construction, as long as IFS is set to a non-empty value, ${IFS} separates words.

If IFS is more than one character long, each character is a word separator. Consecutive separator characters that are whitespace are treated as a single separator, so the result of the expansion of cat${IFS}file.txt is two words: cat and file.txt. Non-whitespace separators are treated separately; with something like IFS=',.'; cat${IFS}file.txt, cat would receive two arguments: an empty argument and file.txt.
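A quick sanity check in an interactive bash session (the file name is arbitrary):

$ echo hello > file.txt
$ cat${IFS}file.txt
hello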
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/351331", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220775/" ] }
351,415
I have a text file like this:

first, second
hello, bye
one, two
good, bad
day, night

I would like to have a script which, based on a variable, retrieves the last word of a given line using grep or awk. For example: if num is 3, the output should be two. It's not necessary to ask for num's value.
Using awk:

$ awk -v num="$num" 'NR == num { print $NF }' data.in

Testing it:

$ num=3
$ awk -v num="$num" 'NR == num { print $NF }' data.in
two

The awk script reads the input file record by record (a record is by default a line). Once it hits the record corresponding to the num variable, it prints the last field of that record (a field is by default a whitespace-separated column). The num variable inside the awk script is an awk variable that we initialize to the value of the shell variable num with -v num="$num" on the command line. NR is the current record number, and NF is the number of fields in this record. $NF is the data of the last field.

If your file is strictly comma-separated, add -F ',' to the command line:

$ awk -v num="$num" -F ',' 'NR == num { print $NF }' data.in

With grep you can't select a specific line, but together with sed you can filter out the line you want and then get the last bit after the last comma:

$ sed -n "${num}p" data.in | grep -o '[^,]*$'

The sed bit will get the specified line while the grep bit will extract everything after the last comma on that line. You may do it with sed only, too:

$ sed -n "${num}s/^.*,\(.*\)$/\1/p" data.in

Here, the substitution is applied only to the line whose number is $num, and it replaces the whole line with the contents of the line after the last comma and outputs the result. All other output is inhibited with the -n command line switch.

Alternatively, use a substitution in sed to simply delete everything on the line up to the last comma:

$ sed -n "${num}s/^.*,//p" data.in
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/351415", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/212397/" ] }
351,566
I have a file that looks like this:

asd 123 aaa wrqiqirw 123123 itiewth 123 asno 123123 132 123 123 123
boagii 123 asdnojaneoienton 123

Expected output is:

123
123
123
123

I will need to search for patterns via regex. Is there any way to implement such a thing?
With pcregrep, with a pattern like 12*3:

pcregrep -o1 '(12*3).*'

With pcregrep or GNU grep -P:

grep -Po '^.*?\K12*3'

(pcregrep works with bytes more than characters, while GNU grep will work on characters as defined in the current locale (and you'd have to make sure the input contains valid text in the current locale)). Note that GNU grep won't print anything if the pattern matches the empty string.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/351566", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/218095/" ] }
351,582
I'm running Ubuntu Xenial in VirtualBox. I bound some PPAs in my system with apt-pinning. An example:

cat /etc/apt/preferences.d/xbmc
# Apt-pinning für ppa:xbmc
Package: *
Pin: origin ppa.launchpad.net
Pin-Priority: 50

How can I install kodi with apt install -t something kodi?

LANG=C apt install -t team-xbmc kodi
Reading package lists... Done
E: The value 'team-xbmc' is invalid for APT::Default-Release as such a release is not available in the sources

I tried several entries in /etc/apt/preferences/xbmc.

LANG=C apt-cache policy | grep -i xbmc -A1
      50 http://ppa.launchpad.net/team-xbmc/ppa/ubuntu xenial/main i386 Packages
         release v=16.04,o=LP-PPA-team-xbmc,a=xenial,n=xenial,l=Kodi stable,c=main,b=i386
         origin ppa.launchpad.net

and several options for -t. But the result is always the same. I know I can install kodi from the PPA by giving the exact version of the package, but this is circuitous.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/351582", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
351,584
This is all on Debian Testing (= Stretch as of now). I am trying to configure opendkim, but it won't use the socket I want it to. According to man opendkim.conf, the Socket can be configured in /etc/opendkim.conf. I have also tried creating the file /etc/default/opendkim as I see it on my Jessie box, but that did not work either. Thus, I have tried entering the following line in /etc/opendkim.conf:

Socket inet:39172@localhost

Now, according to /etc/init.d/opendkim, this file is read:

if [ -f /etc/opendkim.conf ]; then
    CONFIG_SOCKET=`awk '$1 == "Socket" { print $2 }' /etc/opendkim.conf`
fi

To me, that looks good so far. But the following snippet, which follows immediately, seems to dump the information that has just been read:

# This can be set via Socket option in config file, so it's not required
if [ -n "$SOCKET" -a -z "$CONFIG_SOCKET" ]; then
    DAEMON_OPTS="-p $SOCKET $DAEMON_OPTS"
fi
DAEMON_OPTS="-x /etc/opendkim.conf -u $USER -P $PIDFILE $DAEMON_OPTS"

I don't really understand what this is supposed to do. $CONFIG_SOCKET is never actually used to start opendkim, is it? Why is it being read from the configuration file in the first place, then? I noticed there is also a file /etc/systemd/system/multi-user.target.wants/opendkim which does not seem to load any configuration. If it is of any importance: to restart opendkim, I enter service opendkim restart. My check to see if the socket has been read is: telnet localhost 39172 says Connection refused, and /var/log/syslog says:

opendkim[8343]: OpenDKIM Filter v2.11.0 starting (args: -P /var/run/opendkim/opendkim.pid -p local:/var/run/opendkim/opendkim.sock)

My question is: How should I be configuring the socket for opendkim on Debian Testing/Stretch? Which probably also solves the mystery of how the script above is supposed to work.
You are configuring it correctly, but this is an open bug with Debian Stretch where it ignores configuration: See: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=864162
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/351584", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220982/" ] }
351,593
Most answers here [1] [2] [3] use a single angle bracket to redirect to /dev/null, like this:

command > /dev/null

But appending to /dev/null works too:

command >> /dev/null

Except for the extra character, is there any reason not to do this? Is either of these "nicer" to the underlying implementation of /dev/null?

Edit: The open(2) manpage says lseek is called before each write to a file in append mode:

O_APPEND
    The file is opened in append mode. Before each write(2), the file offset is positioned at the end of the file, as if with lseek(2). The modification of the file offset and the write operation are performed as a single atomic step.

which makes me think there might be a tiny performance penalty for using >>. But on the other hand, truncating /dev/null seems like an undefined operation according to that document:

O_TRUNC
    If the file already exists and is a regular file and the access mode allows writing (i.e., is O_RDWR or O_WRONLY) it will be truncated to length 0. If the file is a FIFO or terminal device file, the O_TRUNC flag is ignored. Otherwise, the effect of O_TRUNC is unspecified.

and the POSIX spec says > shall truncate an existing file, but O_TRUNC is implementation-defined for device files and there's no word on how /dev/null should respond to being truncated.

So, is truncating /dev/null actually unspecified? And do the lseek calls have any impact on write performance?
By definition /dev/null sinks anything written to it , so it doesn't matter if you write in append mode or not, it's all discarded. Since it doesn't store the data, there's nothing to append to, really. So in the end, it's just shorter to write > /dev/null with one > sign. As for the edited addition: The open(2) manpage says lseek is called before each write to a file in append mode. If you read closely, you'll see it says (emphasis mine): the file offset is positioned at the end of the file, as if with lseek(2) Meaning, it doesn't (need to) actually call the lseek system call, and the effect is not strictly the same either: calling lseek(fd, SEEK_END, 0); write(fd, buf, size); without O_APPEND isn't the same as a write in append mode, since with separate calls another process could write to the file in between the system calls, trashing the appended data. In append mode, this doesn't happen (except over NFS, which doesn't support real append mode ). The text in the standard doesn't mention lseek at that point, only that writes shall go the end of the file. So, is truncating /dev/null actually unspecified? Judging by the scripture you refer to, apparently it's implementation-defined. Meaning that any sane implementation will do the same as with pipes and TTY's, namely, nothing. An insane implementation might do something else, and perhaps truncation might mean something sensible in the case of some other device file. And do the lseek calls have any impact on write performance? Test it. It's the only way to know for sure on a given system. Or read the source to see where the append mode changes the behaviour, if anywhere.
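If you do want to measure it, a quick (unscientific) comparison is possible with GNU dd , whose oflag=append opens the output in append mode:

dd if=/dev/zero of=/dev/null bs=1M count=10000
dd if=/dev/zero of=/dev/null bs=1M count=10000 oflag=append

I would expect the difference to be lost in the noise on any sane implementation, but as said above, running it is the only way to know for your system.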
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/351593", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/187346/" ] }
351,665
I want to find unique files inside a directory, which also has subdirectories. There are specific types of files, say .lib files. The same .lib file can exist inside different subdirectories. I need to find the list of .lib files inside my home directory, but only unique names. Is there any method to do so? Currently I am using

find -name "*.lib" > lib_file_list

But it gives duplicate results as some of the .lib files are in multiple subdirectories. I am using CSH.
With GNU tools: find . -name '*.lib' -print0 | awk -v RS='\0' -F/ '! seen[$NF]++'
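If your find or awk lacks -print0 / RS='\0' (the GNU extensions used above), a less robust variant works as long as no file name contains a newline character:

find . -name '*.lib' | awk -F/ '!seen[$NF]++' > lib_file_list

This keeps the first path found for each unique base name; add {print $NF} after the condition if you only want the bare file names.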
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/351665", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/187188/" ] }
351,687
Suppose I have this structure for folder0 and subfolders and files in it:

folder0
  subfolder01
    file011
    file012
  subfolder02
    file021
  file01
  file02

I want to copy all files in the main folder folder0 to somewhere else, such that all files end up in one directory. How can I do that? I used

cp --recursive folder0address targetfolderaddress

But the subfolders were copied to the target folder too. I just want all files in the directory and subdirectories, not the folders. I mean something like the below in the target folder:

targetfolder
  file011
  file012
  file021
  file01
  file02
Use find : find folder0 -type f -exec cp {} targetfolder \; With GNU coreutils you can do it more efficiently: find folder0 -type f -exec cp -t targetfolder {} + The former version runs cp for each file copied, while the latter runs cp only once.
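Beware that if two subdirectories contain files with the same name, the later copy silently overwrites the earlier one in targetfolder . With GNU cp you can either refuse to overwrite with -n , or keep every version as a numbered backup:

find folder0 -type f -exec cp --backup=numbered -t targetfolder {} +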
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/351687", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/221050/" ] }
351,692
I created a test service under /etc/systemd/system which is the correct path to create custom unit files.

[root@apollo system]# cat sample.service
[Unit]
Description=This is my test service
Wants=chronyd.service
After=chronyd.service

[Service]
Type=forking
ExecStart=/root/sample.sh

[Install]
WantedBy=multiuser.target chronyd.service
#RequiredBy=multiuser.target chronyd.service
#Alias=xyz
[root@apollo system]# pwd
/etc/systemd/system
[root@apollo system]#

I made sure systemd is aware by running "systemctl daemon-reload". I was also able to stop/start the service. When I tried to mask it, it shows me this error:

[root@apollo system]# systemctl mask sample.service
Failed to execute operation: File exists
[root@apollo system]#

That is because systemd is trying to create a symlink using this command:

ln -s /dev/null /etc/systemd/system/sample.service

Since sample.service already exists inside /etc/systemd/system, the command will fail unless systemd uses "ln -fs". So does that mean we cannot mask any unit files we create under /etc/systemd/system? I tried to move sample.service to /usr/lib/systemd/system and I was able to mask it because it was able to create a symlink under /etc/systemd/system without any hindrance. Has anybody experienced this? Do you think this is a bug?
There is not a way to mask services which have service files in /etc/systemd/system without first removing the file from there. This is intentional design. You can disable the service by using systemctl disable servicename.service which will have the same effect as masking it in many cases. The post by the author of systemd Three Levels of Off has more detail on the differences between stop , disable and mask in systemd.
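If you really do need mask semantics for a unit whose file lives in /etc/systemd/system , the only way is to first move your file somewhere outside systemd's search paths, freeing the name for the mask symlink:

mv /etc/systemd/system/sample.service /root/sample.service.bak
systemctl daemon-reload
systemctl mask sample.service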
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/351692", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/186964/" ] }
351,765
When I use either of these commands with an argument as the name of a process, both of them return the exact same number. Are they the same commands? Are they two different commands that do the same thing? Is one of them an alias to the other?

pidof firefox
pgrep firefox
The programs pgrep and pidof are not quite the same thing, but they are very similar. For example:

$ pidof 'firefox'
5696
$ pgrep '[i]ref'
5696
$ pidof '[i]ref'
$ printf '%s\n' "$?"
1

As you can see, pidof failed to find a match for [i]ref . This is because pidof program returns a list of all process IDs associated with a program called program . On the other hand, pgrep re returns a list of all process IDs associated with a program whose name matches the regular expression re . In their most basic forms, the equivalence is actually:

$ pidof 'program'
$ pgrep '^program$'

As yet another concrete example, consider:

$ ps ax | grep '[w]atch'
   12 ?        S      0:04 [watchdog/0]
   15 ?        S      0:04 [watchdog/1]
   33 ?        S<     0:00 [watchdogd]
18451 pts/5    S+     0:02 watch -n600 tail log-file
$ pgrep watch
12
15
33
18451
$ pidof watch
18451
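pgrep also has a -x flag for exact name matching, which in the basic case behaves like pidof (note that on Linux pgrep matches against the process name, truncated to 15 characters, unless you pass -f to match the full command line):

$ pgrep -x watch
18451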
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/351765", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166244/" ] }
351,779
I dual-booted Kali Linux on my 2013 MacBook Pro and at first there was no wireless extension showing when I typed iwconfig in the terminal, but after following this video https://www.youtube.com/watch?v=Lp3snFy9Jbs I got wlan1 and wlan0 showing, but they don't detect any wireless network. I tried it on a VM and a live boot, and now I have even dual-booted it to my hard drive, but it still won't detect any wifi network. I posted what shows up when I type iwconfig in the terminal. How do I fix this?

root@kali:~# iwconfig
lo        no wireless extensions.
wlan1     IEEE 802.11abgn  ESSID:off/any
          Mode:Managed  Access Point: Not-Associated  Tx-Power=20 dBm
          Retry short limit:7  RTS thr:off  Fragment thr:off
          Encryption key:off
          Power Management:off
wlan0     IEEE 802.11abgn  ESSID:off/any
          Mode:Managed  Access Point: Not-Associated  Tx-Power=20 dBm
          Retry short limit:7  RTS thr:off  Fragment thr:off
          Encryption key:off
          Power Management:off
hwsim0    no wireless extensions.
eth0      no wireless extensions.

output of lspci -knn | grep Net -A2 :

root@kali:~# lspci -knn | grep Net -A2
03:00.0 Network controller [0280]: Broadcom Corporation BCM4360 802.11ac Wireless Network Adapter [14e4:43a0] (rev 03)
        Subsystem: Apple Inc. BCM4360 802.11ac Wireless Network Adapter [106b:0112]
        Kernel driver in use: bcma-pci-bridge
        Kernel modules: bcma
root@kali:~#
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/351779", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/221115/" ] }
351,865
While executing this command to install rvm curl -sSL https://get.rvm.io | bash -s stable I am getting this error message: mktemp: failed to create file via template ‘/usr/share/rvm/rvm-exec-test.XXXXXX’: Permission denied
I solved it by changing the following

curl -sSL https://get.rvm.io | bash -s stable

into

curl -sSL https://get.rvm.io | sudo bash -s stable

The installer needs write access to the system directories it uses, such as /usr/local and (as the error message shows) /usr/share/rvm, which an ordinary user does not have.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/351865", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/221192/" ] }
351,881
is it possibile to decode a file name by command line? Suppose I have the following two files: foo.mp3 bar.mp3 Is there any command line tool that decodes the files names into their UTF-8 values: 0x66 0x6F 0x6F 0x2E 0x6D 0x70 0x33 0x62 0x61 0x72 0x2E 0x6D 0x70 0x33
The standard (POSIX/Unix) command to get the byte values as hex numbers is od .

file=foo.mp3
printf %s "$file" | od -An -vtx1

Which gives an output similar to:

 66 6f 6f 2e 6d 70 33

$file above contains an arbitrary array of (non-NUL for shells other than zsh ) bytes . The character encoding doesn't enter in consideration. If you want $file to contain an array of characters (so in the locale's encoding) and you want to get the Unicode code points for each of them as hexadecimal numbers, on a Little-Endian system, you could do:

printf %s "$file" | iconv -t UTF-32LE | od -An -vtx4

See also:

printf %s "$file" | recode ..dump

Or:

printf %s "$file" | uconv -x hex/unicode
printf %s "$file" | uconv -x '([:Any:])>&hex/unicode($1)\n'

If you wanted the byte values as hex numbers of the UTF-8 encoding of those characters:

printf %s "$file" | iconv -t UTF-8 | od -An -vtx1

For something like foo.mp3 that contains only ASCII characters, they're all going to be equivalent.
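If you specifically want the 0x66 0x6F presentation from the question, hexdump (util-linux/BSD, so not strictly POSIX) can format each byte directly:

printf %s foo.mp3 | hexdump -v -e '/1 "0x%02X "'; echo

which prints: 0x66 0x6F 0x6F 0x2E 0x6D 0x70 0x33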
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/351881", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/191458/" ] }
351,901
($@) Expands to the positional parameters, starting from one. How can I get the positional parameters, starting from two, or more generally, n ? I want to use the positional parameters starting from two, as arguments to a command, for example, myCommand $@
For positional parameters starting from the 5th one: zsh or yash . myCommand "${@[5,-1]}" (note, as always, that the quotes above are important, or otherwise each element would be subject to split+glob in yash , or the empty elements removed in zsh ). ksh93 , bash or zsh : myCommand "${@:5}" (again, quotes important) Bourne-like shells (includes all of the above shells) (shift 4; myCommand "$@") (using a subshell so the shift only happens there). csh-like shells: (shift 4; myCommand $argv:q) (subshell) fish : myCommand $argv[5..-1] rc : @{shift 4; myCommand $*} (subshell) rc / es : myCommand $*(`{seq 5 $#*}) es : myCommand $*(5 ...)
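For instance, to see the portable subshell approach in action (any Bourne-like shell):

set -- a b c d e f g
(shift 4; printf '<%s> ' "$@"; echo)

prints: <e> <f> <g>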
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/351901", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
351,902
Say I have file.txt that is pipe-delimited, and I want to take a look at 10 non-missing observations from the 20th column to ensure they appear in the correct format. Would I use the awk command and how can I tell it only 10 observations? cut -d "|" -f 20 < file.txt|more is helpful for completely non-missing columns but this doesn't help for sparse columns.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/351902", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/221227/" ] }
351,916
Suppose file stores the pathname of a non-dir file. How can I get its parent directory? Why does the following way, appending /.. to its value, not work?

$ cd $file/..
cd: ./Tools/build.bat/..: No such file or directory

Thanks.
Assuming

$ file=./Tools/build.bat

With a POSIX compatible shell (including zsh):

$ echo "${file%/*}"
./Tools

With dirname :

$ echo "$(dirname -- "$file")"
./Tools

(at least GNU dirname takes options, so the -- is required in case the path starts with a dash.)
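One edge case worth knowing (easy to check yourself): if $file contains no slash at all, the two approaches differ, since ${file%/*} returns the name unchanged while dirname prints . :

$ file=build.bat
$ echo "${file%/*}"
build.bat
$ dirname -- "$file"
.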
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/351916", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
352,031
From what I had read some time back, it seems iwconfig is deprecated and the current method is:

$ sudo ifconfig wlan0 up

and

$ sudo ifconfig wlan0 down

But I couldn't find anything which tells the status of the wifi and lets me know which mode it is on, which AP it is attached to, how much data is being transferred, and so on and so forth, on the CLI.
The current (in 2017) methods are: ip for all network interfaces, including setting up and down:

ip link set wlan0 up
ip link set wlan0 down
ip help
ip link help
ip addr help

iw for wireless extensions (needs to be called as root):

iw dev
iw phy
iw wlan0 scan
iw wlan0 station dump
iw help

ifconfig and iwconfig are still supported with the appropriate packages, but some features are only available with ip and iw .
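For the specific status questions asked (which AP you are attached to, how much data has been transferred), these two commands cover it, assuming the interface is named wlan0 :

iw dev wlan0 link      # associated AP, signal strength, bitrate
ip -s link show wlan0  # RX/TX byte and packet counters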
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/352031", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
352,077
Running the following command:

$ df -h

Gives the following output:

Filesystem      Size  Used Avail Use% Mounted on
/dev/md2         91G   85G  1.2G  99% /home

Which means out of the 91 GiB total only 85 GiB are used, which should leave 6 GiB Avail (91 - 85 = 6). Why is Avail only 1.2 GiB? This question is explicitly about the contradiction between the Used - Size and the Avail column in df output, as opposed to a discrepancy between df and du output such as in this related question . In my case, there are no deleted files still in use on the filesystem.
By default, ext2, ext3 and ext4 filesystems reserve 5% of their capacity for use by the root user. This reduces fragmentation, and makes it less likely that the root user or any root-owned daemons will run out of disk space to perform important operations. More information for the reasons behind this reservation can be found among the answers to this related question . You can verify the size of the reservation with the tune2fs command:

tune2fs -l /dev/md2 | grep "Reserved block count:"

The reservation percentage can be changed using the -m option of the tune2fs command:

tune2fs -m 0 /dev/md2

The number of reserved blocks can be changed using the -r option of the tune2fs command:

tune2fs -r 0 /dev/md2

Reserved space is least useful on large filesystems with static content that is not related to the operating system. For such filesystems it is reasonable to reduce the reservation to zero. Filesystems that are better left with the default 5% reservation include those containing the directories / , /root , /var , and /tmp , which are often used by daemons and other operating system services to create temporary files or logs at runtime.
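You can sanity-check the numbers in the question against this reservation: 5% of 91 GiB is about 4.6 GiB, and 85 GiB used + 1.2 GiB available + ~4.6 GiB reserved ≈ 91 GiB, which accounts (within rounding) for the apparently missing space.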
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/352077", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/221368/" ] }
352,089
I tried finding an answer to this question, but got no luck so far: I have a script that runs some other scripts, and many of those other scripts have "set -x" in them, which makes them print every command they execute. I would like to get rid of that but retain the information if any of the scripts send the error message to stderr. So I can't simply write ./script 2>/dev/null Also, I don't have privileges to edit those other scripts, so I can't manually change the set option. I was thinking about logging everything from stderr to the separate file and filtering out the tracing commands, but maybe there is a simpler way?
With bash 4.1 and above, you can do

BASH_XTRACEFD=7 ./script.bash 7> /dev/null

(also works when bash is invoked as sh ). Basically, we're telling bash to output the xtrace output on file descriptor 7 instead of the default of 2, and redirect that file descriptor to /dev/null . The fd number is arbitrary. Use a fd above 2 that is not otherwise used in your script. If the shell you're entering this command in is bash or yash , you can even use a number above 9 (though you may run into problems if the file descriptor is used internally by the shell). If the shell you're calling that bash script from is zsh , you can also do:

(export BASH_XTRACEFD; ./script.bash {BASH_XTRACEFD}> /dev/null)

for the variable to be automatically assigned the first free fd above 9. For older versions of bash , another option, if the xtrace is turned on with set -x (as opposed to #! /bin/bash -x or set -o xtrace ) would be to redefine set as an exported function that does nothing when passed -x (though that would break the script if it (or any other bash script it invokes) used set to set the positional parameters). Like:

set()
  case $1 in
    (-x) return 0;;
    (-[!-]|"") builtin set "$@";;
    (*) echo >&2 That was a bad idea, try something else; builtin set "$@";;
  esac
export -f set
./script.bash

Another option is to add a DEBUG trap in a $BASH_ENV file that does set +x before every command.

echo 'trap "{ set +x; } 2>/dev/null" DEBUG' > ~/.no-xtrace
BASH_ENV=~/.no-xtrace ./script.bash

That won't work when set -x is done in a sub-shell though. As @ilkkachu said, provided you have write permission to any folder on the filesystem, you should at least be able to make a copy of the script and edit it. If there's nowhere you can write a copy of the script, or if it's not convenient to make and edit a new copy every time there's an update to the original script, you may still be able to do:

bash <(sed 's/set -x/set +x/g' ./script.bash)

That (and the copy approach) may not work properly if the script does anything fancy with $0 or special variables like $BASH_SOURCE (such as looking for files that are relative to the location of the script itself), so you may need to do some more editing like replace $0 with the path of the script...
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/352089", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/151203/" ] }
352,107
I heard about $TEMP and $TMP , but I think they are not standard for every distro. As far as I know, the best way to get the temp dir is just /tmp , is there any distro that won't work using that path?
$TMPDIR is more standard than both $TEMP and $TMP as it's mentioned by the POSIX standard. The /tmp directory is retained in POSIX.1-2008 to accommodate historical applications that assume its availability. Implementations are encouraged to provide suitable directory names in the environment variable TMPDIR and applications are encouraged to use the contents of TMPDIR for creating temporary files. Ref: http://pubs.opengroup.org/onlinepubs/9699919799/xrat/V4_xbd_chap10.html At least on macOS, $TMPDIR is not set to /tmp by default, but to something like /var/folders/4r/504v61kx02gczk_454db345c0000gn/T/ . /tmp is still available though, as a symbolic link to /private/tmp (for whatever reason). You may use

tmpdir="${TMPDIR:-/tmp}"

in a script, for example, to use $TMPDIR if it's set, or /tmp if it's not set (or empty). The non-standard mktemp utility will create a file or directory in $TMPDIR by default and output its name (but not on macOS, see below):

tmpfile=$(mktemp)
tmpdir=$(mktemp -d)

Check the manual for mktemp on your system to figure out how to use it. Not all implementations are the same. On macOS, because of reasons , you will have to give the mktemp utility a template with an explicit path:

tmpfile=$(mktemp "${TMPDIR:-/tmp}"/tmp.XXXXXXXX)
tmpdir=$(mktemp -d "${TMPDIR:-/tmp}"/tmp.XXXXXXXX)

The above commands would create a temporary file and directory (respectively) in $TMPDIR , or in /tmp if $TMPDIR is empty or if the variable is unset (this variable is by default set to the result of getconf DARWIN_USER_TEMP_DIR on macOS).
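A common companion idiom (a sketch; adjust the template name to taste) is to remove the temporary directory again when the script exits:

tmpdir=$(mktemp -d "${TMPDIR:-/tmp}"/myscript.XXXXXXXX) || exit 1
trap 'rm -rf -- "$tmpdir"' EXIT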
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/352107", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/221403/" ] }
352,110
I am looking at a script that has:

if [ "${PS1-}" ]; then

That trailing - bugs me a bit because it doesn't seem to be POSIX or Bash standard syntax. Is this some arcane syntax that has been around forever, or is it a typo? Any references to standards / docs would be appreciated. Normally I would code it:

if [ "$PS1" ]; then

Which is more correct, or is there a difference between them?
The variable expansion ${parameter:-word} will use the value of $parameter if it's set and non-null (not an empty string), otherwise it will use the string word . Omitting the : will not test if the value is empty, only whether it's unset or not. This means that ${PS1-} will expand to the value of $PS1 if it's set, but to an empty string if it's empty or unset. In this case, this is exactly the same as ${PS1:-} as the string after - is also empty. The difference between "${PS1-}" and "$PS1" is subtle, as @Rakesh Sharma notes: both will expand to the value of $PS1 , or to an empty string if it is unset. The exception is when set -u is active, in which case expanding unset variables would cause an error . The (empty) default value set by "${PS1-}" circumvents this, expanding an unset PS1 to the empty string without error. This is standard syntax ( originated in the Bourne shell in the late 70s ), as are a couple of other, similar expansions.
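The difference is easy to demonstrate (bash shown; the same applies to other POSIX shells):

bash -uc 'unset PS1; echo "$PS1"'     # fails: PS1: unbound variable
bash -uc 'unset PS1; echo "${PS1-}"'  # succeeds, prints an empty line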
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/352110", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5121/" ] }
352,122
I have a custom systemd service that runs during the first boot. If the user has no bootsplash I would like to write to the console and give some info on what's going on. Is there a way to do that from my service? Here's my systemd service:

[Unit]
Description=Prepare operator after [email protected] [email protected] [email protected] [email protected] [email protected] [email protected]=network-online.target
After=network.target network-online.target
OnFailure=emergency.target
OnFailureJobMode=replace-irreversibly

[Service]
Type=oneshot
ExecStart=/usr/bin/provision-operator

[Install]
WantedBy=multi-user.target
In man systemd.directives , you can search for "output" and find that StandardOutput= is documented in man systemd.exec . There you can find options including journal+console to send output to the systemd Journal and the system console. You might also try kmsg+console . According to the docs kmsg "connects standard output with the kernel log buffer which is accessible via dmesg(1),"
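Applied to the unit in the question, that just means adding one line to the [Service] section (a sketch):

[Service]
Type=oneshot
StandardOutput=journal+console
ExecStart=/usr/bin/provision-operator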
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/352122", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147816/" ] }
352,139
How to setup/configure the Arch Linux bootcd (live-CD, ISO) so I can login to it using an SSH client? And which password is by default set for the (automatic login) root account?
The default root password for the ISO distribution is blank. And by default you are not allowed to login with SSH using a blank password. Therefore two commands are necessary: passwd --To set a non blank password for the currently logged in user ('root' for liveCD). Enter the password twice . Before september 2021: systemctl start sshd.service --To start the ssh daemon. September 2021 and later: sshd is started by default. Now you can login from your client machine using ssh root@ip-address . PS Don't know the IP address? The live-CD includes commands ifconfig and ip address .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/352139", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17560/" ] }
352,185
AWK: Display variable width columns fields into fixed spaced Column fields Format in Unix. $ cat temp.txtQUEUE(XYZ1.REQ.YAM.ALIAS) TYPE(QCLUSTER) CLUSTER(MYCLUS) CLUSQMGR(BLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(XYZ4.REPL.YAM) TYPE(QCLUSTER) CLUSTER(MYSTER) CLUSQMGR(BLAHBLAHBLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(XYZ8.REQ.YAM) TYPE(QCLUSTER) CLUSTER(MYCTER) CLUSQMGR(BLAHBLAH) CLUSQT(QALIAS) DEFPSIST(NO) PUT(DISABLED)QUEUE(XYZ8.REPLY.YAM) TYPE(QCLUSTER) CLUSTER( ) CLUSQMGR(ABCD) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(KK.RAMAN.K.LQ) TYPE(QCLUSTER) CLUSTER(MYCLUSTER) CLUSQMGR() CLUSQT(QLOCAL) DEFPSIST(NO) PUT(ENABLED)QUEUE(KK.RAMAN.KATHPALIA) TYPE(QREMOTE) CLUSTER(MYCLUSTER) CLUSQMGR(ABCD) CLUSQT(QLOCAL) DEFPSIST(NO) PUT(ENABLED)QUEUE(KATHPLAIA.RAMAN) TYPE( ) CLUSTER( ) CLUSQMGR(ABCD) CLUSQT(QLOCAL) DEFPSIST(NO) PUT(ENABLED)QUEUE(XYZ8.REQ.EQUAL.LQ) TYPE(QCLUSTER) CLUSTER(MYCLUSTER) CLUSQMGR(BLAHBLAHBLAHBLAH) CLUSQT(QLOCAL) DEFPSIST(YES) PUT(ENABLED)QUEUE(XYZ9.RAMAN.EQUAL.LQ) TYPE(QL) CLUSTER(MYCLUSTER) CLUSQMGR(ABCD) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(XX10.REPL.EQUAL.ALIAS) TYPE(QA) CLUSTER(YOURC) CLUSQMGR(ABCD) CLUSQT(QALIAS) DEFPSIST(YES) PUT(DISABLED)QUEUE(XX10.KATHPLAIA.EQUAL.LOCAL) TYPE(LOCALQ) CLUSTER(MYCLUSTER) CLUSQMGR(BLAHBLAHBLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(XX11.RAMAN.EQUAL.LOCAL) TYPE(QCLUSTER) CLUSTER(MYCLUS) CLUSQMGR(BLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(XX11.REQ.LOCAL) TYPE(QCLUSTER) CLUSTER(MYCLUSTER) CLUSQMGR(ABCD) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(RAMAN_KATHPLIA_000_11.REQ.EQUAL.REMOTE.QUEUE) TYPE(QCLUSTER) CLUSTER(MYCLUS) CLUSQMGR(BLAHBLAHBLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) PUT(DISABLED)QUEUE(XYZ2.REQ.RAMAN.REMOTE.QUEUE) TYPE(QLOCAL) CLUSTER(STER) CLUSQMGR(BLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(XYZ2.REQ.EQUAL.REMOTE.QUEUE) TYPE(QCLUSTER) CLUSTER( ) CLUSQMGR(BLAHBLAHBLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED) Expected: A neat column display This can be achieved by "column" command: $ cat temp.txt | column -tQUEUE(XYZ1.REQ.YAM.ALIAS) TYPE(QCLUSTER) CLUSTER(MYCLUS) CLUSQMGR(BLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(XYZ4.REPL.YAM) TYPE(QCLUSTER) CLUSTER(MYSTER) CLUSQMGR(BLAHBLAHBLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(XYZ8.REQ.YAM) TYPE(QCLUSTER) CLUSTER(MYCTER) CLUSQMGR(BLAHBLAH) CLUSQT(QALIAS) DEFPSIST(NO) PUT(DISABLED)QUEUE(XYZ8.REPLY.YAM) TYPE(QCLUSTER) CLUSTER( ) CLUSQMGR(ABCD) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(KK.RAMAN.K.LQ) TYPE(QCLUSTER) CLUSTER(MYCLUSTER) CLUSQMGR() CLUSQT(QLOCAL) DEFPSIST(NO) PUT(ENABLED)QUEUE(KK.RAMAN.KATHPALIA) TYPE(QREMOTE) CLUSTER(MYCLUSTER) CLUSQMGR(ABCD) CLUSQT(QLOCAL) DEFPSIST(NO) PUT(ENABLED)QUEUE(KATHPLAIA.RAMAN) TYPE( ) CLUSTER( ) CLUSQMGR(ABCD) CLUSQT(QLOCAL) DEFPSIST(NO) PUT(ENABLED)QUEUE(XYZ8.REQ.EQUAL.LQ) TYPE(QCLUSTER) CLUSTER(MYCLUSTER) CLUSQMGR(BLAHBLAHBLAHBLAH) CLUSQT(QLOCAL) DEFPSIST(YES) PUT(ENABLED)QUEUE(XYZ9.RAMAN.EQUAL.LQ) TYPE(QL) CLUSTER(MYCLUSTER) CLUSQMGR(ABCD) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(XX10.REPL.EQUAL.ALIAS) TYPE(QA) CLUSTER(YOURC) CLUSQMGR(ABCD) CLUSQT(QALIAS) DEFPSIST(YES) PUT(DISABLED)QUEUE(XX10.KATHPLAIA.EQUAL.LOCAL) TYPE(LOCALQ) CLUSTER(MYCLUSTER) CLUSQMGR(BLAHBLAHBLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(XX11.RAMAN.EQUAL.LOCAL) TYPE(QCLUSTER) CLUSTER(MYCLUS) CLUSQMGR(BLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(XX11.REQ.LOCAL) TYPE(QCLUSTER) CLUSTER(MYCLUSTER) CLUSQMGR(ABCD) CLUSQT(QALIAS) 
DEFPSIST(YES) PUT(ENABLED)QUEUE(RAMAN_KATHPLIA_000_11.REQ.EQUAL.REMOTE.QUEUE) TYPE(QCLUSTER) CLUSTER(MYCLUS) CLUSQMGR(BLAHBLAHBLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) PUT(DISABLED)QUEUE(XYZ2.REQ.RAMAN.REMOTE.QUEUE) TYPE(QLOCAL) CLUSTER(STER) CLUSQMGR(BLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(XYZ2.REQ.EQUAL.REMOTE.QUEUE) TYPE(QCLUSTER) CLUSTER( ) CLUSQMGR(BLAHBLAHBLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED) Problems: Certain AIX and Solaris hosts doesn't have "column" command. So can't use "column" universally. Even with usage of column: (a) ( ) expands to ( ) (b) More space than necessary is inserted between fields making few rows to fold into next line thus messing up formatting (19 Inch display monitor). Questions: Using awk, Problem 2 re-surfaces (or worse for few lines). Please see below. Can someone suggest a better awk statement ? Also interested to see if Problem 2 can be resolved using "column" command ? $ cat temp.txt | awk '{printf "%-55s %-15s %-20s %-35s %-15s %-15s %-15s \n", $1,$2,$3,$4,$5,$6,$7}'QUEUE(XYZ1.REQ.YAM.ALIAS) TYPE(QCLUSTER) CLUSTER(MYCLUS) CLUSQMGR(BLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(XYZ4.REPL.YAM) TYPE(QCLUSTER) CLUSTER(MYSTER) CLUSQMGR(BLAHBLAHBLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(XYZ8.REQ.YAM) TYPE(QCLUSTER) CLUSTER(MYCTER) CLUSQMGR(BLAHBLAH) CLUSQT(QALIAS) DEFPSIST(NO) PUT(DISABLED)QUEUE(XYZ8.REPLY.YAM) TYPE(QCLUSTER) CLUSTER( ) CLUSQMGR(ABCD) CLUSQT(QALIAS) DEFPSIST(YES)QUEUE(KK.RAMAN.K.LQ) TYPE(QCLUSTER) CLUSTER(MYCLUSTER) CLUSQMGR() CLUSQT(QLOCAL) DEFPSIST(NO) PUT(ENABLED)QUEUE(KK.RAMAN.KATHPALIA) TYPE(QREMOTE) CLUSTER(MYCLUSTER) CLUSQMGR(ABCD) CLUSQT(QLOCAL) DEFPSIST(NO) PUT(ENABLED)QUEUE(KATHPLAIA.RAMAN) TYPE( ) CLUSTER( ) CLUSQMGR(ABCD) CLUSQT(QLOCAL)QUEUE(XYZ8.REQ.EQUAL.LQ) TYPE(QCLUSTER) CLUSTER(MYCLUSTER) CLUSQMGR(BLAHBLAHBLAHBLAH) CLUSQT(QLOCAL) DEFPSIST(YES) PUT(ENABLED)QUEUE(XYZ9.RAMAN.EQUAL.LQ) TYPE(QL) CLUSTER(MYCLUSTER) CLUSQMGR(ABCD) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(XX10.REPL.EQUAL.ALIAS) TYPE(QA) CLUSTER(YOURC) CLUSQMGR(ABCD) CLUSQT(QALIAS) DEFPSIST(YES) PUT(DISABLED)QUEUE(XX10.KATHPLAIA.EQUAL.LOCAL) TYPE(LOCALQ) CLUSTER(MYCLUSTER) CLUSQMGR(BLAHBLAHBLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(XX11.RAMAN.EQUAL.LOCAL) TYPE(QCLUSTER) CLUSTER(MYCLUS) CLUSQMGR(BLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(XX11.REQ.LOCAL) TYPE(QCLUSTER) CLUSTER(MYCLUSTER) CLUSQMGR(ABCD) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(RAMAN_KATHPLIA_000_11.REQ.EQUAL.REMOTE.QUEUE) TYPE(QCLUSTER) CLUSTER(MYCLUS) CLUSQMGR(BLAHBLAHBLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) PUT(DISABLED)QUEUE(XYZ2.REQ.RAMAN.REMOTE.QUEUE) TYPE(QLOCAL) CLUSTER(STER) CLUSQMGR(BLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) PUT(ENABLED)QUEUE(XYZ2.REQ.EQUAL.REMOTE.QUEUE) TYPE(QCLUSTER) CLUSTER( ) CLUSQMGR(BLAHBLAHBLAHBLAH) CLUSQT(QALIAS) DEFPSIST(YES) Field info: All fields are bound and don't expand beyond certain length. - Max Width field 1 = 55- Max Width field 2 = 15- Max Width field 3 = 20- Max Width field 4 = 30- Max Width field 5 = 15- Max Width field 6 = 15- Max Width field 7 = 15 Limitation : I want to optimize the display for least sized monitor in organisation == 19 Inches So, I want to minimize the gap between columns to a single space. Possibly, checkered columns (like MS Excel)
I would just replace the troublesome ( ) before processing: sed 's/( )/()/g' temp.txt | awk '{printf "%-55s %-15s %-20s %-35s %-15s %-15s %-15s \n", $1,$2,$3,$4,$5,$6,$7}' If the number of spaces varies, use sed 's/( \+)/()/g' instead.
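Note that \+ is a GNU sed extension; since the question mentions AIX and Solaris hosts, the portable BRE spelling of "one or more spaces" is \{1,\} :

sed 's/( \{1,\})/()/g' temp.txt | awk '{printf "%-55s %-15s %-20s %-35s %-15s %-15s %-15s \n", $1,$2,$3,$4,$5,$6,$7}'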
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/352185", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/97169/" ] }
352,258
Similar to this question I a am interested in completely ignoring a drive, but in my case it is one drive which is exposed to the system as a SCSI drive. I have two drives from 21 drives in the server failing and failing: [2524080.689492] scsi 0:0:90900:0: Direct-Access ATA ST3000DM001-1CH1 CC43 PQ: 0 ANSI: 6[2524080.689502] scsi 0:0:90900:0: SATA: handle(0x000d), sas_addr(0x5003048001f298cf), phy(15), device_name(0x0000000000000000)[2524080.689506] scsi 0:0:90900:0: SATA: enclosure_logical_id(0x5003048001f298ff), slot(3)[2524080.689594] scsi 0:0:90900:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)[2524080.690671] sd 0:0:90900:0: tag#1 CDB: Test Unit Ready 00 00 00 00 00 00[2524080.690680] mpt2sas_cm0: sas_address(0x5003048001f298cf), phy(15)[2524080.690683] mpt2sas_cm0: enclosure_logical_id(0x5003048001f298ff),slot(3)[2524080.690686] mpt2sas_cm0: handle(0x000d), ioc_status(success)(0x0000), smid(17)[2524080.690695] mpt2sas_cm0: request_len(0), underflow(0), resid(0)[2524080.690698] mpt2sas_cm0: tag(65535), transfer_count(0), sc->result(0x00000000)[2524080.690701] mpt2sas_cm0: scsi_status(check condition)(0x02), scsi_state(autosense valid )(0x01)[2524080.690704] mpt2sas_cm0: [sense_key,asc,ascq]: [0x06,0x29,0x00], count(18)[2524080.690728] sd 0:0:90900:0: Attached scsi generic sg0 type 0[2524080.691269] sd 0:0:90900:0: [sdb] 5860533168 512-byte logical blocks: (3.00 TB/2.73 TiB)[2524080.691285] sd 0:0:90900:0: [sdb] 4096-byte physical blocks[2524111.163712] sd 0:0:90900:0: attempting task abort! scmd(ffff880869121800)[2524111.163722] sd 0:0:90900:0: tag#2 CDB: Mode Sense(6) 1a 00 3f 00 04 00[2524111.163729] scsi target0:0:90900: handle(0x000d), sas_address(0x5003048001f298cf), phy(15)[2524111.163733] scsi target0:0:90900: enclosure_logical_id(0x5003048001f298ff), slot(3)[2524111.442310] sd 0:0:90900:0: device_block, handle(0x000d)[2524113.442331] sd 0:0:90900:0: device_unblock and setting to running, handle(0x000d)[2524114.939280] sd 0:0:90900:0: task abort: SUCCESS scmd(ffff880869121800)[2524114.939358] sd 0:0:90900:0: [sdb] Write Protect is off[2524114.939366] sd 0:0:90900:0: [sdb] Mode Sense: 00 00 00 00[2524114.939444] sd 0:0:90900:0: [sdb] Asking for cache data failed[2524114.939501] sd 0:0:90900:0: [sdb] Assuming drive cache: write through[2524114.940380] sd 0:0:90900:0: [sdb] Read Capacity(16) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK[2524114.940387] sd 0:0:90900:0: [sdb] Sense not available.[2524114.940566] sd 0:0:90900:0: [sdb] Read Capacity(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK[2524114.940570] sd 0:0:90900:0: [sdb] Sense not available.[2524114.940778] sd 0:0:90900:0: [sdb] Attached SCSI disk[2524114.984489] mpt2sas_cm0: removing handle(0x000d), sas_addr(0x5003048001f298cf)[2524114.984494] mpt2sas_cm0: removing : enclosure logical id(0x5003048001f298ff), slot(3)[2524134.939383] mpt2sas_cm0: log_info(0x31111000): originator(PL), code(0x11), sub_code(0x1000)[2524134.940116] mpt2sas_cm0: removing handle(0x000e), sas_addr(0x5003048001f298d0)[2524134.940122] mpt2sas_cm0: removing enclosure logical id(0x5003048001f298ff), slot(4)[2524153.940404] scsi 0:0:90902:0: Direct-Access ATA ST3000DM001-1CH1 CC43 PQ: 0 ANSI: 6[2524153.940418] scsi 0:0:90902:0: SATA: handle(0x000d), sas_addr(0x5003048001f298cf), phy(15), device_name(0x0000000000000000)[2524153.940423] scsi 0:0:90902:0: SATA: enclosure_logical_id(0x5003048001f298ff), slot(3)[2524153.940699] scsi 0:0:90902:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), 
sw_preserve(y)[2524153.942194] sd 0:0:90902:0: tag#0 CDB: Test Unit Ready 00 00 00 00 00 00[2524153.942205] mpt2sas_cm0: sas_address(0x5003048001f298cf), phy(15)[2524153.942208] mpt2sas_cm0: enclosure_logical_id(0x5003048001f298ff),slot(3)[2524153.942212] mpt2sas_cm0: handle(0x000d), ioc_status(success)(0x0000), smid(12)[2524153.942214] mpt2sas_cm0: request_len(0), underflow(0), resid(0)[2524153.942217] mpt2sas_cm0: tag(65535), transfer_count(0), sc->result(0x00000000)[2524153.942220] mpt2sas_cm0: scsi_status(check condition)(0x02), scsi_state(autosense valid )(0x01)[2524153.942223] mpt2sas_cm0: [sense_key,asc,ascq]: [0x06,0x29,0x00], count(18)[2524153.942361] sd 0:0:90902:0: Attached scsi generic sg0 type 0[2524153.942833] sd 0:0:90902:0: [sdb] 5860533168 512-byte logical blocks: (3.00 TB/2.73 TiB)[2524153.942840] sd 0:0:90902:0: [sdb] 4096-byte physical blocks[2524154.190159] scsi 0:0:90903:0: Direct-Access ATA ST3000DM001-1CH1 CC43 PQ: 0 ANSI: 6[2524154.190174] scsi 0:0:90903:0: SATA: handle(0x0022), sas_addr(0x5003048001ec55ed), phy(13), device_name(0x0000000000000000)[2524154.190179] scsi 0:0:90903:0: SATA: enclosure_logical_id(0x5003048001ec55ff), slot(1)[2524154.190368] scsi 0:0:90903:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)[2524154.191634] sd 0:0:90903:0: tag#1 CDB: Test Unit Ready 00 00 00 00 00 00[2524154.191639] mpt2sas_cm0: sas_address(0x5003048001ec55ed), phy(13)[2524154.191642] mpt2sas_cm0: enclosure_logical_id(0x5003048001ec55ff),slot(1)[2524154.191645] mpt2sas_cm0: handle(0x0022), ioc_status(success)(0x0000), smid(12)[2524154.191648] mpt2sas_cm0: request_len(0), underflow(0), resid(0)[2524154.191651] mpt2sas_cm0: tag(65535), transfer_count(0), sc->result(0x00000000)[2524154.191654] mpt2sas_cm0: scsi_status(check condition)(0x02), scsi_state(autosense valid )(0x01)[2524154.191657] mpt2sas_cm0: [sense_key,asc,ascq]: [0x06,0x29,0x00], count(18)[2524154.191800] sd 0:0:90903:0: Attached scsi generic sg3 type 0[2524154.192211] sd 0:0:90903:0: [sdd] 5860533168 512-byte logical blocks: (3.00 TB/2.73 TiB)[2524154.192219] sd 0:0:90903:0: [sdd] 4096-byte physical blocks This is in our case an old server we have decided not to upgrade/fix. And I am now thinking about even not removing old drives out, just leaving them in, making array smaller, and disabling them. The array is not full, and we are using it only as an additional backup location for some other servers. So, me being lazy and not wanting to go to a server room, is there a way to just disable those drives and move on? 
:-) More information about the system: lspci -nn -v -s 05:00.0 : 05:00.0 Serial Attached SCSI controller [0107]: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 [1000:0087] (rev 05) Subsystem: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 [1000:3020] Flags: bus master, fast devsel, latency 0, IRQ 29 I/O ports at 7000 [size=256] Memory at df640000 (64-bit, non-prefetchable) [size=64K] Memory at df600000 (64-bit, non-prefetchable) [size=256K] Expansion ROM at df500000 [disabled] [size=1M] Capabilities: [50] Power Management version 3 Capabilities: [68] Express Endpoint, MSI 00 Capabilities: [d0] Vital Product Data Capabilities: [a8] MSI: Enable- Count=1/1 Maskable- 64bit+ Capabilities: [c0] MSI-X: Enable+ Count=16 Masked- Capabilities: [100] Advanced Error Reporting Capabilities: [1e0] #19 Capabilities: [1c0] Power Budgeting <?> Capabilities: [190] #16 Capabilities: [148] Alternative Routing-ID Interpretation (ARI) Kernel driver in use: mpt3sas Kernel modules: mpt3sas lsscsi -v : [0:0:3:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdc dir: /sys/bus/scsi/devices/0:0:3:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:2/end_device-0:0:2/target0:0:3/0:0:3:0][0:0:6:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdf dir: /sys/bus/scsi/devices/0:0:6:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:5/end_device-0:0:5/target0:0:6/0:0:6:0][0:0:7:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdg dir: /sys/bus/scsi/devices/0:0:7:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:6/end_device-0:0:6/target0:0:7/0:0:7:0][0:0:8:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdh dir: /sys/bus/scsi/devices/0:0:8:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:7/end_device-0:0:7/target0:0:8/0:0:8:0][0:0:11:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdi dir: /sys/bus/scsi/devices/0:0:11:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:10/end_device-0:0:10/target0:0:11/0:0:11:0][0:0:12:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdj dir: /sys/bus/scsi/devices/0:0:12:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:11/end_device-0:0:11/target0:0:12/0:0:12:0][0:0:13:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdk dir: /sys/bus/scsi/devices/0:0:13:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:12/end_device-0:0:12/target0:0:13/0:0:13:0][0:0:15:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdl dir: /sys/bus/scsi/devices/0:0:15:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:14/end_device-0:0:14/target0:0:15/0:0:15:0][0:0:16:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdm dir: /sys/bus/scsi/devices/0:0:16:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:15/end_device-0:0:15/target0:0:16/0:0:16:0][0:0:18:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdn dir: /sys/bus/scsi/devices/0:0:18:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:17/end_device-0:0:17/target0:0:18/0:0:18:0][0:0:20:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdo dir: /sys/bus/scsi/devices/0:0:20:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:19/end_device-0:0:19/target0:0:20/0:0:20:0][0:0:21:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdp dir: /sys/bus/scsi/devices/0:0:21:0 
[/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:20/end_device-0:0:20/target0:0:21/0:0:21:0][0:0:22:0] enclosu LSI CORP SAS2X36 0717 - dir: /sys/bus/scsi/devices/0:0:22:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:21/end_device-0:0:21/target0:0:22/0:0:22:0][0:0:23:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdq dir: /sys/bus/scsi/devices/0:0:23:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:1/expander-0:1/port-0:1:1/end_device-0:1:1/target0:0:23/0:0:23:0][0:0:24:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdr dir: /sys/bus/scsi/devices/0:0:24:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:1/expander-0:1/port-0:1:2/end_device-0:1:2/target0:0:24/0:0:24:0][0:0:25:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sds dir: /sys/bus/scsi/devices/0:0:25:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:1/expander-0:1/port-0:1:3/end_device-0:1:3/target0:0:25/0:0:25:0][0:0:26:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdt dir: /sys/bus/scsi/devices/0:0:26:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:1/expander-0:1/port-0:1:4/end_device-0:1:4/target0:0:26/0:0:26:0][0:0:28:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdu dir: /sys/bus/scsi/devices/0:0:28:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:1/expander-0:1/port-0:1:6/end_device-0:1:6/target0:0:28/0:0:28:0][0:0:30:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdw dir: /sys/bus/scsi/devices/0:0:30:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:1/expander-0:1/port-0:1:8/end_device-0:1:8/target0:0:30/0:0:30:0][0:0:31:0] disk ATA ST3000DM001-1CH1 CC43 /dev/sdx dir: /sys/bus/scsi/devices/0:0:31:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:1/expander-0:1/port-0:1:9/end_device-0:1:9/target0:0:31/0:0:31:0][0:0:34:0] enclosu LSI CORP SAS2X28 0717 - dir: /sys/bus/scsi/devices/0:0:34:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:1/expander-0:1/port-0:1:12/end_device-0:1:12/target0:0:34/0:0:34:0][0:0:25856:0]disk ATA ST3000DM001-1CH1 CC43 /dev/sda dir: /sys/bus/scsi/devices/0:0:25856:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:14357/end_device-0:0:14357/target0:0:25856/0:0:25856:0][0:0:98760:0]disk ATA ST3000DM001-1CH1 CC43 - dir: /sys/bus/scsi/devices/0:0:98760:0 [/sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0/port-0:0/expander-0:0/port-0:0:60931/end_device-0:0:60931/target0:0:98760/0:0:98760:0][2:0:0:0] disk ATA PLEXTOR PX-128M5 1.00 /dev/sdy dir: /sys/bus/scsi/devices/2:0:0:0 [/sys/devices/pci0000:00/0000:00:1f.2/ata2/host2/target2:0:0/2:0:0:0] lsscsi -Hv : [0] mpt2sas dir: /sys/class/scsi_host//host0 device dir: /sys/devices/pci0000:00/0000:00:03.0/0000:05:00.0/host0[1] ahci dir: /sys/class/scsi_host//host1 device dir: /sys/devices/pci0000:00/0000:00:1f.2/ata1/host1[2] ahci dir: /sys/class/scsi_host//host2 device dir: /sys/devices/pci0000:00/0000:00:1f.2/ata2/host2[3] ahci dir: /sys/class/scsi_host//host3 device dir: /sys/devices/pci0000:00/0000:00:1f.2/ata3/host3[4] ahci dir: /sys/class/scsi_host//host4 device dir: /sys/devices/pci0000:00/0000:00:1f.2/ata4/host4[5] ahci dir: /sys/class/scsi_host//host5 device dir: /sys/devices/pci0000:00/0000:00:1f.2/ata5/host5[6] ahci dir: /sys/class/scsi_host//host6 device dir: /sys/devices/pci0000:00/0000:00:1f.2/ata6/host6 smp_discover /dev/bsg/expander-0:0 : phy 0:S:attached:[500605b00507dd20:03 i(SSP+STP+SMP)] 6 Gbps phy 1:S:attached:[500605b00507dd20:02 
i(SSP+STP+SMP)] 6 Gbps phy 2:S:attached:[500605b00507dd20:01 i(SSP+STP+SMP)] 6 Gbps phy 3:S:attached:[500605b00507dd20:00 i(SSP+STP+SMP)] 6 Gbps phy 12:U:attached:[5003048001f298cc:00 t(SATA)] 6 Gbps phy 13:U:attached:[5003048001f298cd:00 t(SATA)] 6 Gbps phy 14:U:attached:[5003048001f298ce:00 t(SATA)] 6 Gbps phy 17:U:attached:[5003048001f298d1:00 t(SATA)] 6 Gbps phy 19:U:attached:[5003048001f298d3:00 t(SATA)] 6 Gbps phy 20:U:attached:[5003048001f298d4:00 t(SATA)] 6 Gbps phy 21:U:attached:[5003048001f298d5:00 t(SATA)] 6 Gbps phy 22:U:attached:[5003048001f298d6:00 t(SATA)] 6 Gbps phy 23:U:attached:[5003048001f298d7:00 t(SATA)] 6 Gbps phy 25:U:attached:[5003048001f298d9:00 t(SATA)] 6 Gbps phy 26:U:attached:[5003048001f298da:00 t(SATA)] 6 Gbps phy 27:U:attached:[5003048001f298db:00 t(SATA)] 6 Gbps phy 28:U:attached:[5003048001f298dc:00 t(SATA)] 6 Gbps phy 29:U:attached:[5003048001f298dd:00 t(SATA)] 6 Gbps phy 31:U:attached:[5003048001f298df:00 t(SATA)] 6 Gbps phy 32:U:attached:[5003048001f298e0:00 t(SATA)] 6 Gbps phy 33:U:attached:[5003048001f298e1:00 t(SATA)] 6 Gbps phy 34:U:attached:[5003048001f298e2:00 t(SATA)] 6 Gbps phy 35:U:attached:[5003048001f298e3:00 t(SATA)] 6 Gbps phy 36:D:attached:[5003048001f298fd:00 V i(SSP+SMP) t(SSP)] 6 Gbps
The very high SCSI device numbers ( scsi 0:0:90903:0 ) show that there's a problem: in this case, the hardware keeps dropping & re-initializing the drive. The MPT SAS hardware does most of the re-initializing itself here, so we can't entirely control that from the Kernel. Separately, you mention having 21 drives, so they are probably behind one or more SAS expanders. The question then becomes: is it possible, in software, to disable a port on a SAS expander? If the expander actually supports it (I think it was optional in the standard), then yes! The package in question is smp_utils ( sg3_utils will also be helpful). What you want is:

1. Figure out the expander device per the manpage above (probably ls /dev/bsg/expand* )
2. Confirm the faulty disks are attached to the phys from the dmesg: smp_discover /dev/bsg/expander-...
3. Disable the PHYs, in the form of smp_phy_control --phy=NN --op=di /dev/bsg/expander-...

Expanded for your case:

smp_phy_control --phy=13 --op=di /dev/bsg/expander-0:0
smp_phy_control --phy=15 --op=di /dev/bsg/expander-0:0

The phy numbers were already in your output: 13 , 15 , but you might want to confirm them using smp_discover .
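To undo it later, my recollection (verify against the smp_phy_control man page before relying on it) is that a link reset brings a disabled phy back up:

smp_phy_control --phy=13 --op=lr /dev/bsg/expander-0:0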
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/352258", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14710/" ] }
352,314
If I understand correctly file permissions have an associated 3 digit number which specify read/write/execute permission. The umask value is a default 'mask' which is subtracted from the default value. So for a umask value of 0022 the default value for something that would be 777 would become 755? Is this correct and if so, what is the first 0 in the umask value?
The first digit 0 is not in use in your example. The umask is read from right to left, and the leading zero is ignored when it is not needed. It can however be used to set special permissions, such as the sticky bit , Set GID , and Set UID , as shown below.

0755 - none of the special bits set
1755 - sticky bit set
2755 - SGID bit set
4755 - SUID bit set

You are correct that a umask of 0022 will mask a default 777 (directory) permission to become 755 on newly created directories. The octal numbering works similarly for each of the three sets: user, group, world/other. The read/write/execute rwx values are represented in octal form with the corresponding values, which can total a maximum of 7:

4 - Read
2 - Write
1 - Execute

So for 0755: 0 is ignored. 7 (4+2+1) equals read, write, and execute for the user /owner. 5 (4+1) equals read and execute for the group , and the remaining 5 (also 4+1) gives read and execute permissions to other /world.
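A quick way to see the mask in action (the exact ls output below is illustrative and will vary); note that newly created regular files start from 666 rather than 777, so the same umask yields 644 for files:

$ umask 0022
$ mkdir newdir; touch newfile
$ ls -ld newdir newfile
drwxr-xr-x 2 user user 4096 Mar 16 12:00 newdir
-rw-r--r-- 1 user user    0 Mar 16 12:00 newfile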
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/352314", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16792/" ] }
352,316
I was wondering if there's a way to find out the default shell of the current user within a shell script? Use case: I am working on a script that sets an alias for a command, and this alias is set within a shell script.

!# /bin/bash
alias = 'some command to set the alias'

There's logic in the script where it tries to find the default shell of the user that executes the script and adds this alias in the respective ~/.bashrc or ~/.zshrc file. But as I am adding a shebang at the front of the script and explicitly asking it to use bash, answers posted here always return bash as expected, although I am executing this script in a ZSH terminal. Is there a way to get the shell type where the script is executed regardless of the shebang set? I am looking for a solution that works on both Mac and all the Linux-based distros.
The environment variable, SHELL would always expands to the default login shell of the invoking user (gets the value from /etc/passwd ). For any other given user, you need to do some processing with /etc/passwd , here is a simple awk snippet: awk -F: -v user="foobar" '$1 == user {print $NF}' /etc/passwd Replace foobar with the actual username. If you have ldap (or something similar in place), use getent to get the database instead of directly parsing /etc/passwd : getent passwd | awk -F: -v user="foobar" '$1 == user {print $NF}' or cleanest approach, let getent do the parsing (thanks to @Kusalananda): getent passwd foobar | awk -F: '{print $NF}'
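The question also asks about macOS: there getent is not available and local accounts are not kept in /etc/passwd , so query Directory Services instead (a sketch, untested):

dscl . -read "/Users/$USER" UserShell | awk '{print $2}'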
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/352316", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/221559/" ] }
352,372
I bought cheap 2 TB HDDs (60 € each) and want to check whether they return the data they were fed when reading before using them. I checked some cheap thumb drives by copying large files I had lying around to them and checking the hashes of the data they gave back (and found ones which just throw data away after their actual storage capacity is exhausted). Unfortunately, I don't have any 2 TB files lying around. I now want to generate 2 TB of pseudorandom data, write it to the disks, and take a hash of the disks. I then want to write the same data directly to the hash function and get the hash it should produce this way. The pseudorandom function doesn't have to be cryptographically secure in any way, it just needs to produce data with high entropy fast. If I write a script which just hashes a variable containing a number, prints the hash to stdout, increments the variable, and repeats, the data rate is way too slow, even when using a fast CPU. Like 5 orders of magnitude too slow (not even 60 kByte/s). Now, I could attempt to do this with tee but that seems like a really bad idea and I can't just reproduce the same data over and over again. Ideally, I'd pass some short argument (a number, a string, I don't care) to the program and get an arbitrarily large amount of data out at its stdout, and that data is the same on each call.
Well, most people just go with badblocks ... Otherwise, just encrypt zeroes. Encryption does exactly what you want. Encrypted zeroes look like random data. Decrypting random data turns it back into zeroes. It's deterministic, reversible so as long as you know the key.

cryptsetup open --type plain --cipher aes-xts-plain64 /dev/yourdisk cryptodisk
shred -n 0 -z -v /dev/mapper/cryptodisk   # overwrites everything
cmp /dev/zero /dev/mapper/cryptodisk      # byte-by-byte comparison

This should utilize full disk speed on a modern system with AES-NI. Also kind of works for just piping (without being backed by real storage)

truncate -s 1E exabyte_of_zero
losetup --find --show --read-only exabyte_of_zero
cryptsetup open --type plain --cipher aes-xts-plain64 --readonly /dev/loop4 loopcrypt
cat /dev/mapper/loopcrypt | something_that_wanted_random_data

or if we're still writing to a disk and comparing

cat /dev/mapper/loopcrypt > /dev/sdx   # overwrites until no space left on device
cmp /dev/mapper/loopcrypt /dev/sdx     # compares until EOF on /dev/sdx OR loopcrypt and sdx differ at byte X

Unlike a PRNG this can also be used to start comparing data somewhere in the middle of the file. With a traditional PRNG you have to re-generate it all over again to reach back to whatever position you were interested in. Of course, you could just make a random seed based on offset or something...
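Since the answer opens with badblocks : its destructive write-mode test does the same kind of write-then-verify pass for you (it wipes the disk, so only run it on an empty drive):

badblocks -b 4096 -wsv /dev/sdX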
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/352372", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147785/" ] }
352,414
I have a function in a bash script: message_offset which is used to print the status of a bash script. i.e. you would call it passing a message into it and a status, like this

message_offset "install font library" "[ OK ]"

and it would print into the terminal, where printf 's %*s format is used to always set the rightmost character of [ OK ] at 80 columns wide, e.g. output would be

install font library                                                  [ OK ]
update configuration file on server                                  [ ERR ]
                                                                           ^
                                                                           |
                                                                always at 80

If echo was used output would look like this

install font library [ OK ]
update configuration file on server [ ERR ]

code:

#!/usr/bin/env bash

function message_offset() {
    local message="$1"
    local status="$2"
    # compensate for the message length by reducing the offset
    # by the length of the message,
    (( offset = 80 - ${#message} ))
    # add a $(tput sgr0) to the end to "exit attributes" whether a color was
    # set or not
    printf "%s%*s%s" "${message}" 80 "$status" "$(tput sgr0)"
}

this all works ok, until I try to use tput to add some color sequences into the string, i.e. to make "[ ERR ]" red. It seems that the printf "%*s" formatting is counting the tput character sequences when it's setting the offset, so if I call the function like this

message_offset "update configuration file on server" "$(tput setaf 1)[ ERR ]"

the output will look something like:

install font library                                                  [ OK ]
update configuration file on server                            [ ERR ]

because printf "%*s" is saying hey this string has got all the "[ ERR ]" characters, plus the "$(tput setaf 1)" chars, but obviously the "$(tput setaf 1)" chars are not printed, so they don't actually affect the padding. Is there a way I can add color to the "status" messages, and also use the tput style color sequences?
You're making this a lot more complicated than it should be. You can handle alignment with $message and not care about the width of ANSI sequences:

#! /usr/bin/env bash
message() {
    [ x"$2" = xOK ] && color=2 || color=1
    let offset=$(tput cols)-4-${#2}
    printf "%-*s[ %s%s%s ]\n" $offset "$1" "$(tput setaf "$color")" "$2" "$(tput sgr0)"
}

message "install font library" "OK"
message "update configuration file on server" "ERR"

Edit: Please note that most printf(1) implementations don't cope well with lengths calculations for multibyte charsets. So if you want to print messages with accented characters in UTF-8 you might need a different approach. shrug
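If you'd rather keep the calling convention from the question (a status string that already contains the colour escapes), one sketch is to measure the status with the SGR sequences stripped and widen the printf field by the invisible bytes. This assumes the only escapes present are SGR sequences of the form ESC[...m:

message_offset() {
    local message=$1 status=$2 plain
    plain=$(printf '%s' "$status" | sed $'s/\e\\[[0-9;]*m//g')   # visible part only
    printf '%s%*s%s\n' "$message" $(( 80 - ${#message} + ${#status} - ${#plain} )) "$status" "$(tput sgr0)"
}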
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/352414", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106525/" ] }
352,426
I have installed powerline. But my prompt looks like so instead of arrow shaped: In vim, it looks ok: I have seen the issue here: https://github.com/powerline/powerline/issues/1697 . But the solution there doesn't work for me. There is a similar question but his question was to achieve it without installing powerline here: https://stackoverflow.com/questions/32443522/triangular-background-for-bash-ps1-prompt I am using Ubuntu 16.04. How do I get it right? Edit: I have tried the following ways: 1) Used powerline fonts but made no difference. 2) Installation was done using pip3. It was installed under python3.5 directory. Since it is not giving the desired result, I have uninstalled and installed it using pip. But the installation directory remained same i.e. python3.5 and the result also remained the same. I then tried installing with python2.7 -m pip install powerline-status and it installed under python2.7 directory and it resulted in the same.
I have fixed it by reconfiguring my locale. I ran locale and it gave me this: $ localeLANG=en_IN.UTF-8LANGUAGE=en_IN:enLC_CTYPE="en_IN.UTF-8"LC_NUMERIC="en_IN.UTF-8"LC_TIME="en_IN.UTF-8"LC_COLLATE="en_IN.UTF-8"LC_MONETARY="en_IN.UTF-8"LC_MESSAGES="en_IN.UTF-8"LC_PAPER="en_IN.UTF-8"LC_NAME="en_IN.UTF-8"LC_ADDRESS="en_IN.UTF-8"LC_TELEPHONE="en_IN.UTF-8"LC_MEASUREMENT="en_IN.UTF-8"LC_IDENTIFICATION="en_IN.UTF-8"LC_ALL= So I tried to set the following in .bashrc, but it didn't work: export LANGUAGE=en_US.UTF-8 export LANG=en_US.UTF-8export LC_CTYPE=en_US.UTF-8export LC_ALL=en_US.UTF-8 So I ran the following and restarted the PC (logging out wasn't enough): sudo locale-gen en_US en_US.UTF-8sudo dpkg-reconfigure locales In the first configuration menu, I deselected the en_IN... entries using the spacebar and in the next menu, I selected en_US.UTF-8 . After this locale showed all en_US. Instead of all this, probably just setting LANGUAGE and LANG to en_US in /etc/default/locale could have been enough? I don't know.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/352426", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
352,495
On my laptop, I use MySQL and PostgreSQL only for testing. I do not need them up until I start programming, which might be hours after startup. But starting the services manually and typing in my sudo password is a (minor) annoyance. I read that systemd supports starting services only when the port for that service is accessed. But a quick Google search seems to indicate that socket-based activation is not yet supported in PG & MySQL. I realize I can hack this using shell scripts or wait for the maintainers to fix the services, but I am looking for a better way now (for educational purposes). The Question: How can I achieve on-demand startup of such services in a way that either utilizes systemd features or is recommended as a Linux "best practice"? Some thoughts: Is there a service I can install that handles auto-starting and auto-stopping services based on conditions (such as a particular process running)? Is there a proxy service that gets activated by a socket and in turn launches the target service? systemd 229, Kubuntu 16.04, MySQL 5.7, PostgreSQL 9.5 Update: The Answer: How I used systemd-socket-proxyd as suggested by Siosm: /etc/mysql/mysql.conf.d/mysqld.cnf port = 13306 /etc/systemd/system/proxy-to-mysql.socket [Socket]ListenStream=0.0.0.0:3306[Install]WantedBy=sockets.target /etc/systemd/system/proxy-to-mysql.service [Unit]Requires=mysql.serviceAfter=mysql.service[Service]# note: this path may varyExecStart=/lib/systemd/systemd-socket-proxyd 127.0.0.1:13306PrivateTmp=noPrivateNetwork=no Reload/ stop/ start as needed: sudo systemctl daemon-reloadsudo systemctl enable proxy-to-mysql.socketsudo systemctl start proxy-to-mysql.socketsudo systemctl stop mysql.service # for testing Test: sudo systemctl status proxy-to-mysql.socket # should be ACTIVEsudo systemctl status proxy-to-mysql # should be INACTIVEsudo systemctl status mysql # should be INACTIVEtelnet 127.0.0.1 3306sudo systemctl status proxy-to-mysql # should be ACTIVEsudo systemctl status mysql # should be ACTIVE
You may use the systemd-socket-proxyd tool to forward traffic from a local socket to MySQL or PostgreSQL with socket-activation. See systemd-socket-proxyd(8) for examples, and read this SO reply for a concrete example for --user systemd .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/352495", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37818/" ] }
352,499
bash version: $ bash -versionGNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)Copyright (C) 2013 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>This is free software; you are free to change and redistribute it.There is NO WARRANTY, to the extent permitted by law. command: $ var="[a-p]"; echo $varh Does anyone have an idea what the reason for this kind of interpretation is? For context, I'm parsing a config file using the shell and looking for patterns to grab section names [section-foo]foo=bar while reading the file; if a section row is, for example, [a-p] , it gets that weird behavior.
You have a file name h in the directory where you ran echo $var The shell tries to use [a-p] as a glob, and if anything matches, it replaces the glob with the match. With a file named h , that becomes echo h You can prevent this by quoting the expansion, which will cause it to not be treated as a glob. echo "$var"
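A quick demonstration in a throwaway directory (a sketch; the file and variable names are arbitrary):

$ cd "$(mktemp -d)"
$ touch h
$ var="[a-p]"
$ echo $var        # unquoted: pathname expansion matches the file "h"
h
$ echo "$var"      # quoted: printed literally
[a-p]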
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/352499", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/221688/" ] }
352,502
I have some text with ip inside it. I'd like to replace each digit in ip which is not followed by "SpecialWord" with some other character. There is can be more than one ip in each line. For example: This input *random text with different numbers* 255.43.23.8 *some more text* "SpecialWord" 32.123.21.44 *text again* Must become *random text with different numbers* aaa.aa.aa.a *some more text* "SpecialWord" 32.123.21.44 *text again* I tried to use sed -r 's/([0-9]{1,3}\.){3}[0-9]{1,3}/.../g' but I don't know exact number of digits in ip and sed can't do lookahead stuff. What can help me here?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/352502", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/221691/" ] }
352,544
TL;DR: Is there a command to display why each IPv6 address has been assigned to a given NIC? e.g. to show which router advertised that prefix. Details I have set up my network to use IPv6 addresses with the ULA prefix fdaa::/64 . This works, and I have addresses like this: $ ip addr show dev enp0s252: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether b8:ae:ed:72:7d:5f brd ff:ff:ff:ff:ff:ff inet 192.168.0.16/24 brd 192.168.0.255 scope global enp0s25 valid_lft forever preferred_lft forever inet6 fdaa::6666:b3ff:0:d1a/128 scope global noprefixroute valid_lft forever preferred_lft forever inet6 2001:4479:7caa:9372:baae:edff:fe72:7d5f/64 scope global mngtmpaddr noprefixroute valid_lft forever preferred_lft forever inet6 fdaa::baae:edff:fe72:7d5f/64 scope global mngtmpaddr noprefixroute valid_lft forever preferred_lft forever inet6 fe80::baae:edff:fe72:7d5f/64 scope link valid_lft forever preferred_lft forever Here I have a public 2001: address, a link-local fe80: address, but I have two addresses in my ULA fdaa: subnet. I only want one address in this subnet, as I get errors by having two. For example I can't use this machine as a DNS server because it replies on the wrong IP: host fdaa::ba27:ebff:feea:ad9d fdaa::baae:edff:fe72:7d5f;; reply from unexpected source: fdaa::6666:b3ff:0:d1a#53, expected fdaa::baae:edff:fe72:7d5f#53;; reply from unexpected source: fdaa::6666:b3ff:0:d1a#53, expected fdaa::baae:edff:fe72:7d5f#53;; connection timed out; no servers could be reached Deleting the IP and restarting the network interface restores it again, so something on my network appears to be advertising the prefix but I'm not sure how to figure out where it's coming from! Is there some command that lists each IP address and explains how it was assigned, which router advertised it as an available prefix, and so on?
After some experimentation I found the following command can be used: ip monitor It will display a list of what's happening. Run it in one terminal, restart the network interface in another, and you'll see a line printed as each IP address is removed and then re-added. It still doesn't explain exactly where the IP is coming from, but it did tell me it was an ra (Router Advertisement) which allowed me to go looking at my router config. In my case I was advertising the same fdaa::/64 prefix as I had assigned as a static IP (assuming a static IP in this subnet would prevent a dynamic one from being assigned) but instead I ended up with both a static and a dynamic IP in the same subnet , which caused the problems. I'm still in two minds as to whether this is a bug. After a lot of thought I changed the router to advertise a different prefix (actually a different subnet in the same ULA /48 , so fdaa:0:0:1/64 ) because this way both subnets fit in the same ULA assignment but being different subnets they don't cause a machine to reply from the wrong IP when it has IPs belonging to both subnets.
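To narrow the output to address events and add timestamps, iproute2's monitor mode accepts an object list and the -timestamp flag; for example, run this in one terminal while restarting the interface in another:

# watch only address add/delete events, with timestamps
ip -timestamp monitor address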
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/352544", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6662/" ] }
352,557
I don't know if it possible, but I would like use a second terminal (like terminator) and not use the current $HISTFILE . An amnesic terminal.
Depends on the shell. In Bash, you can control the history in a couple of ways : Disable saving history using set +o history and re-enable it with set -o history (note the inverted plus and minus). With history disabled, commands entered will not be saved in the history log, but previous ones will be available. Set the file used to save the history, by setting HISTFILE ( HISTFILE=~/somehistoryfile ). You can disable it completely by unsetting the variable with unset HISTFILE . If you disable the history file, you still have access to run-time history while the shell is running. You can also set HISTFILESIZE to control the amount of commands saved in the file. Prevent saving certain commands in the history by using HISTCONTROL and/or HISTIGNORE . Setting HISTCONTROL to ignorespace will tell the shell to not save command lines starting with a space. HISTIGNORE can contain patterns of commands not to save in the history. e.g. HISTIGNORE='ls:ls *' would prevent saving lines that contain only ls or ls , a space and anything after that. For an "amnesiac" shell, you would need to apply one of those settings either manually when opening the shell, or set them in some shell startup script. One option would be to create, say ~/.bashrc.nohist with: # include the standard startup files as --rcfile will override themif [ -f /etc/bash.bashrc ] ; then . /etc/bash.bashrc fiif [ -f ~/.bashrc ] ; then . ~/.bashrcfi# disable history completelyHISTSIZE=0# disable the history fileunset HISTFILE# we could even set a reminder in the promptPS1="[nohist] $PS1" and then arrange the shell to be started with bash --rcfile ~/.bashrc.nohist . Adjust the script to taste.
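As a usage example, you could wire the rc file to an alias or to a dedicated terminal profile (the alias name here is just an illustration):

alias nohist-shell='bash --rcfile ~/.bashrc.nohist'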
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/352557", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/221742/" ] }
352,569
Given a binary file, how do you convert it to a hex string and back, using only standard tools like sed and cut , on a minimal system with busybox installed? These tools are not available: perl python xxd (comes with vim) gcc A hexdump command comes with busybox , but it is different from the one that comes with util-linux . I'm looking for a script or command to convert a file to a hex string, and a corresponding one for converting it back to binary. The intermediate format doesn't have to be hex, it can be base64 or something else. This is for an embedded device with limited disk space.
Here's what I came up with (based on several online sources and some experimentation). Converting from hex to bin ( hex2bin ): #!/bin/shsed 's/\([0-9A-F]\{2\}\)/\\\\\\x\1/gI' "$1" | xargs printf Converting from bin to hex ( bin2hex ): #!/bin/shhexdump -v -e '1/1 "%02x"' "$1" Example use: ./bin2hex binary_file_1 | ./hex2bin - > binary_file_2diff -s binary_file_1 binary_file_2 This works with busybox, but hex2bin is unfortunately limited by the maximum length of the argument given to xargs , so this method will only work for small files (less than 32 KiB on my desktop system).
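If the xargs limit bites, one workaround is to feed the bytes through a read loop instead of the argument list. A minimal sketch, assuming the shell's printf builtin understands \xHH escapes (busybox's does); it is slower, but has no length limit:

#!/bin/sh
# hex2bin without the xargs argument-length limit: one byte per iteration
sed 's/\([0-9A-Fa-f]\{2\}\)/\1 /g' "$1" | tr ' ' '\n' | while read -r byte; do
  [ -n "$byte" ] && printf "\\x$byte"
done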
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/352569", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3920/" ] }
352,601
Playing with e2fsprogs debugfs , by chance/accident, a file named filen/ame was created. Obviously the forward slash character / serves as the special separator character in pathnames. Still using debugfs I wanted to remove the file named filen/ame , but I had little success, since the / character is not interpreted as part of the filename. Does debugfs provide a way to remove this file containing the slash? If so, how? I used: cd /tmpecho "content" > contentfiledd if=/dev/zero of=/tmp/ext4fs bs=1M count=50mkfs.ext4 /tmp/ext4fsdebugfs -w -R "write /tmp/contentfile filen/ame" /tmp/ext4fsdebugfs -w -R "ls" /tmp/ext4fs which outputs: debugfs 1.43.4 (31-Jan-2017) 2 (12) . 2 (12) .. 11 (20) lost+found 12 (980) filen/ame I tried the following to remove the filen/ame file: debugfs -w -R "rm filen/ame" /tmp/ext4fs but this did not work and only produced: debugfs 1.43.4 (31-Jan-2017)rm: File not found by ext2_lookup while trying to resolve filename Apart from changing the content of the directory node manually, is there a way to remove the file using debugfs ?
If you want a fix and are not just trying out debugfs , you can have fsck do the work for you. Mark the filesystem as dirty and run fsck -y to get the filename changed: $ debugfs -w -R "dirty" /tmp/ext4fs$ fsck -y /tmp/ext4fs .../tmp/ext4fs was not cleanly unmounted, check forced.Pass 1: Checking inodes, blocks, and sizesPass 2: Checking directory structureEntry 'filen/ame' in / (2) has illegal characters in its name.Fix? yes ...$ debugfs -w -R "ls" /tmp/ext4fs2 (12) . 2 (12) .. 11 (20) lost+found 12 (980) filen.ame
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/352601", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24394/" ] }
352,636
I'm making a bash file to remove all the .class files that Java generates inside the src folder and its subfolders. The structure is: project src /utils utils.class /game game.class gameManager.class So when I execute the script inside the project folder, it should search for all .class files and remove them, but it doesn't work. I just created this script: find . -path "src/*/*" -name "*.class" -exec rm -f {} \; How can I fix it?
It doesn't work because the path won't start with src , it will start with ./src . Your command line can be corrected into this: find . -type f -path "./src/*/*" -name "*.class" -exec rm -f {} \; Alternatively, find . -type f -path "./src/*/*" -name "*.class" -delete If you're happy deleting all *.class files anywhere under src (not just in subdirectories thereof): find src -type f -name "*.class" -delete
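To see why the leading ./ matters, note what find actually prints; every path starts with ./ , so -path "src/*/*" can never match:

$ mkdir -p project/src/utils && touch project/src/utils/utils.class
$ cd project && find . -name '*.class'
./src/utils/utils.class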
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/352636", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/185318/" ] }
352,642
I'm trying to replace an XML element in 20+ files on Windows using sed and cygwin. The line is: cd "D:\Backups\Tasks"sed -i 's~<StartWhenAvailable>true</StartWhenAvailable>~<StartWhenAvailable>false</StartWhenAvailable>~g' "Task_01.xml" This replaces nothing. However, if I try: sed 's~<~[~g' "Task_01.xml" It outputs: [AllowHardTerminate>true[/AllowHardTerminate>[StartWhenAvailable>true[/StartWhenAvailable>[RunOnlyIfNetworkAvailable>false[/RunOnlyIfNetworkAvailable> However, if I try to add just a single character, it just outputs the document as-is: sed 's~<B~[B~g' "Task_01.xml" The above does nothing. What am I doing wrong? Is the chevron a special character or am I misusing sed? Or is it a fault in cygwin?
Most probably, that file is encoded in UTF-16, that is with 2 or 4 bytes per characters, probably even with a Byte-Order-Mark at the beginning. The characters that are shown in your sample (all ASCII characters) are typically encoded on 2 bytes, the first or second of which (depending on whether it's a big-enfian or little-endian UTF-16 encoding) being 0 and the other one being the ASCII/Unicode code. The 0 byte is typically invisible on a terminal, so that text appears OK when dumped there as the rest is just ASCII, but in effect the text contains: <[NUL]S[NUL]t[NUL]a[NUL]r[NUL]t[NUL]W[NUL]h[NUL]e[NUL]n[NUL]... You'd need to convert that text to your locale's charset for sed to be able to deal with it. Note that UTF-16 cannot be used as a character encoding in a locale on Unix. You won't find a locale that uses UTF-16 as its character encoding. iconv -f utf-16 < Task_01.xml | sed 's~<StartWhenAvailable>true</StartWhenAvailable>~<StartWhenAvailable>false</StartWhenAvailable>~g' | iconv -t utf-16 > Task_01.xml.out That assumes the input has a BOM. If not, you need to determine if it's big endian or little endian (probably little endian) and change that utf-16 to utf-16le or utf-16be . If the locale's charset is UTF-8, there shouldn't be anything lost in translation even if the text contains non-ASCII characters. As Cygwin's sed is typically GNU sed , it will also be able to deal with that type of binary (since it contains NUL bytes) input by itself, so you can also do something like: LC_ALL=C sed -i 's/t\x00r\x00u\x00e/f\x00a\x00l\x00s\x00e/g' Task_01.xml The file command should be able to tell you if the input is indeed UTF-16. You can use sed -n l or od -tc to see those hidden NUL characters. Example of little-endian UTF-16 text with BOM: $ echo true | iconv -t utf-16 | od -tc0000000 377 376 t \0 r \0 u \0 e \0 \n \00000014$ echo true | iconv -t utf-16 | sed -n l\377\376t\000r\000u\000e\000$\000$$ echo true | iconv -t utf-16 | file -/dev/stdin: Little-endian UTF-16 Unicode text, with no line terminators To process several files with zsh / bash / ksh93 : set -o pipefailfor file in ./*.xml; do cp -ai "$file" "$file.bak" && iconv -f utf-16 < "$file.bak" | sed 's~<StartWhenAvailable>true</StartWhenAvailable>~<StartWhenAvailable>false</StartWhenAvailable>~g' | iconv -t utf-16 > "$file" && rm -f "$file.bak"done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/352642", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45398/" ] }
352,759
1) How do I know which servers are used to search for keys with gpg gpg --search-key <keyword> 2) How to add a server to the list of queried server?
“Reputable” key servers exchange key updates with others, so using one is the same as using another (with slight delays in some cases). In the past, the recommendation was to use the SKS server pool , ideally using a secure connection; see the previous link for details, or this answer . However the pool has been disabled. As of GPG 2.3.2 the default is to use keyserver.ubuntu.com; to do that with older releases, use: gpg --keyserver keyserver.ubuntu.com --search-key ... If you’re using that version or a later one, and you haven’t changed its default configuration, you’re good to go without specifying a key server manually. If necessary, you can store the keyserver setting permanently by adding the relevant option to ~/.gnupg/dirmngr.conf (you may need to run gpgconf --reload dirmngr if the dirmngr daemon is already running): keyserver hkps://keyserver.ubuntu.com You can specify multiple keyserver options in that file, but I get the impression that only the last one is taken into account. To actually answer your initial question, at least version 2.1 of GPG shows the key server used for a query: $ gpg --search-key A36B494Fgpg: data source: https://host-37-191-236-118.lynet.no:443...
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/352759", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/189711/" ] }
352,760
Suppose I want to run something in the background and print it to a file. However, when i do the following, it still prints to the screen... Does anyone know why? and what should i do? Thank you. ./mc.x & 2>&1 > test.out wait
Place the background operator towards the end, like so: ./mc.x 2>&1 > test.out & N.B.: Your redirections are ineffective, as 2>&1 will make stderr go where stdout goes (i.e., the display, where it was going anyway). Then, stdout will go into the file test.out. Swapping their order would have made all of stderr+stdout go to the file test.out.
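The order dependence is easy to demonstrate with a command that writes to stderr (the output shown is GNU ls's):

$ ls /nonexistent > out 2>&1     # stdout goes to "out" first, stderr then follows it
$ cat out
ls: cannot access '/nonexistent': No such file or directory
$ ls /nonexistent 2>&1 > out     # stderr duplicated the terminal before the redirect
ls: cannot access '/nonexistent': No such file or directory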
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/352760", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118930/" ] }
352,764
On Manjaro 17 with VirtualBox 5.1.6 or 5.1.8, as soon as I install the guest additions driver in my Windows 7 VM, i3 starts automatically resizing the VM window to its minimum, making it unusable. It does not happen if I load a Linux VM. It does not happen if I load the Windows VM using GNOME from Ubuntu. The bug happens once you are on the desktop and might be related to the graphics driver/virtual graphics card not being used in the same way during the preceding steps. EDIT Did not reproduce on another computer with a different configuration; it is probably hardware related
In the VirtualBox menu bar, go to View > Virtual Screen > pick a resolution. The screen then permanently stops being automatically resized.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/352764", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/163001/" ] }
352,781
How are these process concepts related to each other: background , zombie , daemon and without controlling terminal ? I feel that they are somehow close, especially through the concept of controlling terminal , but there is still not much info for me to tell a story, like if you needed to explain something to a child reading an article about Linux without lying too much. UPDATE #1: For example (I don't know if it's true): background -- zombie - a foreground process cannot become a zombie , because a zombie is a background process that was left without a parent daemon -- without ctty - all daemons run without a ctty , but not all processes without a ctty are daemons background -- daemon - a background process can be retrieved to run interactively again, a daemon cannot zombie -- without ctty - a zombie is indifferent to whether there is a ctty attached to it or not background -- without ctty - processes are sent to the background while they have a ctty , and become daemons or die if the ctty is taken from them
In brief, plus links. zombie a process that has exited/terminated, but whose parent has not yet acknowledged the termination (using the wait() system calls). Dead processes are kept in the process table so that their parent can be informed of the child process exiting, and of its exit status. Usually a program forking children will also read their exit status as they exit, so you'll see zombies only if the parent is stopped or buggy. See: Can a zombie have orphans? Will the orphan children be disturbed by reaping the zombie? How does Linux handle zombie processes? Linux man page waitpid(2) controlling terminal, session, foreground, background These are related to job control in the context of a shell running on a terminal. A user logs in, a session is started, tied to a terminal (the controlling terminal) and a shell is started. The shell then runs processes and sends them to the foreground and background as the user wishes (using & when starting the process, stopping it with ^Z , using fg and bg ). Processes in the background are stopped if reading or writing from the terminal; processes in the foreground receive the interrupt signal if ^C is hit on the terminal. (It's the kernel's terminal driver that handles those signals; the shell controls which process (group) is sent to the foreground or background.) See: Difference between nohup, disown and & Bash reference manual: Job Control Basics daemon A process running as a daemon is usually something that shouldn't be tied to any particular terminal (or a login session, or a shell). It shouldn't have a controlling terminal, so that it won't receive signals if the terminal closes, and one usually doesn't want it to do I/O on a terminal either. Starting a daemon from the command line requires breaking all ties to the terminal, i.e. starting a new session (in the job control sense, above) to get rid of the controlling terminal, and closing the file handles to the terminal. Of course something started from init , systemd or similar outside a login session wouldn't have these ties to begin with. Since a daemon doesn't have a controlling terminal, it's not subject to job control, and being in the "foreground" or "background" in the job control sense doesn't apply. Also, daemons usually re-parent to init which cleans them up as they exit, so you don't usually see them as zombies. See: What's the difference between running a program as a daemon and forking it into background with '&'? Linux man page daemon(7) .
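As a quick illustration of the zombie case, you can manufacture one from an interactive bash. A sketch: the subshell forks a child that exits immediately, then execs into sleep, which never wait()s for the child it inherited:

$ (: & exec sleep 30) &
$ ps -o pid,ppid,stat,comm --ppid "$!"    # the dead child shows state "Z" until sleep exits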
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/352781", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47022/" ] }
352,866
Is there a way to convert the command line argument to uppercase and pass it as a variable within the script being invoked? Eg. ./deploy_app.csh 1.2.3.4 middleware should convert middleware to MIDDLEWARE and pass it as a variable inside the script where ever it requires a variable substitution. I know that I can use echo and awk to get this output but trying to check if there is a way without using that combination
Using bash (4.0+), inside the script: newvarname=${3^^} Using tcsh: set newvarname = $3:u:q Using zsh: # tcsh-like syntax:newvarname=${3:u} # or just $3:u# native syntax:newvarname=${(U)3} Using tr instead of shell features (though limited to single-byte letters only in some tr implementations like GNU's): newvarname=$(printf "%s" "$3" | tr '[:lower:]' '[:upper:]') This page summarizes a lot of features of different UNIX shells, including text manipulation: http://hyperpolyglot.org/unix-shells .
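For instance, with the question's invocation ./deploy_app.csh 1.2.3.4 middleware , middleware is the second positional parameter, so in bash 4+ (simulated here with set -- ; the component name is just an illustration):

$ set -- 1.2.3.4 middleware
$ component=${2^^}
$ echo "$component"
MIDDLEWARE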
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/352866", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210099/" ] }
352,877
OK, I will try and explain what I need to do as best as possible. Basically I have two CSV files, as per the examples below: File 1: Column 1, Column 2abc , 123def , 234adf , 567 File 2 Column 1, Column 2abc , 123def , 234adf , 578 I need to write either a shell script or a simple command that will do the following: Sort both files by column 1 Row by row, do the following: Using column 1 in file 1, search for this value in column 1 in file 2. if found, compare the value in column 2 of file 1 against the value in column 2 of file 2 if it matches, write column 1, column 2 and "Validated" in column 3 to a separate file if it does not match, write column 1, column 2 and "Failed" to a separate file This results in two output files: the first with everything where column 1 was found and column 2 matches, and a second file containing either column 1 lookups that failed, or rows where column 1 was found but column 2 did not match; so, essentially, using column 1 as the key to check column 2.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/352877", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/222005/" ] }
352,899
I want to list the files for which there exists, in a given directory, ALL of these files: <filename>.wed <filename>.tis <filename>.are <filename>LM.bmp I am currently doing it with find and sed . It works but it is inelegant and slow! find . -iname "*.wed" -exec echo {} \; | sed s/.wed$// $1 | sed s/..// $1 | while read in; do find . -name "$in.are"; done | sed s/.are$// $1 | sed s/..// $1 | while read in; do find . -name "$in.tis"; done | sed s/.tis$// $1 | sed s/..// $1 | while read in; do find . -name "$in*.bmp"; done Basically I chain a find , two sed and a while read for each extension I want to filter on. It takes >35s for barely 30K files! How can I improve it? Example If in the directory there are files called AR0505.are , AR0505.tis , AR0505.wed and AR0505LM.bmp , then the script would print "AR0505". If one or more of these files was missing, then the script wouldn't print it.
I think the major bottleneck is the number of processes you spawn. Here is a simple script which lists and filters your directory in one pass: #!/usr/bin/perluse strict;use warnings;my %files;my $dir;my @extensions = ("\.tis","\.are","LM\.bmp","\.wed");opendir($dir, ".") || die "Error opening dir\n";while (my $file = readdir($dir)) { foreach my $ext (@extensions) { if ($file =~ /^(.*)$ext$/sm) { $files{$1} += 1; } }}closedir($dir);foreach my $file (keys %files) { if ($files{$file} == scalar(@extensions)) { print "$file\n"; }}
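Assuming you save it as, say, find_complete.pl (the name is arbitrary), you would run it from the directory holding the files:

$ cd /path/to/files
$ perl find_complete.pl
AR0505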
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/352899", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61296/" ] }
353,018
I wrote a utility in bash that uses read -e to provide a prompt for sql-like queries. Sometimes these are long, so I want to be able to open vi, edit the current line and upon exiting, replace the line with the contents in vim. I read lines with read . Something like: query> select .... from .... very long... <ctrl-e> now in vi select .... from .... very long... edit to select ...from ....very long ... exit vi query> select ...from ....very long ... <enter> query runs. UPDATE: using 'set -o vi' before the 'read -e' seems to be the way for me, but currently when I click <esc>v the buffer that opens doesn't contain what is on the line but some other query, from my history (but not the one I typed before).
First you have to make sure to use vi as shell command line editor: set -o vi Now you can type/copy your command to the command line. To leave insert mode and enter normal mode, use Esc or Shift + Tab . Now you can open vi by pressing v . In vi , you can now do all the changes you want, save the buffer and exit vi , and the command gets executed.
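To make this permanent, and to control which editor v opens, something like this in ~/.bashrc is a reasonable sketch (readline's edit command falls back through $VISUAL, then $EDITOR):

# in ~/.bashrc
set -o vi
export VISUAL=vim     # Esc then "v" opens the current command line in $VISUAL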
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353018", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/121502/" ] }
353,044
I want to be able to log in to a (publicly-accessible) SSH server from the local network (192.168.1.*) using some SSH key, but I don't want that key to be usable from outside the local network. I want some other key to be used for external access instead (same user in both cases). Is such a thing possible to achieve in SSH?
Yes. In the file ~/.ssh/authorized_keys on the server, each entry now probably looks like ssh-ed25519 AAAAC3NzaC1lZSOMEKEYFINGERPRINT comment (or similar) There is an optional first column that may contain options. These are described in the sshd manual. One of the options is from="pattern-list" Specifies that in addition to public key authentication, either the canonical name of the remote host or its IP address must be present in the comma-separated list of patterns. See PATTERNS in ssh_config(5) for more information on patterns. In addition to the wildcard matching that may be applied to hostnames or addresses, a from stanza may match IP addresses using CIDR address/masklen notation. The purpose of this option is to optionally increase security: public key authentication by itself does not trust the network or name servers or anything (but the key); however, if somebody somehow steals the key, the key permits an intruder to log in from anywhere in the world. This additional option makes using a stolen key more difficult (name servers and/or routers would have to be compromised in addition to just the key). This means that you should be able to modify ~/.ssh/authorized_keys from ssh-ed25519 AAAAC3NzaC1lZSOMEKEYFINGERPRINT comment to from="pattern" ssh-ed25519 AAAAC3NzaC1lZSOMEKEYFINGERPRINT comment Where pattern is a pattern matching the client host that you're connecting from, for example by its public DNS name, IP address, or some network block: from="192.168.1.0/24" ssh-ed25519 AAAAC3NzaC1lZSOMEKEYFINGERPRINT comment (this would only allow the use of this key from a host in the 192.168.1.* network)
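Applied to the question, authorized_keys would carry one restricted entry for the LAN key and one unrestricted entry for the external key (the key material and comments below are placeholders):

from="192.168.1.0/24" ssh-ed25519 AAAA...LANKEY... user@lan-only
ssh-ed25519 AAAA...EXTERNALKEY... user@anywhere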
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/353044", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6252/" ] }
353,076
The following codepiece is a script used to install Apache. I run this script in-place when executing it from the heredoc block that wraps it (APACHE). Note that inside this APACHE heredoc, I have an internal heredoc (MOD_REWRITE), which I can refer to as a "secondary" or "internal" heredoc. Please also note that all the code inside APACHE is indented (tabulated), besides the code of the internal heredoc. bash /dev/fd/10 10<<'APACHE' # Setup basics: apt-get update -y && apt-get upgrade -y apt-get install tree zip unzip a2enmod mcrypt && a2enmod mbstring # Setup LAMP environment with enabled mod rewrite: echo -e "\07" && echo -e "\077" # Insert password. apt-get install lamp-server^ -y a2enmod rewritecat <<MOD_REWRITE >> /etc/apache2/apache2.conf<Directory /var/www/>Options Indexes FollowSymLinksAllowOverride AllRequire all granted</Directory>MOD_REWRITE systemctl restart apache2.service # Setup maldet: cd /usr/local/src wget http://www.rfxn.com/downloads/maldetect-current.tar.gz && tar -xzf maldetect-current.tar.gz cd maldetect-* && bash ./install.shAPACHE If I indent its commands with spaces instead of tabulations, I can run the script just fine (as long as it doesn't have the MOD_REWRITE inside it). If I add the MOD_REWRITE, the script breaks when executed; the same happens if I remove all space-indents whatsoever and totally replace them with tabulations, but AFAIK, the last time I tried to execute the script with tabulations, it also broke (even when I added a hyphen between bash /dev/fd/10 10<< and 'APACHE' ). My question: What is the right way to indent the MOD_REWRITE heredoc inside the APACHE heredoc, so that the script would be more unified and would execute without breakage? Notes: The reason I want to indent internal heredocs as well, just as I would do with any other command, is for aesthetic reasons: it makes it easier for me to read and organize my scripts. This question is not the same as " Can't indent heredoc to match nesting's indent " because it asks about the correct way of indenting internal heredocs inside external heredocs, and not about indenting external heredocs themselves.
A here-document is a redirection of the form: <<[-]DELIMITER .... .... ....DELIMITER The optional - (inside the brackets above) changes the way the delimiter is matched and allows indenting each line inside the heredoc content with tabulations (no spaces allowed). "Matched" means the delimiter is matched to the opener (as when DELIMITER matches <<DELIMITER or <<-DELIMITER , for example). (Note that you may use one or more spaces between << or <<- and the word that follows.) So to sum up the basic laws for matching inside a singular heredoc: The opener must be placed at the very beginning of the line in an applicable syntax. The delimiter must be the only word of its line. All content under the opener (including the delimiter) can be indented with any number of tabulations , with the <<-DELIMITER syntax. Since with the former syntax, no blanks can precede the heredoc opener, if you want to indent it, your only choice is to use the following syntax and you must exclusively use tabulations at the beginning of each line inside the heredoc's content. Now you have two options with the <<- syntax. First option Use the <<- syntax for the inner heredoc. bash << APACHE ... ... cat <<- MOD_REWRITE⇨ ... ⇨ .... ⇨ MOD_REWRITE ... ... APACHE
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/353076", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
353,120
How can I make cat someFile | ssh someHost work when someFile is not a bash script? I want to remotely execute a perl script but I get a bunch of syntax errors from bash when I try the cat | ssh command.
If you want to push the Perl script through the SSH connection, you'll have to run the Perl interpreter on the remote end. It'll read the script from stdin: ssh remotehost perl < somescript.pl In the case of Perl, it should even read the command line switches (except -T ) from the hashbang line of the input. If you want to give command line arguments to the Perl interpreter, you can just add them to the command line after perl . If you want to give arguments to the script , you'll need to explicitly tell the interpreter to read the script from stdin (otherwise it will take the first argument as a file name to look for). So, here -l goes to the interpreter, and foo and bar to the script: echo 'print "> $_" foreach @ARGV' | ssh remotehost perl -l - foo bar Note that doing just ssh somehost < script.sh counts on the remote login shell being compatible with the script. (i.e. a Bash script won't work if the remote shell happens to be something else.)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/353120", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144960/" ] }
353,149
I was reviewing one shell code and I found this command written between other shell code >filename.txt I don't know what this command does, so I tried it on my desktop. I made one shell script and I wrote this command inside my shell script and when I ran that, I found that it doesn't do anything. What does this >myfile.txt do??
As you have written with nothing preceding the redirect symbol > : >filename.txt is to literally redirect nothing into filename.txt . This is commonly done to clear/erase the contents of a text file. If filename.txt does not already exist, it will be created.
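A quick demonstration of the truncation behavior:

$ echo hello > f
$ wc -c < f
6
$ > f               # no command: the redirection alone truncates f
$ wc -c < f
0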
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353149", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/222226/" ] }
353,174
Looking to compare the first column of two input files that have an identical format. The format looks like the following: FILE1:0000abc5abc3 GR0960000def5ae87 GR0010000cab5aea3 GR0010000bac5aeeb GR0010000fed5af13 GR0010000efd5b16f GR0010000cba5b187 GR0010000bca5b2a3 GR001FILE2:0000abc5abc3 GR0970000def5ae87 GR0010000cab5aea3 GR0010000bac5aeeb GR0010000fed5af13 GR1230000cba5b187 GR169 Column 1 contains MAC addresses in both FILE1 and FILE2. I want the value of column 1 in FILE1 to check against column 1 in FILE2 and if there is a match to output the value of column 1 and column 2 of FILE1 and the value of column 2 in FILE2 as a third column in this fashion. DESIRED OUTPUT:0000abc5abc3 GR096 GR0970000def5ae87 GR001 GR0010000cba5b187 GR001 GR169 Each file contains several million entries. Running the input in bash is eternally slow and inefficient using while loops as it loops through each entry: while read -r mac1 code1; do while read -r mac2 code2 ; do if [ "$mac1" == "$mac2" ]; then printf "%s %s %s\n" "$mac1" "$code1" "$code2" fi done < "$FILE1"done < "$FILE2" > OUTPUTFILE Awk is significantly faster for me using arrays but I am unable to print that 2nd column of FILE2 into the third column of the output using syntax like the following. This syntax just prints column 2 a second time: awk 'NR==FNR { n[$1] = $1; n[$2] = $2; next } ($1 in n) { print n[$1],n[$2],$2 }' My preference is AWK, but if it can be run in bash just as fast, I am okay with that as well. Summary:If the value in Column 1 in file1 is found in file2, print the value of column 1, column 2 (File1) and column2 (File2).
if the output can be sorted: join <(sort file1.txt) <(sort file2.txt)
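If you'd rather not sort, a sketch along the lines of the question's own awk attempt also works: load FILE1's column 2 into an array keyed by column 1, then print all three fields while reading FILE2:

awk 'NR==FNR { code[$1] = $2; next }
     $1 in code { print $1, code[$1], $2 }' FILE1 FILE2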
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353174", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/222232/" ] }
353,176
Is it possible to only install free software packages in Arch Linux? Or perhaps to query a package's license before installing? I suppose you could use -Q to search for the text of a specific license, but it seems overkill.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353176", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
353,206
Let's say I have a script that will be executed on various machines with root privileges, but I want to execute certain commands within that script without root. Is that possible?
Both su and sudo can do this. They run a command as another user; by default that "another user" is root, but it can be any user. For example, sudo -u www-data ls will run ls as the user www-data . However... The usual way is to run the script as the invoking user and use sudo for those commands which need it. sudo caches the credentials, so it should prompt at most once.
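Inside a root-run script that can look like this (the deploy user and commands are just an illustration):

#!/bin/bash
# script executed with root privileges
apt-get update                         # runs as root
sudo -u deploy git -C /srv/app pull    # runs as the unprivileged "deploy" user
su -s /bin/sh -c 'whoami' deploy       # the su equivalent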
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/353206", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/222288/" ] }
353,321
I have a string like "aaa,aaa,aaa,bbb,bbb,ccc,bbb,ccc" and I want to remove the duplicate words from it, so that the output will be "aaa,bbb,ccc". I tried this code ( source ): $ echo "zebra ant spider spider ant zebra ant" | xargs -n1 | sort -u | xargs It works fine with that literal value, but when I give it my variable's value it still shows all the duplicate words. How can I remove the duplicate values? UPDATE My goal is to concatenate all corresponding values into a single string when the user is the same. I have data like this:

user name | colour
AAA | red
AAA | black
BBB | red
BBB | blue
AAA | blue
AAA | red
CCC | red
CCC | red
AAA | green
AAA | red
AAA | black
BBB | red
BBB | blue
AAA | blue
AAA | red
CCC | red
CCC | red
AAA | green

In my code I fetch all distinct users, then I concatenate the colour string successfully. For that I am using this code: while read the records if [ "$c" == "" ]; then #$c I defined global c="$colour1" else c="$c,$colour1" fi When I print this $c variable I get the output (for user AAA) "red,black,blue,red,green,red,black,blue,red,green," I want to remove the duplicate colours. The desired output should be "red,black,blue,green". For this desired output I used the code above, echo "zebra ant spider spider ant zebra ant" | xargs -n1 | sort -u | xargs , but it displays the output with duplicate values, like "red,black,blue,red,green,red,black,blue,red,green," Thanks
One more awk, just for fun: $ a="aaa bbb aaa bbb ccc aaa ddd bbb ccc"$ echo "$a" | awk '{for (i=1;i<=NF;i++) if (!a[$i]++) printf("%s%s",$i,FS)}{printf("\n")}'aaa bbb ccc ddd By the way, even your solution works fine with variables: $ b="zebra ant spider spider ant zebra ant" $ echo "$b" | xargs -n1 | sort -u | xargsant spider zebra
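For the comma-separated case from the question's update, the same idea works with tr and paste (a sketch; the trailing comma produces an empty field, which the NF test drops):

$ c="red,black,blue,red,green,red,black,blue,red,green,"
$ echo "$c" | tr ',' '\n' | awk 'NF && !seen[$0]++' | paste -sd, -
red,black,blue,green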
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/353321", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213820/" ] }
353,386
In the bash reference manual it states: lithist If enabled, and the cmdhist option is enabled, multi-line commands are saved to the history with embedded newlines rather than using semicolon separators where possible. Hence my question: where/when/how is this possible? I tried to enable and test the feature on my GNU bash, version 4.4.12(1)-release doing this: shopts -s cmdhistshopts -s lithistecho -n "this is a test for a ";\echo "multiline command" I then did a history | tail expecting some output akin to this: 101 shopts -s cmdlist102 shopts -s lithist103 echo -n "this is a test for a ";\ echo "multiline command"104 history yet instead get this: 101 shopts -s cmdlist102 shopts -s lithist103 echo -n "this is a test for a "; echo "multiline command"104 history As is obvious, the multiline command (the one with bash history number 103) has not been stored with "embedded newlines rather than using semicolon separators" . Why was lithist not possible here? What did I do wrong?
A \<new line> is not the correct way to get a <new line> in the history. Memory Let's deal only with history lines as they are kept in shell memory (not disk). Let's type a couple of commands as you did: $ echo -n "this is a test for a ";\> echo "two line command" What was stored in memory as the line just written? $ history 2514 echo -n "this is a test for a ";echo "two line command"515 history 2 As you can see, the "line continuation", a backslash followed by a newline, was removed. As it should be (from man bash): If a \<newline> pair appears, and the backslash is not itself quoted, the \<newline> is treated as a line continuation (that is, it is removed from the input stream and effectively ignored). We may get a newline if we quote it: $ echo " A test of > a new line"A test of a new line And, at this point, the history will reflect that: $ history 2 518 echo "A test ofa new line" 519 history 2 A true multi-line command: One possible example of a multi-line command is: $ for a in one two> do echo "test $a"> donetest onetest two Which will be collapsed into one history line if cmdhist is set : $ shopt -p cmdhist lithistshopt -s cmdhist shopt -u lithist$ history 3 24 for a in one two; do echo "test $a"; done 25 shopt -p cmdhist lithist 26 history 3 The numbers for each command changed because at some point I cleared the history (in memory) with a history -c . If you unset cmdhist, you will get this: $ shopt -u cmdhist$ for a in one two> do echo "test $a"> donetest onetest two$ history 5 5 shopt -u cmdhist 6 for a in one two 7 do echo "test $a" 8 done 9 history 5 Each line (not a full command) will be on a separate line in the history. Even if lithist is set: $ shopt -s lithist$ for a in one two> do echo "test $a"> donetest onetest two$ history 5 12 shopt -s lithist 13 for a in one two 14 do echo "test $a" 15 done 16 history 5 But if both are set: $ shopt -s cmdhist lithist$ for a in one two> do echo "test $a"> done$ history 5 23 history 15 24 shopt -p cmdhist lithist 25 shopt -s cmdhist lithist 26 for a in one twodo echo "test $a"done 27 history 5 The for command was stored as a multiline command with "newlines" instead of semicolons ( ; ). Compare with above, where lithist wasn't set. Disk All the above was explained using the list of commands kept in the memory of the shell. No commands were written to the disk. The (default) file ~/.bash_history was not changed. That file will be changed when the running shell exits. At that point in time the history will overwrite the file (if histappend isn't set), or will be appended to it otherwise. If you want the commands to be committed to disk as they are entered, you need to have this set: export PROMPT_COMMAND='history -a' That will make each command line be appended to the file on each new command line. Now, let's get down to business with cmdhist and lithist. It is not as simple as it may seem. But don't worry, all will be clear in a moment. Let's say that you take the time to type all the commands below (there is no shortcut, no alias, no function, you need the actual commands, sorry). To first clear all history in memory ( history -c ) and on disk ( make a backup ) ( history -w ) and then to try three times: With the default values of cmdhist (set) and lithist (unset). With both set With both un-set Setting lithist with an unset cmdhist makes no sense (you can test it).
List of commands to execute: $ history -c ; history -w # Clear all history ( Please backup).$ shopt -s cmdhist; shopt -u lithist$ for a in one two> do echo "test $a"> done$ shopt -s cmdhist; shopt -s lithist$ for a in one two> do echo "test $a"> done$ shopt -u cmdhist; shopt -u lithist$ for a in one two> do echo "test $a"> done You will end with this (in memory): $ history 1 shopt -s cmdhist; shopt -u lithist 2 for a in one two; do echo "test $a"; done 3 shopt -s cmdhist; shopt -s lithist 4 for a in one twodo echo "test $a"done 5 shopt -u cmdhist; shopt -u lithist 6 for a in one two 7 do echo "test $a" 8 done 9 history The three multiline commands end as follows: one in the line numbered 2 (one single line, one command). one in a multiline numbered 4 (one command in several lines) one in several lines numbered from 6 to 8 OK, but what happens in the file? Say it already... Finally, in the file: Simple, write to the file and cat it to see this: $ history -w ; cat "$HISTFILE"shopt -s cmdhist; shopt -u lithistfor a in one two; do echo "test $a"; doneshopt -s cmdhist; shopt -s lithistfor a in one twodo echo "test $a"doneshopt -u cmdhist; shopt -u lithistfor a in one twodo echo "test $a"donehistoryhistory -w ; cat "$HISTFILE" No line numbers, only commands; there is no way to tell where a multiline starts and where it ends. There is no way to tell even if there is a multiline. In fact, that is exactly what happens: if the commands are written to the file as above, when the file is read back, any information about multilines gets lost. There is only one delimiter (the newline), and each line is read back as one command. Is there a solution to this? Yes: to use an additional delimiter. The HISTTIMEFORMAT kind of does that. HISTTIMEFORMAT When this variable is set to some value, the time at which each command was executed gets stored in the file as the seconds since the epoch (yes, always seconds) after a comment ( # ) character. If we set the variable and re-write the ~/.bash_history file, we get this: $ HISTTIMEFORMAT='%F'$ history -w ; cat "$HISTFILE"#1490321397shopt -s cmdhist; shopt -u lithist#1490321397for a in one two; do echo "test $a"; done#1490321406shopt -s cmdhist; shopt -s lithist#1490321406for a in one twodo echo "test $a"done#1490321418shopt -u cmdhist; shopt -u lithist#1490321418for a in one two#1490321418do echo "test $a"#1490321420done#1490321429history#1490321439history -w ; cat "$HISTFILE"#1490321530HISTTIMEFORMAT='%FT%T '#1490321571history -w ; cat "$HISTFILE" Now you can tell where and which line is a multiline. The format '%FT%T ' shows the time, but only when using the history command: $ history 1 2017-03-23T22:09:57 shopt -s cmdhist; shopt -u lithist 2 2017-03-23T22:09:57 for a in one two; do echo "test $a"; done 3 2017-03-23T22:10:06 shopt -s cmdhist; shopt -s lithist 4 2017-03-23T22:10:06 for a in one twodo echo "test $a"done 5 2017-03-23T22:10:18 shopt -u cmdhist; shopt -u lithist 6 2017-03-23T22:10:18 for a in one two 7 2017-03-23T22:10:18 do echo "test $a" 8 2017-03-23T22:10:20 done 9 2017-03-23T22:10:29 history 10 2017-03-23T22:10:39 history -w ; cat "$HISTFILE" 11 2017-03-23T22:12:10 HISTTIMEFORMAT='%F' 12 2017-03-23T22:12:51 history -w ; cat "$HISTFILE" 13 2017-03-23T22:15:30 history 14 2017-03-23T22:16:29 HISTTIMEFORMAT='%FT%T' 15 2017-03-23T22:16:31 history 16 2017-03-23T22:16:35 HISTTIMEFORMAT='%FT%T ' 17 2017-03-23T22:16:37 history
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/353386", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24394/" ] }
353,405
I am trying to install plexmediaplayer from source. This involves compiling libmpv.so.1 which I've done and installed under /usr/local/lib When I run plexmediaplayer, I get the following error: $ plexmediaplayer plexmediaplayer: error while loading shared libraries: libmpv.so.1: cannot open shared object file: No such file or directory ldconfig finds the library correctly: $ ldconfig -v | grep libmpvlibmpv.so.1 -> libmpv.so.1.24.0 ldd on the plexmiediaplayer binary shows libmpv: $ ldd plexmediaplayer | grep libmpvlibmpv.so.1 => /usr/local/lib/libmpv.so.1 (0x00007f2fe4f33000) which is a symlink: ls -l /usr/local/lib/libmpv.so.1lrwxrwxrwx 1 root root 16 Feb 9 20:37 /usr/local/lib/libmpv.so.1 -> libmpv.so.1.24.0 both the shared object and executable are compiled for x86_64 and readable by the non-root user trying to run plexmediaplayer: $ file /usr/local/lib/libmpv.so.1.24.0/usr/local/lib/libmpv.so.1.24.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=855d9cbf952c76e3c0c1c1a162c4c94ea5a12b91, not stripped$ file /usr/local/bin/plexmediaplayer /usr/local/bin/plexmediaplayer: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=dc92ac026c5ac7bc3e5554a591321de81a3f4576, not stripped These both match my machine arch: $ uname -aLinux hostname 4.4.0-66-generic #87-Ubuntu SMP Fri Mar 3 15:29:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux Running strace on plexmediaplayer gives the following: $ strace -o lotsalogs -ff -e trace=file plexmediaplayeropen("/opt/Qt5.8.0/5.8/gcc_64//lib/tls/x86_64/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/opt/Qt5.8.0/5.8/gcc_64//lib/tls/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/opt/Qt5.8.0/5.8/gcc_64//lib/x86_64/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/opt/Qt5.8.0/5.8/gcc_64//lib/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/usr/local/lib/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 EACCES (Permission denied)open("/lib/x86_64-linux-gnu/tls/x86_64/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/lib/x86_64-linux-gnu/tls/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/lib/x86_64-linux-gnu/x86_64/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/lib/x86_64-linux-gnu/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/usr/lib/x86_64-linux-gnu/tls/x86_64/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/usr/lib/x86_64-linux-gnu/tls/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/usr/lib/x86_64-linux-gnu/x86_64/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/usr/lib/x86_64-linux-gnu/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/lib/tls/x86_64/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/lib/tls/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/lib/x86_64/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/lib/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/usr/lib/tls/x86_64/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/usr/lib/tls/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or 
directory)open("/usr/lib/x86_64/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)open("/usr/lib/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) Which includes: open("/usr/local/lib/libmpv.so.1", O_RDONLY|O_CLOEXEC) = -1 EACCES (Permission denied) but the permissions on the file through the symlink are: ls -l /usr/local/lib/libmpv.so.1.24.0 -rwxr-xr-x 1 root root 27872856 Mar 22 22:17 /usr/local/lib/libmpv.so.1.24.0 Any ideas why this can't be found by my binary? EDIT: I wiped all libmpv under /usr/local/lib and plexmediaplayer under /usr/local/bin , and removed by source directory, then reinstalled side-by-side in a VM. The build in the VM worked, the one on my host machine did not. I also hashed ld on both machines, and (unsurprisingly) they match.
A \<new line> is not the correct way to get a <new line> in the history. Memory Lets deal only with history lines as they are kept in shell memory (not disk). Lets type a couple of commands as you did: $ echo -n "this is a test for a ";\> echo "two line command" What was stored in memory as the line just written? $ history 2514 echo -n "this is a test for a ";echo "two line command"515 history 2 As you can see, the "line continuation", a backslash followed by a newline, was removed. As it should (from man bash): If a \ pair appears, and the backslash is not itself quoted, the \ is treated as a line continuation (that is, it is removed from the input stream and effectively ignored). We may get a newline if we quote it: $ echo " A test of > a new line"A test of a new line And, at this point, the history will reflect that: $ history 2 518 echo "A test ofa new line" 519 history 2 A true multi-line command: One possible example of a multi-line command is: $ for a in one two> do echo "test $a"> donetest onetest two Which will be collapsed into one history line if cmdhist is set : $ shopt -p cmdhist lithistshopt -s cmdhist shopt -u lithist$ history 3 24 for a in one two; do echo "test $a"; done 25 shopt -p cmdhist lithist 26 history 3 The numbers for each command changed because at some point I cleared the history (in memory) with a history -c . If you unset the cmdhist, you will get this: $ shopt -u cmdhist$ for a in one two> do echo "test $a"> donetest onetest two$ history 5 5 shopt -u cmdhist 6 for a in one two 7 do echo "test $a" 8 done 9 history 5 Each line (not a full command) will be at a separate line in the history. Even if the lithist is set: $ shopt -s lithist$ for a in one two> do echo "test $a"> donetest onetest two$ history 5 12 shopt -s lithist 13 for a in one two 14 do echo "test $a" 15 done 16 history 5 But if both are set: $ shopt -s cmdhist lithist$ for a in one two> do echo "test $a"> done$ history 5 23 history 15 24 shopt -p cmdhist lithist 25 shopt -s cmdhist lithist 26 for a in one twodo echo "test $a"done 27 history 5 The for command was stored as a multiline command with "newlines" instead of semicolons ( ; ). Compare with above where lithist wasn't set. Disk All the above was explained using the list of commands kept in the memory of the shell. No commands were written to the disk. The (default) file ~/.bash_history was not changed. That file will be changed when the running shell exits. At that point in time the history will overwrite the file (if histappend isn't set), or will be appended otherwise. If you want the commands to be committed to disk you need to have this set : export PROMPT_COMMAND='history -a' That will make each command line to be appended to file on each new command line. Now, lets get down to business with cmdhist and lithist. It is not so simple as it may seem. But don't worry, all will be clear in a moment. Let's say that you take the time to type all the commands below (there is no shortcut, no alias, no function, you need the actual commands, sorry). To first clear all history in memory ( history -c ) and in disk ( make a backup ) ( history -w ) and then to try three times: With the default values of cmdhist (set) and lithist (unset). With both set With both un-set Setting lithist with an unset cmdhist makes no sense (you can test it). 
List of commands to execute:

$ history -c ; history -w    # Clear all history (please make a backup).
$ shopt -s cmdhist; shopt -u lithist
$ for a in one two
> do echo "test $a"
> done
$ shopt -s cmdhist; shopt -s lithist
$ for a in one two
> do echo "test $a"
> done
$ shopt -u cmdhist; shopt -u lithist
$ for a in one two
> do echo "test $a"
> done

You will end up with this (in memory):

$ history
    1  shopt -s cmdhist; shopt -u lithist
    2  for a in one two; do echo "test $a"; done
    3  shopt -s cmdhist; shopt -s lithist
    4  for a in one two
do echo "test $a"
done
    5  shopt -u cmdhist; shopt -u lithist
    6  for a in one two
    7  do echo "test $a"
    8  done
    9  history

The three multiline commands end up as follows:

- one in the entry numbered 2 (one single line, one command)
- one in the multiline entry numbered 4 (one command spanning several lines)
- one spread over the entries numbered 6 to 8

OK, but what happens in the file? Say it already... Finally, in the file: simple, write the history to it and cat it to see this:

$ history -w ; cat "$HISTFILE"
shopt -s cmdhist; shopt -u lithist
for a in one two; do echo "test $a"; done
shopt -s cmdhist; shopt -s lithist
for a in one two
do echo "test $a"
done
shopt -u cmdhist; shopt -u lithist
for a in one two
do echo "test $a"
done
history
history -w ; cat "$HISTFILE"

No line numbers, only commands; there is no way to tell where a multiline command starts and where it ends. There is no way to tell even whether there is a multiline command at all. In fact, that is exactly what happens: if the commands are written to the file as above, any information about multilines gets lost when the file is read back. There is only one delimiter (the newline), so each line is read back as one command. Is there a solution to this? Yes: use an additional delimiter. HISTTIMEFORMAT kind of does that.

HISTTIMEFORMAT

When this variable is set to some value, the time at which each command was executed gets stored in the file as the seconds since the epoch (yes, always seconds) after a comment (#) character. If we set the variable and re-write the ~/.bash_history file, we get this:

$ HISTTIMEFORMAT='%F'
$ history -w ; cat "$HISTFILE"
#1490321397
shopt -s cmdhist; shopt -u lithist
#1490321397
for a in one two; do echo "test $a"; done
#1490321406
shopt -s cmdhist; shopt -s lithist
#1490321406
for a in one two
do echo "test $a"
done
#1490321418
shopt -u cmdhist; shopt -u lithist
#1490321418
for a in one two
#1490321418
do echo "test $a"
#1490321420
done
#1490321429
history
#1490321439
history -w ; cat "$HISTFILE"
#1490321530
HISTTIMEFORMAT='%FT%T '
#1490321571
history -w ; cat "$HISTFILE"

Now you can tell where a multiline command is and which lines belong to it. The format '%FT%T ' shows the time, but only when using the history command:

$ history
    1  2017-03-23T22:09:57 shopt -s cmdhist; shopt -u lithist
    2  2017-03-23T22:09:57 for a in one two; do echo "test $a"; done
    3  2017-03-23T22:10:06 shopt -s cmdhist; shopt -s lithist
    4  2017-03-23T22:10:06 for a in one two
do echo "test $a"
done
    5  2017-03-23T22:10:18 shopt -u cmdhist; shopt -u lithist
    6  2017-03-23T22:10:18 for a in one two
    7  2017-03-23T22:10:18 do echo "test $a"
    8  2017-03-23T22:10:20 done
    9  2017-03-23T22:10:29 history
   10  2017-03-23T22:10:39 history -w ; cat "$HISTFILE"
   11  2017-03-23T22:12:10 HISTTIMEFORMAT='%F'
   12  2017-03-23T22:12:51 history -w ; cat "$HISTFILE"
   13  2017-03-23T22:15:30 history
   14  2017-03-23T22:16:29 HISTTIMEFORMAT='%FT%T'
   15  2017-03-23T22:16:31 history
   16  2017-03-23T22:16:35 HISTTIMEFORMAT='%FT%T '
   17  2017-03-23T22:16:37 history
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/353405", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/162837/" ] }
353,452
I'm trying to connect to port 25 with netcat from one virtual machine to another, but it's telling me "no route to host" although I can ping. I do have my firewall default policy set to DROP, but I have an exception to accept traffic on port 25 for that specific subnet. I can connect from VM 3 to VM 2 on port 25 with nc, but not from VM 2 to VM 3.

Here's a preview of my firewall rules for VM 2: [screenshot]

Here's a preview of my firewall rules for VM 3: [screenshot]

When I show the listening services I have *:25, which means it's listening on all IPv4 addresses, and :::25 for IPv6 addresses. I don't understand where the error is or why it's not working: both firewalls accept traffic on port 25, so the connection should succeed. I tried comparing the two configurations to see why I can connect from VM 3 to VM 2, but they look the same. Any suggestions on what could be the problem?

Update: stopping the iptables service resolves the issue, but I still need those rules to be present.
Your no route to host while the machine is ping-able is the sign of a firewall that denies you access politely (i.e. with an ICMP message rather than just DROP-ping). See your REJECT lines? They match the description (REJECT with ICMP xxx).

The problem is that those seemingly (#) catch-all REJECT lines sit in the middle of your rules, so the rules that follow them will never be evaluated.

(#) Difficult to say whether those are actual catch-all lines; the output of iptables -nvL would be preferable.

Put those REJECT rules at the end and everything should work as expected.
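A minimal sketch of that reordering (the rule position 5 below is an assumption; locate yours with --line-numbers first):

$ iptables -L INPUT -nv --line-numbers                              # find the position of the catch-all REJECT
$ iptables -D INPUT 5                                               # delete it by its rule number
$ iptables -A INPUT -j REJECT --reject-with icmp-host-prohibited    # re-append it as the last rule

The same applies to the FORWARD chain if the traffic is routed through this host.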
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/353452", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139921/" ] }
353,457
When an executable file is run in a process, if the executable file is overwritten or deleted and then recreated by reinstallation, will the process run the new executable file?

Does the answer to the question depend on:

- whether the executable is run as a service/daemon in the process or not?
- the operating system, e.g. Linux, Unix, ...?
- whether the reinstallation is from an installer file (e.g. a deb file on Ubuntu, an msi on Windows) or from building its source code?

Here are some examples:

In Ubuntu, when a process runs an executable file, and when I overwrite the executable file by manual reinstallation via configure, make, and make install on its source code, the process still continues to run the original executable file, instead of the new executable file.

I heard that in Windows 10, when a process runs an executable file as a service, if we reinstall the executable file via its msi installer file, then the service process will restart to run the new executable file. Is it the same or a similar case for installation from .deb files on Ubuntu or Debian?

Thanks.
It depends on the kernel and on the type of executable. It doesn't depend on how the executable was started or installed.

On Linux:

For native executables (i.e. binaries containing machine code, executed directly by the kernel), an executable cannot be modified while it's running.

$ cp /bin/sleep .
$ ./sleep 999999 &
$ echo >sleep
sh: 1: cannot create sleep: Text file busy

It is possible to remove the executable (i.e. unlink it) and create a new one at the same path. Like any other case where a file is removed while it's still open, removing the executable doesn't affect the running process, and doesn't actually remove it from the disk until the file is no longer in use, i.e. until all running instances of the program exit.

For scripts (beginning with #!), the script file can be modified while the program is running. Whether that affects the program depends on how the interpreter reads the script. If it reads the whole script into its own memory before starting to execute, then the execution won't be affected. If the interpreter reads the script on demand, then the execution may be affected; some implementations of sh do that.

Many other Unix systems behave this way, but not all. IIRC older versions of Solaris allow modifying a native executable, which generally causes it to crash. A few Unix variants, including HP/UX, don't even allow removing a native executable that's currently running.

Most software installation programs take care to remove an existing executable before putting a new one in place, as opposed to overwriting the existing binary. E.g. do

rm /bin/target
cp target /bin

rather than just cp target /bin. The install shell command does things this way. This is not ideal though, because if someone tries to execute /bin/target while the cp process is running, they'll get a corrupt program. It's better to copy the file to a temporary name and then rename it to the final name. Renaming a file (i.e. moving it inside the same directory, or more generally moving it inside the same filesystem) removes the prior target file if one exists. This is how dpkg works, for example.

cp target /bin/target.tmp
mv /bin/target.tmp /bin/target
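To illustrate the install route mentioned above, a minimal sketch (assuming GNU coreutils install, which unlinks the destination before copying and therefore avoids the "Text file busy" error):

install -m 755 target /bin/target    # removes /bin/target first, then copies

For a fully atomic swap, with no window where the path is missing or half-written, the copy-then-mv sequence above remains the safer pattern.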
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353457", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
353,481
I have a log file which looks like:

Mar 23 08:20:23 New file got created in sec: 235
Mar 23 08:21:45 New file got created in sec: 127
Mar 23 08:22:34 New file got created in sec: 875
Mar 23 08:25:46 New file got created in sec: 322
Mar 23 08:26:12 New file got created in sec: 639

I need the output to look like:

Mar 23 08:20:23 : 235
Mar 23 08:21:45 : 127
Mar 23 08:22:34 : 875
Mar 23 08:25:46 : 322
Mar 23 08:26:12 : 639

What I am able to do is grep either the first part or the last part of the line; I am not able to put the two together. How can I get the desired output from my input?
You can do something like this:

awk '{print $1,$2,$3,":",$NF}' logfile

The first three fields are the timestamp, $NF is the last field on the line, and the commas insert the output field separator (a space by default) between the pieces.
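If you prefer sed, deleting the fixed message text gives the same result (a sketch that assumes the message is identical on every line):

$ sed 's/New file got created in sec//' logfile
Mar 23 08:20:23 : 235
Mar 23 08:21:45 : 127
...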
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/353481", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/222504/" ] }
353,490
I would like to have my right Alt key work as left Control while still having left Control work as left Control. So I edited my evdev file like this:

<LALT> = 64;
<LCTL> = 37;  // original binding 37
<SPCE> = 65;
<RCTL> = 105;
<RALT> = 37;  // original binding: 108

However, this does not work: now neither key works as Ctrl. How can I make this work?
The keycodes file you've changed is an XKB mapping that defines the symbol codes used in XKB layouts (<FOO>) for the keycodes emitted by the kernel keyboard driver when a key is pressed. Changing the codes there doesn't change what code the key generates; it changes what code the XKB layout thinks it's dealing with when it sees the altered symbol.

Assuming you can get your system XKB files back to their original state, the XKB way to do what you want is to load an option that will override the standard layout. There's an existing option (ctrl:ralt_rctrl) that's close to what you want:

# definition in /usr/share/X11/xkb/rules/evdev
ctrl:rctrl_ralt = +ctrl(rctrl_ralt)

# similar rule for swapped option?
ctrl:ralt_rctrl = +ctrl(ralt_rctrl)

You can load that with setxkbmap:

$ setxkbmap -option ctrl:ralt_rctrl

If that does what you want, you can make it permanent by adding that command to a .xprofile or .xinitrc or your window manager's autorun script. In GNOME you may need other steps.

If you still prefer to have Alt_R remapped as Ctrl_L instead of Ctrl_R, you'd want to create a local override clause. Use the existing option as a starting point; it's in /usr/share/X11/xkb/symbols/ctrl.

See my superuser answer on XKB modifications and some additional resources:

http://madduck.net/docs/extending-xkb/
http://apps.jcns.fz-juelich.de/doku/sc/xkbmap
https://wiki.archlinux.org/index.php/Keyboard_configuration_in_Xorg
Where is Xkb getting its configuration?
https://askubuntu.com/questions/451945/permanently-set-keyboard-layout-options-with-setxkbmap-in-gnome-unity
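For instance, a minimal way to persist the setting (assuming your session sources ~/.xprofile at login):

# appended to ~/.xprofile
setxkbmap -option ctrl:ralt_rctrl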
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353490", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183700/" ] }
353,579
I have a very long list of emails between the < and > characters:

smeimebv2t <jdyefc@nsuwtcvc>; jdedyvt <ejd2ydt2@dv2dg2vgv>; didi2jd2m <i2dmi32@hd2vdg >; 3idm23i2m <2udhu2@cdrrc>
...

How can I use an awk or perl one-liner in order to capture only the email addresses between the < and >? Example:

$ more results.out
jdyefc@nsuwtcvc
ejd2ydt2@dv2dg2vgv
i2dmi32@hd2vdg
2udhu2@cdrrc
The simplest way I can think of is using GNU grep:

$ grep -Po '<\K[^>]+(?=>)' file
jdyefc@nsuwtcvc
ejd2ydt2@dv2dg2vgv
i2dmi32@hd2vdg 
2udhu2@cdrrc

The -o means "only print the matching region of the line" and the -P activates Perl Compatible Regular Expressions. These let us use \K, which means "don't consider anything matched up to this point as part of the match", and positive lookaheads. So, the regex will match a <, then any stretch of non-> characters followed by a >.

Note that this will also match <foo>, which isn't an email. To restrict to emails only (strings with a @), you can use:

grep -Po '<\K[^>]+@[^>]+(?=>)' file
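Since the question also allows perl, an equivalent one-liner (same idea, printing every <...> group on each line) would be:

$ perl -nle 'print $1 while /<([^>]+)>/g' file

The @ requirement can be added in the same way: /<([^>]+@[^>]+)>/g .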
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353579", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153544/" ] }
353,615
Gnome Shell 3.18.5 notified me some extensions needed updating. I visited https://extensions.gnome.org/local/ from Firefox, updated the Firefox extension, and now I want to uninstall some of the Gnome extensions, for example the following one:

    Removable Drive Menu by fmuellner
    System extension
    A status menu for accessing and unmounting removable devices.

Hovering the mouse on "System extension", I read the following tooltip:

    System extension should be uninstalled using package manager. See about page for details.

The About page says:

    What is System extension? How to uninstall it?
    System extension is installed to system-wide location (usually /usr/share/gnome-shell/extensions). Such extension may be used by any PC user, however it can be uninstalled only by system administrator (root). To uninstall system extension use your distro's package manager or ask your system administrator.

I looked through Synaptic but don't see this extension. How do I remove it? These are the extensions I want to remove:

- Applications Menu
- Places Status Indicator
- Removable Drive Menu
- Workspace Indicator
- Pomodoro
Since the remove buttons are no longer available in gnome-shell 3.26, the only way I know is deleting the extension directory itself.

With Nautilus

1. Open Nautilus and show hidden files (press CTRL + H).
2. Go to your home folder.
3. Navigate to .local/share/gnome-shell/extensions
4. Delete the directory of the unwanted extension.
5. Reload gnome-shell: press ALT + F2, type r and press ENTER.

The macho way

1. Open the console.
2. Go to the extensions directory: cd ~/.local/share/gnome-shell/extensions
3. List the extensions and get the name of the unwanted one: ls -l
4. Delete the extension directory: rm -r extension@author
5. Reload gnome-shell: press ALT + F2, type r and press ENTER.
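Those steps cover per-user extensions. For the system-wide extensions the question asks about, the same idea applies under /usr/share/gnome-shell/extensions, but as root; the directory name below is only an illustration, so list the real names first:

$ ls /usr/share/gnome-shell/extensions
$ sudo rm -r /usr/share/gnome-shell/extensions/drive-menu@gnome-shell-extensions.gcampax.github.com

If the extension came from a distribution package, removing that package (as the About page suggests) is cleaner, since it keeps the package manager's records consistent.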
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/353615", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123518/" ] }
353,630
I am trying to create an event handler for Icinga. My Bash script looks like this right now:

#!/bin/bash
# Event Handler for logging out inactive RDP users
# $1 is $SERVICESTATE$ (OK WARNING UNKNOWN CRITICAL)
# $2 is $SERVICESTATETYPE$ (SOFT HARD)
# $3 is $SERVICEATTEMPT$ (1 through 4)
# $4 is $SERVICDOWNTIME$ (0 no Downtime, >0 Downtime active)
# $5 is SRV29
# $6 is $host.name$ (1 through 4)

if [ "$4" > 0 ]; echo "in downtime, exit"; then exit
fi

case "$1" in
OK)
echo "ok!"
;;
WARNING)
echo "warning!"
;;
UNKNOWN)
echo "unknown!"
;;
CRITICAL)
echo "critical!"
...

When I execute this script without my if statement at the top, everything works fine. But I want to check if $4 is greater than 0. This test condition always returns true, and no matter what I enter inside this condition it always results in "in downtime, exit". So even if [ "hello" = "hallo" ] it will go inside and exit right away. I also tried pretty much every variation with quotes, without, double brackets... and so on. I am obviously doing something wrong. Can anyone spot it? Thanks in advance!
For testing integers you will want to use:

-eq    # is equal
-ne    # is not equal
-lt    # less than
-le    # less than or equal
-gt    # greater than
-ge    # greater than or equal

So your test statement should read:

if [ "$4" -gt 0 ];

Additionally, your if statement has the then in the wrong place: everything before then is treated as the condition list, so the exit status of the echo (always success) is what decides the branch. It should be corrected to:

if [ "$4" -gt 0 ]; then

See man test for more test options.
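Putting both fixes together, the top of the script would look like:

if [ "$4" -gt 0 ]; then
    echo "in downtime, exit"
    exit
fi

(Note that in the original, [ "$4" > 0 ] was not a numeric comparison at all: inside [ ], > is a shell redirection, so the test merely created a file named 0 and succeeded.)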
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353630", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/186718/" ] }
353,681
I use Ctrl + R all the time, but I often end up going past the command I'm after as I'm pressing it so quickly. To search forward, Ctrl + S can be used, provided the terminal isn't using it first (konsole in my case, where stty -ixon in ~/.bashrc fixes it). However, I have to press it twice: once, it seems, to switch into i-search after being in reverse-i-search mode, and a second time to actually step back to the match I overshot. Is there a way to remove the need for pressing the shortcut twice?
Here's a different approach. If you are comfortable with some basic vi editing commands, bash supports a vi mode for command line editing. If you really hate vi you won't like this. But if you can tolerate it, you may find it preferable and with fewer keystrokes.

set -o vi

History search works like this:

- Esc to enter command mode
- / to begin a search
- Type the text of the search string
- Enter to perform the search
- n to go to the next match
- N to jump back to the previous match
- i to get back into insert mode
- Enter to run the command
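To make vi mode the default in every session, a common approach is to append the option to your shell startup file:

echo 'set -o vi' >> ~/.bashrc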
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353681", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54030/" ] }
353,684
I want to call a Linux syscall (or at least the libc wrapper) directly from a scripting language. I don't care which scripting language; it's just important that it not be compiled (the reason basically has to do with not wanting a compiler in the dependency path, but that's neither here nor there). Are there any scripting languages (shell, Python, Ruby, etc.) that allow this? In particular, I want the getrandom syscall.
Perl allows this with its syscall function:

$ perldoc -f syscall
    syscall NUMBER, LIST
            Calls the system call specified as the first element of the
            list, passing the remaining elements as arguments to the
            system call. If
            ⋮

The documentation also gives an example of calling write(2):

require 'syscall.ph';    # may need to run h2ph
my $s = "hi there\n";
syscall(SYS_write(), fileno(STDOUT), $s, length $s);

Can't say I've ever used this feature, though. Well, before just now to confirm the example does indeed work.

This appears to work with getrandom:

$ perl -E 'require "syscall.ph"; $v = " "x8; syscall(SYS_getrandom(), $v, length $v, 0); print $v' | xxd
00000000: 5790 8a6d 714f 8dbe                      W..mqO..

And if you don't have getrandom in your syscall.ph, then you could use the number instead. It's 318 on my Debian testing (amd64) box. Beware that Linux syscall numbers are architecture-specific.
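If you need to look up the number for your own machine, one way (a sketch; the exact header location varies between distributions and architectures) is to grep the installed kernel headers:

$ grep -rw __NR_getrandom /usr/include/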
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/353684", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60181/" ] }
353,710
I have this at the top of a script:

#!/usr/bin/env mocha

As most people know, this tells the OS which executable to use to execute the script. However, my question is: how can we pass more information to the mocha executable about how to execute the script? mocha takes optional arguments, so I would like to do something like this:

#!/usr/bin/env mocha --reporter=tap --output=foo

but I don't think this is allowed. How can I give the mocha executable more information about how to run the file?
The shebang line is interpreted by the kernel and is not very flexible. On Linux, it's limited to a single argument: the syntax is #!, optional whitespace, the path to the interpreter (not containing whitespace), optional whitespace, and optionally a single argument (which may contain whitespace except at the beginning). Furthermore the total size of the shebang line is limited to 128 bytes (the BINPRM_BUF_SIZE constant in the kernel sources, used in load_script). If you want to pass more than one argument, you need a workaround. If you're using #!/usr/bin/env for path expansion, then there's only room for the command name and no other argument.

The most obvious workaround is a wrapper script. Instead of having /path/to/my-script contain the mocha code, you put the mocha code in some other file /path/to/my-script.real and make /path/to/my-script a small shell script. Here's a sample wrapper that assumes that the real code is in a file with the same name as the script, plus .real at the end.

#!/bin/sh
exec mocha --reporter=tap --output=foo "$0.real" "$@"

With a shell wrapper, you can take the opportunity to do more complex things such as define environment variables, look for available interpreter versions, etc. Using exec before the interpreter ensures that the mocha script will run in the same process as the shell wrapper. Without exec, depending on the shell, it might run as a subprocess, which matters e.g. if you want to send signals to the script.

Sometimes the wrapper script and the actual code can be in the same file, if you manage to write a polyglot, a file that is valid code in two different languages. Writing polyglots is not always easy (or even possible) but it has the advantage of not having to manage and deploy two separate files. Here's a JavaScript/shell polyglot where the shell part executes the JS interpreter on the file (assuming that the JS interpreter ignores the shebang line; there isn't much you can do if it doesn't):

#!/bin/sh
///bin/true; exec mocha --reporter=tap --output=foo "$0" "$@"
… (the rest is the JS code) …
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353710", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/113238/" ] }
353,730
root@debian:/home/debian8# cat /etc/sudoers
#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults        env_reset
Defaults        mail_badpass
Defaults        secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

# Host alias specification

# User alias specification

# Cmnd alias specification

# User privilege specification
root    ALL=(ALL:ALL) ALL

# Allow members of group sudo to execute any command
%sudo   ALL=(ALL:ALL) ALL

# See sudoers(5) for more information on "#include" directives:
includedir /etc/sudoers.d

Line 27 is the only change: I removed a single # character. The original format is shown below.

#includedir /etc/sudoers.d

I just removed the # character.

root@debian:/home/debian8# ls /etc/sudoers.d
myRules  README

root@debian:/home/debian8# cat /etc/sudoers.d/myRules
debian8 ALL=(ALL:ALL) NOPASSWD:ALL

How to fix it?
#includedir /etc/sudoers.d is not a comment, #includedir is a directive. The hash sign is part of it. Just re-add it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353730", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102745/" ] }
353,828
I am looking to create a Shell Script that will loop through each argument/parameter except for the last one. Here is what I have so far: for i in $@do echo "$i"done This works well in terms of displaying all of the arguments after my ./script.sh command but I'm hoping there is a way of ignoring the last parameter or even any parameter of my choosing (ex. always ignoring the third parameter if there is one). To be clear, I'm more concerned about the last parameter/argument for now. I'm new to scripting so I apologize if there is another post that contains this answer. I find being new at something usually means you don't know how to properly ask what you are looking for. Any help would be greatly appreciated!
If you don't need to keep the parameters while (( $# > 1 ))do echo "$1" shiftdone If you want to keep the positional parameters untouched, you can keep a count SKIP=$#let x=1for ido if (( x != SKIP )) then echo "$i" fi let x=x+1done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353828", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/222781/" ] }
353,860
Is it possible to return a combined word count with wc only for certain files (like .txt files, for example) in a series of directories?
With GNU wc (at least), you can combine the results of find with wc as such: find folder/ -name '*.txt' -print0 | wc -w --files0-from=- This gives you all the power of find (a bit overkill if you just want to find all files ending with .txt to be honest) and it handles even the strangest filenames (containing newlines for example).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353860", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/202374/" ] }
353,864
I would like to run the command find . '! -name *.*' in the bash shell. It does not work as intended. (It should list all files in the current directory for which -name *.* is false, i.e. which do not have a dot in their names.) Instead, it prints a list of all files in the directory and, paradoxically, ends with the line find: "! -name *.*": file or directory not found. I suspect the problem is the interpretation of the expression by the shell, although it is protected by the two apostrophe (U+0027) characters. Is there a way to protect the expression reliably, whatever the expression is? I use this version: find (GNU findutils) 4.4.2
Because the OP quoted the whole expression, '! -name *.*' becomes a single (second) argument to find. So find thinks you passed it two directory names, viz. . and the strange name ! -name *.*, and it will faithfully try to list all files/subdirectories recursively in both of them. With . so far so good, but when the time comes to dive into that strange directory ! -name *.*, it can't, unless you happen to have it. And even then, find won't be doing what you wanted. For that you have to quote at the proper places:

find . ! -name '*.*'

or

eval find . '! -name "*.*"'
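A quick way to visualize the argument boundaries the shell actually produces in each case (each argument is printed inside its own brackets):

$ printf '[%s] ' find . '! -name *.*'; echo
[find] [.] [! -name *.*]
$ printf '[%s] ' find . ! -name '*.*'; echo
[find] [.] [!] [-name] [*.*]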
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353864", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/222795/" ] }
353,891
Is it possible to update the Debian distribution without losing the old settings, configuration, files and folders?
You're not very specific in your question as to what you have already read or tried. This comprehensive guide will inform you in depth on how to keep your system up to date.

Short answer: you do not have to use an ISO file; in most cases Debian can be upgraded using apt-get upgrade or apt-get dist-upgrade.
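The usual sequence (run as root, after backing up anything you care about) looks like:

apt-get update          # refresh the package lists
apt-get upgrade         # upgrade installed packages without adding/removing others
apt-get dist-upgrade    # also allow dependency changes, e.g. for a release upgrade

Existing configuration is normally preserved: dpkg asks before replacing any configuration file you have modified.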
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/353891", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/222820/" ] }