328,553
I'm supposed to be accessing a server in order to link a company's staging and live servers into our deployment loop. An admin on their side set up the two instances and then created a user on the server for us to SSH in as. This much I'm used to. In my mind, what would happen now is that I would send them my public key, which could be placed inside their authorized_keys file. Instead, however, they sent me a file named id_rsa, containing -----BEGIN RSA PRIVATE KEY-----, over email. Is this normal? I looked around and can find tonnes of resources on generating and setting up my own keys from scratch, but nothing about starting from the private key of the server. Should I be using this to generate some key for myself, or? I would ask the system admin directly, but I don't want to appear an idiot and waste the time of everybody in between us. Should I just ignore the key he sent me and ask them to put my public key inside their authorized_keys file?
"In my mind now what would happen is I would send them my public key which could be placed inside their authorized keys folder." What's "in your mind" as what should now happen is correct. Email is not a secure channel of communication, so from the standpoint of proper security, you (and they) should consider that private key compromised. Depending on your technical skill and how diplomatic you want to be, you could do several different things. I would recommend one of the following:

1. Generate your own key pair and attach the public key to an email you send to them, saying: "Thanks! Since email isn't a secure distribution method for private keys, could you please put my public key in place instead? It's attached."
2. Thank them and ask them if they object to you installing your own key pair, since the private key they have sent should be considered compromised after having been sent over email.
3. Generate your own key pair, use the key they sent you to log in the first time, and use that access to edit the authorized_keys file to contain the new public key (and remove the public key corresponding to the compromised private key).

Bottom line: you won't look like an idiot. But the other admin could be made to look like an idiot very easily; good diplomacy could avoid that.

Edit in response to comments from MontyHarder: Neither of my suggested courses of action involves "fixing things without telling the other admin what he did wrong"; I just did so subtly, without throwing him under the bus. However, I will add that I would also follow up (politely) if the subtle clues weren't picked up: "Hello, I saw you didn't respond to my comment about email being an insecure channel. I do want to be confident that this won't happen again: do you understand why I'm making this point about the secure handling of private keys? Best, Toby"
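For reference, a minimal sketch of the key-generation side (the file name deploy_key and the host are placeholders, not anything the admin specified):

$ ssh-keygen -t ed25519 -f ~/.ssh/deploy_key         # generate the pair locally; the private half never leaves your machine
$ ssh-copy-id -i ~/.ssh/deploy_key.pub user@staging-host   # or send them deploy_key.pub to append to ~/.ssh/authorized_keys

Only the .pub file ever needs to travel, which is why email is acceptable for it.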
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/328553", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204226/" ] }
328,579
I know one situation where this will occur: when the latest version of a package is already installed. Are there any other situations in which this error occurs?
Yum shows this error when it is unable to proceed with the command. There can be many reasons why this message could appear:

- The package is already installed and up to date
- The package does not exist in the configured repositories
- No repository is correctly configured
- There was a problem fetching the package from the remote URL (unable to connect, cannot find the package, etc.)
- The package requires dependencies that aren't available
- The package conflicts with another installed package

To troubleshoot the issue, you should focus on the message which appears before "Nothing to do", and not on the message "Nothing to do" itself, which is purely the result of the error.
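A few checks that surface the real cause, with the package name httpd used purely as an example:

$ yum list installed httpd        # already installed and up to date?
$ yum repolist                    # any repositories configured and reachable?
$ yum clean all && yum makecache  # rebuild metadata if fetching from the remote URL fails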
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/328579", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/199232/" ] }
328,625
I used to be confident about the fact that quoting strings is always a good practice in order to avoid having the shell parse them. Then I came across this:

$ x='('
$ [ "$x" = '1' -a "$y" = '1' ]
bash: [: `)' expected, found 1

Trying to isolate the problem, I get the same error:

$ [ '(' = '1' -a '1' = '1' ]
bash: [: `)' expected, found 1

I solved the problem like this:

[ "$x" = '1' ] && [ "$y" = '1' ]

Still, I need to know what's going on here.
This is a very obscure corner case that one might consider a bug in how the [ test built-in is defined; however, it does match the behaviour of the actual [ binary available on many systems. As far as I can tell, it only affects cases where a variable's value matches a [ operator such as ( , ! , = , -e , and so on. Let me explain why, and how to work around it in Bash and POSIX shells.

Explanation: consider the following:

x="("
[ "$x" = "(" ] && echo yes || echo no

No problem; the above yields no error and outputs yes. This is how we expect stuff to work. You can change the comparison string to '1' if you like, and the value of x, and it'll work as expected. Note that the actual /usr/bin/[ binary behaves the same way. If you run e.g.

'/usr/bin/[' '(' = '(' ']'

there is no error, because the program can detect that the arguments consist of a single string comparison operation.

The bug occurs when we AND it with a second expression. It does not matter what the second expression is, as long as it is valid. For example,

[ '1' = '1' ] && echo yes || echo no

outputs yes and is obviously a valid expression; but if we combine the two,

[ "$x" = "(" -a '1' = '1' ] && echo yes || echo no

Bash rejects the expression if and only if x is ( or !. If we were to run the above using the actual [ program, i.e.

'/usr/bin/[' "$x" = "(" -a '1' = '1' ] && echo yes || echo no

the error would be understandable: since the shell does the variable substitutions, the /usr/bin/[ binary only receives the parameters ( = ( -a 1 = 1 and the terminating ], so it understandably fails to parse whether the open parenthesis starts a sub-expression or not, there being an AND operation involved. Sure, parsing it as two string comparisons is possible, but doing it greedily like that might cause issues when applied to proper expressions with parenthesized sub-expressions. The problem, really, is that the shell's [ built-in behaves the same way, as if it expanded the value of x before examining the expression. (These ambiguities, and others related to variable expansion, were a large reason why Bash implemented, and now recommends using, the [[ ... ]] test expressions instead.)

The workaround is trivial, and often seen in scripts using older sh shells: you add a "safe" character, often x, in front of both strings being compared, to ensure the expression is recognized as a string comparison:

[ "x$x" = "x(" -a "x$y" = "x1" ]
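For comparison, a short sketch of the two unambiguous forms mentioned above, using the question's values:

x='(' y='1'
[ "$x" = '(' ] && [ "$y" = '1' ] && echo yes   # each [ sees a single comparison: no ambiguity
[[ $x = '(' && $y = '1' ]] && echo yes         # Bash parses [[ ]] before expansion: no ambiguity

Both print yes, and neither can misparse the value of $x as an operator.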
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/328625", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/157479/" ] }
328,655
I am trying to install Linux headers for Kali Linux on my machine, and I have tried every possible solution on the internet, but it always shows "Unable to locate package":

root@kali:/usr/sbin# apt-get install linux-headers-4.6.0-kali1-amd64
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package linux-headers-4.6.0-kali1-amd64
E: Couldn't find any package by glob 'linux-headers-4.6.0-kali1-amd64'
E: Couldn't find any package by regex 'linux-headers-4.6.0-kali1-amd64'

Here is my sources.list file:

# Regular Repositories
deb http://http.kali.org/kali sana main non-free contrib
deb http://security.kali.org/kali-security sana/updates main contrib non-free
# Source repositories
deb-src http://http.kali.org/kali sana main non-free contrib
deb-src http://security.kali.org/kali-security sana/updates main contrib non-free

uname -a output:

root@kali:/usr/sbin# uname -a
Linux kali 4.6.0-kali1-amd64 #1 SMP Debian 4.6.4-1kali1 (2016-07-21) x86_64 GNU/Linux
The package linux-headers-4.6.0-kali1-amd64 is no longer available in the regular Kali Linux repository; it has been superseded by the 4.8.x version. Update your /etc/apt/sources.list (see the Kali sources.list repositories list), then list the available linux-headers and linux-image packages through apt-cache search:

apt update
apt-cache search linux-headers

Then install the correct package, e.g. (this is an example; it depends on the output of the previous command):

apt-get install linux-headers-4.8.0-kali1-amd64

Also run:

apt-cache search linux-image

and install it:

apt-get install linux-image-4.8.0-kali1-amd64

Reboot your system. Or you can use the following commands to upgrade your kernel to the latest available version and install the appropriate kernel headers:

apt update
apt dist-upgrade
reboot
apt install linux-headers-$(uname -r)
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/328655", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204100/" ] }
328,677
I wanted to know if it is possible to change kernels, for example, replacing Fedora's Linux kernel with FreeBSD's. There already exists Debian GNU/kFreeBSD. Is it possible for me to customize a Linux distro to contain a BSD kernel?
No: each kernel implements its own features in its own way. There's a large amount of POSIX compatibility, but once you get outside of that, executables need to be compiled against the kernel mechanisms already in place. Many projects contain source code that only gets compiled if you explicitly say you're compiling for FreeBSD or Linux. That's essentially what kFreeBSD is: the tools support the FreeBSD kernel, but they have to be compiled for it. For example, if you try to use epoll_create on FreeBSD, things won't work as expected. Of course, you can cross-compile the tools from a BSD system LFS-style, but that's likely to take forever; it's not as simple as just compiling a new kernel.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/328677", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
328,693
After substantial research I still haven't found an answer to this query, how can I modify the command 'ifconfig' to show my computer's MAC address?
First, your computer doesn't have a MAC address. Each network card has a MAC address. So if your machine has a wireless card and an Ethernet card, it'll have two MAC addresses. On Linux, either of these commands will show you the MACs of all network cards in your machine:

ifconfig | grep ether
ip link

ifconfig is deprecated on Linux, so you should use ip.
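For a single interface, a hedged example (the name eth0 is an assumption, yours may be enp3s0, wlan0, etc., and the address shown is made up):

$ ip link show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
    link/ether 00:11:22:33:44:55 brd ff:ff:ff:ff:ff:ff

The MAC address is the hex string after link/ether.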
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/328693", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/195936/" ] }
328,710
I have an array coming from the output of a command:

array=(saf sri trip tata strokes)

Now I want to filter items based on user input. The user can also use wildcards, so if the user enters *tr*, the output should be:

trip strokes
It's easier with zsh:

$ array=(saf sri trip tata strokes)
$ pattern='*tr*'
$ printf '%s\n' ${(M)array:#$~pattern}
trip
strokes

${array:#pattern} expands to the elements of the array that don't match the pattern. (M) (for match) reverses the meaning of the :# operator so that it expands to the elements that match instead. $~pattern causes the content of $pattern to be taken as a pattern.
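In bash, which has no array-filtering expansion, a rough equivalent is a loop over a case pattern (a sketch; it assumes the user's input is in a variable named pattern):

pattern='*tr*'
for item in "${array[@]}"; do
  case $item in
    $pattern) printf '%s\n' "$item" ;;   # unquoted $pattern is matched as a glob
  esac
done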
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/328710", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5459/" ] }
328,720
I have reviewed several related questions, the closest being Extract the spec file out of an RPM (and I must add I would have phrased my question the same way). However, it seems the .spec file is not in the .rpm file (when it is a binary package). So my question is: how to get the information that originated in the spec file, at least as much as possible? I know there is a command to list the contents of the .rpm (at least two actually, rpm2cpio xxx.rpm | cpio -itv being one). What command(s) will get what is required, in particular the pre/post/etc. scripts that are run as part of the install process? Ideally, the answer is a single command, but if it must be several commands, c'est la vie. P.S. I have examined rpmbuild --rebuild (it says it expects a source RPM) and I cannot locate rpmlint. Thank you.
Yes, the RPM spec file is not part of a packaged (binary) RPM. However, you can query the RPM package for information which was present in the spec file. For example:

1) The following commands will give you the pre/post scripts which are executed when the RPM package is installed or updated:

rpm -q --scripts <installed RPM name; this name will be without the .rpm extension>
rpm -qp --scripts <file.rpm>   (if you have an rpm file)

2) You can look at specific information present in the spec file using the --queryformat option of the rpm command:

rpm -q --queryformat '%{ARCH} %{NAME}\n' <RPM name, if it is installed>
rpm -qp --queryformat '%{ARCH} %{NAME}\n' <file.rpm>   (if you have an RPM file)

The above will give the architecture for which the RPM is built and the actual name of the RPM. These pieces of information go in specific sections of the spec file, like Name, Arch, Requires(pre), Requires(post), BuildRequires, etc. For valid query options check this link.
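A concrete run against an installed package, with bash chosen purely as an example (the output shape depends on your system):

$ rpm -q --scripts bash
$ rpm -q --queryformat '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' bash
bash-4.2.46-21.el7.x86_64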
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/328720", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/201205/" ] }
328,736
Sample file (test.csv):

"PRCD-15234","CDOC","12","JUN-20-2016 17:00:00","title, with commas, ","Y!##!"
"PRCD-99999","CDOC","1","Sep-26-2016 17:00:00","title without comma","Y!##!"

Output file:

PRCD-15234|CDOC|12|JUN-20-2016 17:00:00|title, with commas, |Y!##!
PRCD-99999|CDOC|1|Sep-26-2016 17:00:00|title without comma|Y!##!

My script (does not work) is below:

while IFS="," read f1 f2 f3 f4 f5 f6; do echo $f1|$f2|$f3|$f4|$f5|$f6; done < test.csv
(generate output) | sed -e 's/","/|/g' -e 's/^"//' -e 's/"$//'

or

sed -e 's/","/|/g' -e 's/^"//' -e 's/"$//' "$file"

For the 3 expressions:

-e 's/","/|/g' = replace all the "," delimiters with the new delimiter |
-e 's/^"//' = remove the leading " mark
-e 's/"$//' = remove the trailing end-of-line " mark

This will preserve any quote marks that happen to be in the title, as long as they don't match the initial delimiter pattern ",".
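Applied to the sample test.csv above, the command would produce the requested output:

$ sed -e 's/","/|/g' -e 's/^"//' -e 's/"$//' test.csv
PRCD-15234|CDOC|12|JUN-20-2016 17:00:00|title, with commas, |Y!##!
PRCD-99999|CDOC|1|Sep-26-2016 17:00:00|title without comma|Y!##!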
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/328736", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204362/" ] }
328,746
I have a hypothesis: sometimes TCP connections arrive faster than my server can accept() them. They queue up until the queue overflows and then there are problems. How can I confirm this is happening? Can I monitor the length of the accept queue or the number of overflows? Is there a counter exposed somewhere?
To check if your queue is overflowing, use either netstat or nstat:

[centos ~]$ nstat -az | grep -i listen
TcpExtListenOverflows           3518352     0.0
TcpExtListenDrops               3518388     0.0
TcpExtTCPFastOpenListenOverflow 0           0.0
[centos ~]$ netstat -s | grep -i LISTEN
    3518352 times the listen queue of a socket overflowed
    3518388 SYNs to LISTEN sockets dropped

Reference: https://perfchron.com/2015/12/26/investigating-linux-network-issues-with-netstat-and-nstat/

To monitor your queue sizes, use the ss command and look for SYN-RECV sockets:

$ ss -n state syn-recv sport = :80 | wc -l
119

Reference: https://blog.cloudflare.com/syn-packet-handling-in-the-wild/
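The accept queue itself (fully established connections waiting for accept()) can also be read directly from ss in listening mode; port 80 here is just an example, and the output is illustrative:

$ ss -ltn 'sport = :80'
State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
LISTEN  0       128     *:80                *:*

For a listening TCP socket, Recv-Q is the current accept-queue length and Send-Q is the configured backlog limit, so a Recv-Q at or near Send-Q means the queue is about to overflow.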
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/328746", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33203/" ] }
328,750
I've got some command outputs saved into variables in a bash script, for instance 3 in each loop iteration. I'd like to save these variables inside a plain-text database containing 3 fields before the iteration finishes. The idea is the following:

if ...
  Command output1 > $v1
  Command output2 > $v2
  Command output3 > $v3
  echo $v1 $v2 $v3 >> database.txt
fi

Would this echo of the variables be valid for storing their values into database.txt? What if we would like them to be delimited by tabs in the plain text?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/328750", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/202883/" ] }
328,752
I noticed that with a normal 16 color xterm I can reassign the color values in the .Xresources file using the "*color0: #" through "*color15: #" commands. I switched to xterm-256color with the intention of using more colors but color reassignments don't seem to be working anymore. Is there a way to reassign the 256 color palette? Or is there another xterm setting to allow for more color options?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/328752", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204370/" ] }
328,773
A few questions about the sample script below. I'm calling a function _foo and want to capture its output into a variable $bar, but also use the return status (which may not be 0 or 1), or failing that, have exit stop the script (when non-zero). Why doesn't the exit in function _foo work when called this way (if ! bar="$(_foo)")? It works when called "normally":

- the exit will stop the script if I change the if statement to if ! _foo ; then (but then I lose the output);
- the exit behaves like return and will not stop the script with if ! bar="$(_foo)" ; then;
- just calling the function without the assignment makes exit work, however calling it like var="$(func)" doesn't.

Is there a better way to capture the output of _foo into $bar as well as use the return status (for other than 0 or 1, e.g. in a case statement)? I have a feeling I may need to use trap somehow. Here's a simple example:

#!/usr/bin/env bash
set -e
set -u
set -o pipefail

_foo() {
    local _retval
    echo "baz" && false
    _retval=$?
    exit ${_retval}
}

echo "start"
if ! bar="$(_foo)" ; then
    echo "foo failed"
else
    echo "foo passed"
fi
echo "${bar}"
echo "end"

Here's the output:

$ ./foo.sh
start
foo failed
baz
end

Here are some more examples. This will exit:

#!/usr/bin/env bash
set -e
set -u
set -o pipefail

func() {
    echo "func"
    exit
}

var=''
func
echo "var is ${var}"
echo "did not exit"

This will not exit:

#!/usr/bin/env bash
set -e
set -u
set -o pipefail

func() {
    echo "func"
    exit
}

var=''
var="$(func)"
echo "var is ${var}"
echo "did not exit"
exit within a function exits the entire script and not just the function (subshells notwithstanding). To expound:

#!/bin/bash
f() {
    exit 3
}
f
exit 0

The above script will terminate with exit code 3, while:

#!/bin/bash
f() {
    exit 3
}
(f)
exit 0

will terminate with exit code 0. The $(command) syntax you are using runs command within a subshell, and exit can only break out as far as the layer that subshell is running within. If you want to capture the exit code and output of something run within a subshell, both are still available to the environment in which the subshell was initiated:

#!/bin/bash
subshelloutput="$( echo "output"; exit 3 )"
returnval=$?   # captures the subshell's exit code
# more stuff follows
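Tying this back to the question's _foo, a sketch of capturing both the output and a multi-valued status (note that under set -e the assignment would abort the script on failure unless it is tested, as below):

if bar="$(_foo)"; then status=0; else status=$?; fi
case $status in
  0) echo "foo passed: $bar" ;;
  *) echo "foo failed with status $status" ;;
esac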
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/328773", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23399/" ] }
328,825
I have done chmod -R 644 . inside the directory dir. My user's permissions are drw-r--r-- and I'm the owner of the directory. When trying chmod 755 dir, an error pops up:

chmod: changing permissions of dir: Operation not permitted

The same error pops up when doing ls, even as root. How do I change the permissions back to 755 and allow deletion and modification?
From the level above dir:

chmod -R a+x *dir*

to give all users (a) execute permission on all subdirectories and files (+x), or:

chmod -R a+X *dir*

to give all users execute permission on the subdirectories only (+X applies execute to directories, and to files that already have execute permission for some user).
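If the goal is to restore conventional permissions throughout (755 for directories, 644 for files), a sketch run from the parent of dir:

find dir -type d -exec chmod 755 {} +
find dir -type f -exec chmod 644 {} +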
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/328825", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50197/" ] }
328,882
I have an array containing some elements, but I want to push new items to the beginning of the array; How do I do that?
To add an element to the beginning of an array, use:

arr=("new_element" "${arr[@]}")

Generally, you would do:

arr=("new_element1" "new_element2" "..." "new_elementN" "${arr[@]}")

To add an element to the end of an array, use:

arr=( "${arr[@]}" "new_element" )

Or instead:

arr+=( "new_element" )

Generally, you would do:

arr=( "${arr[@]}" "new_element1" "new_element2" "..." "new_elementN")
# Or
arr+=( "new_element1" "new_element2" "..." "new_elementN" )

To add an element at a specific index of an array: let's say we want to add an element at the position of index 2, arr[2]. We would actually merge the following sub-arrays:

1. Get all elements before index position 2 (arr[0] and arr[1]);
2. Add the element;
3. Get all elements from index position 2 to the last (arr[2], arr[3], ...).

arr=( "${arr[@]:0:2}" "new_element" "${arr[@]:2}" )

Removing an element from the array: to remove an element (let's say element #3), we need to concatenate two sub-arrays. The first sub-array will hold the elements before element #3, and the second sub-array will contain the elements after element #3:

arr=( "${arr[@]:0:2}" "${arr[@]:3}" )

${arr[@]:0:2} will get the two elements arr[0] and arr[1], starting from the beginning of the array. ${arr[@]:3} will get all elements from index 3 (arr[3]) to the last. One possible handy way to rebuild arr excluding element #3 (arr[2]):

del_element=3; arr=( "${arr[@]:0:$((del_element-1))}" "${arr[@]:$del_element}" )

Specify which element you want to exclude in del_element=. Another possibility to remove an element is using unset (this removes the element but leaves the array sparse; the remaining indices are not renumbered):

unset -v 'arr[2]'

Use a replace pattern if you know the value of your array elements, to truncate their value (replace it with the empty string):

arr=( "${arr[@]/PATTERN/}" )

Print the array:

printf '%s\n' "${arr[@]}"
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/328882", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/119942/" ] }
328,886
I've already seen this answer, but it didn't work! I tested both CentOS 6 and 7 and I got the same error. Interestingly enough, when I try to install it on a VM, everything goes smoothly.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/328886", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183478/" ] }
328,906
What are the commands to find out the fan speed and CPU temperature in Linux? (I know lm-sensors can do the task.) Is there any alternative?
For CPU temperature:

On Debian:

sudo apt-get install lm-sensors

On CentOS:

sudo yum install lm_sensors

Run using:

sudo sensors-detect

Type sensors to get the CPU temperature. For fan speed:

sensors | grep -i fan

This will output the fan speed. Or install psensor using:

sudo apt-get install psensor

One can also use hardinfo:

sudo apt-get install hardinfo
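As an alternative that needs no extra packages, the kernel often exposes the same sensors under /sys; the exact paths vary by driver, so thermal_zone0 and hwmon0 here are assumptions:

cat /sys/class/thermal/thermal_zone0/temp   # CPU temperature in millidegrees Celsius
cat /sys/class/hwmon/hwmon0/fan1_input      # fan speed in RPM, if the driver provides it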
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/328906", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/202905/" ] }
328,911
I cannot fully kill a mysql service on CentOS 7. I tried to find all PIDs (ps -ef | grep 'mysql') and then kill them with kill -9 ..., but mysql is recreated after some time. I also tried to kill it like this:

killall -KILL mysql mysqld_safe mysqld

Same effect: after several seconds mysql rejoins. Why does this happen?

EDITED:

# ps aux | grep mysql
root  15284 0.0  0.3  115384   1804 ?     Ss 12:10 0:00 /bin/sh /usr/bin/mysqld_safe --basedir=/usr --wsrep-new-cluster
mysql 15743 0.1 40.3 1353412 202276 ?     Sl 12:10 0:03 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --wsrep-provider=/usr/lib64/galera3/libgalera_smm.so --wsrep-new-cluster --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock --wsrep_start_position=43de3d74-bca8-11e6-a178-57b39b925285:9
root  16303 0.0  0.1  112648    976 pts/0 R+ 12:56 0:00 grep --color=auto mysql

I am using a MySQL fork (Percona XtraDB Cluster) and it can't be stopped if the node is partitioned from the cluster. It can be stopped only if I disable the mysql service and reboot the node, but it would be much better for me to kill the process without rebooting the node. So systemctl stop mysql doesn't work: it tries to stop it, but without success. I installed it from the Percona repository via yum:

yum install Percona-XtraDB-Cluster-57

The situation is the following: there were 3 nodes and they crashed. After some time, only 2 nodes could start, but they are waiting for the 3rd node. They have state: activating. If I try to stop the mysql service, it changes its state to: deactivating, but it can't be stopped. So I try to kill the mysql service and provision a new cluster from the 2 nodes. But I can't stop mysql without a reboot (and reboot isn't a solution for me).
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/328911", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164881/" ] }
328,912
Since a few days I'm facing an issue while connected to my server over SSH, for proxy/tunnel usage.

I - Setup

Client. Here is the machine:

iMac:~ Luca$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.11.6
BuildVersion: 15G1108
iMac:~ Luca$ sudo sysctl net.inet.ip.forwarding
net.inet.ip.forwarding: 0
iMac:~ Luca$ sudo sysctl net.inet.ip.fw.enable
net.inet.ip.fw.enable: 1

Tried on three different networks.

Browser: I'm using Firefox 50.0.1 to browse the internet, with the FoxyProxy extension configured like so: host address 127.0.0.1, port 9999, SOCKS v5.

SSH command: I'm using Terminal.app to connect over SSH to my server:

iMac:~ Luca$ ssh -p 53 -D 9999 luca@myIP

Server:

luca@myServer:~$ ssh -V
OpenSSH_6.7p1 Debian-5+deb8u3, OpenSSL 1.0.1t 3 May 2016
luca@myServer:~$ cat /proc/sys/net/ipv4/ip_forward
1

II - Expected

Once the connection is open, I can browse any website without any issue (with my IP being my server's). This was fine until a few days ago. It is still fine if I try: the same server (A) from another computer (Y); the same computer (X) with another server (B). From what it looks like, it fails only with my computer (X) and my server (A) together.

III - What happens

luca@myServer:~$ ssh_dispatch_run_fatal: Connection to myIP: message authentication code incorrect

The connection is then closed. This message appears at random times, but I can reproduce it easily with a big data load through the proxy: loading multiple videos, downloading big files, etc.

IV - Another way, similar problem

If I connect to my server through sftp:// (with FileZilla), with the same login (luca) and same port (53), and then try to download a file, every <30 seconds I get the following error:

Error: Incorrect MAC received on packet

Once again, this happens only with my computer (X) and my server (A). If I try another server (B) on the same computer (X): no problem. If I try the same server (A) on another computer (Y): no problem.

V - What I've tried (and didn't fix)

Reboot the server and the computer.
Restart ssh/sshd on both the server and the computer.
Delete the known_hosts file on the computer.
Specify -m and -c with the ssh command.
Specify -o GSSAPIKeyExchange=no with the ssh command.
Uncomment the Ciphers and/or MACs lines within /etc/ssh/ssh_config on the server and/or the computer.
Look at the output of the -vvvvv option of the ssh command and read logs on server/computer; nothing looked related.

Any help would be appreciated.
APPENDIX

Server ssh -Q mac:

luca@myServer:~$ ssh -Q mac
hmac-sha1 hmac-sha1-96 hmac-sha2-256 hmac-sha2-512 hmac-md5 hmac-md5-96 hmac-ripemd160 [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected]

Computer ssh -Q mac:

iMac:~ Luca$ ssh -Q mac
hmac-sha1 hmac-sha1-96 hmac-sha2-256 hmac-sha2-512 hmac-md5 hmac-md5-96 hmac-ripemd160 [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected]

Server ssh -v -p 53 -D 9999 luca@myIP:

iMac:~ Luca$ ssh -v -p 53 -D 9999 luca@myIP
OpenSSH_6.9p1, LibreSSL 2.1.8
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug1: Connecting to myIP [myIP] port 53.
debug1: Connection established.
debug1: key_load_public: No such file or directory
debug1: identity file /Users/Luca/.ssh/id_rsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/Luca/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/Luca/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/Luca/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/Luca/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/Luca/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/Luca/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/Luca/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.9
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.7p1 Debian-5+deb8u3
debug1: match: OpenSSH_6.7p1 Debian-5+deb8u3 pat OpenSSH* compat 0x04000000
debug1: Authenticating to myIP:53 as 'luca'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client [email protected] <implicit> none
debug1: kex: client->server [email protected] <implicit> none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:DUAAYL1r0QUDtRI89JozTTz+bm5wcg4cOSaFaRdbr/Y
debug1: Host '[myIP]:53' is known and matches the ECDSA host key.
debug1: Found key in /Users/Luca/.ssh/known_hosts:1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Trying private key: /Users/Luca/.ssh/id_rsa
debug1: Trying private key: /Users/Luca/.ssh/id_dsa
debug1: Trying private key: /Users/Luca/.ssh/id_ecdsa
debug1: Trying private key: /Users/Luca/.ssh/id_ed25519
debug1: Next authentication method: password
luca@myIP's password:
debug1: Authentication succeeded (password).
Authenticated to myIP ([myIP]:53).
debug1: Local connections to LOCALHOST:9999 forwarded to remote address socks:0
debug1: Local forwarding listening on ::1 port 9999.
debug1: channel 0: new [port listener]
debug1: Local forwarding listening on 127.0.0.1 port 9999.
debug1: channel 1: new [port listener]
debug1: channel 2: new [client-session]
debug1: Requesting [email protected]
debug1: Entering interactive session.
debug1: Sending environment.
debug1: Sending env LANG = fr_FR.UTF-8
Debian GNU/Linux 8.6
Linux <server> #1 SMP Tue Mar 18 14:48:24 CET 2014 x86_64 GNU/Linux
server : 274305
hostname : myServer
eth0 IPv4 : myIPv4
eth0 IPv6 : myIPv6
Last login: Thu Dec 8 15:36:09 2016 from XXX.XXX.XXX.XXX
luca@myServer:~$

Error I see sometimes:

luca@myServer:~$ Bad packet length 3045540078.
padding error: need -1249427218 block 8 mod 6
ssh_dispatch_run_fatal: Connection to 5.39.88.21: message authentication code incorrect

Server ssh -o macs=hmac-sha1 -v -p 53 -D 9999 luca@myServer when the crash happens:

iMac:~ Luca$ ssh -o macs=hmac-sha1 -v -p 53 -D 9999 luca@myIP
// [...]
luca@myServer:~$ debug1: Connection to port 9999 forwarding to socks port 0 requested.
debug1: channel 3: new [dynamic-tcpip]
debug1: Connection to port 9999 forwarding to socks port 0 requested.
debug1: channel 4: new [dynamic-tcpip]
debug1: Connection to port 9999 forwarding to socks port 0 requested.
debug1: channel 5: new [dynamic-tcpip]
debug1: Connection to port 9999 forwarding to socks port 0 requested.
debug1: channel 6: new [dynamic-tcpip]
debug1: Connection to port 9999 forwarding to socks port 0 requested.
debug1: channel 7: new [dynamic-tcpip]
debug1: Connection to port 9999 forwarding to socks port 0 requested.
debug1: channel 8: new [dynamic-tcpip]
debug1: Connection to port 9999 forwarding to socks port 0 requested.
debug1: channel 9: new [dynamic-tcpip]
debug1: Connection to port 9999 forwarding to socks port 0 requested.
debug1: channel 10: new [dynamic-tcpip]
debug1: Connection to port 9999 forwarding to socks port 0 requested.
debug1: channel 11: new [dynamic-tcpip]
debug1: Connection to port 9999 forwarding to socks port 0 requested.
debug1: channel 12: new [dynamic-tcpip]
debug1: Connection to port 9999 forwarding to socks port 0 requested.
debug1: channel 13: new [dynamic-tcpip]
debug1: Connection to port 9999 forwarding to socks port 0 requested.
debug1: channel 14: new [dynamic-tcpip]
debug1: Connection to port 9999 forwarding to socks port 0 requested.
debug1: channel 15: new [dynamic-tcpip]
debug1: Connection to port 9999 forwarding to socks port 0 requested.
debug1: channel 16: new [dynamic-tcpip]
debug1: Connection to port 9999 forwarding to socks port 0 requested.
debug1: channel 17: new [dynamic-tcpip]
debug1: Connection to port 9999 forwarding to socks port 0 requested.
debug1: channel 18: new [dynamic-tcpip]
debug1: Connection to port 9999 forwarding to socks port 0 requested.
debug1: channel 19: new [dynamic-tcpip]
ssh_dispatch_run_fatal: Connection to myIP : message authentication code incorrect
iMac:~ Luca$

After updating SSH on the client side:

iMac:~ Luca$ ssh -V
OpenSSH_7.3p1, OpenSSL 1.0.2j 26 Sep 2016
iMac:~ Luca$ ssh -p 53 -D 9999 luca@myIP
luca@myIP's password:
luca@ns3274305:~$ ssh_dispatch_run_fatal: Connection to myIP port 53: message authentication code incorrect
iMac:~ Luca$ ssh -o macs=hmac-sha1 -p 53 -D 9999 luca@myIP
luca@myIP's password:
luca@ns3274305:~$ ssh_dispatch_run_fatal: Connection to myIP port 53: message authentication code incorrect
iMac:~ Luca$
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/328912", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204508/" ] }
328,913
I have the following function:

GetHostName () {
NODE01_CHECK=`cat /etc/hosts | grep -w "node01" | awk '{print $1}'`
NODE02_CHECK=`cat /etc/hosts | grep -w "node02" | awk '{print $1}'`
IS_NODE1=`ifconfig -a | grep -w $NODE01_CHECK`
IS_NODE2=`ifconfig -a | grep -w $NODE02_CHECK`
if [[ ! -z $IS_NODE1 ]]; then
    echo "This is NODE 1"
fi
if [[ ! -z $IS_NODE2 ]]; then
    echo "This is Node 2"
fi
}

This script will identify whether a certain IP is configured on one of the two nodes belonging to a cluster. It works fine locally, but I need to run it remotely, from a server that only knows the VIP of the cluster. The goal is to transfer some files to both nodes. So when I run:

scp -r /tmp/files CLUST_VIP
ssh CLUST_VIP <<EOF
 NODE01_CHECK=`cat /etc/hosts | grep -w "node01" | awk '{print $1}'`
 NODE02_CHECK=`cat /etc/hosts | grep -w "node02" | awk '{print $1}'`
 IS_NODE1=`ifconfig -a | grep -w $NODE01_CHECK`
 IS_NODE2=`ifconfig -a | grep -w $NODE02_CHECK`
 if [[ ! -z $IS_NODE1 ]]; then
   scp -r /tmp/files node02
 fi
 if [[ ! -z $IS_NODE2 ]]; then
   scp -r /tmp/files node01
 fi
EOF

However, now, while running the same commands in an ssh block, I get the following messages:

Usage: grep [OPTION]... PATTERN [FILE]...
Try 'grep --help' for more information.
Usage: grep [OPTION]... PATTERN [FILE]...
Try 'grep --help' for more information.
Pseudo-terminal will not be allocated because stdin is not a terminal.

I have also tried using ssh -t; that removed the above errors regarding grep, but the environment variables do not seem to work. Is there a way to use environment variables over an ssh block?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/328913", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100918/" ] }
328,953
I have a strange issue related to grep -v queries. Allow me to explain. To display connections I use who:

$ who
harry   pts/0 2016-12-08 20:41 (192.168.0.1)
james   pts/1 2016-12-08 19:28 (192.168.0.1)
timothy pts/2 2016-12-08 02:44 (192.168.0.1)

The current tty of my terminal is pts/0:

$ tty
/dev/pts/0
$ tty | cut -f3-4 -d'/'
pts/0

I attempt to exclude my own connection using grep -v $(tty | cut -f3-4 -d'/'). The expected output of this command should be who, without my connection. However, the output is most unexpected:

$ who | grep -v $(tty | cut -f3-4 -d'/')
grep: a: No such file or directory
grep: tty: No such file or directory

I enclose the $(...) in quotes and that seems to fix the "No such file or directory" issue. However, my connection is still printed even though my tty (pts/0) should've been excluded:

$ who | grep -v "$(tty | cut -f3-4 -d'/')"
harry   pts/0 2016-12-08 20:41 (192.168.0.1)
james   pts/1 2016-12-08 19:28 (192.168.0.1)
timothy pts/2 2016-12-08 02:44 (192.168.0.1)

As of this point, I have absolutely no idea why the grep query is malfunctioning.
Zachary has explained the source of the problem. While you can work around it with

tty=$(tty)
tty_without_dev=${tty#/dev/}
who | grep -v "$tty_without_dev"

that would be wrong, as for instance if that tty is pts/1, you would end up excluding all the lines containing pts/10 as well. Some grep implementations have a -w option to do a word search:

who | grep -vw pts/1

would not match on pts/10, because the pts/1 in there is not followed by a non-word character. Or you could use awk to filter on the exact value of the second field, like:

who | awk -v "tty=$tty_without_dev" '$2 != tty'

If you want to do it in one command:

{ who | awk -v "tty=$(tty<&3)" '$2 != substr(tty,6)'; } 3<&0

The original stdin is duplicated onto file descriptor 3 and restored for the tty command.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/328953", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147301/" ] }
329,033
I have a large number of files I need to copy to Box.com using davfs2. I've found I'm only able to copy a couple of gigs at a time, and then I have to wait for a couple of minutes while Box catches up or it errors out. So what I did was make a list of all the files/directories using find ./ > outfile.txt. I want to iterate through the list of files and, after say 100 copies (or whatever), wait 10 minutes. How would I do that, without using cp -r? When outfile.txt looks like:

/directory1
/directory1/file.txt
/directory1/file2.txt

cp omits the directory, so file.txt and file2.txt never get copied. If I do cp -r, then it will copy directory1 and all its contents, so my file count will be off.

EDIT: To clarify, I'm interested in the cp portion, specifically how to get cp to create a directory without doing -r, as -r will throw off my count.
Try this. (Not tested.)

destination=   # assign
i=0
while read line; do
  cp --parents "$line" "$destination"
  ((i++))
  [ $i -eq 100 ] && sleep 600 && i=0
done < outfile.txt
# or: done < <(find ... and so on)   # to avoid creating the temp file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/329033", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164948/" ] }
329,058
I have some R code, and at one point it connects to an sftp server and tries to download some files. The files that need to be downloaded are determined by the R code, and there can be either one or several. I'm trying to use mget to download the files, but it doesn't seem to be working:

sftp> mget abc.PDF def.PDF ghi.PDF
Fetching /abc.PDF to def.PDF

It is only downloading abc.PDF and storing it as def.PDF in the local directory, instead of downloading all three files. What am I doing wrong?
mget works with a glob for the "source file" portion of the arguments (at least in OpenSSH version 7.3):

sftp> ls *.pdf
foo.pdf bar.pdf
sftp> mget *.pdf
Fetching /home/jdoe/bar.pdf to bar.pdf
Fetching /home/jdoe/foo.pdf to foo.pdf
sftp>

If a glob catches too many files, you will instead need to loop over the files somehow and fetch them one by one.
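For scripted one-by-one fetches, a sketch using a batch read from stdin (the host, user, and file names are placeholders, and -b requires non-interactive authentication such as a key):

for f in abc.PDF def.PDF ghi.PDF; do
  echo "get $f"
done | sftp -b - user@host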
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/329058", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204616/" ] }
329,093
I'm tweaking the pager of Git, but I've got some issues with it. What I want is: always colored output; scrolling by touchpad or mouse; quit-if-one-screen. My current configuration is:

$ git config --global core.pager 'less -+F -+X -+S'

This does everything except the last item. But if I remove -+F, there will be no output in the case of one screen. If I remove -+X as well, the output is back, but I cannot scroll by touchpad in less. Is there a workaround which can meet all the requirements above?
UPDATE

tl;dr solution: upgrade to less 530. From http://www.greenwoodsoftware.com/less/news.530.html:

Don't output terminal init sequence if using -F and file fits on one screen.

So with this fix we don't even need to bother determining whether to use -X on our own; less -F just takes care of it.

PS. Some other less configs that I use:

export PAGER='less -F -S -R -M -i'
export MANPAGER='less -R -M -i +Gg'
git config --global core.pager 'less -F -S -R -i'
#alias less='less -F -S -R -M -i'

I eventually ended up writing a wrapper on my own:

#!/usr/local/bin/bash
# BSD/OSX compatibility
[[ $(type -p gsed) ]] && SED=$(type -p gsed) || SED=$(type -p sed)
CONTEXT=$(expand <&0)
[[ ${#CONTEXT} -eq 0 ]] && exit 0
CONTEXT_NONCOLOR=$( $SED -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[mGK]//g" <<< "$CONTEXT")
LINE_COUNT=$( (fold -w $(tput cols) | wc -l) <<< "$CONTEXT_NONCOLOR" )
[[ $LINE_COUNT -ge $(tput lines) ]] && less -+X -+S -R <<< "$CONTEXT" || echo "$CONTEXT"

BSD/OSX users should manually install gnu-sed. The amazing regexp, which helps remove color codes, is from https://stackoverflow.com/a/18000433/2487227. I've saved this script to /usr/local/bin/pager and then:

git config --global core.pager /usr/local/bin/pager

The treatment for OCD patients, hooray!
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/329093", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65483/" ] }
329,097
ping_ does not provide any graph (graph no in config), but it provides values that could be used to trigger critical and warning levels. In munin.conf I have ping_.packetloss.warning 20, and in the plugin config I also have env.packetloss_warn 20. My network cable is disconnected. Running fetch ping_ while connected to the node on this host, I am getting packetloss.value 100. But after 5 minutes, and after running munin-cron manually, I see no warning generated in the webpage. I have the ping_ graph there, but no warning about packet loss. What should I do?

-- I just found that nothing is saved in the rrd file if graph no. So I commented it out, and now I have data in the rrd (rrdtool fetch ...) and munin-limits seems to work.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/329097", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/161645/" ] }
329,135
I have a CSV file such as:

input.csv
1,2,3,10
4,5,6
7,8,9,12,28,30

I want to reverse the columns in this file, which means:

output.csv
10,3,2,1
6,5,4
30,28,12,9,8,7

I know how to do it for a fixed column count, but if the column count varies, what should I do?
With perl, assuming the fields in your CSV don't have embedded commas, newlines, etc.:

perl -lpe '$_ = join ",", reverse split /,/' input.csv
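The same idea in awk, with the same caveat about embedded commas:

awk -F, '{ s = $NF; for (i = NF-1; i >= 1; i--) s = s FS $i; print s }' input.csv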
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/329135", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/203943/" ] }
329,141
I am developing a Debian 8 (Jessie) based system for an embedded solution, using multistrap to build a rootfs. The system is headless and can be accessed via a serial console meant for debugging and via SSH. I am having a problem with ModemManager in this system. It installs with no problems, but once I have it enabled and it actually starts up, it usually (not always, but generally) starts flooding the debug console. The output is usually just a meaningless stream of characters, but sometimes there are various AT commands too. I know this flooding is caused by ModemManager because it ceases once I remove ModemManager. I could live with some random flooding, but the problem is that this flooding almost always somehow makes the console non-responsive and that way prevents me from logging in. Sometimes, though very rarely, I have been able to log in despite the flooding, check the IP of the device, and then log in via SSH. Usually, though, that workaround is not available, as I can't even find out the IP given to the device by a DHCP server. I found out that this problem is due to ModemManager probing for a modem on that serial port. I also found out that there is supposed to be a way to fix the problem using a udev rule. The rule that is supposed to work is like this:

ATTRS{idVendor}=="0ca6", ATTRS{idProduct}=="a050", ENV{ID_MM_DEVICE_IGNORE}="1"

My case is a little different because the serial port is a peripheral of the CPU, i.e. not a USB serial port, so I modified the rule to this form:

KERNEL=="ttyS0", ENV{ID_MM_DEVICE_IGNORE}="1"

udevadm now tells me that the rule is being recognized and that the property is added to the attributes of the device. The problem is not yet solved, though. For some reason, ModemManager still keeps flooding the console and makes logging in impossible. Removing ModemManager is not an option because my application needs it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/329141", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204670/" ] }
329,148
I recently upgraded to Fedora 25. Since then my VPN connection via openconnect (Cisco AnyConnect Compatible VPN) has ceased to work. When I now try to define a new equivalent VPN connection, I get the message:

Error: unable to load VPN connection editor

This appears under both Wayland and X. I have OpenConnect version v7.07 and NetworkManager-openconnect-1.2.4-1.fc25.x86_64. Can you think of ways of getting the editor to work again? Or can you point to a way to manually define such a connection, circumventing GNOME?
You need to install: NetworkManager-openconnect-gnome
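On Fedora 25 that would be along the lines of:

sudo dnf install NetworkManager-openconnect-gnome

possibly followed by logging out and back in so the editor plugin is picked up.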
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/329148", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156074/" ] }
329,169
I often generate and register a lot of bash functions that automate many of the tasks I usually do in my development projects. That generation depends on the metadata of the project I am working on. I want to annotate the functions with the info of the project they were generated for, this way:

func1() {
  # This function was generated for project: PROJECT1
  echo "do my automation"
}

Ideally, I would be able to see the comment when I inspect the definition:

$ type func1
func1 is a function
func1 () {
  # This function was generated for project: PROJECT1
  echo "do my automation"
}

But bash seems to discard the comments at the moment of loading the function, not when executing it. So the comments are lost and I get this result:

func1 is a function
func1 () {
  echo "do my automation"
}

Is there any way to assign metadata to functions and check it afterwards? Is it possible to retrieve it when inspecting the definition with type?
function func_name(){
    : '
    Invocation: func_name $1 $2 ... $n
    Function: Display the values of the supplied arguments, in double quotes.
    Exit status: func_name always returns with exit status 0.
    '
    local i
    echo "func_name: $# arguments"
    for ((i = 1; i <= $#; ++i)); do
        echo "func_name [$i] \"$1\""
        shift
    done
    return 0
}
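Because the docstring is the argument of the no-op : command, it is part of the function body and survives inspection, unlike plain # comments, which the parser discards. A quick check (the exact layout of the output varies between bash versions):

$ declare -f func_name | head -n 5
func_name ()
{
    : '
    Invocation: func_name $1 $2 ... $n
    Function: Display the values of the supplied arguments, in double quotes.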
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/329169", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/122813/" ] }
329,311
What are the following files in the root directory of my Linux (Fedora 25) system?

$ ls -la
total 70
...
-rw-r--r--. 1 root root 0 Nov 22 09:28 1
-rw-r--r--  1 root root 0 Sep 27 09:53 .autorelabel
...
-rw-r--r--  1 root root 0 Dec 9 14:30 null

Is it safe to delete them? Some time ago I deleted some file (I don't remember the name) and completely damaged the HDD partition, and I don't want the same to happen here. Any advice?
This is a system with SELinux, so you should certainly keep .autorelabel . Delete null and 1 . The .autorelabel file is used most often when switching from a disabled (permissive) SELinux to an enabled (enforcing) SELinux configuration. It can also be used to correct mistakes made with SELinux when the mistakes were not made a permanent part of the SELinux configuration.
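A sketch of the cleanup, using the paths from the listing above:

sudo rm /1 /null            # both are stray empty files
sudo touch /.autorelabel    # only if you ever need to force a full SELinux relabel on the next boot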
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/329311", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13781/" ] }
329,403
syslog fills the system hard disk in minutes. This is what the logs show:

Dec 6 14:03:01 ubuntu kernel: [ 18.515567] pcieport 0000:00:1c.5: device [8086:9d15] error status/mask=00000001/00002000
Dec 6 14:03:01 ubuntu kernel: [ 18.515569] pcieport 0000:00:1c.5: [ 0] Receiver Error (First)
Dec 6 14:03:01 ubuntu kernel: [ 18.515574] pcieport 0000:00:1c.5: AER: Corrected error received: id=00e5
Dec 6 14:03:01 ubuntu kernel: [ 18.516217] pcieport 0000:00:1c.5: can't find device of ID00e5
Dec 6 14:03:01 ubuntu kernel: [ 18.516219] pcieport 0000:00:1c.5: AER: Corrected error received: id=00e5
Dec 6 14:03:01 ubuntu kernel: [ 18.516227] pcieport 0000:00:1c.5: can't find device of ID00e5
Dec 6 14:03:01 ubuntu kernel: [ 18.516230] pcieport 0000:00:1c.5: AER: Corrected error received: id=00e5
Dec 6 14:03:01 ubuntu kernel: [ 18.516241] pcieport 0000:00:1c.5: PCIe Bus Error: severity=Corrected, type=Physical Layer, id=00e5(Receiver ID)

These are three new Asus F541U laptops with Ubuntu 16 dual-booting Windows 10. All of them experience the same issue with different grades of severity. The system works well besides that, and everything is updated. Is there a proper way to solve it? Should I just ignore it and try to avoid the lines being output in the first place? How? I read some similar posts where the solutions are to unmount or blacklist the culprit (pcieport - ID 00e5?) or restrict the directory size, but I'm not sure about this. I'm currently working around it by running:

for i in /var/log/*; do cat /dev/null > $i; done

all the time...
As described in the launchpad bug report, you need to add pci=noaer to your kernel command line. Summary, taken from the above bug report:

Edit /etc/default/grub.
Change the line starting with GRUB_CMDLINE_LINUX_DEFAULT into:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=noaer"
Run sudo update-grub.
Reboot.

I recommend that you reboot first, without making the above modifications, and rather than letting Grub auto-boot, edit the boot specification: edit the line containing quiet and splash by adding (a space and) pci=noaer. If the machine works fine when booted like that, and the syslog is no longer spammed, you can safely make the above edits.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/329403", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204841/" ] }
329,405
We need to add few users to the sudoers file on Linux. They should be able to to anything root can except the following: Should not modify, read, delete /nfsshare/config Should not modify, read, delete /etc/passwd Should not mount anything Should not change root password Should not edit /etc/sudoers or run visudo to add other users Is this possible?
I am, basically, in agreement with Wissam Al-Roujoulah on this. We need to add few users to the sudoers file Do you, really need to do this? Maybe there are other ways, using acl or regular UNIX permissions. As Wissam Al-Roujoulah has already pointed out, trying to "blacklist" certain commands, is in reality a really bad idea (read below from man sudoers , emphasis mine): Note, however, that using a ‘!’ in conjunction with the built-in ALL alias to allow a user to run “all but a few” commands rarely works as intended Instead you can specify a "whitelist", e.g. the actual commands the users are allowed to run. Something like this: user1 ALL=/sbin/shutdown The above will allow user1 to shut down. You can add more commands in a comma separated list. Read more about this here .
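As an illustration of such a whitelist, here is a sketch of a sudoers fragment (the command paths and user names are hypothetical, adapt them to your systems, and always edit through visudo):

Cmnd_Alias DEPLOY_CMDS = /usr/bin/systemctl restart myapp, /usr/bin/journalctl -u myapp
user1, user2 ALL=(root) DEPLOY_CMDS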
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/329405", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115450/" ] }
329,504
Running Xubuntu 16.04.1 LTS 64-bit. /proc/sys/kernel/yama/ptrace_scope keeps resetting to 1 if I reboot, despite me changing it to 0 manually. How can I keep ptrace_scope set to a value of 0?
/proc values are stored in RAM, so they aren't persistent, but they are initialised from a file at boot. You can permanently change the value of /proc/sys/kernel/yama/ptrace_scope to 0 by editing the file /etc/sysctl.d/10-ptrace.conf and changing the line:

kernel.yama.ptrace_scope = 1

to

kernel.yama.ptrace_scope = 0
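To apply the new value immediately, without waiting for a reboot, you can also write it through sysctl and then verify it:

sudo sysctl -w kernel.yama.ptrace_scope=0
sysctl kernel.yama.ptrace_scope   # should print kernel.yama.ptrace_scope = 0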
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/329504", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164141/" ] }
329,545
I'm trying to disable viminfo completely. I tried adding something to ~/.vimrc using 3 different methods, but none of them works:

set viminfo='0,:0,<0,@0,f0,n~/.viminfo
set viminfo=
set viminfo="None"

Any ideas? I just don't want viminfo to record what files were edited before. The vim version is:

VIM - Vi IMproved 8.0 (2016 Sep 12, compiled Dec 10 2016 23:06:12)
MacOS X (unix) version
Included patches: 1-130
Compiled by Homebrew
Try invoking Vim with $ vim -i NONE From :help -i : -i {viminfo} The file "viminfo" is used instead of the default viminfo file. If the name "NONE" is used (all uppercase), no viminfo file is read or written, even if 'viminfo' is set or when ":rv" or ":wv" are used.
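If you want this behaviour every time without typing the option, one simple approach is a shell alias (assuming a POSIX-like shell; put it in ~/.bashrc or similar):

alias vim='vim -i NONE'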
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/329545", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11318/" ] }
329,546
I want to run all scripts in a directory at the same time. I know that I can get a list of all the scripts and execute the first one in a directory with `ls ./*.sh`, but I can't seem to get all of them to run. I also tried brace expansion {./*.sh; } but that also ran only the first script. After I figure out how to run all of the them I want to run all of them in the background. I know that I can probably do this with a for loop, but I was hoping there is a simple one liner using globbing or brace expansion that will get the job done simply. How can I do this?
for script in ./*.sh; do
  "$script" &
done
wait

Or to limit the number of concurrent invocations, with GNU xargs:

xargs -n1 -P5 -r0a <(printf '%s\0' ./*.sh) env

(beware it assumes script names don't contain = characters). Or with zsh:

autoload zargs # best in ~/.zshrc
zargs -n1 -P5 ./*.sh -- command

(here at most 5 at a time). Beware that if those scripts produce any output they could end up being badly interleaved. GNU parallel addresses that by storing the output of each command in a separate temporary file and outputting them in order:

parallel -j0 exec ::: ./*.sh

(-j0 to run all of them in parallel, remove to limit to the number of CPU cores, or specify the number yourself).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/329546", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7947/" ] }
329,581
I have been searching for why the default Debian shell is colourless and couldn't find an answer. Why is the Debian shell (bash) colourless by default?
Why is the Debian shell (bash) colourless by default? Because of this (from .bashrc on a vanilla Debian install, emphasis mine):

# uncomment for a colored prompt, if the terminal has the capability; turned
# off by default to not distract the user: the focus in a terminal window
# should be on the output of commands, not on the prompt
#force_color_prompt=yes

if [ -n "$force_color_prompt" ]; then
    if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
        # We have color support; assume it's compliant with Ecma-48
        # (ISO/IEC-6429). (Lack of such support is extremely rare, and such
        # a case would tend to support setf rather than setaf.)
        color_prompt=yes
    else
        color_prompt=
    fi
fi

In other words, this is "a feature", or a design choice if you will.
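If you do want the coloured prompt, uncomment that line in your own ~/.bashrc, for example with sed (a one-liner sketch; check the file afterwards and start a new shell):

sed -i 's/^#force_color_prompt=yes/force_color_prompt=yes/' ~/.bashrc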
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/329581", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204942/" ] }
329,777
I could write a shell script to do it in a loop, but is there a one line solution using the terminal ?
Not with one cut command. You could do it with awk : awk -F '\t' '{print $1 > "file1"; print $2 > "file2"}' < file Or for every field: awk -F '\t' '{for (i = 1; i <= NF; i++) print $i > "file" i}' < file Or to avoid reading the file twice, with pipes and 2 cut invocations, with a shell like AT&T ksh, zsh or bash with support for process substitution: < file tee >(cut -f2 > file2) | cut -f1 > file1 Beware that ksh and bash don't wait for that cut command running in the process substitution, so in those shells, file2 may not be complete by the time you run the next command after that.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/329777", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205095/" ] }
329,778
I have a directory with ~1M files and need to search for particular patterns. I know how to do it for all the files: find /path/ -exec grep -H -m 1 'pattern' \{\} \; The full output is not desired (too slow). Several first hits are OK, so I tried to limit number of the lines: find /path/ -exec grep -H -m 1 'pattern' \{\} \; | head -n 5 This results in 5 lines followed by find: `grep' terminated by signal 13 and find continues to work. This is well explained here . I tried quit action: find /path/ -exec grep -H -m 1 'pattern' \{\} \; -quit This outputs only the first match. Is it possible to limit find output with specific number of results (like providing an argument to quit similar to head -n )?
Since you're already using GNU extensions ( -quit , -H , -m1 ), you might as well use GNU grep 's -r option, together with --line-buffered so it outputs the matches as soon as they are found, so it's more likely to be killed of a SIGPIPE as soon as it writes the 6th line:

grep -rHm1 --line-buffered pattern /path | head -n 5

With find , you'd probably need to do something like:

find /path -type f -exec sh -c '
  grep -Hm1 --line-buffered pattern "$@"
  [ "$(kill -l "$?")" = PIPE ] && kill -s PIPE "$PPID"
' sh {} + | head -n 5

That is, wrap grep in sh (you still want to run as few grep invocations as possible, hence the {} + ), and have sh kill its parent ( find ) when grep dies of a SIGPIPE. Another approach could be to use xargs as an alternative to -exec {} + . xargs exits straight away when a command it spawns dies of a signal, so in:

find . -type f -print0 | xargs -r0 grep -Hm1 --line-buffered pattern | head -n 5

( -r and -0 being GNU extensions), as soon as grep writes to the broken pipe, both grep and xargs will exit, and find will exit itself as well the next time it prints something after that. Running find under stdbuf -oL might make it happen sooner. A POSIX version could be:

trap - PIPE # restore default SIGPIPE handler in case it was disabled
RE=pattern find /path -type f -exec sh -c '
  for file do
    awk '\''
      $0 ~ ENVIRON["RE"] {
        print FILENAME ": " $0
        exit
      }'\'' < "$file"
    if [ "$(kill -l "$?")" = PIPE ]; then
      kill -s PIPE "$PPID"
      exit
    fi
  done' sh {} + | head -n 5

Very inefficient as it runs several commands for each file.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/329778", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65213/" ] }
329,790
I have a server with three hard drives:

250 GB
3 TB
250 GB

How can I merge multiple hard drives into one bigger volume of ~3.5 TB? I am a programmer, not a system administrator.
Use LVM (Logical Volume Management) on Linux. You can think of LVM as "dynamic partitions", meaning that you can create/resize/delete LVM "partitions" (they're called "Logical Volumes" in LVM-speak) from the command line while your Linux system is running: no need to reboot the system to make the kernel aware of the newly-created or resized partitions. First of all you can use fdisk with -l option to get info about your current "Disks", then use it to partition your "Disks" and setting the system type of those partitions to "Linux LVM", after you finish the partitioning of the "Disks", use pvcreate to prepare your new partitions for "LVM". For more info: https://www.howtoforge.com/linux_lvm
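As a rough sketch of the whole flow (the device names below are placeholders, substitute the LVM partitions you actually created, and double-check each one before running, since mkfs destroys data):

pvcreate /dev/sda2 /dev/sdb1 /dev/sdc2            # prepare physical volumes
vgcreate bigvg /dev/sda2 /dev/sdb1 /dev/sdc2      # group them into one volume group
lvcreate -l 100%FREE -n bigvol bigvg              # one logical volume spanning all of it
mkfs.ext4 /dev/bigvg/bigvol                       # create a filesystem
mount /dev/bigvg/bigvol /mnt/big                  # and mount it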
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/329790", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205105/" ] }
329,793
I have a problem while creating a script from a master one. It is as simple as:

cat pippo << EOF
LOGFILE=test.log
echo '#############################' >> $LOGFILE
EOF

But when I inspect pippo I get:

LOGFILE=test.log
echo '#############################' >>

Why has the $LOGFILE been replaced?
Here documents are by default subject to shell expansions, precisely parameter expansion, command substitution, and arithmetic expansion. So variable (parameter) expansion is happening in your case: the variable LOGFILE is being expanded in the current shell, and as the variable presumably does not exist there, null is being returned (and substituted) as the expanded value. To get the shell metacharacters literally in a here doc, use quotes around the terminator string:

cat pippo <<'EOF'   ## "EOF" would do too
LOGFILE=test.log
echo '#############################' >>"$LOGFILE"
EOF

Also quote the variable expansion as (presumably) it refers to a filename, so that word splitting and pathname expansion are not done on it after expansion.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/329793", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/58233/" ] }
329,820
Is it possible to get the peak usage for a process? Tools such as 'top' and 'ps' seem to output the instantaneous CPU usage. And 'pidstat' seems to do the average... But do any make a note of the peak?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/329820", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29653/" ] }
329,878
I have a script that asks for the user's password and I want to check if the given password is wrong or not.

#A little fragment of my script
read -sp 'Your password: ' password;
if [[ $password -ne WHAT GOES HERE? ]]; then
    MORE CODE HERE
else
    MORE CODE HERE
fi
There's no fully portable way to check the user's password. This requires a privileged executable, so this isn't something you can whip up from scratch. PAM, which is used on most non-embedded Linux systems as well as many embedded systems and most other Unix variants, comes with a setuid binary, but it doesn't have a direct shell interface; you need to go via the PAM stack. You can use a PAM binding in Perl, Python or Ruby. You can install one of several checkpassword implementations. If the user is allowed to run sudo for anything, then sudo -kv will prompt for authentication (unless this has been disabled in the sudo configuration). But that doesn't work if there's no sudoers rule concerning the user. You can run su. This works on most implementations, and is probably the best bet in your case.

if su -c true "$USER"; then
    echo "Correct password"
fi
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/329878", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/201966/" ] }
329,926
I installed Linux Mint on my laptop along with a pre-installed Windows 10. When I turn on the computer, the normal GRUB menu appears most of the time. But after booting either Linux or Windows and then rebooting, GRUB starts in command line mode, as seen in the following screenshot. There is probably a command that I can type to boot from that prompt, but I don't know it. What works is to reboot using Ctrl+Alt+Del, then pressing F12 repeatedly until the normal GRUB menu appears. Using this technique, it always loads the menu. Rebooting without pressing F12 always reboots in command line mode. I think that the BIOS has EFI enabled, and I installed the GRUB bootloader in /dev/sda. Why is this happening and how can I ensure that GRUB always loads the menu?

Edit: As suggested in the comments, I tried purging the grub-efi package and reinstalling it. This did not fix the problem, but now when it starts in command prompt mode, GRUB shows the following message:

error: no such device: 6fxxxxx-xxxx-xxxx-xxxx-xxxxxee.
Entering rescue mode...
grub rescue>

I checked with the blkid command and that is the identifier of my Linux partition. Maybe this additional bit of information can help figure out what is going on?
The boot process can't find the root partition (the part of the disk that contains the information for starting up the system), so you have to specify its location yourself. I think you have to look at something like this article: how-rescue-non-booting-grub-2-linux. Short summary: in the grub rescue> command line type

ls

... to list all available devices. Then you have to go through each; type something like (depends what is shown by the ls command):

ls (hd0,1)/
ls (hd0,2)/

... and so on, until you find:

(hd0,1)/boot/grub   OR   (hd0,1)/grub

... or, in case of "UEFI", it looks something like:

(hd0,1)/efi/boot/grub   OR   (hd0,1)/efi/grub

Now you have to set the boot parameters accordingly - just type the following (with the correct numbers for your case) and after each line press return:

set prefix=(hd0,1)/grub

... or (if grub is in a sub-directory):

set prefix=(hd0,1)/boot/grub

Then continue with

set root=(hd0,1)
insmod linux
insmod normal
normal

Now it should boot:

boot

Go to the command line (e.g. start a "terminal") now, and execute:

sudo update-grub

... this should correct the missing information and it should boot next time. If NOT - you have to go through the steps again and might have to repair or install grub again: please look at the "Boot-Repair" tool from this article: https://help.ubuntu.com/community/Boot-Repair (I had positive experiences with it, when previous steps wouldn't survive the reboot)
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/329926", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271/" ] }
329,945
I tried to assign a static IP to my Ubuntu 16.04 server using nmcli, which worked, but it still has the original IP reserved as a "secondary" IP and I'm not sure how to get rid of it. 10.163.148.36 is the original IP of the server and 10.163.148.194 is the new IP I want it to switch to. I used the following nmcli command to set the IP address:

nmcli connection modify 'Wired connection 1' ipv4.addresses '10.163.148.194/24' ipv4.gateway '10.163.148.2' ipv4.method 'manual' ipv4.ignore-auto-dns 'yes' connection.autoconnect 'yes' ipv4.dns '10.10.10.10 10.20.10.10'

Note the two IP addresses for the ens160 interface:

aruba@ubuntu:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:8a:10:64 brd ff:ff:ff:ff:ff:ff
    inet 10.163.148.194/24 brd 10.163.148.255 scope global ens160
       valid_lft forever preferred_lft forever
    inet 10.163.148.36/24 brd 10.163.148.255 scope global secondary ens160
       valid_lft forever preferred_lft forever
    inet6 2006::b0a3:b9ab:2f96:a461/64 scope global temporary dynamic
       valid_lft 604254sec preferred_lft 85254sec
    inet6 2006::dc94:ead6:e8ef:8095/64 scope global mngtmpaddr noprefixroute dynamic
       valid_lft 2591987sec preferred_lft 604787sec
    inet6 fe80::941e:5fa3:3571:df76/64 scope link
       valid_lft forever preferred_lft forever

My nmcli connection details:

aruba@ubuntu:~$ nmcli connection show "Wired connection 1"
connection.id: Wired connection 1
connection.uuid: d724141e-4c7f-3fc9-97b1-c37e014aebe4
connection.interface-name: --
connection.type: 802-3-ethernet
connection.autoconnect: yes
connection.autoconnect-priority: -999
connection.timestamp: 1481582261
connection.read-only: no
connection.permissions:
connection.zone: --
connection.master: --
connection.slave-type: --
connection.autoconnect-slaves: -1 (default)
connection.secondaries:
connection.gateway-ping-timeout: 0
connection.metered: unknown
connection.lldp: -1 (default)
802-3-ethernet.port: --
802-3-ethernet.speed: 0
802-3-ethernet.duplex: --
802-3-ethernet.auto-negotiate: yes
802-3-ethernet.mac-address: 00:50:56:8A:10:64
802-3-ethernet.cloned-mac-address: --
802-3-ethernet.mac-address-blacklist:
802-3-ethernet.mtu: auto
802-3-ethernet.s390-subchannels:
802-3-ethernet.s390-nettype: --
802-3-ethernet.s390-options:
802-3-ethernet.wake-on-lan: 1 (default)
802-3-ethernet.wake-on-lan-password: --
ipv4.method: manual
ipv4.dns: 10.1.10.10,10.2.10.10
ipv4.dns-search:
ipv4.dns-options: (default)
ipv4.addresses: 10.163.148.194/24
ipv4.gateway: 10.163.148.1
ipv4.routes:
ipv4.route-metric: -1
ipv4.ignore-auto-routes: no
ipv4.ignore-auto-dns: no
ipv4.dhcp-client-id: --
ipv4.dhcp-timeout: 0
ipv4.dhcp-send-hostname: yes
ipv4.dhcp-hostname: --
ipv4.dhcp-fqdn: --
ipv4.never-default: no
ipv4.may-fail: yes
ipv4.dad-timeout: -1 (default)
ipv6.method: auto
ipv6.dns:
ipv6.dns-search:
ipv6.dns-options: (default)
ipv6.addresses:
ipv6.gateway: --
ipv6.routes:
ipv6.route-metric: -1
ipv6.ignore-auto-routes: no
ipv6.ignore-auto-dns: no
ipv6.never-default: no
ipv6.may-fail: yes
ipv6.ip6-privacy: -1 (unknown)
ipv6.addr-gen-mode: stable-privacy
ipv6.dhcp-send-hostname: yes
ipv6.dhcp-hostname: --
GENERAL.NAME: Wired connection 1
GENERAL.UUID: d724141e-4c7f-3fc9-97b1-c37e014aebe4
GENERAL.DEVICES: ens160
GENERAL.STATE: activated
GENERAL.DEFAULT: yes
GENERAL.DEFAULT6: yes
GENERAL.VPN: no
GENERAL.ZONE: --
GENERAL.DBUS-PATH: /org/freedesktop/NetworkManager/ActiveConnection/0
GENERAL.CON-PATH: /org/freedesktop/NetworkManager/Settings/0
GENERAL.SPEC-OBJECT: /
GENERAL.MASTER-PATH: --
IP4.ADDRESS[1]: 10.163.148.194/24
IP4.ADDRESS[2]: 10.163.148.36/24
IP4.GATEWAY: 10.163.148.2
IP4.DNS[1]: 10.10.10.10
IP4.DNS[2]: 10.20.10.10
IP6.ADDRESS[1]: 2006::b0a3:b9ab:2f96:a461/64
IP6.ADDRESS[2]: 2006::dc94:ead6:e8ef:8095/64
IP6.ADDRESS[3]: fe80::941e:5fa3:3571:df76/64
IP6.GATEWAY: fe80::213:1aff:fec7:f857

Lastly, my NetworkManager config:

aruba@ubuntu:~$ cat /etc/NetworkManager/NetworkManager.conf
[main]
plugins=keyfile,ofono
dns=dnsmasq

[ifupdown]
managed=true
in »Red Hat« the syntax would be like this: nmcli con mod "Wired connection 1" -ipv4.addresses "10.163.148.194" You just add a Minus before your Property It might work like this in Ubuntu as well…?
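Applied to the question, that would look something like the following (untested on Ubuntu; note the prefix length has to match the address exactly as it is stored):

nmcli con mod "Wired connection 1" -ipv4.addresses "10.163.148.36/24"
nmcli con up "Wired connection 1"
ip a show ens160   # the secondary address should now be gone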
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/329945", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/195625/" ] }
329,959
I'm configuring a Hadoop environment. I have used

$ ssh-keygen -t rsa -P ""

to generate id_rsa.pub and id_rsa, and

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

to set up password-free login. Now, when I enter the ssh localhost command I get this error:

The authenticity of host 'localhost (::1)' can't be established.

How can I solve this problem?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/329959", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205203/" ] }
329,994
From the bash manual The rules concerning the definition and use of aliases are somewhat confusing. Bash always reads at least one complete line of input before executing any of the commands on that line. Aliases are expanded when a command is read, not when it is executed. Therefore, an alias definition appearing on the same line as another command does not take effect until the next line of input is read. The commands following the alias definition on that line are not affected by the new alias. This behavior is also an issue when functions are executed. Aliases are expanded when a function definition is read, not when the function is executed , because a function definition is itself a compound command. As a consequence, aliases defined in a function are not available until after that function is executed . To be safe, always put alias definitions on a separate line, and do not use alias in compound commands. The two sentences "Aliases are expanded when a function definition is read, not when the function is executed" and "aliases defined in a function are not available until after that function is executed" seem to be contrary to each other. Can you explain what they mean respectively?
Aliases are expanded when a function definition is read, not when the function is executed …

$ echo "The quick brown fox jumps over the lazy dog." > myfile
$ alias myalias=cat
$ myfunc() {
> myalias myfile
> }
$ myfunc
The quick brown fox jumps over the lazy dog.
$ alias myalias="ls -l"
$ myalias myfile
-rw-r--r-- 1 myusername mygroup 45 Dec 13 07:07 myfile
$ myfunc
The quick brown fox jumps over the lazy dog.

Even though myfunc was defined to call myalias, and I've redefined myalias, myfunc still executes the original definition of myalias. Because the alias was expanded when the function was defined. In fact, the shell no longer remembers that myfunc calls myalias; it knows only that myfunc calls cat:

$ type myfunc
myfunc is a function
myfunc ()
{
    cat myfile
}

… aliases defined in a function are not available until after that function is executed.

$ echo "The quick brown fox jumps over the lazy dog." > myfile
$ myfunc() {
> alias myalias=cat
> }
$ myalias myfile
-bash: myalias: command not found
$ myfunc
$ myalias myfile
The quick brown fox jumps over the lazy dog.

The myalias alias isn't available until the myfunc function has been executed. (I believe it would be rather odd if defining the function that defines the alias was enough to cause the alias to be defined.)
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/329994", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
329,996
I am testing a systemd timer and trying to override its default timeout, but without success. I'm wondering whether there is a way to ask systemd to tell us when the service is going to be run next. Normal file ( /lib/systemd/system/snapbackend.timer ):

# Documentation available at:
# https://www.freedesktop.org/software/systemd/man/systemd.timer.html
[Unit]
Description=Run the snapbackend service once every 5 minutes.

[Timer]
# You must have an OnBootSec (or OnStartupSec) otherwise it does not auto-start
OnBootSec=5min
OnUnitActiveSec=5min
# The default accuracy is 1 minute. I'm not too sure that either way
# will affect us. I am thinking that since our computers will be
# permanently running, it probably won't be that inaccurate anyway.
# See also:
# http://stackoverflow.com/questions/39176514/is-it-correct-that-systemd-timer-accuracysec-parameter-make-the-ticks-slip
#AccuracySec=1

[Install]
WantedBy=timers.target

# vim: syntax=dosini

The override file ( /etc/systemd/system/snapbackend.timer.d/override.conf ):

# This file was auto-generated by snapmanager.cgi
# Feel free to do additional modifications here as
# snapmanager.cgi will be aware of them as expected.
[Timer]
OnUnitActiveSec=30min

I ran the following commands and the timer still ticks once every 5 minutes. Could there be a bug in systemd?

sudo systemctl stop snapbackend.timer
sudo systemctl daemon-reload
sudo systemctl start snapbackend.timer

So I was also wondering, how can I know when the timer will tick next? That would immediately tell me whether it's in 5 min. or 30 min., but systemctl status snapbackend.timer says nothing about that. Is there a command that would tell me the delay currently used? For those interested, here is the service file too ( /lib/systemd/system/snapbackend.service ), although I would imagine that this should have no effect on the timer ticks...

# Documentation available at:
# https://www.freedesktop.org/software/systemd/man/systemd.service.html
[Unit]
Description=Snap! Websites snapbackend CRON daemon
After=snapbase.service snapcommunicator.service snapfirewall.service snaplock.service snapdbproxy.service

[Service]
# See also the snapbackend.timer file
Type=simple
WorkingDirectory=~
ProtectHome=true
NoNewPrivileges=true
ExecStart=/usr/bin/snapbackend
ExecStop=/usr/bin/snapstop --timeout 300 $MAINPID
User=snapwebsites
Group=snapwebsites
# No auto-restart, we use the timer to start once in a while
# We also want to make systemd think that exit(1) is fine
SuccessExitStatus=1
Nice=5
LimitNPROC=1000
# For developers and administrators to get console output
#StandardOutput=tty
#StandardError=tty
#TTYPath=/dev/console
# Enter a size to get a core dump in case of a crash
#LimitCORE=10G

[Install]
WantedBy=multi-user.target

# vim: syntax=dosini
The state of currently active timers can be shown using systemctl list-timers:

$ systemctl list-timers --all
NEXT                        LEFT     LAST                        PASSED       UNIT                         ACTIVATES
Wed 2016-12-14 08:06:15 CET 21h left Tue 2016-12-13 08:06:15 CET 2h 18min ago systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service

1 timers listed.
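You can also limit the listing to the timer you care about, e.g.:

systemctl list-timers snapbackend.timer

The NEXT and LEFT columns then show directly whether the 5 min or the 30 min interval is in effect.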
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/329996", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57773/" ] }
329,997
I have a file with the following syntax: slave_master: '1.2.3.4' and I would like to replace it with sed or awk this way: slave_master: - '1.2.3.4' - '1.2.3.5' The file file is few hundreds lines long and there are other such lines with other IP values which should not be affected. Is it possible to do it with on command?Thanks a lot.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/329997", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/191994/" ] }
330,068
Whilst attempting to compile poppler from source I get the error Package "fontconfig" not found . I have found a lot of other resources advertising that this can be fixed by installing both pkg-config and libfontconfig1-dev to get the libraries, but I still got the error. Trying to install fontconfig from source failed at the make step and I've found no resources on how to fix (whole host of C errors). If I have fontconfig on the system (and apt seems to suggest I do) how can I use it when running ./configure for a package?
The key here turned out to be the PKG_CONFIG_PATH environment variable. This was empty on a standard shell session on my system. There seem to be lots of directories with pkgconfig in the name, but to find the correct one I was able to use apt-file per this thread, i.e.

$ apt-file search fontconfig.pc
libfontconfig1-dev: /usr/lib/x86_64-linux-gnu/pkgconfig/fontconfig.pc

Then run

export PKG_CONFIG_PATH=/usr/lib/x86_64-linux-gnu/pkgconfig

And now the ./configure step can find the .pc file which it requires for that library. Learning about apt-file seems to be a useful outcome of this problem.
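A quick way to confirm the fix worked is to query the package directly; pkg-config exits non-zero and prints an error if it still cannot find the module:

pkg-config --modversion fontconfig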
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/330068", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100968/" ] }
330,094
I have the following content in in /etc/udev/rules.d/81-external-disk.rules: ENV{ID_FS_UUID}=="6826692e-79f4-4423-8467-cef4d5e840c5", RUN{program}+="/bin/mount -o nofail,x-systemd.device-timeout=1 -t ext4 -U 6826692e-79f4-4423-8467-cef4d5e840c5 /backup/external" After running: udevadm control --reload ; udevadm trigger /dev/sdb1 It does nothing at all. However if II change the mount command for something such as /bin/touch /tmp/xyz it works. Versions: [root@helsinki rules.d]# rpm -qa | grep udevlibgudev1-219-19.el7_2.12.x86_64python-pyudev-0.15-7.el7_2.1.noarch[root@helsinki rules.d]# rpm -qa | grep systemdsystemd-libs-219-19.el7_2.12.x86_64systemd-219-19.el7_2.12.x86_64systemd-sysv-219-19.el7_2.12.x86_64[root@helsinki rules.d]# cat /etc/redhat-release CentOS Linux release 7.2.1511 (Core)
This is a systemd feature. The original udev command has been replaced by systemd-udevd (see its man page). One of the differences is that it creates its own filesystem namespace, so your mount is done, but it is not visible in the principal namespace. (You can check this by doing systemctl status systemd-udevd to get the Main PID of the service, then looking through the contents of /proc/<pid>/mountinfo for your filesystem). If you want to go back to having a shared instead of private filesystem namespace, then create a file /etc/systemd/system/systemd-udevd.service with contents

.include /usr/lib/systemd/system/systemd-udevd.service

[Service]
MountFlags=shared

or a new directory and file /etc/systemd/system/systemd-udevd.service.d/myoverride.conf with just the last 2 lines, i.e.

[Service]
MountFlags=shared

and restart the systemd-udevd service. I haven't found the implications of doing this.
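To restart the service after creating the override, the usual sequence is:

sudo systemctl daemon-reload
sudo systemctl restart systemd-udevd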
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/330094", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47954/" ] }
330,146
I have a process whose environment is the following: root@a-vm:/proc/1363# hexdump -C environ00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|*0000016c I've never seen anything like this; I expect environ to contain nul terminated key=value pairs, so this output violates all sorts of assertions. Am I looking at a known kernel bug, or is there some legitimate way in Unix/Linux to accomplish this? (…and if so, why? why does the kernel even allow this nonsense?) (on Linux, 3.13.0/Ubuntu Trusty) (I ran into this while trying to figure out why this process is not writing some temporary output to the correct location; it's suppose to use a certain directory for temp storage, and it's informed of that directory via setting the env variable TMP ; but I'm setting TMP to something that looks like a very normal path, not a bunch of nuls, and I've never seen a completely empty env anyways.)
This is not nonsense, there is a legitimate way in Linux to accomplish this, and your expectations are erroneous. The argument and environment strings passed to a program's startup code by the kernel are stored in ordinary application-space virtual memory, just like any other program data; and, just like any other program data variables, they are modifiable. It is quite legitimate for programs to modify them. (Note that this is from the point of view of what the kernel supplies and enforces. What the standards for particular programming languages may say is not necessarily the same. But as far as the kernel is concerned, it's just an area of application-space virtual memory for program data that is readable and writable. The kernel doesn't care what programming language you compiled your machine code from.) The /proc/${PID}/environ file is just a window onto this application-space virtual memory. Rather than remember the actual environment data of the process, Linux just remembers the start and end addresses of the environment area that it started the process with, and the /proc/${PID}/environ file just reads out whatever is in that memory right now. You should not expect that this file contains a list of ␀-terminated strings. That is an erroneous expectation. There's no GNU C library function for modifying the memory that contains these strings. But various programs have their own functions to do so. For one example, consider OpenSSH. The OpenSSH server modifies what ps shows for its argument vector, to read things like sshd: JdeBP [priv] . The OpenSSH server contains code that tries to imitate on Linux what it can do with the BSD C library on OpenBSD. On OpenBSD there's a BSD C library function named setproctitle() that re-writes the process argument vector as reported by the ps command. It calls sysctl() to pass a new argument vector to the kernel, which ps can read out with sysctl() . FreeBSD has a similar function. On Linux, as explained, the kernel doesn't remember actual arguments and environment, merely the start and end addresses of the memory areas where it initially placed them when starting up the process. So the Linux port of OpenSSH has a compatibility setproctitle() function that overwrites the aforesaid memory area, instead. This compatibility function calculates the total size of the environment area and the argument area, and overwrites all of it with the new argument string. It does this because in the usual case programs that call setproctitle() want to write in a longer set of argument data than what the process originally had. sshd often does. So it allows the new arguments to overwrite the environment area that follows the argument area, giving programs more room for longer sets of argument strings. Importantly, it also pads the unused part of the area that it has not needed to overwrite, out to the original length of the total argument and environment data, with ␀s. And what you are seeing is the exact result of this. If you find an OpenSSH server process on your system, you'll find that it, too, has lots of ␀s in its /proc/${PID}/environ . Further reading setproctitle . FreeBSD 11.0 manual. setproctitle . OpenBSD manual. setproctitle() . OpenSSH Portable Release. environ_read() . fs/proc/base. Linux kernel. Free Electrons.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/330146", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6013/" ] }
330,169
I found a library on my system and I don't think it is used anywhere. So for tidyness' sake I would delete it, but I want to make sure that I don't break anything. Specifically it's about libgme0. I am on Linux Mint 18. So far I've tried ldd /bin/* | grep libgme0ldd /sbin/* | grep libgme0ldd /usr/bin/* | grep libgme0ldd /usr/sbin/* | grep libgme0 and got no results. Is this enough proof that this library is unused and save to delete?
You should probably let the package manager of your distribution decide if it's safe to remove it or not. Maybe try to remove it with apt-get remove libgme0 and see if it wants to remove other packages?
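Before removing it, you can also ask apt which installed packages (if any) still depend on the library; if the list is empty, that is a strong hint the removal is safe:

apt-cache rdepends --installed libgme0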
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/330169", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/188344/" ] }
330,233
There is not much to explain here. Just want to know why echo $SHELL always gives the output /bin/bash even though I switch to other shells. What do I have to do to make sure the $SHELL gives the correct shell path that I am in. [root@localhost user]# echo $0bash[root@localhost user]# echo $SHELL/bin/bash[root@localhost user]# csh[root@localhost user]# echo $0csh[root@localhost user]# echo $SHELL/bin/bash[root@localhost user]# tcsh[root@localhost user]# echo $0tcsh[root@localhost user]# echo $SHELL/bin/bash[root@localhost user]# shsh-4.2# echo $0shsh-4.2# echo $SHELL/bin/bashsh-4.2# [root@localhost user]# which csh/bin/csh[root@localhost user]# which csh/bin/csh
$SHELL is the environment variable that holds your preferred shell, not the currently running shell. It's initialised by login or any other application that logs you in, based on the shell field in your user entry in the passwd database (your login shell). That variable is used to tell applications like xterm, vim... what shell they should start for you when they start a shell. You typically change it when you want to use another shell than the one set for you in the passwd database. To get the path of the current shell interpreter, on Linux, and with Bourne or csh like shells, you can do:

readlink "/proc/$$/exe"

The rc equivalent:

readlink /proc/$pid/exe

The fish equivalent:

set pid %self
readlink /proc/$pid/exe

csh/tcsh set the $shell variable to the path of the shell. In Bourne-like shells, $0 will contain the first argument that the shell received (argv[0]), which by convention is the name of the command being invoked (though login applications, again by convention, make the first character a - to tell the shell it's a login shell and should for instance source the .profile or .login file containing your login session customisations) when not called to interpret a script or when called with shell -c '...' without extra arguments. In:

$ bash -c 'echo "$0"'
bash
$ /bin/bash -c 'echo "$0"'
/bin/bash

It's my shell that calls /bin/bash in both cases, but in the first case with bash as its first argument, and in the second case with /bin/bash. Several shells allow passing arbitrary strings instead, like:

$ (exec -a whatever bash -c 'echo "$0"')
whatever

in ksh/bash/zsh/yash.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/330233", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205044/" ] }
330,243
I know that when passing a variable to sed, one should use double quote. So when I was using sed to display a line of a file given a line number: sed '894!d' cloud.cpp It works, but since I also want to pass the line number 894 to sed as a variable, so I tried: lnum=894sed "$lnum!d" cloud.cpp But it does not work, and even single quote with same syntax does not work: sed "894!d" cloud.cpp So in this case how can I still pass a variable to sed.
As you mentioned, you must use double quotes, because when you use single quotes for strings, their contents will be treated literally. But in your case, you have a combination of a pattern and a variable, so you should use single quotes for the pattern and double quotes for the variable:

sed "$lnum"'!d' cloud.cpp
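Alternatively, keep the variable out of the pattern syntax entirely and use sed's p command with -n, which sidesteps the quoting question (and, in an interactive bash, the history expansion that ! can trigger inside double quotes):

lnum=894
sed -n "${lnum}p" cloud.cpp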
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/330243", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/194026/" ] }
330,305
I want to create a script that would automatically encrypt and push to GitHub into public repo some sensible files I don't want to expose (but do want to keep together with the whole project). As a solution I decided to encrypt them with GPG. The issue is that I can't find any clues on how to encrypt a particular file with a passphrase passed as a CLI argument to a gpg -c command. Does anybody know how to do this?
Use one of the --passphrase-... options, in batch mode:

--passphrase-fd reads the passphrase from the given file descriptor:

echo mysuperpassphrase | gpg --batch -c --passphrase-fd 0 file

--passphrase-file reads the passphrase from the given file:

echo mysuperpassphrase > passphrase
gpg --batch -c --passphrase-file passphrase file

--passphrase uses the given string:

gpg --batch -c --passphrase mysuperpassphrase file

These will all encrypt file (into file.gpg) using mysuperpassphrase. With GPG 2.1 or later, you also need to set the PIN entry mode to "loopback":

gpg --batch -c --pinentry-mode loopback --passphrase-file passphrase file

etc. Decryption can be performed in a similar fashion, using -d instead of -c, and redirecting the output:

gpg --batch -d --passphrase-file passphrase file.gpg > file

etc.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/330305", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136629/" ] }
330,366
When I used an X11 desktop, I could run graphical applications in docker containers by sharing the $DISPLAY variable and /tmp/X11-unix directory. For example: docker run -ti -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix some:ubuntu xclock Now, I'm on Fedora 25 running Wayland, so there is no X11 infrastructure to share with the container. How can I launch a graphical application in the container, and have it show up on my desktop? Is there some way to tie in XWayland?
As you say you are running Fedora 25 with Wayland, I assume you are using Gnome-Wayland desktop. Gnome-Wayland runs Xwayland to support X applications. You can share Xwayland access like you did before with Xorg. Your example command misses XAUTHORITY , and you don't mention xhost . You need one of this ways to allow X applications in docker to access Xwayland (or any X). As all this is not related to Wayland, I refer to How can you run GUI applications in docker container? on how to run X applications in docker. As for short, two solutions with xhost: Allow your local user access via xhost: xhost +SI:localuser:$(id -un) and create a similar user with docker run option: --user=$(id -u):$(id -g) Discouraged: Allow root access to X with xhost +SI:localuser:root Related Pitfall : X normally uses shared memory (X extension MIT-SHM ). Docker containers are isolated and cannot access shared memory. That can lead to rendering glitches and RAM access failures. You can avoid that with docker run option --ipc=host . That impacts container isolation as it disables IPC namespacing. Compare: https://github.com/jessfraz/dockerfiles/issues/359 To run Wayland applications in docker without X, you need a running wayland compositor like Gnome-Wayland or Weston. You have to share the Wayland socket. You find it in XDG_RUNTIME_DIR and its name is stored in WAYLAND_DISPLAY . As XDG_RUNTIME_DIR only allows access for its owner, you need the same user in container as on host. Example: docker run -e XDG_RUNTIME_DIR=/tmp \ -e WAYLAND_DISPLAY=$WAYLAND_DISPLAY \ -v $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY:/tmp/$WAYLAND_DISPLAY \ --user=$(id -u):$(id -g) \ imagename waylandapplication QT5 applications also need -e QT_QPA_PLATFORM=wayland and must be started with imagename dbus-launch waylandapplication x11docker for X and Wayland applications in docker is an all in one solution. It also cares about preserving container isolation (that gets lost if simply sharing host X display as in your example).
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/330366", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21466/" ] }
330,408
We appear to have two files in the /var/spool/lp/logs folder named "requests". One is owned by lp, the other by root. We need to remove the requests file owned by root - how do we reference it? Here's the output from ls -l command: -rw-r--r-- 1 root sys 0 Jan 30 2014 lp -rw-rw---- 1 root lp 6584 Nov 4 06:10 lpsched -rw-rw---- 1 lp lp 3365 Dec 14 10:56 requests -rw-r--r-- 1 root sys 1668416 Dec 14 10:41 requests drwxr-xr-x 2 root sys 1024 Sep 30 2013 requests.archives
If you have GNU ls, you can run ls -lQ to see a quoted version of the filename:

$ ls -lQ
total 0
-rw-r--r--. 1 user group 0 Dec 14 14:32 "requests"
-rw-r--r--. 1 user group 0 Dec 14 14:32 "requests "

To remove a specific file, first find its inode number with ls -li:

$ touch 'requests' 'requests '
$ ls -li
total 0
440 -rw-r--r--. 1 user group 0 Dec 14 14:32 requests
441 -rw-r--r--. 1 user group 0 Dec 14 14:32 requests 

Here we have two similar files; one has inode 440, the other 441 (left-hand column). For your case, find the file owned by root and grab that inode number. The -xdev (or -mount) option to find says to stay on the same filesystem, just in case you have a filesystem mounted underneath the current directory, to avoid catching any files matching in that child filesystem. Then:

$ find . -inum 441 -xdev -user root -ls
441 0 -rw-r--r-- 1 user group 0 Dec 14 14:32 ./requests\ 

Notice that find quoted the space character at the end. And to delete:

$ find . -inum 441 -xdev -user root -delete    # GNU find

or

$ find . -inum 441 -xdev -user root -exec rm {} \;    # otherwise
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/330408", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205503/" ] }
330,414
I noticed bash has a short cut for ctrl + T which swaps the last two characters before the cursor. I'm wondering why the engineers decided to include this. Was it inherited from a previous convention? Or is there a practical purpose that this is commonly used for?
It's very useful to quickly fix typos: sl becomes ls with a single Ctrl T . You can use Alt T to swap words too ( e.g. when switching between service and systemctl ...). Historically speaking, the Ctrl T feature came to Bash from Emacs in all likelihood. It probably was copied to Emacs from some other editor; it was present in Stanford's E editor (see Essential E page 13) by 1980, and E had a strong impact on Richard Stallman (as described in Free as in Freedom ). It was implemented in very early versions of Bash, before its first release in 1989, when it was pulled out into the readline library where it lives today (the very first entry in the readline ChangeLog hints at this).
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/330414", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16792/" ] }
330,484
I wrote a bash script, and I executed it without compiling it first. It worked perfectly. It can work with or without permissions, but when it comes to C programs, we need to compile the source code. Why?
It means that shell scripts aren't compiled, they're interpreted: the shell interprets scripts one command at a time, and figures out every time how to execute each command. That makes sense for shell scripts since they spend most of their time running other programs anyway. C programs on the other hand are usually compiled: before they can be run, a compiler converts them to machine code in their entirety, once and for all. There have been C interpreters in the past (such as HiSoft 's C interpreter on the Atari ST) but they were very unusual. Nowadays C compilers are very fast; TCC is so fast you can use it to create "C scripts", with a #!/usr/bin/tcc -run shebang, so you can create C programs which run in the same way as shell scripts (from the users' perspective). Some languages commonly have both an interpreter and a compiler: BASIC is one example that springs to mind. You can also find so-called shell script compilers but the ones I've seen are just obfuscating wrappers: they still use a shell to actually interpret the script. As mtraceur points out though a proper shell script compiler would certainly be possible, just not very interesting. Another way of thinking about this is to consider that a shell's script interpreting capability is an extension of its command-line handling capability, which naturally leads to an interpreted approach. C on the other hand was designed to produce stand-alone binaries; this leads to a compiled approach. Languages which are usually compiled do tend to sprout interpreters too, or at least command-line-parsers (known as REPLs, read-eval-print loops ; a shell is itself a REPL).
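As a small illustration of the "C script" idea mentioned above (this assumes tcc is installed and lives at /usr/bin/tcc; adjust the shebang to your system):

cat > hello.c <<'EOF'
#!/usr/bin/tcc -run
#include <stdio.h>
int main(void) { printf("hello from C run like a script\n"); return 0; }
EOF
chmod +x hello.c
./hello.c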
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/330484", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98965/" ] }
330,569
In order to forget private keys passphrase (id_rsa) i usually run: ssh-add -D # to forget all loaded identitiesssh-add -d # to forget primary identity ($HOME/.ssh/id_rsa) Now with macOS Sierra v10.12.1 i get this error: $ ssh-add -DAll identities removed.$ ssh-add -dCould not remove identity "/Users/user/.ssh/id_rsa": agent refused operationCould not remove identity "/Users/user/.ssh/id_dsa": agent refused operation I searched google with no luck!
I had the same issue with Sierra. Try removing id_rsa from $HOME/.ssh/ and then restarting (I removed id_rsa.pub as well - therefore the two keys private and public ). It solved my problem.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/330569", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205531/" ] }
330,571
I am trying to write a script on ESXi and I need to add an "if last Sunday of month" check. I tried:

date -d @$(( $(date -d $(date -d @$(( $(date +%s) + 2678400 )) +%Y%m01) +%s) - 604800 )) +%d

It does not work there, but it works on Debian. On ESXi the output is now August.
I believe the question is Given a particular date, can I determine whether it is the last Sunday in the month? and not the more general question Given a particular month, on what day is its last Sunday? Given that, we can divide the problem in two:

Is the date a Sunday?
Is it the last week of the month?

For the first part, the test is easy enough:

date -d "$date" +%a    # outputs "Sun" for a Sunday

We can test that:

test $(date -d "$date" +%a) = Sun    # success if $date is a Sunday

Now, to test whether it's the last week of the month, we can add one week to the date, and see if that gives us one of the first 7 days of the next month:

test $(date -d "$date + 1week" +%e) -le 7

Since the weekday of $date + 1week is the same as that of $date, we can generate both parts of the test in one go, and use a Bash regular expression test:

if [[ $(date -d "$date + 1week" +%d%a) =~ 0[1-7]Sun ]]
then
    echo "$date is the last Sunday of the month!"
fi

Tested:

$ ./330571.sh 2016-12-01
$ ./330571.sh 2016-12-04
$ ./330571.sh 2016-12-25
2016-12-25 is the last Sunday of the month!
$ ./330571.sh 2017-01-28
$ ./330571.sh 2017-01-29
2017-01-29 is the last Sunday of the month!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/330571", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/193682/" ] }
330,581
How to Geek says that Windows 10 will only give CLI access to Linux . Is there possibly any way to run, specifically, Firefox from this bash shell?
How to Geek was quite wrong, as readers quickly pointed out (q.v.), but was never corrected. One can run Linux X applications on the Windows Subsystem for Linux, provided that they don't do something else that the WSL does not support. One just needs a Win32 X server running on the machine (or indeed an X server running elsewhere) to point them at. One has quite a few choices of Win32 X server for that. This was reported within days of the initial beta release. Running Firefox was even in the reports. There are far better sources on this than How to Geek . Reading through the article, that wasn't the only glaring factual error that leapt out. There are several there, including one that was even pointed out as a mistake not to make in the WSL release notes, which the How to Geek author obviously did not read or check. Further reading Daniel Aleksandersen (2016-04-07). Running Linux desktop apps on the Windows Subsystem for Linux . SlightFuture.com. Chris Hoffmann (2016-04-14). Windows 10's Bash shell can run graphical Linux applications with this trick . PCWorld. Rob Williams (2016-04-12). Windows 10’s Bash Fling Produces Linux GUI App Offspring For Windows Desktop . HotHardware. https://askubuntu.com/a/754951/43344
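The usual recipe is along these lines (a sketch, assuming you have installed a Win32 X server such as VcXsrv or Xming and it is listening on display 0), run inside the WSL bash session:

export DISPLAY=:0
sudo apt-get install firefox
firefox &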
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/330581", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17056/" ] }
330,636
I am using a tool ( openocd ) which is printing a lot of garbage, then a basic progress bar slowly printing simple dots, then again some garbage. I would like to filter this output with grep so only the line with progress bar is shown, and in real time (i.e. each dot outputted by openocd is immediately printed in the terminal): openocd <args> |& grep '^\.' The problem is that grep is line buffered (at best) so the progress bar will not be shown until it is finished. How can I do with grep , or is there any standard alternative to achieve this?If there is a way through openocd configuration, this would be useful though I would prefer a more general solution.
This is a kind of hacky/unusual answer, the fact being that this is most likely possible in a not very clean way. grep itself seems to only print output when it encounters a newline character; your progress bar likely does not introduce a newline character when it updates, hence your issue. strace is a tool used to view the system calls that a command is calling; this includes things like reading and writing things to memory/storage, as well as things like opening/closing file descriptors. With strace you can view what a process is accessing; in the case of your pipe, stdout is being passed to grep, so with strace you can view the text that's being fed to grep. strace will regularly be sent the output coming from the piped command, and you can sniff that output and display it. I was testing with rsync --progress, which seems to run into a similar scenario. I used grep on the ##% because that's what rsync uses to show progress.

rsync --progress file1 file2 | strace -e trace=read grep "[0-9]*%"

If you run this command you'll find that strace doesn't have nice output, but that when I used it strace caught a couple of reads from the rsync that grep would not normally write, showing reads for 0%, 21%, 45%, 68%, 91% and 100%, which seemed to update about every second (probably based on how often rsync updates the progress). So with that you can grep the strace output, which is not very nice, by calling the same grep again.

rsync --progress file1 file2 | strace -e trace=read grep "[0-9]*%" 2>&1 > /dev/null | grep -o "[0-9]*%"

The 2>&1 is important because strace prints on stderr. The > /dev/null redirects stdout to /dev/null to prevent the output of the first grep being reported. The end result of this was the following output:

0%
21%
45%
68%
91%
100%

You'll have to swap out the grep, but it seems like it'll do the trick. It's not pretty, but it functions, and works around the restrictions of grep. Seems like a grep -f that works like tail -f would be handy (I know grep -f is already in use). The first grep is mostly just to filter down the text that strace will be reading, since only the matching lines will be listed in strace's read calls, but you also need something for the text to be moving through so strace can watch it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/330636", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48086/" ] }
330,660
This script does not echo "after":

#!/bin/bash -e
echo "before"
echo "anything" | grep e # it would if I searched for 'y' instead
echo "after"
exit

It would if I removed the -e option on the shebang line, but I wish to keep it so my script stops if there is an error. I do not consider grep finding no match an error. How may I prevent it from exiting so abruptly?
echo "anything" | { grep e || true; } Explanation: $ echo "anything" | grep e### error$ echo $?1$ echo "anything" | { grep e || true; }### no error$ echo $?0### DopeGhoti's "no-op" version### (Potentially avoids spawning a process, if `true` is not a builtin):$ echo "anything" | { grep e || :; }### no error$ echo $?0 The "||" means "or". If the first part of the command "fails" (meaning "grep e" returns a non-zero exit code) then the part after the "||" is executed, succeeds and returns zero as the exit code ( true always returns zero).
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/330660", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87656/" ] }
330,677
Is it possible to make a bash script that starts tmux and split the screen horizontally and runs watch -n1 tail -n5 file_n in each ? Basically I'm starting a script multiple times and write its progress into different files that I'd like to monitor. Would be nice if I could run that from one script as opposed to manually open 10 files by myself. I never used tmux btw that's why I'm asking this.
Try this. It first establishes a detached tmux session, then opens your windows with tail commands, then sets the layout of the windows, then attaches to the session.

for f in `seq 1 10`; do
    if [[ $f -eq 1 ]]; then
        tmux new-session -d -s my_session_name "watch -n1 tail -n5 file_${f}"
    else
        tmux split-window -d -t my_session_name:0 -p20 -v "watch -n1 tail -n5 file_${f}"
    fi
done
tmux select-layout -t my_session_name:0 even-vertical
tmux attach-session -t my_session_name

If you want to have multiple instances of this run, you need to change all the occurrences of my_session_name to be something unique for each session. Also, your title mentions 5 windows but the body of your post mentions 10 files. The code as-is will open 10 files in 10 windows. Change the seq 1 10 part for however many windows/files you actually want.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/330677", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/149443/" ] }
330,689
Is it possible to have variable which picks a random number from three pre-decided numbers? Sample: var= 10 or 100 or 1000
Use an array to hold the values and choose among them using the built-in variable $RANDOM . For example,

x[0]=10    # One decade
x[1]=100   # One century
x[2]=1000  # One millennium
for ((i=1; i < 20; ++i)); do echo -n " ${x[$RANDOM%3]}"; done; echo
1000 10 10 10 10 100 10 100 100 10 10 100 100 100 10 1000 1000 1000 10

The quality of randomness won't be the best possible (read bytes from /dev/urandom for that), but it should be more than good enough for a script. Note 1: As people have observed in the comments, instead of initializing the array elements individually one can of course use an array literal: x=(10 100 1000) . Note 2: Instead of hard-coding the number of elements in the array, a random element can be extracted by ${x[$RANDOM%${#x[@]}]} .
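If GNU coreutils is available, shuf gives a one-liner that avoids the array entirely; this sketch picks one of the three candidates at random:

var=$(shuf -n1 -e 10 100 1000)
echo "$var"

shuf -e treats its arguments as input lines, and -n1 prints exactly one of them, chosen at random.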
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/330689", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205710/" ] }
330,690
I'm working on a script that runs a command as sudo and echoes a line of text ONLY if my sudo privileges have timed out, so only if running a command with sudo would require my user (not root) to type its password again. How do I verify that? Mind that $(id -u) even when running as sudo will return my current user id, so that can't be checked against 0... I need a method that would check this quietly.
Use the option -n to check whether you still have privileges; from man sudo :

-n , --non-interactive
    Avoid prompting the user for input of any kind. If a password is required for the command to run, sudo will display an error message and exit.

For example,

sudo -n true 2>/dev/null && echo Privileges active || echo Privileges inactive

Be aware that it is possible for the privileges to expire between checking with sudo -n true and actually using them. You may want to try directly with sudo -n command... and in case of failure display a message and possibly retry running sudo interactively. Edit: See also ruakh's comment below.
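Putting that suggestion into a small sketch (somecommand is a placeholder for whatever you actually run):

if sudo -n true 2>/dev/null; then
    echo "sudo privileges are still cached"
else
    echo "sudo privileges have timed out; a password will be required" >&2
fi
sudo somecommand    # may still prompt if the cache expired between the check and this call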
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/330690", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183485/" ] }
330,742
I have an external hard drive which is encrypted via LUKS. It contains an ext4 fs. I just got an error from rsync for a file which is located on this drive:

rsync: readlink_stat("/home/some/dir/items.json") failed: Structure needs cleaning (117)

If I try to delete the file I get the same error:

rm /home/some/dir/items.json
rm: cannot remove ‘//home/some/dir/items.json’: Structure needs cleaning

Does anyone know what I can do to remove the file and fix related issues with the drive/fs (if there are any)?
That is strongly indicative of file-system corruption. You should unmount, make a sector-level backup of your disk, and then run e2fsck to see what is up. If there is major corruption, you may later be happy that you did a sector-level backup before letting e2fsck tamper with the data.
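A sketch of that sequence; the device and backup paths here are assumptions to adapt, and since the drive is LUKS-encrypted, both steps should target the unlocked mapper device rather than the raw partition:

umount /mnt/external                                            # unmount the ext4 filesystem first
dd if=/dev/mapper/luks-external of=/backup/external.img bs=4M   # sector-level backup of the unlocked device
e2fsck -f /dev/mapper/luks-external                             # then let e2fsck check/repair the filesystem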
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/330742", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148560/" ] }
330,785
Sometimes when I log on to a system via SSH (for example to the production server), I have privileges that let me install some software, but to do that I need to know which system I am dealing with. I would like to be able to check which system is installed there. Is there a way from the CLI to determine what distribution of Unix/Linux is running?
Try:

uname -a

It will give you output such as:

Linux debianhost 3.16.0-4-686-pae #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) i686 GNU/Linux

You can also use:

cat /etc/*release*
PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
NAME="Debian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=debian
HOME_URL="http://www.debian.org/"
SUPPORT_URL="http://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
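On systemd-era distributions, /etc/os-release is the standardized file behind that glob, and since it is plain shell-variable syntax it can be sourced directly; a small sketch:

. /etc/os-release
echo "$NAME $VERSION_ID"    # e.g. "Debian GNU/Linux 8"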
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/330785", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/185096/" ] }
330,791
On a FreeBSD system I run my bash script with an attempt to redirect output:

% sudo bash some_file.sh arg1 >/dev/null 2>&1
Ambiguous output redirect.
% sudo bash some_file.sh arg1 &> /dev/null
Invalid null command.
Those error messages come from csh/tcsh, the default interactive shell on FreeBSD (note the % prompt), not from bash. The redirections 2>&1 and &> are Bourne/bash syntax that csh does not understand, hence "Ambiguous output redirect." and "Invalid null command.". In csh, redirect stdout and stderr together with >&:

sudo bash some_file.sh arg1 >& /dev/null

Alternatively, run the command line under a Bourne-style shell so the original syntax works unchanged:

sudo sh -c 'bash some_file.sh arg1 >/dev/null 2>&1'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/330791", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205812/" ] }
330,876
The Bash command cd - prints the previously used directory and changes to it. On the other hand, the Bash command cd ~- directly changes to the previously used directory, without echoing anything. Is that the only difference? What is the use case for each of the commands?
There are two things at play here. First, the - alone is expanded to your previous directory. This is explained in the cd section of man bash (emphasis mine):

An argument of - is converted to $OLDPWD before the directory change is attempted. If a non-empty directory name from CDPATH is used, or if - is the first argument, and the directory change is successful, the absolute pathname of the new working directory is written to the standard output. The return value is true if the directory was successfully changed; false otherwise.

So, a simple cd - will move you back to your previous directory and print the directory's name out. The other command is documented in the "Tilde Expansion" section:

If the tilde-prefix is a ~+ , the value of the shell variable PWD replaces the tilde-prefix. If the tilde-prefix is a ~- , the value of the shell variable OLDPWD, if it is set, is substituted. If the characters following the tilde in the tilde-prefix consist of a number N, optionally prefixed by a + or a - , the tilde-prefix is replaced with the corresponding element from the directory stack, as it would be displayed by the dirs builtin invoked with the tilde-prefix as an argument. If the characters following the tilde in the tilde-prefix consist of a number without a leading + or - , + is assumed.

This might be easier to understand with an example:

$ pwd
/home/terdon
$ cd ~/foo
$ pwd
/home/terdon/foo
$ cd /etc
$ pwd
/etc
$ echo ~     ## prints $HOME
/home/terdon
$ echo ~+    ## prints $PWD
/etc
$ echo ~-    ## prints $OLDPWD
/home/terdon/foo

So, in general, the - means "the previous directory". That's why cd - by itself will move you back to wherever you were. The main difference is that cd - is specific to the cd builtin. If you try to echo - it will just print a - . The ~- is part of the tilde expansion functionality and behaves similarly to a variable. That's why you can echo ~- and get something meaningful. You can also use it in cd ~- but you could just as well use it in any other command. For example cp ~-/* . which would be equivalent to cp "$OLDPWD"/* .
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/330876", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34039/" ] }
330,919
I want to count how many times a certain sequence of bytes happens inside a file that I have. For example, I want to find out how many times the number \0xdeadbeef occurs inside an executable file. Right now I am doing that using grep:

#/usr/bin/fish
grep -c \Xef\Xbe\Xad\Xde my_executable_file

(The bytes are written in reverse order because my CPU is little-endian.) However, I have two problems with my approach:

1. Those \Xnn escape sequences only work in the fish shell.
2. grep is actually counting the number of lines that contain my magic number. If the pattern occurs twice in the same line it will only count once.

Is there a way to fix these problems? How can I make this one-liner run in the Bash shell and accurately count the number of times the pattern occurs inside the file?
This is the one-liner solution requested (for recent shells that have "process substitution"):

grep -o "ef be ad de" <(hexdump -v -e '/1 "%02x "' infile.bin) | wc -l

If no "process substitution" <(…) is available, just use grep as a filter:

hexdump -v -e '/1 "%02x "' infile.bin | grep -o "ef be ad de" | wc -l

Below is the detailed description of each part of the solution.

Byte values from hex numbers: Your first problem is easy to resolve: those \Xnn escape sequences only work in the fish shell. Change the upper X to a lower one x and use printf (for most shells):

$ printf -- '\xef\xbe\xad\xde'

Or use:

$ /usr/bin/printf -- '\xef\xbe\xad\xde'

for those shells that choose not to implement the '\x' representation. Of course, translating hex to octal will work on (almost) any shell:

$ "$sh" -c 'printf '\''%b'\'' "$(printf '\''\\0%o'\'' $((0xef)) $((0xbe)) $((0xad)) $((0xde)) )"'

Where "$sh" is any (reasonable) shell. But it is quite difficult to keep it correctly quoted.

Binary files. The most robust solution is to transform the file and the byte sequence (both) to some encoding that has no issues with odd character values like (new line) 0x0A or (null byte) 0x00. Both are quite difficult to manage correctly with tools designed and adapted to process "text files". A transformation like base64 may seem a valid one, but it presents the issue that every input byte may have up to three output representations, depending on whether it is the first, second or third byte of the mod 24 (bits) position.

$ echo "abc" | base64
YWJjCg==
$ echo "-abc" | base64
LWFiYwo=
$ echo "--abc" | base64
LS1hYmMK
$ echo "---abc" | base64      # Note that YWJj repeats.
LS0tYWJjCg==

Hex transform. That's why the most robust transformation should be one that starts on each byte boundary, like the simple HEX representation. We can get a file with the hex representation of the file with any of these tools:

$ od -vAn -tx1 infile.bin | tr -d '\n' > infile.hex
$ hexdump -v -e '/1 "%02x "' infile.bin > infile.hex
$ xxd -c1 -p infile.bin | tr '\n' ' ' > infile.hex

The byte sequence to search is already in hex in this case:

$ var="ef be ad de"

But it could also be transformed. An example of a round trip hex-bin-hex follows:

$ echo "ef be ad de" | xxd -p -r | od -vAn -tx1
ef be ad de

The search string may be set from the binary representation. Any of the three options presented above (od, hexdump, or xxd) are equivalent. Just make sure to include the spaces to ensure the match is on byte boundaries (no nibble shift allowed):

$ a="$(printf "\xef\xbe\xad\xde" | hexdump -v -e '/1 "%02x "')"
$ echo "$a"
ef be ad de

If the binary file looks like this:

$ cat infile.bin | xxd
00000000: 5468 6973 2069 7320 efbe adde 2061 2074  This is .... a t
00000010: 6573 7420 0aef bead de0a 6f66 2069 6e70  est ......of inp
00000020: 7574 200a dead beef 0a66 726f 6d20 6120  ut ......from a
00000030: 6269 0a6e 6172 7920 6669 6c65 2e0a 3131  bi.nary file..11
00000040: 3232 3131 3232 3131 3232 3131 3232 3131  2211221122112211
00000050: 3232 3131 3232 3131 3232 3131 3232 3131  2211221122112211
00000060: 3232 0a

Then, a simple grep search will give the list of matched sequences:

$ grep -o "$a" infile.hex | wc -l
2

One Line?

It all may be performed in one line:

$ grep -o "ef be ad de" <(xxd -c 1 -p infile.bin | tr '\n' ' ') | wc -l

For example, searching for 11221122 in the same file will need these two steps:

$ a="$(printf '11221122' | hexdump -v -e '/1 "%02x "')"
$ grep -o "$a" <(xxd -c1 -p infile.bin | tr '\n' ' ') | wc -l
4

To "see" the matches:

$ grep -o "$a" <(xxd -c1 -p infile.bin | tr '\n' ' ')
3131323231313232
3131323231313232
3131323231313232
3131323231313232
$ grep "$a" <(xxd -c1 -p infile.bin | tr '\n' ' ')
… 0a 3131323231313232313132323131323231313232313132323131323231313232 313132320a

Buffering

There is a concern that grep will buffer the whole file, and, if the file is big, create a heavy load for the computer. For that, we may use an unbuffered sed solution:

a='ef be ad de'
hexdump -v -e '/1 "%02x "' infile.bin |
    sed -ue 's/\('"$a"'\)/\n\1\n/g' |
    sed -n '/^'"$a"'$/p' |
    wc -l

The first sed is unbuffered ( -u ) and is used only to inject two newlines on the stream per matching string. The second sed will only print the (short) matching lines. The wc -l will count the matching lines. This will buffer only some short lines: the matching string(s) in the second sed. This should be quite low in resources used. Or, somewhat more complex to understand, but the same idea in one sed:

a='ef be ad de'
hexdump -v -e '/1 "%02x "' infile.bin | sed -u '/\n/P;//!s/'"$a"'/\n&\n/;D' | wc -l
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/330919", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23960/" ] }
330,972
After setting up LAMP (on Debian) and then looking at /var/www/html's permissions, I was surprised that it is only writeable by root ( drwxr-xr-x 1 root root ). Presumably PHP scripts can create files in /var/www/html, but surely a PHP script (or its interpreter) doesn't run in the name of root? Can anyone help me understand whatever I am misunderstanding? EDIT: I installed PHP with apt-get install php5-common libapache2-mod-php5 php5-mysql php5-cli
PHP scripts will run as either:

- the user running Apache, as determined by the User directive in your Apache configuration (usually apache or nobody ), if you are using mod_php
- the user running PHP-FPM, if you are using php-fpm

So the user a PHP script will execute as will vary. So it's up to you to set the owner and group of /var/www/html (or wherever your DocumentRoot is) accordingly. Furthermore, you may not wish for your PHP application to be able to write (or overwrite) files in your DocumentRoot at all, as this could allow a visitor to a compromised or insecure PHP web application to gain remote code execution privileges. So it's your responsibility to decide whether or not your PHP application is trustworthy enough to allow it to write to files that Apache can serve over the web or even execute. PHP will almost never (and should never!) be run as root for similar reasons to those mentioned above.
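A quick way to check which user your PHP actually executes as: this sketch drops a hypothetical test file into the DocumentRoot (needs root, and should be removed afterwards for the reasons given above):

cat > /var/www/html/whoami.php <<'EOF'
<?php echo exec('whoami');
EOF

Requesting whoami.php in a browser should then print www-data, the user Debian's Apache (and thus mod_php) runs as by default. The same is visible from the process list:

ps -C apache2 -o user=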
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/330972", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205934/" ] }
330,983
I just installed Linux Mint 18 with KDE. While configuring the correct driver for wifi I came across the option to enable the "Processor microcode firmware for Intel CPUs". Right now the device (which is described as unknown) is marked as "Do not use". Should I enable this option? If I understood correctly microcode is supposed to enable detailed changes in the CPU, which I don't intend to do. On the other hand I read it can help better the performance from the CPU. What would you recommend? Thanks!
From the package’s documentation : Intel® 64 and IA-32 processors (x86_64 and i686 processors) are capable offield-upgrading their control program (microcode) as well as parametersfor other on-chip subsystems (power management, interconnects, etc).These microcode updates correct processor errata, and are important forsafe, stable and correct system operation. While most of the microcode updates fix problems that happen extremelyrarely, they also fix high-profile, high-hitting issues. There are enoughmicrocode updates fixing processor errata that would cause system lockup,memory corruption, or unpredictable system behavior, to warrant takingfirmware updates and microcode updates seriously. So yes, you should enable this option. It won’t improve your CPU’s performance, but it will help fix bugs (including security issues such as Spectre/Meltdown-style information leaks, or problems with features such as TSX on Haswell and Broadwell CPUs, where it can cause lockups) and it might enable new features (such as Software Guard Extensions on some Skylake CPUs). Note also the caveats listed in the same documentation, in particular Please keep your UEFI/BIOS up-to-date. Assuming your motherboard vendordoes a good job of updating system firmware components, an up-to-dateversion of the firmware will negate most of the caveats listed here. This is particularly true for CPUs released in the last decade, starting with Haswell. Nowadays keeping your UEFI/BIOS up-to-date is a good idea for security reasons too. It’s also worth keeping a copy of the recovery procedure given in the documentation, in case a microcode update causes issues when booting your system.
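To check what is currently loaded on a running Linux system (both commands are standard; nothing assumed beyond an Intel CPU):

grep microcode /proc/cpuinfo    # currently active microcode revision, per CPU
dmesg | grep microcode          # early-boot messages showing whether an update was applied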
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/330983", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205941/" ] }
331,006
I would like to determine which RPM packages on my Fedora 25 system depend on the libraries libLLVM-3.8.so and libclang-3.8.so . How do I?
You can use dnf repoquery to find this. For example:

dnf repoquery --whatrequires libLLVM-3.8.so

however, on an x86_64 system, this might not do quite what you want; to specify the x86_64 version of a library (which probably is what you want), tack on ()(64bit) , like this:

dnf repoquery --whatrequires 'libLLVM-3.8.so()(64bit)'

(With ' now necessary to keep the parentheses from confusing bash.) By default, this lists both available and installed packages; to restrict to the ones that are currently installed, add the --installed flag, like so:

dnf repoquery --whatrequires 'libLLVM-3.8.so()(64bit)' --installed

which on my system returns:

llvm-libs-0:3.8.0-1.fc25.x86_64
mesa-dri-drivers-0:13.0.2-2.fc25.x86_64
mesa-libxatracker-0:13.0.2-2.fc25.x86_64

If you want just package names, add --queryformat '%{name}\n' . (Use dnf repoquery --querytags to get other formatting options.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/331006", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27613/" ] }
331,109
I'm wondering if there's any way to get telnet to send only a \n , not a \r\n . For example, if one process is listening on a port like this, to print the bytes of any traffic received:

nc -l 1234 | xxd -c 1

Connecting to it from netcat with nc localhost 1234 , and typing "hi[enter]":

0000000: 68  h
0000001: 69  i
0000002: 0a  .

Connecting to it from telnet with telnet localhost 1234 , and typing "hi[enter]":

0000000: 68  h
0000001: 69  i
0000002: 0d  .
0000003: 0a  .

Telnet is sending 0x0d0a instead of 0x0a for the newline. I understand that this is a CRLF as opposed to LF. It also sends the CRLF if I use ^M or ^J . I thought I had found a solution that directly addresses this problem, by using toggle crlf , but even with this option set, Telnet is always sending the \r\n . I've also tried this on various telnet clients, so I'm guessing I'm misunderstanding what the toggling is supposed to do. Any way to send just a \n through telnet, with enter or otherwise?
You can negotiate binary mode . Once in this mode you cannot leave it. Negotiation means the telnet client will send a special byte sequence to the server, which you will have to ignore if you are not implementing the protocol. Subsequent data is sent unchanged, in line mode. Client:

$ telnet localhost 1234
Connected to localhost.
Escape character is '^]'.
^]
telnet> set binary
Negotiating binary mode with remote host.
hi
^]
telnet> quit

and server:

$ nc -l 1234 | xxd -c 1
00000000: ff  .
00000001: fd  .
00000002: 00  .
00000003: ff  .
00000004: fb  .
00000005: 00  .
00000006: 68  h
00000007: 69  i
00000008: 0a  .

Your telnet client may have an option to start off in binary mode, or you can put an entry in ~/.telnetrc

localhost set binary

You can apply the binary mode independently in each direction, so you might prefer set outbinary .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/331109", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206033/" ] }
331,114
Is there a way to create a hotspot that doesn't have a password? The "hotspot command" of nmcli : wifi hotspot [ifname ifname] [con-name name] [ssid SSID] [band {a | bg}] [channel channel] [password password] does not allow to have a empty password: it gives Error: Invalid 'password': '' is not valid WPA PSK. I guess there is a way to edit the configuration files used by nmcli to generate a hotspot to turn it into a password-free hotspot. If nmcli is not able to do this, what other tools would do it? Thank you
It is not possible to create an open hotspot through the wifi hotspot command, because nmcli will generate a password for you (WPA or WEP); the --show-secrets option can be used to print that password. The easy way to create an open wifi hotspot is using the create_ap command. To install it run:

git clone https://github.com/oblique/create_ap
cd create_ap
make install

Start the service:

systemctl start create_ap

To create an open access point run:

create_ap wlan0 eth0 MyAccessPoint

or if you are connected through Wifi:

create_ap wlan0 wlan0 MyAccessPoint
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/331114", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/189711/" ] }
331,129
Unlike other systems, Fedora 25 workstation doesn't use stable IPv6 addresses, by default. For example, with CentOS 7 or Fedora 23, a stable IPv6 is automatically configured (in an IPv6 enabled network where a IPv6 router is present) - i.e. one that is derived from the MAC-address. That IPv6 address then can be used in an DNS AAAA-record. In contrast to that, the IPv6 address of a Fedora 25 workstation system doesn't have any relation to its MAC address and doesn't seem to be stable. How to configure deterministic and stable IPv6 addresses on Fedora 25?
On Fedora 25 Workstation, NetworkManager (NM) configures all network interfaces, by default. That means also the wired ones. And the NetworkManager doesn't create EUI-64 derived IPv6 addresses. Instead it generates so called 'stable-privacy' ones. Apparently to not disclose the MAC address to each IPv6 destination. This can be changed for a given interface $i via changing the IPV6_ADDR_GEN_MODE key in the /etc/sysconfig/network-scripts/ifcfg-$i configuration file. For example via:

sed -i 's/^IPV6_ADDR_GEN_MODE=stable-privacy/IPV6_ADDR_GEN_MODE=eui64/' \
    /etc/sysconfig/network-scripts/ifcfg-$i

The change is effective after NetworkManager rereads its configuration and after a reconnect:

nmcli con reload
nmcli con down $i
nmcli con up $i

Notes:

- this option isn't exposed via the NM settings GUI
- the interface configuration files under /etc/sysconfig/network-scripts read by NM are Fedora/Redhat specific, but the configuration key is not - i.e. on other distributions NM just reads the interface configurations from different locations/configuration files

Fedora also comes with systemd-networkd which doesn't disable EUI64 generation, by default. Thus, a simpler way to get stable IPv6 addresses under Fedora is just to remove NetworkManager and configure/enable systemd-networkd, instead. Or one can set the interface in question to unmanaged in NetworkManager and then configure it in systemd-networkd. In any case, the networkd config is pretty minimal then, e.g.:

cat /etc/systemd/network/20-wired.network
[Match]
# manage all matching interfaces
#Name=en*
# just manage one:
Name=eno1

[Network]
DHCP=ipv4
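Newer NetworkManager releases also expose this as a connection property, so the same switch can be made without editing the ifcfg file by hand; a sketch (check nmcli con show "$i" first to confirm the property exists on your version):

nmcli connection modify "$i" ipv6.addr-gen-mode eui64
nmcli connection up "$i"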
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/331129", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1131/" ] }
331,208
Suppose I have a folder: cd /home/cpm135/public_html and make a symbolic link ln -s /var/lib/class . Later, I'm in that directory: cd /home/cpm135/public_html/class The pwd is going to tell me I'm in /home/cpm135/public_html/class Is there any way to know that I'm "really" in /var/lib/class ? Thanks
Depending on how your pwd command is configured, it may default to showing the logical working directory (output by pwd -L ) which would show the symlink location, or the physical working directory (output by pwd -P ) which ignores the symlink and shows the "real" directory. For complete information you can do:

file "$(pwd -L)"

Inside a symlink, this will return:

/path/of/symlink: symbolic link to /path/of/real/directory
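readlink -f resolves every symlink in a path, so it gives the same "physical" answer as pwd -P; for example, from inside the symlinked directory:

cd /home/cpm135/public_html/class
pwd -P           # /var/lib/class
readlink -f .    # /var/lib/class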
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/331208", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104388/" ] }
331,211
I run a server that has Zabbix. Recently, I've noticed that it's running out of space. Is there any easy way to increase space without losing any data? CentOS is in a VM, and I've allocated some more space to the VM. I understand that /dev/sda2 is out of space, and I assume that /dev/sda4 is unused space... Simply adding space via lvextend produces an error:

lvextend -L+5G /dev/sda2
"/dev/sda2": Invalid path for Logical Volume.
Run `lvextend --help' for more information.

I assume that /dev/sda4 is the unallocated space that I need to add to /dev/sda2. Am I correct?
lvextend only works on LVM logical volumes, and /dev/sda2 is a plain disk partition, which is why it is rejected as an "Invalid path for Logical Volume". First check whether the system uses LVM at all, e.g. with lsblk (or pvs / vgs / lvs ); logical volumes have paths like /dev/vgname/lvname or /dev/mapper/vgname-lvname, and those are what lvextend expects. Also note that /dev/sda4 would be a fourth partition, not free space; space you add to the VM's virtual disk does not show up as a device at all until it is partitioned, and fdisk -l /dev/sda or lsblk will show you the actual layout. If the filesystem sits directly on /dev/sda2, growing it means growing that partition into the newly allocated space (for example with growpart /dev/sda 2 from cloud-utils, or with parted ) and then growing the filesystem on top of it ( resize2fs /dev/sda2 for ext2/3/4, xfs_growfs for XFS). Take a backup or VM snapshot before resizing partitions.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/331211", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206096/" ] }
331,216
I am a first-time user of Linux, and I have been able to fix all issues I have met until this one. Upon trying to upgrade the kernel version to something above 4.1 from Debian backports, I am met with the following message:

The following packages have unmet dependencies:
 linux-image-4.7.0-0.bpo.1-amd64: Depends: linux-base (>=4.3~) but 3.5 is to be installed
E: Unable to correct problems, you have held broken packages.

Scouring the internet has told me that some users fixed it by doing a clean install from scratch, but I feel like I wouldn't learn anything from it if it is fixable - and I have done 5 clean installs already since yesterday.
The backported kernel depends on a newer linux-base than the one in plain jessie, and apt will not pull dependencies from backports automatically: packages in a backports repository are only installed when you request them explicitly with the target release. So install linux-base from backports together with the kernel:

sudo apt-get install -t jessie-backports linux-base linux-image-4.7.0-0.bpo.1-amd64

(this assumes the jessie-backports line is already in your sources.list, which it must be, since apt can already see the 4.7 image). No clean reinstall is needed for this.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/331216", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206101/" ] }
331,368
How can I find the package (in Debian) that contains a file whose path contains a certain substring? Example: find all packages (installed or not) that contain a file with a path containing " /usr/share/xml/ ". I have installed xsltproc and had no XML catalog for xhtml => it was looking for DTDs over the net, being slow and DDoSing W3C. I knew that the catalog packages should be in /usr/share/xml/, but was unable to find the packages that put files into that directory. The search at https://packages.debian.org looks only for suffixes of package file paths, not substrings.
You can do this locally by installing apt-file :

sudo apt-get install apt-file
sudo apt-file update
apt-file search /usr/share/xml/

(depending on the version of Debian you're using, you may not need the apt-file update step).
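For packages that are already installed, dpkg can answer the same question without extra tooling:

dpkg -S /usr/share/xml/    # lists installed packages owning paths that match this string

dpkg -S only searches installed packages, while apt-file also covers packages that are merely available in the configured repositories.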
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/331368", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206215/" ] }
331,395
I have an ubuntustudio 16.10 64 bit on an IBM Thinkpad E431. I am able to launch an app from a shell script, but the next step involves pressing Ctrl + Shift + F10 . On my notebook I need to press the Fn button before F10 . I use the Xfce desktop. I am unable to simulate this in a shell script. I also disabled my touchpad, but that did not help.
You do not need to.

"On my notebook I need to press the button Fn before F10 ."

That is, however, irrelevant to what X input events you need to simulate. What you have to remember is that the Fn key is never seen on the wire between your keyboard and your computer. It is handled entirely by the microprocessor in the keyboard itself. What comes over the wire when you press the keys with the Fn and F10 engravings is simply the key code for the F10 key, as if you had a full keyboard with a fully-fledged independent F10 key. You have a key that is engraved with F10 and something else. The keyboard microprocessor handles your Fn key as an entirely local modifier key that switches that key between looking like the "something else" key (when Fn is not pressed) on the wire and looking like the F10 key (when Fn is pressed) on the wire. In fact, laptop and suchlike keyboards usually have two such local modifiers. The other is the state of the NumLock LED (sic), making every key have four different ways in which it can appear on the wire to your computer. But as seen by your computer, at the other end of the wire, all of this is invisible. It sees a full keyboard with a real, independent F10 key. That is also what the X applications see in X input events. So that is all that you need to simulate. Just simulate X events that indicate that the F10 key has been pressed, with the Level2 ⇧ and Control ⎈ modifiers. With xdotool , as in flowtron's answer, that's just:

xdotool key ctrl+shift+F10

Further reading:

- Ubuntu 16.04 doesn't recognize Fn key
- Jonathan de Boyne Pollard (2020). The "Fn" key is local. Frequently Given Answers.
- "KEYBOARD MATRIX DESIGN". SK5126 FlexMatrix Keyboard Controller data sheet. Sprintek. 2015-02-20.
- "Function Key Usage". HT82K629A Windows 2000 USB+PS/2 Keyboard Encoder data sheet. Holtek. 2004-09-15.
331,419
I found information that nvram is used for BIOS flashing/backup and that it contains some BIOS-related data. Would cat /dev/random > /dev/nvram permanently brick the computer? I'm quite tempted to type this command, but somehow I feel it's not going to end well for my machine, so I guess I'd like to know how dangerous playing with this device is.
I'm curious as to exactly why you'd want to run such a command if you think it might damage your computer... /dev/nvram provides access to the non-volatile memory in the real-time clock on PCs and Ataris. On PCs this is usually known as CMOS memory and stores the BIOS configuration options; you can see the information stored there by looking at /proc/driver/nvram :

Checksum status: valid
# floppies     : 4
Floppy 0 type  : none
Floppy 1 type  : none
HD 0 type      : ff
HD 1 type      : ff
HD type 48 data: 65471/255/255 C/H/S, precomp 65535, lz 65279
HD type 49 data: 3198/255/0 C/H/S, precomp 0, lz 0
DOS base memory: 630 kB
Extended memory: 65535 kB (configured), 65535 kB (tested)
Gfx adapter    : monochrome
FPU            : installed

All this is handled by the nvram kernel module, which takes care of checksums etc. Most of the information here is only present for historical reasons, and reflects the limitations of old operating systems: the computer I ran this on doesn't have four floppy drives, the hard drive information is incorrect, as is the memory information and display adapter information. I haven't tried writing random values to the device, but I suspect it wouldn't brick your system: at worst, you should be able to recover by clearing the CMOS (there's usually a button or jumper to do that on your motherboard). But I wouldn't try it! The only useful features in the CMOS memory nowadays are RTC-related. In particular, nvram-wakeup can program the CMOS alarm to switch your computer on at a specific time. (So that would be one reason to write to /dev/nvram .)
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/331419", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78925/" ] }
331,477
I have a file that looks like this:

Id      Chr   Start    End
Prom_1  chr1  3978952  3978953
Prom_1  chr1  3979165  3979166
Prom_1  chr1  3979192  3979193
Prom_2  chr1  4379047  4379048
Prom_2  chr1  4379091  4379092
Prom_2  chr1  4379345  4379346
Prom_2  chr1  4379621  4379622
Prom_3  chr1  5184469  5184470
Prom_3  chr1  5184495  5184496

What I would like to extract is the start and the end of the same Id, like this:

Id      Chr   Start    End
Prom_1  chr1  3978952  3979193
Prom_2  chr1  4379047  4379622
Prom_3  chr1  5184469  5184496

As you have noticed, the number of repetitions of each Id between the start and the end is not constant. Any idea would be very appreciated.
With GNU datamash :

datamash -H -W -g 1,2 min 3 max 4 <input
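If datamash isn't available, plain awk can do the same aggregation; a sketch assuming whitespace-separated columns with a header line (the input filename is a placeholder):

awk 'NR == 1 { print; next }                       # pass the header through
     !($1 in min) { order[++n] = $1; chr[$1] = $2; min[$1] = $3; max[$1] = $4; next }
     { if ($3 < min[$1]) min[$1] = $3; if ($4 > max[$1]) max[$1] = $4 }
     END { for (i = 1; i <= n; i++) { id = order[i]; print id, chr[id], min[id], max[id] } }' input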
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/331477", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136099/" ] }
331,488
This is what I find in man X : The phrase "display" is usually used to refer to a collection of monitors that share a common set of input devices (keyboard, mouse, tablet, etc.). Most workstations tend to only have one display. Larger, multi-user systems, however, frequently have several displays so that more than one person can be doing graphics work at once. To avoid confusion, each display on a machine is assigned a display number (beginning at 0) when the X server for that display is started. The display number must always be given in a display name. My question is: Do we need to start multiple X servers if we want to use multiple displays, or all those displays can be handled by a single X server? Is it possible to share keyboards, mice and monitors across different displays? Edit. The display here refers to the concept defined by the X window system, not a single monitor . I know there are Xinerama and XRandR technologies that support multi-head configurations.
Quoting X(7) : From the user's perspective, every X server has a display name of the form: hostname:displaynumber.screennumber Each X server has one display (which may include multiple monitors, or even no monitors at all). Using multiple displays (in the X sense) requires multiple X servers; that's how you get multiple seats too. As far as sharing goes, I think each X server expects to "own" the devices it's using at any given time, so you can't have input from a single keyboard going to multiple X servers simultaneously, or the output of multiple X servers combined on a single monitor. X servers can hand hardware off, which allows you to run X servers on multiple VTs and switch between them (this is how simultaneous logins are handled e.g. in GNOME). You can also nest some X servers ( Xephyr , xpra ...), so input goes to your main current X server, and gets passed on to the nested X server in a window; and the output of the nested X server is displayed in a window by the main X server. On Linux, you could write a multiplexing input driver in the input layer to share input devices, but that's a different layer altogether than the X server.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/331488", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7157/" ] }
331,491
When I start iptraf it can't detect any of my network interfaces. I have a suspicion that this is tied to Ubuntu's new naming scheme for network interfaces (mine is called wlp112s0 instead of wlan0 ). If I try to force it with this:

sudo iptraf -i wlp112s0

I get this message in a red textbox:

Specified interface not supported
As suggested by @Thomas, try iptraf-ng .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/331491", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9903/" ] }
331,521
So the university IT security team and I have been going around and around on this with no breaks... anyone have any thoughts on this: I recently set up a small file server for my lab running Debian 8.6 on a dedicated computer (Intel Avoton C2550 processor -- happy to provide more hardware info if needed, but I think unnecessary). Debian installed without any problems, and at the time I also installed Samba, NTP, ZFS, and python. Things seemed to be working fine, so I let it sit and run in the corner of the lab for a few weeks. About two weeks ago, I received an email from the IT team saying that my server had been "compromised" and was being used in an NTP amplification/DDoS attack (NTP Amplification Attacks Using CVE-2013-5211 as described in https://www.us-cert.gov/ncas/alerts/TA14-013A ). The sign they pointed to was a lot of NTPv2 traffic on port 123. Weirdly, the IP address that they identified this coming from ( *.*.*.233 ) was different from the IP address my server was configured for and reported via ifconfig ( *.*.*.77 ). Nevertheless, some basic troubleshooting revealed that my computer was indeed generating this traffic on port 123 (as revealed by tcpdump). Here is where the bizarreness began. I first ran through the "fixes" recommended for CVE-2013-5211 (both updating NTP past version 4.2.7 as well as disabling monlist functionality). Neither stemmed the traffic flow. I then tried blocking the UDP 123 port via iptables:

$ /sbin/iptables -A INPUT -o eth0 -p udp --destination-port 123 -j DROP
$ /sbin/iptables -A OUTPUT -o eth0 -p udp --destination-port 123 -j DROP

but that too had no effect on the traffic. I finally tried purging NTP from the system, but that had no effect on the traffic either. As of this afternoon, nmap was still reporting:

Starting Nmap 5.51 ( http://nmap.org ) at 2016-12-19 16:15 EST
Nmap scan report for *.233
Host is up (0.0010s latency).
PORT    STATE SERVICE
123/udp open  ntp
| ntp-monlist:
|   Public Servers (2)
|       50.116.52.97    132.163.4.101
|   Public Clients (39)
|       54.90.159.15    185.35.62.119   185.35.62.233   185.35.63.86
|       54.197.89.98    185.35.62.142   185.35.62.250   185.35.63.108
|       128.197.24.176  185.35.62.144   185.35.62.251   185.35.63.128
|       180.97.106.37   185.35.62.152   185.35.63.15    185.35.63.145
|       185.35.62.27    185.35.62.159   185.35.63.27    185.35.63.146
|       185.35.62.52    185.35.62.176   185.35.63.30    185.35.63.167
|       185.35.62.65    185.35.62.186   185.35.63.34    185.35.63.180
|       185.35.62.97    185.35.62.194   185.35.63.38    185.35.63.183
|       185.35.62.106   185.35.62.209   185.35.63.39    185.35.63.185
|_      185.35.62.117   185.35.62.212   185.35.63.43

Which is all very weird since NTP has been purged from the system for weeks now. After hitting a dead end on this path, I started thinking about the whole IP-address mismatch issue. My computer seemed to be sitting on both the *.233 and *.77 IPs (as confirmed by successfully pinging both with the ethernet cable attached, and both being unavailable with the cable unplugged), but *.233 never shows up in ifconfig:

Link encap:Ethernet  HWaddr d0:XX:XX:51:78:XX
inet addr:*.77  Bcast:*.255  Mask:255.255.255.0
inet6 addr: X::X:X:X:787a/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:23023571 errors:0 dropped:1362 overruns:0 frame:0
TX packets:364849 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:7441732389 (6.9 GiB)  TX bytes:44699444 (42.6 MiB)
Memory:df300000-df37ffff

There is no reference to *.233 in /etc/network/interfaces, so I don't see where this IP assignment is coming from.

So, I have two likely related questions I'm hoping someone can help me with:
1) How can I stop this NTP traffic from spewing from my server, to get IT off my back?
2) What is up with this second IP address my server is sitting on, and how can I remove it?
Thanks, folks :)

UPDATE: As requested:

$ iptables -L -v -n
Chain INPUT (policy ACCEPT 57 packets, 6540 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 27 packets, 2076 bytes)
 pkts bytes target     prot opt in     out     source               destination

And

$ ip addr ls
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether d0:50:99:51:78:7a brd ff:ff:ff:ff:ff:ff
    inet *.77/24 brd *.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet *.167/24 brd *.255 scope global secondary dynamic eth0
       valid_lft 24612sec preferred_lft 24612sec
    inet6 X::X:X:X:787a/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether d0:50:99:51:78:7b brd ff:ff:ff:ff:ff:ff

UPDATE 2: I failed to mention that in addition to the IP address not matching up, the MAC ID also did not match. This really made me think twice about whether the traffic was indeed coming from my machine. However: (1) unplugging my server from the network made the traffic disappear; (2) when moved to a different network port, the traffic followed; and (3) tcpdump port 123 showed the aberrant traffic:

13:24:33.329514 IP cumm024-0701-dhcp-233.bu.edu.ntp > 183.61.254.77.44300: NTPv2, Reserved, length 440
13:24:33.329666 IP cumm024-0701-dhcp-233.bu.edu.ntp > 183.61.254.77.44300: NTPv2, Reserved, length 440
13:24:33.329777 IP cumm024-0701-dhcp-233.bu.edu.ntp > 183.61.254.77.44300: NTPv2, Reserved, length 296

UPDATE 3:

$ ss -uapn 'sport = :123'
State      Recv-Q Send-Q      Local Address:Port        Peer Address:Port

(i.e., nothing)

$ sudo cat /proc/net/dev
Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
    lo:      327357     5455    0    0    0     0          0         0      327357     5455    0    0    0     0       0          0
  eth1:           0        0    0    0    0     0          0         0           0        0    0    0    0     0       0          0
  eth0: 13642399917 36270491    0 6522    0     0          0   2721337    45098276   368537    0    0    0     0       0          0

UPDATE 4: Those packets were typical of a few days ago. Today (but yes, still very high):

20:19:37.011762 IP cumm024-0701-dhcp-233.bu.edu.ntp > 103.56.63.147.26656: NTPv2, Reserved, length 152
20:19:37.011900 IP cumm024-0701-dhcp-233.bu.edu.ntp > 202.83.122.78.58066: NTPv2, Reserved, length 152
20:19:37.012036 IP cumm024-0701-dhcp-233.bu.edu.ntp > 103.56.63.147.17665: NTPv2, Reserved, length 152
20:19:37.014539 IP cumm024-0701-dhcp-233.bu.edu.ntp > 202.83.122.78.27945: NTPv2, Reserved, length 152
20:19:37.015482 IP cumm024-0701-dhcp-233.bu.edu.ntp > 202.83.122.78.42426: NTPv2, Reserved, length 152
20:19:37.015644 IP cumm024-0701-dhcp-233.bu.edu.ntp > 103.56.63.147.16086: NTPv2, Reserved, length 152

$ sudo ss -uapn '( sport = :42426 or dport = :42426 )'
State      Recv-Q Send-Q      Local Address:Port        Peer Address:Port

Yes, I can ping the *.233 IP:

$ ping 128.197.112.233
PING 128.197.112.233 (128.197.112.233) 56(84) bytes of data.
64 bytes from 128.197.112.233: icmp_seq=1 ttl=64 time=0.278 ms
64 bytes from 128.197.112.233: icmp_seq=2 ttl=64 time=0.282 ms
64 bytes from 128.197.112.233: icmp_seq=3 ttl=64 time=0.320 ms

No, the MACs don't match. My hardware MAC address is d0:50:99:51:78:7a; the traffic is associated with MAC bc:5f:f4:fe:a1:00.

UPDATE 5: As requested, a port scan against *.233:

Starting Nmap 6.00 ( http://nmap.org ) at 2016-12-20 20:38 EET
NSE: Loaded 17 scripts for scanning.
Initiating SYN Stealth Scan at 20:38
Scanning cumm024-0701-dhcp-233.bu.edu (128.197.112.233) [1024 ports]
Discovered open port 22/tcp on 128.197.112.233
Completed SYN Stealth Scan at 20:38, 9.79s elapsed (1024 total ports)
Initiating Service scan at 20:38
Scanning 1 service on cumm024-0701-dhcp-233.bu.edu (128.197.112.233)
Completed Service scan at 20:38, 0.37s elapsed (1 service on 1 host)
Initiating OS detection (try #1) against cumm024-0701-dhcp-233.bu.edu (128.197.112.233)
Initiating Traceroute at 20:38
Completed Traceroute at 20:38, 0.10s elapsed
NSE: Script scanning 128.197.112.233.
[+] Nmap scan report for cumm024-0701-dhcp-233.bu.edu (128.197.112.233)
Host is up (0.083s latency).
Not shown: 1013 filtered ports
PORT    STATE  SERVICE VERSION
21/tcp  closed ftp
22/tcp  open   ssh     OpenSSH 5.5p1 Debian 6+squeeze1 (protocol 2.0)
23/tcp  closed telnet
25/tcp  closed smtp
43/tcp  closed whois
80/tcp  closed http
105/tcp closed unknown
113/tcp closed ident
210/tcp closed z39.50
443/tcp closed https
554/tcp closed rtsp
Device type: general purpose
Running: Linux 2.6.X
OS CPE: cpe:/o:linux:kernel:2.6
OS details: DD-WRT v24-sp2 (Linux 2.6.19)
Uptime guess: 45.708 days (since Sat Nov  5 03:39:36 2016)
Network Distance: 9 hops
TCP Sequence Prediction: Difficulty=204 (Good luck!)
IP ID Sequence Generation: All zeros
Service Info: OS: Linux; CPE: cpe:/o:linux:kernel

TRACEROUTE (using port 25/tcp)
HOP RTT      ADDRESS
1   0.95 ms  router1-lon.linode.com (212.111.33.229)
2   0.70 ms  109.74.207.0
3   1.09 ms  be4464.ccr21.lon01.atlas.cogentco.com (204.68.252.85)
4   1.00 ms  be2871.ccr42.lon13.atlas.cogentco.com (154.54.58.185)
5   63.45 ms be2983.ccr22.bos01.atlas.cogentco.com (154.54.1.178)
6   63.60 ms TrusteesOfBostonUniversity.demarc.cogentco.com (38.112.23.118)
7   63.55 ms comm595-core-res01-gi2-3-cumm111-bdr-gw01-gi1-2.bu.edu (128.197.254.125)
8   63.61 ms cumm024-dist-aca01-gi5-2-comm595-core-aca01-gi2-2.bu.edu (128.197.254.206)
9   90.28 ms cumm024-0701-dhcp-233.bu.edu (128.197.112.233)

OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 20.73 seconds
           Raw packets sent: 557 (25.462KB) | Rcvd: 97 (8.560KB)

and on UDP:

Starting Nmap 6.00 ( http://nmap.org ) at 2016-12-20 20:44 EET
NSE: Loaded 17 scripts for scanning.
Initiating Ping Scan at 20:44
Scanning 128.197.112.233 [4 ports]
Completed Ping Scan at 20:44, 1.10s elapsed (1 total hosts)
Initiating UDP Scan at 20:44
Scanning cumm024-0701-dhcp-233.bu.edu (128.197.112.233) [1024 ports]
Completed UDP Scan at 20:44, 6.31s elapsed (1024 total ports)
Initiating Service scan at 20:44
Scanning 1024 services on cumm024-0701-dhcp-233.bu.edu (128.197.112.233)
Service scan Timing: About 0.39% done
Service scan Timing: About 3.12% done; ETC: 22:12 (1:25:46 remaining)
Service scan Timing: About 6.05% done; ETC: 21:53 (1:04:39 remaining)
Service scan Timing: About 8.98% done; ETC: 21:46 (0:56:03 remaining)
Discovered open port 123/udp on 128.197.112.233
Discovered open|filtered port 123/udp on cumm024-0701-dhcp-233.bu.edu (128.197.112.233) is actually open
Completed Service scan at 21:31, 2833.50s elapsed (1024 services on 1 host)
Initiating OS detection (try #1) against cumm024-0701-dhcp-233.bu.edu (128.197.112.233)
Retrying OS detection (try #2) against cumm024-0701-dhcp-233.bu.edu (128.197.112.233)
NSE: Script scanning 128.197.112.233.
Initiating NSE at 21:31
Completed NSE at 21:31, 10.02s elapsed
[+] Nmap scan report for cumm024-0701-dhcp-233.bu.edu (128.197.112.233)
Host is up (0.089s latency).
Not shown: 1023 open|filtered ports
PORT    STATE SERVICE VERSION
123/udp open  ntp?
1 service unrecognized despite returning data. If you know the service/version, please submit the following fingerprint at http://www.insecure.org/cgi-bin/servicefp-submit.cgi :
SF-Port123-UDP:V=6.00%I=7%D=12/20%Time=58597D5C%P=x86_64-unknown-linux-gnu
SF:%r(NTPRequest,30,"\xe4\x02\x04\xee\0\0\x8a\xff\0:t\xd9\x84\xa3\x04e\xdb
SF:\xcaeEX\xdbC'\xc5O#Kq\xb1R\xf3\xdc\x03\xfb\xb8\+>U\xab\xdc\x03\xfb\xb8\
SF:+T\xd1\xe9")%r(Citrix,C,"\xde\xc0\x010\x02\0\xa8\xe3\0\0\0\0");
Too many fingerprints match this host to give specific OS details
Network Distance: 9 hops
OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 2863.89 seconds
           Raw packets sent: 175 (6.720KB) | Rcvd: 50 (10.088KB)
This is a server class machine with IPMI. The "ghost" NTP server that is causing the issue is running on the BMC processor on the system and not the main CPU.
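If you want to confirm this from the host, the BMC's network settings can usually be inspected with ipmitool (assuming the ipmitool package is installed and the kernel IPMI drivers are loaded; the channel number may differ per board):

ipmitool lan print 1    # shows the BMC's own IP and MAC; compare with the rogue .233 / bc:5f:f4:... pair

Fixing the rogue NTP responder is then typically a matter of updating the BMC firmware or changing its settings through the vendor's web/IPMI configuration interface.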
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/331521", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135458/" ] }
331,522
I'm confused how to include optional arguments/flags when writing a bash script for the following program. The program requires two arguments:

run_program --flag1 <value> --flag2 <value>

However, there are several optional flags:

run_program --flag1 <value> --flag2 <value> --optflag1 <value> --optflag2 <value> --optflag3 <value> --optflag4 <value> --optflag5 <value>

I would like to run the bash script such that it takes user arguments. If users only input two arguments in order, then it would be:

#!/bin/sh
run_program --flag1 $1 --flag2 $2

But what if any of the optional arguments are included? I would think it would be

if [ --optflag1 "$3" ]; then
    run_program --flag1 $1 --flag2 $2 --optflag1 $3
fi

But what if $4 is given but not $3?
This article shows two different ways - shift and getopts (and discusses the advantages and disadvantages of the two approaches). With shift your script looks at $1 , decides what action to take, and then executes shift , moving $2 to $1 , $3 to $2 , etc. For example:

while :; do
    case $1 in
        -a|--flag1) flag1="SET" ;;
        -b|--flag2) flag2="SET" ;;
        -c|--optflag1) optflag1="SET" ;;
        -d|--optflag2) optflag2="SET" ;;
        -e|--optflag3) optflag3="SET" ;;
        *) break
    esac
    shift
done

With getopts you define the (short) options in the while expression:

while getopts abcde opt; do
    case $opt in
        a) flag1="SET" ;;
        b) flag2="SET" ;;
        c) optflag1="SET" ;;
        d) optflag2="SET" ;;
        e) optflag3="SET" ;;
    esac
done

Obviously, these are just code-snippets, and I've left out validation - checking that the mandatory args flag1 and flag2 are set, etc. Which approach you use is to some extent a matter of taste - how portable you want your script to be, whether you can live with short (POSIX) options only or whether you want long (GNU) options, etc.
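Since your flags take values, note that getopts handles that with a trailing colon and the OPTARG variable; a minimal sketch for three short options (long options would need the shift approach):

while getopts a:b:c: opt; do
    case $opt in
        a) flag1=$OPTARG ;;
        b) flag2=$OPTARG ;;
        c) optflag1=$OPTARG ;;
    esac
done
shift $((OPTIND - 1))
run_program --flag1 "$flag1" --flag2 "$flag2" ${optflag1:+--optflag1 "$optflag1"}

The ${var:+...} expansion emits the optional flag only when its variable is set, which matches the "pass it through only if given" behavior you want.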
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/331522", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115891/" ] }
331,536
Is it safe to put my temporary manual backups of my website codebase and database into the /tmp folder? I'm running Debian 8. I want to leave them there for a couple days. I am not sure if this directory gets overwritten or emptied on it's own. Thanks!
I would say it is not safe in general. On many systems, /tmp is cleaned on reboot by default . See /etc/default/rcS ( TMPTIME defaults to 0 ),

# delete files in /tmp during boot older than x days.
# '0' means always, -1 or 'infinite' disables the feature
#TMPTIME=0
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/331536", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/196159/" ] }
331,611
Is there an official POSIX, GNU, or other guideline on where progress reports and logging information (things like "Doing foo; foo done") should be printed? Personally, I tend to write them to stderr so I can redirect stdout and get only the program's actual output. I was recently told that this is not good practice since progress reports aren't actually errors and only error messages should be printed to stderr. Both positions make sense, and of course you can choose one or the other depending on the details of what you are doing, but I would like to know if there's a commonly accepted standard for this. I haven't been able to find any specific rules in POSIX, the GNU coding standards, or any other such widely accepted lists of best practices. We have a few similar questions, but they don't address this exact issue: When to use redirection to stderr in shell scripts : The accepted answer suggests what I tend to do, keep the program's final output on stdout and anything else to stderr. However, this is just presented as a user's opinion, albeit supported by arguments. Should the usage message go to stderr or stdout? : This is specific to help messages but cites the GNU coding standard. This is the sort of thing I'm looking for, just not restricted to help messages only. So, are there any official rules on where progress reports and other informative messages (which aren't part of the program's actual output) should be printed?
Posix defines the standard streams thus : At program start-up, three streams shall be predefined and need not be opened explicitly: standard input (for reading conventional input), standard output (for writing conventional output), and standard error (for writing diagnostic output). When opened, the standard error stream is not fully buffered; the standard input and standard output streams are fully buffered if and only if the stream can be determined not to refer to an interactive device. The GNU C Library describes the standard streams similarly: Variable: FILE * stdout The standard output stream, which is used for normal output from the program. Variable: FILE * stderr The standard error stream, which is used for error messages and diagnostics issued by the program. Thus, standard definitions have little guidance for stream usage beyond “conventional/normal output” and “diagnostic/error output.” In practice, it’s common to redirect either or both of these streams to files and pipelines, where progress indicators will be a problem. Some systems even monitor stderr for output and consider it a sign of problems. Purely auxiliary progress information is therefore problematic on either stream. Instead of sending progress indicators unconditionally to either standard stream, it’s important to recognize that progress output is only appropriate for interactive streams. With that in mind, I recommend writing progress counters only after checking whether the stream is interactive (e.g., with isatty() ) or when explicitly enabled by a command-line option. That’s especially important for progress meters that rely on terminal update behavior to make sense, like %-complete bars. For certain very simple progress messages (“Starting X” ... “Done with X”) it’s more reasonable to include the output even for non-interactive streams. In that case, consider how users might interact with the streams, like searching with grep or paging with less or monitoring with tail -f . If it makes sense to see the progress messages in those contexts, they will be much easier to consume from stdout .
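A minimal shell sketch of that interactivity check; [ -t 2 ] is the POSIX test for "file descriptor 2 is a terminal":

if [ -t 2 ]; then
    printf 'Doing foo...\n' >&2    # progress chatter only when a human is watching stderr
fi
do_foo                             # hypothetical placeholder for the program's real work
[ -t 2 ] && printf 'foo done\n' >&2

When stderr is redirected to a file or a pipe, the progress messages disappear automatically, so logs and pipelines stay clean.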
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/331611", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22222/" ] }
331,615
I have 2 external storage devices of 1TB each and I want to back all of this up to a server. I want to use rsync to do this, but I have found that of the ~100,000 files on each device, ~80,000 files are the same (have the same name and directory path). I could rsync both of these separately, which would merge the files, but I want a way to find out whether the 'mutual' files contain the same content, because I don't want to lose a modified file if one has been modified. Is there a way of checking for this using rsync?
Yes: do a dry run with checksums and itemized output. For example:

rsync -rnc --itemize-changes /path/to/disk1/ /path/to/disk2/

-n ( --dry-run ) makes no changes at all, -c ( --checksum ) compares files by content rather than by size and modification time, and --itemize-changes lists exactly what would be transferred and why. Any 'mutual' file that shows up in this list has different content on the two devices despite the identical name and path, so you can inspect those before merging. When you do the real merge, rsync can also keep the overwritten version instead of discarding it, e.g. with --backup --suffix=.disk1 , so no modified file is lost.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/331615", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173266/" ] }
331,640
On CentOS 7 I want Docker containers to be able to reach the host so I tried to add docker0 to the trusted zone:

# firewall-cmd --permanent --zone=trusted --add-interface=docker0
The interface is under control of NetworkManager and already bound to 'trusted'
The interface is under control of NetworkManager, setting zone to 'trusted'.
success
# firewall-cmd --get-zone-of-interface=docker0
no zone

This used to work but not on this server for whatever reason. I also tried firewall-cmd --reload, nothing. As if firewalld commands are completely ignored. That NetworkManager message seems suspicious, is it possible that firewalld and NetworkManager are in some kind of conflict? Out of desperation I also tried: nmcli connection modify docker0 connection.zone trusted which correctly set ZONE=trusted in the interface config, but firewalld still shows that the interface is not in the trusted zone. What is going on here?
From what I can tell, unless there's an interface using the trusted zone that's directly recognized by firewalld (e.g. eth0), the trusted zone isn't marked as active. In order to get around this, you can explicitly set the iptables rule with the following:

firewall-cmd --permanent --zone=trusted --add-interface=docker0
firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 3 -i docker0 -j ACCEPT
firewall-cmd --reload
systemctl restart docker

The '3' here is where in your INPUT chain the rule will be inserted; your mileage may vary. After running those commands I was able to access host ports from a container.
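As a hedged aside, not part of the original answer: the result can be sanity-checked with firewall-cmd's standard query options, which list the direct rules and the interfaces bound to the trusted zone:

firewall-cmd --direct --get-all-rules
firewall-cmd --zone=trusted --list-interfaces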
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/331640", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/201542/" ] }
331,645
I'd like to extract a file from a Docker image without having to run the image. The docker save option is not currently a viable option for me, as it saves far too large a file just to un-tar a specific file.
You can extract files from an image with the following commands:

container_id=$(docker create "$image")
docker cp "$container_id:$source_path" "$destination_path"
docker rm "$container_id"

According to the docker create documentation, this doesn't run the container: The docker create command creates a writeable container layer over the specified image and prepares it for running the specified command. The container ID is then printed to STDOUT. This is similar to docker run -d except the container is never started. You can then use the docker start <container_id> command to start the container at any point. For reference (my previous answer), a less efficient way of extracting a file from an image is the following:

docker run some_image cat "$file_path" > "$output_path"
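For illustration, here is the same recipe with concrete values filled in (the image name and the path are placeholders, not from the original answer):

image=alpine:3.19
container_id=$(docker create "$image")
docker cp "$container_id:/etc/os-release" ./os-release
docker rm "$container_id"

This leaves ./os-release on the host without the container ever having run.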
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/331645", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18009/" ] }
331,650
This is an excerpt from the bash manual: When the old-style backquote form of substitution is used, backslash retains its literal meaning except when followed by $, `, or \. But backticks treat $ and \$ in the same way, as suggested by the output of the following commands:

Command          Output
echo '$PWD'      $PWD
echo '\$PWD'     \$PWD
I am new to stackexchange and to Linux also. Thanks in advance. Welcome to both! There are no backticks in your example; those are single quotes: ''. Backticks look like this: ``. Also, I would suggest that you simply don't use them (the backticks, that is)! It is better to use this syntax for command substitution: $(<command>). Read about why here. Happy hacking!
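A short sketch of the difference the quoted manual excerpt is describing, assuming an interactive bash session (outputs shown as comments): inside backquotes, the backslash before $ is stripped before the inner command runs, so the two substitution forms diverge:

echo `echo \$PWD`      # backslash removed first, so PWD expands: prints your working directory
echo $(echo \$PWD)     # backslash survives parsing: prints the literal string $PWD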
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/331650", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206221/" ] }
331,693
I have a bunch of services (say C0, C1, … C9) that should only start after a service S has completed its initialization and is fully running and ready for the other services. How do I arrange that with systemd? In Ordering services with path activation and target in systemd it is assumed that service S has a mechanism for writing out some sort of flag file. Assume here, in contrast, that I have full control over the program that service S runs, and can add systemd mechanisms into it if needs be.
One doesn't necessarily need this. If the C services need to wait for S to be ready so that they can open a socket connection to it, then one doesn't necessarily need to do this at all. Rather, one can take advantage of early listening socket opening by service managers. Several systems, including Laurent Bercot's s6, my nosh toolset, and systemd, have ways in which a listening socket can be opened early on, the very first thing in setting up the service. They all involve something other than the service program opening the listening socket(s), and the service program, when invoked, receiving the listening socket(s) as already-open file descriptors. With systemd, specifically, one creates a socket unit that defines the listening socket. systemd opens the socket unit and sets it up so that the kernel networking subsystem is listening for connections; and passes it to the actual service as an open file descriptor when it comes to spawn the process(es) that handle(s) connections to the socket. (It can do this in two ways, just like inetd could, but a discussion of the details of Accept=true versus Accept=false services is beyond the scope of this answer.) The important point is that one does not necessarily need more ordering than that. The kernel batches up client connections in a queue until the service program is initialized, and ready to accept them and talk to clients. When one does, readiness protocols are the thing. systemd has a set of readiness protocols that it understands, specified service by service with the Type= setting in the service unit. The particular readiness protocol of interest here is the notify readiness protocol. With it, systemd is told to expect messages from the service, and when the service is ready it sends a message that flags readiness. systemd delays the activation of the other services until readiness is flagged. Making use of this involves two things:

1. Modifying the code of S so that it calls something like Pierre-Yves Ritschard's notify_systemd() function or Cameron T Norman's notify_socket() function.
2. Setting up the service unit for the service with Type=notify and NotifyAccess=main.

The NotifyAccess=main restriction (which is the default) is because systemd needs to know to ignore messages from mischievous (or just plain faulty) programs, because any process on the system can send messages to systemd's notification socket. One uses Pierre-Yves Ritschard's or Cameron T Norman's code for preference because it does not exclude the possibility of having this mechanism on UbuntuBSD, Debian FreeBSD, actual FreeBSD, TrueOS, OpenBSD, and so forth; which the code supplied by the systemd authors does exclude. One trap to avoid is the systemd-notify program. It has several major problems, not the least of which is that messages sent with it can end up being thrown away unprocessed by systemd. The most major problem in this case is that it doesn't run as the "main" process of the service, so one has to open up the readiness notifications for the service S to every process on the system with NotifyAccess=all. Another trap to avoid is thinking that the forking protocol is simpler. It is not. Doing it correctly involves not forking and exiting the parent until (for one thing) all of the program's worker threads are running. This does not match how the overwhelming majority of dæmons that fork actually fork.

Further reading:
Jonathan de Boyne Pollard (2015). Readiness protocol problems with Unix dæmons. Frequently Given Answers.
Lennart Poettering (2010). sd_notify(). systemd manual pages. Freedesktop.org.
Lennart Poettering (2010). systemd-notify. systemd manual pages. Freedesktop.org.
How to write a systemd service unit file so it waits until a specific interface is up before starting?
Add status info to systemd's status output
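Rounding this out, a minimal sketch of the unit wiring the answer describes; the unit names and the ExecStart path are hypothetical:

# S.service -- the service that flags its own readiness
[Service]
Type=notify
NotifyAccess=main
ExecStart=/usr/local/sbin/S

# C0.service -- a service that must wait for S
[Unit]
Requires=S.service
After=S.service

With Type=notify, systemd does not treat S as started, and so does not start C0, until S sends its readiness message on the notification socket.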
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/331693", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5132/" ] }
331,696
find . -type f -exec echo {} \;

Using the above command, I want to get file names without the leading "./" characters. So basically, I want to get:

filename

instead of:

./filename

Any way to do this?
Use * instead of . and the leading ./ disappears. find * -type f
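Note that the * glob skips dotfiles and can misbehave on names beginning with a dash. If GNU findutils can be assumed, a sketch that strips the leading ./ without those caveats is:

find . -type f -printf '%P\n'

where %P prints each file's name with the starting point removed.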
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/331696", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206439/" ] }
331,722
In bash, I am having some difficulty determining what I should use: all my scripts use >>/dev/stderr. At the bash prompt, if I try:

echo test >>/dev/stderr     (works)
echo test >> /dev/stderr    (works)
echo test >/dev/stderr      (works)
echo test > /dev/stderr     (works)
echo test >>&2              (FAILS!)
echo test >> &2             (FAILS!)
echo test >&2               (works)
echo test > &2              (FAILS!)

I am willing to change all my scripts to >&2. It seems to also have a big effect over ssh (after su SomeUser), where >>/dev/stderr will not work at all (permission denied); only >&2 will work.
>& n is shell syntax to directly duplicate a file descriptor. File descriptor 2 is stderr; that's how that one works. You can duplicate other file descriptors as well, not just stderr. You can't use append mode here because duplicating a file descriptor never truncates (even if your stderr is a file), and >& is one token; that's why you can't put a space inside it, but >& 2 works. >> name is a different permitted syntax, where name is a file name (and the token is >>). In this case, you're using the file name /dev/stderr, which by OS-specific handling (on Linux, it's a symlink to /proc/self/fd/2) also means standard error. Append and truncate mode both wind up doing the same thing when stderr is a terminal because that can't be truncated. If your standard error is a file, however, it will be truncated:

anthony@Zia:~$ bash -c 'echo hi >/dev/stderr; echo bye >/dev/stderr' 2>/tmp/foo
anthony@Zia:~$ cat /tmp/foo
bye

If you're seeing an error with /dev/stderr over ssh, it's possible the server admin has applied some security measure preventing that symlink from working. (E.g., you can't access /proc or /dev). While I'd expect either to cause all kinds of weird breakage, using the duplicate file descriptor syntax is a perfectly reasonable (and likely slightly more efficient) approach. Personally I prefer it.
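A counterpart sketch of the answer's transcript, assuming Linux, where /dev/stderr resolves through /proc/self/fd/2: with descriptor duplication, the file opened once by 2>/tmp/foo is never reopened, so nothing is truncated and both lines survive:

bash -c 'echo hi >&2; echo bye >&2' 2>/tmp/foo
cat /tmp/foo
# hi
# bye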
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/331722", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30352/" ] }
331,741
I have the following file:

echo filename
dfT08r352|30.5|2010/06/01|2016/08/29|2281|6.24503764544832|74.9404517453799|
zm00dr121|37|2008/03/05|2011/09/12|1285.95833333333|3.52076203513575|42.249144421629|
ccvd00121|41.6|2008/03/05|2012/03/05|1461|4|48|
sddf00121|39.6|2008/03/05|2012/09/10|1649.95833333333|4.51733972165184|54.208076659822|
fttt00121|41|2008/03/05|2013/09/16|2020.95833333333|5.53308236367785|66.3969883641342|
ghhyy0121|42.2|2008/03/05|2014/03/18|2203.95833333333|6.03410905772302|72.4093086926762|

I am trying to format this file using awk printf to have the following desired format:
keep the same order of fields (left-->right)
have comma "," as FS only for the last three fields ($5, $6, $7)
have all the numbers be 4 digits (with a leading zero if shorter) and only 2 digits after the point, like 0123.12 or 1234.10

I wrote the following awk command:

awk -F"|" '{print $1","$2","$3","$4}{format = "%04.2f,%04.2f,%04.2f,"}{printf format, $5,$6,$7}' filename

however the below output has the following issues:
it is not in order (left-->right)
it does not have the leading zero

dfT08r352,30.5,2010/06/01,2016/08/29
2281.00,6.25,74.94,zm00dr121,37,2008/03/05,2011/09/12
1285.96,3.52,42.25,ccvd00121,41.6,2008/03/05,2012/03/05
1461.00,4.00,48.00,sddf00121,39.6,2008/03/05,2012/09/10
1649.96,4.52,54.21,fttt00121,41,2008/03/05,2013/09/16
2020.96,5.53,66.40,ghhyy0121,42.2,2008/03/05,2014/03/18

Can someone please let me know what is my mistake and how to fix it?
You have the fields in the right order, but your first print statement adds a newline (Output Record Separator), so your data's there, but just wrapped unexpectedly. The second issue is that you're telling printf to use a width of 4; that includes the decimal point and the two digits after it, leaving only one for the leading digit and none for any padding. Try using 5 as the width, so that your data is padded up to four total numbers. If you want 4 digits before the decimal point, then change the width to 7 instead. This is the shortest change I made from your program to something that outputs what I think you want: awk -F"|" '{ format = "%05.2f,%05.2f,%05.2f"; print $1","$2","$3","$4"," sprintf(format, $5,$6,$7)}' filename I combined multiple { } blocks into one, and also combined the print statements into one. If I was to write your awk statement from scratch, I might do something like this: awk -v FS=\| -v OFS=, '{ $5=sprintf("%05.2f", $5); $6=sprintf("%05.2f", $6); $7=sprintf("%05.2f", $7); print $1,$2,$3,$4,$5,$6,$7}' filename It explicitly sets the input Field Separator, the Output Field Separator, explicitly converts each of the fields on its own, then prints the desired fields, with the OFS separating them.
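As a hedged variation on the second one-liner, the three per-field conversions can be collapsed into a loop, and %07.2f yields the four-digits-before-the-point form (e.g. 0004.52) that the question described:

awk -v FS='|' -v OFS=, '{ for (i = 5; i <= 7; i++) $i = sprintf("%07.2f", $i); print $1, $2, $3, $4, $5, $6, $7 }' filename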
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/331741", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/178822/" ] }
331,764
When you redirect a command list that contains an exec redirection, the exec >/dev/null doesn't seem to still be applied afterwards, such as with:

{ exec >/dev/null; } >/dev/null; echo "Hi"

"Hi" is printed. I was under the impression that a {} command list is not considered a subshell unless it is part of a pipeline, so the exec >/dev/null should still be applied within the current shell environment in my mind. Now if you change it to:

{ exec >/dev/null; } 2>/dev/null; echo "Hi"

there is no output as expected; file descriptor 1 remains pointed at /dev/null for future commands as well. This is shown by rerunning:

{ exec >/dev/null; } >/dev/null; echo "Hi"

which will give no output. I tried making a script and stracing it, but I am still unsure exactly what is happening here. At each point in this script what is happening to the STDOUT file descriptor? EDIT: Adding my strace output:

read(255, "#!/usr/bin/env bash\n{ exec 1>/de"..., 65) = 65
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
fcntl(1, F_GETFD) = 0
fcntl(1, F_DUPFD, 10) = 10
fcntl(1, F_GETFD) = 0
fcntl(10, F_SETFD, FD_CLOEXEC) = 0
dup2(3, 1) = 1
close(3) = 0
close(10) = 0
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
fcntl(1, F_GETFD) = 0
fcntl(1, F_DUPFD, 10) = 10
fcntl(1, F_GETFD) = 0
fcntl(10, F_SETFD, FD_CLOEXEC) = 0
dup2(3, 1) = 1
close(3) = 0
dup2(10, 1) = 1
fcntl(10, F_GETFD) = 0x1 (flags FD_CLOEXEC)
close(10) = 0
fstat(1, {st_mode=S_IFCHR|0666, st_rdev=makedev(1, 3), ...}) = 0
ioctl(1, TCGETS, 0x7ffee027ef90) = -1 ENOTTY (Inappropriate ioctl for device)
write(1, "hi\n", 3) = 3
Let's follow { exec >/dev/null; } >/dev/null; echo "Hi" step by step. There are two commands:

a. { exec >/dev/null; } >/dev/null, followed by
b. echo "Hi"

The shell executes first the command (a) and then the command (b). The execution of { exec >/dev/null; } >/dev/null proceeds as follows:

a. First, the shell performs the redirection >/dev/null and remembers to undo it when the command ends.
b. Then, the shell executes { exec >/dev/null; }.
c. Finally, the shell switches standard output back to where it was.

(This is the same mechanism as in ls -lR /usr/share/fonts >~/FontList.txt: redirections are made only for the duration of the command to which they belong.) Once the first command is done the shell executes echo "Hi". Standard output is wherever it was before the first command.
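To watch the save-and-restore happen, here is a small probe sketch, assuming Linux's /proc (the probes write to stderr so they remain visible):

{ exec >/dev/null; readlink /proc/$$/fd/1 >&2; } >/dev/null   # prints /dev/null
readlink /proc/$$/fd/1 >&2                                    # prints the terminal again, e.g. /dev/pts/0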
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/331764", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206472/" ] }