334,742
The software I'm using is only supported on CentOS 7.2. I tried sudo yum --releasever=7.2 update and got "Cannot find a valid baseurl for repo: base/7.2/x86_64". A plain sudo yum update takes me to 7.3.
CentOS 7.2's actual version is 7.2.1511, and it uses this URL: http://mirror.centos.org/centos/7.2.1511/os/x86_64/

Try:

yum --releasever=7.2.1511 update

You can also edit /etc/yum.repos.d/CentOS-Base.repo and change baseurl to http://mirror.centos.org/centos/7.2.1511/os/x86_64/ instead of http://mirror.centos.org/centos/$releasever/os/$basearch/
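A sketch of that file-edit route (this assumes the baseurl lines are active rather than the mirrorlist lines in the stock repo file; back the file up first):

# Keep a backup, then pin every $releasever reference to 7.2.1511
sudo cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
sudo sed -i 's/\$releasever/7.2.1511/g' /etc/yum.repos.d/CentOS-Base.repo
sudo yum clean all
sudo yum update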
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/334742", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135289/" ] }
334,804
Everywhere I read that internally SSDs are structured in 4K or larger "pages", which are grouped in "blocks" of about 128-256 pages (1, 2). SSDs work with these pages and blocks: "they can only erase data at the block level" (thus the block of pages is called a "[NAND] erase block"), and the 512B blocks for the partition are emulated (which is done for legacy reasons). I'm trying to get educated on SSDs, since I have some weird lags/freezes during writes to my Sandisk U100 on a Samsung 9 np900x3c laptop. One useful thing would be to correctly find out what pages/blocks my SSD has. Is there a utility or /sys/... file on Linux to determine the SSD page size? Or do I need to open the drive and Google the part numbers on the NAND chips, as suggested in the comment? Googling my Sandisk SSD I cannot find a proper datasheet/spec, but Sandisk and others do mention "4K random reads/writes". Does that mean the disk has 4K pages? Also, fdisk shows me a sector size (both physical and logical) and I/O size of 512 bytes:

Disk /dev/sda: 128.0 GB, 128035676160 bytes
255 heads, 63 sectors/track, 15566 cylinders, total 250069680 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4b914713

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048    50331647    25164800   83  Linux
/dev/sda2        50331648   239583231    94625792   83  Linux
/dev/sda4       239583232   250068991     5242880   82  Linux swap / Solaris

What is the "physical" sector size here? It doesn't seem to be a parameter of the SSD drive itself, since everybody says SSD pages are 4K+. Is it an emulated parameter for the disk? And is "logical" the sector size for the partition? Also, what is the I/O size?

P.S. This question is probably the same as this one for USB flash, but the answer there misses the point: man fsstat says "fsstat displays the details associated with a file system", while the question is about the disk itself. My post has more details; maybe it will attract better responses.
The physical block size reported by fdisk is whatever the disk reports when asked; it seldom has any relationship with SSD pages or erase blocks. 4 KiB reads/writes are a common measure of I/O performance, representing "small" I/O operations. There is no standard way for an SSD to report its page size or erase block size, and few if any manufacturers report them in their datasheets (because they may change during the lifetime of a SKU, for example because of changing suppliers). There is a whitepaper from Intel which suggests that 4 KiB alignment is enough. For practical use, just align all your data structures (partitions, payload of LUKS containers, LVM logical volumes) to 1 or 2 MiB boundaries. It's an SSD, after all: it is designed to cope with usual filesystems, such as NTFS (which uses 4 KiB allocation units). If Windows considers aligning partitions to 1 MiB enough, you can bet that any SSD manufacturer will make sure their products work well with such a configuration. It's also best to leave about 5% to 10% of unallocated space outside any partitions: having overprovisioned space is of great help to SSDs in maintaining their performance over time.
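For instance, a quick way to check what the drive advertises and whether a partition is aligned (a sketch; the device and partition number are illustrative):

# Block sizes the drive reports to the kernel
cat /sys/block/sda/queue/logical_block_size
cat /sys/block/sda/queue/physical_block_size

# Check partition 1 for optimal alignment (prints "1 aligned" if OK)
sudo parted /dev/sda align-check optimal 1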
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/334804", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38371/" ] }
334,815
I'm creating a backup strategy for my laptop (Ubuntu) based on rsync. When backing up /etc, I get a lot of "Operation not permitted (1)" and "Permission denied (13)" errors. Is it required to do sudo rsync ..., or are the problematic files and folders not needed anyway?
In general, you should run backups as whatever user is necessary to access all the files being backed up. In the case of /etc, that means root (so use sudo, or a root-owned cronjob or systemd timer).
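For example (a sketch; adjust the destination and options to your setup):

# Run as root so permission-restricted files under /etc are readable,
# preserving permissions, ACLs and extended attributes
sudo rsync -aAX --numeric-ids /etc/ /backup/etc/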
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/334815", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150978/" ] }
334,865
The difference with and without -h should only be the human-readable units, right? Well, apparently not...

$ du -s .
74216696    .
$ du -hs .
 35G    .

Or maybe I'm mistaken and the result of du -s . isn't in KB?
du without an output format specifier gives disk usage in blocks of 512 bytes, not kilobytes. You can use the option -k to display kilobytes instead. On OS X (or macOS, or MacOS, or Macos; whichever you like), you can customize the default unit by setting the environment variable BLOCKSIZE (this affects other commands as well).
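As a sanity check, 74216696 blocks × 512 bytes ≈ 35.4 GiB, which matches the 35G from du -hs. For example:

$ du -sk .               # force 1 KiB blocks
$ BLOCKSIZE=1m du -s .   # BSD/macOS du: report in 1 MiB blocks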
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/334865", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/128868/" ] }
334,905
When I try to run the wget command on HTTPS URLs I get this error message:

ERROR: The certificate of `url' is not trusted.
ERROR: The certificate of `url' hasn't got a known issuer.
If you are using Debian or Ubuntu, install the ca-certificates package:

$ sudo apt-get install ca-certificates

If you don't care about checking the validity of the certificate, use the --no-check-certificate option:

$ wget --no-check-certificate https://download/url

Note: the second option is not recommended because of the possibility of a man-in-the-middle attack.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/334905", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208681/" ] }
334,909
I'm writing a bash script to run on CentOS to first grab the lines for the start and end of an application session, then output if the duration is longer than an hour. The timestamp format in the logfile is 2017-01-03T00:00:15.529596-03:00, and $i is the application session ID. Here is what I have so far:

for i in $( grep 'session-enter\|session-exit' logfile | awk '{ print $5}' ); do
    echo ""
    echo "***** $i *****"
    grep 'session-enter\|session-exit' logfile | grep $i
    start=$(grep session-enter logfile | grep $i | awk '{ print $1 }' | sed 's/-03:00//g')
    end=$(grep session-exit logfile | grep $i | awk '{ print $1 }' | sed 's/-03:00//g')
    epochStart=$(date -d "$start" +%s )
    epochEnd=$(date -d "$end" +%s )
    duration=$( date -u -d "0 $epochEnd seconds - $epochStart seconds" +"%H:%M:%S" )
    if [ "$epochStart"="" ] || [ "$epochEnd"="" ]
    then
        echo Duration: $duration
    else
        continue
    fi
done

Any help on this is greatly appreciated.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/334909", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208685/" ] }
334,912
AFAIK the basic concept of gpg/pgp is that two people who want to create trust between them each publish a public key and keep a private key (the private key is kept by the user who creates it and is never shared) of some strength (1024 bits at one time, 4096 now, 8192 in the future, and so on). The two of them then need to publish their public keys to a keyserver (similar to a phone directory) and give a link to the keyserver where those keys are published. Now if I go to a server, say https://pgp.mit.edu/, and search for ashish, I get many hits: https://pgp.mit.edu/pks/lookup?op=get&search=ashish&op=index Let's say the Ashish I want is this one, DAD95197 (just an example); how would I import that public key? I did try:

└─[$] gpg --keyserver pgp.mit.edu --recv-keys DAD95197
gpg: keyserver receive failed: No keyserver available

but as can be seen, that didn't work.
gpg --keyserver pgp.mit.edu --recv-keys DAD95197 is supposed to import keys matching DAD95197 from the MIT keyserver. However, the MIT keyserver often has availability issues, so it's safer to configure another keyserver. I generally use the SKS pools; here are their results when looking for "ashish". To import the key from there, run gpg --keyserver pool.sks-keyservers.net --recv-keys FBF1FC87DAD95197 (never use the short key IDs, they can easily be spoofed). This answer explains how to configure your GnuPG installation to always use the SKS pools.
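A sketch of that persistent configuration (file locations assume a default GnuPG home directory):

# GnuPG 2.1+ reads the keyserver from dirmngr's config: ~/.gnupg/dirmngr.conf
keyserver hkps://pool.sks-keyservers.net

# Older GnuPG versions read it from ~/.gnupg/gpg.conf instead:
# keyserver pool.sks-keyservers.net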
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/334912", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
334,920
My objective is to make text on my remote machine (CentOS 7.2) available to paste seamlessly on my local machine (OS X 10.12.2) with the standard ⌘V shortcut. My setup connects to the remote machine with ssh -Y and then attaches to tmux (or creates a new session if non-existent). When I run either echo "test" | xsel -ib or echo "test" | xclip, it hangs. The $DISPLAY variable is localhost:10.0. If I exit tmux, the $DISPLAY variable seems to be null and I get a "can't open display" error.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/334920", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52058/" ] }
335,013
I'm trying to write a simple script to retrieve memory and swap usage from a list of hosts. Currently, the only way I've been able to achieve this is to write 3 separate one-liners:

for a in {1..9}; do echo "bvrprdsve00$a; $(ssh -q bvrprdsve00$a "echo \$(free -m|grep Mem|/bin/awk '{print \$4}';free -m|grep Swap|/bin/awk '{print \$4}')")"; done > /tmp/svemem.txt;
for a in {10..99}; do echo "bvrprdsve0$a; $(ssh -q bvrprdsve0$a "echo \$(free -m|grep Mem|/bin/awk '{print \$4}';free -m|grep Swap|/bin/awk '{print \$4}')")"; done >> /tmp/svemem.txt;
for a in {100..218}; do echo "bvrprdsve$a; $(ssh -q bvrprdsve$a "echo \$(free -m|grep Mem|/bin/awk '{print \$4}';free -m|grep Swap|/bin/awk '{print \$4}')")"; done >> /tmp/svemem.txt

The reason for this is that the hostname always ends in a 3-digit number and these hosts go from 001-218, so I've needed a different for loop for each set (001-009, 010-099, 100-218). Is there a way in which I can do this in one script instead of joining 3 together?
Bash brace expansion can generate the numbers with leading zeros (since bash 4.0 alpha, ~2009-02-20):

$ echo {001..023}
001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023

So, you can do:

for a in {001..218}; do echo "bvrprdsve$a; $(ssh -q bvrprdsve$a "echo \$(free -m|grep Mem|/bin/awk '{print \$4}';free -m|grep Swap|/bin/awk '{print \$4}')")"; done >> /tmp/svemem.txt

But let's look inside the command a little bit. You are calling free twice, using grep and then awk:

free -m|grep Mem|/bin/awk '{print \$4}';free -m|grep Swap|/bin/awk '{print \$4}'

All of it could be reduced to this one call to free and awk:

free -m|/bin/awk '/Mem|Swap/{print \$4}'

Furthermore, the internal command could be stored in a variable:

cmd="echo \$(free -m|/bin/awk '/Mem|Swap/{print \$4}')"

Then the whole script will look like this:

b=bvrprdsve; f=/tmp/svemem.txt; cmd="echo \$(free -m|/bin/awk '/Mem|Swap/{print \$4}')"; for a in {001..218}; do echo "$b$a; $(ssh -q "$b$a" "$cmd")"; done >> "$f"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/335013", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181766/" ] }
335,029
With the following script I'm trying to read a text file (italian.txt) and translate all words in it from Italian into English, saving the output in another text file (english.txt). I have to use the sed command with the global flag g so that I translate every appearance of each word. It's not working correctly but I don't know what goes wrong. Can somebody help me?

cat italian.txt | sed -i 's/sole/sun/g' | 's/penna/pen/g' > english.txt
exit 0
There are a couple of problems with your script. You need to add a second sed after the second pipe (|). Also, sed -i tells sed to edit files "in-place", but there is no file specified: sed is reading stdin, coming from cat. You can safely remove the -i, and your script should then work. The fixed script is:

cat italian.txt | sed 's/sole/sun/g' | sed 's/penna/pen/g' > english.txt
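The two substitutions can also be merged into a single sed process with multiple -e expressions, which avoids the extra pipe and the unnecessary cat:

sed -e 's/sole/sun/g' -e 's/penna/pen/g' italian.txt > english.txt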
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335029", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205637/" ] }
335,064
I want to have all DNS queries pass through Tor. How do I set my default DNS to go through Tor? In other words, I want to use the IP given by tor-resolve google.com instead of dig google.com.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335064", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204816/" ] }
335,081
I have a Raspberry Pi running OSMC (Debian based). I have set a cron job to start a script, sync.sh, at midnight. 0 0 * * * /usr/local/bin sync.sh I need to stop the script at 7am. Currently I am using: 0 7 * * * shutdown -r now Is there a better way? I feel like rebooting is overkill. Thanks
You can run it with the timeout command:

timeout - run a command with a time limit

Synopsis
timeout [OPTION] NUMBER[SUFFIX] COMMAND [ARG]...
timeout [OPTION]

Description
Start COMMAND, and kill it if still running after NUMBER seconds. SUFFIX may be 's' for seconds (the default), 'm' for minutes, 'h' for hours or 'd' for days.

P.S. If your sync process takes too much time, you might consider a different approach for syncing your data, maybe block replication.
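Applied to the question's schedule, the crontab entry could look like this (a sketch; it assumes the script actually lives at /usr/local/bin/sync.sh, whereas the entry in the question has a space where that last slash should be):

# Start sync.sh at midnight and kill it after 7 hours if still running
0 0 * * * timeout 7h /usr/local/bin/sync.sh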
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/335081", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208805/" ] }
335,087
I use Ubuntu 16.04 and I need the following tmux solution because I want to run a timeout process with sleep: in my particular case I wasn't satisfied with at, and I encountered a bug with nohup (when combining nohup and sleep). Now, tmux seems the best alternative, as it has its own no-hangup mechanism and actually works fine in manual usage (I ask the question only in regard to automating the process I can already do manually with it). What I need is a way to do the following 3 actions, all in one operation:

1. Create a new tmux session.
2. Inject a ready set of commands into that session, like (sleep 30m ; rm -rf dir_name ; exit). I would especially prefer a multi-line set, and not one long row.
3. Execute the above command set the moment it has finished being written to stdin of the new tmux session.

In other words, I want to execute a code set in another tmux session that was specially created for that purpose, but to do it all in one operation. Notes: I aim to do all this from my original working session (the one I work from most of the time). Generally, I have no intention of visiting the newly created session; I just want to create it with its automatically executed code, and that's it. If possible, I would prefer a heredoc solution. I think it's most efficient.
If you put the code you want to execute in e.g. /opt/my_script.sh, it's very easy to do what you want:

tmux new-session -d -s "myTempSession" /opt/my_script.sh

This starts a new detached session, named "myTempSession", executing your script. You can later attach to it to check out what it's doing, by executing tmux attach-session -t myTempSession. That is, in my opinion, the most straightforward and elegant solution. I'm not aware of any easy way of executing commands from stdin (read: "from heredocs") with tmux. By hacking around you might even be able to do it, but it would still be (and look like) a hack. For example, here's a hack that uses the command I suggested above to simulate the behaviour you want (= execute code in a new tmux session from a heredoc; no write occurs on the server's hard drive, as the temporary file is created in /dev/shm, which is a tmpfs):

(
  cat >/dev/shm/my_script.sh &&
  chmod +x /dev/shm/my_script.sh &&
  tmux new-session -d '/dev/shm/my_script.sh; rm /dev/shm/my_script.sh'
) <<'EOF'
echo "hacky, but works"
EOF
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/335087", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
335,145
I have created a few aliases for git in zsh, for example gch = git checkout, grb = git rebase --committer-date-is-author-date and some more complex useful zsh functions for git commands. But how can I allow these aliases to use zsh git autocompletion?
I've had the same issue. You should check whether the option completealiases is set. It prevents aliases from being internally substituted before completion is attempted. In my case, removing setopt completealiases from my .zshrc resolved the issue. You can try unsetopt completealiases if oh-my-zsh sets it.
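For example, in ~/.zshrc (a sketch):

# Expand aliases internally before completion is attempted, so that
# completing after `gch` behaves like completing after `git checkout`
unsetopt completealiases
alias gch='git checkout'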
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335145", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208848/" ] }
335,166
Days ago, I inadvertently overwrote .bashrc with an empty file on Ubuntu 16.04, and the machine has been switched off since. Now, when I su, the password is not recognized. Is there any way to make Ubuntu work again, or do I have to reinstall?
It's pretty unlikely that this is caused by the missing .bashrc file. The password that the su command expects is the root user's password, which on Ubuntu is not defined. The command you probably want is sudo, which allows you to run commands as the root account but authenticates with your own password. Give it a try with, for example:

sudo whoami

which should ask for your password and then just print the word root. If you want to get the default .bashrc file back, you should be able to just copy it from /etc/skel/.bashrc; that's the directory used to "propagate" all new users' dot-files.
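For example:

cp /etc/skel/.bashrc ~/.bashrc
source ~/.bashrc   # reload it in the current shell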
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335166", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208859/" ] }
335,175
Assume you have some text file (e.g. a log file), you open it in the vim editor and hit the command :g/aaa. It will output a result in which you can move with the j and k keys, and when you move to the bottom a green sentence "Press ENTER or type command to continue" will appear. I understand it somehow, that I can use some commands with the result, but I don't know how to find out what I can do with it. One action I'd like to do is to save the matching lines to a new file. Of course you could use the command

$ grep aaa file.txt > new_file.txt

but is it possible from the vim editor directly?
It is possible to do this through a multi-step process. Within vim:

:redir > new_file.txt
:g/aaa
:redir END

See :help redir from within vim. The :redir command can also append to an existing file by modifying the first command:

:redir >> new_file.txt
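An alternative worth knowing is a single :g command that appends each matching line to the file directly (a sketch; the ! forces the write so the file is created if it doesn't exist yet):

:g/aaa/.w! >> new_file.txt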
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335175", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6215/" ] }
335,180
The script is to read a file that contains multiple lines, each line containing a tab-delimited array. I want to execute some remote commands that take those array elements as arguments, with sudo permission. Here is the example script:

while IFS=$'\t' read -r -a line
do
  echo ${line[0]}
  ssh -tty -o StrictHostKeyChecking=no ${line[0]} 'sudo echo ${line[1]}; sudo echo ${line[2]}'
done < nodes.txt

Here is the example input file:

rivervm-1 dc2 rack1
rivervm-2 dc2 rack2
rivervm-3 dc2 rack3
rivervm-4 dc2 rack4

The output should be 12 values, each on a new line. However, this is what I got:

rivervm-1
rivervm-2 dc2 rack2
rivervm-3 dc2 rack3
rivervm-4 dc2 rack4
Connection to rivervm-1 closed.

Any idea?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335180", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181848/" ] }
335,189
I have a problem which is reproducible on Linux Ubuntu VMs (14.04 LTS) created in Azure. After installing the systemd package through a script, the system refuses new ssh connections, indefinitely:

System is booting up.
Connection closed by xxx.xxx.xxx.xxx

The active ssh connection is maintained, though. There is no /etc/nologin file present on the system. The only option I see is a hard reset, which solves the problem. But how do I avoid it? Here is the script I am using:

#!/bin/bash

# Script input arguments
user=$1
server=$2

# Tell the shell to quote your variables to be eval-safe!
printf -v user_q '%q' "$user"
printf -v server_q '%q' "$server"

#SECONDS=0
address="$user_q"@"$server_q"

function run {
  ssh "$address" /bin/bash "$@"
}

run << SSHCONNECTION
  # Enable autostartup
  # systemd is required for the autostartup
  sudo dpkg-query -W -f='${Status}' systemd 2>/dev/null | grep -c "ok installed" > /home/$user_q/systemd-check.txt
  systemdInstalled=\$(cat /home/$user_q/systemd-check.txt)

  if [[ \$systemdInstalled -eq 0 ]]; then
    echo "Systemd is not currently installed. Installing..."
    # install systemd
    sudo apt-get update
    sudo apt-get -y install systemd
  else
    echo "systemd is already installed. Skipping this step."
  fi
SSHCONNECTION
I suspect there is a /etc/nologin file (whose content would be "System is booting up.") that is not removed after the systemd installation.

Update: what affects you is a bug that was reported on Ubuntu's BTS last December. It is due to a /var/run/nologin file (= /run/nologin, since /var/run is a symlink to /run) that is not removed at the end of the systemd installation. /etc/nologin is the standard nologin file; /var/run/nologin is an alternate file that may be used by the nologin PAM module (man pam_nologin). Note that none of the nologin files affect connections by the root user; only regular users are prevented from logging in.
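As a workaround until the fixed package lands (a sketch), the provisioning script could clear the stale flag files right after the installation step:

# Remove the leftover nologin flags so regular users can log in again
sudo rm -f /run/nologin /etc/nologin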
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/335189", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/197423/" ] }
335,215
I am trying to set up a passwordless login from machineA to machineB for my user david, which already exists. This is what I did to generate the authentication keys:

david@machineA:~$ ssh-keygen -t rsa
........
david@machineB:~$ ssh-keygen -t rsa
........

After that I copied machineA's id_rsa.pub key (/home/david/.ssh/id_rsa.pub) into machineB's authorized_keys file (/home/david/.ssh/authorized_keys). Then I went back to machineA's login screen and ran the command below, and it worked fine without any issues, so I was able to log into machineB as the david user without being asked for any password.

david@machineA:~$ ssh david@machineB

Question: Now I created a new user on both machineA and machineB by running only this command: useradd golden. And now I want to ssh passwordless as this golden user into machineB from machineA. I did the same exact steps as above but it doesn't work.

david@machineA:~$ sudo su - golden
golden@machineA:~$ ssh-keygen -t rsa
........
david@machineB:~$ sudo su - golden
golden@machineB:~$ ssh-keygen -t rsa
........

Then I copied the golden user's id_rsa.pub key /home/golden/.ssh/id_rsa.pub from machineA into machineB's authorized_keys file /home/golden/.ssh/authorized_keys. And when I try to ssh, it gives me:

golden@machineA:~$ ssh golden@machineB
Connection closed by 23.14.23.10

What is wrong? It doesn't work only for the golden user, which I created manually with useradd. I am running Ubuntu 14.04. Are there any settings I need to enable for this manually created user? In machineB's auth.log, below is what I see when I run ssh -vvv golden@machineB from machineA to log in:

Jan  3 17:56:59 machineB sshd[25664]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
Jan  3 17:56:59 machineB sshd[25664]: pam_access(sshd:account): access denied for user `golden' from `machineA'
Jan  3 17:56:59 machineB sshd[25664]: pam_sss(sshd:account): Access denied for user golden: 10 (User not known to the underlying authentication module)
Jan  3 17:56:59 machineB sshd[25664]: fatal: Access denied for user golden by PAM account configuration [preauth]

Is there anything I am missing? Below is how my directory structure looks:

golden@machineA:~$ pwd
/home/golden
golden@machineA:~$ ls -lrtha
total 60K
-rw------- 1 golden golden  675 Nov 22 12:26 .profile
-rw------- 1 golden golden 3.6K Nov 22 12:26 .bashrc
-rw------- 1 golden golden  220 Nov 22 12:26 .bash_logout
drwxrwxr-x 2 golden golden 4.0K Nov 22 12:26 .parallel
drwxr-xr-x 2 golden golden 4.0K Nov 22 12:34 .vim
drwxr-xr-x 7 root   root   4.0K Dec 22 11:56 ..
-rw------- 1 golden golden  17K Jan  5 12:51 .viminfo
drwx------ 2 golden golden 4.0K Jan  5 12:51 .ssh
drwx------ 5 golden golden 4.0K Jan  5 12:51 .
-rw------- 1 golden golden 5.0K Jan  5 13:14 .bash_history

golden@machineB:~$ pwd
/home/golden
golden@machineB:~$ ls -lrtha
total 56K
-rw------- 1 golden golden  675 Dec 22 15:10 .profile
-rw------- 1 golden golden 3.6K Dec 22 15:10 .bashrc
-rw------- 1 golden golden  220 Dec 22 15:10 .bash_logout
drwxr-xr-x 7 root   root   4.0K Jan  4 16:43 ..
drwx------ 2 golden golden 4.0K Jan  5 12:51 .ssh
-rw------- 1 golden golden 9.9K Jan  5 12:59 .viminfo
drwx------ 6 golden golden 4.0K Jan  5 12:59 .
-rw------- 1 golden golden 4.6K Jan  5 13:10 .bash_history

Update:

In machineA:
cat /etc/passwd | grep golden
golden:x:1001:1001::/home/golden:/bin/bash

In machineB:
cat /etc/passwd | grep golden
golden:x:1001:1001::/home/golden:/bin/bash
The issue is with the PAM stack configuration. Your host is configured with pam_access, and the default configuration does not allow external/SSH access for the new user golden, even though your keys are set up properly. Adding the golden user to /etc/security/access.conf as below fixed the issue:

+:golden:ALL

For more information read man access.conf, which explains each field of this file. Look at the examples section to understand the order and meanings of LOCAL, ALL, etc.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/335215", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/207957/" ] }
335,260
I've recently got a non-touchscreen HP laptop with an HDD accelerometer. After upgrading it to Debian testing I noticed that whenever I tilt my laptop upwards past +45 deg, the screen rotates upside down. The opposite happens when I tilt my laptop -45 deg. To clarify, I am facing my laptop with the screen facing me and the keyboard parallel to the ground. The screen also rotates whenever I tilt my laptop clockwise or counterclockwise. Is there a file I can edit to change the screen's rotational direction? The accelerometer in /proc/bus/input/devices shows this:

I: Bus=0019 Vendor=0000 Product=0000 Version=0000
N: Name="ST LIS3LV02DL Accelerometer"
P: Phys=lis3lv02d/input0
S: Sysfs=/devices/platform/lis3lv02d/input/input7
U: Uniq=
H: Handlers=event6 js0
B: PROP=0
B: EV=9
B: ABS=7

EDIT: I found that watch -n 1 'cat /sys/devices/platform/lis3lv02d/position' is similar to what is found with the command below, except it just displays coordinates such as (18,18,1098). evtest /dev/input/event6 shows this:

william@wksp0:~/Downloads$ sudo evtest /dev/input/event6
Input driver version is 1.0.1
Input device ID: bus 0x19 vendor 0x0 product 0x0 version 0x0
Input device name: "ST LIS3LV02DL Accelerometer"
Supported events:
  Event type 0 (EV_SYN)
  Event type 3 (EV_ABS)
    Event code 0 (ABS_X)
      Value     20
      Min     -2304
      Max      2304
      Fuzz       18
      Flat       18
    Event code 1 (ABS_Y)
      Value     -38
      Min     -2304
      Max      2304
      Fuzz       18
      Flat       18
    Event code 2 (ABS_Z)
      Value    1105
      Min     -2304
      Max      2304
      Fuzz       18
      Flat       18
Properties:
Testing ... (interrupt to exit)
Event: time 1483747056.088195, type 3 (EV_ABS), code 1 (ABS_Y), value -23
Event: time 1483747056.088195, -------------- SYN_REPORT ------------
Event: time 1483747056.124189, type 3 (EV_ABS), code 0 (ABS_X), value 20
Event: time 1483747056.124189, type 3 (EV_ABS), code 1 (ABS_Y), value -38
Event: time 1483747056.124189, type 3 (EV_ABS), code 2 (ABS_Z), value 1105
Event: time 1483747056.124189, -------------- SYN_REPORT ------------
Event: time 1483747056.210931, type 3 (EV_ABS), code 0 (ABS_X), value -18
Event: time 1483747056.210931, type 3 (EV_ABS), code 1 (ABS_Y), value -28
Event: time 1483747056.210931, type 3 (EV_ABS), code 2 (ABS_Z), value 1107
...

EDIT2: After some googling, I've come across this, which led me to some interesting files that have little to no help on this. :P
The whole story you mention is actually a kind of bug in iio-sensor-proxy, or in your DE's code that makes use of iio-sensor-proxy info. It is not the BIOS or the kernel that does the rotation, but the marriage between iio-sensor-proxy and your Desktop Environment. DEs like GNOME (and Cinnamon, as it turns out) auto-rotate the screen based on the data provided by iio-sensor-proxy over D-Bus. You can try to remove/purge iio-sensor-proxy, and screen rotation will go away completely. It is not clear whether this is an iio-sensor-proxy bug or a Cinnamon bug: it could be iio-sensor-proxy reading your accelerometer data the wrong way, or it could be Cinnamon rotating the screen wrongly even though it receives correct data from sensor-proxy. You can clarify this by running monitor-sensor in a root terminal. This utility comes with the iio-sensor-proxy package and displays in the terminal the current state of the accelerometer / current screen orientation. If the orientation is correctly displayed by monitor-sensor then it is a Cinnamon bug, but I'm 90% sure that this is an iio-sensor-proxy bug and you should report it to the developer.

PS: It has also been mentioned that sensor-proxy worked well with kernels up to version 4.7 but has had problems with kernel 4.8 and above. You could try installing an older kernel (e.g. 4.7) for testing. If monitor-sensor reports the orientation correctly and this is a Cinnamon bug, as a workaround you could disable Cinnamon's auto screen rotation feature and run a kind of shell script that makes the correct rotation based on the data from monitor-sensor.

PS: GNOME gives the option to completely disable auto screen rotation; I'm not sure if Cinnamon has this option too. In XFCE, where iio-sensor-proxy is installed but the XFCE devs do not perform auto screen rotation (yet), we apply this script to get auto screen rotation: https://linuxappfinder.com/blog/auto_screen_rotation_in_ubuntu An improved version for touch screens with a transformation matrix: https://github.com/gevasiliou/PythonTests/blob/master/autorotate.sh

Update for future reference / future "google searches": As advised in the comments, running monitor-sensor in a root terminal and observing the messages provided by iio-sensor-proxy proved that iio-sensor-proxy correctly understood the real screen orientation. As a result, this seems to be a Cinnamon bug: though it gets correct info from iio-sensor-proxy, it rotates the screen wrongly. You can disable the Cinnamon auto-rotation feature and try the auto-rotation script advised above (https://linuxappfinder.com/blog/auto_screen_rotation_in_ubuntu). To disable Cinnamon's internal autorotation, run gsettings set org.cinnamon.settings-daemon.plugins.orientation active false, as advised in the OP's comment.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335260", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78259/" ] }
335,284
To create a fake Ethernet dummy interface on Linux, we first initialize the dummy interface driver using the command /sbin/modprobe dummy. Then we assign an Ethernet interface alias to the dummy driver we just initialized. But it gives the following fatal error: FATAL: Module dummy not found. Also, at the path /sys/devices/virtual/net we can see that there are virtual interfaces present with the following names: dummy0/ lo/ sit0/ tunl0/

ifconfig -a

dummy0: Link encap:Ethernet  HWaddr aa:3a:a6:cd:91:2b
        BROADCAST NOARP  MTU:1500  Metric:1
        RX packets:0 errors:0 dropped:0 overruns:0 frame:0
        TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo:     Link encap:Local Loopback
        inet addr:127.0.0.1  Mask:255.0.0.0
        inet6 addr: ::1/128 Scope:Host
        UP LOOPBACK RUNNING  MTU:16436  Metric:1
        RX packets:111 errors:0 dropped:0 overruns:0 frame:0
        TX packets:111 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:8303 (8.1 KiB)  TX bytes:8303 (8.1 KiB)

sit0:   Link encap:UNSPEC  HWaddr 00-00-00-00-FF-00-00-00-00-00-00-00-00-00-00-00
        NOARP  MTU:1480  Metric:1
        RX packets:0 errors:0 dropped:0 overruns:0 frame:0
        TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

tunl0:  Link encap:IPIP Tunnel  HWaddr
        NOARP  MTU:1480  Metric:1
        RX packets:0 errors:0 dropped:0 overruns:0 frame:0
        TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

So, the modprobe command is not able to load the kernel module. How can we load a kernel module using modprobe or insmod to initialize a dummy interface driver? Can we create multiple dummy interfaces on a single loaded module?
The usual way to add several dummy interfaces is to use iproute2:

# ip link add dummy0 type dummy
# ip link add dummy1 type dummy
# ip link list
...
5: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 22:4e:84:26:c5:98 brd ff:ff:ff:ff:ff:ff
6: dummy1: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 9e:3e:48:b5:d5:1d brd ff:ff:ff:ff:ff:ff

But the error message FATAL: Module dummy not found indicates that you may have a kernel where the dummy interface module is not enabled, so make sure to check your kernel configuration, and recompile the kernel if necessary.
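If the module is available, it can also create several interfaces at load time via its numdummies parameter (a sketch):

# modprobe dummy numdummies=2
# ip link show type dummy    # should now list dummy0 and dummy1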
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/335284", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204084/" ] }
335,293
I am stuck with this configuration: after commenting the DEFRROUTE line I get the ip r output below. Does it really work with DEFRROUTE=no when uncommented?

[root@vm1 ~]# ip r
default via 192.168.5.1 dev eth0 proto static metric 100
default via 192.168.1.1 dev eth2 proto static metric 101
169.24.0.0/17 dev eth1 proto kernel scope link src 169.24.0.5 metric 100
192.168.1.0/24 dev eth2 proto kernel scope link src 192.168.1.3 metric 100
192.168.5.0/28 dev eth0 proto kernel scope link src 192.168.5.10 metric 100

[root@vm1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
BOOTPROTO=static
ONBOOT=yes
USERCTL=no
TYPE=Ethernet
IPADDR=192.168.1.3
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
#DEFRROUTE=no

[root@vm1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
USERCTL=no
TYPE=Ethernet
IPADDR=192.168.5.10
NETMASK=255.255.255.240
GATEWAY=192.168.5.1
#DEFRROUTE=yes

[root@vm1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
USERCTL=no
TYPE=Ethernet
IPADDR=169.24.0.5
NETMASK=255.255.128.0
#DEFRROUTE=no

When I uncomment DEFRROUTE I get the output below, without the default routes:

[root@vm1 ~]# ip r
169.24.0.0/17 dev eth1 proto kernel scope link src 169.24.0.5 metric 100
192.168.1.0/24 dev eth2 proto kernel scope link src 192.168.1.3 metric 100
192.168.5.0/28 dev eth0 proto kernel scope link src 192.168.5.10 metric 100

As @artem suggested via the link, below is the screenshot.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/335293", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98965/" ] }
335,294
I have the following four files in the file system:

/home/martin/cvs/ops/cisco/b/s1
/home/martin/cvs/ops/cisco/b/s2
/home/martin/cvs/ops/extreme/b/r3
/home/martin/cvs/ops/j/b/r5

I need to put those files into a tar archive, but I don't want to add directories. The best I could come up with was:

tar -C ~/cvs/ops/ -czvf archive.tgz cisco/b/s1 cisco/b/s2 extreme/b/r3 j/b/r5

This is still not perfect, because each file in the archive is two directories deep. Is there a better way? Or do I simply have to copy the s1, s2, r3 and r5 files into one directory and create the archive with tar -czvf archive.tgz s1 s2 r3 r5?
You can use -C multiple times (moving from one directory to another):

tar czvf archive.tar.gz -C /home/martin/cvs/ops/cisco/b s1 s2 -C ../../extreme/b r3 -C ../../j/b r5

Note that each -C option is interpreted relative to the current directory at that point (or you can just use absolute paths).
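Listing the archive afterwards should confirm that the members carry no directory components:

$ tar tzf archive.tar.gz
s1
s2
r3
r5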
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/335294", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33060/" ] }
335,333
The situation is as follows. I have a Linux partition on a primary drive (modestly-sized SSD, and sharing it with Windows). I have another Linux (ext4) partition on a hard drive. It is permanently mounted in /etc/fstab . I don't want to make a swap file on a root drive to save space. Thus I want to make a swap file on the hard drive partition. I've successfully created and enabled a swap file, but I have trouble enabling it permanently in /etc/fstab . Should it be mounted under /dev/ (where the drive is mounted), or under /mnt/ (where the file system is mounted)?
In your case, the /etc/fstab entry and the preceding steps for a swap file look as follows:

dd if=/dev/zero of=/mnt/<UUID>/swapfile bs=1M count=512
mkswap /mnt/<UUID>/swapfile
chmod 600 /mnt/<UUID>/swapfile
echo "/mnt/<UUID>/swapfile none swap defaults 0 0" >> /etc/fstab

So the entry in /etc/fstab should look like:

/mnt/<UUID>/swapfile none swap defaults 0 0

and should be below the line that mounts /mnt/<UUID>. Then you should be able to activate it with the following command:

swapon -a

Concerning the question from your comment, about mounting the swap file with the UUID created during mkswap: no, that is not possible. You have to specify the full path to the file.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335333", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208975/" ] }
335,359
I am trying to do a yum update and all of the mirrors fail with a 404. I put the URL into my browser and the error is correct: the URL does not exist. YUM is looking for a file that does not exist on the mirrors. See below for the error message:

https://mirrors.lug.mtu.edu/epel/7/x86_64/repodata/13b91b1efe2a1db71aa132d76383fdb5311887958a910548546d58a5856e2c5d-primary.sqlite.xz: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.oss.ou.edu/epel/7/x86_64/repodata/13b91b1efe2a1db71aa132d76383fdb5311887958a910548546d58a5856e2c5d-primary.sqlite.xz: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://mirror.csclub.uwaterloo.ca/fedora/epel/7/x86_64/repodata/13b91b1efe2a1db71aa132d76383fdb5311887958a910548546d58a5856e2c5d-primary.sqlite.xz: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://mirror.sfo12.us.leaseweb.net/epel/7/x86_64/repodata/13b91b1efe2a1db71aa132d76383fdb5311887958a910548546d58a5856e2c5d-primary.sqlite.xz: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://mirror.math.princeton.edu/pub/epel/7/x86_64/repodata/13b91b1efe2a1db71aa132d76383fdb5311887958a910548546d58a5856e2c5d-primary.sqlite.xz: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
http://kdeforge2.unl.edu/mirrors/epel/7/x86_64/repodata/13b91b1efe2a1db71aa132d76383fdb5311887958a910548546d58a5856e2c5d-primary.sqlite.xz: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://muug.ca/mirror/fedora-epel/7/x86_64/repodata/13b91b1efe2a1db71aa132d76383fdb5311887958a910548546d58a5856e2c5d-primary.sqlite.xz: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
http://fedora.westmancom.com/epel/7/x86_64/repodata/13b91b1efe2a1db71aa132d76383fdb5311887958a910548546d58a5856e2c5d-primary.sqlite.xz: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
https://ca.mirror.babylon.network/epel/7/x86_64/repodata/13b91b1efe2a1db71aa132d76383fdb5311887958a910548546d58a5856e2c5d-primary.sqlite.xz: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
https://mirror.chpc.utah.edu/pub/epel/7/x86_64/repodata/13b91b1efe2a1db71aa132d76383fdb5311887958a910548546d58a5856e2c5d-primary.sqlite.xz: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.

I have tried running yum clean all. That command finished successfully, but it did not change anything. I have also tried the following:

rm -f /var/lib/rpm/__db*
rpm --rebuilddb

That also did not change anything.
Edit your /etc/yum.conf file and add:

http_caching=packages

Explanation: the http_caching option controls how yum handles HTTP downloads and what it should cache. Its default setting is to cache all downloads, and that includes repo metadata. So if a metadata file gets corrupted during download (e.g. it is only partially downloaded), yum will not be able to verify the remote availability of packages and it will fail. The solution is to add http_caching=packages to /etc/yum.conf so that yum only caches packages and downloads fresh repository metadata each time.
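For example (a sketch):

# /etc/yum.conf
[main]
http_caching=packages

# then discard the cached metadata and retry
yum clean metadata
yum update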
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/335359", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79156/" ] }
335,367
Is there any way to replace part of a string with a character? Example: I have

123456789

and I want to replace all characters from position 3 to position 8 with *, to produce this result:

12******9

Is there a way, perhaps using sed -i "s/${mystring:3:5}/*/g"?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/335367", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209004/" ] }
335,371
I needed to automatically get my own WAN IP address from my router. I found this question and, among others, a solution with dig was proposed:

dig +short myip.opendns.com @resolver1.opendns.com

It works perfectly, but now I want to understand what it is doing. Here is what I (hope to) understand so far (please correct me if I am wrong): +short just gives me a short output, and @resolver1.opendns.com is the DNS server which is asked what IP address belongs to the given domain. What's not clear to me is myip.opendns.com. If I wrote www.spiegel.de instead, I would get the IP address of the domain www.spiegel.de, right? With myip.opendns.com I get the WAN IP of my router. So is myip.opendns.com just emulating a domain which is resolved to my router? How does it do it? Where does it get my IP from? And how is it different from what webpages like e.g. www.wieistmeineip.de are doing? They also try to get my IP. In the answer of Krinkle on the question I mentioned, it is stated that this "DNS approach" would be better than the "HTTP approach". Why is it better and what is the difference? There has to be a difference, because the WAN IP I get from dig +short myip.opendns.com @resolver1.opendns.com (ip1) is the one I can also see in the web interface of my router, whereas www.wieistmeineip.de (and other similar sites too) gives me another IP address (ip2). I could imagine that my ISP is using some kind of sub-LAN, so that my requests to webservers go through another (ISP) router which has ip2, so that www.wieistmeineip.de just sees this address (ip2). But, again, what is myip.opendns.com doing then? Additionally: opening ip1 from within my LAN gives me the test website from my RasPi; opening it from outside my LAN (mobile internet) does not work. Does that mean ip1 is not a proper "internet IP" but more like a LAN IP?
First, to summarize the general usage of dig: it requests the IP assigned to the given domain from the default DNS server. So e.g. dig google.de would request the IP assigned to the domain google.de; that would be 172.217.19.99. The command you mentioned is:

dig +short myip.opendns.com @resolver1.opendns.com

What this command does is send a request for the IP of the domain myip.opendns.com to the DNS server resolver1.opendns.com. This server is programmed so that, if this special domain is requested, it sends back the IP the request comes from. The reasons why the method of querying the WAN IP using DNS is better were mentioned by Krinkle: it is standardised, more stable, and faster. Note that by default, dig asks for the IPv4 address (DNS A record). If dig establishes a connection to opendns.com via IPv6, you'll get no result back (since you asked for your IPv4 address but have an IPv6 address in use). Thus, a more robust command might be:

dig +short ANY @resolver1.opendns.com myip.opendns.com

This will return your IP address, version 4 or 6, depending on dig's connection. To specify an IP version, use dig -4 or dig -6 as shown in Mafketel's answer. The reason I could imagine for those two IPs is that your router caches DNS requests and returns an old IP. Another problem could be DualStack Lite, which is often used by new internet contracts. Do you know whether your ISP is using DS Lite?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/335371", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208974/" ] }
335,435
How can I remove the dummy information from a file named results.txt whose lines look like this?

_my0001_split00000000.txt:Total Dynamic Power = 0.0000 mW
_my0001_split00000050.txt:Total Dynamic Power = 117.5261 uW (100%)
...

They should change to a tab-separated format like this:

0001    00000000    0.0000 mW
0001    00000050    117.5261 uW
How about using sed instead of awk?

sed -r 's/^_my([0-9]+)_split([0-9]+)\.txt:[^=]*=\s*([0-9.]+) *(\S+).*/\1\t\2\t\3 \4/' /path/to/file
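Run against the sample input, that should produce tab-separated output like:

0001    00000000    0.0000 mW
0001    00000050    117.5261 uW

(Note that -r selects extended regular expressions in GNU sed; on BSD/macOS sed, use -E instead.)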
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335435", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/144963/" ] }
335,469
What's in my terminal:

bash: settings64.csh: line 35: syntax error near unexpected token `('
bash: settings64.csh: line 35: `foreach i ( $xlnxInstLocList )'

Portion of the script:

set xlnxInstLocList="${xlnxInstLocList} common"
set xlnxInstLocList="${xlnxInstLocList} EDK"
set xlnxInstLocList="${xlnxInstLocList} PlanAhead"
set xlnxInstLocList="${xlnxInstLocList} ISE"
set XIL_SCRIPT_LOC_TMP_UNI=${XIL_SCRIPT_LOC}
foreach i ( $xlnxInstLocList )

The location of the syntax error is the bottom line, line 35:

foreach i ( $xlnxInstLocList )

I'm not a scripter; I'm trying to fix an error in the scripting for my ISE Design Suite installation. I just need a quick set of code to replace foreach i ( $xlnxInstLocList ) to perform its intended function. I think it's a Bash script.
bash doesn't have a foreach; this script is probably meant to run in csh or tcsh. If you are invoking the script with ./myscript.csh, make sure its first line is #!/bin/csh (or whatever the full path to that shell is on your system).
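For example (a sketch; since the Xilinx settings scripts exist to set environment variables, you would normally source them from a csh session rather than run them under bash):

$ csh
% source settings64.csh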
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335469", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209059/" ] }
335,484
I have the Ubuntu file-system directories in the root directory, and I accidentally copied hundreds of files into the root directory. I intuitively tried to remove the copied files by excluding the file-system directories with rm -rf !{bin,sbin,usr,opt,lib,var,etc,srv,libx32,lib64,run,boot,proc,sys,dev} ./., but it doesn't work. What's the proper way to exclude some directories while deleting the rest? EDIT: Never try any of the commands here without knowing what to do!
Since you are using bash:

shopt -s extglob
echo rm -rf ./!(bin|sbin|usr|...)

I recommend adding echo at the beginning of the command line when you are running something that can potentially blow up the entire system. Remove it if you are happy with the result. Note: the above command won't remove hidden files (those whose names start with a dot). If you want to remove them as well, then also activate the dotglob option:

shopt -s dotglob
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/335484", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29606/" ] }
335,497
I am trying to get everything between startStr and endStr for the block that contains bbb. I understand how I can get all occurrences between startStr and endStr using sed. I do not see how I would constrain it to just the one instance where bbb occurs. Sample input:

fff
startStr
aaa
bbb
ccc
endStr
xxx
yyy
startStr
ddd
endStr
ddd
bbb

Required output:

startStr
aaa
bbb
ccc
endStr

This is what I have:

$ sed -n -e '/startStr/,/endStr/ p' sample.txt
startStr
aaa
bbb
ccc
endStr
startStr
ddd
endStr
For the first startStr … endStr block that contains a /bbb/ occurrence:

sed -n '/startStr/ {:n; N; /endStr/ {/\n[^\n]*bbb[^\n]*\n/ {p; q}; b}; bn}'

or

sed -n '/startStr/ {:n; N; /endStr/ {/\nbbb\n/ {p; q}; b}; bn}'

if bbb is not a regular expression and is exactly the string you need (from line start to \n).

Explanation: for the address /startStr/ we:

- set the label :n;
- read the next line with N;
- check whether the pattern space now matches /endStr/;
- if it does, check for a /\nbbb\n/ occurrence in the block we read; if it is present, do {p; q} for «print and quit», otherwise do b to discard this block and start searching for the next one;
- if it isn't the end of the block yet, jump to :n, i.e. continue reading.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335497", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63644/" ] }
335,515
I want to run a shell script ~/.local/bin/test.sh via dmenu. If I run dmenu via $mod+d and browse for the entry test.sh, I can't find it. The path ~/.local/bin is already added to my $PATH variable in ~/.profile:

$ echo $PATH
/home/ubuntu/bin:/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games

I also removed ~/.cache/dmenu_run and restarted i3. What can I do to launch the test script via dmenu?
Delete ~/.cache/dmenu_run or ~/dmenu_cache, depending on which you have, and log back in. After your PATH is reloaded from .profile on login, dmenu should regenerate the cache from $PATH. dmenu seems to be bad about renewing its own cache, and sometimes needs to be forced to do it. Also check that you have enabled the executable bit for the script:

$ ls -l ~/.local/bin/test.sh
-rwxrwxrwx 1 user group 152 Jan 11 04:09 /home/user/.local/bin/test.sh
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/335515", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/198795/" ] }
335,531
I can easily capture stdout from a function call (in a subshell) to a variable with:

val="$(get_value)"

I can also modify variables (for instance, an array) in a shell by reference, so to speak, in the same shell with something like:

function array.delete_by_index {
    local array_name="$1"
    local key="$2"
    unset "$array_name[$key]"
    eval "$array_name=(\"\${$array_name[@]}\")"
}

array.delete_by_index "array1" 0

But what I'm struggling to figure out is how to do both at the same time, in a clean manner. An example of where I want this is popping a value from an array:

function array.pop {
    local array_name="$1"
    local last_index=$(( $(eval "echo \${#$array_name[@]}") - 1 ))
    local tmp="$array_name[\"$last_index\"]"
    echo "${!tmp}"
    # Changes "$array_name" here, but not in caller since this is a sub shell
    array.delete_by_index "$array_name" $last_index
}

val="$(array.pop "array1")"

It seems to me like all forms of capturing stdout to a variable require a subshell in bash, and using a subshell will not allow me the ability to change a value by reference in the caller's context. I'm wondering if anyone knows a magical bashism to accomplish this? I do not particularly want a solution that uses any kind of file/fifo on the filesystem. The 2nd answer in this question seems to suggest that this is possible in ksh using val="${ cmd; }", as this construct apparently allows for capturing output without using subshells. So yes, I could technically switch to ksh, but I'd like to know if this is possible in bash.
This works in both bash (since release 4.3) and ksh93. To "bashify" it, replace all typeset with local in the functions, and the typeset in the global scope with declare (while keeping all the options!). I honestly don't know why Bash has so many different names for things that are just variations of typeset.

function stack_push
{
    typeset -n _stack="$1"
    typeset element="$2"

    _stack+=("$element")
}

function stack_pop
{
    typeset -n _stack="$1"
    typeset -n _retvar="$2"

    _retvar="${_stack[-1]}"
    unset _stack[-1]
}

typeset -a stack=()

stack_push stack "hello"
stack_push stack "world"

stack_pop stack value
printf '%s ' "$value"
stack_pop stack value
printf '%s\n' "$value"

Using a nameref in the function, you avoid eval (I've never had to use eval anywhere in any script!). By providing the stack_pop function with a place to store the popped value, you avoid the subshell. By avoiding the subshell, the stack_pop function can modify the value of the stack variable in the outer scope. The underscores in the local variables in the functions are there to avoid having a nameref with the same name as the variable that it references (Bash doesn't like it, ksh doesn't mind; see this question). In ksh you could write the stack_pop function like:

function stack_pop
{
    typeset -n _stack="$1"

    printf '%s' "${_stack[-1]}"
    unset _stack[-1]
}

And then call it with:

printf '%s %s\n' "${ stack_pop stack }" "${ stack_pop stack }"

(${ ... } is the same as $( ... ) but does not create a subshell.) But I'm not a big fan of this. IMHO, stack_pop should not have to send the data to stdout, and I should not have to call it with ${ ... } to get the data. I could possibly be more ok with my original stack_pop, and then add a stack_pop_print that does the above, if needed. For Bash, you could go with the stack_pop at the beginning of my post, and then have a stack_top_print that just prints the top element of the stack to stdout, without removing it (which it can't anyway, because it would most likely be running in a $( ... ) subshell).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335531", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67157/" ] }
335,596
The router on my network hands out an IPv6 prefix assigned by my ISP. This prefix is dynamic but "fairly sticky". I would like my machines to automatically pick up the prefix advertised in the RAs, but combine it with a user-specified local part rather than generating one randomly or based on the MAC address. Is there any easy way to do that?
There are two ways to do this. One is the easy way and one is the hard way. The easy way is to run a DHCPv6 server on your network and assign host addresses to each device yourself. Or let the server pick the host part; the DHCPv6 servers I have seen will keep the same host part even if the prefix changes. The hard way is to use ip token to set tokenized interface identifiers. This is described as: IPv6 tokenized interface identifier support is used for assigning well-known host-part addresses to nodes whilst still obtaining a global network prefix from Router advertisements. The primary target for tokenized identifiers are server platforms where addresses are usually manually configured, rather than using DHCPv6 or SLAAC. By using tokenized identifiers, hosts can still determine their network prefix by use of SLAAC, but more readily be automatically renumbered should their network prefix change. Tokenized IPv6 Identifiers are described in the draft: <draft-chown-6man-tokenised-ipv6-identifiers-02>. The reason this is the hard way is that while Linux includes this functionality, no Linux distribution I'm aware of includes support for making such a configuration persistent and applying it at boot time, as they do for manual or DHCP configured addresses. So it is probably not going to work very well for you, until some distribution does so. Note that it is now possible to configure IPv6 tokens in NetworkManager and systemd-networkd; more recent answers have specific configuration instructions. Finally, if your ISP is occasionally changing your prefix, consider using Unique Local Addresses within your network. This way, all of your devices will always have an address that will never change, with which they can talk to each other. Some IPv6-supporting home/SOHO routers (such as OpenWrt) have an option to enable ULA across the entire home network; if there are multiple routers in the home, this should be enabled on the router which connects to the ISP.
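As a quick sketch of the hard way (the token and interface name here are only examples, adjust them to your setup):

$ sudo ip token set ::1a:2b:3c:4d dev eth0
$ ip token get dev eth0
token ::1a:2b:3c:4d dev eth0

After the next router advertisement, SLAAC should combine the advertised prefix with the fixed host part ::1a:2b:3c:4d. As explained above, nothing makes this setting persist across reboots unless you arrange to reapply it at boot yourself.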
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/335596", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/122182/" ] }
335,641
dirmngr is used by python-apt and is recommended by gnupg and gpgsm. I tried to shutdown dirmngr as shared in the manpage but got this - └─[$] dirmngr -vv --shutdowndirmngr[9494]: error opening '/home/shirish/.gnupg/dirmngr_ldapservers.conf': No such file or directory Can somebody share how to shutdown it ? I tried --debug-level and other tricks but couldn't get it to shutdown. How do I shutdown dirmngr ? Update - [$] dpkg -l dirmngrDesired=Unknown/Install/Remove/Purge/Hold| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)||/ Name Version Architecture Description+++-==============================-====================-====================-=================================================================ii dirmngr 2.1.17-3 amd64 GNU privacy guard - network certificate management service Think it is installed perfectly inspite of the errors - [$] systemctl --user status dirmngr● dirmngr.service - GnuPG network certificate management daemon Loaded: loaded (/usr/lib/systemd/user/dirmngr.service; static; vendor preset: enabled) Active: active (running) since Sun 2017-01-08 14:46:21 IST; 5h 47min ago Docs: man:dirmngr(8) Main PID: 1203 (dirmngr) CGroup: /user.slice/user-1000.slice/[email protected]/dirmngr.service └─1203 /usr/bin/dirmngr --supervisedJan 08 14:46:40 debian dirmngr[1203]: DBG: chan_5 <- KEYSERVER --clear hkp://pgp.mit.eduJan 08 14:46:40 debian dirmngr[1203]: DBG: chan_5 -> OKJan 08 14:46:40 debian dirmngr[1203]: DBG: chan_5 <- KS_GET -- 0xDAD95197Jan 08 14:46:40 debian dirmngr[1203]: DBG: dns: libdns initializedJan 08 14:46:50 debian dirmngr[1203]: DBG: dns: getsrv(_hkp._tcp.pgp.mit.edu): Server indicated a failureJan 08 14:46:50 debian dirmngr[1203]: command 'KS_GET' failed: Server indicated a failure <Unspecified source>Jan 08 14:46:50 debian dirmngr[1203]: DBG: chan_5 -> ERR 219 Server indicated a failure <Unspecified source>Jan 08 14:46:50 debian dirmngr[1203]: DBG: chan_5 <- BYEJan 08 14:46:50 debian dirmngr[1203]: DBG: chan_5 -> OK closing connectionJan 08 14:46:50 debian dirmngr[1203]: handler for fd 5 terminate
With the current version of GnuPG you can kill dirmngr with gpgconf , like this: gpgconf --kill dirmngr
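If you want to stop all running GnuPG daemons (gpg-agent, dirmngr, ...) in one go, recent GnuPG versions should also accept:

gpgconf --kill all

Killing the daemon is safe either way: it is restarted on demand the next time a component needs it.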
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335641", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
335,645
I'm trying to install a package in R in Unix I used the following : install.packages("rminer", repos="http://cran.r-project.org", lib="~/R/libs/") I get the following error ERROR: compilation failed for package ‘xgboost’* removing ‘/net/zmf1/cb/5/mms140130/R/libs/xgboost’ERROR: dependency ‘xgboost’ is not available for package ‘rminer’* removing ‘/net/zmf1/cb/5/mms140130/R/libs/rminer’Warning messages:1: In install.packages("rminer", dependencies = TRUE, repos = "http://cran.r-project.org", : installation of package ‘xgboost’ had non-zero exit status2: In install.packages("rminer", dependencies = TRUE, repos = "http://cran.r-project.org", : installation of package ‘rminer’ had non-zero exit status what can I do to install the "rminer" package in R in Unix I also tried mkdir ~/R/mkdir ~/R/libs/echo 'R_LIBS_USER="~/R/library"' > $HOME/.RenvironR CMD INSTALL -l ~/R/libs/ rminer_1.4.2.tar.gz it gave me the following error ERROR: dependencies ‘plotrix’, ‘kknn’, ‘pls’, ‘mda’, ‘randomForest’, ‘adabag’, ‘party’, ‘Cubist ’, ‘kernlab’, ‘e1071’, ‘glmnet’, ‘xgboost’ are not available for package ‘rminer’* removing ‘/net/zmf1/cb/5/mms140130/R/libs/rminer’ appreciate your help Thanks
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335645", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205023/" ] }
335,672
I am using the KDE version. With other distributions this is a piece of cake. I can't see any options for additional keyboards in the keyboard settings or locale..
Open System Settings, choose "Input Devices", click on "Layouts", and add any languages you want. You can also set the "alternate shortcut" to whatever you like; pressing the shortcut keys you chose will switch between the languages you have added.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335672", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209222/" ] }
335,704
─[$] cat ~/.gitconfig[user] name = Shirish Agarwal email = [email protected][core] editor = leafpad excludesfiles = /home/shirish/.gitignore gitproxy = \"ssh\" for gitorious.org[merge] tool = meld[push] default = simple[color] ui = true status = auto branch = auto Now I want to put my git credentials for github, gitlab and gitorious so each time I do not have to lookup the credentials on the browser. How can this be done so it's automated ? I am running zsh
Using SSH The common approach for handling git authentication is to delegate it to SSH. Typically you set your SSH public key in the remote repository ( e.g. on GitHub ), and then you use that whenever you need to authenticate. You can use a key agent of course, either handled by your desktop environment or manually with ssh-agent and ssh-add . To avoid having to specify the username, you can configure that in SSH too, in ~/.ssh/config ; for example I have Host git.opendaylight.org User skitt and then I can clone using git clone ssh://git.opendaylight.org:29418/aaa (note the absence of a username there). Using gitcredentials If the SSH approach doesn't apply ( e.g. you're using a repository accessed over HTTPS), git does have its own way of handling credentials, using gitcredentials (and typically git-credential-store ). You specify your username using git config credential.${remote}.username yourusername and the credential helper using git config credential.helper store (specify --global if you want to use this setup everywhere). Then the first time you access a repository, git will ask for your password, and it will be stored (by default in ~/.git-credentials ). Subsequent accesses to the repository will use the stored password instead of asking you. Warning : This does store your credentials plaintext in your home directory. So it is inadvisable unless you understand what this means and are happy with the risk.
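As a rough sketch, the resulting entries in ~/.gitconfig might look like this (the usernames are placeholders to adapt):

[credential "https://github.com"]
	username = your-github-user
[credential "https://gitlab.com"]
	username = your-gitlab-user
[credential]
	helper = store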
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/335704", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
335,716
I'm on Ubuntu and I typed cat .bash_history | grep git and it returned Binary file (standard input) matches My bash_history does exist and there are many lines in it that starts with git . What caused it to display this error and how can I fix it?
You can use grep -a 'pattern' . from man grep page: -a, --text Process a binary file as if it were text; this is equivalent to the --binary-files=text option.
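Applied to the case in the question (grep can read the file directly, so cat is not needed):

$ grep -a git ~/.bash_history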
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/335716", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72537/" ] }
335,785
I'm trying to figure out a way to copy the current text in a command line to the clipboard WITHOUT touching the mouse. In other words, I need to select the text with the keyboard only.I found a half-way solution that may lead to the full solution: Ctrl+a - move to the beginning of the line. Ctrl+k - cuts the entire line. Ctrl+y - yanks the cut text back. Alternatively I can also use Ctrl+u to perform the first 2 steps. This of course works, but I'm trying to figure out where exactly is the cut text saved. Is there a way to access it without using Ctrl+y ?I'm aware of xclip and I even use it to pipe text straight to the clipboard, so I was thinking about piping the data saved by Ctrl+k to xclip , but not sure how to do it. The method I got so far is writing a script which uses xdotool to add echo to the beginning of the line and | zxc to the end of the line, and then hits enter ( zxc being a custom alias which basically pipes to xclip ). This also works, but it's not a really "clean" solution. I'm using Cshell if that makes any difference. EDIT: I don't want to use screen as a solution, forgot to mention that. Thanks!
If using xterm or a derivative you can setup key bindings to start and end a text selection, and save it as the X11 primary selection or a cutbuffer. See man xterm . For example, add to your ~/.Xdefaults : XTerm*VT100.Translations: #override\n\ <Key>KP_1: select-cursor-start() \ select-cursor-end(PRIMARY, CUT_BUFFER0)\n\ <Key>KP_2: start-cursor-extend() \ select-cursor-end(PRIMARY, CUT_BUFFER0)\n You can only have one XTerm*VT100.Translations entry. Update the X11 server with the new file contents with xrdb -merge ~/.Xdefaults . Start a new xterm . Now when you have some input at the command prompt, typing 1 on the numeric keypad will start selecting text at the current text cursor position, much like button 1 down on the mouse does. Move the cursor with the arrow keys then hit 2 on the numeric keypad and the intervening text is highlighted and copied to the primary selection and cutbuffer0. Obviously other more suitable keys and actions can be chosen. You can similarly paste the selection with bindings like insert-selection(PRIMARY) .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335785", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209295/" ] }
335,801
When I do which pip3 I get /usr/local/bin/pip3 but when I try to execute pip3 I get an error as follows: bash: /usr/bin/pip3: No such file or directory This is because I recently deleted that file. Now which command points to another version of pip3 that is located in /usr/local/bin but the shell still remembers the wrong path. How do I make it forget about that path? The which manual says which returns the pathnames of the files (or links) which would be executed in the current environment, had its arguments been given as commands in a strictly POSIX-conformant shell. It does this by searching the PATH for executable files matching the names of the arguments. It does not follow symbolic links. Both /usr/local/bin and /usr/bin are in my PATH variable, and /usr/local/bin/pip3 is not a symbolic link, it's an executable. So why doesn't it execute?
When you run a command in bash it will remember the location of that executable so it doesn't have to search the PATH again each time. So if you run the executable, then change the location, bash will still try to use the old location. You should be able to confirm this with hash -t pip3 which will show the old location. If you run hash -d pip3 it will tell bash to forget the old location and should find the new one next time you try.
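A short illustration of the remembered path and how to drop it (the paths match the question; your output may differ):

$ hash -t pip3
/usr/bin/pip3
$ hash -d pip3
$ which pip3
/usr/local/bin/pip3

You can also run hash -r to make bash forget all remembered locations at once.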
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/335801", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181600/" ] }
335,814
I installed Debian. Now I am worried about how my wifi adapter will work on it. I found a thread https://ubuntuforums.org/showthread.php?t=1806839 but wasn't able to install linux-firmware, and sudo apt-get install linux-headers-generic build-essential also didn't work. It doesn't know about linux-firmware. Here are the errors while installing the above things: root@debian:/home/love# sudo apt-get install linux-headers-generic build-essential Reading package lists... Done Building dependency tree Reading state information... Done Package linux-headers-generic is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'linux-headers-generic' has no installation candidate root@debian:/home/love# sudo apt-get install linux-firmware Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package linux-firmware While adding a repository the following error showed up: root@debian:/home/love# deb http://http.debian.net/debian/ wheezy main contrib non-free bash: deb: command not found Here are the results of some commands: root@debian:/home/love# uname -a Linux debian 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) x86_64 GNU/Linux root@debian:/home/love# lsusb Bus 002 Device 005: ID 138a:0005 Validity Sensors, Inc. VFS301 Fingerprint Reader Bus 002 Device 004: ID 413c:2107 Dell Computer Corp. Bus 002 Device 016: ID 2a70:f00e Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 004: ID 04d9:a0ac Holtek Semiconductor, Inc. Bus 001 Device 003: ID 0846:9041 NetGear, Inc. WNA1000M 802.11bgn [Realtek RTL8188CUS] Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub What should I do?
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/335814", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209324/" ] }
335,825
I asked a question yesterday, and in one of the comments it was mentioned that the service in question is a 'user service'. Now, how does one distinguish between a 'user service' and a system service?
According to this documentation, one can distinguish the unit file by its path. For instance, if the unit file is in one of the /etc/systemd/system , /usr/lib/systemd/system or /run/systemd/system directories, the unit belongs to the system. If it is in one of the ~/.config/systemd/user/* , /etc/systemd/user/* , $XDG_RUNTIME_DIR/systemd/user/* , /run/systemd/user/* , ~/.local/share/systemd/user/* or /usr/lib/systemd/user/* directories, it belongs to the user.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335825", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
335,832
I'm learning about decision making structures and I came across these codes: if [ -f ./myfile ]then cat ./myfileelse cat /home/user/myfilefi[ -f ./myfile ] &&cat ./myfile ||cat /home/user/myfile Both of them behave the same. Are there any advantages to using one way from the other?
No, constructions if A; then B; else C; fi and A && B || C are not equivalent . With if A; then B; else C; fi , command A is always evaluated and executed (at least an attempt to execute it is made) and then either command B or command C are evaluated and executed. With A && B || C , it's the same for commands A and B but different for C : command C is evaluated and executed if either A fails or B fails. In your example, suppose you chmod u-r ./myfile , then, despite [ -f ./myfile ] succeeds, you will cat /home/user/myfile My advice: use A && B or A || B all you want, this remains easy to read and understand and there is no trap. But if you mean if...then...else... then use if A; then B; else C; fi .
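A small demonstration of the trap, where A succeeds but B fails:

$ true && false || echo "C was run"
C was run
$ if true; then false; else echo "C was run"; fi
$

With if/then/else only A's exit status decides whether C runs; with && || a failure of B is enough to trigger C.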
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/335832", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/197342/" ] }
335,904
In my Linux server the lastlog and wtmp files have read permission set for other users (664). Do we really need to keep the read permission for other users, or can I change it to 660 or 640? Does it affect anything in the server, like some command execution? In one of the servers, even though lastlog is set to 000 (wtmp to 664), commands like last and who are working for non-root users.
For the file /var/log/wtmp , the read and write permission for the group utmp allows login and logout information to be written to the file. Changing it to read-only for the group will break this process. The read access for others is needed so that the file can be read when executing commands like last and who , which depend on the wtmp log. If this read access is revoked, these commands will throw Permission Denied errors unless executed by root or a sudo user.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335904", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171266/" ] }
335,930
I've tried to run the command apt-get update && apt-get upgrade && apt-get dist-upgrade as root, but nothing happens. I think that the problem is in incomplete apt sources. Am I right? What sources do I need to set?
Update your apt repositories to use stretch instead of jessie (This can be done manually with a text editor, but sed can be used to automatically update the file.) [user@debian-9 ~]$ sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list Please note : Debian 9 (Stretch) is marked testing for a reason. You may notice stability problems when using it.
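After editing, you can check the result and then run the upgrade, for example:

$ grep stretch /etc/apt/sources.list
$ sudo apt-get update && sudo apt-get upgrade && sudo apt-get dist-upgrade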
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335930", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/202490/" ] }
335,944
Yesterday, I was upgrading packages and came across NEWS.gz in netbase 5.4 : netbase (5.4) unstable; urgency=medium Stopped recommending ifupdown because nowadays there are options. For the time being it will still be installed by default because it has important priority. (Closes: #824884) What other options are there? I looked up the bug mentioned therein but found nothing about any other tools. Can somebody share what tools the DD/DM might be talking about? I use ifup and ifdown to clear any temporary ethernet networking issues and it works most of the time: $ sudo ifdown eth0 This clears all and any dhcp leases $ sudo ifup eth0 After half a minute to a minute, do this to make sure you get a new lease and are in business. At times, when I'm not using Internet for much, I do use $ ping debian.org in one of the VT (virtual Console Terminals) to make sure things are moving along.
The two other main networking tools nowadays on Linux are Network Manager and systemd-networkd . ifupdown isn't going away yet, the change in netbase is just cleanup: there's no reason for it to recommend networking tools (considering recommendations as defined in Debian Policy), and removing the recommendation is safe because default installations still end up with ifupdown installed. Cleaning such dependencies up will simplify possible future switches to other default tools.
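As an illustration, a minimal systemd-networkd setup for a DHCP-configured wired interface could look like this (file name and interface name are examples):

/etc/systemd/network/20-wired.network:

[Match]
Name=eth0

[Network]
DHCP=yes

enabled with systemctl enable --now systemd-networkd. NetworkManager covers the same ground through nmcli or its desktop applets.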
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335944", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
335,946
To the best of my understanding, all Linux processes are actually files. Is it possible to copy a running process from one machine to another? For example - copy a running tomcat server from one machine to another without having to restart the server.
To the best of my understanding, all linux process are actually files. You shouldn't take the metaphor too literally. Linux processes can indeed be accessed through a pseudo file system for debugging, monitoring and analysis purposes, but processes are more than just these files, and "copying" them from a source host's /proc file system to a target /proc file system is doomed. Is it possible to copy a running process between machines? One of the serious issues in moving a running process between hosts is how to handle the open file descriptors this process is using. If a process is reading or writing a file, this very file (or an exact clone) must be available on the target host. File descriptors related to sockets would be tricky to handle, as the IP address they are bound to will likely change from one host to the other. Processes sharing memory segments with other ones would cease to do so after a migration. PID clashes might also happen: if a running process has the same pid as the incoming one, one of them will need to be changed. Parent-child relationships will be lost, and I have just scratched the surface of the potential problems. Despite these issues, there are technical solutions providing that functionality, called " Application checkpointing ", like DMTCP and CRIU . This is similar to what is used with hypervisors like VMWare, VirtualBox, Oracle VM and others when they do virtual machine live migration / teleportation . With virtual machines, the job is actually "simpler" as the whole OS is moved, including the file descriptors, the file systems, the memory, the network and other devices, etc.
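To give an idea of what checkpointing looks like in practice, here is a rough CRIU sketch (the PID and paths are examples, and all the caveats above still apply):

# criu dump -t 1234 -D /tmp/ckpt --shell-job
# ... copy /tmp/ckpt to the target host ...
# criu restore -D /tmp/ckpt --shell-job

This can only succeed if the target host can satisfy everything the process had open: the same files at the same paths, a compatible kernel, no PID clash, and so on.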
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/335946", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209416/" ] }
336,017
How to detect if isolcpus is activated and on which cpus, when for example you connect for the first time on a server. Conditions: not spawning any process to see where it will be migrated. The use case is that isolcpus=1-7 on a 6 cores i7, seems to not activate isolcpus at boot, and i would like to know if its possible from /proc/ , /sys or any kernel internals which can be read in userspace, to provide a clear status of activation of isolcpus and which cpu are concerned.Or even read active setting of the scheduler which is the first concerned by isolcpus. Consider the uptime is so big, that dmesg is no more displaying boot log to detect any error at startup.Basic answer like " look at kernel cmd line " will not be accepted :)
What you look for should be found inside this virtual file: /sys/devices/system/cpu/isolated and the reverse in /sys/devices/system/cpu/present // Thanks to John Zwinck From drivers/base/cpu.c we see that the source displayed is the kernel variable cpu_isolated_map : static ssize_t print_cpus_isolated(struct device *dev, n = scnprintf(buf, len, "%*pbl\n", cpumask_pr_args(cpu_isolated_map));...static DEVICE_ATTR(isolated, 0444, print_cpus_isolated, NULL); and cpu_isolated_map is exactly what gets set by kernel/sched/core.c at boot: /* Setup the mask of cpus configured for isolated domains */static int __init isolated_cpu_setup(char *str){ int ret; alloc_bootmem_cpumask_var(&cpu_isolated_map); ret = cpulist_parse(str, cpu_isolated_map); if (ret) { pr_err("sched: Error, all isolcpus= values must be between 0 and %d\n", nr_cpu_ids); return 0; } return 1;} But as you observed, someone could have modified the affinity of processes, including daemon-spawned ones, cron , systemd and so on. If that happens, new processes will be spawned inheriting the modified affinity mask, not the one set by isolcpus . So the above will give you isolcpus as you requested, but that might still not be helpful. Supposing that you find out that isolcpus has been issued, but has not "taken", this unwanted behaviour could be derived by some process realizing that it is bound to only CPU=0 , believing it is in monoprocessor mode by mistake, and helpfully attempting to "set things right" by resetting the affinity mask. If that was the case, you might try and isolate CPUS 0-5 instead of 1-6, and see whether this happens to work.
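So, on a box where isolcpus has taken effect you would expect something like this (the values shown are illustrative):

$ cat /sys/devices/system/cpu/isolated
1-7
$ cat /sys/devices/system/cpu/present
0-7

As a cross-check that the setting has not been overridden from userspace, the affinity of PID 1 should exclude the isolated CPUs:

$ taskset -cp 1
pid 1's current affinity list: 0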
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/336017", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/128153/" ] }
336,045
I have 10 columns in my entry, for example, and I want my output with 5 columns. More specifically, I want to join columns 1 and 2, columns 3 and 4, columns 5 and 6, and so on. My input is like: ID01 1 2 0 1 2 0 1 0 ID02 1 0 1 0 1 0 1 0 ID03 2 1 0 2 1 0 2 1 ID04 5 0 5 0 5 2 1 2 And I want my output like: ID01 12 01 20 10 ID02 10 10 10 10 ID03 21 02 10 21 ID04 50 50 52 12 To do this, I tried: perl -alne 'print join "", $F[0], split(" ", $F[1])' data But I do not know how to join the columns two by two. My real data has hundreds of thousands of columns.
Remove every other space: perl -pe 's/ (\S+) / $1/g' \S stands for "not whitespace".
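Testing it on a line from the question:

$ echo 'ID01 1 2 0 1 2 0 1 0' | perl -pe 's/ (\S+) / $1/g'
ID01 12 01 20 10

Because each match consumes the spaces on both sides of a field but only puts the leading one back, and scanning resumes after the replacement, every second space on the line is removed, which pairs the columns up.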
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/336045", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171751/" ] }
336,071
I have user that have a symlink to somewhere in the computer like this : # ls -ltr /home/guirec0total 4lrwxrwxrwx 1 root root 24 Jan 9 17:56 int -> /disk2/clients/optik/intdrwxr-xr-x 2 guirec0 guirec0 4096 Jan 9 18:13 blabla I use sftp to connect to this user. I have this setup in /etc/ssh/sshd_config : Subsystem sftp internal-sftpMatch Group sftpgroup ChrootDirectory %h ForceCommand internal-sftp X11Forwarding no AllowTcpForwarding no So the root is changed and /disk2/clients/optik/int is not the same for root and for guirec0 . Is there a way to allow access /disk2/clients/optik/int for guirec0 ? The goal of chrooting is to restrict access of the users.
Use bind mount instead of symlink: rm /home/guirec0/intmkdir /home/guirec0/intmount --bind /disk2/clients/optik/int /home/guirec0/int
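To make the bind mount persist across reboots, a matching line in /etc/fstab would be:

/disk2/clients/optik/int /home/guirec0/int none bind 0 0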
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/336071", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92018/" ] }
336,108
Is it possible to use a pipeline command as an argument to find's -exec option? This means, I want to do something like this: find . -name CMakeLists* -exec cat '{}' | grep lib \; where I am trying to execute cat '{}' | grep lib for each file,but this doesn't work. Neither does quoting work. Does anyone have any advice? Update: The particular question was answered. Now, is there a way for a generic find <path> -type f -name <name> -exec <pipeline-command> pattern to work?
find . -type f -name "CMakeLists*" -exec grep lib /dev/null {} + This finds files in the current directory whose basename begins or is the string CMakeLists . The argument is escaped (double quoted) so that the shell doesn't expand it before find runs. There is no need to add cat with a pipe to grep --- it's a useless process with useless IO, here. Adding /dev/null insures that grep will report the filename along with the matching line(s) when there is more than one file to match. By using {} + as the terminating sequence to the -exec argument, multiple filenames are passed to each invocation of the grep command. Had we used {} \; then a grep process would have been spawned for every file found. Needless process instantiation is expensive if done hundreds or thousands of times. To use a pipe with a find -exec argument, you need to invoke the shell. A contrived example could be to grep for the string "one" but only if the string "two" isn't present too. This could be done as: find . -type f -name "CMakeLists*" -exec sh -c 'grep one "$@"|grep -v two' sh {} + This is predicated on the comments, below, by @muru, @Serg and @Scott, with thanks.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/336108", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/200883/" ] }
336,143
To output all lines into a file /tmp/ps.txt: $ ps -e >/tmp/ps.txt To count them with wc -l: $ wc -l /tmp/ps.txt 172 To count them without writing a file: $ ps -e | wc -l 173 Why does ps -e | wc -l get one more line? I don't think ctrl-d is the right explanation for my question: $ echo "test" | wc -l 1 Please try it in your terminal; if the ctrl-d explanation were right, it would yield 2.
The extra line is the wc program that is running. It is executed at the same time as ps, not after that.
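You can see this by taking the snapshot inside the pipeline itself, for instance:

$ ps -e | tee /tmp/snapshot | wc -l
$ grep -Ew 'tee|wc' /tmp/snapshot

The grep should list both the tee and the wc processes: all members of a pipeline are started together, so they already exist at the moment ps collects its list.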
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/336143", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102745/" ] }
336,149
I don't know how to make my code working for more lines. This is the original file t.txt: Hello EarthHello Mars But I get the following output: Mars Hello Earth Hello My expected output is this: Earth HelloMars Hello In general , I want to keep line order same, but reverse words. For general case input would be this: one two four five and expected output is this: two onefive four My code is the following: #!/bin/bashtext=$(cat $1)arr=($text)al=${#arr[@]}let al="al-1"while (($al >= 0))do echo -n "${arr[al]}" echo -n " " let al="al - 1"doneecho
All examples presented below work for general case where there's an arbitrary number of words on the line. The essential idea is the same everywhere - we have to read the file line by line and print the words in reverse. AWK facilitates this the best because it already has all the necessary tools for text processing done programmatically, and is most portable - it can be used with any awk derivative, and most systems have it. Python also has quite a few good utilities for string processing that allow us to do the job. It's a tool for more modern systems, I'd say. Bash, IMHO, is the least desirable approach, due to portability, potential hazards, and the amount of "trickery" that needs to be done. AWK $ awk '{for(i=NF;i>=1;i--) printf "%s ", $i;print ""}' input.txt Earth Hello Mars Hello The way this works is fairly simple: we're looping backwards through each word on the line, printing words separated with space - that's done by printf "%s ",$i function (for printing formatted strings) and for-loop. NF variable corresponds to number of fields. The default field separator is assumed to be space. We start by setting a throw-away variable i to the number of words, and on each iteration, decrement the variable. Thus, if there's 3 words on line, we print field $3, then $2, and $1. After the last pass, variable i becomes 0, the condition i>=1 becomes false, and the loop terminates. To prevent lines being spliced together, we insert a newline using print "" . AWK code blocks {} are processed for each line in this case (if there's a matching condition in front of code block, it depends on the match for the code block to be executed or not). Python For those who like alternative solutions, here's python: $ python -c "import sys;print '\n'.join([ ' '.join(line.split()[::-1]) for line in sys.stdin ])" < input.txt Earth HelloMars Hello The idea here is slightly different. < operator tells your current shell to redirect input.txt into python's stdin stream, and we read that line by line. Here we use list comprehension to create a list of lines - that's what the [ ' '.join(line.split()[::-1]) for line in sys.stdin ] part does. The part ' '.join(line.split()[::-1]) takes a line, splits it into list of words, reverses the list via [::-1] , and then ' '.join() creates a space-separated string out of it. We have as a result a list of larger strings. Finally, '\n'.join() makes an even larger string, with each item joined via newline. In short, this method is basically a "break and rebuild" approach. BASH #!/bin/bashwhile IFS= read -r linedo bash -c 'i=$#; while [ $i -gt 0 ];do printf "%s " ${!i}; i=$(($i-1)); done' sh $line echo done < input.txt And a test run: $ ./reverse_words.sh Earth Hello Mars Hello Bash itself doesn't have strong text processing capabilities. What happens here is that we read the file line by line via while IFS= read -r linedo # some codedone < text.txt This is a frequent technique and is widely used in shell scripting to read output of a command or a text file line-by-line. Each line is stored into $line variable. On the inside we have bash -c 'i=$#; while [ $i -gt 0 ];do printf "%s " ${!i}; i=$(($i-1)); done' sh $line Here we use bash with -c flag to run a set of commands enclosed into single-quotes. When -c is used, bash will start assigning command-line arguments into variables starting with $0 . Because that $0 is traditionally used to signify a program's name, I use sh dummy variable first. 
The unquoted $line will be broken down into individual items due to the behavior known as word-splitting. Word splitting is often undesirable in shell scripting, and you will often hear people say "always quote your variables, like "$foo"." In this case, however, word-splitting is desirable for processing simple text. If your text contains something like $var , it might break this approach. For this, and several other reasons, I'd say python and awk approach are better. As for the inner code, it's also simple: the unquoted $line is split into words and is passed to the inner code for processing. We take the number of arguments $# , store it into the throw away variable i , and again - print out each item using something known as variable indirection - that's the ${!i} part (note that this is bashism - it's not available in other shells). And again, we use printf "%s " to print out each word, space-separated. Once that's done, echo will append a newline. Essentially this approach is a mix of both awk and python. We read the file line by line, but divide and conquer each line, using several of bash 's features to do the job. A simpler variation can be done with the GNU tac command, and again playing with word splitting. tac is used to reverse lines of input stream or file, but in this case we specify -s " " to use space as separator. Thus, var will contain a newline-separated list of words in reverse order, but due to $var not being quoted, newline will be substituted with space. Trickery, and again not the most reliable, but works. #!/bin/bashwhile IFS= read -r linedo var=$(tac -s " " <<< "$line" ) echo $vardone < input.txt Test runs: And here's the 3 methods with arbitrary lines of input $ cat input.txt Hello Earth end of lineHello Mars another end of lineabra cadabra magic$ ./reverse_words.sh line of end Earth Hello line of end another Mars Hello magic cadabra abra $ python -c "import sys;print '\n'.join([ ' '.join(line.split()[::-1]) for line in sys.stdin ])" < input.txt line of end Earth Helloline of end another Mars Hellomagic cadabra abra$ awk '{for(i=NF;i>=1;i--) printf "%s ", $i;print ""}' input.txtline of end Earth Hello line of end another Mars Hello magic cadabra abra Extra: perl and ruby Same idea as with python - we split each line into array of words, reverse the array, and print it out. $ perl -lane '@r=reverse(@F); print "@r"' input.txt line of end Earth Helloline of end another Mars Hellomagic cadabra abra$ ruby -ne 'puts $_.chomp.split().reverse.join(" ")' < input.txt line of end Earth Helloline of end another Mars Hellomagic cadabra abra
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/336149", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209562/" ] }
336,192
Someone once told me there was a standard when it comes to parsing options on the command line. Something like: ./script.sh [options/flags] [command or file] I understand that when parsing a shell script this makes life easier since you can shift through the flags and anything left can be accessed by $@ or $* , but is there an actual written standard? Most programmes I've looked at follow this standard, but there are some exceptions, eg ls where ls -l /path , ls /path -l and ls /path -l /path2 are all acceptable.
The POSIX Base Definitions has a section on " Utility Conventions " which applies to the POSIX base utilities. The standard getopts utility and the getopt() system interface ("C function") adhere to the guidelines (further down on the page linked to above) when parsing the command line in a shell script or C program. Specifically, for getopts (as an example): When the end of options is encountered, the getopts utility shall exit with a return value greater than zero; the shell variable OPTIND shall be set to the index of the first operand, or the value "$#" +1 if there are no operands; the name variable shall be set to the question-mark ( ? ) character. Any of the following shall identify the end of options: the first -- argument that is not an option-argument, finding an argument that is not an option-argument and does not begin with a - , or encountering an error. What this basically says is that options shall come first, and then operands (your "command or file"). Doing it any other way would render using getopts or getopt() impossible, and would additionally likely confuse users used to the POSIX way of specifying options and operands for a command. Note that the abovementioned standard only applies to POSIX utilities, but as such it sets a precedent for Unix utilities in general. Non-standard Unix utilities can choose to follow or to break this, obviously. For example, the GNU coreutils, even though they implement the standard utilities, allow for things like $ ls Documents/ -l if the POSIXLY_CORRECT environment variable is not set, whereas the BSD versions of the same utilities do not. This has the consequence that the following works as expected (if you expect POSIX behaviour, that is) on a BSD system: $ touch -- 'test' '-l' $ ls -l test -l -rw-r--r-- 1 kk kk 0 Jan 11 16:44 -l -rw-r--r-- 1 kk kk 0 Jan 11 16:44 test But on a GNU coreutils system, you get $ ls -l test -l -rw-r--r-- 1 kk kk 0 Jan 11 16:44 test However: $ env POSIXLY_CORRECT=1 ls -l test -l and $ ls -l -- test -l will do the "right" thing on a GNU system too.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/336192", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
336,219
I updated Gnome to the newest version and I realized that wayland has been installed as the default window manager. I have many problems with it, so how do I go to back to X11? I'm using Arch. //EDIT Problem solved. I just delete old x11 config and create new one :) echo $XDG_SESSION_TYPE returns X11 :)
From the Arch wiki : Use Xorg backend The Wayland backend is used by default and the Xorg backend is used only if the Wayland backend cannot be started. As the Wayland backend has been reported to cause problems for some users, use of the Xorg backend may be necessary. To use the Xorg backend by default, edit the /etc/gdm/custom.conf file and uncomment the following line: #WaylandEnable=false I hope it is current.
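After changing the file, restart the display manager (or simply reboot):

$ sudo systemctl restart gdm

and verify with echo $XDG_SESSION_TYPE, which should now report x11.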
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/336219", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150875/" ] }
336,224
I have a bash application that is producing some result, and I'd like to echo the result to either stdout or to a user chosen file. Because I also echo other interactive messages going to the screen, requiring the user to explicitly use the > redirection when he wants to echo the result to a file is not an option (*), as those messages would also appear in the file. Right now I have a solution, but it's ugly. if [ -z $outfile ]then echo "$outbuf" # Write output buffer to the screen (stdout)else echo "$outbuf" > $outfile # Write output buffer to filefi I tried to have the variable $outfile to be equal to stdout , to &1 and perhaps something else but it would just write to file having that name and not actually to stdout. Is there a more elegant solution? (*) I could cheat and use stderr for that purpose, but I think it's also quite ugly, isn't it?
First, you should avoid echo to output arbitrary data . On systems other than Linux-based ones, you could use: logfile=/dev/stdout For Linux, that works for some types of stdout, but that fails when stdout is a socket or worse, if stdout is a regular file, that would truncate that file instead of writing at the current position stdout is in the file. Other than that, in Bourne-like shell, there's no way to have conditional redirection, though you could use eval : eval 'printf "%s\n" "$buf" '${logfile:+'> "$logfile"'} Instead of a variable , you could use a dedicated file descriptor : exec 3>&1[ -z "$logfile" ] || exec 3> "$logfile" printf '%s\n' "$buf" >&3 A (small) downside with that is that except in ksh , that fd 3 would be leaked to every command run in the script. With zsh , you can do sysopen -wu 3 -o cloexec -- "$logfile" || exit in place of exec 3> "$logfile" but bash has no equivalent. Another common idiom is to use a function like: log() { if [ -n "$logfile" ]; then printf '%s\n' "$@" >> "$logfile" else printf '%s\n' "$@" fi}log "$buf"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/336224", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180653/" ] }
336,304
Consider this: $ ssh localhost bash -c 'export foo=bar'terdon@localhost's password: declare -x DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/1000/bus"declare -x HOME="/home/terdon"declare -x LOGNAME="terdon"declare -x MAIL="/var/spool/mail/terdon"declare -x OLDPWDdeclare -x PATH="/usr/bin:/bin:/usr/sbin:/sbin"declare -x PWD="/home/terdon"declare -x SHELL="/bin/bash"declare -x SHLVL="2"declare -x SSH_CLIENT="::1 55858 22"declare -x SSH_CONNECTION="::1 55858 ::1 22"declare -x USER="terdon"declare -x XDG_RUNTIME_DIR="/run/user/1000"declare -x XDG_SESSION_ID="c5"declare -x _="/usr/bin/bash" Why does exporting a variable within a bash -c session run via ssh result in that list of declare -x commands (the list of currently exported variables, as far as I can tell)? Running the same thing without the bash -c doesn't do that: $ ssh localhost 'export foo=bar'terdon@localhost's password: $ Nor does it happen if we don't export : $ ssh localhost bash -c 'foo=bar'terdon@localhost's password: $ I tested this by sshing from one Ubuntu machine to another (both running bash 4.3.11) and on an Arch machine, sshing to itself as shown above (bash version 4.4.5). What's going on here? Why does exporting a variable inside a bash -c call produce this output?
When you run a command through ssh , it is run by calling your $SHELL with the -c flag: -c If the -c option is present, then commands are read from the first non-option argument command_string. If there are arguments after the command_string, the first argument is assigned to $0 and any remaining arguments are assigned to the positional parameters. So, ssh remote_host "bash -c foo" will actually run: /bin/your_shell -c 'bash -c foo' Now, because the command you are running ( export foo=bar ) contains spaces and is not properly quoted to form a whole, the export is taken as the command to be run and the rest are saved in the positional parameters array. This means that export is run and foo=bar is passed to it as $0 . The final result is the same as running /bin/your_shell -c 'bash -c export' The correct command would be: ssh remote_host "bash -c 'export foo=bar'"
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/336304", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22222/" ] }
336,318
Just finished installing ubuntu 16.04.1 desktop version. Now I am trying to install m4. So far installed m4 as follows. Downloaded m4-1.4.18.tar.gz tar -xvzf m4-1.4.18.tar.gz cd m4-1.4.18 ./configure --prefix=/usr/local/m4 make sudo make install Now when I type: m4 --version It still says: The program 'm4' is currently not installed.... What step am I missing? Note : I do not have internet access on this machine.
Normally on Ubuntu you'd just do apt-get install m4 to install m4 (which assumes you have an Internet connection), or download the m4 package and copy it across. The way you've gone about things, m4 has been installed in /usr/local/m4/bin , so you need to run /usr/local/m4/bin/m4 or add /usr/local/m4/bin to your PATH . Alternatively, you can re-install, using ./configure && make && sudo make install which will install m4 to /usr/local/bin , which should already be on your PATH .
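If you keep the /usr/local/m4 prefix, extending PATH (for example in ~/.profile or ~/.bashrc) is enough:

export PATH=/usr/local/m4/bin:$PATH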
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/336318", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209660/" ] }
336,319
I want to get the first word in every line from a file. Unfortunately a lot of lines begin with space(s). So I try to get the first word with the following: awk -F'[ \t]+' '{print $1}' < MyFile.txt , but it's not working. I try this: echo " some string: here" | awk -F'[ \t]+' '{print $1}' and the result is a blank line (I think that it prints an empty string). So why is this not working? I want to make it work with the awk command and an explicitly passed delimiter (for educational purposes). Thanks in advance.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/336319", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/185704/" ] }
336,357
I want to apply grep to a particular column of the output of the column command. myfile.txt: 3,ST,ST01,3,3,856 3,ST,ST02,4,9,0234 6,N1,N101,2,3,ST 6,N1,N102,1,60,Comcast 6,N1,N103,1,2,92 My command: column -s, -t < myfile.txt | grep -w "ST" Here I want to grep the pattern ST in the 2nd column only. How can I do this? Expected result: 3 ST ST01 3 3 856 3 ST ST02 4 9 0234
Without doing some fancy RegEx where you count commas, you're better off using awk for this problem. awk -F, '$2=="ST"' The -F, parameter specifies the delimiter, which is set to a comma for your data. $2 refers to the second column, which is what you want to match on. "ST" is the value you want to match.
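If you still want the aligned layout of your original command, filter first and format afterwards:

$ awk -F, '$2=="ST"' myfile.txt | column -s, -t
3  ST  ST01  3  3  856
3  ST  ST02  4  9  0234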
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/336357", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/183088/" ] }
336,368
I realize the default of the Nemo's right-click "Open in Terminal" is to launch "gnome-terminal", however, my installation is opening "xfce4-terminal" instead. A while back when "gnome-terminal" was broken, I installed "xfce4-terminal" as an alternative. I configured the system-wide defaults to call "xfce4-terminal" for the terminal. After the issue with Gnome-terminal was resolved, I moved the system-wide defaults back to Gnome-terminal. Nautilus starting using Gnome-terminal again, however Nemo continues to only launch "xfce4-terminal". I uninstalled "xfce4-terminal" then the "Open in a Terminal" feature of Nemo stopped working. In attempts to resolve this issue I have done the following: ReInstalled Ubuntu 16.04 Purged and reinstalled Nemo Nemo still will only launch "xfce4-terminal". It appears to be a problem with in my home folder's Nemo configuration or some other per user cache. Creating a new user, and Nemo properly launches "Gnome-Terminal". Can someone help me with where to check and fix Nemo's functionality in my '/home/username` settings. Is there some type of editible configuration to check what happens when clicking on the "Open in Terminal" function?
Nemo uses the gsettings configuration. This restored the intended behavior: $ gsettings set org.gnome.desktop.default-applications.terminal exec gnome-terminal On Ubuntu it's different for some reason: $ gsettings set org.cinnamon.desktop.default-applications.terminal exec gnome-terminal
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/336368", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81664/" ] }
336,392
From inside a Debian docker container running jessie I get vi blahbash: vi: command not found so naturally I reach for my install command sudo apt-get install vimReading package lists... DoneBuilding dependency treeReading state information... DoneE: Unable to locate package vim while searching for some traction I came across these suggestions with various outputs cat /etc/apt/sources.listdeb http://deb.debian.org/debian jessie maindeb http://deb.debian.org/debian jessie-updates maindeb http://security.debian.org jessie/updates main apt-get install software-properties-commonReading package lists... DoneBuilding dependency treeReading state information... DoneE: Unable to locate package software-properties-common apt-get install python-software-propertiesReading package lists... DoneBuilding dependency treeReading state information... DoneE: Unable to locate package python-software-properties apt-get install apt-fileReading package lists... DoneBuilding dependency treeReading state information... DoneE: Unable to locate package apt-file since this server is the docker container for a mongo image it intentionally is a bare bones Debian installation ... installing vi is just to play about during development
I found this solution apt-get updateapt-get install apt-fileapt-file updateapt-get install vim # now finally this will work !!! here is a copy N paste version of above apt-get update && apt-get install apt-file -y && apt-file update && apt-get install vim -y Alternative approach ... if you simply need to create a new file do this when no editor is available cat > myfile(use terminal to copy/paste)^D
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/336392", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10949/" ] }
336,410
So what I have is 2 directories that have the same files, except that directory a is today's data and directory b is yesterday's data. What I want to do is compare the files and output the results into 3 columns, which will be the file name, whether or not the files are identical, and how many days the files have been the same. What I have so far is: ls ./dropzone_current > files.txtis_identical=falsefilename="files.txt"while read -r linedo name="$line" declare -i counter diff -qs ./dropzone_current/$name ./dropzone_backup/$name if [ $? -ne 0 ] then is_identical=false counter=0 printf '%s\t%s\t%s\n' "$name" "$is_identical" "$counter" >> test.txt else counter=$((counter + 1)) is_identical=true printf '%s\t%s\t%s\n' "$name" "$is_identical" "$counter" >> test.txt fidone < "$filename" Essentially, everything works except the counter. I need the counter to be unique to each file name that's being compared, and then update every time the script is run (once a day) but I haven't been able to figure out how to do that.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/336410", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209717/" ] }
336,443
Question just out of curiosity. According to the RHEL7 System Administration Guide ( https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/sect-Managing_Services_with_systemd-Services.html#sect-Managing_Services_with_systemd-Services-List ) the following command should list all loaded units: systemctl list-units --type service --all But in fact it doesn't list all loaded services, only those which are enabled OR active OR (active AND enabled). For example: [root@roman-centos system]# systemctl list-units --type service --all | grep httpd [root@roman-centos system]# systemctl status httpd ● httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled) Active: inactive (dead) Docs: man:httpd(8) man:apachectl(8) Is it the way it is supposed to be, or might it be a documentation/code bug?
"Loaded" means that systemd has read the unit from disk into memory. This will happen whenever you "look" at the unit, e.g. with status, when the unit is started, or when the unit is a dependency of another unit that is loaded. The misunderstanding here is that 'systemctl status' will always show the unit as "loaded", because systemd loads the unit to display the status. If the unit is not needed for anything else, it will be unloaded immediately after. If you want to display a list of all unit files found on disk, use 'systemctl list-unit-files'.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/336443", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205738/" ] }
336,462
The output yielded by df is consistent with lsblk: debian8@hwy:~$ df -h /dev/sda1 Filesystem Size Used Avail Use% Mounted on /dev/sda1 47G 34G 14G 72% /media/xp_c debian8@hwy:~$ df -h /dev/sda3 Filesystem Size Used Avail Use% Mounted on /dev/sda3 92G 36G 52G 42% / The output yielded by df is inconsistent with lsblk: debian8@hwy:~$ df -h /dev/sda4 Filesystem Size Used Avail Use% Mounted on udev 10M 0 10M 0% /dev debian8@hwy:~$ df -h /dev/sda5 Filesystem Size Used Avail Use% Mounted on udev 10M 0 10M 0% /dev debian8@hwy:~$ df -h /dev/sda6 Filesystem Size Used Avail Use% Mounted on udev 10M 0 10M 0% /dev debian8@hwy:~$ df -h /dev/sda7 Filesystem Size Used Avail Use% Mounted on udev 10M 0 10M 0% /dev How to explain the output of lsblk and df -h ? Sometimes df can't get the right info about the disk. sudo fdisk -l Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x3b2662b1 Device Boot Start End Sectors Size Id Type /dev/sda1 * 2048 97851391 97849344 46.7G 7 HPFS/NTFS/exFAT /dev/sda2 97851392 195508223 97656832 46.6G 83 Linux /dev/sda3 195508224 390819839 195311616 93.1G 83 Linux /dev/sda4 390821886 449411071 58589186 28G 5 Extended /dev/sda5 390821888 400584703 9762816 4.7G 82 Linux swap / Solaris /dev/sda6 400586752 439646207 39059456 18.6G b W95 FAT32 /dev/sda7 439648256 449411071 9762816 4.7G 7 HPFS/NTFS/exFAT
There are actually two problems. The first is the obvious one that others have pointed out: lsblk lists disks by device and df works on mounted filesystems. So lsblk /dev/sda3 is roughly equivalent to df -h / in your case, since /dev/sda3 is mounted on /. Except that it's not. Because lsblk lists the size of the partition while df lists the size of the filesystem. The difference (93.1GB vs 92GB for sda3 in your example) is a combination of unusable space (if any) and filesystem overhead. Some amount of space needs to go to keeping track of the filesystem itself rather than the contents of the files it stores.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/336462", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102745/" ] }
336,521
I am always surprised that in the folder /bin there is a [ program. Is this what is called when we are doing something like: if [ something ] ? By calling the [ program explicitly in a shell, it asks for a corresponding ] , and when I provide the closing bracket it seems to do nothing, no matter what I insert between the brackets. Needless to say, the usual ways of getting help for a program do not work, i.e. neither man [ nor [ --help works.
The [ command's job is to evaluate test expressions. It returns with a 0 exit status (that means true ) when the expression resolves to true and something else (which means false ) otherwise. It's not that it does nothing, it's just that its outcome is to be found in its exit status. In a shell, you can find out about the exit status of the last command in $? for Bourne-like shells or $status in most other shells (fish/rc/es/csh/tcsh...).
$ [ a = a ]
$ echo "$?"
0
$ [ a = b ]
$ echo "$?"
1
In other languages like perl , the exit status is returned for instance in the return value of system() :
$ perl -le 'print system("[", "a", "=", "a", "]")'
0
Note that all modern Bourne-like shells (and fish ) have a built-in [ command. The one in /bin would typically only be executed when you use another shell or when you do things like env [ foo = bar ] or find . -exec [ -f {} ] \; -print or that perl command above... The [ command is also known by the test name. When called as test , it doesn't require a closing ] argument. While your system may not have a man page for [ , it probably has one for test . But again, note that it would document the /bin/[ or /bin/test implementation. To know about the [ builtin in your shell, you should read the documentation for your shell instead. For more information about the history of that utility and the difference with the [[...]] ksh test expression, you may want to have a look at this other Q&A here .
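Since if in a shell simply runs a command and branches on its exit status, [ plugs straight into it. A small sketch:
if [ "$(id -u)" -eq 0 ]; then echo 'running as root'; else echo 'not root'; fi
This behaves the same whether it ends up running your shell's builtin [ or /bin/[ .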
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/336521", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180653/" ] }
336,533
During installation I got a few pop-ups stating that some hardware needs non-free firmware files to operate. I didn't have those files at the time, so I continued with the installation, but now my system is not able to recognize wifi. I tried installing firmware, but nothing I found from searching worked at all. The missing files are: rtlwifi/rtl8723befw.bin rtlwifi/rtl8723befw.bin Now that I have completed the installation, how can I install the firmware?
You need to enable non-free first: edit /etc/apt/sources.list , and at the end of lines ending with main , add contrib non-free . You'll end up with something like deb http://ftp.fr.debian.org/debian jessie main contrib non-free etc. Then update your repositories and install firmware-realtek : apt-get update && apt-get install firmware-realtek That will provide the necessary firmware files.
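If you prefer not to edit the file by hand, a one-liner can do the same; a sketch that assumes your repository lines really do end in main (check the file afterwards), run as root:
sed -i 's/ main$/ main contrib non-free/' /etc/apt/sources.list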
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/336533", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209820/" ] }
336,566
I am currently using Arch Linux as my OS on my desktop. When I look at my time, it is 22:38, when the time clearly is around 17:08. When I invoke the command timedatectl , I get: Local time: Wed 2017-01-11 22:37:43 ISTUniversal time: Wed 2017-01-11 17:07:43 UTC RTC time: Wed 2017-01-11 17:07:41 Time zone: Asia/Kolkata (IST, +0530)Network time on: yesNTP synchronized: noRTC in local TZ: no Update When I run sudo systemctl status systemd-timesyncd , I get: ● systemd-timesyncd.service - Network Time Synchronization Loaded: loaded (/usr/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled) Active: active (running) since Wed 2017-01-11 00:49:36 IST; 1 day 1h ago Docs: man:systemd-timesyncd.service(8) Main PID: 31123 (systemd-timesyn) Status: "Idle." Tasks: 2 (limit: 4915) CGroup: /system.slice/systemd-timesyncd.service └─31123 /usr/lib/systemd/systemd-timesyncdJan 12 01:39:42 sharan-pc systemd-timesyncd[31123]: Timed out waiting for reply from 5.9.78.71:123 (1.arch.pool.ntp.org).Jan 12 01:39:53 sharan-pc systemd-timesyncd[31123]: Timed out waiting for reply from 192.53.103.108:123 (1.arch.pool.ntp.org).Jan 12 01:40:03 sharan-pc systemd-timesyncd[31123]: Timed out waiting for reply from 139.59.19.184:123 (2.arch.pool.ntp.org).Jan 12 01:40:13 sharan-pc systemd-timesyncd[31123]: Timed out waiting for reply from 139.59.45.40:123 (2.arch.pool.ntp.org).Jan 12 01:40:24 sharan-pc systemd-timesyncd[31123]: Timed out waiting for reply from 123.108.200.124:123 (2.arch.pool.ntp.org).Jan 12 01:40:34 sharan-pc systemd-timesyncd[31123]: Timed out waiting for reply from 125.62.193.121:123 (2.arch.pool.ntp.org).Jan 12 01:40:44 sharan-pc systemd-timesyncd[31123]: Timed out waiting for reply from 139.59.45.40:123 (3.arch.pool.ntp.org).Jan 12 01:40:55 sharan-pc systemd-timesyncd[31123]: Timed out waiting for reply from 123.108.200.124:123 (3.arch.pool.ntp.org).Jan 12 01:41:05 sharan-pc systemd-timesyncd[31123]: Timed out waiting for reply from 139.59.19.184:123 (3.arch.pool.ntp.org).Jan 12 01:41:15 sharan-pc systemd-timesyncd[31123]: Timed out waiting for reply from 125.62.193.121:123 (3.arch.pool.ntp.org). traceroute I also tried the command traceroute -U -p ntp pool.ntp.org , and I get: traceroute to pool.ntp.org (139.59.19.184), 30 hops max, 60 byte packets 1 10.114.1.1 (10.114.1.1) 1.713 ms 2.020 ms 2.343 ms 2 10.10.2.41 (10.10.2.41) 1.123 ms 2.580 ms 2.836 ms 3 cyberoam.iisc.ac.in (10.10.1.98) 0.553 ms 0.806 ms 0.813 ms 4 * * * 5 * * * 6 * * * 7 * * * 8 * * * 9 * * *10 * * *11 * * *12 * * *13 * * *14 * * *15 * * *16 * * *17 * * *18 * * *19 * * *20 * * *21 * * *22 * * *23 * * *24 * * *25 * * *26 * * *27 * * *28 * * *29 * * *30 * * * How do I fix this? I've even tried timedatectl set-ntp true . Am I supposed to reboot for this to take effect?
systemd-timesyncd will not require you to reboot. I've tested timedatectl on my system. It might be necessary to wait a minute for a connection. man timedatectl status Show current settings of the system clock and RTC, including whether network time synchronization is on. Note that whether network time synchronization is on simply reflects whether the systemd-timesyncd.service unit is enabled. Even if this command shows the status as off, a different service might still synchronize the clock with the network. $ timedatectl status Local time: Wed 2017-01-11 13:45:07 GMT Universal time: Wed 2017-01-11 13:45:07 UTC RTC time: Wed 2017-01-11 13:45:07 Time zone: Europe/London (GMT, +0000) Network time on: yesNTP synchronized: yes RTC in local TZ: yes timedatectl manpage is lying on my system. Possibly the implementation was patched by Fedora, without patching the manpage. I do not know how to query which service is used; my system happens to use chronyd. I imagine it might also be possible to use ntp/ntpd. However in your case I would be quite confident that Arch uses the upstream default of timesyncd. $ systemctl status systemd-timesyncd● systemd-timesyncd.service - Network Time Synchronization Loaded: loaded (/usr/lib/systemd/system/systemd-timesyncd.service; disabled; Active: inactive (dead) Docs: man:systemd-timesyncd.service(8)$ systemctl status chronyd● chronyd.service - NTP client/server Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor pres Active: active (running) since Mon 2017-01-09 19:09:39 GMT; 1 day 18h ago Main PID: 928 (chronyd) Tasks: 1 (limit: 4915) CGroup: /system.slice/chronyd.service └─928 /usr/sbin/chronyd You might have errors logged underneath the status. Make sure to run systemctl as a user with access to the system journal, e.g. using sudo . Unlike chronyd with chronyc , there is no documented way to additionally query systemd-timesyncd for... anything really, beyond "NTP synchronized: no". Hope it has useful logs! I suggest aiming to Identify which well-known pool.ntp.org alias your system is trying to use. Test the alias e.g. ntpdate -q arch.pool.ntp.org . traceroute to the alias to see if there is a nearby block i.e. a firewall preventing access. As always, I would use ping first because it gets results quicker (and is less prone to mis-interpretation), or use the mtr version of traceroute (this also defaults to ICMP traceroute, which avoids lots of output from multi-path networks). Ultimately you want something like traceroute -U -p ntp pool.ntp.org , i.e. using the same UDP port as NTP does. EDIT : previous versions of this answer were confused about systemd-timesyncd's default NTP servers. Although they are commented out (disabled) in timesyncd.conf , it should only be necessary to uncomment the line if you need to change the server. The default values are built in to timesyncd at compile time. This is mentioned in all documentation. https://www.cyberciti.biz/faq/linux-unix-bsd-is-ntp-client-working/ https://wiki.archlinux.org/index.php/Systemd-timesyncd
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/336566", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166916/" ] }
336,609
Whenever I use a pager like less or an editor like nano in the shell (my shell is GNU bash), I see a behaviour I cannot explain completely and which differs to the behaviour I can observe with other tools like cat or ls . I would like to ask how this behaviour comes about. The —not easy to explain— behaviour is that normally all output to stdout/stderr ends up being recorded in the terminal-emulators backbuffer, so I can scroll back, while (not so normally to me) in the case of using less or nano , output is displayed by the terminal-emulator, yet upon exiting the programs, the content "magically disappears". I would like to give those two examples: seq 1 200 (produces 200 lines in the backbuffer) seq 1 200 | less (lets me page 200 lines, yet eventually "cleans up" and nothing is recorded in the backbuffer) My suspicion is that some sort of escape codes are in play and I would appreciate someone pointing my to an explanation of this observed behavioural differences. Since some comments and answers are phrased, as if it was my desire to change the behaviour, this "would be nice to know", but indeed the desired answer should be the description of the mechanism, not as much the ways to change it.
There are two worldviews here: As far as programs using termcap/terminfo are concerned, your terminal potentially has two modes: cursor addressing mode and scrolling mode . The latter is the normal mode, and a program switches to cursor addressing mode when it needs to move the cursor around the screen by row and column addresses, treating the screen as a two-dimensional entity. termcap and terminfo handle translating this worldview, which is what programs see, into the worldview as seen by terminals. As far as a terminal (emulated or real) is concerned, there are two screen buffers, only one of which is displayed at any time. There's a primary screen buffer and an alternate screen buffer . Control sequences emitted by programs switch the terminal between the two. For some terminals, usually emulated ones, the alternate screen buffer is tailored to the usage of termcap/terminfo. They are designed with the knowledge that part of switching to cursor addressing mode is switching to the alternate screen buffer and part of switching to scrolling mode is switching to the primary screen buffer. This is how termcap/terminfo translate things. So these terminals don't show scrolling user interface widgets when the alternate screen buffer is being displayed, and simply have no scrollback mechanism for that screen buffer. For other terminals, usually real ones, the alternate screen buffer is pretty much like the primary. Both are largely identical in terms of what they support. A few emulated terminals fall into this class, note. Unicode rxvt, for example, has scrollback for both the primary and alternate screen buffers. Programs that present full-screen textual user interfaces (such as vim , nano , less , mc , and so forth) use termcap/terminfo to switch to cursor-addressing mode at start-up and back to scrolling mode when they suspend, or shell out, or exit. The ncurses library does this, but so too do non-ncurses-using programs that build more directly on top of termcap/terminfo. The scrolling within TUIs presented by less or vim is nothing to do with scrollback. That is implemented inside those programs, which are just redrawing their full-screen textual user interface as appropriate as things scroll around. Note that these programs do not "leave no content" in the alternate screen buffer. The terminal simply is no longer displaying what they leave behind. This is particularly noticable with Unicode rxvt on some platforms, where the termcap/terminfo sequences for switching to cursor addressing mode do not implicitly clear the alternate screen buffer. So using multiple such full-screen TUI programs in succession can end up displaying the old contents of the alternate screen buffer as left there by the last program, at least for a little while until the new program writes its output (most noticable when less is at the end of a pipeline). With xterm, one can switch to displaying the alternate screen buffer from the GUI menu of the terminal emulator, and see the content still there. The actual control sequences are what the relevant standards call set private mode control sequences. The relevant private mode numbers are 47, 1047, 1048, and 1049. Note the differences in what extra actions are implied by each, on top of switching to/from the alternate screen buffer. Further reading How to save/restore terminal output Entering/exiting alternate screen buffer OpenSSH, FreeBSD screen overwrite when closing application What exactly is scrollback and scrollback buffer? 
Building blocks of interactive terminal applications
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/336609", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24394/" ] }
336,620
From time to time I need to create a network of servers: link the servers together and quickly copy data between them. For that I'm using SSH Filesystem (SSHFS), because it is as simple and easy to use as connecting with Secure Shell (SSH). For better security I use authentication (identity) files instead of passwords. Another reason is that this is the default setup for " Amazon Web Services (AWS) EC2 ". From time to time I can't figure out what the problem with the connection is: sshfs {{user_name}}@{{server_ip}}:{{remote_path}} {{local_path}} -o "IdentityFile={{identity_path}}" and I just get the simple message on the client read: Connection reset by peer and the same simple message "Connection reset by peer" on the server. Q: what can I do next?
There can be many mistakes in the syntax, and many more in the connection. The best solution is to turn on verbose mode with the switch -o debug . In my case it revealed a problem with the absolute path to the identity file:
no such identity: {{identity_file}}: No such file or directory
Permission denied (publickey).
read: Connection reset by peer
so my full command looks like this: sshfs {{user_name}}@{{server_ip}}:/ /mnt/{{server_ip}} -o "IdentityFile={{absolute_path}},port={{port_number}}" -o debug
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/336620", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68208/" ] }
336,727
With Network Manager in Red Hat 7, I am seeing an issue where the old/wrong search domain is being used after changing the hostname. In /etc/resolv.conf, I see: # Generated by NetworkManagersearch **ec2.internal** d.sample.comnameserver 172.31.0.2 When I type hostname , I see my desired output: [root@testing01 ~]# hostnametesting01.d.sample.com But instead of replacing the search domains, it is appending the new domain name to the search domains. I want to completely get rid of ec2.internal and give this domain the ax altogether. Editing the /etc/resolv.conf file directly gets clobbered by Network Manager. I don't want to disable Network Manager, and I'd rather not disable NM's management of /etc/resolv.conf unless I absolutely have to. So, 1) Why does NM keep reverting my search domain and 2) how can I fix this using nmcli or command line tools only?
After a few hours of poking around, I was able to resolve this. It turns out this was being set via DHCP:
nmcli -f ip4 device show eth0
IP4.ADDRESS[1]:  172.31.53.162/20
IP4.GATEWAY:     172.31.48.1
IP4.DNS[1]:      172.31.0.2
IP4.DOMAIN[1]:   ec2.internal
I was able to override IP4.DOMAIN[1] by overriding the network interface's ipv4.dns-search value:
nmcli connection modify uuid $(nmcli connection show --active | grep 802-3-ethernet | awk '{print $(NF-2)}' | tail -n 1) ipv4.dns-search d.sample.com
Or more simply,
nmcli connection modify System\ eth0 ipv4.dns-search "d.sample.com"
Then you have to restart NetworkManager:
systemctl restart NetworkManager.service
I also found that because I was working with an Amazon instance, I needed to update my cloud.cfg file.
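To verify that the override stuck, you can query the property back (assuming the connection is still named System eth0):
nmcli -f ipv4.dns-search connection show "System eth0"
and confirm that the regenerated /etc/resolv.conf now lists only d.sample.com on the search line.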
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/336727", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104565/" ] }
336,758
I have a file like this:
abc 123
abc 789
bcd 456
acb 135
I would like to append the first column of the next line to the current line. Desired output:
abc 123 abc
abc 789 bcd
bcd 456 acb
acb 135
I prefer to use awk.
Memorise the previous line: awk 'NR > 1 { print prev, $1 } { prev = $0 } END { print prev }' This processes the input as follows: if the current line is the second or greater, print the previous line (stored in prev , see the next step) and the first field of the current line, separated by the output field separator (the space character by default); in all cases, store the current line in the prev variable; at the end of the file, print the previous line.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/336758", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49654/" ] }
336,767
sample file content:
--------------------
NETWORKING=yes
HOSTNAME=wls1.ebs-testsrvrs.com
# oracle-rdbms-server-12cR1-preinstall : Add NOZEROCONF=yes
NOZEROCONF=yes
--------------------
I want to comment out all the lines that start with "HOST".
In vi : :%s/^HOST/#&/ or :g/^HOST/s//#&/ The % in the first command means "in the whole buffer", and is a short way of saying 1,$ , i.e. from the first line to the last. & in the replacement part of the substitution will be replaced by the whole text matched by the pattern ( ^HOST ). The second command applies the substitution ( s/// ) to all lines matching ^HOST using the global ( g ) command, which vi inherited from the ed line editor. In the second case, the s/// command uses an empty regular expression. This make it reuse the most recently used regular expression ( ^HOST in the g command). The replacement is the same as in the first command. With sed : sed 's/^HOST/#&/' input >output or sed '/^HOST/s//#&/' input >output in the same manner as in vi ( sed always applies all commands to every line of the input stream, so we don't use anything like % or g explicitly with sed ). To remove the comment character for the line that starts with #HOST : sed 's/^#HOST/HOST/' input >output or sed '/^#HOST/s/.//' input >output In the second of the above two commands, the s/// command is applied to all lines that start with #HOST . The s/// command just deletes the first character on the line. The vi equivalent of the two commands are :%s/^#HOST/HOST/ and :g/^#HOST/s/.// respectively
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/336767", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/176232/" ] }
336,768
I need to insert character (#) in the beginning of specified line in a text file. Input example: Hellow1Hellow2Hellow3 Desired output Hellow1#Hellow2Hellow3
To insert a # on the line with the word Hellow2 , you may use sed like this: sed '/^Hellow2/ s/./#&/' input.txt >output.txt To insert a # in the beginning of the second line of a text, you may use sed like this: sed '2 s/./#&/' input.txt >output.txt The & will be replaced by whatever was matched by the pattern. I'm avoiding using sed -i (in-place editing), because I don't know what sed you are using and most implementations of sed use incompatible ways of handling that flag (see How can I achieve portability with sed -i (in-place editing)? ). Instead, do the substitution like above and then mv output.txt input.txt if you want to replace the original data with the result. This also gives you a chance to make sure it came out correctly. Equivalent thing with awk : awk '/^Hellow2/ { $0 = "#" $0 }; 1' input.txt >output.txtawk 'NR == 2 { $0 = "#" $0 }; 1' input.txt >output.txt
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/336768", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210015/" ] }
336,789
On Xubuntu 16.04, I want to install the latest VirtualBox package. I know that I could install it through APT and receive updates through the Ubuntu repositories, or I could add a PPA (if there was one) and receive it from there. At this link I can either download the package or add it to sources.list and install it: https://www.virtualbox.org/wiki/Linux_Downloads But I would rather install the latest package by downloading it from their website. If I installed it with " dpkg -i package.deb ", it would install the package, but would it add a new repository from which I would receive updates whenever I ran " sudo apt-get update && sudo apt-get upgrade "? Can I somehow check if the package contains such a repository?
It's not fool-proof, but this will give a good indication: dpkg-deb -c virtualbox-5.1_5.1.12-112440\~Debian\~stretch_amd64.deb|grep etc/apt In this case nothing is found, so it looks like the package doesn't add a repository. We're specifically looking for files in /etc/apt/sources.list.d . It's not fool-proof because packages could add a repository in their postinst . You can examine the latter using dpkg-deb --ctrl-tarfile virtualbox-5.1_5.1.12-112440\~Debian\~stretch_amd64.deb|tar xf - ./postinst then reading the extracted postinst (which confirms that the package doesn't add a repository).
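If the extracted postinst is long, a quick scan for the usual suspects narrows things down (a heuristic, not a guarantee):
grep -nE 'sources\.list|apt-key|\.gpg' postinst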
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/336789", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/198922/" ] }
336,790
Every time time I connect headphones to the 3.5mm audio jack on my Dell XPS 13, I hear continuous white noise in addition to the audio I expect to hear. It's much louder than the typical noise floor for a headphone jack. I've found many other reports of this same problem for both the XPS 13 9350 ( 1 , 2 ) and the XPS 13 9360 ( 1 , 2 , 3 ), so it doesn't seem like I have a faulty unit. Is there a way to stop this noise?
Set Headphone Mic Boost gain to 10dB. Any other value seems to cause the irritating background noise in headphones. This can be done with amixer :
amixer -c0 sset 'Headphone Mic Boost' 10dB
To make this happen automatically every time your headphones are connected, install acpid . Start it by running:
sudo systemctl start acpid.service
Enable it by running:
sudo systemctl enable acpid.service
Create the following event file /etc/acpi/headphone-plug :
event=jack/headphone HEADPHONE plug
action=/etc/acpi/cancel-white-noise.sh %e
Then create the action script /etc/acpi/cancel-white-noise.sh :
#! /bin/bash
amixer -c0 sset 'Headphone Mic Boost' 10dB
Now Headphone Mic Boost will be set to 10dB every time headphones are connected. To make this effective you need to restart your laptop.
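One detail the recipe above doesn't state explicitly, and which is easy to trip over: the action script must be executable, so also run
chmod +x /etc/acpi/cancel-white-noise.sh
You can watch events arrive with acpi_listen while plugging the headphones in, to confirm that the jack/headphone event name matches on your machine.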
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/336790", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210028/" ] }
336,804
Are there any methods to check what you are actually executing from a bash script? Say your bash script is calling several commands (for example: tar , mail , scp , mysqldump ) and you are willing to make sure that tar is the actual, real tar , which is determinable by the root user being the file and parent directory owner and the only one with write permissions, and not some /tmp/surprise/tar with www-data or apache2 being the owner. Sure I know about PATH and the environment; I'm curious to know whether this can be additionally checked from a running bash script and, if so, how exactly? Example: (pseudo-code)
tarfile=$(which tar)
isroot=$(ls -l "$tarfile") | grep "root root"
#and so on...
Instead of validating binaries you're going to execute, you could execute the right binaries from the start. E.g. if you want to make sure you're not going to run /tmp/surprise/tar , just run /usr/bin/tar in your script. Alternatively, set your $PATH to a sane value before running anything. If you don't trust files in /usr/bin/ and other system directories, there's no way to regain confidence. In your example, you're checking the owner with ls , but how do you know you can trust ls ? The same argument applies to other solutions such as md5sum and strace . Where high confidence in system integrity is required, specialized solutions like IMA are used. But this is not something you could use from a script: the whole system has to be set up in a special way, with the concept of immutable files in place.
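A minimal sketch of the sane-$PATH approach, pinned at the top of the script so nothing inherited from the caller's environment can redirect command lookup (the tar arguments are placeholders):
#!/bin/sh
PATH=/bin:/usr/bin
export PATH
tar -czf /backup/site.tgz /var/www   # now resolves to /bin/tar or /usr/bin/tar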
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/336804", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68350/" ] }
336,814
I want to collect all the files I have used in a project. I am using the find command, and I want it to find a list of files and then pass its result to the zip command to create a single zip file containing all the matched files. Just a convenience, if it is possible. However, it seems there are problems with it and it does not work.
find /lmms/samples/ -name warp01*,JR_effect2k*,clean_low_key*,q_kick_2*,sticky_q_kick*,upright_bass*,pizzi*,chorded_perc*,Tr77_kick*,Tr77_tom1*,Tr77_cym*,hihat_008a*,Hat_o.ds,Hat_c.ds,Kickhard.ds,Tr77_snare* -exec zip {} ~/Desktop/files.zip
Output is:
find: missing argument to `-exec'
PS. After fixing some errors pointed out in the answers below and following their guidelines, I have reformatted the command as below:
find ~/lmms/samples/ (-name warp01* -o -name JR_effect2k* -or -name clean_low_key* -or -name q_kick_2* -or -name sticky_q_kick* -or -name upright_bass* -or -name pizzi* -or -name chorded_perc* -or -name Tr77_kick* -or -name Tr77_tom1* -or -name Tr77_cym* -or -name hihat_008a* -or -name Hat_o.ds -or -name Hat_c.ds -or -name Kickhard.ds -or -name Tr77_snare*) -exec zip -add ~/Desktop/files.zip {} +
It still fails with the message
bash: syntax error near unexpected token `('
Removing the parentheses eliminates the error but seems to add only one file to the archive, which, surprisingly, I do not find on my Desktop!
find ~/lmms/samples/ -name warp01* -o -name JR_effect2k* -or -name clean_low_key* -or -name q_kick_2* -or -name sticky_q_kick* -or -name upright_bass* -or -name pizzi* -or -name chorded_perc* -or -name Tr77_kick* -or -name Tr77_tom1* -or -name Tr77_cym* -or -name hihat_008a* -or -name Hat_o.ds -or -name Hat_c.ds -or -name Kickhard.ds -or -name Tr77_snare* -exec zip ~/Desktop/files.zip {} +
adding: home/john/lmms/samples/drumsynth/tr77/Tr77_snare.ds (deflated 49%)
-exec takes two parameters: the command to execute, and a flag to tell find whether the command should be run once per match ( ; ) or with as many files as possible per run ( + ). In addition, the zip parameters are the wrong way round. The -name test doesn't work that way either; it only takes one pattern at a time. If you want to check for multiple patterns, you need to use multiple -name tests combined using -o ("or"), and wrap them in parentheses (thanks to xhienne for pointing that out): find /lmms/samples/ \( -name "warp01*" -o -name "JR_effect2k*" ... \) -exec ... (quoting each pattern to avoid issues with shell globbing). All in all, if you fix your name tests and end your find with -exec zip ~/Desktop/files.zip {} + it should do the right thing.
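Putting both fixes together, a sketch with just three of your sixteen patterns (extend the parenthesised group with the rest in the same way):
find ~/lmms/samples/ \( -name 'warp01*' -o -name 'JR_effect2k*' -o -name 'Tr77_snare*' \) -exec zip ~/Desktop/files.zip {} +
Quoting each pattern stops the shell from expanding the * before find sees it, and backslash-escaping the parentheses stops bash from parsing them itself, which is what produced your "syntax error near unexpected token `('" message.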
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/336814", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/142579/" ] }
336,866
I have a set of images like this:
01-12_13:20:12_1366x768.png  01-12_13:20:46_1366x768.png  01-12_13:21:01_1366x768.png  01-12_13:21:06_1366x768.png
01-12_13:20:40_1366x768.png  01-12_13:20:47_1366x768.png  01-12_13:21:02_1366x768.png  01-12_13:21:07_1366x768.png
01-12_13:20:42_1366x768.png  01-12_13:20:49_1366x768.png  01-12_13:21:03_1366x768.png  01-12_13:21:08_1366x768.png
01-12_13:20:44_1366x768.png  01-12_13:20:59_1366x768.png  01-12_13:21:04_1366x768.png  01-12_13:21:10_1366x768.png
01-12_13:20:45_1366x768.png  01-12_13:21:00_1366x768.png  01-12_13:21:05_1366x768.png
I need to replace every : with _ . How can I do that using bash commands? (Note: I love it when everything is compact and one-lined.)
You can use a simple for loop, glob and parameter expansion to achieve this: for f in *:*.png; do mv -- "$f" "${f//:/_}"; done
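If you want to preview the renames first, prefix the mv with echo as a dry run, then drop the echo once the output looks right:
for f in *:*.png; do echo mv -- "$f" "${f//:/_}"; done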
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/336866", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32990/" ] }
336,876
For testing purposes I need to create a shell script that connects to a remote IP/port and sends a simple text message over a TCP/IP socket.
Using nc ( netcat ). Server:
$ nc -l localhost 3000
Client:
$ nc localhost 3000
Both server and client will read and write to standard output/input. This will work when the server and client are on the same machine. Otherwise, change localhost to the external name of the server. On the server, you may also use 0.0.0.0 (or remove it altogether) to let the server bind to all available interfaces. Slightly more interesting, a "server" that gives you the time of day if you connect to it and send it a d , and which quits if you send q :
Server (in bash ):
#!/bin/bash
coproc nc -l localhost 3000
while read -r cmd; do
  case $cmd in
    d) date ;;
    q) break ;;
    *) echo 'What?'
  esac
done <&"${COPROC[0]}" >&"${COPROC[1]}"
kill "$COPROC_PID"
Client session:
$ nc localhost 3000
d
Thu Jan 12 18:04:21 CET 2017
Hello?
What?
q
(the server exits after q , but the client doesn't detect that it's gone until you press Enter ).
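For the one-shot send-a-message-and-quit case from the question, a sketch (flag spellings differ between netcat flavours: traditional/GNU nc takes -q to exit after EOF on stdin, OpenBSD nc uses -N instead):
printf 'hello\n' | nc -q 1 remote.host 3000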
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/336876", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/203712/" ] }
336,894
I have a small script to demonstrate what I want to do:
#!/bin/bash
> z
tail -f z | grep 'd' &
echo $!
The $! gives the PID of the grep process. I want to be able to kill the tail process at the same time as killing the grep process. Doing kill "pid of grep" does not kill the tail process. Nor does killall grep . I could use killall tail but I think this would be dangerous.
Enclose your command with parentheses:
( tail -f z | grep 'd' ) &
kill -- -$!
This will kill the whole sub-process. Here, by specifying a negative PID to kill, we kill the whole process group. See man 1 kill : Negative PID values may be used to choose whole process groups; see the PGID column in ps command output. Or man 2 kill : If pid is less than -1, then sig is sent to every process in the process group whose ID is -pid. However, kill -PID will only work if job control is enabled in bash (the default for interactive shells). Else, your subprocess won't have a dedicated process group and the kill command will fail with
kill: (-PID) - No such process
To work around that, either activate job control in bash ( set -m ), or use pkill -P $!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/336894", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/95512/" ] }
336,905
How do I pass arguments when launching a bash script so that only specific lines are executed within the script? For example ( createfile.sh ):
#!/bin/bash
export CLIENT1_DIR="<path1>"
export CLIENT2_DIR="<path2>"
chef-solo -c solo.rb -j client1.json
chef-solo -c solo.rb -j client2.json
Then $ ./createfile.sh client1 should only execute client1-specific lines, and replacing it with client2 should execute only client2-specific lines.
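One straightforward way is to branch on the first positional parameter, $1 . A minimal sketch (assuming the literal argument names client1 and client2 from your example; adjust paths and JSON file names to your setup):
#!/bin/bash
case "$1" in
  client1)
    export CLIENT1_DIR="<path1>"
    chef-solo -c solo.rb -j client1.json
    ;;
  client2)
    export CLIENT2_DIR="<path2>"
    chef-solo -c solo.rb -j client2.json
    ;;
  *)
    echo "Usage: $0 client1|client2" >&2
    exit 1
    ;;
esac
With that, ./createfile.sh client1 runs only the client1-specific lines; anything else prints a usage message on stderr and exits non-zero.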
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/336905", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210099/" ] }
336,917
This time I did not install firmware-linux-nonfree. I opted to avoid it and found that everything is working fine (well, my wireless indicator light doesn't work, but the adaptor does, so it's cool). Anyway, the last call to update-initramfs produced this error: W: Possible missing firmware /lib/firmware/tigon/tg3_tso5.bin for module tg3W: Possible missing firmware /lib/firmware/tigon/tg3_tso.bin for module tg3W: Possible missing firmware /lib/firmware/tigon/tg3.bin for module tg3 It is apparently the firmware for my ethernet adaptor. That is working fine, same as last install. How do I suppress this warning or fix the problem. I don't want the nonfree fw package as it conflicts with my AMD gf fw.
This isn't an error, just a warning. The tg3 module drives many Broadcom chipsets, and needs these firmware files only for BCM5705 TSO, BCM5703/BCM5704 TSO and BCM5701A0. If you include a currently loaded module in your initramfs, but not all possible firmware files it can theoretically request, update-initramfs gives you these warnings. These tell you that the generated initramfs file isn't quite as general as it could be. You can only hack around this, for example by creating dummy firmware files.
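For completeness, the dummy-file hack would look like the sketch below. It is only reasonable because, per the above, your particular chipset never actually requests these files; a chip that did need them would be handed empty, useless blobs:
mkdir -p /lib/firmware/tigon
touch /lib/firmware/tigon/tg3.bin /lib/firmware/tigon/tg3_tso.bin /lib/firmware/tigon/tg3_tso5.bin
update-initramfs -u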
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/336917", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/185483/" ] }
336,979
I need to resize my first disk (/dev/xvda) from 40 GB to 80 GB. I'm using XEN virtualization, and the disk is resized in XenCenter, but I need to resize its partitions without losing any data. The virtual machine is running Debian 8.6.
Disk /dev/xvda: 80 GiB, 85 899 345 920 bytes, 167 772 160 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x5a0b8583
Device Boot Start End Sectors Size Id Type
/dev/xvda1     2048   499711   497664  243M 83 Linux
/dev/xvda2   501758 83884031 83382274 39,8G  5 Extended
/dev/xvda5   501760 83884031 83382272 39,8G 8e Linux LVM
Disk /dev/xvdb: 64 GiB, 68 719 476 736 bytes, 134 217 728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 0596FDE3-F7B7-46C6-8CE1-03C0B0ADD20A
Device Start End Sectors Size Type
/dev/xvdb1 2048 134217694 134215647 64G Linux filesystem
Disk /dev/mapper/xenhosting--vg-root: 38,1 GiB, 40 907 046 912 bytes, 79 896 576 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/xenhosting--vg-swap_1: 1,7 GiB, 1 782 579 200 bytes, 3 481 600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
This should be relatively easy, since you're using LVM:
1. First, as always, take a backup.
2. Resize the disk in Xen (you've already done this; despite this, please re-read step 1).
3. Use parted to resize the extended partition ( xvda2 ): run parted /dev/xvda , then at the parted prompt resizepart 2 -1s to resize it to end at the end of the disk (BTW: quit will get out of parted).
4. Either (a) create another logical partition ( xvda6 ) with the free space, then reboot to pick up the partition table changes, pvcreate /dev/xvda6 , vgextend xenhosting-vg /dev/xvda6 ; or (b) extend xvda5 using resizepart 5 -1s , reboot to pick up the partition table changes, then pvresize /dev/xvda5 .
5. Finally, if you want to add that to your root filesystem, lvextend -r -l +100%FREE /dev/xenhosting-vg/root . The -r option to lvextend tells it to call resize2fs itself.
Another option you didn't consider: add another virtual disk. If you can do this in Xen without rebooting the guest, then you can do this entirely online (without any reboots). Partition the new disk xvdc (this will not require a reboot, since it's not in use), then proceed with pvcreate and vgextend using /dev/xvdc1 .
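After each step you can sanity-check the result before moving on, none of which requires unmounting anything:
pvs    # the physical volume should show the grown size
vgs    # the volume group should show new free extents
lvs && df -h /   # after step 5, both the LV and the filesystem should have grown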
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/336979", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/207640/" ] }
336,983
Is there a guideline when to use the error when writing a command-line application? To my surprise, I didn't find anything when googling it. In particular, the question I'm concerned with right now is whether to use stdout or stderr when the user called the program with illegal arguments. However, a more comprehensive answer is very much appreciated because this surely won't be the only case in which a clear rule is needed to write a program which behaves in the way it's expected to by the user.
Yes, do display a message on stderr when the wrong arguments are used. And if that also causes the application to exit, exit with a non-zero exit status. You should use the standard error stream for diagnostic messages or for user interaction. Diagnostic messages include error messages, warnings and other messages that are not part of the utility's output when it's operating correctly ("correctly" meaning there is nothing exceptional happening, like files not being found, or whatever it may be). Many shells (all?) display prompts, what the user types, and menus etc. on stderr so that redirecting stdout won't stop you from interacting with the shell in a meaningful way. The following is from a blog post on this topic: This is a quote from Doug McIlroy, inventor of Unix pipes, explaining how stderr came to be. 'v6' is referring to a specific version of the original Unix operating system that was released in 1975. All programs placed diagnostics on the standard output. This had always caused trouble when the output was redirected into a file, but became intolerable when the output was sent to an unsuspecting process. Nevertheless, unwilling to violate the simplicity of the standard-input-standard-output model, people tolerated this state of affairs through v6. Shortly thereafter Dennis Ritchie cut the Gordian knot by introducing the standard error file. That was not quite enough. With pipelines diagnostics could come from any of several programs running simultaneously. Diagnostics needed to identify themselves. -- Doug McIlroy, "A Research UNIX Reader: Annotated Excerpts from the Programmer's Manual, 1971-1986" To "identify oneself" means simply saying "Hey! It's me talking! This went wrong: [...]":
$ ls nothere
ls: nothere: No such file or directory
Doing this on stderr is preferable, since it could otherwise be read by whatever was reading on stdout (but we don't do that with ls anyway , do we?).
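In a shell script the same convention is one redirection away; a small sketch of the illegal-arguments case from the question:
if [ $# -eq 0 ]; then
    echo "usage: $0 file..." >&2   # diagnostic goes to stderr
    exit 1                         # non-zero status signals failure
fi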
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/336983", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147785/" ] }
336,985
We are setting up a Postfix mail relay to accept only authenticated smtp sessions and forward them to our backend smarthosts. CentOS 6.8 postfix-2.6.6-6.el6_7.1.x86_64 cyrus-sasl-lib-2.1.23-15.el6_6.2.x86_64 cyrus-sasl-md5-2.1.23-15.el6_6.2.x86_64 cyrus-sasl-2.1.23-15.el6_6.2.x86_64 cyrus-sasl-plain-2.1.23-15.el6_6.2.x86_64 We have installed and configured Postfix as well as SASL according to a couple of tutorials and references from the postfix manual on postfix.org, although we seem to have a couple of configuration or permission errors. Any help would be appreciated. [root@server]# saslpasswd2 -c -u test.com testPassword: test123Again (for verification): test123[root@server]# [email protected]: userPassword[root@server]# testsaslauthd -u [email protected] -p test1230: NO "authentication failed"[root@server]# tail -n1 /var/log/messagesJan 13 08:10:19 server saslauthd[2595]: do_auth : auth failure: [[email protected]] [service=imap] [realm=] [mech=pam] [reason=PAM auth error][root@server]# postconf -nalias_database = hash:/etc/aliasesalias_maps = hash:/etc/aliasesbroken_sasl_auth_clients = yescommand_directory = /usr/sbinconfig_directory = /etc/postfixdaemon_directory = /usr/libexec/postfixdata_directory = /var/lib/postfixdebug_peer_level = 2html_directory = noinet_interfaces = allinet_protocols = allmail_owner = postfixmailq_path = /usr/bin/mailq.postfixmanpage_directory = /usr/share/manmydestination = $myhostname, localhost.$mydomain, localhostmydomain = testing.commyhostname = smtp.testing.comnewaliases_path = /usr/bin/newaliases.postfixqueue_directory = /var/spool/postfixreadme_directory = /usr/share/doc/postfix-2.6.6/README_FILESrelayhost = [mx01.testing.com]:25sample_directory = /usr/share/doc/postfix-2.6.6/samplessender_dependent_relayhost_maps = hash:/etc/postfix/relayhost_mapsendmail_path = /usr/sbin/sendmail.postfixsetgid_group = postdropsmtp_fallback_relay = [mx02.testing.com]:25smtp_tls_CAfile = /etc/postfix/ssl/smtp.testing.com.ca-filesmtp_tls_cert_file = /etc/postfix/ssl/smtp.testing.com.crtsmtp_tls_key_file = /etc/postfix/ssl/smtp.testing.com.keysmtp_use_tls = yessmtpd_banner = $myhostname ESMTP ($mail_version)smtpd_sasl_auth_enable = yessmtpd_sasl_local_domain = smtpd_sasl_security_options = noanonymoussmtpd_sasl_tls_security_options = noanonymoussmtpd_tls_CAfile = /etc/postfix/ssl/smtp.testing.com.ca-filesmtpd_tls_cert_file = /etc/postfix/ssl/smtp.testing.com.crtsmtpd_tls_key_file = /etc/postfix/ssl/smtp.testing.com.keysmtpd_tls_security_level = mayunknown_local_recipient_reject_code = 550[root@server]# cat /etc/sasl2/smtpd.confpwcheck_method: auxpropauxprop_plugin: sasldbmech_list: PLAIN LOGIN CRAM-MD5 DIGEST-MD5log_level: 7[root@server]# cat /etc/postfix/master.cfsmtp inet n - n - - smtpd -v#submission inet n - n - - smtpd# -o smtpd_tls_security_level=encrypt# -o smtpd_sasl_auth_enable=yes# -o smtpd_client_restrictions=permit_sasl_authenticated,reject# -o milter_macro_daemon_name=ORIGINATINGsmtps inet n - n - - smtpd -v# -o smtpd_tls_wrappermode=yes -o smtpd_sasl_auth_enable=yes -o smtpd_client_restrictions=permit_sasl_authenticated,reject# -o milter_macro_daemon_name=ORIGINATING SMTP Client Log Stat Connected.Recv 13/01/2017 8:34:12 AM: 220 smtp.test.com ESMTP (2.6.6)<EOL>Sent 13/01/2017 8:34:12 AM: EHLO SendSMTPv2.19.0.1<EOL>Recv 13/01/2017 8:34:12 AM: 250-smtp.securmail.net.au<EOL>250-PIPELINING<EOL>250-SIZE 10240000<EOL>250-VRFY<EOL>250-ETRN<EOL>250-STARTTLS<EOL>250-AUTH LOGIN DIGEST-MD5 CRAM-MD5 PLAIN<EOL>250-AUTH=LOGIN DIGEST-MD5 CRAM-MD5 
PLAIN<EOL>250-ENHANCEDSTATUSCODES<EOL>250-8BITMIME<EOL>250 DSN<EOL>Sent 13/01/2017 8:34:12 AM: STARTTLS<EOL>Recv 13/01/2017 8:34:12 AM: 220 2.0.0 Ready to start TLS<EOL>Sent 13/01/2017 8:34:12 AM: EHLO SendSMTPv2.19.0.1<EOL>Recv 13/01/2017 8:34:12 AM: 250-smtp.test.com<EOL>250-PIPELINING<EOL>250-SIZE 10240000<EOL>250-VRFY<EOL>250-ETRN<EOL>250-AUTH LOGIN DIGEST-MD5 CRAM-MD5 PLAIN<EOL>250-AUTH=LOGIN DIGEST-MD5 CRAM-MD5 PLAIN<EOL>250-ENHANCEDSTATUSCODES<EOL>250-8BITMIME<EOL>250 DSN<EOL>Sent 13/01/2017 8:34:12 AM: MAIL FROM:<[email protected]><EOL>Recv 13/01/2017 8:34:12 AM: 250 2.1.0 Ok<EOL>Sent 13/01/2017 8:34:12 AM: RCPT TO:<[email protected]><EOL>Recv 13/01/2017 8:34:12 AM: 554 5.7.1 <[email protected]>: Relay access denied<EOL>Sent 13/01/2017 8:34:12 AM: RSET<EOL>Recv 13/01/2017 8:34:13 AM: 250 2.0.0 Ok<EOL>[root@Sserver]# tail -n 50 /var/log/maillog Jan 13 08:34:23 server/smtpd[13157]: NOQUEUE: reject: RCPT from xx.xx.xx.xx.isp.com[xx.xx.xx.xx]: 554 5.7.1 <[email protected]>: Relay access denied; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<SendSMTPv2.19.0.1> Jan 13 08:34:23 server/smtpd[13157]: generic_checks: name=reject_unauth_destination status=2 Jan 13 08:34:23 server/smtpd[13157]: > xx.xx.xx.xx.isp.com[xx.xx.xx.xx]: 554 5.7.1 <[email protected]>: Relay access denied Please let me know if any more logs or configuration extracts would be helpful. Thanks in advance
As far as I can tell from the configuration and logs you posted, two separate things are going on. First, testsaslauthd only exercises the saslauthd daemon, which on your box authenticates through PAM (hence the "PAM auth error" in /var/log/messages), but your /etc/sasl2/smtpd.conf tells the SASL library to use pwcheck_method: auxprop with sasldb. So that failing test is expected and says nothing about what smtpd will do; a more relevant check of the sasldb side is sasldblistusers2 , and the real test is an actual AUTH over SMTP. Second, your client session never issues AUTH at all: it goes straight from EHLO/STARTTLS to MAIL FROM, so Postfix treats it as an unauthenticated client and reject_unauth_destination answers with "Relay access denied" (your smtpd log shows exactly that check firing). Configure the mail client to actually authenticate, and make sure main.cf explicitly permits authenticated relaying, for example (an assumption about your intent; adjust to taste):
smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/336985", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/133336/" ] }
337,008
I am using Debian 8.6 LXDE on a PowerBook G4 15" 1.67GHz and would like to enable tap to click on the touchpad. It is already doing double scrolling, but tap to click would help to save the ageing mouse button. A two-fingered tap for left click would be the icing on the cake; is this possible?
Debian Jessie
To enable touchpad tapping permanently, copy the 50-synaptics.conf file to /etc/X11/xorg.conf.d , then edit it by adding Option "TapButton1" "1" . As root:
mkdir /etc/X11/xorg.conf.d
cp /usr/share/X11/xorg.conf.d/50-synaptics.conf /etc/X11/xorg.conf.d/50-synaptics.conf
The /etc/X11/xorg.conf.d/50-synaptics.conf should be:
Section "InputClass"
        Identifier "touchpad catchall"
        Driver "synaptics"
        MatchIsTouchpad "on"
        Option "TapButton1" "1"
        Option "TapButton2" "3"
EndSection
Reboot your system.
Debian Stretch and Buster (updated)
Remove the xserver-xorg-input-synaptics package. (important)
# apt remove xserver-xorg-input-synaptics
Install xserver-xorg-input-libinput :
# apt install xserver-xorg-input-libinput
In most cases, make sure you have the xserver-xorg-input-libinput package installed, and not the xserver-xorg-input-synaptics package. As root, create /etc/X11/xorg.conf.d/ :
mkdir /etc/X11/xorg.conf.d
Create the 40-libinput.conf file:
echo 'Section "InputClass"
        Identifier "libinput touchpad catchall"
        MatchIsTouchpad "on"
        MatchDevicePath "/dev/input/event*"
        Driver "libinput"
        Option "Tapping" "on"
EndSection' > /etc/X11/xorg.conf.d/40-libinput.conf
Restart your DM, e.g.:
# systemctl restart lightdm
or
# systemctl restart gdm3
Debian wiki : Enable tapping on touchpad
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/337008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208773/" ] }
337,016
I'm not too familiar with find and managed to delete a ton of files by accident. I was wondering if anyone more experienced could explain where I went wrong. I wanted to clean up all of the .DS_Store and ._.DS_Store files that my MacBook's Finder was barfing all over my Raspbian samba share. I have a hard drive attached to my Raspberry Pi, with some symlinks in my home folder to the mounting point, hence the -L . Some of the folders are owned by system users (e.g. apache), hence the sudo . I ran the following to make sure that find was targeting the right files:
hydraxan@raspberry:~ $ sudo find -L . -maxdepth 255 -name \*DS_Store\*
./.DS_Store
./Downloads/USBHDD1/._.DS_Store
./Downloads/USBHDD1/.DS_Store
./Downloads/USBHDD1/backups/ALAC/._.DS_Store
./Downloads/USBHDD1/backups/ALAC/Jeff Van Dyck - Assault Android Cactus OST/._.DS_Store
./Downloads/USBHDD1/backups/ALAC/Jeff Van Dyck - Assault Android Cactus OST/.DS_Store
./Downloads/USBHDD1/backups/ALAC/.DS_Store
./Downloads/USBHDD1/backups/._.DS_Store
./Downloads/USBHDD1/backups/.DS_Store
./Downloads/USBHDD1/backups/OriginalMusic/._.DS_Store
./Downloads/USBHDD1/backups/OriginalMusic/.DS_Store
./Downloads/USBHDD1/backups/OriginalMusic/FLAC/.DS_Store
./Downloads/USBHDD1/backups/Storage/._.DS_Store
./Downloads/USBHDD1/backups/Storage/.DS_Store
./Downloads/OriginalMusic/._.DS_Store
./Downloads/OriginalMusic/.DS_Store
./Downloads/OriginalMusic/FLAC/.DS_Store
./Downloads/ALAC/._.DS_Store
./Downloads/ALAC/Jeff Van Dyck - Assault Android Cactus OST/._.DS_Store
./Downloads/ALAC/Jeff Van Dyck - Assault Android Cactus OST/.DS_Store
./Downloads/ALAC/.DS_Store
All looks good! I then added the -delete flag to cause find to remove the files it found:
hydraxan@raspberry:~ $ sudo find -L . -maxdepth 255 -delete -name \*DS_Store\*
find: cannot delete `./Documents': Not a directory
find: cannot delete `./Pictures': Not a directory
find: cannot delete `./Music': Not a directory
Once I realized it was trying to delete my symlinks for some reason, I punched Ctrl+C and saved about half the data. Documents, Pictures, and Music are toast. It was probably working on my giant Downloads folder that I put nearly everything in. Why did find delete all those files? Did I put -delete in the wrong spot?
Your answer is in the find manpage. The -delete option is processed before your name filter:
-delete
    Delete files; true if removal succeeded. If the removal failed, an error message is issued. If -delete fails, find's exit status will be nonzero (when it eventually exits). Use of -delete automatically turns on the -depth option.
    Warnings: Don't forget that the find command line is evaluated as an expression, so putting -delete first will make find try to delete everything below the starting points you specified. When testing a find command line that you later intend to use with -delete, you should explicitly specify -depth in order to avoid later surprises. Because -delete implies -depth, you cannot usefully use -prune and -delete together.
You could have moved the -delete option to the end of your command. Alternatively, you could have used something like
find /path -name "*pattern*" | xargs rm -f
or
find /path -name "*pattern*" -exec rm -f {} \;
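Applied to the original DS_Store cleanup, the safe pattern is to preview with -print, then swap in -delete at the very end of the same command:
find -L . -depth -name '*DS_Store*' -print
find -L . -depth -name '*DS_Store*' -delete
Here -depth is spelled out explicitly, as the manpage suggests, so both runs traverse identically (and note that some find versions are picky about combining -L with -delete, one more reason to preview first).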
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/337016", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172556/" ] }
337,035
Suppose that I have started a console application which finishes, then shell prompt brings up again. But how can I be sure that it's the real command prompt? What if, for example, the application is a "key logger" which starts a fake prompt when I try to exit? So, how can I detect that I am actually in a pure shell (eg. bash) prompt?
Technically, I will answer your question, ok. How can you ensure that you have come back to your shell? If you assume that the program is not malicious but you think it might run another shell, you can manually define a secret function with a secret content (that you will not export, of course):
$ my_secret_func() { echo "Still alive"; }
$ ~/Downloaded/dubious_program
$ my_secret_func
Still alive
If dubious_program is malicious, it can easily trick you by passing on your input to the original shell and letting it react. More generally, an unsafe executable has numerous ways to install a keylogger under your identity (and do many other malicious things), like installing itself in your ~/.bashrc for example. It could do so even if there were no visible effects; in fact, most malware tries not to have any immediate visible effect so as to minimize the risk of getting detected. So, if you are not certain that what you execute is safe, either execute it as user nobody in a sandbox or don't execute it at all.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337035", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/192251/" ] }
337,055
Aside from using a temporary file to help, is there a way/program that could buffer input from stdin but not output the contents until it gets EOF ? I don't want to use a shell variable either (e.g. buffer=$(cat) ). The program should behave as below (assume the program name is buffered-cat ):
$ buffered-cat
line 1
line 2
line 3
^D   # Ctrl-D here (end of input)
Now that the program has received ^D , buffered-cat outputs the contents:
line 1
line 2
line 3
You can do this with sponge from moreutils . sponge will "soak up standard input and write to a file". With no arguments, that file is standard output. Input given to this command is stored in memory until EOF, and then written out all at once. For writing to a normal file, you can just give the filename: cmd | sponge filename The main purpose of sponge is to allow reading and writing from the same file within a pipeline, but it does what you want as well.
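If moreutils isn't available, awk can stand in for buffered-cat, since it can hold every line in memory and only print from its END block, i.e. after it has seen EOF:
awk '{ line[NR] = $0 } END { for (i = 1; i <= NR; i++) print line[i] }'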
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/337055", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/207461/" ] }
337,141
I want to hear the audible ping. Load the module:
modprobe pcspkr
xset:
xset b
And now ping:
ping -a myhost
But I hear nothing! Why? I'm on a real PC, Slackware 14.2 on an Asus usb3/M3. The terminal is xfce4-terminal; with xterm it works.
Apparently the bell is not enabled by default in xfce4-terminal . The fix is:
1. Go to .config/Terminal
2. Open terminalrc in a text editor.
3. Find the MiscBell setting, and change it to TRUE .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337141", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80389/" ] }
337,171
xsel is a program with which you can access the system clipboard from the command line. If there is no newline at the end of the copied content, it prints a warning after the clipboard content, like this:
$ xsel -b
copied text
\ No newline at end of selection
Earlier I used to think that this warning is printed to standard error, but today I found that the warning is not there even if standard error is merged with standard output:
xsel -b |& less
just prints the copied text, without the warning. Why does it behave like this?
Note that that's the behaviour of xsel in not yet released versions of xsel . Introduced by this change in 2008. It's common for X selections to contain text that doesn't end in newline characters. If you dump it as is, that results in an unterminated line being displayed. With old shells like bash the display becomes: bash-4.4$ xsel -bxselbash-4.4$ (here with the CLIPBOARD selection containing xsel ). The next prompt ends up being appended to the content of the selection. Modern shells like zsh or fish work around that be detecting when the output of the last command doesn't end in newline and give you a visual indication then. With zsh : prompt% xsel -pxsel%prompt% (the reverse-video % after xsel being the indication that a newline was missing). With fish : prompt ~> xsel -px⏎prompt ~> Those newer xsel give you that visual indication themselves: bash-4.4$ xsel -bxsel\ No newline at end of selectionbash-4.4$ Now, that is only useful if xsel is run at the prompt of an old interactive shell. In particular, that "No newline" indication would not be desirable when used as: selection=$(xsel -b) (where xsel 's stdout is a pipe) or: xsel -b > selection.txt (where xsel 's stdout is a regular file). That's why xsel only outputs that indication only when stdout goes to a tty device. Now, where does it display it? Well, the intention is to display it on that tty device. If you run it under strace, you'll see: $ strace -e write ./xsel -bwrite(1, "xsel", 4xsel) = 4write(2, "\n\\ No newline at end of selectio"..., 34\ No newline at end of selection) = 34+++ exited with 0 +++ Which confirms the source : it's output on stderr. And when stdout is not a terminal: $ strace -e write ./xsel -b > /dev/nullwrite(1, "$ strace -e write ./xsel -b | ca"..., 104) = 104+++ exited with 0 +++ It's not output at all. Now one might argue it's a bit silly to output on stderr when the intent is to output that notification to the terminal (stderr could be redirected to a log file for instance as in xsel -b 2> logfile ), but: Generally, when stdout is a terminal device, stderr is as well. That means you can disable that notification when run in a terminal with xsel -b 2> /dev/null which would be more efficient than xsel -b | cat . The isatty() would return true for a serial device that is not connected to a terminal.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337171", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/187585/" ] }
337,172
I have a server: CentOS Linux release 7.3.1611 (Core), kernel 3.10.0-514.2.2.el7.x86_64 #1 SMP Tue Dec 6 23:06:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux. I think its network connection cut out at one point (it's back now). I haven't been able to find anything in /var/log/messages; maybe I just don't know what to look for? Essentially I'm looking for two things: whether there was a problem with the NIC, and whether the server lost its internet connection. The second one is obviously harder to figure out (maybe impossible?). Obviously I should have some external monitoring solution, but from an educational perspective, where would you look (locally on the host) to solve this mystery?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337172", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29731/" ] }
337,175
I am using Ubuntu MATE. When I want to access the Home directory it shows: The path for the directory containing caja settings need read and write permissions: /home/xxxx/.config/caja . I tried to change the permissions and restarted, but I run into the same problem.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337175", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210299/" ] }
337,182
How can I set up a different umask for directories than for files? I need dirs with umask 003 and files with umask 117.
umask is global in bash . One thing you could do is create a mkdir wrapper (a script; you pick the name) that changes the mask around the real call:

#!/bin/bash
umask 0701
/path/to/real/mkdir "$@"
umask 0604

This was answered here: StackOverflow - Set Different Umask For Files And Folders

Remember: for directories the base permissions are ( rwxrwxrwx ) 0777 and for files they are 0666 , meaning you will not get execute permissions on file creation inside your shell even if the umask allows it. This is clearly done to increase security when creating new files. Example:

[admin@host test]$ pwd
/home/admin/test
[admin@host test]$ umask
0002
[admin@host test]$ mkdir test
[admin@host test]$ touch test_file
[admin@host test]$ ls -l
total 4
drwxrwxr-x 2 admin admin 4096 Jan 13 14:53 test
-rw-rw-r-- 1 admin admin 0 Jan 13 14:53 test_file

The umask section of the Unix Specification says nothing about the specifics of this permission arithmetic; it's up to the shell developers (and OS makers) to decide.
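A sketch of the same idea as a shell function instead of a script, run in a subshell so the temporary umask never leaks into the calling shell. The 003/117 values come from the question; everything else is illustrative:

# Files are created under umask 117; directories under umask 003.
umask 117
mkdir() {
    # Subshell: the umask change is scoped to this single call.
    ( umask 003; command mkdir "$@" )
}

mkdir newdir     # directory created with umask 003 in effect
touch newfile    # file created with umask 117 in effect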
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337182", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102176/" ] }
337,198
I have recently been trying to iron out a few minor problems with my new Debian 8.6 ppc install. I have found or been given commands to run in the terminal which fix them. I have added these commands to /etc/init.d/rc.local and to /etc/rc.local as well as .profile but I still need to run them in the terminal after booting and logging in to get them to work. One is sudo modprobe snd-aoa-i2sbus to get the sound working and the other is synclient TapButton1=1 to enable touchpad tap.
You don't need to run commands manually for those tasks. Use the specific configuration files to deal with module loading and peripheral configuration.

Sound module loading

You can solve the snd-aoa-i2sbus issue by editing /etc/modules and adding a line with the name of the module. It will look something like:

root@host:~# cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
snd-aoa-i2sbus
loop

Module options while loading: if you need to change specific module parameters during loading, add these parameters to /etc/modprobe.d/<your_module>.conf . Check which parameters (if any) exist by executing:

modinfo snd-aoa-i2sbus | grep '^parm:'

Touchpad configuration

To change touchpad buttons, edit /etc/X11/xorg.conf.d/50-synaptics.conf (create it if it does not exist) and put the following content inside:

Section "InputClass"
    Identifier "touchpad catchall"
    Driver "synaptics"
    MatchIsTouchpad "on"
    Option "TapButton1" "1"
    Option "TapButton2" "2"
    Option "TapButton3" "3"
EndSection

Just map the button behaviour/action using the Option parameter, changing the touchpad buttons to better fit your needs. As pointed out in the comments to this answer, if the xorg.conf.d/ directory is missing, it's just a matter of creating it inside /etc/X11 . There is no need to tweak xorg.conf directly.
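For completeness, a sketch of what a module-options file could look like. The option name below is a placeholder, not a real snd-aoa-i2sbus parameter; use only names reported by the modinfo command above:

# /etc/modprobe.d/snd-aoa-i2sbus.conf
# 'some_param' is hypothetical; substitute a parameter listed by
#   modinfo snd-aoa-i2sbus | grep '^parm:'
options snd-aoa-i2sbus some_param=1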
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337198", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208773/" ] }
337,211
I want to copy all files (including sub-folders) from the $HOME directory to the Desktop in bash. As you know, Desktop is inside $HOME , so when I copy all the files I get a message like: cannot copy a directory, '/home/adminuser/Desktop', into itself, '/home/adminuser/Desktop/' . I don't know a suitable way to exclude the Desktop folder. I use this: cp -r $HOME/* ~/Desktop/ . Does anybody know how to do this, or can anybody help me?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337211", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210332/" ] }
337,289
I have installed CentOS 7 on my machine and have been using it for the last 4-5 months; it was working fine. But a few days back, due to a power cut (improper shutdown), something bad happened to my machine. Now when I start the system it gives me this message at the beginning:

XFS (dm-0): Internal error XFS_WAIT_CORRUPTED at line 1600 of file fs/xfs/libxfs/xfs_alloc.c. Caller xfs_free_extent+0xf9/0x130 [xfs]
XFS (dm-0): Failed to recover EFIs

with an ending error message:

...Mounting /sysroot...
[ ***] A start job is running for /sysroot (3min 59s / 4min 31s)
[240.527013] INFO: task mount:406 blocked for more than 120 seconds.
[ 240.527056] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message."
[FAILED] Failed to mount /sysroot.
See 'systemctl status sysroot.mount' for more details.
[DEPEND] Dependency failed for Initrd Root File System.
[DEPEND] Dependency failed for Reload Configuration from the Real Root.
[ OK ] Stopped dracut pre-pivot and cleanup hook.
[ OK ] Stopped target Initrd Default Target.
[ OK ] Reached target Initrd File System.
[ OK ] Stopped dracut mount hook.
[ OK ] Stopped target Basic System.
[ OK ] Stopped System Initialization.
Starting Emergency Shell...
Generating "/run/initramfs/rdsosreport.txt"
Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a usb stick or /boot
after mounting them and attach it to a bug report.
:/#

There can be two solutions to this problem:

1. Fix this error (corrupted files).
2. Reinstall (repair) the whole operating system, like we normally do in Windows.

Please guide me on how to do either of the above to solve my problem. Note: please let me know if there is a way to reinstall the OS without losing old data.

Edit 1: I ran xfs_repair from a live CD and the output is:

sudo xfs_repair -v /dev/dm-0
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.
Phase 1 - find and verify superblock...
superblock read failed, offset 0, size 524288, ag 0, rval -1
fatal error -- Input/output error
In my case, the following command helped:

sudo xfs_repair -v -L /dev/dm-0

My node had failed to boot after a power outage with the error "Failed to mount /sysroot" and entered emergency mode. Mounting and unmounting failed, so I just went ahead and ran the repair with the -L option, and that let my node boot up again.
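A sketch of the full sequence from the emergency shell, assuming the root filesystem sits on /dev/dm-0 as in the question. Note that -L zeroes the XFS log and may discard the last in-flight writes, so treat it as a last resort after a plain repair fails:

# Make sure the filesystem is not mounted before repairing.
umount /dev/dm-0 2>/dev/null
# Try a normal repair first; fall back to -L only if the log
# cannot be replayed.
xfs_repair -v /dev/dm-0 || xfs_repair -v -L /dev/dm-0
reboot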
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/337289", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147715/" ] }
337,309
I am running the script below:

#!/bin/bash
ps ax | grep -q [v]arnish
if [ $? -eq 0 ]; then
    echo varnish is running...
    exit 0
else
    echo "Critical : varnish is not running "
    exit 2
fi

The output is like:

[root@server ~]# sh -x check_varnish_pro.sh
+ ps ax
+ grep -q '[v]arnish'
+ '[' 0 -eq 0 ']'
+ echo varnish is running...
varnish is running...
+ exit 0

When I run the same thing on the command line I get exit status 1:

[root@server ~]# ps ax | grep -q [v]arnish; echo $?
1

In this case varnish is not installed on the server. The script works fine on a server where varnish is installed. Why the different exit status when run from the script versus the command line? How can I improve this script?
In general, it's a bad idea to try the simple approach with ps and grep to determine whether a given process is running. You would be much better off using pgrep for this:

if pgrep "varnish" >/dev/null; then
    echo "Varnish is running"
else
    echo "Varnish is not running"
fi

See the manual for pgrep . On some systems (probably not on Linux), you get a -q flag that corresponds to the same flag for grep , which gets rid of the need to redirect to /dev/null . There's also a -f flag that performs the match on the full command line rather than only on the process name. One may also limit the match to processes belonging to a specific user using -u .

Installing pgrep also gives you access to pkill , which allows you to signal processes based on their names.

Also, if this is a service daemon , and if your Unix system has a way of querying it for information (e.g., whether it's up and running or not), then that is the proper way of checking on it. On Linux, you have systemctl ( systemctl is-active --quiet varnish will return 0 if it's running, 3 otherwise); on OpenBSD you have rcctl , etc.

Now to your script: in your script, you parse the output from ps ax . This output will contain the name of the script itself, check_varnish_pro.sh , which obviously contains the string varnish . This gives you a false positive. You would have spotted this if you had run it without the -q flag for grep while testing:

#!/bin/bash
ps ax | grep '[v]arnish'

Running it:

$ ./check_varnish_pro.sh
31004 p1 SN+ 0:00.04 /bin/bash ./check_varnish_pro.sh

Another issue is that although you try to "hide" the grep process from detection by using [v] in the pattern, that approach will fail if you happen to run the script or the command line in a directory that has a file or directory named varnish in it (in which case you get a false positive, again). This is because the pattern is unquoted, so the shell will perform filename globbing with it. See:

bash-4.4$ set -x
bash-4.4$ ps ax | grep [v]arnish
+ ps ax
+ grep '[v]arnish'
bash-4.4$ touch varnish
+ touch varnish
bash-4.4$ ps ax | grep [v]arnish
+ ps ax
+ grep varnish
91829 p2 SN+p 0:00.02 grep varnish

The presence of the file varnish causes the shell to replace [v]arnish with the filename varnish , and you get a hit on the pattern in the process table (the grep process itself).
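Putting those pieces together, a sketch of a rewritten check that prefers the service manager and falls back to pgrep on non-systemd hosts; the unit name varnish and the exit codes (taken from the question's script) are assumptions:

#!/bin/bash
# Ask systemd first; fall back to pgrep where systemctl is absent.
if command -v systemctl >/dev/null 2>&1; then
    systemctl is-active --quiet varnish && running=0 || running=1
else
    # -x matches the process name exactly, avoiding self-matches.
    pgrep -x varnish >/dev/null && running=0 || running=1
fi

if [ "$running" -eq 0 ]; then
    echo "varnish is running..."
    exit 0
else
    echo "Critical: varnish is not running"
    exit 2
fi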
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337309", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/98471/" ] }