Columns: source_id (int64, 1 to 4.64M), question (string, lengths 0 to 28.4k), response (string, lengths 0 to 28.8k), metadata (dict)
332,712
when I run mount , I can see my hard drive mounted as fuseblk . /dev/sdb1 on /media/ecarroll/hd type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks2) However, fuseblk doesn't tell me what filesystem is on my device. I found it using gparted, but I want to know how to find the filesystem using command line utilities.
I found the answer provided in the comments by Don Crissti to be the best: lsblk -no name,fstype This shows me exactly what I want and I don't have to unmount the device: mmcblk0 └─mmcblk0p1 exfat See also the man page on lsblk .
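For illustration (not part of the original answer), the same flags also accept a device argument if you only care about one device, e.g. the one from the question:
  lsblk -no name,fstype /dev/sdb1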
{ "source": [ "https://unix.stackexchange.com/questions/332712", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3285/" ] }
332,791
From here I understand that to disable Ctrl + S the stty -ixon command can be used and it works, but as soon as I close the terminal and open another I have to re-enter the command. To permanently disable Ctrl + S I have made a startup.sh that contains the stty -ixon command and run it with crontab at @reboot but it does not work. So what will be the solution to permanently disable Ctrl + S ?
To disable Ctrl - s permanently in terminal just add this line at the end of your .bashrc script (generally in your home directory) stty -ixon An explanation of why this exists and what it relates to can be found in this answer: https://retrocomputing.stackexchange.com/a/7266
{ "source": [ "https://unix.stackexchange.com/questions/332791", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/95588/" ] }
332,886
I've created a program that intentionally has a divide by zero error. If I run it in the command line it returns: "Floating point exception" But if I run this as a systemd service I can not see this error message. In my systemd script I have added: StandardError=journal But the error message is nowhere to be seen when using journalctl . How can this error message be added to the log seen with journalctl ?
To get all errors for running services using journalctl : $ journalctl -p 3 -xb where -p 3 means priority err , -x provides extra message information, and -b means since last boot
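To narrow this down to a single service, the same flags can be combined with -u (a small sketch; the unit name here is just a placeholder):
  journalctl -u myservice.service -p err -b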
{ "source": [ "https://unix.stackexchange.com/questions/332886", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/203765/" ] }
333,061
I just finished installing Debian 8 (Jessie) and tried to make a directory in lib/firmware , because there was a file missing ( rtl8723befw.bin ) in the installation, and it says mkdir: cannot create directory `rtlwifi`: Permission denied I tried putting sudo on the front, but then it returns: bash: sudo: command not found When trying to install sudo with apt-get install sudo or even apt-get update it returns: E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied) E: Unable to lock the administration directory (/var/lib/dpkg/), are you root? I am really at a loss of what to do. All the solutions that I seem to find for the latest error is to use sudo, but I don't even have that.
If you do not have sudo installed, you will need to actually become root. Use su - and provide the root user's password (not your password) when asked. Once you have become root, you can then apt-get install sudo , log out of the root shell, and actually use sudo as you are trying to, now that it will have been installed.
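A minimal sketch of that sequence (the username is a placeholder; on Debian you typically also have to add your user to the sudo group before sudo will accept you):
  su -
  apt-get update && apt-get install sudo
  adduser yourusername sudo   # takes effect at your next login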
{ "source": [ "https://unix.stackexchange.com/questions/333061", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/207390/" ] }
333,144
I'm trying to copy the contents of a failing USB thumb drive. If I read the data too fast, the drive's controller chip overheats and the drive vanishes from the system. When that happens, I need to unplug the drive, wait a minute or so for it to cool, plug it back in, and re-start the copy. I've got an old backup of the contents of the drive, so the obvious way to get the rest of the data is to use rsync to bring the backup up to date, but this runs into the whole "read too fast, the drive vanishes, and I need to start over" issue. Is there a way to tell rsync to only read X megabytes of data per minute? Alternatively, is it possible to tell it to suspend operations when the drive vanishes, and resume when it gets plugged back in?
Unlike DopeGhoti's experience, the --bwlimit flag does limit data transfer, with my rsync (v3.1.2). test: $ dd if=/dev/urandom bs=1M count=10 of=data 10+0 records in 10+0 records out 10485760 bytes (10 MB, 10 MiB) copied, 0.0871822 s, 120 MB/s $ du -h data 10M data $ time rsync -q data fast 0.065 seconds $ time rsync -q --bwlimit=1M data slow 10.004 seconds (note: my time output looks different to most time invocations ( zsh feature), those times weren't edited by me) Else, perhaps something along the lines of a double -exec in find . I believe that rsync -R should create & copy the parent folders, but if it doesn't, then cp --parents should. $ find /failing/usb -exec rsync -R {} /somewhere/safe/ \; -exec sleep 1 \; Note : also check out ddrescue , it might be right what you're looking for :)
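As a rough sketch of the ddrescue suggestion (device and file names are placeholders; -r3 retries bad areas three times, and the map file lets you resume after unplugging and re-plugging the drive):
  sudo ddrescue -r3 /dev/sdb1 usb-image.img usb-rescue.map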
{ "source": [ "https://unix.stackexchange.com/questions/333144", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62988/" ] }
333,186
I am running in an interactive bash session. I have created some file descriptors, using exec, and I would like to list what is the current status of my bash session. Is there a way to list the currently open file descriptors?
Yes, this will list all open file descriptors: $ ls -l /proc/$$/fd total 0 lrwx------ 1 isaac isaac 64 Dec 28 00:56 0 -> /dev/pts/6 lrwx------ 1 isaac isaac 64 Dec 28 00:56 1 -> /dev/pts/6 lrwx------ 1 isaac isaac 64 Dec 28 00:56 2 -> /dev/pts/6 lrwx------ 1 isaac isaac 64 Dec 28 00:56 255 -> /dev/pts/6 l-wx------ 1 isaac isaac 64 Dec 28 00:56 4 -> /home/isaac/testfile.txt Of course, as usual: 0 is stdin, 1 is stdout and 2 is stderr. The 4th is an open file (to write) in this case.
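If lsof is installed, the same information can also be viewed with:
  lsof -p $$   # list everything the current shell has open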
{ "source": [ "https://unix.stackexchange.com/questions/333186", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39807/" ] }
333,190
I am using the following line at the beginning of a bash shell script: IFS=':#:' But it is not separating fields with :#:, only with colon. What is the issue? EDIT: This is my data in text file: f:#:0 c:#:Test C s:#:test S ctype:#:0 a:#:test A t:#:10:02:03 r:#:test r f:#:0 c:#:Test C1 s:#:test S1 ctype:#:1 a:#:test A1 t:#:00:02:22 r:#:test r f:#:20 c:#:Test C s:#:test S ctype:#:2 a:#:test A1 t:#:00:02:03 r:#:test r ... and I am reading it using the following code: IFS=':#:' while read -r key value; do ..... done < "$FileName"
Yes, this will list all open file descriptors: $ ls -l /proc/$$/fd total 0 lrwx------ 1 isaac isaac 64 Dec 28 00:56 0 -> /dev/pts/6 lrwx------ 1 isaac isaac 64 Dec 28 00:56 1 -> /dev/pts/6 lrwx------ 1 isaac isaac 64 Dec 28 00:56 2 -> /dev/pts/6 lrwx------ 1 isaac isaac 64 Dec 28 00:56 255 -> /dev/pts/6 l-wx------ 1 isaac isaac 64 Dec 28 00:56 4 -> /home/isaac/testfile.txt Of course, as usual: 0 is stdin, 1 is stdout and 2 is stderr. The 4th is an open file (to write) in this case.
{ "source": [ "https://unix.stackexchange.com/questions/333190", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/125463/" ] }
333,225
https://www.centos.org/docs/5/html/5.2/Deployment_Guide/s3-proc-self.html says The /proc/self/ directory is a link to the currently running process. There are always multiple processes running concurrently, so which process is "the currently running process"? Does "the currently running process" have anything to do with which process is currently running on the CPU, considering context switching? Does "the currently running process" have nothing to do with foreground and background processes?
This has nothing to do with foreground and background processes; it only has to do with the currently running process. When the kernel has to answer the question “What does /proc/self point to?”, it simply picks the currently-scheduled pid , i.e. the currently running process (on the current logical CPU). The effect is that /proc/self always points to the asking program's pid; if you run ls -l /proc/self you'll see ls 's pid, if you write code which uses /proc/self that code will see its own pid, etc.
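A quick way to see this effect from the shell (echo is a builtin, so it reports the shell's own PID, while readlink is a separate process and sees its own):
  echo $$
  readlink /proc/self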
{ "source": [ "https://unix.stackexchange.com/questions/333225", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
333,368
After the latest upgrade on Debian stretch, hitting alt+shift on my keyboard makes it change layout, which breaks all my alt+shift+<anything> xbindkeys shortcuts. I have disabled all shortcuts in Settings -> Keyboard -> Input. Still the same. In Settings -> Languages, it is said that this alt+shift behaviour can be tweaked in.. Settings -> Keyboard. But alt+shift seems to be set nowhere there. Is it hardcoded? Is there a way xbindkeys can work around this?
Okay, got it: this line in my /etc/default/keyboard XKBOPTIONS="grp:alt_shift_toggle,grp_led:scroll" .. should not contain grp:alt_shift_toggle , which is the relevant xkb option according to this post . In addition, Gnome overrides xkb options according to this other post . As a consequence, this output: $ dconf read /org/gnome/desktop/input-sources/xkb-options ['grp:alt_shift_toggle','grp_led:scroll'] .. should not read grp:alt_shift_toggle on my machine either. So after I ran: dconf write /org/gnome/desktop/input-sources/xkb-options "['grp_led:scroll']" I got my good'ol behaviour back ;) I have filed this as a bug to Gnome.
{ "source": [ "https://unix.stackexchange.com/questions/333368", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87656/" ] }
333,728
The security team of my organization told us to disable the following weak ciphers because they issue weak keys. arcfour arcfour128 arcfour256 But I tried looking for these ciphers in the ssh_config and sshd_config files and found them commented out. grep arcfour * ssh_config:# Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc Where else should I check to disable these ciphers for SSH?
If you have no explicit list of ciphers set in ssh_config using the Ciphers keyword, then the default value, according to man 5 ssh_config (client-side) and man 5 sshd_config (server-side), is: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128, [email protected],[email protected], [email protected], aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc, aes256-cbc,arcfour Note the presence of the arcfour ciphers. So you may have to explicitly set a more restrictive value for Ciphers . ssh -Q cipher from the client will tell you which schemes your client can support. Note that this list is not affected by the list of ciphers specified in ssh_config . Removing a cipher from ssh_config will not remove it from the output of ssh -Q cipher . Furthermore, using ssh with the -c option to explicitly specify a cipher will override the restricted list of ciphers that you set in ssh_config and possibly allow you to use a weak cipher. This is a feature that allows you to use your ssh client to communicate with obsolete SSH servers that do not support the newer stronger ciphers. nmap --script ssh2-enum-algos -sV -p <port> <host> will tell you which schemes your server supports.
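For example, to drop the arcfour family you could set an explicit list (taken from the default list above, minus the weak entries) in sshd_config and verify the effective value with sshd's test mode. This is only a sketch; adjust the list to your policy:
  Ciphers aes128-ctr,aes192-ctr,aes256-ctr
  sudo sshd -T | grep -i ciphers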
{ "source": [ "https://unix.stackexchange.com/questions/333728", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19072/" ] }
333,743
When I use less file1 file2 I get both files shown in the "less buffer viewer", but less file1 file2 | cat prints the content of both files appended to stdout. How does less know whether it should show the "less buffer viewer" or produce output to stdout for the next command? What mechanism is used for doing this?
less prints text to stdout. stdout goes either to a terminal (/dev/tty?), in which case it opens the default buffer viewer; through a pipe, when piping it to another program using | ( less text | cut -d: -f1 ); or to a file, when redirecting it with > ( less text > tmp ). There is a C function called "isatty" which checks if the output is going to a tty (less 4.81, main.c, line 112). If so, it uses the buffer viewer, otherwise it behaves like cat . In bash you can use test (see man test ): -t FD file descriptor FD is opened on a terminal -p FILE exists and is a named pipe Example: [[ -t 1 ]] && \ echo 'STDOUT is attached to TTY' [[ -p /dev/stdout ]] && \ echo 'STDOUT is attached to a pipe' [[ ! -t 1 && ! -p /dev/stdout ]] && \ echo 'STDOUT is attached to a redirection'
{ "source": [ "https://unix.stackexchange.com/questions/333743", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/207872/" ] }
333,766
Is there any way to remove ^C when you hit CTRL + C in the shell included with Red Hat Enterprise Linux 6 ("Santiago")? I have permission to edit my own .bash_profile .
Edit (or create) your ~/.inputrc file. Add a line saying set echo-control-characters Off This will instruct the GNU Readline library (which Bash uses) to not output (echo) any control characters to the screen. The setting will be active in all new Bash sessions thereafter (and in any other utility that uses the Readline library). Notice that if your Unix system comes with a system-wide configuration file for the Readline library (usually /etc/inputrc ), then your personal configuration file will need to include that file: $include /etc/inputrc set echo-control-characters Off Another alternative is to make a personal copy of the system-wide configuration file and then modify that.
{ "source": [ "https://unix.stackexchange.com/questions/333766", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/207888/" ] }
334,170
On Windows I have frequently changed the priority of a games process to 'high' or 'realtime' to get a performance boost. This has never resulted in any problems with my hardware. I was thinking that maybe I could do this on Linux using the chrt command to change the realtime priority of the games process, as renice ing, even to -20 (the highest priority) doesn't seem to provide any noticeable boost. However, I am wary of doing this without knowing whether it might be bad for my CPU. Can anyone inform me on the risks?
Changing the priority of a process only determines how often this process will run when other processes are competing for CPU time. It has no impact when the process is the only one using CPU time. A minimum-priority process on an otherwise idle system gets 100% CPU time, same as a maximum-priority process. So you can run your game with a higher priority, but that won't make it run faster unless something else on the system is using a significant amount of CPU time. I recommend keeping the priority lower than the X server, because if the X server wants CPU time, it's likely to be because the game is asking it to display something complex, and display is usually a CPU-demanding task (but it depends how much of the work is done in the GPU — CPU priorities have no influence on the GPU). CPUs are designed to execute code. Changing process priorities won't affect how much work the CPU does, but even if it did, that wouldn't damage the CPU, it would only make it run hotter and so make the fans in the computer blow harder.
{ "source": [ "https://unix.stackexchange.com/questions/334170", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92090/" ] }
334,382
After starting a bash terminal, I noticed that the PATH variable contains duplicate entries. My terminal starts a login shell , so ~/.bash_profile is sourced, followed by ~/.profile and ~/.bashrc . Only in ~/.profile do I create the paths entries which are duplicated. To be pedantic, this is the order in which the files that SHOULD be sourced are being sourced: Sourced /etc/profile Sourced /etc/bash.bashrc Sourced .bash_profile Sourced .profile Sourced .bashrc Before anyone marks this as a duplicate of "PATH variable contains duplicates", keep reading. At first I thought this had to do with ~/.profile being sourced twice, so I had the file write to a log file whenever it was sourced, and surprisingly it only logged one entry, which tells me that it was only sourced once. Even more surprising is the fact that when I comment out the entries which were in ~/.profile , the entries still appear in the PATH variable. This has led me to three conclusions, one of which was quickly ruled out: Bash ignores valid bash comments and still executes the commented code There is a script which reads the ~/.profile and ignores any code that prints an output (the log file for example) There is another copy of my ~/.profile which is being sourced elsewhere The first one, I quickly concluded not to be the case due to some quick testing. The second and third options are where I need help with. How do I gather a list of scripts which are executed when my terminal starts up? I used echo in the files that I checked to know if they are sourced by bash, but I need to find a conclusive method which traces the execution up the point when the terminal is ready for me to start typing into it. If the above is not possible, then can anyone suggest where else I can look to see which scripts are being run . Future reference This is the script I now use for adding to my path: function add_to_path() { for path in ${2//:/ }; do if ! [[ "${!1}" =~ "${path%/}" ]]; then # ignore last / new_path="$path:${!1#:}" export "$1"="${new_path%:}" # remove trailing : fi done } I use it like this: add_to_path 'PATH' "/some/path/bin" The script checks if the path already exists in the variable before prepending it. For zsh users, you can use this equivalent: # prepends the given path(s) to the supplied PATH variable # ex. add_to_path 'PATH' "$(go env GOPATH)/bin" function add_to_path() { # (P)1 path is expanded from $1 # ##: Removes leading : local -x pth="${(P)1##:}" # (s.:.) splits the given variable at : for p in ${(s.:.)2}; do # %%/ Remove trailing / # :P Behaves similar to realpath(3) local p="${${p%%/}:P}" if [[ ! "$pth" =~ "$p" ]]; then pth="$p:$pth" fi done export "$1"="${pth%%:}" } Edit 28/8/2018 One more thing I found I could do with this script is to also fix the path. So at the start of my .bashrc file, I do something like this: _temp_path="$PATH" PATH='/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin' add_to_path 'PATH' "$_temp_path" unset _temp_path It is up to you what the PATH should start with. Examine PATH first to decide.
If your system has strace then you can list the files opened by the shell, for example using echo exit | strace bash -li |& grep '^open' ( -li means login shell interactive; use only -i for an interactive non-login shell.) This will show a list of files which the shell opened or tried to open. On my system, they are as follows: /etc/profile /etc/profile.d/* (various scripts in /etc/profile.d/ ) /home/<username>/.bash_profile (this fails, I have no such file) /home/<username>/.bash_login (this fails, I have no such file) /home/<username>/.profile /home/<username>/.bashrc /home/<username>/.bash_history (history of command lines; this is not a script) /usr/share/bash-completion/bash_completion /etc/bash_completion.d/* (various scripts providing autocompletion functionality) /etc/inputrc (defines key bindings; this is not a script) Use man strace for more information.
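Note that on newer systems the C library opens files with the openat() system call rather than open(), so a slightly wider grep pattern may be needed. The same idea:
  echo exit | strace bash -li |& grep -E '^open(at)?\('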
{ "source": [ "https://unix.stackexchange.com/questions/334382", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44793/" ] }
334,578
One of my coworkers has provided me with a Bash syntax that I am unfamiliar with. My Google foo has failed me on figuring out what it does and why/when I should use it. The command that he sent me was of this form: someVariable=something command Initially, I thought that this was equivalent to the following: someVariable=something ; command Or someVariable=something command But this doesn't appear to the be case. Examples: [Jan-03 11:26][~]$ # Look at the environment variable BAZ. It is currently empty [Jan-03 11:26][~]$ echo $BAZ [Jan-03 11:27][~]$ # Try running a command of the same format [Jan-03 11:27][~]$ BAZ=jake echo $BAZ [Jan-03 11:27][~]$ [Jan-03 11:27][~]$ # Now, echo BAZ again. It is still empty: [Jan-03 11:27][~]$ echo $BAZ [Jan-03 11:27][~]$ [Jan-03 11:28][~]$ [Jan-03 11:28][~]$ # If we add a semi-colon to the command, we get dramatically different results: [Jan-03 11:28][~]$ BAZ=jake ; echo $BAZ jake [Jan-03 11:28][~]$ [Jan-03 11:28][~]$ # And we can see that the variable is actually set: [Jan-03 11:29][~]$ echo $BAZ jake [Jan-03 11:29][~]$ What does this syntax do? What happens to the variable that has been set? Why does this work?
This is equivalent to: ( export someVariable=something; command ) This makes someVariable an environment variable, with the assigned value, but only for the command being run. Here are the relevant parts of the bash manual: Simple Commands A simple command is a sequence of optional variable assignments followed by blank-separated words and redirections, and terminated by a control operator. The first word specifies the command to be executed, and is passed as argument zero. The remaining words are passed as arguments to the invoked command. (...) Simple Command Expansion If no command name results [from command expansion], the variable assignments affect the current shell environment. Otherwise, the variables are added to the environment of the executed command and do not affect the current shell environment . Note: bear in mind that this is not specific to bash , but specified by POSIX . Edit - Summarized discussion from comments in the answer The reason BAZ=JAKE echo $BAZ , doesn't print JAKE is because variable substitution is done before anything else. If you by-pass variable substitution, this behaves as expected: $ echo_baz() { echo "[$BAZ]"; } $ BAZ=Jake echo_baz [Jake] $ echo_baz []
{ "source": [ "https://unix.stackexchange.com/questions/334578", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2650/" ] }
334,865
The difference with and without -h should only be the human readable units, right? Well apparently no... $ du -s . 74216696 . $ du -hs . 35G . Or maybe I'm mistaken and the result of du -s . isn't in KB?
du without an output format specifier gives disk usage in blocks of 512 bytes, not kilobytes . You can use the option -k to display in kilobytes instead. On OS X (or macOS, or MacOS, or Macos; whichever you like), you can customize the default unit by setting the environment variable BLOCKSIZE (this affects other commands as well).
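For example (the BLOCKSIZE variable is the OS X/BSD mechanism mentioned above):
  du -sk .              # force 1024-byte (kilobyte) units
  BLOCKSIZE=1M du -s .  # BSD/OS X du: report in 1 MiB blocks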
{ "source": [ "https://unix.stackexchange.com/questions/334865", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/128868/" ] }
334,905
When I try to run the wget command on http urls I get this error message: ERROR: The certificate of `url' is not trusted. ERROR: The certificate of `url' hasn't got a known issuer.
If you are using Debian or Ubuntu, install the ca-certificates package: $ sudo apt-get install ca-certificates If you don't care about checking the validity of the certificate, use the --no-check-certificate option: $ wget --no-check-certificate https://download/url Note: The second option is not recommended because of the possibility of a man-in-the-middle attack.
{ "source": [ "https://unix.stackexchange.com/questions/334905", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208681/" ] }
334,909
I'm writing a bash script to run on CentOS to first grab the lines for the start and end of an application session, then output if the duration is longer than an hour. The timestamp format in the logfile is: 2017-01-03T00:00:15.529596-03:00 $i is the application session ID. Here is what I have so far: for i in $( grep 'session-enter\|session-exit' logfile | awk '{ print $5}' ); do echo "" echo "***** $i *****" grep 'session-enter\|session-exit' logfile | grep $i start=$(grep session-enter logfile | grep $i | awk '{ print $1 }' | sed 's/-03:00//g') end=$(grep session-exit logfile | grep $i | awk '{ print $1 }' | sed 's/-03:00//g') epochStart=$(date -d "$start" +%s ) epochEnd=$(date -d "$end" +%s ) duration=$( date -u -d "0 $epochEnd seconds - $epochStart seconds" +"%H:%M:%S" ) if [ "$epochStart"="" ] || [ "$epochEnd"="" ] then echo Duration: $duration else continue fi done Any help on this is greatly appreciated.
{ "source": [ "https://unix.stackexchange.com/questions/334909", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208685/" ] }
334,912
AFAIK the basic concept of gpg/pgp is that two people who want to create trust between them each publish a public key and keep a private key (the private key is kept by the user who creates it and is not shared), with a given strength (1024 bits at one time, 4096 now and 8192 in future and so on). Now the two of them need to publish their public keys to a keyserver (similar to a phone directory) and give a link to the keyserver where those keys are published. Now if I go to a server, say https://pgp.mit.edu/ , and search for ashish I get many results https://pgp.mit.edu/pks/lookup?op=get&search=ashish&op=index Let's say the Ashish I want is this one DAD95197 (just an example) how would I import that public key ? I did try └─[$] gpg --keyserver pgp.mit.edu --recv-keys DAD95197 gpg: keyserver receive failed: No keyserver available but as can be seen that didn't work.
gpg --keyserver pgp.mit.edu --recv-keys DAD95197 is supposed to import keys matching DAD95197 from the MIT keyserver. However the MIT keyserver often has availability issues so it’s safer to configure another keyserver. I generally use the SKS pools ; here are their results when looking for “ashish” . To import the key from there, run gpg --keyserver pool.sks-keyservers.net --recv-keys FBF1FC87DAD95197 (never use the short key ids, they can easily be spoofed). This answer explains how to configure your GnuPG installation to always use the SKS pools.
{ "source": [ "https://unix.stackexchange.com/questions/334912", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
334,920
My objective is to make a text on my remote machine (CentOS 7.2) available to seamlessly paste on my local machine (OS X 10.12.2) with the standard ⌘V shortcut. My setup connects to the remote machine with ssh -Y and then attaches to tmux (or creates a new session if non-existent). When I run either echo "test" | xsel -ib or echo "test" | xclip it hangs. The $DISPLAY variable is localhost:10.0 . If I exit tmux the $DISPLAY variable seems to be null and I get a can't open display error.
{ "source": [ "https://unix.stackexchange.com/questions/334920", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52058/" ] }
335,081
I have a Raspberry Pi running OSMC (Debian based). I have set a cron job to start a script, sync.sh, at midnight. 0 0 * * * /usr/local/bin sync.sh I need to stop the script at 7am. Currently I am using: 0 7 * * * shutdown -r now Is there a better way? I feel like rebooting is overkill. Thanks
You can run it with the timeout command , timeout - run a command with a time limit Synopsis timeout [OPTION] NUMBER[SUFFIX] COMMAND [ARG]... timeout [OPTION] Description Start COMMAND, and kill it if still running after NUMBER seconds. SUFFIX may be 's' for seconds (the default), 'm' for minutes, 'h' for hours or 'd' for days. PS. If your sync process takes too much time, you might consider a different approach for syncing your data, maybe block replication.
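Applied to the crontab from the question, that could look like this (a sketch; it assumes the script lives at /usr/local/bin/sync.sh and that GNU coreutils timeout is available in cron's PATH):
  0 0 * * * timeout 7h /usr/local/bin/sync.sh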
{ "source": [ "https://unix.stackexchange.com/questions/335081", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208805/" ] }
335,087
I use Ubuntu 16.04 and I need the following tmux solution because I want to run a timeout process with sleep as in my particular case I wasn't satisfied from at and encountered a bug with nohup (when combining nohup-sleep ). Now, tmux seems as best alternative as it has its own no-hangup mechanism and is actually working fine in manual usage (I ask the question only in regards to automizing the process I can already do manually with it). What I need: I need a way to do the following 3 actions, all in one operation: Attaching a new tmux session. Injecting a ready set of commands to that session, like (sleep 30m ; rm -rf dir_name ; exit) . I would especially prefer a multi-line set, and not one long row. Executing the above command set the moment it was finished to be written as stdin in new tmux session. In other words, I want to execute a code set in another tmux session that was specially created for that cause, but to do all in one operation. Notes: I aim to do all from my original working session (the one I work from most of the time). Generally, I have no intention to visit the newly created session, I just want to create it with its automatically executed code and that's it. If possible, I would prefer an heredoc solution. I think it's most efficient.
If you put the code you want to execute in e.g. /opt/my_script.sh , it's very easy to do what you want: tmux new-session -d -s "myTempSession" /opt/my_script.sh This starts a new detached session, named "myTempSession", executing your script. You can later attach to it to check out what it's doing, by executing tmux attach-session -t myTempSession . That is in my opinion the most straightforward and elegant solution. I'm not aware of any easy way of execute commands from stdin (read "from heredocs") with tmux. By hacking around you might even be able to do it, but it would still be (and look like) a hack. For example, here's a hack that uses the command i suggested above to simulate the behaviour you want (= execute code in a new tmux session from a heredoc. No write occurs on the server's hard drive, as the temporary file is created /dev/shm , which is a tmpfs): ( cat >/dev/shm/my_script.sh && chmod +x /dev/shm/my_script.sh && tmux new-session -d '/dev/shm/my_script.sh; rm /dev/shm/my_script.sh' ) <<'EOF' echo "hacky, but works" EOF
{ "source": [ "https://unix.stackexchange.com/questions/335087", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
335,189
I have a problem which is reproducible on Linux Ubuntu VMs (14.04 LTS) created in Azure. After installing systemd package through script, the system refuses new ssh connections, infinitely. System is booting up. Connection closed by xxx.xxx.xxx.xxx The active ssh connection is maintained though. There is no /etc/nologin file present in the system. The only option I see is a hard reset which solves the problem. But how do I avoid it? Here is the script I am using: #!/bin/bash # Script input arguments user=$1 server=$2 # Tell the shell to quote your variables to be eval-safe! printf -v user_q '%q' "$user" printf -v server_q '%q' "$server" # SECONDS=0 address="$user_q"@"$server_q" function run { ssh "$address" /bin/bash "$@" } run << SSHCONNECTION # Enable autostartup # systemd is required for the autostartup sudo dpkg-query -W -f='${Status}' systemd 2>/dev/null | grep -c "ok installed" > /home/$user_q/systemd-check.txt systemdInstalled=\$(cat /home/$user_q/systemd-check.txt) if [[ \$systemdInstalled -eq 0 ]]; then echo "Systemd is not currently installed. Installing..." # install systemd sudo apt-get update sudo apt-get -y install systemd else echo "systemd is already installed. Skipping this step." fi SSHCONNECTION
I suspect there is a /etc/nologin file (whose content would be "System is booting up.") that is not removed after the systemd installation. [update] What affects you is a bug that was reported on Ubuntu's BTS last December. It is due to a /var/run/nologin file (= /run/nologin since /var/run is a symlink to /run ) that is not removed at the end of the systemd installation. /etc/nologin is the standard nologin file. /var/run/nologin is an alternate file that may be used by the nologin PAM module ( man pam_nologin ). Note that none of the nologin files affect connections by user root, only regular users are prevented from logging in.
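If that turns out to be the case, removing the leftover file from your still-open session restores logins immediately:
  sudo rm -f /run/nologin /etc/nologin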
{ "source": [ "https://unix.stackexchange.com/questions/335189", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/197423/" ] }
335,359
I am trying to do a yum update and all of the mirrors fail with a 404. I put the url into my browser and the error is correct, the url does not exist. YUM is looking for a package that does not exist on the mirrors. See below for the error message: https://mirrors.lug.mtu.edu/epel/7/x86_64/repodata/13b91b1efe2a1db71aa132d76383fdb5311887958a910548546d58a5856e2c5d-primary.sqlite.xz: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirror.oss.ou.edu/epel/7/x86_64/repodata/13b91b1efe2a1db71aa132d76383fdb5311887958a910548546d58a5856e2c5d-primary.sqlite.xz: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://mirror.csclub.uwaterloo.ca/fedora/epel/7/x86_64/repodata/13b91b1efe2a1db71aa132d76383fdb5311887958a910548546d58a5856e2c5d-primary.sqlite.xz: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://mirror.sfo12.us.leaseweb.net/epel/7/x86_64/repodata/13b91b1efe2a1db71aa132d76383fdb5311887958a910548546d58a5856e2c5d-primary.sqlite.xz: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://mirror.math.princeton.edu/pub/epel/7/x86_64/repodata/13b91b1efe2a1db71aa132d76383fdb5311887958a910548546d58a5856e2c5d-primary.sqlite.xz: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. http://kdeforge2.unl.edu/mirrors/epel/7/x86_64/repodata/13b91b1efe2a1db71aa132d76383fdb5311887958a910548546d58a5856e2c5d-primary.sqlite.xz: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://muug.ca/mirror/fedora-epel/7/x86_64/repodata/13b91b1efe2a1db71aa132d76383fdb5311887958a910548546d58a5856e2c5d-primary.sqlite.xz: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. http://fedora.westmancom.com/epel/7/x86_64/repodata/13b91b1efe2a1db71aa132d76383fdb5311887958a910548546d58a5856e2c5d-primary.sqlite.xz: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. https://ca.mirror.babylon.network/epel/7/x86_64/repodata/13b91b1efe2a1db71aa132d76383fdb5311887958a910548546d58a5856e2c5d-primary.sqlite.xz: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. https://mirror.chpc.utah.edu/pub/epel/7/x86_64/repodata/13b91b1efe2a1db71aa132d76383fdb5311887958a910548546d58a5856e2c5d-primary.sqlite.xz: [Errno 14] HTTPS Error 404 - Not Found Trying other mirror. I have tried running yum clean all That command finished successfully, but it did not change any thing. I have also tried the following: rm -f /var/lib//rpm/__db* rpm --rebuilddb That also did not change anything.
Edit your /etc/yum.conf file and add http_caching=packages Explanation: the http_caching option controls how yum handles any HTTP downloads it does and what it should cache. Its default setting is to cache all downloads, and that includes repo metadata. So if a metadata file gets corrupted during download (e.g. it is only partially downloaded), yum will not be able to verify the remote availability of packages and it will fail. The solution is to add http_caching=packages to /etc/yum.conf so yum will only cache packages and will download fresh repository metadata each time.
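After changing the setting, it may help to throw away the broken cached metadata and fetch it fresh:
  yum clean metadata
  yum makecache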
{ "source": [ "https://unix.stackexchange.com/questions/335359", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79156/" ] }
335,367
Is there any way to replace a string with a char ? Example : I have, 123456789 And, I want to replace all chars from position 3 to position 8 with * , to produce this result 12******9 Is there a way perhaps using sed -i "s/${mystring:3:5}/*/g" ?
{ "source": [ "https://unix.stackexchange.com/questions/335367", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209004/" ] }
335,484
I have the Ubuntu file system directories in the root directory and I accidentally copied hundreds of files into the root directory. I intuitively tried to remove the copied files by excluding the file system directories, like rm -rf !{bin,sbin,usr,opt,lib,var,etc,srv,libx32,lib64,run,boot,proc,sys,dev} ./. but it doesn't work. What's the proper way to exclude some directories while deleting the rest? EDIT: Never try any of the commands here without knowing what to do!
Since you are using bash : shopt -s extglob echo rm -rf ./!(bin|sbin|usr|...) I recommend adding echo at the beginning of the command line when you are running something that could potentially blow up the entire system. Remove it if you are happy with the result. Note: The above command won't remove hidden files (those whose names start with a dot). If you want to remove them as well, then also activate the dotglob option: shopt -s dotglob
{ "source": [ "https://unix.stackexchange.com/questions/335484", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29606/" ] }
335,704
─[$] cat ~/.gitconfig [user] name = Shirish Agarwal email = [email protected] [core] editor = leafpad excludesfiles = /home/shirish/.gitignore gitproxy = \"ssh\" for gitorious.org [merge] tool = meld [push] default = simple [color] ui = true status = auto branch = auto Now I want to put my git credentials for github, gitlab and gitorious so each time I do not have to lookup the credentials on the browser. How can this be done so it's automated ? I am running zsh
Using SSH The common approach for handling git authentication is to delegate it to SSH. Typically you set your SSH public key in the remote repository ( e.g. on GitHub ), and then you use that whenever you need to authenticate. You can use a key agent of course, either handled by your desktop environment or manually with ssh-agent and ssh-add . To avoid having to specify the username, you can configure that in SSH too, in ~/.ssh/config ; for example I have Host git.opendaylight.org User skitt and then I can clone using git clone ssh://git.opendaylight.org:29418/aaa (note the absence of a username there). Using gitcredentials If the SSH approach doesn't apply ( e.g. you're using a repository accessed over HTTPS), git does have its own way of handling credentials, using gitcredentials (and typically git-credential-store ). You specify your username using git config credential.${remote}.username yourusername and the credential helper using git config credential.helper store (specify --global if you want to use this setup everywhere). Then the first time you access a repository, git will ask for your password, and it will be stored (by default in ~/.git-credentials ). Subsequent accesses to the repository will use the stored password instead of asking you. Warning : This does store your credentials plaintext in your home directory. So it is inadvisable unless you understand what this means and are happy with the risk.
{ "source": [ "https://unix.stackexchange.com/questions/335704", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
335,716
I'm on Ubuntu and I typed cat .bash_history | grep git and it returned Binary file (standard input) matches My bash_history does exist and there are many lines in it that starts with git . What caused it to display this error and how can I fix it?
You can use grep -a 'pattern' . from man grep page: -a, --text Process a binary file as if it were text; this is equivalent to the --binary-files=text option.
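Applied to the command from the question:
  grep -a git ~/.bash_history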
{ "source": [ "https://unix.stackexchange.com/questions/335716", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72537/" ] }
335,801
When I do which pip3 I get /usr/local/bin/pip3 but when I try to execute pip3 I get an error as follows: bash: /usr/bin/pip3: No such file or directory This is because I recently deleted that file. Now which command points to another version of pip3 that is located in /usr/local/bin but the shell still remembers the wrong path. How do I make it forget about that path? The which manual says which returns the pathnames of the files (or links) which would be executed in the current environment, had its arguments been given as commands in a strictly POSIX-conformant shell. It does this by searching the PATH for executable files matching the names of the arguments. It does not follow symbolic links. Both /usr/local/bin and /usr/bin are in my PATH variable, and /usr/local/bin/pip3 is not a symbolic link, it's an executable. So why doesn't it execute?
When you run a command in bash it will remember the location of that executable so it doesn't have to search the PATH again each time. So if you run the executable, then change the location, bash will still try to use the old location. You should be able to confirm this with hash -t pip3 which will show the old location. If you run hash -d pip3 it will tell bash to forget the old location and should find the new one next time you try.
{ "source": [ "https://unix.stackexchange.com/questions/335801", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/181600/" ] }
335,814
I installed debian. Now I am worried about how my wifi adapter will work on it. I found a thread https://ubuntuforums.org/showthread.php?t=1806839 but wasn't able to install linux-firmware , and sudo apt-get install linux-headers-generic build-essential also didn't work. It doesn't know about linux-firmware. Here are the errors while installing the above: root@debian:/home/love# sudo apt-get install linux-headers-generic build-essential Reading package lists... Done Building dependency tree Reading state information... Done Package linux-headers-generic is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'linux-headers-generic' has no installation candidate root@debian:/home/love# sudo apt-get install linux-firmware Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package linux-firmware While adding a repository the following error showed up: root@debian:/home/love# deb http://http.debian.net/debian/ wheezy main contrib non-free bash: deb: command not found Here are some results of some commands: root@debian:/home/love# uname -a Linux debian 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) x86_64 GNU/Linux root@debian:/home/love# lsusb Bus 002 Device 005: ID 138a:0005 Validity Sensors, Inc. VFS301 Fingerprint Reader Bus 002 Device 004: ID 413c:2107 Dell Computer Corp. Bus 002 Device 016: ID 2a70:f00e Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 004: ID 04d9:a0ac Holtek Semiconductor, Inc. Bus 001 Device 003: ID 0846:9041 NetGear, Inc. WNA1000M 802.11bgn [Realtek RTL8188CUS] Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub What should I do?
{ "source": [ "https://unix.stackexchange.com/questions/335814", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209324/" ] }
335,832
I'm learning about decision making structures and I came across these two pieces of code: if [ -f ./myfile ] then cat ./myfile else cat /home/user/myfile fi [ -f ./myfile ] && cat ./myfile || cat /home/user/myfile Both of them behave the same. Are there any advantages to using one over the other?
No, constructions if A; then B; else C; fi and A && B || C are not equivalent . With if A; then B; else C; fi , command A is always evaluated and executed (at least an attempt to execute it is made) and then either command B or command C are evaluated and executed. With A && B || C , it's the same for commands A and B but different for C : command C is evaluated and executed if either A fails or B fails. In your example, suppose you chmod u-r ./myfile : then, even though [ -f ./myfile ] succeeds, cat ./myfile fails because the file is unreadable, and so you will cat /home/user/myfile as well. My advice: use A && B or A || B all you want, this remains easy to read and understand and there is no trap. But if you mean if...then...else... then use if A; then B; else C; fi .
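A one-line demonstration of the difference (here B deliberately fails, so C runs in the second form while the else branch does not run in the first):
  if true; then false; else echo 'else never runs'; fi
  true && false || echo 'C runs because B failed'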
{ "source": [ "https://unix.stackexchange.com/questions/335832", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/197342/" ] }
336,017
How to detect if isolcpus is activated and on which cpus, when for example you connect for the first time on a server. Conditions: not spawning any process to see where it will be migrated. The use case is that isolcpus=1-7 on a 6 cores i7, seems to not activate isolcpus at boot, and i would like to know if its possible from /proc/ , /sys or any kernel internals which can be read in userspace, to provide a clear status of activation of isolcpus and which cpu are concerned. Or even read active setting of the scheduler which is the first concerned by isolcpus. Consider the uptime is so big, that dmesg is no more displaying boot log to detect any error at startup. Basic answer like " look at kernel cmd line " will not be accepted :)
What you are looking for should be found inside this virtual file: /sys/devices/system/cpu/isolated and the reverse in /sys/devices/system/cpu/present // Thanks to John Zwinck From drivers/base/cpu.c we see that the source displayed is the kernel variable cpu_isolated_map : static ssize_t print_cpus_isolated(struct device *dev, n = scnprintf(buf, len, "%*pbl\n", cpumask_pr_args(cpu_isolated_map)); ... static DEVICE_ATTR(isolated, 0444, print_cpus_isolated, NULL); and cpu_isolated_map is exactly what gets set by kernel/sched/core.c at boot: /* Setup the mask of cpus configured for isolated domains */ static int __init isolated_cpu_setup(char *str) { int ret; alloc_bootmem_cpumask_var(&cpu_isolated_map); ret = cpulist_parse(str, cpu_isolated_map); if (ret) { pr_err("sched: Error, all isolcpus= values must be between 0 and %d\n", nr_cpu_ids); return 0; } return 1; } But as you observed, someone could have modified the affinity of processes, including daemon-spawned ones, cron , systemd and so on. If that happens, new processes will be spawned inheriting the modified affinity mask, not the one set by isolcpus . So the above will give you isolcpus as you requested, but that might still not be helpful. Supposing that you find out that isolcpus has been issued, but has not "taken", this unwanted behaviour could be caused by some process realizing that it is bound to only CPU=0 , believing it is in monoprocessor mode by mistake, and helpfully attempting to "set things right" by resetting the affinity mask. If that was the case, you might try and isolate CPUS 0-5 instead of 1-6, and see whether this happens to work.
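As a complement to the check above, you can also look at the affinity mask your own shell has passed on, which relates to the caveat above about processes whose affinity has been modified:
  grep Cpus_allowed_list /proc/self/status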
{ "source": [ "https://unix.stackexchange.com/questions/336017", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/128153/" ] }
336,304
Consider this: $ ssh localhost bash -c 'export foo=bar' terdon@localhost's password: declare -x DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/1000/bus" declare -x HOME="/home/terdon" declare -x LOGNAME="terdon" declare -x MAIL="/var/spool/mail/terdon" declare -x OLDPWD declare -x PATH="/usr/bin:/bin:/usr/sbin:/sbin" declare -x PWD="/home/terdon" declare -x SHELL="/bin/bash" declare -x SHLVL="2" declare -x SSH_CLIENT="::1 55858 22" declare -x SSH_CONNECTION="::1 55858 ::1 22" declare -x USER="terdon" declare -x XDG_RUNTIME_DIR="/run/user/1000" declare -x XDG_SESSION_ID="c5" declare -x _="/usr/bin/bash" Why does exporting a variable within a bash -c session run via ssh result in that list of declare -x commands (the list of currently exported variables, as far as I can tell)? Running the same thing without the bash -c doesn't do that: $ ssh localhost 'export foo=bar' terdon@localhost's password: $ Nor does it happen if we don't export : $ ssh localhost bash -c 'foo=bar' terdon@localhost's password: $ I tested this by sshing from one Ubuntu machine to another (both running bash 4.3.11) and on an Arch machine, sshing to itself as shown above (bash version 4.4.5). What's going on here? Why does exporting a variable inside a bash -c call produce this output?
When you run a command through ssh , it is run by calling your $SHELL with the -c flag: -c If the -c option is present, then commands are read from the first non-option argument command_string. If there are arguments after the command_string, the first argument is assigned to $0 and any remaining arguments are assigned to the positional parameters. So, ssh remote_host "bash -c foo" will actually run: /bin/your_shell -c 'bash -c foo' Now, because the command you are running ( export foo=bar ) contains spaces and is not properly quoted to form a whole, the export is taken as the command to be run and the rest are saved in the positional parameters array. This means that export is run and foo=bar is passed to it as $0 . The final result is the same as running /bin/your_shell -c 'bash -c export' The correct command would be: ssh remote_host "bash -c 'export foo=bar'"
{ "source": [ "https://unix.stackexchange.com/questions/336304", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22222/" ] }
336,368
I realize the default of the Nemo's right-click "Open in Terminal" is to launch "gnome-terminal", however, my installation is opening "xfce4-terminal" instead. A while back when "gnome-terminal" was broken, I installed "xfce4-terminal" as an alternative. I configured the system-wide defaults to call "xfce4-terminal" for the terminal. After the issue with Gnome-terminal was resolved, I moved the system-wide defaults back to Gnome-terminal. Nautilus starting using Gnome-terminal again, however Nemo continues to only launch "xfce4-terminal". I uninstalled "xfce4-terminal" then the "Open in a Terminal" feature of Nemo stopped working. In attempts to resolve this issue I have done the following: ReInstalled Ubuntu 16.04 Purged and reinstalled Nemo Nemo still will only launch "xfce4-terminal". It appears to be a problem with in my home folder's Nemo configuration or some other per user cache. Creating a new user, and Nemo properly launches "Gnome-Terminal". Can someone help me with where to check and fix Nemo's functionality in my '/home/username` settings. Is there some type of editible configuration to check what happens when clicking on the "Open in Terminal" function?
Nemo uses the gsettings configuration. This restored the intended behavior: $ gsettings set org.gnome.desktop.default-applications.terminal exec gnome-terminal On Ubuntu it's different for some reason: $ gsettings set org.cinnamon.desktop.default-applications.terminal exec gnome-terminal
{ "source": [ "https://unix.stackexchange.com/questions/336368", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81664/" ] }
336,392
From inside a Debian docker container running jessie I get vi blah bash: vi: command not found so naturally I reach for my install command sudo apt-get install vim Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package vim while searching for some traction I came across these suggestions with various outputs cat /etc/apt/sources.list deb http://deb.debian.org/debian jessie main deb http://deb.debian.org/debian jessie-updates main deb http://security.debian.org jessie/updates main apt-get install software-properties-common Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package software-properties-common apt-get install python-software-properties Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package python-software-properties apt-get install apt-file Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package apt-file since this server is the docker container for a mongo image it intentionally is a bare bones Debian installation ... installing vi is just to play about during development
I found this solution apt-get update apt-get install apt-file apt-file update apt-get install vim # now finally this will work !!! here is a copy N paste version of above apt-get update && apt-get install apt-file -y && apt-file update && apt-get install vim -y Alternative approach ... if you simply need to create a new file do this when no editor is available cat > myfile (use terminal to copy/paste) ^D
{ "source": [ "https://unix.stackexchange.com/questions/336392", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10949/" ] }
336,410
So what I have is 2 directories that have the same files, except that directory a is today's data and directory b is yesterday's data. What I want to do is compare the files and output the results into 3 columns, which will be the file name, whether or not the files are identical, and how many days the files have been the same. What I have so far is: ls ./dropzone_current > files.txt is_identical=false filename="files.txt" while read -r line do name="$line" declare -i counter diff -qs ./dropzone_current/$name ./dropzone_backup/$name if [ $? -ne 0 ] then is_identical=false counter=0 printf '%s\t%s\t%s\n' "$name" "$is_identical" "$counter" >> test.txt else counter=$((counter + 1)) is_identical=true printf '%s\t%s\t%s\n' "$name" "$is_identical" "$counter" >> test.txt fi done < "$filename" Essentially, everything works except the counter. I need the counter to be unique to each file name that's being compared, and then update every time the script is run (once a day) but I haven't been able to figure out how to do that.
{ "source": [ "https://unix.stackexchange.com/questions/336410", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209717/" ] }
336,521
I am always surprised that in the folder /bin there is a [ program. Is this what is called when we are doing something like: if [ something ] ? By calling the [ program explicitly in a shell it asks for a corresponding ] , and when I provide the closing bracket it seems to do nothing no matter what I insert between the brackets. Needless to say, the usual way about getting help about a program does not work, i.e. neither man [ nor [ --help works.
The [ command's job is to evaluate test expressions. It returns with a 0 exit status (that means true ) when the expression resolves to true and something else (which means false ) otherwise. It's not that it does nothing, it's just that its outcome is to be found in its exit status. In a shell, you can find out about the exit status of the last command in $? for Bourne-like shells or $status in most other shells (fish/rc/es/csh/tcsh...). $ [ a = a ] $ echo "$?" 0 $ [ a = b ] $ echo "$?" 1 In other languages like perl , the exit status is returned for instance in the return value of system() : $ perl -le 'print system("[", "a", "=", "a", "]")' 0 Note that all modern Bourne-like shells (and fish ) have a built-in [ command. The one in /bin would typically only be executed when you use another shell or when you do things like env [ foo = bar ] or find . -exec [ -f {} ] \; -print or that perl command above... The [ command is also known by the test name. When called as test , it doesn't require a closing ] argument. While your system may not have a man page for [ , it probably has one for test . But again, note that it would document the /bin/[ or /bin/test implementation. To know about the [ builtin in your shell, you should read the documentation for your shell instead. For more information about the history of that utility and the difference with the [[...]] ksh test expression, you may want to have a look at this other Q&A here .
{ "source": [ "https://unix.stackexchange.com/questions/336521", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180653/" ] }
336,609
Whenever I use a pager like less or an editor like nano in the shell (my shell is GNU bash), I see a behaviour I cannot explain completely and which differs from the behaviour I can observe with other tools like cat or ls . I would like to ask how this behaviour comes about. The behaviour that is not easy to explain is that normally all output to stdout/stderr ends up being recorded in the terminal emulator's backbuffer, so I can scroll back, while (not so normally to me) in the case of using less or nano , output is displayed by the terminal-emulator, yet upon exiting the programs, the content "magically disappears". I would like to give these two examples: seq 1 200 (produces 200 lines in the backbuffer) seq 1 200 | less (lets me page 200 lines, yet eventually "cleans up" and nothing is recorded in the backbuffer) My suspicion is that some sort of escape codes are in play and I would appreciate someone pointing me to an explanation of these observed behavioural differences. Since some comments and answers are phrased as if it were my desire to change the behaviour: changing it "would be nice to know", but the desired answer should be a description of the mechanism, not so much the ways to change it.
There are two worldviews here: As far as programs using termcap/terminfo are concerned, your terminal potentially has two modes: cursor addressing mode and scrolling mode . The latter is the normal mode, and a program switches to cursor addressing mode when it needs to move the cursor around the screen by row and column addresses, treating the screen as a two-dimensional entity. termcap and terminfo handle translating this worldview, which is what programs see, into the worldview as seen by terminals. As far as a terminal (emulated or real) is concerned, there are two screen buffers, only one of which is displayed at any time. There's a primary screen buffer and an alternate screen buffer . Control sequences emitted by programs switch the terminal between the two. For some terminals, usually emulated ones, the alternate screen buffer is tailored to the usage of termcap/terminfo. They are designed with the knowledge that part of switching to cursor addressing mode is switching to the alternate screen buffer and part of switching to scrolling mode is switching to the primary screen buffer. This is how termcap/terminfo translate things. So these terminals don't show scrolling user interface widgets when the alternate screen buffer is being displayed, and simply have no scrollback mechanism for that screen buffer. For other terminals, usually real ones, the alternate screen buffer is pretty much like the primary. Both are largely identical in terms of what they support. A few emulated terminals fall into this class, note. Unicode rxvt, for example, has scrollback for both the primary and alternate screen buffers. Programs that present full-screen textual user interfaces (such as vim , nano , less , mc , and so forth) use termcap/terminfo to switch to cursor-addressing mode at start-up and back to scrolling mode when they suspend, or shell out, or exit. The ncurses library does this, but so too do non-ncurses-using programs that build more directly on top of termcap/terminfo. The scrolling within TUIs presented by less or vim is nothing to do with scrollback. That is implemented inside those programs, which are just redrawing their full-screen textual user interface as appropriate as things scroll around. Note that these programs do not "leave no content" in the alternate screen buffer. The terminal simply is no longer displaying what they leave behind. This is particularly noticable with Unicode rxvt on some platforms, where the termcap/terminfo sequences for switching to cursor addressing mode do not implicitly clear the alternate screen buffer. So using multiple such full-screen TUI programs in succession can end up displaying the old contents of the alternate screen buffer as left there by the last program, at least for a little while until the new program writes its output (most noticable when less is at the end of a pipeline). With xterm, one can switch to displaying the alternate screen buffer from the GUI menu of the terminal emulator, and see the content still there. The actual control sequences are what the relevant standards call set private mode control sequences. The relevant private mode numbers are 47, 1047, 1048, and 1049. Note the differences in what extra actions are implied by each, on top of switching to/from the alternate screen buffer. Further reading How to save/restore terminal output Entering/exiting alternate screen buffer OpenSSH, FreeBSD screen overwrite when closing application What exactly is scrollback and scrollback buffer? 
Building blocks of interactive terminal applications
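A quick way to see the mechanism from a shell prompt, assuming a terminal whose terminfo entry defines the relevant capabilities (xterm and most emulators do): smcup and rmcup are exactly what full-screen programs emit when entering and leaving cursor addressing mode.
tput smcup    # switch to the alternate screen buffer
ls            # this output lands on the alternate screen
sleep 3
tput rmcup    # back to the primary screen; the ls output "disappears"
Running this mirrors what less does at startup and exit, minus the actual paging.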
{ "source": [ "https://unix.stackexchange.com/questions/336609", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24394/" ] }
336,790
Every time time I connect headphones to the 3.5mm audio jack on my Dell XPS 13, I hear continuous white noise in addition to the audio I expect to hear. It's much louder than the typical noise floor for a headphone jack. I've found many other reports of this same problem for both the XPS 13 9350 ( 1 , 2 ) and the XPS 13 9360 ( 1 , 2 , 3 ), so it doesn't seem like I have a faulty unit. Is there a way to stop this noise?
Set Headphone Mic Boost gain to 10dB. Any other value seems to cause the irritating background noise in headphones. This can be done with amixer : amixer -c0 sset 'Headphone Mic Boost' 10dB To make this happen automatically every time your headphones are connected, install acpid . Start it by running: sudo systemctl start acpid.service Enable it by running: sudo systemctl enable acpid.service Create the following event script /etc/acpi/headphone-plug event=jack/headphone HEADPHONE plug action=/etc/acpi/cancel-white-noise.sh %e Then create the action script /etc/acpi/cancel-white-noise.sh : #! /bin/bash amixer -c0 sset 'Headphone Mic Boost' 10dB Now Headphone Mic Boost will be set to 10dB every time headphones are connected. To make this effective you need to restart your laptop.
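One detail worth checking if the jack event never seems to trigger the fix: acpid generally only runs action scripts that are executable, so (assuming the paths above) you may also need:
sudo chmod +x /etc/acpi/cancel-white-noise.sh
sudo systemctl restart acpid.service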
{ "source": [ "https://unix.stackexchange.com/questions/336790", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210028/" ] }
336,804
Are there any methods to check what you are actually executing from a bash script? Say your bash script is calling several commands (for example: tar , mail , scp , mysqldump ) and you are willing to make sure that tar is the actual, real tar , which is determinable by the root user being the file and parent directory owner and the only one with write permissions and not some /tmp/surprise/tar with www-data or apache2 being the owner. Sure I know about PATH and the environment, I'm curious to know whether this can be additionally checked from a running bash script and, if so, how exactly? Example: (pseudo-code) tarfile=$(which tar) isroot=$(ls -l "$tarfile") | grep "root root" #and so on...
Instead of validating binaries you're going to execute, you could execute the right binaries from the start. E.g. if you want to make sure you're not going to run /tmp/surprise/tar , just run /usr/bin/tar in your script. Alternatively, set your $PATH to a sane value before running anything. If you don't trust files in /usr/bin/ and other system directories, there's no way to regain confidence. In your example, you're checking the owner with ls , but how do you know you can trust ls ? The same argument applies to other solutions such as md5sum and strace . Where high confidence in system integrity is required, specialized solutions like IMA are used. But this is not something you could use from a script: the whole system has to be set up in a special way, with the concept of immutable files in place.
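A minimal sketch of what that looks like in practice (the backup command and paths are made up for illustration): pin PATH to root-owned system directories at the top of the script, or skip PATH lookup entirely by calling absolute paths.
#!/bin/sh
# Pin PATH to root-owned directories before anything else runs
PATH=/usr/sbin:/usr/bin:/sbin:/bin
export PATH
# ...or bypass PATH lookup altogether:
/bin/tar -czf /var/backups/site.tar.gz /var/www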
{ "source": [ "https://unix.stackexchange.com/questions/336804", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68350/" ] }
337,008
I am using Debian 8.6 LXDE on a Powerbook G4 15" 1.67GHz and would like to enable tap to click on the touchpad. It is already double scrolling but tap to click would help to save the ageing mouse button. Two fingered tap for left click would be the icing on the cake, is this possible?
Debian Jessie To enable the touchpad tapping permanently, copy the 50-synaptics.conf file to /etc/X11/xorg.conf.d then edit it by adding Option "TapButton1" "1" . As root: mkdir /etc/X11/xorg.conf.d cp /usr/share/X11/xorg.conf.d/50-synaptics.conf /etc/X11/xorg.conf.d/50-synaptics.conf The /etc/X11/xorg.conf.d/50-synaptics.conf should be: Section "InputClass" Identifier "touchpad catchall" Driver "synaptics" MatchIsTouchpad "on" Option "TapButton1" "1" Option "TapButton2" "3" EndSection Reboot your system Debian Stretch and Buster (updated) Remove the xserver-xorg-input-synaptics package. (important) # apt remove xserver-xorg-input-synaptics Install xserver-xorg-input-libinput : # apt install xserver-xorg-input-libinput In most cases, make sure you have the xserver-xorg-input-libinput package installed, and not the xserver-xorg-input-synaptics package. As root: create /etc/X11/xorg.conf.d/ mkdir /etc/X11/xorg.conf.d Create the 40-libinput.conf file: echo 'Section "InputClass" Identifier "libinput touchpad catchall" MatchIsTouchpad "on" MatchDevicePath "/dev/input/event*" Driver "libinput" Option "Tapping" "on" EndSection' > /etc/X11/xorg.conf.d/40-libinput.conf Restart your DM, e.g.: # systemctl restart lightdm or # systemctl restart gdm3 Debian wiki : Enable tapping on touchpad
{ "source": [ "https://unix.stackexchange.com/questions/337008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208773/" ] }
337,739
I have a binary (that I can't modify) and I can do: ./binary < file I also can do: ./binary << EOF > "line 1 of file" > "line 2 of file" ... > "last line of file" > EOF But cat file | ./binary gives me an error. I don't know why it doesn't work with a pipe. In all 3 cases the content of file is given to the standard input of binary (in different ways): bash reads the file and gives it to stdin of binary bash reads lines from stdin (until EOF) and gives it to stdin of binary cat reads and puts the lines of file to stdout, bash redirects them to stdin of binary The binary shouldn't notice the difference between those 3 as far as I understood it. Can someone explain why the 3rd case doesn't work? BTW: The error given by the binary is: 20170116/125624.689 - U3000011 Could not read script file '', error code '14'. But my main question is, how is there a difference for any program with that 3 options. Here are some further details: I tried it again with strace and there were in fact some errors ESPIPE (Illegal seek) from lseek followed by EFAULT (Bad address) from read right before the error message. The binary I tried to control with a ruby script (without using temporary files) is part of the callapi from Automic (UC4) .
In ./binary < file binary 's stdin is the file open in read-only mode. Note that bash doesn't read the file at all, it just opens it for reading on the file descriptor 0 (stdin) of the process it executes binary in. In: ./binary << EOF test EOF Depending on the shell, binary 's stdin will be either a deleted temporary file (AT&T ksh, zsh, bash...) that contains test\n as put there by the shell or the reading end of a pipe ( dash , yash ; and the shell writes test\n in parallel at the other end of the pipe). In your case, if you're using bash , it would be a temp file. In: cat file | ./binary Depending on the shell, binary 's stdin will be either the reading end of a pipe, or one end of a socket pair where the writing direction has been shut down (ksh93) and cat is writing the content of file at the other end. When stdin is a regular file (temporary or not), it is seekable. binary may go to the beginning or end, rewind, etc. It can also mmap it, do some ioctl()s like FIEMAP/FIBMAP (if using <> instead of < , it could truncate/punch holes in it, etc). pipes and socket pairs on the other hand are an inter-process communication means, there's not much binary can do beside read ing the data (though there are also some operations like some pipe-specific ioctl() s that it could do on them and not on regular files). Most of the times, it's the missing ability to seek that causes applications to fail/complain when working with pipes, but it could be any of the other system calls that are valid on regular files but not on different types of files (like mmap() , ftruncate() , fallocate() ). On Linux, there's also a big difference in behaviour when you open /dev/stdin while the fd 0 is on a pipe or on a regular file. There are many commands out there that can only deal with seekable files, but when that's the case, that's generally not for the files open on their stdin. $ unzip -l file.zip Archive: file.zip Length Date Time Name --------- ---------- ----- ---- 11 2016-12-21 14:43 file --------- ------- 11 1 file $ unzip -l <(cat file.zip) # more or less the same as cat file.zip | unzip -l /dev/stdin Archive: /proc/self/fd/11 End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive. unzip: cannot find zipfile directory in one of /proc/self/fd/11 or /proc/self/fd/11.zip, and cannot find /proc/self/fd/11.ZIP, period. unzip needs to read the index stored at the end of the file, and then seek within the file to read the archive members. But here, the file (regular in the first case, pipe in the second) is given as a path argument to unzip , and unzip opens it itself (typically on fd other than 0) instead of inheriting a fd already opened by the caller. It doesn't read zip files from its stdin. stdin is mostly used for user interaction. If you run that binary of yours without redirection at the prompt of an interactive shell running in a terminal emulator, then binary 's stdin will be inherited from its caller the shell, which itself will have inherited it from its caller the terminal emulator and will be a pty device open in read+write mode (something like /dev/pts/n ). Those devices are not seekable either. So, if binary works OK when taking input from the terminal, possibly the issue is not about seeking. 
If that 14 is meant to be an errno (an error code set by failing system calls), then on most systems, that would be EFAULT ( Bad address ). The read() system call would fail with that error if asked to read into a memory address that is not writable. That would be independent of whether the fd to read the data from points to a pipe or regular file and would generally indicate a bug 1 . binary possibly determines the type of file open on its stdin (with fstat() ) and runs into a bug when it's neither a regular file nor a tty device. Hard to tell without knowing more about the application. Running it under strace (or truss / tusc equivalent on your system) could help us see what is the system call if any that is failing here. 1 The scenario envisaged by Matthew Ife in a comment to your question sounds a lot plausible here. Quoting him: I suspect it is seeking to the end of file to get a buffer size for reading the data, badly handling the fact that seek doesn't work and attempting to allocate a negative size (not handling a bad malloc). Passing the buffer to read which faults given the buffer is not valid.
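A quick way to see the seekability difference for yourself, following the strace suggestion above ( ./binary and infile are placeholders for your program and input):
strace -e trace=lseek,read,mmap ./binary < infile 2> file.trace
cat infile | strace -e trace=lseek,read,mmap ./binary 2> pipe.trace
diff file.trace pipe.trace   # in the pipe case, any lseek() shows up failing with ESPIPE
Comparing the two traces usually makes it obvious which call the program relies on that only works on a regular file.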
{ "source": [ "https://unix.stackexchange.com/questions/337739", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210721/" ] }
337,754
Does the yum list command get data from the yum repository or from the Red Hat site via the internet? Background: I have updated the yum repository for httpd only (x86_64 with updated httpd rpm) createrepo -update /repository/yum/x86_64 Then I have reverted the original repository file createrepo -update /repository/yum/x86_64_20170116 When I check the httpd version of x86_64_20170116 the httpd version is outdated (original state) ll x86_64_20170116/Packages/ httpd* However, when I enter the commands below, the httpd version is up to date yum list available | grep httpd Could someone please shed some light on this?
In ./binary < file binary 's stdin is the file open in read-only mode. Note that bash doesn't read the file at all, it just opens it for reading on the file descriptor 0 (stdin) of the process it executes binary in. In: ./binary << EOF test EOF Depending on the shell, binary 's stdin will be either a deleted temporary file (AT&T ksh, zsh, bash...) that contains test\n as put there by the shell or the reading end of a pipe ( dash , yash ; and the shell writes test\n in parallel at the other end of the pipe). In your case, if you're using bash , it would be a temp file. In: cat file | ./binary Depending on the shell, binary 's stdin will be either the reading end of a pipe, or one end of a socket pair where the writing direction has been shut down (ksh93) and cat is writing the content of file at the other end. When stdin is a regular file (temporary or not), it is seekable. binary may go to the beginning or end, rewind, etc. It can also mmap it, do some ioctl()s like FIEMAP/FIBMAP (if using <> instead of < , it could truncate/punch holes in it, etc). pipes and socket pairs on the other hand are an inter-process communication means, there's not much binary can do beside read ing the data (though there are also some operations like some pipe-specific ioctl() s that it could do on them and not on regular files). Most of the times, it's the missing ability to seek that causes applications to fail/complain when working with pipes, but it could be any of the other system calls that are valid on regular files but not on different types of files (like mmap() , ftruncate() , fallocate() ). On Linux, there's also a big difference in behaviour when you open /dev/stdin while the fd 0 is on a pipe or on a regular file. There are many commands out there that can only deal with seekable files, but when that's the case, that's generally not for the files open on their stdin. $ unzip -l file.zip Archive: file.zip Length Date Time Name --------- ---------- ----- ---- 11 2016-12-21 14:43 file --------- ------- 11 1 file $ unzip -l <(cat file.zip) # more or less the same as cat file.zip | unzip -l /dev/stdin Archive: /proc/self/fd/11 End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive. unzip: cannot find zipfile directory in one of /proc/self/fd/11 or /proc/self/fd/11.zip, and cannot find /proc/self/fd/11.ZIP, period. unzip needs to read the index stored at the end of the file, and then seek within the file to read the archive members. But here, the file (regular in the first case, pipe in the second) is given as a path argument to unzip , and unzip opens it itself (typically on fd other than 0) instead of inheriting a fd already opened by the caller. It doesn't read zip files from its stdin. stdin is mostly used for user interaction. If you run that binary of yours without redirection at the prompt of an interactive shell running in a terminal emulator, then binary 's stdin will be inherited from its caller the shell, which itself will have inherited it from its caller the terminal emulator and will be a pty device open in read+write mode (something like /dev/pts/n ). Those devices are not seekable either. So, if binary works OK when taking input from the terminal, possibly the issue is not about seeking. 
If that 14 is meant to be an errno (an error code set by failing system calls), then on most systems, that would be EFAULT ( Bad address ). The read() system call would fail with that error if asked to read into a memory address that is not writable. That would be independent of whether the fd to read the data from points to a pipe or regular file and would generally indicate a bug 1 . binary possibly determines the type of file open on its stdin (with fstat() ) and runs into a bug when it's neither a regular file nor a tty device. Hard to tell without knowing more about the application. Running it under strace (or truss / tusc equivalent on your system) could help us see what is the system call if any that is failing here. 1 The scenario envisaged by Matthew Ife in a comment to your question sounds a lot plausible here. Quoting him: I suspect it is seeking to the end of file to get a buffer size for reading the data, badly handling the fact that seek doesn't work and attempting to allocate a negative size (not handling a bad malloc). Passing the buffer to read which faults given the buffer is not valid.
{ "source": [ "https://unix.stackexchange.com/questions/337754", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210733/" ] }
337,774
Today, after doing updates in Debian Stretch, it started displaying these warnings when restarting the ssh service with my current config: /etc/ssh/sshd_config line 17: Deprecated option KeyRegenerationInterval /etc/ssh/sshd_config line 18: Deprecated option ServerKeyBits /etc/ssh/sshd_config line 29: Deprecated option RSAAuthentication /etc/ssh/sshd_config line 36: Deprecated option RhostsRSAAuthentication [....] Restarting OpenBSD Secure Shell server: sshd /etc/ssh/sshd_config line 17: Deprecated option KeyRegenerationInterval /etc/ssh/sshd_config line 18: Deprecated option ServerKeyBits /etc/ssh/sshd_config line 29: Deprecated option RSAAuthentication /etc/ssh/sshd_config line 36: Deprecated option RhostsRSAAuthentication What is happening here? Using Debian 9 with OpenSSH 7.4
In the current Stretch update, the openssh version changed from 7.3 to 7.4, released on 2016-Dec-19. As can be inferred from the Release notes, and from @Jakuje's comments, the OpenSSH maintainers have removed the corresponding configuration options for good, as they are obsolete. So the lines can be safely removed. Also, take heed of: Future deprecation notice We plan on retiring more legacy cryptography in future releases, specifically: In approximately August 2017, removing remaining support for the SSH v.1 protocol (client-only and currently compile-time disabled). In the same release, removing support for Blowfish and RC4 ciphers and the RIPE-MD160 HMAC. (These are currently run-time disabled). Refusing all RSA keys smaller than 1024 bits (the current minimum is 768 bits) The next release of OpenSSH will remove support for running sshd(8) with privilege separation disabled. The next release of portable OpenSSH will remove support for OpenSSL version prior to 1.0.1.
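If you want to silence the warnings in one step, something along these lines should work on Debian (GNU sed assumed; the option names are taken from the warnings above, and a .bak backup is kept):
sudo sed -i.bak -E '/^[[:space:]]*(KeyRegenerationInterval|ServerKeyBits|RSAAuthentication|RhostsRSAAuthentication)/d' /etc/ssh/sshd_config
sudo sshd -t                    # syntax-check what is left
sudo systemctl restart ssh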
{ "source": [ "https://unix.stackexchange.com/questions/337774", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138261/" ] }
337,819
On Slackware, using sbopkg permits one to build a package from source. The repository is not as big as Debian's, but it's nice. Some software can use environment variables; for example, in the VICE c64 emulator, if the variable FFMPEG is set to yes , it will enable ffmpeg recording in the emulator. I tried to use $ export FFMPEG=yes; sudo sbopkg -B -i vice but ffmpeg is disabled. Instead I had to use $ su - $ export FFMPEG=yes $ sbopkg -B -i vice which works. How to use environment variables with sudo ?
You may use sudo's -E option: FFMPEG=yes sudo -E sbopkg -B -i vice From the manual: -E, --preserve-env Indicates to the security policy that the user wishes to preserve their existing environment variables. The security policy may return an error if the user does not have permission to preserve the environment. Note that this exports all your existing environment variables. It is safer to only export the environment variables you need with the following syntax: sudo FFMPEG=yes sbopkg -B -i vice
{ "source": [ "https://unix.stackexchange.com/questions/337819", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80389/" ] }
337,877
How do I silently extract files, without displaying status?
man unzip: -q perform operations quietly (-qq = even quieter). Ordinarily unzip prints the names of the files it's extracting or testing, the extraction methods, any file or zipfile comments that may be stored in the archive, and possibly a summary when finished with each archive. The -q[q] options suppress the printing of some or all of these messages.
{ "source": [ "https://unix.stackexchange.com/questions/337877", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210788/" ] }
337,911
Many a time when manually grepping through a file, there are so many comments that your eyes glaze over and you start to wish there were a way to display only those lines which have no comments. Is there a way to skip comments with cat or another tool? I am guessing there is a way and it involves a regular expression. I want it just to display and not actually remove any lines or such. Comments are in the form of # and I'm using zsh in my xterm.
Well, that depends on what you mean by comments. If just lines without a # then a simple: grep -v '#' might suffice (but this will call lines like echo '#' a comment). If comment lines are lines starting with # , then you might need: grep -v '^#' And if comment lines are lines starting with # after some optional whitespace, then you could use: grep -v '^ *#' And if the comment format is something else altogether, this answer will not help you.
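If you also want to hide trailing comments (a # later on a line) and drop the blank lines that leaves behind, a sed variant can do it; note that this is a rough filter and will also eat a # that happens to sit inside a quoted string:
sed -e 's/#.*$//' -e '/^[[:space:]]*$/d' file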
{ "source": [ "https://unix.stackexchange.com/questions/337911", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
337,965
I have added an alias command to kill my guake terminal to my .bashrc alias killguake="kill -9 $(ps aux | grep guake | head -n -1 | awk '{print $2}')" But the problem is, the sub-command, i.e. ps aux | grep guake | head -n -1 | awk '{print $2}' , is executed at the time of terminal startup and killguake is set to kill -9 result_of_subcommand . Is there any way to set it up so that the sub-command is run/calculated every time I run killguake , so that it always has the latest PID for guake? I have also tried piping to kill using xargs but that also results in the same thing, that is, calculating everything at startup. Here is what I tried with piping: ps aux | grep guake | head -n -1 | awk '{print $2}' | xargs -I{} kill -9 {}
Use pkill instead: alias killguake='pkill guake' This is a whole lot safer than trying to parse the process table outputted by ps . Having said that, I will now explain why your alias doesn't do what you want, but really, use pkill . Your alias alias killguake="kill -9 $(ps aux | grep guake | head -n -1 | awk '{print $2}')" is double quoted. This means that when the shell parses that line in your shell initialization script, it will perform command substitutions ( $( ... ) ). So each time the file runs, instead of giving you an alias to kill guake at a later time, it will give you an alias to kill the guake process running right now . If you list your aliases (with alias ), you'll see that this alias is something like killguake='kill -9 91273' or possibly even just killguake='kill -9' if guake wasn't running at the time of shell startup. To fix this (but really, just use pkill ) you need to use single quotes and escape the $ in the Awk script (which is now in double quotes): alias killguake='kill -9 $(ps aux | grep guake | head -n -1 | awk "{print \$2}")' One of the issues with this approach in general is that you will match the processes belonging to other users. Another is that you will possibly just find the grep guake command instead of the intended process. Another is that it will throw an error if no process was found. Another is that you're invoking five external utilities to do the job of one.
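Two small refinements that may help, both from the procps pgrep / pkill tools found on most Linux systems: preview what would be matched first, and match the process name exactly so you don't kill something that merely has "guake" somewhere in its command line.
pgrep -a guake      # -a lists the matching PIDs with their full command lines
pkill -x guake      # -x requires an exact match on the process name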
{ "source": [ "https://unix.stackexchange.com/questions/337965", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/149432/" ] }
338,000
I am trying to get the output of a pipe into a variable. I tried the following things: echo foo | myvar=$(</dev/stdin) echo foo | myvar=$(cat) echo foo | myvar=$(tee) But $myvar is empty. I don’t want to do: myvar=$(echo foo) Because I don’t want to spawn a subshell. Any ideas? Edit: I don’t want to spawn a subshell because the command before the pipe needs to edit global variables, which it can’t do in a subshell. Can it? The echo thing is just for simplification. It’s more like: complex_function | myvar=$(</dev/stdin) And I don’t get, why that doesn’t work. This works for example: complex_function | echo $(</dev/stdin)
The correct solution is to use command substitution like this: variable=$(complex_command) as in message=$(echo 'hello') (or for that matter, message=hello in this case). Your pipeline: echo 'hello' | message=$(</dev/stdin) or echo 'hello' | read message actually works. The only problem is that the shell that you're using will run the second part of the pipeline in a subshell. This subshell is destroyed when the pipeline exits, so the value of $message is not retained in the shell. Here you can see that it works: $ echo 'hello' | { read message; echo "$message"; } hello ... but since the subshell's environment is separate (and gone): $ echo "$message" (no output) One solution for you would be to switch to ksh93 which is smarter about this: $ echo 'hello' | read message $ echo "$message" hello Another solution for bash would be to set the lastpipe shell option. This would make the last part of the pipeline run in the current environment. This however does not work in interactive shells as lastpipe requires that job control is not active. #!/bin/bash shopt -s lastpipe echo 'hello' | read message echo "$message"
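Another bash idiom worth mentioning is process substitution, which sidesteps the problem without shopt : the read runs in the current shell, so the variable survives (the producer still runs in a separate process, though, so any global variables it sets are still lost):
read message < <(echo 'hello')
echo "$message"     # prints: hello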
{ "source": [ "https://unix.stackexchange.com/questions/338000", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210885/" ] }
338,104
How can I create multiple nested directories in one command? mkdir -p /just/one/dir But I need to create multiple different nested directories...
mkdir accepts multiple path arguments: mkdir -p -- a/foo b/bar a/baz
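Since mkdir just takes a list of paths, brace expansion (in bash or zsh) is a convenient way to generate a whole nested tree in one command, for example:
mkdir -p project/{src,docs,tests}/{old,new}
# creates project/src/old, project/src/new, project/docs/old, ... six leaf directories in total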
{ "source": [ "https://unix.stackexchange.com/questions/338104", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210969/" ] }
338,116
I have the following data (a list of R packages parsed from a Rmarkdown file), that I want to turn into a list I can pass to R to install: d3heatmap data.table ggplot2 htmltools htmlwidgets metricsgraphics networkD3 plotly reshape2 scales stringr I want to turn the list into a list of the form: 'd3heatmap', 'data.table', 'ggplot2', 'htmltools', 'htmlwidgets', 'metricsgraphics', 'networkD3', 'plotly', 'reshape2', 'scales', 'stringr' I currently have a bash pipeline that goes from the raw file to the list above: grep 'library(' Presentation.Rmd \ | grep -v '#' \ | cut -f2 -d\( \ | tr -d ')' \ | sort | uniq I want to add a step on to turn the new lines into the comma separated list. I've tried adding tr '\n' '","' , which fails. I've also tried a number of the following Stack Overflow answers, which also fail: https://stackoverflow.com/questions/1251999/how-can-i-replace-a-newline-n-using-sed This produces library(stringr)))phics) as the result. https://stackoverflow.com/questions/10748453/replace-comma-with-newline-in-sed This produces ,% as the result. Can sed replace new line characters? This answer (with the -i flag removed), produces output identical to the input.
You can add quotes with sed and then merge lines with paste , like this: sed 's/^\|$/"/g'|paste -sd, - If you are running a GNU coreutils based system (i.e. Linux), you can omit the trailing '-' . If your input data has DOS-style line endings (as @phk suggested), you can modify the command as follows: sed 's/\r//;s/^\|$/"/g'|paste -sd, -
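For instance, with a few of the package names from the question (GNU sed assumed, since the \| alternation is a GNU extension):
printf '%s\n' d3heatmap data.table ggplot2 | sed 's/^\|$/"/g' | paste -sd, -
# "d3heatmap","data.table","ggplot2"
If you need single quotes as in the question, quoting the sed script with double quotes works too: sed "s/^\|$/'/g"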
{ "source": [ "https://unix.stackexchange.com/questions/338116", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136298/" ] }
338,228
Recently, I started to use i3wm and fell in love with it. However, one thing bothers me: controlling more than 10 workspaces. In my config $mod+1 to $mod+9 switches between the workspaces 1 to 9 (and $mod+0 for 10), but sometimes 10 workspaces just aren't enough. At the moment I reach out to workspace 11 to 20 with $mod+mod1+1 to $mod+mod1+0 , i.e. hitting mod+alt+number . Of course this works without any problems, but it is quite a hassle to switch workspaces like that, since the keys aren't hit easily. Additionally, moving applications between workspaces 11 to 20 requires to mod+shift+alt+number -> ugly. In my Vim bindings (I have lots of plugins) I started to use double modifier shortcuts, like modkey + r for Plugin 1 and modkey + modkey + r for Plugin 2. This way I can bind every key twice and hitting the mod key twice is easy and fast. Can I do something similar in i3wm ? How do you make use of more than 10 workspaces in i3wm ? Any other solutions?
i3 does not really support key sequences like vim . Any key binding consists of a single key preceded by an optional list of distinct (so no Shift+Shift ) modifiers. And all of the modifiers need to be pressed down at the time the main key is pressed. That being said, there are two main ways to have a lot of workspaces without having to bind them to long lists of modifiers: 1. Dynamically create and access workspaces with external programs You do not have to define a shortcut for every single workspace; you can just create them on the fly by sending a workspace NEW_WS command to i3 , for example with the i3-msg program: i3-msg workspace NEW_WS i3-msg move container to workspace NEW_WS i3 also comes with the i3-input command, which opens a small input field and then runs a command with the given input as a parameter: i3-input -F 'workspace %s' -P 'go to workspace: ' i3-input -F 'move container to workspace %s' -P 'move to workspace: ' Bind these two commands to shortcuts and you can access an arbitrary number of workspaces by just pressing the shortcut and then entering the name (or number) of the workspace you want. (If you only work with numbered workspaces, you might want to use workspace number %s instead of just workspace %s ) 2. Statically bind workspaces to simple Shortcuts within key binding modes Alternatively, for a more static approach, you could use modes in your i3 configuration. You could have separate modes for focusing and moving to workspaces: set $mode_workspace "goto_ws" mode $mode_workspace { bindsym 1 workspace 1; mode "default" bindsym 2 workspace 2; mode "default" # […] bindsym a workspace a; mode "default" bindsym b workspace b; mode "default" # […] bindsym Escape mode "default" } bindsym $mod+w mode $mode_workspace set $mode_move_to_workspace "moveto_ws" mode $mode_move_to_workspace { bindsym 1 move container to workspace 1; mode "default" bindsym 2 move container to workspace 2; mode "default" # […] bindsym a move container to workspace a; mode "default" bindsym b move container to workspace b; mode "default" # […] bindsym Escape mode "default" } bindsym $mod+shift+w mode $mode_move_to_workspace Or you could have separate bindings for focusing and moving within a single mode: set $mode_ws "workspaces" mode $mode_ws { bindsym 1 workspace 1; mode "default" bindsym Shift+1 move container to workspace 1; mode "default" bindsym 2 workspace 2; mode "default" bindsym Shift+2 move container to workspace 2; mode "default" # […] bindsym a workspace a; mode "default" bindsym Shift+a move container to workspace a; mode "default" bindsym b workspace b; mode "default" bindsym Shift+b move container to workspace b; mode "default" # […] bindsym Escape mode "default" } bindsym $mod+w mode $mode_ws In both examples the workspace or move commands are chained with mode "default" , so that i3 automatically returns back to the default key binding map after each command.
{ "source": [ "https://unix.stackexchange.com/questions/338228", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/95677/" ] }
338,510
I have a CSV file as input.csv "1_1_0_0_76" "1_1_0_0_77" "1_1_0_0_78" "1_1_0_0_79" "1_1_0_0_80" "1_1_0_0_81" "1_1_0_0_82" "1_1_0_0_83" "1_1_0_0_84" "1_1_0_0_85" ............. and so on. I need to convert this CSV file into result.csv 1,1,0,0,76 1,1,0,0,77 1,1,0,0,78 1,1,0,0,79 1,1,0,0,80 1,1,0,0,81 1,1,0,0,82 1,1,0,0,83 1,1,0,0,84 1,1,0,0,85
A far simpler way is to use tr : $ tr '_' ',' < input.csv | tr -d '"' 1,1,0,0,76 1,1,0,0,77 1,1,0,0,78 The way this works is that tr takes two arguments - the set of characters to be replaced, and their replacement. In this case we only have sets of 1 character. We redirect input.csv into tr 's stdin stream via the < shell operator, and pipe the resulting output to tr -d '"' to delete double quotes. But awk can do it too. $ cat input.csv "1_1_0_0_76" "1_1_0_0_77" "1_1_0_0_78" $ awk '{gsub(/_/,",");gsub(/"/,"")};1' input.csv 1,1,0,0,76 1,1,0,0,77 1,1,0,0,78 The way this works is slightly different: awk reads each file line by line, each in-line script being /Pattern match/{ codeblock}/Another pattern/{code block for this pattern} . Here we don't have a pattern, so the code block is executed for each line. The gsub() function is used for global substitution within a line, thus we use it to replace underscores with commas, and double quotes with a null string (effectively deleting the character). The 1 is in place of the pattern match with missing code block, which defaults simply to printing the line; in other words the codeblock with gsub() does the job and 1 prints the result. Use the shell redirection ( > ) to send output to a new file: awk '{gsub(/_/,",");gsub(/"/,"")};1' input.csv > output.csv
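sed can also do both substitutions in a single pass, which is handy if you want to write straight to result.csv as in the question:
sed 's/_/,/g; s/"//g' input.csv > result.csv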
{ "source": [ "https://unix.stackexchange.com/questions/338510", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/203943/" ] }
338,650
I know that the system call interface is implemented on a low level and hence architecture/platform dependent, not "generic" code. Yet, I cannot clearly see the reason why system calls in Linux 32-bit x86 kernels have numbers that are not kept the same in the similar Linux 64-bit x86_64 architecture. What is the motivation/reason behind this decision? My first guess was that an underlying reason was to keep 32-bit applications runnable on an x86_64 system, so that via a reasonable offset to the system call number the system would know whether user-space is 32-bit or 64-bit. This is however not the case. At least it seems to me that read() being system call number 0 in x86_64 cannot be aligned with this thought. Another guess has been that changing the system call numbers might have a security/hardening background, something I was not able to confirm myself. Being ignorant of the challenges of implementing the architecture-dependent code parts, I still wonder how changing the system call numbers, when there seems to be no need (as even a 16-bit register would store far more than the currently ~346 numbers needed to represent all calls), would help to achieve anything, other than break compatibility (though using the system calls through a library, libc, mitigates it).
As for the reasoning behind the specific numbering, which does not match any other architecture [except "x32" which is really just part of the x86_64 architecture]: In the very early days of the x86_64 support in the linux kernel, before there were any serious backwards compatibility constraints, all of the system calls were renumbered to optimize it at the cacheline usage level . I don't know enough about kernel development to know the specific basis for these choices, but apparently there is some logic behind the choice to renumber everything with these particular numbers rather than simply copying the list from an existing architecture and remove the unused ones. It looks like the order may be based on how commonly they are called - e.g. read/write/open/close are up front. Exit and fork may seem "fundamental", but they're each called only once per process. There may also be something going on about keeping system calls that are commonly used together within the same cache line (these values are just integers, but there's a table in the kernel with function pointers for each one, so each group of 8 system calls occupies a 64-byte cache line for that table)
{ "source": [ "https://unix.stackexchange.com/questions/338650", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24394/" ] }
338,945
I've read that semicolon is used to separate programs: $ echo 3; ls -la Does it mean that if , then and else are separate programs here? $ if [ $VARIABLE == abcdef ] ; then echo yes ; else echo no ; fi This question is not about semicolons.
The ; separates statements (loosely speaking). It is (almost) always possible to replace a ; by a newline. To say that ; separates two programs, therefore if and then must be "programs" is a bit too simplistic as a statement may be made of reserved words, shell functions, built-in utilities and external utilities, and combinations of these using pipes and boolean operators etc. etc. Both if and then are reserved words in the shell grammar , not "programs". Here they are used to build up what's technically called a compound command . echo is likely a built-in utility in the shell (but doesn't need to be), and ls is probably an external utility (or "program" as you say).
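You can ask the shell itself how it classifies each word with the type builtin; in bash the output looks roughly like this (the path reported for ls will vary by system):
$ type if then echo ls
if is a shell keyword
then is a shell keyword
echo is a shell builtin
ls is /bin/ls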
{ "source": [ "https://unix.stackexchange.com/questions/338945", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90372/" ] }
339,011
I was trying to update Java and I used a star to make sure it was removed completely not realizing it would mess up everything I used apt-get remove java* this was on the Ubuntu 16.0 or something server. I now have sysrcd or System Rescue CD on the server and I'm attempting to get my old files back to put them on a new server and reload the sysrcd server back to Ubuntu. However I can't seem to figure out how to use the mount system. I've tried running fdisk -l and I get Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x04a5ca62 Device Boot Start End Blocks Id System /dev/sda1 * 2048 4095 1024 83 Linux /dev/sda2 6142 468860927 234427393 5 Extended /dev/sda5 6144 2004991 999424 83 Linux /dev/sda6 2007040 468860927 233426944 8e Linux LVM Disk /dev/mapper/vg-root: 221.6 GiB, 237879951360 bytes, 464609280 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk /dev/mapper/vg-tmp: 976 MiB, 1023410176 bytes, 1998848 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk /dev/mapper/vg-swap: 88 MiB, 92274688 bytes, 180224 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes I'm not sure which drive to mount or how to mount it. Can someone help?
A few more steps are needed when mounting an LVM partition vs. a non-LVM partition. sudo apt-get install lvm2 #This step may or may not be required. sudo pvscan #Use this to verify your LVM partition(s) is/are detected. sudo vgscan --mknodes #Scans for LVM Volume Group(s) sudo vgchange -ay #Activates LVM Volume Group(s) sudo lvscan #Scans for available Logical Volumes sudo mount /dev/YourVolGroup00/YourLogVol00 /YourMountPoint
{ "source": [ "https://unix.stackexchange.com/questions/339011", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
339,077
I have created multiple keys using gpg. Whenever I try to sign any file, gpg automatically uses the first one I have created. How to set default key for signing in gpg. I don't want to delete/revoke the other one yet. Otherwise, how can I change my default keys for signing?
To choose a default key without having to specify --default-key on the command-line every time, create a configuration file (if it doesn't already exist), ~/.gnupg/gpg.conf , and add a line containing default-key <key-fpr> replacing <key-fpr> with the id or fingerprint of the key you want to use by default.
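To find the id or fingerprint to put there, list your secret keys first; the long key id is shown on the sec line (the value below is only a made-up example):
gpg --list-secret-keys --keyid-format long
# then in ~/.gnupg/gpg.conf, something like:
# default-key 0x1234567890ABCDEF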
{ "source": [ "https://unix.stackexchange.com/questions/339077", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211656/" ] }
339,237
One easy install method for Docker (for example) is this: curl -sSL https://get.docker.com/ | sh However, I have also seen some that look like this (using the Docker example): sh -c "$(curl -sSL https://get.docker.com/)" They appear to be functionally the same, but is there a reason to use one over the other? Or is it just a preference/aesthetic thing? (Of note, be very careful when running scripts from unknown origins.)
There is a practical difference. curl -sSL https://get.docker.com/ | sh starts curl and sh at the same time, connecting the output of curl with the input of sh . curl will carry out the download (roughly) as fast as sh can run the script. The server can detect the irregularities in the timing and inject malicious code not visible when simply downloading the resource into a file or buffer or when viewing it in a browser. In sh -c "$(curl -sSL https://get.docker.com/)" , curl is run strictly before the sh is run. The whole contents of the resource are downloaded and passed to your shell before the sh is started. Your shell only starts sh when curl has exited, and passes the text of the resource to it. The server cannot detect the sh call; it is only started after the connection ends. It is similar to downloading the script into a file first. (This may not be relevant in the docker case, but it may be a problem in general and highlights a practical difference between the two commands.)
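If the concern is trusting the script at all, the usual safer pattern is to download it to a file (any name you like), read it, and only then run it:
curl -fsSL https://get.docker.com/ -o install-docker.sh
less install-docker.sh      # inspect before running
sh install-docker.sh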
{ "source": [ "https://unix.stackexchange.com/questions/339237", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171237/" ] }
339,487
How do I make Zathura copy selected text to the system clipboard? I'm using Zathura with the poppler PDF plugin.
Add set selection-clipboard clipboard in the config file ~/.config/zathura/zathurarc or /etc/zathurarc .
{ "source": [ "https://unix.stackexchange.com/questions/339487", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/201493/" ] }
339,765
We use a hosting server of FreeBSD 10.3, where we don't have the authority to be a superuser. We use the server to run apache2 for web pages of our company. The previous administrator of our web pages appeared to set an ACL permission to a directory, but we want to remove it. Let us say the directory is called foobar . Now the result of ls -al foobar is as follows: drwxrwxr-x+ 2 myuser another_user 512 Nov 20 2013 foobar And the permission is as follows: [myuser@hosting_server]$ getfacl foobar # file: foobar/ # owner: myuser # group: another_user user::rwx group::rwx mask::rwx other::r-x Here we want to remove the ACL permission and the plus sign at the last of the permission list. Therefore, we did setfacl -b foobar It eliminated the special permission governed by the ACL, but didn't erase the plus sign + . Our question is how can we erase the plus sign + in the permission list, shown by 'ls -al foobar'?
Our problem was resolved by using: setfacl -bn foobar The point was we also had to remove the aclMask from the directory with an option -n... The man page of setfacl says as follows: -n Do not recalculate the permissions associated with the ACL mask entry. This option is not applicable to NFSv4 ACLs. We're not sure why this option worked, but it did... In case you get d????????? permission after the above solution, try chmod -R a+rX as two commented below.
{ "source": [ "https://unix.stackexchange.com/questions/339765", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/188255/" ] }
339,840
How to start ssh-agent as systemd service? There are some suggestions in the net, but they are not complete. How to add automatically unencrypted keys if ssh-agent service was started successfully? Probably, adding keys from the list of ~/.ssh/.session-keys would be good. How to set SSH_AUTH_SOCK in any login session afterwards? The most correct way is to push it from ssh-agent service to systemd-logind service (have no idea if it's ever possible). The plain naive way is just add it to /etc/profile .
To create a systemd ssh-agent service, you need to create a file in ~/.config/systemd/user/ssh-agent.service because ssh-agent is user isolated. [Unit] Description=SSH key agent [Service] Type=simple Environment=SSH_AUTH_SOCK=%t/ssh-agent.socket ExecStart=/usr/bin/ssh-agent -D -a $SSH_AUTH_SOCK [Install] WantedBy=default.target Add SSH_AUTH_SOCK="${XDG_RUNTIME_DIR}/ssh-agent.socket" to ~/.config/environment.d/ssh_auth_socket.conf . Finally enable and start this service. systemctl --user enable --now ssh-agent And, if you are using ssh version higher than 7.2. echo 'AddKeysToAgent yes' >> ~/.ssh/config This will instruct the ssh client to always add the key to a running agent, so there's no need to ssh-add it beforehand. Note that when you create the ~/.ssh/config file you may need to run: chmod 600 ~/.ssh/config or chown $USER ~/.ssh/config Otherwise, you might receive the Bad owner or permissions on ~/.ssh/config error.
{ "source": [ "https://unix.stackexchange.com/questions/339840", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87248/" ] }
339,943
I'm attempting to take the last word or phrase using grep for a specific pattern. In this example, it would be everything from the last comma to the end of the line: Blah,3,33,56,5,Foo 30,,,,,,,3,Great Value And so the wanted output for that line would be "Great Value". All the lines are different lengths as well, but always have a single comma preceding the last words. Basically, I would like to simply output from the last comma to the end of the line. Thank you!
Here: grep -o '[^,]\+$' [^,]\+ matches one or more characters that are not , at the end of the line ( $ ) -o prints only the matched portion Example: % grep -o '[^,]\+$' <<<'Blah,3,33,56,5,Foo 30,,,,,,,3,Great Value' Great Value
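An equivalent with awk, treating the comma as the field separator and printing the last field (note it prints the whole line if a line contains no comma at all):
awk -F, '{print $NF}' file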
{ "source": [ "https://unix.stackexchange.com/questions/339943", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211032/" ] }
339,946
I am making a word list from all the words typed into a program that will be working with the text entered into it. I want to use a stream editor like sed or awk to search the first word of every line in a document and return the line number when a pattern -stored in a variable- is found. I have this much working correctly so far: awk $1 '{print $1 " " NR""}' dictionary.txt | awk '/^**myWord** /' | cut -d" " -f2 However, I cannot figure out how to use a variable in place of "myWord". For example, I get only errors when I use: read=searchWord awk $1 '{print $1 " " NR""}' words.txt | awk '/^**$searchWord** /' | cut -d" " -f2
Here: grep -o '[^,]\+$' [^,]\+ matches one or more characters that are not , at the end of the line ( $ ) -o prints only the matched portion Example: % grep -o '[^,]\+$' <<<'Blah,3,33,56,5,Foo 30,,,,,,,3,Great Value' Great Value
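As for the shell-variable part of the question above, which the answer does not address: the usual way to get a shell variable into an awk condition is awk's -v option rather than splicing it into the script text; a rough sketch, assuming the word to look for is in $searchWord :
awk -v w="$searchWord" '$1 == w { print NR }' dictionary.txt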
{ "source": [ "https://unix.stackexchange.com/questions/339946", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/212258/" ] }
339,954
The zshcompsys man page says: INITIALIZATION If the system was installed completely, it should be enough to call the shell function compinit from your initialization file; see the next section. However, the function compinstall can be run by a user to configure various aspects of the completion system. zsh can't find the commands though: genesis% compinit zsh: command not found: compinit genesis% compinstall zsh: command not found: compinstall genesis% echo $PATH /home/ravi/bin:/home/ravi/.gem/ruby/2.4.0/bin:/home/ravi/bin:/home/ravi/.gem/ruby/2.4.0/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/usr/local/heroku/bin:/home/ravi/.cabal/bin:/home/ravi/.local/share/fzf/bin:/usr/local/heroku/bin:/home/ravi/.cabal/bin genesis% I'm led to ask this question because when starting zsh I get: tmuxinator.zsh:20: command not found: compdef How do I get zsh to find these completion commands?
This is the same issue I got on my mac. I am using the zsh shell. Compdef is basically a function used by zsh to load the auto-completions. The completion system needs to be activated. If you’re using something like oh-my-zsh then this is already taken care of, otherwise you’ll need to add the following to your ~/.zshrc autoload -Uz compinit compinit Completion functions can be registered manually by using the compdef function directly. But compinit needs to be autoloaded and run before compdef can be used.
{ "source": [ "https://unix.stackexchange.com/questions/339954", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
340,067
Consider the following input file: 1 2 3 4 Running { grep -q 2; cat; } < infile doesn't print anything. I'd expect it to print 3 4 I can get the expected output if I change it to { sed -n 2q; cat; } < infile Why doesn't the first command print the expected output ? It's a seekable input file and per the standard under OPTIONS : -q Quiet. Nothing shall be written to the standard output, regardless of matching lines. Exit with zero status if an input line is selected. and further down, under APPLICATION USAGE (emphasis mine): The -q option provides a means of easily determining whether or not a pattern (or string) exists in a group of files. When searching several files, it provides a performance improvement ( because it can quit as soon as it finds the first match )[...] Now, per the same standard (in Introduction , under INPUT FILES ) When a standard utility reads a seekable input file and terminates without an error before it reaches end-of-file, the utility shall ensure that the file offset in the open file description is properly positioned just past the last byte processed by the utility [...] tail -n +2 file (sed -n 1q; cat) < file ... The second command is equivalent to the first only when the file is seekable. Why does grep -q consume the whole file ? This is gnu grep if it matters (though Kusalananda just confirmed the same happens on OpenBSD)
grep does stop early, but it buffers its input so your test is too short (and yes, I realise my test is imperfect since it's not seekable): seq 1 10000 | (grep -q 2; cat) starts at 6776 on my system. That matches the 32KiB buffer used by default in GNU grep: seq 1 6775 | wc outputs 6775 6775 32768 Note that POSIX only mentions performance improvements When searching several files That doesn't set any expectations up for performance improvements due to partially reading a single file.
{ "source": [ "https://unix.stackexchange.com/questions/340067", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22142/" ] }
340,440
#!/bin/bash INT=-5 if [[ "$INT" =~ ^-?[0-9]+$ ]]; then echo "INT is an integer." else echo "INT is not an integer." >&2 exit 1 fi What does the leading ~ do in the starting regular expression?
The ~ is actually part of the operator =~ which performs a regular expression match of the string to its left to the extended regular expression on its right. [[ "string" =~ pattern ]] Note that the string should be quoted, and that the regular expression shouldn't be quoted. A similar operator is used in the Perl programming language. The regular expressions understood by bash are the same as those that GNU grep understands with the -E flag, i.e. the extended set of regular expressions. Somewhat off-topic, but good to know: When matching against a regular expression containing capturing groups, the part of the string captured by each group is available in the BASH_REMATCH array. The zeroth/first entry in this array corresponds to & in the replacement pattern of sed 's substitution command (or $& in Perl), which is the bit of the string that matches the pattern, while the entries at index 1 and onwards corresponds to \1 , \2 , etc. in a sed replacement pattern (or $1 , $2 etc. in Perl), i.e. the bits matched by each parenthesis. Example: string=$( date +%T ) if [[ "$string" =~ ^([0-9][0-9]):([0-9][0-9]):([0-9][0-9])$ ]]; then printf 'Got %s, %s and %s\n' \ "${BASH_REMATCH[1]}" "${BASH_REMATCH[2]}" "${BASH_REMATCH[3]}" fi This may output Got 09, 19 and 14 if the current time happens to be 09:19:14. The REMATCH bit of the BASH_REMATCH array name comes from "Regular Expression Match", i.e. "RE-Match". In non- bash Bourne-like shells, one may also use expr for limited regular expression matching (using only basic regular expressions). A small example: $ string="hello 123 world" $ expr "$string" : ".*[^0-9]\([0-9][0-9]*\)" 123
{ "source": [ "https://unix.stackexchange.com/questions/340440", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/212669/" ] }
340,676
In what situations would one want to use a hard-link rather than a soft-link? I personally have never run across a situation where I'd want to use a hard-link over a soft-link, and the only use-case I've come across when searching the web is deduplicating identical files .
Aside from the backup usage mentioned in another comment, which I believe also includes the snapshots on a BTRFS volume, a use-case for hard-links over soft-links is a tag-sorted collection of files. (Not necessarily the best method to create a collection, a database-driven method is potentially better, but for a simple collection that's reasonably stable, it's not too bad.) A media collection where all files are stored in one flat directory and are sorted into other directories based on various criteria, e.g. year, subject, artist, genre, etc. This could be a personal movie collection, or a commercial studio's collective works. Essentially finished, the file is saved, not likely to be modified, and sorted, possibly into multiple locations by links. Bear in mind that the concept of "original" and "copy" is not applicable to hard-links: every link to the file is an original, there is no "copy" in the normal sense. For the description of the use-case, however, the terms mimic the logic of the behavior. The "original" is saved in the "catalog" directory, and the sorted "copies" are hard-linked to those files. The file attributes on the sorting directories can be set to r/o, preventing any accidental changes to the file-names and sorted structure, while the attributes on the catalog directory can be r/w, allowing it to be modified as needed. (A case for that would be music files where some players attempt to rename and reorganize files based on tags embedded in the media file, from user input, or internet retrieval.) Additionally, since the attributes of the "copy" directories can be different from those of the "original" directory, the sorted structure could be made available to the group, or world, with restricted access while the main "catalog" is only accessible to the principal user, with full access. The files themselves, however, will always have the same attributes on all links to that inode. (ACLs could be explored to enhance that, but that is not my knowledge area.) If the original is renamed or moved (the single "catalog" directory becomes too large to manage, for example), the hard-links remain valid while soft-links are broken. If the "copies" are moved and the soft-links are relative, then the soft-links will, again, be broken, and the hard-links will not be. Note: there seems to be inconsistency in how different tools report disk usage when soft-links are involved. With hard-links, however, it seems consistent. So with 100 files in a catalog sorted into a collection of "tags", there could easily be 500 linked "copies." (For a photograph collection, say date, photographer, and an average of 3 "subject" tags.) Dolphin, for example, would report that as 100 files for hard-links, and 600 files if soft-links are used. Interestingly, it reports the same disk-space usage either way, so it looks like a large collection of small files for soft-links, and a small collection of large files for hard-links. A caveat to this type of use-case is that in file-systems that use COW, modifying the "original" could break the hard-links, but not break the soft-links. But, if the intent is to have the master copy, after editing, saved, and sorted, COW doesn't enter the scenario.
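For example, a minimal sketch of the catalog-plus-tags layout described above (all names are made up for illustration; stat -c is the GNU form): mkdir -p catalog by-year/2017 by-artist/example-artist
touch catalog/song.m4a                                  # stand-in for a real media file
ln catalog/song.m4a by-year/2017/song.m4a               # "copies" are just extra hard links
ln catalog/song.m4a by-artist/example-artist/song.m4a
stat -c '%h %n' catalog/song.m4a                        # link count is now 3
Deleting any one of the links leaves the other two intact, which is exactly the property the tag-sorted layout relies on.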
{ "source": [ "https://unix.stackexchange.com/questions/340676", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41033/" ] }
340,844
I am unable to ssh to a server that asks for a diffie-hellman-group1-sha1 key exchange method: ssh 123.123.123.123 Unable to negotiate with 123.123.123.123 port 22: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1 How to enable the diffie-hellman-group1-sha1 key exchange method on Debian 8.0? I have tried (as proposed here ) to add the following lines to my /etc/ssh/ssh_config KexAlgorithms diffie-hellman-group1-sha1,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1 Ciphers 3des-cbc,blowfish-cbc,aes128-cbc,aes128-ctr,aes256-ctr regenerate keys with ssh-keygen -A restart ssh with service ssh restart but still get the error.
The OpenSSH website has a page dedicated to legacy issues such as this one. It suggests the following approach, on the client : ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 123.123.123.123 or more permanently, adding Host 123.123.123.123 KexAlgorithms +diffie-hellman-group1-sha1 to ~/.ssh/config . This will enable the old algorithms on the client , allowing it to connect to the server.
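Before editing anything you can check that your client even still ships the legacy algorithm; the -Q query option is available in reasonably recent OpenSSH releases: ssh -Q kex                  # list all key exchange algorithms the client supports
ssh -Q kex | grep group1    # should show diffie-hellman-group1-sha1 if it can be enabled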
{ "source": [ "https://unix.stackexchange.com/questions/340844", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204816/" ] }
340,846
I have chrooted Debian in android marshmallow (snapdragon 650 [64bit]). I installed iceweasel in chrooted debian. But it showed this error :: (firefox:16210): Gdk-WARNING **: shmget failed: error 38 (Function not implemented) Segmentation fault So, I compiled libandroid-shmem.so from this repo using android-ndk and copied from armv8-a folder to /lib directory of chrooted debian. It then asked for liblog.so . iceweasel: error while loading shared libraries: liblog.so: cannot open shared object file: No such file or directory So I copied liblog.so from android-ndk to chrooted debian /lib directory. Now when I run env LD_PRELOAD="/lib/libandroid-shmem.so" iceweasel . It displays this error : iceweasel: error while loading shared libraries: /usr/lib/aarch64-linux-gnu/libc.so: invalid ELF header Here are some details :: file /lib/libandroid-shmem.so /lib/libandroid-shmem.so: ELF 64-bit LSB shared object, ARM aarch64, version 1 (SYSV), dynamically linked, BuildID[sha1]=5ad4582c76effbe27a6688369ad979fea5dfac2a, stripped $ cat /usr/lib/aarch64-linux-gnu/libc.so /* GNU ld script Use the shared library, but some functions are only in the static library, so try that secondarily. */ OUTPUT_FORMAT(elf64-littleaarch64) GROUP ( /lib/aarch64-linux-gnu/libc.so.6 /usr/lib/aarch64-linux-gnu/libc_nonshared.a AS_NEEDED ( /lib/aarch64-linux-gnu/ld-linux-aarch64.so.1 ) )
{ "source": [ "https://unix.stackexchange.com/questions/340846", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87903/" ] }
341,060
systemctl Returns a list of the units, whether they are loaded, active, their sub and description. systemctl is-failed Returns a list of status only. What is the syntax to return the details of the failed units?
You can use systemctl list-units --state=failed to list all failed units. The parameters for systemctl are documented in the man page systemctl(1) .
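To go from the list of failed units to their details, something along these lines works ("myunit.service" is a placeholder name): systemctl list-units --state=failed --no-pager
systemctl status myunit.service --no-pager
journalctl -u myunit.service -b --no-pager | tail -n 20    # recent log lines for that unit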
{ "source": [ "https://unix.stackexchange.com/questions/341060", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101992/" ] }
341,063
if the network interface is disconnected: ping 8.8.8.8 connect: Network is unreachable terminates nicely kernel is sending a specific signal to the ping and thus ping is shutting itself down. But if network interface is up and I am blocking all traffic via iptables.. vi /etc/sysconfig/iptables *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A OUTPUT -j REJECT --reject-with icmp-net-unreachable -A INPUT -j DROP -A FORWARD -j DROP COMMIT but it will not make ping shut off. and stop. ping 8.8.8.8 PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. From 192.168.0.100 icmp_seq=1 Destination Net Unreachable From 192.168.0.100 icmp_seq=1 Destination Net Unreachable From 192.168.0.100 icmp_seq=1 Destination Net Unreachable it simply keeps on continuing. I have tried other --reject-with flags such as: icmp-net-unreachable icmp-host-unreachable icmp-port-unreachable icmp-proto-unreachable icmp-net-prohibited icmp-host-prohibited icmp-admin-prohibited none of them can make ping quit. What I want to see is ping terminate the same way it terminates when network interface is disconnected. if this can not be done via iptables.. is there a command I can run to send ping the same signal the kernel sends .. to tell it "network interface is not connected" ? ( it would be a lie but I want it to shut itself off basically )
{ "source": [ "https://unix.stackexchange.com/questions/341063", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
341,388
What is the syntax to get the current month name (e.g. jan or feb) in a bash script?
You can use the date(1) command. For example: date +%b
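Since the question asks for lower-case names like "jan" or "feb", here is a small sketch (locale-dependent, like all date output): month=$(date +%b | tr '[:upper:]' '[:lower:]')   # e.g. "jan"
full=$(date +%B)                                 # e.g. "January"
echo "$month / $full"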
{ "source": [ "https://unix.stackexchange.com/questions/341388", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213363/" ] }
341,400
my centOS version is centos-release-6-6.el6.centos.12.2.x86_64 I have executed the following commands to extract and install glibc-2.15 tar zxvf glibc-2.14.tar.gz cd glibc-2.14 mkdir build cd build ../configure --prefix=/opt/glibc-2.14 make -j4 make install But when I check glib version with command yum list glibc , it shows: Installed Packages glibc.i686 2.12-1.192.el6 @base glibc.x86_64 2.12-1.192.el6 @base
{ "source": [ "https://unix.stackexchange.com/questions/341400", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213365/" ] }
341,406
On Arch Linux running GNOME, if I press Shift + 3 , it locks X (nothing but the mouse cursor works).  All window updates are suspended.  The only option is to zap it with Ctrl + Alt + Backspace . I've looked in logs, nothing. I've searched the web, nothing. I've tried every possible keypress, nothing. Shift + 2 works just fine, as does Shift + 4 . I'm on a Mac Pro, with a UK Apple keyboard. I don't think this should matter, but it is the £ (pound) symbol that comes out on the console before I run startx . In X, I can use Alt + Shift + 3 and get a pound without any problems. Alt + 3 gives me # as expected. Any ideas where to start with this? Is there extra logging I can somehow enable? xmodmap -pke gives: keycode 12 = 3 sterling 3 sterling numbersign sterling threesuperior sterling 3 sterling threesuperior sterling xev output. I pushed x then Shift + 3 , then 1 . Interestingly, it continued writing to the output after the DM froze. KeyPress event, serial 36, synthetic NO, window 0xa00001, root 0x4a3, subw 0x0, time 338011, (655,-7), root:(840,525), state 0x10, keycode 53 (keysym 0x78, x), same_screen YES, XLookupString gives 1 bytes: (78) "x" XmbLookupString gives 1 bytes: (78) "x" XFilterEvent returns: False KeyRelease event, serial 36, synthetic NO, window 0xa00001, root 0x4a3, subw 0x0, time 338091, (655,-7), root:(840,525), state 0x10, keycode 53 (keysym 0x78, x), same_screen YES, XLookupString gives 1 bytes: (78) "x" XFilterEvent returns: False KeyPress event, serial 36, synthetic NO, window 0xa00001, root 0x4a3, subw 0x0, time 339867, (655,-7), root:(840,525), state 0x10, keycode 62 (keysym 0xffe2, Shift_R), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False KeyPress event, serial 36, synthetic NO, window 0xa00001, root 0x4a3, subw 0x0, time 340219, (655,-7), root:(840,525), state 0x11, keycode 12 (keysym 0xa3, sterling), same_screen YES, XLookupString gives 2 bytes: (c2 a3) "£" XmbLookupString gives 2 bytes: (c2 a3) "£" XFilterEvent returns: False KeyRelease event, serial 36, synthetic NO, window 0xa00001, root 0x4a3, subw 0x0, time 340299, (655,-7), root:(840,525), state 0x11, keycode 12 (keysym 0xa3, sterling), same_screen YES, XLookupString gives 2 bytes: (c2 a3) "£" XFilterEvent returns: False KeyRelease event, serial 36, synthetic NO, window 0xa00001, root 0x4a3, subw 0x0, time 340411, (655,-7), root:(840,525), state 0x11, keycode 62 (keysym 0xffe2, Shift_R), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False KeyPress event, serial 36, synthetic NO, window 0xa00001, root 0x4a3, subw 0x0, time 349763, (655,-7), root:(840,525), state 0x10, keycode 10 (keysym 0x31, 1), same_screen YES, XLookupString gives 1 bytes: (31) "1" XmbLookupString gives 1 bytes: (31) "1" XFilterEvent returns: False KeyRelease event, serial 36, synthetic NO, window 0xa00001, root 0x4a3, subw 0x0, time 349835, (655,-7), root:(840,525), state 0x10, keycode 10 (keysym 0x31, 1), same_screen YES, XLookupString gives 1 bytes: (31) "1" XFilterEvent returns: False
{ "source": [ "https://unix.stackexchange.com/questions/341406", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32903/" ] }
341,611
I'm using Slackware. Firefox is running. I also have a virtual machine running Ubuntu 16.04 using VirtualBox. I've installed Firefox on the virtual machine, and Firefox is installed on the host computer. I opened an SSH session in the virtual machine and ran Firefox. It opened a new window of my host computer's Firefox. Why did it do this? I was expecting two running instances of Firefox: one on my host computer and one on the virtual machine.
When Firefox starts, it looks for a Firefox window running on the same display, and if it finds one, it focuses this window (and if you pass a URL on the command line, it opens a new tab to load the URL in the existing window). You must have run SSH with X11 display forwarding. Since X11 forwarding is active, all GUI programs that you start in the SSH session will be displayed on the local machine. If X11 forwarding was not active in the SSH connection, then GUI applications run from the SSH session would have nowhere to display. They'd just complain “Error: no display specified” or some similar error message. X11 is inherently network-transparent, so it doesn't have a notion of “the local display”. The display is whatever you tell the application is the display. There can be multiple local displays, e.g. in the case of a multiseat configuration. There isn't one “true” display like there is with Windows. If you're running a program remotely and you want it to display on the monitor of the remote machine, you need to run an X server on the remote machine and you need to explicitly tell the program to connect to that display. By default, if you do nothing, programs will be displayed on the machine that you're in front of.
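If you want two independent instances despite the forwarding, a couple of hedged options (":0" assumes the VM runs its own X server on that display, and "temp" is just a throwaway profile name): DISPLAY=:0 firefox &            # show it on the remote machine's own display instead
firefox --no-remote -P temp &   # or keep the forwarded display but force a separate instance/profile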
{ "source": [ "https://unix.stackexchange.com/questions/341611", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80389/" ] }
341,629
Wanting to play around with Trusted Platform Module stuff, I installed TrouSerS and tried to start tcsd , but I got this error: TCSD TDDL ERROR: Could not find a device to open! However, my kernel has multiple TPM modules loaded: # lsmod | grep tpm tpm_crb 16384 0 tpm_tis 16384 0 tpm_tis_core 20480 1 tpm_tis tpm 40960 3 tpm_tis,tpm_crb,tpm_tis_core So, how do I determine if my computer is lacking TPM vs TrouSerS having a bug? Neither dmidecode nor cpuid output anything about "tpm" or "trust". Looking in /var/log/messages , on the one hand I see rngd: /dev/tpm0: No such file or directory , but on the other hand I see kernel: Initialise system trusted keyrings and according to this kernel doc trusted keys use TPM. EDIT : My computer's BIOS setup menus mention nothing about TPM. Also, looking at /proc/keys : # cat /proc/keys ******** I--Q--- 1 perm 1f3f0000 0 65534 keyring _uid_ses.0: 1 ******** I--Q--- 7 perm 3f030000 0 0 keyring _ses: 1 ******** I--Q--- 3 perm 1f3f0000 0 65534 keyring _uid.0: empty ******** I------ 2 perm 1f0b0000 0 0 keyring .builtin_trusted_keys: 1 ******** I------ 1 perm 1f0b0000 0 0 keyring .system_blacklist_keyring: empty ******** I------ 1 perm 1f0f0000 0 0 keyring .secondary_trusted_keys: 1 ******** I------ 1 perm 1f030000 0 0 asymmetri Fedora kernel signing key: 34ae686b57a59c0bf2b8c27b98287634b0f81bf8: X509.rsa b0f81bf8 []
TPMs don't necessarily appear in the ACPI tables, but the modules do print a message when they find a supported module; for example [ 134.026892] tpm_tis 00:08: 1.2 TPM (device-id 0xB, rev-id 16) So dmesg | grep -i tpm is a good indicator. The definitive indicator is your firmware's setup tool: TPMs involve ownership procedures which are managed from the firmware setup. If your setup doesn't mention anything TPM-related then you don't have a TPM. TPMs were initially found in servers and business laptops (and ChromeBooks, as explained by icarus ), and were rare in desktops or "non-business" laptops; that’s changed over the last few years, and Windows 11 requires a TPM now. Anything supporting Intel TXT has a TPM.
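Putting the suggested checks together (the /dev and /sys paths are the usual ones on Linux, but will be absent if there is no TPM or the modules are not loaded): dmesg | grep -i tpm                          # driver messages, if a TPM was found
ls -l /dev/tpm* /sys/class/tpm/ 2>/dev/null  # device node and sysfs entry, when present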
{ "source": [ "https://unix.stackexchange.com/questions/341629", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41033/" ] }
342,008
I used this a lot; the improvement I am trying to achieve is to avoid echoing file names that did not match in grep. Is there a better way to do this? for file in `find . -name "*.py"`; do echo $file; grep something $file; done
find . -name '*.py' -exec grep something {} \; -print would print the file name after the matching lines. find . -name '*.py' -exec grep something /dev/null {} + would print the file name in front of every matching line (we add /dev/null for the case where there's only one matching file as grep doesn't print the file name if it's passed only one file to look in. The GNU implementation of grep has a -H option for that as an alternative). find . -name '*.py' -exec grep -l something {} + would print only the file names of the files that have at least one matching line. To print the file name before the matching lines, you could use awk instead: find . -name '*.py' -exec awk ' FNR == 1 {filename_printed = 0} /something/ { if (!filename_printed) { print FILENAME filename_printed = 1 } print }' {} + Or call grep twice for each file - though that'd be less efficient as it would run at least one grep command and up to two for each file (and read the content of the file twice): find . -name '*.py' -exec grep -l something {} \; \ -exec grep something {} \; In any case, you don't want to loop over the output of find like that and remember to quote your variables . If you wanted to use a shell loop, with GNU tools: find . -name '*.py' -exec grep -l --null something {} + | xargs -r0 sh -c ' for file do printf "%s\n" "$file" grep something < "$file" done' sh (also works on FreeBSD and derivatives).
{ "source": [ "https://unix.stackexchange.com/questions/342008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/148706/" ] }
342,064
I'm using Ranger to navigate around my file system. Is there a shortcut where I cd into a folder without leaving Ranger (as in open bash with a location of a folder found by navigating in Ranger) ?
I found the answer to this in the man pages : S Open a shell in the current directory Yes, probably should have read through that before asking here.
{ "source": [ "https://unix.stackexchange.com/questions/342064", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16792/" ] }
342,403
Can't install Java8 apt-get install openjdk-8-jre-headless Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: openjdk-8-jre-headless : Depends: ca-certificates-java but it is not going to be installed E: Unable to correct problems, you have held broken packages I've searched Google and I've added repos and other suggestions, but nothing has allowed me to install Java 8 yet. ideas? lsb_release -a No LSB modules are available. Distributor ID: Debian Description: Debian GNU/Linux 8.7 (jessie) Release: 8 Codename: jessie
is this jessie? With backports apt install -t jessie-backports openjdk-8-jre-headless ca-certificates-java
{ "source": [ "https://unix.stackexchange.com/questions/342403", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47368/" ] }
342,554
My keyboard has dedicated keys to change the audio volume and to mute/unmute audio. How can I make these work in XFCE?
Right click a panel -> Panel submenu -> Add New Items... Add an instance of PulseAudio Plugin Right click the icon that just appeared in your panel and click "Properties". Make sure "Enable keyboard shortcuts for volume control" is enabled. You may have to install the PulseAudio Plugin first. In Debian and Debian-based distributions, the package is called xfce4-pulseaudio-plugin .
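On Debian/Ubuntu-style systems the install step mentioned above would be something like: sudo apt-get install xfce4-pulseaudio-plugin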
{ "source": [ "https://unix.stackexchange.com/questions/342554", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18110/" ] }
342,735
I'm running Docker(1.9.1) on Ubuntu 16.04. When I run docker info the last line of the output says WARNING: No swap limit support . INFO[0781] GET /v1.21/info Containers: 0 Images: 0 Server Version: 1.9.1 Storage Driver: aufs Root Dir: /var/lib/docker/aufs Backing Filesystem: extfs Dirs: 0 Dirperm1 Supported: true Execution Driver: native-0.2 Logging Driver: json-file Kernel Version: 4.4.0-62-generic Operating System: Ubuntu 16.04.1 LTS (containerized) CPUs: 2 Total Memory: 3.664 GiB Name: lenovo ID: A3ZV:2EVK:U5QB:O7CG:PEDL:SANK:X74X:QNLC:VOTK:GFDR:S24T:C5KT WARNING: No swap limit support What does this warning mean? I definitely have a swap partition, as evidenced by free -mh though I don't understand why my swap has no entry under available total used free shared buff/cache available Mem: 3.7G 1.9G 182M 157M 1.6G 1.3G Swap: 3.8G 2.9M 3.8G
Swap limit support allows you to limit the swap the container uses, see https://docs.docker.com/engine/admin/resource_constraints According to https://docs.docker.com/engine/installation/linux/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities : You can enable these capabilities on Ubuntu or Debian by following these instructions. Memory and swap accounting incur an overhead of about 1% of the total available memory and a 10% overall performance degradation, even if Docker is not running. 1) Log into the Ubuntu or Debian host as a user with sudo privileges. 2) Edit the /etc/default/grub file. Add or edit the GRUB_CMDLINE_LINUX line to add the following two key-value pairs: GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1" 3) Update GRUB. $ sudo update-grub
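After the reboot, a couple of hedged sanity checks (the cgroup path assumes cgroup v1, which is what these Docker/Ubuntu versions used): grep -o 'swapaccount=1' /proc/cmdline                              # kernel picked up the new parameter
ls /sys/fs/cgroup/memory/memory.memsw.limit_in_bytes 2>/dev/null   # only exists with swap accounting enabled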
{ "source": [ "https://unix.stackexchange.com/questions/342735", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/206074/" ] }
342,736
I'd like to disable my CD/DVD drive so that it doesn't spin up every time I select Save in my Kate editor, or select a file-accessing action in other applications. The spinning up just delays what I'm doing, and I'm not even using the DVD drive. I want to leave the CD in the drive, and not have it spin up. I found a website that said a udev rule will definitely disable the drive. So far, I've tried the following 2 rules (separately), but neither of them disable the DVD drive (it still spins up - even when not mounted): ENV{ID_SERIAL}=="PIONEER_DVD-RW_DVRTD11RS_SAC1009942", ENV{UDISKS_IGNORE}="1" KERNEL=="sr0",ENV{UDISKS_IGNORE}="1", RUN+="/bin/touch /home/peter/udev-rule-ran" The RUN+ in the second instance, creates my test file "udev-rule-ran", so this tells me that my rule file is being executed, and that the rule line is being run. My Question: Could you tell me what I should be doing to definitely disable the darned DVD drive? I also want to be able to enable the drive again on the occasions that I need it. Supplementary Details: I'm trying very hard to write a udev rule to disable my CD/DVD drive. I've tried various non-udev methods to disable it but none of them work. There is no loaded module¹⁾ for the drive that I can unload, so I can't use that method to disable the drive. ¹⁾ So I think the driver must be compiled into the kernel.
{ "source": [ "https://unix.stackexchange.com/questions/342736", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/214397/" ] }
342,819
Supposing I am in the same folder as an executable file, I would need to type this to execute it: ./file I would rather not have to type / , because / is difficult for me to type. Is there an easier way to execute a file? Ideally just some simple syntax like: .file or something else but easier than having to insert the / character there. Perhaps there is some way to put something in the /bin directory, or create an alias for the interpreter, so that I could use: p file
It can be "risky" but you could just have . in your PATH. As has been said in others, this can be dangerous so always ensure . is at the end of the PATH rather than the beginning.
{ "source": [ "https://unix.stackexchange.com/questions/342819", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
342,828
I used the command mv folder_name .... I thought by using .. twice, it would move it back two folders. Unfortunately my files disappeared. I need to recover them.
Your directory is still there :) You have renamed it .... Because files whose names start with . are hidden, you cannot see the directory unless you display hidden files. Run ls -A and there it is! Revert the change: mv .... original_folder_name and do the move correctly: mv original_folder_name ../..
{ "source": [ "https://unix.stackexchange.com/questions/342828", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/170338/" ] }
343,168
How do you stop the command service <name> status from using less on its output? I have a script that automates some sysadmin actions, and after I upgraded my server to Ubuntu 16.04, it's breaking because actions that check service status are blocking because it's using something like less to show output, specifically the supervisor service. I have several daemons configured to run, and when run sudo service supervisor status , I get: * supervisor.service - Supervisor process control system for UNIX Loaded: loaded (/lib/systemd/system/supervisor.service; disabled; vendor preset: enabled) Active: active (running) since Mon 2017-02-06 20:35:34 EST; 12h ago Docs: http://supervisord.org Process: 18476 ExecStop=/usr/bin/supervisorctl $OPTIONS shutdown (code=exited, status=0/SUCCESS) Main PID: 20228 (supervisord) CGroup: /system.slice/supervisor.service |- 7387 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7388 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7389 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7390 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7391 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7392 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7393 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7394 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7395 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7396 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7397 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7398 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7678 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7679 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7680 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7681 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7682 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7683 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7684 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7685 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7693 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l 
info --autoreload |- 7694 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7698 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7702 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7703 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7705 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7707 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7709 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7710 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7712 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7713 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7717 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7720 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7723 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7724 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7728 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7730 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7731 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7733 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7734 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7735 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7738 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7743 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7747 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7748 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7750 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7752 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7756 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7758 /usr/local//myproject/.env/bin/python2.7 
/usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7761 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7763 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7764 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7772 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7781 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7785 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7794 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7799 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7801 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload |- 7805 /usr/local//myproject/.env/bin/python2.7 /usr/local//myproject/.env/bin/celery worker -A /myproject -l info --autoreload lines 1-66 And it doesn't return until I manually scroll down or press Q to exit. How do I disable this feature?
Ubuntu is a systemd system, where the service status command actually calls systemctl status , and systemctl has a --no-pager option that does exactly what you're looking for. So you may be better off using the straight systemctl command in your script: sudo systemctl --no-pager status supervisor Environment variable SYSTEMD_PAGER Another way, as pointed out by @jwodder, is to set the SYSTEMD_PAGER environment variable. This has the added benefit of also affecting the output of systemctl when it is called by another application like service . Running export SYSTEMD_PAGER= followed by sudo service supervisor status will allow you to achieve the same output.
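A minimal sketch of how this might look at the top of a non-interactive maintenance script (supervisor is just the unit from the question): #!/bin/sh
export SYSTEMD_PAGER=                      # empty value disables the pager
systemctl --no-pager status supervisor     # never blocks on a pager
service supervisor status                  # also unpaged now, via SYSTEMD_PAGER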
{ "source": [ "https://unix.stackexchange.com/questions/343168", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16477/" ] }
343,429
How to I set the bootable partition using the command line in parted? Ideally I would like a numbered list so I can select which partition to boot from easily.
I use fdisk. Before applying this, I recommend working from a live CD or USB and backing up your data. First check whether a bootable partition is already present; on my system "/dev/sda1" is the bootable partition: fdisk -l /dev/sda Disk /dev/sda: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00003256 Device Boot Start End Blocks Id System /dev/sda1 * 2048 959991807 479994880 83 Linux /dev/sda2 959993854 976766975 8386561 5 Extended /dev/sda5 959993856 976766975 8386560 82 Linux swap / Solaris If no partition has the boot flag, do the following as root: fdisk /dev/sda Command (m for help): m Command action a toggle a bootable flag b edit bsd disklabel c toggle the dos compatibility flag d delete a partition l list known partition types m print this menu n add a new partition o create a new empty DOS partition table p print the partition table q quit without saving changes s create a new empty Sun disklabel t change a partition's system id u change display/entry units v verify the partition table w write table to disk and exit x extra functionality (experts only) Command (m for help): a Partition number (1-5): Type 1 to make partition 1 bootable, 2 for the second partition, and so on, then apply the modification with "w": Command (m for help): w This writes the partition table to disk and makes the desired partition bootable. I hope that helps.
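Since the question asks about parted specifically, the same flag can be toggled there as well; this is a sketch with /dev/sda and partition 1 as examples (on an MBR disk the flag marks the bootable partition, on GPT it means something different, so check your label type in the print output first): parted /dev/sda print           # numbered list of partitions on the disk
parted /dev/sda set 1 boot on   # set the boot flag on partition number 1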
{ "source": [ "https://unix.stackexchange.com/questions/343429", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4930/" ] }
343,548
I'm running some third-party Perl script written such that it requires an output file for the output flag, -o . Unfortunately, the script appears to require an actual file, that is, users must create an empty file filename.txt with 0 bytes and then input this empty file on the script command line perl script1.pl -o filename.txt Question: How would I create an empty file within a bash script? If one simply tries perl script1.pl -o filename.txt , the script gives an error that the file doesn't exist.
Use the touch command: touch filename.txt
{ "source": [ "https://unix.stackexchange.com/questions/343548", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115891/" ] }
343,570
I'm using ag ( The Silver Searcher ) version 0.31.0. I can easily look for a string in a bunch of files using: localhost:workspace davea$ ag 'ftp' . But what if I only want to scan files with certain extensions? I tried this: localhost:workspace davea$ ag 'ftp' .java ERR: Error stat()ing: .java ERR: Error opening directory .java: No such file or directory but got the errors you see above.
Per the manual, you could use ag with -G -G --file-search-regex PATTERN Only search files whose names match PATTERN. e.g. ag -G '\.java$' 'ftp' . Per the same manual It is possible to restrict the types of files searched [...] For a list of supported types, run ag --list-file-types. So you could also run ag --java 'ftp' . though that would restrict the search to file names ending in .java or .properties
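Two hedged variations on the same idea (option names as in ag around that version): ag -G '\.(java|properties)$' 'ftp' .   # several extensions at once
ag --list-file-types                   # see which built-in --<type> shortcuts exist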
{ "source": [ "https://unix.stackexchange.com/questions/343570", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166917/" ] }
343,935
I'm using "Bash on Ubuntu on Windows", from a Windows 10 Pro PC, to backup my media library to a FreeBSD server that acts as my NAS (it runs NAS4Free). This used to work perfectly but it recently stopped to work. Since I haven't changed anything on my side I reckon the change came from one of of the Windows Updates, not sure. In any case, here's what happens. Just after a few files got copied over, the rsync transfer just hangs. I've let it run overnight to confirm and it just hangs for hours. When I manually kill the task by sending CTRL + C to the terminal I get an error message, some time goes on (about 30 secs) and the program stops: arnaud@CLAVAIN:~$ rsync -arv --delete --no-compress /mnt/e/Music/ [email protected]:~/pool1/lolilol/music [email protected]'s password: sending incremental file list ost/Luke Cage (Original Soundtrack Album)/ ost/Luke Cage (Original Soundtrack Album)/40. Finding Chico.m4a ost/Luke Cage (Original Soundtrack Album)/41. I Am Carl Lucas.m4a ost/Luke Cage (Original Soundtrack Album)/42. Crispus Attucks.m4a ost/Luke Cage (Original Soundtrack Album)/43. Hideout.m4a ost/Luke Cage (Original Soundtrack Album)/44. Cuban Coffee.m4a ost/Luke Cage (Original Soundtrack Album)/45. Like a Brother.m4a ost/Luke Cage (Original Soundtrack Album)/46. Cottonmouth's Clamp.m4a ost/Luke Cage (Original Soundtrack Album)/47. Survival.m4a ost/Luke Cage (Original Soundtrack Album)/48. Cottonmouth Theme.m4a ost/Luke Cage (Original Soundtrack Album)/49. Luke Cops.m4a ost/Luke Cage (Original Soundtrack Album)/50. Crushin' On Reva.m4a ost/Luke Cage (Original Soundtrack Album)/51. Beloved Reva.m4a ^Crsync error: unexplained error (code 130) at rsync.c(632) [sender=3.1.0] [sender] io timeout after 60 seconds -- exiting arnaud@CLAVAIN:~$ You can see where the ^C is, that's when I send the kill message. This is when the "error: unexplained error" and "io timout" errors show up. I have tried an alternate command, rsync -rltvzD --progress --delete , but that produces the same error. Is there anyway I could troubleshoot this better to understand what the problem is? Note that if I do this on a local drive (like a USB external drive) the rsync works just fine.
I had this issue as well recently (as in yesterday), and what I found out is that when I rsync without delta copy (using --whole-file/-W for whole-file transfer), everything works perfectly. I know it is not the best solution, but it is a quick fix for now until the underlying problem is patched.
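Applied to the command from the question, that would look something like this (the destination is a placeholder, since the real address is obscured above): rsync -arv --whole-file --delete --no-compress /mnt/e/Music/ user@nas:~/pool1/lolilol/music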
{ "source": [ "https://unix.stackexchange.com/questions/343935", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/215305/" ] }
344,444
I'm trying to apply the same sshd settings to multiple users. According to the manual, it seems Match User acts like an AND : Introduces a conditional block. If all of the criteria on the Match line are satisfied, the keywords on the following lines override those set in the global section of the config file How do I state "for any of these users...", so in this example bob , joe , and phil are allowed to use SSH as a proxy, but not allowed to log in: Match User bob, User joe, User phil PasswordAuthentication yes AllowTCPForwarding yes ForceCommand /bin/echo 'We talked about this guys. No SSH for you!'
Not having done this myself, I can only go on what the manuals say: From the sshd_config manual: The match patterns may consist of single entries or comma-separated lists and may use the wildcard and negation operators described in the PATTERNS section of ssh_config(5) . This means that you ought to be able to say Match User bob,joe,phil PasswordAuthentication yes AllowTCPForwarding yes ForceCommand /bin/echo 'We talked about this guys. No SSH for you!' See also this answer on the Information Security forum: https://security.stackexchange.com/a/18038
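After editing, sshd's extended test mode is a convenient way to confirm what a given user would actually get (run as root; the host/addr values are placeholders): sshd -t                                                   # syntax check of sshd_config
sshd -T -C user=bob,host=client.example,addr=203.0.113.5 | grep -Ei 'passwordauthentication|allowtcpforwarding|forcecommand'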
{ "source": [ "https://unix.stackexchange.com/questions/344444", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5769/" ] }
344,454
Which command allows me to know if my webcam is used or not? lsof /dev/video0 is not sufficient. All block devices with major and minor number 81 and 0 should be monitored.
If your kernel uses modules (which is highly likely), one way to determine whether a program is accessing your webcam is to look at the usage count of the module: $ lsmod | grep uvcvideo uvcvideo 90112 0 The 0 in the third field shows that nothing has any device open for a uvcvideo -controlled webcam (when lsmod ran). Of course you need to know exactly which module is responsible for your webcam; it's easy to check though, you'll see the output change while running a program such as Cheese. Note that, strictly speaking, a positive count only means that something has opened a device, it doesn't mean images are being captured.
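A simple way to watch the count change while starting and stopping a webcam application (assuming the uvcvideo module as above): watch -n1 'lsmod | grep "^uvcvideo"'   # third column goes above 0 while a device is open
lsof /dev/video0                       # and the per-process view mentioned in the question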
{ "source": [ "https://unix.stackexchange.com/questions/344454", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/189711/" ] }
344,461
I am the only user of my Linux Mint system and I noticed that the password I chose to login is the same that got assigned to sudo . So my question is: Will changing my login password also change the sudo password? If not, how can I change the sudo password?
By default, sudo asks for the user password . Thus, changing the user password (which is also used for login), will also affect sudo invocations. However, you can set in the /etc/sudoers for your user the rootpw flag, in which case it would ask for the root password instead. The relevant excerpt from the sudoers(5) man page is: Authentication and logging The sudoers security policy requires that most users authenticate them‐ selves before they can use sudo. A password is not required if the invoking user is root, if the target user is the same as the invoking user, or if the policy has disabled authentication for the user or com‐ mand. Unlike su(1), when sudoers requires authentication, it validates the invoking user's credentials, not the target user's (or root's) cre‐ dentials. This can be changed via the rootpw, targetpw and runaspw flags, described later. Similarly, the keyword for not requesting a password for sudo is NOPASSWD . If you want to set the root password, you can use sudo passwd Note that when changing sudo permissions, it is recommended to keep a root console open (eg. sudo -s ) until it is verified in a different terminal that it indeed works, and you haven't locked out yourself.
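For illustration, the relevant sudoers lines might look like this; always edit through visudo, and "alice" is a placeholder username: sudo visudo
Defaults:alice rootpw            # sudo asks for root's password instead of the user's
alice ALL=(ALL) NOPASSWD: ALL    # or asks for no password at all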
{ "source": [ "https://unix.stackexchange.com/questions/344461", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93996/" ] }
344,468
I need to find the word beach, how would I specify that it starts with b and contains ch? I have tried using grep '^s.*ch' but that prints out lines that contain it and I just want the word.
{ "source": [ "https://unix.stackexchange.com/questions/344468", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/215681/" ] }
344,623
As I understood, "sparse file" means that the file may have 'gaps' so the actual used data may be smaller than the logical file size. How do Linux file systems save files on disk? I'm mainly interested in ext4. But: Can a file be saved not sequentially on disk? By that, I mean that part of the file is located at physical address X and the next part at physical address Y which isn't close to X + offset). Can I somehow control the file sequentiality? I want to allocate a file of 10GB. I want it to be sequential on disk and not divided between different offsets. Does it act differently between the different types?
Can a file be saved not sequentially on disk? I mean, part of the file is located under physical address X and the other part under physical address Y which isn't close to X + offset). Yes; this is known as file fragmentation and is not uncommon, especially with larger files. Most file systems allocate space as it's needed, more or less sequentially, but they can't guess future behaviour — so if you write 200MiB to a file, then add a further 100MiB, there's a non-zero chance that both sets of data will be stored in different areas of the disk (basically, any other write needing more space on disk, occurring after the first write and before the second, could come in between the two). If a filesystem is close to full, the situation will usually be worse: there may not be a contiguous area of free space large enough to hold a new file, so it will have to be fragmented. Can I somehow control the file sequentiallity? I want to allocate big file of 10GB. I want it to be sequential in disk and not divided between different offsets. You can tell the filesystem about your file's target size when it's created; this will help the filesystem store it optimally. Many modern filesystems use a technique known as delayed allocation, where the on-disk layout of a new file is calculated as late as possible, to maximise the information available when the calculation is performed. You can help this process by using the posix_fallocate(3) function to tell the filesystem how much disk space should be allocated in total. Modern filesystems will try to perform this allocation sequentially. Does it act differently between the different types? Different filesystems behave differently, yes. Log-based filesystems such as NILFS2 don't allocate storage in the same way as extent-based filesystems such as Ext4, and that's just one example of variation.
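From the shell, the same idea is exposed by fallocate(1), and filefrag(8) shows how contiguous the result actually is (util-linux and e2fsprogs tools respectively, so availability may vary): fallocate -l 10G bigfile      # ask the filesystem for 10 GiB up front
filefrag -v bigfile | head    # the number of extents indicates how fragmented the allocation is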
{ "source": [ "https://unix.stackexchange.com/questions/344623", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208735/" ] }
345,344
I want to learn more about UNIX systems and I think I have a pretty straight forward question. I think I know what X is used for: It gives a standard to applications to present their UI's (among other things). But why then is there a need for Gnome/KDE and how do they relate to X? I thought that they were using X as some sort of interface between the application and the GUI, so the GUI is customize-able while the interface stays the same across applications. Is that true or is Gnome/KDE independent of X?
(I am looking into the relation of GNOME and X. I'd like to share some of my understandings. I will present it in a logical way as much as I can. And I try to make the wording precise.) 1. What is GUI composed of? I guess everyone knows what GUI is. Below is an illustration of the basic components of a GUI. Windowing system is just a type of GUI that implements WIMP (windows, icons, menus, pointer) paradigm for a user interface. Here is a list of major windowing systems for both Linux and Windows systems. The key component of a windowing system is the display server (or window server, compositor). Any application that presents its GUI in a window is a client of the display server. Since client and server are involved, communication protocol is necessary, which is called display server protocol of course. A display server is a program whose primary task is to coordinate the input and output of its clients to and from the rest of the operating system, the hardware, and each other. It provides an abstraction of the graphics hardware for use by higher-level (no surprise that a GUI system has a hierarchical design) elements of the graphical interface such as a window manager . There are several display servers available. Such as: X.Org server (mostly for *nix) Wayland (mostly for *nix) Mir (mostly for *nix) SurfaceFlinger (This is for Google Android.) Quartz Compositor (This is what Apple MacOS uses.) Desktop Window Manager (This is what Microsoft Windows uses.) 2. What does X mean exactly? X, X11 and X Window System are synonyms. They all stand for a windowing system . As said above, the key component, the display server, of the X windowing system is the X.Org Server . Sometimes, X.Org server is also called X server for short. Any application that runs and presents its GUI is a client of the display server . The display server and its clients communicate with each other over a communications protocol, which is usually called display server protocol , the display server being the mediator between the clients and the user. The display server receives all the input from the kernel, that the kernel receives from all attached input devices, such as keyboard, pointing devices, or touchscreen and transmits it to the correct client. The display server is also responsible for the output of the clients to the computer monitor. A display server protocol can be network capable or even network transparent. (so you can see, it is essentially just about data flow and routing, visual data is still data.) And according to here : An X.Org Server is a program that provides display and user input services to other programs. In comparison, a file server provides other programs with access to file storage devices. File servers are typically located in a remote location and you use the services of a file server from the machine that you are located at. In contrast, an X Server is typically running on the machine that you are located at ; display and user input services may be requested by programs running on your machine, as well as by programs running on remote machines. So X windowing system is composed of: display server display server protocol some libs for development and other things According to here : X (the windowing system I think) provides the basic framework for a GUI environment: drawing and moving windows on the display device and interacting with a mouse and keyboard. X does not mandate the user interface – this is handled by individual programs. 
As such, the visual styling of X-based environments varies greatly; different programs may present radically different interfaces. In other words, the X windowing system only gives a program the ability to do basic things like drawing/moving windows and handling input. X doesn't enforce visual styles. It just provides a way to draw, not what to draw. So what you said " ...It gives a standard to applications to present their UI's... " is incorrect. 3. What is a Window Manager? GNOME, Xfce and KDE each include a window manager. Because these all work with the X display server, they are called X window managers. The window manager collaborates with the X server and X clients. You can see where the window manager sits in the GUI composition picture above. Here are the different types of window managers. 4. What are the GNOME/KDE/Xfce desktops? GNOME, KDE and Xfce desktops are all Linux desktop environments. A desktop environment is a bundle of programs running on top of an operating system, which share a common GUI. But as I mentioned above, X11, as a display server, only provides the basic drawing ability through some libs like Xlib or XCB. Applications that directly interface X11 through such libs can have radically different visual styles. So how do you create a common GUI? Here come the widget toolkits, such as GTK and Qt. They are popular in the Wayland and X11 windowing systems. GNOME and Xfce use GTK. KDE uses Qt. And here is a comparison of X Window System desktop environments. 5. What are gdm3, lightdm and kdm? They are all display managers, as indicated by the "dm" part. Personally, I think "display manager" is a misleading name; "graphical login manager" would be better. It is typically a graphical user interface that is displayed at the end of the boot process in place of the default shell. Different desktop environments use different login managers to keep the visual style consistent. GNOME uses gdm3, Xfce uses lightdm, and KDE uses kdm. A display manager can start a session on an X server from the same or another computer. To summarize...hope I am not over elaborating... A GUI can be of many types. A windowing system is one type of GUI. The key component of any windowing system is usually called the display server. A windowing system, such as X, abstracts the hardware and IO. It provides base services of drawing and moving windows, IO handling, etc. The window manager, such as the ones used by GNOME, Xfce and KDE, works on top of the display server and provides the look and feel you see. A desktop environment is a bundle of apps that share a common visual style. The display manager, or graphical login manager, provides a graphical login interface. I drew a rough conceptual illustration. The 3 parts above the OS are very customizable. That's why so much flexibility (confusion) arises. ADD 1 - 1:26 PM 9/21/2018 And here are some discussions about Qt and GTK (maybe off-topic to this thread though...)
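As a rough, hedged way to see which of these pieces a given machine is actually running (the XDG_* variables are only set inside a graphical session, and the process names listed here are just common examples): echo "$XDG_SESSION_TYPE" "$XDG_CURRENT_DESKTOP"   # e.g. "x11 GNOME" or "wayland KDE"
ps -e | grep -E 'Xorg|Xwayland|gnome-shell|kwin|xfwm4|gdm|sddm|lightdm'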
{ "source": [ "https://unix.stackexchange.com/questions/345344", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216307/" ] }
345,797
I have been exploring files in bash, and in /etc/ssl/certs, most of the filenames are light blue. There is a red filename though, and I can't figure out why it is red. Most of the files in this directory are .pem files. The red one is also a .pem file. It happens to be something like China_Internet_Network_Information_Center...pem According to this stack exchange question , light blue filenames mean linked files, while red file names mean "archived" files. What does that mean? Looking at the directory with ls -all , I still can't tell what makes the filename red. Can anyone explain why it is red?
First you need to know the VT100 color code https://en.wikipedia.org/wiki/ANSI_escape_code#Colors

I don't know what your text actually looks like, but "red text" is 31. Then you want to look at the dircolors command, and find everything that has a 31 in it. In my case, that would be:

or=40;31;01
*.tar=01;31
*.tgz=01;31
*.arj=01;31
*.taz=01;31
*.lzh=01;31
*.lzma=01;31
*.tlz=01;31
*.txz=01;31
*.zip=01;31
*.z=01;31
*.Z=01;31
*.dz=01;31
*.gz=01;31
*.lz=01;31
*.xz=01;31
*.bz2=01;31
*.bz=01;31
*.tbz=01;31
*.tbz2=01;31
*.tz=01;31
*.deb=01;31
*.rpm=01;31
*.jar=01;31
*.rar=01;31
*.ace=01;31
*.zoo=01;31
*.cpio=01;31
*.7z=01;31
*.rz=01;31

Then you can go here http://www.bigsoft.co.uk/blog/index.php/2008/04/11/configuring-ls_colors which tells you:

or is an "orphan", a symbolic link with no target
the rest are file globs that match assorted archive and compression schemes

.pem doesn't appear on my list, and .pem files aren't colored on my system, so I can't help you further than that. But I'd guess "orphan".
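A small follow-up sketch for checking one specific file (assuming GNU ls/coreutils; the certificate name below is a placeholder for the real one):

# Print the literal escape sequence ls emits for the file; 31 in it means red
ls --color=always -d /etc/ssl/certs/Some_CA.pem | cat -v

# List your current LS_COLORS entries that use red (31), one per line
echo "$LS_COLORS" | tr ':' '\n' | grep 31

# Check whether the file is a dangling symlink (the "orphan" case)
ls -l /etc/ssl/certs/Some_CA.pem
file /etc/ssl/certs/Some_CA.pem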
{ "source": [ "https://unix.stackexchange.com/questions/345797", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80998/" ] }
345,862
I would like to have multiple NICs (eth0 and wlan0) in the same subnet, to serve as a backup for the applications on the host if one of the NICs fails. For this reason I have created an additional routing table. This is how /etc/network/interfaces looks:

iface eth0 inet static
    address 192.168.178.2
    netmask 255.255.255.0
    dns-nameserver 8.8.8.8 8.8.4.4
    post-up ip route add 192.168.178.0/24 dev eth0 src 192.168.178.2
    post-up ip route add default via 192.168.178.1 dev eth0
    post-up ip rule add from 192.168.178.2/32
    post-up ip rule add to 192.168.178.2/32

iface wlan0 inet static
    wpa-conf /etc/wpa_supplicant.conf
    wireless-essid xyz
    address 192.168.178.3
    netmask 255.255.255.0
    dns-nameserver 8.8.8.8 8.8.4.4
    post-up ip route add 192.168.178.0/24 dev wlan0 src 192.168.178.3 table rt2
    post-up ip route add default via 192.168.178.1 dev wlan0 table rt2
    post-up ip rule add from 192.168.178.3/32 table rt2
    post-up ip rule add to 192.168.178.3/32 table rt2

That works for connecting to the host: I can still SSH into it if one of the interfaces fails. However, the applications on the host cannot initiate a connection to the outside world if eth0 is down. That is my problem. I have researched that topic and found the following interesting information:

When a program initiates an outbound connection it is normal for it to use the wildcard source address (0.0.0.0), indicating no preference as to which interface is used provided that the relevant destination address is reachable. This is not replaced by a specific source address until after the routing decision has been made. Traffic associated with such connections will not therefore match either of the above policy rules, and will not be directed to either of the newly-added routing tables. Assuming an otherwise normal configuration, it will instead fall through to the main routing table. http://www.microhowto.info/howto/ensure_symmetric_routing_on_a_server_with_multiple_default_gateways.html

What I want is for the main route table to have more than one default gateway (one on eth0 and one on wlan0) and to go to the default gateway via eth0 by default and via wlan0 if eth0 is down. Is that possible? What do I need to do to achieve such functionality?
Solved it myself. There seems to be very little information about the networking stuff that you can do with Linux, so I have decided to document and explain my solution in detail. This is my final setup:

3 NICs: eth0 (wire), wlan0 (built-in wifi, weak), wlan1 (usb wifi adapter, stronger signal than wlan0)
All of them on a single subnet, each of them with their own IP address.
eth0 should be used for both incoming and outgoing traffic by default. If eth0 fails then wlan1 should be used. If wlan1 fails then wlan0 should be used.

First step: Create a new route table for every interface in /etc/iproute2/rt_tables. Let's call them rt1, rt2 and rt3:

#
# reserved values
#
255     local
254     main
253     default
0       unspec
#
# local
#
#1      inr.ruhep
1       rt1
2       rt2
3       rt3

Second step: Network configuration in /etc/network/interfaces. This is the main part and I'll try to explain as much as I can:

auto eth0 wlan0
allow-hotplug wlan1

iface lo inet loopback

iface eth0 inet static
    address 192.168.178.99
    netmask 255.255.255.0
    dns-nameserver 8.8.8.8 8.8.4.4
    post-up ip route add 192.168.178.0/24 dev eth0 src 192.168.178.99 table rt1
    post-up ip route add default via 192.168.178.1 dev eth0 table rt1
    post-up ip rule add from 192.168.178.99/32 table rt1
    post-up ip rule add to 192.168.178.99/32 table rt1
    post-up ip route add default via 192.168.178.1 metric 100 dev eth0
    post-down ip rule del from 0/0 to 0/0 table rt1
    post-down ip rule del from 0/0 to 0/0 table rt1

iface wlan0 inet static
    wpa-conf /etc/wpa_supplicant.conf
    wireless-essid xyz
    address 192.168.178.97
    netmask 255.255.255.0
    dns-nameserver 8.8.8.8 8.8.4.4
    post-up ip route add 192.168.178.0/24 dev wlan0 src 192.168.178.97 table rt2
    post-up ip route add default via 192.168.178.1 dev wlan0 table rt2
    post-up ip rule add from 192.168.178.97/32 table rt2
    post-up ip rule add to 192.168.178.97/32 table rt2
    post-up ip route add default via 192.168.178.1 metric 102 dev wlan0
    post-down ip rule del from 0/0 to 0/0 table rt2
    post-down ip rule del from 0/0 to 0/0 table rt2

iface wlan1 inet static
    wpa-conf /etc/wpa_supplicant.conf
    wireless-essid xyz
    address 192.168.178.98
    netmask 255.255.255.0
    dns-nameserver 8.8.8.8 8.8.4.4
    post-up ip route add 192.168.178.0/24 dev wlan1 src 192.168.178.98 table rt3
    post-up ip route add default via 192.168.178.1 dev wlan1 table rt3
    post-up ip rule add from 192.168.178.98/32 table rt3
    post-up ip rule add to 192.168.178.98/32 table rt3
    post-up ip route add default via 192.168.178.1 metric 101 dev wlan1
    post-down ip rule del from 0/0 to 0/0 table rt3
    post-down ip rule del from 0/0 to 0/0 table rt3

If you type ip rule show you should see the following:

0:      from all lookup local
32756:  from all to 192.168.178.98 lookup rt3
32757:  from 192.168.178.98 lookup rt3
32758:  from all to 192.168.178.99 lookup rt1
32759:  from 192.168.178.99 lookup rt1
32762:  from all to 192.168.178.97 lookup rt2
32763:  from 192.168.178.97 lookup rt2
32766:  from all lookup main
32767:  from all lookup default

This tells us that traffic incoming or outgoing from the IP address "192.168.178.99" will use the rt1 route table. So far so good. But traffic that is locally generated (for example you want to ping or ssh from the machine to somewhere else) needs special treatment (see the big quote in the question).
The first four post-up lines in /etc/network/interfaces are straightforward and explanations can be found on the internet; the fifth and last post-up line is the one that makes the magic happen:

post-up ip route add default via 192.168.178.1 metric 100 dev eth0

Note how we haven't specified a route table for this post-up line. If you don't specify a route table, the information will be saved in the main route table that we saw in ip rule show. This post-up line puts a default route in the "main" route table, which is used for locally generated traffic that is not a response to incoming traffic. (For example, an MTA on your server trying to send an e-mail.) The three interfaces all put a default route in the main route table, albeit with different metrics. Let's take a look at the main route table with ip route show:

default via 192.168.178.1 dev eth0 metric 100
default via 192.168.178.1 dev wlan1 metric 101
default via 192.168.178.1 dev wlan0 metric 102
192.168.178.0/24 dev wlan0 proto kernel scope link src 192.168.178.97
192.168.178.0/24 dev eth0 proto kernel scope link src 192.168.178.99
192.168.178.0/24 dev wlan1 proto kernel scope link src 192.168.178.98

We can see that the main route table has three default routes, albeit with different metrics. The highest priority is eth0, then wlan1 and then wlan0, because lower metric numbers indicate a higher priority. Since eth0 has the lowest metric, this is the default route that is going to be used for as long as eth0 is up. If eth0 goes down, outgoing traffic will switch to wlan1.

With this setup we can type ping 8.8.8.8 in one terminal and ifdown eth0 in another. ping should still work: because ifdown eth0 will remove the default route related to eth0, outgoing traffic will switch to wlan1.

The post-down lines make sure that the related ip rules get deleted from the routing policy database (ip rule show) when the interface goes down, in order to keep everything tidy.

The problem that is left is that when you pull the plug from eth0, the default route for eth0 is still there and outgoing traffic fails. We need something to monitor our interfaces and to execute ifdown eth0 if there's a problem with the interface (i.e. NIC failure or someone pulling the plug).

Last step: enter ifplugd. That's a daemon that watches interfaces and executes ifup/ifdown if you pull the plug or if there's a problem with the wifi connection. /etc/default/ifplugd:

INTERFACES="eth0 wlan0 wlan1"
HOTPLUG_INTERFACES=""
ARGS="-q -f -u0 -d10 -w -I"
SUSPEND_ACTION="stop"

You can now pull the plug on eth0, outgoing traffic will switch to wlan1, and if you put the plug back in, outgoing traffic will switch back to eth0. Your server will stay online as long as any of the three interfaces works. For connecting to your server you can use the IP address of eth0 and, if that fails, the IP address of wlan1 or wlan0.
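To sanity-check the failover on a running system, a quick sketch (assuming the addresses and table names used above):

# Which route and interface locally generated traffic will use right now
ip route get 8.8.8.8

# Inspect the per-interface routing tables created above
ip route show table rt1
ip route show table rt2
ip route show table rt3

# Simulate an eth0 failure and confirm outgoing traffic moves to wlan1
ifdown eth0
ip route get 8.8.8.8     # should now report "dev wlan1"
ifup eth0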
{ "source": [ "https://unix.stackexchange.com/questions/345862", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/105617/" ] }
346,118
I would like to know if there is a way of using bash expansion to view all possible combinations for a number of digits in hexadecimal. I can do the expansion in binary. In base 2: echo {0..1}{0..1}{0..1} Which gives back: 000 001 010 011 100 101 110 111 In base 10: echo {0..9}{0..9} Which gives back: 00 01 02...99 But in hexadecimal: echo {0..F} just repeats: {0..F}
You can; you just need to break the range {0..F} into two separate ranges, {0..9} and {A..F}:

$ printf '%s\n' {{0..9},{A..F}}{{0..9},{A..F}}
00
01
...
FE
FF
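Two related sketches, in case they are useful (both rely only on standard printf, seq and brace-expansion behaviour):

# The same list via an arithmetic loop and printf's hex conversion
for i in $(seq 0 255); do printf '%02X\n' "$i"; done

# Three hex digits: just repeat the compound range once more
echo {{0..9},{A..F}}{{0..9},{A..F}}{{0..9},{A..F}}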
{ "source": [ "https://unix.stackexchange.com/questions/346118", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155550/" ] }
346,367
I want to get the first 20 or so characters from a number of files. I've seen examples using cut but they all seem to get the first 20 characters of each line in the file, while I only want the first characters in the file itself (ie. from the first line), nothing more. Is there a simple way to do this?
The complete command would be: head -c 20 yourFile.txt
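Since the question mentions a number of files, a small extension (a sketch; the glob and file name are placeholders, and note that -c counts bytes rather than characters, which only matters for multi-byte encodings):

# First 20 bytes of every .txt file, labelled per file
for f in *.txt; do printf '%s: ' "$f"; head -c 20 "$f"; echo; done

# cut works per line, so take the first line first if you prefer character positions
head -n 1 yourFile.txt | cut -c 1-20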
{ "source": [ "https://unix.stackexchange.com/questions/346367", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4148/" ] }
346,425
A hard link is defined as a pointer to an inode. A soft link , also known as a symbolic link , is defined as an independent file pointing to another link without the restrictions of hard links. What is the difference between a file and a hard link? A hard link points to an inode, so what is a file? The inode entry itself? Or an inode with a hard link? Let's say I create a file with touch. Then an inode entry is created in the inode table . And I create a hard link, which has the same inode number as the file. So did I create a new file? Or is the file just defined as an inode?
The very short answer is:

a file is an anonymous blob of data
a hardlink is a name for a file
a symbolic link is a special file whose content is a pathname

Unix files and directories work exactly like files and directories in the real world (and not like folders in the real world); Unix filesystems are (conceptually) structured like this:

a file is an anonymous blob of data; it doesn't have a name, only a number (inode)
a directory is a special kind of file which contains a mapping of names to files (more specifically inodes); since a directory is just a file, directories can have entries for directories, that's how recursion is implemented (note that when Unix filesystems were introduced, this was not at all obvious, a lot of operating systems didn't allow directories to contain directories back then)
these directory entries are called hardlinks
a symbolic link is another special kind of file, whose content is a pathname; this pathname is interpreted as the name of another file
other kinds of special files are: sockets, fifos, block devices, character devices

Keeping this metaphor in mind, and specifically keeping in mind that Unix directories work like real-world directories and not like real-world folders, explains many of the "oddities" that newcomers often encounter, like:

Why can I delete a file I don't have write access to? Well, for one, you're not deleting the file, you are deleting one of many possible names for the file, and in order to do that, you only need write access to the directory, not the file. Just like in the real world.

Or, why can I have dangling symlinks? Well, the symlink simply contains a pathname. There is nothing that says that there actually has to be a file with that name.

"My question is simply what is the difference of a file and a hard link?"

The difference between a file and a hard link is the same as the difference between you and the line with your name in the phone book.

"Hard link is pointing to an inode, so what is a file? Inode entry itself? Or an Inode with a hard link?"

A file is an anonymous piece of data. That's it. A file is not an inode, a file has an inode, just like you are not a Social Security Number, you have a SSN. A hard link is a name for a file. A file can have many names.

"Let's say, I create a file with touch, then an Inode entry is created in the Inode Table."

Yes.

"And I create a hard link, which has the same Inode number with the file."

No. A hard link doesn't have an inode number, since it's not a file. Only files have inode numbers. The hardlink associates a name with an inode number.

"So did I create a new file?"

Yes.

"Or the file is just defined as an Inode?"

No. The file has an inode, it isn't an inode.
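To see this on a real filesystem, a short sketch (the file names are arbitrary; stat -c assumes GNU coreutils):

# Create a file (a new inode with one name), then a second name, then a symlink
touch file
ln file hardlink
ln -s file symlink

# file and hardlink show the same inode number and a link count of 2;
# symlink gets its own inode, because it is a separate (special) file
ls -li file hardlink symlink
stat -c '%i %h %n' file hardlink symlink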
{ "source": [ "https://unix.stackexchange.com/questions/346425", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/154547/" ] }