source_id | question | response | metadata
---|---|---|---|
315,188 | I tried setting the following alias in .bashrc: $ alias ll='ls --color=auto --time-style=long-iso' But the above doesn't work. I want to have the long-iso format as well as descending order (date and/or time-wise) whenever I ask it to list files in the CLI. Is there any way to do that? The above command does give me color output but not the long-iso part. Am I doing something wrong? I did see Set ls -l time format but it doesn't help in my case :( | You are missing a -l to turn on the long listing format and -t to sort by modification time. Do: alias ll='ls -lt --color=auto --time-style=long-iso' To include hidden files too: alias ll='ls -alt --color=auto --time-style=long-iso' To reverse the order of sorting, oldest first, add -r : alias ll='ls -ltr --color=auto --time-style=long-iso' or, with hidden files: alias ll='ls -altr --color=auto --time-style=long-iso' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/315188",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50490/"
]
} |
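A minimal sketch of the fixed alias as it would sit in ~/.bashrc (the rc file name is the usual bash default; adjust if your setup sources a different file):

```bash
# ~/.bashrc -- long listing, ISO timestamps, newest first
alias ll='ls -lt --color=auto --time-style=long-iso'

# pick up the change in the current shell
source ~/.bashrc
```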
315,235 | I am using Arch Linux with Gnome and I want to use openconnect to connect to a VPN server. I can do this at the command line without a problem, but I can't do this with Gnome; I get the following error: NetworkManager[589]: <error> [1475998103.4381] vpn-connection[0x28a9530,dc5d3708-967d-4e50-90ac-d0c892fe8ab3,"nm-vpn-connection.c",0]: Failed to request VPN secrets #3: No agents were available for this request. The Arch Linux Wiki suggests to do: ln -s /usr/lib/networkmanager/nm-openconnect-auth-dialog /usr/lib/gnome-shell/ but this also does not solve the problem. The problem occurs when I click on connect; I am unable to activate the VPN connection with Gnome and NetworkManager. | In my case (Debian 9 with Gnome 3.2), selecting the password option "Store the password for all users" in the VPN settings got it working. All other options produce the mentioned error. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/315235",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83276/"
]
} |
315,247 | I am trying to run KDE 5 in a VNC session and although I get application windows like konsole, kate etc., the shell I need for the start menus, taskbar, desktop background etc. is not present. This is my current ~/.vnc/xstartup file (flattened here; see the line-by-line reconstruction after this row). I also get a message that kwin_x11 keeps crashing and I should try another shell. What additional commands are required to get the shell working? #!/bin/sh xrdb $HOME/.Xresources xsetroot -solid grey export XKL_XMODMAP_DISABLE=1 # /etc/X11/Xsession startkde & | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/315247",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26026/"
]
} |
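For readability, the flattened ~/.vnc/xstartup above reconstructs to the following (each command on its own line in the actual file):

```sh
#!/bin/sh
xrdb $HOME/.Xresources
xsetroot -solid grey
export XKL_XMODMAP_DISABLE=1
# /etc/X11/Xsession
startkde &
```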
315,255 | I was trying to list some hidden files in my home directory and I encountered a very odd behavior of the grep command when combined with the ls command. I executed ls -a on my home directory and got all the files, including hidden files, as expected. I wanted to list all the hidden files starting with 'xau' so I executed ls -a | grep -i .xau* and it also worked as expected. Then I executed ls -a | grep -i .x* in the same directory but it didn't list anything at all. I then mistakenly typed ls -a | grep -i .*x (note that this time the wildcard character * and the character 'x' have switched places) and the interesting thing is that it behaved like what I intended in step 3. I tried the same thing with this command ls -a .*x and ls -a .*X but I get a no such file or directory error. I have added the actual text output here. Some of you may ask why not just use ls -a .x* but the thing with grep is that it prints with the appropriate colors. So could anyone please explain this to me? | You are suffering from premature glob expansion. .xau* doesn't expand because it doesn't match anything in the current directory. (Globs are case sensitive.) However, .x* does match some files, so this gets expanded by the shell before grep ever sees it. When grep receives multiple arguments, it assumes the first is the pattern and the remainder are files to search for that pattern. So, in the command ls -a | grep -i .x* , the output of ls is ignored, and the file ".xsession-errors.old" is searched for the pattern ".xsession-errors". Not surprisingly, nothing is found. To prevent this, put your special characters within single or double quotes. For example: ls -a | grep -i '.x*' You are also suffering from regex vs. glob confusion. You seem to be looking for files that start with the literal string ".x" and are followed by anything, but regular expressions don't work the same as file globs. The * in regex means "the preceding character zero or more times," not "any sequence of characters" as it does in file globs. So what you probably want is: ls -a | grep -i '^\.x' This searches for files whose names start with the literal characters ".x" or ".X". Actually, since there's only one letter you are specifying, you could just as easily use a character class rather than -i : ls -a | grep '^\.[xX]' The point is that regular expressions are very different from file globs. If you just try ls -a | grep -i '.x*' , as has been suggested, you will be very surprised to see that EVERY file will be shown! (The same output as ls -a directly, except placed on separate lines as in ls -a -1 .) How come? Well, in regex (but not in shell globs), a period ( . ) means "any single character." And an asterisk ( * ) means "zero or more of the preceding character." So the regex .x* means "any character, followed by zero or more instances of the character 'x'." Of course, you are not allowed to have null file names, so every file name contains "a character followed by at least zero 'x's." :) Summary: To get the results you want, you need to understand two things: Unquoted special glob characters (including * , ? , [] and some others) will get expanded by the shell before the command you are running ever sees them, and regular expressions are different from (and more powerful than) file globs. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/315255",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194099/"
]
} |
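A small demo of the two fixes from the answer, runnable in any directory that contains a few dotfiles:

```bash
# quote the pattern so the shell cannot expand it first,
# and anchor it so it behaves as a regex, not a glob
ls -a | grep -i '^\.x'      # names starting with ".x" or ".X"
ls -a | grep '^\.[xX]'      # same result, character class instead of -i
```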
315,365 | Is there a standard Unix command that does something similar to my example below? $ <cmd here> 56 $ echo Return code was $? Return code was 56 <cmd here> should be something that can be fork-execed and leaves 56 as the exit code when the process exits. The exit and return shell builtins are unsuitable for what I'm looking for because they affect the invoking shell itself by exiting out of it. <some cmd> should be something that I can execute in non-shell contexts - e.g., invoking from a Python script with subprocess . E.g., /usr/bin/false always exits immediately with return code 1, but I'd like to control exactly what that return code is. I could achieve the same results by writing my own wrapper script: $ cat my-wrapper-script.sh # i.e., <some cmd> = ./my-wrapper-script.sh #!/usr/bin/bash exit $1 $ ./my-wrapper-script.sh 56 $ echo $? 56 but I'm hoping there happens to exist a standard Unix command that can do this for me. | A return based function would work, and avoids the need to open and close another shell (as per Tim Kennedy 's comment): freturn() { return "$1" ; } freturn 56 ; echo $? Output: 56 Using exit in a subshell: (exit 56) With shells other than ksh93 , that implies forking an extra process so is less efficient than the above. bash / zsh / ksh93 only trick: . <( echo return 56 ) (that also implies an extra process (and IPC with a pipe)). zsh 's lambda functions: (){return 56} | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/315365",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14748/"
]
} |
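A related one-liner in the same spirit (not from the answer above, just a common idiom): spawn a throwaway shell whose only job is to exit with the requested status.

```bash
sh -c 'exit 56'
echo "Return code was $?"   # -> Return code was 56
```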
315,424 | I'm trying to upload all the text files within the current folder via FTP to a server location using curl. I tried the following line: curl -T "{file1.txt, file2.txt}" ftp://XXX --user YYY where XXX is the server's IP address and YYY is the username and password. I'm able to transfer file1.txt to the server successfully, but it complains about the second file saying 'Can't open 'file_name'!' I swapped the file names and it worked for file2.txt and not file1.txt. Seems like I've got the syntax wrong, but this is what the manual says? Also, ideally I would be able to do something like this: curl -T *.txt ftp://XXX --user YYY because I won't always know the names of the txt files in the current folder or the number of files to be transferred. I'm of the opinion I may have to write a bash script that collects the output of ls *.txt into an array and puts it into the multiple-files format required by curl. I've not done bash scripting before - is this the simplest way to achieve this? | Your first command should work without whitespace: curl -T "{file1.txt,file2.txt}" ftp://XXX/ --user YYY Also note the trailing "/" in the URLs above. This is curl's manual entry about option "-T": -T, --upload-file This transfers the specified local file to the remote URL. If there is no file part in the specified URL, Curl will append the local file name. NOTE that you must use a trailing / on the last directory to really prove to Curl that there is no file name or curl will think that your last directory name is the remote file name to use. That will most likely cause the upload operation to fail. If this is used on an HTTP(S) server, the PUT command will be used. Use the file name "-" (a single dash) to use stdin instead of a given file. Alternately, the file name "." (a single period) may be specified instead of "-" to use stdin in non-blocking mode to allow reading server output while stdin is being uploaded. You can specify one -T for each URL on the command line. Each -T + URL pair specifies what to upload and to where. curl also supports "globbing" of the -T argument, meaning that you can upload multiple files to a single URL by using the same URL globbing style supported in the URL, like this: curl -T "{file1,file2}" http://www.uploadtothissite.com or even curl -T "img[1-1000].png" ftp://ftp.picturemania.com/upload/ "*.txt" expansion does not work because curl supports only the same syntax as for URLs: You can specify multiple URLs or parts of URLs by writing part sets within braces as in: http://site.{one,two,three}.com or you can get sequences of alphanumeric series by using [] as in: ftp://ftp.numericals.com/file[1-100].txt ftp://ftp.numericals.com/file[001-100].txt (with leading zeros) ftp://ftp.letters.com/file[a-z].txt [...] When using [] or {} sequences when invoked from a command line prompt, you probably have to put the full URL within double quotes to avoid the shell from interfering with it. This also goes for other characters treated special, like for example '&', '?' and '*'. But you could use the "normal" shell globbing like this: curl -T "{$(echo *.txt | tr ' ' ',')}" ftp://XXX/ --user YYY (The last example may not work in all shells or with any kind of exotic file names.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/315424",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194267/"
]
} |
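If the globbing tricks feel fragile, a plain loop does the same job one file at a time (a sketch; host, path and credentials are placeholders, as in the question):

```bash
# one curl invocation per .txt file in the current directory
for f in *.txt; do
  curl -T "$f" "ftp://XXX/upload/" --user YYY
done
```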
315,425 | At the beginning, I have these permissions for a file: # file: jar # owner: my_user # group: my_user user::rw- group::rw- other::r-- After running this: setfacl -m u:my_user:--- jar I get these permissions: # file: foobar # owner: my_user # group: my_user user::rw- user:my_user:--- group::rw- mask::rw- other::r-- I expected my_user not to have permission to read (for example) this file, but it does. | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/315425",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16952/"
]
} |
315,456 | I'd like to run the command foo --bar=baz <16 zeroes> How do I type the 16 zeroes efficiently*? If I hold Alt and press 1 6 0 it will repeat the next thing 160 times, which is not what I want. In emacs I can either use Alt-[number] or Ctrl-u 1 6 Ctrl-u 0 , but in bash Ctrl-u kills the currently-being-typed line and the next zero just adds a 0 to the line. If I do foo --bar=baz $(printf '0%.0s' {1..16}) then history shows exactly the above, and not foo --bar=baz 0000000000000000 ; i.e. bash doesn't behave the way I want. ( Edit : point being, I want to input some number of zeroes without using $(...) command substitution.) (*) I guess a technical definition of "efficiently" is "with O(log n) keystrokes", preferably a number of keystrokes equal to the number of digits in 16 (for all values of 16) plus perhaps a constant; the emacs example qualifies as efficient by this definition. | Try echo Alt+1 Alt+6 Ctrl+V 0 That's 6 keystrokes (assuming a US/UK QWERTY keyboard at least) to insert those 16 zeros (you can hold Alt for both 1 and 6). You could also use the standard vi mode ( set -o vi ) and type: echo 0 Esc x16p (also 6 keystrokes). The emacs mode equivalent, which could be used to repeat more than a single character ( echo 0 Ctrl+W Alt+1 Alt+6 Ctrl+Y ), works in zsh but not in bash . All those will also work with zsh (and tcsh where that comes from). With zsh , you could also use padding variable expansion flags and expand them with Tab : echo ${(l:16::0:)} Tab (a lot more keystrokes, obviously). With bash , you can also have bash expand your $(printf '0%.0s' {1..16}) with Ctrl+Alt+E . Note though that it will expand everything (not globs though) on the line. To play the game of the least number of keystrokes, you could bind to some key a widget that expands <some-number>X to X repeated <some-number> times, and have <some-number> in base 36 to reduce it even further. With zsh (bound to F8 ): repeat-string() { REPLY=; repeat $1 REPLY+=$2; }; expand-repeat() { emulate -L zsh; set -o rematchpcre; local match mbegin mend MATCH MBEGIN MEND REPLY; if [[ $LBUFFER =~ '^(.*?)([[:alnum:]]+)(.)$' ]]; then repeat-string $((36#$match[2])) $match[3]; LBUFFER=$match[1]$REPLY; else return 1; fi; }; zle -N expand-repeat; bindkey "$terminfo[kf8]" expand-repeat Then, for 16 zeros, you type: echo g0 F8 (3 keystrokes) where g is 16 in base 36. Now we could further reduce it to one key that inserts those 16 zeros, though that would be cheating. We could bind F2 to two 0 s (or two $STRING , 0 by default), F3 to 3 0 s, F1 F6 to 16 0 s... up to 19... possibilities are endless when you can define arbitrary widgets. Maybe I should mention that if you press and hold the 0 key, you can insert as many zeros as you want with just one keystroke :-) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/315456",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19055/"
]
} |
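For plain bash, one hedged alternative the answer does not mention: readline can bind a key chord to a literal macro, so a fixed run of zeros is one chord away (the Ctrl-x z binding is an arbitrary choice):

```bash
# after this, typing Ctrl-x z inserts sixteen zeros at the cursor
bind '"\C-xz": "0000000000000000"'
```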
315,461 | In zsh , I make a symbolic link: $ ln -s ~/Documents symboliclink and then I want to know what's inside this symbolic link: $ ls -l symboliclink lrwxrwxrwx 1 user user 21 Oct 10 15:56 symboliclink -> /home/user/Documents This shows only the symbolic link, not what's inside it. If I use ls only, it lists the contents, but if I use the -l flag, it doesn't. This works in bash both for ls and ls -l . How can I get that behavior in zsh as well? | One likely cause for the seeming difference in the output of ls -l between zsh and bash is the use of Tab -completion with AUTO_REMOVE_SLASH enabled in zsh (which is the default). AUTO_REMOVE_SLASH <D> When the last character resulting from a completion is a slash and the next character typed is a word delimiter, a slash, or a character that ends a command (such as a semicolon or an ampersand), remove the slash. When typing ls -l symb Tab , both zsh and bash will complete this to ls -l symboliclink/ (note the / at the end). The difference is that zsh (with enabled AUTO_REMOVE_SLASH ) will remove the slash if you just press Enter (i.e. end the command) there. So you will effectively run ls -l symboliclink/ in bash , which tells ls -l to look behind the link. But in zsh you will run ls -l symboliclink , which tells ls -l that you want to see the information about the link and not the target directory. ls without option -l will always show the contents of the target directory, regardless of there being a / at the end or not. In order to get zsh to not remove the slash at the end, it is sufficient to just type it explicitly after Tab -completion. Usually this will not visibly change the completed text, but if you type a space or confirm the command, the / will remain. "Usually" because it is possible to set a highlighting for automatically added suffix characters, for example magenta and bold: zle_highlight[(r)suffix:*]="suffix:fg=magenta,bold" ( Note: this may not work when using the external ZSH Syntax Highlighting plugin.) Another solution is (obviously) to disable AUTO_REMOVE_SLASH . This can be done with setopt noautoremoveslash | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/315461",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58056/"
]
} |
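A quick way to see the distinction the answer describes, using the question's own link:

```bash
ln -s ~/Documents symboliclink
ls -l symboliclink     # long info about the link itself
ls -l symboliclink/    # trailing slash: long listing of the target's contents
```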
315,502 | The Ubuntu 16.04 server VM image apparently starts the "apt-daily.service" every 12 hours or so; this service performs various APT-related tasks like refreshing the list of available packages, performing unattended upgrades if needed, etc. When starting from a VM "snapshot", the service is triggered immediately , as (I presume) systemd realizes quickly that the timer should have gone off long ago. However, a running APT prevents other apt processes from running as it holds a lock on /var/lib/dpkg . The error message indicating this looks like this: E: Could not get lock /var/lib/dpkg/lock-frontend - open (11: Resource temporarily unavailable) E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it? I need to disable this automated APT task until Ansible has completed the machine setup (which typically involves installing packages); see https://github.com/gc3-uzh-ch/elasticluster/issues/304 for more info and context. I have tried various options to disable the "unattended upgrades" feature through a "user data" script for cloud-init , but all of them have failed so far. 1. Disable the systemd task. The systemd task apt-daily.service is triggered by apt-daily.timer . I have tried to disable one or the other, or both, with various combinations of the following commands; still, the apt-daily.service is started moments after the VM becomes ready to accept SSH connections: #!/bin/bash systemctl stop apt-daily.timer; systemctl disable apt-daily.timer; systemctl mask apt-daily.service; systemctl daemon-reload 2. Disable config option APT::Periodic::Enable . Script /usr/lib/apt/apt.systemd.daily reads a few APT configuration variables; the setting APT::Periodic::Enable disables the functionality altogether (lines 331--337). I have tried disabling it with the following script: #!/bin/bash # cannot use /etc/apt/apt.conf.d/10periodic as suggested in /usr/lib/apt/apt.systemd.daily, as Ubuntu distributes the unattended upgrades stuff with priority 20 and 50 ... so override everything with a 99xxx file: cat > /etc/apt/apt.conf.d/99elasticluster <<__EOF APT::Periodic::Enable "0"; // undo what's in 20auto-upgrade APT::Periodic::Update-Package-Lists "0"; APT::Periodic::Unattended-Upgrade "0"; __EOF However, despite APT::Periodic::Enable having value 0 from the command line (see below), the unattended-upgrades program is still run... ubuntu@test:~$ apt-config shell AutoAptEnable APT::Periodic::Enable AutoAptEnable='0' 3. Remove /usr/lib/apt/apt.systemd.daily altogether. The following cloud-init script removes the unattended upgrades script altogether: #!/bin/bash mv /usr/lib/apt/apt.systemd.daily /usr/lib/apt/apt.systemd.daily.DISABLED Still, the task runs and I can see it in the process table! although the file does not exist if probed from the command line: ubuntu@test:~$ ls /usr/lib/apt/apt.systemd.daily ls: cannot access '/usr/lib/apt/apt.systemd.daily': No such file or directory It looks as though the cloud-init script (together with the SSH command line) and the root systemd process execute in separate filesystems and process spaces... Questions: Is there something obvious I am missing? Or is there some namespace magic going on which I am not aware of? Most importantly: how can I disable the apt-daily.service through a cloud-init script? | Yes, there was something obvious that I was missing. Systemd is all about concurrent start of services, so the cloud-init script is run at the same time the apt-daily.service is triggered. By the time cloud-init gets to execute the user-specified payload, apt-get update is already running. So attempts 2 and 3 failed not because of some namespace magic, but because they altered the system too late for apt.systemd.daily to pick the changes up. This also means that there is basically no way of preventing apt.systemd.daily from running -- one can only kill it after it's started. This "user data" script takes this route (reconstructed line by line after this row): #!/bin/bash systemctl stop apt-daily.service; systemctl kill --kill-who=all apt-daily.service; # wait until `apt-get update` has been killed: while ! (systemctl list-units --all apt-daily.service | egrep -q '(dead|failed)'); do sleep 1; done; # now proceed with own APT tasks: apt install -y python There is still a time window during which SSH logins are possible yet apt-get will not run, but I cannot imagine another solution that works on the stock Ubuntu 16.04 cloud image. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/315502",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/274/"
]
} |
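The accepted workaround script, reconstructed from the flattened text above into a form that can be dropped into cloud-init user data as-is:

```bash
#!/bin/bash
systemctl stop apt-daily.service
systemctl kill --kill-who=all apt-daily.service

# wait until any running `apt-get update` has been killed
while ! (systemctl list-units --all apt-daily.service | egrep -q '(dead|failed)'); do
  sleep 1
done

# now proceed with our own APT tasks
apt install -y python
```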
315,504 | What is the proper syntax for adding a dot that is not bold after a word in bold? I need to write the following sentence without the dot in bold: word_in_bold . Other sentence But .B word_in_bold. Other sentence does not generate "Other sentence". | | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/315504",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189711/"
]
} |
315,586 | Can someone suggest how to rename the file name head.body.date.txt to head_body_date.txt ? Is there a single-line statement to do the rename in Unix? | Iterate over the filenames, and use parameter expansion for the conversion: for f in *.*.*.txt; do i="${f%.txt}"; echo mv -i -- "$f" "${i//./_}.txt"; done The parameter expansion pattern ${i//./_} replaces all . s with _ s in the filename. The above does a dry run; to let the actual renaming take place, remove echo : for f in *.*.*.txt; do i="${f%.txt}"; mv -i -- "$f" "${i//./_}.txt"; done If you want to deal with any extension, not just .txt : for f in *.*.*.*; do pre="${f%.*}"; suf="${f##*.}"; echo mv -i -- "$f" "${pre//./_}.${suf}"; done After checking, remove echo for the actual action: for f in *.*.*.*; do pre="${f%.*}"; suf="${f##*.}"; mv -i -- "$f" "${pre//./_}.${suf}"; done Generic, for an arbitrary number of dots, at least one: for f in *.*; do pre="${f%.*}"; suf="${f##*.}"; mv -i -- "$f" "${pre//./_}.${suf}"; done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/315586",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194377/"
]
} |
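If the Perl-based rename utility happens to be installed (it is not part of POSIX and some distributions ship a different rename with another syntax, so treat this as an optional extra rather than part of the answer):

```bash
# dry run (-n) first: replace every dot except the last one
rename -n 's/\.(?=.*\.)/_/g' *.txt
# drop -n to actually rename
```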
315,615 | My guess is that a machine running an ssh server keeps a mapping of an ssh public key to my user name (I am a remote user) who has authorized access. But a few other people and I are also able to log in on behalf of some local user on the remote machine (meaning that we have a remote machine with a local user john.doe on that machine). Does it mean that there is also a mapping of the user john.doe to multiple ssh public keys that are authorized to access on behalf of john.doe ? | The short answer is no. Sample scenario: you (Bob) want to connect to the remote host ( earth ) as alice . SSH is a connection from someplace (a Unix, Windows, tablet, ...) to a user ( alice ) on a host ( earth ). When you ( bob ) connect without a password, you use a private key (on Unix it is traditionally located in ~/.ssh , but you can put it anywhere). The remote host just has two facts: you want to connect as alice , and you claim to have a private key (of which you provided the fingerprint). The remote host ( earth ), having the public part of the key, issues a challenge that anyone having the private part can answer. Once the challenge is done, you simply connect. In this scenario, you proved you have the private part of one of alice 's authorized keys. But the remote host ( earth ) has no way to know you are Bob, or Igor, or anyone else. Remember you can connect from Windows or an Android device where the user scheme is completely different. authorized_keys This file lists public keys that can connect (after a challenge). This file is located either in ${HOME}/.ssh/authorized_keys (default) or in any location given by AuthorizedKeysFile in the sshd_config file. See man sshd_config . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/315615",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/178188/"
]
} |
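The practical consequence: to let several people log in as the same local user, append each person's public key as its own line in that user's authorized_keys (a sketch; the key file names are placeholders):

```bash
# run as (or for) the target account
cat bob.pub igor.pub >> ~john.doe/.ssh/authorized_keys
chmod 600 ~john.doe/.ssh/authorized_keys
```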
315,622 | I have something like this: % ls -1dF /tmp/foo/* /tmp/foo/000f9e956feab3ee4625aebb65ae7bae9533cdbc/ /tmp/foo/002e34c2218f2c86fefd2876f0e5c2559c5fb3c4/ /tmp/foo/00b483576791bab751e6cb7ee0a7143af43a8069/ ... /tmp/foo/fedd0f7b545e7ae9600142656756456bc16874d3/ /tmp/foo/ff51ac87609012137cfcb02f36624f81cdc10788/ /tmp/foo/ff8b983a7411395344cad64182cb17e7cdefa55e/ I want to create a directory bar under each of the subdirectories under foo . If I try to do this with % mkdir -p /tmp/foo/*/bar ... I get the error zsh: no matches found: /tmp/foo/*/bar (In hindsight, I can understand the reason for the error.) I know that I can solve the original problem with a for-loop, but I'm curious to know if zsh supports some form of parameter expansion that would produce the desired argument for a single invocation of mkdir -p . IOW, a parameter expansion equivalent to "append /bar to every prefix generated by expanding /tmp/foo/* ", resulting in % mkdir -p /tmp/foo/000f9e956feab3ee4625aebb65ae7bae9533cdbc/bar ... /tmp/foo/ff8b983a7411395344cad64182cb17e7cdefa55e/bar | setopt histsubstpattern extendedglob; mkdir -p /tmp/foo/*(#q/:s_%_/bar_) This is extended globbing that has a q uiet glob flag that uses a glob qualifier to match only directories and a modifier to perform a s ubstitution (using the % pattern character that is only available in history substitution pattern mode) that appends a string to each word. man zshexpn | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/315622",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
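For comparison, the plain loop the asker already had in mind (works in zsh and bash alike):

```bash
for d in /tmp/foo/*/; do
  mkdir -p -- "${d}bar"
done
```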
315,635 | I have a folder A in which every now and then new files appear. It may happen that there are no new files for an hour, but also that there are several new files each second. All files have a unique timestamp in their name and different file extensions. What, how and when something is put in the folder cannot be modified. It's a directory on my PC which is accessed from the outside via FTP. In another folder B I would like to have a copy of the most recent jpg file from folder A, called "Newest.jpg", as soon as there is a new one. A 1 second delay would be fine. It is supposed to imitate the output of a webcam. What is the best and least computationally expensive way to achieve this on a Raspberry-like system running Ubuntu 14.04 LTS? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/315635",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116151/"
]
} |
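One common low-cost approach for this kind of task (a hedged sketch only: it assumes the inotify-tools package provides inotifywait, and /path/to/A and /path/to/B are placeholders for the real folders):

```bash
#!/bin/bash
# react whenever a file finishes being written in A; copy jpgs to B/Newest.jpg
inotifywait -m -e close_write --format '%f' /path/to/A |
while read -r f; do
  case $f in
    *.jpg) cp -- "/path/to/A/$f" /path/to/B/Newest.jpg ;;
  esac
done
```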
315,663 | $ cp --no-preserve=mode --parents /sys/power/state /tmp/test/ $ cp --no-preserve=mode --parents /sys/bus/cpu/drivers_autoprobe /tmp/test/ The second of the two lines will fail with cp: cannot make directory ‘/tmp/test/sys/bus’: Permission denied And the reason is that /tmp/test/sys is created without write permission (as is the original /sys ); a normal mkdir /tmp/test/sys2 would not have done this: $ ls -la /tmp/test/ total 32 drwxr-xr-x 3 robert.siemer domain^users 4096 Oct 11 13:56 . drwxrwxrwt 13 root root 20480 Oct 11 13:56 .. dr-xr-xr-x 3 robert.siemer domain^users 4096 Oct 11 13:56 sys drwxr-xr-x 2 robert.siemer domain^users 4096 Oct 11 13:59 sys2 How can I instruct cp to not preserve the mode, apart from --no-preserve=mode , which does not work as I believe it should...? Or which tool should I use to copy a list of files without preserving "anything" except symlinks? | In case you are using GNU coreutils: this is a bug which is fixed in version 8.26. https://lists.gnu.org/archive/html/bug-coreutils/2016-08/msg00016.html So the alternative tool would be an up-to-date coreutils , or for example rsync , which is able to do that even while preserving permissions: $ rsync -a --relative /sys/power/state /tmp/test $ rsync -a --relative /sys/bus/cpu/drivers_autoprobe /tmp/test/ Though I see rsync has other problems for these particular sysfs files, see rsync option to disable verification? Another harsh workaround would be to chmod all the dirs after each cp command: $ find /tmp/test -type d -exec chmod $(umask -S) {} \; (The find/chmod command above would also not work for any combination of existing permissions and umask.) BTW you could report this bug to your Linux distribution and they might fix your 8.21 package via maintenance updates. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/315663",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9417/"
]
} |
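A blunter variant of the answer's chmod workaround that simply re-opens the copied tree for its owner (not from the answer; it deliberately sidesteps the umask subtlety noted above):

```bash
# make every copied directory writable by its owner again
find /tmp/test -type d -exec chmod u+w {} +
```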
315,729 | I have a space-separated file. This file contains text as well as numbers in exponential format. I want to convert the exponential numbers to actual decimal values. Can someone suggest how I can achieve this? | With GNU awk : gawk -v RS='[-+]?[0-9.]+[eE][-+]?[0-9]+' -v ORS= -v CONVFMT=%.1000g '{print $0 (RT == "" ? "" : +RT)}' gawk stores what is matched by the record separator into the RT variable, which we convert to floating point without exponent via CONVFMT . For higher precision, for instance for 1e123 to be turned into 1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 instead of 999999999999999977709969731404129670057984297594921577392083322662491290889839886077866558841507631684757522070951350501376 , with gawk 4.1 or above add -M -v PREC=1000 (1000 being the number of bits). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/315729",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/185159/"
]
} |
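A quick check of the command on a made-up input line:

```bash
printf 'flux 1.5e3 and 2e-2\n' |
gawk -v RS='[-+]?[0-9.]+[eE][-+]?[0-9]+' -v ORS= -v CONVFMT=%.1000g \
     '{print $0 (RT == "" ? "" : +RT)}'
# -> flux 1500 and 0.02
```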
315,799 | This question Unix & Linux: permissions 755 on /home/ covers part of my question but: Default permissions on a home directory are 755 in many instances. However that lets other users wander into your home folder and look at stuff. Changing the permissions to 711 (rwx--x--x) means they can traverse folders but not see anything. This is required if you have authorized_keys for SSH - without it SSH gives errors when trying to access the system using a public key. Is there some way to set up the folders/directories so SSH can access authorized_keys , postfix/mail can access the files it requires, and the system can access config files, but without all and sundry walking the system? I can manually make the folder 711 and set ~/.ssh/authorized_keys to 644 , but remembering to do that every time for every config is prone to (my) mistakes. I would have thought by default all files were private unless specifically shared, but with two Ubuntu boxes (admittedly server boxes) everyone can read all newly created files. That seems a little off as a default setting. | As noted in the manual, by default home folders made with useradd copy the /etc/skel folder, so if you change its subfolder rights, all users created afterwards with default useradd will have the desired rights. Same for adduser . Editing "UMASK" in /etc/login.defs will change the rights when creating home folders. If you want more user security you can encrypt home folders and put ssh keys in /etc/ssh/%u instead of /home/%u/.ssh/authorized_keys . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/315799",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194550/"
]
} |
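The %u idea from the answer corresponds to the AuthorizedKeysFile directive; a hedged sketch of what that looks like in /etc/ssh/sshd_config (the /etc/ssh/authorized_keys directory is an arbitrary choice):

```
# in /etc/ssh/sshd_config (%u expands to the user name)
AuthorizedKeysFile /etc/ssh/authorized_keys/%u
```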
315,812 | Typical Unix/Linux programs accept the command line inputs as an argument count ( int argc ) and an argument vector ( char *argv[] ). The first element of argv is the program name - followed by the actual arguments. Why is the program name passed to the executable as an argument? Are there any examples of programs using their own name (maybe some kind of exec situation)? | To begin with, note that argv[0] is not necessarily the program name. It is what the caller puts into argv[0] of the execve system call (e.g. see this question on Stack Overflow ). (All other variants of exec are not system calls but interfaces to execve .) Suppose, for instance, the following (using execl ): execl("/var/tmp/mybackdoor", "top", NULL); /var/tmp/mybackdoor is what is executed but argv[0] is set to top , and this is what ps or (the real) top would display. See this answer on U&L SE for more on this. Setting all of this aside: Before the advent of fancy filesystems like /proc , argv[0] was the only way for a process to learn about its own name. What would that be good for? Several programs customize their behavior depending on the name by which they were called (usually by symbolic or hard links, for example BusyBox's utilities ; several more examples are provided in other answers to this question). Moreover, services, daemons and other programs that log through syslog often prepend their name to the log entries; without this, event tracking would become next to infeasible. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/315812",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104091/"
]
} |
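The name-based dispatch trick from the answer is easy to see in a shell script (a toy sketch; BusyBox does the equivalent in C by inspecting argv[0]):

```bash
#!/bin/sh
# behave differently depending on the name we were invoked as
case "$(basename "$0")" in
  hello) echo "Hello!" ;;
  bye)   echo "Goodbye!" ;;
  *)     echo "usage: link me as 'hello' or 'bye'" >&2 ;;
esac
# try: ln -s dispatch.sh hello && ./hello
```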
315,815 | I have two apps that use the same TCP port (and same interface) for the monitoring console, not the main port of the application. I am not interested in using that port, and I cannot change the source code for SO_REUSEADDR or for changing the port. How can I have both applications running on the same OS? | | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/315815",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63305/"
]
} |
315,823 | I have access to a list of folders that have a format like lastname, firstname(id) When I try to enter the folder from the terminal, it looks like cd test/lastname,\ firstname\(id\) I am not sure why there are backslashes where there aren't any spaces. My script has access to the credentials and I generated the exact format with the backslashes, but I still cannot enter the folder from the bash script. The variable I use is like this: folder="lastname,\ firstname\(id\)" When I do cd $HOME/test/$folder/ it says there is no such folder. I tried a couple of solutions suggested in different questions, but they haven't worked. Putting double quotes around the folder variable, and also around the entire expression, also didn't work. I guess I don't know what is going wrong and hence cannot get it to work. It'd be awesome if someone could help me out here! | | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/315823",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194533/"
]
} |
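A hedged sketch of the usual fix for this class of problem: the backslashes you see at the interactive prompt are shell quoting, not part of the directory name, so store the literal name and quote the expansion.

```bash
folder='lastname, firstname(id)'   # literal name, no backslashes
cd "$HOME/test/$folder"            # quotes keep the space and parens intact
```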
315,826 | I have the following pipeline: ➜ echo ,cats,and,dogs, | sed -e 's/,[^,]*,[^,]*,/,,,/' ,,,dogs, I know that I could run a command like !! to "run the last command" or !:1 to "get the last arguments", but I'm wondering: is there some command that I can run that will let me "get the k th command+args from a pipeline"? So in this example, if I wanted to pipe some other output into the sed utility, I could do something like this right after running the above pipeline: $ echo ,foo,bar,baz, | %:2 where %:2 is some maybe-fictional command that I don't know, that "runs the k th command in a pipeline". Does this command exist? | | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/315826",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/173557/"
]
} |
315,829 | How to understand the output of echo $- ? It looks like some kind of flag characters. I can't get a clue by googling. | They represent the values of the shell's flags; this is defined by POSIX : - (Hyphen.) Expands to the current option flags (the single-letter option names concatenated into a string) as specified on invocation, by the set special built-in command, or implicitly by the shell. The Zsh manual mentions it briefly: - <S> Flags supplied to the shell on invocation or by the set or setopt commands. as does the Bash manual in the description of set : The current set of options may be found in $- . To understand the output of echo $- you need to look up the options in your shell's manual. For example, in Bash, echo $- outputs himBHs for me, which means that the -h , -m , -B and -H options are enabled (see help set for details), that the shell is interactive ( -i ) and reading from standard input ( -s ). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/315829",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194577/"
]
} |
315,839 | I've got lots of bjobs running on LSF, and jobs have two statuses, RUN and PEND. I want to kill all bjobs with PEND status; how can I do that with a script? A hard-coded way, I think, is saving them to a file and then parsing every line to get the status and key. If the STAT is PEND then pass the key to bkill $key . But this is very complicated; is there any bkill function that can directly do this, or a non-hard-coded way to kill jobs with a specific status or name? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/315839",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/170560/"
]
} |
315,928 | I have files in subdirectories of the current directory that may or may not have newlines at the end; how can I find files that don't have a newline at the end? I've tried this: find . -name '*.styl' | while read file; do awk 'END{print}' $file | grep -E '^$' > /dev/null || echo $file; done but it doesn't work. awk 'END{print}' $file prints the line before an empty new line, the same as tail -n 1 $file . | To clarify, the LF (aka \n or newline) character is the line delimiter , it's not the line separator. A line is not finished unless it's terminated by a newline character. A file that only contains a\nb is not a valid text file because it contains characters after the last line. Same for a file that contains only a . A file that contains a\n contains one non-empty line. So a file that ends with at least one empty line ends with two newline characters or contains a single newline character. If: tail -c 2 file | od -An -vtc outputs \n or \n \n , then the file contains at least one trailing empty line. If it outputs nothing, then that's an empty file; if it outputs <anything-but-\0> \n , then it ends in a non-empty line. Anything else, it's not a text file. Now, to use that to find files that end in an empty line: OK, that's efficient (especially for large files) in that it only reads the last two bytes of the files, but first, the output is not easily parsable programmatically, especially considering that it's not consistent from one implementation of od to the next, and we'd need to run one tail and one od per file. find . -type f -size +0 -exec gawk 'ENDFILE{if ($0 == "") print FILENAME}' {} + (to find files ending in an empty line) would run as few commands as possible but would mean reading the full content of all files. Ideally, you'd need a shell that can read the end of a file by itself. With zsh : zmodload zsh/system; for f (**/*(D.L+0)) { { sysseek -w end -2; sysread; [[ $REPLY = $'\n' || $REPLY = $'\n\n' ]] && print -r -- $f } < $f } | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/315928",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1806/"
]
} |
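A quick self-test of the idea for single files (note one caveat: command substitution also yields an empty string for a zero-byte file, so test for emptiness separately if that matters):

```bash
printf 'no newline' > f1
printf 'has one\n'  > f2
for f in f1 f2; do
  [ -n "$(tail -c 1 "$f")" ] && echo "$f: missing final newline"
done
# -> f1: missing final newline
```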
315,960 | I am looking for (good) backup alternatives to the Time Machine of macOS/OS X devices or File History on Windows machines. Actually what I am looking for is closer to Windows' solution than to the Time Machine. So I know I can use rsync or - with a nice UI - Back in time . However I am not looking for an external backup solution! This means I rather want to have a file history as in Windows Vista (and above, AFAIK). On Windows Vista/7 this worked with Shadow copies , so this is exactly what I'd like to have: So I want to save the backup/file history on the same drive (and probably partition, but that does not matter). I'd also save it on another internal drive, but not on an external one. Is there such a solution for Linux, or how can I best replicate this behaviour? That's why existing files should not be duplicated and a backup (copy of the file) should only be saved when I actually modify or remove it. This way it saves much space, especially for larger files, which you won't edit anyway. As opposed to rsync/backintime, where never-modified files are copied even with incremental backups. | The Windows 'Shadow Copy' aka 'Volume Shadow Copy Service' does filesystem snapshotting. The Linux equivalent requires changing your filesystem/partitions, or possibly using 3rd party tools. Options: LVM - you must leave free space on your volume group, and it has a pretty high performance cost. Although not super fast, it is available, stable, and pretty usable out of the box on most Linux releases. btrfs - not entirely stable; be careful to read the note about setups that should not be used. Apparently it has some major ways it can be broken and result in full data loss. zfs - not natively available on most distributions yet. Very popular option, but is very difficult to use as a root fs on Linux. Great for data filesystems. R1Soft Hot Copy - https://www.r1soft.com/free-tool-linux-hot-copy I haven't used this, but I don't believe it is designed for long-term snapshots; instead it is just used for getting a clean backup. So, if you need to snapshot your root FS, I suspect you probably need to set up the system with LVM and leave lots of free space in your volume group. If you need snapshots for a data-only filesystem, I strongly suggest you look at zfs or maybe btrfs. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/315960",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146739/"
]
} |
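A hedged sketch of the LVM route (the volume group vg0 , logical volume home and the mount point are placeholder names):

```bash
# take a 5G copy-on-write snapshot of /dev/vg0/home
lvcreate --size 5G --snapshot --name home-snap /dev/vg0/home
mkdir -p /mnt/snap && mount -o ro /dev/vg0/home-snap /mnt/snap
# done browsing? unmount and drop it:
umount /mnt/snap && lvremove /dev/vg0/home-snap
```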
315,963 | I am not sure how to word this, but I often I find myself typing commands like this: cp /etc/prog/dir1/myconfig.yml /etc/prog/dir1/myconfig.yml.bak I usually just type out the path twice (with tab completion) or I'll copy and paste the path with the cursor. Is there some bashfoo that makes this easier to type? | There are a number of tricks (there's a duplicate to be found I think), but for this I tend to do cp /etc/prog/dir1/myconfig.yml{,.bak} which gets expanded to your command. This is known as brace expansion . In the form used here, the {} expression specifies a number of strings separated by commas. These "expand" the whole /etc/prog/dir1/myconfig.yml{,.bak} expression, replacing the {} part with each string in turn: the empty string, giving /etc/prog/dir1/myconfig.yml , and then .bak , giving /etc/prog/dir1/myconfig.yml.bak . The result is cp /etc/prog/dir1/myconfig.yml /etc/prog/dir1/myconfig.yml.bak These expressions can be nested: echo a{b,c,d{e,f,g}} produces ab ac ade adf adg There's a variant using numbers to produce sequences: echo {1..10} produces 1 2 3 4 5 6 7 8 9 10 and you can also specify the step: echo {0..10..5} produces 0 5 10 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/315963",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29731/"
]
} |
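The same trick extends nicely; for instance, a date-stamped backup suffix (brace expansion happens first, then the command substitution runs in each resulting word):

```bash
cp /etc/prog/dir1/myconfig.yml{,.bak.$(date +%Y%m%d)}
# -> cp /etc/prog/dir1/myconfig.yml /etc/prog/dir1/myconfig.yml.bak.20161013
```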
315,968 | I am trying to automate running Cloud9 by setting it up as a service on Debian. This error stops me: $ sudo service cloud9 start Failed to start cloud9.service: Unit cloud9.service failed to load: Invalid argument. See system logs and 'systemctl status cloud9.service' for details. systemctl status cloud9.service output: $ systemctl status cloud9.service ● cloud9.service - cloud9 Loaded: error (Reason: Invalid argument) Active: inactive (dead) Probably it is due to a misconfiguration in /etc/systemd/system/cloud9.service , which I just created: [Unit] Description=cloud9 [Service] ExecStart=node server.js -w /home/user -l 0.0.0.0 -a admin:admin Restart=always User=nobody Group=nobody Environment=PATH=/usr/bin:/usr/local/bin Environment=NODE_ENV=production WorkingDirectory=/home/user/c9sdk [Install] WantedBy=multi-user.target How do I create a simple startup script for the service? | Your first clue is that the diagnostic said to check the output of systemctl status cloud9.service , but you didn't mention doing that or sharing that output. Perhaps it will tell you that the path to the binary you pass to ExecStart= must be absolute. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/315968",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137750/"
]
} |
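A hedged sketch of the fix: give ExecStart= the absolute path of the node binary (wherever command -v node points on the machine; /usr/bin/node below is only an example):

```
# in /etc/systemd/system/cloud9.service
[Service]
ExecStart=/usr/bin/node /home/user/c9sdk/server.js -w /home/user -l 0.0.0.0 -a admin:admin
```

After editing the unit, run systemctl daemon-reload before starting it again.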
316,065 | I run the command ps -A | grep <application_name> and get a list of processes like this: 19440 ? 00:00:11 <application_name> 21630 ? 00:00:00 <application_name> 22694 ? 00:00:00 <application_name> I want to kill all processes from the list: 19440 , 21630 , 22694 . I have tried ps -A | grep <application_name> | xargs kill -9 $1 but it fails with errors: kill: illegal pid ? kill: illegal pid 00:00:00 kill: illegal pid <application_name> How can I do this gracefully? | pkill -f 'PATTERN' will kill all the processes that the pattern PATTERN matches. With the -f option, the whole command line (i.e. including arguments) will be taken into account. Without the -f option, only the command name will be taken into account. See also man pkill on your system. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/316065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138156/"
]
} |
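Before sending signals it is worth previewing what would match; pgrep accepts the same pattern options as pkill :

```bash
pgrep -af '<application_name>'   # list PID + full command line of matches
pkill  -f '<application_name>'   # then kill them (add -9 only as a last resort)
```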
316,067 | I use a headset for both my headphones and my microphone. As a result pavucontrol is labeling both my output and my input the same thing, Built-in Audio Analogue Stereo . It makes configuring my loopback-modules somewhat frustrating for obvious reasons. How would I go about just renaming them to "Headphones" and "Mic"? | You can update the device.description with update-sink-proplist and update-source-proplist , e.g. pacmd update-sink-proplist alsa_output.my-card.analog-stereo device.description=MyCard I haven't figured out how to make that parse spaces in the name properly. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316067",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/190006/"
]
} |
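To find the exact sink/source names to pass to those commands (the alsa_input name below is a placeholder, as in the answer):

```bash
pacmd list-sinks   | grep 'name:'
pacmd list-sources | grep 'name:'
pacmd update-source-proplist alsa_input.my-card.analog-stereo device.description=Mic
```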
316,105 | When doing some installations I change the 'HISTFILE' variable to a different file to record the commands used. Now I want to use the history command to display them just like with the history command which defaults to using the .bash_history file. What option should be passed to the history command? When I try history ~/.history.d/alternate_history I get the error message -bash: history: /home/vfclists/.history.d/alternate_history: numeric argument required. The man help lists some options which appear to make some changes to other history files I don't want. | The history command never operates on a file, only on its in-memory history list. You can only read ( r ), write ( -w ), and append ( -a ) that list to or from a file, and then access or manipulate the in-memory list. Reading from a file will replace or extend the history in your current shell session. You can, however, spawn another shell and manipulate its history to run any command you want without affecting the history of your current shell: bash -c 'history -cr file ; history' or ( history -cr file ; history ) You can add any history options you want to the second history command in either case. If you'll be doing this a lot, you may want to define a function accepting the file as an argument and running the subshell version: histfile() { ( history -cr "$1" ; history )} If you're interested in displaying saved timestamps, you'll also need to set HISTTIMEFORMAT . If you're using a subshell, and you get timestamps in your host shell, that should be there automatically, but for the bash -c version or a script you'll need to set it: bash -c 'history -cr file ; HISTTIMEFORMAT="%Y%m%d%H%I%S " history' You can also export the variable from the parent shell. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316105",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26026/"
]
} |
316,112 | I'm not able to install Chrome. When I try, it shows the following: root@kali:~/Downloads# apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 857 not upgraded. Could you please solve this ASAP? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316112",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194828/"
]
} |
316,114 | I have created a service on Debian 8.6 and as I'm trying to start it using the service command, I receive an error. I have tried systemctl daemon-reload , but still getting the same result.
$ sudo service cloud9 start
$ sudo service cloud9 status
● cloud9.service - cloud9
   Loaded: loaded (/etc/systemd/system/cloud9.service; enabled)
   Active: failed (Result: start-limit) since Thu 2016-10-13 07:21:02 UTC; 2s ago
  Process: 2610 ExecStart=/opt/bitnami/nodejs/bin/node /home/user/c9sdk/server.js -w /home/user -l 0.0.0.0 -a admin:admin (code=exited, status=216/GROUP)
 Main PID: 2610 (code=exited, status=216/GROUP)
Oct 13 07:21:02 test-vm systemd[1]: cloud9.service: main process exited, code=exited, status=216/GROUP
Oct 13 07:21:02 test-vm systemd[1]: Unit cloud9.service entered failed state.
Oct 13 07:21:02 test-vm systemd[1]: cloud9.service holdoff time over, scheduling restart.
Oct 13 07:21:02 test-vm systemd[1]: Stopping cloud9...
Oct 13 07:21:02 test-vm systemd[1]: Starting cloud9...
Oct 13 07:21:02 test-vm systemd[1]: cloud9.service start request repeated too quickly, refusing to start.
Oct 13 07:21:02 test-vm systemd[1]: Failed to start cloud9.
Oct 13 07:21:02 test-vm systemd[1]: Unit cloud9.service entered failed state.
The config is in /etc/systemd/system/cloud9.service :
[Unit]
Description=cloud9
[Service]
ExecStart=/opt/bitnami/nodejs/bin/node /home/user/c9sdk/server.js -w /home/user -l 0.0.0.0 -a admin:admin
Restart=always
User=nobody
Group=nobody
Environment=PATH=/bin:/usr/bin:/usr/local/bin
Environment=NODE_ENV=production
WorkingDirectory=/home/user/c9sdk
[Install]
WantedBy=multi-user.target | 
2610 ExecStart=/opt/bitnami/nodejs/bin/node /home/user/c9sdk/server.js -w /home/user -l 0.0.0.0 -a admin:admin (code=exited, status=216/GROUP)
…
Oct 13 07:21:02 test-vm systemd[1]: cloud9.service: main process exited, code=exited, status=216/GROUP
… which describes the problem. Your group nobody is not a valid group on your system. Specify a valid group.
Environment=PATH=/bin:/usr/bin:/usr/local/bin
This is probably unnecessary.
-w /home/user -l 0.0.0.0
In a better world, the cloud9 service program here would receive its listening socket as an open file descriptor, and inherit its working directory (which, ironically, you have explicitly set elsewhere in the unit). Further reading https://unix.stackexchange.com/a/316168/5132 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316114",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137750/"
]
} |
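A hedged sketch of the fix: status=216/GROUP means setting the group failed, so first confirm which groups actually exist. On Debian, the nobody user's primary group is typically nogroup, not nobody:
getent group nobody || echo "group 'nobody' does not exist here"
getent group nogroup                  # Debian's group for the nobody user
# then set Group=nogroup (or another existing group) in the unit and reload:
sudo systemctl daemon-reload
sudo systemctl restart cloud9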
316,161 | According to FHS-3.0 , /tmp is for temporary files and /run is for run-time variable data. Data in /run must be deleted at next boot, which is not required for /tmp , but still programs must not assume that the data in /tmp will be available at the next program start. All this seems quite similar to me. So, what is the difference between the two? By which criterion should a program decide whether to put temporary data into /tmp or into /run ? According to the FHS: Programs may have a subdirectory of /run ; this is encouraged for programs that use more than one run-time file. This indicates that the distinction between "system programs" and "ordinary programs" is not a criterion, neither is the lifetime of the program (like, long-running vs. short-running process). Although the following rationale is not given in the FHS, /run was introduced to overcome the problem that /var was mounted too late such that dirty tricks were needed to make /var/run available early enough. However, now with /run being introduced, and given its description in the FHS, there does not seem to be a clear reason to have both /run and /tmp . | The directories /tmp and /usr/tmp (later /var/tmp ) used to be the dumping ground for everything and everybody. The only protection mechanism for files in these directories is the sticky bit, which restricts deletion or renaming of files there to their owners. As marcelm pointed out in a comment, there's in principle nothing that prevents someone from creating files with names that are used by services (such as nginx.pid or sshd.pid ). (In practice, the startup scripts could remove such bogus files first, though.) /run was established for non-persistent runtime data of long-lived services such as locks, sockets, pid files and the like. Since it is not writable for the public, it shields service runtime data from the mess in /tmp and jobs that clean up there. Indeed: Two distributions that I run (no pun intended) have permissions 755 on /run , while /tmp and /var/tmp (and /dev/shm for that matter) have permissions 1777. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/316161",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194871/"
]
} |
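The different protection models are easy to observe directly; a quick sketch (the modes shown are the typical ones and may differ per distro):
stat -c '%a %n' /tmp /var/tmp /run    # typically: 1777 /tmp, 1777 /var/tmp, 755 /run
touch /tmp/nginx.pid                  # any user may create this; the sticky bit
                                      # only restricts deletion/renaming
touch /run/nginx.pid                  # fails for non-root: Permission denied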
316,186 | My understanding is that in order to block a process signal like SIGHUP , you would need to do so from within the process the signal is being sent to. Yet, a Unix shell like bash can spawn a child process and block the HUP signal for the child from within the parent, using the nohup command. How does this work? Does nohup block the signal and then exec the child process without forking? That's the only way I can think of. | You can take a look at the source code of an implementation of nohup , e.g. GNU's coreutils version . There's a ton of setup, some of it for internationalisation purposes, the rest to handle the various redirection options; then the actual "nohupping" happens:
signal (SIGHUP, SIG_IGN);
char **cmd = argv + optind;
execvp (*cmd, cmd);
As you surmise, this sets the process up to ignore the HUP signal, then exec s the requested command. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316186",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/188709/"
]
} |
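The same two steps can be sketched in shell, which shows why the ignored disposition survives the exec (the command name is a placeholder):
trap '' HUP                   # ignore SIGHUP in the current shell...
exec long-running-command     # ...then replace the shell with the command;
                              # ignored signal dispositions are inherited across exec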
316,257 | This script takes the user input line after line, and executes myfunction on every line
#!/bin/bash
SENTENCE=""
while read word
do
  myfunction "$word"
done
echo $SENTENCE
To stop the input, the user has to press [ENTER] and then Ctrl+D . How can I rebuild my script to end only with Ctrl+D and process the line where Ctrl+D was pressed? | To do that, you'd have to read character by character, not line by line. Why? The shell very likely uses the standard C library function read() to read the data that the user is typing in, and that function returns the number of bytes actually read. If it returns zero, that means it has encountered EOF (see the read(2) manual; man 2 read ). Note that EOF isn't a character but a condition, i.e. the condition "there is nothing more to be read", end-of-file . Ctrl+D sends an end-of-transmission character (EOT, ASCII character code 4, $'\04' in bash ) to the terminal driver. This has the effect of sending whatever there is to send to the waiting read() call of the shell. When you press Ctrl+D halfway through entering the text on a line, whatever you have typed so far is sent to the shell 1 . This means that if you enter Ctrl+D twice after having typed something on a line, the first one will send some data, and the second one will send nothing , and the read() call will return zero and the shell will interpret that as EOF. Likewise, if you press Enter followed by Ctrl+D , the shell gets EOF at once as there wasn't any data to send. So how to avoid having to type Ctrl+D twice? As I said, read single characters. When you use the read shell built-in command, it probably has an input buffer and asks read() to read a maximum of that many characters from the input stream (maybe 16 kB or so). This means that the shell will get a bunch of 16 kB chunks of input, followed by a chunk that may be less than 16 kB, followed by zero bytes (EOF). Once encountering the end of input (or a newline, or a specified delimiter), control is returned to the script. If you use read -n 1 to read a single character, the shell will use a buffer of a single byte in its call to read() , i.e. it will sit in a tight loop reading character by character, returning control to the shell script after each one. The only issue with read -n is that it sets the terminal to "raw mode", which means that characters are sent as they are without any interpretation. For example, if you press Ctrl+D , you'll get a literal EOT character in your string. So we have to check for that. This also has the side-effect that the user will be unable to edit the line before submitting it to the script, for example by pressing Backspace , or by using Ctrl+W (to delete the previous word) or Ctrl+U (to delete to the beginning of the line). To make a long story short: The following is the final loop that your bash script needs to do to read a line of input, while at the same time allowing the user to interrupt the input at any time by pressing Ctrl+D :
while true; do
    line=''
    while IFS= read -r -N 1 ch; do
        case "$ch" in
            $'\04') got_eot=1 ;&
            $'\n')  break ;;
            *)      line="$line$ch" ;;
        esac
    done
    printf 'line: "%s"\n' "$line"
    if (( got_eot )); then
        break
    fi
done
Without going into too much detail about this: IFS= clears the IFS variable. Without this, we would not be able to read spaces. I use read -N instead of read -n , otherwise we wouldn't be able to detect newlines. The -r option to read enables us to read backslashes properly. The case statement acts on each read character ( $ch ).
If an EOT ( $'\04' ) is detected, it sets got_eot to 1 and then falls through to the break statement, which gets it out of the inner loop. If a newline ( $'\n' ) is detected, it just breaks out of the inner loop. Otherwise it adds the character to the end of the line variable. After the loop, the line is printed to standard output. This would be where you call your script or function that uses "$line" . If we got here by detecting an EOT, we exit the outermost loop. 1 You may test this by running cat >file in one terminal and tail -f file in another, and then enter a partial line into the cat and press Ctrl+D to see what happens in the output of tail . For ksh93 users: The loop above will read a carriage return character rather than a newline character in ksh93 , which means that the test for $'\n' will need to change to a test for $'\r' . The shell will also display these as ^M . To work around this:
stty_saved="$( stty -g )"
stty -echoctl
# the loop goes here, with $'\n' replaced by $'\r'
stty "$stty_saved"
You might also want to output a newline explicitly just before the break to get exactly the same behaviour as in bash . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316257",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189711/"
]
} |
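You can watch the read()-returns-zero behaviour described above directly; a small experiment (the exact buffer size in the trace will vary by cat implementation):
strace -e trace=read cat > /dev/null
# type "abc" and press Ctrl-D once:  read(0, "abc", 131072) = 3
# press Ctrl-D again:                read(0, "", 131072)    = 0   <- EOF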
316,270 | I have to extract the exception and corresponding stack trace starting from a line number in a log file. I know the starting line number of the error. How can I find out where the stack trace will end from the below example? Appreciate your help.
example
-------
2016-10-07 15:49:07,537 ERROR Some exception
 stacktrace line 1
 stacktrace line 2
 .
 .
 stacktrace line n
2016-10-07 15:49:07,539 debug blah blah blah
2016-10-07 15:49:07,540 debug blah blah blah | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316270",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194709/"
]
} |
316,349 | I have a requirement in my project to replace some existing text in a file, like foo , with some other text, like fooofoo . For example, abc.txt contains:
name
foo
foo1
So I tried: sed -i "s/foo/fooofoo/g" abc.txt However I get this error: sed: illegal option -- i I found in the manual that I have to use: sed -i\ "s/foo/fooofoo/g" abc.txt However this is not working either. I found alternatives in perl and awk also, but a solution in Solaris sed would be much appreciated. I am using this version of bash: GNU bash, version 3.2.57(1)-release (sparc-sun-solaris2.10) | Use ed . It's available on most platforms and it can edit your files in-place. Since sed is based on ed , the syntax for replacing patterns is similar:
ed -s infile <<\IN
,s/old/new/g
w
q
IN
 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/316349",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/195041/"
]
} |
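If ed is ever unavailable, the Perl route the asker mentioned is another in-place option; a minimal sketch, assuming perl is installed (it ships with stock Solaris 10):
perl -pi -e 's/foo/fooofoo/g' abc.txt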
316,381 | I'm on Ubuntu 16.04. Trying:
grep '.*' file1
Output:
file nu-mber o-ne
second string
Trying:
grep '.+' file1
Output is absent. Why is plus not working? | You need to tell grep you're using an extended regular expression: grep -E '.+' file1 The standard Basic Regular Expression (as used by grep without -E ) equivalent of the Extended Regular Expression + operator is \{1,\} though some implementations (like GNU's) also recognise \+ for that as an extension (and you can always use ..* ). (Note that in this particular case grep -E .+ is equivalent to grep -E . as you're looking for substrings matching the regex when not using the -x option. On many systems egrep is provided as an equivalent command to grep -E , but as Graeme points out this is obsolete .) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316381",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/195072/"
]
} |
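For completeness, the same one-or-more match written for plain grep (no -E); the last form is a GNU extension only:
grep '.\{1,\}' file1     # POSIX BRE interval, portable
grep '..*' file1         # one character followed by zero or more
grep '.\+' file1         # GNU grep extension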
316,401 | I know how to mount a drive that has a corresponding device file in /dev, but I don't know how to do this for a disk image that does not represent a physical device and does not have an analogue in /dev (e.g. an ISO file or a floppy image). I know I can do this in Mac OS X by double-clicking on the disk image's icon in Finder, which will mount the drive automatically, but I would like to be able to do this from the terminal. I'm not sure if there is a general Unix way of doing this, or if this is platform-specific. | On most modern GNU systems the mount command can handle that:
mount -o loop file.iso /mnt/dir
To unmount you can just use the umount command:
umount /mnt/dir
If your OS doesn't have this option you can create a loop device :
losetup -f                          # this will print the first available loop device, e.g. /dev/loop0
losetup /dev/loop0 /path/file.iso   # associate loop0 with the specified file
mount /dev/loop0 /mnt/dir           # it may be necessary to specify the type (-t iso9660)
To umount you can use -d :
umount /mnt/dir
losetup -d /dev/loop0
If the file has partitions, for example a HD image, you can use the -P parameter (depending on your OS); it will map the partitions in the file content:
losetup -P /dev/loop0 /path/file.iso   # will create /dev/loop0
ls /dev/loop0p*                        # the partitions in the format /dev/loop0pX | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/316401",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/188709/"
]
} |
316,425 | Historically speaking I know when I run the cc command or gcc my output generally always compiles to a.out unless I have a make file or use a particular flag on the compiler. But why a.out ? Why not c.out or c.run or any myriad of a million possibilities? | It is a historical artefact, so in other words a legacy throwback. Historically a.out stands for "assembler output" . a.out is now only the name of the file but before it was also the file format of the executable. The a.out executable format is nowadays uncommonly supported. The ELF format has wider use, but we still keep the old name for the default output of the C compiler. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316425",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8041/"
]
} |
316,467 | I'm working on a bash script. I have a var with a string in it. This is the string: tSA_15_20161014_11-12-50 Basically the format is tSA_(id)_(year)(month)(day)_(hour)-(min)-(seg) Important information:
The id can be a number from 0 to 999
The year format is yyyy
Month format mm
Day format dd
Hour format 24h
The string I want to get is something like: 20161014111250 Which is yyyymmddHHMMSS English is not my native language, so if there's something you can't understand please tell me. Thank you. | 
echo "tSA_15_20161014_11-12-50" | awk -F'_' '{print $3$4}' | tr -d -
(echo the variable in which the string is stored.) Explanation: awk -F'_' '{print $3$4}' changes the field separator to _ and prints the 3rd and 4th columns. The output is 2016101411-12-50 . tr -d - deletes - from the previous result. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316467",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/179831/"
]
} |
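A pure-shell alternative sketch using parameter expansion, assuming the fixed tSA_<id>_<date>_<time> layout from the question:
s='tSA_15_20161014_11-12-50'
t=${s#tSA_*_}        # drop "tSA_<id>_"   -> 20161014_11-12-50
t=${t//[_-]/}        # strip "_" and "-"  -> 20161014111250
echo "$t"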
316,471 | I am posting this question up mostly because after years (yes, years - cf. [this post][1], for example) of living with this frustration, I finally solved this yesterday and would like to spare others this frustration. I mainly use Firefox for my browsing needs, but at some point I ran into not being able to watch videos on twitter and found that Chrome could play them, except that I couldn't hear anything. Some investigation showed that it was due to the fact that these were HTML5 videos, and neither Firefox nor Chrome would produce sound on HTML5 videos (as tested on Youtube, for example) while flash on both worked fine. If you ran into the same problem and solved this some other way, please post your solution here. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316471",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85154/"
]
} |
316,486 | Is there a way to run a script on startup as a user on Debian 7?My script is screen -dmS name ./script.sh So essentially I want to run a script on startup that would make a screen window and run a script in it | You can faff about with an elaborate sudo command in /etc/rc.local , but the best way is to use the user's cron table. cron has some nifty scheduling keywords including @reboot which will run the defined script or command when the system is rebooted. So, as that user, run crontab -e , and add this line to the file: @reboot screen -dmS ScreenName /path/to/your/script.sh | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316486",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191662/"
]
} |
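To install that @reboot entry non-interactively (handy in provisioning scripts), a common idiom; the 2>/dev/null covers the case of a still-empty crontab:
( crontab -l 2>/dev/null; echo '@reboot screen -dmS name /path/to/script.sh' ) | crontab -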
316,517 | I am looking for a way to put zeros and burn myiso.iso in parallel. The command dd if=/dev/zero of=/dev/sdb && (sleep 1; dd if=myiso.iso of=/dev/sdb) should be ok since the speed at which zeros are written is inferior to the speed at which the iso is written. How would you verify that the iso is written only after zeros are written? | If you're trying to ensure the USB key only contains the image and the remaining space is all zeros, you could do this instead: cat myiso.iso /dev/zero > /dev/sdb There doesn't seem to be much point in writing all zeros and then the image on top... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316517",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189711/"
]
} |
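Note that the cat will end with a "No space left on device" write error once the stick fills up, which is expected here. A sketch for checking the result afterwards (GNU stat/cmp assumed):
cmp -n "$(stat -c %s myiso.iso)" myiso.iso /dev/sdb && echo "image written intact"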
316,556 | I want to find the most frequent words in a text file, using a stop-words list. I already have this code:
tr -c '[:alnum:]' '[\n*]' < test.txt |
fgrep -v -w -f /usr/share/groff/current/eign |
sort | uniq -c | sort -nr | head -10 > test.txt
from an old post, but my file contains something like this:
    240
     21 ipsum
     20 Lorem
     11 Textes
      9 Blindtexte
      7 Text
      5 F
      5 Blindtext
      4 Texte
      4 Buchstaben
The first one is just a space, and in the text they are punctuation marks (like points), but I don't want this, so what do I have to add? | Consider this test file:
$ cat text.txt
this file has "many" words, some
with punctuation. some repeat,
many do not.
To get a word count:
$ grep -oE '[[:alpha:]]+' text.txt | sort | uniq -c | sort -nr
      2 some
      2 many
      1 words
      1 with
      1 this
      1 repeat
      1 punctuation
      1 not
      1 has
      1 file
      1 do
How it works:
grep -oE '[[:alpha:]]+' text.txt
This returns all words, minus any spaces or punctuation, with one word per line.
sort
This sorts the words into alphabetical order.
uniq -c
This counts the number of times each word occurs. (For uniq to work, its input must be sorted.)
sort -nr
This sorts the output numerically so that the most frequent word is at the top.
Handling mixed case: Consider this mixed-case test file:
$ cat Text.txt
This file has "many" words, some
with punctuation. Some repeat,
many do not.
If we want to count some and Some as the same:
$ grep -oE '[[:alpha:]]+' Text.txt | sort -f | uniq -ic | sort -nr
      2 some
      2 many
      1 words
      1 with
      1 This
      1 repeat
      1 punctuation
      1 not
      1 has
      1 file
      1 do
Here, we added the -f option to sort so that it would ignore case and the -i option to uniq so that it also would ignore case.
Excluding stop words: Suppose that we want to exclude these stop words from the count:
$ cat stopwords
with
not
has
do
So, we add grep -v to eliminate these words:
$ grep -oE '[[:alpha:]]+' Text.txt | grep -vwFf stopwords | sort -f | uniq -ic | sort -nr
      2 some
      2 many
      1 words
      1 This
      1 repeat
      1 punctuation
      1 file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316556",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/195213/"
]
} |
316,586 | I want to replace the lines matching a pattern in one file with the lines, in order, from another file. For example, given file1.txt :
aaaaaa
bbbbbb
!! 1234
!! 4567
ccccc
ddddd
!! 1111
we would like to replace the lines starting with !! with the lines of this file, file2.txt :
first line
second line
third line
so the result should be:
aaaaaa
bbbbbb
first line
second line
ccccc
ddddd
third line | Easily done with awk :
awk '
  /^!!/{                    # for lines starting with `!!`
    getline <"file2.txt"    # read 1 line from the outer file into $0
  }
  1                         # alias for `print $0`
' file1.txt
Other version:
awk '
  NR == FNR{                # for lines in the first file
    S[NR] = $0              # put the line in array `S` with the row number as index
    next                    # restart the script from the beginning
  }
  /^!!/{                    # for lines starting with `!!`
    $0 = S[++count]         # replace the line with the corresponding array element
  }
  1                         # alias for `print $0`
' file2.txt file1.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316586",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46190/"
]
} |
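A quick check of the first variant against the sample files from the question:
$ awk '/^!!/{ getline <"file2.txt" } 1' file1.txt
aaaaaa
bbbbbb
first line
second line
ccccc
ddddd
third line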
316,657 | Create a new account called test :
$ sudo useradd test
test doesn't have a password right now. So
$ su test
doesn't work. If you try, you're asked for test 's password. It doesn't have one. That's not the same as its password being empty, so if you enter an empty password by pressing enter, you get the message "su: Authentication failure". The same is true if you switch to a tty and try to log in as test : an empty password isn't accepted. Now assign test an empty password:
$ sudo passwd -d test
You can now log in as test on a tty by providing the empty string as the password. However, if you try
$ su test
again, you still get the message "su: Authentication failure" and the session doesn't switch to the user test . Why is this? | From the output of ldd /bin/su , the su binary is compiled with the pam libraries ( libpam* ), so the authentication, account management, session initiation, etc. will be managed by pam . The following is how a typical Ubuntu system's su is managed by pam ; you should find a similar approach if you are using another distro. The pam rules for su are defined in the file /etc/pam.d/su . This file also includes the common-auth , common-passwd , common-session files from the same directory as common templates for covering the tasks their names suggest (and used in other pam-enabled services). On my system, at the bottom of /etc/pam.d/su I have:
@include common-auth
@include common-account
@include common-session
The preceding lines do not deal with the null-password checking; it is mainly the job of the pam_unix module. Now /etc/pam.d/common-auth has:
auth [success=1 default=ignore] pam_unix.so nullok_secure
From man pam_unix :
nullok
The default action of this module is to not permit the user access to a service if their official password is blank. The nullok argument overrides this default and allows any user with a blank password to access the service.
nullok_secure
The default action of this module is to not permit the user access to a service if their official password is blank. The nullok_secure argument overrides this default and allows any user with a blank password to access the service as long as the value of PAM_TTY is set to one of the values found in /etc/securetty.
As you can see, if the nullok_secure option is set, then unless the environment variable PAM_TTY is set in the mentioned manner, a user with a null password will not be permitted to log in using su . So to allow any user with a null password to do su , you need to pass the nullok argument to the pam_unix module:
auth [success=1 default=ignore] pam_unix.so nullok
This is insecure, as the common-auth file is used by many other services; even for only su this should not be done. (For the sake of testing you can set it once and then revert back to the original. Although if you want to do the test, it's better to incorporate all the logic in the /etc/pam.d/su file, and amend any changes afterwards, rather than messing with any common-* file.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316657",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147785/"
]
} |
316,671 | How can I remove the first few characters, like the leading ; , from the selected lines using commands? I switched into insert mode, but can't figure out how to do it.
;extension=php_bz2.dll
;extension=php_curl.dll
;extension=php_fileinfo.dll
;extension=php_ftp.dll
;extension=php_gd2.dll
;extension=php_gettext.dll
;extension=php_gmp.dll
;extension=php_intl.dll
;extension=php_imap.dll
;extension=php_interbase.dll
;extension=php_ldap.dll
;extension=php_mbstring.dll
;extension=php_exif.dll      ; Must be after mbstring as it depends on it
;extension=php_mysqli.dll
;extension=php_oci8_12c.dll  ; Use with Oracle Database 12c Instant Client
;extension=php_openssl.dll
;extension=php_pdo_firebird.dll | 
Place the cursor on the first or last ;
Press Ctrl + v to enter Visual Block mode
Use the arrow keys or j , k to select the ; characters you want to delete (or the other "first few characters")
Press x to delete them all at once | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/316671",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/195306/"
]
} |
316,765 | For example, in Ubuntu, there is always a .local directory in the home directory and .profile includes this line: PATH="$HOME/bin:$HOME/.local/bin:$PATH" $HOME/.local/bin does not exist by default, but if it is created it's already in $PATH and executables within can be found. This is not exactly mentioned in the XDG directory specification but seems derived from it. What I wonder is if this is common enough that it could be usually assumed to exist in the most common end user distributions. Is it, for instance, in all of the Debian derivatives, or at least the Ubuntu ones? How about the Red Hat/Fedora/CentOS ecosystem? And so on with Arch, SUSE, and what people are using nowadays. To be extra clear, this is only for $HOME/.local/bin , not $HOME/bin . Out of curiosity, feel free to include BSDs, OS/X and others if you have the information. :) | The ~/.local directories are part of the systemd file-hierarchy spec and are an extension of the xdg-user-dirs spec . It can be confusing, as Debian-derived packages for bash lost the ~/.local path when they rebased to Bash 4.3. They did have it in Bash 4.2. It is a bug , and a patch has been sitting in the Debian system for a bit now. This bug is the reason Ubuntu 16.04 had ~/.local in the path and Ubuntu 17.04 did not. If you run systemd-path as a user, you will see that it is intended to be in the path.
$ systemd-path user-binaries
/home/foo/.local/bin
In theory, the answer to your query is: any distro that uses systemd or wants to maintain compatibility with systemd. There is more information in file-hierarchy(7) . | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/316765",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193366/"
]
} |
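On systems hit by the packaging bug above, a portable guard in ~/.profile keeps the entry without duplicating it; a sketch:
case ":$PATH:" in
  *":$HOME/.local/bin:"*) ;;                 # already present
  *) PATH="$HOME/.local/bin:$PATH" ;;
esac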
316,771 | The job of my Unix executable file is to perform a long computation, andI added a interrupt/resume functionality to it as explained below. At regular intervals, the program writes all relevant data found so farin a checkpoint file, which can then be used as a starting point for a "resume"operation. To interrupt the program, I use Ctrl + C . The only problem with this methodology is that, if the interruption occurswhen the program is writing into the file, I am left with a useless half written file. The only fix I could find so far is as follows: make the program write into two files, so that at restart time one of them will be readable. Is there a cleaner, better way to create an "interruptable" Unix executable ? | It depends a bit on if you care only about the program itself crashing, or the whole system crashing. In the first case, you could write the fresh data to a new file, and then rename that to the real name only after you're done writing. That way the file will contain either the previous, or the new checkpoint data, but never only partial information. Though partial writes should be rare enough in any case, if we assume the checkpointing code itself is not likely to fail, and if relevant signals are trapped to make sure the program saves a new checkpoint in full before exiting. (In addition to SIGINT , I think you'd better catch SIGHUP and SIGTERM too.) If we consider the possibility of the whole system crashing, then I wouldn't trust only one checkpoint file. The data is not likely to actually be on the disk when system returns from the file write system call. Instead, the OS and the disk itself are likely to cache the data and actually write it some time later. So leaving one or two previous checkpoints would work as a failsafe against that. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316771",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133175/"
]
} |
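The write-then-rename idea from the first case, sketched in shell; save_state stands in for whatever emits the checkpoint data, and sync(1) with a file argument assumes GNU coreutils:
save_state > checkpoint.tmp &&
  sync checkpoint.tmp &&            # push the data to disk before renaming
  mv -f checkpoint.tmp checkpoint   # rename(2) is atomic within one filesystem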
316,816 | To make debugging an issue easier, I deleted /var/log/messages .... and it hasn't come back! I thought debug messages from kernel and modules like autofs were routed to this file. But even after I cycle autofs (which is set to verbose logging), this file is gone. How to recover? | You need to restart your system logger: service rsyslog restart (as root ). A reboot would also have the same effect, but it's rather overkill. Log messages go to a system logger, rsyslog on CentOS, and the logger writes to various files depending on its configuration (or even other loggers on remote systems). rsyslog opens /var/log/messages when it starts, and keeps it open; deleting /var/log/messages makes it disappear from the directory, but the file still exists and is usable by any program which had it open. So rsyslog continues logging to the deleted file... Restarting it causes it to re-open the file, re-creating it first in this case. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316816",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49155/"
]
} |
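Before restarting, you can confirm the deleted-but-open file (and even copy its contents back) via /proc; the descriptor number 5 below is only an example:
ls -l /proc/$(pidof rsyslogd)/fd | grep deleted
# ... 5 -> /var/log/messages (deleted)
cp /proc/$(pidof rsyslogd)/fd/5 /var/log/messages.recovered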
316,841 | I'm trying to learn more about the dig command and have come across the -x option which is meant for reverse lookup, i.e. you give an IP address and get back a domain name. I tried doing dig -x www.google.com which I guess doesn't really make sense with the -x option, but I got back this response:
; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> -x www.google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 2959
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;com.google.www.in-addr.arpa. IN PTR
;; AUTHORITY SECTION:
in-addr.arpa. 3180 IN SOA b.in-addr-servers.arpa. nstld.iana.org. 2015074802 1800 900 604800 3600
;; Query time: 0 msec
;; SERVER: 128.114.142.6#53(128.114.142.6)
;; WHEN: Sun Oct 16 17:06:24 PDT 2016
;; MSG SIZE rcvd: 124
Can anybody help me get a better understanding of what this response tells us? I thought you weren't supposed to use the -x option with a domain name. | Notice in the response that you got back status: NXDOMAIN and ANSWER: 0 . This means there was no record found matching your query. The -x option to dig is merely a convenience for constructing a PTR query. It splits on dots, reverses it, appends in-addr.arpa. , and sets the type to PTR . The information you did get back is the SOA record for the authoritative domain ( in-addr.arpa ), and is for result caching. Negative lookups (queries which have no results) can be cached for a duration as specified in the SOA record. See RFC-2308 : Name servers authoritative for a zone MUST include the SOA record of the zone in the authority section of the response when reporting an NXDOMAIN or indicating that no data of the requested type exists. This is required so that the response may be cached. The TTL of this record is set from the minimum of the MINIMUM field of the SOA record and the TTL of the SOA itself, and indicates how long a resolver may cache the negative answer. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316841",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193065/"
]
} |
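For contrast, -x with an actual address builds the PTR query as intended; for example (the name shown is what that resolver typically returns at the time of writing):
dig +short -x 8.8.8.8
# dns.google.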
316,844 | I'm trying to learn how to use crontab to regularly execute bash commands. I thought I'd try something simple first, like having crontab read the date and time to stdout every minute. But when I try to do it, this is what happens:
Nicholass-MacBook-Air-2:cron_test nick$ crontab 1 * * * * date
crontab: 1: No such file or directory
How do I get this to work? Thanks, Nick | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316844",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165209/"
]
} |
316,856 | I often see instructions that include vim or nano , meaning to open the file in that step in your text editor of choice. Is there an agnostic command I can use in place of the specific program that would open the input in the user's default in-terminal text editor, whether it's vim , nano , or something else? I see editor mentioned in the Similar Questions sidebar—is that still limited to Debian-based distros? And is there any alternative? | You can use $EDITOR , provided that it's been defined: $EDITOR filename.txt But I think most docs use nano because if someone's blindly following along, it's a safe bet to use. If the user has decided they actually prefer one editor over another, they'll know enough to replace it with vim , emacs , etc themselves. edit may work well on Debian-based systems, but on others it invokes ex , which isn't recommended. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316856",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42894/"
]
} |
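A slightly more defensive form covers users who set only VISUAL, or neither; a sketch:
"${VISUAL:-${EDITOR:-vi}}" filename.txt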
316,891 | I want to change a word in a .docx file using a shell command.I tried using the sed command, but it is not working.Does anyone know a solution for this? For example, I want to change a word (e.g. exp5 ) and replace that with another ( exp3 ) in the file exo.docx . | So, you want to replace things in a brand-specific format? At the first look it looks bad, but the new docx format is a bit better for that than the old doc format, because it's actually a ZIP file containing XML files. So the answer lies in unzipping it, then you'll have to rummage through the files and figure out on which one to call sed and zip it up again. Check out the file word/document.xml in the ZIP file. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/316891",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/195486/"
]
} |
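An end-to-end sketch for the example in the question, using Info-ZIP's unzip/zip; Word generally tolerates a plainly re-zipped archive, but keep a backup of the original:
mkdir work && cd work
unzip ../exo.docx
sed -i 's/exp5/exp3/g' word/document.xml
zip -r ../exo-new.docx .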
316,998 | I am using gnome 3.22.1 but the problem has existed since 3.18. Before that (I don't remember the exact version) I was able to switch the keyboard layout using xkb-switch , a simple application that uses X.org bindings under the hood. After 3.18, if you run xkb-switch, the keyboard layout won't be switched in gnome. Further investigation has shown that layout switching is working, but only for a very short amount of time. If you run this script:
for i in $(seq 1000); do
  lang=$(xkb-switch -s ru; xkb-switch)
  if [[ "$lang" == "ru" ]]; then echo $lang; fi
done
you will get from 3 to 20 "successful" layout switchings, depending on how lucky you are. After googling this problem I found the following advice:
gsettings set org.gnome.desktop.input-sources current 0
The setting is being changed, but the layout stays the same. I have found one "hacky" method to change the layout:
setxkbmap us,ru
setxkbmap ru,us
but the gnome shell isn't aware of that change, and shows the wrong language in the layout indicator. I've posted about this problem (sorry, not enough reputation: https://bbs.archlinux.org/viewtopic.php?pid=1657582 , https://github.com/ierton/xkb-switch/issues/15), but had no luck getting any good answers. And at this point I'm stuck. I'm not skilled enough to identify the problem in the gnome shell code. I'm not even sure it is its (gnome shell's) problem. What I want is a gnome-aware way to switch the keyboard layout from the terminal. Can someone point me in the right direction? Should I file this as a bug (especially the fact that the keyboard layout cannot be changed through gsettings)? | Since gnome-shell exposes a JS eval interface on DBus which has access to all variables, the feat is possible with the following command:
gdbus call --session --dest org.gnome.Shell \
  --object-path /org/gnome/Shell \
  --method org.gnome.Shell.Eval \
  "imports.ui.status.keyboard.getInputSourceManager().inputSources[0].activate()"
which will activate the 0th layout, and so forth. It is trivial to assign these commands to, say, underutilized 無変換 and 変換 on your Japanese keyboard. Credit. And this is how to switch to the last used input method (from comments):
gdbus call --session --dest org.gnome.Shell --object-path /org/gnome/Shell \
  --method org.gnome.Shell.Eval \
  "imports.ui.status.keyboard.getInputSourceManager()._mruSources[1].activate()" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/316998",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/195568/"
]
} |
317,064 | On RHEL7, systemd-journald takes over many of the responsibilities of what was once done by rsyslogd . Whether by bug or conflict between these two daemons, sometimes /dev/log will go missing. As a result, programs relying on the syslog(3) call will not function properly, including, for instance, logger . How can I restore the /dev/log socket? | Asking and answering my own question because Google was not very helpful on this one. Normally, with rsyslogd , the imuxsock module will create the /dev/log socket on its own, unlinking the previous entry before creating it. When rsyslogd is stopped (possibly because of a restart that fails due to a faulty configuration), rsyslogd removes /dev/log . However, the rsyslog supplied with RHEL7 is expected to be used in conjunction with systemd , and the imuxsock module will actually open and remove the /run/systemd/journal/syslog socket. Meanwhile, the /dev/log device is created by the system service-file systemd-journald.socket which triggers journald . Apparently, whether or not the $imjournal module is used, the following works. In sum, if /dev/log disappears:
restart systemd-journald.socket: systemctl restart systemd-journald.socket
then start rsyslogd: systemctl start rsyslogd
UPDATE: I believe systemctl restart rsyslogd might re-delete the socket if rsyslogd is already running. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/317064",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105631/"
]
} |
317,121 | I have a question about the * character at the end of a directory path in a bash script. I have a script that's supposed to automatically delete some archives from a server once they get old enough. The script is on machine A and I need to run it on machine B. I access both machines remotely through ssh (no sudo, just a regular user). One of the rules of the script is that it needs to only delete archives in the folders beginning with dirA/dirB/dirC/dirD/dirE* . However, there is no dirE in that location so I'm guessing the * stands for some variable. This is what I'd like to know: what does the * mean at the end of the directory path and what does it make the script do? | The * here is a "globbing character" and means "match 0 or more characters". To illustrate, consider this directory:
$ ls
dirA  dire  dirE  dirEa  dirEEE
$ echo dirE*
dirE dirEa dirEEE
As you can see above, the glob dirE* matches dirE , dirEa and dirEEE but not dirA or dire (*nix systems are case sensitive). So, in your script, that means it will delete archives from any directory in dirA/dirB/dirC/dirD/ whose name begins with dirE . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317121",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/195676/"
]
} |
317,126 | I need to be able to read data sequentially from a file while not storing the data that is being read in the page cache, as the file contents are not expected to ever be read again and also because there is memory pressure on the box (I want to use the precious memory for useful disk I/O caching). The question I have is about how I can optimize these reads. Since I know that the data being read is sequentially placed on the disk (minus the fragmentation), I want to be able to read ahead (by increasing /sys/block/sda/queue/read_ahead_kb) but am not sure if this will lead to any benefit, because I have to prevent the data being read from being stored in the page cache by using posix_fadvise (with the POSIX_FADV_DONTNEED flag). Will the read-ahead data be simply discarded because of the hint to drop the data from the page cache? | Use direct IO : Direct I/O is a feature of the file system whereby file reads and writes go directly from the applications to the storage device, bypassing the operating system read and write caches. Direct I/O is used only by applications (such as databases) that manage their own caches. An application invokes direct I/O by opening a file with the O_DIRECT flag. For example:
int fd = open( filename, O_RDONLY | O_DIRECT );
Direct IO on Linux is quirky and has some restrictions. The application IO buffer must be page-aligned, and some file systems require that each IO request be an exact multiple of the page size. That last restriction can make reading/writing the last portion of a file difficult. An easy-to-code way to handle readahead in your application can be done using fdopen and setting a large page-aligned buffer using posix_memalign and setvbuf :
// should really get page size using sysconf()
// but beware of systems with multiple page sizes
#define ALIGNMENT ( 4UL * 1024UL )
#define BUFSIZE ( 1024UL * 1024UL )

char *buffer;
...
int fd = open( filename, O_RDONLY | O_DIRECT );
FILE *file = fdopen( fd, "rb" );
int rc = posix_memalign( ( void ** ) &buffer, ALIGNMENT, BUFSIZE );
rc = setvbuf( file, buffer, _IOFBF, BUFSIZE );
You can also use mmap() to get anonymous memory to use for the buffer. That has the advantage of being naturally page-aligned:
...
char *buffer = mmap( NULL, BUFSIZE, PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0 );
rc = setvbuf( file, buffer, _IOFBF, BUFSIZE );
Then just use fread() / fgets() or any FILE * -type read function you want to read from the file stream. You do need to check, using a tool such as strace , that the actual read system calls are done with a page-aligned and page-sized buffer - some C library implementations of FILE * -based stream processing don't use the buffer specified by setvbuf for just IO buffering, so the alignment and size can be off. I don't think Linux/glibc does that, but if you don't check and the size and/or alignment is off, your IO calls will fail. And again - Linux direct IO can be quirky. Only some file systems support direct IO, and some of them are more particular than others. TEST this thoroughly if you decide to use it. The posted code will do a 1 MB read-ahead whenever the stream's buffer needs to be filled. You can also implement more sophisticated read-ahead using threads - one thread fills one buffer, other thread(s) read from a full buffer. That would avoid processing "stutters" as the read-ahead is done, but at the cost of a good amount of relatively complex multi-threaded code. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317126",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/195679/"
]
} |
317,134 | On Arch Linux on a MacBook Air 5.1 I get the error message
DMAR-IR: [Firmware Bug]: ioapic 2 has no mapping iommu, interrupt remapping will be disabled
when booting. I can't notice any problem, but what is this? Does it need to be fixed and if so, how? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317134",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134690/"
]
} |
317,139 | I am looking to calculate the time difference between the two values below:
Value1=2016-10-13 14:19:23
Value2=2016-10-13 18:19:23
I want to get the difference in the form of hours/minutes. Any quick solution available? | You can use date (assuming the GNU implementation) with command substitution, and to get the difference between the times use arithmetic expansion:
% Value1='2016-10-13 14:19:23'
% Value2='2016-10-13 18:19:23'
% echo "$(($(date -d "$Value2" '+%s') - $(date -d "$Value1" '+%s')))"
14400
The result is in seconds. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317139",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124631/"
]
} |
317,226 | On Linux, when you create a folder, it automatically creates two hard links to the corresponding inode. One is the folder you asked to create, the other being the . special entry inside this folder. Example:
$ mkdir folder
$ ls -li
total 0
124596048 drwxr-xr-x 2 fantattitude staff 68 18 oct 16:52 folder
$ ls -lai folder
total 0
124596048 drwxr-xr-x 2 fantattitude staff 68 18 oct 16:52 .
124593716 drwxr-xr-x 3 fantattitude staff 102 18 oct 16:52 ..
As you can see, both folder and the . inside folder have the same inode number (shown with the -i option). Is there any way to delete this special . hardlink? It's only for experimentation and curiosity. Also I guess the answer could apply to the .. special file as well. I tried to look into the rm man page but couldn't find any way to do it. When I try to remove . all I get is:
rm: "." and ".." may not be removed
I'm really curious about the whole way these things work, so don't refrain from being very verbose on the subject. EDIT: Maybe I wasn't clear with my post, but I want to understand the underlying mechanism which is responsible for . files and the reasons why they can't be deleted. I know the POSIX standard disallows a folder with less than 2 hardlinks, but I don't really get why. I want to know if it could be possible to do it anyway. | It is technically possible to delete . , at least on EXT4 filesystems. If you create a filesystem image in test.img , mount it and create a test folder, then unmount it again, you can edit it using debugfs :
debugfs -w test.img
cd test
unlink .
debugfs doesn't complain and dutifully deletes the . directory entry in the filesystem. The test directory is still usable, with one surprise:
sudo mount test.img /mnt/temp
cd /mnt/temp/test
ls
shows only .. so . really is gone. Yet cd . , ls . , pwd still behave as usual! I'd previously done this test using rmdir . , but that deletes the directory's inode ( huge thanks to BowlOfRed for pointing this out ), which leaves test a dangling directory entry and is the real reason for the problems encountered. In this scenario, the test folder then becomes unusable; after mounting the image, running ls produces
ls: cannot access '/mnt/test': Structure needs cleaning
and the kernel log shows
EXT4-fs error (device loop2): ext4_lookup:1606: inode #2: comm ls: deleted inode referenced: 38913
Running e2fsck in this situation on the image deletes the test directory entirely (the directory inode is gone so there's nothing to restore). All this shows that . exists as a specific entity in the EXT4 filesystem. I got the impression from the filesystem code in the kernel that it expects . and .. to exist, and warns if they don't (see namei.c ), but with the unlink . -based test I didn't see that warning. e2fsck doesn't like the missing . directory entry, and offers to fix it:
$ /sbin/e2fsck -f test.img
e2fsck 1.43.3 (04-Sep-2016)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Missing '.' in directory inode 30721.
Fix<y>?
This re-creates the . directory entry. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/317226",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21795/"
]
} |
317,234 | I have a server where a partition ( /var ) switched to read-only. So I tried to reproduce this problem on another server with the following command: mount -o remount,ro /var/ -f However, when I check our application log on that same partition I remounted read-only, I still see entries being added: tail -f /var/log/httpd/* CentOS 6.7 Apache: 2.2.15 uname -r : 2.6.32-573.7.1.el6.x86_64 | It is the correct behaviour. You used the -f flag, which means: -f , --fake : Causes everything to be done except for the actual system call; if it's not obvious, this ``fakes'' mounting the filesystem. This option is useful in conjunction with the -v flag to determine what the mount command is trying to do. It can also be used to add entries for devices that were mounted earlier with the -n option. The -f option checks for an existing record in /etc/mtab and fails when the record already exists (with a regular non- fake mount, this check is done by the kernel). See also Remount a busy disk to read-only mode . | {
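To perform the real remount, drop the -f (a minimal check, assuming nothing on /var is open for writing): mount -o remount,ro /var && touch /var/remount-test ; after a successful remount the touch fails with "Read-only file system". If the remount itself reports that the mount point is busy, lsof +D /var will show which processes still hold files open for writing.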
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317234",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110754/"
]
} |
317,253 | In lines (originating from a csv file) such as 14/Feb/2016:15:21:33-0500]http://map1.link.de/mk what is the easiest way to replace/delete ]http://map1.link.de/ and add , instead, keeping in mind that after map any number can come: map1, map2, map3. Example of a couple of lines: 14/Feb/2016:15:21:33-0500]http://map1.link.de/mk 14/Feb/2016:16:21:33-0500]http://map5.link.de/mk Final result: 14/Feb/2016:15:21:33-0500,mk 14/Feb/2016:16:21:33-0500,mk | A single sed substitution handles this; writing the host number as a digit class makes it cover map1, map5 or any other number: sed 's|]http://map[0-9]*\.link\.de/|,|' file.csv This matches the literal ] and the URL up to and including its final / , and replaces the whole match with a comma, giving exactly the final result shown above. | {
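To rewrite the file in place while keeping a backup of the original (GNU sed; the .bak suffix is just a convention): sed -i.bak 's|]http://map[0-9]*\.link\.de/|,|' file.csv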
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317253",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193066/"
]
} |
317,254 | Let's say I have the following command: search /home/user proc .h .c .txt ... I am building a script with the find command to get all files that start with a given name and end with one of the given extensions. I've managed to build this using a loop: directory=$1 fileName=$2 fileExtensions="" for arg in "$@" do #skipping first and second argument if [ $arg = $1 -o $arg = $2 ]; then continue; fi fileExtensions+="${arg//.}\|" done #removing the last '|', otherwise a regex parser error occurs fileExtensions=${fileExtensions::-1} find $directory -name "$fileName*" -regex ".*\.\($fileExtensions)" Is there a more elegant way of achieving this using regex? Thank you for your help! | You can drop the loop entirely: shift off the first two arguments, strip the leading dots, and join the remaining extensions with | for find 's extended-regex mode. A sketch, assuming bash and GNU find: directory=$1; fileName=$2; shift 2 exts=$(IFS='|'; echo "${*#.}") find "$directory" -regextype posix-extended -regex ".*/${fileName}[^/]*\.(${exts})" With search /home/user proc .h .c .txt this builds the regex .*/proc[^/]*\.(h|c|txt) ; since -regex matches against the whole path, it finds every file whose name starts with proc and ends in one of the listed extensions. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317254",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/195165/"
]
} |
317,282 | Gnome 3.22 uses wayland by default. Gnome on wayland does not read ~/.profile (or ~/.bash_profile or /etc/profile ). See https://bugzilla.gnome.org/show_bug.cgi?id=736660 . I have my initialization files set up as following: .bash_profile does nothing but source .profile and .bashrc .profile only sets environment variables like PATH and LC_MESSAGES .bashrc sets some bash specific settings and aliases and environment variables for applications like less and grep . The effect (before wayland) was following: when I login graphically .profile was read and environment variables like PATH and LC_MESSAGES were set. when I open bash inside a terminal emulator then .bashrc was read. when I login under a virtual terminal then .bash_profile was read which in turn reads .profile and .bashrc . when I login using ssh then behaviour is similar to virtual terminal. In all cases .profile and .bashrc were read and my environment was set up. So now gnome 3.22 uses wayland and wayland does not read .profile . How can I set up my initialization files so that I again have the effects as described above? Note that I do not insist that certain files (like .profile ) are read. What I want is to have my environment set up in a sensible way. That means I want to keep bash specific settings to the bash initialization files and other settings to other initialization files. Also I would like to not copy the settings over different files. I use arch linux. Answers for all distributions are welcome. When suggesting a workaround please also describe the side effects and the advantages and disadvantages. update november 2017: as far as i understand the gnome developers have acknowledged that people expect their login shell config files ( .profile and .bash_profile in case of bash) are sourced after login. regardless of text or graphical login. so my use case outlined above works again. still the gnome developers want to move away from starting a login shell. it seems that the direction they are going is to use environmentd from systemd: https://in.waw.pl/~zbyszek/blog/environmentd.html it seems that it will take a while until all login methods are adapted to environmentd. | Systemd version 233 (March 2017) added support for setting environment variables in ~/.config/environment.d/*.conf . See the environment.d man page and the discussion that led to the feature on this preliminary PR and this final one . | {
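A minimal example of the environment.d mechanism (the file name and values here are illustrative, not mandated): create ~/.config/environment.d/90-profile.conf containing plain KEY=VALUE lines such as LC_MESSAGES=en_US.UTF-8 and PATH=$HOME/bin:$PATH ( $VAR references are expanded, per the environment.d man page), then log out and back in; the variables then apply to the whole graphical session regardless of which shell you use.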
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/317282",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1170/"
]
} |
317,346 | I'm wondering how I can get timedatectl to show that NTP is enabled in CentOS 7: root@voip:~ $ timedatectl Local time: Tue 2016-10-18 20:58:23 EDT Universal time: Wed 2016-10-19 00:58:23 UTC RTC time: Wed 2016-10-19 00:58:23 Time zone: America/New_York (EDT, -0400) NTP enabled: no ##THIS LINE## NTP synchronized: yes RTC in local TZ: no DST active: yes Last DST change: DST began at Sun 2016-03-13 01:59:59 EST Sun 2016-03-13 03:00:00 EDT Next DST change: DST ends (the clock jumps one hour backwards) at Sun 2016-11-06 01:59:59 EDT Sun 2016-11-06 01:00:00 EST | Well, it's because of Chrony (RHEL 7's new NTP server). First: yum remove chrony Then: timedatectl set-ntp true Not sure what effect this has other than making me happy with timedatectl; everything else said it was working, but I have an XYMon script for NTP which greps that line. I suppose that's the point of the script: it was red because it was telling me something was wrong! | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317346",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130767/"
]
} |
317,366 | I have a CSV file from which I need to remove one column. The problem is that I have exported the CSV file without headers, so how can I remove the column? For example, if I have the file below, I want to remove the last column (which is boolean data) and save the result as a new file. input.csv 1,"data",100.00,TRUE 2,"code",91.8,TRUE 3,"analytics",100.00,TRUE output.csv 1,"data",100.00 2,"code",91.8 3,"analytics",100.00 | To remove the fourth column, $ cut -d, -f4 --complement example.csv > input.csv Adjust the -f option to match the column number. If the CSV file is more complicated, you could use some perl and the Text::CSV package, $ perl -MText::CSV -E '$csv = Text::CSV->new({binary=>1}); while ($row = $csv->getline(STDIN)) { print "$row->[0],$row->[1],$row->[2]\n" }' < example.csv > input.csv | {
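An awk alternative for simple data like the sample, where no field contains an embedded comma (lowering NF to drop the last field relies on GNU awk rebuilding the record): awk -F, -v OFS=, '{NF--; print}' input.csv > output.csv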
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/317366",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/192625/"
]
} |
317,432 | I would like a cleaner way to nest this command; if [ ! -d $BACKUPDIR ]; then mkdir -p $BACKUPDIR mkdir -p $BACKUPDIR/directories mkdir -p $BACKUPDIR/databases mkdir -p $BACKUPDIR/logselse :fi | With brace expansion, you could do just mkdir -p "$BACKUPDIR"/{directories,databases,logs} If you want to make sure the subdirectories exist, too, you can just run mkdir without the test. With -p it shouldn't complain about existing directories, and there will be no chance of the main directory $BACKUPDIR existing, but the subdirectories missing. (Of course, if BACKUPDIR is empty, this will (try to) create the subdirectories in the file system root directory. But I'll assume you've set BACKUPDIR to some value earlier in the script.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317432",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106673/"
]
} |
317,436 | When I connect to a remote host via mc's Shell link, I always start at that host's root directory. Is there a way to specify that I want to start in a different directory, e.g. the home directory of the user I connect as? If I just ssh to the host, I start at the home directory by default. | You can append the starting directory to the shell-link VFS path itself. From the mc command line (or in a saved hotlist entry), something like cd sh://user@host/home/user should open the panel directly in /home/user instead of / ; older mc releases spell the same path /#sh:user@host/home/user . As far as I know there is no option that makes shell links default to the remote user's home the way a plain ssh login does, so the directory has to be written out in the link. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317436",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54300/"
]
} |
317,448 | Suppose we declare test="/this/isjust/atestvariable/for/stringoperation" and we want to replace each instance of '/' with the colon ':'. Then, I think this command should work: echo ${test//\/:} (as ${variable//pattern/string} replaces all matches of the pattern with the specified string ) But, on running echo ${test//\/:} , I get the output as /this/isjust/atestvariable/for/stringoperation Where could I be going wrong? Thank you for your help. | Escape the slash with a backslash, and keep the second / that separates the pattern from the replacement string: echo ${test//\//:} In your attempt, \/: was parsed as the pattern /: with an empty replacement; that pattern matches nothing in the string, so it was printed unchanged. | {
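A quick check of the fixed expansion: test="/this/isjust/atestvariable/for/stringoperation"; echo "${test//\//:}" prints :this:isjust:atestvariable:for:stringoperation (the leading colon appears because the original string starts with a slash).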
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317448",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/195918/"
]
} |
317,458 | I want to rsync a folder locally, then remotely. It appears I'm getting stuck in a loop and filling up the local server. The original folder is 3.9GB and I have over 17GB on the local server. All I want to do is rsync the folder locally as a backup, then rsync the original folder to another backup server. I can't see where it's going wrong. Please help! Below is the entire script: #!/bin/bash # check that BACKUPDIR exists BACKUPDIR="/home/deploy/backups" if [ ! -d $BACKUPDIR ]; then mkdir -p $BACKUPDIR/{directories,databases,logs} else : fi # set time variable NOW=$(date +"%F_%H:%M") # year-month-day_hour:minute format # set logs LOCALLOG="$BACKUPDIR/logs/$NOW.webapps.log" exec 3>&1 4>&2 trap 'exec 2>&4 1>&3' 0 1 2 3 exec 1>$LOCALLOG 2>&1 # Everything below will go to the file 'webapps.log' # Remove files older than 7 days find $BACKUPDIR/{directories,databases,logs} -mtime +8 -exec rm {} \; # set directory variables LOCALDIR="$BACKUPDIR/directories" BKUP_SERV="[email protected]" BKUP_DIR="/home/deploy/backups/$HOSTNAME/directories" BKUP_LOG="/home/deploy/backups/$HOSTNAME/logs" DJANGODIR="/usr/local/django" WEBAPPSDIR="/webapps" # set output variables WEBAPPS_YES="SUCCESSFULL sync of webapps folder" WEBAPPS_NO="FAILED to sync webapps folder" RSYNC_YES="SUCCESSFULL rsync to log file" RSYNC_NO="FAILED to rsync log file" # check webapps or django folder to rsync if [ ! -d "$WEBAPPSDIR" ]; then rsync -avh "$DJANGODIR" "$LOCALDIR" else rsync -avh "$WEBAPPSDIR" "$LOCALDIR" fi RESULT1="$?" # Outputs whether the rsync was successful or not if [ "$RESULT1" != "0" ]; then echo -e "EXIT Code:" $RESULT1 "\n$WEBAPPS_NO" else echo "$WEBAPPS_YES" fi # check webapps or django folder to rsync if [ ! -d "$WEBAPPSDIR" ]; then rsync -azvPh "$DJANGODIR" -e ssh "$BKUP_SERV":"$BKUP_DIR" else rsync -avzPh "$WEBAPPSDIR" -e ssh "$BKUP_SERV":"$BKUP_DIR" fi RESULT2="$?" # Outputs whether the rsync was successful or not if [ "$RESULT2" != "0" ]; then echo -e "EXIT Code:" $RESULT2 "\n$RSYNC_NO" else echo "$RSYNC_YES" fi # Command to rsync 'webapps.log' rsync -azvPh "$LOCALLOG" -e ssh "$BKUP_SERV":"$BKUP_LOG" RESULT3="$?" # Outputs whether the rsync was successful or not if [ "$RESULT3" != "0" ]; then echo -e "EXIT Code:" $RESULT3 "\n$RSYNC_NO" else echo "$RSYNC_YES" fi | There is no actual loop here; two details of the script let the local copy grow without bound. First, none of the rsync calls use --delete , so every file that is ever renamed or removed under /webapps stays in $BACKUPDIR/directories forever; the backup only ever accumulates. Second, the cleanup line find $BACKUPDIR/{directories,databases,logs} -mtime +8 -exec rm {} \; uses plain rm , which cannot remove directories, so none of the synced directory trees are ever pruned (it just prints errors into your log). A sketch of a fix, assuming you want a mirror plus age-based log cleanup: use rsync -avh --delete "$WEBAPPSDIR" "$LOCALDIR" for the copies, and find "$BACKUPDIR/logs" -type f -mtime +8 -delete for the old logs. Run du -shx $BACKUPDIR/directories/* afterwards to confirm which tree was holding the extra space. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317458",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106673/"
]
} |
317,479 | Currently my prompt is: xiaobai@dnxb:/tmp$ root@dnxb:/tmp# My desired prompt: xb@dnxb:/tmp$ rt@dnxb:/tmp# My idea is an alias of the username, so I tried reusing the same uid to create a new user: xiaobai@dnxb:~$ sudo useradd -ou 1000 -g1000 -d /home/xiaobai -s /bin/bash xb xiaobai@dnxb:~$ su Password: root@dnxb:/home/xiaobai# passwd xb Enter new UNIX password: Retype new UNIX password: passwd: password updated successfully root@dnxb:/home/xiaobai# exit xiaobai@dnxb:~$ su xb Password: xiaobai@dnxb:~$ pwd /home/xiaobai xiaobai@dnxb:~$ PS1='\u:\W\$ ' xiaobai:~$ exit xiaobai@dnxb:~$ id uid=1000(xiaobai) gid=1000(xiaobai) groups=1000(xiaobai),27(sudo) xiaobai@dnxb:~$ The \u in PS1 still doesn't change to xb. Can it be done, and if so, how? | What's wrong with setting it manually? PS1="xb@\h:\w\$ " | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317479",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64403/"
]
} |
317,482 | I have a simple bash script; I basically want to make sure that a file exists on a remote machine. I've found numerous examples of how to do this, but the missing component is how to do this with spaces in the path and/or filename being evaluated. #!/bin/bash HOST=server.local DIR=/foo/bar FILE="Foo Bar File With Spaces" if ssh $HOST [[ -f ${DIR}/${FILE} ]] then echo "The file exists" else echo "The file doesn't exist." fi So, it fails. It gives me a syntax error in conditional expression. However, if I change the FILE variable to, say: FILE="Foo\ Bar\ File\ With\ Spaces" the script works (it finds the file since it's there). I have tried the following variations of my conditional expression: if ssh $HOST [[ -f "${DIR}/${FILE}" ]] and if ssh $HOST [[ -f "${DIR}"/"${FILE}" ]] Neither of which works; I know that I am missing something simple. Can someone please point me in the right direction? | Add an extra pair of quotes so there's one for the local shell and one for the remote shell that ssh runs. $ dir=/tmp; file="foo bar"; $ ssh somewhere ls -l "'$dir/$file'" -rw-r--r-- 1 foo foo 4194304 Oct 19 18:05 /tmp/foo bar $ ssh somewhere [[ -f "'$dir/$file'" ]] ; echo $? 0 You want the double-quotes on the outside so that the local shell expands the variables before the ssh command runs. With a single-quote on the inside, the remote shell won't expand special characters further. Unless the file name contains single-quotes, that is. In which case you'll run into trouble. To work around that, you'll need something that adds the necessary escapes to the string. Some systems have printf "%q" , and new versions of Bash have the ${var@Q} expansion, which should do something similar. $ dir=/tmp; file="\$foo' bar" $ fullpath="$(printf "%q" "$dir/$file")" $ ssh somewhere ls -l "$fullpath" -rw-r--r-- 1 foo foo 0 Oct 19 18:45 /tmp/$foo' bar | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317482",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/107777/"
]
} |
317,491 | How can I rewrite the following command with ProxyCommand ? ssh -l username1 -t jumphost1 \ssh -l username2 -t jumphost2 \ssh -l username3 -t jumphost3 \ssh -l username4 server This doesn't work ssh -o ProxyCommand="\ssh -l username1 -t jumphost1 \ssh -l username2 -t jumphost2 \ssh -l username3 -t jumphost3" \ -l username4 serverusername1@jumphost1's password:Pseudo-terminal will not be allocated because stdin is not a terminal.Permission denied, please try again.Permission denied, please try again.Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).ssh_exchange_identification: Connection closed by remote host I'm aware of its use with nc , but I'm searching for way to use it with 3+ hops, and also use this option with scp . I checked ssh_config man page, but the information is quite scarce, for me at least. EDIT I tried using ProxyCommand nested in another ProxyCommand as suggested below but I always get something along the following lines debug3: ssh_init_stdio_forwarding: 192.17.2.2:2222debug1: channel_connect_stdio_fwd 192.17.2.2:2222debug1: channel 0: new [stdio-forward]debug2: fd 4 setting O_NONBLOCKdebug2: fd 5 setting O_NONBLOCKdebug1: getpeername failed: Bad file descriptordebug3: send packet: type 90debug2: fd 3 setting TCP_NODELAYdebug3: ssh_packet_set_tos: set IP_TOS 0x10debug1: Requesting [email protected]: send packet: type 80debug1: Entering interactive session. Fortunately, since 7.3 -J or ProxyJump serves my purpose — although I still to have to work around my keys setup. ssh -q -J user1@jumphost1,user2@jumphost2,user3@jumphost3 user@server | The nc version is not recommended anymore. Use the -W switch, which is provided in recent versions of OpenSSH. Also, you don't need to copy the config to other hosts! All of the config needs to be done on your host and it does not interfere with the scp in any way. Just create a file ~/.ssh/config with: Host jumphost1 User username1Host jumphost2 User username2 ProxyCommand ssh -W %h:%p jumphost1Host jumphost3 User username3 ProxyCommand ssh -W %h:%p jumphost2Host server User username4 ProxyCommand ssh -W %h:%p jumphost3 And then connect using ssh server or use scp file server:path/ . If you insist on oneliner (or not sure what you mean about ProxyCommand nesting), then as already pointed out, it is hell of escapes: ssh -oProxyCommand= \ 'ssh -W %h:%p -oProxyCommand= \ \'ssh -W %h:%p -oProxyCommand= \ \\\'ssh -W %h:%p username1@jumphost1\\\' \ username2@jumphost2\' \ username3@jumphost3' \username4@server You basically need to go from inside. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/317491",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36191/"
]
} |
317,535 | There are files A, B and C. I want to concatenate files A, B and C (skipping the first line of C) and then send them to myProgram as input. How can I write this in a shell script? I wrote: cat A > file echo >> file # want to start all contents on a new line cat B >> file tail -n+2 C >> file ./myProgram < file But I have no idea how to concatenate them and send them to a program without generating a file. | Try this: { cat A ; echo; cat B ; awk 'NR>1' C ; } | myProgram Putting commands inside curly brackets is grouping . Note that the ; before } is mandatory if there's no newline to finish the grouping. And no need for another subshell here ;) | {
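If you prefer to keep the redirection shape of the original script, process substitution does the same job without a temporary file (bash or zsh, not plain sh): ./myProgram < <(cat A; echo; cat B; tail -n +2 C)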
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317535",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/195987/"
]
} |
317,576 | I know how I can search and replace a string, either globally or in a selected area. But I have the habit of putting the cursor on a word and hitting * to search for that word. This doesn't just search for the word's text; it also excludes other strings of which the searched word is merely a substring. So the search looks like \<word\> instead of just word . Now I'd like to be able to hit * on a word, and then replace all of the already searched and highlighted occurrences of that string, without having to enter it again for the search & replace command. Is there a good way of doing that? | Sample text: cat concatenatescat dog and cat Say * is pressed on the first line; it will search for the pattern \<cat\> When the search string is left empty during search and replace, the last matched pattern is reused. So doing :%s//CAT/g will result in CAT concatenatescat dog and CAT From :h :substitute If the {pattern} for the substitute command is empty, the command uses the pattern from the last substitute or :global command. If there is none, but there is a previous search pattern, that one is used. With the [r] flag, the command uses the pattern from the last substitute, :global , or search command. To change the behavior of * and # for visually selected text so they search for the selected text itself instead of the whole word: vnoremap * y/<C-R>"<CR> vnoremap # y?<C-R>"<CR> | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317576",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32436/"
]
} |
317,687 | I have a function in my ~/.zshrc : findPort() { lsof -t -i :$1} The usual invocation is findPort 3306 . I want to run it with elevated privileges. But I get "command not found". ➜ git sudo findPort 3306sudo: findPort: command not found I presume the reason is that the root user either runs as a non-interactive shell (thus does not refer to a .zshrc), or refers to a different .zshrc . I have seen similar questions regarding alias , but no question regarding user-defined functions. The answers for this problem regarding alias involves adding an alias to ~/.zshrc : alias sudo='nocorrect sudo ' Or perhaps: alias sudo='sudo ' I have tried both of these solutions, and the problem still exists (yes I've relaunched the shell). I have also tried running sudo chsh to ensure that my root shell runs under zsh . None of these solutions removes the "command not found" problem. Is there a way to run my user-defined functions under sudo? | sudo runs commands directly, not via a shell, and even if it ran a shell to run that command, it would be a new shell invocation, and not one that reads your ~/.zshrc (even if it started an interactive shell, it would probably read root 's ~/.zshrc , not yours unless you've configured sudo to not reset the $HOME variable). Here, you'd need to tell sudo to start a new zsh shell, and tell that zsh to read your ~/.zshrc before running that function: sudo zsh -c '. $0; "$@"' ~/.zshrc findPort 3306 Or: sudo zsh -c '. $0; findPort 3306' ~/.zshrc Or to share your current zsh functions with the new zsh invoked by sudo : sudo zsh -c "$(functions); findPort 3306" Though you might get an arg list too long error if you have a lot of functions defined (like when using the completion system). So you may want to limit it to the findPort function (and every other function it relies on if any): sudo zsh -c "$(functions findPort); findPort 3306" You could also do: sudo zsh -c "(){$functions[findPort]} 3306" To embed the code of the findPort function in an anonymous function to which you pass the 3306 argument. Or even: sudo zsh -c "$functions[findPort]" findPort 3306 (the inline script passed to -c is the body of the function). You could use a helper function like: zsudo() sudo zsh -c "$functions[$1]" "$@" Do not use: sdo() { sudo zsh -c "(){$functions[$1]} ${@:2}" } As the arguments of sdo would undergo another level of shell parsing. Compare: $ e() echo "$@"$ sdo e 'tname;uname'tnameLinux$ zsudo e 'tname;uname'tname;uname | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/317687",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191304/"
]
} |
317,695 | I discovered something funny today. I have Kali Linux and I am trying to fully update the system using the repo http://http.kali.org/kali . All is good and well until I get 403 denied for backdoor-factory and mimikatz. At first I thought it was a server configuration error and so ignored it, but then I got curious and decided to pop the URLs into Firefox. Sure enough, my university blocks these specific URLs, but not anything else in the repo. I decided to check whether I could load the URLs over https (yes, I knew it was a long shot, as most (afaik) APT servers don't support https at all) and found out it does work, but only when accepting the certificate for archive-8.kali.org (yes, I know invalid certs aren't good, but I figured if it is using GPG to check the validity and it uses http with no encryption anyway, then why not). Also, I know I can just use https://archive-8.kali.org/kali in place of the old url and have done so, but the reason I am asking about accepting invalid certs is in case this workaround of just switching domains stops being possible. | You can configure certain parameters for the HTTPS transport in /etc/apt/apt.conf.d/ — see man apt.conf (section "THE ACQUIRE GROUP", subsection "https") for details. There is also a helpful example over at the trusted-apt project. For example, you can disable certificate checking completely: // Do not verify peer certificate Acquire::https::Verify-Peer "false"; // Do not verify that certificate name matches server name Acquire::https::Verify-Host "false"; … or just for a specific host: Acquire::https::repo.domain.tld::Verify-Peer "false"; Acquire::https::repo.domain.tld::Verify-Host "false"; These options should be placed in a newly created file in /etc/apt/apt.conf.d/ so they won't interfere with options installed by official packages (which will create separate files of their own). The filename determines the order in which the option files are parsed, so you'll probably want to choose a rather high number to have your options parsed after the ones installed by other packages. Try 80ssl-exceptions , for example. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/317695",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/181269/"
]
} |
317,721 | A common scenario is having a zip file in a directory with other work: me@work ~/my_working_folder $ ls downloaded.zip workfile1 workfile2 workfile3 I want to unzip downloaded.zip , but I don't know if it will make a mess or if it nicely creates its own directory. My ongoing workaround is to create a temporary folder and unzip it there: me@work ~/my_working_folder $ mkdir temp && cp downloaded.zip temp && cd temp me@work ~/my_working_folder/temp $ ls downloaded.zip me@work ~/my_working_folder/temp $ unzip downloaded.zip Archive: downloaded.zip creating: nice_folder/ This prevents my_working_folder from being populated with lots of zip file contents. My question is: is there a better way to determine if a zip file contains only one folder before unzipping? | From the manual... [-d exdir] An optional directory to which to extract files. By default, all files and subdirectories are recreated in the current directory; the -d option allows extraction in an arbitrary directory (always assuming one has permission to write to the directory). This option need not appear at the end of the command line; it is also accepted before the zipfile specification (with the normal options), immediately after the zipfile specification, or between the file(s) and the -x option. The option and directory may be concatenated without any white space between them, but note that this may cause normal shell behavior to be suppressed. In particular, -d ~ (tilde) is expanded by Unix C shells into the name of the user's home directory, but -d~ is treated as a literal subdirectory ~ of the current directory. So... unzip -d new_dir zipfile.zip This creates a directory, new_dir, and extracts the archive within it, which avoids the potential mess every time, even without looking first. It is also very useful to look at man unzip . | {
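To actually peek before extracting, zipinfo (shipped with the unzip package) lists the archive contents without unpacking; counting the distinct top-level entries answers the "one folder or a mess?" question: zipinfo -1 downloaded.zip | cut -d/ -f1 | sort -u If that prints a single directory name, the zip is self-contained and safe to extract in place.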
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317721",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99989/"
]
} |
317,734 | I have an issue with my terminal prompt line. When the line is too long it wraps onto the same line, and then the Up arrow makes it look even worse. I have already checked Terminal prompt not wrapping correctly, but it looks like I am apparently closing all the square brackets for non-printable characters. This is my PS1: \[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot) }\[\033[01;36m\]\u@\h\[\033[00m\]\033[01;34m\]\w\033[00m\][$(type __git_ps1 >/dev/null 2>&1 && __git_ps1 "(%s)")] Consider this as my standard prompt line MELISC@work~/dev/bin_tools[((main))] I was able to get assdasdasdasdasdadasdsadadasdaddasdadadasdadsadasdsa((main))] asdsadsadsadsadasdasdassdasdasdassdasdassdasdasdasdasdasdasdsadsad I have already checked my .bashrc , and shopt -s checkwinsize is set, which should make bash re-check the columns. | You've completely banjanxed the Bourne Again shell's idea of what's been printed and what it has to erase/rewrite as it displays command history and lets you edit the command line. Breaking your prompt down into sections: \[\e]0;\u@\h: \w\a\] — non-printing characters, properly enclosed ${debian_chroot:+($debian_chroot) } — printing characters only, presumably \[\033[01;36m\] — non-printing characters, properly enclosed \u@\h — printing characters only \[\033[00m\] — non-printing characters, properly enclosed \033[01;34m\] — non-printing characters, improperly enclosed, so the Bourne Again shell does not know that they are non-printing \w\033[00m\] — an erroneous mixture of printing and non-printing characters [$(type __git_ps1 >/dev/null 2>&1 && __git_ps1 "(%s)")] — printing characters only, presumably I've given this advice before , but it is general advice that applies here as well: Use either \e or \033 consistently, for your own sanity. Make your \[ and \] strictly matching non-nesting pairs. Make sure that all non-printing sequences are within \[ and \] (and, conversely, that all printing sequences are not). (This is why I personally prefer the Z Shell and its alternative prompt expansion mechanism for when I want wacky coloured prompts. It knows that things like %F{green} aren't printing sequences, without having to be told; and it also works out the correct escape sequences from terminfo , without having them hardwired.) | {
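Applying those three rules to the prompt from the question, only sections 6 and 7 need repair (each gains its missing \[ \] pair); the corrected prompt would be: PS1='\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot) }\[\033[01;36m\]\u@\h\[\033[00m\]\[\033[01;34m\]\w\[\033[00m\][$(type __git_ps1 >/dev/null 2>&1 && __git_ps1 "(%s)")]'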
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/317734",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/196156/"
]
} |
317,814 | I had a 32GB SD Card with this structure (or very close): luis@Fresoncio:~$ sudo fdisk -lDisk /dev/mmcblk0: 29.2 GiB, 31393316864 bytes, 61315072 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: dosDisk identifier: 0xec4e4f57Device Boot Start End Sectors Size Id Type/dev/mmcblk0p1 1 125000 125000 61M c W95 FAT32 (LBA)/dev/mmcblk0p2 125001 33292287 33167287 15.8G 83 Linux/dev/mmcblk0p3 33292288 61315071 28022784 13.4G 83 Linux And I transferred (from another computer, so the devices where sda and sdb ) it to a (I choose the wrong one) 64GB SD Card via dd ( dcfldd , in fact): # dcfldd if=/dev/sda of=/dev/sdb bs=1M So now, my new 64GB SD Card is: luis@Fresoncio:~$ sudo fdisk -lDisk /dev/mmcblk0: 59.5 GiB, 63864569856 bytes, 124735488 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: dosDisk identifier: 0xec4e4f57Device Boot Start End Sectors Size Id Type/dev/mmcblk0p1 1 125000 125000 61M c W95 FAT32 (LBA)/dev/mmcblk0p2 125001 33292287 33167287 15.8G 83 Linux/dev/mmcblk0p3 33292288 61315071 28022784 13.4G 83 Linux Well, no problem for now, but now I don't have the source 32 GB SD Card anymore, only the 64GB SD Card remains, and I would like to transfer it to some empty 32 GB SD Card again. In this case, I assume I can not use dd or dcfldd What may I do?Can I use dd or dcfldd ? What could happen when the transfer arrives to the 32 GB boundary on the destination SD Card (data integrity problems)? Further notes : Any other method to clone the SD cards would be OK, but I have a problem: this case scenario is some SD card boot drive for a Raspberry Pi 2 , and cloning via partimage or gparted did not work (the Raspberry does not boot). Only dd seems to do the cloning without flaws. Similar question, but I think not the same. The dcfldd tool has the same syntax and behavior as dd . It just gives more info (progress... etc). Here is the man page . | Assuming sda is your 64GB source SD card and sdb is your 32GB destination SD card.You can limit dd to only copy the number of required sectors with: dd if=/dev/sda of=/dev/sdb bs=512 count=61315072 | {
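To double-check such a copy afterwards, it is enough to compare only the used region, since just the first 61315072 sectors carry data (GNU cmp): sudo cmp -n $((61315072 * 512)) /dev/sda /dev/sdb && echo 'copy verified'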
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/317814",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57439/"
]
} |
317,822 | I am trying to go through individual emails and retrieve the host name. Each email has a To: section with an email address " [email protected] ". I'm trying to retrieve just " aol.com ". Eg: To: [email protected] (abc123) To: [email protected],hk (Jim) To: [email protected]\ (Jim) Expected output: aol.com yahoo.com,hk yahoo.com\ | sed can capture everything between the @ and the next blank: sed -n 's/^To:[^@]*@\([^ ]*\).*/\1/p' mailfile For the sample input this prints aol.com , yahoo.com,hk and yahoo.com\ in turn. The class [^ ]* deliberately runs up to the first space, so the trailing ,hk and \ are kept, exactly as in your expected output; the -n together with the p flag makes sed print only the To: lines that actually matched. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/317822",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/196214/"
]
} |
317,836 | I have two files, file1 and file2. file1: r11_abc_gkhsa 1.0 1.5 1.9 r11_bcd_gkhsa 1.0 1.5 1.7 r11_acd_gkhsa 1.3 1.6 1.5 r11_xyz_gkhsa 1.0 1.5 1.9 file2: sd1_bcd_gkhsa 1.8 1.5 1.9 ab1_abc_gkhsa 1.6 1.4 1.5 sfs_xyz_gkhsa 1.4 1.6 1.4 sd1_acd_gkhsa 1.2 1.3 1.5 sfs_ryb_gkhsa 1.5 1.2 1.7 I want to match "abc", "bcd", "acd" and "xyz" of file1 with file2. Whenever one matches in file2 I want to print the pair in the following way. Output: r11_abc_gkhsa 1.0 1.5 1.9 ab1_abc_gkhsa 1.6 1.4 1.5 r11_bcd_gkhsa 1.0 1.5 1.7 sd1_bcd_gkhsa 1.8 1.5 1.9 r11_acd_gkhsa 1.3 1.6 1.5 sd1_acd_gkhsa 1.2 1.3 1.5 r11_xyz_gkhsa 1.0 1.5 1.9 sfs_xyz_gkhsa 1.4 1.6 1.4 sfs_ryb_gkhsa 1.5 1.2 1.7 I can use Perl or sed. Can someone give me ideas to work on it? | awk handles this nicely if you key on the middle _ -separated field. A sketch: read file2 into an array first, then walk file1 in order, and print whatever is left of file2 at the end: awk -F_ 'NR==FNR{b[$2]=$0; next} {if ($2 in b) {print $0, b[$2]; delete b[$2]} else print} END{for (k in b) print b[k]}' file2 file1 This prints each file1 line followed by its file2 partner on the same line, then the unmatched file2 lines (such as sfs_ryb_gkhsa ) afterwards; note that the order of those leftover lines is unspecified. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/317836",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/196233/"
]
} |
317,853 | Say I do this: tar cvf allfiles.tar $(<mylist.txt) and mylist.txt contains this: /tmp/lib1/tmp/path2/lib2/path3/lib3 How to tar the files so the tarball contains just lib1lib2lib3 with no directory structure at all? The -C of --directory options are usually recommended ( ref , ref ), but there are multiple directories so this doesn't work. Also, --xform looks like it requires a fixed pattern ( ref ) which we do not have. | The --xform argument takes any number of sed substitute expressions, which are very powerful. In your case use a pattern that matches everything until the last / and replace it with nothing: tar cvf allfiles.tar --xform='s|.*/||' $(<mylist.txt) Add --show-transformed-names to see the new names. Note, this substitution applies to all filenames, not just those given on the command line, so, for example, if you have a file /a/b/c and your list just specifies /a , then the final filename is just c , not b/c . You can always be more explicit and provide an exact list of substitutions, eg in your case --xform='s|^tmp/path2/||;s|^tmp/||;s|^path3/||' Note, the initial / will be removed by tar (unless you use -P ) so the above expressions are missing it. Also, the list of directories has to be sorted so the longest match is done first, else tmp/path2/ won't match as tmp/ has already been removed. But you can automate the creation of this list, eg: --xform="$(sed <mylist.txt 's|[^/]*$||; s|^/||; s:.*:s|^&||;:' | sort | tr -d '\n')" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317853",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152084/"
]
} |
317,859 | We have a CUPS server running on Ubuntu 14.04 which has about 10 HP printers configured. We have a mix of Windows, Linux and Mac clients printing to the server flawlessly. Flawlessly, that is, until some of the Macs were upgraded to Sierra recently. Now, if a user tries to print via the server, the client behaves normally, as does cups. The job is processed through the server and the job log shows the job printed ok. However, nothing emerges from the printer. The Mac clients can print directly to the printer using AirPrint. We had an older version of cups, so we built a new 16.04 server with the latest cups; same result. We can print directly to the printer from the print server on port 9100, so that part is working ok. Is it safe to assume the problem does not lie with cups, but rather with macOS 10.12? Anyone have any troubleshooting ideas? | Hard to say definitively without the server logs, but this pattern (the server reports the job as printed, nothing comes out, yet direct AirPrint from the Mac works) usually points at the format the Sierra clients submit rather than at cupsd itself; 10.12 changed how shared queues are auto-configured and tends to send AirPrint-style data, which a queue set up with a classic driver may pass through unconverted, and the printer then silently discards it. Two things worth trying: set LogLevel debug in cupsd.conf and watch /var/log/cups/error_log while a Sierra client prints, to see what MIME type arrives and which filters (if any) run; and re-add the printer on one Sierra client explicitly as an IPP printer pointing at the server queue ( ipp://server:631/printers/queuename ) with a proper driver selected, rather than letting it auto-configure via Bonjour. If jobs submitted that way print, the problem is on the client/driver side and not in CUPS. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317859",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123597/"
]
} |
317,864 | So, I got a new battery for my laptop, and right from the get go, I'm having problems. The battery came almost entirely discharged from the manufacturer. I plugged it into my computer and the batter would report that the charge rate is 0, and hence would never reach "fully charged" After a couple minutes and having X11 crash, I'm now at this ( upower -i /org/freedesktop/UPower/devices/battery_BAT0 ): native-path: BAT0 vendor: Hewlett-Packard model: Primary power supply: yes updated: Fri 21 Oct 2016 08:28:33 AM CEST (106 seconds ago) has history: yes has statistics: yes battery present: yes rechargeable: yes state: charging warning-level: none energy: 17.8704 Wh energy-empty: 0 Wh energy-full: 24.192 Wh energy-full-design: 95.04 Wh energy-rate: 0.0996923 W voltage: 15.947 V percentage: 73% capacity: 25.4545% technology: lithium-ion icon-name: 'battery-full-charging-symbolic' So the battery is charging, the energy capacity is only about a quarter of what it was designed (even though the battery is only a couple days old), it sits at 73%, and the charge rate is so small, it doesn't even report how much it'd take till fully charged. Now, I know you can kinda "calibrate" a battery, by charging it for a couple hours, then letting it run flat, and then charge it up again. This doesn't seem to be the right way to do, though. I'm wondering if I can't access the smart data directly, via the SMBus . i2cdetect -l reports: i2c-0 smbus SMBus I801 adapter at 8000 SMBus adapteri2c-1 i2c NVIDIA i2c adapter 0 at 1:00.0 I2C adapteri2c-2 i2c NVIDIA i2c adapter 2 at 1:00.0 I2C adapteri2c-3 i2c NVIDIA i2c adapter 3 at 1:00.0 I2C adapteri2c-4 i2c NVIDIA i2c adapter 5 at 1:00.0 I2C adapter So, I tried probing SMBus ( i2cdetect -r 0 ): WARNING! This program can confuse your I2C bus, cause data loss and worse!I will probe file /dev/i2c-0 using read byte commands.I will probe address range 0x03-0x77.Continue? [Y/n] y 0 1 2 3 4 5 6 7 8 9 a b c d e f00: -- -- -- -- -- -- -- -- -- -- -- -- -- 10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 70: -- -- -- -- -- -- -- -- This is quite strange to me, does that mean there's nothing connected to the bus? No matter which address I'm trying to dump, the result is always the same: i2cdump 0 0x03 (all other valid addresses produce the same output): No size specified (using byte-data access)WARNING! This program can confuse your I2C bus, cause data loss and worse!I will probe file /dev/i2c-0, address 0x1a, mode byteContinue? 
[Y/n] y 0 1 2 3 4 5 6 7 8 9 a b c d e f 0123456789abcdef00: XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XXXXXXXXXXXXXXXX10: XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XXXXXXXXXXXXXXXX20: XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XXXXXXXXXXXXXXXX30: XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XXXXXXXXXXXXXXXX40: XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XXXXXXXXXXXXXXXX50: XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XXXXXXXXXXXXXXXX60: XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XXXXXXXXXXXXXXXX70: XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XXXXXXXXXXXXXXXX80: XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XXXXXXXXXXXXXXXX90: XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XXXXXXXXXXXXXXXXa0: XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XXXXXXXXXXXXXXXXb0: XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XXXXXXXXXXXXXXXXc0: XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XXXXXXXXXXXXXXXXd0: XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XXXXXXXXXXXXXXXXe0: XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XXXXXXXXXXXXXXXXf0: XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XXXXXXXXXXXXXXXX That's how far I got. The system gets it's battery information from somewhere , but I can't figure out how and from where. As for the I²C / SMBus access to the battery: no idea if I'm doing something wrong, or it's impossible like that. I'd like to know how to access smart battery data, how to set it (presumably with i2cset ), and possibly how it's formatted (what data encodes which information, etc.) acpi -V is even more confused: Battery 0: Unknown, 73%Battery 0: design capacity 6600 mAh, last full capacity 1680 mAh = 25%Adapter 0: on-line (design capacity reported incorrectly, etc.) Last bit of information I can come up with, is dmidecode output: Handle 0x0010, DMI type 39, 22 bytesSystem Power Supply Location: OEM_Define0 Name: OEM_Define1 Manufacturer: OEM_Define2 Serial Number: OEM_Define2 Asset Tag: OEM_Define3 Model Part Number: OEM_Define4 Revision: OEM_Define5 Max Power Capacity: 75 W Status: Present, OK Type: Regulator Input Voltage Range Switching: Auto-switch Plugged: No Hot Replaceable: No You can see all these "OEM_Define2", etc. strings in there, that aren't telling much. dmidecode -t connector reports: Getting SMBIOS data from sysfs.SMBIOS 2.4 present. | The Smart Battery Specification (SBS) bus is not directly accessible from the OS. It is however, possible to communicate directly with the battery via a USB-I2C adapter connected directly to the battery pins. EDIT: https://media.blackhat.com/bh-us-11/Miller/BH_US_11_Miller_Battery_Firmware_Public_WP.pdf EDIT 2:I personally managed to talk directly to the battery using a Raspberry PI's i2c pins and the commands you mentioned. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317864",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1290/"
]
} |
317,981 | This morning we discovered this exploit, CVE-2016-5195 . How do we patch the CentOS kernel? Is there any patch available? http://www.cyberciti.biz/faq/dirtycow-linux-cve-2016-5195-kernel-local-privilege-escalation-vulnerability-fix/ | Wait for RedHat (the CentOS upstream vendor) to issue an update , then CentOS will port that update over to the CentOS update repositories so you can simply patch via yum update as normal. DirtyCOW isn't that scary of a vulnerability. It requires that the attacker already has some manner of shell access to your system. RedHat has it rated as a CVSSv3 score of 7.8/10 , which means it's not something I'd patch outside of the normal monthly patch cycle. It's much more important that you're regularly patching your system at least monthly, as such vulnerabilities are hardly rare . Update : CentOS has released a fix (Thanks, @Roflo!). Running a yum update should get your system updated with a patched kernel. | {
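Once the fix is in the repositories, the update-and-verify cycle is the usual one (assuming the stock CentOS kernel package rather than a custom build): sudo yum update kernel && sudo reboot , then uname -r after the reboot to confirm the running kernel matches the patched version named in the CentOS announcement.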
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/317981",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29656/"
]
} |
318,016 | I need a regex expression to use along with find to find all the files whose names start with a given string, for example proc . I tried find . -regex '^proc*' but it gives me no results. | It would be better to use just -name 'proc*' in this case, which uses globbing and matches against the filename only (unlike -regex , which matches against the whole path, including the directories leading up to the filename). If you insist on using -regex , leveraging greediness: find . -type f -regex '.*/proc[^/]*$' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/318016",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/195165/"
]
} |
318,066 | I just got dual monitors and I want to restrict my Wacom tablet to draw on only one. Unfortunately, xsetwacom doesn't seem to recognize the DVI inputs--even though they're listed in xrandr plain as day. Any ideas? Here's my console output: andrewcarr@andrewcarr-desktop:~$ xrandrScreen 0: minimum 8 x 8, current 3200 x 900, maximum 16384 x 16384DVI-I-0 connected primary 1600x900+1600+0 (normal left inverted right x axis y axis) 443mm x 249mm 1600x900 60.00*+ 1440x900 59.89 1280x1024 60.02 1280x720 60.00 1024x768 60.00 800x600 60.32 640x480 59.94 DVI-I-1 disconnected (normal left inverted right x axis y axis)HDMI-0 disconnected (normal left inverted right x axis y axis)DVI-D-0 connected 1600x900+0+0 (normal left inverted right x axis y axis) 443mm x 249mm 1600x900 60.00*+ 1440x900 59.89 1280x1024 60.02 1280x720 60.00 1024x768 60.00 800x600 60.32 640x480 59.94 andrewcarr@andrewcarr-desktop:~$ xsetwacom set "Wacom Intuos PT M Pen stylus" MapToOutput DVI-I-0Unable to find an output 'DVI-I-0'. | Instead of the monitor name from xrandr, use HEAD-[head index] . For instance, use HEAD-0 for the first monitor listed, HEAD-1 for the second, etc. e.g. xsetwacom --set "Wacom Intuos PT S 2 Pen stylus" MapToOutput HEAD-1 | {
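If the exact device string is also in doubt, xsetwacom can enumerate it for you: xsetwacom --list devices prints the precise name and ID of every stylus, eraser and pad, which you can then paste into the --set ... MapToOutput command.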
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/318066",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/196407/"
]
} |
318,098 | My current idea is to create one software array, class RAID-6, with 4 member drives, using mdadm . Specifically, the drives would be 1 TB HDDs on SATA in a small server Dell T20. Operating System is GNU/Linux Debian 8.6 (later upgraded: Jessie ⟶ Stretch ⟶ Buster ) That would make 2 TB of disk space with 2 TB of parity in my case. I would also like to have it with GPT partition table, for that to work, I am unsure how to proceed specifically supposing I would prefer to do this purely over the terminal. As I never created a RAID array, could you guide me on how I should proceed? Notes: This array will serve for the sole data only. No boot or OS on it. I opted for RAID-6 due to the purpose of this array. Two drive failures the array must be able to survive. Since I am limited by hardware to 4 drives, there is no alternative to RAID-6 that I know of. (However ugly the RAID-6 slowdown may seem, it does not matter in this array.) | In this answer, let it be clear that all data will be destroyed on all of the array members (drives), so back it up first! Open terminal and become root ( su ); if you have sudo enabled, you may also do for example sudo -i ; see man sudo for all options): sudo -i First, we should erase the drives, if there was any data and filesystems before, that is. Suppose we have 4 members: sdi , sdj , sdk , sdl . For the purpose of having feedback of this process visually, the pv ( pipe viewer ) was used here: pv < /dev/zero > /dev/sdipv < /dev/zero > /dev/sdjpv < /dev/zero > /dev/sdkpv < /dev/zero > /dev/sdl Alternatively, to just check if there is nothing left behind, you may peek with GParted on all of the drives, and if there is any partition with or without any filesystem, wiping it could be enough, though I myself prefer the above zeroing all of the drives involved, remember to un-mount all partitions before doing so, it could be done similar to these one-liners: umount /dev/sdi?; wipefs --all --force /dev/sdi?; wipefs --all --force /dev/sdiumount /dev/sdj?; wipefs --all --force /dev/sdj?; wipefs --all --force /dev/sdjumount /dev/sdk?; wipefs --all --force /dev/sdk?; wipefs --all --force /dev/sdkumount /dev/sdl?; wipefs --all --force /dev/sdl?; wipefs --all --force /dev/sdl Then, we initialize all drives with GUID partition table (GPT), and we need to partition all of the drives, but don't do this with GParted, because it would create a filesystem in the process, which we don't want, use gdisk instead: gdisk /dev/sdigdisk /dev/sdjgdisk /dev/sdkgdisk /dev/sdl In all cases use the following: o Enter for new empty GUID partition table (GPT) y Enter to confirm your decision n Enter for new partition Enter for default of first partition Enter for default of the first sector Enter for default of the last sector fd00 Enter for Linux RAID type w Enter to write changes y Enter to confirm your decision You can examine the drives now: mdadm --examine /dev/sdi /dev/sdj /dev/sdk /dev/sdl It should say: (type ee) If it does, we now examine the partitions: mdadm --examine /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 It should say: No md superblock detected If it does, we can create the RAID6 array: mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 We should wait until the array is fully created, this process we can easily watch : watch cat /proc/mdstat After the creation of the array, we should look at its detail: mdadm --detail /dev/md0 It should say: State : clean Active Devices : 4Working Devices : 4 Failed Devices : 0 Spare Devices : 0 Now 
we create a filesystem on the array. If you use ext4 , the command below is better avoided, because ext4lazyinit would take a noticeable amount of time in the case of a large array, hence the name, " lazyinit ", therefore I recommend you avoid this one: mkfs.ext4 /dev/md0 Instead, you should force a full instant initialization (with 0% reserved for root as it is a data array): mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0 /dev/md0 By specifying these options, the inodes and the journal will be initialized immediately during creation, useful for larger arrays. If you chose to take the shortcut and created the ext4 filesystem with the "better avoided" command, note that ext4lazyinit will take a noticeable amount of time to initialize all of the inodes; you may watch it until it is done, e.g. with iotop or nmon . Whichever way you choose to do the file system initialization, you should mount it after it has finished initializing. We now create a directory for this RAID6 array: mkdir -p /mnt/raid6 And simply mount it: mount /dev/md0 /mnt/raid6 Since we are essentially done, we may use GParted again to quickly check if it shows a linux-raid filesystem, together with the raid flag on all of the drives. If it does, we properly created the RAID6 array with GPT partitions and can now copy files onto it. See what UUID the md filesystem has: blkid /dev/md0 Copy the UUID to the clipboard. Now we need to edit fstab with your favorite text editor; I used nano , though sudoedit might better be used: nano /etc/fstab And add an entry to it: UUID=<the UUID you have in the clipboard> /mnt/raid6 ext4 defaults 0 0 I myself do not recommend using the defaults set of flags; I merely wanted the line not to be overly complex. Here are the mount flags I use on a UPS backed-up data RAID (instead of defaults ): nofail,nosuid,nodev,noexec,nouser,noatime,auto,async,rw,data=journal,errors=remount-ro You may check if it is correct after you save the changes: mount -av | grep raid6 It should say: already mounted If it does, we save the array configuration; in case you don't have any md device created yet, you can simply do: mdadm --detail --scan >> /etc/mdadm/mdadm.conf In case there are arrays already existent, just run the previous command without the redirection to the config file: mdadm --detail --scan and add the new array to the config file manually. In the end, don't forget to update your initramfs , because otherwise your new array will only be auto-assembled read-only, probably as /dev/md127 or similar: update-initramfs -u -k all Check if you did everything according to plan, and if so, you may restart: reboot | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/318098",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
318,157 | I want to capture the exit status of a command that takes place somewhere in a pipeline before the last position. For example, if the pipeline is something like command_1 ... | command_2 ... | command_3 ... | ... | command_n ...I would like to know how to capture the exit status of command_1 , or of command_2 , or of command_3 , etc. (Capturing the exit status of command_n is trivial, of course.) Also, in case it matters, this pipeline is occurring inside a zsh shell function. I tried to capture the exit status of command_1 with something like function_with_pipeline () { local command_1_status=-999999 # sentinel value { command_1 ...; command_1_status=$? } | command_2 ... | ... | command_n ...} ...but after running the pipeline, the value of the command_1_status variable was still the sentinel value. FWIW, here's a working example, where the pipeline has only two commands: foo ... | grep ... foo is a function defined for the sake of this example, like so: foo () { (( $1 & 1 )) && echo "a non-neglible message" (( $1 & 2 )) && echo "a negligible message" (( $1 & 4 )) && echo "error message" >&2 return $(( ( $1 & 4 ) >> 2 ))} The goal is to capture the exit status of the call to foo in the pipeline. The function function_with_pipeline implements the (ultimately ineffective) strategy I described above to do this: function_with_pipeline () { local foo_status=-999999 # sentinel value { foo $1; foo_status=$? } | grep -v "a negligible message" printf '%d\ndesired: %d; actual: %d\n\n' $1 $(( ( $1 & 4 ) >> 2 )) $foo_status} The loop below exercises the function_with_pipeline function. The output shows that the value of the local variable foo_status ends up no different from how it started. for i in $(seq 0 7)do function_with_pipeline $idone# 0# desired: 0; actual: -999999# # a non-neglible message# 1# desired: 0; actual: -999999# # 2# desired: 0; actual: -999999# # a non-neglible message# 3# desired: 0; actual: -999999# # error message# 4# desired: 1; actual: -999999# # error message# a non-neglible message# 5# desired: 1; actual: -999999# # error message# 6# desired: 1; actual: -999999# # error message# a non-neglible message# 7# desired: 1; actual: -999999# I get the same results if I omit the local declaration in the definition of foo_status . | There is special array pipestatus for that in zsh , so try command_1 ... | command_2 ... | command_3 and echo $pipestatus[1] $pipestatus[2] $pipestatus[3] and the reason your approach doesn't work is because each pipe runs in separate subshell, with its own variables which are destroyed once you exit the subshell. Just for reference, it is PIPESTATUS (with capital letters) in bash . | {
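A quick demonstration in zsh: false | true | true; echo $pipestatus prints 1 0 0 , one exit status per pipeline stage, with $pipestatus[1] belonging to the left-most command.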
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/318157",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
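A minimal sketch of the fix applied to the question's own example (same foo and grep as above); note that in zsh, pipestatus must be read immediately after the pipeline, before any other command overwrites it:

  function_with_pipeline () {
    local foo_status
    foo $1 | grep -v "a negligible message"
    foo_status=$pipestatus[1]   # exit status of foo, not of grep
    printf '%d\ndesired: %d; actual: %d\n\n' $1 $(( ( $1 & 4 ) >> 2 )) $foo_status
  }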
318,164 | I've got multiple audiobooks stored as large mp3s, and I'm trying to split these large mp3s into multiple smaller files. I've found a tool that can detect silence in audio files and split them based on this "delimiter". Here is an example:
sox -V3 audiobook.mp3 audiobook_part_.mp3 \
  silence 1 0.5 0.1% 1 0.5 0.1% : newfile : restart
This will basically split audiobook.mp3 into audiobook_part_001.mp3, audiobook_part_002.mp3, ... wherever silence >= 0.5 seconds. Now the problem is that this command not only splits the file but also removes the silence. Therefore, when you play the new files in a playlist, the tracks/paragraphs sound squeezed together. So how do you tell sox to only split the file but keep the silence (at the end of each track)? | You can preserve all the silences in the split parts with some small changes. Starting with your original command: silence 1 0.5 0.1% 1 0.5 0.1% The first triplet of values means: remove silence, if any, at the start, until there is 0.5 seconds of sound above 0.1%. The second triplet means: stop when there is at least 0.5 seconds of silence below 0.1%. The rest of your command, : newfile : restart , then starts a new output file and begins again looking for sound at the start. So the first file ends when the silence begins, and the second file starts when the silence ends. The simplest option available to improve this is silence -l . It preserves the 0.5 seconds of silence that triggered the end of file. Unfortunately, any longer silence is still removed, because it counts as the start of the next file. An easy way to keep a longer gap is to combine -l with a longer detection time, e.g. 2 seconds: silence -l 1 0.5 0.1% 1 2.0 0.1% You will now only split if there is at least 2 seconds of silence, but you will preserve the first 2 seconds of the gap. To avoid losing any silence at all, simply remove the detection of silence at the start: you need to replace the first triplet with a single 0 : silence -l 0 1 2.0 0.1% If you want to play with simple sound files to see how sox handles various situations, you can easily create two sound files, one consisting of 1 second of a tone and one consisting of 1 second of silence, then join them together as you wish before presenting the result as input to the silence effect. For example, create:
sox -n gap.wav trim 0 1
sox -n tone.wav synth 1.001t sine C5
then join gap-tone-gap-tone and create out.wav using your effect, and listen to the result:
sox gap.wav tone.wav gap.wav tone.wav out.wav silence 1 0.5 0.1%
play out.wav
(A batch sketch for many audiobooks follows this entry.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/318164",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124579/"
]
} |
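A minimal batch sketch applying the keep-the-silence variant above to a whole directory of audiobooks; the .mp3 naming in the current directory is an assumption, adjust it to your files:

  for f in *.mp3; do
    sox -V3 "$f" "${f%.mp3}_part_.mp3" silence -l 0 1 2.0 0.1% : newfile : restart
  done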
318,262 | Whenever I run clear in my terminal, it deletes the scrollback buffer from the top of the screen to the current line. I have tried it in xterm, st, and urxvt, and the problem remains. Is there any way I can change the behavior of clear so that it does not touch the scrollback buffer? | The comment "from the top of the screen to the current line" is ambiguous. If you meant from the top of the visible part of the screen, that is not the scrollback . The scrollback of a terminal is the part that you can see only by using the scrollbar (or suitable keys such as shift pageup). XTerm Control Sequences documents the relevant escape sequence:
CSI Ps J  Erase in Display (ED).
  Ps = 0  -> Erase Below (default).
  Ps = 1  -> Erase Above.
  Ps = 2  -> Erase All.
  Ps = 3  -> Erase Saved Lines (xterm).
The terminal description capability clear uses the next-to-last one, e.g., clear=\E[H\E[2J to position the cursor to the upper left and then clear the whole (visible) screen. You could use Erase Below , but that is not used in the terminal description. Referring to clearing the scrollback : that's a terminal-specific feature, originally an escape sequence in xterm ( 1999 , documented in ctlseqs.ms but not mentioned in changes) and later ( 2011 ) implemented as an extension for the Linux console and the corresponding terminal description. The terminal database lists it as a "miscellaneous extension" . Currently, these terminal descriptions have the feature:
linux3.0 (the current default for "linux")
putty
xterm+basic (a building block used in most "xterm" variants)
Whether it is supported in xterm look-alikes such as VTE would have to be answered by testing (there is no useful documentation for VTE or Konsole). If you prefer not to use the extension, you could remove the E3 capability from the terminal description which you use, e.g.,
infocmp -1x >foo
edit foo, removing the line with "E3="
tic -x foo
I suggested using the options -1 and -x to simplify the formatting and to show the feature to change. The example given in https://ghostbin.com/paste/kfsbj is consistent with that advice: the pathname /home/flowerpick/.terminfo/x/xterm would be used by ncurses; the capabilities AX and XT are extended capabilities (like E3 ), shown with the -x option. If you are using more than one terminal type, you would have to do this for each (value of $TERM ), and the change only applies to the machine where you run clear . The first couple of lines of the infocmp output show which one you are working on:
# Reconstructed via infocmp from file: /home/flowerpick/.terminfo/x/xterm
xterm|xterm terminal emulator (X Window System),
For instance, urxvt sets $TERM to something like rxvt-unicode , producing lines like this in infocmp :
# Reconstructed via infocmp from file: /lib/terminfo/r/rxvt-unicode
rxvt-unicode|rxvt-unicode terminal (X Window System),
The st program uses xterm (or possibly xterm-256color ), though it's been a while since I saw a copy of that which worked well enough to comment upon. By the way, you could have an alias for clear which sends the given escape sequence (ignoring the terminal description), but I haven't seen this reported by anyone. If you wanted to "clear above", that is not as straightforward as typing "clear". The escape \033[1J erases from the upper-left corner to the current cursor position.
You could make a script which does this, to clear only the lines above your current cursor: use the cursor position report to find the row/column where the cursor currently is, and if the cursor is not on the first line (saving that position), move the cursor up one line, then (with the hpa sequence) move right a large number of columns, issue the "clear above", and return to the original position using cup (cursor addressing). The part with the cursor position report doesn't seem as if it would work in (for example) a readline binding, which is why I suggested a script (a minimal sketch follows this entry). You could make a binding that used the save/restore cursor capabilities if it were not for the problem of being on the first line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/318262",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/196545/"
]
} |
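A minimal sketch of the "clear above" script described in the answer, assuming a bash shell and an xterm-compatible terminal that answers the cursor position report (ESC[6n); run it as a script, not as a readline binding:

  #!/bin/bash
  IFS='[;' read -p $'\e[6n' -d R -rs _ row col   # ask the terminal where the cursor is
  if [ "$row" -gt 1 ]; then
    printf '\e[%d;999H' "$((row - 1))"   # up one line, far right (the hpa step)
    printf '\e[1J'                       # erase from upper-left to the cursor
    printf '\e[%d;%dH' "$row" "$col"     # back to the original position (the cup step)
  fi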
318,266 | I'd like to find an equivalent of cmd 1 && cmd 2 && ... && cmd 20 but with the commands expressed within a for loop, like
for i in {1..20}
do
  cmd $i
done
What would you suggest changing in the second expression to make it equivalent to the first? | The equivalent to your original sequence would be:
for i in {1..20}
do
  cmd $i || break
done
The difference from Amit's answer is that the script won't exit, i.e. it will still execute any commands that follow the sequence/loop. Note that the return status of the whole loop will always be true with my suggestion; this can be fixed if it is relevant in your case (a sketch follows this entry). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/318266",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189711/"
]
} |
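A minimal sketch of that fix, preserving the exit status of the first failing command; cmd stands for your actual command:

  status=0
  for i in {1..20}
  do
    cmd $i || { status=$?; break; }
  done
  exit "$status"   # or "return $status" inside a function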
318,281 | I'm using tmux and OS X. When copying and pasting from the terminal with tmux, I'm able to hold down Option and select text. However, I can't get the text to stay inside the pane. So when I want to copy text I either need to cycle the pane to the far left, or zoom the pane, as shown below. This, in addition to having to hold down the Option key, is a pain. I know I can enter visual mode and use vim movements to get there, but I'd rather have a way to use my mouse. Has anyone found a workaround for this? | Put this block of code in your ~/.tmux.conf . It enables mouse integration, letting you copy from a pane with your mouse without having to zoom.
set -g mouse on
bind -n WheelUpPane if-shell -F -t = "#{mouse_any_flag}" "send-keys -M" "if -Ft= '#{pane_in_mode}' 'send-keys -M' 'select-pane -t=; copy-mode -e; send-keys -M'"
bind -n WheelDownPane select-pane -t= \; send-keys -M
bind -n C-WheelUpPane select-pane -t= \; copy-mode -e \; send-keys -M
bind -t vi-copy C-WheelUpPane halfpage-up
bind -t vi-copy C-WheelDownPane halfpage-down
bind -t emacs-copy C-WheelUpPane halfpage-up
bind -t emacs-copy C-WheelDownPane halfpage-down
# To copy, drag to highlight text in yellow, press Enter and then release the mouse
# Use vim keybindings in copy mode
setw -g mode-keys vi
# Update default binding of `Enter` to also use copy-pipe
unbind -t vi-copy Enter
bind-key -t vi-copy Enter copy-pipe "pbcopy"
After that, restart your tmux session. Highlight some text with the mouse, but don't release the button yet. While the text is still highlighted and the button is pressed, press the Return key. The highlighted text will disappear and will be copied to your clipboard. Now release the mouse. Apart from this, there are also some cool things you can do with the mouse, like scrolling up and down, selecting the active pane, etc. If you are using a newer version of tmux on macOS, try the following instead of the one above:
# macOS only
set -g mouse on
bind -n WheelUpPane if-shell -F -t = "#{mouse_any_flag}" "send-keys -M" "if -Ft= '#{pane_in_mode}' 'send-keys -M' 'select-pane -t=; copy-mode -e; send-keys -M'"
bind -n WheelDownPane select-pane -t= \; send-keys -M
bind -n C-WheelUpPane select-pane -t= \; copy-mode -e \; send-keys -M
bind -T copy-mode-vi C-WheelUpPane send-keys -X halfpage-up
bind -T copy-mode-vi C-WheelDownPane send-keys -X halfpage-down
bind -T copy-mode-emacs C-WheelUpPane send-keys -X halfpage-up
bind -T copy-mode-emacs C-WheelDownPane send-keys -X halfpage-down
# To copy, left click and drag to highlight text in yellow;
# once you release left click, the yellow text will disappear and will automatically be available in the clipboard
#
# Use vim keybindings in copy mode
setw -g mode-keys vi
# Update default binding of `Enter` to also use copy-pipe
unbind -T copy-mode-vi Enter
bind-key -T copy-mode-vi Enter send-keys -X copy-pipe-and-cancel "pbcopy"
bind-key -T copy-mode-vi MouseDragEnd1Pane send-keys -X copy-pipe-and-cancel "pbcopy"
If using iTerm on macOS, go to iTerm2 > Preferences > “General” tab, and in the “Selection” section, check “Applications in terminal may access clipboard”.
And if you are using Linux and a newer version of tmux, then:
# Linux only
set -g mouse on
bind -n WheelUpPane if-shell -F -t = "#{mouse_any_flag}" "send-keys -M" "if -Ft= '#{pane_in_mode}' 'send-keys -M' 'select-pane -t=; copy-mode -e; send-keys -M'"
bind -n WheelDownPane select-pane -t= \; send-keys -M
bind -n C-WheelUpPane select-pane -t= \; copy-mode -e \; send-keys -M
bind -T copy-mode-vi C-WheelUpPane send-keys -X halfpage-up
bind -T copy-mode-vi C-WheelDownPane send-keys -X halfpage-down
bind -T copy-mode-emacs C-WheelUpPane send-keys -X halfpage-up
bind -T copy-mode-emacs C-WheelDownPane send-keys -X halfpage-down
# To copy, left click and drag to highlight text in yellow;
# once you release left click, the yellow text will disappear and will automatically be available in the clipboard
#
# Use vim keybindings in copy mode
setw -g mode-keys vi
# Update default binding of `Enter` to also use copy-pipe
unbind -T copy-mode-vi Enter
bind-key -T copy-mode-vi Enter send-keys -X copy-pipe-and-cancel "xclip -selection c"
bind-key -T copy-mode-vi MouseDragEnd1Pane send-keys -X copy-pipe-and-cancel "xclip -in -selection clipboard"
In Debian and Debian-based distros (Ubuntu, Kali), you might need to install xclip : sudo apt-get install -y xclip (You may also check out https://github.com/gpakosz/.tmux for many other tmux options. A version-check sketch follows this entry.) | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/318281",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/173557/"
]
} |
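A rough sketch for deciding, from a shell (not from .tmux.conf), which of the two configurations above applies to your tmux. The 2.4 cutoff — the release where the copy-mode key tables changed — is an assumption you should verify against the tmux changelog, and letter-suffixed versions like "2.9a" are stripped crudely here:

  if tmux -V | awk '{ gsub(/[^0-9.]/, "", $2); exit !($2 + 0 >= 2.4) }'; then
    echo "newer tmux: use the copy-mode-vi blocks"
  else
    echo "older tmux: use the vi-copy / emacs-copy block"
  fi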
318,287 | Taking my first Linux course and, ironically, I think I've hit a problem that could be fixed by someone proficient in Linux! As part of the course I'm taking, we're required to download and install CentOS 7. I'm having problems with the installation part. Some context: Downloaded the 'DVD ISO' file from the official website File name: CentOS-7-x86_64-DVD.iso File size: 4.33 GB Running OS X El Capitan v10.11 However, when I double-click on the file I get the following error: The following disk images couldn't be opened: Image: CentOS-7-x86_64-DVD-1511.iso Reason: no mountable file systems I would delete and download the file again, but I don't have a stable and/or fast connection, so I would rather not do that, as it is a real pain. Is there a way to fix this? I did some research online and didn't find satisfying solutions. My first thought was that perhaps the file is corrupted due to my bad connection, but it seems to be a common problem, so perhaps it isn't that? | Linux (and Unix, for that matter) is an operating system. What is an Operating System? An operating system (OS) is the software that runs "directly" (let's ignore firmware for now) on your computer's hardware and provides a standard environment from which other software can run. Ordinary programs/apps, such as iTunes or Microsoft Word, don't want to deal with your actual hardware; they simply ask the operating system for something, it deals with the hardware, and it gives the result back to the program/app. Operating systems are thus installed outside of other operating systems (since they're used to access the computer's hardware directly). Your Mac would already be running macOS as its operating system, Apple's operating system for its devices. Usually, as the computer starts up, you can change which device it boots from, and choosing an operating-system installation DVD or USB drive is a common method for installing a new operating system. This means that an operating system can't really be "installed" on another operating system, but there are workarounds. Dual Booting "Dual booting" refers to installing multiple operating systems on a single computer. This is easiest if you have multiple hard drives, so that you can install each operating system on its own hard drive and then simply choose which hard drive to boot. You can also install multiple operating systems on a single hard drive, but that requires a boot manager (software that figures out where on the hard disk each operating system starts). I wouldn't recommend this method for a Mac, simply because Macs aren't really meant to run anything except macOS, and I wouldn't trust other OSes to support them. Virtual Machines Virtual machines are programs that run within an operating system and pretend to be a full computer. Because they pretend to be a computer, you can install an operating system on them! The most common virtual-machine program is likely VMware, but you can search around to find one you like. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/318287",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/196572/"
]
} |
318,321 | fold can wrap a line if it has more than a certain number of characters. However, I want to wrap a text file that has fewer than 40 characters per line into two columns (80 characters per line in total). I want to make
apple
banana
(28 items omitted)
grape
guava
into
apple                 ...
banana                ...
(12 items omitted)    (12 items omitted)
...                   grape
...                   guava
How can I do it? | Using the -COLUMN or --columns=COLUMN option of pr : -COLUMN, --columns=COLUMN output COLUMN columns and print columns down, unless -a is used. Balance number of lines in the columns on each page So either pr -t -2 yourfile or pr -t --columns=2 yourfile (see the width note after this entry). For example, augmenting your entries with some random dictionary words:
$ cat << EOF | pr -t -2
> apple
> banana
> `shuf -n28 /usr/share/dict/words`
> grape
> guava
> EOF
apple               overachieves
banana              wickerwork
cottonmouths        supersonic
adapter's           draftiest
boudoir's           insisting
cruised             programs
mousetrap           parcel
shticks             basically
TLC's               coruscates
conduction          Jones
geeing              Ty
gloamings           bondage
investing           candelabra's
radiotherapists     Inchon's
clasp's             grape
critters            guava
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/318321",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60221/"
]
} |
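One small addition, since the question asks for 80 characters per line in total: GNU pr's default page width is 72 columns, so pass -w to widen it (yourfile is a placeholder):

  pr -t -2 -w 80 yourfile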
318,324 | Question: I'm using i3-wm and I have Mod3 working as a hotkey. I have the following in ~/.config/i3/config :
# This command works
bindsym Mod3+f exec "firefox"
# This doesn't work, nor do my other scripts
bindsym Mod3+w exec "openBrowser"
Both of these commands work fine when I run them from bash, but only the 'firefox' command runs with the hotkey. Running my own script doesn't work. Additional details: openBrowser is a script in /opt/bin/ , which is in my path. I also tried:
# This command works
bindsym Mod3+f exec /opt/bin/openBrowser
I've also tried other scripts, none of which work when invoked by i3. Thus I've determined it's not an issue with the script. I also noticed that when I'm in bash, if I do Mod3+w my cursor blinks, whereas if I do Mod3+[any unset key] the key writes its value to the screen. So it seems i3 is at least trying to run the command. | I attempted to duplicate the issue you describe. What I found is that I had two i3 config files existing at the same time: ~/.config/i3/config and ~/.i3/config . In my case, editing ~/.config/i3/config had no effect, because it seems that ~/.i3/config trumps it. It's a long shot, but see if maybe you have more than one config file and you are editing the wrong one (a quick check follows this entry). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/318324",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
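A quick check for the duplicate-config situation described above; which file wins can depend on your i3 version, so inspect both (paths taken from the answer):

  ls -l ~/.i3/config ~/.config/i3/config 2>/dev/null   # if both exist, consolidate into one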