source_id | question | response | metadata
---|---|---|---|
312,261 | I'm having difficulty finding an explanation for allowing the OS default copy/paste capabilities (i.e. highlight a portion of text and then use the standard shortcut or right-click menu) while also allowing mouse scrolling. Mouse mode turns on tmux's own copy/paste system, but leaving it off removes mouse scrolling. As I'm switching between an IDE, browser, and terminal with tmux, I would like the controls to be consistent between all of them. Is there a way to have the standard OS copy/paste controls while also allowing the mouse to scroll in tmux? (Note: I originally asked, but deleted, this question on SO. I decided it was more appropriate here.) | Hold the shift key when attempting to copy and paste when in 'mode-mouse on' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/312261",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191805/"
]
} |
312,280 | I have a string: one_two_three_four_five. I need to save the value two in variable A and the value four in variable B from the above string. I am using ksh. | Use cut with _ as the field delimiter and get the desired fields:
A="$(cut -d'_' -f2 <<<'one_two_three_four_five')"
B="$(cut -d'_' -f4 <<<'one_two_three_four_five')"
You can also use echo and a pipe instead of the here-string:
A="$(echo 'one_two_three_four_five' | cut -d'_' -f2)"
B="$(echo 'one_two_three_four_five' | cut -d'_' -f4)"
Example:
$ s='one_two_three_four_five'
$ A="$(cut -d'_' -f2 <<<"$s")"
$ echo "$A"
two
$ B="$(cut -d'_' -f4 <<<"$s")"
$ echo "$B"
four
Beware that if $s contains newline characters, that will return a multiline string that contains the 2nd/4th field in each line of $s, not the 2nd/4th field in $s. | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/312280",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15717/"
]
} |
312,283 | I have a file called File1 in which I have one word: Frida. How can I print the output of cat File1 three times in the same line? It should show Frida Frida Frida | | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/312283",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191824/"
]
} |
312,297 | I have a file with many numbers in it (only numbers, and each number is on one line). I want to find out the number of lines in which the number is greater than 100 (or in fact anything else). How can I do that? | Let's consider this test file:
$ cat myfile
98
99
100
101
102
103
104
105
Now, let's count the number of lines with a number greater than 100:
$ awk '$1>100{c++} END{print c+0}' myfile
5
How it works
$1>100{c++}
Every time that the number on the line is greater than 100, the variable c is incremented by 1.
END{print c+0}
After we have finished reading the file, the variable c is printed. By adding 0 to c, we force awk to treat c like a number. If there were any lines with numbers >100, then c is already a number. If there were not, then c would be an empty string (hat tip: iruvar). By adding zero to it, we change the empty string to a 0, giving a more correct output. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/312297",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191820/"
]
} |
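As a quick illustration of the awk answer above for the question's "or in fact anything else" case, only the comparison changes; this is a minimal sketch where the threshold 102 is arbitrary:
$ awk '$1>102{c++} END{print c+0}' myfile
3
For the test file above, the lines 103, 104 and 105 match, hence 3.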
312,308 | So basically I want to add these two cmd lines together:
ls *[Aa]*
ls *[Bb]*
I'm looking for a file that contains both A and B (lower or uppercase) and they can appear more than once. Here's what I tried: ls *[Aa]*&&*[Bb]* | Using brace expansion
One method is to use brace expansion. Let's consider a directory with these files:
$ ls
1a2a3 1a2b3 1b2A3 1b2b3
To select the ones that have both a and b in either case:
$ ls *{[bB]*[aA],[aA]*[bB]}*
1a2b3 1b2A3
Improvement
A possible issue is how brace expansion behaves if one of the options has no matching files. Consider a directory with these files:
$ ls
1a2a3 1b2A3 1b2b3
Now, let's run our command:
$ ls *{[bB]*[aA],[aA]*[bB]}*
ls: cannot access '*[aA]*[bB]*': No such file or directory
1b2A3
If we don't like that warning message, we can set nullglob and it will go away:
$ shopt -s nullglob
$ ls *{[bB]*[aA],[aA]*[bB]}*
1b2A3
A limitation of this approach, though, is that, if neither glob matches, then ls is run with no arguments and consequently it will list all files.
Using extended globs
Let's again consider a directory with these files:
$ ls
1a2a3 1a2b3 1b2A3 1b2b3
Now, let's set extglob:
$ shopt -s extglob
And, let's use an extended glob to find our files:
$ ls *@([bB]*[aA]|[aA]*[bB])*
1a2b3 1b2A3 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/312308",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191835/"
]
} |
312,398 | I was analyzing a script file and I came across the line below:
sed -i '/JBOSS_HOME\/bin\/run.sh/i \export TMP_FOLDER=$JBOSS_HOME/server/default/tmp ' /home/jboss/runJBOSSEAP.sh
I am still not able to figure out what this command does. I know -i means it is an inline operation. But what it does is still unknown to me. Please help me understand this line. | sed -i
-i says to edit the file in-place, that is, write the new version over the same name.
/JBOSS_HOME\/bin\/run.sh/
A pattern, delimited by slashes; the slashes contained in the pattern are quoted with backslashes, so this matches any line containing JBOSS_HOME/bin/run.sh. (Actually, since it's a regex, the dot matches any character.)
i \export TMP_FOLDER=$JBOSS_HOME/server/default/tmp '
Command to run when the pattern matches; i is for inserting a line (before the current one). The line to be added is separated by the backslash, so this adds the string export TMP_FOLDER=$JBOSS_HOME/server/default/tmp .
/home/jboss/runJBOSSEAP.sh
Target file name.
e.g.
$ echo JBOSS_HOME/bin/run.sh > pla
$ sed -i '/JBOSS_HOME\/bin\/run.sh/i \export TMP_FOLDER=$JBOSS_HOME/server/default/tmp ' pla
$ cat pla
export TMP_FOLDER=$JBOSS_HOME/server/default/tmp 
JBOSS_HOME/bin/run.sh
It's pretty much the same as e.g. the example here | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/312398",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23687/"
]
} |
312,435 | How can I view IPv6 router advertisements that are being received by my computer, for diagnostic purposes? Are there any tools "built-in" to the majority of distros? | Using tcpdump, which is installed by default on many distributions:
tcpdump -n -i eth0 icmp6
will show you all ICMPv6 packets, of which - under usual conditions - almost all are neighbor discovery packets. In order to see only router advertisements, use the following command:
tcpdump -n -i eth0 icmp6 and ip6[40] == 134
For more verbosity, add -v; to display packet contents, use the option -X. tshark is usually bundled with wireshark, which most distributions do not install by default but provide as an additional package. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/312435",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14143/"
]
} |
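A small convenience sketch combining the options mentioned in the answer above; eth0 is just an example interface name, and the filter is quoted so the shell cannot expand the brackets:
tcpdump -n -v -i eth0 'icmp6 and ip6[40] == 134'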
312,456 | I am on Debian Jessie 8.6. I noticed that apt-get gets the expected bash autocompletion when pressing tab for packages and commands, but when trying to use it with apt it does not work. I remember using xubuntu 16.04 where it worked, so I find it strange that it does not work here. Is there a way to enable it for the command apt as well? If so, how? | Debian does not come with 'bash-completion' installed and enabled. If you're coming to Debian from, say, an Ubuntu background, where it is pre-installed and enabled by default, this can be a source of some confusion. To enable/'fix' this, run (as root):
apt-get install bash-completion
Then, you have two options. You can either: 1. Enable it on a per-user basis for yourself, or 2. Enable it globally.
1. If you want to enable it for just your user, edit ~/.bashrc and add the following:
if [ -f /etc/bash_completion ]; then
    . /etc/bash_completion
fi
To try it without logging out and back in, run:
. ~/.bashrc
Or open a new shell. Then try to use tab-completion with apt. That dot and space at the beginning ( . ) is the same as using the source keyword in bash, but is more portable. If you want it to work when su'd into the root account, do the same thing in root's home directory (typically /root ).
2. To enable it globally, make the changes from (1) in the file /etc/bash.bashrc instead.
To anyone who's wondering why this works: the . in front of /etc/bash_completion does not refer to the current directory, since it has spaces around it. It instead makes the contents of the given file be evaluated in the currently running shell, instead of being executed in a new subshell. It is standardized here. In Bash, this . can be replaced by the command source, but this is not standardized by POSIX and is less portable, so I tend to steer people away from using it. In this case, since it is specifically a program for extending bash, rather than something that needs to work in a bourne shell or ksh, you can feel free to substitute source for readability. Incidentally, this behavior (not opening a sub-shell) is similar to the way DOS/Windows .BAT scripts work normally, changing the state of the shell they are run in. This is why, if you cd into a different path in a shell script, you won't be in that path when the script exits, like you would be in a .BAT. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/312456",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169342/"
]
} |
312,480 | Recently I did the usual update + upgrade... however, after doing so, my network interface refused to work (no connection). What happened? How can I bring my network interface up? ... I am running Debian stretch. (The same issue might occur on Debian derivatives, like e.g. Ubuntu.) | After some searching on the web (thank god I have a laptop as well) I figured out that some renaming of the network interfaces had occurred... so, first thing to do: see which network interfaces are currently up (for me only the loopback was started):
sudo ifconfig
Now let's check the naming of all available network interfaces:
networkctl
For me the output looked like that:
WARNING: systemd-networkd is not running, output will be incomplete.
IDX LINK   TYPE     OPERATIONAL SETUP
  1 lo     loopback n/a         unmanaged
  2 enp3s0 ether    n/a         unmanaged
  3 enp4s0 ether    n/a         unmanaged
After that I took a look into /etc/network/interfaces ... which for me looks like that:
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# Comment in the right one (the one plugged in) otherwise system.d will run a startjob
#auto net0
#allow-hotplug net0
#iface net0 inet dhcp
auto net1
allow-hotplug net1
iface net1 inet dhcp
... you probably can guess what comes next ... replace net0 / net1 (or whatever you have there) by the LINKS listed by networkctl. Start the new interface (or reboot):
sudo ifup enp3s0
And check if it is listed now:
sudo ifconfig | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/312480",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57118/"
]
} |
312,486 | I have a file and its content looks like this:
a1
b1
c1
aa
bb
cc
aaa
bbb
ccc
d1
e1
f1
dd
ee
ff
ddd
eee
fff
g1
h1
i1
gg
hh
ii
ggg
hhh
iii
What's the best way to merge the rows with a fixed interval (3 in this case) and get something like:
a1 aa aaa
b1 bb bbb
c1 cc ccc
d1 dd ddd
e1 ee eee
f1 ff fff
g1 gg ggg
h1 hh hhh
i1 ii iii
The algorithm to get the output from the input is: First we get row 1, which is a1. We know the interval is 3, so row 1, row (1+3), and row (1+3+3) should be on the same row. Similarly, rows 2, 5, 8 should be on the same row, etc. Those a1, aa and aaa etc. are just random dummy text and they could be any random string. The point is that there is a fixed interval between a1, aa and aaa. Currently I use an emacs keyboard macro to do this task. However, I want to know if there are any better ways to solve this problem. Thanks in advance. | If you're on GNU/anything and the number of lines is a multiple of 9, you could run
split -l9 --filter='pr -3 -s" " -t' infile
This splits the input into pieces of nine lines and each piece is piped to pr -3 -s" " -t which columnates it... Depending on the no. of lines and their length you may need to play with pr options -w and -l. See the man page for more details. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/312486",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8839/"
]
} |
312,488 | Pseudocode originally developed for a Windows 7 iso file and applied to Windows 8 in the thread How to create bootable Windows 8 iso image in Linux?, but it does not work with a Windows 10 iso:
# https://rwmj.wordpress.com/2010/11/04/customizing-a-windows-7-install-iso/
# https://unix.stackexchange.com/a/312477/16920
$ dd if=../en_windows_10_x64_dvd.iso \
    of=boot.img bs=2048 count=8 skip=734
$ mkisofs -o ../new-win.iso -b boot.img -no-emul-boot -c BOOT.CAT \
    -iso-level 2 -udf \
    -J -l -D -N -joliet-long -relaxed-filenames .
Unsuccessful output when run on the Windows 10 image:
dd if=/home/masi/Downloads/en_windows_10_multiple_editions_version_1511_x64_dvd.iso of=/home/masi/Downloads/boot.img bs=2048 count=8 skip=734
8+0 records in
8+0 records out
16384 bytes (16 kB) copied, 0.000392973 s, 41.7 MB/s
Some of the following fields have changed for the iso file used in dd: bs=2048 count=8 skip=734. How can you study which field values you can use for a Windows 10 iso? OS: Debian 8.5 64 bit. Hardware: Asus Zenbook UX303UA. Linux kernel: 4.6 of backports. Related threads: How to create bootable Windows 7 iso image in Linux?, Customizing a Windows 7 install ISO. Motivation: I need Windows 10 to use a Canon P-150 duplex scanner, but when I started my Windows, I got Error 0xC0000428 because Windows update has again broken things there and I use Windows otherwise so rarely; and I have no spare Windows left to make bootable media. | I tried the Win7 solution described by Microsoft on a Windows machine: https://www.microsoft.com/en-us/download/windows-usb-dvd-download-tool and obtained the 0x80080005 error, so I went to Debian Stretch 9 to try to build the Windows 10 bootable USB using an e5.onthehub.com college/school ISO. Using dd absolutely doesn't work for Windows 10. This only works for Linux OSes. Use:
dd if=my-linux-os.iso of=/dev/sdX bs=4M
Note: never try to write to /dev/sdX1 where X={a,b,c or d}, and always check you are not overwriting your hard disk, which is usually /dev/sda or /dev/sdb! For Windows 10 you can use WoeUSB, but not from the apt/yum repos. These ones are obsolete, at least for Debian 9. So instead use:
git clone https://github.com/slacka/WoeUSB.git
Then follow the instructions at the end of: https://github.com/slacka/WoeUSB You must have all the prerequisites such as gparted and so forth installed first. I also found at the end of the process that I must run woeusb with sudo. So you just use:
sudo woeusb --device local/of/my/windows-10-image.iso /dev/sdX
and hey presto, it just works brilliantly. In my case my hard disk was /dev/sda and my USB drive was /dev/sdb, so I wrote the ISO to /dev/sdb (again, be careful: you don't want to overwrite your OS by accident). I then installed it on a military class MSI motherboard with a 2TB hard disk attached, with no fast boot options inside the BIOS, and it just works. I turned on absolutely every UEFI option first to get it into the right state. I also had problems at install time, with the system hanging forever when using a Gmail email account for login and when the internet cable was connected at the second restart of the machine (during the install process). If you have this issue, disconnect internet, restart machine, let generic account be built, login, restart with internet cable | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/312488",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
312,490 | I was searching the net to find a way to do a complete backup of my linux machine (not a server) and restore everything anytime. I started with the most linked guide on every thread, http://www.mikerubel.org/computers/rsync_snapshots/index.html and I thought that this type of backup was an incremental backup; after another day of research I found What's the difference between differential and incremental backup in terms of rsync command? and other discussions, but I am not sure what I am really doing. Following the guide and the posts I decided to try with a single folder (before backing up the entire system) called source:
rsync -av --delete /home/user/source /home/user/backup/backup0/
then I added and removed some files and did the backup three more times:
rsync -avH --delete --link-dest=/home/user/backup/backup0 /home/user/source /home/user/backup/backup1
rsync -avH --delete --link-dest=/home/user/backup/backup1 /home/user/source /home/user/backup/backup2
rsync -avH --delete --link-dest=/home/user/backup/backup2 /home/user/source /home/user/backup/backup3
I thought that with this type of backup I was going to have something like: backup0-->backup1-->backup2-->backup3. So if I wanted to restore the content of "backup3", #1 and #2 were needed; but I deleted them and then restored backup3, and everything was back in place. So I ran:
user@user:/backup$ du -sh *
450M backup0
620K backup1
624K backup2
628K backup3
It looks like a differential backup, not an incremental one, but I thought that to be differential I would have had to set for every backup --link-dest=/home/user/backup/backup0. My question is: am I mistaking something? Is there a better way to back up a complete system using rsync? P.S: the destination device is going to be an external drive with the same filesystem. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/312490",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191986/"
]
} |
312,491 | How does systemd handle the death of the children of managed processes? Suppose that systemd launches the daemon foo, which then launches three other daemons: bar1, bar2, and bar3. Will systemd do anything to foo if bar2 terminates unexpectedly? From my understanding, under Service Management Facility (SMF) on Solaris, foo would be killed or restarted if you didn't tell startd otherwise by changing the property ignore_error. Does systemd behave differently? Edit #1: I've written a test daemon to test systemd's behavior. The daemon is called mother_daemon because it spawns children.
#include <iostream>
#include <unistd.h>
#include <string>
#include <cstring>
using namespace std;
int main(int argc, char* argv[])
{
    cout << "Hi! I'm going to fork and make 5 child processes!" << endl;
    for (int i = 0; i < 5; i++)
    {
        pid_t pid = fork();
        if (pid > 0)
        {
            cout << "I'm the parent process, and i = " << i << endl;
        }
        if (pid == 0)
        {
            // The following four lines rename the process to make it easier to keep track of with ps
            int argv0size = strlen(argv[0]);
            string childThreadName = "mother_daemon child thread PID: ";
            childThreadName.append( to_string(::getpid()) );
            strncpy(argv[0], childThreadName.c_str(), argv0size + 25);
            cout << "I'm a child process, and i = " << i << endl;
            pause(); // I don't want each child process spawning its own process
            break;
        }
    }
    pause();
    return 0;
}
This is controlled with a systemd unit called mother_daemon.service:
[Unit]
Description=Testing how systemd handles the death of the children of a managed process
StopWhenUnneeded=true
[Service]
ExecStart=/home/my_user/test_program/mother_daemon
Restart=always
The mother_daemon.service unit is controlled by the mother_daemon.target:
[Unit]
Description=A target that wants mother_daemon.service
Wants=mother_daemon.service
When I run sudo systemctl start mother_daemon.target (after sudo systemctl daemon-reload ) I can see the parent daemon and the five child daemons. Killing one of the children has no effect on the parent, but killing the parent (and thus triggering a restart) does restart the children. Stopping mother_daemon.target with sudo systemctl stop mother_daemon.target ends the children as well. I think that this answers my question. | It doesn't. The main process handles the death of its children, in the normal way. This is the POSIX world. If process A has forked B, and process B has forked C, D, and E; then process B is what sees the SIGCHLD and wait() status from the termination of C, D, and E. Process A is unaware of what happens to C, D, and E, and this is irrespective of systemd. For A to be aware of C, D, and E terminating, two things have to happen: A has to register itself as a "subreaper". systemd does this, as do various other service managers including upstart and the nosh service-manager. And B has to exit(). Services that foolishly, erroneously, and vainly try to "dæmonize" themselves do this. (One can get clever with kevent() on the BSDs. But this is a Linux question.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/312491",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191989/"
]
} |
312,551 | Given a file with the following string: fastcgi_param WP_ENV staging; I need a sed expression that will replace the word 'staging' with a new string: fastcgi_param WP_ENV production; In the first example the 3rd word is variable. It could be any lowercase string, e.g. development, local, etc. I tried the following: sed 's/fastcgi_param WP_ENV [\w+]/fastcgi_param WP_ENV production/g' but it does not pick up the word correctly. The regexp for the word does not match. What would be the correct sed command to do this type of replacement? | Add -E and remove the square brackets:
$ sed -E 's/fastcgi_param WP_ENV \w+/fastcgi_param WP_ENV production/g' file
fastcgi_param WP_ENV production;
Notes: + is not supported in basic regular expressions. -E turns on extended regex, which does support +. \w+ matches one or more word characters. [\w+] matches any one of \, w, or +. \w is not portable. For POSIX compatibility, use:
$ sed -E 's/fastcgi_param WP_ENV [[:alnum:]]+/fastcgi_param WP_ENV production/g' file
fastcgi_param WP_ENV production;
You can avoid the double typing of the line by using a capture group:
$ sed -E 's/(fastcgi_param WP_ENV) [[:alnum:]]+/\1 production/g' file
fastcgi_param WP_ENV production; | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/312551",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/192028/"
]
} |
312,627 | I've a string separated with commas in which I want to find my input: echo US | grep "US,CA,CH,JP" The output is empty! How should I use grep to find my input in that string? | Swap the arguments of the commands:
echo "US,CA,CH,JP" | grep US
In:
echo US | grep "US,CA,CH,JP"
you are looking for the string (pattern) US,CA,CH,JP in the input string US, which expectedly does not match. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/312627",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/192083/"
]
} |
312,631 | I use set -e to stop a bash script on the first error. All works OK unless I use a command with &&:
$ cat script
set -e
cd not_existing_dir && echo 123
echo "I'm running! =P"
$
$ ./script
./script: line 2: cd: not_existing_dir: No such file or directory
I'm running! =P
$
compared with:
$ cat script
set -e
cd not_existing_dir
echo "I'm running! =P"
$
$ ./script
./script: line 2: cd: not_existing_dir: No such file or directory
$
The first example still echoes I'm running!, but the second one doesn't. Why do they behave differently? UPD. Similar question: https://stackoverflow.com/questions/6930295/set-e-and-short-tests | This is documented behavior. The bash(1) man page says, for set -e:
The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test following the if or elif reserved words, part of any command executed in a && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command's return value is being inverted with !. [Emphasis added.]
And the POSIX Shell Command Language Specification confirms that this is the correct behavior:
The -e setting shall be ignored when executing the compound list following the while, until, if, or elif reserved word, a pipeline beginning with the ! reserved word, or any command of an AND-OR list other than the last.
and Section 2.9.3 Lists of that document defines:
An AND-OR list is a sequence of one or more pipelines separated by the operators "&&" and "||". | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/312631",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26279/"
]
} |
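A minimal sketch demonstrating the documented exception (the file name demo is arbitrary):
$ cat demo
set -e
false && echo one    # the failing left-hand side of && is exempt from set -e
echo two             # still reached
false                # a plain failing command does trigger set -e
echo three           # never reached
$ bash demo
two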
312,687 | From the Arch Linux Wiki: https://wiki.archlinux.org/index.php/USB_flash_installation_media
# dd bs=4M if=/path/to/archlinux.iso of=/dev/sdx status=progress && sync
[...] Do not miss sync to complete before pulling the USB drive.
I would like to know: What does it do? What consequences are there if it is left out? Notes: the dd command used with the optional status=progress:
tar -xzOf archlinux-2016-09-03-dual.iso | dd of=/dev/disk2 bs=4M status=progress && sync
Or using pv for progress:
tar -xzOf archlinux-2016-09-03-dual.iso | pv | dd of=/dev/disk2 bs=4M && sync | dd does not bypass the kernel disk caches when it writes to a device, so some part of the data may not yet be written to the USB stick upon dd completion. If you unplug your USB stick at that moment, the content on the USB stick would be inconsistent. Thus, your system could even fail to boot from this USB stick. sync flushes any still-in-cache data to the device. Instead of invoking sync you could use dd's fdatasync conversion option:
fdatasync physically write output file data before finishing
In your case, the command would be:
tar -xzOf archlinux-2016-09-03-dual.iso | \
dd of=/dev/disk2 bs=4M status=progress conv=fdatasync
conv=fdatasync makes dd effectively call the fdatasync() system call at the end of the transfer, just before dd exits (I checked this with dd's sources). This confirms that dd would not bypass nor flush the caches unless explicitly instructed to do so. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/312687",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33386/"
]
} |
312,697 | I am trying to curl some URL which returns a json file; then I want to parse hosts from it and create a comma-separated string. I have the first part working:
curl -s -u "admin:admin" -H "X-Requested-By: ambari" "https://hbasecluster.net/api/v1/clusters/mycluster/services/ZOOKEEPER/components/ZOOKEEPER_SERVER" | jq -r '.host_components[].HostRoles.host_name'
which returns
zk0-mycluster.net
zk1-mycluster.net
zk2-mycluster.net
Now I want to join these into one string like zk0-mycluster.net,zk1-mycluster.net,zk2-mycluster.net | Do it in jq, but see @Kusalananda's answer first:
jq -r '.host_components[].HostRoles.host_name | join(",")'
No, that's wrong. This is what you need:
jq -r '.host_components | map(.HostRoles.host_name) | join(",")'
Demo:
jq -r '.host_components | map(.HostRoles.host_name) | join(",")' <<DATA
{"host_components":[
  {"HostRoles":{"host_name":"one"}},
  {"HostRoles":{"host_name":"two"}},
  {"HostRoles":{"host_name":"three"}}
]}
DATA
outputs one,two,three | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/312697",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66640/"
]
} |
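A shell-side alternative sketch: keep the original jq filter from the question and join the lines with paste (coreutils); response.json stands in for the curl output:
jq -r '.host_components[].HostRoles.host_name' response.json | paste -sd, -
Here, paste -s serializes its input into a single line and -d, separates the items with commas.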
312,702 | I'm trying to create a JSON in BASH where one of the fields is based on the result of an earlier command:
BIN=$(cat next_entry)
OUTDIR="/tmp/cpupower/${BIN}"
echo $OUTDIR
JSON="'"'{"hostname": "localhost", "outdir": "${OUTDIR}", "port": 20400, "size": 100000}'"'"
echo $JSON
The above script, when executed, returns:
/tmp/cpupower/0, port: 20400, size: 100000}': /tmp/cpupower/0
How can I properly substitute variables inside these multi-quoted strings? | JSON=\''{"hostname": "localhost", "outdir": "'"$OUTDIR"'", "port": 20400, "size": 100000}'\'
That is, get out of the single quotes for the expansion of $OUTDIR. We put that expansion inside double-quotes for good measure, even though for a scalar variable assignment it's not strictly necessary. When you're passing the $JSON variable to echo, quotes are necessary, though, to disable the split+glob operator. It's also best to avoid echo for arbitrary data:
printf '%s\n' "$JSON" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/312702",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83216/"
]
} |
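Where jq is available, a sketch that sidesteps the manual quoting altogether, since jq escapes the interpolated value itself (variable names taken from the question):
JSON=$(jq -n --arg outdir "$OUTDIR" \
  '{hostname: "localhost", outdir: $outdir, port: 20400, size: 100000}')
printf '%s\n' "$JSON"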
312,754 | [EDIT #1 by OP: Turns out this question is quite well answered by exiftool creator/maintainer Phil Harvey in a duplicate thread on the ExifTool Forum] [EDIT #2 by OP: From the ExifTool FAQ: ExifTool is not guaranteed to remove metadata completely from a file when attempting to delete all metadata. See 'Writer Limitations'.] I'd like to search my old hard drives for photos that are not on my current backup drive. Formats include jpg, png, tif, etc., as well as various raw formats (different camera models and manufacturers). I'm only interested in uniqueness of the image itself and not uniqueness due to differences in, say, the values of exif tags, the presence/absence of a given exif tag itself, embedded thumbnails, etc. Even though I don't expect to find any corruption/data-rot between different copies of otherwise identical images, I'd like to detect that, as well as differences due to resizing and color changes. [EDIT #3 by OP: For clarification: a small percentage of false positives is tolerable (a file is concluded to be unique when it isn't) and false negatives are highly undesirable (a file is wrongly concluded to be a duplicate).] My plan is to identify uniqueness based on md5sums after stripping any and all metadata. How can I strip the metadata? Will exiftool -all= <filename> suffice? | With the imagemagick package, and not only for JPEGs, you can simply:
mogrify -strip ./*.jpg
The ./ is to avoid problems with filenames starting with "-". From the manual:
-strip strip the image of any profiles, comments or these PNG chunks: bKGD,cHRM,EXIF,gAMA,iCCP,iTXt,sRGB,tEXt,zCCP,zTXt,date.
Much more info and caveats here. This is similar to @grochmal's answer, but much more straightforward and simple. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/312754",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/192033/"
]
} |
312,759 | I'd like to copy a partition to another laptop (to back it up). I'll connect the side-by-side laptops with an ethernet cable. To do the backup, I'd specifically like to use the "dd" command. For simplicity, I haven't shown below the "netcat" commands (which provide the laptop-to-laptop communication) that I'll be using along with dd. So below is the effective command that I plan to run to do the backup:
dd if=/dev/sda6 of=/home/name/sda6.img
To restore the backup later, I plan to run:
dd if=/home/name/sda6.img of=/dev/sda6
BUT I'VE READ that in order to do the latter command (which is a reversal of the first), the destination partition needs to be LARGER than the image file, even though I'm just putting the SAME DATA back where it came from. This would involve me in increasing the size of the original partition, which might involve extensive work, which I'd rather not do. SO MY QUESTION IS, for the restore, will I really need to increase the size of the destination partition, or will the data fit neatly back on the partition? Also, if I WOULD need to increase the partition size with the above scenario, is there some tweak I can do to the dd commands that makes increasing the partition size unnecessary? Added detail: I realise that the following are alternative ways to do such a backup, but I chose not to use them for the reasons below: tar: apparently doesn't back up certain attributes or something like that. cp: apparently doesn't back up certain things, like ACLs. rsync: this command has too many options, which just confuses me and complicates things, whereas I know where I am with dd. | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/312759",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/192185/"
]
} |
312,796 | In my quest for security-related knowledge on a more specific StackExchange site, someone pointed me towards the more general Debian mailing lists (as my chances for answers would probably be better there). However, I am fairly intimidated by the topics and the deep conversations on there. I have already subscribed and will first search the existing archive, naturally... but I think I'll end up having to ask my question there, in the end. So hence my - more general - normative question here: Is it generally accepted to ask a newbie question on a distro's mailing list (of course after having tried to search the web / read up as much as possible)? I found (just discovered) StackExchange to have a low barrier of entry (as long as one respects the basic rules / uses common sense). But I'm not sure whether dedicated old-school mailing-list followers are as kind as the people on here. | Debian has a number of user-oriented mailing lists; the most general one is debian-user, which is described as "Community assistance and support for Debian users". This list is most definitely open to newbies, and newbie questions are welcome there! | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/312796",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186373/"
]
} |
312,803 | When I run the following command as a normal user, everything works correctly:
fabio@myclient:~$ rsync -rv myserver:~/backup /home/fabio/backup/
It works without any user interaction, but I need to run it in a script executed as root, so I tried:
root@myclient:~# sudo -u fabio rsync -rv myserver:~/backup /home/fabio/backup/
and also tried:
root@myclient:~# su - fabio -c "rsync -rv myserver:~/backup /home/fabio/backup"
Both work but ask me a "passphrase for key"; can I avoid it? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/312803",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63753/"
]
} |
312,892 | My file has the content below:
rat|minty|ruhul|balaji|rat_123|decode|rat_123|abc|def|ghi|jkl|rat|cde|ind|rat
rat|minty|ruhul|balaji|rat1_123|decode|rat_123|abc|def|ghi|jkl|rat|cde|ind|rat
rat|minty|ruhul|balaji|rat2_123|decode|rat_123|abc|def|ghi|jkl|rat|cde|ind|rat
I need to replace _ with | but only in the 5th field. Expected output:
rat|minty|ruhul|balaji|rat|123|decode|rat_123|abc|def|ghi|jkl|rat|cde|ind|rat
rat|minty|ruhul|balaji|rat1|123|decode|rat_123|abc|def|ghi|jkl|rat|cde|ind|rat
rat|minty|ruhul|balaji|rat2|123|decode|rat_123|abc|def|ghi|jkl|rat|cde|ind|rat
| With awk, use gsub() on the 5th field:
$ awk 'BEGIN{FS=OFS="|"} {gsub("_",FS,$5)}1' file
rat|minty|ruhul|balaji|rat|123|decode|rat_123|abc|def|ghi|jkl|rat|cde|ind|rat
rat|minty|ruhul|balaji|rat1|123|decode|rat_123|abc|def|ghi|jkl|rat|cde|ind|rat
rat|minty|ruhul|balaji|rat2|123|decode|rat_123|abc|def|ghi|jkl|rat|cde|ind|rat
Explanation
BEGIN{FS=OFS="|"}
Set the field separator to |. This way, we can address $5 as the 5th field and so on.
{gsub("_",FS,$5)}
Replace all the _ in the 5th field with FS. That is, with |.
1
Trigger awk's default action: print the current (modified) record. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/312892",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/192288/"
]
} |
312,978 | Using bash, how can I read the terminal answerback into a variable without user interaction? The following one-liner still requires Enter to be pressed once:
echo -ne '\005' && read -s && echo ${REPLY}
Also, how can I configure Xterm to send something meaningful in response to ^E? So far the only terminal emulator I've seen sending any answerback is PuTTY. | The answerback string is configurable in xterm using the answerbackString resource. That resource was added in 1998. Initially returning "xterm", since 1999 it defaults to an empty string because some users noticed the potential for abuse of control sequences which might send an unexpected command to the computer. Other terminals may return an empty string always (konsole, mlterm, vte). But rxvt (and rxvt-unicode) return an unexpected response: the device attributes response for VT102 (an escape sequence). PuTTY returns "PuTTY" (probably due to early influence by xterm). In a quick check, the Linux console displays an "a" (probably a bug). Because the original VT100 provided this as a setup/configurable feature, that would have been limited to printable characters. For that reason, rxvt/urxvt's response is unexpected. The manual page makes an obscure comment about this:
answerbackString: string
Specify the reply rxvt-unicode sends to the shell when an ENQ (control-E) character is passed through. It may contain escape values as described in the entry on keysym following.
(the promised description is absent). Because the length of the answerback string is unknown, an application that reads it must allow for waiting (in case the characters arrive in more than one read operation). There is of course the ksh/bash-specific TMOUT feature which can help with shell scripting, along with the -t option for the read command. For general use, I avoid that, using stty, e.g. (see dynamic.sh):
stty raw -echo min 0 time 5
to temporarily set the terminal so that a read will time out after 0.5 seconds, and allow it to return without reading any characters. To see how the settings are saved/restored, it helps to read the script. Further reading: the vttests scripts in xterm, many of which read terminal response strings. stty - set the options for a terminal (POSIX). read - read a line from standard input (POSIX). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/312978",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94327/"
]
} |
313,017 | I have the following function split in my .bash_profile file:
function split {
    name="${$1%.*}"
    ext="${$1##*.}"
    echo filename=$name extension=$ext
}
Now I should expect that the command split foo.bar will give me filename=foo extension=bar, but I get a -bash: ${$1%.*}: bad substitution error message. The same, however, works for a usual shell variable in a shell script, say $x instead of $1, in .bash_profile (I think the same goes for .bashrc as well). What's wrong, and is there any remedy? | Drop the $ preceding the variable name (1) inside the parameter expansion:
name="${1%.*}"
ext="${1##*.}"
You are already referring to the variable with the $ preceding the starting brace {; there is no need for another one in front of the variable name. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/313017",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/115203/"
]
} |
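Putting the answer's fix back into the question's function gives this runnable sketch:
split() {
    name="${1%.*}"
    ext="${1##*.}"
    echo "filename=$name extension=$ext"
}
split foo.bar    # prints: filename=foo extension=bar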
313,046 | How do I rename the RedHat7 hostname without a reboot? I am also trying to automate this task. | [root@stephan ~]# echo stephan2 > /etc/hostname # this is the file that your system reads on boot, to determine the hostname
[root@stephan ~]# sed -i s/stephan/stephan2/g /etc/hosts # many networking headaches will ensue if this isn't updated
[root@stephan ~]# hostname -F /etc/hostname # reread the hostname file to update the system's hostname
Your prompt will continue to display the old hostname until you log out of it and back in, or execute a new shell session.
[root@stephan ~]# logout
[stephan@stephan ~]$ sudo su -
Last login: Wed Sep 28 18:06:35 EDT 2016 on pts/0
[root@stephan2 ~]# | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/313046",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/190994/"
]
} |
313,080 | I've compiled the hpn version of openssh (OpenSSH_7.2p2-hpn14v11); sshd itself is working just fine. The problem is that every 2-3 minutes systemd restarts sshd, as it doesn't consider the service started properly. When I replace it with Ubuntu's package of the same version, it works as it should. I've even tested on a VM with a clean install - same thing. What am I doing wrong?
● ssh.service - OpenBSD Secure Shell server
   Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
   Active: activating (start) since Wed 2016-09-28 20:18:49 EDT; 42s ago
 Main PID: 24279 (sshd)
    Tasks: 9
   Memory: 6.8M
      CPU: 164ms
   CGroup: /system.slice/ssh.service
           ├─20041 sshd: root@pts/0
           ├─20047 -bash
           ├─24279 /usr/sbin/sshd -D
           ├─24628
           └─24629 pager
Sep 28 20:18:49 hostname systemd[1]: Starting OpenBSD Secure Shell server...
Sep 28 20:18:49 hostname sshd[24279]: Server listening on 0.0.0.0 port 22
cat /lib/systemd/system/ssh.service
[Unit]
Description=OpenBSD Secure Shell server
After=network.target auditd.service
ConditionPathExists=!/etc/ssh/sshd_not_to_be_run
[Service]
EnvironmentFile=-/etc/default/ssh
ExecStart=/usr/sbin/sshd -D $SSHD_OPTS
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
RestartPreventExitStatus=255
Type=notify
[Install]
WantedBy=multi-user.target
Alias=sshd.service
Logs:
Sep 29 02:22:03 xxx sshd[15007]: Server listening on 0.0.0.0 port 22.
Sep 29 02:22:03 xxx sshd[15007]: Server listening on :: port 22.
Sep 29 02:23:33 xxx systemd[1]: ssh.service: Start operation timed out. Terminating.
Sep 29 02:23:33 xxx systemd[1]: Failed to start OpenBSD Secure Shell server.
Sep 29 02:23:33 xxx systemd[1]: ssh.service: Unit entered failed state.
Sep 29 02:23:33 xxx systemd[1]: ssh.service: Failed with result 'timeout'.
Sep 29 02:23:33 xxx systemd[1]: ssh.service: Service hold-off time over, scheduling restart.
Sep 29 02:23:33 xxx systemd[1]: Stopped OpenBSD Secure Shell server.
Sep 29 02:23:33 xxx systemd[1]: Starting OpenBSD Secure Shell server...
Sep 29 02:23:33 xxx sshd[15775]: Server listening on 0.0.0.0 port 22.
Sep 29 02:23:33 xxx sshd[15775]: Server listening on :: port 22. | Ubuntu switched to the systemd way of letting systemd know when the service has started. This is obvious from the option Type=notify, which makes it impossible to use OpenSSH without the systemd patch.
There are two possible solutions:
1. Change the line Type=notify to Type=forking, add a new line with PIDFile=/var/run/sshd.pid, and change ExecStart to /usr/sbin/sshd $SSHD_OPTS:
Type=forking
PIDFile=/var/run/sshd.pid
ExecStart=/usr/sbin/sshd $SSHD_OPTS
2. Build your OpenSSH with the patch from Debian/Ubuntu:
From fe97848e044743f0bac019a491ddf0138f84e14a Mon Sep 17 00:00:00 2001
From: Michael Biebl <[email protected]>
Date: Mon, 21 Dec 2015 16:08:47 +0000
Subject: Add systemd readiness notification support
Bug-Debian: https://bugs.debian.org/778913
Forwarded: no
Last-Update: 2016-01-04
Patch-Name: systemd-readiness.patch
---
 configure.ac | 24 ++++++++++++++++++++++++
 sshd.c       |  9 +++++++++
 2 files changed, 33 insertions(+)
diff --git a/configure.ac b/configure.ac
index f822fb3..6cafb15 100644
--- a/configure.ac
+++ b/configure.ac
@@ -4319,6 +4319,29 @@ AC_ARG_WITH([kerberos5],
 AC_SUBST([GSSLIBS])
 AC_SUBST([K5LIBS])
+# Check whether user wants systemd support
+SYSTEMD_MSG="no"
+AC_ARG_WITH(systemd,
+	[  --with-systemd          Enable systemd support],
+	[ if test "x$withval" != "xno" ; then
+		AC_PATH_TOOL([PKGCONFIG], [pkg-config], [no])
+		if test "$PKGCONFIG" != "no"; then
+			AC_MSG_CHECKING([for libsystemd])
+			if $PKGCONFIG --exists libsystemd; then
+				SYSTEMD_CFLAGS=`$PKGCONFIG --cflags libsystemd`
+				SYSTEMD_LIBS=`$PKGCONFIG --libs libsystemd`
+				CPPFLAGS="$CPPFLAGS $SYSTEMD_CFLAGS"
+				SSHDLIBS="$SSHDLIBS $SYSTEMD_LIBS"
+				AC_MSG_RESULT([yes])
+				AC_DEFINE(HAVE_SYSTEMD, 1, [Define if you want systemd support.])
+				SYSTEMD_MSG="yes"
+			else
+				AC_MSG_RESULT([no])
+			fi
+		fi
+	fi ]
+)
+
 # Looking for programs, paths and files
 PRIVSEP_PATH=/var/empty
@@ -5121,6 +5144,7 @@
 echo "          libedit support: $LIBEDIT_MSG"
 echo " Solaris process contract support: $SPC_MSG"
 echo "  Solaris project support: $SP_MSG"
 echo " Solaris privilege support: $SPP_MSG"
+echo "          systemd support: $SYSTEMD_MSG"
 echo " IP address in \$DISPLAY hack: $DISPLAY_HACK_MSG"
 echo " Translate v4 in v6 hack: $IPV4_IN6_HACK_MSG"
 echo "         BSD Auth support: $BSD_AUTH_MSG"
diff --git a/sshd.c b/sshd.c
index 837409b..868df9e 100644
--- a/sshd.c
+++ b/sshd.c
@@ -85,6 +85,10 @@
 #include <prot.h>
 #endif
+#ifdef HAVE_SYSTEMD
+#include <systemd/sd-daemon.h>
+#endif
+
 #include "xmalloc.h"
 #include "ssh.h"
 #include "ssh1.h"
@@ -2117,6 +2121,11 @@ main(int ac, char **av)
 		unsetenv("SSH_SIGSTOP");
 	}
+#ifdef HAVE_SYSTEMD
+	/* Signal systemd that we are ready to accept connections */
+	sd_notify(0, "READY=1");
+#endif
+
 	/* Accept a connection and return in a forked child */
 	server_accept_loop(&sock_in, &sock_out, &newsock, config_s); | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/313080",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/192417/"
]
} |
313,085 | I'm writing a man page for a program that I am packaging. How can I display the manpage file that I created, to check if it's all right? Is there a way to pass my file directly to the man command instead of having it search the installed manpages by name? I tried doing things like man myprog.1 and man < myprog.1 but in both cases I got an error saying that the man page could not be found. | man has an option to read a local file: -l
-l, --local-file
Activate `local' mode. Format and display local manual files instead of searching through the system's manual collection. Each manual page argument will be interpreted as an nroff source file in the correct format. No cat file is produced. If '-' is listed as one of the arguments, input will be taken from stdin. When this option is not used, and man fails to find the page required, before displaying the error message, it attempts to act as if this option was supplied, using the name as a filename and looking for an exact match.
So you can preview your work in progress with:
man -l /path/to/manfile.1 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/313085",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23960/"
]
} |
313,089 | I have a home file server that I use Ubuntu on. Recently, one of my drives filled up, so I got another and put it in there. I have a very large folder; the directory is about 1.7T in size and contains a decent number of files. I used GCP to COPY the files from the old drive to the new one, and it seems to have worked fine. I want to now validate the new directory on the new drive against the original directory on the old drive before I delete the data from the old drive to free up space. I understand that I can do a CRC check to do this. How, specifically, can I do this? | I'd simply use the diff command:
diff -rq --no-dereference /path/to/old/drive/ /path/to/new/drive/
This reads and compares every file in the directory trees and reports any differences. The -r flag compares the directories recursively, while the -q flag just prints a message to screen when files differ - as opposed to printing the actual differences (as it does for text files). The --no-dereference flag may be useful if there are symbolic links that differ, e.g., in one directory, a symbolic link, and in its corresponding directory, a copy of the file that was linked to. If the diff command prints no output, that means the directory trees are indeed identical; you can run echo $? to verify that its exit status is 0, indicating that both sets of files are the same. I don't think computing CRCs or checksums is particularly beneficial in this case. It would make more sense if the two sets of files were on different systems and each system could compute the checksums for their own set of files, so only the checksums need to be sent over the network. Another common reason for computing checksums is to keep a copy of the checksums for future use. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/313089",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122624/"
]
} |
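If a keepable record is wanted rather than a one-off comparison, a checksum-manifest sketch along the lines hinted at in the answer (md5sum shown, but any hash tool works; the paths are placeholders):
( cd /path/to/old/drive && find . -type f -exec md5sum {} + | sort -k2 ) > old.md5
( cd /path/to/new/drive && find . -type f -exec md5sum {} + | sort -k2 ) > new.md5
diff old.md5 new.md5
No diff output means every file's content matched.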
313,093 | I use zsh's menu-based tab completion. I press Tab once, and a list of possible completions appears. If I press Tab again, I can navigate this list with the arrow keys. However, is it possible to navigate them with the vi-like H, J, K, L keys instead? I use emacs mode for command-line input, with bindkey -e in ~/.zshrc. I also use zim with zsh. If relevant, the commands that specify the tab-completion system are here. | Yes, you can, by enabling menu select:
zstyle ':completion:*' menu select
zmodload zsh/complist
...
# use the vi navigation keys in menu completion
bindkey -M menuselect 'h' vi-backward-char
bindkey -M menuselect 'k' vi-up-line-or-history
bindkey -M menuselect 'l' vi-forward-char
bindkey -M menuselect 'j' vi-down-line-or-history | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/313093",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18887/"
]
} |
313,101 | I am trying to download some debian packages and their dependencies into a directory. I tried using the command aptitude download <package_name> and it downloaded the package without its dependencies. How do I tell it to download the dependencies too? | You can use apt-rdepends to build the complete set of dependencies (recursively), including the main package, then download that:
apt-get download $(apt-rdepends "${package}" | grep -v ^\ )
(replacing "${package}" of course). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/313101",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/192437/"
]
} |
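To collect everything into one directory, as the question asks, a small sketch (apt-get download writes the .deb files into the current directory; ./debs is an arbitrary name):
mkdir -p ./debs && cd ./debs
apt-get download $(apt-rdepends "${package}" | grep -v ^\ )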
313,107 | I just made an image of a freshly installed dual boot (Ubuntu and Windows) using this command (which I've been using for a while for smaller images):
dd if=/dev/sda | gzip > /mnt/drive.img.gz
On this drive, less than 60G out of 500G are used. Nevertheless, that image file is now 409G big. How is that? Shouldn't gzip manage to compress all those zeros? As I said, it is a freshly installed system. It couldn't be that cluttered. Now, I didn't expect the file to be 60G, but 400G seems very large to me. | How is that? Shouldn't gzip manage to compress all those zeros? Yes, if they were zeroes. Unused disk space does not mean it contains zeros; it means it is unused, and may contain anything. There are programs that wipe unused disk space to zeroes. I suggest you use those before making the disk image. (I don't recall any offhand; in Linux, I'd just use dd if=/dev/zero bs=1048576 of=somefile to create files containing only zeroes, filling up each filesystem; then remove them before making the image. Also, I prefer xz over gzip.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/313107",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/178565/"
]
} |
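A sketch of the zero-filling step suggested in the answer, run against the mounted filesystem before imaging; /mnt/target is a placeholder mount point, and the dd is expected to stop with a "No space left on device" error once the free space is consumed:
dd if=/dev/zero bs=1048576 of=/mnt/target/zerofile
sync
rm /mnt/target/zerofile
After removing the file, the previously free blocks contain zeros and compress well.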
313,148 | I have a python script which is a little unstable and gives an SSL error every now and then. The exception is raised by some function buried deep within some library, so there is essentially no fix for it. I have implemented a hacky solution by creating a shell script with a while loop that runs a large number of times, executing the python script from within that loop. Now what I'm hoping is that as an iteration of the loop starts and the script is executed, the loop inside the shell script stays where it is until the python script fails, then the next iteration of the loop re-executes the script, and so on. Is doing it like this efficient? Is there a better way to do it? And most importantly, is it correct? | In:
cmd1
cmd2
or
cmd1; cmd2
They are executed sequentially. In
cmd1 && cmd2
or
cmd1 || cmd2
They are executed sequentially, but whether cmd2 is executed at all depends on whether cmd1 succeeds (for && ) or fails (for || ). In
cmd1 | cmd2
or
cmd1 & cmd2
cmd1 |& cmd2   # ksh
coproc cmd1; cmd2   # bash/zsh
or
cmd1 <(cmd2)   # ksh/zsh/bash (also yash, though with a different meaning)
They are executed concurrently. In the first case, some shells only wait for cmd2 before carrying on with the rest of the script while some wait for both. In the second case, shells only wait for cmd2 ( cmd1 is said to be run asynchronously, or in the background when run in an interactive shell) and in the third for cmd1 ( cmd2 asynchronous). In:
< "$(cmd1)" x=$(cmd2) y=$(cmd3) cmd4 "$(cmd5)" > "$(cmd6)"
Commands are run sequentially, but the order depends on the shell. In any case, cmd4 will be executed last. In:
cmd1 =(cmd2)   # zsh
They are executed sequentially ( cmd2 first). Note that in all those cases, any of those commands could start other processes. The shell would have no knowledge of them, so it can't possibly wait for them. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/313148",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/192487/"
]
} |
313,211 | I am looking for the file that represents the Solaris 10 package database - if there is such a thing. The reason is that I want to be able to efficiently compute a checksum that represents the current patch level (including any third-party packages), so that after a possible rollback I can "prove" the rollback was a success (e.g. rolling back using zfs snapshots). So I thought the package database where pkginfo gets its data from would be a natural choice. Any ideas? | Solaris 10 uses /var/sadm/pkg and /var/sadm/patch to track package and patch versions. It does not use a database like RPM does. If you are using ZFS snapshots as a way to roll back from patches, try checking the contents of /var/sadm/patch before patching, post patching, and post rollback. If you are not already using the feature, investigate Alternate Boot Environments for Solaris, especially since you are already using ZFS. Here is a whitepaper to get you started. You create your ABE, patch it, activate it, and reboot. If things don't work, activate the previous boot environment and reboot again. It's a great feature of Solaris. With this methodology, you have an explicit copy (or zfs snapshot/clone) that you apply patches to, so there is a clear delineation between the pre and post patching environments for your validation purposes. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/313211",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49331/"
]
} |
313,218 | CentOS7. Network is all OK. # cat /etc/resolv.conf nameserver 192.168.1.1 Then I decide to change the host name. # hostnamectl set-hostname host.domain# reboot# cat /etc/resolv.conf # Generated by NetworkManagersearch domain# No nameservers found; try putting DNS servers into your# ifcfg files in /etc/sysconfig/network-scripts like so:## DNS1=xxx.xxx.xxx.xxx# DNS2=xxx.xxx.xxx.xxx# DOMAIN=lab.foo.com bar.foo.com Why does hostnamectl set-hostname mess up my /etc/resolv.conf ? | hostnamectl isn't the culprit here — NetworkManager is. On CentOS 7, NetworkManager regenerates /etc/resolv.conf from the connection profiles whenever it starts (note the # Generated by NetworkManager header), so a hand-edited nameserver line is lost on the next reboot; the hostname change just happened to be followed by your first reboot. Two clean fixes: put the DNS server into the connection profile so NetworkManager writes it back itself — add DNS1=192.168.1.1 to the interface's file under /etc/sysconfig/network-scripts/ (or use nmcli con mod <connection> ipv4.dns 192.168.1.1 ) and restart the connection; or tell NetworkManager to leave the file alone by adding dns=none under the [main] section of /etc/NetworkManager/NetworkManager.conf and maintaining /etc/resolv.conf by hand. The search domain line most likely comes from the domain part of the new host name, which is expected behaviour. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/313218",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14237/"
]
} |
313,234 | The goal is to launch Xephyr in a script. The most popular approach is this: Xephyr :4 &sleep 1 # or sleep 2launch_your_program_here Let's see what happens here. First, unfortunately, Xephyr cannot daemonize itself, so we have to launch it asynchronously ( & ). Then we don't know when it's initialized, so we give it 1-2 seconds to initialize. This feels hacky. How to do it faster and more reliably? | Instead of guessing with sleep , let the server tell you when it is ready. X servers, Xephyr included, support the -displayfd fd option: the server picks a free display number and writes it to the given file descriptor only once it is ready to accept connections. In bash you can block on that; a sketch: read display_num < <(Xephyr -displayfd 1 2>/dev/null) then DISPLAY=":$display_num" launch_your_program_here — read returns the instant the server is up, and Xephyr keeps running in the background. If you really need the fixed display :4 , poll it cheaply instead: Xephyr :4 & then until DISPLAY=:4 xdpyinfo >/dev/null 2>&1; do sleep 0.1; done before launching your program. Either way the wait lasts exactly as long as the initialization, with no arbitrary sleep. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/313234",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92541/"
]
} |
313,242 | I am responsible for some daily backups that are over 1GB in size. I don't want to rsync them all to the backup server. I'd like to delete any files in a certain folder, older than X days, then rsync the rest. Can this be done via a cron or will it be better to try and write a script? | This is a one-liner job for cron; no elaborate script is needed. find can handle the age-based cleanup by itself — to delete regular files older than, say, 7 days: find /path/to/backups -type f -mtime +7 -delete (dry-run first by replacing -delete with -print ). You can chain the rsync right after it, either directly in the crontab entry — 0 3 * * * find /path/to/backups -type f -mtime +7 -delete && rsync -a /path/to/backups/ user@backupserver:/backups/ — or in a two-line script that cron calls, which is a bit easier to test and extend later. The paths here are placeholders; adjust +7 to whatever retention you need. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/313242",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106673/"
]
} |
313,247 | I accidentally made a mistake in the sudoers file, and now I can't fix it because if I try to change the file it gives a permission denied message. If I use sudo to open the file, then it says syntax error and no valid sudoers sources found and won't execute the command. I'm on Raspbian, a flavor of Debian. Does anyone know how to get out of this catch-22? | You're going to have to boot into single user mode. https://serverfault.com/questions/482079/debian-boot-to-single-user-mode As the root user, you'll be able to edit the sudoers file to fix it. I highly recommend using the visudo command to edit your sudoers file in the future, to prevent having to go through this again, as visudo does a syntax check on the file before saving it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/313247",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/182020/"
]
} |
313,256 | At work, I write bash scripts frequently. My supervisor has suggested that the entire script be broken into functions, similar to the following example: #!/bin/bash# Configure variablesdeclare_variables() { noun=geese count=three}# Announce somethingi_am_foo() { echo "I am foo" sleep 0.5 echo "hear me roar!"}# Tell a jokewalk_into_bar() { echo "So these ${count} ${noun} walk into a bar..."}# Emulate a pendulum clock for a bitdo_baz() { for i in {1..6}; do expr $i % 2 >/dev/null && echo "tick" || echo "tock" sleep 1 done}# Establish run ordermain() { declare_variables i_am_foo walk_into_bar do_baz}main Is there any reason to do this other than "readability", which I think could be equally well established with a few more comments and some line spacing? Does it make the script run more efficiently (I would actually expect the opposite, if anything), or does it make it easier to modify the code beyond the aforementioned readability potential? Or is it really just a stylistic preference? Please note that although the script doesn't demonstrate it well, the "run order" of the functions in our actual scripts tends to be very linear -- walk_into_bar depends on stuff that i_am_foo has done, and do_baz acts on stuff set up by walk_into_bar -- so being able to arbitrarily swap the run order isn't something we would generally be doing. For example, you wouldn't suddenly want to put declare_variables after walk_into_bar , that would break things. An example of how I would write the above script would be: #!/bin/bash# Configure variablesnoun=geesecount=three# Announce somethingecho "I am foo"sleep 0.5echo "hear me roar!"# Tell a jokeecho "So these ${count} ${noun} walk into a bar..."# Emulate a pendulum clock for a bitfor i in {1..6}; do expr $i % 2 >/dev/null && echo "tick" || echo "tock" sleep 1done | I've started using this same style of bash programming after reading Kfir Lavi's blog post "Defensive Bash Programming". He gives quite a few good reasons, but personally I find these the most important: procedures become descriptive: it's much easier to figure out what a particular part of the code is supposed to do. Instead of a wall of code, you see "Oh, the find_log_errors function reads that log file for errors". Compare that with finding a whole lot of awk/grep/sed lines that use god knows what type of regex in the middle of a lengthy script - you have no idea what it's doing there unless there are comments. you can debug functions by enclosing them in set -x and set +x . Once you know the rest of the code works alright, you can use this trick to focus on debugging only that specific function. Sure, you can enclose parts of the script, but what if it's a lengthy portion? It's easier to do something like this: set -x parse_process_list set +x printing usage with cat <<- EOF . . . EOF . I've used it quite a few times to make my code much more professional. In addition, parse_args() with the getopts function is quite convenient. Again, this helps with readability, instead of shoving everything into the script as a giant wall of text. It's also convenient to reuse these. And obviously, this is much more readable for someone who knows C or Java, or Vala, but has limited bash experience. As far as efficiency goes, there's not a lot you can do - bash itself isn't the most efficient language, and people prefer perl and python when it comes to speed and efficiency.
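To make the debugging trick concrete, here is a minimal hedged sketch (the function name and log path are invented for illustration): inside main() you bracket only the suspect call, as in declare_variables; set -x; parse_log_file /var/log/myapp.log; set +x; report_results — the xtrace output then covers just parse_log_file while the rest of the run stays quiet. Note that none of this structuring makes bash itself run any faster.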
However, you can lower the priority of just one function — for example by running it in a subshell and renice-ing that subshell: ( renice 10 $BASHPID >/dev/null; resource_hungry_function ) (the external nice command cannot run shell functions directly). Compared to prefixing each and every command, this saves a whole lot of typing AND can be conveniently used when you want only a part of your script to run with lower priority. Running functions in the background, in my opinion, also helps when you want a whole bunch of statements to run in the background. Some of the examples where I've used this style: https://askubuntu.com/a/758339/295286 https://askubuntu.com/a/788654/295286 https://github.com/SergKolo/sergrep/blob/master/chgreeterbg.sh | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/313256",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62503/"
]
} |
313,263 | A "selection" tool in my okular 0.19.3 (soft -- ubuntu, 14-04) allows me to choose only one of those: "selection tool", "text selection tool", and "table selection tool". I need only "text selection tool". However, when I choose it, my mouse cannot draw a rectangle around the text I want -- the mouse simply cannot do anything. If I choose "selection tool", a little window invites me to draw a rectangle around the text/image I want, and once I do, it reports that "Image (of such and such size) is ready to be copied to clipboard", but any attempt to paste it fails. I do remember that when it was working on my old machine, it would state "text" instead of "image", and worked perfectly well. Is there anything I can do about it? Oh, I forgot to say that my "clipboard" is a "vim" window. | The most likely explanation is that this particular PDF has no text layer — the pages are scanned images. Okular's text selection tool can only select text the document actually contains, so on an image-only page dragging does nothing, and the plain selection tool can only offer you an Image where your old documents offered Text. Pasting then fails because vim, being a text editor, can accept text from the clipboard but not an image. You can verify this with pdftotext yourfile.pdf - — if it prints nothing, the file has no text layer. If that is the case, run the document through OCR first (for example with ocrmypdf input.pdf output.pdf ), after which text selection should behave the way you remember. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/313263",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/181973/"
]
} |
313,338 | I'm guessing I need to edit one of the schemas available in gsettings but I don't know which one. and when I listed all the schemas, there's just too many of them. | The key you want is favorite-apps , the schema ID is org.gnome.shell . Now to list your favorite apps you can simply run gsettings get org.gnome.shell favorite-apps or dconf read /org/gnome/shell/favorite-apps These will return an array of strings e.g. ['firefox.desktop', 'org.gnome.Terminal.desktop', 'org.gnome.Nautilus.desktop', 'org.gnome.gedit.desktop', 'gnome-calculator.desktop'] Now, to remove a value from that array you could use text processing tools like sed / awk to check if an item is in that list and remove it keeping the same format (not that trivial though definitely doable) and once you get it right just write the new settings to the database e.g. assuming you wanted to remove org.gnome.Nautilus.desktop you would run (note the double quotes): gsettings set org.gnome.shell favorite-apps "['firefox.desktop', 'org.gnome.Terminal.desktop', 'org.gnome.gedit.desktop', 'gnome-calculator.desktop']" or dconf write /org/gnome/shell/favorite-apps "['firefox.desktop', 'org.gnome.Terminal.desktop', 'org.gnome.gedit.desktop', 'gnome-calculator.desktop']" Still, it's easier to write your own utility ( using gsettings API ) that will accept one or more desktop file names as positional parameters and remove them from favorites; to get you started, here is a very basic example in python that accepts one param (run as script.py firefox.desktop ): #!/usr/bin/env pythonfrom sys import argvfrom gi.repository import Gio,GLibitem=argv[1]gschema = Gio.Settings('org.gnome.shell')gvalues=gschema.get_value('favorite-apps').unpack()if item in gvalues: gvalues.remove(item)gschema.set_value('favorite-apps', GLib.Variant('as', gvalues)) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/313338",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65536/"
]
} |
313,344 | I am reading through the source of MIT's xv6 OS . This snippet comes at the beginning of sh.c : // Ensure that three file descriptors are open.while((fd = open("console", O_RDWR)) >= 0){ if(fd >= 3){ close(fd); break; }} I understand that this checks to see if at least 3 file descriptors are open (presumably for stdin, stdout and stderr) by checking if the newly allocated file descriptor is above (or same as) 3. 1) How is it possible to open the same device multiple times from the same process and expect different file descriptors? 2) To understand this, I ran a similar snippet on my host machine (x86_64 Linux 4.6.0.1). The test program repeatedly open ed a text file in a loop to see if we can expect a different fd but it always produced the same file descriptor. From this, I concluded that open -ing a real file and a device (like /dev/console ) somehow differs because the snippet from xv6 obviously works (tested in Qemu). What exactly is the difference? #include <stdlib.h>#include <fcntl.h>#include <stdio.h>int main(void){ int fd; int cnt = 0; while ((fd = open("sample.txt", O_RDWR) > 0)) { if (cnt != 10) { cnt++; printf("File descriptor opened: %d\n", fd); } else { break; } } return 0;} Here's the output on running it: $ ./a.outFile descriptor opened: 1File descriptor opened: 1[snip]File descriptor opened: 1File descriptor opened: 1 EDIT Based on one of the answers, I ran strace on the executable and found that open indeed returns multiple file descriptors but all of which are not printed, for some reason. Why would that be? 3) Somewhat unrelated, but isn't the convention of using stdio streams in fds 0-2 just that - a convention? For example, if the initializing sequence allotted the input/output file descriptors to something else - would it somehow affect how its children do their I/O? | Taking the questions in order. 1) Every successful open() allocates a new entry in the process's file descriptor table and returns the lowest-numbered free descriptor, so opening the same file or device repeatedly from one process is perfectly normal: you get distinct descriptors (each with its own file offset) that all refer to the same object. That is exactly what the xv6 snippet exploits — each open("console", O_RDWR) returns the next free descriptor (0, then 1, then 2, ...), and as soon as a call returns 3 or more, descriptors 0-2 must all be populated, so the extra one is closed and the loop exits. 2) There is no file-vs-device difference; your test program always prints 1 because of an operator-precedence bug. In while ((fd = open("sample.txt", O_RDWR) > 0)) the comparison binds tighter than the assignment, so fd is assigned the result of open(...) > 0 , which is 1. That is also why strace shows open() really returning 3, 4, 5, ... while your program prints 1: the real descriptors are compared and thrown away. Write it as while ((fd = open("sample.txt", O_RDWR)) > 0) and you will see increasing numbers (until the per-process limit is hit, since nothing is ever closed). 3) Yes, 0-2 for stdin/stdout/stderr is only a convention, but a deeply entrenched one: descriptors survive fork() and exec() , and the C library and practically every program assume that printf writes to descriptor 1 and scanf reads from descriptor 0. If an init sequence wired standard I/O to other numbers, its children would inherit that arrangement and anything using the standard streams would misbehave — which is precisely why shells like the xv6 one go out of their way to guarantee 0-2 are open before spawning anything. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/313344",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104091/"
]
} |
313,409 | We have RH based Linux images; on which I have to "apply" some "special archive" in order to upgrade them to the latest development version of our product. The person creating the archive figured that within our base image, some permissions are wrong; so we were told to run sudo chgrp -R nobody /whatever We did that; and later on, when our application is running, obscure problems came up. What I found later on: the call to chgrp will clear the setuid bit information on our binaries within /whatever. And the actual problem is: some of our binaries must have that setuid bit set in order to function properly. Long story short: is there a way to run that "chgrp" command without killing my setuid bits? I just ran the following on my local Ubuntu; leading to the same result: mkdir stickycd sticky/touch blubchmod 4755 blub ls -al blub --> shows me file name with red background --> so, yep, setuid chgrp -R myuser .ls -al blub --> shows me file name without red background --> setuid is gone | If you want to implement your chgrp -R nobody /whatever while retaining the setuid bit you can use these two find commands find /whatever ! -type l -perm -04000 -exec chgrp nobody {} + \ -exec chmod u+s {} +find /whatever ! -type l ! -perm -04000 -exec chgrp nobody {} + The find ... -perm 04000 option picks up files with the setuid bit set. The first command then applies the chgrp and then a chmod to reinstate the setuid bit that has been knocked off. The second one applies chgrp to all files that do not have a setuid bit. In any case, you don't want to call chgrp or chmod on symlinks as that would affect their targets instead, hence the ! -type l . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/313409",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/192690/"
]
} |
313,417 | I just installed Elementary OS and almost everything is working great. The only problem is that the touchpad isn't working. I cannot move the cursor with the touchpad. But if I use a mouse it is working fine. Results when I run xinput : ⎡ Virtual core pointer id=2 [master pointer (3)]⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]⎜ ↳ Mitsumi Electric Apple Optical USB Mouse id=11 [slave pointer (2)]⎣ Virtual core keyboard id=3 [master keyboard (2)] ↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)] ↳ Power Button id=6 [slave keyboard (3)] ↳ Video Bus id=7 [slave keyboard (3)] ↳ Video Bus id=8 [slave keyboard (3)] ↳ Sleep Button id=9 [slave keyboard (3)] ↳ USB2.0 HD UVC WebCam id=10 [slave keyboard (3)] ↳ Asus WMI hotkeys id=12 [slave keyboard (3)] ↳ AT Translated Set 2 keyboard id=13 [slave keyboard (3)] Please note the Apple Optical Mouse is a usb connected mouse. The touchpad vendor is Elan. | The telling detail is your xinput output: there is no touchpad entry at all, only the USB mouse and the keyboard. That means this is not an X or desktop configuration problem — the kernel is not exposing the device, so X has nothing to drive. A few things worth checking, roughly in order (this is generic guidance, since the exact fix depends on how the pad is attached): look at dmesg | grep -iE 'elan|touchpad|i2c|psmouse' to see whether the device is detected but failing; check that the Elan driver module is present and loaded ( lsmod | grep elan , then sudo modprobe elan_i2c ); and try a live USB of a distribution with a newer kernel — Elan I2C touchpads gained mainline support relatively recently, so if the pad works there, a kernel upgrade is your fix. For PS/2-attached pads, the classic i8042 quirks ( i8042.nomux i8042.reset on the kernel command line) are also worth a try. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/313417",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187728/"
]
} |
313,421 | I've just installed Kali Linux 2016.1. The installation was successful at first, and Grub worked normally. But after that, quite often, when I boot I can't see the Grub screen; after a few seconds I automatically reach the Linux login screen, but half of the screen is static and becomes grayer and grayer. When I enter the root user and password it still logs in, but the screen then fuzzes up and becomes grayer and grayer and I can't do anything with it. I can't even access the BIOS (the words "Press ESC to enter BIOS", which normally appear when I start the computer, don't show up). Sometimes I can enter Grub normally, but when I use Kali Linux, sometimes a black screen appears for a second and then it returns to the normal screen. I tried to search, but didn't find any suitable solution. I'm using an HP Elitebook 8460. | Two different symptoms are mixed here, and they point in different directions. The half-grey, progressively degrading screen after boot is the classic look of a kernel mode setting / graphics driver problem: at the GRUB menu press e on the Kali entry, append nomodeset to the line beginning with linux , and boot with F10 . If the display is then stable, make the change permanent via GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub (then run update-grub ) and look into the proper driver for the 8460's GPU. The missing "Press ESC to enter BIOS" prompt is another matter: that message is printed by the firmware before any operating system code runs, so no GRUB or Linux setting can suppress it — intermittent failures at that stage usually indicate a hardware issue (display panel/cable, GPU, or RAM). Booting a live USB of another distribution is a quick differential test: if the corruption shows up there too, suspect hardware rather than Kali. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/313421",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/192707/"
]
} |
313,442 | I am trying to find an efficient way to do the level 5 of the OverTheWire bandit challenge . Anyway, I have a bunch of files and there is only one that fulfills the following criteria: Human-readable 1033 bytes in size Non-executable Right now, I am using the find command. I am able to find the files matching the two last criteria: find . -size 1033c ! -executable However, I don't know how to exclude non-human-readable files. Solutions I found for that challenge use the -readable test parameter, but I don't think this works. -readable only looks at the files' permissions, and not at its content, while the challenge description asks for an ASCII file or something like that. | Yes, you can use find to look for non-executable files of the right size and then use file to check for ASCII. Something like: find . -type f -size 1033c ! -executable -exec file {} + | grep ASCII The question, however, isn't as simple as it sounds. 'Human readable' is a horribly vague term. Presumably, you mean text. OK, but what kind of text? Latin character ASCII only? Full Unicode? For example, consider these three files: $ cat file1abcde$ cat file2αβγδε$ cat file3abcdeαβγδε$ cat file4#!/bin/shecho foo These are all text and human readable. Now, let's see what file makes of them: $ file *file1: ASCII textfile2: UTF-8 Unicode textfile3: UTF-8 Unicode textfile4: POSIX shell script, ASCII text executable So, the find command above will only find file1 (for the sake of this example, let's imagine those files had 1033 characters). You could expand the find to look for the string text : find . -type f -size 1033c ! -executable -exec file {} + | grep -w text With the -w , grep will only print lines where text is found as a stand-alone word. That should be pretty close to what you want, but I can't guarantee that there is no other file type whose description might also include the string text . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/313442",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/192714/"
]
} |
313,467 | When I run the command ps -e -o cmd,stime,etime= the cmd comlumn is truncated, so that the cmd path is trunctated.How do I format the column width so that all the text is displayed? | In GNU/Linux you can set the column width as follows: ps -e -o cmd:50,stime,etime= From the ps(1) manual page: -o format User-defined format. format is a single argument in the form of a blank-separated or comma-separated list, which offers a way to specify individual output columns. The recognized keywords are described in the STANDARD FORMAT SPECIFIERS section below. Headers may be renamed (ps -o pid,ruser=RealUser -o comm=Command) as desired. If all column headers are empty (ps -o pid= -o comm=) then the header line will not be output. Column width will increase as needed for wide headers; this may be used to widen up columns such as WCHAN (ps -o pid,wchan=WIDE-WCHAN-COLUMN -o comm). Explicit width control (ps opid,wchan:42,cmd) is offered too. The behavior of ps -o pid=X,comm=Y varies with personality; output may be one column named "X,comm=Y" or two columns named "X" and "Y". Use multiple -o options when in doubt. Use the PS_FORMAT environment variable to specify a default as desired; DefSysV and DefBSD are macros that may be used to choose the default UNIX or BSD columns. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/313467",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96016/"
]
} |
313,523 | I've just completed OverTheWire Bandit wargame, level 18 . That was a surprise. Here are the instructions for this level. The password for the next level is stored in a file readme in the homedirectory. Unfortunately, someone has modified .bashrc to log you out when you log in with SSH. I was thinking of a way to have the shell skip sourcing .bashrc . But before I found out a solution, I got tempted into simply starting an SFTP connection with the server. I didn't expect it to work. But it did. And I would like to know why. I thought SFTP runs over SSH. For clarity, when I SSH into the server, I get kicked out, as expected. | On bash in general Bash's design with respect to startup files is rather peculiar. Bash loads .bashrc in two unrelated circumstances: When it's an interactive shell, except when it's a login shell (and except when it's invoked as sh ). This is why .bash_profile typically loads .bashrc . When bash is not interactive nor a login shell nor invoked as sh but given a command to execute with -c and SHLVL is unset or less or equal to 1, and one of the following is true: If standard input is a socket. In practice, this mostly happens when bash is invoked by rshd , i.e. when running rsh remotehost.example.com somecommand . If activated at compile time (which is the case on some distributions, such as Debian and derivatives), if one of the environment variables SSH_CLIENT or SSH2_CLIENT is defined. In practice, this means that bash is invoked by sshd , i.e. ssh remotehost.example.com somecommand . If you don't know how bash was compiled, you can find out whether this option was set by checking whether the binary contains the string SSH_CLIENT : strings /bin/bash | grep SSH_CLIENT On SSH in general When you execute a command through the SSH protocol, the command is passed over the wire as a string. The string is executed by the remote shell. When you run ssh example.com somecommand , if the remote user's login shell is /bin/bash , the SSH server runs /bin/bash -c somecommand . There is no way to bypass the login shell. This permits restricted login shells, for example to allow only file copying and not general command execution. There is one exception: the SSH protocol allows the client to request a specific subsystem. If the client requests the sftp subsystem, then by default the OpenSSH server invokes the program /usr/lib/openssh/sftp-server (the location may vary) via the user's login shell. But it can also be configured to run an internal SFTP server through the line Subsystem sftp internal-sftp in the sshd_config file. In the case of the internal SFTP server, and only in this case, the user's login shell is bypassed. For this challenge In the case of OverTheWire Bandit 18, .bashrc contains …# If not running interactively, don't do anythingcase $- in *i*) ;; *) return;;esac…echo 'Byebye !'exit 0 So you can solve this level by doing anything that causes bash not to be interactive. As you discovered, SFTP works. But ssh [email protected] cat readme would also work. As would echo 'cat readme' | ssh [email protected] . And pressing Ctrl+C at the right time during an interactive login would also work: it would interrupt bash, so the .bashrc would not be completely executed. Bash takes macroscopic time to start up, so while this doesn't work reliably, it can be done in practice. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/313523",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
313,545 | I am currently trying to ssh into my hosting server. I have created authorised keys and added the .pub to my hosting. To save the keys so I don't have to keep doing ssh-add I ran the command ssh-add -K ~/.ssh/privatekey for each key. This worked perfectly for my MacBook which always connects, however this is not the case for my iMac. With my iMac I can connect with my ssh key fine until I reboot the computer. Once I reboot I get prompted to enter a password. To stop this I also ran the -K command, which added the identities and allowed me to connect, but unlike my MacBook I am still having to run ssh-add every time I want to connect to my hosting on my iMac. When my iMac asks for the password and I try to enter the ssh passphrase I get access denied. I have set up a config file, but nothing seems to work for my iMac. I am also running the latest version of macOS Sierra on both machines. After searching for days on Google and talking with my hosting provider I keep getting the same answer to use ssh-add -K . It just seems strange that it is not working. | This is a deliberate behaviour change in macOS Sierra: unlike earlier releases, ssh no longer pulls passphrases from the keychain automatically, so identities added with ssh-add -K survive in the keychain but are not reloaded into the agent after a reboot — exactly the symptom on your iMac (your MacBook presumably still has the keys cached in a running agent). The supported fix is to tell ssh itself to use the keychain by adding this to ~/.ssh/config : Host * AddKeysToAgent yes UseKeychain yes IdentityFile ~/.ssh/privatekey (one IdentityFile line per key, with your real paths). Run ssh-add -K ~/.ssh/privatekey once more so the passphrase is stored in the keychain; from then on every new connection loads the key automatically, even after a reboot, and no manual ssh-add is needed. The "access denied" you saw is the server rejecting your account password — by that point key authentication had already failed because the agent held no keys, so the prompt was never asking for your key's passphrase. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/313545",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/192601/"
]
} |
313,599 | My file is as below; I want to display those records of students whose percentage is above 80. Studid StudName Asp.Net DBMS Unix 1 Ani 75 62 80 2 George 90 95 82 3 Jake 45 30 40 4 Dennie 89 92 90 so I used following code: awk '(($3+$4+$5)/3)>80 {print}' stud It works, but I want to assign these columns into variable and then want to display output. So I tried below code, but it didn't work awk 'total=$3+$4+$5, per=total/3, per>80 {print}' stud any solution with variables? | You can move the logic from the rule section into an action awk '{total=$3+$4+$5; per=total/3; if (per>80) print}' stud 2 George 90 95 82 4 Dennie 89 92 90 Note that this attempts to evaluate the column headers arithmetically - which "works" because in awk , non-numeric fields are treated as zero when you try to do arithmetic on them - but would cause the header line to be printed if, for example, you changed the test to per<80 . Better IMHO would be to either explicitly skip the header line using a next action for the rule NR==1 awk 'NR==1 {next} {total=$3+$4+$5; per=total/3; if (per>80) print}' stud 2 George 90 95 82 4 Dennie 89 92 90 or, if you want the header, explicitly print it awk 'NR==1 {print; next} {total=$3+$4+$5; per=total/3; if (per>80) print}' studStudid StudName Asp.Net DBMS Unix 2 George 90 95 82 4 Dennie 89 92 90 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/313599",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/184505/"
]
} |
313,612 | I have a laptop using Arch Linux and Gnome Shell. My graphics card is an Intel HD 520. My laptop's screen resolution is 3200×1800. At work I connect my laptop to a monitor which has resolution of 1920×1080. The connection is being done using an HDMI cable. The system detects both screens and their resolutions correctly. The problem is the following: In ultra HD resolutions Gnome (does Gnome do this or is it the graphics card driver?) scales the windows in order to be bigger. This is makes perfect sense, else Firefox's bar for example would be a thin line. But the problem is that when I move a window (e.g., Firefox) from one monitor to the other, this scaling is being preserved. As a result things look huge in my 1920×1080 monitor, even though the resolution is detected correctly. Is this a bug? If yes, whose fault is it? Gnome, Intel HD graphics or the Linux Kernel? Can I correct it? | You can move the logic from the rule section into an action awk '{total=$3+$4+$5; per=total/3; if (per>80) print}' stud 2 George 90 95 82 4 Dennie 89 92 90 Note that this attempts to evaluate the column headers arithmetically - which "works" because in awk , non-numeric fields are treated as zero when you try to do arithmetic on them - but would cause the header line to be printed if, for example, you changed the test to per<80 . Better IMHO would be to either explicitly skip the header line using a next action for the rule NR==1 awk 'NR==1 {next} {total=$3+$4+$5; per=total/3; if (per>80) print}' stud 2 George 90 95 82 4 Dennie 89 92 90 or, if you want the header, explicitly print it awk 'NR==1 {print; next} {total=$3+$4+$5; per=total/3; if (per>80) print}' studStudid StudName Asp.Net DBMS Unix 2 George 90 95 82 4 Dennie 89 92 90 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/313612",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152402/"
]
} |
313,656 | How can I preserve permissions while compressing a folder using zip ? I know how to preserve symlinks using --symlinks is there a similar option for permissions? | info-zip (the program you probably are using) can save/restore permissions for Unix -like systems. It is mentioned for directories in the manual page : Dates, times and permissions of stored directories are not restored except under Unix. (On Windows NT and successors, timestamps are now restored.) File-permissions for read/write/execute are saved/restored. But a quick check shows (zip 3.0) that setuid/setgid permissions are not preserved. The feature is not optional; zip/unzip simply do this when they are able. On other systems, the ability to save/restore permissions is less complete. For example, on Windows the ZIP file uses the permission settings from the %temp% folder. Further reading: Is ZIP archive capable of storing permissions? Can i store unix permissions in a zip file (built with apache ant)? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/313656",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13394/"
]
} |
313,677 | I would like to take the output from a program and interactively filter which lines to pipe to the next command. ls | interactive-filter | xargs rm For example, I have a list of files that a pattern cannot match against to reduce. I would like a command interactive-filter that will page the output of the file list and I could interactively indicate which lines to forward to the next command. In this case each line would then be removed. | iselect provides an up-down list, (as input from a prior pipe), in which the user can tag multiple entries, (as output to the next pipe): # show some available executables ending in '*sh*' to run through `whatis`find /bin /sbin /usr/bin -maxdepth 1 -type f -executable -name '*sh' |iselect -t "select some executables to run 'whatis' on..." -a -m |xargs -d '\n' -r whatis Output after pressing the spacebar to tag a few on my system: dash (1) - command interpreter (shell)ssh (1) - OpenSSH SSH client (remote login program)mosh (1) - mobile shell with roaming and intelligent local echoyash (1) - a POSIX-compliant command line shell vipe allows interactively editing (with one's favorite text editor) what goes through a pipe. Example: # take a list of executables with long names from `/bin`, edit that# list as needed with `mcedit`, and run `wc` on the output.find /bin -type f | grep '...............' | EDITOR=mcedit vipe | xargs wc Output (after deleting some lines while in mcedit ): 378 2505 67608 /bin/ntfs-3g.secaudit 334 2250 105136 /bin/lowntfs-3g 67 952 27152 /bin/nc.traditional 126 877 47544 /bin/systemd-machine-id-setup 905 6584 247440 total Note on push & pull: iselect starts with a list in which nothing is selected. vipe starts with a list in which every item shown will be sent through the pipe, unless the user deletes it. In Debian -based distros, both utils can be installed with apt-get install moreutils iselect . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/313677",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55363/"
]
} |
313,768 | Taking the first few shaking steps with FreeBSD. Started by installing vim I thought, but: root@rpi:~ # pkg install vimUpdating FreeBSD repository catalogue...FreeBSD repository is up-to-date.All repositories are up-to-date.Checking integrity... done (1 conflicting)Cannot solve problem using SAT solver, trying another planChecking integrity... done (0 conflicting)The most recent version of packages are already installedroot@rpi:~ # vimvim: Command not found.root@rpi:~ # echo $PATH/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/root/binroot@rpi:~ # find / -type f -name vimroot@rpi:~ # find / -type l -name vimroot@rpi:~ # echo $SHELL/bin/cshroot@rpi:~ # rehashroot@rpi:~ # vimvim: Command not found. Even after reboot situation is the same: root@rpi:~ # vimvim: Command not found. What am I missing? pkg can't really have done what it was supposed to, can it? root@rpi:~ # pkg delete vimChecking integrity... done (0 conflicting)Package(s) not found! vim-lite installs OK though. root@rpi:~ # pkg info -l vimpkg: No package(s) matching vimroot@rpi:~ # pkg which /usr/local/bin/vim/usr/local/bin/vim was installed by package vim-lite-7.4.1832 pkg upgrade found nothing to upgrade, but pkg autoremove nuked all the vim dependencies: root@rpi:~ # pkg autoremoveChecking integrity... done (0 conflicting)Deinstallation has been requested for the following 70 packages:Installed packages to be REMOVED: atk-2.18.0 harfbuzz-1.2.3 pango-1.38.0_1 cairo-1.14.6,2 cscope-15.8b ctags-5.8 libXdamage-1.1.4_3 libglapi-11.2.2 gbm-11.2.2 libEGL-11.2.2 libGL-11.2.2 damageproto-1.2.1 xorg-fonts-truetype-7.7_1 dejavu-2.35 dri2proto-2.8 encodings-1.0.4_3,1 fontconfig-2.11.1_2,1 libXft-2.3.2_1 font-misc-meltho-1.0.3_3 font-bh-ttf-1.0.3_3 font-misc-ethiopic-1.0.3_3 libXfixes-5.0.1_3 fixesproto-5.0 font-util-1.3.1 mkfontscale-1.1.2 mkfontdir-1.0.7 freetype2-2.6.3 libXpm-3.5.11_4 python27-2.7.11_3 glib-2.46.2 llvm37-3.7.1_2 glproto-1.4.17 graphite2-1.3.8 icu-55.1 libX11-1.6.3,1 libXt-1.1.5,1 libXv-1.0.10_3,1 libXvMC-1.0.9 libXrender-0.9.9 libXext-1.3.3_1,1 libXxf86vm-1.1.4_1 kbproto-1.0.7 libSM-1.2.2_3,1 libICE-1.0.9_1,1 libxcb-1.11.1 xcb-util-0.4.0_1,1 xcb-util-renderutil-0.3.9_1 libXau-1.0.8_3 libXdmcp-1.1.2 libdevq-0.0.2_1 libdrm-2.4.66,1 ruby-2.2.5,1 lua52-5.2.4 libffi-3.2.1 libfontenc-1.1.3 libiconv-1.14_9 libpciaccess-0.13.4 libpthread-stubs-0.3_6 libxshmfence-1.2 libyaml-0.1.6_2 pciids-20160522 pixman-0.34.0 png-1.6.21 readline-6.3.8 renderproto-0.11.1 tcl86-8.6.5_1 videoproto-2.3.2 xextproto-7.3.0 xf86vidmodeproto-2.3.1 xproto-7.0.28The operation will free 402 MiB.Proceed with deinstalling packages? [y/N]: y[...] After installing vim-lite the find from before finds the vim binary: root@rpi:~ # find / -type f -name vim/usr/local/bin/vim So pkg really did not install the package. | Ok, that is weird. On the RPi, pkg install vim goes through the process of downloading 46 packages, but only installs 17 of them. Consequently vim-7.4.1832.txz is never actually installed. Clearly, this is a bug with one or more of the packages on the ARM platform. Hopefully, you can live with vim-lite for now. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/313768",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30608/"
]
} |
313,791 | So no directories and no hidden files. Just the files. Listing only files can be done with the line as follows. ls -p | grep -v / Now I want the result of this line to be separated by commas. | You can use tr for that job. ls -p | grep -v / | tr '\n' ',' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/313791",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193005/"
]
} |
313,819 | I have hundreds of files in various different subdirectories. Some of them have the correct file extension, but some of them don't. I want to rename all files that don't have a file extension and append a .mp4 extension to their file name. The other files should be left untouched. How can I automate this renaming operation using Bash? Or do I need a real scripting language like Perl or Python for this? | Something like this: find . -type f ! -name "*.*" -exec mv {} {}.mp4 \; | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/313819",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/192858/"
]
} |
313,825 | How do I delete a line only if it is at a specified line number and it matches the pattern? For example: I want to delete ( d ); the third line ( 3 ); if it's blank ( ^$ ); The following syntax: cat file | sed '3 /^$/d' Returns the following error: sed: -e expression #1, char 3: unknown command: `/' | Try doing this: sed '3{/^$/d;}' file Note the braces. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/313825",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189711/"
]
} |
313,940 | I'm working on a LAMP web app and there is a scheduled process somewhere which keeps creating a folder called shop in the root of the site. Every time this appears it causes conflicts with rewrite rules in the app, not good. Until I find the offending script, is there a way to prevent any folder called shop being created in the root? I know that I can change the permissions on a folder to prevent it's contents being changed, but I have not found a way to prevent a folder of a certain name being created. | You can't, given the user creating the directory has sufficient permission to write on the parent directory. You can instead leverage the inotify family of system calls provided by the Linux kernel, to watch for the creation (and optionally mv -ing) of directory shop in the given directory, if created (or optionally mv -ed), rm the directory. The userspace program you need in this case is inotifywait (comes with inotify-tools , install it first if needed). Assuming the directory shop would be residing in /foo/bar directory, let's set a monitoring for /foo/bar/shop creation, and rm instantly if created: inotifywait -qme create /foo/bar | \ awk '/,ISDIR shop$/ { system("rm -r -- /foo/bar/shop") }' inotifywait -qme create /foo/bar watches /foo/bar directory for any file/directory that might be created i.e. watch for any create event If created, awk '/,ISDIR shop$/ { system("rm -r -- /foo/bar/shop") }' checks if the file happens to be a directory and the name is shop ( /,ISDIR shop$/ ), if so rm the directory ( system("rm -r -- /foo/bar/shop") ) You need to run the command as a user that has write permission on directory /foo/bar for removal of shop from the directory. If you want to monitor mv -ing operations too, add watch for moved_to event too: inotifywait -qme create,moved_to /foo/bar | \ awk '/,ISDIR shop$/ { system("rm -r -- /foo/bar/shop") }' Just to note, if you are looking for a file, not directory, named shop : inotifywait -qme create /foo/bar | \ awk '$NF == "shop" { system("rm -- /foo/bar/shop") }'inotifywait -qme create,moved_to /foo/bar | \ awk '$NF == "shop" { system("rm -- /foo/bar/shop") }' | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/313940",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193075/"
]
} |
314,032 | I have this script which designed to download the pronounciation of the words you give it as argument: #!/bin/bash m=$#for ((i=1;i<=m;i++));doecho $i#wget https://ssl.gstatic.com/dictionary/static/sounds/de/0/"$i".mp3done if I run it by this command ./a.sh personality brave selfish it should print on the stdout personality brave selfish but instead it prints 123 would you help me solve this problem? p.s : If I write the script without for loop with $1 for example it will work correctly but I want to download many files at the same time | In any Bourne-like shell, it's: for argdo printf 'Something with "%s"\n' "$arg"done That is, for does loop on the positional parameters ( $1 , $2 ...) by default (if you don't give a in ... part). Note that that's more portable than: for arg; do printf 'Something with "%s"\n' "$arg"done Which was not POSIX until the 2016 edition of the standard nor Bourne (though works in most other Bourne-like shells including bash even in POSIX mode) Or than: for arg in "$@"; do printf 'Something with "%s"\n' "$arg"done Which is POSIX but doesn't work properly in the Bourne shell or ksh88 when $IFS doesn't contain the space character, or with some versions of the Bourne shell when there's no argument, or with some shells (including some versions of bash ) when there's no argument and the -u option is enabled. Or than for arg do printf 'Something with "%s"\n' "$arg"done which is POSIX and Bourne but doesn't work in very old ash-based shells. I personally ignore that and use that syntax myself as I find it's the most legible and don't expect any of the code I write will ever end up interpreted by such an arcane shell. More info at: http://www.in-ulm.de/~mascheck/various/bourne_args/ What is the purpose of the “do” keyword in Bash for loops? Now if you do want $i to loop over [1..$#] and access the corresponding elements, you can do: in any POSIX shell: i=1for arg do printf '%s\n' "Arg $i: $arg" i=$((i + 1))done or: i=1while [ "$i" -le "$#" ]; do eval "arg=\${$i}" printf '%s\n' "Arg $i: $arg" i=$((i + 1))done Or with bash for ((i = 1; i <= $#; i++ )); do printf '%s\n' "Arg $i: ${!i}"done ${!i} being an indirect variable expansion, that is expand to the content of the parameter whose name is stored in the i variable, similar to zsh 's P parameter expansion flag: for ((i = 1; i <= $#; i++ )); do printf '%s\n' "Arg $i: ${(P)i}"done Though in zsh , you can also access positional parameters via the $argv array (like in csh ): for ((i = 1; i <= $#; i++ )); do printf '%s\n' "Arg $i: $argv[i]"done | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/314032",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160201/"
]
} |
314,059 | I'm looking for an editor to print (on paper) C++ code. I'm currently in engineering school and the instructor has asked us to submit the code on paper. He wants name + surname, the class number (on header), the number of page at the bottom, and the reserved words bolded for every page! On Windows it can be done with notepadd++ . But I'm on Linux and I haven't found an IDE or text editor that works. (I've already tried SCITE , gedit , and Syntaxic ) | Well, if you want to go the extra mile, do it in LaTeX and provide a professional level PDF file. You haven't mentioned your distribution so I'll give instructions for Debian based systems. The same basic idea can be done on any Linux though. Install a LaTeX system and necessary packages sudo apt-get install texlive-latex-extra latex-xcolor texlive-latex-recommended Create a new file (call it report.tex ) with the following contents: \documentclass{article}\usepackage{fancyhdr}\pagestyle{fancy}%% Define your header here. %% See http://texblog.org/2007/11/07/headerfooter-in-latex-with-fancyhdr/\fancyhead[CO,CE]{John Doe, Class 123}\usepackage[usenames,dvipsnames]{color} %% Allow color names%% The listings package will format your source code\usepackage{listings}\lstdefinestyle{customasm}{ belowcaptionskip=1\baselineskip, xleftmargin=\parindent, language=C++, breaklines=true, %% Wrap long lines basicstyle=\footnotesize\ttfamily, commentstyle=\itshape\color{Gray}, stringstyle=\color{Black}, keywordstyle=\bfseries\color{OliveGreen}, identifierstyle=\color{blue}, xleftmargin=-8em, showstringspaces=false} \begin{document}\lstinputlisting[style=customasm]{/path/to/your/code.c}\end{document} Just make sure to change /path/to/your/code.c in the penultimate line so that it point to the actual path of your C file. If you have more than one file to include, add a \newpage and then a new \lstinputlisting for the other file. Compile a PDF (this creates report.pdf ) pdflatex report.tex I tested this on my system with an example file I found here and it creates a PDF that looks like this: For a more comprehensive example that will automatically find all .c files in the target folder and create an indexed PDF file with each in a separate section, see my answer here . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/314059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193164/"
]
} |
314,297 | I wrote this code to echo a greeting depending on what time of day it is, but when I run it it doesn't show any errors but doesn't echo anything to the command line either. To try to troubleshoot I commented out everything and echoed just the time variable, which worked fine. So, what am I doing wrong?! #!/bin/bashtime=$(date +%H)case $time in#check if its morning [0-11] ) echo "greeting 1";;#check if its afternoon [12-17] ) echo "greeting 2";;#check if its evening [18-23] ) echo "greeting 3"esac | [...] introduces a character class, not an integer interval. So, [18-23] is identical to [138-2] , which is the same as [13] , as there's nothing between 8 and 2. You can use the following as a fix: case $time in#check if its morning 0?|1[01] ) echo "greeting 1";;#check if its afternoon 1[2-7] ) echo "greeting 2";;#check if its evening 1[89]|2? ) echo "greeting 3"esac | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/314297",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193369/"
]
} |
314,324 | So I have a program, let's call it foo. I am attempting to redirect its terminal output to a file using the following command: foo > ./someFile.txt Now when I run that command, someFile.txt gets created, however it is empty. Any suggestions on how I could redirect the terminal output? | It is expected behavior that a file someFile.txt will be created. Whether or not this file contains anything depends on what your program foo is supposed to do. Whatever problem you are encountering does not seem to be related to output redirection. You can try the following command as a test: cat > someFile.txt type anything. Whatever you typed will be redirected to someFile.txt (end with ctrl + d ). By the way, the output file is being created by your shell, not by your program foo . Even if you type a nonexistent command, the output file will still be created (empty): /bin/nonexistent > zzz If foo does print something to the terminal yet the file stays empty, the most likely explanation is that it writes to standard error rather than standard output; capture both with foo > ./someFile.txt 2>&1 . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/314324",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193387/"
]
} |
314,365 | I would like to do the following at one point in a script: start_time=date and this after a process or processes have run: end_time=date and then do this: elapsed=$end_time-$start_timeecho "Total of $elapsed seconds elapsed for process" How would I do this? | Use the time since epoch to easily identify a span of time in a script man date%s seconds since 1970-01-01 00:00:00 UTC%N nanoseconds (000000000..999999999) . start_time="$(date -u +%s)"sleep 5end_time="$(date -u +%s)"elapsed="$(($end_time-$start_time))"echo "Total of $elapsed seconds elapsed for process" Total of 5 seconds elapsed for process Bash doesn't support floating point numbers, so you'll need to use a external tool like bc to compare times like 1475705058.042270582-1475705053.040524971 start_time="$(date -u +%s.%N)"sleep 5end_time="$(date -u +%s.%N)"elapsed="$(bc <<<"$end_time-$start_time")"echo "Total of $elapsed seconds elapsed for process" Total of 5.001884264 seconds elapsed for process | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/314365",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104388/"
]
} |
314,394 | I have 20GB for my Mint-KDE 18 root partition. There is no extra home partition. I am doing nothing special, just Chrome, KRDC, Teamviewer and the partition was half empty. One thing I did was copying in Dolphin from webdavs repo on the internet to my network drive via samba. Nothing was stored on my PC. Now I got the message that my disk is full. When I open my root or home directory in Dolphin it says 0 MB free. What is the fastest way via terminal to view my directories and files, biggest first or newest etc? | du -h -d 1 / This will display the size for all of the top-level directories in your root directory in 'human readable' format. You can also just do du -h -d 1 / | grep '[0-9]\+G' to only see the ones taking a couple GB or more. For a more granular level of detail, do something like ls -R -shl / | grep '^[0-9.]\{4,12\}\+[KG]' which will show all files in and below your root directory that are 1G or over in size. ** note that you might need to prepend sudo to the commands above. edit -- just saw you want them sorted by newest or largest Try this du -h -d 1 / | grep '[0-9]\+G' | sort -h | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/314394",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135657/"
]
} |
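For the "newest first" half of the question, which the answer above doesn't cover, a sketch using GNU find's -printf (mtime as seconds since epoch, then size and path), most recent first:

```bash
find / -xdev -type f -printf '%T@ %s %p\n' 2>/dev/null | sort -rn | head -n 20
```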
314,402 | I have Debian GNU/Linux 7 without GUI. My LCD monitor native resolution is 1280x1024. I'd like my OS to use this resolution by default on tty1 , tty2 etc. These are the lines from my /etc/default/grub : GRUB_GFXMODE=1280x1024GRUB_GFXPAYLOAD_LINUX=1280x1024 I have run sudo update-grub , it went without any problems. Rebooted. This should be enough but it's not. The behavior is as follows: GRUB2 menu always uses the configured resolution. I have checked with different GRUB_GFXMODE it can use lower resolutions as well, in this case the picture is stretched and utilizes the entire screen – all OK. When the system starts, few initial messages are displayed as expected and the monitor works with its native resolution. At one moment, about when the message Waiting for /dev to be fully populated... appears, the screen flickers. Just after a second the picture is back with the same resolution but there is garbage (like static pixel noise) at the bottom and on the right edge of the screen. When I log in and work afterwards, there are two stripes (bottom, right edge) that are unavailable to me. This is how it looks (excuse low photo resolution, it shouldn't matter though). Blue mc window should occupy the entire screen. Graphical garbage can bee seen in upper right corner. The stripe at the bottom is all black but often there is garbage there as well. Despite GRUB2 setting, my OS doesn't use the desired resolution. This is the output of fbset : mode "1024x768" geometry 1024 768 1280 1024 32 timings 0 0 0 0 0 0 0 accel true rgba 8/16,8/8,8/0,0/0endmode Still, the monitor does use its native resolution. See the last line on this photo: How to make GRUB2 setting work? Additional info: The hardware is JBC362F36W-2600-B Barebone Mini-ITX System . Its motherboard is NF36-N2600 . The graphics is integrated to Intel Atom N2600 CPU. The cable is DVI-D. The monitor is Philips 190B . It behaves well with GRUB2 menu, so I don't think there is something wrong with it. And more: # lspci | grep VGA00:02.0 VGA compatible controller: Intel Corporation Atom Processor D2xxx/N2xxx Integrated Graphics Controller (rev 09)# uname -aLinux [censored] 3.2.0-4-amd64 #1 SMP Debian 3.2.81-1 x86_64 GNU/Linux# grep -A 6 1280x1024-60 /etc/fb.modes mode "1280x1024-60" # D: 108.00 MHz, H: 63.981 kHz, V: 60.02 Hz geometry 1280 1024 1280 1024 8 timings 9260 248 48 38 1 112 3 hsync high vsync highendmode | These GRUB settings control the display mode used by GRUB, they don't specify the default framebuffer mode used by the kernel. To configure the latter, you can use the video kernel parameter; in Debian, add this to the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub : video=1280x1024 This should set the display up correctly as soon as the kernel initialises the framebuffer. The video parameter is documented in detail in fb/modedb.rst in the kernel documentation. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/314402",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108618/"
]
} |
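A sketch of applying that change non-interactively (sed keeps a .bak backup; this assumes the GRUB_CMDLINE_LINUX_DEFAULT line exists and uses double quotes):

```bash
sudo sed -i.bak 's/^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 video=1280x1024"/' /etc/default/grub
sudo update-grub
```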
314,422 | I'm having a problem here on a bash script I made. In a for loop, I iterate on all the arguments to construct a string variable that is later fed to an "eval" command: for arg in "$*" do if [ $arg != $lastArg ]; then findTarget+="-name $arg -o " else findTarget=$(echo $findTarget | sed 's/-o$//') break fi done The problem stems from the "$*". For example when I enter "*.c" in the arguments, and the current folder contains files that match that pattern, the *.c argument is expanded into those files; I do not want that, I want findTarget to be concatenated with -name *.c -o , I have tried with and witout quotes, using eval, nothing seems to work. Any idea how to do this (simply if possible) ? Note: the total number of arguments can vary. This is an example of how I run the script: $ trouver.bash *.c *.f90 someString At the end of my for loop, the variable findTarget should read -name *.c -o -name *.f90 This does not work if the *.c or *.f90 match files in the current folder... | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/314422",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193467/"
]
} |
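A hedged sketch of one standard approach to the question above (not the original accepted answer): quote the patterns when invoking the script so the shell never expands them, and build the find expression in a bash array rather than a string:

```bash
#!/bin/bash
# Usage: trouver.bash '*.c' '*.f90' someString   (note the quotes around the globs)
# Assumes at least two arguments: one or more patterns, then a final non-pattern arg.
findTarget=()
for arg in "${@:1:$#-1}"; do                         # every argument except the last
  [ ${#findTarget[@]} -gt 0 ] && findTarget+=(-o)    # -o only between -name tests
  findTarget+=(-name "$arg")
done
find . \( "${findTarget[@]}" \)
```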
314,424 | I'm installing the timezone tables on MySQL, which are generated from /usr/share/zoneinfo on my Debian Server. But there is a point on the notes about ensuring I keep them up to date whenever the zoneinfo files are updated. I've searched Google, but can't find the answer to the question: How often is `/usr/share/zoneinfo` updated? Is it every time that the Linux OS is updated, or is it infrequently? | The timezone data is updated several times a year; in the past that meant every month or thereabouts, but last year there were only three updates (in February, March and October 2017). Updates are always available from the IANA , and are made available as stable updates in Debian. While there were only three releases in 2017, there were ten releases altogether in 2016, seven in 2015, ten in 2014, nine in 2013... If you want to set something up so that the MySQL timezone tables are updated whenever the timezone info is updated, you could look at adding a trigger against the tzdata package; that way, whenever tzdata is updated, the timezone tables will be updated too. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/314424",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102428/"
]
} |
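The MySQL side of keeping things in sync is the documented mysql_tzinfo_to_sql helper; rerunning it after a tzdata upgrade reloads the tables (adjust credentials as needed):

```bash
mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root -p mysql
```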
314,486 | what is preferred between if ! [ ... ]; then and if [ ! ... ]; then actually they do the same result, is there a preferred syntax?in the former syntax the evaluated not is the shell builtin, while in the latter the not is the test one, does it make any difference? | portability consideration. The ! keyword is POSIX but not Bourne while ! has been supported by the [ / test command from the start. So [ ! ... ] is more portable than ! [ ... ] . Otherwise, as long as you don't use the deprecated -o and -a binary operators, they should be equivalent (if we put aside the parsing bugs in some old test / [ implementations). Actually, in the Bourne shell, to do if ! cmd1; then cmd2fi You had to do: if cmd1; then :else cmd2fi (or use cmd1 || cmd2 , though that could result in a different exit status in the end). interaction with set -o errexit / set -e / ERR trap Outside of the condition part of if / fi statements, as noted by @mosvy, another difference between [ ! ... ] and ! [ ... ] is that the latter will not cause the shell to exit when ! [ ... ] returns false and the errexit option is on. ! applied to any pipeline cancels the effect of errexit as the shell considers the exit status of the pipeline is being used as a condition . The same applies for failing-pipeline || cmd or failing-pipeline && cmd ... While with [ ! ... ] , ! is just a regular argument passed to the [ command and in the end as far as the shell and errexit is concerned, it's just a [ command returning with either a success or failure exit status. $ sh -ec '[ ! a = a ]; echo here'; echo "$?"1$ sh -ec '! [ a = a ]; echo here'; echo "$?"here0 That doesn't apply when those commands are run as part of the condition section of if / while / until statements, as in there, errexit doesn't apply. The same applies to the ERR trap of Korn-like shells: $ ksh -c 'trap "echo OUCH" ERR; ! [ a = a ]'$ ksh -c 'trap "echo OUCH" ERR; [ ! a = a ]'OUCH In practice that likely won't make a difference as [ is almost always used as a condition, where errexit doesn't apply (whether it's in if / while / until statements or followed by && / || ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/314486",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193529/"
]
} |
314,512 | I have: sleep 210m && for i in $(seq 1 5); do echo -e '\a'; sleep 0.5; done running as a simple, no-frills timer to remind me when something should be done. That sleep 210m is PID 25347. I'm trying to figure out how much time is left in the sleep. The best I've come up with, with me putting in the original sleep amount (210 minutes) is: $ echo "210 60 * $(ps -o etimes= 25347) - 60 ~ r n [:] P p" | dc78:11 (Explanation: First bit computes the number of seconds in the original sleep; the $(ps…) bit gets the time since sleep started in seconds, then the rest subtracts them and displays in minutes and seconds.) It occurs to me this would be useful in general; is there a better way to do this? Or at least a clever way to parse the sleep time from ps -o args ? | Supporting GNU or Solaris 11 sleep arguments (one or more <double>[smhd] durations, so would also work with traditional implementations that support only one decimal integer number (like on FreeBSD), but not with those accepting more complex arguments like ISO-8601 durations ). Using etime instead of etimes as that's more portable (standard Unix). remaining_sleep_time() { # arg: pid ps -o etime= -o args= -p "$1" | perl -MPOSIX -lane ' %map = qw(d 86400 h 3600 m 60 s 1); $F[0] =~ /(\d+-)?(\d+:)?(\d+):(\d+)/; $t = -($4+60*($3+60*($2+24*$1))); for (@F[2..$#F]) { s/\?//g; ($n, $p) = strtod($_); $n *= $map{substr($_, -$p)} if $p; $t += $n } print $t'} (the s/\?//g is to get rid of the ? characters that procps ' ps uses as replacement for control characters. Without it, it would fail to parse sleep $'\r1d' or sleep $'\t1d' ... Unfortunately, in some locales, including the C locale, it uses . instead of ? . Not much we can do in that case as there's no way to tell a \t5d from a .5d (half day)). Pass the pid as argument. That also assumes the argv[0] passed to sleep doesn't contain blanks and that the number of arguments is small enough that it's not truncated by ps . Examples: $ sleep infinity & remaining_sleep_time "$!"Inf$ sleep 0xffp-6d &$ remaining_sleep_time "$!"344249$ sleep 1m 1m 1m 1m 1m & remaining_sleep_time "$!"300 For a [[[ddd-]HH:]MM:]SS output instead of just the number of seconds, replace the print $t with: $output = "";for ([60,"%02d\n"],[60,"%02d:"],[24,"%02d:"],[inf,"%d-"]) { last unless $t; $output = sprintf($_->[1], $t % $_->[0]) . $output; $t = int($t / $_->[0])}printf "%s", $output; | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/314512",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/977/"
]
} |
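For the common case of a plain `sleep NNN` in whole seconds, a much smaller sketch suffices (procps ps assumed; no m/h/d suffixes or multiple duration arguments handled):

```bash
remaining_sleep() {  # arg: pid of a "sleep NNN" process
  local total elapsed
  total=$(ps -o args= -p "$1" | awk '{print $2}')   # first sleep argument
  elapsed=$(ps -o etimes= -p "$1")                  # elapsed seconds
  echo "$(( total - elapsed ))"
}
```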
314,550 | I'm wondering how to get a shell script to listen in on a certain port (maybe using netcat?). Hopefully so that when a message is sent to that port, the script records the message and then runs a function. Example: Computer 1 has the script running in the background, the script opened port 1234 to incoming traffic Computer 2 sends message "hello world" to port 1234 of computer 1 Script on Computer 1 records the message "hello world" to a variable $MESSAGE Script runs function now that variable $MESSAGE has been set How do I go about doning this? | Should be possible with socat . Write such a script "getmsg.sh" to receive one message via stdin: #!/bin/bashread MESSAGEecho "PID: $$"echo "$MESSAGE" Then run this socat command to invoke our script for each tcp connection on port 7777: socat -u tcp-l:7777,fork system:./getmsg.sh Send a test message from another shell: echo "message 1" | netcat localhost 7777 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/314550",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193587/"
]
} |
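Extending the answer's getmsg.sh toward the original goal — store the message in $MESSAGE and then run a function — might look like this (my_handler is a placeholder):

```bash
#!/bin/bash
my_handler() { printf 'handling: %s\n' "$1"; }
read -r MESSAGE
my_handler "$MESSAGE"
```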
314,553 | I have created a user. useradd -M -d /usr/my_user my_user chown -R my_user. /usr/my_user Now as a root I can type su - my_user -c /usr/my_user/some_dir/script.sh But if I want to do more complicated things, for example navigate between my_user folders I have to type the su - my_user pattern everytime. Otherwise it states that I do not have permissions. How can I make my life easier and not type the su everytime? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/314553",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145899/"
]
} |
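A hedged sketch of the usual answers to the question above (not the original accepted answer): start one interactive shell as the user instead of wrapping every command, or use sudo if it is configured:

```bash
su - my_user          # one login shell as my_user; work normally, then exit
sudo -u my_user -i    # equivalent via sudo, if sudo is set up
```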
314,569 | I want to compare the contents of two directories, recursively, showing which files are missing from one or the other, and which files have different content. But I don't want output on the differences within the files, just whether they are different or not. There won't be any links to worry about. I hope this isn't a duplicate, I've trawled through examples and can't find an answer to this. Thanks | This usually already does the job: diff -rq dirA dirB | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/314569",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191416/"
]
} |
314,640 | I have a cron job that runs a script that checks for errors in an SQL database and writes the errors to a log file. The log file is supplied in the command to run the script. I only want to receive an email when the script finds errors, and I want the log to be included in the email. I don't want to receive an email if the script doesn't find any errors. Apparently the script writes to the log when the script both finds or does not find errors (I didn't write this script). 20 6-10 * * 1-5 ~/job_failure_test.sh -o ~/job_fail.log 2>&1 /dev/null | mail -s "Errors" [email protected] < ~/job_fail.log So far this line sends me emails when there are errors written to the log, but it doesn't send me the updated log. It sends me the log from the previous execution of the cron job. | Just change | (a pipe ) to || (an or ) (assuming the script uses exit codes properly). Better practice, though, is to change the script to output only on error and let cron's MAILTO mail the output itself: [email protected][email protected] 6-10 * * 1-5 ~/job_failure_test.sh The ugly way: 20 6-10 * * 1-5 ~/job_failure_test.sh > ~/job_fail.log 2>&1 || mail -s "Errors" [email protected] < ~/job_fail.log | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/314640",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193647/"
]
} |
314,642 | On Mac OS X, you can "sample" a process, which takes a backtrace of every thread in a specified process a specified number of times and then displays what methods are running ( Apple man page , example output from my Mac ). How does one accomplish this on Linux? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/314642",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100654/"
]
} |
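Hedged sketches of the common Linux equivalents of OS X's sample (not the original accepted answer): repeated stack sampling with perf, or a one-shot all-thread backtrace with gdb. Replace $PID with the target process id:

```bash
perf record -F 99 -g -p "$PID" -- sleep 10 && perf report   # sample call stacks for 10s
gdb -p "$PID" -batch -ex 'thread apply all bt'              # single snapshot of all threads
```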
314,662 | The Linux ls command comes with these options. --block-size=SIZE scale sizes by SIZE before printing them; e.g., '--block-size=M' prints sizes in units of 1,048,576 bytes; see SIZE format below -l use a long listing format list subdirectories recursively -s, --size print the allocated size of each file, in blocks I presume that ls -l is the actual file size and ls -s --block-size=1 is the amount of disk space allocated to storing the file. (In this case 991232 = 968x1024 = 968K.) $ ls -s --block-size=1 summary.pdf 991232 summary.pdf$ ls -l summary.pdf -rwxrwx---. 1 chris chris 989838 May 1 2015 summary.pdf Is there an option to get the file size in bytes without the additional information in "long listing format"? | You can use stat with a custom format. stat -c'%s' summary.pdf will output the actual file size in bytes. If you want allocated blocks, use %b for the number of blocks allocated and %B for the block size of each block in bytes. This also works with wildcards and an additional %n for the file name. See the man page for more format options. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/314662",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44245/"
]
} |
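Concrete invocations of the formats mentioned above (GNU stat):

```bash
stat -c '%s' summary.pdf                        # apparent size in bytes
stat -c '%n: %b blocks of %B bytes' summary.pdf # allocated blocks and block size
```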
314,698 | On Arch Linux, I've backed up my system with rsync and restored it again, but it seems that my way of doing it (which I did get from the ArchWiki but must be wrong?) has kept old files deleted by Pacman. This results in the error duplicated database entry when I try to upgrade my system with pacman -Syu . What should I do? | I suggest that you read the information in the links here and here . Basically, you need to remove the duplicates (manually or using a script) from /var/lib/pacman/local/ . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/314698",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134690/"
]
} |
314,706 | I'd like to list all meta packages that I installed. Installed with pikaur or pacman operating system is Arch Linux Problem When I install plasma-meta and run pacman -Qg , I can only see plasma . This is, of course, the expected behavior given for the manual entry for the query Q parameter g . Desired outcome plasma-meta | pacman -Qqe | grep meta should do the trick. -q suppresses the package version -e filters explicitly installed packages (no dependencies) grep filters the meta packages, assuming they follow the naming convention -g does not help you here, because plasma-meta is not a package group, but a metapackage. Groups vs. Metapackages Groups and metapackages are a solutions to a similar problem, but technically they are very different: A group is a logical group of packages. When you install a group, each package that is contained in the group gets installed. Groups are a concept supported by your package manager A metapackage is an empty package (i.e. no files are installed) that depends on a bunch of packages. When a metapackage is installed, each dependency gets installed. This does not need special support from the package manager Because metapackages look just like normal packages to pacman, you have to rely on the naming convention with [name]-meta . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/314706",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33386/"
]
} |
314,725 | I would like to know difference between user and service account. I know that e.g. Jenkins installed to ubuntu is not a user, but service account . What is use of service account? When we need them? How can I create service account? | User accounts are used by real users, service accounts are used by system services such as web servers, mail transport agents, databases etc. By convention, and only by convention, service accounts have user IDs in the low range, e.g. < 1000 or so. Except for UID 0, service accounts don't have any special privileges. Service accounts may - and typically do - own specific resources, even device special files, but they don't have superuser-like privileges. Service accounts can be created like ordinary user accounts (e.g. using useradd ). However, service accounts are typically created and configured by the package manager upon installation of the service software. So, even as an administrator you should be rarely directly concerned with the creation of service accounts. For good reason: In contrast to user accounts, service accounts often don't have a "proper" login shell, i.e. they have /usr/sbin/nologin as login shell (or, back in the old days, /bin/false ). Moreover, service accounts are typically locked, i.e. it is not possible to login (for traditional /etc/passwd and /etc/shadow this can be achieved by setting the password hash to arbitrary values such as * or x ). This is to harden the service accounts against abuse ( defense in depth ). Having individual service accounts for each service serves two main purposes: It is a security measure to reduce the impact in case of an incident with one service ( compartmentalization ), and it simplifies administration as it becomes easier to track down what resources belong to which service. See this or this answers on related questions for more details. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/314725",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152046/"
]
} |
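For completeness, the shape of the useradd call that package scripts typically issue for a service account (names and paths here are placeholders):

```bash
sudo useradd -r -d /var/lib/myservice -s /usr/sbin/nologin myservice  # -r: system account
sudo passwd -l myservice   # lock the account so it cannot be logged into
```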
314,804 | System Info OS: OS X bash: GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin16) Background I want time machine to exclude a set of directories and files from all my git/nodejs project. My project directories are in ~/code/private/ and ~/code/public/ so I'm trying to use bash looping to do the tmutil . Issue Short Version If I have a calculated string variable k , how do I make it glob in or right before a for-loop: i='~/code/public/*'j='*.launch'k=$i/$j # $k='~/code/public/*/*.launch'for i in $k # I need $k to glob heredo echo $idone In the long version below, you will see k=$i/$j . So I cannot hardcode the string in the for loop. Long Version #!/bin/bashexclude='*.launch.classpath.sass-cacheThumbs.dbbower_componentsbuildconnect.lockcoveragediste2e/*.jse2e/*.maplibpeerconnection.lognode_modulesnpm-debug.logtestem.logtmptypings'dirs='~/code/private/*~/code/public/*'for i in $dirsdo for j in $exclude do k=$i/$j # It is correct up to this line for l in $k # I need it glob here do echo $l # Command I want to execute # tmutil addexclusion $l done donedone Output They are not globbed. Not what I want. ~/code/private/*/*.launch ~/code/private/*/.DS_Store ~/code/private/*/.classpath ~/code/private/*/.sass-cache ~/code/private/*/.settings ~/code/private/*/Thumbs.db ~/code/private/*/bower_components ~/code/private/*/build ~/code/private/*/connect.lock ~/code/private/*/coverage ~/code/private/*/dist ~/code/private/*/e2e/*.js ~/code/private/*/e2e/*.map ~/code/private/*/libpeerconnection.log ~/code/private/*/node_modules ~/code/private/*/npm-debug.log ~/code/private/*/testem.log ~/code/private/*/tmp ~/code/private/*/typings ~/code/public/*/*.launch ~/code/public/*/.DS_Store ~/code/public/*/.classpath ~/code/public/*/.sass-cache ~/code/public/*/.settings ~/code/public/*/Thumbs.db ~/code/public/*/bower_components ~/code/public/*/build ~/code/public/*/connect.lock ~/code/public/*/coverage ~/code/public/*/dist ~/code/public/*/e2e/*.js ~/code/public/*/e2e/*.map ~/code/public/*/libpeerconnection.log ~/code/public/*/node_modules ~/code/public/*/npm-debug.log ~/code/public/*/testem.log ~/code/public/*/tmp ~/code/public/*/typings | You can force another round of evaluation with eval , but that's not actually necessary. (And eval starts having serious problems the moment your file names contain special characters like $ .) The problem isn't with globbing, but with the tilde expansion. Globbing happens after variable expansion, if the variable is unquoted, as here (*) : $ x="/tm*" ; echo $x/tmp Another thing that happens for unquoted expansions is word splitting, which will be an issue if the patterns in question contain characters in IFS , usually whitespace. To prevent this issue, word splitting needs to be disabled by setting IFS to the empty string. So, in the same vein, this is similar to what you did, and works: $ IFS=$ mkdir -p ~/public/foo/ ; touch ~/public/foo/x.launch$ i="$HOME/public/*"; j="*.launch"; k="$i/$j"$ echo $k/home/foo/public/foo/x.launch But with the tilde it doesn't: $ i="~/public/*"; j="*.launch"; k="$i/$j"$ echo $k~/public/*/*.launch This is clearly documented for Bash: The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, ... Tilde expansion happens before variable expansion so tildes inside variables are not expanded. The easy workaround isto use $HOME or the full path instead. 
(* expanding globs from variables is usually not what you want) Another thing: When you loop over the patterns, as here: exclude="foo *bar"for j in $exclude ; do ... note that as $exclude is unquoted, it's both split, and also globbed at this point. So if the current directory contains something matching the pattern, it's expanded to that: $ IFS=$ i="$HOME/public/foo"$ exclude="*.launch"$ touch $i/real.launch$ for j in $exclude ; do # glob, no match printf "%s\n" "$i"/$j ; done/home/foo/public/foo/real.launch$ touch ./hello.launch$ for j in $exclude ; do # glob, matches in current dir! printf "%s\n" "$i"/$j ; done/home/foo/public/foo/hello.launch # not the expected result To work around this, use an array variable instead of a split string: $ IFS=$ exclude=("*.launch")$ exclude+=("*.not this")$ for j in "${exclude[@]}" ; do printf "%s\n" "$i"/$j ; done/home/foo/public/foo/real.launch/home/foo/public/foo/some file.not this Though note that if the patterns don't match anything, they'll by default be left as-is. So if the directory is empty, .../*.launch would be printed etc. Something similar could be done with find -path , if you don't mind what directory level the targeted files should be. E.g. to find any path ending in /e2e/*.js : $ dirs="$HOME/public $HOME/private"$ pattern="*/e2e/*.js"$ find $dirs -path "$pattern"/home/foo/public/one/two/three/e2e/asdf.js We have to use $HOME instead of ~ for the same reason as before, and $dirs needs to be unquoted on the find command line so it gets split, but $pattern should be quoted so it isn't accidentally expanded by the shell. (I think you could play with -maxdepth on GNU find to limit how deep the search goes, if you care, but that's a bit of a different issue.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/314804",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27121/"
]
} |
314,826 | I have a .toc (table of contents file) from my .tex document. It contains a lot of lines and some of them have the form \contentsline {part}{Some title here\hfil }{5}\contentsline {chapter}{\numberline {}Person name here}{5} I know how to grep for part and for chapter . But I'd like to filter for those lines and have the output in a csv file like this: {Some title here},{Person name here},{5} or with no braces Some title here,Person name here,5 1. For sure the number (page number) in the last pair {} is the same for both two lines, so we can filter only the second one. 2. Note that some empty pair {} could happens or also could contain another pair {} . For example, it could be \contentsline {part}{Title with math $\frac{a}{b}$\hfil }{15} which should be filtered as Title with math $\frac{a}{b}$ edit 1: I was able to obtain the numbers without braces at end of line using grep '{part}' file.toc | awk -F '[{}]' '{print $(NF-1)}' edit 2: I was able to filter the chapter lines and remove the garbage with grep '{chapter}' file.toc | sed 's/\\numberline//' | sed 's/\\contentsline//' | sed 's/{chapter}//' | sed 's/{}//' | sed 's/^ {/{/' and the output without blank spaces was {Person name here}{5} edit 3: I was able to filter for part and clean the output with \contentsline {chapter}{\numberline {}Person name here}{5} which returns {Title with math $\frac{a}{b}$}{15} | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/314826",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19195/"
]
} |
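A hedged sketch for the question above (not the original accepted answer), pairing each part line with the following chapter line via paste; anchoring the seds at the end of the line keeps nested braces in math intact, but it assumes parts and chapters appear in matching order, one chapter per part:

```bash
paste -d, \
  <(grep '{part}' file.toc    | sed -e 's/.*{part}{//' -e 's/\\hfil }{[0-9]*}$//') \
  <(grep '{chapter}' file.toc | sed -e 's/.*\\numberline {}//' -e 's/}{\([0-9]*\)}$/,\1/')
# yields e.g.:  Some title here,Person name here,5
```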
314,834 | I'm trying to make a stopwatch, and when user press Q I want to exit. I found two script, one where a clock is displayed until ctrl + z is pressed. And one script that exit if "q" is pressed. I've tried to combined them, but "read" seems to mess it all up. The reason I want to achieve this is that if the user press Q the time elapsed will be saved to a file. Stopwatch: BEGIN=$(date +%s)echo Starting Stopwatch...while true; do NOW=$(date +%s) let DIFF=$(($NOW - $BEGIN)) let MINS=$(($DIFF / 60)) let SECS=$(($DIFF % 60)) let HOURS=$(($DIFF / 3600)) let DAYS=$(($DIFF / 86400)) # \r is a "carriage return" - returns cursor to start of line printf "\r%3d Days, %02d:%02d:%02d" $DAYS $HOURS $MINS $SECS sleep 0.25done Exit on q: while true; do echo -en "Press Q to exit \t\t: " read input if [[ $input = "q" ]] || [[ $input = "Q" ]] then break else echo "Invalid Input." fidone PS: I'm very new to this. | Maybe this helps you. I integrated both of them, with slight modifications though. Here's the result. BEGIN=$(date +%s)echo Starting Stopwatch...echo Press Q to exit.while true; do NOW=$(date +%s) let DIFF=$(($NOW - $BEGIN)) let MINS=$(($DIFF / 60)) let SECS=$(($DIFF % 60)) let HOURS=$(($DIFF / 3600)) let DAYS=$(($DIFF / 86400)) # \r is a "carriage return" - returns cursor to start of line printf "\r%3d Days, %02d:%02d:%02d" $DAYS $HOURS $MINS $SECS# In the following line -t for timeout, -N for just 1 character read -t 0.25 -N 1 input if [[ $input = "q" ]] || [[ $input = "Q" ]]; then# The following line is for the prompt to appear on a new line. echo break fidone As you can see, I put the second script in place of the sleep command in the former first one. The time-out in read now bears the time lapse function. Notice that the -N option is needed for read not to wait for Enter and react as soon as the first key has been pressed. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/314834",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193808/"
]
} |
314,850 | I have a script that captures packets and outputs the 2 digit country code linked to the newly found IP. I want to monitor the number of packets for each country and do it in a continuous fashion during the packet capture. The output of uniq -c is exactly what I need, but I want it to change over time and update while reading the output of the pipe. Here is my script: #!/bin/bash ngrep |grep -oE "[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}" | while read i do whois $i | grep -i 'Country' | sed 's/Country:[ ]*//I' done | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/314850",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189711/"
]
} |
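A hedged sketch for the question above (not the original accepted answer): resolve each IP's country as in the original script, then keep a running tally in awk, redrawing the counts on one line. GNU grep and gawk are assumed for --line-buffered and fflush:

```bash
ngrep | grep --line-buffered -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' |
while read -r ip; do
    whois "$ip" | grep -im1 '^country:' | sed 's/.*: *//'
done |
awk '{ c[toupper($1)]++
       out = ""
       for (k in c) out = out k ":" c[k] "  "
       printf "\r%s", out; fflush() }'
```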
314,858 | I have a script that tracks time. When I print my output in my console I get this: 0 Days, 00:00:33 When I save it to a txt file I get this: ^[[2K 0 Days, 00:00:33 Code: now=$(date +%s)diff=$(($now - $begin))mins=$(($diff / 60))secs=$(($diff % 60))hours=$(($diff / 3600))days=$(($diff / 86400))printf "\33[2K\r%3d Days, %02d:%02d:%02d" $days $hours $mins $secsprintf "\33[2K\r%3d Days, %02d:%02d:%02d" $days $hours $mins $secs >> test.txt Where is the ^[[2K coming from? I'm guessing it has something to do with the format in printf. I read a bit about printf here: http://wiki.bash-hackers.org/commands/builtin/printf . But I didn't get any wiser.... | You have escape control sequences in your printf command i.e. printf "\33[2K..... , ( \e[2K clears a line), that is necessarily a control command for the terminal, that can only be understood and acted upon by a terminal device. When you are running the script interactively inside the terminal, the terminal is interpreting the sequence properly. While saving the command's output in a file, the sequence is being treated literally as there is nothing to interpret the sequence then. Now if you do: cat test.txt you'll see that the terminal is interpreting the sequence correctly again before the output being printed. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/314858",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193808/"
]
} |
314,890 | I have created a long running screen session with many windows and the C-a A command to rename a window is not working. What is the text command for renaming a window? I have tried :caption string windowname but it doesn't work. Is that the right command or am I missing something? | That is the title command, e.g., :title bad-window In the manual: title [windowtitle] Set the name of the current window to windowtitle . If no name is specified, screen prompts for one. This command was known as aka in previous releases. If the shortcut is not working, of course, the long name may not work either. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/314890",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26026/"
]
} |
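Renaming can also be done from outside the session, which is handy when the in-session binding misbehaves (session and window names/numbers here are placeholders):

```bash
screen -S mysession -p 0 -X title new-window-name
```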
314,893 | I have a file called alarm.log , and I want a script to automatically run when this file is changed. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/314893",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193862/"
]
} |
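A hedged sketch of the usual answer to the question above (not the original accepted answer), using inotifywait from the inotify-tools package; the script path is a placeholder:

```bash
while inotifywait -e modify,close_write alarm.log; do
    /path/to/handler.sh
done
```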
314,974 | I have created symlinks to a large amount of logfiles. The syntax of the logfiles is yyyymmdd.log.gz . To simplify things I use a simple sequence without parsing it with date : for dd in $(seq -w 20150101 20151231) ; do ln -s $origin/$dd.log.gz $target/$dd.log.gzdone How do I get rid of all the broken symlinks I just created in a single fell swoop? | This simple one-liner does the job quite fast. It requires GNU Findutils : find . -xtype l -delete A bit of explanation: -xtype l tests for links that are broken (it is the opposite of -type ) -delete deletes the files directly, no need for further bothering with xargs or -exec NOTE: -xtype l means -xtype lower case L (as in link ) ;) | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/314974",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83367/"
]
} |
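If -xtype or -delete aren't available (both are GNU extensions), a more portable variant of the same idea:

```bash
# test -e follows the symlink; it fails for a dangling link, so ! selects broken ones
find . -type l ! -exec test -e {} \; -exec rm -- {} \;
```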
314,977 | I'm using Linux Mint 18 XFCE and installed Apache, MySQL and PHP packages to make local development. Apache and MySQL services are always active and start from the beginning. How can I disable those service to do not start at the beginning? The idea is that whenever I want to work locally I'll start both services using something like: sudo service apache2 start && sudo service mysql start | You should disable them using either systemctl (if you're using systemd) or update-rc.d : systemctl disable apache2 mysql or update-rc.d apache2 disableupdate-rc.d mysql disable | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/314977",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191511/"
]
} |
314,985 | I'm tailing logs of my own app and postgres. tail -f /tmp/myapp.log /var/log/postgresql/postgresql.main.log I need to include pgpool's logs. It used to be syslog but now it is in journalctl. Is there a way to tie tail -f && journalctl -f together? | You could use: journalctl -u service-name -f -f, --follow Show only the most recent journal entries, and continuously print new entries as they are appended to the journal. Here I've added "service-name" to distinguish this answer from others; you substitute the actual service name instead of the text service-name . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/314985",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193929/"
]
} |
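To follow both sources at once, as the question asks, one sketch is to background one follower (the pgpool2 unit name is a guess; check systemctl list-units for the real one):

```bash
journalctl -f -u pgpool2 &
tail -f /tmp/myapp.log /var/log/postgresql/postgresql.main.log
kill %1   # stop the backgrounded journalctl when done
```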
315,050 | This is a very basic question I am just quite new to bash and couldn't figure out how to do this. Googling unfortunately didn't get me anywhere. My goal is to connect with sftp to a server, upload a file, and then disconnect. I have the following script: UpdateJar.sh #!/bin/bashsftp -oPort=23 [email protected]:/home/kalenpw/TestWorld/plugins#Change directory on server#cd /home/kalenpw/TestWorld/plugins#Upload fileput /home/kalenpw/.m2/repository/com/Khalidor/TestPlugin/0.0.1-SNAPSHOT/TestPlugin-0.0.1-SNAPSHOT.jarexit the issue is, this script will establish an sftp connection and then do nothing. Once I manually type exit in connection it tries to execute the put command but because the sftp session has been closed it just says put: command not found. How can I get this to work properly? Thanks | You can change your script to pass commands in a here-document, e.g., #!/bin/bashsftp -oPort=23 [email protected]:/home/kalenpw/TestWorld/plugins <<EOFput /home/kalenpw/.m2/repository/com/Khalidor/TestPlugin/0.0.1-SNAPSHOT/TestPlugin-0.0.1-SNAPSHOT.jar exitEOF The << marker followed by the name ( EOF ) tells the script to pass the following lines until the name is found at the beginning of the line (by itself). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/315050",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171715/"
]
} |
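An equivalent using sftp's batch mode instead of a here-document; -b - reads the commands from stdin and also makes sftp exit non-zero if any command fails (host and file names are placeholders):

```bash
echo 'put local-file.jar' | sftp -b - -oPort=23 user@host:/remote/dir
```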
315,062 | So I have a file like so [ABC]value1=blavalue2=blavalue3=bla[XYZ]value1=blavalue2=blavalue3=bla And I would like to replace the value1 under the [ABC] block with "value1=notbla" and not under the [XYZ] block. I've already tried sed '/ABC/{n;s/.*/change/}' file But that would only work when trying to go to the next line and would not change the specific pattern (ie. if value1 was under value 2)What sed or awk command could I use if I didn't know the specific line numbers.And can you please label the function of each sed or awk tag used. I would really appreciate it. | Sed can handle this quite easily. It's a single "substitute" command, prefixed with an address range. I've added extra spacing for better readability: sed -e '/^\[ABC\]$/ , /^\[.*\]$/ s/^\(value1=\).*$/\1notbla/' Without the extra spacing, it's: sed -e '/^\[ABC\]$/,/^\[.*\]$/s/^\(value1=\).*$/\1notbla/' You don't really need anchored regexes, but they may be safer in some cases of unusual inputs. A slightly shorter version with unanchored regexes is: sed -e '/\[ABC\]/,/^\[/s/^\(value1=\).*$/\1notbla/' Explanation: You asked for each flag or option to be explained, and I've got the time, so here you go. I'm explaining the final (shortest) version out of the three Sed commands listed above. The first part of the line is an address range: /startregex/,/stopregex/ The s ubstitute command which follows the address range is only applied to lines from startregex to stopregex (inclusive). In this case the start regex is /\[ABC\]/ . Square brackets are usually special characters within a regex, so we put a backslash before each to signify literal square bracket characters. The stop regex is /^\[/ , which uses the special regex character ^ to signify the start of a line. This pattern will match any line that starts with a literal left square bracket ( [ ). The s ubstitute command is basically quite simple; the general format is s/findregex/replacetext/ . It can also have special flags placed after the final / to modify its behavior, but I'm not using any such flags here. The "find regex" is ^\(value1=\).*$ . The caret ( ^ ) matches the start of the line, as mentioned earlier, and the dollar sign ( $ ) matches the end of the line. So this whole pattern must match an entire line, not merely part of one. The parentheses ( () ), unlike square brackets, are non -special by default in regexes, so we put the backslashes before them to give them their special meaning. They allow parts of the matched text (the text matched by the "find regex") to be used in the replacement text. Specifically, the \1 in the replacement text means, "The text matched within the first set of parentheses in the regex." In this case, that is always just "value1=". The final element in the "find regex" is .* . The dot ( . ) means "any single character," and the asterisk ( * ) means "any number of times (zero or more)." So the dot star ( .* ) matches the entire rest of the line, after the equals sign. "notbla" in the replacement text is just static text, nothing special about it. To really learn Sed properly, I highly recommend the Grymoire Sed tutorial , which is free online. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/315062",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193982/"
]
} |
315,063 | I added a new hard drive ( /dev/sdb ) to Ubuntu Server 16, ran parted /dev/sdb mklabel gpt and sudo parted /dev/sdb mkpart primary ext4 0G 1074GB . All went fine. Then I tried to mount the drive mkdir /mnt/storage2mount /dev/sdb1 /mnt/storage2 It resulted in mount: wrong fs type, bad option, bad superblock on /dev/sdb1, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so. I tried mount -t ext4 /dev/sdb1 /mnt/storage2 with identical outcome. I've done this stuff many times before and have never ran into anything like this. I've already read this mount: wrong fs type, bad option, bad superblock on /dev/sdb on CentOS 6.0 to no avail. fdisk output regarding the drive Disk /dev/sdb: 1000 GiB, 1073741824000 bytes, 2097152000 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 4096 bytesI/O size (minimum/optimal): 4096 bytes / 4096 bytesDisklabel type: gptDisk identifier: 0E136427-03AF-48E2-B56B-A467E991629FDevice Start End Sectors Size Type/dev/sdb1 2048 2097149951 2097147904 1000G Linux filesystem | WARNING: This will wipe out your drive! You still need to create a ( new ) file system (aka "format the partition"). Double-check that you really want to overwrite the current content of the specified partition ! Replace XY accordingly, but double check that you are specifying the correct partition, e.g., sda2 , sdb1 : mkfs.ext4 /dev/sd XY parted / mkpart does not create a file system. The Parted User's Manual shows: 2.4.5 mkpart Command: mkpart [part-type fs-type name] start end Creates a new partition, without creating a new file system on that partition. [Emphasis added.] | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/315063",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120144/"
]
} |
315,064 | If you only experience this problem when using VLC see this question When the screen is blocked, the xscreensaver (version 5.35) password prompt pops out without any mouse/touchpad movement. It just appears, blinks out when the time is gone (there is also a message like " the PAM timeout is cancelled ") and appears again. Then the cycle is repeated. I tried to reinstall it which was not helpful. I'm using Arch ( 4.7.6-1-ARCH ) on a laptop. Here are the log messages (I removed xscreensaver: in the beginning of all lines). The event I did not trigger is ClientMessage at 10:48:47: 10:48:29: 0: grabbing keyboard on 0xd4... AlreadyGrabbed.10:48:30: 0: grabbing keyboard on 0xd4... GrabSuccess.10:48:30: 0: grabbing mouse on 0xd4... GrabSuccess.10:48:47: DEACTIVATE ClientMessage received.10:48:47: user is active (ClientMessage)10:48:47: pam_start ("xscreensaver", "xenohunter", ...) ==> 0 (Success)10:48:47: pam_set_item (p, PAM_TTY, ":0.0") ==> 0 (Success)10:48:47: pam_authenticate (...) ...10:48:47: pam_conversation (ECHO_OFF="Password: ") ...10:48:47: 0: mouse is at 1047,514.10:48:47: 0: creating password dialog ("")10:48:47: grabbing server...10:48:47: 0: ungrabbing mouse (was 0xd4).10:48:47: 0: grabbing mouse on 0x140003c... GrabSuccess.10:48:47: ungrabbing server.10:49:17: input timed out.10:49:17: pam_conversation (...) ==> PAM_CONV_ERR10:49:17: pam_authenticate (...) ==> 20 (Authentication token manipulation error)10:49:17: pam_end (...) ==> 0 (Success)10:49:17: authentication via PAM timed out.10:49:17: grabbing server...10:49:17: 0: ungrabbing mouse (was 0x140003c).10:49:17: 0: grabbing mouse on 0xd4... GrabSuccess.10:49:17: ungrabbing server.10:49:17: 0: moving mouse back to 1047,514.10:49:17: discarding MotionNotify event.10:49:17: 0: destroying password dialog. UPD 2016-10-11 I printed journalctl -p 3 -xb and got tons of such lines: Oct 08 14:02:57 regulus xscreensaver[12913]: pam_unix(xscreensaver:auth): conversation failedOct 08 14:02:57 regulus xscreensaver[12913]: pam_unix(xscreensaver:auth): auth could not identify password for [xenohunter]Oct 08 14:03:37 regulus xscreensaver[12913]: pam_unix(xscreensaver:auth): conversation failedOct 08 14:03:37 regulus xscreensaver[12913]: pam_unix(xscreensaver:auth): auth could not identify password for [xenohunter]Oct 08 14:04:17 regulus xscreensaver[12913]: pam_unix(xscreensaver:auth): conversation failedOct 08 14:04:17 regulus xscreensaver[12913]: pam_unix(xscreensaver:auth): auth could not identify password for [xenohunter]Oct 08 14:04:57 regulus xscreensaver[12913]: pam_unix(xscreensaver:auth): conversation failedOct 08 14:04:57 regulus xscreensaver[12913]: pam_unix(xscreensaver:auth): auth could not identify password for [xenohunter] The cycle is always 40 seconds which is likely to be the time the password prompt reappears. I did evtest /dev/input/event${X} where ${X} is every id from xinput list . Plus, I did the same for event streams with id=0 and id=1 which are physical mouse and keyboard. All those streams are empty when the password prompt appears. | I know this is an old thread but disabling Presentation mode in xfce4-power-manager applet fixed this | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/315064",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129826/"
]
} |
315,097 | When attempting to create an audio alarm (or a display alarm with an audio component), Kalarm plays the audio if the file type is .ogg but refuses for files of type .mp3. When running Kalarm from terminal, this information is given: structure: gstreamer1(decoder-audio/mpeg)(mpegversion=1)(layer=3)()(64bit) ** Message: PackageKit: Did not install codec: GDBus.Error:org.freedesktop.DBus.Error.UnknownInterface: No such interface 'org.freedesktop.PackageKit.Modify2' at object path '/org/freedesktop/PackageKit' How can I get Kalarm to play audio alarms? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/315097",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3056/"
]
} |
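The PackageKit message above means the automatic codec install failed; the usual manual fix for this question is to install an MP3-capable GStreamer 1.x plugin yourself. Package names vary by distribution — these are hedged examples, not verified against the asker's system:

```bash
sudo dnf install gstreamer1-plugin-mpg123          # Fedora
sudo apt-get install gstreamer1.0-plugins-ugly     # Debian/Ubuntu
```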
315,104 | When I'm supplying username and password to telnet to login, it is automatically closed after the following command: echo password | telnet mymachine -l mysuername The console outputs: Connected to mymachine.Connection closed by foreign host. Then the connection is closed. Is there a way to maintain the connection while still supplying username and password to the login at one time? Update:I wrote an alias according to the accepted answer that does the login without typing username and password everytime: alias logon='rm loginscript; echo \#\!\/usr\/bin\/expect >> loginscript ; echo spawn telnet remotemachine -l username >> loginscript ; echo expect Password: >> loginscript ; echo send password"\\r" >> loginscript ; echo expect -- \{\$ \} >> loginscript ; echo interact >> loginscript; chmod 777 loginscript; ./loginscript' | If you want to continue with an interactive session, the usual way is to use expect to do the login: #!/usr/bin/expectspawn telnet mymachine -l myusernameexpect Password:send password\rexpect -- {$ }interact | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/315104",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194026/"
]
} |
315,151 | I just read the answers to Removing a newline character at the end of a file and everyone said to delete the last character. My question is, isn't the eof character the last one? | ASCII control characters have definitions from the 1960s (actually preceding what you might consider a network ). Not all of those control characters are used in the way that they were defined for telecommunications equipment back then. On Unix-like systems, there is no need for an EOF character; none is used. The system can tell applications how many bytes are in a file: On some other systems (seen in VMS, DOS, Windows), a control-Z may act as an end-of-file marker because in older versions the system could not tell some applications how many bytes are in the file. In the case of VMS, the limitation was due to the way the C runtime worked. Assembly-language applications could (and did) get the correct file size. Unix systems in the shell conventionally use control-D to tell an application that an end of input (file) has been reached, but the control-D is not stored in the file. In C, EOF is purposely made -1 to indicate that it is not a valid character. Standard I/O returns EOF when an end-of-file condition is detected — not a special character. By the way, files need not end with a newline (ASCII line-feed) character. Text editors can cope with files which are all printable text but lack a trailing newline. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/315151",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194076/"
]
} |
315,158 | I have a tree of source code and I am looking to find all files that contain a certain word and must not contain a second word. This, because I need to update older files to include some newer code. I know I can use find, but I feel like if I try to chain grep statements it won't work because the second grep statement is going to be searching the results of the first and then I got lost. I tried: find . -type f -name "*.c" -exec grep -Hl "ABC" {} \; | grep -L "123" And this totally did not work. Any help would be appreciated. | Since the exit status of grep indicates whether or not it found a match, you should be able to test that directly as a find predicate (with the necessary negation, ! or -not ) e.g. find . -type f -name "*.c" \( -exec grep -q "ABC" {} \; ! -exec grep -q "123" {} \; \) -print -q makes grep exit silently on the first match - we don't need to hear from it because we let find print the filename. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/315158",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194078/"
]
} |
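The pipeline the asker tried fails because the second grep reads the file names as data rather than as files to search; feeding them to grep -L via xargs is the closest fix to the original attempt (breaks on file names with whitespace — the find version above is safer):

```bash
grep -rl --include='*.c' 'ABC' . | xargs grep -L '123'
```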
315,159 | I'm using Parabola (Arch based) with OpenRC. It had originally Systemd, but then I moved to OpenRC. I don't know why but when I turn on my PC, it appears the message error "Failed to find cpu0 device node" and "starting version 231" from Systemd, and then it displays "OpenRC [version] is starting up Parabola (i386)" It does not harm my system, and does not matter, but I would like to know why this happens and if it could be removed Thanks in advance :) | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/315159",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191694/"
]
} |
315,161 | I have read this post - How to dd a remote disk using SSH on local machine and save to a local disk However, the following code doesn't work: "dd if=/dev/sda | gzip -1 -" | dd of=image.gz As I do not have any /dev/sda directory. I'm assuming this is because I'm using Debian 7.0. Any help to take a complete copy of my vps is highly appreciated. | The device name of a disk depends on what type of disk it is (more precisely, on what type of bus and controller the disk is connected to, and what driver handles them). /dev/sda is the typical name for the first disk in a PC (other names can be used depending on the driver, e.g. for some older types of disk controllers or some hardware RAID controllers). In a VPS, the disk is normally a virtual one, and the device name depends on the virtualization technology, e.g. /dev/vda or /dev/xvda . You can find the block device names on your system with df or lsblk . But do not use this to make a backup from a live system ! You are very likely to end up with an unreadable backup. If the disk content changes while you're making the backup, which is unavoidable on a live system (e.g. a log gets written somewhere), then your image will be inconsistent — a bit of the old state, a bit of the new state — and may not be recoverable. The preferred way to make a backup is to make a snapshot , i.e. a view of the filesystem that's frozen and doesn't change even as the real system keeps changing. How to do that, and whether it is at all possible, depends on how your system is set up. Some filesystem types such as btrfs and zfs have a built-in snapshot ability. LVM also can make snapshots of a volume. If you can't make a snapshot (or even if you can) make a file-level backup. A file-level backup from a system that changes will be inconsistent, but if you don't make “important” changes then the backup will be usable. For example, if you move files around, they may be omitted from the backup if they happen to be moved from a directory that the backup program has not traversed yet to one that it has already traversed. On the other hand, if a log file keeps growing, you'll have some intermediate version of that file, but you won't have a damaged filesystem image. If a file is deleted and another file is created, you may have none of those files, or either one, or both, but you won't have the directory entry of the old file pointing at some data in the new file. You can use GNU tar (as root!) to back up a Linux system complete with important metadata. ssh root@vps 'tar -cJf - --acls --selinux --one-file-system /' >vps.tar.xz | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/315161",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174929/"
]
} |
315,169 | This question is about executing /usr/bin/Xorg directly on Ubuntu 14.04. And I know there exists Xdummy, but I couldn't make the dummy driver work properly with the nvidia GPU so it's not an option. I copied the system-wide xorg.conf and /usr/lib/xorg/modules , and modified them a little bit. (Specified ModulePath in my xorg.conf too) Running the following command as root works fine: Xorg -noreset +extension GLX +extension RANDR +extension RENDER -logfile ./16.log -config ./xorg.conf :16 But if I do that as a non-root user (the log file permission is OK), this error occurs: (EE) Fatal server error:(EE) xf86OpenConsole: Cannot open virtual console 9 (Permission denied)(EE) (EE) Please consult the The X.Org Foundation support at http://wiki.x.org for help. (EE) Please also check the log file at "./16.log" for additional information.(EE) (EE) Server terminated with error (1). Closing log file. Could you please help me to run Xorg without sudo?? | To determine who is allowed to run X configure it with dpkg-reconfigure x11-common There are three options: root only, console users only, or anybody. The entry is located in /etc/X11/Xwrapper.config . Since Debian 9 and Ubuntu 16.04 this file does not exist. After installing xserver-xorg-legacy , the file reappears and its content has to be changed from: allowed_users=console to: allowed_users=anybodyneeds_root_rights=yes You also need to specify the virtual terminal to use when starting X, otherwise, errors may occur. For example: Xorg :8 vt8 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/315169",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/192990/"
]
} |