129,084
Suppose that I have a variable var in bash. I can assign a value to it. For example, I will make it a string: var="Test" I want to echo the name of var, not the value held by var. (I can do the latter with echo $var, but I actually want to do the former.) The answer to this question from SO says to use echo ${!var}, but when I do that, echo just returns a blank line. For example, this bash script #!/bin/bash echo "Hi" var="Test" echo ${!var} echo "Bye" returns this output: Hi Bye with just a blank line between Hi and Bye, instead of var. What am I doing wrong? I'm running bash 4.1.5(1) on Ubuntu 10.04.4.
The shell parameter expansion ${!name@} or ${!name*} could do the trick: $ foo=bar $ var_name=(${!foo@}) $ echo $var_name" = "$foo foo = bar Although feasible, I can't imagine the utility of this...
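One place where this kind of expansion does come in handy is enumerating variables by name prefix and printing each name with its value — a small bash sketch tied to the question's var="Test" example:

for name in "${!var@}"; do                # every variable whose name starts with "var"
    printf '%s=%s\n' "$name" "${!name}"   # the name, then its value via indirect expansion
done
# prints: var=Test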
{ "source": [ "https://unix.stackexchange.com/questions/129084", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9605/" ] }
129,143
I found the .bashrc file and I want to know the purpose/function of it. Also how and when is it used?
.bashrc is a Bash shell script that Bash runs whenever it is started interactively. It initializes an interactive shell session. You can put any command in that file that you could type at the command prompt. You put commands here to set up the shell for use in your particular environment, or to customize things to your preferences. Common things to put in .bashrc are aliases that you want to always be available. .bashrc runs on every interactive shell launch. If you say: $ bash ; bash ; bash and then hit Ctrl-D three times, .bashrc will run three times. But if you say this instead: $ bash -c exit ; bash -c exit ; bash -c exit then .bashrc won't run at all, since -c makes the Bash call non-interactive. The same is true when you run a shell script from a file. Contrast .bash_profile and .profile, which are only run at the start of a new login shell (bash -l). You choose whether a command goes in .bashrc vs .bash_profile depending on whether you want it to run once or for every interactive shell start. As a counterexample to aliases, which I prefer to put in .bashrc, you want to do PATH adjustments in .bash_profile instead, since these changes are typically not idempotent: export PATH="$PATH:/some/addition" If you put that in .bashrc instead, every time you launched an interactive sub-shell, :/some/addition would get tacked onto the end of the PATH again, creating extra work for the shell when you mistype a command. You get a new interactive Bash shell whenever you shell out of vi with :sh, for example.
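As a concrete illustration of that split, here is a minimal sketch (the file contents are just an example arrangement, not the only reasonable one):

# ~/.bash_profile -- read once, at login
export PATH="$PATH:$HOME/bin"        # non-idempotent tweaks go here
[ -f ~/.bashrc ] && . ~/.bashrc      # common convention: pull in the interactive settings too

# ~/.bashrc -- read by every interactive shell
alias ll='ls -l'                     # aliases you always want available
alias grep='grep --color=auto'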
{ "source": [ "https://unix.stackexchange.com/questions/129143", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
129,159
I need to record every keystroke and store it in a file in my home directory ~. When using my account, I am not a sudoer and I cannot install programs (like logKeys) in any way. How could I do so using the terminal? NOTE: This question is not a duplicate of the other mentioned question; in this question I'm asking about every keystroke, while in the other the asker asked about keystrokes in a terminal session.
xinput test can report all keyboard events to the X server. On a GNU system: xinput list | grep -Po 'id=\K\d+(?=.*slave\s*keyboard)' | xargs -P0 -n1 xinput test If you want to get key names from the key codes, you could post-process that output with: awk 'BEGIN{while (("xmodmap -pke" | getline) > 0) k[$2]=$4} {print $0 "[" k[$NF] "]"}' Add > file.log to store in a log file. Or | tee file.log to both log and see it. xinput queries the XinputExtension of the X server. That's as close as you're going to get as a standard (I am not aware of any standard that covers X utilities) or common command to do that. That also does not require root privileges. If the X server and xinput support version 2 of the XinputExtension, you can use test-xi2 instead of test which gives more information, in particular the state of the modifiers (shift, ctrl, alt...). Example: $ xinput test-xi2 --root EVENT type 2 (KeyPress) device: 11 (11) detail: 54 flags: root: 846.80/451.83 event: 846.80/451.83 buttons: modifiers: locked 0 latched 0 base 0x4 effective: 0x4 group: locked 0 latched 0 base 0 effective: 0 valuators: windows: root 0x26c event 0x26c child 0x10006e6 You can translate the keycode (in detail ) to a keysym with the help of xmodmap -pke again, and the effective modifier bitmask to something more helpful with the help of xmodmap -pm . For instance: xinput test-xi2 --root | perl -lne ' BEGIN{$"=","; open X, "-|", "xmodmap -pke"; while (<X>) {$k{$1}=$2 if /^keycode\s+(\d+) = (\w+)/} open X, "-|", "xmodmap -pm"; <X>;<X>; while (<X>) {if (/^(\w+)\s+(\w*)/){($k=$2)=~s/_[LR]$//;$m[$i++]=$k||$1}} close X; } if (/^EVENT type.*\((.*)\)/) {$e = $1} elsif (/detail: (\d+)/) {$d=$1} elsif (/modifiers:.*effective: (.*)/) { $m=$1; if ($e =~ /^Key/){ my @mods; for (0..$#m) {push @mods, $m[$_] if (hex($m) & (1<<$_))} print "$e $d [$k{$d}] $m [@mods]" } }' would output: KeyPress 24 [q] 0x19 [Shift,Alt,Num_Lock] when I press Shift+Alt+q when num-lock is on. Note that you don't need to have super-user privileges to install a program. If you have write access to somewhere on the file system where execute permission is granted (your home directory, /tmp , /var/tmp ...) then you can copy an xinput command from a compatible system there and execute it.
{ "source": [ "https://unix.stackexchange.com/questions/129159", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29336/" ] }
129,231
In /etc/profile I see this: for i in /etc/profile.d/*.sh ; do if [ -r "$i" ]; then if [ "${-#*i}" != "$-" ]; then . "$i" else . "$i" >/dev/null 2>&1 fi fi done What does ${-#*i} mean? I cannot find a definition of a parameter expansion starting with ${-.
$- contains the current option flags set by the shell itself, on invocation, or using the set builtin command: $ echo $- himBH $ set -a $ echo $- ahimBH "${-#*i}" is syntax for string removal (from the POSIX documentation): ${parameter#[word]} Remove Smallest Prefix Pattern. The word shall be expanded to produce a pattern. The parameter expansion shall then result in parameter, with the smallest portion of the prefix matched by the pattern deleted. If present, word shall not begin with an unquoted '#'. ${parameter##[word]} Remove Largest Prefix Pattern. The word shall be expanded to produce a pattern. The parameter expansion shall then result in parameter, with the largest portion of the prefix matched by the pattern deleted. So ${-#*i} removes the shortest prefix up to and including the first i character: $ echo "${-#*i}" mBH In your case, if [ "${-#*i}" != "$-" ] checks whether your shell is interactive: the two strings differ only if $- contains an i.
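An equivalent, arguably more readable way to perform the same interactivity test is to match $- directly — a small sketch:

case $- in
    *i*) echo "interactive shell" ;;
    *)   echo "non-interactive shell" ;;
esac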
{ "source": [ "https://unix.stackexchange.com/questions/129231", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67540/" ] }
129,305
I cannot see any USB devices within my VirtualBox guest VMs from my host. How do I enable access for my guest VMs?
In order to enable access to these devices you'll need to add your username to the group vboxusers . $ sudo usermod -a -G vboxusers <username> Example $ sudo usermod -a -G vboxusers saml You can confirm the change afterwards: $ groups saml saml : saml wheel vboxusers wireshark After doing the above you'll want to log out and log back in, so that the newly added group gets picked up by your user account. Then from the VirtualBox GUI you'll be able to right click on the USB icon in the lower right group of icons, and select whatever USB devices you want to give control over to your running guest VM. Detecting USB devices You can use VirtualBox's little-known command-line tool VBoxManage to list out the USB devices that are accessible. This is a good way to also confirm that the group addition made above to your username is being picked up correctly. Example without group $ VBoxManage list usbhost Host USB Devices: <none> with group $ VBoxManage list usbhost | head -19 Host USB Devices: UUID: abcd1234-123a-2345-b1e0-8a0b1c1f2511 VendorId: 0x046d (046D) ProductId: 0x0809 (0809) Revision: 0.9 (0009) SerialNumber: ABC34567 Address: sysfs:/sys/devices/pci0000:00/0000:00:12.2/usb1/1-4//device:/dev/vboxusb/001/004 Current State: Busy UUID: d2abc46d-123-1234-b8c3-691a7ca551ce VendorId: 0x046d (046D) ProductId: 0xc504 (C504) Revision: 19.16 (1916) Manufacturer: Logitech Product: USB Receiver Address: sysfs:/sys/devices/pci0000:00/0000:00:12.0/usb3/3-3//device:/dev/vboxusb/003/003 Current State: Busy ... References VirtualBox USB support on Fedora. The right way. Set up USB for Virtualbox 3.10.1. USB settings - VirtualBox documentation
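If you want to try the new group without logging all the way out, one stopgap (it only affects the shell it starts, so it does not replace re-logging in for the GUI session) is:

newgrp vboxusers    # start a shell with the vboxusers group active
id -nG              # vboxusers should now appear in the list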
{ "source": [ "https://unix.stackexchange.com/questions/129305", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7453/" ] }
129,312
I run the following vim command to change the colour of highlighted columns to something more palatable than the default red: :highlight ColorColumn ctermbg=235 guibg=#2c2d27 Rather than run this manually every time I start vim, I'd like to automate this. But how? I've tried adding the following to .vimrc : highlight ColorColumn ctermbg=235 guibg=#2c2d27 But that has no effect (no errors, it's just ignored after restart). Am I doing something wrong? I got the command from this Q: https://stackoverflow.com/questions/2447109/showing-a-different-background-colour-in-vim-past-80-characters But it didn't seem to shed light on my particular problem.
{ "source": [ "https://unix.stackexchange.com/questions/129312", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65634/" ] }
129,355
Many packages are available in both GTK and QT versions. What's the difference between them? Is there any difference in performance or working method?
In general, the difference from a user perspective should be purely cosmetic. Qt and GTK set themes independently (via, e.g., qtconfig or gtk-chtheme ) but this is harmonized by some desktop environments. GNU/Linux tends to be more GTK oriented than Qt in the sense that the former is more commonly used, so you may want to prefer it when given the choice; an exception would be under KDE, which uses Qt for its own applications -- although none of these factors is very important. A more significant issue would be in contexts with very limited RAM (as in, < 1/4 GB); in this case you would probably want the system to use exclusively GTK or Qt, but not both. On most modern desktops with gigabytes of memory, however, this is not a concern -- using both extensively might cost you an extra (wild guess) 50-100 MB. Note that there are also different versions of both GTK (2 and 3) and Qt (3 and 4) still widely used but not backward compatible (so a Qt 3 application cannot use Qt 4 libraries). However, both versions may exist on a system at the same time and the most serious consequence of this would be the potential for confusion and a bit more memory bloat.
{ "source": [ "https://unix.stackexchange.com/questions/129355", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
129,391
Is there any easy way to pass (receive) named parameters to a shell script? For example, my_script -p_out '/some/path' -arg_1 '5' And inside my_script.sh receive them as: # I believe this notation does not work, but is there anything close to it? p_out=$ARGUMENTS['p_out'] arg1=$ARGUMENTS['arg_1'] printf "The Argument p_out is %s" "$p_out" printf "The Argument arg_1 is %s" "$arg1" Is this possible in Bash or Zsh?
If you don't mind being limited to single-letter argument names, i.e. my_script -p '/some/path' -a5 , then in bash you could use the built-in getopts , e.g. #!/bin/bash while getopts ":a:p:" opt; do case $opt in a) arg_1="$OPTARG" ;; p) p_out="$OPTARG" ;; \?) echo "Invalid option -$OPTARG" >&2 exit 1 ;; esac case $OPTARG in -*) echo "Option $opt needs a valid argument" exit 1 ;; esac done printf "Argument p_out is %s\n" "$p_out" printf "Argument arg_1 is %s\n" "$arg_1" Then you can do $ ./my_script -p '/some/path' -a5 Argument p_out is /some/path Argument arg_1 is 5 There is a helpful Small getopts tutorial or you can type help getopts at the shell prompt. Edit: The second case statement in the while loop triggers if the -p option has no argument and is followed by another option, e.g. my_script -p -a5 , and exits the program.
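getopts only handles single-letter options, so if you really want the long names from the question, a minimal hand-rolled parser is the usual fallback — a sketch that assumes every option is followed by a value and does no other validation:

while [ $# -gt 0 ]; do
    case $1 in
        -p_out) p_out=$2; shift 2 ;;
        -arg_1) arg_1=$2; shift 2 ;;
        *) echo "Unknown option: $1" >&2; exit 1 ;;
    esac
done
printf "The Argument p_out is %s\n" "$p_out"
printf "The Argument arg_1 is %s\n" "$arg_1"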
{ "source": [ "https://unix.stackexchange.com/questions/129391", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
129,497
Can someone explain the difference between the UUIDs reported by blkid and mdadm ? On one of our CentOS systems, for example: [root@server ~]# blkid | grep /dev/md1 /dev/md1: UUID="32cb0a6e-8148-44e9-909d-5b23df045bd1" TYPE="ext4" [root@server ~]# mdadm --detail /dev/md1 | grep UUID UUID : f204c558:babf732d:85bd7296:bbfebeea Why are they different and how would we change the UUID used by mdadm ? I understand we would use tune2fs to change the UUID for the partition (which would change what is returned by blkid ) but I'm not sure how to change what mdadm uses.
The first one reports the UUID of the ext4 filesystem on the md block device. It helps the system identify the filesystem uniquely among the filesystems available on the system. That is stored in the structure of the filesystem, that is in the data stored on the md device. The second one is the UUID of the RAID device. It helps the md subsystem identify that particular RAID device uniquely. In particular, it helps identify all the block devices that belong to the RAID array. It is stored in the metadata of the array (on each member). Array members also have their own UUID in the md system; they may also have partition UUIDs if they are GPT partitions (stored in the GPT partition table), or LVM UUIDs if they are LVM volumes, and so on. blkid is a bit misleading, as what it returns is the ID of the structure stored on the device (for those kinds of structures it knows about, like most filesystems, LVM members and swap devices). Also note that it's not uncommon to have block devices with structures with identical UUIDs (for instance LVM snapshots). And a block device can contain anything, including things whose structure doesn't include a UUID. So, as an example, you could have a system with 3 drives, with GPT partitioning. Those drives could have a World Wide Name which identifies them uniquely. Let's say the 3 drives are partitioned with one partition each ( /dev/sd[abc]1 ). Each partition will have a GPT UUID stored in the GPT partition table. If those partitions make up an md RAID5 array, each will get an md UUID as a RAID member, and the array will get a UUID as an md RAID device. That /dev/md0 can be further partitioned with MSDOS or GPT-type partitioning. For instance, we could have a /dev/md0p1 partition with a GPT UUID (stored in the GPT partition table that is stored in the data of /dev/md0). That could in turn be a physical volume for LVM. As such it will get a PV UUID. The volume group will also have a VG UUID. In that volume group, you would create logical volumes, each getting an LV UUID. On one of those LVs (like /dev/VG/LV ), you could make an ext4 filesystem. That filesystem would get an ext4 UUID. blkid /dev/VG/LV would get you the (ext4) UUID of that filesystem. But as a partition inside the VG volume, it would also get a partition UUID (some partitioning schemes like MSDOS/MBR don't have UUIDs). That volume group is made of member PVs which are themselves other block devices. blkid /dev/md0p1 would give you the PV UUID. It also has a partition UUID in the GPT table on /dev/md0 . /dev/md0 itself is made of other block devices. blkid /dev/sda1 will return the raid-member UUID. It also has a partition UUID in the GPT table on /dev/sda .
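On the "how would we change it" part of the question: the md UUID lives in the array metadata, so it is normally rewritten at assembly time. A sketch using mdadm's --update option — treat this as an assumption to verify against your mdadm version's man page, do it with the array stopped and unmounted, and keep backups; the member device names are placeholders:

mdadm --stop /dev/md1
mdadm --assemble /dev/md1 --update=uuid \
      --uuid=00112233:44556677:8899aabb:ccddeeff /dev/sda2 /dev/sdb2
# omitting --uuid should let mdadm generate a random new one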
{ "source": [ "https://unix.stackexchange.com/questions/129497", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66537/" ] }
129,555
What is the difference between the command $ env FOO=bar baz and $ FOO=bar baz What effect does env have?
They are functionally equivalent. The main difference is that env FOO=bar baz involves invoking an intermediary process between the shell and baz , whereas with FOO=bar baz the shell directly invokes baz . So in that regard, FOO=bar baz is preferred. The only situation I find myself using env FOO=bar in is where I have to pass a command to another command. As a specific example, let's say I have a wrapper script that performs some modifications of the environment, and then calls exec on the command that was passed to it, such as: #!/bin/bash FOO=bob some stuff exec "$@" If you execute it as myscript FOO=bar baz , the exec will throw an error as exec FOO=bar baz is invalid. Instead you call it as myscript env FOO=bar baz which gets executed as exec env FOO=bar baz , and is perfectly valid.
{ "source": [ "https://unix.stackexchange.com/questions/129555", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36718/" ] }
129,599
I have a bash script that creates a '.tar' file. Once the file is created, I would like to test its integrity and send an email to the root user if the integrity is bad. I know I would need to use the command tar -tf /root/archive.tar to check the integrity of the file, but how would I implement this in a bash if statement and check for errors?
If tar finds errors in its input it will exit(3) ¹ with a non-zero exit value. This — with most tar implementations — is also done when listing archive contents with t . So you could simply check for the exit value of tar to determine if something has gone wrong: if ! tar tf /root/archive.tar &> /dev/null; then write_an_email_to_root fi If your tar does not find all errors with t , you could still extract the archive to stdout and redirect stdout to /dev/null , which would be the slower but more reliable approach: if ! tar xOf /root/archive.tar &> /dev/null; then write_an_email_to_root fi ¹ This notation denotes the manpage, not the actual call. See man 3 exit .
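To actually notify root as the question asks, the placeholder function could be filled in along these lines — a sketch that assumes a working local MTA and the mail/mailx command:

write_an_email_to_root() {
    printf 'Integrity check failed for /root/archive.tar\n' |
        mail -s 'Backup archive verification failed' root
}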
{ "source": [ "https://unix.stackexchange.com/questions/129599", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66537/" ] }
130,730
I have the following in my .tmux.conf set -g prefix M-j bind-key j send-prefix I need to press ( Alt + J ) + ( J ) + bound-key to send something to the nested tmux session. I feel it is rather slow. Is there any better way? For example, I would love to be able to do ( Alt + J ) + (2x bound-key) to do stuff in the nested session. I constantly execute commands in the top tmux session instead of executing them in the nested one. Also, how come everybody binds prefix to C-a ? I find it awfully slow and unpleasant to type this combination. Am I missing something?
It is one less keypress to send a command to your nested session if you choose a different key. I use Ctrl t for my standard prefix, and Ctrl a for nested sessions. # set prefix key to ctrl+t unbind C-b set -g prefix C-t # send the prefix to client inside window bind-key -n C-a send-prefix Note that I use the -n switch. From the bind-key entry in man tmux : if -n is specified, it is not necessary to use the prefix key, command is bound to key alone. So, as an example, Ctrl t , c opens a new window in tmux; Ctrl a , c does the same in the nested session.
{ "source": [ "https://unix.stackexchange.com/questions/130730", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67833/" ] }
130,786
I use Fedora and these directories contain a large number of files. I wonder whether I can delete them? The system is running low on space.
journal logs Yes you can delete everything inside of /var/log/journal/* but do not delete the directory itself. You can also query journalctl to find out how much disk space it's consuming: $ journalctl --disk-usage Journals take up 3.8G on disk. You can control the size of this directory using this parameter in your /etc/systemd/journald.conf : SystemMaxUse=50M You can force a log rotation: $ sudo systemctl kill --kill-who=main --signal=SIGUSR2 systemd-journald.service NOTE: You might need to restart the logging service to force a log rotation, if the above signaling method does not do it. You can restart the service like so: $ sudo systemctl restart systemd-journald.service abrt logs These files too under /var/cache/abrt-di/* can be deleted as well. The size of the log files here is controlled under: $ grep -i size /etc/abrt/abrt.conf # Max size for crash storage [MiB] or 0 for unlimited MaxCrashReportsSize = 1000 You can control the max size of /var/cache/abrt-di by changing the following in file, /etc/abrt/plugins/CCpp.conf : DebugInfoCacheMB = 2000 NOTE: If not defined DebugInfoCacheMB defaults to 4000 (4GB). References Is it safe to delete /var/log/journal log files? Surprising behaviour when out of disk space 19.4. Generating Backtraces
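On newer systemd versions (roughly 218 and later — verify that your version supports these flags), journalctl can also trim the journal directly instead of deleting files by hand:

sudo journalctl --vacuum-size=100M     # keep at most ~100 MiB of archived journals
sudo journalctl --vacuum-time=2weeks   # or drop entries older than two weeks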
{ "source": [ "https://unix.stackexchange.com/questions/130786", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15978/" ] }
130,797
Is it possible to setup a Linux system so that it provides more than 65,535 ports? The intent would be to have more than 65k daemons listening on a given system. Clearly there are ports being used so this is not possible for those reasons, so think of this as a theoretical exercise in trying to understand where TCP would be restrictive in doing something like this.
Looking at the RFC for TCP: RFC 793 - Transmission Control Protocol , the answer would seem to be no, because a TCP header is limited to 16 bits for the source/destination port fields. Does IPv6 improve things? No. Even though IPv6 gives us a much larger IP address space (128 bits vs. 32 bits), it makes no attempt to improve the TCP packet limitation of 16 bits for the port numbers. Interestingly, looking at the RFC for IPv6: Internet Protocol, Version 6 (IPv6) Specification , it is the IP address fields that needed to be expanded, and when TCP runs over IPv6 the method used to compute the checksum changes accordingly, as per RFC 2460 : Any transport or other upper-layer protocol that includes the addresses from the IP header in its checksum computation must be modified for use over IPv6, to include the 128-bit IPv6 addresses instead of 32-bit IPv4 addresses. So how can you get more ports? One approach would be to stack additional IP addresses using more interfaces. If your system has multiple NICs this is easier, but even with just a single NIC, one can make use of virtual interfaces (a.k.a. aliases ) to allocate more IPs if needed. NOTE: Using aliases has been supplanted by iproute2 , which you can use to stack IP addresses on a single interface (i.e. eth0 ) instead. Example $ sudo ip link set eth0 up $ sudo ip addr add 192.0.2.1/24 dev eth0 $ sudo ip addr add 192.0.2.2/24 dev eth0 $ ip addr show dev eth0 2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000 link/ether 00:d0:b7:2d:ce:cf brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global eth1 inet 192.0.2.2/24 scope global secondary eth1 Source: iproute2: Life after ifconfig References OpenWrt Wiki » Documentation » Networking » Linux Network Interfaces Some useful command with iproute2 Linux Advanced Routing & Traffic Control HOWTO Multiple default routes / public gateway IPs under Linux iproute2 cheat sheet - Daniil Baturin's website
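For example, to stack a few extra addresses on a single interface so that each one can carry its own full range of listening ports (addresses are from the documentation range; adjust to your own network and interface name):

for i in $(seq 3 10); do
    sudo ip addr add "192.0.2.$i/24" dev eth0
done
ip addr show dev eth0   # verify the secondary addresses were added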
{ "source": [ "https://unix.stackexchange.com/questions/130797", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7453/" ] }
130,958
I have switched over to zsh, and it is working fine. One strange thing, when I try to scp with a * wildcard, it does not work, and I have to drop into bash. The second command below works fine. Any ideas on why this would be and how to fix it? ~/dmp ⌚ 16:06:10 $ scp abc@123:/home/se/exports/201405091107/* . zsh: no matches found: root@uf3:/home/se/exports/201405091107/* ~/dmp ⌚ 16:06:53 $ bash sean@seanlaptop:~/dmp$ scp abc@123:/home/se/exports/201405091107/* .
The shell (both bash & zsh) tries to interpret abc@123:/home/se/exports/201405091107/* as a glob to match files on your local system. The shell doesn't know what scp is, or that you're trying to match remote files. The difference between bash and zsh is their default behavior when it comes to failed globbing. In bash, if a glob doesn't match anything, it passes the original glob pattern as an argument. In zsh it throws an error instead. To address the issue, you need to quote it so the shell doesn't try to interpret it as a local glob. scp 'abc@123:/home/se/exports/201405091107/*' . (other things like ...1107/'*' or ...1107/\* work too) If you want to change it so the zsh no-match behavior is the same as bash, you can do the following setopt nonomatch
{ "source": [ "https://unix.stackexchange.com/questions/130958", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63878/" ] }
130,971
Are these two commands any different on how they go about zero-ing out files? Is the latter a shorter way of doing the former? What is happening behind the scenes? Both $ cat /dev/null > file.txt $ > file.txt yield -rw-r--r-- 1 user wheel 0 May 18 10:33 file.txt
cat /dev/null > file.txt is a useless use of cat . Basically cat /dev/null simply results in cat outputting nothing. Yes it works, but it's frowned upon by many because it results in invoking an external process that is not necessary. It's one of those things that is common simply because it's common. Using just > file.txt will work on most shells, but it's not completely portable. If you want completely portable, the following are good alternatives: true > file.txt : > file.txt Both : and true output no data, and are shell builtins (whereas cat is an external utility), thus they are lighter and more 'proper'. Update: As tylerl mentioned in his comment, there is also the >| file.txt syntax. Most shells have a setting which will prevent them from truncating an existing file via > . You must use >| instead. This is to prevent human error when you really meant to append with >> . You can turn the behavior on with set -C . So with this, I think the simplest, most proper, and portable method of truncating a file would be: :>| file.txt
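A quick demonstration of that noclobber interaction, assuming file.txt already exists (the exact error wording varies by shell):

set -C            # turn on noclobber
: > file.txt      # fails: cannot overwrite existing file
:>| file.txt      # succeeds: the file is truncated despite noclobber
set +C            # restore the default behaviour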
{ "source": [ "https://unix.stackexchange.com/questions/130971", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4252/" ] }
130,985
I read here that the purpose of export in a shell is to make the variable available to sub-processes started from the shell. However, I have also read here and here that "Processes inherit their environment from their parent (the process which started them)." If this is the case, why do we need export ? What am I missing? Are shell variables not part of the environment by default? What is the difference?
Your assumption is that shell variables are in the environment . This is incorrect. The export command is what defines a name to be in the environment at all. Thus: a=1 b=2 export b results in the current shell knowing that $a expands to 1 and $b to 2, but subprocesses will not know anything about a because it is not part of the environment (even in the current shell). Some useful tools: set : Useful for viewing the current shell's parameters, exported-or-not set -k : Sets assigned args in the environment. Consider f() { set -k; env; }; f a=1 set -a : Tells the shell to put any name that gets set into the environment. Like putting export before every assignment. Useful for .env files, as in set -a; . .env; set +a . export : Tells the shell to put a name in the environment. Export and assignment are two entirely different operations. env : As an external command, env can only tell you about the inherited environment, thus, it's useful for sanity checking. env -i : Useful for clearing the environment before starting a subprocess. Alternatives to export : name=val command # Assignment before command exports that name to the command. declare/local -x name # Exports name, particularly useful in shell functions when you want to avoid exposing the name to outside scope. set -a # Exports every following assignment. Motivation So why do shells need to have their own variables and an environment that is different? I'm sure there are some historical reasons, but I think the main reason is scoping. The environment is for subprocesses, but there are lots of operations you can do in the shell without forking a subprocess. Suppose you loop: for i in {0..50}; do somecommand done Why waste memory for somecommand by including i , making its environment any bigger than it needs to be? What if the variable name you chose in the shell just happens to mean something unintended to the program? (Personal favorites of mine include DEBUG and VERBOSE . Those names are used everywhere and rarely namespaced adequately.) What is the environment if not the shell? Sometimes to understand Unix behavior you have to look at the syscalls, the basic API for interacting with the kernel and OS. Here, we're looking at the exec family of calls, which is what the shell uses when it creates a subprocess. Here's a quote from the manpage for exec(3) (emphasis mine): The execle() and execvpe() functions allow the caller to specify the environment of the executed program via the argument envp. The envp argument is an array of pointers to null-terminated strings and must be terminated by a NULL pointer. The other functions take the environment for the new process image from the external variable environ in the calling process. So writing export somename in the shell would be equivalent to copying the name to the global dictionary environ in C. But assigning somename without exporting it would be just like assigning it in C, without copying it to the environ variable.
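A two-line demonstration of the difference (running bash -c just gives us a convenient child process to inspect):

a=1; b=2; export b
bash -c 'echo "a=$a b=$b"'    # prints: a= b=2  -- only the exported name survives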
{ "source": [ "https://unix.stackexchange.com/questions/130985", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
131,011
I'm running tmux 1.6 and I'm trying to configure it to use vi-style keybindings as well as use the system clipboard when copying in interactive mode: set-window-option -g mode-keys vi bind-key -t vi-copy 'v' begin-selection bind-key -t vi-copy 'y' "copy-selection && run \"tmux save-buffer | xclip -selection clipboard\"" Simply put, I'd like to be able to do C + [ and then use v to begin selecting text for copying, then when y is pushed, copy the selection to the tmux selection and then export it to the system clipboard using xclip . Unfortunately, when I try to do this, I see the following: .tmux.conf: 14: unknown command: copy-selection && run "tmux save-buffer | xclip -selection clipboard" Is there a way to do this in tmux configuration?
This was also answered here , but it took me a while to understand how to use it, so I'll explain for anyone else that was confused. This is basically the setting you're going for: (for tmux versions <2.5 ) bind -t vi-copy y copy-pipe 'xclip -in -selection clipboard' (for tmux versions >=2.5 ) bind -T copy-mode-vi y send-keys -X copy-pipe-and-cancel 'xclip -in -selection clipboard' Then hit Ctrl+b [ to enter copy mode. Then hit Space followed by whatever vi movement keys to make a selection. Then, instead of hitting Enter , hit y and the selection will be copied to the clipboard. Note: this assumes you're using tmux's default bindings with vi keys. Tmux has different key binding tables for different modes. So, bind-key -t vi-copy y sets the action for the y key in copy mode. Initially, I was confused because I was used to hitting Enter after making a selection. Enter is actually just the default key binding for the copy-selection command (when in copy mode). The copy-pipe command allows us to bind a new key to pipe the selection to a command, which in this case is xclip . You can see all key bindings for copy mode by running list-keys -t vi-copy .
{ "source": [ "https://unix.stackexchange.com/questions/131011", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
131,073
I need to printf a number, but with a given width and rounded (with awk!). I have %10s and somehow I need to combine it with %d, but everything I do ends up with too many parameters for awk (because I have more columns).
You can try this: $ awk 'BEGIN{printf "%3.0f\n", 3.6}' 4 Our format option has two parts: 3 : meaning output will be padded to 3 characters. .0f : meaning no digits are printed after the decimal point, so the value is rounded to the nearest integer. From man awk , you can see more details: width The field should be padded to this width. The field is normally padded with spaces. If the 0 flag has been used, it is padded with zeroes. .prec A number that specifies the precision to use when printing. For the %e, %E, %f and %F, formats, this specifies the number of digits you want printed to the right of the decimal point. For the %g, and %G formats, it specifies the maximum number of significant digits. For the %d, %o, %i, %u, %x, and %X formats, it specifies the minimum number of digits to print. For %s, it specifies the maximum number of characters from the string that should be printed.
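To combine that rounding with the 10-character padding from the question's %10s, just widen the field (the brackets are only there to make the padding visible):

$ awk 'BEGIN{printf "[%10.0f]\n", 3.6}'
[         4]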
{ "source": [ "https://unix.stackexchange.com/questions/131073", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66407/" ] }
131,105
So I have a lot of data WITHOUT NEW LINES on the clipboard (it's a large SVG file on one line). I went $ cat >file.svg then tried to paste (in Gnome Terminal), but only the first 4kB characters were accepted. I assume this is a readline feature/limitation. Is there a way to read from STDIN that would avoid this problem? EDIT Test case: Create a demo file. This one will have ~4k "=" symbols followed by "foo bar". { printf '=%.0s' {1..4095} ; echo "foo bar" ; } > test.in Copy that into your clipboard xclip test.in (if you want to middle-click to insert) or xclip -selection clipboard test.in (if you want to use Ctrl-Shift-Insert to past it in) Then cat >test.out , paste (whichever way). Press Ctrl-D to end the stream. cat test.out - do you see "foo bar"? On my set-up (Ubuntu 12.04, Gnome Terminal, zsh) when I paste I only see the = and I don't see foo bar . Same when I inspect test.out .
If I understand the source correctly, under Linux, the maximum number of characters that can be read in one go on a terminal is determined by N_TTY_BUF_SIZE in the kernel source. The value is 4096. This is a limitation of the terminal interface, specifically the canonical (“cooked”) mode which provides an extremely crude line editor (backspace, enter, Ctrl + D at the start of a line for end-of-file). It happens entirely outside the process that's reading. You can switch the terminal to raw mode, which disables line processing. It also disables Ctrl + D and other niceties, putting an extra burden on your program. This is an ancient Unix limitation that's never been fixed because there's little motivation. Humans don't enter such long lines. If you were feeding input from a program, you'd redirect your program's input from a file or a pipe. For example, to use the content of the X clipboard, pipe from xsel or xclip . In your case: xsel -b >file.svg xclip -selection clipboard >file.svg Remove -b or -selection clipboard to use the X selection (the one that is set by highlighting with the mouse) rather than the clipboard. On OSX, use pbpaste to paste the clipboard content (and pbcopy to set it). You can access the X clipboard over SSH if you activate X11 forwarding with ssh -X (which some servers may forbid). If you can only use ssh without X11 forwarding, you can use scp , sftp or sshfs to copy a file. If pasting is the only solution because you can't forward the clipboard or you aren't pasting but e.g. faking typing into a virtual machine, an alternative approach is to encode the data into something that has newlines. Base64 is well-suited for this: it transforms arbitrary data into printable characters, and ignores whitespace when decoding. This approach has the additional advantage that it supports arbitrary data in the input, even control characters that the terminal would interpret when pasting. In your case, you can encode the content: xsel -b | base64 | xsel -b then decode it: base64 -d Paste Ctrl + D
{ "source": [ "https://unix.stackexchange.com/questions/131105", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23542/" ] }
131,180
The problem is I want to be able to see errors when moving a file, but not see errors with permissions problem. In other words - I care if the file is not fully transmitted, but don't want to see errors like this: mv: failed to preserve ownership for `/home/blah/backup/pgsql.tar.gz': Operation not permitted So I want something like: mv $backupfile $destination --ignore-permissions . The backup file can be anything from 1 MiB to 5 GiB and is transfered through NFS.
mv is the wrong tool for this job; you want cp and then rm . Since you're moving the file to another filesystem this is exactly what mv is doing behind the scenes anyway, except that mv is also trying to preserve file permission bits and owner/group information. This is because mv would preserve that information if it were moving a file within the same filesystem and mv tries to behave the same way in both situations. Since you don't care about the preservation of file permission bits and owner/group information, don't use that tool. Use cp --no-preserve=mode and rm instead.
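Put together for the script in the question, that could look like the following sketch (--no-preserve is a GNU cp option; the && makes sure the source is only removed after a successful copy):

cp --no-preserve=mode,ownership -- "$backupfile" "$destination" &&
    rm -- "$backupfile"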
{ "source": [ "https://unix.stackexchange.com/questions/131180", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26197/" ] }
131,186
I am writing a bash script that I want to echo out metadata (length, resolution etc.) of a set of videos (mp4) into a file. Is there a simple way to get this information from an MP4 file?
On a Debian-based system (but presumably, other distributions will also have mediainfo in their repositories): $ sudo apt-get install mediainfo $ mediainfo foo.mp4 That will spew out a lot of information. To get, for example, the length, resolution, codec and dimensions use: $ mediainfo "The Blues Brothers.mp4" | grep -E 'Duration|Format |Width|Height' | sort | uniq Duration : 2h 27mn Format : AAC Format : AVC Format : MPEG-4 Height : 688 pixels Width : 1 280 pixels
{ "source": [ "https://unix.stackexchange.com/questions/131186", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61235/" ] }
131,217
Below awk command removes all duplicate lines as explained here : awk '!seen[$0]++' If the text contains empty lines, all but one empty line will be deleted. How can I keep all empty lines whilst deleting all non-empty duplicate lines, using only awk ? Please, also include a brief explanation.
Another option is to check NF , eg: awk '!NF || !seen[$0]++' Or equivalently: awk '!(NF && seen[$0]++)'
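A quick check of the behaviour with a made-up input (duplicate non-empty lines are dropped, the empty line survives):

$ printf 'a\na\n\na\nb\n' | awk '!NF || !seen[$0]++'
a

b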
{ "source": [ "https://unix.stackexchange.com/questions/131217", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39845/" ] }
131,311
I am attempting to move some folders (such as /var and /home ) to a separate partition after reading this guide: 3.2.1 Choose an intelligent partition scheme I was able to move one folder successfully following this guide. However, it doesn't seem to work for multiple folders, and all my folders end up dumped into the partition without a proper directory structure. I would like to mount /var , /home , and /tmp onto the separate partition; can someone guide me on this?
1. First you need some unallocated space to create the partitions for each mountpoint (/var, /home, /tmp). Use Gparted for this. 2. Then you need to create the filesystems for those partitions (can be done with Gparted too) or use: mkfs.ext4 /dev/sdaX for example to create a new ext4 filesystem on the /dev/sdaX device (replace /dev/sdaX with your own device) 3. Mount the new filesystem under /mnt mkdir /mnt/var mount /dev/sdaX /mnt/var 4. Go to single-user mode so that there is no rw activity on the directory during the process init 1 5. Enter your root password. 6. Back up the data in /var only (not the /var directory itself) cd /var cp -ax * /mnt/var 7. Rename the /var directory after your data has been transferred successfully. cd / mv var var.old 8. Make the new var directory mkdir var 9. Unmount the new partition. umount /dev/sdaX 10. Remount it as /var mount /dev/sdaX /var 11. Edit the /etc/fstab file to include the new partition, with /var being the mount point, so that it will be automatically mounted at boot. /dev/sdaX /var ext4 defaults 0 0 12. Repeat steps 1-11 for /home and /tmp. 13. Finally, return to multi-user mode. init 5
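One small robustness tweak, not part of the original steps: reference the partition by UUID in /etc/fstab so the entry survives device renaming. A sketch:

blkid -s UUID -o value /dev/sdaX
# then use that value in /etc/fstab instead of the device name, e.g.:
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /var  ext4  defaults  0 0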
{ "source": [ "https://unix.stackexchange.com/questions/131311", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67199/" ] }
131,334
I have generated an RSA private key using the command below: openssl genrsa -out privkey.pem 2048 And created a self-signed certificate using the command below: openssl req -new -x509 -key privkey.pem -out cacert.pem -days 3650 Now I am trying to convert the cacert.pem file to a certificate.cer file. Any ideas?
You can use the following command: openssl x509 -inform PEM -in cacert.pem -outform DER -out certificate.cer
{ "source": [ "https://unix.stackexchange.com/questions/131334", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29378/" ] }
131,432
I would like to know how to determine which driver (out of those below) is handling my touchpad: appletouch.ko.gz, cyapa.ko.gz, sermouse.ko.gz, synaptics_usb.ko.gz, bcm5974.ko.gz, psmouse.ko.gz, synaptics_i2c.ko.gz, vsxxxaa.ko.gz
It's likely that none of them are doing it. On my system for example, where I'm using Fedora 19 and a Thinkpad 410 with a Synaptics touchpad, I have no kernel driver either. $ lsmod|grep -iE "apple|cyapa|sermouse|synap|psmouse|vsxx|bcm" So then what's taking care of this device? Well it's actually this kernel module: $ lsmod|grep -iE "input" uinput 17672 0 If you want to see more about this module you can use modinfo uinput : $ modinfo uinput filename: /lib/modules/3.13.11-100.fc19.x86_64/kernel/drivers/input/misc/uinput.ko version: 0.3 license: GPL description: User level driver support for input subsystem author: Aristeu Sergio Rozanski Filho alias: devname:uinput alias: char-major-10-223 ... As it turns out, input devices such as these are often dealt with at a higher level; in this case the actual drivers are implemented at the X11 level. uinput is a linux kernel module that allows to handle the input subsystem from user land. It can be used to create and to handle input devices from an application. It creates a character device in /dev/input directory. The device is a virtual interface, it doesn't belong to a physical device. SOURCE: Getting started with uinput: the user level input subsystem So then where are my touchpad drivers? They're in X11's subsystem. You can see the device using the xinput --list command. For example, here are the devices on my Thinkpad laptop: $ xinput --list ⎡ Virtual core pointer id=2 [master pointer (3)] ⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)] ⎜ ↳ Logitech USB Receiver id=9 [slave pointer (2)] ⎜ ↳ Logitech USB Receiver id=10 [slave pointer (2)] ⎜ ↳ SynPS/2 Synaptics TouchPad id=12 [slave pointer (2)] ⎜ ↳ TPPS/2 IBM TrackPoint id=13 [slave pointer (2)] ⎣ Virtual core keyboard id=3 [master keyboard (2)] ↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)] ↳ Power Button id=6 [slave keyboard (3)] ↳ Video Bus id=7 [slave keyboard (3)] ↳ Sleep Button id=8 [slave keyboard (3)] ↳ AT Translated Set 2 keyboard id=11 [slave keyboard (3)] ↳ ThinkPad Extra Buttons id=14 [slave keyboard (3)] Notice that my TouchPad shows up in this list. You can find out additional info about these devices through /proc , for example: $ cat /proc/bus/input/devices ... I: Bus=0011 Vendor=0002 Product=0007 Version=01b1 N: Name="SynPS/2 Synaptics TouchPad" P: Phys=isa0060/serio1/input0 S: Sysfs=/devices/platform/i8042/serio1/input/input5 U: Uniq= H: Handlers=mouse0 event4 B: PROP=9 B: EV=b B: KEY=6420 30000 0 0 0 0 B: ABS=260800011000003 ... OK but where's the driver? Digging deeper, if your system is using a Synaptics touchpad (and I believe they make ~90% of all touchpads), you can run locate synaptics | grep xorg , which should reveal the following files: $ locate synaptics | grep xorg /usr/lib64/xorg/modules/input/synaptics_drv.so /usr/share/X11/xorg.conf.d/50-synaptics.conf /usr/share/doc/xorg-x11-drv-synaptics-1.7.1 /usr/share/doc/xorg-x11-drv-synaptics-1.7.1/COPYING /usr/share/doc/xorg-x11-drv-synaptics-1.7.1/README The first result there is the actual driver you're asking about. It gets loaded into X.org via the second file here: Section "InputClass" Identifier "touchpad catchall" Driver "synaptics" MatchIsTouchpad "on" MatchDevicePath "/dev/input/event*" EndSection And this line: MatchDevicePath "/dev/input/event*" is what associates the physical devices with this driver. And you're probably asking yourself, how can this guy be so sure?
Using this command shows the device associated with my given Synaptic TouchPad using id=12 from the xinput --list output I showed earlier: $ xinput --list-props 12 | grep "Device Node" Device Node (251): "/dev/input/event4"
{ "source": [ "https://unix.stackexchange.com/questions/131432", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68217/" ] }
131,504
I've tried to search in ~/.bash_history for my recent commands while in a terminal session but they just weren't there. I guess this is because I have multiple terminal sessions open. Is there a way that I can sync (i.e. sync-push or sync-write-out) the current terminal session's command history into the bash_history file (without closing the session and losing that environment)? (It would be remotely similar in idea to how the sync command stores the file-system modifications on some systems.) I imagine I could set up bash to preserve multiple session history, but the ability to push the current history buffer would still be useful in scenarios when you are working on a new machine and you accidentally forgot to set up bash the way you would have wanted.
Add this line to .bashrc : export PROMPT_COMMAND="history -a; history -n" Open a new terminal and check. Explanation history -a appends new history lines to the history file. history -n tells bash to read any lines not yet read from the history file into the current session's history list. PROMPT_COMMAND : the contents of this variable are run as a regular command before bash shows the prompt. So every time after you execute a command, history -a; history -n is executed, and your bash history is synced.
{ "source": [ "https://unix.stackexchange.com/questions/131504", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18768/" ] }
131,535
Which is more efficient for finding which files in an entire filesystem contain a string: recursive grep or find with grep in an exec statement? I assume find would be more efficient because you can at least do some filtering if you know the file extension or a regex that matches the file name, but when you only know -type f which is better? GNU grep 2.6.3; find (GNU findutils) 4.4.2 Example: grep -r -i 'the brown dog' / find / -type f -exec grep -i 'the brown dog' {} \;
I'm not sure grep -r -i 'the brown dog' /* is really what you meant. That would mean grep recursively in all the non-hidden files and dirs in / (but still look inside hidden files and dirs inside those). Assuming you meant: grep -r -i 'the brown dog' / A few things to note: Not all grep implementations support -r . And among those that do, the behaviours differ: some follow symlinks to directories when traversing the directory tree (which means you may end up looking several times in the same file or even run in infinite loops), some will not. Some will look inside device files (and it will take quite some time in /dev/zero for instance) or pipes or binary files..., some will not. It's efficient as grep starts looking inside files as soon as it discovers them. But while it looks in a file, it's no longer looking for more files to search in (which is probably just as well in most cases) Your: find / -type f -exec grep -i 'the brown dog' {} \; (removed the -r which didn't make sense here) is terribly inefficient because you're running one grep per file. ; should only be used for commands that accept only one argument. Moreover here, because grep looks only in one file, it will not print the file name, so you won't know where the matches are. You're not looking inside device files, pipes, symlinks..., you're not following symlinks, but you're still potentially looking inside things like /proc/mem . find / -type f -exec grep -i 'the brown dog' {} + would be a lot better because as few grep commands as possible would be run. You'd get the file name unless the last run has only one file. For that it's better to use: find / -type f -exec grep -i 'the brown dog' /dev/null {} + or with GNU grep : find / -type f -exec grep -Hi 'the brown dog' {} + Note that grep will not be started until find has found enough files for it to chew on, so there will be some initial delay. And find will not carry on searching for more files until the previous grep has returned. Allocating and passing the big file list has some (probably negligible) impact, so all in all it's probably going to be less efficient than a grep -r that doesn't follow symlinks or look inside devices. With GNU tools: find / -type f -print0 | xargs -r0 grep -Hi 'the brown dog' As above, as few grep instances as possible will be run, but find will carry on looking for more files while the first grep invocation is looking inside the first batch. That may or may not be an advantage though. For instance, with data stored on rotational hard drives, find and grep accessing data stored at different locations on the disk will slow down the disk throughput by causing the disk head to move constantly. In a RAID setup (where find and grep may access different disks) or on SSDs, that might make a positive difference. In a RAID setup, running several concurrent grep invocations might also improve things. Still with GNU tools on RAID1 storage with 3 disks, find / -type f -print0 | xargs -r0 -P2 grep -Hi 'the brown dog' might increase the performance significantly. Note however that the second grep will only be started once enough files have been found to fill up the first grep command. You can add a -n option to xargs for that to happen sooner (and pass fewer files per grep invocation). Also note that if you're redirecting xargs output to anything but a terminal device, then the greps will start buffering their output, which means that the output of those greps will probably be incorrectly interleaved.
You'd have to use stdbuf -oL (where available like on GNU or FreeBSD) on them to work around that (you may still have problems with very long lines (typically >4KiB)) or have each write their output in a separate file and concatenate them all in the end. Here, the string you're looking for is fixed (not a regexp) so using the -F option might make a difference (unlikely as grep implementations know how to optimise that already). Another thing that could make a big difference is fixing the locale to C if you're in a multi-byte locale: find / -type f -print0 | LC_ALL=C xargs -r0 -P2 grep -Hi 'the brown dog' To avoid looking inside /proc , /sys ..., use -xdev and specify the file systems you want to search in: LC_ALL=C find / /home -xdev -type f -exec grep -i 'the brown dog' /dev/null {} + Or prune the paths you want to exclude explicitly: LC_ALL=C find / \( -path /dev -o -path /proc -o -path /sys \) -prune -o \ -type f -exec grep -i 'the brown dog' /dev/null {} +
{ "source": [ "https://unix.stackexchange.com/questions/131535", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43342/" ] }
131,702
I'm asking myself: is there, on Linux, any software that can build and show simple slides in the terminal, like the slides you make in LibreOffice Impress (but way simpler)? This would be a great experience: making a presentation using only the console, without any advanced graphics (like GL and framebuffer), maybe using only ncurses or another lib like that. Any help? EDIT 1: I'm using and recommending vimdeck. Thank you all :D EDIT 2: This question is still open for standalone software or any plugin that can use LaTeX.
Okay, several things here: You're not even remotely the only person that wants something like this (I've been looking for a good one for a while now). There are a couple of projects out there that attempt to fill this niche, but none of the ones I have found are quite as simple to use as I'd hoped. Big Update! It looks like there is a wonderful soul out there that has finally accomplished nearly the perfect setup! patat is a terminal presentation tool written in Haskell which uses pandoc to parse the slides. This means that you can use nearly any format you might want for the slides (markdown, reStructuredText, LaTeX, etc.)! The closest project I have found to meeting this need is tpp . Tpp (Text Presentation Program) allows you to create presentation slides from Ruby and then run through them in a presentation format through ncurses. You may also find tkn (Terminal Keynote) to be a helpful project. The slides are also written in Ruby, but there appears to be much less markup required to write the slides themselves, so it may be simpler to use. And, to my surprise, there is a third Ruby-based project, slider , which also attempts to fill this niche. Slider seems less flexible than either tpp or tkn, but perhaps it would better suit your needs. There is also a vim plugin, posero , but it seems rather limited. If you're willing to invest a little effort in figuring out some spacing, you could actually use LaTeX to generate some files. You could either use latex2man to generate a man page, which you could then present using whatever pager you would like; or, if you are still interested in presenting using a text-based web-browser, you could use latex2html to generate the web page(s). Personally, I would love to see a project that used a format compatible with something like pandoc so that users could write slides in anything (e.g., LaTeX) and then generate the presentation without much extra effort. But, to date, I have yet to find such a mythical tool (I may end up breaking down and writing one myself). In the meantime, if these projects are too much for your goal (or are just too difficult to work with), writing an HTML slideshow (using links to another page as slide transitions) and then presenting using a text-based web browser is a good fall-back (just as Stéphane pointed out). Big update! I think I finally found a project that could meet almost all these goals. It's still not LaTeX-based, but it uses Markdown slides (a significant improvement over having to code the slides directly with Ruby). mdp , written in C, allows you to create a simple markdown file and display it with transitions and fairly strong support for basic formatting. It's not entirely perfect, but it's much better than any of the other projects I've seen so far.
{ "source": [ "https://unix.stackexchange.com/questions/131702", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45505/" ] }
131,716
I want to be able to start zsh with a custom rc file similar to the command: bash --rc-file /path/to/file If this is not possible, then is it possible to start zsh, run source /path/to/file , then stay in the same zsh session? Note: The command zsh --rcs /path/to/file does not work, at least not for me... EDIT: In its entirety I wish to be able to do the following: ssh to a remote server "example.com", run zsh , source my configuration located at /path/to/file , all in 1 command. This is where I've struggled, especially because I'd rather not write over any configuration files on the remote machine.
From the man pages: STARTUP/SHUTDOWN FILES Commands are first read from /etc/zshenv; this cannot be overridden. Subsequent be‐ haviour is modified by the RCS and GLOBAL_RCS options; the former affects all startup files, while the second only affects global startup files (those shown here with an path starting with a /). If one of the options is unset at any point, any subsequent startup file(s) of the corresponding type will not be read. It is also possible for a file in $ZDOTDIR to re-enable GLOBAL_RCS. Both RCS and GLOBAL_RCS are set by default. Commands are then read from $ZDOTDIR/.zshenv. If the shell is a login shell, com‐ mands are read from /etc/zprofile and then $ZDOTDIR/.zprofile. Then, if the shell is interactive, commands are read from /etc/zshrc and then $ZDOTDIR/.zshrc. Finally, if the shell is a login shell, /etc/zlogin and $ZDOTDIR/.zlogin are read. When a login shell exits, the files $ZDOTDIR/.zlogout and then /etc/zlogout are read. This happens with either an explicit exit via the exit or logout commands, or an implicit exit by reading end-of-file from the terminal. However, if the shell termi‐ nates due to exec'ing another process, the logout files are not read. These are also affected by the RCS and GLOBAL_RCS options. Note also that the RCS option affects the saving of history files, i.e. if RCS is unset when the shell exits, no history file will be saved. If ZDOTDIR is unset, HOME is used instead. Files listed above as being in /etc may be in another directory, depending on the installation. As /etc/zshenv is run for all instances of zsh, it is important that it be kept as small as possible. In particular, it is a good idea to put code that does not need to be run for every single shell behind a test of the form `if [[ -o rcs ]]; then ...' so that it will not be executed when zsh is invoked with the `-f' option. so you should be able to set the environment variable ZDOTDIR to a new directory to get zsh to look for a different set of dotfiles. As the man page suggests, RCS and GLOBAL_RCS are not paths to rc files, as you are attempting to use them, but rather options you can enable or disable. So, for instance, the flag --rcs will enable the RCS option, causing zsh to read from rc files. You can use the following command-line flags to zsh to enable or disable RCS or GLOBAL_RCS : --globalrcs --rcs -d equivalent to --no-globalrcs -f equivalent to --no-rcs To answer your other question: is it possible to start zsh, run "source /path/to/file", then stay in the same zsh session? Yes, this is pretty easy according to the above directions. Just run zsh -d -f and then source /path/to/zshrc .
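Putting the pieces together for the ssh use case in the question: since zsh always looks for $ZDOTDIR/.zshrc, one way to get everything into a single command is to keep the custom configuration on the remote machine as /path/to/dir/.zshrc (path purely illustrative) and point ZDOTDIR at that directory when starting the remote shell. A sketch, with the host name and paths as placeholders:

    ssh -t example.com 'ZDOTDIR=/path/to/dir exec zsh'

The -t forces a pseudo-terminal so the remote zsh starts as an interactive shell and actually reads $ZDOTDIR/.zshrc; nothing in the remote account's own dotfiles is overwritten, they are simply not read.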
{ "source": [ "https://unix.stackexchange.com/questions/131716", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68400/" ] }
131,766
Or, an introductory guide to robust filename handling and other string passing in shell scripts. I wrote a shell script which works well most of the time. But it chokes on some inputs (e.g. on some file names). I encountered a problem such as the following: I have a file name containing a space hello world , and it was treated as two separate files hello and world . I have an input line with two consecutive spaces and they shrank to one in the input. Leading and trailing whitespace disappears from input lines. Sometimes, when the input contains one of the characters \[*? , they are replaced by some text which is actually the name of files. There is an apostrophe ' (or a double quote " ) in the input and things get weird after that point. There is a backslash in the input (or: I am using Cygwin and some of my file names have Windows-style \ separators). What is going on and how do I fix it?
Always use double quotes around variable substitutions and command substitutions: "$foo" , "$(foo)" If you use $foo unquoted, your script will choke on input or parameters (or command output, with $(foo) ) containing whitespace or \[*? . There, you can stop reading. Well, ok, here are a few more: read — To read input line by line with the read builtin, use while IFS= read -r line; do … Plain read treats backslashes and whitespace specially. xargs — Avoid xargs . If you must use xargs , make that xargs -0 . Instead of find … | xargs , prefer find … -exec … . xargs treats whitespace and the characters \"' specially. This answer applies to Bourne/POSIX-style shells ( sh , ash , dash , bash , ksh , mksh , yash …). Zsh users should skip it and read the end of When is double-quoting necessary? instead. If you want the whole nitty-gritty, read the standard or your shell's manual. Note that the explanations below contains a few approximations (statements that are true in most conditions but can be affected by the surrounding context or by configuration). Why do I need to write "$foo" ? What happens without the quotes? $foo does not mean “take the value of the variable foo ”. It means something much more complex: First, take the value of the variable. Field splitting: treat that value as a whitespace-separated list of fields, and build the resulting list. For example, if the variable contains foo * bar ​ then the result of this step is the 3-element list foo , * , bar . Filename generation: treat each field as a glob, i.e. as a wildcard pattern, and replace it by the list of file names that match this pattern. If the pattern doesn't match any files, it is left unmodified. In our example, this results in the list containing foo , following by the list of files in the current directory, and finally bar . If the current directory is empty, the result is foo , * , bar . Note that the result is a list of strings. There are two contexts in shell syntax: list context and string context. Field splitting and filename generation only happen in list context, but that's most of the time. Double quotes delimit a string context: the whole double-quoted string is a single string, not to be split. (Exception: "$@" to expand to the list of positional parameters, e.g. "$@" is equivalent to "$1" "$2" "$3" if there are three positional parameters. See What is the difference between $* and $@? ) The same happens to command substitution with $(foo) or with `foo` . On a side note, don't use `foo` : its quoting rules are weird and non-portable, and all modern shells support $(foo) which is absolutely equivalent except for having intuitive quoting rules. The output of arithmetic substitution also undergoes the same expansions, but that isn't normally a concern as it only contains non-expandable characters (assuming IFS doesn't contain digits or - ). See When is double-quoting necessary? for more details about the cases when you can leave out the quotes. Unless you mean for all this rigmarole to happen, just remember to always use double quotes around variable and command substitutions. Do take care: leaving out the quotes can lead not just to errors but to security holes . How do I process a list of file names? If you write myfiles="file1 file2" , with spaces to separate the files, this can't work with file names containing spaces. Unix file names can contain any character other than / (which is always a directory separator) and null bytes (which you can't use in shell scripts with most shells). 
Same problem with myfiles=*.txt; … process $myfiles . When you do this, the variable myfiles contains the 5-character string *.txt , and it's when you write $myfiles that the wildcard is expanded. This example will actually work, until you change your script to be myfiles="$someprefix*.txt"; … process $myfiles . If someprefix is set to final report , this won't work. To process a list of any kind (such as file names), put it in an array. This requires mksh, ksh93, yash or bash (or zsh, which doesn't have all these quoting issues); a plain POSIX shell (such as ash or dash) doesn't have array variables. myfiles=("$someprefix"*.txt) process "${myfiles[@]}" Ksh88 has array variables with a different assignment syntax set -A myfiles "someprefix"*.txt (see assignation variable under different ksh environment if you need ksh88/bash portability). Bourne/POSIX-style shells have a single one array, the array of positional parameters "$@" which you set with set and which is local to a function: set -- "$someprefix"*.txt process -- "$@" What about file names that begin with - ? On a related note, keep in mind that file names can begin with a - (dash/minus), which most commands interpret as denoting an option. Some commands (like sh , set or sort ) also accept options that start with + . If you have a file name that begins with a variable part, be sure to pass -- before it, as in the snippet above. This indicates to the command that it has reached the end of options, so anything after that is a file name even if it starts with - or + . Alternatively, you can make sure that your file names begin with a character other than - . Absolute file names begin with / , and you can add ./ at the beginning of relative names. The following snippet turns the content of the variable f into a “safe” way of referring to the same file that's guaranteed not to start with - nor + . case "$f" in -* | +*) "f=./$f";; esac On a final note on this topic, beware that some commands interpret - as meaning standard input or standard output, even after -- . If you need to refer to an actual file named - , or if you're calling such a program and you don't want it to read from stdin or write to stdout, make sure to rewrite - as above. See What is the difference between "du -sh *" and "du -sh ./*"? for further discussion. How do I store a command in a variable? “Command” can mean three things: a command name (the name as an executable, with or without full path, or the name of a function, builtin or alias), a command name with arguments, or a piece of shell code. There are accordingly different ways of storing them in a variable. If you have a command name, just store it and use the variable with double quotes as usual. command_path="$1" … "$command_path" --option --message="hello world" If you have a command with arguments, the problem is the same as with a list of file names above: this is a list of strings, not a string. You can't just stuff the arguments into a single string with spaces in between, because if you do that you can't tell the difference between spaces that are part of arguments and spaces that separate arguments. If your shell has arrays, you can use them. cmd=(/path/to/executable --option --message="hello world" --) cmd=("${cmd[@]}" "$file1" "$file2") "${cmd[@]}" What if you're using a shell without arrays? You can still use the positional parameters, if you don't mind modifying them. 
set -- /path/to/executable --option --message="hello world" -- set -- "$@" "$file1" "$file2" "$@" What if you need to store a complex shell command, e.g. with redirections, pipes, etc.? Or if you don't want to modify the positional parameters? Then you can build a string containing the command, and use the eval builtin. code='/path/to/executable --option --message="hello world" -- /path/to/file1 | grep "interesting stuff"' eval "$code" Note the nested quotes in the definition of code : the single quotes '…' delimit a string literal, so that the value of the variable code is the string /path/to/executable --option --message="hello world" -- /path/to/file1 . The eval builtin tells the shell to parse the string passed as an argument as if it appeared in the script, so at that point the quotes and pipe are parsed, etc. Using eval is tricky. Think carefully about what gets parsed when. In particular, you can't just stuff a file name into the code: you need to quote it, just like you would if it was in a source code file. There's no direct way to do that. Something like code="$code $filename" breaks if the file name contains any shell special character (spaces, $ , ; , | , < , > , etc.). code="$code \"$filename\"" still breaks on "$\` . Even code="$code '$filename'" breaks if the file name contains a ' . There are two solutions. Add a layer of quotes around the file name. The easiest way to do that is to add single quotes around it, and replace single quotes by '\'' . quoted_filename=$(printf %s. "$filename" | sed "s/'/'\\\\''/g") code="$code '${quoted_filename%.}'" Keep the variable expansion inside the code, so that it's looked up when the code is evaluated, not when the code fragment is built. This is simpler but only works if the variable is still around with the same value at the time the code is executed, not e.g. if the code is built in a loop. code="$code \"\$filename\"" Finally, do you really need a variable containing code? The most natural way to give a name to a code block is to define a function. What's up with read ? Without -r , read allows continuation lines — this is a single logical line of input: hello \ world read splits the input line into fields delimited by characters in $IFS (without -r , backslash also escapes those). For example, if the input is a line containing three words, then read first second third sets first to the first word of input, second to the second word and third to the third word. If there are more words, the last variable contains everything that's left after setting the preceding ones. Leading and trailing whitespace are trimmed. Setting IFS to the empty string avoids any trimming. See Why is `while IFS= read` used so often, instead of `IFS=; while read..`? for a longer explanation. What's wrong with xargs ? The input format of xargs is whitespace-separated strings which can optionally be single- or double-quoted. No standard tool outputs this format. xargs -L1 or xargs -l is not to split the input on lines , but to run one command per line of input (that line still split to make up the arguments, and continued on the next line if ending in blanks). xargs -I PLACEHOLDER does use one line of input to substitute the PLACEHOLDER but quotes and backslashes are still processed and leading blanks trimmed. You can use xargs -r0 where applicable (and where available: GNU (Linux, Cygwin), BusyBox, BSDs, OSX, but it isn't in POSIX). That's safe, because null bytes can't appear in most data, in particular in file names and external command arguments. 
To produce a null-separated list of file names, use find … -print0 (or you can use find … -exec … as explained below). How do I process files found by find ? find … -exec some_command a_parameter another_parameter {} + some_command needs to be an external command, it can't be a shell function or alias. If you need to invoke a shell to process the files, call sh explicitly. find … -exec sh -c ' for x do … # process the file "$x" done ' find-sh {} + I have some other question Browse the quoting tag on this site, or shell or shell-script . (Click on “learn more…” to see some general tips and a hand-selected list of common questions.) If you've searched and you can't find an answer, ask away .
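As a quick way to see the very first symptom from the question on your own machine, here is a tiny, self-contained experiment (run it in an empty scratch directory):

    touch 'hello world.txt'
    f='hello world.txt'
    ls -l $f      # fails: ls receives two arguments, "hello" and "world.txt"
    ls -l "$f"    # works: ls receives a single argument

The same pattern, quote it and it works, is what every rule above comes back to.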
{ "source": [ "https://unix.stackexchange.com/questions/131766", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/885/" ] }
131,775
Is there a command I can type in a terminal that will tell me the last time a machine was rebooted?
uptime If you want it in numerical form, it's the first number in /proc/uptime (in seconds), so the time of the last reboot is date -d "$(</proc/uptime awk '{print $1}') seconds ago" The uptime includes the time spent in a low-power state (standby, suspension or hibernation).
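A sample run might look like this (the numbers and date are illustrative, and the -d option relies on GNU date):

    $ cat /proc/uptime
    351735.47 1296620.49
    $ date -d "$(</proc/uptime awk '{print $1}') seconds ago"
    Tue Jun  3 09:14:02 UTC 2014

The first field of /proc/uptime is the uptime in seconds; the second is the aggregate idle time of all CPUs.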
{ "source": [ "https://unix.stackexchange.com/questions/131775", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38705/" ] }
131,801
In the bash tutorial I am reading, it says that if you open a file descriptor for reading, i.e. exec 3< echolist Then you must close it like this, exec 3<&- However, if you open a file descriptor for writing, it must be closed like this: exec 3>&- Yet when I look on the internet, I see people opening files and then closing them with this: exec 3>&- NOTE: when, according to the tutorial, they should be using exec 3<&1 . So my question is, can all file descriptors be closed via exec n>&- where n is the file descriptor number? Regardless if it was opened for reading, writing, or both?
You can close a file descriptor with either <&- or >&- ; bash parses the two syntaxes identically. From the file y.tab.c in the bash source code: 5385 /* Hack <&- (close stdin) case. Also <&N- (dup and close). */ 5386 if MBTEST(character == '-' && (last_read_token == LESS_AND || last_read_token == GREATER_AND)) 5387 return (character);
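A quick way to convince yourself that the two forms are interchangeable (any readable file will do in place of /etc/hostname):

    exec 3< /etc/hostname   # opened for reading...
    exec 3>&-               # ...closed with the "write" form
    exec 4> /tmp/fd-demo    # opened for writing...
    exec 4<&-               # ...closed with the "read" form

Neither close produces an error in bash; in both cases the token after the & is simply - , which means "close".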
{ "source": [ "https://unix.stackexchange.com/questions/131801", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66859/" ] }
131,886
I've been struggling with this for a couple hours so any help is greatly appreciated... I have 2x servers both of which I can ssh to with public keys from OSX, no issues at all there so I'm certain everything is good with sshd_config . I'm trying to configure a cron job for rsync to sync the two servers and need server B (backup) to ssh into server A using a public key. I cannot for the life of me work out why it doesn't find my public keys - they are in ~/.ssh/ (ie. /root/.ssh ) and all file permissions are correct on A & B. This is the output: debug2: we did not send a packet, disable method debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Trying private key: /root/.ssh/identity debug3: no such identity: /root/.ssh/identity debug1: Trying private key: /root/.ssh/id_rsa debug3: no such identity: /root/.ssh/id_rsa debug1: Trying private key: /root/.ssh/id_dsa debug3: no such identity: /root/.ssh/id_dsa debug2: we did not send a packet, disable method debug3: authmethod_lookup password debug3: remaining preferred: ,password debug3: authmethod_is_enabled password debug1: Next authentication method: password Also note it is looking for private keys which don't exist... drwx------. 2 root root 4096 May 25 10:15 . dr-xr-x---. 4 root root 4096 May 24 18:52 .. -rw-------. 1 root root 403 May 25 01:37 authorized_keys -rw-------. 1 root root 0 May 25 01:41 config -rw-------. 1 root root 1675 May 25 02:35 id_rsa_tm1 -rw-------. 1 root root 405 May 25 02:35 id_rsa_tm1.pub -rw-------. 1 root root 395 May 25 02:36 known_hosts
Have a look at your ssh man page: -i identity_file Selects a file from which the identity (private key) for public key authentication is read. The default is ~/.ssh/identity for protocol version 1, and ~/.ssh/id_dsa, ~/.ssh/id_ecdsa, ~/.ssh/id_ed25519 and ~/.ssh/id_rsa for protocol version 2. Identity files may also be specified on a per-host basis in the configuration file. It is possible to have multiple -i options (and multiple identities specified in configuration files). or the ssh_config man page: IdentityFile Specifies a file from which the user's DSA, ECDSA, ED25519 or RSA authentication identity is read. The default is ~/.ssh/identity for protocol version 1, and ~/.ssh/id_dsa, ~/.ssh/id_ecdsa, ~/.ssh/id_ed25519 and ~/.ssh/id_rsa for proto‐ col version 2. Additionally, any identities represented by the authentication agent will be used for authentication unless IdentitiesOnly is set. You see, there are a few special file names which are tried if you do not specify a key. Those are also the files you see in your log output. To use a key in a file with different name you have three options: specify the file explicitly using the above -i option. configure the file in your client config using the above IdentityFile option. add the key to your agent using ssh-add . For interactive sessions the agent is the most flexible one. For your cron job the -i option is probably the easiest one.
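For a cron-driven rsync job, the per-host configuration file route is usually the most convenient of the three. A sketch of what the invoking user's ~/.ssh/config on server B might contain; the Host alias and host name are placeholders, and the key file name is taken from the listing in the question:

    Host servera
        HostName servera.example.com
        User root
        IdentityFile ~/.ssh/id_rsa_tm1

With that in place, the cron job can simply run rsync -a servera:/some/path/ /backup/path/ (paths illustrative) and ssh will pick up the right key without any -i on the command line.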
{ "source": [ "https://unix.stackexchange.com/questions/131886", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61288/" ] }
131,918
The problem is I have some database dumps which are either compressed or in plain text. There is no difference in file extension etc. Using zcat on uncompressed files produces an error instead of the output. Is there maybe another cat sort of tool that is smart enough to detect what type of input it gets?
Just add the -f option. $ echo foo | tee file | gzip > file.gz $ zcat file file.gz gzip: file: not in gzip format foo $ zcat -f file file.gz foo foo (use gzip -dcf instead of zcat -f if your zcat is not the GNU (or GNU-emulated like in modern BSDs) one and only knows about .Z files).
{ "source": [ "https://unix.stackexchange.com/questions/131918", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7976/" ] }
132,065
I have set up automatic (password less) ssh login to some servers using ssh-copy-id. ssh-agent works only from the terminal where it was run. How do I get ssh-add to work in all my terminals? Naturally, I would not prefer SSH key without a passphrase.
If you're logging into a graphical session, arrange to start ssh-agent during your session startup. Some distributions already do that for you. If yours doesn't, arrange to run ssh-agent from your session startup script or from your window manager. How you do that depends on your desktop environment and your window manager. For example, if you start your window manager manually, simply replace the call to my_favorite_wm by ssh-agent my_favorite_wm . Do not start ssh-agent from .bashrc or .zshrc , since these files are executed by each new interactive shell. The place to start ssh-agent is in a session startup file such as .profile or .xsession . If you want to use the same SSH agent on all processes no matter where you logged in from, you can make it always use the same socket name, instead of using a randomly-named socket. For example, you might put this in your ~/.profile : export SSH_AUTH_SOCK=~/.ssh/ssh-agent.$HOSTNAME.sock ssh-add -l 2>/dev/null >/dev/null if [ $? -ge 2 ]; then ssh-agent -a "$SSH_AUTH_SOCK" >/dev/null fi
{ "source": [ "https://unix.stackexchange.com/questions/132065", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16448/" ] }
132,102
I have a bash script, which runs manager() function as a separate process for x-times. How is it possible to forward messages to all manager() processes from within the script? I've read about anonymous pipes but I have no idea how to share the messages with it.. I tried doing it with named pipes, but it appears that I would have to create a separate named pipe for each process? What is the most elegant way to do this? Here's my code so far: #!/bin/bash manager () { while : do echo "read what has been passed to \$line" done } x=1 while [ $x -le 5 ] do manager x & x=$(( $x + 1 )) done while : do while read line do echo "What has been passed through the pipe is ${line}" # should pass $line to every manager process done < $1 done exit 0
The term for what you're trying to accomplish is multiplexing . This can be accomplished fairly easily in bash, but it does require some more advanced bash features. I created a script based on your which I think does what you're trying to accomplish. I'll explain it below. #!/bin/bash manager() { while IFS= read -r line; do echo "manager[$1:$BASHPID]: $line" done } fds=() for (( i=0; i<5; i++ )); do exec {fd}> >(manager $i) fds+=( $fd ) done while IFS= read -r line; do echo "master: $line" for fd in "${fds[@]}"; do printf -- '%s\n' "$line" >&$fd done done manager is a bash function which simply reads from STDIN and writes it's identifier and the line to STDOUT. We use $BASHPID instead of $$ as $$ does not get updated for subshells (which is what we will be using to launch manager . fds is an array that will hold the file descriptors pointing to the STDIN pipes of the various manager s spawned off. Then we loop through and create 5 manager processes. I use the for (( )) syntax instead of the way you were doing it as it's cleaner. This is bash specific, but several of the things this script does are bash specific, so might as well go all the way. Next we get to exec {fd}> >(manager $i) . This does several more bash specific things. The first of which is {fd}> . This grabs the next available file descriptor on or after number 10, opens a pipe with the writing side of the pipe assigned to that file descriptor, and assigns the file descriptor number to the variable $fd . The >(manager $i) launches manager $i and basically substitutes >(manager $i) with a path to a STDIN of that process. So if manager was launched as PID 1234, >(manager $i) might get substituted with /proc/1234/fd/0 (this is OS dependent). So assuming the next available file descriptor number is 10, and manager is launched with PID 1234, the command exec {fd}> >(manager $i) basically becomes exec 10>/proc/1234/fd/0 , and bash now has file descriptor pointing to STDIN of that manager. Then since bash puts that file descriptor number in $fd , we add that descriptor to the array fds for later usage. The rest of it is pretty simple. The master reads a line from STDIN, iterates over all the file descriptors in $fds , and sends the line to that file desciptor ( printf ... >&$fd ). The result looks like this: $ /tmp/test.sh hello master: hello manager[0:8876]: hello manager[1:8877]: hello manager[4:8880]: hello manager[2:8878]: hello manager[3:8879]: hello world master: world manager[0:8876]: world manager[1:8877]: world manager[3:8879]: world manager[2:8878]: world manager[4:8880]: world Where I typed hello and world .
{ "source": [ "https://unix.stackexchange.com/questions/132102", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68540/" ] }
132,192
I have a command line application that when run does not do what it is supposed to do and at a certain point leaves the message: Segmentation fault What does this mean? What should I do?
A segmentation fault is the result of a memory access violation. The program has referred to a memory address outside of what was allocated to it, and the OS kernel responds by killing the program with SIGSEGV. This is a mistake, since there is no point in trying to access inaccessible memory (it cannot be done). Mistakes of this sort are easy to make, however, particularly in languages such as C and C++ (which account for a lot of common applications). It indicates a bug in either the program itself or a library it links to. If you wish to report the bug (do -- this helps), it is a good idea to include a backtrace of the events that led up to the seg fault. To do this, you can run the program inside gdb (the GNU debugger), which should be available from any linux distro if it is not installed already (the package will just be called "gdb"). If the broken application is called "brokenapp": gdb brokenapp A paragraph about copyright and licensing will appear, and at the end a prompt with the cursor: (gdb) _ Type run and hit enter. If you need to supply arguments (e.g. -x --foo=bar whatever ) append those ( run -x --foo=bar whatever ). The program will do what it does, you will see the output and if you need to interact you can (note you can run any sort of program, including a GUI one, inside gdb). At the point where it usually segfaults you will see: Program received signal SIGSEGV, Segmentation fault. 0x00000000006031c9 in ?? () (gdb) _ The second line of output here is just an example. Now type bt (for "backtrace") and hit enter. You'll see something like this, although it may be much longer: (gdb) bt #0 0x00000000006031c9 in ?? () #1 0x000000000040157f in mishap::what() const () #2 0x0000000000401377 in main () If it is longer, you'll only get a screenful at a time and there will be a --More-- message. Keep hitting enter until it's done. You can now quit , the output will remain in your terminal. Copy everything from Program received signal SIGSEGV onward into a text file, and file a bug report with the application's bug tracker; you can find these online by searching, e.g. "brokenapp bug report" -- you will probably have to register so a reply can be sent to you by email. Include your description of the problem, any arguments you supplied to run , etc., and a copy of the backtrace (if it is very long, there may be a means to attach a text file in the bug tracker interface). Also include the version, if you know what it is ( brokenapp --version may work, or the man page may indicate how to get this), and which distribution you are using. Someone will hopefully get back to you in not too long. Filing bugs is a usually appreciated.
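If you need to capture the backtrace non-interactively (for example from a script, or to paste straight into a bug report), gdb can run the whole sequence in batch mode. Here brokenapp and its options are the same placeholders as above:

    gdb --batch -ex run -ex bt --args brokenapp -x --foo=bar whatever 2>&1 | tee backtrace.txt

--batch exits when the commands finish, -ex run starts the program, -ex bt prints the backtrace once it crashes, and --args passes everything after it to the program being debugged.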
{ "source": [ "https://unix.stackexchange.com/questions/132192", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/25985/" ] }
132,235
I know how to rename files in Unix: $ mv ~/folder/subfolder/file.txt ~/folder/subfolder/file.sh ^-------this part------^ ^------this part-------^ It takes too long time to repeat ~/folder/subfolder/file twice. Is there a quicker way?
If your shell supports brace expansion (works with csh , tcsh , ksh , zsh , bash , mksh , lksh , pdksh , yash with brace expansion enabled by calling yash --brace-expand or setting it in an interactive shell with set -o brace-expand , or fish ): mv ~/folder/subfolder/file.{txt,sh}
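If you ever want to check what such a pattern will turn into before running it, prefix the command with echo ; the expansion happens in the shell, before mv ever sees the arguments (home directory shown as /home/you for illustration):

    $ echo mv ~/folder/subfolder/file.{txt,sh}
    mv /home/you/folder/subfolder/file.txt /home/you/folder/subfolder/file.sh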
{ "source": [ "https://unix.stackexchange.com/questions/132235", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52995/" ] }
132,313
We know that we can get the second column of the line we want from a file using these two techniques: awk '/WORD/ { print $2 }' filename or grep WORD filename| cut -f 2 -d ' ' My questions are: What are the differences between the two commands above? Which one has the best performance? What are the advantages of using awk over using cut , and vice versa? What options does awk give us over cut and vice versa?
The most prominent difference between your two lines depends on the input. cut takes a single character in -d as the field delimiter (the default being TAB), and every single occurrence of that character starts a new field. awk , however, is more flexible. The separator is in the FS variable and can be an empty string (every input character makes a separate field), a single character, or a regular expression. The special case of a single space character (the default) means to split on any sequence of whitespace. Also, awk suppresses leading whitespace by default. Please compare (note the double space in the second and fifth examples): $ echo "abc def" | cut -f 2 -d ' ' def $ echo "abc  def" | cut -f 2 -d ' ' $ echo " abc def" | cut -f 2 -d ' ' abc $ echo "abc def" | awk '{ print $2 }' def $ echo "abc  def" | awk '{ print $2 }' def $ echo " abc def" | awk '{ print $2 }' def Here, awk splits on the sequence of spaces between abc and def whereas cut takes every space as a separator. Which one you use depends on what you want to achieve. Beyond that, I would expect cut to be faster since it is a smaller, single-purpose tool whereas awk has its own programming language.
{ "source": [ "https://unix.stackexchange.com/questions/132313", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45764/" ] }
132,417
The problem I am getting is, when I enter the command, su - root at the beginning of my shell script file, it prompts the user to enter the password and then does NOT continue with the rest of the shell script. I then have to manually locate and run the shell script via terminal. I want the script to make sure that the user logs in as root and then continue with the rest of the shell script. In other words, I want to run the script as any user but as soon as the script begins to execute, the user must change to root and then continue on with the rest of the script as root until it is done. Can this be done?
This is very easy to accomplish: #!/bin/sh [ "$(whoami)" != "root" ] && exec sudo -- "$0" "$@" When the current user isn't root, re-exec the script through sudo . Note that I am using sudo here instead of su . This is because it allows you to preserve arguments. If you use su , your command would have to be su -c "$0 $@" which would mangle your arguments if they have spaces or special shell characters. If your shell is bash, you can avoid the external call to whoami : (( EUID != 0 )) && exec sudo -- "$0" "$@"
{ "source": [ "https://unix.stackexchange.com/questions/132417", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68857/" ] }
132,480
How can I write a shell script that will do a case-insensitive substring match of command output?
You can do case-insensitive substring matching natively in bash using the regex operator =~ if you set the nocasematch shell option. For example s1="hElLo WoRlD" s2="LO" shopt -s nocasematch [[ $s1 =~ $s2 ]] && echo "match" || echo "no match" match s1="gOoDbYe WoRlD" [[ $s1 =~ $s2 ]] && echo "match" || echo "no match" no match
{ "source": [ "https://unix.stackexchange.com/questions/132480", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68805/" ] }
132,511
I was tasked to create an automated server hardening script and one thing that they need is a report of all the output of each command executed. I want to store the error message inside a string and append it in a text file. Let's say I ran this command: /sbin/modprobe -n -v hfsplus The output of running this in my machine would be: FATAL: Module hfsplus not found How can I store that error message inside a string? Any help would be greatly appreciated. Thanks!
You can do it by redirecting the command's error stream: /sbin/modprobe -n -v hfsplus 2> fileName Or, to capture it in a variable inside a script: #!/bin/bash errormessage=$( /sbin/modprobe -n -v hfsplus 2>&1) echo "$errormessage" or #!/bin/bash errormessage=`/sbin/modprobe -n -v hfsplus 2>&1 ` echo "$errormessage" If you want to append the error to a file, use >> instead of > . Make sure to use 2>&1 and not 2> &1 , to avoid the error "syntax error near unexpected token `&'"
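Tying this back to the hardening report: a sketch of how the captured text could be appended to a report file, where the report path is just a placeholder:

    #!/bin/bash
    report=/tmp/hardening-report.txt
    errormessage=$(/sbin/modprobe -n -v hfsplus 2>&1)
    printf '%s -> %s\n' '/sbin/modprobe -n -v hfsplus' "$errormessage" >> "$report"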
{ "source": [ "https://unix.stackexchange.com/questions/132511", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68805/" ] }
132,623
If I execute the test command in bash, test (evaluates conditional expression) built-in utility is started: $ type test test is a shell builtin $ type -a test test is a shell builtin test is /usr/local/bin/test test is /usr/bin/test $ However, as seen in output of type -a test above, there is another test in /usr/local/bin directory and yet another one in /usr/bin directory. How are executables ordered, i.e. are the built-in commands always preferred and then the rest of the commands depend on the directory order in $PATH variable? In addition, is it possible to change the order of the executables started, e.g. if I type in test , then /usr/bin/test is started instead of bash-builtin test ?
Highest priority is a bash alias, then special builtins (only in POSIX mode), then functions, then builtins, then a search in $PATH . To execute a builtin, use builtin test . To execute an external application, use an explicit path: /bin/test . To ignore functions and aliases, use command test . To bypass just an alias, use \test or any other kind of expansion. A builtin can be re-enabled or disabled with enable : enable test re-enables it and enable -n test disables it (bash has no separate disable builtin).
{ "source": [ "https://unix.stackexchange.com/questions/132623", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33060/" ] }
132,779
If we have this string ( IP address ): 192.168.1.1 How can I derive the ( DNS reverse record form ) from this string, so it will be shown like 1.1.168.192.in-addr.arpa using a shell script?
You can do it with AWK . There are nicer ways to do it, but this is the simplest, I think. echo '192.168.1.1' | awk 'BEGIN{FS="."}{print $4"."$3"."$2"."$1".in-addr.arpa"}' This will reverse the order of the IP address. Just to save a few keystrokes, as Mikel suggested, we can further shorten the upper statement: echo '192.168.1.1' | awk -F . '{print $4"."$3"."$2"."$1".in-addr.arpa"}' OR echo '192.168.1.1' | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa"}' OR echo '192.168.1.1' | awk -F. -vOFS=. '{print $4,$3,$2,$1,"in-addr.arpa"}' AWK is pretty flexible. :)
{ "source": [ "https://unix.stackexchange.com/questions/132779", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45764/" ] }
132,791
I want to put ssh-add /path/to/special_key at the top of a script. This works fine, but it always prompts for the passphrase. This is strange, and a little annoying, as it still asks for the passphrase even when ssh-add -l shows the key has already been added. Is there a way to tell it: "add this key and ask the passphrase if not already been added, otherwise do nothing" ?
I don't see any options to ssh-add that help achieve your desired result, but it's pretty easy to work around this, given that you're concerned with one key in particular. First, grab the fingerprint for your special_key: ssh-keygen -lf /path/to/special_key | awk '{print $2}' Let's say this fingerprint looks like 6d:98:ed:8c:07:07:fe:57:bb:19:12:89:5a:c4:bf:25 Then, at the top of your script, use ssh-add -l to check whether that key is loaded, before prompting to add it: ssh-add -l |grep -q 6d:98:ed:8c:07:07:fe:57:bb:19:12:89:5a:c4:bf:25 || ssh-add /path/to/special_key You can fold all this together into one line if you wish: ssh-add -l |grep -q `ssh-keygen -lf /path/to/special_key | awk '{print $2}'` || ssh-add /path/to/special_key
{ "source": [ "https://unix.stackexchange.com/questions/132791", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52393/" ] }
132,797
How can I create a backup of a remote disk using SSH on my local machine and save it to a local disk? I've tried the following: ssh [email protected] "sudo dd if=/dev/sdX " | \ dd of=/home/username/Documents/filename.image` However, I receive the following error: no tty present and no askpass program specified
If your intent is to backup a remote computer's HDD A via SSH to a single file that's on your local computer's HDD, you could do one of the following. Examples run from remote computer $ dd if=/dev/sda | gzip -1 - | ssh user@local dd of=image.gz run from local computer $ ssh user@remote "dd if=/dev/sda | gzip -1 -" | dd of=image.gz Live example $ ssh skinner "dd if=/dev/sda5 | gzip -1 -" | dd of=image.gz 208782+0 records in 208782+0 records out 106896384 bytes (107 MB) copied, 22.7608 seconds, 4.7 MB/s 116749+1 records in 116749+1 records out 59775805 bytes (60 MB) copied, 23.9154 s, 2.5 MB/s $ ll | grep image.gz -rw-rw-r--. 1 saml saml 59775805 May 31 01:03 image.gz Methods for monitoring? Login via ssh in another terminal and ls -l the file to see what it's size is. You can use pv to monitor the progress of a large dd operation, for instance, for the remote example above, you can do: $ dd if=/dev/sda | gzip -1 - | pv | ssh user@local dd of=image.gz Send a "SIGUSR1" signal to dd and it will print stats. Something like: $ pkill -USR1 dd Use dd 's progress switch, status=progress References The methods mentioned above for monitoring were originally left via comments by @Ryan & @bladt and myself. I've moved them into the answer to make them more obvious.
{ "source": [ "https://unix.stackexchange.com/questions/132797", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61691/" ] }
133,863
Related question: initiate ssh connection from server to client Answer from there helped me a lot, this command does what I need: ssh -R 2225:localhost:22 loginOfServerWithPublicIP@publicIP So I wrote the script to reconnect all the time: #!/bin/bash while true; do echo "try to connect..." ssh -o ServerAliveInterval=240 -R 2225:localhost:22 user@host echo "restarting in 5 seconds.." sleep 5 done And added it to the /etc/crontab . But I found out that works if only I execute it "by hand" from shell, but if it is called by cron, ssh connects and immediately finishes. (so, the script above reconnects all the time) From man ssh , I found that for background connections I should call it with -n key, but it didn't help. Then, I just looked around for similar scripts and I found that it works if I call tail -f something , i.e. some "neverending" command, so I just created empty file /tmp/dummy_file and now my ssh command looks like this: ssh -o ServerAliveInterval=240 -R 2225:localhost:22 -n user@host tail -f /tmp/dummy_file It works now! But, this solution seems a bit ugly, plus I don't really understand actual reasons of that behavior. Just by chance, I tried to call bash instead of tail -f ( bash seems to me "neverending" command, too), but it doesn't work. So, could anyone please explain this behavior, and what is the correct way to create background ssh connection to keep reverse ssh tunnel up?
It sounds like you want the -N option to ssh. -N Do not execute a remote command. This is useful for just forwarding ports (protocol version 2 only).
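So the line inside the reconnect loop from the question could become something like the following; -N replaces the dummy tail -f trick, and the ExitOnForwardFailure option (my addition, not from the question) makes ssh give up and let the loop retry if the remote side refuses to bind port 2225:

    ssh -N -o ServerAliveInterval=240 -o ExitOnForwardFailure=yes -R 2225:localhost:22 user@host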
{ "source": [ "https://unix.stackexchange.com/questions/133863", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46011/" ] }
133,914
I would like to setup my gnome terminal's background( #002b36 ) and foreground color in ubuntu 13, using bash script. I tried gconftool but couldn't succeed. GCONFTOOL-2(1) User Commands GCONFTOOL-2(1) NAME gconftool-2 - GNOME configuration tool My gnome terminal version is $ gnome-terminal --version GNOME Terminal 3.6.1 Currently I'm using ubuntu terminal preferences UI to achieve this.
Method #1 - Using dconf Background You can use the dconf tool to accomplish this, however it's a mult-step process. DESCRIPTION The dconf program can perform various operations on a dconf database, such as reading or writing individual values or entire directories. This tool operates directly on the dconf database and does not read gsettings schema information.Therefore, it cannot perform type and consistency checks on values. The gsettings(1) utility is an alternative if such checks are needed. Usage $ dconf error: no command specified Usage: dconf COMMAND [ARGS...] Commands: help Show this information read Read the value of a key list List the contents of a dir write Change the value of a key reset Reset the value of a key or dir update Update the system databases watch Watch a path for changes dump Dump an entire subpath to stdout load Populate a subpath from stdin Use 'dconf help COMMAND' to get detailed help. General approach First you'll need to get a list of your gnome-terminal profiles. $ dconf list /org/gnome/terminal/legacy/profiles:/ <profile id> Using this <profile id> you can then get a list of configurable settings $ dconf list /org/gnome/terminal/legacy/profiles:/<profile id> background-color default-size-columns use-theme-colors use-custom-default-size foreground-color use-system-font font You can then read the current colors of either the foreground or background foreground $ dconf read /org/gnome/terminal/legacy/profiles:/<profile id>/foreground-color 'rgb(255,255,255)' background $ dconf read /org/gnome/terminal/legacy/profiles:/<profile id>/background-color 'rgb(0,0,0)' You can change the colors as well foreground $ dconf write /org/gnome/terminal/legacy/profiles:/<profile id>/foreground-color "'rgb(255,255,255)'" background $ dconf write /org/gnome/terminal/legacy/profiles:/<profile id>/background-color "'rgb(0,0,0)'" Example Get my profile ID $ dconf list /org/gnome/terminal/legacy/profiles:/ :b1dcc9dd-5262-4d8d-a863-c897e6d979b9/ Use the profile ID to get a list of settings $ dconf list /org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/ background-color default-size-columns use-theme-colors use-custom-default-size foreground-color use-system-font font Change your background blue $ dconf write /org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/background-color "'rgb(0,0,255)'" A Note on colors You can use either the notation rgb(R,G,B) when specifying your colors or the hash notation #RRGGBB . In the both notations the arguments are red, green, and blue. The values in the first notation are integers ranging from 0-255 for R, G, or B. In the second notation the values are in hexidecimal ranging from 00 to FF for RR, GG, or BB. When providing either of these to dconf you need to wrap it properly in double quotes with single quotes nested inside. Otherwise dconf will complain. "'rgb(0,0,0)'" "'#FFFFFF'" etc. Method #2 - Using gconftool-2 On my Ubuntu 12.04 system I was able to change the colors via the command line as follows. NOTE: The options are ultimately stored in this file, $HOME/.gconf/apps/gnome-terminal/profiles/Default/%gconf.xml . General approach First you'll need to get the tree for gnome-terminal 's profile. $ gconftool-2 --get /apps/gnome-terminal/global/profile_list [Default] Using the resulting tree we can find out what attributes are configurable. 
$ gconftool-2 -a "/apps/gnome-terminal/profiles/Default" | grep color bold_color_same_as_fg = true bold_color = #000000000000 background_color = #FFFFFFFFFFFF foreground_color = #000000000000 use_theme_colors = false Get/Set the background_color & foreground_color attributes $ gconftool-2 --get "/apps/gnome-terminal/profiles/Default/foreground_color" #000000000000 $ gconftool-2 --set "/apps/gnome-terminal/profiles/Default/background_color" --type string "#000000FFFFFF" Confirm $ gconftool-2 -R /apps/gnome-terminal/profiles/Default | grep color bold_color_same_as_fg = true bold_color = #000000000000 background_color = #000000FFFFFF foreground_color = #000000000000 use_theme_colors = true References CHANGING TERMINAL PREFERENCES IN GNOME 3 base16-gnome-terminal / base16-tomorrow.light.sh Is there a way to temporarily change the terminal colour?
{ "source": [ "https://unix.stackexchange.com/questions/133914", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17781/" ] }
133,951
I just noticed that if I execute ssh user@remote_host tail -f /some/file , then tail -f /some/file keeps running on the remote_host even if ssh connection is closed! So, after several connects and disconnects, number of running tail -f /some/file grows. How to actually terminate tail -f when ssh connection is closed?
In ssh host tail -f file The ssh client connects to the sshd server on host over a TCP connection. sshd runs tail -f with its stdout redirected to a pipe. sshd reads what's coming from the other end of the pipe and encapsulates it in the sshd protocol to send to the ssh client. (with rshd , tail stdout would have been the socket directly, but sshd adds encryption and is able to multiplex several streams (like for port/agent/X11/tunnel redirection, stderr) on a single TCP connection so has to resort to pipes). When you press CTRL-C, a SIGINT is sent to the ssh client. That causes ssh to die. Upon dying the TCP connection is closed. And therefore, on host , sshd dies as well. tail is not killed, but its stdout is now a pipe with no reader at the other end. So, the next time it writes something to its stdout, it will receive a SIGPIPE and die. In: ssh -t host 'tail -f file' It's the same thing except that instead of being with a pipe, the communication between sshd and tail is via a pseudo-terminal. tail 's stdout is a slave pseudo-terminal (like /dev/pts/12 ) and whatever tail write there is read on the master side (possibly modified by the tty line discipline) by sshd and sent encapsulated to the ssh client. On the client side, with -t , ssh puts the terminal in raw mode. In particular, that disables the terminal canonical mode and terminal signal handling. So, when you press Ctrl+C , instead of the client's terminal line discipline sending a SIGINT to the ssh job, that just sends the ^C character over the connection to sshd and sshd writes that ^C to the master side of the remote terminal. And the line discipline of the remote terminal sends a SIGINT to tail . tail then dies, and sshd exits and closes the connection and ssh terminates (if it's not otherwise still busy with port forwardings or other). Also, with -t , if the ssh client dies (for instance if you enter ~. ), the connection is closed and sshd dies. As a result, a SIGHUP will be sent to tail . Now, beware that using -t has side effects. For instance, with the default terminal settings, \n characters are converted to \r\n and more things may happen depending on the remote system, so you may want to issue a stty -opost (to disable output post-processing) on the remote host if that output is not intended for a terminal: $ ssh localhost 'echo x' | hd 00000000 78 0a |x.| 00000002 $ ssh -t localhost 'echo x' | hd 00000000 78 0d 0a |x..| 00000003 $ ssh -t localhost 'stty -opost; echo x' | hd 00000000 78 0a |x.| 00000002 Another drawback of using -t / -tt is that stdout and stderr are not differentiated on the client. Both the stdout and stderr of the remote command will be written to the ssh client's stdout: $ ssh localhost ls /x | wc -l ls: cannot access /x: No such file or directory 0 $ ssh -t localhost ls /x | wc -l 1
{ "source": [ "https://unix.stackexchange.com/questions/133951", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46011/" ] }
133,972
Why would following script not execute, but give a syntax error of else : LOGS3_DIR=~/logs if [ -d "$LOGS3_DIR" ]; then cd cd "$LOGS3_DIR" echo "$LOGS3_DIR" for filename in `find "." -mtime 1 -type f` do if lsof "$filename" > /dev/null then # file is open else echo "deleting $filename" rm "$filename" fi done fi
It seems that you want to do a no-op if the file is open so you should add a : , which is a null command in bash : if lsof "$filename" > /dev/null; then # file is open : else printf 'deleting %s\n' "$filename" rm -- "$filename" fi If you don't use : , bash can not parse you code, and will show error like bash: syntax error near unexpected token 'else' .
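An alternative that avoids the placeholder branch entirely is to negate the test, so only the "not open" case needs a body; it is functionally the same as the version above:

    if ! lsof "$filename" > /dev/null; then
        printf 'deleting %s\n' "$filename"
        rm -- "$filename"
    fi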
{ "source": [ "https://unix.stackexchange.com/questions/133972", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14172/" ] }
134,212
I have an awk script and I have passed a CSV file to it. awk -f script.awk /home/abc/imp/asgd.csv What am I doing is to get FILENAME within script.awk . FILENAME gives me the whole path. As I am in awk I cannot use basename FILENAME . print FILENAME; /home/abc/imp/asgd.csv I have tried with this within script.awk echo $FILENAME | awk -F"/" '{print $NF}' but I cannot execute this within script.awk . How can I get asgd.csv within an awk program?
Several options: awk ' function basename(file) { sub(".*/", "", file) return file } {print FILENAME, basename(FILENAME)}' /path/to/file Or: awk ' function basename(file, a, n) { n = split(file, a, "/") return a[n] } {print FILENAME, basename(FILENAME)}' /path/to/file Note that those implementations of basename should work for the common cases, but not in corner cases like basename /path/to/x/// where they return the empty string instead of x or / where they return the empty string instead of / , though for regular files, that should not happen. The first one will not work properly if the file paths (up to the last / ) contain sequences of bytes that don't form valid characters in the current locale (typically this kind of thing happens in UTF-8 locales with filenames encoded in some 8 bit single byte character set). You can work around that by fixing the locale to C where every sequence of byte form valid characters.
{ "source": [ "https://unix.stackexchange.com/questions/134212", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45726/" ] }
134,301
I have installed nginx server. I've just checked listening ports and saw the following: $ sudo lsof -nP -i | grep LISTEN sshd 614 root 3u IPv4 7712 0t0 TCP *:22 (LISTEN) nginx 822 root 7u IPv4 8745 0t0 TCP *:80 (LISTEN) nginx 827 www-data 7u IPv4 8745 0t0 TCP *:80 (LISTEN) nginx 828 www-data 7u IPv4 8745 0t0 TCP *:80 (LISTEN) nginx 829 www-data 7u IPv4 8745 0t0 TCP *:80 (LISTEN) nginx 830 www-data 7u IPv4 8745 0t0 TCP *:80 (LISTEN) . . . And I'm just interested why there is four nginx processes run as 'www-data' user and one as 'root user'?
The process you noticed is the master process, the process that starts all other nginx processes. This process is started by the init script that starts nginx. The reason this process is running as root is simply because you started it as root! You can start it as another user, but you will have to make sure that all resources nginx needs are available to this user. That would typically be at least /var/log/nginx and the pid-file under /var/run/. Most importantly, only processes running as root can listen on ports below 1024. A webserver typically runs on port 80 and/or 443. That means it needs to be started as root. In conclusion, the master process being run by root is completely normal and in most cases necessary for normal operation. Edit: Running anything as root carries an implicit security risk. Normally developers of this kind of software have much knowledge about attack vectors and take great care to execute as little as possible as root. In the end you simply have to trust that the software is of good quality. If you still feel uneasy, there is a way to run nginx as another user and still use ports below 1024. You can use iptables to redirect all incoming traffic on port 80 to another port, for example 8080, and have nginx listen on that port.
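For completeness, the redirect mentioned in the last paragraph could look roughly like this (run as root, and it only affects traffic arriving from the network; locally generated connections to port 80 take a different path through netfilter):

    iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080

nginx would then be configured to listen on 8080 as an unprivileged user.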
{ "source": [ "https://unix.stackexchange.com/questions/134301", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27850/" ] }
134,332
I just ran into something unexpected (for me) regarding file permissions on Linux (Arch Linux). Basically I have: userX in groupX fileX userX:groupX ---rwx---- What puzzles me: I cannot perform any action ( rwx ) on fileX . Is this right? Can someone please confirm this is indeed the expected behaviour? The only actions I can perform are mv and rm because I have write permissions on the parent directory. The thing is, I always thought these permissions collapse on each other, starting with the most general one (other -> group -> user). In other words, if o=rwx who cares what the persmissions for group and user are? Apparently this is not the case but it doesn't make much sense to me; it seems counterintuitive. The only thing this approach seems to be useful at, is to easily exclude a very specific person / group, which doesn't seem like a smart way to go at it (imho). Besides, the owner (and group?) should be able to chmod anyway right? Any thoughts on this matter?
The thing is, I always thought these permissions collapse on each other, starting with the most general one (other -> group -> user). If it was the case then “other” permissions would apply to everyone. In other words, if o=rwx who cares what the persmissions for group and user are? That's different from your previous sentence. Here you're implying that the permissions are or'ed together, e.g. that userX has the read permission if userX owns the file and the file is user-readable, or if a group that userX belongs to owns the file and the file is group-readable, or if the file is other-readable. But that's not how it works. In fact, o=rwx means that the rwx permissions apply to others, but it doesn't say anything about entities that are not others. First, it doesn't directly matter which groups a user belongs to. The kernel doesn't have a notion of users belonging to groups. What the kernel maintains is, for every process, a user ID (the effective UID ) and a list of group IDs (the effective GID and the supplementary GIDs). The groups are determined at login time, by the login process — it's the login process that reads the group database (e.g. /etc/group ). User and group IDs are inherited by child processes¹. When a process tries to open a file, with traditional Unix permissions: If the file's owning user is the process's effective UID, then the user permission bits are used. Otherwise, if the file's owning group is the process's effective GID or one of the process's supplementary group ID, then the group permission bits are used. Otherwise, the other permission bits are used. Only one set of rwx bits are ever used. User takes precedence over group which takes precedence over other. When there are access control lists , the algorithm described above is generalized: If there is an ACL on the file for the process's effective UID, then it is used to determine whether access is granted. Otherwise, if there is an ACL on the file for the process's effective GID or one of the process's supplementary group ID, then the group permission bits are used. Otherwise, the other permission bits are used. See also Precedence of ACLS when a user belongs to multiple groups for more details about how ACL entries are used, including the effect of the mask. Thus -rw----r-- alice interns indicates a file which can be read and written by Alice, and which can be read by all other users except interns. A file with permissions and ownership ----rwx--- alice interns is accessible only to interns except Alice (whether she is an intern or not). Since Alice can call chmod to change the permissions, this does not provide any security; it's an edge case. On systems with ACLs, the generalized mechanism allows removing permissions from specific users or specific groups, which is sometimes useful. Using a single set of bits, rather than or-ing all the bits for each action (read, write, execute), has several advantages: It has the useful effect of allowing removing permissions from a set of users or groups, on systems with ACLs. On systems without ACLs, permissions can be removed from one group. It is simpler to implement: check one set of bits, rather than combining several sets of bits together. It is simpler to analyse a file's permissions, because fewer operations are involved. ¹ They can change when a setuid or setgid process is executed. This isn't related to the issue at hand.
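A two-minute experiment makes the "only one set of bits is ever used" rule very concrete (the user and group names follow the question and are illustrative, as is the ls output):

    $ echo data > fileX
    $ chmod 007 fileX          # -------rwx : nothing for the owner, rwx for others
    $ ls -l fileX
    -------rwx 1 userX groupX 5 Jun  3 10:00 fileX
    $ cat fileX
    cat: fileX: Permission denied

Even though "others" have full access, the owner matches first, the owner bits are empty, and access is denied (although, as noted, the owner can always chmod the file back).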
{ "source": [ "https://unix.stackexchange.com/questions/134332", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39335/" ] }
134,385
I often have multiple terminal windows opened with less (eg comparing various log files). I forget which file is which. Is there a similar command to ^G in vi which displays the name of the file currently being viewed?
Compatible: ^G . Easy to type: = . Less copied several key bindings from vi, including this one. This displays the file name (the path that you passed on the less command line) and the position in the file. You can have this information permanently by calling less with the -M option. Include -M in the LESS environment variable. You can set this variable in your ~/.profile , ~/.pam_environment or wherever you define environment variables. Alternatively, you can set LESS by using lesskey to produce the configuration file ~/.less which is read by less .
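For example, a minimal sketch (put it wherever you define environment variables, e.g. ~/.profile):

export LESS="-M"

After logging in again (or sourcing the file), every less invocation will show the long prompt with the file name and position by default.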
{ "source": [ "https://unix.stackexchange.com/questions/134385", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70455/" ] }
134,437
How do I stop a bash script until a user has pressed Space ? I would like to have the question in my script Press space to continue or CTRL + C to exit and then the script should stop and wait until Space is pressed.
You can use read :

IFS= read -r -s -n1 -p $'Press space to continue...\n' key

if [ "$key" = ' ' ]; then
    # Space pressed, do something
    :
else
    # Anything else pressed, do whatever else
    :
fi

The IFS= prefix keeps read from stripping the space character, so "$key" really contains ' ' after Space is pressed (echo "[$key]" inside the branches if you want to trace it). To test for a different key, replace the ' ' above with '' for the Enter key, or with $'\t' for the Tab key.
{ "source": [ "https://unix.stackexchange.com/questions/134437", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20661/" ] }
134,483
When I run ifconfig -a , I only get lo and enp0s10 interfaces, not the classical eth0 What does enp0s10 mean? Why is there no eth0 ?
That's a change in how udevd now assigns names to ethernet devices. Now your devices use the "Predictable Interface Names", which are based on (quoting the sources): Names incorporating Firmware/BIOS provided index numbers for on-board devices (example: eno1) Names incorporating Firmware/BIOS provided PCI Express hotplug slot index numbers (example: ens1) Names incorporating physical/geographical location of the connector of the hardware (example: enp2s0) Names incorporating the interface's MAC address (example: enx78e7d1ea46da) Classic, unpredictable kernel-native ethX naming (example: eth0) Why this was changed is documented on the systemd freedesktop.org page, along with the method to disable it: ln -s /dev/null /etc/udev/rules.d/80-net-setup-link.rules or, if you use older versions: ln -s /dev/null /etc/udev/rules.d/80-net-name-slot.rules
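If you want to see which properties udev derived for an interface (and hence where a name like enp0s10 comes from), something like this should work on systemd-based systems (the interface name is just an example):

$ udevadm test-builtin net_id /sys/class/net/enp0s10 2>/dev/null

It prints the ID_NET_NAME_* values udev considered when picking the name.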
{ "source": [ "https://unix.stackexchange.com/questions/134483", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48971/" ] }
134,500
I'm afraid I've run into something rather strange. When I open a file normally, vim README.txt , everything is fine. But upon sudo vim README.txt , the file renders blank, and gives me a E138: Can't write viminfo file $HOME/.viminfo! error upon trying to exit. I suspected the .viminfo file was corrupt, so I deleted it. This problem remains. Can anyone help?
When you run sudo vim you start vim as root. That means that it is the viminfo file in /root that is the problem. You should do sudo rm /root/.viminf* (the file belongs to root, so a plain rm will be denied). To make sure of this, run sudo vim and execute this command: :!echo $HOME . This will show you that your home directory is /root. I would recommend that you do not run vim as root, but rather use sudoedit . This is a more secure solution as the editor is not running as root. You never know what a plugin might do. Additionally it allows you to use your own settings and plugins in vim and not the ones in root's vimrc. sudoedit is the same as running sudo -e . sudoedit works by making a temporary copy of the file that is owned by the invoking user (you). When you finish editing, the changes are written to the actual file and the temporary file is deleted. As a general rule of thumb: Do not run things as root if it is not necessary.
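For example, instead of sudo vim README.txt you could run (a sketch; setting SUDO_EDITOR is optional and only needed if your default editor is not vim):

SUDO_EDITOR=vim sudoedit README.txt
# or simply:
sudoedit README.txt

You edit a temporary copy with your own vim configuration, and the real file is only touched (as root) when you save and quit.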
{ "source": [ "https://unix.stackexchange.com/questions/134500", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70518/" ] }
134,695
TexPad is creating it. I know that it is under some dead key; I just cannot remember its name. The blue character: I just want to mass remove them from my document. How can you type it?
It is known as a carriage return. If you're using vim you can enter insert mode and type CTRL - v CTRL - m . That ^M is the keyboard equivalent of \r . Inserting 0x0D in a hex editor will also do the task. How do I remove it? You can remove it using the command perl -p -i -e "s/\r//g" filename As suggested in the comments, you can also try dos2unix filename and see if that fixes it. As @steeldriver suggests in the comments, after opening the file in vim, press the Esc key, type :set ff=unix , then save. References: https://stackoverflow.com/questions/1585449/insert-the-carriage-return-character-in-vim https://stackoverflow.com/a/7742437/1742825
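If you don't have perl or dos2unix at hand, plain tr or GNU sed can strip the carriage returns as well (a sketch; the file names are examples):

tr -d '\r' < document.tex > document.fixed.tex
# or, with GNU sed, in place:
sed -i 's/\r$//' document.tex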
{ "source": [ "https://unix.stackexchange.com/questions/134695", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16920/" ] }
134,797
I know how to use /etc/fstab to automatically mount devices on boot or when doing sudo mount -a , which works perfectly fine. For example, here is my current line for my device UUID=B864-497A /media/usbstick vfat defaults,users,noatime,nodiratime,umask=000 0 0 How do I achieve automatic mounting when this USB device with known UUID is plugged in while the system is already running, so that I don't have to run sudo mount -a after it is plugged in? Additional info: I'm working on an up-to-date console-only Debian wheezy linux.
I use the usbmount package to automount USB drives on my Ubuntu server install. I have confirmed that the package exists for Wheezy too. Recently also added for Jessie . sudo apt-get install usbmount usbmount will automount hfsplus, vfat, and ext (2, 3, and 4) file systems. You can configure it to mount more/different file systems in /etc/usbmount/usbmount.conf . By default it mounts these file systems with the sync,noexec,nodev,noatime,nodiratime options, however this can also be changed in the aforementioned configuration file. usbmount also supports custom mount options for different file system types and custom mountpoints.
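As a rough sketch of that configuration file (variable names as I remember them from /etc/usbmount/usbmount.conf; check the comments in your copy, they may differ between versions):

# /etc/usbmount/usbmount.conf (excerpt, illustrative only)
FILESYSTEMS="vfat ext2 ext3 ext4 hfsplus"
MOUNTOPTIONS="sync,noexec,nodev,noatime,nodiratime"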
{ "source": [ "https://unix.stackexchange.com/questions/134797", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50666/" ] }
134,924
I know that I can append & to a command to run the process in the background. I'm SSH'ing into an Ubuntu 12.04 box and running a python program with $python program.py & -- but when I go to close the terminal window I get a message saying that closing the terminal will kill the running process. Why is this? I am using the ampersand to run the process in the background. How can I get it to run regardless of if I am SSH'ed in?
When you close a terminal window, the terminal emulator sends a SIGHUP to the process it is running, your shell. Your shell then forwards that SIGHUP to everything it's running. On your local system, this is the ssh. The ssh then forwards the SIGHUP to what it's running, the remote shell. So your remote shell then sends a SIGHUP to all its processes, including your backgrounded program. There are 2 ways around this:
1. Disassociate the backgrounded program from your shell. Use the disown command after backgrounding your process; this will make the shell forget about it. Or prefix your command with nohup ( nohup $python program.py & ). This accomplishes the same thing, but by using an intermediate process: basically it ignores the SIGHUP signal, and then forks & executes your program, which inherits the setting, and then exits. Because it forked, the program being launched is not a child of the shell, and the shell doesn't know about it. And unless it installs a signal handler for SIGHUP, it keeps the ignore action anyway.
2. Use logout (or Ctrl + d ) instead of closing the terminal window. When you use logout , this isn't a SIGHUP, and so the shell won't send a SIGHUP to any of its children.
Additionally you must make sure that your program doesn't write to the terminal through STDOUT or STDERR, as both of those will no longer exist once the terminal exits. If you don't redirect them to something like /dev/null , the program will still run, but if it tries to write to them, it'll get a SIGPIPE (the default action of SIGPIPE is to kill the process).
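Putting those pieces together, a typical invocation looks something like this (a sketch; the log file name is just an example):

nohup python program.py > program.log 2>&1 &
# or, if you already started it with a plain "&":
disown

The redirection takes care of the STDOUT/STDERR problem mentioned above, and nohup (or disown) takes care of the SIGHUP.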
{ "source": [ "https://unix.stackexchange.com/questions/134924", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18366/" ] }
135,010
This Bash guide says: If the index number is @ or * , all members of an array are referenced. When I do this: LIST=(1 2 3) for i in "${LIST[@]}"; do echo "example.$i" done it gives the desired result: example.1 example.2 example.3 But when I use ${LIST[*]} , I get example.1 2 3 instead. Why? Edit: when using printf , @ and * actually do give the same results.
The difference is subtle; "${LIST[*]}" (like "$*" ) creates one argument, while "${LIST[@]}" (like "$@" ) will expand each item into separate arguments, so: LIST=(1 2 3) for i in "${LIST[@]}"; do echo "example.$i"; done will deal with the list as multiple separate arguments (one loop iteration, and one line of output, per element). But: LIST=(1 2 3) for i in "${LIST[*]}"; do echo "example.$i"; done will deal with the list as a single argument (one iteration, one line of output).
{ "source": [ "https://unix.stackexchange.com/questions/135010", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20865/" ] }
135,084
When I do a rm * in zsh, I get something like this: 2014-06-08 10:14:23 $ rm * zsh: sure you want to delete all the files in /home/assay/assay/log [yn]? y rm: remove regular file `development.log'? y First zsh asks me if I am sure I want to delete all, and then rm asks for for each specific file. How can I just have the zsh verification?
The message “zsh: sure you want to delete all the files” is a zsh feature, specifically triggered by invoking a command called rm with an argument that is * or something/* before glob expansion. You can turn this off with setopt rm_star_silent . The message “rm: remove regular file” comes from the rm command itself. It will not show up by default, it only appears when rm is invoked with the option -i . If you don't want this message, don't pass that option. Even without -i , rm prompts for confirmation (with a different message) if you try to delete a read-only file; you can remove this confirmation by passing the option -f . Since you didn't pass -i on the command line, rm is presumably an alias for rm -i (it could also be a function, a non-standard wrapper command, or a different alias, but the alias rm -i is by far the most plausible). Some default configurations include alias rm='rm -i' in their shell initialization files; this could be something that your distribution or your system administrator set up, or something that you picked up from somewhere and added to your configuration file then forgot. Check your ~/.zshrc for an alias definition for rm . If you find one, remove it. If you don't find one, add a command to remove the alias: unalias rm
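Concretely, you can check where the second prompt comes from and turn both prompts off like this (a sketch for your ~/.zshrc):

$ type rm                # prints "rm is an alias for rm -i" if the alias is the culprit

# in ~/.zshrc:
setopt rm_star_silent    # no "sure you want to delete all the files" prompt
unalias rm 2>/dev/null   # drop a possible rm='rm -i' alias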
{ "source": [ "https://unix.stackexchange.com/questions/135084", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63878/" ] }
136,118
My question is how can I convert all text from uppercase to lowercase and vice versa? That is to change the cases of all the letters. It has to be done with a sed replacement somehow.
Here is a straightforward way in sed : $ echo qWeRtY | sed -e 'y/abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ/ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz/' QwErTy or a shorter way with GNU sed , working with any character for which a lowercase<->uppercase conversion exists in your locale: $ echo qWeRtY | sed -E 's/([[:lower:]])|([[:upper:]])/\U\1\L\2/g' QwErTy If you can use other tools: perl (limited to ASCII letters): $ echo qWeRtY | perl -pe 'y/[a-z][A-Z]/[A-Z][a-z]/' QwErTy perl (more generally): $ echo 'αΒγ' | perl -Mopen=locale -pe 's/(\p{Ll})|(\p{Lu})/uc($1).lc($2)/ge' ΑβΓ
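As an aside, if plain tr is also an option, the ASCII-only swap is even shorter:

$ echo qWeRtY | tr 'a-zA-Z' 'A-Za-z'
QwErTy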
{ "source": [ "https://unix.stackexchange.com/questions/136118", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/71884/" ] }
136,269
Recently we had a rather unpleasant situation with our customer - Raspberry Pi based "kiosk" used to display remote sensing data (nothing more fancy than a kiosk mode browser displaying a self-updating webpage from the data-collection server) failed to boot due to filesystem corruption. Ext4, Manual fsck required, the system will be a part of tomorrow's important presentation, service required immediately. Of course we can't require the customer to shut down the system nicely when switching it off for the night; the system must simply withstand such mistreatment. I'd like to avoid such situations in the future, and I'd like to move the OS to a filesystem that would prevent this. There's a bunch of filesystems intended for MTD devices, where getting them to run on SD card (a standard block device) requires some serious hoop-jumping. There are also some other filesystems (journalling etc) that boast good resistance against corruption. I still need to see some reasonable comparison of their pros and cons. Which filesystem available in Linux would provide best resistance against corruption on unexpected power failures and not require jumping through impossible hoops like yaffs2 in order to install to SD. Wear-balancing is a plus, but not a requirement - SD cards usually have their own mechanisms, if less than perfect, though the system should be "gentle for flash" (systems like NTFS can murder an SD card within a month).
The best resistance against corruption on a single SD card would be offered by BTRFS in RAID1 mode with automatic scrub run every predefined period of time. The benefits: retaining ability to RW to the filesystem modern, fully featured filesystem with very useful options for an RPi, like transparent compression and snapshots designed with flash memory in mind (among other things) Here is how to do it: I run my RaspberryPi on ArchARM linux and my card is in the SD reader, so modify those instructions accordingly for other distros and /dev interfaces. Here is an example partition layout: /dev/mmcblk0p1: fat32 boot partition /dev/mmcblk0p2: to be used as btrfs partition /dev/mmcblk0p3: to be used as btrfs partition (mirrored with the above) /dev/mmcblk0p4 (optional): swap To get btrfs into RAID1, you create the filesystem like so: mkfs.btrfs -m raid1 -d raid1 /dev/mmcblk0p2 /dev/mmcblk0p3 Then you rsync -aAXv to it your previously backed up system. To get it to boot from BTRFS in raid1, you need to modify initramfs . Therefore, you need to do the following while you still have your system running on your old filesystem. Raspberry does not normally use mkinitcpio so you must install it. Then, you need to add “btrfs” to MODULES array in mkinitcpio.conf and recreate initramfs with mkinitcpio -g /boot/initrd -k YOUR_KERNEL_VERSION To know what to type instead of YOUR_KERNEL_VERSION, run ls /lib/modules If you update the kernel, you MUST recreate initramfs BEFORE you reboot. Then, you need to modify RPi’s boot files. In cmdline.txt, you need to have root=/dev/mmcblk0p2 initrd=0x01f00000 rootfstype=btrfs and in config.txt, you need to add initramfs initrd 0x01f00000 Once you’ve done all that and successfully booted into your btrfs RAID1 system, the only thing left is to set up periodic scrub (every 3-7 days) either with systemd timer (preferred), or cron (dcron) like so: btrfs scrub start / It will run on your filesystem comparing checksums of all the files and fixing them (replacing with the correct copy) if it finds any corruption. The combination of BTRFS RAID1, single medium and Raspberry Pi make this pretty arcane stuff. It took some time and work to put all the pieces together, but here it is.
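For the periodic scrub via cron, the entry could look something like this (the Sunday 3 a.m. schedule and the btrfs path are just examples; put it in root's crontab):

# run weekly as root; -B waits for the scrub to finish so errors show up in mail/logs
0 3 * * 0   /usr/bin/btrfs scrub start -B /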
{ "source": [ "https://unix.stackexchange.com/questions/136269", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/30534/" ] }
136,278
It's a question about user space applications, but hear me out! Three "applications", so to speak, are required to boot a functional distribution of Linux: Bootloader - For embedded typically that's U-Boot, although not a hard requirement. Kernel - That's pretty straightforward. Root Filesystem - Can't boot to a shell without it. Contains the filesystem the kernel boots to, and where init is called from. My question is in regard to #3. If someone wanted to build an extremely minimal rootfs (for this question let's say no GUI, shell only), what files/programs are required to boot to a shell?
That entirely depends on what services you want to have on your device. Programs You can make Linux boot directly into a shell . It isn't very useful in production — who'd just want to have a shell sitting there — but it's useful as an intervention mechanism when you have an interactive bootloader: pass init=/bin/sh to the kernel command line. All Linux systems (and all unix systems) have a Bourne/POSIX-style shell in /bin/sh . You'll need a set of shell utilities . BusyBox is a very common choice; it contains a shell and common utilities for file and text manipulation ( cp , grep , …), networking setup ( ping , ifconfig , …), process manipulation ( ps , nice , …), and various other system tools ( fdisk , mount , syslogd , …). BusyBox is extremely configurable: you can select which tools you want and even individual features at compile time, to get the right size/functionality compromise for your application. Apart from sh , the bare minimum that you can't really do anything without is mount , umount and halt , but it would be atypical to not have also cat , cp , mv , rm , mkdir , rmdir , ps , sync and a few more. BusyBox installs as a single binary called busybox , with a symbolic link for each utility. The first process on a normal unix system is called init . Its job is to start other services. BusyBox contains an init system. In addition to the init binary (usually located in /sbin ), you'll need its configuration files (usually called /etc/inittab — some modern init replacement do away with that file but you won't find them on a small embedded system) that indicate what services to start and when. For BusyBox, /etc/inittab is optional; if it's missing, you get a root shell on the console and the script /etc/init.d/rcS (default location) is executed at boot time. That's all you need, beyond of course the programs that make your device do something useful. For example, on my home router running an OpenWrt variant, the only programs are BusyBox, nvram (to read and change settings in NVRAM), and networking utilities. Unless all your executables are statically linked, you will need the dynamic loader ( ld.so , which may be called by different names depending on the choice of libc and on the processor architectures) and all the dynamic libraries ( /lib/lib*.so , perhaps some of these in /usr/lib ) required by these executables. Directory structure The Filesystem Hierarchy Standard describes the common directory structure of Linux systems. It is geared towards desktop and server installations: a lot of it can be omitted on an embedded system. Here is a typical minimum. /bin : executable programs (some may be in /usr/bin instead). /dev : device nodes (see below) /etc : configuration files /lib : shared libraries, including the dynamic loader (unless all executables are statically linked) /proc : mount point for the proc filesystem /sbin : executable programs. The distinction with /bin is that /sbin is for programs that are only useful to the system administrator, but this distinction isn't meaningful on embedded devices. You can make /sbin a symbolic link to /bin . /mnt : handy to have on read-only root filesystems as a scratch mount point during maintenance /sys : mount point for the sysfs filesystem /tmp : location for temporary files (often a tmpfs mount) /usr : contains subdirectories bin , lib and sbin . /usr exists for extra files that are not on the root filesystem. If you don't have that, you can make /usr a symbolic link to the root directory. 
Device files Here are some typical entries in a minimal /dev : console full (writing to it always reports “no space left on device”) log (a socket that programs use to send log entries), if you have a syslogd daemon (such as BusyBox's) reading from it null (acts like a file that's always empty) ptmx and a pts directory , if you want to use pseudo-terminals (i.e. any terminal other than the console) — e.g. if the device is networked and you want to telnet or ssh in random (returns random bytes, risks blocking) tty (always designates the program's terminal) urandom (returns random bytes, never blocks but may be non-random on a freshly-booted device) zero (contains an infinite sequence of null bytes) Beyond that you'll need entries for your hardware (except network interfaces, these don't get entries in /dev ): serial ports, storage, etc. For embedded devices, you would normally create the device entries directly on the root filesystem. High-end systems have a script called MAKEDEV to create /dev entries, but on an embedded system the script is often not bundled into the image. If some hardware can be hotplugged (e.g. if the device has a USB host port), then /dev should be managed by udev (you may still have a minimal set on the root filesystem). Boot-time actions Beyond the root filesystem, you need to mount a few more for normal operation: procfs on /proc (pretty much indispensible) sysfs on /sys (pretty much indispensible) tmpfs filesystem on /tmp (to allow programs to create temporary files that will be in RAM, rather than on the root filesystem which may be in flash or read-only) tmpfs, devfs or devtmpfs on /dev if dynamic (see udev in “Device files” above) devpts on /dev/pts if you want to use [pseudo-terminals (see the remark about pts above) You can make an /etc/fstab file and call mount -a , or run mount manually. Start a syslog daemon (as well as klogd for kernel logs, if the syslogd program doesn't take care of it), if you have any place to write logs to. After this, the device is ready to start application-specific services. How to make a root filesystem This is a long and diverse story, so all I'll do here is give a few pointers. The root filesystem may be kept in RAM (loaded from a (usually compressed) image in ROM or flash), or on a disk-based filesystem (stored in ROM or flash), or loaded from the network (often over TFTP ) if applicable. If the root filesystem is in RAM, make it the initramfs — a RAM filesystem whose content is created at boot time. Many frameworks exist for assembling root images for embedded systems. There are a few pointers in the BusyBox FAQ . Buildroot is a popular one, allowing you to build a whole root image with a setup similar to the Linux kernel and BusyBox. OpenEmbedded is another such framework. Wikipedia has an (incomplete) list of popular embedded Linux distributions . An example of embedded Linux you may have near you is the OpenWrt family of operating systems for network appliances (popular on tinkerers' home routers). If you want to learn by experience, you can try Linux from Scratch , but it's geared towards desktop systems for hobbyists rather than towards embedded devices. A note on Linux vs Linux kernel The only behavior that's baked into the Linux kernel is that the first program that's launched at boot time. (I won't get into initrd and initramfs subtleties here.) This program, traditionally called init , has process ID 1 and has certain privileges (immunity to KILL signals ) and responsibilities (reaping orphans ). 
You can run a system with a Linux kernel and start whatever you want as the first process, but then what you have is an operating system based on the Linux kernel, and not what is normally called “Linux” — Linux , in the common sense of the term, is a Unix -like operating system whose kernel is the Linux kernel . For example, Android is an operating system which is not Unix-like but based on the Linux kernel.
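To make the "Boot-time actions" section above concrete, here is a sketch of a minimal BusyBox-style /etc/init.d/rcS (the mount list and services are illustrative; adjust to what your device actually needs):

#!/bin/sh
# mount the pseudo and scratch filesystems
mount -t proc proc /proc
mount -t sysfs sysfs /sys
mount -t tmpfs tmpfs /tmp
mount -t devpts devpts /dev/pts
# start logging, then application-specific services
syslogd
klogd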
{ "source": [ "https://unix.stackexchange.com/questions/136278", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36126/" ] }
136,291
I was running a shell script with commands to run several memory-intensive programs (2-5 GB) back-to-back. When I went back to check on the progress of my script I was surprised to discover that some of my processes were Killed , as my terminal reported to me. Several programs had already successively completed before the programs that were later Killed started, but all the programs afterwards failed in a segmentation fault (which may or may not have been due to a bug in my code, keep reading). I looked at the usage history of the particular cluster I was using and saw that someone started running several memory-intensive processes at the same time and in doing so exhausted the real memory (and possibly even the swap space) available to the cluster. As best as I can figure, these memory-intensive processes started running about the same time I started having problems with my programs. Is it possible that Linux killed my programs once it started running out of memory? And is it possible that the segmentation faults I got later on were due to the lack of memory available to run my programs (instead of a bug in my code)?
It can. There are two different out of memory conditions you can encounter in Linux. Which you encounter depends on the value of sysctl vm.overcommit_memory ( /proc/sys/vm/overcommit_memory ) Introduction: The kernel can perform what is called 'memory overcommit'. This is when the kernel allocates programs more memory than is really present in the system. This is done in the hopes that the programs won't actually use all the memory they allocated, as this is a quite common occurrence. overcommit_memory = 2 When overcommit_memory is set to 2 , the kernel does not perform any overcommit at all. Instead when a program is allocated memory, it is guaranteed access to have that memory. If the system does not have enough free memory to satisfy an allocation request, the kernel will just return a failure for the request. It is up to the program to gracefully handle the situation. If it does not check that the allocation succeeded when it really failed, the application will often encounter a segfault. In the case of the segfault, you should find a line such as this in the output of dmesg : [1962.987529] myapp[3303]: segfault at 0 ip 00400559 sp 5bc7b1b0 error 6 in myapp[400000+1000] The at 0 means that the application tried to access an uninitialized pointer, which can be the result of a failed memory allocation call (but it is not the only way). overcommit_memory = 0 and 1 When overcommit_memory is set to 0 or 1 , overcommit is enabled, and programs are allowed to allocate more memory than is really available. However, when a program wants to use the memory it was allocated, but the kernel finds that it doesn't actually have enough memory to satisfy it, it needs to get some memory back. It first tries to perform various memory cleanup tasks, such as flushing caches, but if this is not enough it will then terminate a process. This termination is performed by the OOM-Killer. The OOM-Killer looks at the system to see what programs are using what memory, how long they've been running, who's running them, and a number of other factors to determine which one gets killed. After the process has been killed, the memory it was using is freed up, and the program which just caused the out-of-memory condition now has the memory it needs. However, even in this mode, programs can still be denied allocation requests. When overcommit_memory is 0 , the kernel tries to take a best guess at when it should start denying allocation requests. When it is set to 1 , I'm not sure what determination it uses to determine when it should deny a request but it can deny very large requests. You can see if the OOM-Killer is involved by looking at the output of dmesg , and finding a messages such as: [11686.043641] Out of memory: Kill process 2603 (flasherav) score 761 or sacrifice child [11686.043647] Killed process 2603 (flasherav) total-vm:1498536kB, anon-rss:721784kB, file-rss:4228kB
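To check which mode a machine is in, and to look for OOM-Killer activity after the fact, something like this works (the 0 shown is just an example value):

$ cat /proc/sys/vm/overcommit_memory
0
$ dmesg | grep -i -E 'out of memory|killed process'

Switching modes is a sysctl away, e.g. sudo sysctl vm.overcommit_memory=2 , but read up on the overcommit documentation before doing that on a shared cluster.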
{ "source": [ "https://unix.stackexchange.com/questions/136291", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42001/" ] }
136,322
Given: there are 40 columns in a record. I want to replace the 35th column so that it becomes the current content of the 35th column followed by a "$" symbol. What came to mind is something like: awk '{print $1" "$2" "...$35"$ "$36...$40}' It works, but it is infeasible when the number of columns is as large as 10k. I need a better way to do this.
You can do it like this: awk '$35=$35"$"'
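To see what that does, here is the same idea on a small record (3rd column of a 5-field, space-separated line, purely for illustration):

$ echo 'a b c d e' | awk '$3=$3"$"'
a b c$ d e

The assignment itself acts as the pattern: it evaluates to the new (non-empty) value of the field, so the default action of printing the modified line kicks in.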
{ "source": [ "https://unix.stackexchange.com/questions/136322", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56579/" ] }
136,350
I entered crontab -r instead of crontab -e and all my cron jobs have been removed. What is the best way (or is there one) to recover those jobs?
crontab -r removes the only file containing the cron jobs. So if you did not make a backup, your only recovery options are: On RedHat/CentOS, if your jobs have been triggered before, you can find the cron log in /var/log/cron . The file will help you rewrite the jobs again. Another option is to recover the file using a file recovery tool. This is less likely to be successful though, since the system partition is usually a busy one and corresponding sectors probably have already been overwritten. On Ubuntu/Debian, if your task has run before, try grep CRON /var/log/syslog
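If the jobs did run at least once, you can often reconstruct the command lines from those logs, e.g. (log locations and exact format vary by distribution, and you may need root to read them):

grep -h "($USER) CMD" /var/log/cron* /var/log/syslog* 2>/dev/null | sort -u

Once rebuilt, re-add them with crontab -e , and consider keeping a backup from now on, e.g. crontab -l > ~/crontab.backup .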
{ "source": [ "https://unix.stackexchange.com/questions/136350", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72015/" ] }
136,351
I have a few servers configured in ~/.ssh/config , such as alpha and beta . How might I configure Bash such that the commands $ ssh al Tab and $ scp file.tgz al Tab autocomplete the names of the configured servers? I don't want to add the servers to another file (i.e. a Bash array) each time one is added, as we add and remove servers regularly and the list is quite large. This is on Kubuntu 12.10, and I do have bash-completion installed.
Found it!! It seems that in Ubuntu the entries in ~/.ssh/known_hosts are hashed , so SSH completion cannot read them. This is a feature, not a bug. Even by adding HashKnownHosts no to ~/.ssh/config and /etc/ssh/ssh_config I was unable to prevent the host hashing. However, the hosts that I am interested in are also found in ~/.ssh/config . Here is a script for Bash Completion that reads the entries from that file:

_ssh() {
    local cur prev opts
    COMPREPLY=()
    cur="${COMP_WORDS[COMP_CWORD]}"
    prev="${COMP_WORDS[COMP_CWORD-1]}"
    opts=$(grep '^Host' ~/.ssh/config ~/.ssh/config.d/* 2>/dev/null | grep -v '[?*]' | cut -d ' ' -f 2-)

    COMPREPLY=( $(compgen -W "$opts" -- ${cur}) )
    return 0
}
complete -F _ssh ssh

Put that script in /etc/bash_completion.d/ssh and then source it with the following command: $ . /etc/bash_completion.d/ssh I found this guide ( Archive.org copy ) invaluable and I would not have been able to script this without it. Thank you Steve Kemp for writing that terrific guide!
{ "source": [ "https://unix.stackexchange.com/questions/136351", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9760/" ] }
136,364
It is explained here: Will Linux start killing my processes without asking me if memory gets short? that the OOM-Killer can be configured via overcommit_memory and that: 2 = no overcommit. Allocations fail if asking too much. 0, 1 = overcommit (heuristically or always). Kill some process(es) based on some heuristics when too much memory is actually accessed. Now, I may completely misunderstand that, but why isn't there an option (or why isn't it the default) to kill the very process that actually tries to access too much memory it allocated?
Consider this scenario: You have 4GB of memory free. A faulty process allocates 3.999GB. You open a task manager to kill the runaway process. The task manager allocates 0.002GB. If the process that got killed was the last process to request memory, your task manager would get killed. Or: You have 4GB of memory free. A faulty process allocates 3.999GB. You open a task manager to kill the runaway process. The X server allocates 0.002GB to handle the task manager's window. Now your X server gets killed. It didn't cause the problem; it was just "in the wrong place at the wrong time". It happened to be the first process to allocate more memory when there was none left, but it wasn't the process that used all the memory to start with.
{ "source": [ "https://unix.stackexchange.com/questions/136364", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27664/" ] }
136,371
I want to download a folder from my google drive using terminal? Is there any way to do that? I tried this: $ wget "https://drive.google.com/folderview?id=0B-Zc9K0k9q-WWUlqMXAyTG40MjA&usp=sharing" But it is downloading this text file: folderview?id=0B-Zc9K0k9q-WdEY5a1BCUDBaejQ&usp=sharing . Is there any way to download google drive folder from terminal?
I was able to download a public shared file using this command: $ wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -O FILENAME Where FILEID must be replaced by the actual file ID. FILENAME is the path/filename where download will be stored. Note you cannot use a folderid instead of fileid. I have used view source in a folder view where I could find the following HTML <div id="entry-0B0jxxycBojSwVW... . The string starting with 0B was the fileid.
{ "source": [ "https://unix.stackexchange.com/questions/136371", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56594/" ] }
136,380
I have two files that essentially contain memory dumps in a hex format. At the moment I use diff to see if the files are different and where the differences are. However, this can be misleading when trying to determine the exact location (i.e. memory address) of the difference. Consider the following example showing the two files side-by-side. file1: file2: 0001 | 0001 ABCD | FFFF 1234 | ABCD FFFF | 1234 Now diff -u will show one insertion and one deletion, although 3 lines (memory locations) have changed between the two files: 0001 +FFFF ABCD 1234 -FFFF Is there an easy way to compare the two files such that each line is only compared with the same line (in terms of line numbering) in the other file? So in this example it should report that the last 3 lines have changed, along with the changed lines from file1 and file2 . The output doesn't have to be diff-style, but it would be cool if it could be colored (at the moment I color the diff -u output using sed so that could easily be adapted).
This could be an approach: diff <(nl file1) <(nl file2) nl numbers the lines of each file, so diff is forced to compare the two files line by line, and its output shows the line numbers where they differ.
{ "source": [ "https://unix.stackexchange.com/questions/136380", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26216/" ] }
136,407
On an Intel machine I know I can look at the outcome of uname -m to know if my OS is 32 or 64 bit, but under ARM this gives: armv7l I deduced from file /usr/bin/ls that I'm on a 32-bit OS, but how can I know this in an easier way?
There are several gradations, since you can run a 32-bit or mixed operating system on a 64-bit-capable CPU. See 64-bit kernel, but all 32-bit ELF executable running processes, how is this? for a detailed discussion (written for x86, but most of it applies to arm as well). You can find the processor model in /proc/cpuinfo . For example: $ cat /proc/cpuinfo Processor : ARMv7 Processor rev 10 (v7l) ARMv7 (and below) is 32-bit. ARMv8 introduces the 64-bit instruction set. If you want to see whether your system supports 64-bit binaries, check the kernel architecture: $ uname -m armv7l On a 64-bit processor, you'd see a string starting with armv8 (or above) if the uname process itself is a 32-bit process, or aarch64 if it's a 64-bit process. (See also https://stackoverflow.com/questions/45125516/possible-values-for-uname-m )
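If you want to script the check, a small case statement over uname -m is usually enough (a sketch; as discussed above, this reflects the kernel/userland mode, not the raw CPU capability):

case "$(uname -m)" in
    aarch64|arm64) echo "64-bit" ;;
    arm*)          echo "32-bit" ;;
    *)             echo "not ARM: $(uname -m)" ;;
esac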
{ "source": [ "https://unix.stackexchange.com/questions/136407", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64031/" ] }
136,423
I am using my school's computers and would like to use zsh instead of bash . I'd like to make it the default shell, but I cannot run a command such as $ chsh -s $(which zsh) because I don't have admin privileges. Is there a way I can put something in my .bashrc or something that automatically calls zsh when it opens as a workaround? To clarify, zsh is already installed.
Create .bash_profile in your home directory and add these lines:

export SHELL=/bin/zsh
exec /bin/zsh -l

Update: .profile may work as a general solution when the default shell is not bash. I'm not sure whether .profile might be read by Zsh as well, which would make the exec redundant, but we can do it safely with a simple check:

export SHELL=/bin/zsh
[ -z "$ZSH_VERSION" ] && exec /bin/zsh -l

We can also use which to get the dynamic path of zsh, which relies on the value of $PATH :

export SHELL=`which zsh`
[ -z "$ZSH_VERSION" ] && exec "$SHELL" -l
{ "source": [ "https://unix.stackexchange.com/questions/136423", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72085/" ] }
136,494
I've read a lot about the realpath command and how it has been deprecated with readlink -f being now recommended.  I have also seen in some places that the reason why realpath was introduced was for the lack of such functionality in readlink and that once it was introduced, realpath was no longer needed and its support discontinued by most OS vendors. The reason for my question is that I've also seen many people recommending readlink -f as a command "pretty much similar" to realpath , and that is what is bothering me, because no one elaborates on that "pretty much similar" part.  What are the actual differences?
There are several realpath commands around. The realpath utility is a wrapper around the realpath library functions and has been reinvented many times . Debian used to maintain a realpath package ( separated from dwww since woody ) which hasn't changed except regarding packaging and documentation since 2001, but has now been phased out. This utility was deprecated because there are now more standard alternatives (GNU readlink and soon GNU realpath ), but at the time, GNU utilities didn't even have readlink at all. This implementation of realpath supports a few options to prevent symbolic link resolution or produce null-terminated output. BusyBox also includes its own realpath command (which takes no option). GNU coreutils introduced a realpath command in version 8.15 in January 2012. This is a compatible replacement for BusyBox's and Debian's realpath , and also has many options in common with GNU readlink . realpath has the same effect as readlink -f with GNU readlink . What distinguishes the two commands (or rather the various realpath commands from readlink -f ) is the extra options that they support. GNU realpath is not deprecated; it has the opposite problem: it's too new to be available everywhere. Debian used to omit GNU realpath from its coreutils package and stick with its own realpath . I don't know why, since GNU realpath should be a drop-in replacement. As of Debian jessie and Ubuntu 16.04, however, GNU realpath is used. On Linux systems, at the moment, your best bet to canonicalize a path that may contain symbolic links is readlink -f . BSD systems have a readlink command, with different capabilities from GNU readlink . In particular, BSD readlink does not have an option to canonicalize paths, it only traverses the symlink passed to it. readlink , incidentally, had the same problem — it was also invented many times (not adding this utility when symbolic links were added to Unix was a regrettable omission). It has now stabilized in several implementations with many incompatible flags (in particular BSD vs. GNU).
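If a script has to run both where GNU realpath exists and where it doesn't, a small fallback like this is a common workaround (a sketch; it still assumes a GNU-style readlink -f as the fallback, so it won't help on a stock BSD userland):

canonical() {
    if command -v realpath >/dev/null 2>&1; then
        realpath -- "$1"
    else
        readlink -f -- "$1"
    fi
}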
{ "source": [ "https://unix.stackexchange.com/questions/136494", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72105/" ] }
136,547
If script.sh is just something typical like #!/bin/bash echo "Hello World!" Is there a preferred way to run the script? I think you first have to chmod it so it becomes executable?
For your specific script either way will work, except that ./script.sh requires execution and readable bits, while bash script.sh only requires readable bit. The reason of the permissions requirement difference lies in how the program that interprets your script is loaded: ./script.sh makes your shell run the file as if it was a regular executable. The shell forks itself and uses a system call (e.g. execve ) to make the operating system execute the file in the forked process. The operating system will check the file's permissions (hence the execution bit needs to be set) and forward the request to the program loader , which looks at the file and determines how to execute it. In Linux compiled executables start with an ELF magic number, while scripts start with a #! ( hashbang ). A hashbang header means that the file is a script and needs to be interpreted by the program that is specified after the hashbang. This allows a script itself to tell the system how to interpret the script. With your script, the program loader will execute /bin/bash and pass ./script.sh as the command-line argument. bash script.sh makes your shell run bash and pass script.sh as the command-line argument So the operating system will load bash (not even looking at script.sh , because it's just a command-line argument). The created bash process will then interpret the script.sh because it's passed as the command-line argument. Because script.sh is only read by bash as a regular file, the execution bit is not required. I recommend using ./script.sh though, because you might not know which interpreter the script is requiring. So let the program loader determine that for you.
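You can see the permission difference for yourself (illustrative session):

$ chmod -x script.sh
$ ./script.sh
bash: ./script.sh: Permission denied
$ bash script.sh
Hello World!
$ chmod +x script.sh
$ ./script.sh
Hello World!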
{ "source": [ "https://unix.stackexchange.com/questions/136547", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72136/" ] }
136,599
How can I perform two commands on one input, without typing that input twice? For example, the stat command tells a lot about a file, but doesn't indicate its file type: stat fileName The file command, tells what type a file is: file fileName You can perform this in one line this way: stat fileName ; file fileName However, you have to type the fileName twice. How can you execute both commands on the same input (without typing the input or a variable of the input twice)? In Linux, I know how to pipe outputs, but how do you pipe inputs?
Here is another way: $ stat filename && file "$_" Example: $ stat /usr/bin/yum && file "$_" File: ‘/usr/bin/yum’ Size: 801 Blocks: 8 IO Block: 4096 regular file Device: 804h/2052d Inode: 1189124 Links: 1 Access: (0755/-rwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Context: system_u:object_r:rpm_exec_t:s0 Access: 2014-06-11 22:55:53.783586098 +0700 Modify: 2014-05-22 16:49:35.000000000 +0700 Change: 2014-06-11 19:15:30.047017844 +0700 Birth: - /usr/bin/yum: Python script, ASCII text executable That works in bash and zsh . That also works in mksh and dash but only when interactive. In AT&T ksh, that only works when the file "$_" is on a different line from the stat one.
{ "source": [ "https://unix.stackexchange.com/questions/136599", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40149/" ] }
136,628
I am reading a bash script and I do not understand what is going on there. #!/bin/sh [ x$1 = x ] What is going on in the second line, and what does [ x$1 = x ] mean?
That checks that $1 is empty, though it should be quoted (identical to [ -z "$1" ] ). Some very old shells didn't handle empty strings properly, so writers of portable scripts adopted this style of checking. It hasn't been necessary for decades, but people still do it that way because people still do it that way.
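For comparison, the modern, quoted way to do the same check in a script looks like this (a generic sketch, not taken from the script you are reading):

if [ -z "$1" ]; then
    echo "usage: $0 <argument>" >&2
    exit 1
fi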
{ "source": [ "https://unix.stackexchange.com/questions/136628", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65321/" ] }
136,631
I'd like to switch directly to a pane in Tmux, by pane #. How can I do this? I know how to cycle between panes, and move to panes that are beside the current pane. I'd like to be able to run the display-panes command, which shows the "pane #" on each pane, then later on jump directly to a pane using the pane #'s that were displayed by display-panes . Is this possible? NOTE: And just to be clear, I don't mean window, I mean pane. Thanks!
You can jump directly to a pane by typing the pane's index while it is shown by the display-panes command. From man tmux : display-panes [-t target-client] (alias: displayp) Display a visible indicator of each pane shown by target-client. See the display-panes-time, display-panes-colour, and display-panes-active-colour session options. While the indicator is on screen, a pane may be selected with the ‘0’ to ‘9’ keys. Or, instead of typing the command, you can use C-b q ( C-b sends the prefix key, q displays the pane indexes), then press the number of the pane you want.
{ "source": [ "https://unix.stackexchange.com/questions/136631", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22734/" ] }
136,637
In Unix whenever we want to create a new process, we fork the current process, creating a new child process which is exactly the same as the parent process; then we do an exec system call to replace all the data from the parent process with that for the new process. Why do we create a copy of the parent process in the first place and not create a new process directly?
The short answer is, fork is in Unix because it was easy to fit into the existing system at the time, and because a predecessor system at Berkeley had used the concept of forks. From The Evolution of the Unix Time-sharing System (relevant text has been highlighted ): Process control in its modern form was designed and implemented within a couple of days. It is astonishing how easily it fitted into the existing system; at the same time it is easy to see how some of the slightly unusual features of the design are present precisely because they represented small, easily-coded changes to what existed . A good example is the separation of the fork and exec functions. The most common model for the creation of new processes involves specifying a program for the process to execute; in Unix, a forked process continues to run the same program as its parent until it performs an explicit exec. The separation of the functions is certainly not unique to Unix, and in fact it was present in the Berkeley time-sharing system, which was well-known to Thompson . Still, it seems reasonable to suppose that it exists in Unix mainly because of the ease with which fork could be implemented without changing much else . The system already handled multiple (i.e. two) processes; there was a process table, and the processes were swapped between main memory and the disk. The initial implementation of fork required only 1) Expansion of the process table 2) Addition of a fork call that copied the current process to the disk swap area, using the already existing swap IO primitives, and made some adjustments to the process table. In fact, the PDP-7's fork call required precisely 27 lines of assembly code. Of course, other changes in the operating system and user programs were required, and some of them were rather interesting and unexpected. But a combined fork-exec would have been considerably more complicated , if only because exec as such did not exist; its function was already performed, using explicit IO, by the shell. Since that paper, Unix has evolved. fork followed by exec is no longer the only way to run a program. vfork was created to be a more efficient fork for the case where the new process intends to do an exec right after the fork. After doing a vfork, the parent and child processes share the same data space, and the parent process is suspended until the child process either execs a program or exits. posix_spawn creates a new process and executes a file in a single system call. It takes a bunch of parameters that let you selectively share the caller's open files and copy its signal disposition and other attributes to the new process.
{ "source": [ "https://unix.stackexchange.com/questions/136637", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72211/" ] }
136,642
For network catastrophe simulations of our server environment, we are looking for a way to intentionally time out a TCP socket. Are there any simple ways for existing sockets? Also, a little C test-case program would be a plus. We have already tried putting down network interfaces during TCP buffer reading, and reading from disconnected mounted resources (samba). Our test server is Ubuntu 12.04.4.
To cause an existing connection to time out you can use iptables . Just enable a DROP rule on the port you want to disable. So to simulate a timeout for your Samba server, while an active connection is up, execute the following on the server: sudo iptables -A INPUT -p tcp --dport 445 -j DROP The DROP target will not reply with a RST packet or ICMP error to the packet's sender. The client will stop receiving packets from the server and eventually time out. Depending on if/how you have iptables configured, you may want to insert the rule higher into the INPUT ruleset.
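When the simulation is over, remove the rule again with the same parameters, just with -D instead of -A:

sudo iptables -D INPUT -p tcp --dport 445 -j DROP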
{ "source": [ "https://unix.stackexchange.com/questions/136642", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64768/" ] }
136,794
How do I replace the following string hd_ma_prod_customer_ro:*:123456789:john.doe with john.doe Basically I need to look for the last colon (:) and delete everything before and including it.
Assuming what you actually mean is that you want to delete everything up to the last colon and leave the john.doe intact: echo 'hd_ma_prod_customer_ro:*:123456789:john.doe' | sed 's/.*://' Explanation: First line just pipes the test string to sed for example purposes. The second is a basic sed substitution . The part between the first and second / is the regex to search for and the part between the second and third is what to replace it with (nothing in this case as we are deleting). For the regex, . matches any character, * repeats this any number of times (including zero) and : matches a colon. So effectively it is anything followed by a colon. Since .* can include a colon, the match is 'greedy' and everything up to the last colon is included.
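If the string is already in a shell variable, you can also do it without an external process, using parameter expansion (a sketch):

s='hd_ma_prod_customer_ro:*:123456789:john.doe'
echo "${s##*:}"        # prints: john.doe

The ## operator removes the longest prefix matching *: , i.e. everything up to and including the last colon.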
{ "source": [ "https://unix.stackexchange.com/questions/136794", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72332/" ] }
136,804
I need to remove files older than 3 days with a cron job in 3 different directories. (these 3 directories are children of a parent directory /a/b/c/1 & /a/b/c/2 & /a/b/c/3 ) Can this be done with one line in the crontab?
This is easy enough (although note that this goes by a modification time more than 3 days ago since a creation time is only available on certain filesystems with special tools): find /a/b/c/1 /a/b/c/2 /a/b/c/3 -type f -mtime +3 #-delete Remove the # before the -delete once you are sure that it is finding the files you want to remove. To have it run by cron, I would probably just create an executable script (add a shebang - #!/bin/sh - to the top line of the file and make it executable with chmod a+x ), then put it in an appropriate cron directory like /etc/cron.daily or /etc/cron.weekly . Provided of course that you do not need a more specific schedule and that these directories exist on your distro. Update As noted below, the -delete option for find isn't very portable. A POSIX compatible approach would be: find /a/b/c/1 /a/b/c/2 /a/b/c/3 -type f -mtime +3 #-exec rm {} + Again remove the # when you are sure you have the right files. Update2 To quote from Stéphane Chazelas' comment below: Note that -exec rm {} + has race condition vulnerabilities which -delete (where available) doesn't have. So don't use it on directories that are writeable by others. Some finds also have a -execdir that mitigates against those vulnerabilities.
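If you would rather keep it as a single crontab line instead of a script, the entry could look something like this (the 3 a.m. schedule is just an example; add it with crontab -e ):

0 3 * * * find /a/b/c/1 /a/b/c/2 /a/b/c/3 -type f -mtime +3 -exec rm {} +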
{ "source": [ "https://unix.stackexchange.com/questions/136804", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72336/" ] }
136,884
I need some help to figure out how to use the sed command to only show the first column and last column in a text file. Here is what I have so far for column 1: cat logfile | sed 's/\|/ /'|awk '{print $1}' My feeble attempt at getting the last column to show as well was: cat logfile | sed 's/\|/ /'|awk '{print $1}{print $8}' However this takes the first column and last column and merges them together in one list. Is there a way to print the first column and last columns clearly with sed and awk commands? Sample input: foo|dog|cat|mouse|lion|ox|tiger|bar
Almost there. Just put both column references next to each other. cat logfile | sed 's/|/ /' | awk '{print $1, $8}' Also note that you don't need cat here. sed 's/|/ /' logfile | awk '{print $1, $8}' Also note you can tell awk that the column separators is | , instead of blanks, so you don't need sed either. awk -F '|' '{print $1, $8}' logfile As per suggestions by Caleb , if you want a solution that still outputs the last field, even if there are not exactly eight, you can use $NF . awk -F '|' '{print $1, $NF}' logfile Also, if you want the output to retain the | separators, instead of using a space, you can specify the output field separators. Unfortunately, it's a bit more clumsy than just using the -F flag, but here are three approaches. You can assign the input and output field separators in awk itself, in the BEGIN block. awk 'BEGIN {FS = OFS = "|"} {print $1, $8}' logfile You can assign these variables when calling awk from the command line, via the -v flag. awk -v 'FS=|' -v 'OFS=|' '{print $1, $8}' logfile or simply: awk -F '|' '{print $1 "|" $8}' logfile
{ "source": [ "https://unix.stackexchange.com/questions/136884", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70573/" ] }
136,976
My folder parent has the following content: A.Folder B.Folder C.File It has both folders and files inside. B.Folder is newer. Now I just want to get B.Folder , how could I achieve this? I tried this, ls -ltr ./parent | grep '^d' | tail -1 but it gives me drwxrwxr-x 2 user user 4096 Jun 13 10:53 B.Folder , but I just need the name B.Folder .
Try this: $ ls -td -- */ | head -n 1 The -t option makes ls sort by modification time, newest first. If you want to remove the trailing / : $ ls -td -- */ | head -n 1 | cut -d'/' -f1
{ "source": [ "https://unix.stackexchange.com/questions/136976", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19404/" ] }
136,980
I am getting the following error when unzipping a file unzip user_file_batch1.csv.zip Archive: user_file_batch1.csv End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive. unzip: cannot find zipfile directory in one of user_file_batch1.csv or user_file_batch1.csv.zip, and cannot find user_file_batch1.csv.ZIP, period. I believe this file is not corrupted or a part of multi archive file as using Archive Utility I was able to unzip it. I have tried to rename it to .zip but did not work. The output of type file user_file_batch1.csv.zip was user_file_batch1.csv.zip: uuencoded or xxencoded text
Your file has a .zip name, but is not in zip format. Renaming a file doesn't change its content, and in particular doesn't magically transform it into a different format. (Alternatively, the same error could happen with an incomplete zip file — but since that Archive Utility worked, this isn't the case.) Run file user_file_batch1.csv.zip to see what type of file this is. It's presumably some other type of archive that Archive Utility understands. user_file_batch1.csv.zip: uuencoded or xxencoded text Run the following command: uudecode user_file_batch1.csv.zip This creates a file whose name is indicated in user_file_batch1.csv.zip . If you want to pick a different output file name: uudecode -o user_file_batch1.csv.decoded user_file_batch1.csv.zip The output file at this stage may, itself, be an archive. (Perhaps it's a zip, in fact.) Run the file utility again on this file to see what it is. If you choose the automatic file name, it might give a clue.
{ "source": [ "https://unix.stackexchange.com/questions/136980", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65774/" ] }
136,987
So there's chown which lets you change the owner and group of files and/or directories. But there's also chgrp which only changes the group. Why was chgrp created? Isn't it redundant?
chown initially couldn't set the group. Later, some implementations added it as chown user.group , some as chown user:group until it was eventually standardised (emphasis mine): The 4.3 BSD method of specifying both owner and group was included in this volume of POSIX.1-2008 because: There are cases where the desired end condition could not be achieved using the chgrp and chown (that only changed the user ID) utilities. (If the current owner is not a member of the desired group and the desired owner is not a member of the current group, the chown() function could fail unless both owner and group are changed at the same time.) Even if they could be changed independently, in cases where both are being changed, there is a 100% performance penalty caused by being forced to invoke both utilities. Even now: chown :group to only change the group is not portable or standard. chown user: to assign the primary group of the user in the user database is not standard either.
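To see the split in practice, here is a small sketch (the file, user and group names are made up, and ownership changes generally need root):
chgrp staff report.txt         # change only the group
chown alice report.txt         # change only the owner
chown alice:staff report.txt   # change both at once (the standardised syntax)
chown alice.staff report.txt   # the old 4.3BSD-style separator, still accepted by some implementations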
{ "source": [ "https://unix.stackexchange.com/questions/136987", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/69099/" ] }
137,030
I have output from VBoxManage list vms which looks like this: "arch" {de1a1db2-86c5-43e7-a8de-a0031835f7a7} "arch2" {92d8513c-f13e-41b5-97e2-2a6b17d47b67} I need to grab the names arch and arch2 and save them into a variable.
Using grep + sed This will extract those two names: $ grep -o '".*"' somefile | sed 's/"//g' arch arch2 The above looks for a string matching the pattern ".*" . That will match anything that occurs within double quotes. So grep will return these types of values: "arch" "arch2" The pipe to sed then strips off the double quotes, giving you the strings you're looking for. The notation sed 's/"//g' instructs sed to do a search and replace on all occurrences of double quotes, substituting them with nothing. The general form is s/find/replace/g, and the trailing g tells sed to do the substitution globally, on every match in the line it's given. Using just sed You can also use sed to chop off the opening double quote, keep what's in between the quotes, and chop off the closing quote plus everything after it: $ sed 's/^"\(.*\)".*/\1/' somefile arch arch2 Other methods $ grep -o '".*"' somefile | tr -d '"' arch arch2 The command tr can be used to delete characters. In this case it's deleting the double quotes. $ grep -oP '(?<=").*(?=")' somefile arch arch2 Using grep's PCRE feature you can match any substring that is preceded by a double quote and followed by one, and report just the text in between.
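Since the question asks to store the names in a variable, you can wrap any of these in a command substitution; here is one sketch in bash, splitting on the double quotes with awk (mapfile needs bash 4 or later):
names=$(VBoxManage list vms | awk -F'"' '{print $2}')
printf '%s\n' "$names"       # arch and arch2, one per line

# or as a proper bash array:
mapfile -t vms < <(VBoxManage list vms | awk -F'"' '{print $2}')
printf '%s\n' "${vms[0]}"    # arch
printf '%s\n' "${vms[1]}"    # arch2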
{ "source": [ "https://unix.stackexchange.com/questions/137030", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40919/" ] }
137,175
I want to create a bash script that must be executed with sudo but should take into account the name of the non-sudo user who executed it. So if user bob runs sudo ./myscript.sh I would like myscript.sh to know bob was the one who executed it. Let's look inside myscript.sh : USER=$(whoami) # Do something that takes into account the username. How can I know the name of the user who spawned the process? More specifically, what should I use instead of whoami to get bob and not root ?
I'm not sure how standard it is, but at least in Ubuntu systems sudo sets the following environment variables (among others - see the ENVIRONMENT section of the sudo manpage): SUDO_UID Set to the user ID of the user who invoked sudo SUDO_USER Set to the login of the user who invoked sudo for example, steeldriver@lap-t61p:~$ sudo sh -c 'whoami' root steeldriver@lap-t61p:~$ sudo sh -c 'echo $SUDO_USER' steeldriver
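Inside a script like the one in the question, that might look as follows (a sketch; the fallback covers running the script without sudo):
#!/bin/bash
# the user who invoked sudo, or the current user if sudo was not used
real_user="${SUDO_USER:-$(whoami)}"
echo "Running as:  $(whoami)"     # root when run under sudo
echo "Invoked by:  $real_user"    # e.g. bob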
{ "source": [ "https://unix.stackexchange.com/questions/137175", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16321/" ] }
137,183
I was trying new dev environments, including zsh and oh-my-zsh. Now that I have installed oh-my-zsh, my terminals (iTerm2 and Terminal) always start with zsh and with the settings from oh-my-zsh. I was wondering if it is possible to "disable" or stop using zsh and its oh-my-zsh setup without having to uninstall oh-my-zsh? It would also be nice to know how to turn them back on. Currently, my terminals go into zsh automatically (I think) and use oh-my-zsh automatically. I want to have more control over that and be able to control both when zsh is being used and when the oh-my-zsh features are being used. One thing I am also interested in knowing is how the terminal applications know which shell to start running on start up. That would be nice to be able to control too! If you explain as much as you can of the "why" of every command you give me, that would be useful! :) I am on OS X. Not sure if that matters, but I tend to prefer answers that apply to general Unix environments rather than just my own.
The wording of your question is ambiguous, so I can't tell if you mean you want to stop using zsh or you want to stop using oh-my-zsh. I will cover both. Disabling zsh Simply run chsh and select whatever shell you were using before. If you don't know what shell you were using before, it is almost certainly bash. This command changes the "login shell" that is associated with your user. Essentially, it changes your default shell. You will need to open a new terminal window for changes to take effect. If this does not work, you will need to log out and log back in again to reinitialize your environment. Disabling only oh-my-zsh Check if ~/.zshrc.pre-oh-my-zsh exists. It probably does. (This file will have been created when the oh-my-zsh installation script moved your previous .zshrc out of the way. .zshrc is a startup file of zsh, similar to .bashrc for bash.) If it does, do mv ~/.zshrc ~/.zshrc.oh-my-zsh . This will put the oh-my-zsh-created .zshrc out of the way, so we can restore the original by doing mv ~/.zshrc.pre-oh-my-zsh ~/.zshrc . If it does not exist, open ~/.zshrc in a text editor. Find the line that says source $ZSH/oh-my-zsh.sh and either comment it out or remove it. This will disable the initialization of oh-my-zsh. You will need to restart your shell for changes to take effect.
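For example, if the shell you used before was bash, switching back might look like this (a sketch; pick any shell listed in /etc/shells):
cat /etc/shells          # shells you are allowed to choose from
chsh -s /bin/bash        # set bash as your login shell (you may be asked for your password)
# open a new terminal window (or log out and back in) for it to take effect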
{ "source": [ "https://unix.stackexchange.com/questions/137183", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55620/" ] }
137,266
I have a home server which runs an up-to-date Debian 7.5 (wheezy) installation. I just discovered that the server has its internal clock set about 3 minutes in the future. I knew that I could use NTP to synchronize Debian (and the motherboard's internal clock), so I installed NTP by following the steps described in the French Debian wiki (the English page is less detailed). I used the following command to sync the internal clock: ntpdate -B -q 192.168.0.254 The clock was successfully adjusted. But this is a temporary solution, so I installed the NTP daemon and added a local server in the /etc/ntp.conf file: # pool.ntp.org maps to about 1000 low-stratum NTP servers. Your server will # pick a different set every time it starts up. Please consider joining the # pool: <http://www.pool.ntp.org/join.html> # added server 192.168.0.254 server 0.debian.pool.ntp.org iburst server 1.debian.pool.ntp.org iburst server 2.debian.pool.ntp.org iburst server 3.debian.pool.ntp.org iburst Is it the right solution? In fact I was surprised to find that the ntp daemon wasn't already installed. I'm wondering if the default installation of Debian installs a daemon to keep the internal clock synchronized. Are all Debian installations drifting until their admins install ntpd? Please tell me that the ntp daemon won't be useless because Debian has a built-in synchronization mechanism.
Debian expects you to install ntp yourself if you want your clock synchronized. Pretty much all you should have to do is apt-get install ntp . The default install, without any tasks, is fairly minimal. I believe the GNOME desktop task, at least, will install it by default (as well as many other packages). Not sure if the other desktops will as well. There isn't any other time synchronization method installed & running by default.
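So the minimal recipe on a Debian system looks something like this (a sketch; ntpq ships with the ntp package):
sudo apt-get install ntp   # installs and starts the ntpd daemon
ntpq -p                    # after a few minutes, the peer marked with * is the selected time source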
{ "source": [ "https://unix.stackexchange.com/questions/137266", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50687/" ] }
137,270
I often use the "locate" command on CentOS to find files. What's the alternative for this command on Debian?
I recommend locate itself: it is available on Debian too, you just have to install it. sudo apt-get install locate
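Note that locate works from a prebuilt database rather than scanning the disk, so after installing you may need to build that database once by hand (a daily cron job keeps it updated afterwards); for example:
sudo apt-get install locate
sudo updatedb              # build the file name database
locate ntp.conf            # searches are then nearly instant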
{ "source": [ "https://unix.stackexchange.com/questions/137270", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72585/" ] }
137,320
I have recently installed Arch Linux and found that I am eating away at a lot of storage relatively quickly. For whatever reason I have already used 17GB in just about 2 weeks. I do not have a great deal of software installed, so I am led to believe that all of the old packages are kept somewhere. To support this, I have noticed that if I install a package, remove that package, and then re-install it, pacman merely unpacks and re-installs the software without having to re-download it. After I installed my base system, before extra software, I had used about 2GB or so. I have since only installed Matlab, Skype, Wine, and a few other small programs. Of course I have also installed missing libraries and the like, but not nearly 15GB worth. Am I completely wrong here, or does Arch never delete old packages when downloading/upgrading to new versions? If so, how do I delete these unused packages? Also, when I remove installed packages I use pacman -R ...
No, pacman doesn't remove old packages from your cache ( /var/cache/pacman/pkg ) so, over time, it can fill up. You can adopt two approaches to clearing the cache: the brute force one with pacman -Sc : -c, --clean Remove packages that are no longer installed from the cache as well as currently unused sync databases to free up disk space. When pacman downloads packages, it saves them in a cache directory. In addition, databases are saved for every sync DB you download from, and are not deleted even if they are removed from the configuration file pacman.conf(5). Use one --clean switch to only remove packages that are no longer installed; use two to remove all files from the cache. In both cases, you will have a yes or no option to remove packages and/or unused downloaded databases. Or, for a more nuanced approach, you can use one of the utilities that ships with pacman-contrib, paccache : paccache is a flexible pacman cache cleaning utility, which has numerous options to help control how much, and what, is deleted from any directory containing pacman package tarballs. By default, paccache -r will remove all but the last three versions of an installed package, but you can change this number with the -k, --keep switch. There is also a -d, --dryrun switch to preview your changes. You can also use the -m, --move <dir> option to move the packages to a separate directory of your choice. See paccache -h or paccache --help for all the switches. There are a number of utilities in the pacman-contrib package to assist with package management; it is worth looking through them all and gaining an understanding of how they work, as they can make running Arch much easier. You can see the full list with: pacman -Ql pacman-contrib | awk -F"[/ ]" '/\/usr\/bin/ {print $NF}'
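As an illustration (the numbers are arbitrary; prefix with sudo if your user cannot write to /var/cache/pacman/pkg):
paccache -d        # dry run: show what would be removed, keeping the last 3 versions
paccache -r        # remove all but the last 3 versions of each package
paccache -rk1      # keep only the most recent version instead
paccache -ruk0     # drop every cached version of packages that are no longer installed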
{ "source": [ "https://unix.stackexchange.com/questions/137320", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72608/" ] }
137,482
What's the easiest way to resize an ext4 partition (or any type of partition, depending on the method) from the command line (potentially with the fewest commands, but also the easiest to understand)? Using a tool like GParted is obviously easy in a GUI, but what about in the command line? I guess text-based GUIs can count for the answer too since they're technically still in the command line. It just needs to be easy. By partition I mean a simple partition on a single disk of a personal computer (e.g. on a laptop). For example, I want to resize /dev/sda4. There are no RAIDs, there's not more than one disk drive, there's nothing complicated here. Just a simple partition on a single disk (/dev/sdaX on /dev/sda).
You can use fdisk to change your partition table while running.  Refer to Live resizing of an ext4 filesytem on Linux (on The silence of the code blog): Disclaimer: The following instructions can easily screw your data if you make a mistake.  I was doing this on a VM which I backed up before performing the following actions.  If you lose your data because you didn’t perform a backup don’t come and complain. ... First: Increase the disk size. In ESXi this is simple, just increase the size of the virtual disk. Now you have a bigger hard drive but you still need to a) increase the partition size and b) resize the filesystem. Second: Increase the partition size. You can use fdisk to change the partition table while running.  The stock Ubuntu install has created 3 partitions: one primary (sda1), one extended (sda2) with a single logical partition (sda5) in it. The extended partition is simply used for swap, so I could easily move it without losing any data. Delete the primary partition Delete the extended partition Create a new primary partition starting at the same sector as the original one just with a bigger size (leave some for swap) Create a new extended partition with a logical partition in it to hold the swap space me@ubuntu:~$ sudo fdisk /dev/sda Command (m for help): p Disk /dev/sda: 268.4 GB, 268435456000 bytes 255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000e49fa Device Boot Start End Blocks Id System /dev/sda1 * 2048 192940031 96468992 83 Linux /dev/sda2 192942078 209713151 8385537 5 Extended /dev/sda5 192942080 209713151 8385536 82 Linux swap / Solaris Command (m for help): d Partition number (1-5): 1 Command (m for help): d Partition number (1-5): 2 Command (m for help): n Partition type: p primary (0 primary, 0 extended, 4 free) e extended Select (default p): p Partition number (1-4, default 1): Using default value 1 First sector (2048-524287999, default 2048): Using default value 2048 Last sector, +sectors or +size{K,M,G} (2048-524287999, default 524287999): 507516925 Command (m for help): p Disk /dev/sda: 268.4 GB, 268435456000 bytes 255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000e49fa Device Boot Start End Blocks Id System /dev/sda1 2048 507516925 253757439 83 Linux Command (m for help): n Partition type: p primary (1 primary, 0 extended, 3 free) e extended Select (default p): e Partition number (1-4, default 2): 2 First sector (507516926-524287999, default 507516926): Using default value 507516926 Last sector, +sectors or +size{K,M,G} (507516926-524287999, default 524287999): Using default value 524287999 Command (m for help): p Disk /dev/sda: 268.4 GB, 268435456000 bytes 255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000e49fa Device Boot Start End Blocks Id System /dev/sda1 2048 507516925 253757439 83 Linux /dev/sda2 507516926 524287999 8385537 5 Extended Command (m for help): n Partition type: p primary (1 primary, 1 extended, 2 free) l logical (numbered from 5) Select (default p): l Adding logical partition 5 First sector 
(507518974-524287999, default 507518974): Using default value 507518974 Last sector, +sectors or +size{K,M,G} (507518974-524287999, default 524287999): Using default value 524287999 Command (m for help): p Disk /dev/sda: 268.4 GB, 268435456000 bytes 255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000e49fa Device Boot Start End Blocks Id System /dev/sda1 2048 507516925 253757439 83 Linux /dev/sda2 507516926 524287999 8385537 5 Extended /dev/sda5 507518974 524287999 8384513 83 Linux Command (m for help): t Partition number (1-5): 5 Hex code (type L to list codes): 82 Changed system type of partition 5 to 82 (Linux swap / Solaris) Command (m for help): p Disk /dev/sda: 268.4 GB, 268435456000 bytes 255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000e49fa Device Boot Start End Blocks Id System /dev/sda1 2048 507516925 253757439 83 Linux /dev/sda2 507516926 524287999 8385537 5 Extended /dev/sda5 507518974 524287999 8384513 82 Linux swap / Solaris Command (m for help): w The partition table has been altered! Calling ioctl() to re-read partition table. WARNING: Re-reading the partition table failed with error 16: Device or resource busy. The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8) Syncing disks. me@ubuntu:~$ sudo reboot I noticed afterwards that I didn’t set the bootable flag but apparently you don’t really need it . Third: Enlargen the filesystem. You can do this with resize2fs online on a mounted partition. me@ubuntu:~$ df -h Filesystem Size Used Avail Use% Mounted on /dev/sda1 91G 86G 12M 100% / udev 3.9G 4.0K 3.9G 1% /dev tmpfs 1.6G 696K 1.6G 1% /run none 5.0M 0 5.0M 0% /run/lock none 3.9G 144K 3.9G 1% /run/shm none 100M 16K 100M 1% /run/user me@ubuntu:~$ sudo resize2fs /dev/sda1 resize2fs 1.42.5 (29-Jul-2012) Filesystem at /dev/sda1 is mounted on /; on-line resizing required old_desc_blocks = 6, new_desc_blocks = 16 The filesystem on /dev/sda1 is now 63439359 blocks long. me@ubuntu:~$ df -h Filesystem Size Used Avail Use% Mounted on /dev/sda1 239G 86G 142G 38% / udev 3.9G 12K 3.9G 1% /dev tmpfs 1.6G 696K 1.6G 1% /run none 5.0M 0 5.0M 0% /run/lock none 3.9G 152K 3.9G 1% /run/shm none 100M 36K 100M 1% /run/user Slight catch: After rebooting the swap space wasn’t active. Turned out you need to run mkswap , adjust /etc/fstab to the new UUID and turn the swap on me@ubuntu:~$ sudo mkswap /dev/sda5 Setting up swapspace version 1, size = 8384508 KiB no label, UUID=141d401a-b49d-4a96-9b85-c130cb0de40a me@ubuntu:~$ sudo swapon --all --verbose swapon on /dev/sda5 swapon: /dev/sda5: found swap signature: version 1, page-size 4, same byte order swapon: /dev/sda5: pagesize=4096, swapsize=8585740288, devsize=8585741312 Edit /etc/fstab to replace the UUID for the old swap partition with the new one from mkswap .
{ "source": [ "https://unix.stackexchange.com/questions/137482", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5536/" ] }
137,566
I am trying to use arrays in the Bourne shell ( /bin/sh ). I found that the way to initialize array elements is: arr=(1 2 3) But I am getting an error: syntax error at line 8: `arr=' unexpected Now the post where I found this syntax says it is for bash, but I could not find any separate syntax for the Bourne shell. Is the syntax the same for /bin/sh as well?
/bin/sh is hardly ever a Bourne shell on any systems nowadays (even Solaris, which was one of the last major systems to include it, has now switched to a POSIX sh for its /bin/sh in Solaris 11). /bin/sh was the Thompson shell in the early 70s. The Bourne shell replaced it in Unix V7 in 1979. /bin/sh remained the Bourne shell for many years thereafter (or the Almquist shell, a free reimplementation, on BSDs). Nowadays, /bin/sh is more commonly one interpreter or another for the POSIX sh language, which is itself based on a subset of the language of ksh88 (and a superset of the Bourne shell language with some incompatibilities). The Bourne shell and the POSIX sh language specification don't support arrays. Or rather, they have only one array: the positional parameters ( $1 , $2 , $@ , so one array per function as well). ksh88 did have arrays, which you set with set -A , but that didn't get specified in POSIX sh as the syntax is awkward and not very usable. Other shells with array/list variables include: csh / tcsh , rc , es , bash (which mostly copied the ksh syntax the ksh93 way), yash , zsh , fish , each with a different syntax ( rc , the shell of the once to-be successor of Unix; fish and zsh being the most consistent ones)... In standard sh (which also works in modern versions of the Bourne shell):
set '1st element' 2 3   # setting the array
set -- "$@" more        # adding elements to the end of the array
shift 2                 # removing elements (here 2) from the beginning of the array
printf '<%s>\n' "$@"    # passing all the elements of the $@ array
                        # as arguments to a command
for i do                # looping over the elements of the $@ array ($1, $2...)
  printf 'Looping over "%s"\n' "$i"
done
printf '%s\n' "$1"      # accessing individual element of the array.
                        # up to the 9th only with the Bourne shell though
                        # (only the Bourne shell), and note that you need
                        # the braces (as in "${10}") past the 9th in other
                        # shells (except zsh, when not in sh emulation, and
                        # most ash-based shells)
printf '%s\n' "$# elements in the array"
printf '%s\n' "$*"      # join the elements of the array with the
                        # first character (byte in some implementations)
                        # of $IFS (not in the Bourne shell where it's on
                        # space instead regardless of the value of $IFS)
(note that in the Bourne shell and ksh88, $IFS must contain the space character for "$@" to work properly (a bug), and in the Bourne shell, you can't access elements above $9 ( ${10} won't work, but you can still do shift 1; echo "$9" or loop over them)).
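For comparison, here is a rough sketch of the ksh88-style array syntax mentioned above; it must be run under a shell that actually has arrays (ksh here), not under a plain POSIX sh:
set -A arr '1st element' 2 3    # ksh88 way of defining an array
echo "${arr[0]}"                # first element (indices start at 0 in ksh)
echo "${#arr[@]}"               # number of elements
for i in "${arr[@]}"; do        # loop over all elements
  echo "$i"
done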
{ "source": [ "https://unix.stackexchange.com/questions/137566", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72759/" ] }
137,651
What is the difference between Arch Linux and Gentoo Linux? Their ideologies seem quite similar to me.
Yes, the distros are similar: both are aimed at more experienced users, and both aim to be fast and highly customizable. The most technical similarity is that both are based upon the Linux kernel. While many things may seem alike, the two are different in several ways. Gentoo documentation is said to be rather intimidating to new users, while Arch documentation is very much in line with the KISS (Keep it simple, stupid) motto. The package managers are also different. Arch Linux uses pacman (or in some spins, such as Antergos, Pacman XG), which installs precompiled packages, while Gentoo uses the Portage manager, which builds packages from source code. With the difference in package managers, one distribution may have fewer packages ready to install than the other. I would say that Arch has a larger selection of binary packages compared to Gentoo, while Gentoo allows fine-grained control of specific package features via USE flags. However, most packages are available as source code, so you can fairly easily build them to suit whatever package manager you may be using. (If you are interested, Gentoo's Portage manager has many good features not available in a freshly-installed pacman.) Popularity is another difference. While you may be interested in being original, the adoption of your OS can make a big difference in your Linux experience, primarily in how much software is available out of the box and how many tutorials you can look at in times of need. According to DistroWatch, Arch Linux is 8th in overall popularity, while Gentoo is at 47th. While popularity may help, it alone may not make it easy to choose a distro. I haven't personally tried Gentoo; it could be an amazingly functional and simple OS, while Arch has risen much further with its head start. I could list many more differences, but aside from the above (and maybe other) differences, the distributions are quite similar. If you would like a good resource to make comparisons with, I recommend distrowatch.com, if you haven't looked at it already.
{ "source": [ "https://unix.stackexchange.com/questions/137651", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72827/" ] }
137,703
I'm relatively new to programming as a whole, and some tutorials have been telling me to use ls -l to look at files in a directory while others have been saying ll . I know that ls by itself gives a short listing, but is there a difference between the other two?
On many systems, ll is an alias of ls -l : $ type ll ll is aliased to `ls -l' They are the same.
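If ll happens not to be defined on your system, you can create the alias yourself; in bash it would typically go in ~/.bashrc (many distributions ship something similar, often with extra flags):
alias ll='ls -l'      # or e.g. alias ll='ls -alF' as shipped on Ubuntu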
{ "source": [ "https://unix.stackexchange.com/questions/137703", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72852/" ] }
137,712
After logging into a server, I used netstat to check out the ports on this server and wanted to find which port was communicating with me. My IP is 143.248.143.198 and my results are below: [kwagjj@James5 ~]$ netstat | grep 143.248.143.198 tcp 0 52 James5:smakynet 143.248.143.198:49690 ESTABLISHED [kwagjj@James5 ~]$ netstat | smakynet smakynet: Command not found. [kwagjj@James5 ~]$ netstat | grep smakynet tcp 0 0 James5:smakynet 143.248.143.199:49573 ESTABLISHED tcp 0 0 James5:smakynet 143.248.143.198:49690 ESTABLISHED tcp 0 0 James5:smakynet 143.248.143.212:51070 ESTABLISHED tcp 0 0 James5:smakynet 143.248.143.210:9693 ESTABLISHED tcp 0 0 James5:smakynet 143.248.143.217:azeti ESTABLISHED tcp 0 0 James5:smakynet 143.248.143.216:51892 ESTABLISHED tcp 0 0 James5:smakynet 143.248.143.210:10599 ESTABLISHED I tried to see if James5:smakynet led to some other port, but it looks like my side is only communicating with 'James5:smakynet'. Does anyone know what this 'smakynet' is? What does it do? I googled it but it didn't give me any proper info.
{ "source": [ "https://unix.stackexchange.com/questions/137712", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72825/" ] }
137,759
I know that, nohup being a binary, it can be reached from any shell, while the exec built-in probably exists in every shell. Is there a reason to prefer one of them to the other?
What's better, a fish or a bicycle? nohup and exec do different things. exec replaces the shell with another program. Using exec in a simple background job isn't useful: exec myprogram; more stuff replaces the shell with myprogram and so doesn't run more stuff , unlike myprogram; more stuff which runs more stuff when myprogram terminates; but exec myprogram & more stuff starts myprogram in the background and then runs more stuff , just like myprogram & more stuff . nohup runs the specificed program with the SIGHUP signal ignored. When a terminal is closed, the kernel sends SIGHUP to the controlling process in that terminal (i.e. the shell). The shell in turn sends SIGHUP to all the jobs running in the background. Running a job with nohup prevents it from being killed in this way if the terminal dies (which happens e.g. if you were logged in remotely and the connection drops, or if you close your terminal emulator). nohup also redirects the program's output to the file nohup.out . This avoids the program dying because it isn't able to write to its output or error output. Note that nohup doesn't redirect the input. To fully disconnect a program from the terminal where you launched it, use nohup myprogram </dev/null >myprogram.log 2>&1 &
{ "source": [ "https://unix.stackexchange.com/questions/137759", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57494/" ] }