Dataset schema: source_id (int64, values 1 – 4.64M), question (string, length 0 – 28.4k), response (string, length 0 – 28.8k), metadata (dict)
137,842
Ctrl + S stops all output to the terminal, which can be restarted with Ctrl + Q . But why does Ctrl + S exist in the first place? What problem was being solved by putting that control sequence in place?
Long before there were computers, there were teleprinters (a.k.a. teletypewriters, a.k.a. teletypes). Think of them as roughly the same technology as a telegraph, but with some type of keyboard and some type of printer attached to them. Because teletypes already existed when computers were first being built, and because computers at the time were room-sized, teletypes became a convenient user interface to the first computers – type in a command, hit the send button, wait for a while, and the output of the command is printed to a sheet of paper in front of you. Software flow control originated around this era – if the printer couldn't print as fast as the teletype was receiving data, for instance, the teletype could send an XOFF flow control command ( Ctrl + S ) to the remote side saying "Stop transmitting for now", and then could send the XON flow control command ( Ctrl + Q ) to the remote side saying "I've caught up, please continue". And this usage survives in Unix because modern terminal emulators are emulating physical terminals (like the vt100 ) which themselves were (in some ways) emulating teletypes.
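As a present-day aside that follows directly from this history, software flow control is a terminal setting ( ixon ) that you can inspect and toggle yourself; a minimal sketch, affecting only the current terminal:
    stty -a | tr ' ' '\n' | grep ixon    # prints "ixon" if flow control is enabled, "-ixon" if it is off
    stty -ixon                           # turn it off, so Ctrl+S no longer freezes output
    stty ixon                            # turn it back on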
{ "source": [ "https://unix.stackexchange.com/questions/137842", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18636/" ] }
137,862
For example, this is the first line of my /etc/fstab : UUID=050e1e34-39e6-4072-a03e-ae0bf90ba13a / ext4 errors=remount-ro 0 1 And here's the output of df -h command (reporting free disk space): honey@bunny:~$ df -T Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/vda ext4 30832636 4884200 24359188 17% / none tmpfs 4 0 4 0% /sys/fs/cgroup udev devtmpfs 498172 12 498160 1% /dev tmpfs tmpfs 101796 320 101476 1% /run none tmpfs 5120 0 5120 0% /run/lock none tmpfs 508972 0 508972 0% /run/shm none tmpfs 102400 0 102400 0% /run/user From the two is it okay to deduce that UUID=050e1e34-39e6-4072-a03e-ae0bf90ba13a represents /dev/vda given that the first column in fstab is <file system> ? So, would it be okay if I modified /etc/fstab to this? /dev/vda / ext4 errors=remount-ro 0 1 EDIT: If yes (to above question), why does the sudo blkid command show a different UUID for /dev/vda ? $ sudo blkid /dev/vda: LABEL="DOROOT" UUID="6f469437-4935-44c5-8ac6-53eb54a9af26" TYPE="ext4" What am I missing here? Answer: I'd conclude (3) to be a bug in the cloud of my host. So yes, the UUID reported by blkid (or ls -l /dev/disk/by-uuid ) should be the same as the one used in /etc/fstab .
The advantage of using the UUID is that it is independent from the actual device number the operating system gives your hard disk. Imagine you add another hard disk to the system, and for some reason the OS decides that your old disk is now sdb instead of sda . Your boot process would be screwed up if fstab points to the device name. But in case of the UUIDs, it is fine. More detailed information about UUIDs can also be found on the blog post "UUIDs and Linux: Everything you ever need to know"
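A quick way to see the mapping between device names and UUIDs on your own system (a sketch; the device name is the one from the question, and the UUID below is a placeholder to be replaced with whatever blkid prints):
    ls -l /dev/disk/by-uuid/    # symlinks from each UUID to its device node
    sudo blkid /dev/vda         # the UUID that blkid sees for that device
The matching fstab entry then looks like:
    UUID=<uuid-reported-by-blkid>  /  ext4  errors=remount-ro  0  1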
{ "source": [ "https://unix.stackexchange.com/questions/137862", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9610/" ] }
137,894
I have the 03:00.0 Network controller: Intel Corporation Centrino Wireless-N 2200 (rev c4) How do I find out if that card/driver support 5 GHz?
Find out the interface name, by running iwconfig $ iwconfig eth0 no wireless extensions. lo no wireless extensions. wlan0 IEEE 802.11bgn ESSID:"EvanCarroll" Mode:Managed Frequency:2.437 GHz Access Point: D8:50:E6:44:B2:C8 Bit Rate=19.5 Mb/s Tx-Power=15 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=61/70 Signal level=-49 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:1 Invalid misc:80 Missed beacon:0 In this case it is wlan0 , then run iwlist <interface> freq , $ iwlist wlan0 freq wlan0 13 channels in total; available frequencies : Channel 01 : 2.412 GHz Channel 02 : 2.417 GHz Channel 03 : 2.422 GHz Channel 04 : 2.427 GHz Channel 05 : 2.432 GHz Channel 06 : 2.437 GHz Channel 07 : 2.442 GHz Channel 08 : 2.447 GHz Channel 09 : 2.452 GHz Channel 10 : 2.457 GHz Channel 11 : 2.462 GHz Channel 12 : 2.467 GHz Channel 13 : 2.472 GHz Current Frequency:2.437 GHz (Channel 6) None of these channels are outside of 2.4 GHz. It does not support 5 GHz.
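If the iw tool is installed (it usually is on modern distributions with nl80211 drivers), the same check can be made without iwlist; this is a sketch, not specific to the Centrino 2200:
    iw list | grep -A 15 'Frequencies:'
Frequencies in the 5180–5825 MHz range mean the card and driver support 5 GHz; if everything listed is around 2412–2472 MHz, it is 2.4 GHz only.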
{ "source": [ "https://unix.stackexchange.com/questions/137894", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3285/" ] }
137,905
(I'm on Arch Linux, using i3 as my wm and xterm as my terminal emulator, though I don't know if any of that is relevant.) Occasionally, a website asks me to drag a file with my mouse from my desktop into the internet browser's window. Almost always, there is an alternative, but recently I found something I want to do that requires the drag and drop. Unfortunately, I don't have a file manager. I navigate my computer's file system solely through bash. Is there a way I can fake the drag and drop action? Can I tell my browser "I just dropped this file onto you" without actually doing it? Worst case scenario, I can download a graphical file manager exclusively to drop files into my web browser, but I'd like to avoid that solution.
I had exactly the same problem a few months back and ultimately just wrote a tool to do it for me. When I saw this and found someone else had the same itch I cleaned it up so that someone other than me could actually get it running, and finished off my to-do list. The code is up now: https://github.com/mwh/dragon To get it, run git clone https://github.com/mwh/dragon.git cd dragon make That will give you a standalone dragon executable - you can move it wherever you want. make install will put it in $HOME/.local/bin . Either way, you can then: dragon *.jpg to get a simple window with draggable buttons for each of those files: You can drag any of those into a browser, a file manager, an editor, or anywhere else that speaks the standard drag-and-drop protocol. If you want to go the other way, and drag things in to it, use --target — they'll be printed to standard output, or available to drag out again with if you use --keep as well. To build you'll need a C compiler and the GTK+ 3 development headers - if you're on Arch you'll get those just by installing GTK+, but on other distributions you may have to apt-get install build-essentials libgtk3-dev or yum install gtk3-devel or similar first. Other than that it's entirely self-contained, with no constituent libraries or anything, and you can just put the executable where you want. My use case is mostly one-off drags of only a few files (usually just one), without particularly caring how they show up, so if that doesn't line up with what you want then Dragbox (which I didn't see until recently) might still be better for you. Just yesterday I added the support for using it as a drag target as well, so that part hasn't had much use on my end. Other than that, though, I've been using this successfully for a while now. There are other modes and options described in the readme file.
{ "source": [ "https://unix.stackexchange.com/questions/137905", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68662/" ] }
138,090
I recently resized the hard drive of a VM from 150 GB to 500 GB in VMWare ESXi. After doing this, I used Gparted to effectively resize the partition of this image. Now all I have to do is to resize the file system, since it still shows the old value (as you can see from the output of df -h ): Filesystem Size Used Avail Use% Mounted on /dev/mapper/owncloud--vg-root 157G 37G 112G 25% / udev 488M 4.0K 488M 1% /dev tmpfs 100M 240K 100M 1% /run none 5.0M 0 5.0M 0% /run/lock none 497M 0 497M 0% /run/shm /dev/sda1 236M 32M 192M 14% /boot However, running sudo resize2fs /dev/mapper/owncloud--vg-root returns this: resize2fs 1.42 (29-Nov-2011) The filesystem is already 41608192 blocks long. Nothing to do! Since Gparted says that my partition is /dev/sda5 , I also tried running sudo resize2fs /dev/sda5 , but in this case I got this: resize2fs 1.42 (29-Nov-2011) resize2fs: Device or resource busy while trying to open /dev/sda5 Couldn't find valid filesystem superblock. Finally, this is the output of pvs : PV VG Fmt Attr PSize PFree /dev/sda5 owncloud-vg lvm2 a- 499.76g 340.04g fdisk -l /dev/sda shows the correct amount of space. How can I resize the partition so that I can finally make the OS see 500 GB of hard drive?
If you only changed the partition size, you're not ready to resize the logical volume yet. Once the partition is the new size, you need to do a pvresize on the PV so the volume group sees the new space. After that you can use lvextend to expand the logical volume into the volume group's new space. You can pass -r to the lvextend command so that it automatically kicks off the resize2fs for you. Personally, I would have just made a new partition and used vgextend on it since I've had mixed results with pvresize .
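Put together for the names in the question, the sequence would be roughly the following (a sketch, not verified against your system; -r on lvextend runs resize2fs for you):
    sudo pvresize /dev/sda5                               # let the PV cover the grown partition
    sudo lvextend -r -l +100%FREE /dev/owncloud-vg/root   # grow the LV and the filesystem
    df -h /                                               # confirm the new size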
{ "source": [ "https://unix.stackexchange.com/questions/138090", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43872/" ] }
138,095
Given: myvar="present-value: 1" , I'd expect expr match "$myvar" '\([0-9]\)' to output 1 . However, instead it outputs blank and exits with a non-zero status code indicating no match. How can I get it to match?
If you only changed the partition size, you're not ready to resize the logical volume yet. Once the partition is the new size, you need to do a pvresize on the PV so the volume group sees the new space. After that you can use lvextend to expand the logical volume into the volume group's new space. You can pass -r to the lvextend command so that it automatically kicks off the resize2fs for you. Personally, I would have just made a new partition and used vgextend on it since I've had mixed results with pvresize .
{ "source": [ "https://unix.stackexchange.com/questions/138095", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65034/" ] }
138,188
I'm attempting to install Intel's OpenCL SDK but the DEB files are buggy conversions from RPM (see here for the curious). I need to edit the postinst script in the DEB they provide. How can I take an existing DEB, extract the contents (including the control information), then later repackage the contents to make a new DEB? I will only edit files, no files will be added or removed.
The primary command to manipulate deb packages is dpkg-deb . To unpack the package, create an empty directory and switch to it, then run dpkg-deb to extract its control information and the package files. Use dpkg-deb -b to rebuild the package. mkdir tmp dpkg-deb -R original.deb tmp # edit DEBIAN/postinst dpkg-deb -b tmp fixed.deb Beware that unless your script is running as root, the files' permissions and ownership will be corrupted at the extraction stage. One way to avoid this is to run your script under fakeroot . Note that you need to run the whole sequence under fakeroot , not each dpkg-deb individually, since it's the fakeroot process that keeps the memory of the permissions of the files that can't be created as they are. fakeroot sh -c ' mkdir tmp dpkg-deb -R original.deb tmp # edit DEBIAN/postinst dpkg-deb -b tmp fixed.deb ' Rather than mess with permissions, you can keep the data archive intact and modify only the control archive. dpkg-deb doesn't provide a way to do that. Fortunately, deb packages are in a standard format: they're ar archives. So you can use ar to extract the control archive, modify its files, and use ar again to replace the control archive by a new version. mkdir tmp cd tmp ar p ../original.deb control.tar.gz | tar -xz # edit postinst cp ../original.deb ../fixed.deb tar czf control.tar.gz *[!z] ar r ../fixed.deb control.tar.gz You should add a changelog entry and change the version number if you modify anything in the package. The infrastructure to manipulate Debian packages assumes that if two packages have the same name and version, they're the same package. Add a suffix to the debian_revision part at the end of the version number; for sorting reasons the suffix should start with ~ , e.g. 1.2.3-4.1 becomes 1.2.3-4.1~johnjumper1 . Instead of using shell tools, you can use Emacs. The dpkg-dev-el package (which is its own upstream as this is a native Debian package) contains modes to edit .deb files and to edit Debian changelogs. Emacs can be used interactively or scripted.
{ "source": [ "https://unix.stackexchange.com/questions/138188", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73089/" ] }
138,202
I have noticed that | is used to send results of first command to the another. I would like to kill all processes that match a name. This is what pgrep normally does: $ pgrep name 5089 5105 And multiple arguments seem to work with kill : sudo kill 5089 5105 But this is wrong: pgrep name | kill So how to do it properly?
Try this: pgrep name | xargs kill If you use pgrep name | kill , the output of pgrep name is fed to the stdin of kill . Because kill does not read arguments from stdin, this will not work. Using xargs builds the arguments for kill from stdin. Example: $ pgrep bash | xargs echo 5514 22298 23079
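For completeness, two equivalent approaches (assuming the usual procps tools are installed):
    kill $(pgrep name)    # command substitution instead of xargs
    pkill name            # pgrep's companion, which signals the matching processes itself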
{ "source": [ "https://unix.stackexchange.com/questions/138202", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27719/" ] }
138,398
I have a file with 200 lines. I need to extract lines 10 to 100 and put them into a new file. How do you do this in Unix/Linux? What are the possible commands you could use?
Use sed : sed -n -e '10,100p' input.txt > output.txt sed -n means don't print each line by default. -e means execute the next argument as a sed script. 10,100p is a sed script that means starting on line 10, until line 100 (inclusive), print ( p ) that line. Then the output is saved into output.txt . If your file is longer than suggested, this version (suggested in the comments) will be faster: sed -e '1,9d;100q' That means delete lines 1-9, quit after line 100, and print the rest. For 200 lines it's not going to matter, but for 200,000 lines the first version will still look at every line even when it's never going to print them. I prefer the first version in general for being explicit, but with a long file this will be much faster — you know your data best. Alternatively, you can use head and tail in combination: tail -n +10 input.txt | head -n 91 > output.txt This time, tail -n +10 prints out the entire file starting from line 10, and head -n 91 prints the first 91 lines of that (up to and including line 100 of the original file). It's redirected to output.txt in the same way.
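If you prefer awk , an equivalent sketch (NR is the current line number; the exit stops reading after line 100):
    awk 'NR>100{exit} NR>=10' input.txt > output.txt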
{ "source": [ "https://unix.stackexchange.com/questions/138398", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73199/" ] }
138,463
From what I've read, putting a command in parentheses should run it in a subshell, similar to running a script. If this is true, how does it see the variable x if x isn't exported? x=1 Running (echo $x) on the command line results in 1 Running echo $x in a script results in nothing, as expected
A subshell starts out as an almost identical copy of the original shell process. Under the hood, the shell calls the fork system call 1 , which creates a new process whose code and memory are copies 2 . When the subshell is created, there are very few differences between it and its parent. In particular, they have the same variables. Even the $$ special variable keeps the same value in subshells: it's the original shell's process ID. Similarly $PPID is the PID of the parent of the original shell. A few shells change a few variables in the subshell. Bash ≥4.0 sets BASHPID to the PID of the shell process, which changes in subshells. Bash, zsh and mksh arrange for $RANDOM to yield different values in the parent and in the subshell. But apart from built-in special cases like these, all variables have the same value in the subshell as in the original shell, the same export status, the same read-only status, etc. All function definitions, alias definitions, shell options and other settings are inherited as well. A subshell created by (…) has the same file descriptors as its creator. Some other means of creating subshells modify some file descriptors before executing user code; for example, the left-hand side of a pipe runs in a subshell 3 with standard output connected to the pipe. The subshell also starts out with the same current directory, the same signal mask, etc. One of the few exceptions is that subshells do not inherit custom traps: ignored signals ( trap '' SIGNAL ) remain ignored in the subshell, but other traps ( trap CODE SIGNAL ) are reset to the default action 4 . A subshell is thus different from executing a script. A script is a separate program. This separate program might coincidentally be also a script which is executed by the same interpreter as the parent, but this coincidence doesn't give the separate program any special visibility on internal data of the parent. Non-exported variables are internal data, so when the interpreter for the child shell script is executed , it doesn't see these variables. Exported variables, i.e. environment variables, are transmitted to executed programs. Thus: x=1 (echo $x) prints 1 because the subshell is a replication of the shell that spawned it. x=1 sh -c 'echo $x' happens to run a shell as a child process of a shell, but the x in echo $x has no more connection with the x in the x=1 assignment than in x=1 perl -le 'print $x' or x=1 python -c 'print x' 1 Unless the shell optimizes the forking out, but emulates forking as much as necessary to preserve the behavior of the code that it's executing. Ksh93 optimizes a lot, other shells mostly don't. 2 Semantically, they're copies. From an implementation perspective, there's a lot of sharing going on. 3 For the right-hand side, it depends on the shell. 4 If you test this out, note that things like $(trap) may report the traps of the original shell. Note also that many shells have bugs in corner cases involving traps. For example ninjalj notes that as of bash 4.3, bash -x -c 'trap "echo ERR at \$BASH_SUBSHELL \$BASHPID" ERR; set -E; false; echo one subshell; (false); echo two subshells; ( (false) )' runs the ERR trap from the nested subshell in the “two subshells” case, but not the ERR trap from the intermediate subshell — set -E option should propagate the ERR trap to all subshells but the intermediate subshell is optimized away and so isn't there to run its ERR trap.
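A minimal demonstration of the distinction described above (assuming bash; the child shell prints nothing for the variable until it is exported):
    x=1
    (echo "subshell: $x")          # prints 1: the subshell is a copy of the current shell
    bash -c 'echo "child: $x"'     # prints an empty value: a separate program only sees the environment
    export x
    bash -c 'echo "child: $x"'     # prints 1: exported variables are passed to executed programs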
{ "source": [ "https://unix.stackexchange.com/questions/138463", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8624/" ] }
138,472
I suppose tr is a more core method and thus probably a faster way to replace things within a given file. However tr can only replace equal amounts of characters. meaning... 2 characters can only be replaced with 2 characters which means replacing \r\n with \n is out of the question via tr is the next best option sed ? is sed the most core and fastest way to replacing \r\n with \n in a file given the lack of capabilities in tr ? would like an example if possible.
With sed , you can do: sed 's/\r$//' You can do the same with tr ; you only have to remove \r : tr -d '\r' although this will remove all instances of \r , not necessarily ones followed by \n .
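Typical usage on a whole file, with either tool (filenames are placeholders):
    sed 's/\r$//' dosfile.txt > unixfile.txt
    tr -d '\r' < dosfile.txt > unixfile.txt
If it happens to be installed, dos2unix dosfile.txt performs the same conversion in place.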
{ "source": [ "https://unix.stackexchange.com/questions/138472", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73235/" ] }
138,479
I mounted an NFS filesystem from the shell using the code: LINE='nfs.mit.edu:/export/evodesign/beatdb /beatdb nfs tcp,intr,rw 0 0' grep "$LINE" /etc/fstab >/dev/null || echo $LINE >> /etc/fstab mkdir /beatdb mount -a # Remount /etc/fstab Without Reboot in Linux The files show up as nobody:nogroup: Any idea how to fix this issue and display the right owners? I use Ubuntu 12.04. Edit: Client-side (I don't have access to the NFS server): rpcidmapd is running: rpcinfo -p : /etc/idmapd.conf :
With sed , you can do: sed 's/\r$//' The same way can do with tr , you only have to remove \r : tr -d '\r' although this will remove all instances of \r , not necessary followed by \n .
{ "source": [ "https://unix.stackexchange.com/questions/138479", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16704/" ] }
138,532
For each Linux kernel version, there is a patch file available for download. For instance, linux-3.12.22 has a corresponding patch-3.12.22 . What is the purpose of that patch? To always patch the corresponding kernel before compiling it, or to bring a former kernel version up-to-date with the kernel that the patch matches (3.12.22, in this case)?
The purpose is to save lots of traffic. The Linux tarball is around 75MB, whereas the patches usually just have a few KB. So if you compile your own kernel, and update to each new minor version the day it is released, instead of redownloading a new 75MB tarball for each minor update, you just download (for example) the main tarball for a given version once and then the patch for the version you actually want. When there is an update you re-use the already downloaded main tarball. linux-3.14.tar.xz + patch-3.14.{1..n}.xz is below 100MB in total. linux-3.14.tar.xz + linux-3.14.{1..n}.tar.xz is several times 100MB. There is no downside to patching, the final result is identical, unless you do something wrong.
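A sketch of how the patches are meant to be used; the URLs follow the kernel.org layout and the version numbers are just the ones from the question:
    wget https://cdn.kernel.org/pub/linux/kernel/v3.x/linux-3.12.tar.xz
    wget https://cdn.kernel.org/pub/linux/kernel/v3.x/patch-3.12.22.xz
    tar xf linux-3.12.tar.xz
    cd linux-3.12
    xzcat ../patch-3.12.22.xz | patch -p1    # patch-3.12.22 applies on top of the plain 3.12 tarball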
{ "source": [ "https://unix.stackexchange.com/questions/138532", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8785/" ] }
138,730
I have a bash script as below which installs zookeeper but only if not installed already. ##zookeper installZook(){ ZOOK_VERSION="3.4.5" ZOOK_TOOL="zookeeper-${ZOOK_VERSION}" ZOOK_DOWNLOAD_URL="http://www.us.apache.org/dist/zookeeper/${ZOOK_TOOL}/${ZOOK_TOOL}.tar.gz" if [ -e $DEFAULT_INSTALLATION_DEST/${ZOOK_TOOL} ]; then echo "${ZOOK_TOOL} alreay installed"; exit 1; # <<<< here elif [ ! -e $DEFAULT_SOURCE_ROOT/${ZOOK_TOOL}.tar.gz ]; then wgetIt $ZOOK_DOWNLOAD_URL else echo "[info] : $DEFAULT_SOURCE_ROOT/$ZOOK_TOOL already exists" fi sudo mkdir -p /var/lib/zookeeper sudo mkdir -p /var/log/zookeeper tarIt "$DEFAULT_SOURCE_ROOT/$ZOOK_TOOL.tar.gz" sudo chmod 777 -R $DEFAULT_INSTALLATION_DEST/$ZOOK_TOOL cp $DEFAULT_INSTALLATION_DEST/$ZOOK_TOOL/conf/zoo_sample.cfg $DEFAULT_INSTALLATION_DEST/$ZOOK_TOOL/conf/zoo.cfg cat >> ~/.bash_profile <<'EOF' ############################### ########### ZOOK ############### ############################### ZOOK_HOME=/usr/local/zookeper-3.4.5 export ZOOK_HOME export PATH=$PATH:$ZOOK_HOME/bin EOF } At the line marked <<<< here , if zookeeper is already installed, what I want is to exit the script below it. But using exit exits the terminal itself.
TL;DR Use return instead of exit AND run your script with source your-script.sh (a.k.a. . your-script.sh ). Full details If a script contains an exit statement, you have to launch it as a child of your current shell. If you launch it inside the current shell started with your terminal session (using . ./<scriptname> ), any exit will close the main shell, the one started along with your terminal session. If you had launched your script like bash ./<scriptname> (or any other shell instead of bash ), then exit would have stopped your child shell and not the one used by your terminal. If your script has executable permissions, executing it directly without giving the name of the shell will execute it in a child shell too. Using return instead of exit allows you to still launch your script using . ./<scriptname> without closing the current shell. But return can only be used to exit from a function or from a sourced script (a script run using the . ./<scriptname> syntax).
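Applied to the script in the question, a hedged sketch of the usual pattern for code that may be either sourced or executed (the 2>/dev/null hides the error that return prints when the script was executed rather than sourced):
    if [ -e "$DEFAULT_INSTALLATION_DEST/${ZOOK_TOOL}" ]; then
        echo "${ZOOK_TOOL} already installed"
        return 1 2>/dev/null || exit 1    # return works when sourced; exit covers direct execution
    fi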
{ "source": [ "https://unix.stackexchange.com/questions/138730", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17781/" ] }
139,082
I need my $TERM to be xterm-256color outside of tmux (in "plain" terminal with zsh), but screen-256color inside tmux. First I tried: add export TERM='xterm-256color' to my ~/.zshrc . add set -g default-terminal "screen-256color" to my ~/.tmux.conf Now, when I open terminal (say, xterm), TERM is xterm-256color , which is correct. But when I run tmux, TERM is again xterm-256color ! Then I tried to comment out line in my ~/.zshrc . Now, when I open terminal, TERM is xterm , and when I run tmux, TERM is screen-256color . So it seems if I set TERM in the .zshrc , tmux firstly sets TERM to screen-256color , runs shell (which is zsh), and zsh reads .zshrc and resets TERM to xterm-256color . So, how to make TERM to be xterm-256color in "plain" terminal, and screen-256color in tmux?
The TERM environment variable should be set by the application that is acting as your terminal. This is the whole point of the thing: letting programs running inside them know what terminal is being used and hence what sort of features it supports. Zsh is not a terminal. It is a shell. It might care what your TERM is set to if it wants to do special things, but it should not be responsible for setting it. Instead it is responsible for setting variables such as ZSH_VERSION which can be used by scripts or other child processes to understand what behavior to expect from their parent shell. Instead, you need to check the configuration for whatever terminal application you are using and ask it to report itself properly. For example you can do this for xterm by adding this line to the ~/.Xdefaults file it uses for configuration values: xterm*termName: xterm-256color It appears gnome-terminal does the idiotic thing of reading what your xterm configuration would be instead of having its own. This might get you by in some cases but it should more properly be set to vte-256color. This appears to be a long standing gripe against it (and some other VTE based terminal emulators). A common way to hack around this is to exploit another value it does set: if [ "$COLORTERM" = "gnome-terminal" ]; then export TERM=vte-256color fi But this brings you back around to your problem with tmux, so you would have to account for that by not resetting TERM if it is already something like "screen-256color" or "screen" (this needs bash's [[ ... ]] because the =~ regex match is not valid inside a plain [ test): if [[ "$COLORTERM" = "gnome-terminal" && "$TERM" =~ xterm.* ]]; then export TERM=vte-256color fi For other terminals you will need to look up their proper configuration routines.
{ "source": [ "https://unix.stackexchange.com/questions/139082", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46011/" ] }
139,089
I have text file. Task - get first and last line from file after $ cat file | grep -E "1|2|3|4" | commandtoprint $ cat file 1 2 3 4 5 Need this without cat output (only 1 and 5). ~$ cat file | tee >(head -n 1) >(wc -l) 1 2 3 4 5 5 1 Maybe awk and more shorter solution exist...
sed Solution: sed -e 1b -e '$!d' file When reading from stdin if would look like this (for example ps -ef ): ps -ef | sed -e 1b -e '$!d' UID PID PPID C STIME TTY TIME CMD root 1931 1837 0 20:05 pts/0 00:00:00 sed -e 1b -e $!d head & tail Solution: (head -n1 && tail -n1) <file When data is coming from a command ( ps -ef ): ps -ef 2>&1 | (head -n1 && tail -n1) UID PID PPID C STIME TTY TIME CMD root 2068 1837 0 20:13 pts/0 00:00:00 -bash awk Solution: awk 'NR==1; END{print}' file And also the piped example with ps -ef : ps -ef | awk 'NR==1; END{print}' UID PID PPID C STIME TTY TIME CMD root 1935 1837 0 20:07 pts/0 00:00:00 awk NR==1; END{print}
{ "source": [ "https://unix.stackexchange.com/questions/139089", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73552/" ] }
139,115
I am often logged in several SSH sessions at once. To logout from several sessions, I press CTRL + d , until I am back on my local machine. However, I occasionally press it once too many, and my terminal exits. Is there a way to make CTRL + d unable to close my terminal ? I am using terminator as my terminal emulator.
You can also disable eof generally in bash: set -o ignoreeof
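Related, bash's IGNOREEOF variable gives finer control; either line can go in ~/.bashrc (the threshold of 3 is an arbitrary example):
    set -o ignoreeof     # equivalent to IGNOREEOF=10: ten consecutive Ctrl+D presses are needed to exit
    export IGNOREEOF=3   # or pick your own number of consecutive Ctrl+D presses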
{ "source": [ "https://unix.stackexchange.com/questions/139115", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }
139,124
I have a list of bots to block, so I was thinking that fail2ban could be a solution until I realized that mod_security would be more efficient for this kind of task. The number of bots is huge, so the configuration file will contain a long list. My question is about performance (memory, processor, disk, etc.): will having a huge list of bots to block affect the performance of Apache on a site with heavy traffic?
You can also disable eof generally in bash: set -o ignoreeof
{ "source": [ "https://unix.stackexchange.com/questions/139124", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49573/" ] }
139,129
I am trying to create partition of SD card, for this I am following this tutorial . When I type the command ll /dev/mmcblk* I got this ls: cannot access /dev/mmcblk*: No such file or directory So, I check the list of items in /dev by typing this command ls /dev/ I got a big list of items but there is nothing like mmcblk0 or mmcblk1 . The list I am getting is this autofs dvdrw loop4 psaux ram6 sdb tty10 tty24 tty38 tty51 tty8 ttyS2 ttyS5 vcs6 block ecryptfs loop5 ptmx ram7 sdb1 tty11 tty25 tty39 tty52 tty9 ttyS20 ttyS6 vcs7 bsg fb0 loop6 pts ram8 sg0 tty12 tty26 tty4 tty53 ttyprintk ttyS21 ttyS7 vcsa btrfs-control fd loop7 ram0 ram9 sg1 tty13 tty27 tty40 tty54 ttyS0 ttyS22 ttyS8 vcsa1 bus full loop-control ram1 random sg2 tty14 tty28 tty41 tty55 ttyS1 ttyS23 ttyS9 vcsa2 cdrom fuse mapper ram10 rfkill shm tty15 tty29 tty42 tty56 ttyS10 ttyS24 uhid vcsa3 cdrw hidraw0 mcelog ram11 rtc snapshot tty16 tty3 tty43 tty57 ttyS11 ttyS25 uinput vcsa4 char hpet mei ram12 rtc0 snd tty17 tty30 tty44 tty58 ttyS12 ttyS26 urandom vcsa5 console input mem ram13 sda sr0 tty18 tty31 tty45 tty59 ttyS13 ttyS27 v4l vcsa6 core kmsg net ram14 sda1 stderr tty19 tty32 tty46 tty6 ttyS14 ttyS28 vcs vcsa7 cpu log network_latency ram15 sda2 stdin tty2 tty33 tty47 tty60 ttyS15 ttyS29 vcs1 vga_arbiter cpu_dma_latency loop0 network_throughput ram2 sda3 stdout tty20 tty34 tty48 tty61 ttyS16 ttyS3 vcs2 vhost-net disk loop1 null ram3 sda4 tty tty21 tty35 tty49 tty62 ttyS17 ttyS30 vcs3 video0 dri loop2 port ram4 sda5 tty0 tty22 tty36 tty5 tty63 ttyS18 ttyS31 vcs4 zero dvd loop3 ppp ram5 sda6 tty1 tty23 tty37 tty50 tty7 ttyS19 ttyS4 vcs5 I have followed this tutorial before but I do not any idea what's wrong this time. So,please tell how to get mmcblk list.
You can also disable eof generally in bash: set -o ignoreeof
{ "source": [ "https://unix.stackexchange.com/questions/139129", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70720/" ] }
139,191
I am tweaking in WebKit-browser land on Linux and I come across the terms " Primary Selection " and " Clipboard selection or buffer " very often. I want to understand what they are and how they differ. Where does drag-and-drop pasting fit in? What exactly is xclip 's job in this matter?
They are part of Selection Atoms , or X Atoms . The Inter-Client Communication Conventions Manual for X states: There can be an arbitrary number of selections, each named by an atom. To conform with the inter-client conventions, however, clients need deal with only these three selections: PRIMARY SECONDARY CLIPBOARD In short: PRIMARY selection is typically used by e.g. terminals when selecting text and pasting it by pressing middle mouse button. As in selected text is in Primary Clipboard without any explicit copy action taking place. Quick-Copy is a good name for it. (Not limited to terminal emulators, but as an example.) CLIPBOARD is primarily used in connection with MS Windows-style clipboard operations. Select+Copy. The data resides in the buffer . Read more here. Support for PRIMARY was added to WebKit back in 2008 . xclip , which is a command line interface (tool) for X selections (clipboard), traditionally adds data to Primary Clipboard. Optionally one can choose which one to use by the -clipboard option given argument of either. Corr.: Drag And Drop resides under Xdnd. There is also a Wikipedia entry on the spec . It uses XdndSelection and should not interfere with PRIMARY. The protocol is at least implemented by Qt and GTK.
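To see the two selections in action from the command line, a short xclip sketch (any text works; "hello" is just an example):
    echo hello | xclip -selection primary     # now paste with the middle mouse button
    echo hello | xclip -selection clipboard   # now paste with Ctrl+V in most GUI applications
    xclip -o -selection clipboard             # print the current contents of the CLIPBOARD selection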
{ "source": [ "https://unix.stackexchange.com/questions/139191", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49640/" ] }
139,254
I have a 100MB MySQL database backup file and I have trouble to open it in Vim on my Linux box that has 16G of RAM. Vim just hangs (at least unusable). This is something I don't understand. I have 16 GB RAM, why can't I load a 100 MB file in an editor? Is it because of Vim? I thought all the memory management is handled by the OS.
Vim sometimes has trouble with files that have unusually long lines. It's a text editor, so it's designed for text files, with line lengths that are usually at most a few hundred characters wide. A database file may not contain many newline characters, so it could conceivably be one single 100 Mb long line. Vim will not be happy with that, and although it will probably work, it might take quite a long time to load the file. I have certainly opened text files much larger than 100 Mb with Vim. The file doesn't even need to fit in memory all at once (since Vim can swap changes to disk as needed).
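If you suspect this is what you are hitting, you can check the longest line length before opening the file (GNU coreutils; the filename is a placeholder):
    wc -L dump.sql               # -L prints the length of the longest line
    cut -c1-200 dump.sql | less  # peek at the file without choking on the long lines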
{ "source": [ "https://unix.stackexchange.com/questions/139254", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39569/" ] }
139,285
We have an Ubuntu 12.04 server with httpd on port 80 and we want to limit: the maximum connections per IP address to httpd to 10 the maximum new connections per second to httpd to 150 How can we do this with iptables?
iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 15 --connlimit-mask 32 -j REJECT --reject-with tcp-reset This will reject connections above 15 from one source IP. iptables -A INPUT -m state --state RELATED,ESTABLISHED -m limit --limit 150/second --limit-burst 160 -j ACCEPT In this 160 new connections (packets really) are allowed before the limit of 150 NEW connections (packets) per second is applied.
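Adjusted to the exact numbers in the question (10 concurrent connections per IP, 150 new connections per second), a hedged sketch; rule order matters and this is untested:
    iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 10 --connlimit-mask 32 -j REJECT --reject-with tcp-reset
    iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m limit --limit 150/second --limit-burst 160 -j ACCEPT
    iptables -A INPUT -p tcp --dport 80 -m state --state NEW -j DROP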
{ "source": [ "https://unix.stackexchange.com/questions/139285", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61246/" ] }
139,418
I have a program that stores its settings in ~/.config/myprogram that I use both interactively and with a batch queuing system. When running interactively, I want this program to use my configuration files (and it does). But when running in batch mode, the configuration files aren't necessary because I specify command-line options that overwrite all the relevant settings. Further, accessing the configuration files over the network increases the program's startup time by several seconds; if the files don't exist, the program launches much faster (as each job only takes about a minute, this has a significant impact on batch job throughput). But because I also use the program interactively, I don't want to be moving/deleting my configuration files all the time. Depending on when my batch jobs get scheduled on the cluster (based on other users' usage), I may want to use the program interactively and as part of a batch job at the same time. (Aside: that network file performance is so slow is probably a bug, but I'm just a user of the cluster, so I can only work around it, not fix it.) I could build a version of the program that doesn't read the configuration files (or has a command-line option not to) for batch use, but this program's build environment is poorly-engineered and difficult to set up. I'd much prefer to use the binaries installed through my system's package manager. How can I trick particular instances of this program into pretending my configuration files don't exist (without modifying the program)? I'm hoping for a wrapper of the form pretendfiledoesntexist ~/.config/myprogram -- myprogram --various-options... , but I'm open to other solutions.
That program probably resolves the path to that file from $HOME/.config/myprogram . So you could tell it your home directory is elsewhere, like: HOME=/nowhere your-program Now, maybe your-program needs some other resource in your home directory. If you know which they are, you can prepare a fake home for your-program with links to the resource it needs in there. mkdir -p ~/myprogram-home/.config ln -s ~/.Xauthority ~/myprogram-home/ ... HOME=~/myprogram-home myprogram
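Since the program keeps its settings under ~/.config/ , it may also honour the XDG base-directory variables; if so, redirecting only the configuration directory is less invasive than faking the whole home (an untested assumption about the program):
    XDG_CONFIG_HOME=/nonexistent myprogram --various-options...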
{ "source": [ "https://unix.stackexchange.com/questions/139418", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73723/" ] }
139,441
When I run sudo apt-get update I get this error: Reading package lists... Error! E: Unable to parse package file /var/lib/dpkg/status (1) E: The package lists or status file could not be parsed or opened. What is each line saying, and how do I solve it? I'm running Linux Mint 17 Qiana Cinnamon in VMWare Workstation 10.0.2.
If you google this error, there are plenty of links that describe it. It seems that the file is messed up. You can try the options specified here . sudo mv /var/lib/dpkg/status /var/lib/dpkg/status.bad sudo cp /var/lib/dpkg/status-old /var/lib/dpkg/status sudo apt-get update The option below did not work for this particular case. Another link that describes a similar issue is here . sudo rm /var/lib/apt/lists/* -vf sudo apt-get clean sudo apt-get update sudo apt-get upgrade
{ "source": [ "https://unix.stackexchange.com/questions/139441", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65805/" ] }
139,490
I have configured rsyslog to log certain log events to /dev/xconsole : *.*;cron.!=info;mail.!=info |/dev/xconsole /dev/xconsole is a named pipe ( fifo ). If I want to see what is being logged, I can do cat /dev/xconsole . I am surprised to see, that the command cat /dev/xconsole does not finish after reading the file, but instead acts as tail -f . in other words, the two commands behave the same: cat /dev/xconsole tail -f /dev/xconsole Can somebody please explain why is that? Is there any difference between the two?
cat keeps reading until it gets EOF. A pipe produces EOF on the output only when it gets EOF on the input. The logging daemon is opening the file, writing to it, and keeping it open — just like it does for a regular file — so EOF is never generated on the output. cat just keeps reading, blocking whenever it exhausts what's currently in the pipe. You can try this out yourself manually: $ mkfifo test $ cat test And in another terminal: $ cat > test hello There will be output in the other terminal. Then enter: world There will be more output in the other terminal. If you now Ctrl-D the input then the other cat will terminate too. In this case, the only observable difference between cat and tail -f will be if the logging daemon is terminated or restarted: cat will stop permanently when the write end of the pipe is closed, but tail -f will keep going (reopening the file) when the daemon is restarted.
{ "source": [ "https://unix.stackexchange.com/questions/139490", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }
139,513
I couldn't find any safe way to clear the systemd journal on Google. Does anyone know a safe and reliable way to do so? Let's say I was experimenting with something and my logs got cluttered with various error messages. Moreover, I'm displaying my journal on my desktop using Conky. I really don't want to see those errors, as they remind me of an awful day I spent fixing this stuff; I want to feel like a fresh man after this horror. I think everyone will agree that this is a valid reason to clear the logs :P .
The self maintenance method is to vacuum the logs by size or time. Retain only the past two days: journalctl --vacuum-time=2d Retain only the past 500 MB: journalctl --vacuum-size=500M man journalctl for more information.
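Two related bits that are handy here (both part of the same journalctl / journald toolset):
    journalctl --disk-usage    # see how much space the journal currently takes
To make a size cap permanent, set SystemMaxUse=500M in /etc/systemd/journald.conf and restart systemd-journald .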
{ "source": [ "https://unix.stackexchange.com/questions/139513", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37309/" ] }
139,514
This is a section of the log from u-boot running on Avnet Microzed board -ARM CORTEX A9: [Thu Jun 26 17:40:53.656 2014] [Thu Jun 26 17:40:53.656 2014] [Thu Jun 26 17:40:53.656 2014] U-Boot 2013.07 (Jun 26 2014 - 17:34:41) [Thu Jun 26 17:40:53.656 2014] [Thu Jun 26 17:40:53.656 2014] 1 GiB [Thu Jun 26 17:40:53.671 2014] SF: Detected S25FL129P_64K/S25FL128S_64K with page size 64 KiB, total 16 MiB [Thu Jun 26 17:40:53.703 2014] *** Warning - bad CRC, using default environment [Thu Jun 26 17:40:53.703 2014] [Thu Jun 26 17:40:53.703 2014] In: serial [Thu Jun 26 17:40:53.703 2014] Out: serial [Thu Jun 26 17:40:53.703 2014] Err: serial [Thu Jun 26 17:40:53.703 2014] U-BOOT for ab [Thu Jun 26 17:40:53.703 2014] [Thu Jun 26 17:40:53.703 2014] [Thu Jun 26 17:40:53.703 2014] SF: Detected S25FL129P_64K/S25FL128S_64K with page size 64 KiB, total 16 MiB [Thu Jun 26 17:40:54.453 2014] SF: 5242880 bytes @ 0x520000 Read: OK [Thu Jun 26 17:40:54.453 2014] Description: PetaLinux Kernel [Thu Jun 26 17:40:54.453 2014] 0x010000f0 [Thu Jun 26 17:40:54.453 2014] 4620145 Bytes = 4.4 MiB [Thu Jun 26 17:40:54.453 2014] Description: Flattened Device Tree blob [Thu Jun 26 17:40:54.453 2014] 0x01468114 [Thu Jun 26 17:40:54.453 2014] 9766 Bytes = 9.5 KiB [Thu Jun 26 17:40:54.453 2014] Hash algo: crc32 [Thu Jun 26 17:40:54.453 2014] Hash value: 9a94aca8 [Thu Jun 26 17:40:54.453 2014] Hash algo: sha1 [Thu Jun 26 17:40:54.453 2014] Hash value: 97b81e3014decb706ff19e61e1227dace97d4232 [Thu Jun 26 17:40:54.453 2014] crc32+ sha1+ Uncompressing Kernel Image ... OK . . . I noticed that the following lines are coming twice: SF: Detected S25FL129P_64K/S25FL128S_64K with page size 64 KiB, total 16 MiB This corresponds to the function spi_flash_probe from drivers/mtd/spi/spi_flash.c I need to know: 1- Why it is probed twice? 2- The name and location of the file from where this function is called (twice). 3- The second time it is being probed it is considerably slow, why it is so? 
These are my u-boot environmental variable; U-Boot-PetaLinux> printenv autoload=no baudrate=115200 bootdelay=4 bootenvsize=0x00020000 bootenvstart=0x00500000 bootfile=image.ub bootsize=0x00500000 bootstart=0x00000000 clobstart=0x1000000 console=console=ttyPS0,115200 cp_dtb2ram=sf probe 0; sf read ${clobstart} ${dtbstart} ${dtbsize} dtbboot=sf probe 0; sf read ${netstart} ${kernstart} ${kernsize}; sf read ${dtbnetstart} ${dtbstart} ${dtbsize}; bootm ${netstart} - ${dtbnetstart} dtbnetboot=tftp ${netstart} image.ub; tftp ${dtbnetstart} system.dtb; bootm ${netstart} - ${dtbnetstart} dtbnetstart=0x1500000 eraseconf=sf probe 0; sf erase ${confstart} ${confsize} eraseenv=sf probe 0; sf erase ${bootenvstart} ${bootenvsize} ethact=Gem.e000b000 ethaddr=00:0a:35:00:07:c0 fault=echo $img image size is greater than allocated place - $img is NOT UPDATED get_dtb=run cp_dtb2ram; fdt addr ${clobstart} hostname="Peta_MicroZed" install_dtb=sf probe 0; sf erase ${dtbstart} ${dtbsize};sf write ${clobstart} ${dtbstart} ${filesize} install_jffs2=sf probe 0; sf erase ${jffs2start} ${jffs2size};sf write ${clobstart} ${jffs2start} ${filesize} install_kernel=sf probe 0; sf erase ${kernstart} ${kernsize};sf write ${fileaddr} ${kernstart} ${filesize} install_uboot=sf probe 0; sf erase ${bootstart} ${bootsize};sf write ${clobstart} ${bootstart} ${filesize} kernsize=0x00500000 kernstart=0x00520000 load_boot=tftp ${clobstart} BOOT.BIN load_dtb=tftp ${clobstart} system.dtb load_jffs2=tftp ${clobstart} rootfs.jffs2 load_kernel=tftp ${clobstart} image.ub loadaddr=0x1000000 mtdids=nor0=0 mtdparts=mtdparts=0:5M(boot),128K(bootenv),851968(image) nc=setenv stdout nc;setenv stdin nc; ncip=192.168.1.11 netboot=tftp ${netstart} image.ub && bootm netstart=0x1000000 psserial0=setenv stdout ttyPS0;setenv stdin ttyPS0 sd_update_boot=echo Updating BOOT from SD;mmcinfo && fatload mmc 0:1 ${clobstart} BOOT.BIN && run install_uboot sd_update_kernel=echo Updating Kernel from SD;mmcinfo && fatload mmc 0:1 ${clobstart} ${bootfile}&& set fileaddr ${clobstart}&&run install_kernel sdboot=echo boot Petalinux; mmcinfo && fatload mmc 0 ${netstart} ${bootfile}&& bootm serial=setenv stdout serial;setenv stdin serial serverip=192.168.1.11 sfboot=sf probe 0; sf read ${netstart} ${kernstart} ${kernsize}; bootm ${netstart} silent=1 silent-kinux=yes silent_linux=yes test_crc=if imi ${clobstart}; then run test_img; else echo $img Bad CRC - $img is NOT UPDATED; fi test_img=setenv var "if test ${filesize} -gt ${psize}; then run fault; else run ${installcmd}; fi"; run var; setenv var update_boot=setenv img BOOT.BIN; setenv psize ${bootsize}; setenv installcmd install_uboot; run load_boot test_img; setenv img; setenv psize; setenv installcmd update_dtb=setenv img DTB; setenv psize ${dtbsize}; setenv installcmd install_dtb; run load_dtb test_img; setenv img update_jffs2=setenv img JFFS2; setenv psize ${jffs2size}; setenv installcmd install_jffs2; run load_jffs2 test_img; setenv img; setenv psize; setenv installcmd update_kernel=setenv img KERNEL; setenv psize ${kernsize}; setenv installcmd "install_kernel"; run load_kernel test_crc; setenv img; setenv psize; setenv installcmd varify=n Environment size: 3214/131068 bytes
The self maintenance method is to vacuum the logs by size or time. Retain only the past two days: journalctl --vacuum-time=2d Retain only the past 500 MB: journalctl --vacuum-size=500M man journalctl for more information.
{ "source": [ "https://unix.stackexchange.com/questions/139514", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55826/" ] }
139,578
I was trying to copy paste something from vim to another application and also, from that application to vim using right click with mouse and then copy and paste (or with Ctrl + v and Ctrl + c and also tried the Command version for mac OSX, obviously.). However, when I try doing it, it only copies the first word when I do it from vim or when I copy from the application to vim , it copies everything, but inserts strange tabs and spaces. I think this happened when I decided to set my mouse on in the terminal. As in: :set mouse=a I have that line on my .vimrc file on iTerm (mac os x). Though, is it possible to make my copy paste with other applications that are not in vim not to break with the mouse=a on? Or is it at least possible to set my mouse off while I do the copy paste? I did :help mouse but the comments were not useful for me. I would paste them here but... my copy paste tool is broken! I did try :set mouse! and :set mouse=a! but these did nothing useful... :( Additional info of my environment: I am also using tmux most of the time, though, I tested this error/bug without a tmux session, thats why I posted this mainly as a vim question.
mouse=a prevents copying and pasting readable text out of vim. Change mouse=a to mouse=r and that should fix the issue. One thing I am wondering: are you changing your vim config file with the mouse set to mouse=a? original answer ^ If mouse=r doesn't give you all the copy/paste options, change it to mouse=v . Both mouse=r and mouse=v provide the functions you need, but depending on the vimrc you are using, one will work better than the other.
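A small ~/.vimrc sketch if you want mouse support available but off by default; the <F12> key is an arbitrary choice:
    set mouse=r
    " toggle full mouse support (mouse=a) on and off with F12
    nnoremap <F12> :let &mouse = (&mouse ==# 'a' ? 'r' : 'a')<CR>:set mouse?<CR>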
{ "source": [ "https://unix.stackexchange.com/questions/139578", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55620/" ] }
139,698
I try to download a file with wget and curl and it is rejected with a 403 error (forbidden). I can view the file using the web browser on the same machine. I try again with my browser's user agent, obtained by http://www.whatsmyuseragent.com . I do this: wget -U 'Mozilla/5.0 (X11; Linux x86_64; rv:30.0) Gecko/20100101 Firefox/30.0' http://... and curl -A 'Mozilla/5.0 (X11; Linux x86_64; rv:30.0) Gecko/20100101 Firefox/30.0' http://... but it is still forbidden. What other reasons might there be for the 403, and what ways can I alter the wget and curl commands to overcome them? (this is not about being able to get the file - I know I can just save it from my browser; it's about understanding why the command-line tools work differently) update Thanks to all the excellent answers given to this question. The specific problem I had encountered was that the server was checking the referrer. By adding this to the command-line I could get the file using curl and wget . The server that checked the referrer bounced through a 302 to another location that performed no checks at all, so a curl or wget of that site worked cleanly. If anyone is interested, this came about because I was reading this page to learn about embedded CSS and was trying to look at the site's css for an example. The actual URL I was getting trouble with was this and the curl I ended up with is curl -L -H 'Referer: http://css-tricks.com/forums/topic/font-face-in-base64-is-cross-browser-compatible/' http://cloud.typography.com/610186/691184/css/fonts.css and the wget is wget --referer='http://css-tricks.com/forums/topic/font-face-in-base64-is-cross-browser-compatible/' http://cloud.typography.com/610186/691184/css/fonts.css Very interesting.
A HTTP request may contain more headers that are not set by curl or wget. For example: Cookie: this is the most likely reason why a request would be rejected, I have seen this happen on download sites. Given a cookie key=val , you can set it with the -b key=val (or --cookie key=val ) option for curl . Referer (sic): when clicking a link on a web page, most browsers tend to send the current page as referrer. It should not be relied on, but even eBay failed to reset a password when this header was absent. So yes, it may happen. The curl option for this is -e URL and --referer URL . Authorization: this is becoming less popular now due to the uncontrollable UI of the username/password dialog, but it is still possible. It can be set in curl with the -u user:password (or --user user:password ) option. User-Agent: some requests will yield different responses depending on the User Agent. This can be used in a good way (providing the real download rather than a list of mirrors) or in a bad way (reject user agents which do not start with Mozilla , or contain Wget or curl ). You can normally use the Developer tools of your browser (Firefox and Chrome support this) to read the headers sent by your browser. If the connection is not encrypted (that is, not using HTTPS), then you can also use a packet sniffer such as Wireshark for this purpose. Besides these headers, websites may also trigger some actions behind the scenes that change state. For example, when opening a page, it is possible that a request is performed on the background to prepare the download link. Or a redirect happens on the page. These actions typically make use of Javascript, but there may also be a hidden frame to facilitate these actions. If you are looking for a method to easily fetch files from a download site, have a look at plowdown, included with plowshare .
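Putting the usual suspects together in one request; the user agent is the one from the question, the referer is the one from the question's update, and the cookie value is a placeholder to adapt:
    curl -L \
         -A 'Mozilla/5.0 (X11; Linux x86_64; rv:30.0) Gecko/20100101 Firefox/30.0' \
         -e 'http://css-tricks.com/forums/topic/font-face-in-base64-is-cross-browser-compatible/' \
         -b 'name=value' \
         -o fonts.css \
         'http://cloud.typography.com/610186/691184/css/fonts.css'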
{ "source": [ "https://unix.stackexchange.com/questions/139698", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9259/" ] }
139,730
I love the window snap feature of the Gnome 3 shell. However, it only allows you to maximize windows or to snap to the left or right half of the screen. Is there a way to snap to quarters of the screen? Maybe some shell extension I'm unaware of?
There are several extensions on the GNOME extensions site which can give you various modes of "snapping" your windows. One that works particularly well is gTile . References Keyboard Shortcuts GNOME 3
{ "source": [ "https://unix.stackexchange.com/questions/139730", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57550/" ] }
139,809
Suppose I have a PDF and I want to obtain whatever metadata is available for that PDF. What utility should I use? I find the piece of information I am usually most interested in knowing is the paper size , something that PDF viewers usually don't report. E.g. is the PDF size letter, legal, A4 or something else? But the other information available may be of interest too.
One of the canonical tools for this is pdfinfo , which comes with xpdf, if I recall. Example output: [0 1017 17:10:17] ~/temp % pdfinfo test.pdf Creator: TeX Producer: pdfTeX-1.40.14 CreationDate: Sun May 18 09:53:06 2014 ModDate: Sun May 18 09:53:06 2014 Tagged: no Form: none Pages: 1 Encrypted: no Page size: 595.276 x 841.89 pts (A4) Page rot: 0 File size: 19700 bytes Optimized: no PDF version: 1.5
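To pull out just the paper size, or to check every page in a mixed-size document (the -f / -l page-range options exist in poppler's pdfinfo; adjust if your build differs):
    pdfinfo test.pdf | grep -i '^Page size'
    pdfinfo -f 1 -l 9999 test.pdf | grep -i 'size'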
{ "source": [ "https://unix.stackexchange.com/questions/139809", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4671/" ] }
139,866
A file is being sequentially downloaded by wget . If I start unpacking it with cat myfile.tar.bz2 | tar -xj , it may unpack correctly or fail with "Unexpected EOF", depending on what is faster. How to "cat and follow" a file, i.e. output content of the file to stdout, but don't exit on EOF, instead keep subsribed to that file and continue outputting new portions of the data, exiting only if the file is closed by writer and not re-opened within N seconds. I've created a script cat_and_follow based on @arielCo's answer that also terminates the tail when the file is not being opened for writing anymore.
tail +1f file I tested it on Ubuntu with the LibreOffice source tarball while wget was downloading it: tail +1f libreoffice-4.2.5.2.tar.xz | tar -tvJf - It also works on Solaris 10, RHEL3, AIX 5 and Busybox 1.22.1 in my Android phone (use tail +1 -f file with Busybox).
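The same thing with GNU tail 's more common option spelling, using the tarball from the example above:
    tail -n +1 -f libreoffice-4.2.5.2.tar.xz | tar -tvJf -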
{ "source": [ "https://unix.stackexchange.com/questions/139866", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17594/" ] }
140,007
I was trying to learn how to use the bind-key [-cnr] [-t key-table] key command [arguments] better, but was having some trouble figuring out what "valid keys " are for bind-key command. I tried doing man tmux and Google too, but I couldn't find anything useful. How can I figure out what the syntax for valid keys are? Is there a help command or a man page for this? Maybe I don't know the technical term for this valid keys, is there a term for these keys so that I can do a better google search? For example, I was trying to figure out what the following remapping of commands meant: bind-key -n M-S-Left resize-pane -L 2 bind-key -n M-S-Right resize-pane -R 2 bind-key -n M-S-Up resize-pane -U 2 bind-key -n M-S-Down resize-pane -D 4 The -n was easy to find in the man page (doesn't need prefix). But I can't figure out what M-S-Left key means. I am guessing that its mapping shift and the left arrow plus whatever M means to the resize-pane -L 2 command. How do I figure out what M means? What if I wanted control + whatever key I wanted. Is control = C ? How can I figure this out without just trying random keys on my keyboard until something works? Also, how do I confirm, figure out if I am not mapping it to a key set that is already used? Is there such a thing as "show all aliases" or something? As an addition to the question, are these valid keys the same as the ones for vim ? The thing is that vim seems to have a different scripting for its own language since it sometimes require and stuff.
Available Keys Look at man tmux , search / for KEY BINDINGS : tmux allows a command to be bound to most keys, with or without a prefix key. When specifying keys, most represent themselves (for example ‘A’ to ‘Z’). Ctrl keys may be prefixed with ‘C-’ or ‘^’, and Alt (meta) with ‘M-’. In addition, the following special key names are accepted: Up, Down, Left, Right, BSpace, BTab, DC (Delete), End, Enter, Escape, F1 to F20, Home, IC (Insert), NPage/PageDown/PgDn, PPage/PageUp/PgUp, Space, and Tab. Note that to bind the ‘"’ or ‘'’ keys, quotation marks are necessary [...] M-S-Left should be Alt + Shift + Left for example. List all bound keys To list all key bindings, simply press Ctrl - b then ? while in a tmux session. This is also documented in man tmux in section EXAMPLES : Typing ‘C-b ?’ lists the current key bindings in the current window; up and down may be used to navigate the list or ‘q’ to exit from it. You can also list all key-bindings via tmux list-keys . If you want to check for already set keys, you can grep it's output to check, if it's already set. Research To find more via Google, search for Section names in man tmux - just type in tmux default key bindings for example :). But often man tmux is sufficient. This site is a very good documentation about tmux and pops up, if you search for said string in Google. Arch wiki is always good, too.
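Two quick sketches that tie this together (the key choices are arbitrary examples):
    tmux list-keys | grep -i 'M-S-Left'        # check whether a key is already bound
    tmux bind-key -n C-Left resize-pane -L 2   # Ctrl+Left without the prefix, using the C- notation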
{ "source": [ "https://unix.stackexchange.com/questions/140007", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55620/" ] }
140,021
I have a question about closing a port; I'm seeing some strange things. When I execute nmap --top-ports 10 192.168.1.1 it shows that port 23/tcp is open. But when I execute nmap --top-ports 10 localhost it shows that port 23/tcp is closed. Which of them is true? I want to close this port on my whole system; how can I do it?
Nmap is a great port scanner, but sometimes you want something more authoritative. You can ask the kernel what processes have which ports open by using the netstat utility: me@myhost:~$ sudo netstat -tlnp Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN 1004/dnsmasq tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 380/sshd tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 822/cupsd tcp6 0 0 :::22 :::* LISTEN 380/sshd tcp6 0 0 ::1:631 :::* LISTEN 822/cupsd The options I have given are: -t TCP only -l Listening ports only -n Don't look up service and host names, just display numbers -p Show process information (requires root privilege) In this case, we can see that sshd is listening on any interface ( 0.0.0.0 ) port 22, and cupsd is listening on loopback ( 127.0.0.1 ) port 631. Your output may show that telnetd has a local address of 192.168.1.1:23 , meaning it will not answer to connections on the loopback adapter (e.g. you can't telnet 127.0.0.1 ). There are other tools that will show similar information (e.g. lsof or /proc ), but netstat is the most widely available. It even works on Windows ( netstat -anb ). BSD netstat is a little different: you'll have to use sockstat(1) to get the process information instead. Once you have the process ID and program name, you can go about finding the process and killing it if you wish to close the port. For finer-grained control, you can use a firewall (iptables on Linux) to limit access to only certain addresses. You may need to disable a service startup. If the PID is "-" on Linux, it's probably a kernel process (this is common with NFS for instance), so good luck finding out what it is. Note: I said "authoritative" because you're not being hindered by network conditions and firewalls. If you trust your computer, that's great. However, if you suspect that you've been hacked, you may not be able to trust the tools on your computer. Replacing standard utilities (and sometimes even system calls) with ones that hide certain processes or ports (a.k.a. rootkits) is a standard practice among attackers. Your best bet at this point is to make a forensic copy of your disk and restore from backup; then use the copy to determine the way they got in and close it off.
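If it really is telnetd answering on 192.168.1.1:23 (an assumption; confirm with the netstat output first), the usual ways to close the port on a Red Hat style system are to disable the service or to drop the traffic at the firewall, roughly:
# telnet is normally run from xinetd on RHEL-like systems
chkconfig telnet off
service xinetd restart
# or block incoming TCP port 23 outright
iptables -A INPUT -p tcp --dport 23 -j DROP
service iptables save      # persist the rule across reboots (RHEL-specific)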
{ "source": [ "https://unix.stackexchange.com/questions/140021", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74080/" ] }
140,075
I added a ssh key to the agent by: $ ssh-add ~/.ssh/id_rsa_mac Identity added: /Users/alex/.ssh/id_rsa_mac (/Users/alex/.ssh/id_rsa_mac) After a reboot the agent doesn't have this key added anymore: $ ssh-add -l The agent has no identities. Why did this happen?
The addition of keys to the agent is transient. They last only so long as the agent is running. If you kill it or restart your computer they're lost until you re-add them again. From the ssh-agent man page: ssh-agent is a program to hold private keys used for public key authentication (RSA, DSA, ECDSA). The idea is that ssh-agent is started in the beginning of an X-session or a login session, and all other windows or programs are started as clients to the ssh-agent program. Through use of environment variables the agent can be located and automatically used for authentication when logging in to other machines using ssh(1). The agent initially does not have any private keys. Keys are added using ssh-add(1). When executed without arguments, ssh-add(1) adds the files ~/.ssh/id_rsa , ~/.ssh/id_dsa , ~/.ssh/id_ecdsa and ~/.ssh/identity . If the identity has a passphrase, ssh-add(1) asks for the passphrase on the terminal if it has one or from a small X11 program if running under X11. If neither of these is the case then the authentication will fail. It then sends the identity to the agent. Several identities can be stored in the agent; the agent can automatically use any of these identities. ssh-add -l displays the identities currently held by the agent. macOS Sierra Starting with macOS Sierra 10.12.2 , Apple has added a UseKeychain config option for SSH configs. You can activate this feature by adding UseKeychain yes to your ~/.ssh/config . Host * UseKeychain yes OSX Keychain I do not use OSX but did find this Q&A on SuperUser titled: How to use Mac OS X Keychain with SSH keys? . I understand that since Mac OS X Leopard the Keychain has supported storing SSH keys. Could someone please explain how this feature is supposed to work. So from the sound of it you could import your SSH keys into Keychain using this command: $ ssh-add -K [path/to/private SSH key] Your keys should then persist from boot to boot. Whenever you reboot your Mac, all the SSH keys in your keychain will be automatically loaded. You should be able to see the keys in the Keychain Access app, as well as from the command line via: ssh-add -l Source: Super User - How to use Mac OS X Keychain with SSH keys?
{ "source": [ "https://unix.stackexchange.com/questions/140075", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61718/" ] }
140,286
How do I list the shells available for use, from the command line?
To list the valid login shells currently available, type the following command: cat /etc/shells Example: pandya@pandya-desktop:~$ cat /etc/shells # /etc/shells: valid login shells /bin/sh /bin/dash /bin/bash /bin/rbash /bin/ksh93 For more information about shells, see Wikipedia .
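If the point of listing them is to switch to one, chsh changes your login shell; it only accepts paths that appear in /etc/shells (the ksh93 path here is just an example from the listing above):
chsh -s /bin/ksh93      # takes effect at your next login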
{ "source": [ "https://unix.stackexchange.com/questions/140286", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
140,299
In my bare centos6.5 system, which is a docker container, en_US.utf-8 locale is missing: bash-4.1# locale -a C POSIX Normally in Ubuntu there is command locale-gen to do this: # locale-gen en_US.UTF-8 # echo 'LANG="en_US.UTF-8"' > /etc/default/locale How can I do this in centos 6.5?
locale-gen is not present in Centos/Fedora . You must use localedef : localedef -v -c -i en_US -f UTF-8 en_US.UTF-8 From man localedef : NAME localedef - define locale environment SYNOPSIS localedef [-c][-f charmap][-i sourcefile][-u code_set_name] name DESCRIPTION The localedef utility shall convert source definitions for locale cate‐ gories into a format usable by the functions and utilities whose opera‐ tional behavior is determined by the setting of the locale environment variables defined in the Base Definitions volume of IEEE Std 1003.1-2001, Chapter 7, Locale. It is implementation-defined whether users have the capability to create new locales, in addition to those supplied by the implementation. If the symbolic constant POSIX2_LOCALEDEF is defined, the system supports the creation of new locales. On XSI-conformant systems, the symbolic constant POSIX2_LOCALEDEF shall be defined.
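To make the generated locale the system default on CentOS 6 (for example inside the Docker image), the usual place is /etc/sysconfig/i18n (an assumption based on the standard CentOS 6 layout; adjust if your base image differs):
localedef -c -i en_US -f UTF-8 en_US.UTF-8
echo 'LANG="en_US.UTF-8"' > /etc/sysconfig/i18n
export LANG=en_US.UTF-8      # for the current shell / Docker build step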
{ "source": [ "https://unix.stackexchange.com/questions/140299", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15212/" ] }
140,350
I want to install on my Linux red-hat machine gettext-0.19.1.tar.xz . First I do the following cd gettext-0.19.1 ./configure make During make it fails on g++: command not found libtool: compile: g++ -DIN_LIBASPRINTF -DHAVE_CONFIG_H -I. -c autosprintf.cc - o .libs/autosprintf.o ./libtool: line 1128: g++: command not found make[5]: *** [autosprintf.lo] Error 1 make[5]: Leaving directory `/var/tmp/gettext-0.19.1/gettext-runtime/libasprintf' make[4]: *** [all] Error 2 make[4]: Leaving directory `/var/tmp/gettext-0.19.1/gettext-runtime/libasprintf' make[3]: *** [all-recursive] Error 1 make[3]: Leaving directory `/var/tmp/gettext-0.19.1/gettext-runtime' make[2]: *** [all] Error 2 make[2]: Leaving directory `/var/tmp/gettext-0.19.1/gettext-runtime' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/var/tmp/gettext-0.19.1' make: *** [all] Error 2 How do I fix this? Remark - I have GCC which gcc /usr/bin/gcc
Install the suite of development tools first. Then go back to compile the software. yum groupinstall 'Development Tools' You could need much more than just the compiler. The Development Tools package includes the core development tools like automake , gcc , perl , python , flex , make , gdb , bison , and many more. To list all of the software in the package group, use yum as follows. yum group info 'Development Tools' For Fedora 20 (at least), you'll additionally need to install gcc-c++ . For Debian-based systems, install the suite of development tools as follows. apt-get install build-essential In Void Linux , it's xbps-install -Su base-devel , which provides m4 , autoconf , automake , bc , binutils , bison , ed , libfl-devel , flex , libgcc-devel , kernel-libc-headers , glibc-devel , isl , cloog , mpfr , libmpc , gcc , libstdc++-devel , gcc-c++ , gettext-libs , gettext , groff , libtool , make , patch , pkg-config , texinfo , unzip , and xz .
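If pulling in the whole group is more than you want, installing just the C++ compiler is usually enough to get past the g++: command not found failure:
yum install gcc-c++
# then restart the build from a clean state
make clean && make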
{ "source": [ "https://unix.stackexchange.com/questions/140350", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67059/" ] }
140,367
I have a Linux server which currently has the following space usage: /dev/sda3 20G 15G 4.2G 78% / /dev/sda6 68G 42G 23G 65% /u01 /dev/sda2 30G 7.4G 21G 27% /opt /dev/sda1 99M 19M 76M 20% /boot tmpfs 48G 8.2G 39G 18% /dev/shm As you can see, / is at 78%. I want to check which files or folders are consuming the space. I tried this: find . -type d -size +100M which shows results like this: ./u01/app/june01.dbf ./u01/app/temp01.dbf ./u01/app/smprd501.dbf ./home/abhishek/centos.iso ./home/abhishek/filegroup128.jar Now this is my issue. I only want the names of the big files that live on the / filesystem itself, not on /u01 or /home . Since / is the base of everything, it is showing me every file on my server. Is it possible to get only the big files that contribute to the 78% usage of / ?
Try: find / -xdev -type f -size +100M It lists all files that have a size bigger than 100M, staying on the / filesystem ( -xdev stops find from descending into other mounted filesystems such as /u01 ). If you want a per-directory overview instead, you can try ncdu . If you aren't running Linux, you may need to use -size +204800 or -size +104857600c , as the M suffix meaning megabytes isn't in POSIX. find / -xdev -type f -size +102400000c
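If you would rather see which directories hold the space than list individual files, a du limited to the / filesystem works too (this assumes GNU du and GNU sort for the -h options):
du -xh --max-depth=1 / 2>/dev/null | sort -rh | head -20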
{ "source": [ "https://unix.stackexchange.com/questions/140367", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59734/" ] }
140,482
I am trying to write a script that kills service running at a specific port. This is my script: a=$(ps ax | grep 4990 | grep java | awk '{print $1}') kill -9 $a It's a java program. This script works sometimes, but mysteriously fails most of the time. Is there any other way to kill a service running on a port? My port is 4990 .
You can try fuser : fuser -k 4990/tcp Or using lsof to get the process id, then feed to kill : kill $(sudo lsof -t -i:4990)
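Wrapped into a small script for repeated use (the port is a positional parameter; names here are purely illustrative):
#!/bin/sh
# kill whatever is listening on the given TCP port
port=${1:?usage: $0 port}
fuser -k "${port}/tcp" 2>/dev/null || kill $(lsof -t -i :"$port")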
{ "source": [ "https://unix.stackexchange.com/questions/140482", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59734/" ] }
140,601
I'm toying around with a Puppet agent and a Puppet master and I've noticed that the Puppet cert utility provides a fingerprint for my agent's public key as it has requested to be signed: $ puppet cert list "dockerduck" (SHA256) 1D:72:C5:42:A5:F4:1C:46:35:DB:65:66:B8:B8:06:28:7A:D4:40:FA:D2:D5:05:1A:8F:43:60:6C:CA:D1:FF:79 How do I verify that this is the right key? On the Puppet agent, taking a sha256sum gives me something dramatically different: $ sha256sum /var/lib/puppet/ssl/public_keys/dockerduck.pem f1f1d198073c420af466ec05d3204752aaa59ebe3a2f593114da711a8897efa3 If I recall correctly, certificates provide checksums of their public keys in the actual key files themselves. How can I get access to a keys fingerprint(s)?
The OpenSSL command-line utility can be used to inspect certificates (and private keys, and many other things). To see everything in the certificate, you can do: openssl x509 -in CERT.pem -noout -text To get the SHA256 fingerprint, you'd do: openssl x509 -in CERT.pem -noout -sha256 -fingerprint
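The reason your sha256sum of the PEM file looks nothing like the fingerprint is that a fingerprint is a digest of the DER (binary) encoding of the certificate or request, not of the PEM text. You can reproduce it by hand, apart from the colon formatting (the paths here are examples):
# same digest that -fingerprint prints, computed manually
openssl x509 -in CERT.pem -outform DER | sha256sum
# for a pending signing request, the analogous command would be
openssl req -in REQUEST.pem -outform DER | sha256sum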
{ "source": [ "https://unix.stackexchange.com/questions/140601", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
140,602
When I open my non-login shell in Ubuntu, my present working directory is /home/user_name (my $HOME environment variable), but I want to change this such that when I start my terminal I am in some other directory. I have read that when I start my terminal in Ubuntu a .bashrc file is sourced. So I added export HOME=/home/user_name/Documents to my .bashrc file. Now, when I open my terminal I am still in /home/user_name directory. How can I change this?
First of all, remove that line from your .bashrc . The way to do this in not by playing with $HOME , that variable will always point to your home directory and you don't want to change that just so your shells start in a different place. I'm sure there will be a more elegant way to do this but as a temporary workaround you can simply add this line to your .bashrc : cd ~/Documents Since that file is read every time you start a new non-login shell (open a new terminal), the cd command will be executed and your terminals will start at ~/Documents as you desire.
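One refinement worth considering: guard the cd so it only runs for interactive shells, otherwise non-interactive things that source .bashrc (scp, some cron setups) may start in an unexpected directory. A sketch:
# in ~/.bashrc
case $- in
    *i*) cd ~/Documents ;;    # only for interactive shells
esac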
{ "source": [ "https://unix.stackexchange.com/questions/140602", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74406/" ] }
140,741
Every Linux user has experienced this annoying thing: you begin typing a long and boring command, then realise you should have executed another one before. How to save the first one to execute it later? Example You begin typing mycommand -a -F --conf /very/long/path --and /another/one /input/file.txt But before pressing "Enter", you realise you should've done cp f.txt /input/file.txt at first. So, you're stuck with your command, and if you don't press Enter you won't be able to have it back using your bash history. What's the best way to handle this?
Hit CTRL - U (kill line - this saves the line in the shell's kill-ring), do what you need to do, then at the new prompt, hit CTRL - Y (yank from kill-ring) to get back the original command. Alternatively, and this is particularly useful if you are in a nested command, such as a while or for loop, hit CTRL - C , which adds the command to history without executing it and clears the line, so you can then recall it using the shell's history mechanism when you are ready to use it.
{ "source": [ "https://unix.stackexchange.com/questions/140741", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74488/" ] }
140,750
After googling a bit I couldn't find a simple way to use a shell command to generate a random decimal integer number included in a specific range, that is between a minimum and a maximum. I read about /dev/random , /dev/urandom and $RANDOM , but none of these can do what I need. Is there another useful command, or a way to use the previous data?
You can try shuf from GNU coreutils: shuf -i 1-100 -n 1
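If shuf is not available, bash's $RANDOM can approximate the same thing; note that $RANDOM only spans 0-32767 and the modulo introduces a small bias, so treat this as a sketch rather than a drop-in replacement:
min=1; max=100
echo $(( RANDOM % (max - min + 1) + min ))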
{ "source": [ "https://unix.stackexchange.com/questions/140750", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48707/" ] }
140,763
I'm so confused with GNU sed , POSIX sed , and BSD sed 's myriad of options. How do I replace literal \n with the newline character using these three sed types?
sed 's/\\n/\
/g'
Notice the backslash in the replacement is immediately followed by a literal newline (hit return right after typing the backslash, before the /g ).
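For the implementation-specific cases (hedged, since behaviour differs between versions): GNU sed also accepts \n directly in the replacement, and in shells with $'...' quoting you can splice a real newline into the script, which keeps BSD sed happy:
# GNU sed only
sed 's/\\n/\n/g' file
# bash/ksh/zsh $'...' quoting; works with BSD sed as well
sed 's/\\n/\'$'\n''/g' file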
{ "source": [ "https://unix.stackexchange.com/questions/140763", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56901/" ] }
140,950
Every now and then some key combination clears my working (Gnu) screen and brings up this message: Screen used by <username> on host01 Password: What key combination causes this and what does it signify?
According to this source it's the key combination Ctrl + a and then x . It signifies locking the screen and unlocking it with your password.
{ "source": [ "https://unix.stackexchange.com/questions/140950", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26026/" ] }
141,013
If I want to convert a pdf to a png image with imagemagick I do something like: convert -trim -density 400 this_is_a_very_long_filename_of_my_pdf_file.pdf this_is_a_very_long_filename_of_my_pdf_file.png The pdf file often has a very long file name for some reason, and I want the png file to have the same name except for the extension. Usually I select this_is_a_very_long_filename_of_my_pdf_file.pdf twice via tab and the zsh menu and then change pdf to png manually for the second argument. Is there a faster way to do this?
You can use brace expansions : convert -trim -density 400 this_is_a_very_long_filename_of_my_pdf_file.{pdf,png}
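Since the question mentions zsh: another option there is the :r modifier, which strips the extension from a parameter (zsh-specific; the filename is just the one from the question):
f=this_is_a_very_long_filename_of_my_pdf_file.pdf
convert -trim -density 400 "$f" "${f:r}.png"    # ${f:r} drops the .pdf suffix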
{ "source": [ "https://unix.stackexchange.com/questions/141013", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5289/" ] }
141,016
I know that "Everything is a file" means that even devices have their filename and path in Unix and Unix-like systems, and that this allows common tools to be used on a variety of resources regardless of their nature. But I can't contrast that to Windows, the only other OS I have worked with. I have read some articles about the concept, but I think they are somewhat hard to grasp for non-developers. A layman's explanation is what people need! For example, when I want to copy a file to a CF card that is attached to a card reader, I will use something like zcat name_of_file > /dev/sdb In Windows, I think the card reader will appear as a drive, and we would do something similar. So, how does the "Everything is a file" philosophy make a difference here?
"Everything is a file" is a bit glib. "Everything appears somewhere in the filesystem " is closer to the mark, and even then, it's more an ideal than a law of system design. For example, Unix domain sockets are not files, but they do appear in the filesystem. You can ls -l a domain socket to display its attributes, modify its access control via chmod , and on some Unix type systems (e.g. macOS, but not Linux) you can even cat data to/from one. But, even though regular TCP/IP network sockets are created and manipulated with the same BSD sockets system calls as Unix domain sockets, TCP/IP sockets do not show up in the filesystem,¹ even though there is no especially good reason that this should be true. Another example of non-file objects appearing in the filesystem is Linux's /proc filesystem . This feature exposes a great amount of detail about the kernel's run-time operation to user space , mostly as virtual plain text files. Many /proc entries are read-only, but a lot of /proc is also writeable, so you can change the way the system runs using any program that can modify a file. Alas, here again we have a nonideality: BSD Unixes run without /proc by default , and the System V Unixes expose a lot less via /proc than Linux does. I can't contrast that to MS Windows First, much of the sentiment you can find online and in books about Unix being all about file I/O and Windows being "broken" in this regard is obsolete. Windows NT fixed a lot of this. Modern versions of Windows have a unified I/O system, just like Unix, so you can read network data from a TCP/IP socket via ReadFile() rather than the Windows Sockets specific API WSARecv() , if you want to. This exactly parallels the Unix Way , where you can read from a network socket with either the generic read(2) Unix system call or the sockets-specific recv(2) call.² Nevertheless, Windows still fails to take this concept to the same level as Unix, even here in 2021. There are many areas of the Windows architecture that cannot be accessed through the filesystem, or that can't be viewed as file-like. Some examples: Drivers. Windows' driver subsystem is easily as rich and powerful as Unix's, but to write programs to manipulate drivers, you generally have to use the Windows Driver Kit , which means writing C or .NET code. On Unix type OSes, you can do a lot to drivers from the command line. You've almost certainly already done this, if only by redirecting unwanted output to /dev/null .³ Inter-program communication. Windows programs don't communicate easily with each other as Unix command line programs do, via text streams and pipes. Unix GUIs are often either built on top of command line programs or export a text command interface, so the same simple text-based communication mechanisms work with GUI programs, too. The registry. Unix has no direct equivalent of the Windows registry. The same information is scattered through the filesystem, largely in /etc , /proc and /sys . If you don't see that drivers, pipes, and Unix's answer to the Windows registry have anything to do with "everything is a file," read on. How does the "Everything is a file" philosophy make a difference here? I will explain that by expanding on my three points above, in detail. Long answer, part 1: Drives vs Device Files Let's say your CF card reader appears as E: under Windows and /dev/sdc under Linux. What practical difference does it make? It is not just a minor syntax difference. On Linux, I can say dd if=/dev/zero of=/dev/sdc to overwrite the contents of /dev/sdc with zeroes. 
Think about what that means for a second. Here I have a normal user space program ( dd(1) ) that I asked to read data in from a virtual device ( /dev/zero ) and write what it read out to a real physical device ( /dev/sdc ) via the unified Unix filesystem. dd doesn't know it is reading from and writing to special devices. It will work on regular files just as well, or on a mix of devices and files, as we will see below. There is no easy way to zero the E: drive on Windows, because Windows makes a distinction between files and drives, so you cannot use the same commands to manipulate them. The closest you can get is to do a disk format without the Quick Format option, which zeroes most of the drive contents, but then writes a new filesystem on top of it. What if I don't want a new filesystem? What if I really do want the disk to be filled with nothing but zeroes? Let's be generous and accept this requirement to put a fresh new filesystem on E: . To do that in a program on Windows, I have to call a special formatting API.⁴ On Linux, you don't need to write a program to access the OS's "format disk" functionality: you just run the appropriate user space program for the filesystem type you want to create, whether that's mkfs.ext4 , mkfs.xfs , or what have you. These programs will write a filesystem onto whatever file or /dev node you pass. Because mkfs type programs on Unixy systems don't make artificial distinctions between devices and normal files, I can create an ext4 filesystem inside a normal file on my Linux box: $ dd if=/dev/zero of=myfs bs=1k count=1k $ mkfs.ext4 -F myfs That creates a 1 MiB disk image called myfs in the current directory. I can then mount it as if it were any other external filesystem: $ mkdir mountpoint $ sudo mount -o loop myfs mountpoint $ grep $USER /etc/passwd > mountpoint/my-passwd-entry $ sudo umount mountpoint Now I have an ext4 disk image with a file called my-passwd-entry in it which contains my user's /etc/passwd entry. If I want, I can blast that image onto my CF card: $ sudo dd if=myfs of=/dev/sdc1 Or, I can pack that disk image up, mail it to you, and let you write it to a medium of your choosing, such as a USB memory stick: $ gzip myfs $ echo "Here's the disk image I promised to send you." | mutt -a myfs.gz -s "Password file disk image" \ [email protected] All of this is possible on Linux⁵ because there is no artificial distinction between files, filesystems, and devices. Many things on Unix systems either are files, or are accessed through the filesystem so they look like files, or in some other way look sufficiently file-like that they can be treated as such. Windows' concept of the filesystem is a hodgepodge; it makes distinctions between directories, drives, and network resources. There are three different syntaxes, all blended together in Windows: the Unix-like ..\FOO\BAR path system, drive letters like C: , and UNC paths like \\SERVER\PATH\FILE.TXT . This is because it's an accretion of ideas from Unix, CP/M , MS-DOS , and LAN Manager , rather than a single coherent design. It is why there are so many illegal characters in Windows file names . Unix has a unified filesystem, with everything accessed by a single common scheme. To a program running on a Linux box, there is no functional difference between /etc/passwd , /media/CF_CARD/etc/passwd , and /mnt/server/etc/passwd . 
Local files, external media, and network shares all get treated the same way.⁶ Windows can achieve similar ends to my disk image example above, but you have to use special programs written by uncommonly talented programmers. This is why there are so many "virtual DVD" type programs on Windows . The lack of a core OS feature has created an artificial market for programs to fill the gap, which means you have a bunch of people competing to create the best virtual DVD type program. We don't need such programs on *ix systems, because we can just mount an ISO disk image using a loop device . The same goes for other tools like disk wiping programs, which we also don't need on Unix systems. Want your CF card's contents irretrievably scrambled instead of just zeroed? Okay, use /dev/random as the data source instead of /dev/zero : $ sudo dd if=/dev/random of=/dev/sdc On Linux, we don't keep reinventing such wheels because the core OS features not only work well enough, they work so well they're used pervasively. For just one example, a typical scheme for booting a Linux box involves a virtual disk image created using techniques like I show above.⁷ I feel it's only fair to point out that if Unix had integrated TCP/IP I/O into the filesystem from the start, we wouldn't have the netcat vs socat vs Ncat vs nc mess , the cause of which was the same design weakness that lead to the disk imaging and wiping tool proliferation on Windows: lack of an acceptable OS facility. Long Answer, part 2: Pipes as Virtual Files Despite its roots in DOS, Windows never has had a rich command line tradition. This is not to say that Windows doesn't have a command line, or that it lacks many command line programs. Windows even has a very powerful command shell these days, appropriately called PowerShell . Yet, there are knock-on effects of this lack of a command-line tradition. You get tools like DISKPART which is almost unknown in the Windows world, because most people do disk partitioning and such through the Computer Management MMC snap-in. Then when you do need to script the creation of partitions, you find that DISKPART wasn't really made to be driven by another program. Yes, you can write a series of commands into a script file and run it via DISKPART /S scriptfile , but it's all-or-nothing. What you really want in such a situation is something more like GNU parted , which will accept single commands like parted /dev/sdb mklabel gpt . That allows your script to do error handling on a step-by-step basis. What does all this have to do with "everything is a file"? Easy: pipes make command line program I/O into "files," of a sort. Pipes are unidirectional streams , not random-access like a regular disk file, but in many cases the difference is of no consequence. The important thing is that you can attach two independently developed programs and make them communicate via simple text. In that sense, any two programs designed with the Unix Way in mind can communicate. In those cases where you really do need a file, it is easy to turn program output into a file: $ some-program --some --args > myfile $ vi myfile But why write the output to a temporary file when the "everything is a file" philosophy gives you a better way? If all you want to do is read the output of that command into a vi editor buffer, vi can do that for you directly. From the vi "normal" mode, say: :r !some-program --some --args That inserts that program's output into the active editor buffer at the current cursor position. 
Under the hood, vi is using pipes to connect the output of the program to a bit of code that uses the same OS calls it would use to read from a file instead. I wouldn't be surprised if the two cases of :r — that is, with and without the ! — both used the same generic data reading loop in all common implementations of vi . I can't think of a good reason not to. This isn't a recent feature of vi , either; it goes clear back to the ancient ed(1) text editor. This powerful idea pops up over and over in Unix. For a second example of this, recall my mutt email command above. The only reason I had to write that as two separate commands is that I wanted the temporary file to be named *.gz , so that the email attachment would be correctly named. If I didn't care about the file's name, I could have used process substitution to avoid creating the temporary file: $ echo "Here's the disk image I promised to send you." | mutt -a <(gzip -c myfs) -s "Password file disk image" \ [email protected] That avoids the temporary by turning the output of gzip -c into a FIFO (which is file-like) or a /dev/fd object (which is file-like).⁸ For yet a third way this powerful idea appears in Unix, consider gdb on Linux systems. This is the debugger used for any software written in C and C++. Programmers coming to Unix from other systems look at gdb and almost invariably gripe about it, "Yuck, it's so primitive!" Then they go searching for a GUI debugger, find one of several that exist, and happily continue their work...often never realizing that the GUI just runs gdb underneath, providing a pretty shell on top of it. There aren't competing low-level debuggers on most Unix systems because there is no need for programs to compete at that level. All we need is one good low-level tool that we can all base our high-level tools on, if that low-level tool communicates easily via pipes. This means we now have a documented debugger interface which would allow drop-in replacement of gdb . It's unfortunate that the primary competitor to gdb didn't take this low-friction path , but that quibble aside, lldb is just as scriptable as gdb . To pull the same thing off on a Windows box, the creators of the replaceable tool would have had to define some kind of formal plugin or automation API. That means it doesn't happen except for the very most popular programs, because it's a lot of work to build both a normal command line user interface and a complete programming API. This magic happens through the grace of pervasive text-based IPC . Although Windows' kernel has Unix-style anonymous pipes , it's rare to see normal user programs use them for IPC outside of a command shell, because Windows lacks this tradition of creating all core services in a command line version first, then building the GUI on top of it separately. This leads to being unable to do some things without the GUI, which is one reason why there are so many remote desktop systems for Windows, as compared to Linux. This is doubtless part of the reason why Linux is the operating system of the cloud, where everything's done by remote management. Command line interfaces are easier to automate than GUIs in large part because "everything is a file." Consider SSH. You may ask, how does it work? SSH connects a network socket (which is file-like) to a pseudo tty at /dev/pty* (which is file-like). Now your remote system is connected to your local one through a connection that so seamlessly matches the Unix Way that you can pipe data through the SSH connection , if you need to. 
Are you getting an idea of just how powerful this concept is now? A piped text stream is indistinguishable from a file from a program's perspective, except that it's unidirectional. A program reads from a pipe the same way it reads from a file: through a file descriptor . FDs are absolutely core to Unix; the fact that files and pipes use the same abstraction for I/O on both should tell you something.⁹ The Windows world, lacking this tradition of simple text communications, makes do with heavyweight OOP interfaces via COM or .NET . If you need to automate such a program, you must also write a COM or .NET program. This is a fair bit more difficult than setting up a pipe on a Unix box. Windows programs lacking these complicated programming APIs can only communicate through impoverished interfaces like the clipboard or File/Save followed by File/Open. Long Answer, part 3: The Registry vs Configuration Files The practical difference between the Windows registry and the Unix Way of system configuration also illustrates the benefits of the "everything is a file" philosophy. On Unix type systems, I can look at system configuration information from the command line merely by examining files. I can change system behavior by modifying those same files. For the most part, these configuration files are just plain text files, which means I can use any tool on Unix to manipulate them that can work with plain text files. Scripting the registry is not nearly so easy on Windows. The easiest method is to make your changes through the Registry Editor GUI on one machine, then blindly apply those changes to other machines with regedit via *.reg files . That isn't really "scripting," since it doesn't let you do anything conditionally: it's all or nothing. If your registry changes need any amount of logic, the next easiest option is to learn PowerShell , which amounts to learning .NET system programming. It would be like if Unix only had Perl, and you had to do all ad hoc system administration through it. Now, I'm a Perl fan, but not everyone is. Unix lets you use any tool you happen to like, as long as it can manipulate plain text files. Footnotes: Plan 9 fixed this design misstep, exposing network I/O via the /net virtual filesystem . Bash has a feature called /dev/tcp that allows network I/O via regular filesystem functions. Since it is a Bash feature, rather a kernel feature, it isn't visible outside of Bash or on systems that don't use Bash at all . This shows, by counterexample, why it is such a good idea to make all data resources visible through the filesystem. By "modern Windows," I mean Windows NT and all of its direct descendants, which includes Windows 2000, all versions of Windows Server, and all desktop-oriented versions of Windows from XP onward. I use the term to exclude the DOS-based versions of Windows, being Windows 95 and its direct descendants, Windows 98 and Windows ME, plus their 16-bit predecessors. You can see the distinction by the lack of a unified I/O system in those latter OSes. You cannot pass a TCP/IP socket to ReadFile() on Windows 95; you can only pass sockets to the Windows Sockets APIs. See Andrew Schulman's seminal article, Windows 95: What It's Not for a deeper dive into this topic. Make no mistake, /dev/null is a real kernel device on Unix type systems, not just a special-cased file name, as is the superficially equivalent NUL in Windows. 
Although Windows tries to prevent you from creating a NUL file, it is possible to bypass this protection with mere trickery , fooling Windows' file name parsing logic. If you try to access that file with cmd.exe or Explorer, Windows will refuse to open it, but you can write to it via Cygwin, since it opens files using similar methods to the example program, and you can delete it via similar trickery . By contrast, Unix will happily let you rm /dev/null , as long as you have write access to /dev , and let you recreate a new file in its place, all without trickery, because that dev node is just another file. While that dev node is missing, the kernel's null device still exists; it's just inaccessible until you recreate the dev node via mknod . You can even create additional null device dev nodes elsewhere: it doesn't matter if you call it /home/grandma/Recycle Bin , as long as it's a dev node for the null device, it will work exactly the same as /dev/null . There are actually two high-level "format disk" APIs in Windows: SHFormatDrive() and Win32_Volume.Format() . There are two for a very...well... Windows sort of reason. The first one asks Windows Explorer to display its normal "Format Disk" dialog box, which means it works on any modern version of Windows, but only while a user is interactively logged in. The other you can call in the background without user input, but it wasn't added to Windows until Windows Server 2003. That's right, core OS behavior was hidden behind a GUI until 2003, in a world where Unix shipped mkfs from day 1 . The /etc/mkfs in my copy of Unix V5 from 1974 is a 4136 byte statically-linked PDP-11 executable. (Unix didn't get dynamic linkage until the late 1980s , so it's not like there's a big library somewhere else doing all the real work.) Its source code — included in the V5 system image as /usr/source/s2/mkfs.c — is an entirely self-contained 457-line C program. There aren't even any #include statements! This means you can not only examine what mkfs does at a high level, you can experiment with it using the same tool set Unix was created with, just like you're Ken Thompson , four decades ago. Try that with Windows. The closest you can come today is to download the DOS source code , first released in 2014 , which you find amounts to just a pile of assembly sources. It will only build with obsolete tools you probably won't have on-hand, and in the end you get your very own copy of DOS 2.0, an OS far less powerful than 1974's Unix V5 , despite its being released nearly a decade later. (Why talk about Unix V5? Because it is the earliest complete Unix system still available. Earlier versions are apparently lost to time . There was a project that pieced together a V1/V2 era Unix, but it appears to be missing mkfs , despite the existence of the V1 manual page linked above proving it must have existed somewhere, somewhen. Either those putting this project together couldn't find an extant copy of mkfs to include, or I suck at finding files without find(1) , which also doesn't exist in that system. :) ) Now, you might be thinking, "Can't I just call format.com ? Isn't that the same on Windows as calling mkfs on Unix?" Alas, no, it isn't the same, for a bunch of reasons: First, format.com wasn't designed to be scripted. It prompts you to "press ENTER when ready", which means you need to send an Enter key to its input, or it'll just hang. 
Then, if you want anything more than a success/failure status code, you have to open its standard output for reading, which is far more complicated on Windows than it has to be . (On Unix, everything in that linked article can be accomplished with a simple popen(3) call.) Having gone through all this complication, the output of format.com is harder to parse for computer programs than the output of mkfs , being intended primarily for human consumption. If you trace what format.com does, you find that it does a bunch of complicated calls to DeviceIoControl() , ufat.dll , and such. It is not simply opening a device file and writing a new filesystem onto that device. This is the sort of design you get from a company that employs 176000 people worldwide and needs to keep employing them. When talking about loop devices, I talk only about Linux rather than Unix in general because loop devices aren't portable between Unix type systems. There are similar mechanisms in macOS, BSD, etc., but the syntax varies somewhat . Back in the days when disk drives were the size of washing machines and cost more than the department head's luxury car, big computer labs would share a larger proportion of their collective disk space as compared to modern computing environments. The ability to transparently graft a remote disk into the local filesystem made such distributed systems far easier to use. This is where we get /usr/share , for instance. Contrast Windows, where a remote disk is typically either mapped to a drive letter or must be accessed through a UNC path, rather than integrated transparently into the local filesystem. Drive letters offer you few choices for symbolic expression; does P: refer to the "public" space on BigServer, or to the "packages" directory on the software mirror server? UNC paths mean you have to remember which server your remote files are on, which gets difficult in a large organization with hundreds or thousands of file servers. Windows didn't get symlinks until Windows Vista, released in 2007, which introduced NTFS symbolic links , and they weren't made usable until a decade later . Windows' symbolic links are a bit more powerful than Unix's symbolic links — a feature of Unix since since 1977 — in that they can also point to a remote file share, not just to a local path. Unix did that differently, via NFS in 1984 , which builds on top of Unix's preexisting mount point feature, which it has had since the beginning. So, depending on how you look at it, Windows trailed Unix by roughly 2, 3, or 4 decades. You may object, "But it has Unix-style symlinks now! " Yet this misses the point, since it means there is no decades-old tradition of using them on Windows, so most people are unaware of them in a world where Unix systems use them pervasively, so it is not possible to use a Unix system for very long without learning about them. It doesn't help that Windows' MKLINK program is backwards , and you still can't create them from Windows Explorer, whereas the Unix equivalents to Windows Explorer typically do let you create symlinks. Linux boxes don't always use a virtual disk image in the boot sequence. There are a bunch of different ways to do it . Bash chooses the method based on the system's capabilities, since /dev/fd isn't available everywhere. Network socket descriptors are FDs underneath, too, by the way.
{ "source": [ "https://unix.stackexchange.com/questions/141016", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59882/" ] }
141,097
How can I enable and do code folding in Vim? Do I have to change anything in ~/.vimrc ? I type z + a and z + c and z + o and nothing happens. Here is an article about folding: Code folding in Vim .
No, you don't have to put the commands from the page you linked to in your ~/.vimrc ; you can just type them after issuing : in vim to get the command prompt. However, if you put the lines: set foldmethod=indent set foldnestmax=10 set nofoldenable set foldlevel=2 as indicated in the link you gave, in your ~/.vimrc , you don't have to type them every time you want to use folding in a file. The set nofoldenable makes sure that files are "normal", i.e. not folded, when you open them.
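With those options in place, the z commands from the question should respond. A quick interactive check, typed inside vim, might look like this:
:set foldmethod=indent
:set foldenable
Then, in normal mode: za toggles the fold under the cursor, zo opens it, zc closes it, zR opens all folds and zM closes them all.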
{ "source": [ "https://unix.stackexchange.com/questions/141097", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49764/" ] }
141,114
Is there any relation between GNU and GNOME? And how does the GPL license relate to them?
@rob is right. GNOME is technically an official GNU project. However, there is a lot of interesting history. Let's roll back the clock It's 1996. There are no desktop environments. Users and sysadmins assemble environments from a hodge-podge of programs. Different window managers, different applications, maybe a dock. There are two major toolkits on the market: Qt and GTK+. Qt had been around for a while, and was a commercial product of a company called Trolltech. GTK+ had also been around for a fair while. It was loosely associated with the FSF, since it was originally written for use in the GIMP. There were more toolkits, like (for example) Motif, but for the purposes of this discussion, we don't care about them. The Kool Desktop Environment, also known as KDE, was created in October of that year in response to the fact that there was no unified desktop environment for UNIX systems. (The KDE project quickly dropped "Kool" in favor of just an undefined "K". It was clearly a good choice.) The creator of KDE, Matthias Ettrich, chose to use Qt for his new desktop. This was a major problem for the free software community. It meant that in order to use the awesome, free desktop that Matthias had created, they would have to install proprietary software - Qt. What to do? The FSF responded with not one but two projects, both working in parallel just in case one didn't pan out. The first was a project called Harmony. Harmony was intended to be an LGPL-licensed, API-compatible free software clone of Qt. The idea was that the community would keep KDE, simply replacing the proprietary bit. The Harmony project never really worked out. Development went on for about 4 years before Qt was relicensed in 2000 to be fully free software (as defined by the FSF), thus eliminating the original motivation for Harmony. Due to both the relicensing and the success of the second project, Harmony was abandoned. I bet you've guessed what the second project was by now. It was GNOME. Tying it all together I've given the history above. Now let's tie it all together in a nice knot. So, to answer your question: yes, there is a relationship between GNU and GNOME. GNOME is the official desktop environment of the GNU project and is therefore an official GNU project and a part of the GNU operating system. Historically, it was created by GNU in response to KDE's dependence on Qt. In fact, the G in GNOME stands for GNU. The full acronym expands to GNU Network Object Model Environment - this refers to a technology that was planned but never implemented, as the project decided that it "didn't fit with the core GNOME vision". That being said, GNOME is a huge project now. GTK+ is maintained by the GNOME people nowadays, for example, instead of being an independent project. It is safe to say that GNOME as an entity is independent of GNU, even though they are historically and technically related. GNOME has its own infrastructure; its own community; its own governance processes. As a side note, this is also why GNOME and KDE are (friendly) rivals nowadays. It is because back in 1996, when KDE was founded, GNOME was created with the express purpose of directly competing with KDE. And that rivalry has persisted all the way up until the present.
{ "source": [ "https://unix.stackexchange.com/questions/141114", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
141,248
I've used the following command to change the color of the status bar at the bottom of the screen: set -g status-bg colour244 But I don't know how to change the color of the lines that divide the panes; currently, they're a mix of the original green and gray (color244). man tmux gives me a lot of info about the status line but this seems to refer to the status bar itself, not the dividing lines. I suspect I'm just missing some terminology here.
You want pane-active-border-style and pane-border-style : See the entry in the man page: pane-active-border-style style Set the pane border style for the currently active pane. For how to specify style, see the message-command-style option. Attributes are ignored. pane-border-style style Set the pane border style for pane as aside from the active pane. For how to specify style, see the message-command-style option. Attributes are ignored. So, in your ~/.tmux.conf you could specify colours like so: # border colours set -g pane-border-style fg=magenta set -g pane-active-border-style "bg=default fg=magenta" Note, I use tmux 1.9a, and I find I get more consistent behaviour using: set -g pane-border-fg magenta set -g pane-active-border-fg green set -g pane-active-border-bg default
{ "source": [ "https://unix.stackexchange.com/questions/141248", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65634/" ] }
141,281
I have a Red Hat Linux machine, version 6.4 (64-bit). I notice that the rcp command does not exist on my machine (no rcp binary). I also searched Google for an rcp binary that will work on my Linux machine, but with no success. Where can I download rcp ?
{ "source": [ "https://unix.stackexchange.com/questions/141281", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67059/" ] }
141,436
I created this file structure: test/src test/firefox When I run this command: ln -s test/src test/firefox I would expect a symbolic link test/firefox/src to be created pointing to test/src , however I get this error instead: -bash: cd: src: Too many levels of symbolic links What am I doing wrong? Can you not create a symbolic link to one folder which is stored in a sibling of that folder? What's the point of this?
As Dubu points out in a comment, the issue lies in your relative paths. I had a similar problem symlinking my nginx configuration from /usr/local/etc/nginx to /etc/nginx . If you create your symlink like this: cd /usr/local/etc ln -s nginx/ /etc/nginx You will in fact make the link /etc/nginx -> /etc/nginx, because the source path is relative to the link's path. The solution is as simple as using absolute paths: ln -s /usr/local/etc/nginx /etc/nginx If you want to use relative paths and have them behave the way you probably expect them to, you can use the $PWD variable to easily add in the path to the current working directory path, like so: cd /usr/local/etc ln -s "$PWD/nginx/" /etc/nginx Make sure that the path is in double quotes, to make sure things like spaces in your current path are escaped. Note that you must use double quotes when doing this, as $PWD will not be substituted if you use single quotes.
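GNU coreutils ln also has a --relative flag that computes the relative path for you from absolute arguments (added around coreutils 8.16, so check your version first):
# creates /etc/nginx pointing at ../usr/local/etc/nginx, computed automatically
ln -s -r /usr/local/etc/nginx /etc/nginx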
{ "source": [ "https://unix.stackexchange.com/questions/141436", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63845/" ] }
141,480
I have a directory in which I would like to list all the content (files and sub directories) without showing the symbolic links. I am using GNU utilities on Linux. The ls version is 8.13. Example: Full directory listing: ~/test$ ls -Gg total 12 drwxrwxr-x 2 4096 Jul 9 10:29 dir1 drwxrwxr-x 2 4096 Jul 9 10:29 dir2 drwxrwxr-x 2 4096 Jul 9 10:29 dir3 -rw-rw-r-- 1 0 Jul 9 10:29 file1 -rw-rw-r-- 1 0 Jul 9 10:29 file2 lrwxrwxrwx 1 5 Jul 9 10:29 link1 -> link1 lrwxrwxrwx 1 5 Jul 9 10:30 link2 -> link2 What I would like to get ~/test$ ls -somthing (or bash hack) total 12 dir1 dir2 dir3 file1 file2 NOTE: My main motivation is to do a recursive grep (GNU grep 2.10) without following symlinks.
For the stated question you can use find : find . -mindepth 1 ! -type l will list all files and directories in the current directory or any subdirectories that are not symlinks. mindepth 1 is just to skip the . current-directory entry. The meat of it is the combination of -type l , which means "is a symbolic link", and ! , which means negate the following test. In combination they match every file that is not a symlink. This lists all files and directories recursively, but no symlinks. If you just want regular files (and not directories): find . -type f To include only the direct children of this directory, and not all others recursively: find . -mindepth 1 -maxdepth 1 You can combine those (and other) tests together to get the list of files you want. To execute a particular grep on every file matching the tests you're using, use -exec : find . -type f -exec grep -H 'some pattern' '{}' + The '{}' will be replaced with the files. The + is necessary to tell find your command is done. The option -H forces grep to display a file name even if it happens to run with a single matching file.
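On the recursive-grep motivation specifically: recent GNU grep distinguishes -r (does not follow symlinks it meets while descending) from -R (follows them). I am not certain the distinction already exists in grep 2.10, so test it first, or fall back to the find -exec form above:
grep -r 'some pattern' .    # recursive, skipping symlinked files/directories it encounters
grep -R 'some pattern' .    # recursive, following symlinks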
{ "source": [ "https://unix.stackexchange.com/questions/141480", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29436/" ] }
141,488
I need to get some specific columns from the output of the free command. The approach I'm using doesn't seem good; could anyone please suggest a better way? bash-3.2$ free -gotsi total used free shared buffers cached Mem: 56 29 27 0 0 29 Swap: 11 0 11 Total: 67 29 38 bash-3.2$ bash-3.2$ free -gotsi | cut -c-40,64- total used free cached Mem: 56 29 27 29 Swap: 11 0 11 Total: 67 29 38
{ "source": [ "https://unix.stackexchange.com/questions/141488", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74910/" ] }
141,539
While trying to convert a text file into its ASCII equivalent, I get the error message iconv: illegal input sequence at position . The command I use is iconv -f UTF-8 -t ascii//TRANSLIT file The offending character is æ . The text file itself is available here . Why does it say illegal sequence? The input character is a proper UTF-8 character (U+00E6).
The file is encoded in ISO-8859-1, not in UTF-8: $ hd 0606461.txt | grep -B1 '^0002c520' 0002c510 64 75 6d 20 66 65 72 69 65 6e 74 20 72 75 69 6e |dum ferient ruin| 0002c520 e6 0d 0a 2d 2d 48 6f 72 61 63 65 2e 0d 0a 0d 0a |...--Horace.....| And the byte "e6" alone is not a valid UTF-8 sequence. So, use iconv -f latin1 -t ascii//TRANSLIT file .
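Before choosing the -f argument, it is worth asking file what the encoding actually is (GNU/Linux file; the exact wording of the output varies by version):
file -i 0606461.txt
# e.g.  0606461.txt: text/plain; charset=iso-8859-1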
{ "source": [ "https://unix.stackexchange.com/questions/141539", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23301/" ] }
141,572
Does make install with a Makefile call /usr/bin/install most of the time? What necessary work does /usr/bin/install do besides copying the just-compiled files to /usr/local/bin ? The man page says /usr/bin/install copies files and sets attributes. What attributes are so important to set? Does it just set permission modes and owner/group, and are those really necessary?
install offers a number of features in addition to copying files to a directory. the -s option removes the symbol table from an executable, saving space the -m option sets the permission bits. The files sitting in the developer's directory were created subject to his or her umask, which may prevent others from executing them. install -m 755 file1 /usr/local/bin ensures that everyone can execute the file, which is likely what the developer wants for a file in a shared directory. the -o and -g options set the owner and group. With cp , the owner and group of the destination file would be set to the uid and gid of whoever ran the cp , and with cp -p , the owner and group of the destination file would be the same as the file in the build directory, neither of which might be what the developer wants. The wall program needs to be in group tty , the screen program needs to be group utmp , etc. it reduces the number of commands that need to be put in a makefile recipe. install -s -m 755 -o root -g bin file1 file2 lib/* $(DESTDIR) is more succinct than the four commands cp , strip , chmod , and chown . The last bullet point is likely why the install command was invented and why many makefiles use it. Install isn't always used, though. I've seen cp -r lib $(DESTDIR)/lib when there's a whole tree full of stuff to copy, and ./install.sh if the developer prefers to use a custom script. Many packages have a install.sh derived from the one that comes with X11, which is like install but supports a -t (transform) option to rename the destination files in a specified way.
{ "source": [ "https://unix.stackexchange.com/questions/141572", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
141,573
I'm trying to use a factor utility but it tells me that number is too large. Is there any utility that can do what factor doing but not tells that number is too large?
install offers a number of features in addition to copying files to a directory. the -s option removes the symbol table from an executable, saving space the -m option sets the permission bits. The files sitting in the developer's directory were created subject to his or her umask, which may prevent others from executing them. install -m 755 file1 /usr/local/bin ensures that everyone can execute the file, which is likely what the developer wants for a file in a shared directory. the -o and -g options set the owner and group. With cp , the owner and group of the destination file would be set to the uid and gid of whoever ran the cp , and with cp -p , the owner and group of the destination file would be the same as the file in the build directory, neither of which might be what the developer wants. The wall program needs to be in group tty , the screen program needs to be group utmp , etc. it reduces the number of commands that need to be put in a makefile recipe. install -s -m 755 -o root -g bin file1 file2 lib/* $(DESTDIR) is more succinct than the four commands cp , strip , chmod , and chown . The last bullet point is likely why the install command was invented and why many makefiles use it. Install isn't always used, though. I've seen cp -r lib $(DESTDIR)/lib when there's a whole tree full of stuff to copy, and ./install.sh if the developer prefers to use a custom script. Many packages have a install.sh derived from the one that comes with X11, which is like install but supports a -t (transform) option to rename the destination files in a specified way.
{ "source": [ "https://unix.stackexchange.com/questions/141573", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74982/" ] }
141,600
I want to see a list of all outgoing HTTP requests from my desktop. I think it should be possible to monitor HTTPS hostnames as well for local clients using Server Name Indication (SNI). OS X has a nice GUI utility called Little Snitch , which is a per-app HTTP monitor and firewall rule front-end. I would settle for a nice terminal utility. tcpdump is overkill as I just want to see where the traffic is going in real-time and not the transmitted data. Ideally, I would like to see what process made the request as well, but just seeing what dials home would be a nice start.
You can use lsof and watch to do this, like so: $ watch -n1 lsof -i TCP:80,443 Example output dropbox 3280 saml 23u IPv4 56015285 0t0 TCP greeneggs.qmetricstech.local:56003->snt-re3-6c.sjc.dropbox.com:http (ESTABLISHED) thunderbi 3306 saml 60u IPv4 56093767 0t0 TCP greeneggs.qmetricstech.local:34788->ord08s09-in-f20.1e100.net:https (ESTABLISHED) mono 3322 saml 15u IPv4 56012349 0t0 TCP greeneggs.qmetricstech.local:54018->204-62-14-135.static.6sync.net:https (ESTABLISHED) chrome 11068 saml 175u IPv4 56021419 0t0 TCP greeneggs.qmetricstech.local:42182->stackoverflow.com:http (ESTABLISHED)
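If DNS lookups make the output slow, or you only care about one application, lsof's -n / -P options skip name resolution and -a / -c AND in a command-name filter. A hedged variant (chrome is just an example process name):
watch -n1 'lsof -n -P -a -i TCP:80,443 -c chrome'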
{ "source": [ "https://unix.stackexchange.com/questions/141600", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7670/" ] }
141,650
Say I have a file: PRO 1 GLN 5.55112e-17 ILE -6.245e-17 THR 5.55112e-17 I want every line that has a number unequal to 1 in the second column to change it to 0 and keep the rest. If I use if (i.e conditional statement), everything is OK: awk '{if($2!=1){print $1,"0"}else{print $0}}' file PRO 1 GLN 0 ILE 0 THR 0 But when I use the conditional block, something undesired happens: awk '$2!=1 {print $1,"0"} {print $0}' file PRO 1 GLN 0 GLN 5.55112e-17 ILE 0 ILE -6.245e-17 THR 0 THR 5.55112e-17 You can see what's wrong. How do I fix this error? Why does this error occur? What's the different between a conditional statement and a conditional block?
In an if statement, you have an else . If the if condition doesn't match, the else branch is executed instead. With two separate pattern-action blocks, both actions are evaluated independently, regardless of whether the first condition is true or false — so lines matching the first pattern end up printed twice (once modified, once as-is). A simple fix: $ awk '$2!=1 {print $1,"0";next};{print $0}' file PRO 1 GLN 0 ILE 0 THR 0 And you can make it more concise: $ awk '$2 != 1 {print $1,"0";next};1' file PRO 1 GLN 0 ILE 0 THR 0 When the condition 1 is true and there is no action, awk's default behavior is to print . print with no argument will print $0 by default.
{ "source": [ "https://unix.stackexchange.com/questions/141650", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60602/" ] }
143,845
Is there a way to tell ping to show its usual termination statistics without stopping the execution? For instance, I'd like to quickly view: --- 8.8.8.8 ping statistics --- 2410 packets transmitted, 2274 received, +27 errors, 5% packet loss, time 2412839ms rtt min/avg/max/mdev = 26.103/48.917/639.493/52.093 ms, pipe 3 without having to stop the program, thus losing the accumulated data.
From the ping manpage (emphasis mine): When the specified number of packets have been sent (and received) or if the program is terminated with a SIGINT, a brief summary is displayed. Shorter current statistics can be obtained without termination of process with signal SIGQUIT. So this will work if you're fine with your stats being slightly less verbose: # the second part is only for showing you the PID ping 8.8.8.8 & jobs ; fg <... in another terminal ...> kill -SIGQUIT $PID Short statistics look like this: 19/19 packets, 0% loss, min/avg/ewma/max = 0.068/0.073/0.074/0.088 ms
{ "source": [ "https://unix.stackexchange.com/questions/143845", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53301/" ] }
143,958
With ksh I'm using read as a convenient way to separate values: $ echo 1 2 3 4 5 | read a b dump $ echo $b $a 2 1 $ But it fails in Bash: $ echo 1 2 3 4 5 | read a b dump $ echo $b $a $ I didn't find a reason in the man page why it fails, any idea?
bash runs the right-hand side of a pipeline in a subshell context , so changes to variables (which is what read does) are not preserved — they die when the subshell does, at the end of the command. Instead, you can use process substitution : $ read a b dump < <(echo 1 2 3 4 5) $ echo $b $a 2 1 In this case, read is running within our primary shell, and our output-producing command runs in the subshell. The <(...) syntax creates a subshell and connects its output to a pipe, which we redirect into the input of read with the ordinary < operation . Because read ran in our main shell the variables are set correctly. As pointed out in a comment, if your goal is literally to split a string into variables somehow, you can use a here string : read a b dump <<<"1 2 3 4 5" I assume there's more to it than that, but this is a better option if there isn't.
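On bash 4.2 or newer there is also the lastpipe shell option, which runs the last element of a pipeline in the current shell; note it only takes effect when job control is off (e.g. in scripts), so this is a sketch for non-interactive use:
shopt -s lastpipe
echo 1 2 3 4 5 | read a b dump
echo "$b $a"    # prints: 2 1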
{ "source": [ "https://unix.stackexchange.com/questions/143958", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48684/" ] }
144,016
I saw "pager" in several places: less is a terminal pager program on Unix option -P for man Specify which output pager to use. What is a pager? How is it related to and different from a terminal? Thanks.
As the name implies, a pager is a piece of software that presents output to the user one page at a time: it reads the terminal's number of rows and displays that many lines, waiting for a keypress before showing the next screenful. The most popular pagers in a UNIX text environment are more and less . The latter name is something of a joke, as less can actually do more than more .
{ "source": [ "https://unix.stackexchange.com/questions/144016", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
144,029
I have a question regarding the ports in Linux. If I connect my device via USB and want to check its port I can't do it using the command lsusb, which only specifies bus number and device number on this bus: [ziga@Ziga-PC ~]$ lsusb Bus 003 Device 007: ID 0403:6001 Future Technology Devices International, Ltd FT232 USB-Serial (UART) IC Is there a command that tells me the port the device is connected to directly? Only way to do this until now was to disconect and reconnect and using the command: [ziga@Ziga-PC ~]$ dmesg | grep tty [ 0.000000] console [tty0] enabled [ 0.929510] 00:09: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 4.378109] systemd[1]: Starting system-getty.slice. [ 4.378543] systemd[1]: Created slice system-getty.slice. [ 8.786474] usb 3-4.4: FTDI USB Serial Device converter now attached to ttyUSB0 In the last line it can be seen that my device is connected to /dev/ttyUSB0 .
I'm not quite certain what you're asking. You mention 'port' several times, but then in your example, you say the answer is /dev/ttyUSB0 , which is a device dev path, not a port. So this answer is about finding the dev path for each device. Below is a quick and dirty script which walks through devices in /sys looking for USB devices with a ID_SERIAL attribute. Typically only real USB devices will have this attribute, and so we can filter with it. If we don't, you'll see a lot of things in the list that aren't physical devices. #!/bin/bash for sysdevpath in $(find /sys/bus/usb/devices/usb*/ -name dev); do ( syspath="${sysdevpath%/dev}" devname="$(udevadm info -q name -p $syspath)" [[ "$devname" == "bus/"* ]] && exit eval "$(udevadm info -q property --export -p $syspath)" [[ -z "$ID_SERIAL" ]] && exit echo "/dev/$devname - $ID_SERIAL" ) done On my system, this results in the following: /dev/ttyACM0 - LG_Electronics_Inc._LGE_Android_Phone_VS930_4G-991c470 /dev/sdb - Lexar_USB_Flash_Drive_AA26MYU15PJ5QFCL-0:0 /dev/sdb1 - Lexar_USB_Flash_Drive_AA26MYU15PJ5QFCL-0:0 /dev/input/event5 - Logitech_USB_Receiver /dev/input/mouse1 - Logitech_USB_Receiver /dev/input/event2 - Razer_Razer_Diamondback_3G /dev/input/mouse0 - Razer_Razer_Diamondback_3G /dev/input/event3 - Logitech_HID_compliant_keyboard /dev/input/event4 - Logitech_HID_compliant_keyboard Explanation: find /sys/bus/usb/devices/usb*/ -name dev Devices which show up in /dev have a dev file in their /sys directory. So we search for directories matching this criteria. syspath="${sysdevpath%/dev}" We want the directory path, so we strip off /dev . devname="$(udevadm info -q name -p $syspath)" This gives us the path in /dev that corresponds to this /sys device. [[ "$devname" == "bus/"* ]] && exit This filters out things which aren't actual devices. Otherwise you'll get things like USB controllers & hubs. The exit exits the subshell, which flows to the next iteration of the loop. eval "$(udevadm info -q property --export -p $syspath)" The udevadm info -q property --export command lists all the device properties in a format that can be parsed by the shell into variables. So we simply call eval on this. This is also the reason why we wrap the code in the parenthesis, so that we use a subshell, and the variables get wiped on each loop. [[ -z "$ID_SERIAL" ]] && exit More filtering of things that aren't actual devices. echo "/dev/$devname - $ID_SERIAL" I hope you know what this line does :-)
{ "source": [ "https://unix.stackexchange.com/questions/144029", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9135/" ] }
144,172
ORIGINAL QUESTION: If I have 2 identical hard drives with the following characteristics: SATA 6.0 Gb/s 5400 rpm 3TB How long should a full dd copy take to complete? So far it's been running for 5 hours and still going... I am using Linux Ubuntu 12.04 64bit and the command I am using is: dd if=/dev/sdb of=/dev/sdc UPDATE: 1 I can now see the progress, and it's been 6+ hours for copy 430GB. The HDD is 3TB... Is there no faster way of doing this? UPDATE: 2 This seems a lot better than before (Thanks to Groxxda for the suggestions ): sudo dd if=/dev/sdb bs=128K | pv -s 3000G | sudo dd of=/dev/sdc bs=128K ETA is about 9 hours for 3TB, whereas before it reached 430GB after 6 hours, so I am guessing it would have taken about 36 hours using the previous command.
dd was useful in the old days when people used tapes (when block sizes mattered) and when simpler tools such as cat might not be binary-safe. Nowadays, dd if=/dev/sdb of=/dev/sdc is a just complicated, error-prone, slow way of writing cat /dev/sdb >/dev/sdc . While dd still useful for some relatively rare tasks , it is a lot less useful than the number of tutorials mentioning it would let you believe. There is no magic in dd , the magic is all in /dev/sdb . Your new command sudo dd if=/dev/sdb bs=128K | pv -s 3000G | sudo dd of=/dev/sdc bs=128K is again needlessly slow and complicated. The data is read 128kB at a time (which is better than the dd default of 512B, but not as good as even larger values). It then goes through two pipes before being written. Use the simpler and faster cat command. (In some benchmarks I made a couple of years ago under Linux, cat was faster than cp for a copy between different disks, and cp was faster than dd with any block size; dd with a large block size was slightly faster when copying onto the same disk.) cat /dev/sdb >/dev/sdc If you want to run this command in sudo , you need to make the redirection happen as root: sudo sh -c 'cat /dev/sdb >/dev/sdc' If you want a progress report, since you're using Linux, you can easily get one by noting the PID of the cat process (say 1234) and looking at the position of its input (or output) file descriptor. # cat /proc/1234/fdinfo/0 pos: 64155648 flags: 0100000 If you want a progress report and your unix variant doesn't provide an easy way to get at a file descriptor positions, you can install and use pv instead of cat .
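If you do want a progress bar and pv is available, a minimal variant (still assuming /dev/sdb is the source and /dev/sdc the destination) is:
sudo sh -c 'pv /dev/sdb > /dev/sdc'
pv can usually determine the total size of a block device, so it shows percentage and ETA as well as throughput.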
{ "source": [ "https://unix.stackexchange.com/questions/144172", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4430/" ] }
144,268
I am moving a website from one server to another and Git does not store metadata such as file permissions. I need to find the directories and files that are not 775 / 664 respectively. Right now, I'm using this cobbled-together contraption: $ find . -type d -exec ls -la {} \; | grep ^d | grep -v ^drwxrwxr-x $ find . -type f -exec ls -la {} \; | grep -v ^d | grep -v ^-rw-rw-r-- | grep -v '.git' Though this works, I feel it is rather hacky. Is there a better way to do this, perhaps a canonical way, or should I just be hacky? This is running on a recent Ubuntu version with GNU tools under Bash.
Use the -perm test to find in combination with -not : find -type d -not -perm 775 -o -type f -not -perm 664 -perm 775 matches all files with permissions exactly equal to 775 . -perm 664 does the same for 664 . -not (boolean NOT) negates the test that follows, so it matches exactly the opposite of what it would have: in this case, all those files that don't have the correct permissions. -o (boolean OR) combines two sets of tests together, matching when either of them do: it has the lowest precedence, so it divides our tests into two distinct groups. You can also use parentheses to be more explicit. Here we match directories with permissions that are not 775 and ordinary files with permissions that are not 664 . If you wanted two separate commands for directories and files, just cut it in half at -o and use each half separately: find -type f -not -perm 664 find -type d -not -perm 775
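If the end goal is to repair the permissions rather than just list the offenders (as the site-migration scenario suggests), the same tests can drive chmod directly — a sketch:
find . -type d -not -perm 775 -exec chmod 775 {} +
find . -type f -not -perm 664 -exec chmod 664 {} +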
{ "source": [ "https://unix.stackexchange.com/questions/144268", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9760/" ] }
144,277
My ibus input method was broken after an update. So I switched to fcitx . Actually, there are a handful IMs (Input Methods) installed in my Fedora 19 system, e.g. fcitx, ibus, yong, etc. However, I don't know how to configure them. My default IM for gnome-terminal is yong , ibus for gmrun . As for firefox or chrome , I guess they use ibus by default, because GTK_IM_MODULE=ibus . There are just-work solutions. I can switch IM by right-click-menu in some applications like gnome-terminal or gmrun . I can also specify IM with GTK_IM_MODULE . But how to do it automatically? I know the IM settings have something to do with configuration files like ~/.xinputrc /etc/X11/xinit/xinputrc /etc/X11/xinit/xinput.d/ibus.conf /etc/X11/xinit/xinput.d/fcitx.conf /etc/X11/xinit/xinitrc /etc/alternatives/xinputrc The questions are How to configure IM properly? What configuration files really matter? In which execution order?
Use the -perm test to find in combination with -not : find -type d -not -perm 775 -o -type f -not -perm 664 -perm 775 matches all files with permissions exactly equal to 775 . -perm 664 does the same for 664 . -not (boolean NOT) negates the test that follows, so it matches exactly the opposite of what it would have: in this case, all those files that don't have the correct permissions. -o (boolean OR) combines two sets of tests together, matching when either of them do: it has the lowest precedence, so it divides our tests into two distinct groups. You can also use parentheses to be more explicit. Here we match directories with permissions that are not 775 and ordinary files with permissions that are not 664 . If you wanted two separate commands for directories and files, just cut it in half at -o and use each half separately: find -type f -not -perm 664 find -type d -not -perm 775
{ "source": [ "https://unix.stackexchange.com/questions/144277", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20713/" ] }
144,298
I would like to delete the last character of a string, I tried this little script : #! /bin/sh t="lkj" t=${t:-2} echo $t but it prints "lkj", what I am doing wrong?
In a POSIX shell, the syntax ${t:-2} means something different - it expands to the value of t if t is set and non-null, and otherwise to the value 2 . To trim a single character by parameter expansion, the syntax you probably want is ${t%?} Note that in ksh93 , bash or zsh , ${t:(-2)} or ${t: -2} (note the space) are legal as a substring expansion but are probably not what you want, since they return the substring starting at a position 2 characters in from the end (i.e. for t=lkj they expand to kj , dropping the first character rather than the last). See the Shell Parameter Expansion section of the Bash Reference Manual for more info: Bash Reference Manual – Shell Parameter Expansion
{ "source": [ "https://unix.stackexchange.com/questions/144298", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77361/" ] }
144,391
Is there a better way to compress and then encrypt files other than tar followed by openssl or gpg ?
tar is the usual tool to bundle files. Plain tar itself doesn't compress. There are separate tools such as gzip , bzip2 and xz (in increasing order of compression ratio on typical files) that compress one file. Many tar implementation, including GNU tar (the normal implementation on Linux), can automatically compress with an option ( -z for gzip, -j for bzip2, -J for xz): tar -cJf myarchive.tar.xz file1 file2 file3 To encrypt a file, use gpg . Create a key and associate it with your email address (GPG/PGP key identifiers usually contain an email address, though it is not necessary ). Encrypt your files, specifying your email as the recipient. To decrypt a file, you'll need to enter the passphrase to unlock your private key. GPG also lets you encrypt a file with a password. This is less secure and less flexible. It's less flexible because you need to specify the password when encrypting (so for example you can't make unattended backups). It's less secure because the only security is the password, whereas key-based encryption splits the security between the password and the key. Don't use the openssl command line tool. It's a showcase for the OpenSSL library, not designed for production use. Although you can do some things with it (in particular, it does have all the primitives needed for a basic certification authority), it's hard to use correctly and it doesn't have all you need to do things right. Where GPG gives you a bicycle, OpenSSL gives you some metal rods of various sizes and a couple of rubber chambers (screws and pump not included). Use GPG.
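Putting the two steps together, a minimal sketch (the recipient address and file names are placeholders) might be:
tar -cJf - mydir | gpg --encrypt --recipient you@example.com --output mydir.tar.xz.gpg
gpg --decrypt mydir.tar.xz.gpg | tar -xJf -
Compressing before encrypting is the right order: encrypted data looks random and will not compress afterwards.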
{ "source": [ "https://unix.stackexchange.com/questions/144391", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/75027/" ] }
144,476
I'm connected on a host via ssh and I'd like to compare (let's say with diff ) a certain config file against its counterpart on an another host, also accessible via ssh , without having to manually download the remote file first before running the diff.
ssh user@remote_host "cat remote_file.txt" | diff - local_file.txt Source
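If neither copy is local, bash process substitution lets you diff two remote files in one go (host names and the path are just examples):
diff <(ssh user@host1 'cat /etc/myapp.conf') <(ssh user@host2 'cat /etc/myapp.conf')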
{ "source": [ "https://unix.stackexchange.com/questions/144476", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18455/" ] }
144,514
Let's say that I want to run a command through Bash like this: /bin/bash -c "ls -l" According to Bash man page, I could also run it like this: # don't process arguments after this one # | pass all unprocessed arguments to command # | | # V V /bin/bash -c ls -- -l except it doesn't seem to work (it seems ignored). Am I doing something wrong, or am I interpreting man page wrong? Relevant quotes from man: If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0. and A -- signals the end of options and disables further option processing. Any arguments after the -- are treated as filenames and arguments.
You're interpreting the man page wrong. Firstly, the part about -- signalling the end of options is irrelevant to what you're trying to do. The -c overrides the rest of the command line from that point on, so that it's no longer going through bash's option handling at all, meaning that the -- would be passed through to the command, not handled by bash as an end of options marker. The second mistake is that extra arguments are assigned as positional parameters to the shell process that's launched, not passed as arguments to the command. So, what you're trying to do could be done as one of: /bin/bash -c 'echo "$0" "$1"' foo bar /bin/bash -c 'echo "$@"' bash foo bar In the first case, passing echo the parameters $0 and $1 explicitly, and in the second case, using "$@" to expand as normal as "all positional parameters except $0". Note that in that case we have to pass something to be used as $0 as well; I've chosen "bash" since that's what $0 would normally be, but anything else would work. As for the reason it's done this way, instead of just passing any arguments you give directly to the command you list: note that the documentation says "command s are read from string", plural. In other words, this scheme allows you to do: /bin/bash -c 'mkdir -p -- "$1" && cd -P -- "$1" && touch -- "$2"' bash dir file But, note that a better way to meet your original goal might be to use env rather than bash : /usr/bin/env -- "ls" "-l" If you don't need any of the features that a shell is providing, there's no reason to use it - using env in this case will be faster, simpler, and less typing. And you don't have to think as hard to make sure it will safely handle filenames containing shell metacharacters or whitespace.
{ "source": [ "https://unix.stackexchange.com/questions/144514", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59541/" ] }
144,515
Our Unix team often uses Samba to join machines to the domain. The command they have traditionally used is: net join ADS -w [domain name] -U [username] I am one of our AD admins and I am trying to find out how to get them to be able to join to a specific OU so we can have all of the Samba machines organized in AD. From all of my research, it seems that this should work: net join ads "Servers/Samba" -w [domain] -U [username] This still allows the machine to join to the domain without issue but it keeps ending up in the 'computers' container and we receive no errors. I have made sure on the AD side that the user they are using have join domain rights and create/delete computer objects on both the "servers" OU tree and the "computers" container. What am I missing? I can't find much documentation on the Samba net commands without having access to a unix box with it. Also, I noticed in most examples people always had 'net ads join...' rather than 'net join ads...' - our Unix admin got errors when trying to use net ads join. I do not know why our syntax seems different then most examples I found but I wanted to point it out. Here are some sites that support my research: https://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/domain-member.html http://www.members.optushome.com.au/~wskwok/poptop_ads_howto_6a.htm
You're interpreting the man page wrong. Firstly, the part about -- signalling the end of options is irrelevant to what you're trying to do. The -c overrides the rest of the command line from that point on, so that it's no longer going through bash's option handling at all, meaning that the -- would be passed through to the command, not handled by bash as an end of options marker. The second mistake is that extra arguments are assigned as positional parameters to the shell process that's launched, not passed as arguments to the command. So, what you're trying to do could be done as one of: /bin/bash -c 'echo "$0" "$1"' foo bar /bin/bash -c 'echo "$@"' bash foo bar In the first case, passing echo the parameters $0 and $1 explicitly, and in the second case, using "$@" to expand as normal as "all positional parameters except $0". Note that in that case we have to pass something to be used as $0 as well; I've chosen "bash" since that's what $0 would normally be, but anything else would work. As for the reason it's done this way, instead of just passing any arguments you give directly to the command you list: note that the documentation says "command s are read from string", plural. In other words, this scheme allows you to do: /bin/bash -c 'mkdir -p -- "$1" && cd -P -- "$1" && touch -- "$2"' bash dir file But, note that a better way to meet your original goal might be to use env rather than bash : /usr/bin/env -- "ls" "-l" If you don't need any of the features that a shell is providing, there's no reason to use it - using env in this case will be faster, simpler, and less typing. And you don't have to think as hard to make sure it will safely handle filenames containing shell metacharacters or whitespace.
{ "source": [ "https://unix.stackexchange.com/questions/144515", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77506/" ] }
144,518
I am currently working with the bsub job submission system from Platform LSF. I need to submit job scripts, but I am running into a problem with passing arguments to the job script. What I need is as follows: the job script is a bash script that takes one command line argument. I need to pass the argument to that script, then redirect the script to be used as input to bsub. The problem is that the argument isn't being applied to the script. I have tried: bsub < script.bsub 1 script.bsub is my script and 1 is the numeric argument. This approach doesn't work. The 1 is being treated as an argument to bsub. Another approach: bsub < "script.bsub 1" This approach makes bsub treat the entire double-quoted line as a filename. Is there a way to apply a command line argument to a script first, then redirect that script with the argument into another program as input? If anyone has experience with bsub that would be even better. UPDATE AND WORKAROUND: It seems applying a command line argument to a bsub file is a very complicated process. I tried the HEREDOC method stated below, but bsub acted as if the script filename was a command. So the easiest way to get around this problem is just to not use input redirection at all. To pass an argument to a script to be run in bsub, first specify all bsub arguments in the command line rather than in the script file. Then to run the script file, use "sh script.sh [arg]" after all of the bsub arguments. Thus the entire line will look something like: bsub -q [queue] -J "[name]" -W 00:10 [other bsub args] "sh script.sh [script args]" This doesn't answer my original question, but at least with bsub this is the simplest resolution to my problem.
You're interpreting the man page wrong. Firstly, the part about -- signalling the end of options is irrelevant to what you're trying to do. The -c overrides the rest of the command line from that point on, so that it's no longer going through bash's option handling at all, meaning that the -- would be passed through to the command, not handled by bash as an end of options marker. The second mistake is that extra arguments are assigned as positional parameters to the shell process that's launched, not passed as arguments to the command. So, what you're trying to do could be done as one of: /bin/bash -c 'echo "$0" "$1"' foo bar /bin/bash -c 'echo "$@"' bash foo bar In the first case, passing echo the parameters $0 and $1 explicitly, and in the second case, using "$@" to expand as normal as "all positional parameters except $0". Note that in that case we have to pass something to be used as $0 as well; I've chosen "bash" since that's what $0 would normally be, but anything else would work. As for the reason it's done this way, instead of just passing any arguments you give directly to the command you list: note that the documentation says "command s are read from string", plural. In other words, this scheme allows you to do: /bin/bash -c 'mkdir -p -- "$1" && cd -P -- "$1" && touch -- "$2"' bash dir file But, note that a better way to meet your original goal might be to use env rather than bash : /usr/bin/env -- "ls" "-l" If you don't need any of the features that a shell is providing, there's no reason to use it - using env in this case will be faster, simpler, and less typing. And you don't have to think as hard to make sure it will safely handle filenames containing shell metacharacters or whitespace.
{ "source": [ "https://unix.stackexchange.com/questions/144518", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77508/" ] }
144,600
In the history, i would simply edit a file and then reboot the whole server. i would clone the line that had port 22 open change it to 80 and then save the file.. and reboot the whole system so the iptables would start with port 80 open. but in the recent times.. that file is no longer in existent in my centos 6.5 O.S. most answers on google suggest i must interact with iptables in order to enable and disable ports. is it possible to not interact with iptables but rather just see everything infront of you as one editable file ?
In CentOS you have the file /etc/sysconfig/iptables . If you don't have it there, you can create it simply by using iptables-save to dump the current rule set into a file: iptables-save > /etc/sysconfig/iptables To load the file you don't need to restart the machine; you can use iptables-restore : iptables-restore < /etc/sysconfig/iptables
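On CentOS 6 the init script can also write that file for you, and the saved rules are applied automatically at boot when the iptables service is enabled — roughly:
service iptables save
chkconfig iptables on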
{ "source": [ "https://unix.stackexchange.com/questions/144600", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
144,606
I bought a samsung laptop series 5. There are a 24gb ssd and normal hard disk. I need to install linux ubuntu on this notebook. Can I just install the ubuntu on the build-in 24gb ssd? If not, how can I use the 24gb ssd for anything?
In CentOS you have the file /etc/sysconfig/iptables if you don't have it there, you can create it simply by using iptables-save to dump the current rule set into a file. iptables-save > /etc/sysconfig/iptables To load the file you don't need to restart the machine, you can use iptables-restore iptables-restore < /etc/sysconfig/iptables
{ "source": [ "https://unix.stackexchange.com/questions/144606", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77579/" ] }
144,656
For example, can I set: gb = cd /media/Dan/evolution ... so that every time I execute gb in bash, I can cd to that particular directory? I found something online: the alias command. But it seems that it can't do the work above. Is it possible to do it? How?
Just type: alias gb='cd /media/Dan/evolution' To make this setting permanent (so that it sticks after you restart or open another console), add this line to the file ~/.bashrc (assuming you use bash as your default shell).
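If you later want to jump into subdirectories of that path too, a small shell function (which, unlike an alias, can take an argument) is a common alternative — for example:
gb() { cd /media/Dan/evolution/"$1"; }
Running gb with no argument still lands in the base directory; gb subdir changes into that subdirectory.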
{ "source": [ "https://unix.stackexchange.com/questions/144656", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74226/" ] }
144,702
The ssh-keygen generates the following output: The key fingerprint is: dd:e7:25:b3:e2:5b:d9:f0:25:28:9d:50:a2:c9:44:97 user@machine The key's randomart image is: +--[ RSA 2048]----+ | .o o.. | | o +Eo | | + . | | . + o | | S o = * o| | . o @.| | . = o| | . o | | o. | +-----------------+ What is the purpose of this image, does it provide any value for the user? Note this is a client (user) key, not a host key.
This was explained in this question: https://superuser.com/questions/22535/what-is-randomart-produced-by-ssh-keygen . It doesn't really have any use for the user generating the key, rather it's for ease of validation. Personally. would you rather look at this: (Please note this is a host key example) 2048 1b:b8:c2:f4:7b:b5:44:be:fa:64:d6:eb:e6:2f:b8:fa 192.168.1.84 (RSA) 2048 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48 gist.github.com,207.97.227.243 (RSA) 2048 a2:95:9a:aa:0a:3e:17:f4:ac:96:5b:13:3b:c8:0a:7c 192.168.2.17 (RSA) 2048 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48 github.com,207.97.227.239 (RSA) Which, being a human, it'd take you a good while longer to verify, or this: 2048 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48 gist.github.com,207.97.227.243 (RSA) +--[ RSA 2048]----+ | . | | + . | | . B . | | o * + | | X * S | | + O o . . | | . E . o | | . . o | | . . | +-----------------+ 2048 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48 github.com,207.97.227.239 (RSA) +--[ RSA 2048]----+ | . | | + . | | . B . | | o * + | | X * S | | + O o . . | | . E . o | | . . o | | . . | +-----------------+ Examples pulled from http://sanscourier.com/blog/2011/08/31/what-the-what-are-ssh-fingerprint-randomarts-and-why-should-i-care/ Essentially, the random art generated by the user's keys can also be used in the same sort of way. If the image generated initially is different from the current image of the key, for example if you had moved a key, then the key had likely been tampered with, corrupted, or replaced. This, from the other question is a really good read: http://users.ece.cmu.edu/~adrian/projects/validation/validation.pdf
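To see this in practice for host keys, the OpenSSH client can print the server's randomart every time you connect by enabling VisualHostKey, either on the command line or in ~/.ssh/config :
ssh -o VisualHostKey=yes user@host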
{ "source": [ "https://unix.stackexchange.com/questions/144702", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20334/" ] }
144,718
I'm stumped. I have a script in my /home directory which is executable: [user@server ~]$ ll total 4 -rwx------ 1 user user 2608 Jul 15 18:23 qa.sh However, when I attempt to run it with sudo it says it can't find it: [user@server ~]$ sudo ./qa.sh [sudo] password for user: sudo: unable to execute ./qa.sh: No such file or directory This is on a fresh build. No changes have been made which would cause problems. In fact, the point of the script is to ensure that it is actually built according to our policies. Perhaps maybe it isn't and sudo is actually being broken during the build? I should also note that I can run sudo with other commands in other directories. EDIT: The script ( I didn't write it so don't /bin/bash me over it, please ;) ) #! /bin/bash . /root/.bash_profile customer=$1 if [ -z "$customer" ]; then echo "Customer not provided. Exiting..." exit 1 fi space () { echo echo '###########################################################################' echo '###########################################################################' echo '###########################################################################' echo } g=/bin/egrep $g ^Listen /etc/ssh/sshd_config $g ^PermitR /etc/ssh/sshd_config $g ^LogL /etc/ssh/sshd_config $g ^PubkeyA /etc/ssh/sshd_config $g ^HostbasedA /etc/ssh/sshd_config $g ^IgnoreR /etc/ssh/sshd_config $g ^PermitE /etc/ssh/sshd_config $g ^ClientA /etc/ssh/sshd_config space $g 'snyder|rsch|bream|shud|mweb|dam|kng|cdu|dpr|aro|pvya' /etc/passwd ; echo ; echo ; $g 'snyder|rsch|bream|shud|mweb|dam|kng|cdu|dpr|aro|pvya' /etc/shadow space $g 'dsu|scan' /etc/passwd ; echo ; echo ; $g 'dsu|scan' /etc/shadow space $g ${customer}admin /etc/passwd space chage -l ${customer}admin space $g 'urs|cust|dsu' /etc/sudoers space $g dsu /etc/security/access.conf space $g account /etc/pam.d/login space /sbin/ifconfig -a | $g addr | $g -v inet6 space echo "10.153.156.0|10.153.174.160|10.120.80.0|10.152.80.0|10.153.193.0|172.18.1.0|10.153.173.0" echo $g '10.153.156.0|10.153.174.160|10.120.80.0|10.152.80.0|10.153.193.0|172.18.1.0|10.153.173.0' /etc/sysconfig/network-scripts/route-eth1 space cat /etc/sysconfig/network-scripts/route-eth2 space netstat -rn | tail -1 space cat /etc/sysconfig/iptables space cat /etc/hosts space ##file /usr/local/groundwork ; echo ; echo ; /sbin/service gdma status ##space cat /etc/resolv.conf space HOSTNAME=`echo $HOSTNAME | awk -F. '{ print $1 }'` nslookup ${HOSTNAME} echo echo nslookup ${HOSTNAME}-mgt echo echo nslookup ${HOSTNAME}-bkp space /sbin/service rhnsd status ; echo ; echo ; /sbin/chkconfig --list rhnsd ; echo ; echo ; yum update --security space /sbin/service osad status ; echo ; echo ; /sbin/chkconfig --list osad space /sbin/service sshd status ; echo ; echo ; /sbin/chkconfig --list sshd space /sbin/service snmpd status ; echo ; echo ; /sbin/chkconfig --list snmpd ; echo ; echo ; echo ; cat /etc/snmp/snmpd.conf space df -h space cat /proc/cpuinfo | $g ^processor space free -g space if [ -f /etc/rsyslog.conf ]; then tail -3 /etc/rsyslog.conf else echo "This system is not running rsyslog." fi rm -f $0
This usually happens when the shebang ( #! ) line in your script is broken. The shebang is what tells the kernel the file needs to be executed using an interpreter. When run without sudo , the message is a little more meaningful. But with sudo you get the message you got. For example: $ cat test.sh #!/bin/foo echo bar $ ./test.sh bash: ./test.sh: /bin/foo: bad interpreter: No such file or directory $ bash test.sh bar $ sudo ./test.sh sudo: unable to execute ./test.sh: No such file or directory $ sudo bash ./test.sh bar The bad interpreter message clearly indicates that it's the shebang which is faulty.
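A frequent cause of a "broken" shebang is an invisible carriage return left over from editing the file on Windows; one way to check for it and strip it (assuming GNU sed) is:
head -n1 qa.sh | cat -A     # a trailing ^M indicates DOS line endings
sed -i 's/\r$//' qa.sh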
{ "source": [ "https://unix.stackexchange.com/questions/144718", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77650/" ] }
144,752
I know that we can escape a special character like *(){}$ with \ so as to be considered literals. For example \* or \$ But in case of . I have to do it twice, like \\. otherwise it is considered special character. Example: man gcc | grep \\. Why is it so?
Generally, you only have to escape once to make a special character be treated as a literal. Sometimes you have to do it twice, because your pattern is used by more than one program. Let's discuss your example: man gcc | grep \\. This command is interpreted by two programs, the bash interpreter and grep . The first escape causes bash to treat the \ as literal, so the second one is passed on to grep . If you escape only once, \. , bash will treat the dot as literal and pass a bare . to grep . When grep sees this . , it treats the dot as a special character, not a literal. If you escape twice, bash will pass the pattern \. to grep . Now grep knows that it is a literal dot.
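Single quotes avoid the double escape entirely, since bash passes the quoted text to grep untouched; for example, both of these hand grep the same pattern:
man gcc | grep '\.'
man gcc | grep \\.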
{ "source": [ "https://unix.stackexchange.com/questions/144752", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55673/" ] }
144,794
I interrupted tcpdump with Ctrl + C and got this total summary: 579204 packets captured 579346 packets received by filter 142 packets dropped by kernel What are the "packets dropped by kernel"? Why does that happen?
From the tcpdump manual: packets ``dropped by kernel'' (this is the number of packets that were dropped, due to a lack of buffer space, by the packet capture mechanism in the OS on which tcpdump is running, if the OS reports that information to applications; if not, it will be reported as 0). A bit of explanation: tcpdump captures raw packets passing through a network interface. The packets have to be parsed and filtered according to rules specified by you in the command line, and that takes some time, so incoming packets have to be buffered (queued) for processing. Sometimes there are too many packets: they are saved to a buffer, but they arrive faster than they can be processed, so eventually the buffer runs out of space and the kernel drops all further packets until there is some free space in the buffer again. You can increase the buffer size with the -B ( --buffer-size ) option like this: tcpdump -B 4096 .... Note that the size is specified in kilobytes, so the line above sets the buffer size to 4MB.
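Writing the capture to a file instead of the terminal and narrowing the filter also reduce drops; a sketch where the interface, buffer size and filter are only examples:
tcpdump -i eth0 -B 8192 -n -w capture.pcap port 443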
{ "source": [ "https://unix.stackexchange.com/questions/144794", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19072/" ] }
144,812
Can we generate a unique id for each PC, something like uuuidgen, but it will never change unless there are hardware changes? I was thinking about merging CPUID and MACADDR and hash them to generate a consistent ID, but I have no idea how to parse them using bash script, what I know is how can I get CPUID from dmidecode -t 4 | grep ID and ifconfig | grep ether then I need to combine those hex strings and hash them using sha1 or md5 to create fixed length hex string. How can I parse that output?
How about these two: $ sudo dmidecode -t 4 | grep ID | sed 's/.*ID://;s/ //g' 52060201FBFBEBBF $ ifconfig | grep eth1 | awk '{print $NF}' | sed 's/://g' 0126c9da2c38 You can then combine and hash them with: $ echo $(sudo dmidecode -t 4 | grep ID | sed 's/.*ID://;s/ //g') \ $(ifconfig | grep eth1 | awk '{print $NF}' | sed 's/://g') | sha256sum 59603d5e9957c23e7099c80bf137db19144cbb24efeeadfbd090f89a5f64041f - To remove the trailing dash, add one more pipe: $ echo $(sudo dmidecode -t 4 | grep ID | sed 's/.*ID://;s/ //g') \ $(ifconfig | grep eth1 | awk '{print $NF}' | sed 's/://g') | sha256sum | awk '{print $1}' 59603d5e9957c23e7099c80bf137db19144cbb24efeeadfbd090f89a5f64041f As @mikeserv points out in his answer , the interface name can change between boots. This means that what is eth0 today might be eth1 tomorrow, so if you grep for eth0 you might get a different MAC address on different boots. My system does not behave this way so I can't really test but possible solutions are: Grep for HWaddr in the output of ifconfig but keep all of them, not just the one corresponding to a specific NIC. For example, on my system I have: $ ifconfig | grep HWaddr eth1 Link encap:Ethernet HWaddr 00:24:a9:bd:2c:28 wlan0 Link encap:Ethernet HWaddr c4:16:19:4f:ac:g5 By grabbing both MAC addresses and passing them through sha256sum , you should be able to get a unique and stable name, irrespective of which NIC is called what: $ echo $(sudo dmidecode -t 4 | grep ID | sed 's/.*ID://;s/ //g') \ $(ifconfig | grep -oP 'HWaddr \K.*' | sed 's/://g') | sha256sum | awk '{print $1}' 662f0036cba13c2ddcf11acebf087ebe1b5e4044603d534dab60d32813adc1a5 Note that the hash is different from the ones above because I am passing both MAC addresses returned by ifconfig to sha256sum . Create a hash based on the UUIDs of your hard drive(s) instead: $ blkid | grep -oP 'UUID="\K[^"]+' | sha256sum | awk '{print $1}' 162296a587c45fbf807bb7e43bda08f84c56651737243eb4a1a32ae974d6d7f4
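Depending on how stable an identifier you need, two existing identifiers may already do the job: /etc/machine-id (present on systemd-based distributions, but note it identifies the OS installation rather than the hardware) and the DMI system UUID (needs root and DMI support):
cat /etc/machine-id
sudo dmidecode -s system-uuid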
{ "source": [ "https://unix.stackexchange.com/questions/144812", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/317/" ] }
144,924
How can I create a message box from the command line, either GUI message boxes or message boxes shown inside the terminal? It would also be interesting to be able to get a simple input back from the user, for example, an input given with radio buttons (yes/no, OK, etc).
For a standard "box around a message", use boxes : echo 'This is a test' | boxes boxes will look like this (First one. Second one is a custom like cowsay ): If you mean an alert box, use notify-send : notify-send 'title' 'message' notify-send looks like this: You also can use zenity for a popup window: zenity --error --text="An error occurred\!" --title="Warning\!" Zenity is more graphical and has more options, like having the window appear as a question, using: zenity --question --text="Do you wish to continue/?" or even progress bars, using: find /usr | zenity --progress --pulsate --auto-close --auto-kill --text="Working..." zenity looks like this: Or use dialog , for a command-line only message box: dialog --checklist "Choose OS:" 15 40 5 \ 1 Linux off \ 2 Solaris on \ 3 'HP UX' off \ 4 AIX off dialog looks like this: Another option is whiptail : whiptail --title "Example Dialog" --msgbox "This is an example of a message box. You must hit OK to continue." 8 78 whiptail looks like this: And if you are truly crazy, use toilet : toilet -F border -F gay "CRAZY" toilet looks like this: Source for boxes Source for dialog 1 Source for dialog 2 Source for zenity 1 Source for zenity 2 Source for whiptail 1 Source for whiptail 2 Source for toilet
{ "source": [ "https://unix.stackexchange.com/questions/144924", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62511/" ] }
144,931
I installed vnstat on my Ubuntu 14.04 server to track my internet usage (I have a limited monthly transfer). The database never updates, however. I've tried uninstalling/reinstalling, but that doesn't work. What do I need to do to get vnstat to update and be accurate?
For a standard "box around a message", use boxes : echo 'This is a test' | boxes boxes will look like this (First one. Second one is a custom like cowsay ): If you mean an alert box, use notify-send : notify-send 'title' 'message' notify-send looks like this: You also can use zenity for a popup window: zenity --error --text="An error occurred\!" --title="Warning\!" Zenity is more graphical and has more options, like having the window appear as a question, using: zenity --question --text="Do you wish to continue/?" or even progress bars, using: find /usr | zenity --progress --pulsate --auto-close --auto-kill --text="Working..." zenity looks like this: Or use dialog , for a command-line only message box: dialog --checklist "Choose OS:" 15 40 5 \ 1 Linux off \ 2 Solaris on \ 3 'HP UX' off \ 4 AIX off dialog looks like this: Another option is whiptail : whiptail --title "Example Dialog" --msgbox "This is an example of a message box. You must hit OK to continue." 8 78 whiptail looks like this: And if you are truly crazy, use toilet : toilet -F border -F gay "CRAZY" toilet looks like this: Source for boxes Source for dialog 1 Source for dialog 2 Source for zenity 1 Source for zenity 2 Source for whiptail 1 Source for whiptail 2 Source for toilet
{ "source": [ "https://unix.stackexchange.com/questions/144931", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74725/" ] }
145,019
I can't seem to change the hostname on my CentOS 6.5 host. I am following instructions I found on this (now defunct) page . I set my /etc/hosts like so ... [root@mig-dev-006 ~]# cat /etc/hosts 127.0.0.1 localhost localhost.localdomain 192.168.32.128 ost-dev-00.domain.example ost-dev-00 192.168.32.129 ost-dev-01.domain.example ost-dev-01 ... then I make my /etc/sysconfig/network file like so ... [root@mig-dev-006 ~]# cat /etc/sysconfig/network NETWORKING=yes HOSTNAME=ost-dev-00.domain.example NTPSERVERARGS=iburst ... then I run hostname like so ... [root@mig-dev-006 ~]# hostname ost-dev-00.domain.example ... and then I run bash and all seems well ... [root@mig-dev-006 ~]# bash ... but when I restart my network the old hostname comes back: [root@ost-dev-00 ~]# /etc/init.d/network restart Shutting down interface eth0: Device state: 3 (disconnected) [ OK ] Shutting down loopback interface: [ OK ] Bringing up loopback interface: [ OK ] Bringing up interface eth0: Active connection state: activating Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/6 state: activated Connection activated [ OK ] [root@ost-dev-00 ~]# bash [root@mig-dev-006 ~]#
To change the hostname permanently, you need to change it in two places: vi /etc/sysconfig/network NETWORKING=yes HOSTNAME=newHostName and (a good idea if you have any applications that need to resolve the IP of the hostname): vi /etc/hosts 127.0.0.1 newHostName 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 and then reboot the system.
{ "source": [ "https://unix.stackexchange.com/questions/145019", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31667/" ] }
145,131
Is it possible to take an image from the clipboard and output it to a file (using X)? I can do this with text easily: $ xclip -selection c -o > file.text But when I try the above with an image nothing is written. The reason I want to do this is I don't have an image editor installed, and it got me thinking whether I could do this without installing one.
You can actually do this with xclip using the -t option. See what targets are available: $ xclip -selection clipboard -t TARGETS -o TARGETS image/png text/html Note the image/png target; go ahead and get it: $ xclip -selection clipboard -t image/png -o > /tmp/clipboard.png Refer to the ICCCM Section 2.6.2 for further reading. Note: xclip SVN revision 81 (from April 2010) or later patches are required.
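The same -t mechanism works in the other direction, e.g. to load an image file into the clipboard (the file name is just an example):
xclip -selection clipboard -t image/png -i screenshot.png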
{ "source": [ "https://unix.stackexchange.com/questions/145131", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42906/" ] }
145,150
I have to verify the length of a variable read from input (my script limits the characters inserted to five), and I thought about something like this: #!/bin/bash read string check=${#string} echo $check if [ $check -ge 5 ]; then echo "error" ; exit else echo "done" fi Is there a more "elegant" solution?
More elegant? No. Shorter? Yes :) #!/bin/bash read string if [ ${#string} -ge 5 ]; then echo "error" ; exit else echo "done" fi And if you don't mind trading elegance for brevity, you can have a script with 2 lines less: #!/bin/bash read string [ ${#string} -ge 5 ] && echo "error" || echo "done" You could use double brackets if you think it is safer. Explanation here .
{ "source": [ "https://unix.stackexchange.com/questions/145150", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40628/" ] }
145,160
I have the following if block in my bash script: if [ ${PACKAGENAME} -eq kakadu-v6_4-00902C ]; then echo "successfully entered if block!!" fi The script execution is not entering my if block even though $PACKAGENAME is equal to kakadu-v6_4-00902C . What am I doing wrong?
-eq is an arithmetic operator, which compares two numbers. Use = (portable/standard sh ), =~ or == instead. Also use quotes, because if ${PACKAGENAME} contains a whitespace or wildcard character, then it will be split into multiple arguments, which makes [ see more arguments than desired. See here for a list of common bash pitfalls. if [ "${PACKAGENAME}" = 'kakadu-v6_4-00902C' ]; then echo "successfully entered if block!!" fi See man bash , and search ( / ) for CONDITIONAL EXPRESSIONS .
{ "source": [ "https://unix.stackexchange.com/questions/145160", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3811/" ] }
145,181
Is there a way to use mosh without giving up the local scrollback? Basically, in some circumstances, IP-roaming is indeed useful and needed, but the extra terminal emulation and key prediction seems to only be getting rid of the local scrollback buffer lines and the session history.
Filippo Valsorda has a solution for OS X that incorporates iTerm 2, tmux, and mosh . His solution uses a single window/tab to connect to a remote shell. The shell survives disconnects (e.g., connection failure, IP changes, laptop reboots) and supports scrollback with a touchpad, copy-paste, and colors. Caveats are that you must build mosh from source, scrolling is less fluid than native, and click-drag is relayed, so you must hold Option to select. iTerm In the Terminal Profile settings, Enable xterm mouse reporting and set Report Terminal Type to xterm-256color . tmux Set ~/.tmux.conf on the server to the following. With these settings, if you try to attach and there are no sessions, a new one is created. The settings also enable mouse interactions (and thus touchpad scrolling). new-session set-window-option -g mode-mouse on set -g history-limit 30000 Note: On more recent tmux (i.e. > 2.1), as reported by tmux -V , the various mouse options (mouse-resize-pane, mouse-mode, etc.) have been rewritten to a single option mouse , so you have to change the second line above to set-window-option -g mouse on instead. This mouse scroll will also work when you are in keyboard scroll mode (e.g. Ctrl - b then [ ), described in the article How to scroll in tmux . mosh The stable build of mosh is old and does not support mouse reporting (and touchpad scrolling). To install the latest version, do the following: OS X (your client) brew install --HEAD mobile-shell Linux/UNIX (the server) git clone https://github.com/keithw/mosh.git cd mosh/ sudo apt-get build-dep mosh ./autogen.sh && ./configure && make sudo make install Now, to connect, just type the following: mosh HOST -- tmux a
{ "source": [ "https://unix.stackexchange.com/questions/145181", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28354/" ] }
145,182
I am trying to copy .ssh/id_rsa.pub from our central server to multiple servers. I have the following script which I usually use to push changes to the different servers. #!/bin/bash for ip in $(<IPs); do # Tell the remote server to start bash, but since its # standard input is not a TTY it will start bash in # noninteractive mode. ssh -q "$ip" bash <<-'EOF' EOF done But in this case, I need to cat the public key on the local server and then add that to multiple servers. Is there a way by using the above here document script to execute the following. cat .ssh/id_rsa.pub |ssh [email protected] 'cat > .ssh/authorized_keys'
With this simple loop you can automate it and spread your key to all the remote servers: #!/bin/bash for ip in `cat /home/list_of_servers`; do ssh-copy-id -i ~/.ssh/id_rsa.pub $ip done
{ "source": [ "https://unix.stackexchange.com/questions/145182", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67186/" ] }
145,247
I am trying to understand what %CPU means when I run top . I am seeing %CPU for my application at "400" or "500" most of the time. Does anyone know what this means? 19080 david 20 0 27.9g 24g 12m S 400 19.7 382:31.81 paper_client lscpu gives me this output: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 32 On-line CPU(s) list: 0-31 Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 45 Stepping: 7 CPU MHz: 2599.928 BogoMIPS: 5199.94 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 20480K NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30 NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31
%CPU -- CPU Usage : The percentage of your CPU that is being used by the process. By default, top displays this as a percentage of a single CPU. On multi-core systems, you can have percentages that are greater than 100%. For example, if 3 cores are at 60% use, top will show a CPU use of 180%. See here for more information. You can toggle this behavior by hitting Shift i while top is running to show the overall percentage of available CPUs in use. Source for above quote . You can use htop instead. To answer your question about how many cores and virtual cores you have: According to your lscpu output: You have 32 cores ( CPU(s) ) in total. You have 2 physical sockets ( Socket(s) ), each contains 1 physical processor. Each processor of yours has 8 physical cores ( Core(s) per socket ) inside, which means you have 8 * 2 = 16 real cores. Each real core can have 2 threads ( Thread(s) per core ), which means you have real cores * threads = 16 * 2 = 32 cores in total. So you have 32 virtual cores from 16 real cores. Also see this , this and this link.
{ "source": [ "https://unix.stackexchange.com/questions/145247", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64455/" ] }
145,250
If I run history , I can see my latest executed commands. But if I do tail -f $HISTFILE or tail -f ~/.bash_history , they do not get listed. Does the file get locked, is there a temporary location or something similar?
Bash maintains the list of commands internally in memory while it's running. They are written into .bash_history on exit : When an interactive shell exits, the last $HISTSIZE lines are copied from the history list to the file named by $HISTFILE If you want to force the command history to be written out, you can use the history -a command, which will: Append the new history lines (history lines entered since the beginning of the current Bash session) to the history file. There is also a -w option: Write out the current history to the history file. which may suit you more depending on exactly how you use your history. If you want to make sure that they're always written immediately, you can put that command into your PROMPT_COMMAND variable: export PROMPT_COMMAND='history -a'
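If PROMPT_COMMAND is already set to something, you may want to append rather than overwrite it — a hedged variant:
export PROMPT_COMMAND="history -a; $PROMPT_COMMAND"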
{ "source": [ "https://unix.stackexchange.com/questions/145250", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57143/" ] }
145,289
I know I can use wc for counting characters, words and lines of files at the command line. Is there any way I can count the number of words while in vim?
You can count words and lines inside vi using vi's own counter: press g and then CTRL-g. The bottom line then looks something like this:

Col 1 of 11; Line 1 of 106; Word 1 of 344; Byte 1 of 2644

Or use vi's method of calling shell commands:

:w !wc -w

This writes the buffer to the standard input of wc -w (it does not overwrite your file on disk) and shows the output. Example:

:w !wc -w
344
Press ENTER or type command to continue

Press Enter to go back to vi.
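A small extra, not in the original answer: because :w accepts a range, the same trick works on part of a file. After selecting lines visually, this counts only the selected lines:

:'<,'>w !wc -w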
{ "source": [ "https://unix.stackexchange.com/questions/145289", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10043/" ] }
145,299
This ought to be really simple, but for some reason it is not working: sed -i.bak -E 's/\t/ /' file.txt Instead of replacing tab characters, it's replacing t characters. I've tried every variation on this I could think of, playing with quoting, etc. I've Googled and found everyone else using pretty similar expressions and they seem to work for them. The -E is an OS X thing. I thought the failure might be a result of some weird quirk of OS X's sed , so I tried it with Ruby as well (without the -i ), and got the same result: ruby -pe '$_.gsub!(/\t/," ")' < file.txt > file.new I'm using Bash 3.2.51 on OS X, and iTerm, although I can't see how any of those could be terribly relevant. I haven't set any weird environment variables, though I can post any that you think might be relevant. What could be wrong? UPDATE : I must have made some other mistake or typo when I tried the Ruby version, since Gilles points out that it does work (and I've never had him steer me wrong!). I'm not sure what happened, but I'm pretty sure it must have been my mistake.
The syntax \t for a tab character in sed is not standard. That escape is a GNU sed extension . You find a lot of examples online that use it because a lot of people use GNU sed (it's the sed implementation on non-embedded Linux). But OS X sed , like other *BSD sed, doesn't support \t for tab and instead treats \t as meaning backslash followed by t . There are many solutions, such as: Use a literal tab character. sed -i.bak 's/ / /' file.txt Use tr or printf to produce a tab character. sed -i.bak "s/$(printf '\t')/ /" file.txt sed -i.bak "s/$(echo a | tr 'a' '\t')/ /" file.txt Use bash's string syntax allowing backslash escapes . sed -i.bak $'s/\t/ /' file.txt Use Perl, Python or Ruby. The Ruby snippet that you posted does work.
{ "source": [ "https://unix.stackexchange.com/questions/145299", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3358/" ] }
145,348
What is the cleanest way to modify this command in bash to only run if the group does not exist? groupadd somegroupname A one-liner would be best.
getent group somegroupname || groupadd somegroupname
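Some context on why this works, plus a sketch of using it in a provisioning script (the group name is just a placeholder): getent group exits with a non-zero status when the group does not exist, so the command after || only runs in that case.

#!/bin/bash
group="somegroupname"   # placeholder name

# Create the group only if the lookup fails
getent group "$group" > /dev/null || groupadd "$group"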
{ "source": [ "https://unix.stackexchange.com/questions/145348", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37818/" ] }
145,402
I cannot seem to make it work. GNU sed documentation says to escape the pipe, but that doesn't work, nor does using a straight pipe without the escape. Adding parens makes no difference. $ echo 'cat dog pear banana cat dog' | sed 's/cat|dog/Bear/g' cat dog pear banana cat dog $ echo 'cat dog pear banana cat dog' | sed 's/cat\|dog/Bear/g' cat dog pear banana cat dog
By default sed uses POSIX Basic Regular Expressions , which don't include the | alternation operator. You can switch it into using Extended Regular Expressions , which do include | alternation, with -E (or -r in some older versions of some implementations). You can use: echo 'cat dog pear banana cat dog' | sed -E -e 's/cat|dog/Bear/g' and it will work on compliant systems. ( -e optionally marks the sed script itself - you can leave it out, it just guards against some kinds of mistake) Portability to very old sed s is complicated, but you can also switch to awk if you need it, which uses EREs everywhere.
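Since the answer mentions awk as a fallback, here is a minimal sketch of the same substitution with it (gsub uses extended regular expressions, so the | alternation works unchanged):

echo 'cat dog pear banana cat dog' | awk '{gsub(/cat|dog/, "Bear"); print}'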
{ "source": [ "https://unix.stackexchange.com/questions/145402", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43342/" ] }
145,447
I've just installed CentOS 7 as a virtual machine on my Mac (OS X 10.9.3 + VirtualBox). Running ifconfig returns command not found. Running sudo /sbin/ifconfig also returns command not found. I am root. The output of echo $PATH is below. /usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/robbert/.local/bin:/home/robbert/bin Is my path normal? If not, how can I change it? Also, I don't have an internet connection on the virtual machine yet; maybe that's a factor.
TL/DR: ifconfig is now ip a . Try ip -s -c -h a . Your path looks OK, but does not include /sbin , which may be intended. You were probably looking for the command /sbin/ifconfig . If this file does not exist (try ls /sbin/ifconfig ), the command may just be not installed. It is part of the package net-tools , which is not installed by default, because it's deprecated and superseded by the command ip from the package iproute2 . The function of ifconfig without options is replaced by ip specifying the object address . ifconfig is equivalent to ip addr show and, because the object argument can be abbreviated and command defaults to show , also to ip a The output format is somewhat different: $ ifconfig lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:10553 errors:0 dropped:0 overruns:0 frame:0 TX packets:10553 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:9258474 (9.2 MB) TX bytes:9258474 (9.2 MB) [ ... ] and $ ip address 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever [ ... ] Note the output is more terse: It does not show counts of packets handled in normal or other ways. For that, add the option -s ( -stats , -statistics ): $ ip -s addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever RX: bytes packets errors dropped overrun mcast 74423 703 0 0 0 0 TX: bytes packets errors dropped carrier collsns 74423 703 0 0 0 0 But what you actually want to see may be this: $ ip -stats -color -human addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever RX: bytes packets errors dropped overrun mcast 74.3k 700 0 0 0 0 TX: bytes packets errors dropped carrier collsns 74.3k 700 0 0 0 0 It shows counts with suffixes like 26.1M or 79.3k and colors some relevant terms and addresses. If you feel the command is too long, use the short options: This is equivalent: ip -s -c -h a
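If you do want the legacy ifconfig command itself back, it lives in the net-tools package mentioned above; on CentOS 7 that would be installed with something like the following (which of course requires a working network connection, something the asker does not have yet):

sudo yum install net-tools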
{ "source": [ "https://unix.stackexchange.com/questions/145447", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78028/" ] }
145,490
We know that users' passwords are saved in /etc/passwd, but in an encrypted way, so even root can't see them:

jane:x:501:501::/home/jane:/bin/bash
fred:x:502:502::/home/fred:/bin/bash

As shown above, :x: represents the password. Is there a way (some possible configuration) to save the passwords in /etc/passwd in clear text, so that root can see them?
The other two answers have told you (correctly!) that this is a Bad Idea™. But they've also told you it's hard to do, requiring changes to a bunch of programs. That's not true. It's very easy. You only need to change one or two configuration files. I feel it's important to point this out, because you should be aware of it when logging into systems you don't control. These changes won't actually put a plain-text password in /etc/passwd or /etc/shadow; it'll go into a different file. Note I haven't tested these, as I'd rather not have my password in plain text.

Edit /etc/pam.d/common-password (to catch password changes) or /etc/pam.d/common-auth (to catch logins) and add in … pam_exec expose_authtok log=/root/passwords /bin/cat

Edit both of those, and switch from pam_unix to pam_userdb with crypt=none. Alternatively, you could put it only in common-password (leaving pam_unix as well) to just record passwords when they're changed.

You could remove the shadow option (as well as any strong hash options) from pam_unix to disable the shadow file, and go back to traditional crypt passwords. Not plain text, but John the Ripper will fix that for you.

For further details, check the PAM System Admin Guide.

You could also edit the source code of PAM, or write your own module. You'd only need to compile PAM (or your module), nothing else.
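For concreteness, a minimal sketch of what such a line might look like in /etc/pam.d/common-auth, assuming the standard pam_exec.so module; the control value (optional) and the log path are illustrative, not from the original answer, and once again: this records passwords in clear text.

# Illustrative only -- logs the typed password to /root/passwords
auth    optional    pam_exec.so expose_authtok log=/root/passwords /bin/cat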
{ "source": [ "https://unix.stackexchange.com/questions/145490", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78050/" ] }
145,522
I've seen the phrase "sh compatible" used, usually in reference to shells. I'm not sure if it also applies to the programs that might be run from within shells. What does it mean for a shell or other program to be "sh compatible"? What would it mean to be "sh incompatible"? Edit: This question asking about the difference between bash and sh is very relevant: Difference between sh and bash I'd still like a direct answer to what it means to be "sh compatible". A reasonable expectation might be that "sh compatible" means "implements the Shell Command Language", but then why are there so many "sh compatible" shells and why are they different?
Why are there so many "sh compatible" shells? The Bourne shell was first publicly released in 1979 as part of Unix V7 . Since pretty much every Unix and Unix-like system descends from V7 Unix — even if only spiritually — the Bourne shell has been with us "forever."¹ The Bourne shell actually replaced an earlier shell, retronymed the Thompson shell , but it happened so early in Unix's history that it's all but forgotten today. The Bourne shell is a superset of the Thompson shell .² Both the Bourne and Thompson shells were called sh . The shell specified by POSIX is also called sh . So, when someone says sh -compatible, they are handwavingly referring to this series of shells. If they wanted to be specific, they'd say "POSIX shell" or "Bourne shell."³ The POSIX shell is based on the 1988 version of KornShell , which in turn was meant to replace the Bourne shell on AT&T Unix, leapfrogging the BSD C shell in terms of features.⁴ To the extent that ksh is the ancestor of the POSIX shell, most Unix and Unix-like systems include some variant of the Korn shell today. The exceptions are generally tiny embedded systems, which can't afford the space a complete POSIX shell takes. That said, the Korn shell — as a thing distinct from the POSIX shell — never really became popular outside the commercial Unix world. This is because its rise corresponded with the early years of Unix commercialization, so it got caught up in the Unix wars . BSD Unixes eschewed it in favor of the C shell, and its source code wasn't freely available for use in Linux when it got started.⁵ So, when the early Linux distributors went looking for a command shell to go with their Linux kernel, they usually chose GNU Bash , one of those sh -compatibles you're talking about.⁶ That early association between Linux and Bash pretty much sealed the fate of many other shells, including ksh , csh and tcsh . There are die-hards still using those shells today, but they're very much in the minority.⁷ All this history explains why the creators of relative latecomers like bash , zsh , and yash chose to make them sh -compatible: Bourne/POSIX compatibility is the minimum a shell for Unix-like systems must provide in order to gain widespread adoption. In many systems, the default interactive command shell and /bin/sh are different things. /bin/sh may be: The original Bourne shell. This is common in older UNIX® systems, such as Solaris 10 (released in 2005) and its predecessors.⁸ A POSIX-certified shell. This is common in newer UNIX® systems, such as Solaris 11 (2010). The Almquist shell . This is an open source Bourne/POSIX shell clone originally released on Usenet in 1989 , which was then contributed to Berkeley's CSRG for inclusion in the first BSD release containing no AT&T source code, 4.4BSD-Lite . The Almquist shell is often called ash , even when installed as /bin/sh . 4.4BSD-Lite in turn became the base for all modern BSD derivatives, with /bin/sh remaining as an Almquist derivative in most of them, with one major exception noted below. You can see this direct descendancy in the source code repositories for NetBSD and FreeBSD : they were shipping an Almquist shell derivative from day 1. There are two important ash forks outside the BSD world: dash , famously adopted by Debian and Ubuntu in 2006 as the default /bin/sh implementation. (Bash remains the default interactive command shell in Debian derivatives.) The ash command in BusyBox , which is frequently used in embedded Linuxes and may be used to implement /bin/sh . 
Since it postdates dash and it was derived from Debian's old ash package , I've chosen to consider it a derivative of dash rather than ash , despite its command name within BusyBox. (BusyBox also includes a less featureful alternative to ash called hush . Typically only one of the two will be built into any given BusyBox binary: ash by default, but hush when space is really tight. Thus, /bin/sh on BusyBox-based systems is not always dash -like.) GNU Bash , which disables most of its non-POSIX extensions when called as sh . This choice is typical on desktop and server variants of Linux, except for Debian and its derivatives. Mac OS X has also done this since Panther, released in 2003. A shell with ksh93 POSIX extensions , as in OpenBSD . Although the OpenBSD shell changes behavior to avoid syntax and semantic incompatibilities with Bourne and POSIX shells when called as sh , it doesn't disable any of its pure extensions, being those that don't conflict with older shells. This is not common; you should not expect ksh93 features in /bin/sh . I used "shell script" above as a generic term meaning Bourne/POSIX shell scripting. This is due to the ubiquity of Bourne family shells. To talk about scripting on other shells, you need to give a qualifier, like "C shell script." Even on systems where a C family shell is the default interactive shell, it is better to use the Bourne shell for scripting. It is telling that when Wikipedia classifies Unix shells , they group them into Bourne shell compatible, C shell compatible, and "other." This diagram may help: (Click for SVG version, 31 kB, or view full-size PNG version , 218 kB.) What would it mean to be "sh incompatible"? Someone talking about an sh -incompatible thing typically means one of three things: They are referring to one of those "other" shells.⁹ They are making a distinction between the Bourne and C shell families. They are talking about some specific feature in one Bourne family shell that isn't in all the other Bourne family shells. ksh93 , bash , and zsh in particular have many features that don't exist in the older "standard" shells. Those three are also mutually-incompatible in a lot of ways, once you get beyond the shared POSIX/ ksh88 base. It is a classic error to write a shell script with a #!/bin/sh shebang line at the top but to use Bash or Korn shell extensions within. Since /bin/sh is one of the shells in the Korn/POSIX family diagram above on so many systems these days, such scripts will work on the system they are written on, but then fail on systems where /bin/sh is something from the broader Bourne family of shells. Best practice is to use #!/bin/bash or #!/bin/ksh shebang lines if the script uses such extensions. There are many ways to check whether a given Bourne family shell script is portable: Run checkbashisms on it, a tool from the Debian project that checks a script for " bashisms ." Run it under posh , a shell in the Debian package repository that purposely implements only features specified by SUS3 , plus a few other minor features . Run it under obosh from the Schily Tools project , an improved version of the Bourne shell as open sourced by Sun as part of OpenSolaris in 2005, making it one of the easiest ways to get a 1979 style Bourne shell on a modern computer. The Schily Tools distribution also includes bosh , a POSIX type shell with many nonstandard features , but which may be useful for testing the compatibility of shell scripts intended to run on all POSIX family shells. 
It tends to be more conservative in its feature set than bash , zsh and the enhanced versions of ksh93 . Schily Tools also includes a shell called bsh , but that is an historical oddity which is not a Bourne family shell at all. Go through the Portable Shell Programming chapter in the GNU Autoconf manual . You may recognize some of the problematic constructs it talks about in your scripts. Why are they different? For the same reasons all "New & Improved!" things are different: The improved version could only be improved by breaking backwards compatibility. Someone thought of a different way for something to work, which they like better, but which isn't the same way the old one worked. Someone tried reimplementing an old standard without completely understanding it, so they messed up and created an unintentional difference. Footnotes and Asides : Early versions of BSD Unix were just add-on software collections for V6 Unix. Since the Bourne shell wasn't added to AT&T Unix until V7, BSD didn't technically start out having the Bourne shell. BSD's answer to the primitive nature of the Thompson shell was the C shell . Nevertheless, the first standalone versions of BSD (2.9BSD and 3BSD) were based on V7 or its portable successor UNIX/32V , so they did include the Bourne shell. (The 2BSD line turned into a parallel fork of BSD for Digital's PDP minicomputers , while the 3BSD and 4BSD lines went on to take advantage of newer computer types like Vaxen and Unix workstations . 2.9BSD was essentially the PDP version of 4.1cBSD; they were contemporaneous, and shared code . PDPs didn't just disappear when the VAX arrived, so the 2BSD line is still shambling along .) It is safe to say that the Bourne shell was everywhere in the Unix world by 1983. That's a good approximation to "forever" in the computing industry. MS-DOS got a hierarchical filesystem that year (awww, how cuuute!) and the first 24-bit Macintosh with its 9" B&W screen — not grayscale, literally black and white — wouldn't come out until early the next year. The Thompson shell was quite primitive by today's standards. It was only an interactive command shell, rather than the script programming environment we expect today. It did have things like pipes and I/O redirection, which we think of as prototypically part of a "Unix shell," so that we think of the MS-DOS command shell as getting them from Unix. The Bourne shell also replaced the PWB shell , which added important things to the Thompson shell like programmability ( if , switch and while ) and an early form of environment variables. The PWB shell is even less well-remembered than the Thompson shell since it wasn't part of every version of Unix. When someone isn't specific about POSIX vs Bourne shell compatibility, there is a whole range of things they could mean. At one extreme, they could be using the 1979 Bourne shell as their baseline. An " sh -compatible script" in this sense would mean it is expected to run perfectly on the true Bourne shell or any of its successors and clones: ash , bash , ksh , zsh , etc. Someone at the other extreme assumes the shell specified by POSIX as a baseline instead. We take so many POSIX shell features as "standard" these days that we often forget that they weren't actually present in the Bourne shell: built-in arithmetic, job control, command history, aliases, command line editing, the $() form of command substitution, etc. Although the Korn shell has roots going back to the early 1980s, AT&T didn't ship it in Unix until System V Release 4 in 1988. 
Since so many commercial Unixes are based on SVR4, this put ksh in pretty much every relevant commercial Unix from the late 1980s onward. (A few weird Unix flavors based on SVR3 and earlier held onto pieces of the market past the release of SVR4, but they were the first against the wall when the revolution came.) 1988 is also the year the first POSIX standard came out, with its Korn shell based "POSIX shell." Later, in 1993, an improved version of the Korn shell came out. Since POSIX effectively nailed the original in place, ksh forked into two major versions: ksh88 and ksh93 , named after the years involved in their split. ksh88 is not entirely POSIX-compatible, though the differences are small, so that some versions of the ksh88 shell were patched to be POSIX-compatible. (This from an interesting interview on Slashdot with Dr. David G. Korn . Yes, the guy who wrote the shell.) ksh93 is a fully-compatible superset of the POSIX shell . Development on ksh93 has been sporadic since the primary source repository moved from AT&T to GitHub with the newest release being about 3 years old as I write this, ksh93v. (The project's base name remains ksh93 with suffixes added to denote release versions beyond 1993.) Systems that include a Korn shell as a separate thing from the POSIX shell usually make it available as /bin/ksh , though sometimes it is hiding elsewhere. When we talk about ksh or the Korn shell by name, we are talking about ksh93 features that distinguish it from its backwards-compatible Bourne and POSIX shell subsets. You rarely run across the pure ksh88 today. AT&T kept the Korn shell source code proprietary until March 2000 . By that point, Linux's association with GNU Bash was very strong. Bash and ksh93 each have advantages over the other , but at this point inertia keeps Linux tightly associated with Bash. As to why the early Linux vendors most commonly choose GNU Bash over pdksh , which was available at the time Linux was getting started, I'd guess it's because so much of the rest of the userland also came from the GNU project . Bash is also somewhat more advanced than pdksh , since the Bash developers do not limit themselves to copying Korn shell features. Work on pdksh stopped about the time AT&T released the source code to the true Korn shell. There are two main forks that are still maintained, however: the OpenBSD pdksh and the MirBSD Korn Shell, mksh . I find it interesting that mksh is the only Korn shell implementation currently packaged for Cygwin. GNU Bash goes beyond POSIX in many ways, but you can ask it to run in a more pure POSIX mode . csh / tcsh was usually the default interactive shell on BSD Unixes through the early 1990s. Being a BSD variant , early versions of Mac OS X were this way, through Mac OS X 10.2 "Jaguar" . OS X switched the default shell from tcsh to Bash in OS X 10.3 "Panther" . This change did not affect systems upgraded from 10.2 or earlier. The existing users on those converted systems kept their tcsh shell. FreeBSD claims to still use tcsh as the default shell , but on the FreeBSD 10 VM I have here, the default shell appears to be one of the POSIX-compatible Almquist shell variants . This is true on NetBSD as well. OpenBSD uses a fork of pdksh as the default shell instead. The higher popularity of Linux and OS X makes some people wish FreeBSD would also switch to Bash, but they won't be doing so any time soon for philosophical reasons . It is easy to switch it , if this bothers you. 
It is rare to find a system with a truly vanilla Bourne shell as /bin/sh these days. You have to go out of your way to find something sufficiently close to it for compatibility testing. I'm aware of only one way to run a genuine 1979 vintage Bourne shell on a modern computer: use the Ancient Unix V7 disk images with the SIMH PDP-11 simulator from the Computer History Simulation Project . SIMH runs on pretty much every modern computer , not just Unix-like ones. SIMH even runs on Android and on iOS . With OpenSolaris , Sun open-sourced the SVR4 version of the Bourne shell for the first time. Prior to that, the source code for the post-V7 versions of the Bourne shell was only available to those with a Unix source code license. That code is now available separately from the rest of the defunct OpenSolaris project from a couple of different sources. The most direct source is the Heirloom Bourne shell project . This became available shortly after the original 2005 release of OpenSolaris. Some portability and bug fixing work was done over the next few months, but then development on the project halted. Jörg Schilling has done a better job of maintaining a version of this code as obosh in his Schily Tools package. See above for more on this. Keep in mind that these shells derived from the 2005 source code release contain multi-byte character set support, job control, shell functions , and other features not present in the original 1979 Bourne shell. One way to tell whether you are on an original Bourne shell is to see if it supports an undocumented feature added to ease the transition from the Thompson shell: ^ as an alias for | . That is to say, a command like ls ^ more will give an error on a Korn or POSIX type shell, but it will behave like ls | more on a true Bourne shell. Occasionally you encounter a fish , scsh or rc/es adherent, but they're even rarer than C shell fans. The rc family of shells isn't commonly used on Unix/Linux systems, but the family is historically important, which is how it earned a place in the diagram above. rc is the standard shell of the Plan 9 from Bell Labs operating system, a kind of successor to 10th edition Unix , created as part of Bell Labs' continued research into operating system design. It is incompatible with both Bourne and C shell at a programming level; there's probably a lesson in there. The most active variant of rc appears to be the one maintained by Toby Goodwin , which is based on the Unix rc clone by Byron Rakitzis.
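To make the portability-checking advice above concrete, a short hedged sketch (myscript.sh is a made-up name; checkbashisms, posh, and dash are tools the answer itself mentions, available as Debian packages of the same names):

# Flag constructs that only work in bash
checkbashisms myscript.sh

# Try the script under a deliberately minimal POSIX shell
posh myscript.sh

# dash is another strict-ish /bin/sh implementation to test against
dash myscript.sh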
{ "source": [ "https://unix.stackexchange.com/questions/145522", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67136/" ] }
145,651
In a bash script, how can I redirect all standard output to a log file and also tee the output to the screen, using exec?

log_file="$HOME/logs/install.txt-`date +'%Y-%m-%d_%H-%M-%S'`"
[ -f "$log_file" ] || touch "$log_file"
exec 1>> $log_file 2>&1

This code redirects all the output to the log file but not to the screen.
Use process substitution with & redirection and exec : exec &> >(tee -a "$log_file") echo "This will be logged to the file and to the screen" $log_file will contain the output of the script and any subprocesses, and the output will also be printed to the screen. >(...) starts the process ... and returns a file representing its standard input. exec &> ... redirects both standard output and standard error into ... for the remainder of the script (use just exec > ... for stdout only). tee -a appends its standard input to the file, and also prints it to the screen.
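Putting the answer together with the asker's own log-file naming (a sketch; the path and date format are simply the ones from the question):

#!/bin/bash
log_file="$HOME/logs/install.txt-$(date +'%Y-%m-%d_%H-%M-%S')"
mkdir -p "$(dirname "$log_file")"

# From here on, stdout and stderr go to both the terminal and the log file
exec &> >(tee -a "$log_file")

echo "This will be logged to the file and to the screen"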
{ "source": [ "https://unix.stackexchange.com/questions/145651", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49573/" ] }
145,672
How can one print the element after the last tab in a file? Example: File1

A 3 8 6 7
B 4 6 2 3 6 8
c 1 9

would return:

7
8
9
You can make smart use of the NF variable in awk:

awk '{print $NF}' File1

From man awk:

NF    The number of fields in the current input record.

So NF gives you the number of fields, and $NF then expands to the last field (for example, $3 on a line with three fields), which you can use in a print statement.
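One small, hedged addition: if the file is genuinely tab-delimited and some fields might themselves contain spaces, it may be safer to set the field separator explicitly:

awk -F'\t' '{print $NF}' File1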
{ "source": [ "https://unix.stackexchange.com/questions/145672", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74555/" ] }