20,493
I need to view large log files using a bash shell. I was using less to open the files, but since the lines are very long, some kind of line/word wrapping goes on. Since the files are Log4J logs and there is a pattern at the beginning of each line, having the lines wrapped makes it difficult to analyze the output, so I started using less -S, which chops long lines. But now I need to use tail -f, and it also wraps the output. Is it possible to disable line wrapping in a bash shell for all commands? Note: there is an answer to a different question that mentions the escape sequence echo -ne '\e[?7l', but it does not seem to work in bash.
Try: less -S +F filename (equivalent to less --chop-long-lines +F filename ). Then: Press Ctrl+C to stop tailing; now you can move left and right using the cursor keys. Press Shift+F to resume tailing. Press Ctrl+C followed by q to quit. From the less manual: If a command line option begins with +, the remainder of that option is taken to be an initial command to less. For example, +F tells less to scroll forward, and keep trying to read when the end of file is reached.
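If you really do need plain tail -f rather than less, one workaround is to truncate each line to the terminal width instead of wrapping it. A minimal sketch (app.log is a placeholder name; COLUMNS is set by interactive bash but may not be exported, so substitute a number if needed):

    tail -f app.log | cut -c -"$COLUMNS"   # keep only the first $COLUMNS characters of each line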
{ "source": [ "https://unix.stackexchange.com/questions/20493", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6713/" ] }
20,513
I'm currently downloading the Debian 6 DVD. I don't want to use Stable; I want to use Testing or Sid, but I don't know which one is better for me. Is Sid really unstable? Is Testing as up-to-date as Arch, or is it more like a non-rolling-release distro? Thanks
There is an interesting part of the Debian GNU/Linux FAQ devoted to this question. In particular, the choice depends on: security/stability considerations; the expertise of the user; the need for newer versions of software; support for new hardware. I would like to point out the following passage from that page: Stable is rock solid. It does not break. Testing breaks less often than Unstable. But when it breaks, it takes a long time for things to get rectified. Sometimes this could be days and it could be months at times. Unstable changes a lot, and it can break at any point. However, fixes get rectified in many occasions in a couple of days and it always has the latest releases of software packaged for Debian.
{ "source": [ "https://unix.stackexchange.com/questions/20513", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8961/" ] }
20,523
I have an NFS share that is mounted on about two other machines. I recently realized that one of the servers isn't actually using the shared directory and is keeping files all to itself. Is there a way to see whether the NFS share is mounted at the directory I think it is?
Maybe you are looking for df : run it while you are in the directory in question, and it will show the mount point and the filesystem that directory actually lives on.
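For example (the paths are placeholders), you can ask df for the filesystem type or grep the mount table; an NFS mount shows up with type nfs and a server:path source:

    df -hT /path/to/share        # the Type column should read nfs (or nfs4)
    mount | grep ' type nfs'     # lists active NFS mounts, e.g. server:/export on /path/to/share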
{ "source": [ "https://unix.stackexchange.com/questions/20523", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
20,550
My mouse has an unfortunate feature. On the left side, right where my thumb sits ever so gently when I'm using it, there are two buttons that are so sensitive a mere brush will make them click. I'm talking of course about the pesky forward/back buttons which, if pressed in a browser, can make watching that hour-long youtube video that much harder. Is there a way for me to disable them? Would this be handled by X?
Start the program xev in a terminal. Move the mouse inside the xev window; you'll see a lot of stuff scroll by. Press each button in turn. Then switch back to the terminal window and press Ctrl + C . xev shows a description of each input event, in particular ButtonPress and ButtonRelease for mouse clicks (you'll also see a number of MotionNotify for mouse movements and other events). It's likely that your forward and back buttons are mapped to mouse buttons, maybe buttons 8 and 9: ButtonPress event, serial 29, synthetic NO, window 0x2e00001, root 0x105, subw 0x0, time 2889100159, (166,67), root:(1769,98), state 0x0, button 8, same_screen YES If that's the case, remap these buttons to a different action in your browser, if you can. Alternatively, you can remap the buttons to different button numbers which your browser doesn't react to or disable the buttons altogether at the system level. To do this, put these lines in a file called ~/.Xmodmap : ! Remap button 8 to 10 and disable button 9. pointer = 1 2 3 4 5 6 7 10 0 Test it with the command xmodmap ~/.Xmodmap . Most desktop environments and window managers run this command automatically when you log in; if yours doesn't, arrange for it to run when X starts. It's also possible that your mouse sends a keyboard event when you press these buttons: KeyPress event, serial 32, synthetic NO, window 0x2e00001, root 0x105, subw 0x0, time 2889100963, (957,357), root:(2560,388), state 0x0, keycode 166 (keysym 0x1008ff26, XF86Back), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False In that case, put lines like these in ~/.Xmodmap : keycode 166 = NoSymbol keycode 167 = NoSymbol
{ "source": [ "https://unix.stackexchange.com/questions/20550", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6266/" ] }
20,570
My .muttrc file looks something like the glimpse below. I am hesitant about the password. How should I store my password for use with mutt? set imap_user = "[email protected]" set imap_pass = "password" set smtp_url = "smtp://[email protected]:587/" set smtp_pass = "password" set from = "[email protected]" set realname = "Your Real Name"
This tweak should get rid of your problem. Use gpg as suggested, or use set imap_pass=`getpassword email_id` , where getpassword is a wrapper around a password manager such as pwsafe that fetches the password. Edit: If mutt is built with IMAP support (--enable-imap), then mutt will prompt you for the password if you do not set it in the config file. From the manual: imap_pass Type: string Default: "" Specifies the password for your IMAP account. If unset, Mutt will prompt you for your password when you invoke the fetch-mail function. Warning: you should only use this option when you are on a fairly secure machine, because the superuser can read your muttrc even if you are the only one who can read the file.
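For the gpg route, a common pattern (a sketch; the file names are placeholders) is to keep the password lines in an encrypted file and have mutt source the decrypted output, since mutt treats a source argument ending in | as a command to run:

    # ~/.mutt/passwords (then encrypt it: gpg -e -r [email protected] ~/.mutt/passwords)
    set imap_pass = "password"
    set smtp_pass = "password"

    # in .muttrc
    source "gpg -dq ~/.mutt/passwords.gpg |"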
{ "source": [ "https://unix.stackexchange.com/questions/20570", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
20,573
This sed command inserts a tag at the beginning of a file: sed -i "1s/^/<?php /" file How can I insert something at the end of each file with sed?
The simplest way is: sed -i -e '$aTEXTTOEND' filename How it works: $ matches the last line (it's a normal sed address; 4aTEXTTOEND would insert after the fourth line), a is the append command, TEXTTOEND is the text to append, and filename is the file where TEXTTOEND will be inserted.
{ "source": [ "https://unix.stackexchange.com/questions/20573", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
20,600
What are the "standards" -- should I put application (not just binary, but entire distribution) to /usr/local or /usr/local/share. For example scala or weka -- it contains examples, binaries, libraries, and so on. So it would be /usr/local/scala-2.9.1 or /usr/local/share/scala-2.9.1 Since I am the only admin it is not a big deal for me, but I prefer to using something which is widely used, not with my own customs. Important: I am not asking about cases, where you should split app into /usr/local/bin, /usr/local/lib and so on. Rather I am asking about case when you have to keep one main directory for entire application.
I think /opt is more standard in this sort of context. The relevant section in the Filesystem Hierarchy Standard is quoted below. Distributions may install software in /opt, but must not modify or delete software installed by the local system administrator without the assent of the local system administrator. Rationale: The use of /opt for add-on software is a well-established practice in the UNIX community. The System V Application Binary Interface [AT&T 1990], based on the System V Interface Definition (Third Edition), provides for an /opt structure very similar to the one defined here. The Intel Binary Compatibility Standard v. 2 (iBCS2) also provides a similar structure for /opt. Generally, all data required to support a package on a system must be present within /opt/<package>, including files intended to be copied into /etc/opt/<package> and /var/opt/<package> as well as reserved directories in /opt. The minor restrictions on distributions using /opt are necessary because conflicts are possible between distribution-installed and locally-installed software, especially in the case of fixed pathnames found in some binary software. The structure of the directories below /opt/<provider> is left up to the packager of the software, though it is recommended that packages are installed in /opt/<provider>/<package> and follow a similar structure to the guidelines for /opt/package. A valid reason for diverging from this structure is for support packages which may have files installed in /opt/<provider>/lib or /opt/<provider>/bin.
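So for a self-contained tree like scala-2.9.1, the layout would be /opt/scala-2.9.1 with a symlink for the launcher. A minimal sketch (assuming GNU tar and a tarball whose top level is a single directory):

    mkdir -p /opt/scala-2.9.1
    tar -C /opt/scala-2.9.1 --strip-components=1 -xzf scala-2.9.1.tgz
    ln -s /opt/scala-2.9.1/bin/scala /usr/local/bin/scala   # put the launcher on $PATH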
{ "source": [ "https://unix.stackexchange.com/questions/20600", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5884/" ] }
20,601
I have just found out about uzbl, but for some reason it crashes with a segmentation fault when I log into gmail (Arch). I don't have the time or the knowledge to deal with this segmentation fault, so I have thought about using another browser of that type. Are there any? Thanks.
{ "source": [ "https://unix.stackexchange.com/questions/20601", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7140/" ] }
20,611
I want to solve the problem 'list the top 10 most recent files in the current directory over 20MB'. With ls I can do: ls -Shal |head to get top 10 largest files, and: ls -halt |head to get top 10 most recent files With find I can do: find . -size +20M To list all files over 20MB in the current directory (and subdirectories, which I don't want). Is there any way to list the top ten most recent files over a certain size, preferably using ls ?
The 'current directory' option for find is -maxdepth 1 . The whole command line for your needs is: find . -maxdepth 1 -type f -size +20M -print0 | xargs -0 ls -Shal | head or find . -maxdepth 1 -type f -size +20M -print0 | xargs -0 ls -halt | head
{ "source": [ "https://unix.stackexchange.com/questions/20611", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2291/" ] }
20,629
I have csh as my default shell, as shown by echo $SHELL . I want to switch to bash as my default shell. I tried the following approaches to no avail: With chsh I get: chsh: can only change local entries; use ypchsh instead. With ypchsh I get: ypchsh: yppasswdd not running on NIS master host ("dcsun2"). I only have .cshrc in my home directory and I cannot find any .profile files in /etc . How can I change my default shell to bash ?
Make sure you've got bash installed. Learn the location of bash : which bash or whereis bash Below, I'll assume the location is /bin/bash . a) If you have administrative rights, just run as root: usermod -s /bin/bash YOUR_USERNAME (replacing YOUR_USERNAME with your user name). b) If you don't have administrative rights, you can still just run bash --login at login, by putting the lines below (csh syntax, since your login shell is csh) at the end of your .cshrc in your home directory: setenv SHELL /bin/bash exec /bin/bash --login
{ "source": [ "https://unix.stackexchange.com/questions/20629", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10714/" ] }
20,645
Is there a command or flag to clone the user/group ownership and permissions on a file from another file? To make the perms and ownership exactly those of another file?
On GNU/Linux chown and chmod have a --reference option chown --reference=otherfile thisfile chmod --reference=otherfile thisfile
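If you prefer to do it explicitly, or want to see exactly what is being copied, you can read the bits with stat first (a sketch assuming GNU stat's -c format flags; BSD stat spells this -f):

    chown "$(stat -c '%u:%g' otherfile)" thisfile   # copy the numeric owner:group
    chmod "$(stat -c '%a' otherfile)" thisfile      # copy the octal permission bits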
{ "source": [ "https://unix.stackexchange.com/questions/20645", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/394/" ] }
20,670
I know what hard links are, but why would I use them? What is the utility of a hard link?
The main advantage of hard links is that, compared to soft links, there is no size or speed penalty. Soft links are an extra layer of indirection on top of normal file access; the kernel has to dereference the link when you open the file, and this takes a small amount of time. The link also takes a small amount of space on the disk, to hold the text of the link. These penalties do not exist with hard links because they are built into the very structure of the filesystem. The best way I know of to see this is: $ ls -id . 1069765 ./ $ mkdir tmp ; cd tmp $ ls -id .. 1069765 ../ The -i option to ls makes it give you the inode number of the file. On the system where I prepared the example above, I happened to be in a directory with inode number 1069765, but the specific value doesn't matter. It's just a unique value that identifies a particular file/directory. What this says is that when we go into a subdirectory and look at a different filesystem entry called .. , it has the same inode number we got before. This isn't happening because the shell is interpreting .. for you, as happens with MS-DOS and Windows. On Unix filesystems .. is a real directory entry; it is a hard link pointing back to the previous directory. Hard links are the tendons that tie the filesystem's directories together. Once upon a time, Unix didn't have hard links. They were added to turn Unix's original flat file system into a hierarchical filesystem. (For more on this, see Why does '/' have an '..' entry? .) It is also somewhat common on Unix systems for several different commands to be implemented by the same executable. It doesn't seem to be the case on Linux any more, but on systems I used in the past, cp , mv and rm were all the same executable. It makes sense if you think about it: when you move a file between volumes, it is effectively a copy followed by a removal, so mv already had to implement the other two commands' functions. The executable can figure out which operation to provide because it gets passed the name it was called by. Another example, common in embedded Linuxes, is BusyBox , a single executable that implements dozens of commands. I should point out that on most filesystems, users aren't allowed to make hard links to directories. The . and .. entries are automatically managed by the filesystem code, which is typically part of the kernel. The restriction exists because it is possible to cause serious filesystem problems if you aren't careful with how you create and use directory hard links. This is one of many reasons soft links exist; they don't carry the same risk.
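A quick way to see the mechanics yourself (the file names are placeholders):

    echo data > file
    ln file link        # create a second hard link to the same inode
    ls -li file link    # both names show the same inode number and a link count of 2
    rm file             # 'link' still works; the data is freed only when the last name is removed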
{ "source": [ "https://unix.stackexchange.com/questions/20670", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/393/" ] }
20,718
I am trying to convert <id>1</id> <Name>ENTERPRISE RESOURCE PLANNING</Name> to: <column name="id">1</column> <column name="Name">ENTERPRISE RESOURCE PLANNING</column> I am assuming the best tool for the job would be sed , however I can't figure out how to keep parts of the original text in the replace part. If I do: $ sed -i 's/<.*>.*<.*>/<column name="\\1">\\2<\/column>/g' filename.xml The output is: <column name="\1">\2</column> <column name="\1">\2</column> Or doing similar from within vi , it outputs: <column name=""></column> <column name=""></column> How can I make it so that \1 and \2 are substituted back to their original values?
You can use groups, e.g.: $ sed -i 's/<\(.*\)>\(.*\)<.*>/<column name="\1">\2<\/column>/g' filename.xml Probably the most confusing part about REs is that there are various syntactic flavors. For example sed and vim use basic regular expressions where you have to quote () to get their meta-meaning. With extended regular expressions (e.g. awk, egrep and less) you have to quote () to get the literal meaning. There are similar differences for other meta-characters. The rationale behind BRE () semantics is that when most of your input is C-Code then it is more practical to have to quote parentheses for meta-use.
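For comparison, here is the same substitution with extended regular expressions, where the parentheses carry their meta-meaning unquoted (assuming your sed supports -E, as GNU and BSD sed do; older GNU sed spells it -r):

    sed -E 's/<(.*)>(.*)<.*>/<column name="\1">\2<\/column>/g' filename.xml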
{ "source": [ "https://unix.stackexchange.com/questions/20718", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10457/" ] }
20,784
What's the most concise way to resolve a hostname to an IP address in a Bash script? I'm using Arch Linux .
You can use getent , which comes with glibc (so you almost certainly have it on Linux). This resolves using gethostbyaddr/gethostbyname2, and so also will check /etc/hosts /NIS/etc: getent hosts unix.stackexchange.com | awk '{ print $1 }' Or, as Heinzi said below, you can use dig with the +short argument (queries DNS servers directly, does not look at /etc/hosts /NSS/etc) : dig +short unix.stackexchange.com If dig +short is unavailable, any one of the following should work. All of these query DNS directly and ignore other means of resolution: host unix.stackexchange.com | awk '/has address/ { print $4 }' nslookup unix.stackexchange.com | awk '/^Address: / { print $2 }' dig unix.stackexchange.com | awk '/^;; ANSWER SECTION:$/ { getline ; print $5 }' If you want to only print one IP, then add the exit command to awk 's workflow. dig +short unix.stackexchange.com | awk '{ print ; exit }' getent hosts unix.stackexchange.com | awk '{ print $1 ; exit }' host unix.stackexchange.com | awk '/has address/ { print $4 ; exit }' nslookup unix.stackexchange.com | awk '/^Address: / { print $2 ; exit }' dig unix.stackexchange.com | awk '/^;; ANSWER SECTION:$/ { getline ; print $5 ; exit }'
{ "source": [ "https://unix.stackexchange.com/questions/20784", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/903/" ] }
20,838
I was under the impression that any sort of call to mount requires root privileges. But recently I was told "You should instead create appropriate entries in /etc/fstab so that the filesystems can be mounted by unprivileged users"... which is counter to my experience using mount . Anytime I have used mount I have needed to sudo it. (I have only used mount for mounting network drives. Specifically cifs type network drives.) Does mount always require root privileges? If not: What kind of mount does and what kind of mount doesn't require sudo IN GENERAL? In my specific case I am doing mount -t cifs ; how does one go about making this mount not require sudo ?
Mounting a filesystem does not require superuser privileges under certain conditions, the most common being that the entry for the filesystem in /etc/fstab contains a flag that permits unprivileged users to mount it, typically user . To allow unprivileged users to mount a CIFS share (but not automount it), you would add something like the following to /etc/fstab : //server/share /mount/point cifs noauto,user 0 0 For more information on /etc/fstab and its syntax, Wikipedia has a good article here , and man 8 mount has a good section on mounting as an unprivileged user under the heading "[t]he non-superuser mounts".
{ "source": [ "https://unix.stackexchange.com/questions/20838", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5510/" ] }
20,844
I'm just getting into server administration and now everybody needs something. Lately I'm getting stuck on package hunting, so I'm on the search to find the most comprehensive repo list of LAMP packages. What repositories do you use in your list for LAMP resources and why do you use them?
{ "source": [ "https://unix.stackexchange.com/questions/20844", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10775/" ] }
20,880
I have a Python script that needs to be run with a particular Python installation. Is there a way to craft a shebang so that it runs with $FOO/bar/MyCustomPython ?
The shebang line is very limited. Under many unix variants (including Linux), you can have only two words: a command and a single argument. There is also often a length limitation. The general solution is to write a small shell wrapper. Name the Python script foo.py , and put the shell script next to foo.py and call it foo . This approach doesn't require any particular header on the Python script. #!/bin/sh exec "$FOO/bar/MyCustomPython" "$0.py" "$@" Another tempting approach is to write a wrapper script like the one above, and put #!/path/to/wrapper/script as the shebang line on the Python script. However, most unices don't support chaining of shebang scripts, so this won't work. If MyCustomPython was in the $PATH , you could use env to look it up: #!/usr/bin/env MyCustomPython import … Yet another approach is to arrange for the script to be both a valid shell script (which loads the right Python interpreter on itself) and a valid script in the target language (here Python). This requires that you find a way to write such a dual-language script for your target language. In Perl, this is known as if $running_under_some_shell . #!/bin/sh eval 'exec "$FOO/bar/MyCustomPerl" -wS $0 ${1+"$@"}' if $running_under_some_shell; use … Here's one way to achieve the same effect in Python. In the shell, "true" is the true utility, which ignores its arguments (two single-character strings : and ' ) and returns a true value. In Python, "true" is a string which is true when interpreted as a boolean, so this is an if instruction that's always true and executes a string literal. #!/bin/sh if "true" : '''\' then exec "$FOO/bar/MyCustomPython" "$0" "$@" exit 127 fi ''' import … Rosetta code has such dual-language scripts in several other languages.
{ "source": [ "https://unix.stackexchange.com/questions/20880", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10800/" ] }
20,944
I want to know how many files I have on my filesystem. I know I can do something like this: find / -type f | wc -l This seems highly inefficient. What I'd really like to do is find the total number of unique inodes that are considered a 'file'. Is there a better way? Note: I would like to do this because I am developing a file synchronization program, and I would like to do some statistical analysis (like how many files the average user has in total vs how many files are on the system). I don't, however, need to know anything about those files, just that they exist (paths don't matter at all). I would especially like to know this info for each mounted filesystem (and its associated mount point).
The --inodes option to df will tell you how many inodes are reserved for use. For example: $ df --inodes / /home Filesystem Inodes IUsed IFree IUse% Mounted on /dev/sda1 3981312 641704 3339608 17% / /dev/sda8 30588928 332207 30256721 2% /home $ sudo find / -xdev -print | wc -l 642070 $ sudo find /home -print | wc -l 332158 $ sudo find /home -type f -print | wc -l 284204 Notice that the number of entries returned from find is greater than IUsed for the root (/) filesystem, but is less for /home. But both are within 0.0005%. The reason for the discrepancies is because of hard links and similar situations. Remember that directories, symlinks, UNIX domain sockets and named pipes are all 'files' as it relates to the filesystem. So using find -type f flag is wildly inaccurate, from a statistical viewpoint.
{ "source": [ "https://unix.stackexchange.com/questions/20944", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8471/" ] }
20,979
How do I list both programs that came with my distribution and those I manually installed?
That depends on your distribution. Aptitude-based distributions (Ubuntu, Debian, etc): dpkg -l RPM-based distributions (Fedora, RHEL, etc): rpm -qa pkg*-based distributions (OpenBSD, FreeBSD, etc): pkg_info Portage-based distributions (Gentoo, etc): equery list or eix -I pacman-based distributions (Arch Linux, etc): pacman -Q Cygwin: cygcheck --check-setup --dump-only * Slackware: slapt-get --installed All of these will list the packages rather than the programs however. If you truly want to list the programs, you probably want to list the executables in your $PATH , which can be done like so using bash's compgen : compgen -c Or, if you don't have compgen : #!/bin/bash IFS=: read -ra dirs_in_path <<< "$PATH" for dir in "${dirs_in_path[@]}"; do for file in "$dir"/*; do [[ -x $file && -f $file ]] && printf '%s\n' "${file##*/}" done done
{ "source": [ "https://unix.stackexchange.com/questions/20979", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9431/" ] }
20,983
Let's say I've gone and done a silly thing, such as using 'chsh' to change the root user's shell to a bad file path. Future logins to the root account will abruptly fail, citing /bin/whatever not being found, and boot you back out to the login screen. Barring a recovery mode or inserting a LiveCD to edit /etc/passwd, what are my options for getting my system back? Let's also assume (for fun?) that there are no other users in wheel. Thoughts?
When booting, append init=/bin/bash (or a path to any other functional shell) to your boot options - you will be dropped straight to a single user shell. You might need to do mount -o remount,rw / before modifying the /etc/passwd entry in that environment. After that, just reboot or do exec /sbin/init 3 . Just do not type exit or press Ctrl+D, as these would result in kernel panic*. One additional variation of this method might be necessary on some systems loaded in two-stage mode (with an initrd image). If you notice that the boot options contain init= and, most importantly, real_init= , then the place to put /bin/bash should be the latter parameter (i.e. real_init=/bin/bash ). * This is because in that environment, the shell is seen by the kernel as the init program - which is the only process that kernel knows - it represents a running system underneath to the kernel's eye. Suddenly ending that process, without telling the kernel to shutdown the system, must result in kernel panic. (Wouldn't you panic if suddenly everything around you went black and silent?)
{ "source": [ "https://unix.stackexchange.com/questions/20983", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10834/" ] }
20,993
I've got a brand new CentOS 6 installation, which has a symlink in the document root to my development files: [root@localhost html]# ls -l total 4 -rwxrwxrwx. 1 root root 0 Sep 18 20:16 index.html -rwxrwxrwx. 1 root root 17 Sep 18 20:16 index.php lrwxrwxrwx. 1 root root 24 Sep 18 20:19 refresh-app -> /home/billy/refresh-app/ My httpd.conf has this: <Directory "/"> Options All AllowOverride None Order allow,deny Allow from all </Directory> The target of the symbolic link has permissions which should allow apache to read anything it wants: [root@localhost billy]# ls -l total 40 (Some entries were omitted because the list was too long) drwxr-xr-x. 7 billy billy 4096 Sep 18 20:03 refresh-app I've also tried disabling SELinux by changing /etc/selinux/config : SELINUX=disabled Yet no matter what I do, when someone tries to go to that link, http://localhost/refresh-app/ , I get a 403 FORBIDDEN error page and this is written in /var/log/httpd/error_log : Symbolic link not allowed or link target not accessible Why can't Apache access the target of the symlink?
Found the issue. Turns out, Apache wants access not just to the directory I'm serving, /home/billy/refresh-app/ , but also to every directory above it, namely /home/billy/ , /home , and / . The reason is how *nix treats permissions for directory traversal: to open a file, a process needs the execute bit on every directory along the path, even directories it never lists, so the apache user must be able to traverse / , /home , and /home/billy on the way down to the served files.
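To inspect (and fix) the chain of permissions along the path, something like this works (a sketch; namei comes with util-linux):

    namei -m /home/billy/refresh-app     # print the mode of every component along the path
    chmod o+x /home/billy                # grant traversal (execute) where a component lacks it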
{ "source": [ "https://unix.stackexchange.com/questions/20993", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4885/" ] }
21,033
Better to explain with examples. I can: find . -name "*.py" -type f > output.txt But how can I store the output to the same file for: find . -name "*.py" -type f -exec grep "something" {} \; I can't just do find . -name "*.py" -type f -exec grep "something" {} \; > output.txt
If I understand you correctly this is what you want to do: find . -name '*.py' -print0 | xargs -0 grep 'something' > output.txt Find all files with extension .py , grep only rows that contain something and save the rows in output.txt . If the output.txt file exists, it will be truncated, otherwise it will be created. Using -exec : find . -name '*.py' -exec grep 'something' {} \; > output.txt I'm incorporating Chris Downs comment here: The above command will result in grep being executed as many times as find finds pathnames that passes the given tests (only the single -name test above). However, if you replace the \; with a + , grep is called with multiple pathnames from find (up to a certain limit). See question Using semicolon (;) vs plus (+) with exec in find for more on the subject.
{ "source": [ "https://unix.stackexchange.com/questions/21033", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9746/" ] }
21,041
I would like to use syslog to log messages coming from my PHP-based site. My question is - can I add a custom facility name? I know there are predefined facilities like: auth, authpriv, cron, daemon, kern, lpr, mail, mark, news, syslog, user, UUCP and local0 through local7. And as I understand it, I could use the local0 - local7 facilities for this. But I just feel that if I could add something like this to syslog: mySiteName.* /var/log/mySiteName.log it would be visually easier for others to understand. Unfortunately the above line results in: rsyslogd-3000: unknown facility name "mySiteName" So - is there a way to use a custom facility name?
The syslog interface only allows a fixed set of facilities, defined by constants in /usr/include/sys/syslog.h . The only provision for custom facilities is local0 through local7 . What you can do to separate the logs of various services is use filters that match on the origin or text of each log message. Rsyslog has fairly powerful filters (read about the properties available for matching ).
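A concrete sketch of the filter approach (the tag name and paths are illustrative): have the application tag its messages, then match on that tag in rsyslog. From a shell you can simulate the application with logger:

    logger -p local0.info -t mySiteName "test message"    # send a tagged message to syslog

    # in rsyslog.conf, a property-based filter instead of a facility selector:
    :syslogtag, startswith, "mySiteName"    /var/log/mySiteName.log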
{ "source": [ "https://unix.stackexchange.com/questions/21041", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2657/" ] }
21,076
I know that by using the "-A NUM" switch I can print specific number of trailing lines after each match. I am just wondering if it's possible to print trailing lines until a specific word is found after each match. e.g. When I search for "Word A" I want to see the line containing "Word A" and also the lines after it until the one containing "Word D". context: Word A Word B Word C Word D Word E Word F command: grep -A10 'Word A' I need this output: Word A Word B Word C Word D
It seems that you want to print the lines between ' Word A ' and ' Word D ' (inclusive). I suggest you use sed instead of grep . It lets you edit a range of the input stream which starts and ends with the patterns you want. You should just tell sed to print all lines in the range and no other lines: sed -n -e '/Word A/,/Word D/ p' file
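The same range notation also works in awk, whose default action is to print the matching lines:

    awk '/Word A/,/Word D/' file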
{ "source": [ "https://unix.stackexchange.com/questions/21076", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10867/" ] }
21,089
I am trying to control the volume from my own script. How can I do the following in Fedora 15 or Ubuntu Linux? Mute/unmute Volume up and volume down Note: I use both a USB microphone/speaker and an analogue microphone/speaker, and I want the commands to apply to all of them to be sure.
You can use amixer . It's in the alsa-utils package on Ubuntu and Debian. Run amixer without parameters to get an overview about your controls for the default device. You can also use alsamixer without parameters (from the same package) to get a more visual overview. Use F6 to see and switch between devices. Commonly, you might have PulseAudio and a hardware sound card to select from. Then use amixer with the set command to set the volume. For example, to set the master channel to 50%: amixer set Master 50% Master is the control name and should match one that you see when running without parameters. Note the % sign, without it, it will treat the value as a 0 - 65536 level. If PulseAudio is not your default device, you can use the -D switch: amixer -D pulse set Master 50% Other useful commands pointed out in the comments: To increase/decrease the volume use +/- after the number, use amixer set Master 10%+ amixer set Master 10%- To mute, unmute or toggle between muted/unmuted state, use amixer set Master mute amixer set Master unmute amixer set Master toggle Also note that there might be two different percentage scales, the default raw and for some devices a more natural scale based on decibel , which is also used by alsamixer . Use -M to use the latter. Finally, if you're interested only in PulseAudio, you might want to check out pactl (see one of the other answers).
{ "source": [ "https://unix.stackexchange.com/questions/21089", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
21,093
My question is basically the same as Only allow certain outbound traffic on certain interfaces . I have two interfaces eth1 (10.0.0.2) and wlan0 (192.168.0.2). My default route is for eth1 . Let's say I want all https-traffic to go through wlan0 . Now if I use the solution suggested in the other question, https traffic will go through wlan0 , but will still have the source-address of eth1 (10.0.0.2). Since this address is not routeable for the wlan0 gateway, answers won't ever come back. The easy way would be to just set the bind-addr properly in the application, but in this case it is not applicable. I figure I need to rewrite the src-addr: # first mark it so that iproute can route it through wlan0 iptables -A OUTPUT -t mangle -o eth1 -p tcp --dport 443 -j MARK --set-mark 1 # now rewrite the src-addr iptables -A POSTROUTING -t nat -o wlan0 -p tcp --dport 443 -j SNAT --to 192.168.0.2 Now tcpdump sees the outgoing packets just fine and ingoing packets arrive for 192.168.0.2, however they probably never end up in the application, because all I ever get to see, is that the application is resending the SYN-packet, although the SYN-ACK was already received. So I thought, maybe I need to rewrite the incoming address too: iptables -A PREROUTING -t nat -i wlan0 -p tcp --sport 443 -j DNAT --to 10.0.0.2 but that didn't work either. So I’m kind of stuck here. Any suggestions?
You're close. The actual reason that the application isn't seeing the return traffic is because of the kernel's built in IP spoofing protection. I.e., the return traffic doesn't match the routing table and is therefore dropped. You can fix this by turning off spoofing protection like this: sudo sysctl net.ipv4.conf.wlan0.rp_filter=0 But I wouldn't recommend it. The more proper way is to create an alternate routing instance. The mark is necessary. Keep it. Source NAT is also necessary. The final DNAT is unnecessary, so you can remove it. Make sure you have the iproute package installed. If you have the ip command then you're set (which it looks like you do, but if not get that first). Edit /etc/iproute2/rt_tables and add a new table by appending the following line: 200 wlan-route You then need to configure your new routing table named wlan-route with a default gateway and create rules to conditionally send traffic to that table. I'll assume your default gateway is 192.168.0.1. Naturally this needs to match your actual network, and not just my assumptions. ip route add default via 192.168.0.1 dev wlan0 table wlan-route ip rule add fwmark 0x1 table wlan-route Your final annotated script would look like this: # Populate secondary routing table ip route add default via 192.168.0.1 dev wlan0 table wlan-route # Anything with this fwmark will use the secondary routing table ip rule add fwmark 0x1 table wlan-route # Mark these packets so that iproute can route it through wlan-route iptables -A OUTPUT -t mangle -o eth1 -p tcp --dport 443 -j MARK --set-mark 1 # now rewrite the src-addr iptables -A POSTROUTING -t nat -o wlan0 -p tcp --dport 443 -j SNAT --to 192.168.0.2
{ "source": [ "https://unix.stackexchange.com/questions/21093", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10879/" ] }
21,147
This might be a really basic question but I want to understand it thoroughly. What is a pseudo terminal? (tty/pty) Why do we need them? How were they introduced and what was the need for them? Are they outdated? Do we not need them anymore? Is there anything that replaced them? Any useful use-case? What I did: Read the man pages - got some info but not the exact picture. Tried to read about them in Unix Network Programming by Richard Stevens. Got some info but not the "why" part.
What is a pseudo terminal? (tty/pty) A device that has the functions of a physical terminal without actually being one. Created by terminal emulators such as xterm. More detail is in the manpage pty(7). Why do we need them? How they got introduced and what was the need for it? Traditionally, UNIX has a concept of a controlling terminal for a group of processes, and many I/O functions are built with terminals in mind. Pseudoterminals handle, for example, some control characters like ^C. Are they outdated? Do we not need them anymore? Is there anything that replaced them? They are not outdated and are used in many programs, including ssh. Any useful use-case? ssh.
{ "source": [ "https://unix.stackexchange.com/questions/21147", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1358/" ] }
21,169
I need to search for something in the entire content of a directory tree. I am trying: find . | xargs grep word I get an error: xargs: unterminated quote How can I achieve this?
xargs expects input in a format that no other command produces, so it's hard to use effectively. What's going wrong here is that you have a file whose name must be quoted on input to xargs (probably containing a ' ). If your grep supports the -r or -R option for recursive search, use it. grep -r word . Otherwise, use the -exec primary of find . This is the usual way of achieving the same effect as xargs , except without constraints on file names. Reasonably recent versions of find allow you to group several files in a single call to the auxiliary command. Passing /dev/null to grep ensures that it will show the file name in front of each match, even if it happens to be called on a single file. find . -type f -exec grep word /dev/null {} + Older versions of find (on older systems or OpenBSD, or reduced utilities such as BusyBox) can only call the auxiliary command on one file at a time. find . -type f -exec grep word /dev/null {} \; Some versions of find and xargs have extensions that let them communicate correctly, using null characters to separate file names so that no quoting is required. These days, only OpenBSD has this feature without having -exec … {} + . find . -type f -print0 | xargs -0 grep word /dev/null
{ "source": [ "https://unix.stackexchange.com/questions/21169", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10968/" ] }
21,185
In VI, I know that if you do :some_number and hit enter, you will jump to the line specified by "some_number". Is there an equivalent for jumping to a specific character in a single line? Basically, I have a large csv and there is some characters that are breaking the parser; so I have to debug it. I'm getting an error message that basically says "unexpected character on line XXX character YYY". I know how to get to XXX but how do I get to YYY?
If you want the cursor on a particular column, the command n| , where n is the column number, and | is the pipe symbol, puts the cursor on the intended column in the line the cursor already appears on.
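So to land on line XXX, character YYY in one motion (the numbers below are illustrative), type the line jump followed by the column jump, or do it from the command line with Vim's cursor() function:

    120G45|                               # interactively: go to line 120, then column 45
    vim '+call cursor(120, 45)' file.csv  # open the file with the cursor already there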
{ "source": [ "https://unix.stackexchange.com/questions/21185", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10915/" ] }
21,199
I have recently been having problems running df , where it just hangs . Here's strace output, and in it, you'll see that I killed since it was just sitting there: $ strace /bin/df execve("/bin/df", ["/bin/df"], [/* 35 vars */]) = 0 brk(0) = 0x8d03000 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) mmap2(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7840000 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=90781, ...}) = 0 mmap2(NULL, 90781, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7829000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/i386-linux-gnu/i686/cmov/libc.so.6", O_RDONLY) = 3 read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\240o\1\0004\0\0\0"..., 512) = 512 fstat64(3, {st_mode=S_IFREG|0755, st_size=1401000, ...}) = 0 mmap2(NULL, 1415544, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb76cf000 mprotect(0xb7822000, 4096, PROT_NONE) = 0 mmap2(0xb7823000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x153) = 0xb7823000 mmap2(0xb7826000, 10616, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xb7826000 close(3) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb76ce000 set_thread_area({entry_number:-1 -> 6, base_addr:0xb76ce8d0, limit:1048575, seg_32bit:1, contents:0, read_exec_only:0, limit_in_pages:1, seg_not_present:0, useable:1}) = 0 mprotect(0xb7823000, 8192, PROT_READ) = 0 mprotect(0xb785e000, 4096, PROT_READ) = 0 munmap(0xb7829000, 90781) = 0 brk(0) = 0x8d03000 brk(0x8d24000) = 0x8d24000 open("/usr/lib/locale/locale-archive", O_RDONLY|O_LARGEFILE) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=1534656, ...}) = 0 mmap2(NULL, 1534656, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7557000 close(3) = 0 open("/etc/mtab", O_RDONLY) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=708, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb783f000 read(3, "/dev/sda6 / ext4 rw,errors=remou"..., 4096) = 708 read(3, "", 4096) = 0 close(3) = 0 munmap(0xb783f000, 4096) = 0 statfs64("/", 84, {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=4805813, f_bfree=3325193, f_bavail=3081072, f_files=1220608, f_ffree=1007617, f_fsid={-1624337824, -871214780}, f_namelen=255, f_frsize=4096}) = 0 open("/usr/share/locale/locale.alias", O_RDONLY) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=2570, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb783f000 read(3, "# Locale name alias data base.\n#"..., 4096) = 2570 read(3, "", 4096) = 0 close(3) = 0 munmap(0xb783f000, 4096) = 0 open("/usr/share/locale/en_ZA.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en_ZA/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en.utf8/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/share/locale/en/LC_MESSAGES/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory) statfs64("/lib/init/rw", 84, {f_type=0x1021994, f_bsize=4096, f_blocks=1280, f_bfree=1280, f_bavail=1280, f_files=215959, f_ffree=215956, f_fsid={0, 0}, f_namelen=255, f_frsize=4096}) = 0 statfs64("/run", 84, {f_type=0x1021994, f_bsize=4096, f_blocks=102000, f_bfree=101823, f_bavail=101823, f_files=215959, f_ffree=215559, f_fsid={0, 0}, f_namelen=255, 
f_frsize=4096}) = 0 statfs64("/sys", 84, {f_type="SYSFS_MAGIC", f_bsize=4096, f_blocks=0, f_bfree=0, f_bavail=0, f_files=0, f_ffree=0, f_fsid={0, 0}, f_namelen=255, f_frsize=4096}) = 0 statfs64("/dev", 84, {f_type=0x1021994, f_bsize=4096, f_blocks=508762, f_bfree=508762, f_bavail=508762, f_files=213490, f_ffree=213031, f_fsid={0, 0}, f_namelen=255, f_frsize=4096}) = 0 statfs64("/run/shm", 84, {f_type=0x1021994, f_bsize=4096, f_blocks=203999, f_bfree=203816, f_bavail=203816, f_files=215959, f_ffree=215955, f_fsid={0, 0}, f_namelen=255, f_frsize=4096}) = 0 statfs64("/dev/pts", 84, {f_type="DEVPTS_SUPER_MAGIC", f_bsize=4096, f_blocks=0, f_bfree=0, f_bavail=0, f_files=0, f_ffree=0, f_fsid={0, 0}, f_namelen=255, f_frsize=4096}) = 0 statfs64("/boot", 84, {f_type="EXT2_SUPER_MAGIC", f_bsize=1024, f_blocks=188403, f_bfree=150550, f_bavail=140822, f_files=48768, f_ffree=48525, f_fsid={-655942775, 1382872797}, f_namelen=255, f_frsize=1024}) = 0 statfs64("/home", 84, {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=66535124, f_bfree=6683145, f_bavail=3303357, f_files=16900096, f_ffree=16633097, f_fsid={-515912651, 307591087}, f_namelen=255, f_frsize=4096}) = 0 statfs64("/sys/fs/fuse/connections", 84, {f_type=0x65735543, f_bsize=4096, f_blocks=0, f_bfree=0, f_bavail=0, f_files=0, f_ffree=0, f_fsid={0, 0}, f_namelen=255, f_frsize=4096}) = 0 statfs64("/home/wena/temp/mount", 84, ^C <unfinished ...> Another tool that fails is gnome-system-monitor , which also seems to hang immediately after being launched.
I used sshfs to mount a directory from some ssh server, and my network connection was lost. It appears df was trying to list that mount and instead of failing gracefully, it just got stuck :(
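The usual fix for a stale FUSE/sshfs mount like this is to unmount it, lazily if necessary (the path below is the one from the strace):

    fusermount -u /home/wena/temp/mount       # normal FUSE unmount
    fusermount -u -z /home/wena/temp/mount    # lazy unmount if the first one hangs
    sudo umount -l /home/wena/temp/mount      # last resort, as root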
{ "source": [ "https://unix.stackexchange.com/questions/21199", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
21,251
In my CMS, I noticed that directories need the executable bit ( +x ) set for the user to open them. Why is the execute permission required to read a directory, and how do directory permissions in Linux work?
When applying permissions to directories on Linux, the permission bits have different meanings than on regular files. The read bit ( r ) allows the affected user to list the files within the directory The write bit ( w ) allows the affected user to create, rename, or delete files within the directory, and modify the directory's attributes The execute bit ( x ) allows the affected user to enter the directory, and access files and directories inside The sticky bit ( T , or t if the execute bit is set for others) states that files and directories within that directory may only be deleted or renamed by their owner (or root)
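A quick demonstration of the difference between the read and execute bits (the names are placeholders):

    mkdir d && echo hi > d/f
    chmod 111 d    # execute only
    cat d/f        # works: you may traverse d and open a name you already know
    ls d           # fails: listing the names requires the read bit
    chmod 444 d    # read only
    ls d           # the names list (though their attributes can't be stat'ed)
    cat d/f        # fails: without the execute bit you can't reach files inside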
{ "source": [ "https://unix.stackexchange.com/questions/21251", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8467/" ] }
21,280
Possible Duplicate: What is the exact difference between a 'terminal', a 'shell', a 'tty' and a 'console'? I always see pts and tty when I use the who command but I never understand how they are different? Can somebody please explain me this?
A tty is a native terminal device, the backend is either hardware or kernel emulated. A pty (pseudo terminal device) is a terminal device which is emulated by an other program (example: xterm , screen , or ssh are such programs). A pts is the slave part of a pty . (More info can be found in man pty .) Short summary : A pty is created by a process through posix_openpt() (which usually opens the special device /dev/ptmx ), and is constituted by a pair of bidirectional character devices: The master part, which is the file descriptor obtained by this process through this call, is used to emulate a terminal. After some initialization, the second part can be unlocked with unlockpt() , and the master is used to receive or send characters to this second part (slave). The slave part, which is anchored in the filesystem as /dev/pts/x (the real name can be obtained by the master through ptsname() ) behaves like a native terminal device ( /dev/ttyx ). In most cases, a shell is started that uses it as a controlling terminal.
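You can watch the pieces from a shell (illustrative; device numbers vary):

    tty                # inside a terminal emulator or ssh session: /dev/pts/N (a pty slave)
    ls -l /dev/ptmx    # the pseudo terminal master multiplexer opened via posix_openpt()
    ls /dev/pts        # one slave device per pseudo terminal currently open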
{ "source": [ "https://unix.stackexchange.com/questions/21280", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3979/" ] }
21,297
I am using crontab for the first time. I want to write a few very simple test cron tasks and run them. $crontab * * * * * echo "Hi" doesn't produce anything. crontab */1 * * * * echo "hi" says */1: No such file or directory . Also, how do I list the currently running cron tasks (not just the ones I own, but also the ones started by other users such as root)? And how do I delete a particular cron task?
You can't use crontab like that. Use man crontab to read about the correct way of calling this utility. You'll want to use crontab -e to edit the current user's cron entries (you can add/modify/remove lines). Use crontab -l to see the current list of configured tasks. As for seeing other user's crontabs, that's not possible without being root on default installations. See How do I list all cron jobs for all users for some ways to list everything (as root). Note: be very careful when you use shell globbing characters on the command line ( * and ? especially). * will be expanded to the list of files in the current directory, which can have unexpected effects. If you want to pass * as an argument to something, quote it ( '*' ).
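Once inside crontab -e, a minimal test entry looks like this (the log path is just an example); each line is five time fields followed by the command:

    # m h dom mon dow  command
    */5 * * * * echo "hi" >> /tmp/cron-test.log 2>&1

Save and exit the editor, and cron picks up the new table automatically.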
{ "source": [ "https://unix.stackexchange.com/questions/21297", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10968/" ] }
21,298
I am using RHEL 5.4 I killed the cron daemon accidentally. I wanted to stop a cron task, didn't know how to do it, ended up killing the cron daemon itself. How do I start it again?
{ "source": [ "https://unix.stackexchange.com/questions/21298", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10968/" ] }
21,334
I know you can use write to send a message to a currently logged in user, but how do you leave a message for a user who is not logged in? The solution I have seen is modify the motd, but that will be displayed to all users. How can I leave a message for individual users to read when they login?
You can use the mail command to send a message to user jdoe like this: mail -s "The subject goes here" jdoe You will enter an interactive environment where you can type your message (mail body). Type Control-D in the beginning of a line to end the message and send it (you will be asked for an optional CC recipient - just hit enter if you don't want one). You can also do: mail -s "The subject goes here" jdoe < textfile or echo "John, please don't forget our meeting" | mail -s "Reminder" jdoe The next time jdoe logs in, he will receive a notification like "You have new mail" and he must type mail to read it (perhaps this is a drawback if the user doesn't know he must do this).
{ "source": [ "https://unix.stackexchange.com/questions/21334", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4092/" ] }
21,363
What is the best way to execute a script when entering into a directory? When I move into a new directory I would like bash to execute the projectSettings.bash script much like RVM does.
You can make cd a function (and pop and pushd ), and make it detect if you enter that particular directory. cd () { builtin cd "$@" && chpwd; } pushd () { builtin pushd "$@" && chpwd; } popd () { builtin popd "$@" && chpwd; } unset_all_project_settings () { # do whatever it takes to undo the effect of projectSettings.bash, # e.g. unset variables, remove PATH elements, etc. } chpwd () { case $PWD in /some/directory|/some/other/directory) . ./projectSettings.bash;; *) unset_all_project_settings;; esac } Do not do this in directories that you haven't whitelisted, because it would make it very easy for someone to trick you into running arbitrary code — send you an archive, so you unzip it, change into the directory it created, and you've now run the attacker's code. I don't recommend this approach, because it means the script will be executed even if you enter that directory for some reason that's unrelated to working on the project. I suggest having a specific function that changes to the project directory and sources the settings script. myproj () { cd /some/directory && . ./projectSettings.bash }
{ "source": [ "https://unix.stackexchange.com/questions/21363", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9110/" ] }
21,471
Is there a way to limit the number of files listed by an ls command? I've seen: ls | head -4 but to get head or tail to execute I need to wait for ls to finish, and with directories containing an enormous number of files that can take considerable time. I wish to execute an ls command that limits the output without using that head command.
Have you tried ls -U | head -4 This should skip the sorting, which is probably why ls is taking so long. https://stackoverflow.com/questions/40193/quick-ls-command
{ "source": [ "https://unix.stackexchange.com/questions/21471", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7048/" ] }
21,473
I have an issue where most often my screensaver will not activate and it's somewhat hard to determine what's preventing it. Is there a way that I can view whatever processes are asking the OS not to display the screensaver? I'm on Linux Mint 11.
{ "source": [ "https://unix.stackexchange.com/questions/21473", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
21,548
I used to use cat to view files. Then I learned that less is usually better, and is a must if the file is longer than a few dozen rows. My question: Is there ever a reason to use cat instead of less ? Is there any situation where cat is a better solution?
Although both commands allow you to view the content of a file, their original purposes are quite different. less extends the capabilities of more . The latter was created to view the content of a file one screenful at a time. less adds features such as backward movements and better memory management (no need to read the entire file before being able to see the first lines). cat concatenates files and prints the result on the standard output. If you provide only one file, you will see the content of that file. It becomes 'powerful' when you provide multiple files. A good example is the combination of split and cat. The first command will divide a large file in small portions. The second one will then concatenate the small portions into a single file. Back to your question, cat would be preferred in an autonomous script requiring files to be read entirely (or concatenated) without interaction. In terms of file viewing, I think it's more a question of taste.
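A sketch of that split/cat round trip (the file names and chunk size are placeholders):

    split -b 100M bigfile part_        # divide bigfile into 100 MB pieces part_aa, part_ab, ...
    cat part_* > bigfile.restored      # concatenate the pieces back together
    cmp bigfile bigfile.restored       # no output means the two files are identical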
{ "source": [ "https://unix.stackexchange.com/questions/21548", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11086/" ] }
21,598
This is irritating me. I've seen several suggestions (all using different files and syntax) and none of them worked. How do I set an environment variable for a specific user? I am on Debian Squeeze. What is the exact syntax I should put in the file to make ABC = "123"?
You have to put the declaration in the initialization files of your shell:

- If you are using bash, ash, ksh or some other Bourne-style shell, you can add

  ABC="123"; export ABC

  in your .profile file ( ${HOME}/.profile ). This is the default situation on most Unix installations, and in particular on Debian. If your login shell is bash, you can use .bash_profile ( ${HOME}/.bash_profile ) or .bash_login instead. Note: If either of these files exists and your login shell is bash, .profile is not read when you log in over ssh or on a text console, but it might still be read instead of .bash_profile if you log in from the GUI. Also, if there is no .bash_profile , then use .bashrc . If you've set zsh as your login shell, use ~/.zprofile instead of ~/.profile .

- If you are using tcsh, add

  setenv ABC "123"

  in the .login file ( ${HOME}/.login ).

- If you are using another shell, look at the shell manual for how to define environment variables and which files are executed at shell startup.
{ "source": [ "https://unix.stackexchange.com/questions/21598", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
21,638
I'm using the command ls -a | grep '^\.' for showing only the hidden files. I added the line

alias hidden='ls -a | grep '^\.''   # show only hidden files

to my .bash_aliases file, but it does not work. The problem is probably the nesting of the ' character. Could you please help me write the correct alias?
Have the shell list the dot files, and tell ls not to see through directories: ls -d .*
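To keep the alias form from the question, note that the quoting problem disappears because this command contains no nested single quotes; in .bash_aliases:

alias hidden='ls -d .*'   # show only hidden files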
{ "source": [ "https://unix.stackexchange.com/questions/21638", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6215/" ] }
21,678
This is the command I am using to list some files:

find . -name \*.extract.sys -size +1000000c -exec ls -lrt {} \;
-rw-r--r-- 1 qa1wrk15 test 1265190 Sep 29 01:14 ./var/can/projs/ar/rep/extract/Sep/29/ar.ARAB-PI_7.20110929.extract.sys
-rw-r--r-- 1 qa1wrk15 test 1345554 Sep 29 01:14 ./var/can/projs/ar/rep/extract/Sep/29/ar.ARAB-PI_2.20110929.extract.sys
-rw-r--r-- 1 qa1wrk15 test 1370532 Sep 29 01:14 ./var/can/projs/ar/rep/extract/Sep/29/ar.ARAB-PI_3.20110929.extract.sys
-rw-r--r-- 1 qa1wrk15 test 1399854 Sep 29 01:14 ./var/can/projs/ar/rep/extract/Sep/29/ar.ARAB-PI_8.20110929.extract.sys

and so on. Now I want to calculate the total size of these files by summing up the 5th column, and I thought of using awk for this, so I tested the following in a particular directory:

>ls -lrt | awk `{ print $1 }`
ksh: syntax error at line 1 : `{' unmatched

I don't understand what the problem is and why this syntax error appears. I am also thinking of trying

ls -lrt | awk `BEGIN {total = 0} {for(i=0;i<NR;i++){total+=$5}} END {printf "%d",total}

but even a simple awk script is not working. Please suggest a fix, correct me if I am wrong, or point out a workaround for this.
First of all, you should use straight single quotes ( ' ), not the inclined ones ( ` ). The awk inline script could be as follow: ls -lrt | awk '{ total += $5 }; END { print total }' so, no need to initialize total ( awk initializes it to zero), and no need to loop, awk already executes the script on every line of input.
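Applied to the original find command, the whole pipeline might look like this (untested, but it only combines the pieces above):

find . -name \*.extract.sys -size +1000000c -exec ls -lrt {} \; |
    awk '{ total += $5 }; END { print total }'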
{ "source": [ "https://unix.stackexchange.com/questions/21678", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8032/" ] }
21,680
I'm learning matlab and using emacs to edit .m files. Is there a method in Emacs to call matlab to run programs?
First of all, you should use straight single quotes ( ' ), not the inclined ones ( ` ). The awk inline script could be as follow: ls -lrt | awk '{ total += $5 }; END { print total }' so, no need to initialize total ( awk initializes it to zero), and no need to loop, awk already executes the script on every line of input.
{ "source": [ "https://unix.stackexchange.com/questions/21680", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11137/" ] }
21,689
I've installed jdk1.7.0.rpm package in RHEL6. Where I do find the path to execute my first java program?
Try either of the two: $ which java $ whereis java For your first java program read this tutorial: "Hello World!" for Solaris OS and Linux
{ "source": [ "https://unix.stackexchange.com/questions/21689", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11141/" ] }
21,705
I want to check, from the linux command line, if a given cleartext password is the same of a crypted password on a /etc/shadow (I need this to authenticate web users. I'm running an embedded linux.) I have access to the /etc/shadow file itself.
You can easily extract the encrypted password with awk. You then need to extract the prefix $algorithm$salt$ (assuming that this system isn't using the traditional DES, which is strongly deprecated because it can be brute-forced these days).

correct=$(</etc/shadow awk -v user=bob -F : 'user == $1 {print $2}')
prefix=${correct%"${correct#\$*\$*\$}"}

For password checking, the underlying C function is crypt , but there's no standard shell command to access it. On the command line, you can use a Perl one-liner to invoke crypt on the password.

supplied=$(echo "$password" |
           perl -e '$_ = <STDIN>; chomp; print crypt($_, $ARGV[0])' "$prefix")
if [ "$supplied" = "$correct" ]; then …

Since this can't be done in pure shell tools, if you have Perl available, you might as well do it all in Perl. (Or Python, Ruby, … whatever you have available that can call the crypt function.) Warning, untested code.

#!/usr/bin/env perl
use warnings;
use strict;
my @pwent = getpwnam($ARGV[0]);
if (!@pwent) {die "Invalid username: $ARGV[0]\n";}
my $supplied = <STDIN>;
chomp($supplied);
if (crypt($supplied, $pwent[1]) eq $pwent[1]) {
    exit(0);
} else {
    print STDERR "Invalid password for $ARGV[0]\n";
    exit(1);
}

On an embedded system without Perl, I'd use a small, dedicated C program. Warning, typed directly into the browser, I haven't even tried to compile. This is meant to illustrate the necessary steps, not as a robust implementation!

/* Usage: echo password | check_password username */
/* Link with -lcrypt; on glibc you may also need #include <crypt.h> */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <pwd.h>
#include <shadow.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    char password[100];
    struct spwd *shadow_entry;
    char *p, *correct, *supplied, *salt;

    if (argc < 2) return 2;

    /* Read the password from stdin and strip the trailing newline */
    p = fgets(password, sizeof(password), stdin);
    if (p == NULL) return 2;
    password[strcspn(password, "\n")] = 0;

    /* Read the correct hash from the shadow entry */
    shadow_entry = getspnam(argv[1]);
    if (shadow_entry == NULL) return 1;
    correct = shadow_entry->sp_pwdp;

    /* Extract the salt ($algorithm$salt$). Remember to free the memory. */
    salt = strdup(correct);
    if (salt == NULL) return 2;
    p = strchr(salt + 1, '$');
    if (p == NULL) return 2;
    p = strchr(p + 1, '$');
    if (p == NULL) return 2;
    p[1] = 0;

    /* Encrypt the supplied password with the salt and compare the results */
    supplied = crypt(password, salt);
    if (supplied == NULL) return 2;
    return !!strcmp(supplied, correct);
}

A different approach is to use an existing program such as su or login . In fact, if you can, it would be ideal to arrange for the web application to perform whatever it needs via su -c somecommand username . The difficulty here is to feed the password to su ; this requires a terminal. The usual tool to emulate a terminal is expect , but it's a big dependency for an embedded system. Also, while su is in BusyBox, it's often omitted because many of its uses require the BusyBox binary to be setuid root. Still, if you can do it, this is the most robust approach from a security point of view.
{ "source": [ "https://unix.stackexchange.com/questions/21705", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6001/" ] }
21,742
I frequently end up with more than ten windows in tmux. Later on, I close some of my older ones. Is there a way to renumber, say window 15, to window 3 (which doesn't exist anymore)? Or to pack them all up again, so that there are no empty slots? I'd like to do this because it is difficult to jump to higher numbered windows, because you can't do Ctrl+B, 15 . I have to use Ctrl+B, w to list the windows and then type the letter corresponding to the window I want to open. I know that I can swap windows. For example, I could create a new window ( Ctrl+B, c ) which would open in the empty slot 3. I can then swapw window 15 and window 3 and then close window 15. Obviously, this is a tedious approach. How do you manage many windows in tmux?
Looks like you need this: move-window [-rdk] [-s src-window] [-t dst-window] (alias: movew) This is similar to link-window, except the window at src-window is moved to dst-window. With -r, all windows in the session are renumbered in sequential order, respecting the base-index option. Calling movew without parameters moves current window to first free position. movew -r will renumber all the windows at once.
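If your tmux is new enough (1.7 or later, if memory serves), there is also an option that renumbers automatically every time a window is closed; put this in ~/.tmux.conf:

set-option -g renumber-windows on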
{ "source": [ "https://unix.stackexchange.com/questions/21742", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/872/" ] }
21,752
Does this concept only apply to terminal drivers (which is what most sites cover) or to any driver in general?
The terms raw and cooked only apply to terminal drivers. "Cooked" is called canonical and "raw" is called non-canonical mode. The terminal driver is, by default a line-based system: characters are buffered internally until a carriage return ( Enter or Return ) before it is passed to the program - this is called "cooked". This allows certain characters to be processed (see stty(1) ), such as Ctrl D , Ctrl S , Ctrl U , Backspace ); essentially rudimentary line-editing. The terminal driver "cooks" the characters before serving them up. The terminal can be placed into "raw" mode where the characters are not processed by the terminal driver, but are sent straight through (it can be set that INTR and QUIT characters are still processed). This allows programs like emacs and vi to use the entire screen more easily. You can read more about this in the "Canonical mode" section of the termios(3) manpage.
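You can poke at these modes from the shell with stty(1). A small experiment, assuming a reasonably standard stty; run stty sane if the terminal ever ends up in a strange state:

$ stty -icanon min 1 time 0   # leave canonical ("cooked") mode
$ dd bs=1 count=1             # returns after a single keystroke, no Enter needed
$ stty icanon                 # back to cooked mode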
{ "source": [ "https://unix.stackexchange.com/questions/21752", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2869/" ] }
21,764
I am trying to use grep with a regex to find lines in a file that match 1 of 2 possible strings. Here is my grep: $ grep "^ID.*(ETS|FBS)" my_file.txt The above grep returns no results. However if I execute either: $ grep "^ID.*ETS" my_file.txt or $ grep "^ID.*FBS" my_file.txt I do match specific lines. Why is my OR regex not matching? Thanks in advance for the help!
With normal regex, the characters ( , | and ) need to be escaped. So you should use $ grep "^ID.*\(ETS\|FBS\)" my_file.txt You don't need the escapes when you use the extended regex ( -E )option. See man grep , section " Basic vs Extended Regular Expressions ".
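With -E (or the traditional egrep ) the original pattern works exactly as you wrote it, no backslashes needed:

$ grep -E "^ID.*(ETS|FBS)" my_file.txt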
{ "source": [ "https://unix.stackexchange.com/questions/21764", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8713/" ] }
21,778
I was wondering what the difference between these two were: ~/somedirectory/file.txt and ~/.somedirectory/file.txt It's really difficult to ask this on Google since I didn't know how to explain the . when I didn't even know what to call it. But can someone describe the difference between including the dot and excluding it?
Under unix-like systems, all directories contain two entries, . and .. , which stand for the directory itself and its parent respectively. These entries are not interesting most of the time, so ls hides them, and shell wildcards like * don't include them. More generally, ls and wildcards hide all files whose name begins with a . ; this is a simple way to exclude . and .. and allow users to hide other files from listings. Other than being excluded from listings, there's nothing special about these files. Unix stores per-user configuration files in the user's home directory. If all configuration files appeared in file listings, the home directory would be cluttered with files that users don't care about every day. So configuration files always begin with a . : typically, the configuration file for the application Foo is called something like .foo or .foorc . For this reason, user configuration files are often known as dot files .
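A quick demonstration in an empty scratch directory:

$ touch visible .hidden
$ ls
visible
$ ls -a
.  ..  .hidden  visible
$ echo *
visible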
{ "source": [ "https://unix.stackexchange.com/questions/21778", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11195/" ] }
21,788
Is there a shortcut in bash and zsh to delete one component of a path? For example, if I type ls ~/local/color/ , and the cursor is at the end of line, is there a shortcut to delete the color/ at the end? Ideally I want solutions in both vi-mode and emacs-mode
The most commonly used commands in the default bash emacs mode, for most commonly used keyboards:

Movement
Ctrl - p , or Up : previous command
Ctrl - n , or Down : next command
Ctrl - b , or Left : previous character
Ctrl - f , or Right : next character
Alt - b : previous word
Alt - f : next word
Ctrl - a , or Home : beginning of command
Ctrl - e , or End : end of command

Editing
BkSpc : delete previous character
Ctrl - d , or Del : delete current character
Alt - BkSpc : delete word to left
Alt - d : delete word to right
Ctrl - u : delete to start of command
Ctrl - k : delete to end of command
Ctrl - y : paste last cut

Miscellanea
Ctrl - / : undo
Ctrl - r : incremental backward history search
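For vi editing mode ( set -o vi ), which the question also asks about, the usual vi motions should work after pressing Esc to leave insert mode. This is from memory, so treat it as a hint rather than gospel:
Esc then dB : delete back one whitespace-delimited WORD, which removes the whole ~/local/color/
Esc then db : delete back one word; since / counts as punctuation, this typically removes just the last path component
Press i or a afterwards to resume inserting text.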
{ "source": [ "https://unix.stackexchange.com/questions/21788", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11192/" ] }
21,793
There's a bzip2 process running in the background and I have no idea where it came from. It's eating up a lot of resources. Can I do a reverse lsof to see which files are being accessed by this process? I've suspended the process for the time being.
I'm not sure why that'd be a "reverse lsof " -- lsof does exactly that. You can pass it the -p flag to specify which PIDs to include/exclude in the results: $ lsof -p $(pidof bzip2)
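One caveat, assuming a GNU/Linux userland: pidof prints space-separated PIDs when several instances match, while lsof -p expects a comma-separated list, so with multiple bzip2 processes you may need:

$ lsof -p "$(pidof bzip2 | tr ' ' ',')"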
{ "source": [ "https://unix.stackexchange.com/questions/21793", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7596/" ] }
21,838
150 directories I can handle but 900 files is too many for a review. I've no manual entry for tree so maybe I can ask you if you know how to output only directories since files get too detailed? . ├── agreement.htm ├── appengine_admin │   ├── admin_forms.py │   ├── admin_forms.pyc │   ├── admin_settings.py │   ├── admin_settings.pyc │   ├── admin_widgets.py │   ├── admin_widgets.pyc │   ├── authorized.py │   ├── authorized.pyc │   ├── db_extensions.py │   ├── encoding.py │   ├── __init__.py │   ├── __init__.pyc │   ├── media │   │   ├── images │   │   │   ├── default-bg.gif │   │   │   ├── icon_calendar.gif │   │   │   ├── icon_clock.gif │   │   │   ├── nav-bg.gif │   │   │   └── sidebar-li.png │   │   ├── js │   │   │   ├── calendar.js │   │   │   ├── core.js │   │   │   └── DateTimeShortcuts.js │   │   └── style.css │   ├── model_register.py │   ├── model_register.pyc │   ├── templates │   │   ├── 404.html │   │   ├── 500.html │   │   ├── admin_base.html │   │   ├── index.html │   │   ├── model_item_edit.html │   │   └── model_item_list.html │   ├── utils.py │   ├── utils.pyc │   ├── views.py │   └── views.pyc ├── appengine_config.py ├── appengine_config.py.bak ├── appengine_config.pyc ├── app.yaml ├── br.py ├── br.py.bak ├── br.py.old ├── captcha.py ├── captcha.pyc ├── common │   ├── __init__.py │   ├── __init__.pyc │   ├── templatefilters.py │   └── templatefilters.pyc ├── conf │   ├── __init__.py │   ├── __init__.pyc │   ├── locale │   │   ├── ar │   │   │   └── LC_MESSAGES │   │   │   ├── django.mo │   │   │   └── django.po │   │   ├── bg │   │   │   └── LC_MESSAGES │   │   │   ├── django.mo │   │   │   └── django.po │   │   ├── en │   │   │   └── LC_MESSAGES │   │   │   ├── django.mo │   │   │   └── django.po │   │   ├── es │   │   │   └── LC_MESSAGES │   │   │   ├── django.mo │   │   │   └── django.po │   │   ├── fi │   │   │   └── LC_MESSAGES │   │   │   ├── django.mo │   │   │   └── django.po │   │   ├── fr │   │   │   └── LC_MESSAGES │   │   │   ├── django.mo │   │   │   ├── django.po │   │   │   └── django.po~ │   │   ├── ja │   │   │   └── LC_MESSAGES │   │   │   ├── django.mo │   │   │   └── django.po │   │   ├── pt │   │   │   └── LC_MESSAGES │   │   │   ├── django.mo │   │   │   ├── django.po │   │   │   └── django.po~ │   │   ├── ro │   │   │   └── LC_MESSAGES │   │   │   ├── django.mo │   │   │   └── django.po │   │   ├── ru │   │   │   └── LC_MESSAGES │   │   │   ├── django.mo │   │   │   └── django.po │   │   ├── sq │   │   │   └── LC_MESSAGES │   │   │   ├── django.mo │   │   │   └── django.po │   │   ├── sv │   │   │   └── LC_MESSAGES │   │   │   ├── django.mo │   │   │   └── django.po │   │   ├── tl │   │   │   └── LC_MESSAGES │   │   │   ├── django.mo │   │   │   └── django.po │   │   ├── tr │   │   │   └── LC_MESSAGES │   │   │   ├── django.mo │   │   │   └── django.po │   │   └── zh │   │   └── LC_MESSAGES │   │   ├── django.mo │   │   └── django.po │   └── settings.py ├── credit │   └── credit.html ├── cron.yaml ├── demjson.py ├── demjson.pyc ├── errorpages.py ├── facebookapi.py ├── facebookapi.py.bak ├── facebookapi.pyc ├── facebookconf.py ├── facebookconf.pyc ├── geo │   ├── geocell.py │   ├── geocell.pyc │   ├── geocell_test.py │   ├── geomath.py │   ├── geomath.pyc │   ├── geomath_test.py │   ├── geomodel.py │   ├── geomodel.pyc │   ├── geotypes.py │   ├── geotypes.pyc │   ├── geotypes_test.py │   ├── __init__.py │   ├── __init__.pyc │   ├── test_coverage.sh │   ├── util.py │   ├── util.pyc │   └── util_test.py ├── help.html ├── i18n.py ├── i18n.py.bak ├── 
i18n.pyc ├── index.yaml ├── __init__.py ├── in.old.py ├── in.old.py.bak ├── in.py ├── in.py.bak ├── in.tidy.py ├── javascript.py ├── js │   ├── a.css │   ├── b.css │   ├── c.css │   ├── d.css │   ├── listcss.css │   ├── load.js │   ├── main.css │   └── mycss.css ├── jsmin.py ├── json.py ├── json.pyc ├── listfiles.py ├── login_required.py ├── login_required.py.bak ├── mailman.py ├── main.old.py ├── main.py ├── main.py.bak ├── main.pyc ├── mapreduce │   ├── base_handler.py │   ├── base_handler.py.bak │   ├── context.py │   ├── context.py.bak │   ├── control.py │   ├── control.py.bak │   ├── errors.py │   ├── errors.py.bak │   ├── handlers.py │   ├── handlers.py.bak │   ├── hooks.py │   ├── hooks.py.bak │   ├── __init__.py │   ├── __init__.py.bak │   ├── __init__.pyc │   ├── input_readers.py │   ├── input_readers.py.bak │   ├── lib │   │   ├── blobstore │   │   ├── files │   │   │   ├── blobstore.py │   │   │   ├── crc32c.py │   │   │   ├── file.py │   │   │   ├── file_service_pb.py │   │   │   ├── __init__.py │   │   │   ├── records.py │   │   │   └── testutil.py │   │   ├── graphy │   │   │   ├── backends │   │   │   │   ├── google_chart_api │   │   │   │   │   ├── encoders.py │   │   │   │   │   ├── __init__.py │   │   │   │   │   └── util.py │   │   │   │   └── __init__.py │   │   │   ├── bar_chart.py │   │   │   ├── common.py │   │   │   ├── formatters.py │   │   │   ├── __init__.py │   │   │   ├── line_chart.py │   │   │   ├── pie_chart.py │   │   │   ├── README │   │   │   └── util.py │   │   ├── __init__.py │   │   ├── key_range │   │   │   └── __init__.py │   │   ├── pipeline │   │   │   ├── common.py │   │   │   ├── handlers.py │   │   │   ├── __init__.py │   │   │   ├── models.py │   │   │   ├── pipeline.py │   │   │   ├── simplejson │   │   │   │   ├── decoder.py │   │   │   │   ├── encoder.py │   │   │   │   ├── __init__.py │   │   │   │   ├── ordered_dict.py │   │   │   │   ├── scanner.py │   │   │   │   └── tool.py │   │   │   ├── testutil.py │   │   │   ├── ui │   │   │   │   ├── common.css │   │   │   │   ├── common.js │   │   │   │   ├── images │   │   │   │   │   ├── treeview-black.gif │   │   │   │   │   ├── treeview-black-line.gif │   │   │   │   │   ├── treeview-default.gif │   │   │   │   │   └── treeview-default-line.gif │   │   │   │   ├── jquery-1.4.2.min.js │   │   │   │   ├── jquery.ba-hashchange.min.js │   │   │   │   ├── jquery.cookie.js │   │   │   │   ├── jquery.json.min.js │   │   │   │   ├── jquery.timeago.js │   │   │   │   ├── jquery.treeview.css │   │   │   │   ├── jquery.treeview.min.js │   │   │   │   ├── status.css │   │   │   │   ├── status.html │   │   │   │   └── status.js │   │   │   └── util.py │   │   └── simplejson │   │   ├── decoder.py │   │   ├── encoder.py │   │   ├── __init__.py │   │   ├── README │   │   └── scanner.py │   ├── main.py │   ├── main.py.bak │   ├── mapper_pipeline.py │   ├── mapper_pipeline.py.bak │   ├── mapreduce_pipeline.py │   ├── mapreduce_pipeline.py.bak │   ├── model.py │   ├── model.py.bak │   ├── namespace_range.py │   ├── namespace_range.py.bak │   ├── operation │   │   ├── base.py │   │   ├── base.pyc │   │   ├── counters.py │   │   ├── counters.pyc │   │   ├── db.py │   │   ├── db.pyc │   │   ├── __init__.py │   │   └── __init__.pyc │   ├── output_writers.py │   ├── output_writers.py.bak │   ├── quota.py │   ├── quota.py.bak │   ├── shuffler.py │   ├── shuffler.py.bak │   ├── static │   │   ├── base.css │   │   ├── detail.html │   │   ├── jquery-1.4.2.min.js │   │   ├── overview.html │   │   └── status.js │   ├── 
status.py │   ├── status.py.bak │   ├── test_support.py │   ├── test_support.py.bak │   ├── util.py │   └── util.py.bak ├── mapreduce.yaml ├── market │   ├── categories.html │   ├── credit.html │   ├── __init__.py │   ├── market_ad_detail.html │   ├── market_ad_edit.html │   ├── market_ad_newpasswd.html │   ├── market_ad_pay.html │   ├── market_ad_preview.html │   ├── market_ad_remove.html │   ├── market_ad_renew.html │   ├── market_detail.html │   ├── market_full.html │   ├── market_list.html │   ├── market_mailc2c.html │   ├── market_search.html │   ├── market_usersads_list.html │   ├── models.py │   ├── nav.html │   └── publish.html ├── onlinedebug │   ├── codeinput.html │   └── onlinedebug.py ├── paginator.py ├── paginator.pyc ├── PythonTidy-1.20.py ├── PythonTidy-1.20.py.bak ├── reindent.py ├── remote_api.py ├── reports.py ├── settings.py ├── settings.pyc ├── static │   ├── base.css │   ├── button-background.gif │   ├── cancel.png │   ├── challenge │   ├── channel.html │   ├── codebase │   │   ├── dhtmlxcommon.js │   │   ├── dhtmlxtree.css │   │   ├── dhtmlxtree.js │   │   └── imgs │   │   ├── blank.gif │   │   ├── but_cut.gif │   │   ├── folderClosed.gif │   │   ├── folderOpen.gif │   │   ├── iconCheckAll.gif │   │   ├── iconCheckDis.gif │   │   ├── iconCheckGray.gif │   │   ├── iconUncheckAll.gif │   │   ├── iconUncheckDis.gif │   │   ├── leaf.gif │   │   ├── line1.gif │   │   ├── line1_rtl.gif │   │   ├── line2.gif │   │   ├── line2_rtl.gif │   │   ├── line3.gif │   │   ├── line3_rtl.gif │   │   ├── line4.gif │   │   ├── line4_rtl.gif │   │   ├── line.gif │   │   ├── lock.gif │   │   ├── minus2.gif │   │   ├── minus2_rtl.gif │   │   ├── minus3.gif │   │   ├── minus3_rtl.gif │   │   ├── minus4.gif │   │   ├── minus4_rtl.gif │   │   ├── minus5.gif │   │   ├── minus5_rtl.gif │   │   ├── minus_ar.gif │   │   ├── minus.gif │   │   ├── plus2.gif │   │   ├── plus2_rtl.gif │   │   ├── plus3.gif │   │   ├── plus3_rtl.gif │   │   ├── plus4.gif │   │   ├── plus4_rtl.gif │   │   ├── plus5.gif │   │   ├── plus5_rtl.gif │   │   ├── plus_ar.gif │   │   ├── plus.gif │   │   ├── radio_off.gif │   │   ├── radio_on.gif │   │   └── Thumbs.db │   ├── comet-helper.js │   ├── comet.js │   ├── common.js │   ├── credit.html │   ├── crossajax.html │   ├── css │   │   ├── 960.css │   │   ├── business.css │   │   ├── montao.css │   │   ├── openid.css │   │   ├── reset.css │   │   ├── style.css │   │   ├── text.css │   │   ├── uni-form.css │   │   └── uni-form-generic.css │   ├── extinfowindow.js │   ├── failure.png │   ├── favicon.ico │   ├── favindex.ico │   ├── fbjs.js │   ├── feedicon.gif │   ├── file.html │   ├── for_sale_files │   │   ├── 1x1_pages_li_c6_apuk_qh5569.gif │   │   ├── all_pages.js │   │   ├── arrays_v2.js │   │   ├── bevakning_mini.gif │   │   ├── common_in.css │   │   ├── common.js │   │   ├── ga.js │   │   ├── generic.css │   │   ├── jquery-1.js │   │   ├── list.css │   │   ├── searchbox.js │   │   ├── thumb_extra_left_bottom.gif │   │   ├── thumb_extra_right_bottom.gif │   │   ├── thumb_extra_right_top.gif │   │   ├── thumb_left_top.gif │   │   ├── thumb_single_left_bottom.gif │   │   ├── thumb_single_right_bottom.gif │   │   ├── thumb_single_right_top.gif │   │   ├── transparent.gif │   │   └── xtcore.js │   ├── frames.html │   ├── ga.js │   ├── geo.html │   ├── googleb4b3b9748fe57cbf.html │   ├── greybox.js │   ├── iframe.js │   ├── images │   │   ├── 1.jpg │   │   ├── 2.jpg │   │   ├── 3.jpg │   │   ├── dl.jpg │   │   ├── go.jpg │   │   ├── openid-providers-en.png │   │   ├── 
openid-providers-ru.png │   │   ├── openid-providers-uk.png │   │   ├── p_debug1.JPG │   │   ├── p_debug2.JPG │   │   ├── p_debug3_1.JPG │   │   ├── p_debug3_2.JPG │   │   ├── p_debug3.JPG │   │   ├── p_listfiles.JPG │   │   ├── preview.jpg │   │   ├── Thumbs.db │   │   └── view.jpg │   ├── images.large │   │   ├── aol.gif │   │   ├── facebook.gif │   │   ├── google.gif │   │   ├── myopenid.gif │   │   ├── openid.gif │   │   ├── rambler.gif │   │   ├── verisign.gif │   │   ├── yahoo.gif │   │   └── yandex.gif │   ├── images.small │   │   ├── aol.ico │   │   ├── aol.ico.gif │   │   ├── aol.ico.png │   │   ├── blogger.ico │   │   ├── blogger.ico.gif │   │   ├── blogger.ico.png │   │   ├── claimid.ico │   │   ├── claimid.ico.gif │   │   ├── claimid.ico.png │   │   ├── clickpass.ico │   │   ├── clickpass.ico.gif │   │   ├── clickpass.ico.png │   │   ├── facebook.ico │   │   ├── facebook.ico.gif │   │   ├── facebook.ico.png │   │   ├── flickr.ico │   │   ├── flickr.ico.gif │   │   ├── flickr.ico.png │   │   ├── google.ico │   │   ├── google.ico.gif │   │   ├── google.ico.png │   │   ├── google_profile.ico │   │   ├── google_profile.ico.gif │   │   ├── google_profile.ico.png │   │   ├── launchpad.ico │   │   ├── launchpad.ico.gif │   │   ├── launchpad.ico.png │   │   ├── linkedin.ico │   │   ├── linkedin.ico.gif │   │   ├── linkedin.ico.png │   │   ├── livejournal.ico │   │   ├── livejournal.ico.gif │   │   ├── livejournal.ico.png │   │   ├── myopenid.ico │   │   ├── myopenid.ico.gif │   │   ├── myopenid.ico.png │   │   ├── openid.ico │   │   ├── openid.ico.gif │   │   ├── openid.ico.png │   │   ├── rambler.ico │   │   ├── rambler.ico.gif │   │   ├── rambler.ico.png │   │   ├── technorati.ico │   │   ├── technorati.ico.gif │   │   ├── technorati.ico.png │   │   ├── twitter.ico │   │   ├── twitter.ico.gif │   │   ├── twitter.ico.png │   │   ├── verisign.ico │   │   ├── verisign.ico.gif │   │   ├── verisign.ico.png │   │   ├── vidoop.ico │   │   ├── vidoop.ico.gif │   │   ├── vidoop.ico.png │   │   ├── wordpress.ico │   │   ├── wordpress.ico.gif │   │   ├── wordpress.ico.png │   │   ├── yahoo.ico │   │   ├── yahoo.ico.gif │   │   ├── yahoo.ico.png │   │   ├── yandex.ico │   │   ├── yandex.ico.gif │   │   └── yandex.ico.png │   ├── img │   │   ├── anuncio.gif │   │   ├── facebook.gif │   │   ├── facebook.png │   │   ├── favicon.ico │   │   ├── flerabildericon1.gif │   │   ├── gratis.gif │   │   ├── kamera.gif │   │   ├── kool_business.png │   │   ├── logo.gif │   │   ├── mail.gif │   │   ├── marketlogo.gif │   │   ├── montao.gif │   │   ├── montao_small.gif │   │   ├── monton.gif │   │   ├── sign-in-with-twitter-d.png │   │   ├── sign-in-with-twitter-d-sm.png │   │   ├── sign-in-with-twitter-l.png │   │   ├── sign-in-with-twitter-l-sm.png │   │   ├── tele.gif │   │   ├── thumb_left_top.gif │   │   ├── thumb_single_left_bottom.gif │   │   ├── thumb_single_right_bottom.gif │   │   ├── thumb_single_right_top.gif │   │   └── twittericon.png │   ├── jquery-1.js │   ├── js │   │   ├── jquery-1.2.6.min.js │   │   ├── openid-jquery-en.js │   │   ├── openid-jquery.js │   │   ├── openid-jquery-ru.js │   │   └── openid-jquery-uk.js │   ├── json2.js │   ├── labeledmarker.js │   ├── listfileshelp.html │   ├── login.js │   ├── main.js │   ├── mootools-1.js │   ├── myfbjs2.js │   ├── myfbjs3.js │   ├── myfbjs.js │   ├── obrigado.txt │   ├── onlinedebughelp.html │   ├── openid-icon.png │   ├── openid-logo.png │   ├── orbited.js │   ├── plugin.js │   ├── plugin_management.html │   ├── plugin_management.js │   ├── 
privacy.html │   ├── recaptcha.js │   ├── robots.txt │   ├── round_bottom_left.png │   ├── round_bottom_right.png │   ├── round_top_left.png │   ├── round_top_right.png │   ├── search.html │   ├── selectuser.js │   ├── success.png │   ├── terms.html │   ├── Thumbs.db │   ├── Transform.xls │   ├── transparentpixel.gif │   ├── tree.html │   ├── wz_dragdrop.js │   ├── xd_receiver.htm │   ├── xd_receiver.html │   └── yui │   ├── assets │   │   ├── bg_hd.gif │   │   ├── dpSyntaxHighlighter.css │   │   ├── dpSyntaxHighlighter.js │   │   ├── example-hd-bg.gif │   │   ├── Thumbs.db │   │   ├── title_h_bg.gif │   │   ├── yui-candy.jpg │   │   ├── yui.css │   │   ├── yuiDistribution.css ... 153 directories, 890 files
tree man page says: -d List directories only. So the output of tree -d YOUR_TARGET_FOLDER looks like: ├── appengine_admin │ ├── media │ │ ├── images │ │ └── js │ └── templates ├── common ├── conf │ └── locale │ ├── ar │ │ └── LC_MESSAGES │ ├── bg │ │ └── LC_MESSAGES │ ├── en │ │ └── LC_MESSAGES │ ├── es │ │ └── LC_MESSAGES │ ├── fi │ │ └── LC_MESSAGES │ ├── fr │ │ └── LC_MESSAGES │ ├── ja │ │ └── LC_MESSAGES │ ├── pt │ │ └── LC_MESSAGES │ ├── ro │ │ └── LC_MESSAGES │ ├── ru │ │ └── LC_MESSAGES │ ├── sq │ │ └── LC_MESSAGES │ ├── sv │ │ └── LC_MESSAGES │ ├── tl │ │ └── LC_MESSAGES │ ├── tr │ │ └── LC_MESSAGES │ └── zh │ └── LC_MESSAGES ├── credit ├── geo ├── js ├── mapreduce │ ├── lib │ │ ├── blobstore │ │ ├── files │ │ ├── graphy │ │ │ └── backends │ │ ├── key_range │ │ ├── pipeline │ │ │ ├── simplejson │ │ │ └── ui │ │ │ └── images │ │ └── simplejson │ ├── operation │ └── static ├── market ├── onlinedebug ├── static │ ├── challenge │ ├── codebase │ │ └── imgs │ ├── css │ ├── for_sale_files │ ├── images │ ├── images.large │ ├── images.small │ ├── img │ ├── jquery-1.js │ ├── js │ └── yui │ ├── assets
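If even the directory-only output is too deep to review, tree can additionally cap the depth with -L ; for example, two levels only:

tree -d -L 2 YOUR_TARGET_FOLDER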
{ "source": [ "https://unix.stackexchange.com/questions/21838", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9115/" ] }
21,852
Where did the convention of using single dashes for letters and doubles dashes for words come from and why is continued to be used? For example if I type in ls --help , you see: -a, --all do not ignore entries starting with . -A, --almost-all do not list implied . and .. --author with -l, print the author of each file -b, --escape print octal escapes for nongraphic characters --block-size=SIZE use SIZE-byte blocks -B, --ignore-backups do not list implied entries ending with ~ ... I tried googling - and -- convention even with quotes with little success.
In The Art of Unix Programming Eric Steven Raymond describes how this practice evolved: In the original Unix tradition, command-line options are single letters preceded by a single hyphen... The original Unix style evolved on slow ASR-33 teletypes that made terseness a virtue; thus the single-letter options. Holding down the shift key required actual effort; thus the preference for lower case, and the use of “-” (rather than the perhaps more logical “+”) to enable options. The GNU style uses option keywords (rather than keyword letters) preceded by two hyphens. It evolved years later when some of the rather elaborate GNU utilities began to run out of single-letter option keys ( this constituted a patch for the symptom, not a cure for the underlying disease ). It remains popular because GNU options are easier to read than the alphabet soup of older styles. 1 [1] http://www.catb.org/esr/writings/taoup/html/ch10s05.html
{ "source": [ "https://unix.stackexchange.com/questions/21852", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11227/" ] }
21,920
I'm running a script on a remote machine like this: ssh $host "pip install -r /path/to/requirements.txt" But the output isn't line buffered; instead of seeing one line returned at a time, all the lines (~10) are all printed at once as the connection terminates. What's up with this? Is there any way to force them to be line buffered? (also, to state the obvious: when I ssh into $host and run the command “manually”, the output is line buffered, as expected)
Use ssh -t ... to force a pseudo-tty allocation (which is what you get when you log in normally via ssh.)
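Applied to the command from the question:

ssh -t $host "pip install -r /path/to/requirements.txt"

With a pseudo-tty on the remote side, pip sees a terminal and writes its output line by line, just as in your manual session.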
{ "source": [ "https://unix.stackexchange.com/questions/21920", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11252/" ] }
21,967
I am confused what's the actual difference between SNAT and Masquerade? If I want to share my internet connection on local network, should I select SNAT or Masquerade?
The SNAT target requires you to give it an IP address to apply to all the outgoing packets. The MASQUERADE target lets you give it an interface, and whatever address is on that interface is the address that is applied to all the outgoing packets. In addition, with SNAT , the kernel's connection tracking keeps track of all the connections when the interface is taken down and brought back up; the same is not true for the MASQUERADE target. Good documents include the HOWTOs on the Netfilter site and the iptables man page .
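For connection sharing the rule is one line either way; the interface name and address below are placeholders for your own setup:

# dynamic address (DHCP, PPP, ...): MASQUERADE picks up whatever the interface has
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# static address: SNAT is slightly cheaper and survives interface flaps
iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source 203.0.113.1

In short: choose MASQUERADE if your external address is dynamic, SNAT if it is static.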
{ "source": [ "https://unix.stackexchange.com/questions/21967", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2063/" ] }
22,009
I'm writing a bash script that needs to distinguish a regular file from a symlink. I thought I could do this with if/test expression, but it doesn't work as I expected to: $ touch regular_file $ test -f regular_file; echo $? 0 $ test -h regular_file; echo $? 1 $ ln -s regular_file symlink $ test -h symlink; echo $? 0 $ test -f symlink; echo $? 0 Why is that? And, how can I properly do this?
It looks like you are just scrambling your tests a bit. You don't need to run both test, the only one you need for this case is the -h one to tell you if the file is a symlink. test -h file && echo "is symlink" || echo "is regular file" The -f test you used only tells you if the object is a file or a link to a file. This would return 1 (a failure) if it was a directory, a device node, a symlink to a directory, etc., but will return 0 (a success) if it is a symlink to a file. In other words some symlinks would evaluate to true, others to false. If you also needed to know if it was a symlink to a file rather than a directory, you would need to combine the results of both tests with a little bit of logic.
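For example, to separate the cases (here $file is just a placeholder for the path you are testing):

if [ -h "$file" ]; then
    [ -f "$file" ] && echo "symlink to a regular file"
    [ -d "$file" ] && echo "symlink to a directory"
elif [ -f "$file" ]; then
    echo "regular file"
fi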
{ "source": [ "https://unix.stackexchange.com/questions/22009", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9922/" ] }
22,044
Sometimes you have to make sure that only one instance of a shell script is running at the same time. For example a cron job which is executed via crond that does not provide locking on its own (e.g. the default Solaris crond). A common pattern to implement locking is code like this:

#!/bin/sh
LOCK=/var/tmp/mylock
if [ -f $LOCK ]; then            # 'test' -> race begin
  echo Job is already running\!
  exit 6
fi
touch $LOCK                      # 'set' -> race end
# do some work
rm $LOCK

Of course, such code has a race condition. There is a time window where the execution of two instances can both advance after line 3 before one is able to touch the $LOCK file. For a cron job this is usually not a problem because you have an interval of minutes between two invocations. But things can go wrong - for example when the lockfile is on a NFS server that hangs. In that case several cron jobs can block on line 3 and queue up. If the NFS server is active again then you have a thundering herd of parallel running jobs.

Searching on the web I found the tool lockrun which seems like a good solution to that problem. With it you run a script that needs locking like this:

$ lockrun --lockfile=/var/tmp/mylock myscript.sh

You can put this in a wrapper or use it from your crontab. It uses lockf() (POSIX) if available and falls back to flock() (BSD). And lockf() support over NFS should be relatively widespread.

Are there alternatives to lockrun ? What about other cron daemons? Are there common cronds that support locking in a sane way? A quick look into the man page of Vixie Crond (default on Debian/Ubuntu systems) does not show anything about locking. Would it be a good idea to include a tool like lockrun into coreutils ? In my opinion it implements a theme very similar to timeout , nice and friends.
Here's another way to do locking in shell script that can prevent the race condition you describe above, where two jobs may both pass line 3. The noclobber option will work in ksh and bash. Don't use set noclobber because you shouldn't be scripting in csh/tcsh. ;)

lockfile=/var/tmp/mylock

if ( set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null; then

        trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT

        # do stuff here

        # clean up after yourself, and release your trap
        rm -f "$lockfile"
        trap - INT TERM EXIT
else
        echo "Lock Exists: $lockfile owned by $(cat $lockfile)"
fi

YMMV with locking on NFS (you know, when NFS servers are not reachable), but in general it's much more robust than it used to be. (10 years ago)

If you have cron jobs that do the same thing at the same time, from multiple servers, but you only need 1 instance to actually run, then something like this might work for you.

I have no experience with lockrun, but having a pre-set lock environment prior to the script actually running might help. Or it might not. You're just setting the test for the lockfile outside your script in a wrapper, and theoretically, couldn't you just hit the same race condition if two jobs were called by lockrun at exactly the same time, just as with the 'inside-the-script' solution?

File locking is pretty much honor system behavior anyways, and any scripts that don't check for the lockfile's existence prior to running will do whatever they're going to do. Just by putting in the lockfile test, and proper behavior, you'll be solving 99% of potential problems, if not 100%.

If you run into lockfile race conditions a lot, it may be an indicator of a larger problem, like not having your jobs timed right, or perhaps if interval is not as important as the job completing, maybe your job is better suited to be daemonized.

EDIT BELOW - 2016-05-06 (if you're using KSH88)

Based on @Clint Pachl's comment below, if you use ksh88, use mkdir instead of noclobber . This mostly mitigates a potential race condition, but doesn't entirely limit it (though the risk is minuscule). For more information read the link that Clint posted below.

lockdir=/var/tmp/mylock
pidfile=/var/tmp/mylock/pid

if ( mkdir ${lockdir} ) 2> /dev/null; then
        echo $$ > $pidfile

        trap 'rm -rf "$lockdir"; exit $?' INT TERM EXIT

        # do stuff here

        # clean up after yourself, and release your trap
        rm -rf "$lockdir"
        trap - INT TERM EXIT
else
        echo "Lock Exists: $lockdir owned by $(cat $pidfile)"
fi

And, as an added advantage, if you need to create tmpfiles in your script, you can use the lockdir directory for them, knowing they will be cleaned up when the script exits.

For more modern bash, the noclobber method at the top should be suitable.
{ "source": [ "https://unix.stackexchange.com/questions/22044", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1131/" ] }
22,121
$ ps -Awwo pid,comm,args PID COMMAND COMMAND 1 init /sbin/init 2 kthreadd [kthreadd] 3 ksoftirqd/0 [ksoftirqd/0] 5 kworker/u:0 [kworker/u:0] 6 migration/0 [migration/0] 7 cpuset [cpuset] 8 khelper [khelper] 9 netns [netns] 10 sync_supers [sync_supers] 11 bdi-default [bdi-default] 12 kintegrityd [kintegrityd] 13 kblockd [kblockd] 14 kacpid [kacpid] 15 kacpi_notify [kacpi_notify] 16 kacpi_hotplug [kacpi_hotplug] 17 ata_sff [ata_sff] 18 khubd [khubd] What do the brackets mean? Does args always return the full path to the process command (e.g. /bin/cat )?
Brackets appear around command names when the arguments to that command cannot be located. The ps(1) man page on FreeBSD explains why this typically happens to system processes and kernel threads: If the arguments cannot be located (usually because it has not been set, as is the case of system processes and/or kernel threads) the command name is printed within square brackets. The ps(1) man page on Linux states similarly: Sometimes the process args will be unavailable; when this happens, ps will instead print the executable name in brackets.
{ "source": [ "https://unix.stackexchange.com/questions/22121", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1572/" ] }
22,128
I want to scp a file to a server. The file is a symbolic link, and actually what I want to do is copy the source file. I don't want to track the source file's path manually, because it can be replaced. How do I get the source file's absolute path so that I can then scp with it?
Try this line: readlink -f `which command` If command is in your $PATH variable , otherwise you need to specify the path you know.
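Put together for the scp case from the question (host and paths are placeholders):

scp "$(readlink -f /path/to/symlink)" user@server:/destination/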
{ "source": [ "https://unix.stackexchange.com/questions/22128", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5056/" ] }
22,162
How does commands like ls know what its stdout is? It seems ls is operating different depending on what the target stdout is. For example if I do: ls /home/matt/tmp the result is: a.txt b.txt c.txt However if I do ls /home/matt/tmp | cat the result is (i.e. new line per result): a.txt b.txt c.txt The process is passed a file descriptor 1 for stdout right? How does it determine how to format the result? Does the file descriptor reveal information?
The ls program uses isatty() to know whether fd 1 is a tty or something else (pipe, file, etc…). From man 3 isatty :

int isatty(int fd);

DESCRIPTION
The isatty() function tests whether fd is an open file descriptor referring to a terminal

Update: Line 1538 in ls.c from coreutils (git revision 43a987e1):

  if (isatty (STDOUT_FILENO))
    {
      format = many_per_line;
      /* See description of qmark_funny_chars, above.  */
      qmark_funny_chars = true;
    }

( many_per_line should be self-descriptive.)
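The shell exposes the same check through test 's -t operator, which makes the behavior easy to demonstrate:

$ [ -t 1 ] && echo "fd 1 is a terminal" || echo "fd 1 is not a terminal"
fd 1 is a terminal
$ { [ -t 1 ] && echo "fd 1 is a terminal" || echo "fd 1 is not a terminal"; } | cat
fd 1 is not a terminal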
{ "source": [ "https://unix.stackexchange.com/questions/22162", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4479/" ] }
22,218
How can I use ls in linux to get a listing of filenames date and size only. I don't need to see the other info such as owner or permission. Is this possible?
Try stat instead of ls : stat -c "%y %s %n" * To output in columnar format: stat -c "%n,%s" * | column -t -s,
{ "source": [ "https://unix.stackexchange.com/questions/22218", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11366/" ] }
22,229
I would like to run: ./a.out < x.dat > x.ans for each * .dat file in the directory A . Sure, it could be done by bash/python/whatsoever script, but I like to write sexy one-liner. All I could reach is (still without any stdout): ls A/*.dat | xargs -I file -a file ./a.out But -a in xargs doesn't understand replace-str 'file'. Thank you for help.
First of all, do not use ls output as a file list . Use shell expansion or find . See below for potential consequences of ls+xargs misuse and an example of proper xargs usage.

1. Simple way: for loop

If you want to process just the files under A/ , then a simple for loop should be enough:

for file in A/*.dat; do ./a.out < "$file" > "${file%.dat}.ans"; done

Why not ls | xargs ?

Here's an example of how badly things can turn out if you use ls with xargs for the job. Consider the following scenario. First, let's create some empty files:

$ touch A/mypreciousfile.dat\ with\ junk\ at\ the\ end.dat
$ touch A/mypreciousfile.dat
$ touch A/mypreciousfile.dat.ans

See the files and that they contain nothing:

$ ls -1 A/
mypreciousfile.dat
mypreciousfile.dat with junk at the end.dat
mypreciousfile.dat.ans
$ cat A/*

Run a magic command using xargs :

$ ls A/*.dat | xargs -I file sh -c "echo TRICKED > file.ans"

The result:

$ cat A/mypreciousfile.dat
TRICKED with junk at the end.dat.ans
$ cat A/mypreciousfile.dat.ans
TRICKED

So you've just managed to overwrite both mypreciousfile.dat and mypreciousfile.dat.ans . If there were any content in those files, it'd have been erased.

2. Using xargs : the proper way with find

If you'd like to insist on using xargs , use -0 (null-terminated names):

find A/ -name "*.dat" -type f -print0 | xargs -0 -I file sh -c './a.out < "file" > "file.ans"'

Notice two things: this way you'll create files with .dat.ans ending; and this will break if some file name contains a quote sign ( " ). Both issues can be solved by a different way of shell invocation:

find A/ -name "*.dat" -type f -print0 | xargs -0 -L 1 bash -c './a.out < "$0" > "${0%dat}ans"'

3. All done within find ... -exec

find A/ -name "*.dat" -type f -exec sh -c './a.out < "{}" > "{}.ans"' \;

This, again, produces .dat.ans files and will break if file names contain " . To get around that, use bash and change the way it is invoked:

find A/ -name "*.dat" -type f -exec bash -c './a.out < "$0" > "${0%dat}ans"' {} \;
{ "source": [ "https://unix.stackexchange.com/questions/22229", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11368/" ] }
22,273
I'm trying to use regex as a field seperator in awk . From my reading this seems possible but I can't get the syntax right. rpm -qa | awk '{ 'FS == [0-9]' ; print $1 }' awk: cmd. line:1: { FS awk: cmd. line:1: ^ unexpected newline or end of string Thoughts? The goal if not obviouse is to get a list of software without version number.
You have mucked up your quotes and syntax. To set the input field separator, the easiest way to do it is with the -F option on the command line: awk -F '[0-9]' '{ print $1 }' or awk -F '[[:digit:]]' '{ print $1 }' This would use any digit as the input field separator, and then output the first field from each line. The [0-9] and [[:digit:]] expressions are not quite the same, depending on your locale. See " Difference between [0-9], [[:digit:]] and \d ". One could also set FS in the awk program itself. This is usually done in a BEGIN block as it's a one-time initialisation: awk 'BEGIN { FS = "[0-9]" } { print $1 }' Note that single quotes can't be used in a single-quoted string in the shell, and that awk strings always use double quotes.
{ "source": [ "https://unix.stackexchange.com/questions/22273", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11390/" ] }
22,291
I am trying to update the timestamps of all folders in the cwd using this: for file in `ls`; do touch $file; done But it doesn't seem to work. Any ideas why?
All the answers so far (as well as your example in the question) assume that you want to touch everything in the directory, even though you said "touch all folders". If it turns out the directory contains files and folders and you only want to update the folders, you can use find : $ find . -maxdepth 1 -mindepth 1 -type d -exec touch {} + Or if your find implementation doesn't support the non-standard -mindepth / -maxdepth predicates: $ find . ! -name . -prune -type d -exec touch {} + This: $ touch -c -- */ Should work in most shells except that: it will also touch symlinks to directories in addition to plain directories it will omit hidden ones if there's no directory or symlink to directory, it would create a file called * in shells other than csh , tcsh , zsh , fish or the Thompson shell (which would report an error instead). Here, we're using -c to work around it, though that could still touch a non-directory file called * . With zsh , to touch directories only, including hidden ones: touch -- *(D/)
{ "source": [ "https://unix.stackexchange.com/questions/22291", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11401/" ] }
22,394
I read in text books that Unix/Linux doesn't allow hard links to directories but does allow soft links. Is it because, when we have cycles and if we create hard links, and after some time we delete the original file, it will point to some garbage value? If cycles were the sole reason behind not allowing hard links, then why are soft links to directories allowed?
This is just a bad idea, as there is no way to tell the difference between a hard link and an original name. Allowing hard links to directories would break the directed acyclic graph structure of the filesystem, possibly creating directory loops and dangling directory subtrees, which would make fsck and any other file tree walkers error prone.

First, to understand this, let's talk about inodes. The data in the filesystem is held in blocks on the disk, and those blocks are collected together by an inode. You can think of the inode as THE file. Inodes lack filenames, though. That's where links come in.

A link is just a pointer to an inode. A directory is an inode that holds links. Each filename in a directory is just a link to an inode. Opening a file in Unix also creates a link, but it's a different type of link (it's not a named link).

A hard link is just an extra directory entry pointing to that inode. When you ls -l , the number after the permissions is the named link count. Most regular files will have one link. Creating a new hard link to a file will make both filenames point to the same inode. Note:

% ls -l test
ls: test: No such file or directory
% touch test
% ls -l test
-rw-r--r--  1 danny  staff  0 Oct 13 17:58 test
% ln test test2
% ls -l test*
-rw-r--r--  2 danny  staff  0 Oct 13 17:58 test
-rw-r--r--  2 danny  staff  0 Oct 13 17:58 test2
% touch test3
% ls -l test*
-rw-r--r--  2 danny  staff  0 Oct 13 17:58 test
-rw-r--r--  2 danny  staff  0 Oct 13 17:58 test2
-rw-r--r--  1 danny  staff  0 Oct 13 17:59 test3
            ^
            this is the link count

Now, you can clearly see that there is no such thing as a hard link. A hard link is the same as a regular name. In the above example, test or test2 , which is the original file and which is the hard link? By the end, you can't really tell (even by timestamps) because both names point to the same contents, the same inode:

% ls -li test*
14445750 -rw-r--r--  2 danny  staff  0 Oct 13 17:58 test
14445750 -rw-r--r--  2 danny  staff  0 Oct 13 17:58 test2
14445892 -rw-r--r--  1 danny  staff  0 Oct 13 17:59 test3

The -i flag to ls shows you inode numbers in the beginning of the line. Note how test and test2 have the same inode number, but test3 has a different one.

Now, if you were allowed to do this for directories, two different directories in different points in the filesystem could point to the same thing. In fact, a subdir could point back to its grandparent, creating a loop.

Why is this loop a concern? Because when you are traversing, there is no way to detect you are looping (without keeping track of inode numbers as you traverse). Imagine you are writing the du command, which needs to recurse through subdirs to find out about disk usage. How would du know when it hit a loop? It is error prone and a lot of bookkeeping that du would have to do, just to pull off this simple task.

Symlinks are a whole different beast, in that they are a special type of "file" that many filesystem APIs tend to automatically follow. Note, a symlink can point to a nonexistent destination, because they point by name, and not directly to an inode. That concept doesn't make sense with hard links, because the mere existence of a "hard link" means the file exists.

So why can du deal with symlinks easily and not hard links? We were able to see above that hard links are indistinguishable from normal directory entries. Symlinks, however, are special, detectable, and skippable! du notices that the symlink is a symlink, and skips it completely!
% ls -l
total 4
drwxr-xr-x  3 danny  staff  102 Oct 13 18:14 test1/
lrwxr-xr-x  1 danny  staff    5 Oct 13 18:13 test2@ -> test1
% du -ah
242M    ./test1/bigfile
242M    ./test1
4.0K    ./test2
242M    .
{ "source": [ "https://unix.stackexchange.com/questions/22394", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3539/" ] }
22,432
How can I get the size of all files and all files in its subdirectories using the du command. I am trying the following command to get the size of all files (and files in subdirectories) find . -type f | du -a But this prints out the folder sizes as well. How can I get a listing of sizes of all files and files in subdirectories? I also tried the exec flag but I am not sure how to pipe the output into another command after it executes the results of find into du . The operating system is AIX 6.1 with ksh shell.
I usually use the -exec utility. Like this: find . -type f -exec du -a {} + I tried it both on bash and ksh with GNU find. I never tried AIX, but I'm sure your version of find has some -exec syntax. The following snippet sorts the list, largest first: find . -type f -exec du -a {} + | sort -n -r | less
{ "source": [ "https://unix.stackexchange.com/questions/22432", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11475/" ] }
22,447
I've never had this problem before, but for some reason, I can't rename my folder to packages/ . This is the structure: root - packages_old/ - packages When I try to rename the folder with Thunar, I get an error message saying that the file already exists. Same with mv : blender iso $ mv packages_old/ packages/ mv: accessing `packages/': Not a directory Why am I unable to rename my file and folder to have the same name: root - packages/ - packages I swear I've done this before.
In Unix, almost everything is a file. A directory is a special type of file that from the user's perspective can "contain" other files. The error Not a directory occurs because your existing file is not a directory, and since a directory is a type of file, and there cannot be two identically named files in one directory, the operation cannot be performed.
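If the stray packages file is expendable, or can live under another name, the rename goes through once it is out of the way. A sketch using the names from the question:

mv packages packages.file    # or: rm packages, if it is junk
mv packages_old packages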
{ "source": [ "https://unix.stackexchange.com/questions/22447", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3032/" ] }
22,494
I'm copying a file to xclip: cat file.txt | xclip I'm pasting without problem with xclip -o When I want to paste it into Firefox with Ctrl+V it pastes old text (that shouldn't be in the clipboard any more). When I go back to the terminal and run xclip -o it pastes the correct text. Why is there a problem with pasting into Firefox?
X11 has several available clipboards. By default, xclip places data in the primary selection buffer. To paste it, you use middle-click. If you want to use Ctrl + v , use xclip -selection clipboard . See man xclip for more information. There is good information about the different clipboards on freedesktop.org .
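So for the example in the question, copying straight into the Ctrl+V clipboard looks like:

cat file.txt | xclip -selection clipboard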
{ "source": [ "https://unix.stackexchange.com/questions/22494", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6215/" ] }
22,502
I am on machine A and want to pull a file from machine B. A$ scp <myuserid>@hostB:<path of file in B> . it says that: scp: <path of file in B>: No such file or directory But on machine B, this file exists in this path. What is going wrong?
You didn't specify any file: you have to add the file (with path) after the colon: A$ scp <myuserid>@hostB:/absolutepath/file . or A$ scp <myuserid>@hostB:relativepath/file . for a path relative to your home directory. If you don't specify a different user (i.e., the user on A and B are the same) you don't need the @ A$ scp hostB:/path/file .
{ "source": [ "https://unix.stackexchange.com/questions/22502", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10968/" ] }
22,520
When I list al groups I see one called 'nogroup'. What is this for? Is it supposed to be least privileged one or something? I'm using ubuntu 11.04.
nogroup is the group analog to the nobody user. It is used for unprivileged processes so that even if something goes wrong the process does not have the permissions to cause any serious damage to an important user or group.
{ "source": [ "https://unix.stackexchange.com/questions/22520", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10125/" ] }
22,545
Is there a way to connect to a serial terminal just as you would do with SSH? There must be a simpler way than tools such as Minicom, like this $ serial /dev/ttyS0 I know I can cat the output from /dev/ttyS0 but only one way communication is possible that way, from the port to the console. And echo out to the port is just the same but the other way around, to the port. How can I realize two way communication with a serial port the simplest possible way on Unix/Linux?
I find screen the most useful program for serial communication since I use it for other things anyway. It's usually just screen /dev/ttyS0 <speed> , although the default settings may be different for your device. It also allows you to pipe anything into the session by entering command mode and doing exec !! <run some program that generates output> .
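A concrete invocation (device and speed here are examples; serial consoles commonly run at 9600 or 115200 baud):
$ screen /dev/ttyS0 115200
To end the session and free the port, press Ctrl + a followed by k and confirm.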
{ "source": [ "https://unix.stackexchange.com/questions/22545", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11536/" ] }
22,560
How can I get a list of all of the RPM packages that have been installed on my system from a particular repo (e.g., "epel")?
Fedora 36 or later: dnf repository-packages epel list
CentOS / RHEL / Fedora 22 or earlier: yum list installed | grep @epel
Fedora 23: dnf list installed | grep @epel
RHEL 8: dnf repo-pkgs epel list installed
{ "source": [ "https://unix.stackexchange.com/questions/22560", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/663/" ] }
22,604
Everybody knows :-) that in Windows plain text files, lines are terminated with CR+LF, and in Unix & Linux with LF only. How can I quickly convert all my source code files from one format to another and back?
That depends: if the files are under version control, this could be a rather unpopular history-polluting decision. Git has the option to automagically convert line endings on check-out. If you do not care and want to quickly convert, there are programs like fromdos / todos and dos2unix / unix2dos that do this for you. You can use find : find . -type f -name '*.php' -exec dos2unix '{}' + .
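If none of those converters happen to be installed, GNU sed can do the same job in place (this assumes GNU sed, whose -i option edits files directly):
$ sed -i 's/\r$//' file.txt     # CRLF -> LF
$ sed -i 's/$/\r/' file.txt     # LF -> CRLF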
{ "source": [ "https://unix.stackexchange.com/questions/22604", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2119/" ] }
22,612
I'm using Cygwin, and I use it with ssh to log in to an Ubuntu server. Sometimes I want to switch back to my Cygwin shell and then quickly switch back to the Ubuntu shell. How can I do this?
You can background the ssh client just like any other shell job by sending it a signal and returning to the parent shell. Since the ssh client traps most key strokes, it would grab the normal shell job control keystrokes, but you can get its attention and get through to the shell using an escape sequence. For most ssh-clients this involves sending a ~ that has been directly preceded by a newline. The following sequence should do the job: Enter ~ Ctrl + Z After it is suspended you can continue using the local shell. When you want to go back to the suspended job, just run the fg command. For further reading, look up Bash Job Control .
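Once the client is suspended, the ordinary job-control commands apply:
$ jobs    # lists the stopped ssh client
$ fg      # brings it back to the foreground (fg %N if you have several jobs)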
{ "source": [ "https://unix.stackexchange.com/questions/22612", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3970/" ] }
22,615
I need to find my external IP address from a shell script. At the moment I use this function: myip () { lwp-request -o text checkip.dyndns.org | awk '{ print $NF }' } But it relies on perl-libwww , perl-html-format , and perl-html-tree being installed. What other ways can I get my external IP?
I'd recommend getting it directly from a DNS server. Most of the other answers below all involve going over HTTP to a remote server. Some of them required parsing of the output, or relied on the User-Agent header to make the server respond in plain text. Those change quite frequently (go down, change their name, put up ads, might change output format etc.). The DNS response protocol is standardised (the format will stay compatible). Historically, DNS services (Akamai, Google Public DNS, OpenDNS, ..) tend to survive much longer and are more stable, more scalable, and generally more looked-after than whatever new hip whatismyip dot-com HTTP service is hot today. This method is inherently faster (be it only by a few milliseconds!). Using dig with an OpenDNS resolver: $ dig @resolver4.opendns.com myip.opendns.com +short Perhaps alias it in your bashrc so it's easy to remember # https://unix.stackexchange.com/a/81699/37512 alias wanip='dig @resolver4.opendns.com myip.opendns.com +short' alias wanip4='dig @resolver4.opendns.com myip.opendns.com +short -4' alias wanip6='dig @resolver1.ipv6-sandbox.opendns.com AAAA myip.opendns.com +short -6' Responds with a plain ip address: $ wanip # wanip4, or wanip6 80.100.192.168 # or, 2606:4700:4700::1111 Syntax (Abbreviated from https://ss64.com/bash/dig.html ) : usage: dig [@global-dnsserver] [q-type] <hostname> <d-opt> [q-opt] q-type one of (A, ANY, AAAA, TXT, MX, ...). Default: A. d-opt ... +[no]short (Display nothing except short form of answer) ... q-opt one of: -4 (use IPv4 query transport only) -6 (use IPv6 query transport only) ... The ANY query type returns either an AAAA or an A record. To prefer IPv4 or IPv6 connection specifically, use the -4 or -6 options accordingly. To require the response be an IPv4 address, replace ANY with A ; for IPv6, replace it with AAAA . Note that it can only return the address used for the connection. For example, when connecting over IPv6, it cannot return the A address. Alternative servers Various DNS providers offer this service, including OpenDNS , Akamai , and Google Public DNS : # OpenDNS (since 2009) $ dig @resolver3.opendns.com myip.opendns.com +short $ dig @resolver4.opendns.com myip.opendns.com +short 80.100.192.168 # OpenDNS IPv6 $ dig @resolver1.ipv6-sandbox.opendns.com AAAA myip.opendns.com +short -6 2606:4700:4700::1111 # Akamai (since 2009) $ dig @ns1-1.akamaitech.net ANY whoami.akamai.net +short 80.100.192.168 # Akamai approximate # NOTE: This returns only an approximate IP from your block, # but has the benefit of working with private DNS proxies. $ dig +short TXT whoami.ds.akahelp.net "ip" "80.100.192.160" # Google (since 2010) # Supports IPv6 + IPv4, use -4 or -6 to force one. $ dig @ns1.google.com TXT o-o.myaddr.l.google.com +short "80.100.192.168" Example alias that specifically requests an IPv4 address: # https://unix.stackexchange.com/a/81699/37512 alias wanip4='dig @resolver4.opendns.com myip.opendns.com +short -4' $ wanip4 80.100.192.168 And for your IPv6 address: # https://unix.stackexchange.com/a/81699/37512 alias wanip6='dig @ns1.google.com TXT o-o.myaddr.l.google.com +short -6' $ wanip6 "2606:4700:4700::1111" Troubleshooting If the command is not working for some reason, there may be a network problem. Try one of the alternatives above first. If you suspect a different issue (with the upstream provider, the command-line tool, or something else) then run the command without the +short option to reveal the details of the DNS query. 
For example: $ dig @resolver4.opendns.com myip.opendns.com ;; Got answer: ->>HEADER<<- opcode: QUERY, status: NOERROR ;; QUESTION SECTION: ;myip.opendns.com. IN A ;; ANSWER SECTION: myip.opendns.com. 0 IN A 80.100.192.168 ;; Query time: 4 msec
{ "source": [ "https://unix.stackexchange.com/questions/22615", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/903/" ] }
22,623
I am writing a PHP script to parse a large text file to do database inserts from it. However on my host, the file is too large, and I hit the memory limit for PHP. The file has about 16,000 lines; I want to split it up into four separate files (at first) to see if I can load those. The first part I can get with head -4000 file.txt . The middle sections are slightly trickier -- I was thinking about piping tail output into head ( tail -4001 file.txt | head -4000 > section2.txt ), but is there another/better way? Actually my logic is messed up -- for section two, I would need to do something like tail -12001 file.txt | head -4000 , and then lower the tail argument for the next sections. I'm getting mixed up already! :P
If you want not to get messed up but still do it using tail and head , there is a useful way of invoking tail using a line-count from the beginning, not the end: tail -n +4001 yourfile | head -4000 ... But a better, automatic tool made just for splitting files is called... split ! It's also a part of GNU coreutils, so any normal Linux system should have it. Here's how you can use it: split -l 4000 yourInputFile thePrefixForOutputFiles (See man split if in doubt.)
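If you prefer numbered output files, GNU split also accepts -d for numeric suffixes (a GNU extension; classic split only produces the xaa, xab, ... names):
$ split -l 4000 -d file.txt section
For a 16,000-line file this yields section00 through section03.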
{ "source": [ "https://unix.stackexchange.com/questions/22623", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/394/" ] }
22,645
I've seen that rvm (ruby version manager) is installed using the following command: bash < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer ) So as I understand we get the script content and pass it to the bash (I believe < < and << is the same thing?) I am interested in the < < part, found following description on the net: << token Means use the current input stream as STDIN for the program until token is seen. This is somehow not clear for me, can someone make an example or explain it in more simple way?
No, < < and << are not the same thing. The first is composed of the common < redirection character combined with the first character of the <(command) syntax. This is a ksh construct (also found in bash and zsh ) known as process substitution that takes the output of command and provides it in a file whose name refers to the other end of the pipe command is writing to. In other word you can think of < <(command) as < file , where file contains the output of command .
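Process substitution is handy whenever a command expects a file name but you have command output; a classic illustration:
$ diff <(sort file1) <(sort file2)
diff receives two file arguments, each backed by the output of a sort. The rvm line works the same way, except the extra < then feeds that substituted file to bash's standard input.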
{ "source": [ "https://unix.stackexchange.com/questions/22645", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11575/" ] }
22,664
I've seen many explanations for why the link count for an empty directory in Unix based OSes is 2 instead of 1. They all say that it's because of the '.' directory, which every directory has pointing back to itself. I understand why having some concept of '.' is useful for specifying relative paths, but what is gained by implementing it at the filesystem level? Why not just have shells or the system calls that take paths know how to interpret it? That '..' is a real link makes much more sense to me -- the filesystem needs to store a pointer back to the parent directory in order to navigate to it. But I don't see why '.' being a real link is necessary. It also seems like it leads to an ugly special case in the implementation -- you would think you could only free the space used by inodes that have a link count less than 1, but if they're directories, you actually need to check for a link count less than 2. Why the inconsistency?
An interesting question, indeed. At first glance I see the following advantages: First of all you state that interpreting " . " as the current directory may be done by the Shell or by system calls. But having the dot-entry in the directory actually removes this necessity and forces consistency at even a lower level. But I don't think that this was the basic idea behind this design decision. When a file is being created or removed from a directory, the directory's modification time stamp has to be updated, too. This timestamp is stored in its inode. The inode number is stored in the corresponding directory entry. IF the dot entry would not be there, the routines would have to search for the inode number at the entry for this directory in the parent directory, which would cause a directory search again. BUT luckily there is the dot entry in the current directory. The routine that adds or removes a file in the current directory just has to jump back to the first entry (where the dot-entry usually resides) and immediately has found the inode number for the current directory. There is a third nice thing about the dot entry: When fsck checks a rotten filesystem and has to deal with non-connected blocks that are also not on the free list, it's easy for it to verify if a data block (when interpreted as a directory list) has a dot entry that's pointing to an inode which in turn points back to this data block. If so, this data block may be considered as a lost directory which has to be reconnected.
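You can watch these link counts directly; the count is the second column of ls -l style output:
$ mkdir -p parent/child
$ ls -ld parent parent/child
parent shows a link count of 3 (its entry in the current directory, its own '.', and child's '..'), while the empty child shows the expected 2.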
{ "source": [ "https://unix.stackexchange.com/questions/22664", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7001/" ] }
22,682
I wanted to launch a few commands simultaneously in tmux or gnome-terminal or xfterminal, with each tab running a single command, and have that tab close when the command finishes. Other software is welcome as well. I wanted to issue a single script to do the job, e.g. XX "cmd1" "cmd2" "cmd3"
tmux new -d -s my-session 'echo window-1 pane-1; sleep 8' \; \ split-window -d 'echo window-1 pane-2; sleep 6' \; down-pane \; \ new-window -d 'echo window-2; sleep 4' \; next-window \; \ attach \; The above is a running example of the general idea ... more here: How to run streamripper and mplayer in a split-screen X terminal, via a single script
{ "source": [ "https://unix.stackexchange.com/questions/22682", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11318/" ] }
22,713
I created a symbolic link (yesterday) like this: sudo ln -s bin/python /usr/bin/prj-python When I run: prj-python file.py I get: prj-python: command not found When I try creating the link again, I get: ln: creating symbolic link `/usr/bin/prj-python': File exists Why is that happening? My $PATH is: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/opt/real/RealPlayer
You forgot the initial slash before bin/python . This means /usr/bin/prj-python now points to /usr/bin/bin/python . What would you like it to point to exactly?
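If the goal was a link to a Python interpreter somewhere under your home directory, remove the bad link and recreate it with an absolute target (the path below is only a placeholder for wherever your bin/python actually lives):
$ sudo rm /usr/bin/prj-python
$ sudo ln -s /home/you/prj/bin/python /usr/bin/prj-python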
{ "source": [ "https://unix.stackexchange.com/questions/22713", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6215/" ] }
22,721
Is there a way to completely restart Bash and reload .bashrc and .profile and the like? I'd like to make sure my changes worked out properly after editing these files.
Have it replace itself with itself. exec bash -l Note that this won't affect things such as the cwd or exported variables.
{ "source": [ "https://unix.stackexchange.com/questions/22721", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
22,726
How can I do something like this in bash? if "`command` returns any error"; then echo "Returned an error" else echo "Proceed..." fi
How to conditionally do something if a command succeeded or failed That's exactly what bash's if statement does: if command ; then echo "Command succeeded" else echo "Command failed" fi Adding information from comments: you don't need to use the [ ... ] syntax in this case. [ is itself a command, very nearly equivalent to test . It's probably the most common command to use in an if , which can lead to the assumption that it's part of the shell's syntax. But if you want to test whether a command succeeded or not, use the command itself directly with if , as shown above.
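The same construct negates cleanly when only the failure case matters:
if ! command ; then
    echo "Command failed"
fi
And if you need the numeric code later, capture it straight away with status=$? , since $? is overwritten by every subsequent command.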
{ "source": [ "https://unix.stackexchange.com/questions/22726", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
22,729
If I wanted to use zsh , for instance, rather than bash , where would I make this change for the current user?
Chris Browne's answer works well if you don't have access to the user and have root privileges. If you want to change the default shell of your current user you can also use: chsh -s /bin/ksh More info The login shell of a user is defined in a file ( /etc/passwd on Debian). This file has an entry for each user with the info entered at creation. rahmu:x:1000:1000:My Name is rahmu,,,:/home/rahmu:/bin/bash anotheruser:x:1001:1001:,,,:/home/anotheruser:/bin/ksh The last column is the login shell. It will be launched by the login program after a successful login. However it is highly recommended that you do not modify this file by hand. You should use chsh or usermod whenever possible.
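For the usermod route (root only; the account name and shell below are placeholders):
$ sudo usermod -s /bin/ksh username
For chsh, the new shell generally has to be listed in /etc/shells to be accepted.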
{ "source": [ "https://unix.stackexchange.com/questions/22729", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
22,764
I'm trying to use printf to format some pretty output in a bash script e.g.: ----------------------- | This is some output | ----------------------- But I've stumbled over some behavior I don't understand. $ printf "--" gives me the error: printf: usage: printf [-v var] format [arguments] and $ printf "-stuff" results in -bash: printf: -s: invalid option So apparently printf thinks I'm trying to pass some arguments while I'm not. Meanwhile, completely by accident, I've found this workaround: $ printf -- "--- this works now ----\n" gives me --- this works now ---- Can anyone explain this behavior?
The -- is used to tell the program that whatever follows should not be interpreted as a command line option to printf . Thus the printf "--" you tried basically ended up as " printf with no arguments" and therefore failed.
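A pattern that sidesteps the issue entirely is to give printf a format string first, so the dashes arrive as data rather than as candidate options:
$ printf '%s\n' "-----------------------"
Since the first argument %s\n does not begin with a dash, option parsing stops there and every later argument is printed literally.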
{ "source": [ "https://unix.stackexchange.com/questions/22764", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/466/" ] }
22,781
Am doing some work on a remote CentOS 5.6 machine and my network keeps dropping. Is there a way that I can recover my hung sessions after I reconnect? EDIT: am doing some updating and installing with yum and am worried this might be a problem if processes keep hanging in the middle of whatever they're doing.
There is no way, but to prevent this I like using tmux . I start tmux, start the operation and go on my way. If I return and find the connection has been broken, all I have to do is reconnect and type tmux attach . Here's an example. $ tmux $ make <something big> ...... Connection fails for some reason Reconnect $ tmux ls 0: 1 windows (created Tue Aug 23 12:39:52 2011) [103x30] $ tmux attach -t 0 Back in the tmux session
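When you leave on purpose rather than losing the link, detach with the default binding Ctrl + b followed by d ; the session and its command keep running, and tmux attach brings you back exactly as above.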
{ "source": [ "https://unix.stackexchange.com/questions/22781", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
22,796
source some_file some_file: doit () { echo doit $1 } export TEST=true If I source some_file the function "doit" and the variable TEST are available on the command line. But running this script: script.sh: #!/bin/sh echo $TEST doit test2 will return the value of TEST, but will generate an error about the unknown function "doit". Can I "export" the function, too, or do I have to source some_file in script.sh to use the function there?
In Bash you can export function definitions to other shell scripts that your script calls with export -f function_name For example you can try this simple example: ./script1 : #!/bin/bash myfun() { echo "Hello!" } export -f myfun ./script2 ./script2 : #!/bin/bash myfun Then if you call ./script1 you will see the output Hello! .
{ "source": [ "https://unix.stackexchange.com/questions/22796", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11083/" ] }
22,803
I came up with the following snippet for counting files in each subdirectory: for x (**/*(/)); do print $x; find $x -maxdepth 1 -type f | wc -l; done The command outputs consecutive pairs (one below the other) as follows: directory_name # of files I would like to change the code above to: Print each match on the same line (i.e. directory_name ':' # of files ) Only count files if the folders are leaves in the directory tree (i.e. they don't have any subfolders). How can I do that?
You can do both with zsh glob qualifiers alone. Here is one way (a sketch; the N qualifier makes a pattern expand to nothing instead of raising an error when there are no matches, / selects directories, and . selects plain files; add D to the file pattern if dot files should be counted too):
for d in **/*(/N); do
  subdirs=($d/*(/N))
  (( $#subdirs )) && continue   # not a leaf: it contains subdirectories
  files=($d/*(.N))
  print -r -- "$d: $#files"
done
This walks every directory, skips any that have subdirectories, and prints the remaining leaf directories in the directory_name: # of files format, one per line, with no call to find at all.
{ "source": [ "https://unix.stackexchange.com/questions/22803", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
22,815
I have two questions. First, which command lists files and directories, but lists directories first? Second question: I want to copy a list of files into a single directory, but make the target directory the first filename in the command.
Got GNU? The gnu version of ls has --group-directories-first . And cp has -t . No GNU? On systems that don't have gnu's ls , your best bet is two successive calls to find with -maxdepth n / -mindepth n and -type t with the appropriate options. find . -maxdepth 1 -mindepth 1 -type d find . -maxdepth 1 -mindepth 1 \! -type d For copying files, with the target first, you would have to write a script that saves the first argument, then uses shift , and appends the argument to the end. #!/bin/sh target="$1" shift cp -r -- "$@" "$target" Watch Out! If you were planning on using these together - that is, collecting the list from find or ls (possibly by using xargs ) and passing it to cp (or a cp wrapper), you should be aware of what dangers lie in parsing lists of files (basically, filenames can contain characters like newlines that can mess up your script). Specifically, look into find 's -exec and -print0 options and xargs 's -0 option. An alternative tool for efficiently copying directory trees. You might want to look into using rsync instead; it has lots of functionality that might make your job easier.
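As an aside, the -t form mentioned above looks like this (GNU cp only; names here are placeholders):
$ cp -rt /path/to/target file1 file2 dir1
Because -t names the target directory up front, the source list can simply be appended at the end, which is exactly what xargs needs when it tacks arguments onto a command.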
{ "source": [ "https://unix.stackexchange.com/questions/22815", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11624/" ] }
22,834
I have created zlib-compressed data in Python, like this: import zlib s = '...' z = zlib.compress(s) with open('/tmp/data', 'w') as f: f.write(z) (or one-liner in shell: echo -n '...' | python2 -c 'import sys,zlib; sys.stdout.write(zlib.compress(sys.stdin.read()))' > /tmp/data ) Now, I want to uncompress the data in shell. Neither zcat nor uncompress work: $ cat /tmp/data | gzip -d - gzip: stdin: not in gzip format $ zcat /tmp/data gzip: /tmp/data.gz: not in gzip format $ cat /tmp/data | uncompress - gzip: stdin: not in gzip format It seems that I have created gzip-like file, but without any headers. Unfortunately I don't see any option to uncompress such raw data in gzip man page, and the zlib package does not contain any executable utility. Is there a utility to uncompress raw zlib data?
It is also possible to decompress it using standard shell-script + gzip , if you don't have, or want to use openssl or other tools. The trick is to prepend the gzip magic number and compress method to the actual data from zlib.compress : printf "\x1f\x8b\x08\x00\x00\x00\x00\x00" |cat - /tmp/data |gzip -dc >/tmp/out Edits: @d0sboots commented: For RAW Deflate data, you need to add 2 more null bytes: → "\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00" This Q on SO gives more information about this approach. An answer there suggests that there is also an 8 byte footer. Users @Vitali-Kushner and @mark-bessey reported success even with truncated files, so a gzip footer does not seem strictly required. @tobias-kienzler suggested this function for the bashrc : zlibd() (printf "\x1f\x8b\x08\x00\x00\x00\x00\x00" | cat - "$@" | gzip -dc)
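Since the data came from Python's zlib in the first place, the symmetric one-liner will also decompress it, with no header trickery needed:
$ python2 -c 'import sys,zlib; sys.stdout.write(zlib.decompress(sys.stdin.read()))' < /tmp/data
(With python3 you would read and write through sys.stdin.buffer and sys.stdout.buffer instead, since the stream is binary.)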
{ "source": [ "https://unix.stackexchange.com/questions/22834", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11649/" ] }
22,842
I try to create an archive with tar using relative paths. I use the following command: tar czf ~/files/wp/my-page-order.tar.gz -C ~/webapps/zers/wp-content/plugins/ ~/webapps/zers/wp-content/plugins/my-page-order But the archived files still have absolute paths. How can I use tar with relative paths?
~ is expanded by the shell. Don't use ~ with -C: tar czf ~/files/wp/my-page-order.tar.gz \ -C ~ \ webapps/zers/wp-content/plugins/my-page-order (tar will include webapps/zers/wp-content/plugins/my-page-order path) or tar czf ~/files/wp/my-page-order.tar.gz \ -C ~/webapps/zers/wp-content/plugins \ my-page-order (tar will include my-page-order path) Or just cd first.... cd ~/webapps/zers/wp-content/plugins tar czf ~/files/wp/my-page-order.tar.gz my-page-order
{ "source": [ "https://unix.stackexchange.com/questions/22842", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11219/" ] }
22,919
When I define a new alias in .bash_aliases file or a new function in .bashrc file, is there some refresh command to be able immediately use the new aliases or functions without closing the terminal (in my case xfce4-terminal with a few tabs open, many files open and in the middle of the work)?
Sourcing the changed file will provide access to the newly written alias or function in the current terminal, for example: source ~/.bashrc An alternative syntax: . ~/.bashrc Note that if you have many instances of bash running in your terminal (you mentioned multiple tabs), you will have to run this in every instance.
{ "source": [ "https://unix.stackexchange.com/questions/22919", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6215/" ] }
22,924
I finished installing CentOS 6, but when I tried running yum update I got: [root@centos6test ~]# yum update Loaded plugins: fastestmirror, refresh-packagekit Determining fastest mirrors Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=6&arch=i386&repo=os error was 14: PYCURL ERROR 6 - "" Error: Cannot find a valid baseurl for repo: base Why is that happening? How can I fix it?
First you need to get connected. AFAIK CentOS 6 minimal sets your network device to ONBOOT=no ; just run dhclient with admin privileges for your network interface and you should be up and running: $ sudo dhclient
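To make the fix survive a reboot, enable the interface at boot time. The interface name eth0 below is an assumption; check yours with ip link or ifconfig -a. Edit /etc/sysconfig/network-scripts/ifcfg-eth0 , change ONBOOT=no to ONBOOT=yes (and set BOOTPROTO=dhcp if you want DHCP), then restart networking:
$ sudo service network restart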
{ "source": [ "https://unix.stackexchange.com/questions/22924", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223359/" ] }
22,926
I understand how to define include shared objects at linking/compile time. However, I still wonder how do executables look for the shared object ( *.so libraries) at execution time. For instance, my app a.out calls functions defined in the lib.so library. After compiling, I move lib.so to a new directory in my $HOME . How can I tell a.out to go look for it there?
The shared library HOWTO explains most of the mechanisms involved, and the dynamic loader manual goes into more detail. Each unix variant has its own way, but most use the same executable format ( ELF ) and have similar dynamic linkers ¹ (derived from Solaris). Below I'll summarize the common behavior with a focus on Linux; check your system's manuals for the complete story. (Terminology note: the part of the system that loads shared libraries is often called “dynamic linker”, but sometimes “dynamic loader” to be more precise. “Dynamic linker” can also mean the tool that generates instructions for the dynamic loader when compiling a program, or the combination of the compile-time tool and the run-time loader. In this answer, “linker” refers to the run-time part.) In a nutshell, when it's looking for a dynamic library ( .so file) the linker tries: directories listed in the LD_LIBRARY_PATH environment variable ( DYLD_LIBRARY_PATH on OSX); directories listed in the executable's rpath ; directories on the system search path, which (on Linux at least) consists of the entries in /etc/ld.so.conf plus /lib and /usr/lib . The rpath is stored in the executable (it's the DT_RPATH or DT_RUNPATH dynamic attribute). It can contain absolute paths or paths starting with $ORIGIN to indicate a path relative to the location of the executable (e.g. if the executable is in /opt/myapp/bin and its rpath is $ORIGIN/../lib:$ORIGIN/../plugins then the dynamic linker will look in /opt/myapp/lib and /opt/myapp/plugins ). The rpath is normally determined when the executable is compiled, with the -rpath option to ld , but you can change it afterwards with chrpath . In the scenario you describe, if you're the developer or packager of the application and intend for it to be installed in a …/bin , …/lib structure, then link with -rpath='$ORIGIN/../lib' . If you're installing a pre-built binary on your system, either put the library in a directory on the search path ( /usr/local/lib if you're the system administrator, otherwise a directory that you add to $LD_LIBRARY_PATH ), or try chrpath .
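A small illustration of the rpath route, with placeholder names for the library and directories:
$ gcc -o a.out main.c -L$HOME/mylibs -lmylib -Wl,-rpath,$HOME/mylibs
$ ldd a.out
ldd prints where each shared library dependency resolves to, so it is a quick way to confirm which of the lookup mechanisms above is winning.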
{ "source": [ "https://unix.stackexchange.com/questions/22926", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4098/" ] }
22,965
I have the following entry in my .ssh/config file Host AAA User BBB HostName CCC ControlMaster auto ControlPath ~/.ssh/%r@%h:%p The above allows me to multiplex multiple ssh sessions through the same ssh connection without having to type in the password every time I need a new session (as long as the master connection remains open). However, I have noticed that once I have a relatively high # of connections multiplexed (~7), I can't add more sessions to the same multiplexed connection, and I start get the following error: > ssh -X AAA mux_client_request_session: session request failed: Session open refused by peer Password: My questions: Why am I getting this error? Is there a limit in the # of ssh sessions I can multiplex in the same connection? Can I change that limit? Would that be a bad idea?
The sshd daemon on the server is limiting the number of sessions per network connection. This is controlled by MaxSessions option in /etc/ssh/sshd_config . Also the MaxStartups option may need to be increased if you use a large number of sessions. (See man sshd_config for more details.) The option to modify MaxSessions limit has been introduced in OpenSSH 5.1 and it looks that the number was previously hard-fixed at 10. If you exceed MaxSessions on the server, you'll see sshd[####]: error: no more sessions in the server's log.
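For example, on the server (MaxSessions is a stock sshd_config keyword; 20 is just a sample value):
# /etc/ssh/sshd_config
MaxSessions 20
then reload sshd so the change takes effect, e.g. service sshd reload or the equivalent for your init system.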
{ "source": [ "https://unix.stackexchange.com/questions/22965", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
23,026
So I like to harden my bash scripts wherever I can (and when not able to delegate to a language like Python/Ruby) to ensure errors do not go uncaught. In that vein I have a strict.sh, which contains things like: set -e set -u set -o pipefail And source it in other scripts. However, while pipefail would pick up: false | echo it kept going | true It will not pick up: echo The output is '`false; echo something else`' The output would be The output is '' False returns non-zero status, and no-stdout. In a pipe it would have failed, but here the error isn't caught. When this is actually a calculation stored in a variable for later, and the value is set to blank, this may then cause later problems. So - is there a way to get bash to treat a non-zero returncode inside a backtick as reason enough to exit?
The exact language used in the Single UNIX specification to describe the meaning of set -e is: When this option is on, if a simple command fails for any of the reasons listed in Consequences of Shell Errors or returns an exit status value >0, and is not [a conditional or negated command], then the shell shall immediately exit. There is an ambiguity as to what happens when such a command occurs in a subshell . From a practical point of view, all the subshell can do is exit and return a nonzero status to the parent shell. Whether the parent shell will in turn exit depends on whether this nonzero status translates into a simple command failing in the parent shell. One such problematic case is the one you encountered: a nonzero return status from a command substitution . Since this status is ignored, it does not cause the parent shell to exit. As you've already discovered , a way to take the exit status into account is to use the command substitution in a simple assignment : then the exit status of the assignment is the exit status of the last command substitution in the assignment(s) . Note that this will perform as intended only if there is a single command substitution, as only the last substitution's status is taken into account. For example, the following command is successful (both according to the standard and in every implementation I've seen): a=$(false)$(echo foo) Another case to watch for is explicit subshells : (somecommand) . According to the interpretation above, the subshell may return a nonzero status, but since this is not a simple command in the parent shell, the parent shell should continue. In fact, all the shells I know of do make the parent return at this point. While this is useful in many cases such as (cd /some/dir && somecommand) where the parentheses are used to keep an operation such as a current directory change local, it violates the specification if set -e is turned off in the subshell, or if the subshell returns a nonzero status in a way that would not terminate it, such as using ! on a true command. For example, all of ash, bash, pdksh, ksh93 and zsh exit without displaying foo on the following examples: set -e; (set +e; false); echo "This should be displayed" set -e; (! true); echo "This should be displayed" Yet no simple command has failed while set -e was in effect! A third problematic case is elements in a nontrivial pipeline . In practice, all shells ignore failures of the elements of the pipeline other than the last one, and exhibit one of two behaviors regarding the last pipeline element: ATT ksh and zsh, which execute the last element of the pipeline in the parent shell, do business as usual: if a simple command fails in the last element of the pipeline, the shell executing that command, which happens to be the parent shell, exits. Other shells approximate the behavior by exiting if the last element of the pipeline returns a nonzero status. Like before, turning off set -e or using a negation in the last element of the pipeline causes it to return a nonzero status in a way that should not terminate the shell; shells other than ATT ksh and zsh will then exit. Bash's pipefail option causes a pipeline to exit immediately under set -e if any of its elements returns a nonzero status. Note that as a further complication, bash turns off set -e in subshells unless it's in POSIX mode ( set -o posix or have POSIXLY_CORRECT in the environment when bash starts). All of this shows that the POSIX specification unfortunately does a poor job at specifying the -e option. 
Fortunately, existing shells are mostly consistent in their behavior.
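A two-line experiment demonstrating the assignment trick described above:
set -e
v=$(false)    # the assignment's status is false's status, so the shell exits here
echo "not reached"
whereas echo "got: $(false)" on its own sails right through, because the failing substitution's status is swallowed by the successful echo.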
{ "source": [ "https://unix.stackexchange.com/questions/23026", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3201/" ] }
23,062
How can I check what version of the vi editor I have? What's the best way to upgrade it or install vim on Solaris?
According to http://www.vim.org/download.php , Sun Solaris Vim is included in the Companion Software: http://wwws.sun.com/software/solaris/freeware/ . vi has had the :ve[rsion] command going back at least as far as 1979, so it should work on any Solaris release.
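If the installed editor turns out to be vim (common once the companion software is on), you can also check from the shell without opening a file:
$ vim --version | head -1
A classic vi typically lacks that flag; start the editor and type :version inside it instead.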
{ "source": [ "https://unix.stackexchange.com/questions/23062", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
23,072
How can I check if swap is active, and which swap devices are set up, on the command line?
In Linux, you can use:
cat /proc/meminfo to see total swap and free swap (all Linux)
cat /proc/swaps to see which swap devices are being used (all Linux)
swapon -s to see swap devices and sizes (where swapon is installed)
vmstat for current virtual memory statistics
In Mac OS X, you can use:
vm_stat to see information about virtual memory (swap)
ls -lh /private/var/vm/swapfile* to see how many swap files are being used.
In Solaris, you can use:
swap -l to see swap devices/files, and their sizes
swap -s to see total swap size, used & free
vmstat to see virtual memory statistics
On some systems, "virtual memory" refers only to disk-backed memory devices, and on other systems, like Solaris, Virtual Memory can refer to any user process address space, including tmpfs filesystems (like /tmp) and shared memory space.
{ "source": [ "https://unix.stackexchange.com/questions/23072", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7826/" ] }
23,077
(Duplicated from Stack Overflow: https://stackoverflow.com/questions/7854975/how-to-exclude-a-list-of-full-directory-paths-in-find-command-on-solaris ) I have a very specific need to find unowned files and directories in Solaris using a script, and need to be able to exclude full directory paths from the find because they contain potentially thousands of unowned files (and it's normal because they are files hosted on other servers). I don't even want find to search in those directories as it will hang the server (cpu spiking to 99% for a long time), therefore piping the find results in egrep to filter out those directories is not an option. I know I can do this to exclude one of more directories by name: find / -mount -local \( -type d -a \( -name dir1 -o -name dir2 -o dir3 \) \) -prune -o \( -nouser -o -nogroup \) -print However, this will match dir1 and dir2 anywhere in the directory structure of any directories, which is not what I want at all. I want to be able to prevent find from even searching in the following directories (as an example): /opt/dir1 /opt/dir2 /var/dir3/dir4 And I still want it to find unowned files and directories in the following directories: /opt/somedir/dir1 /var/dir2 /home/user1/dir1 I have tried using regex in the -name arguments, but since find only matches 'name' against the basename of what it finds, I can't specify a path. Unfortunately, Solaris's find does not support GNU find options such as -wholename or -path, so I'm kind of screwed. My goal would be to have a script with the following syntax: script.sh "/path/to/dir1,/path/to/dir2,/path/to/dir3" How could I do that using find and standard sh scripting (/bin/sh) on Solaris (5.8 and up)?
Solaris find has no -path or -wholename, but it does have -inum, so one workaround is to prune by inode number. Inode numbers are only unique within a single filesystem; since your -mount / -local options already confine the search to one filesystem, only list exclusions that live on that filesystem (anything on another filesystem is skipped by -mount anyway), and the inodes are looked up fresh on every run. A sketch of the script, taking one excluded path per argument instead of a comma-separated list:
#!/bin/sh
[ $# -gt 0 ] || { echo "usage: $0 dir ..." >&2; exit 1; }
prune=""
for d in "$@"; do
  i=`ls -di "$d" | awk '{print $1}'`
  if [ -z "$prune" ]; then
    prune="-inum $i"
  else
    prune="$prune -o -inum $i"
  fi
done
find / -mount -local \( $prune \) -prune -o \( -nouser -o -nogroup \) -print
The unquoted $prune is deliberate: it has to word-split back into separate find arguments.
{ "source": [ "https://unix.stackexchange.com/questions/23077", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11755/" ] }
23,106
How to limit process to one cpu core ? Something similar to ulimit or cpulimit would be nice. (Just to ensure: I do NOT want to limit percentage usage or time of execution. I want to force app (with all it's children, processes (threads)) to use one cpu core (or 'n' cpu cores)).
Under Linux, execute the sched_setaffinity system call. The affinity of a process is the set of processors on which it can run. There's a standard shell wrapper: taskset . For example, to pin a process to CPU #0 (you need to choose a specific CPU): taskset -c 0 mycommand --option # start a command with the given affinity taskset -c -pa 0 1234 # set the affinity of a running process There are third-party modules for both Perl ( Sys::CpuAffinity ) and Python ( affinity ) to set a process's affinity. Both of these work on both Linux and Windows (Windows may require other third-party modules with Sys::CpuAffinity ); Sys::CpuAffinity also works on several other unix variants. If you want to set a process's affinity from the time of its birth, set the current process's affinity immediately before calling execve . Here's a trivial wrapper that forces a process to execute on CPU 0. #!/usr/bin/env perl use POSIX; use Sys::CpuAffinity; Sys::CpuAffinity::setAffinity(getpid(), [0]); exec $ARGV[0] @ARGV
{ "source": [ "https://unix.stackexchange.com/questions/23106", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9689/" ] }
23,111
What can you do with the eval command? Why is it useful? Is it some kind of a built-in function in bash? There is no man page for it..
eval is part of POSIX. It's an interface which can be a shell built-in. It's described in the "POSIX Programmer's Manual": http://www.unix.com/man-page/posix/1posix/eval/ eval - construct command by concatenating arguments It will take an argument and construct a command of it, which will then be executed by the shell. This is the example from the manpage: 1) foo=10 x=foo 2) y='$'$x 3) echo $y 4) $foo 5) eval y='$'$x 6) echo $y 7) 10 In the first line you define $foo with the value '10' and $x with the value 'foo' . Now define $y , which consists of the string '$foo' . The dollar sign must be escaped with '$' . To check the result, echo $y . The result will be the string '$foo' Now we repeat the assignment with eval . It will first evaluate $x to the string 'foo' . Now we have the statement y=$foo which will get evaluated to y=10 . The result of echo $y is now the value '10' . This is a common function in many languages, e.g. Perl and JavaScript. Have a look at perldoc eval for more examples: http://perldoc.perl.org/functions/eval.html
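As a footnote to the manpage example: in bash, indirect expansion covers this particular case without eval:
x=foo; foo=10
echo "${!x}"    # prints 10
eval remains the general tool, since it can build and run arbitrary command lines, not just variable lookups; that generality is also why it needs care with untrusted input.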
{ "source": [ "https://unix.stackexchange.com/questions/23111", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }