Columns: source_id (int64, values 1 to 4.64M), question (string, lengths 0 to 28.4k), response (string, lengths 0 to 28.8k), metadata (dict)
18,198
The following links discuss these concepts in different contexts. I have read their definitions, but I still can't tell how they are related, or if some of them are just the same. Current group ID Group ID Primary and supplementary group IDs Effective and real group IDs (also on Wikipedia ) Here is one example of the source of my confusion: According to man id , if I type id , I should get what they call effective and real group IDs. id uid=501(joe) gid=501(joe) groups=501(joe), 100(users) However, Wikipedia refers to the output of id to distinguish between primary and supplementary IDs. Moreover, Wikipedia distinguishes between primary vs supplementary and effective vs real group ids. How do these concepts relate to each other? Also, is it true that primary group ID = group ID = current group ID?
You are mixing two different distinctions here: between real and effective group ids, and between a user's primary and supplementary groups. The first distinction refers to how processes are run . Normally, when you run a command/program, it runs with the privileges of your user; its real group id is the same as your user's primary group. This can be changed by a process in order to perform some tasks as a member of another special group. To do that, programs use the setgid function, which changes their effective group id. The second distinction refers to users . Each user has his/her primary group . There is only one per user, and it is referred to as gid in the output of the id command. Apart from that, each user can belong to a number of supplementary groups - and these are listed at the end of the id output. [Edit] : I agree that the manpage for id is somewhat misleading here. It is probably because it is a stripped-down version of the description provided by the info document. To see it more clearly, run info coreutils "id invocation" (as suggested at the end of the id manual).
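A quick way to see these IDs side by side on a live system (a sketch, assuming GNU coreutils id and a procps-style ps; the group names are just examples):
    id                               # uid, primary group (gid=...) and all supplementary groups
    id -g                            # effective group ID only
    id -rg                           # real group ID only
    id -G                            # all group IDs: primary first, then the supplementary ones
    ps -o pid,rgid,egid,comm -p $$   # real vs effective GID of the current shell
For an ordinary shell the real and effective group IDs are identical; they only differ while a setgid program is running.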
{ "source": [ "https://unix.stackexchange.com/questions/18198", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
18,209
This may have more to do with detecting operating systems, but I specifically need the init system currently in use on the system. Fedora 15 and Ubuntu now use systemd, Ubuntu used to use Upstart (long time default until 15.04), while others use variations of System V. I have an application that I am writing to be a cross-platform daemon. The init scripts are being dynamically generated based on parameters that can be passed in on configure. What I'd like to do is only generate the script for the particular init system that they are using. This way the install script can be run reasonably without parameters as root and the daemon can be "installed" automagically. This is what I've come up with: Search for systemd, upstart, etc in /bin Compare /proc/1/comm to the systemd, upstart, etc Ask the user What would be the best cross/platform way of doing this? Kind of related, Can I depend on bash to be on the majority of *nix or is it distribution/OS dependent? Target platforms: Mac OS Linux (all distributions) BSD (all versions) Solaris, Minix, and other *nix
For the second question, the answer is no and you should have a look at Resources for portable shell programming . As for the first part - first of all, you certainly have to be careful. I'd say perform several tests to make sure - because the fact that someone has systemd (for example) installed does not mean it is actually used as the default init . Also, looking at /proc/1/comm can be misleading, because some installations of various init programs can automatically make /sbin/init a symlink, a hard link, or even a renamed copy of their main program. Maybe the most useful thing is to look at the type of init scripts already in place - because those are what you'll actually be creating, no matter what runs them. As a side note, you might also have a look at OpenRC , which aims to provide a structure of init scripts that is compatible with both Linux and BSD systems.
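A rough detection sketch along those lines (a heuristic only, untested as a whole; it assumes a Linux-style /proc and POSIX sh, so the fallbacks would need work for BSD, Solaris or OS X):
    #!/bin/sh
    # best-effort guess at the init system in use
    if [ -d /run/systemd/system ]; then
        echo systemd                     # this directory only exists while systemd is PID 1
    elif /sbin/init --version 2>/dev/null | grep -q upstart; then
        echo upstart
    elif [ -r /proc/1/comm ]; then
        cat /proc/1/comm                 # whatever PID 1 calls itself (init, launchd, ...)
    else
        echo unknown
    fi
As the answer says, treat the result as a hint and confirm it against the init scripts you actually find on the system.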
{ "source": [ "https://unix.stackexchange.com/questions/18209", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8471/" ] }
18,212
First of all, this is not a duplicate of any existing threads on SE. I have read these two threads ( 1st , 2nd ) on better bash history, but none of the answers work - - I am on Fedora 15 by the way. I added the following to the .bashrc file in the user directory (/home/aahan/), and it doesn't work. Anyone has a clue? HISTCONTROL=ignoredups:erasedups # no duplicate entries HISTSIZE=1000 # custom history size HISTFILESIZE=100000 # custom history file size shopt -s histappend # append to history, don't overwrite it PROMPT_COMMAND="history -a; history -c; history -r; $PROMPT_COMMAND" # Save and reload the history after each command finishes Okay this is what I want with the bash history (priority): don't store duplicates, erase any existing ones immediately share history with all open terminals always append history, not overwrite it store multi-line commands as a single command (which is off by default) what's the default History size and history file size?
This is actually a really interesting behavior and I confess I have greatly underestimated the question at the beginning. But first the facts: 1. What works The functionality can be achieved in several ways, though each works a bit differently. Note that, in each case, to have the history "transferred" to another terminal (updated), one has to press Enter in the terminal, where he/she wants to retrieve the history. option 1: shopt -s histappend HISTCONTROL=ignoredups PROMPT_COMMAND="history -a; history -n; $PROMPT_COMMAND" This has two drawbacks: At login (opening a terminal), the last command from the history file is read twice into the current terminal's history buffer; The buffers of different terminals do not stay in sync with the history file. option 2: HISTCONTROL=ignoredups PROMPT_COMMAND="history -a; history -c; history -r; $PROMPT_COMMAND" (Yes, no need for shopt -s histappend and yes, it has to be history -c in the middle of PROMPT_COMMAND ) This version has also two important drawbacks: The history file has to be initialized. It has to contain at least one non-empty line (can be anything). The history command can give false output - see below. [Edit] "And the winner is..." option 3: HISTCONTROL=ignoredups:erasedups shopt -s histappend PROMPT_COMMAND="history -n; history -w; history -c; history -r; $PROMPT_COMMAND" This is as far as it gets. It is the only option to have both erasedups and common history working simultaneously. This is probably the final solution to all your problems, Aahan. 2. Why does option 2 not seem to work (or: what really doesn't work as expected)? As I mentioned, each of the above solutions works differently. But the most misleading interpretation of how the settings work comes from analysing the output of history command . In many cases, the command can give false output. Why? Because it is executed before the sequence of other history commands contained in the PROMPT_COMMAND ! However, when using the second or third option, one can monitor the changes of .bash_history contents (using watch -n1 "tail -n20 .bash_history" for example) and see what the real history is. 3. Why option 3 is so complicated? It all lies in the way erasedups works. As the bash manual states, "(...) erasedups causes all previous lines matching the current line to be removed from the history list before that line is saved" . So this is really what the OP wanted (and not just, as I previously thought, to have no duplicates appearing in sequence ) . Here's why each of the history -. commands either has to or can not be in the PROMPT_COMMAND : history -n has to be there before history -w to read from .bash_history the commands saved from any other terminal, history -w has to be there in order to save the new history to the file after bash has checked if the command was a duplicate, history -a must not be placed there instead of history -w , because it will add to the file any new command, regardless of whether it was checked as a duplicate. history -c is also needed because it prevents trashing the history buffer after each command, and finally, history -r is needed to restore the history buffer from file, thus finally making the history shared across terminal sessions. Be aware that this solution will mess up the history order by putting all history from other terminals in front of the newest command entered in the current terminal. It also does not delete duplicate lines already in the history file unless you enter that command again.
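For reference, a consolidated ~/.bashrc block based on option 3 (the size values are arbitrary examples, not recommendations). On the last question: bash's own defaults are 500 for both HISTSIZE and HISTFILESIZE, although distributions commonly raise them in their skeleton .bashrc.
    # shared, duplicate-free history across terminals (option 3 above)
    HISTCONTROL=ignoredups:erasedups
    HISTSIZE=10000
    HISTFILESIZE=100000
    shopt -s histappend
    shopt -s cmdhist     # keep multi-line commands as a single history entry (usually already on)
    PROMPT_COMMAND="history -n; history -w; history -c; history -r; $PROMPT_COMMAND"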
{ "source": [ "https://unix.stackexchange.com/questions/18212", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9610/" ] }
18,231
I've been experiencing very strange behavior of SCP for some time: whenever I try to copy a file, the output of SCP contains a bunch of underscores and the file is not copied. $ scp test.txt 192.168.0.2:~ [email protected]'s password: ________________________________________ When I create an SSH connection using Midnight Commander and copy files it does work. Some info about my machine: $ ssh -V OpenSSH_5.8p1 Debian-1ubuntu3, OpenSSL 0.9.8o 01 Jun 2010 $ uname -a Linux squatpc 2.6.38-10-generic #46-Ubuntu SMP Tue Jun 28 15:05:41 UTC 2011 i686 i686 i386 GNU/Linux And I'm running Kubuntu 11.04. Edit: Some more info as requested by the comments: $ scp -v test.txt 192.168.0.2:~ Executing: program /usr/bin/ssh host 192.168.0.2, user (unspecified), command scp -v -t -- ~ OpenSSH_5.8p1 Debian-1ubuntu3, OpenSSL 0.9.8o 01 Jun 2010 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to 192.168.0.2 [192.168.0.2] port 22. debug1: Connection established. debug1: identity file /home/job/.ssh/id_rsa type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048 debug1: identity file /home/job/.ssh/id_rsa-cert type -1 debug1: identity file /home/job/.ssh/id_dsa type -1 debug1: identity file /home/job/.ssh/id_dsa-cert type -1 debug1: identity file /home/job/.ssh/id_ecdsa type -1 debug1: identity file /home/job/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p1 Debian-1ubuntu3 debug1: match: OpenSSH_5.8p1 Debian-1ubuntu3 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-1ubuntu3 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ECDSA 28:f3:2b:31:36:43:9b:07:d8:33:ca:43:4f:ca:6c:4c debug1: Host '192.168.0.2' is known and matches the ECDSA host key. debug1: Found key in /home/job/.ssh/known_hosts:20 debug1: ssh_ecdsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/job/.ssh/id_rsa debug1: Authentications that can continue: publickey,password debug1: Trying private key: /home/job/.ssh/id_dsa debug1: Trying private key: /home/job/.ssh/id_ecdsa debug1: Next authentication method: password [email protected]'s password: debug1: Authentication succeeded (password). Authenticated to 192.168.0.2 ([192.168.0.2]:22). debug1: channel 0: new [client-session] debug1: Requesting [email protected] debug1: Entering interactive session. debug1: Sending environment. 
debug1: Sending env LANG = en_US.UTF-8 debug1: Sending command: scp -v -t -- ~ ________________________________________ debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 debug1: channel 0: free: client-session, nchannels 1 debug1: fd 0 clearing O_NONBLOCK debug1: fd 1 clearing O_NONBLOCK Transferred: sent 2120, received 1872 bytes, in 0.3 seconds Bytes per second: sent 7783.1, received 6872.6 debug1: Exit status 0 and $ type scp scp is hashed (/usr/bin/scp)
Ok LOL, I just figured out what the problem is. Since I like cows so much, I've put fortune | cowsay at the top of my .bashrc file which produces output like the following when starting bash : _______________________________________ < You will lose an important disk file. > --------------------------------------- \ ^__^ \ (oo)\_______ (__)\ )\/\ ||----w | || || This is all fine (and sometimes funny) when running bash interactively. However, bash reads ~/.bashrc when it is interactive and not a login shell, or when it is a login shell and its parent process is rshd or sshd . When you run scp , the server starts a shell which starts a remote scp instance. The output from .bashrc confuses scp because it is sent the same way the scp protocol data is sent. This is apparently a known bug, see here for more details. Also note that the underscores I mentioned in the question are those in the top line of the text balloon. So the solution was simple: I put the following at the top of .bashrc on the remote (destination) machine: # If not running interactively, don't do anything [[ $- == *i* ]] || return This line is present in the default .bashrc but was put way down because of my many (apparently careless) edits.
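For reference, the usual arrangement at the top of ~/.bashrc - an equivalent, slightly more portable guard than the [[ $- == *i* ]] test, with the noisy bits placed after it (a sketch):
    # ~/.bashrc
    case $- in
        *i*) ;;        # interactive shell: carry on
        *)   return ;; # non-interactive (scp, sftp, rsync, ...): stay silent
    esac

    # anything that writes to the terminal goes below the guard
    if command -v fortune >/dev/null && command -v cowsay >/dev/null; then
        fortune | cowsay
    fi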
{ "source": [ "https://unix.stackexchange.com/questions/18231", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6598/" ] }
18,239
$ ls -l /dev/stdin /dev/fd/0 lrwx------ 1 tim tim 64 2011-08-07 09:53 /dev/fd/0 -> /dev/pts/2 lrwxrwxrwx 1 root root 15 2011-08-06 08:14 /dev/stdin -> /proc/self/fd/0 $ ls -l /dev/pts/2 /proc/self/fd/0 crw--w---- 1 tim tty 136, 2 2011-08-07 09:54 /dev/pts/2 lrwx------ 1 tim tim 64 2011-08-07 09:54 /proc/self/fd/0 -> /dev/pts/2 I was wondering if all the files under /dev and its subdirectories are all file descriptors of devices? Why are there so many links from each other? For example, /dev/fd/0 , /dev/stdin , /proc/self/fd/0 are all links to /dev/pts/2 . If l in lrwx------ mean link, what does c in crw--w---- mean?
Almost all the files under /dev are device files . Whereas reading and writing to a regular file stores data on a disk or other filesystem, accessing a device file communicates with a driver in the kernel, which generally in turn communicates with a piece of hardware (a hardware device, hence the name). There are two types of device files: block devices (indicated by b as the first character in the output of ls -l ), and character devices (indicated by c ). The distinction between block and character devices is not completely universal. Block devices are things like disks, which behave like large, fixed-size files: if you write a byte at a certain offset, and later read from the device at that offset, you get that byte back. Character devices are just about anything else, where writing a byte has some immediate effect (e.g. it's emitted on a serial line) and reading a byte also has some immediate effect (e.g. it's read from the serial port). The meaning of a device file is determined by its number, not by its name (the name matters to applications, but not to the kernel). The number is actually two numbers: the major number indicates which driver is responsible for this device, and the minor number allows a driver to drive several devices¹. These numbers appear in the ls -l listing, where you would normally find the file size. E.g. brw-rw---- 1 root disk 8, 0 Jul 12 15:54 /dev/sda → this device is major 8, minor 0. Some device files under /dev don't correspond to hardware devices. One that exists on every unix system is /dev/null ; writing to it has no effect, and reading from it never returns any data. It's often convenient in shell scripts, when you want to ignore the output from a command ( >/dev/null ) or run a command with no input ( </dev/null ). Other common examples are /dev/zero (which returns null bytes ad infinitum ) /dev/urandom (which returns random bytes ad infinitum ). A few device files have a meaning that depends on the process that accesses it. For example, /dev/stdin designates the standard input of the current process; opening from has approximately the same effect as opening the original file that was opened as the process's standard input. Somewhat similarly, /dev/tty designates the terminal to which the process is connected. Under Linux, nowadays, /dev/stdin and friends are not implemented as character devices, but instead as symbolic links to a more general mechanism that allows every file descriptor to be referenced (as opposed to only 0, 1 and 2 under the traditional method); for example /dev/stdin is a symbolic link to /proc/self/fd/0 . See How does /dev/fd relate to /proc/self/fd/? . You'll find a number of symbolic links under /dev . This can occur for historical reasons: a device file was moved from one name to another, but some applications still use the old name. For example, /dev/scd0 is a symbolic link to /dev/sr0 under Linux; both designate the first CD device. Another reason for symbolic links is organization: under Linux, you'll find your hard disks and partitions in several places: /dev/sda and /dev/sda1 and friends (each disk designated by an arbitrary letter, and partitions according to the partition layout), /dev/disk/by-id/* (disks designated by a unique serial number), /dev/disk/by-label/* (partitions with a filesystem, designated by a human-chosen label); and more. 
Symbolic links are also used when a generic device name could be one of several; for example /dev/dvd might be a symbolic link to /dev/sr0 , or it might be a link to /dev/sr1 if you have two CD readers and the second one is to be the default DVD reader. Finally, there are a few other files that you might find under /dev , for traditional reasons. You won't find the same on every system. On most unices, /dev/log is a socket that programs use to emit log messages. /dev/MAKEDEV is a script that creates entries in /dev . On modern Linux systems, entries in /dev/ are created automatically by udev , obsoleting MAKEDEV . ¹ This is actually no longer true under Linux, but this detail only matters to device driver writers.
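A few commands to poke at these attributes on a running system (a sketch, assuming GNU coreutils; device names will differ from machine to machine):
    ls -l /dev/sda /dev/null /dev/tty 2>/dev/null                 # b or c type; major, minor where the size would be
    stat -c '%n: type=%F major=%t minor=%T' /dev/null /dev/tty    # major/minor printed in hex
    readlink -f /dev/stdin                                        # resolves through /proc/self/fd/0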
{ "source": [ "https://unix.stackexchange.com/questions/18239", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
18,255
$ ls -l /dev/stdin /dev/fd/0 lrwx------ 1 tim tim 64 2011-08-07 09:53 /dev/fd/0 -> /dev/pts/2 lrwxrwxrwx 1 root root 15 2011-08-06 08:14 /dev/stdin -> /proc/self/fd/0 $ ls -l /dev/pts/2 /proc/self/fd/0 crw--w---- 1 tim tty 136, 2 2011-08-07 09:54 /dev/pts/2 lrwx------ 1 tim tim 64 2011-08-07 09:54 /proc/self/fd/0 -> /dev/pts/2 What differences and relations are between /dev/fd/ and /proc/self/fd/? Do the two fd 's mean both floppy disk , both file descriptor , or one for each? What are /proc/self and /proc usually for?
/dev/fd and /proc/self/fd are exactly the same; /dev/fd is a symbolic link to /proc/self/fd . /proc/self/fd is part of a larger scheme that exposes the file descriptor of all processes ( /proc/$pid/fd/$number ). /dev/fd exists on other unices and is provided under Linux for compatibility. /proc/*/fd is specific to Linux.
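A small demonstration in bash on Linux:
    ls -l /dev/fd                    # a symlink to /proc/self/fd
    ls -l /proc/self/fd/             # one entry per descriptor open in the ls process itself
    exec 3< /etc/hostname            # open descriptor 3 in the current shell
    ls -l /proc/$$/fd/3              # the same descriptor, seen through the shell's own pid
    cat /dev/fd/3                    # a child inherits fd 3, so /dev/fd/3 works there too
    exec 3<&-                        # close it again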
{ "source": [ "https://unix.stackexchange.com/questions/18255", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
18,284
When printing with Linux is there an alternative to using CUPS ? After using it for a couple of weeks printing suddenly stopped working. The 'processing' light would come on and that would be the end. CUPS would list the job as finished. I cannot find any solution that works.
There are two traditional printing interfaces in the unix world: lp (the System V interface) and lpr (the BSD interface). There are three printing systems available on Linux: the traditional BSD lpr , the newer BSD system LPRng , and CUPS . LPRng and CUPS both provide a BSD-style interface and a System-V-style interface. Nowadays, CUPS is the de facto standard printing system for unix; it's the default or only system under Mac OS X and most Linux distributions as well as recent versions of Solaris, and it's available as a package on all major BSD distributions. Nonetheless your distribution may provide lpr and LPRng, typically in packages with these names. CUPS has better support for input and output filters (automatically converting various input formats, giving access to printer features such as paper source selection and double-sided printing). If you install an alternative, you're likely to need to tune it quite a bit to get these extra features working. And there's no guarantee that these systems will work better than CUPS anyway. So I'd recommend fixing whatever's broken (given your description, it could be the printer itself!).
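If you do stay with CUPS, a few commands are usually enough to diagnose a queue that marks jobs 'finished' without printing (a sketch; 'myprinter' is a placeholder queue name and some of these need root):
    lpstat -t                                        # overview: queues, default printer, pending jobs
    lpq -P myprinter                                 # jobs sitting in one particular queue
    cancel -a myprinter                              # drop everything stuck in that queue
    cupsdisable myprinter && cupsenable myprinter    # bounce the queue
    tail -f /var/log/cups/error_log                  # watch the scheduler while you resubmit a job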
{ "source": [ "https://unix.stackexchange.com/questions/18284", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7671/" ] }
18,288
I'm trying to get emacs (23.3 on Arch Linux) to map Ctrl + F12 to the built-in "compile" function when in C-mode (actually CC-mode, which comes built-in as well). So far I've tried the following: (defun my-c-mode-common-hook (define-key c-mode-map (kbd "C-<f12>") 'compile)) (add-hook 'c-mode-common-hook 'my-c-mode-common-hook) and: (eval-after-load 'c-mode '(define-key c-mode-map (kbd "C-<f12>") 'compile)) But neither works; I get <C-f12> is undefined . Based on what I've read here , here , and here , I can't see why it isn't working. Any thoughts?
{ "source": [ "https://unix.stackexchange.com/questions/18288", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2333/" ] }
18,360
I have a directory tree which has a bunch of symbolic links to files under /home ... however, I have moved /home to /mnt/home and need a way to "relink" all of the symlinks. Does such functionality exist or do I need to write a script to do so? As an example, I have something like the following: [root@trees ~]# ls -l /mnt/home/someone/something total 4264 lrwxrwxrwx 1 jnet www-data 55 2011-08-07 13:50 a -> /home/someone/someotherthing/a lrwxrwxrwx 1 jnet www-data 55 2011-08-07 13:50 b -> /home/someone/someotherthing/b lrwxrwxrwx 1 jnet www-data 55 2011-08-07 13:50 c -> /home/someone/someotherthing/c lrwxrwxrwx 1 jnet www-data 55 2011-08-07 13:50 d -> /home/someone/someotherthing/d lrwxrwxrwx 1 jnet www-data 55 2011-08-07 13:50 e -> /home/someone/someotherthing/e /mnt/home/someone/something/subdir: total 4264 lrwxrwxrwx 1 jnet www-data 55 2011-08-07 13:50 a -> /home/someone/someotherthing/subdir/a lrwxrwxrwx 1 jnet www-data 55 2011-08-07 13:50 b -> /home/someone/someotherthing/subdir/b lrwxrwxrwx 1 jnet www-data 55 2011-08-07 13:50 c -> /home/someone/someotherthing/subdir/c lrwxrwxrwx 1 jnet www-data 55 2011-08-07 13:50 d -> /home/someone/someotherthing/subdir/d lrwxrwxrwx 1 jnet www-data 55 2011-08-07 13:50 e -> /home/someone/someotherthing/subdir/e I want a command which will find all the symlinks and relink to the same places but underneath /mnt/home instead of /home Does such a command exist?
There is no command to retarget a symbolic link, all you can do is remove it and create another one. Assuming you have GNU utilities (e.g. under non-embedded Linux or Cygwin), you can use the -lname primary of find to match symbolic links by their target, and readlink to read the contents of the link. Untested: find /mnt/home/someone/something -lname '/home/someone/*' \ -exec sh -c 'ln -snf "/mnt$(readlink "$1")" "$1"' sh {} \; It would be better to make these symbolic links relative. Assuming ln is from GNU coreutils 8.22, just pass the -r option to ln in addition to the other options and it will convert the target to a relative path. For older systems or for bulk rewriting of valid links, there's a convenient little utility called symlinks (originally by Mark Lord, now maintained by J. Brandt Buckley), present in many Linux distributions. Before the move, or after you've restored valid links as above, run symlinks -c /mnt/home/someone/something to convert all absolute symlinks under the specified directory to relative symlinks unless they cross a filesystem boundary.
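If you would rather see what would change before touching anything, a dry-run variant of the same idea (again untested; GNU find and readlink assumed - it only prints the intended rewrites):
    find /mnt/home/someone/something -lname '/home/someone/*' -exec sh -c '
        for link do
            printf "%s -> /mnt%s\n" "$link" "$(readlink "$link")"
        done' sh {} +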
{ "source": [ "https://unix.stackexchange.com/questions/18360", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1974/" ] }
18,362
I accidentally executed rm /* while logged in as root in a remote Ubuntu Server and deleted pretty much all binaries and currently I can neither log in via ssh or ftp to restore the files (and hope for the best). Is there a way to somehow fix this mess, or should I call the datacenter and ask for a format?
rm /* should delete very little. There is no -r flag in there that would recursively delete anything, and without it directories will not be deleted (and even if directories were deleted, only empty ones can be deleted). This answer is predicated on the assumption that you did not run rm -rf /* . The only files in the root filesystem of consequence may be the symlinks to the kernel and initrd (although on one Ubuntu system I'm looking at, they don't exist) or a /lib64 symlink on 64-bit systems. The problem may just be that the /lib64 -> /lib symlink has been deleted. That's pretty nasty though, as just about every program will rely on that symlink: $ ldd /bin/bash ... /lib64/ld-linux-x86-64.so.2 (0x00007f8946ab7000) That ld-linux is the dynamic loader, and if it is not available, you cannot run any dynamic executables. This will make it extremely difficult to log in, and you may not be able to at all. One saviour may be busybox . Run this to check: $ ldd /bin/busybox not a dynamic executable In this case, busybox should be runnable, but the question is how can you run it? If you have access to the boot loader prompt, you may be able to boot with init=/bin/static-sh , where static-sh is a symlink to busybox (check that /bin/static-sh exists - it does on my system, but it's not standard Ubuntu. This bug suggests that is is available.) Once you have a root shell, you can re-create the /lib64 symlink. You may need to first remount the root filesystem as read/write. busybox should have these tools built in, which you can run as follows: # busybox mount -o remount,rw / # busybox ln -s /lib /lib64 # /bin/bash bash# If bash works, the problem should be fixed.
{ "source": [ "https://unix.stackexchange.com/questions/18362", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9702/" ] }
18,388
Btrfs has begun to gain some momentum in replacing ext4 as the default filesystem of choice for a few distributions such as Fedora Core 16 . It is experimentally available in a number of other distributions ( From Wikipedia: openSUSE 11.3, SLES 11 SP1, Ubuntu 10.10, Sabayon Linux, RHEL6,MeeGo, Debian 6.0, and Slackware 13.37). I'm certainly not ready to convert all my workplace servers over (my file system choice is generally conservative), I'm considering using it at home and on select non-mission critical production machines at work. Btrfs brings a feature set that is similar to ZFS in many ways. I can understand why this would be desirable in an "enterprise" environment, especially with systems that focus on storage delivery. But how is this same feature set useful for end users? What advantages does Btrfs' feature list give me on machines whose primary function is not the presentation of storage? What advantages does it give me on my laptop? Outside of enterprise storage, why should I bother switching to Btrfs from the tried and true Ext filesystem?
From wiki : Extent based file storage 2^64 byte == 16 EiB maximum file size Space-efficient packing of small files Space-efficient indexed directories Dynamic inode allocation Writable snapshots, read-only snapshots Subvolumes (separate internal filesystem roots) Checksums on data and metadata Compression (gzip and LZO) Integrated multiple device support RAID-0, RAID-1 and RAID-10 implementations Efficient incremental backup Background scrub process for finding and fixing errors on files with redundant copies Online filesystem defragmentation Explanation for desktop users: Space-efficient packing of small files: Important for desktops with tens of thousands of files (maildirs, repos with code, etc). Dynamic inode allocation: Avoid the limits of Ext2/3/4 in numbers of inodes. Btrfs inode limits is in a whole different league (whereas ext4's inodes are allocated at filesystem creation time and cannot be resized after creation, typically at 1-2 million, with a hard limit of 4 billion, btrfs's inodes are dynamically allocated as needed, and the hard limit is 2^64, around 18.4 quintillion, which is around 4.6 billion times the hard limit of ext4). Read-only snapshots: fast backups. Checksums on data and metadata: essential for data integrity. Ext4 only has metadata integrity. Compression: LZO compression is very fast. Background scrub process to find and to fix errors on files with redundant copies: data integrity. Online filesystem defragmentation: autodefrag in 3.0 will defrag some types of files like databases (e.g. firefox profiles or akonadi storage). I recommend you the kernel 3.0. Also btrfs is a good FS for SSD.
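To make that concrete, a few commands for a desktop-style setup (a sketch; it assumes btrfs-progs is installed and /dev/sdXn is a spare partition - adapt the device and mount point):
    mkfs.btrfs /dev/sdXn                              # create the filesystem
    mount -o compress=lzo /dev/sdXn /mnt/data         # mount with transparent LZO compression
    btrfs subvolume create /mnt/data/home             # a subvolume you can snapshot later
    btrfs subvolume snapshot -r /mnt/data/home /mnt/data/home-$(date +%F)   # read-only snapshot
    btrfs scrub start /mnt/data                       # background integrity check
    btrfs filesystem defragment -r /mnt/data/home     # online defragmentation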
{ "source": [ "https://unix.stackexchange.com/questions/18388", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
18,405
I installed Cygwin, to be disappointed that bash by default runs within "cmd.exe". I googled around and found Console2 . It's not a particularly well-designed application, as doing adjustments is slightly painful, although most of the time it works well. I am still looking for a better way to survive in a Windows environment as even Console 2 occasionally crashes e.g. when trying to resize my terminal when editing in vim and there are plenty of other annoyances that I'm really not satisfied with. Any ideas? I tried using Cygwin via PuTTY and that was an equally bad user experience.
MinTTY - here . It makes Cygwin entirely usable on Windows. I would be lost without it. Based on the original PuTTY code, but it integrates straight into Cygwin (and in fact is bundled with Cygwin). Start it with C:\cygwin\bin\mintty.exe - (or wherever you installed it). The trailing '-' is key: it makes mintty start your shell as a login shell. There are a few other useful additions for Cygwin as well, one being apt-cyg . It's not perfect, but it's better than running setup.exe every time you remember you're missing a package. Even with Cygwin/X, I still use MinTTY as my primary terminal (I hate the scroll bars on xterm).
{ "source": [ "https://unix.stackexchange.com/questions/18405", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9718/" ] }
18,413
I have a Linux system where it is showing to me two different times when the system was last booted. root@linux:~ # who -b; uptime system boot 2009-07-09 20:51 11:48am up 1 day 0:54, 1 user, load average: 0.01, 0.03, 0.00 I presume that who is showing the content of /var/log/wtmp and uptime I have no idea. Is there some way to fix this difference? I rebooted the system yesterday so I know the contents that uptime is showing me is correct.
{ "source": [ "https://unix.stackexchange.com/questions/18413", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/821/" ] }
18,435
Reading the Virtualbox user manual, I finally got [ here ], which explains how to install Virtualbox Guest Additions on a Linux guest via Command Line. But it's not clear enough for me (I just started learning some commands). Can someone put down the exact commands you would use to install Virtualbox Guest Additions via CLI? (which includes finding where virtualbox guest additions has been mounted etc.)
... finally this worked for me, should also work for anybody else trying to install VirtualBox Guest Additions on a CentOS (x86_64) virtual server in command line mode. # yum update # yum install dkms gcc make kernel-devel bzip2 binutils patch libgomp glibc-headers glibc-devel kernel-headers elfutils-libelf-devel # mkdir -p /media/cdrom # mount /dev/scd0 /media/cdrom # sh /media/cdrom/VBoxLinuxAdditions.run Note: In CentOS 7 and higher the cdrom is at /dev/sr0 instead of /dev/scd0 . When the process is complete, reboot the system. That's all.
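After the reboot, a quick way to confirm the additions actually loaded (a sketch):
    lsmod | grep -i vbox        # vboxguest, vboxsf and vboxvideo should be listed
    modinfo vboxguest           # shows the module version that was just built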
{ "source": [ "https://unix.stackexchange.com/questions/18435", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9610/" ] }
18,503
When I try to shutdown the computer from a command line or terminal I must have root privileges: amy@amy:~$ shutdown now shutdown: Need to be root and amy@amy:~$ halt halt: Need to be root but when shutting down using the graphical user interface, i.e. shutdown button, or the hardware shutdown button, I'm not asked for the password to do so. What does that shutdown for the graphical interface, and why it doesn't need the password or root privileges? I'm using Ubuntu 11.04 Natty.
The hardware power button triggers an ACPI event that acpid (the ACPI daemon) notices and reacts to; in this case by shutting down the system, although you could have it do whatever you want. The ACPI daemon runs as root, so it has permission to shut down the system. Desktop environments (e.g. gdm for Gnome) typically run as root as well, so I suspect they work the same way -- you don't have permission to shut down the system, but you can tell gdm you want it shut down and it can do it on your behalf.
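If you also want your console user to be able to shut down from the command line without a password, one common approach is a narrow sudo rule (a sketch; adjust the user name and paths, and edit only through visudo):
    # created with: visudo -f /etc/sudoers.d/shutdown
    amy ALL=(root) NOPASSWD: /sbin/shutdown, /sbin/reboot, /sbin/halt
After that, sudo shutdown -h now works for that user, while the graphical path keeps going through acpid and the display manager as described above.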
{ "source": [ "https://unix.stackexchange.com/questions/18503", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9632/" ] }
18,506
I did a website scrape for a conversion project. I'd like to do some statistics on the types of files in there -- for instance, 400 .html files, 100 .gif , etc. What's an easy way to do this? It has to be recursive. Edit: With the script that maxschelpzig posted, I'm having some problems due to the architecture of the site I've scraped. Some of the files are of the name *.php?blah=blah&foo=bar with various arguments, so it counts them all as unique. So the solution needs to consider *.php* to be all of the same type, so to speak.
You could use find and uniq for this, e.g.: $ find . -type f | sed 's/.*\.//' | sort | uniq -c 16 avi 29 jpg 136 mp3 3 mp4 Command explanation find recursively prints all filenames sed deletes from every filename the prefix until the file extension uniq assumes sorted input -c does the counting (like a histogram).
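To treat foo.php?x=1&y=2 and foo.php as the same type (the case raised in the edit above), strip the query string before taking the extension - roughly:
    find . -type f | sed -e 's/[?].*$//' -e 's/.*\.//' | sort | uniq -c | sort -rn
The extra sort -rn at the end just lists the most common types first.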
{ "source": [ "https://unix.stackexchange.com/questions/18506", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/394/" ] }
18,552
I am working on SunOS 5.10. I have a folder that contains about 200 zip files. Each zip file contains only one text file in it. I would like to search for a specific string in all the text files in all the zip files. I tried this (which searches for any text file in the zip file that contains the string "ORA-") but it didn't work. zipgrep ORA-1680 *.zip What is the correct of doing it without uncompressing the zip files?
It is in general not possible to search for content within a compressed file without uncompressing it one way or another. Since zipgrep is only a shellscript, wrapping unzip and egrep itself, you might just as well do it manually: for file in *.zip; do unzip -c "$file" | grep "ORA-1680"; done If you need just the list of matching zip files, you can use something like: for file in *.zip; do if ( unzip -c "$file" | grep -q "ORA-1680"); then echo "$file" fi done This way you are only decompressing to stdout (ie. to memory) instead of decompressing the files to disk. You can of course try to just grep -a the zip files but depending on the content of the file and your pattern, you might get false positives and/or false negatives.
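If you also want to see which zip each hit came from, GNU grep's --label option works nicely on a pipe (a sketch; stock Solaris grep lacks it, so you would need GNU grep - often installed as ggrep - or prefix the lines yourself):
    for file in *.zip; do
        unzip -p "$file" | grep -H --label="$file" "ORA-1680"
    done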
{ "source": [ "https://unix.stackexchange.com/questions/18552", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6933/" ] }
18,564
I have read these threads: rsync --delete --files-from=list / dest/ does not delete unwanted files Delete extraneous files from dest dir via rsync? But, as far as I can tell (maybe I am missing something), they don't cover the following question: How do you ask rsync to copy files and delete those on the receiving side that do not exist on the sending side, with exceptions? (e.g. don't remove a mercurial repository .hg on the receiving side, even if there is no repository on the sending side). One possibility? Borrowing from @Richard Holloway's answer below. Say I have the following line: rsync -av --exclude=dont_delete_me --delete /sending/path /receiving/path As far as I understand, this line would make rsync delete everything on the receiving path that does not exist on the sending path, except those things matched by dont_delete_me . My question now is: Would rsync keep files on the receiving side that are matched by dont_delete_me even if nothing on the sending side matches dont_delete_me ?
If you use --delete and --exclude together what is in the excluded location won't get deleted even if the source files are removed. But that raises the issue that the folder won't be rsync 'd at all. So you will need another rsync job to sync that folder. Eg. rsync -nav /home/richardjh/keepall/ /home/backup/richardjh/keepall/ rsync -nav --exclude=keepall --delete /home/richardjh /home/backup/richardjh You could run these the other way around, but then it would delete all removed files and then replace them, which is not as efficient. You can't do it as a one liner.
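Putting that together for the .hg example from the question (paths are placeholders; note that adding --delete-excluded would undo the protection):
    # preview only - nothing is changed because of -n
    rsync -avn --exclude='.hg/' --delete /sending/path /receiving/path

    # real run: receiver-side .hg directories survive even if the sender has none
    rsync -av --exclude='.hg/' --delete /sending/path /receiving/path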
{ "source": [ "https://unix.stackexchange.com/questions/18564", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
18,605
My question is how directories are implemented? I can believe a data structure like a variable e.g. table, array or similar. Since UNIX is Open Source I can look in the source what the program does when it created a new directory. Can you tell me where to look or elaborate on the topic? That a directory "is" a file I could understand and is a directory really a file? I'm not sure that it is true that files are stored "in" files while still in way you could say the word file about nearly anything and I'm not sure what absolutely not is a file since you could call even a variable a file. For example a link is certainly not a file and a link is like a directory but then this violates that a directory is a file?
The internal structure of directories is dependent on the filesystem in use. If you want to know precisely what happens, have a look at filesystem implementations. Basically, in most filesystems, a directory is an associative array between filenames (keys) and inodes numbers (values). Something like this¹: 1167010 . 1158721 .. 1167626 subdir 132651 barfile 132650 bazfile This list is coded in some – more or less – efficient way inside a chain of (usually) 4KB blocks. Notice that the content of regular files is stored similarly. In the case of directories, there is no point in knowing which size is actually used inside these blocks. That's why the sizes of directories reported by du are multiples of 4KB. Inodes are there to tie blocks together, forming a single entity, namely a 'file' in the general sense. They are identified by a number which is some kind of address and each one is usually stored as a single, special block. Management of all this happens in kernel mode. Software just asks for the creation of a directory with a function named int mkdir(const char *pathname, mode_t mode); leading to a system call, and all the rest is performed behind the scenes. About links structure: A hard link is not a file, it's just a new directory entry (i.e. a name – inode number association) referring to a preexisting inode entity². This means that the same inode can be accessed from different pathnames. In particular, since metadatas (permissions, ownership, timestamps…) are stored within the inode, these are unique and independent of the pathname chosen to access the file. A symbolic link is a file and it's distinct from its target. This means that it has its own inode. It used to be handled just like a regular file: the target path was stored in a data block. But now, for efficiency reasons in recent ext filesystems, paths shorter than 60 bytes long are stored within the inode itself (using the fields which would normally be used to store the pointers to data blocks). — 1. this was obtained using ls -ai1 testdir . 2. whose type must be different than 'directory' nowadays.
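You can peek at this yourself on an ext2/3/4 filesystem (a sketch; debugfs ships with e2fsprogs, needs root, and /dev/sda1 is a placeholder - the path given to it is relative to that filesystem's root):
    ls -ai1 testdir                                   # inode number next to each name, as in the listing above
    stat -c 'inode=%i links=%h type=%F' testdir
    debugfs -R 'ls -l /path/to/testdir' /dev/sda1     # raw directory entries read straight from the filesystem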
{ "source": [ "https://unix.stackexchange.com/questions/18605", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9115/" ] }
18,614
In many cases lsof is not installed on the machines that I have to work with, but the "function" of lsof would be needed very much (for example on AIX). :\ Are there any lsof like applications in the non-Windows world? For example, I need to know which processes use the /home/username directory?
I know of fuser ; see if it's available on your system.
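Typical usage for the /home/username case (a sketch; these are the psmisc options found on Linux - AIX's fuser has a similar but not identical option set, so check its man page):
    fuser -v /home/username      # processes holding that directory open or using it as their cwd
    fuser -vm /home/username     # every process using the filesystem that contains it
    fuser -vu /home/username     # -u adds the owning user of each process (add -k to kill them, carefully)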
{ "source": [ "https://unix.stackexchange.com/questions/18614", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
18,642
I'm trying to use NTPD to update my Linux machine's time to a specified NTP server. Here is the scenario: Each time the Linux machine starts up, I want to update the time from NTP server and if it's not successful, I want to try again every 5 minutes until successfully (max is 2 hours). I searched around and find that I should(?) use NTPD and use some command like: #ntpdate ntp.server.com (before starting NTPD) #ntpd some_options_to_start The questions are: How can I know if the time was successfully updated by these commands? Can I set the interval to update time from ntpd? (or I have to use something like sleep and loop with do .. while / for in shell?) Note that I want to execute the above commands in a shell script and will put the shell in a web server. Then clients (with a web browser browser) will execute the script on the website. So I need to check if the update is successful or not to send result to the client (over the web).
Using a script to monitor ntpd is not commonly done. Usually a monitoring tool like nagios or munin is used to monitor the daemon. The tool can send you an alert when things go wrong. I have munin emailing me if the offset exceeds 15 milliseconds. Normally, you should use an odd number of servers so that the daemon can perform an election among the servers if one goes off. Three is usually adequate, and more than five is excessive. Clients on your internal network should be able to get by with one internal server if you monitor it. Use legitimate servers or your ISPs NTP or DNS servers as clock sources. There are public pools as well as public servers. ntpd is self tuning and you should not need to adjust it once it is configured and started. With recent ntpd implementations you can drop use of ntpdate entirely as they can do the initial setting of the date. The following script will parse the offsets in the output of ntpd and report an excessive offset. You could run it from cron to email you if there are problems. The script defaults to alerting on an offset of 0.1 seconds. #!/bin/bash limit=100 # Set your limit in milliseconds here offsets=$(ntpq -nc peers | tail -n +3 | cut -c 62-66 | tr -d '-') for offset in ${offsets}; do if [ ${offset:-0} -ge ${limit:-100} ]; then echo "An NTPD offset is excessive - Please investigate" exit 1 fi done # EOF
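If you would rather drive that check from cron than from a monitoring tool, a minimal crontab entry could look like this (a sketch; the script path and address are placeholders - cron mails any output, i.e. the warning line above, to MAILTO):
    MAILTO=admin@example.com
    */15 * * * * /usr/local/sbin/check_ntp_offset.sh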
{ "source": [ "https://unix.stackexchange.com/questions/18642", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9756/" ] }
18,657
If I tar a folder that is a git repository, can I do so without including the .git related files? If not, how would I go about doing that via a command?
Have a look first at git help archive . archive is a git command that lets you make archives containing only git-tracked files. It is probably what you are looking for. One example listed at the end of the man page: git archive --format=tar --prefix=git-1.4.0/ v1.4.0 | gzip >git-1.4.0.tar.gz
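A couple of variations that come up often (a reasonably recent git is assumed for the built-in tar.gz format):
    # current HEAD, gzipped in one step
    git archive --format=tar.gz --prefix=myproject/ -o myproject.tar.gz HEAD

    # zip archive of a specific tag
    git archive --format=zip -o myproject-1.4.0.zip v1.4.0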
{ "source": [ "https://unix.stackexchange.com/questions/18657", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7768/" ] }
18,673
I am trying reinstall pacman on my Arch Linux distribution. When I run the configure script "configure.ac", I get a bunch of undefined macros: error: possibly undefined macro: AM_INIT_AUTOMAKE. If this token and others are legitimate, please use m4_pattern_allow. See the autoconf documentation. error: possibly undefined macro: AC_PROG_LIBTOOL error: possibly undefined macro: AM_GNU_GETTEXT error: possibly undefined macro: AM_GNU_GETTEXT_VERSION error: possibly undefined macro: AM_CONDITIONAL Does anyone know what would cause these macros to be undefined? Having come from Ubuntu (where everything just works, and is therefore boring), I don't really know about automake.
Try this, maybe it can help: autoreconf --install (See the manpage, there is a --force option also)
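Those macros come from specific packages - AM_INIT_AUTOMAKE and AM_CONDITIONAL from automake, AC_PROG_LIBTOOL from libtool, AM_GNU_GETTEXT from gettext - so make sure their m4 files are installed before rerunning it. On Arch, roughly (as root):
    pacman -S --needed base-devel autoconf automake libtool gettext pkg-config
    autoreconf --force --install     # regenerates aclocal.m4, configure and Makefile.in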
{ "source": [ "https://unix.stackexchange.com/questions/18673", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9813/" ] }
18,712
cp -r is meant to copy files recursively, and cp -R for copying directories recursively. But I've checked, and both appear to copy both files and directories, the same thing. So, what's the difference actually?
While -R is well-defined by POSIX, -r is not portable! On Linux, in the GNU and BusyBox implementations of cp , -r and -R are equivalent. On the other hand, as you can read in the POSIX manual page of cp , the -r behavior is implementation-defined :
  * If neither the -R nor -r options were specified, cp shall take actions based on the type and contents of the file referenced by the symbolic link, and not by the symbolic link itself.
  * If the -R option was specified:
    * If none of the options -H, -L, nor -P were specified, it is unspecified which of -H, -L, or -P will be used as a default.
    * If the -H option was specified, cp shall take actions based on the type and contents of the file referenced by any symbolic link specified as a source_file operand.
    * If the -L option was specified, cp shall take actions based on the type and contents of the file referenced by any symbolic link specified as a source_file operand or any symbolic links encountered during traversal of a file hierarchy.
    * If the -P option was specified, cp shall copy any symbolic link specified as a source_file operand and any symbolic links encountered during traversal of a file hierarchy, and shall not follow any symbolic links.
  * If the -r option was specified, the behavior is implementation-defined.
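A small experiment that shows where the two can diverge on non-GNU systems (on GNU or BusyBox cp both runs give identical copies; on some older commercial unices -r may instead dereference the link and try to read the fifo):
    mkdir src && cd src
    mkfifo fifo                        # a non-regular file
    ln -s /etc/hostname link           # a symbolic link
    cd ..
    cp -R src copy_R && ls -l copy_R   # fifo copied as a fifo, link copied as a link
    cp -r src copy_r && ls -l copy_r   # implementation-defined on non-GNU cp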
{ "source": [ "https://unix.stackexchange.com/questions/18712", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9610/" ] }
18,736
I was wondering how to count the number of a specific character in each line by some text processing utilities? For example, to count " in each line of the following text "hello!" Thank you! The first line has two, and the second line has 0. Another example is to count ( in each line.
You can do it with sed and awk : $ sed 's/[^"]//g' dat | awk '{ print length }' 2 0 Where dat is your example text, sed deletes (for each line) all non- " characters and awk prints for each line its size (i.e. length is equivalent to length($0) , where $0 denotes the current line). For another character you just have to change the sed expression. For example for ( to: 's/[^(]//g' Update: sed is kind of overkill for the task - tr is sufficient. An equivalent solution with tr is: $ tr -d -c '"\n' < dat | awk '{ print length; }' Meaning that tr deletes all characters which are not ( -c means complement) in the character set "\n .
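awk alone can also do it, because gsub() returns the number of replacements it performed:
    awk '{ print gsub(/"/, "") }' dat      # count of " on each line
    awk '{ print gsub(/\(/, "") }' dat     # count of ( on each line (note the escaped parenthesis)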
{ "source": [ "https://unix.stackexchange.com/questions/18736", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
18,743
Some compilers (especially C or C++ ones) give you warnings about: No new line at end of file I thought this would be a C-programmers-only problem, but github displays a message in the commit view: \ No newline at end of file for a PHP file. I understand the preprocessor thing explained in this thread , but what has this to do with PHP? Is it the same include() thing or is it related to the \r\n vs \n topic? What is the point in having a new line at the end of a file?
It's not about adding an extra newline at the end of a file, it's about not removing the newline that should be there. A text file , under unix, consists of a series of lines , each of which ends with a newline character ( \n ). A file that is not empty and does not end with a newline is therefore not a text file. Utilities that are supposed to operate on text files may not cope well with files that don't end with a newline; historical Unix utilities might ignore the text after the last newline, for example. GNU utilities have a policy of behaving decently with non-text files, and so do most other modern utilities, but you may still encounter odd behavior with files that are missing a final newline¹. With GNU diff, if one of the files being compared ends with a newline but not the other, it is careful to note that fact. Since diff is line-oriented, it can't indicate this by storing a newline for one of the files but not for the others — the newlines are necessary to indicate where each line in the diff file starts and ends. So diff uses this special text \ No newline at end of file to differentiate a file that didn't end in a newline from a file that did. By the way, in a C context, a source file similarly consists of a series of lines. More precisely, a translation unit is viewed in an implementation-defined as a series of lines, each of which must end with a newline character ( n1256 §5.1.1.1). On unix systems, the mapping is straightforward. On DOS and Windows, each CR LF sequence ( \r\n ) is mapped to a newline ( \n ; this is what always happens when reading a file opened as text on these OSes). There are a few OSes out there which don't have a newline character, but instead have fixed- or variable-sized records; on these systems, the mapping from files to C source introduces a \n at the end of each record. While this isn't directly relevant to unix, it does mean that if you copy a C source file that's missing its final newline to a system with record-based text files, then copy it back, you'll either end up with the incomplete last line truncated in the initial conversion, or an extra newline tacked onto it during the reverse conversion. ¹ Example: the output of GNU sort on non-empty files always ends with a newline. So if the file foo is missing its final newline, you'll find that sort foo | wc -c reports one more byte than cat foo | wc -c . The read builtin of sh is required to return false if the end-of-file is reached before the end of the line is reached, so you'll find that loops such as while IFS= read -r line; do ...; done skip an unterminated line altogether.
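A quick shell check (and repair) for such files, relying on the fact that command substitution strips trailing newlines, so the test is non-empty only when the last byte is not a newline:
    f=somefile.php
    if [ -s "$f" ] && [ -n "$(tail -c 1 "$f")" ]; then
        echo "$f: no newline at end of file"
        printf '\n' >> "$f"       # append the missing newline
    fi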
{ "source": [ "https://unix.stackexchange.com/questions/18743", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9850/" ] }
18,752
I have a disk with two partitions: sda1 and sda2. I would like change the number of sda1 to sda2 and sda2 to sda1. It's possible but I don't remember the procedure. i.e. My first partition will be sda2 and the second sda1, so I need to specify a manual order, not an automatic ordering like in fdisk -> x -> f. How can I change the order? Links to manuals or tutorials are also welcome. Thanks. The reason: I have an application that needs to read data from sda1 but the data is in sda2. Changing the partition table is the fastest fix for this issue. The system isn't critical but I don't want to keep the system halted for too much time. Update : the fdisk version of OpenBSD includes this functionality.
I just did this in an easier way: # sfdisk -d /dev/sdb > sdb.bkp leave a copy for safety # cp sdb.bkp sdb.new now edit sdb.new changing ONLY the lines order and partition numbers, as in my case: from # partition table of /dev/sdb unit: sectors /dev/sdb1 : start= 1026048, size=975747120, Id=83 /dev/sdb2 : start= 2048, size= 204800, Id=83 /dev/sdb3 : start= 206848, size= 819200, Id= b /dev/sdb4 : start= 0, size= 0, Id= 0 to # partition table of /dev/sdb unit: sectors /dev/sdb1 : start= 2048, size= 204800, Id=83 /dev/sdb2 : start= 206848, size= 819200, Id= b /dev/sdb3 : start= 1026048, size=975747120, Id=83 /dev/sdb4 : start= 0, size= 0, Id= 0 then throw it back to the disk partition table? # sfdisk /dev/sdb < sdb.new My numbering sequence was mangled after I shrank&shifted right the only partition (sdb1) to add two smaller partitions at the start of the disk using gparted . If the last command does not work, as in my case, change it for: # sfdisk --no-reread -f /dev/sdb < sdb.new
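Before and after writing the new table it is worth double-checking what is on disk and what the kernel sees (run as root, on the right disk):
    sfdisk -l /dev/sdb             # the table as currently stored on disk
    blockdev --rereadpt /dev/sdb   # ask the kernel to re-read it (partprobe /dev/sdb also works)
    cat /proc/partitions           # the kernel's current view of sdb1/sdb2/sdb3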
{ "source": [ "https://unix.stackexchange.com/questions/18752", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9316/" ] }
18,760
$ tail -f testfile the command is supposed to show the latest entries in the specified file, in real-time right? But that's not happening. Please correct me, if what I intend it to do is wrong... I created a new file "aaa" and added a line of text and closed it. then issued this command (first line): $ tail -f aaa xxx xxa axx the last three lines are the contents of the file aaa. Now that the command is still running (since I used -f ), I opened the file aaa via the GUI and started adding a few more lines manually. But the terminal doesn't show the new lines added in the file. What's wrong here? The tail -f command only shows new entries if they are written by system only? (like log files etc)
From the tail(1) man page : With --follow (-f), tail defaults to following the file descriptor, which means that even if a tail’ed file is renamed, tail will continue to track its end. This default behavior is not desirable when you really want to track the actual name of the file, not the file descriptor (e.g., log rotation). Use --follow=name in that case. That causes tail to track the named file in a way that accommodates renaming, removal and creation. Your text editor is renaming or deleting the original file and saving the new file under the same filename. Use -F instead.
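So in this case:
    tail -F aaa        # shorthand for: tail --follow=name --retry aaa
To convince yourself that the editor really replaces the file, note the inode with ls -i aaa, save the file in the GUI editor, and run ls -i aaa again - the number changes.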
{ "source": [ "https://unix.stackexchange.com/questions/18760", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9610/" ] }
18,775
It's said that VirtualBox's VBoxManage modifyhd --resize command can only be used on either VDI or VHD files. Sadly, I have a VirtualBox image that is in VMDK format, and I don't know how to convert it to those other two formats.
You can use a two-step procedure then - first, use the clonemedium command to create a VDI image: VBoxManage clonemedium disk aaaa.vmdk aaaa.vdi --format VDI (Have a look also at other options to clonemedium , like --variant . To read the help, just run VBoxManage | less or visit https://www.virtualbox.org/manual/ch08.html#vboxmanage-clonevdi ). Once you have the .vdi file, you can proceed with your modifications.
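The second step is then the resize you started from, and you can convert back afterwards if you need the VMDK format again (sizes for modifyhd are in MB, so 20480 means 20 GB - just an example):
    VBoxManage modifyhd aaaa.vdi --resize 20480
    VBoxManage clonemedium disk aaaa.vdi aaaa-resized.vmdk --format VMDK   # optional conversion back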
{ "source": [ "https://unix.stackexchange.com/questions/18775", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
18,796
Assume I'm logged in as user takpar:

    takpar@skyspace:/$

As root, I've added takpar as a member of group webdev using:

    # usermod -a -G webdev takpar

But it seems it has not been applied, because for example I can't get into one of webdev's directories that has read permission for the group:

    400169 drwxr-x--- 3 webdev webdev 4.0K 2011-08-15 22:34 public_html
    takpar@skyspace:/home/webdev/$ cd public_html/
    bash: cd: public_html/: Permission denied

But after a reboot I have access as I expect. As this kind of group change is part of my routine, is there any way to apply the changes without needing a reboot? Answer: It seems there is no way to make the current session aware of the new group; for example, the file manager won't pick up the change. But a re-login will do the job. The su command is also appropriate for temporary commands in the current session.
Local solution: use su yourself to login again. In the new session you'll be considered as a member of the group. Man pages for newgrp and sg might also be of interest to change your current group id (and login into a new group): To use webdev 's group id (and privileges) in your current shell use: newgrp webdev To start a command with some group id (and keep current privileges in your shell) use: sg webdev -c "command" ( sg is like su but for groups, and it should work without the group password if you are listed as a member of the group in the system's data)
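A small check you can run to confirm which group is active in each case (group name webdev taken from the question):

    id -gn          # primary group of the current shell
    newgrp webdev   # starts a new shell whose primary group is webdev
    id -gn          # now prints: webdev
    exit            # back to the original shell and its original primary group

    sg webdev -c 'ls -l /home/webdev/public_html'   # one-off command with webdev's group privileges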
{ "source": [ "https://unix.stackexchange.com/questions/18796", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9870/" ] }
18,807
I am trying to compile a program written in Fortran using make (I have a Makefile and, while in the directory containing the Makefile, I type the command $ make target, where "target" is a system-specific target specification present in my Makefile). As I experiment with various revisions of my target specification, I often get a variety of error messages when attempting to call make. To give a few examples:

    make[1]: Entering directory
    /bin/sh: line 0: test: too many arguments
    ./dpp angfrc.f > angfrc.tmp.f
    /bin/sh: ./dpp: Permission denied
    make[1]: *** [angfrc.o] Error 126
    make[1]: Leaving directory
    make: *** [cmu60] Error 2

and

    make[1]: Entering directory
    /bin/sh: line 0: test: too many arguments
    ./dpp -DSTRESS -DMPI -P -D'pointer=integer'-I/opt/mpich_intel/include angfrc.f > angfrc.tmp.f
    /bin/sh: ./dpp: Permission denied
    make[1]: *** [angfrc.o] Error 126
    make[1]: Leaving directory
    make: *** [mpich-c2] Error 2

and

    make[1]: Entering directory
    /bin/sh: line 0: test: too many arguments
    ./dpp -DSTRESS -DMPI -P -D'pointer=integer' -I/opt/mpich_intel/include angfrc.f > angfrc.tmp.f
    /bin/sh: ./dpp: Permission denied
    make[1]: *** [angfrc.o] Error 126
    make[1]: Leaving directory
    make: *** [mpi-intel] Error 2

Do you know how I can find a list of what the error codes, such as "Error 126" and "Error 2," mean? I found this thread on another website, but I am not sure what the reply means. Does it mean that there is no system-independent meaning of the make error codes? Can you please help me? Thank you.
The error codes aren't from make: make is reporting the return status of the command that failed. You need to look at the documentation of each command to know what each status value means. Most commands don't bother with distinctions other than 0 = success, anything else = failure. In each of your examples, ./dpp cannot be executed. When this happens, the shell that tried to invoke it exits with status code 126 (this is standard behavior ). The instance of make that was running that shell detects a failed command (the shell) and exits, showing you Error 126 . That instance of make is itself a command executed by a parent instance of make, and the make utility returns 2 on error, so the parent make reports Error 2 . The failure of your build is likely to stem from test: too many arguments . This could be a syntax error in the makefile, or it could be due to relying on bash-specific features when you have a /bin/sh that isn't bash. Try running make SHELL=/bin/bash target or make SHELL=/bin/ksh target ; if that doesn't work, you need to fix your makefile.
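Two quick checks usually pinpoint this particular failure, since a shell's 126 means "found but not executable": the dpp preprocessor script has probably just lost its execute bit. A sketch (demo.sh is a hypothetical file used only to show the status code):

    ls -l dpp                   # does it still have an x bit?
    chmod +x dpp                # restore it if not

    touch demo.sh               # a file that exists but is not executable
    sh -c ./demo.sh; echo $?    # typically prints "Permission denied" and 126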
{ "source": [ "https://unix.stackexchange.com/questions/18807", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9605/" ] }
18,811
Is there a way to head / tail a document and get the reverse output; because you don't know how many lines there are in a document? I.e. I just want to get everything but the first 2 lines of foo.txt to append to another document.
You can use this to strip the first two lines: tail -n +3 foo.txt and this to strip the last two lines, if your implementation of head supports it: head -n -2 foo.txt (assuming the file ends with \n for the latter) Just like for the standard usage of tail and head these operations are not destructive. Use >out.txt if you want to redirect the output to some new file: tail -n +3 foo.txt >out.txt In the case out.txt already exists, it will overwrite this file. Use >>out.txt instead of >out.txt if you'd rather have the output appended to out.txt .
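A tiny demonstration with generated input, in case you want to sanity-check the counts:

    seq 5 | tail -n +3    # prints 3 4 5  (drops the first 2 lines)
    seq 5 | head -n -2    # prints 1 2 3  (drops the last 2 lines; GNU head)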
{ "source": [ "https://unix.stackexchange.com/questions/18811", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7768/" ] }
18,830
I need to run something as sudo without a password, so I used visudo and added this to my sudoers file: MYUSERNAME ALL = NOPASSWD: /path/to/my/program Then I tried it out: $ sudo /path/to/my/program [sudo] password for MYUSERNAME: Why does it ask for a password? How can I run/use commands as root with a non-root user, without asking for a password?
If there are multiple matching entries in /etc/sudoers, sudo uses the last one. Therefore, if you can execute any command with a password prompt, and you want to be able to execute a particular command without a password prompt, you need the exception last:

    myusername ALL = (ALL) ALL
    myusername ALL = (root) NOPASSWD: /path/to/my/program

Note the use of (root), to allow the program to be run as root but not as other users. (Don't give more permissions than the minimum required unless you've thought out the implications.) Note for readers who aren't running Ubuntu or who have changed the default sudo configuration (Ubuntu's sudo is ok by default): Running shell scripts with elevated privileges is risky, you need to start from a clean environment (once the shell has started, it's too late (see Allow setuid on shell scripts), so you need sudo to take care of that). Make sure that you have Defaults env_reset in /etc/sudoers or that this option is the compile-time default (sudo sudo -V | grep env should include Reset the environment to a default set of variables).
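After editing with visudo, you can confirm which rule wins without logging out. The user and path below are the placeholders from the question:

    sudo -l -U MYUSERNAME        # as root: lists the rules in the order sudo will apply them
    sudo -n /path/to/my/program  # -n never prompts; it fails instead of asking if NOPASSWD didn't match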
{ "source": [ "https://unix.stackexchange.com/questions/18830", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
18,838
How can I find out whether an SSH connection has been established with or without agent forwarding? I'm trying to do the following:

    ssh-add -D      (delete all stored keys)
    ssh -vvv something
    ssh-add         (add the key)
    ssh -vvv something

and compare the output, but I can see only subtle differences.
When ssh agent forwarding is enabled on the client (ForwardAgent yes in ~/.ssh/config) and is also enabled on the remote server (AllowAgentForwarding yes), then when logging in to the remote server the environment variable SSH_AUTH_SOCK should exist. Then if you log into another server (your public key must reside on this third server) you should not be prompted for any password. To clarify:

    home$ ssh-add
    Enter passphrase ...
    Identity added ...
    $ ssh hostA
    hostA$ env | grep SSH_AUTH_SOCK
    SSH_AUTH_SOCK=/tmp/...
    $ ssh hostB
    hostB$
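A one-shot test from your own machine, assuming hostA permits forwarding; nothing is stored remotely:

    ssh -o ForwardAgent=yes hostA 'ssh-add -l'
    # a list of your local keys          -> forwarding is working
    # "Could not open a connection to your authentication agent" -> it is not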
{ "source": [ "https://unix.stackexchange.com/questions/18838", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9885/" ] }
18,841
time is a brilliant command if you want to figure out how much CPU time a given command takes. I am looking for something similar that can measure the max RAM usage of the program and any children. Preferably it should distinguish between allocated memory that was used and unused. Maybe it could even give the median memory usage (so the memory usage you should expect when running for a long time). So I would like to do: rammeassure my_program my_args and get output similar to: Max memory allocated: 10233303 Bytes Max memory used: 7233303 Bytes Median memory allocation: 5233303 Bytes I have looked at memusg https://gist.github.com/526585/590293d6527c91e48fcb08edb8de9fd6c88a6d82 but I regard that as somewhat a hack.
You can use tstime to measure the high-water memory usage (RSS and virtual) of a process. For example:

    $ tstime date
    Tue Aug 16 21:35:02 CEST 2011
    Exit status: 0
    pid: 31169 (date) started: Tue Aug 16 21:35:02 2011
      real 0.017 s, user 0.000 s, sys 0.000s
      rss 888 kb, vm 9764 kb

It also supports an easier-to-parse output mode (-t).
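If tstime is not available, GNU time (the /usr/bin/time binary, not the shell builtin) can report the peak resident set size; the format string below is just one way to present it, using the program and arguments named in the question:

    /usr/bin/time -v my_program my_args 2>&1 | grep 'Maximum resident'
    # or, more compact:
    /usr/bin/time -f 'max RSS: %M KB' my_program my_args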
{ "source": [ "https://unix.stackexchange.com/questions/18841", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2972/" ] }
18,844
time is a brilliant command if you want to figure out how much CPU time a given command takes. I am looking for something similar that can list the files being accessed by a program and its children. Either in real time or as a report afterwards. Currently I use: #!/bin/bash strace -ff -e trace=file "$@" 2>&1 | perl -ne 's/^[^"]+"(([^\\"]|\\[\\"nt])*)".*/$1/ && print' but its fails if the command to run involves sudo . It is not very intelligent (it would be nice if it could only list files existing or that had permission problems or group them into files that are read and files that are written). Also strace is slow, so it would be good with a faster choice.
I gave up and coded my own tool. To quote from its docs:

    SYNOPSIS
        tracefile [-adefnu] command
        tracefile [-adefnu] -p pid

    OPTIONS
        -a      List all files
        -d      List only dirs
        -e      List only existing files
        -f      List only files
        -n      List only non-existing files
        -p pid  Trace process id
        -u      List only files once

It only outputs the files, so you do not need to deal with the output from strace.

https://gitlab.com/ole.tange/tangetools/tree/master/tracefile
{ "source": [ "https://unix.stackexchange.com/questions/18844", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2972/" ] }
18,853
I'm trying to do a git clone through a bash script, but the first time I run the script, when the server is not yet known, the script fails. I have something like this:

    yes | git clone [email protected]:repo/repoo.git
    The authenticity of host 'github.com (207.97.227.239)' can't be established.
    RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
    Are you sure you want to continue connecting (yes/no)?

But it's ignoring the yes. Do you know how to force git clone to add the key to the known hosts?
Add the following to your ~/.ssh/config file:

    Host github.com
        StrictHostKeyChecking no

Anything using the OpenSSH client to establish a remote shell (which the git client does) will then skip the key check for github.com. This is actually a bad idea since any form of skipping the checks (whether you automatically hit yes or skip the check in the first place) creates room for a man-in-the-middle security compromise. A better way would be to retrieve and validate the fingerprint and store it in the known_hosts file before needing to run some script that automatically connects.
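The safer pre-seeding the answer alludes to can itself be scripted, as long as you compare the fetched fingerprint against one published out of band before trusting it:

    ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts   # fetch the host key once
    ssh-keygen -lf ~/.ssh/known_hosts                      # show fingerprints so you can verify them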
{ "source": [ "https://unix.stackexchange.com/questions/18853", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8832/" ] }
18,877
According to the comments in /etc/sudoers (Fedora 13): ## Syntax: ## ## user MACHINE=COMMANDS ## ## The COMMANDS section may have other options added to it. My two related questions: What does the ALL=(ALL) ALL mean in the following line: root ALL=(ALL) ALL I've tested these two lines but I cannot figure out how they are functionally different: superadm ALL=(ALL) ALL superadm ALL=ALL I've read the manual but the syntax specification is difficult to follow. I have derived that the (ALL) ALL part is the command and tag specifications but I still cannot get my head around it.
Note: I'm answering 1. , since Ignacio already answered 2. . In the following sudo entry: superadm ALL=(ALL) ALL there are four fields: The first one specifies a user that will be granted privileges for some command(s). The second one is rarely used. It's a list of hostnames on which this sudo entry will be effective. On standard setups only one host is relevant (localhost) so this field is usually left as ALL . The fourth field is the list of commands superadm will be able to run with elevated privileges. ALL means all commands. Otherwise use a comma-separated list of commands. The third field (the one written (…) that is optional) specifies which users (and groups) the superadm user will be able to run the following commands as. ALL means they can choose anything (unrestricted). It this field is omitted, it means the same as (root) . Example: alan ALL = (root, bin : operator, system) /bin/ls, /bin/kill Here, alan is allowed to run the two commands /bin/ls and /bin/kill as root (or bin ), possibly with additional operator or system groups privileges. So alan may choose to run ls as the bin user and with operator 's group privileges like this: sudo -u bin -g operator /bin/ls /whatever/directory If -u is omitted, it's the same as -u root . If -g is omitted, no additional group privileges are granted.
{ "source": [ "https://unix.stackexchange.com/questions/18877", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2372/" ] }
18,886
It seems that normal practice would put the setting of IFS outside the while loop in order to not repeat setting it for each iteration... Is this just a habitual "monkey see, monkey do" style, as it has been for this monkey until I read man read , or am I missing some subtle (or blatantly obvious) trap here?
The trap is that IFS=; while read.. sets the IFS for the whole shell environment outside the loop, whereas while IFS= read redefines it only for the read invocation (except in the Bourne shell). You can check this by running a loop like

    while IFS= read xxx; ... done

After such a loop, echo "blabalbla $IFS ooooooo" still prints the default IFS characters (space, tab, newline) between the two words, whereas after

    IFS=; read xxx; ...

the IFS stays redefined to the empty string: now echo "blabalbla $IFS ooooooo" prints blabalbla ooooooo with nothing but the literal spaces in between. So if you use the second form, you have to remember to reset it: IFS=$' \t\n'. The second part of this question has been merged here, so I've removed the related answer from here.
{ "source": [ "https://unix.stackexchange.com/questions/18886", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2343/" ] }
18,899
I know you can create a file descriptor and redirect output to it. e.g. exec 3<> /tmp/foo # open fd 3. echo a >&3 # write to it exec 3>&- # close fd 3. But you can do the same thing without the file descriptor: FILE=/tmp/foo echo a > "$FILE" I'm looking for a good example of when you would have to use an additional file descriptor.
Most commands have a single input channel (standard input, file descriptor 0) and a single output channel (standard output, file descriptor 1) or else operate on several files which they open by themselves (so you pass them a file name). (That's in addition to standard error (fd 2), which usually filters up all the way to the user.) It is however sometimes convenient to have a command that acts as a filter from several sources or to several targets. For example, here's a simple script that separates the odd-numbered lines in a file from the even-numbered ones:

    while IFS= read -r line; do
      printf '%s\n' "$line"
      if IFS= read -r line; then printf '%s\n' "$line" >&3; fi
    done >odd.txt 3>even.txt

Now suppose you want to apply a different filter to the odd-numbered lines and to the even-numbered lines (but not put them back together, that would be a different problem, not feasible from the shell in general). In the shell, you can only pipe a command's standard output to another command; to pipe another file descriptor, you need to redirect it to fd 1 first.

    { while … done | odd-filter >filtered-odd.txt; } 3>&1 | even-filter >filtered-even.txt

Another, simpler use case is filtering the error output of a command. exec M>&N redirects a file descriptor to another one for the remainder of the script (or until another such command changes the file descriptors again). There is some overlap in functionality between exec M>&N and somecommand M>&N. The exec form is more powerful in that it does not have to be nested:

    exec 8<&0 9>&1
    exec >output12
    command1
    exec <input23
    command2
    exec >&9
    command3
    exec <&8

Other examples that may be of interest:

What does "3>&1 1>&2 2>&3" do in a script? (it swaps stdout with stderr)
File descriptors & shell scripting
How big is the pipe buffer?
Bash script testing if a command has run correctly

And for even more examples:

questions tagged io-redirection
questions tagged file-descriptors
search for examples on this site in the Data Explorer (a public read-only copy of the Stack Exchange database)

P.S. This is a surprising question coming from the author of the most upvoted post on the site that uses redirection through fd 3!
{ "source": [ "https://unix.stackexchange.com/questions/18899", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/872/" ] }
18,918
When I issue top in Linux, I get a result similar to this: One of the lines has CPU usage information represented like this: Cpu(s): 87.3%us, 1.2%sy, 0.0%ni, 27.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st While I know the definitions of each of them (far below), I don't understand what these tasks exactly mean. hi - what does servicing hardware interrupts mean? si - what does servicing software interrupts mean? st - they say it's the "CPU time in involuntary wait by the virtual CPU while hypervisor is servicing another processor (or) % CPU time stolen from a virtual machine". But what does it actually mean? Can someone be more clear? I listed all of us , sy , ni , etc, because it could help others searching for the same. This information is not in the man pages. us: user cpu time (or) % CPU time spent in user space sy: system cpu time (or) % CPU time spent in kernel space ni: user nice cpu time (or) % CPU time spent on low priority processes id: idle cpu time (or) % CPU time spent idle wa: io wait cpu time (or) % CPU time spent in wait (on disk) hi: hardware irq (or) % CPU time spent servicing/handling hardware interrupts si: software irq (or) % CPU time spent servicing/handling software interrupts st: steal time - - % CPU time in involuntary wait by virtual cpu while hypervisor is servicing another processor (or) % CPU time stolen from a virtual machine
hi is the time spent processing hardware interrupts. Hardware interrupts are generated by hardware devices (network cards, keyboard controller, external timer, hardware sensors, ...) when they need to signal something to the CPU (data has arrived, for example). Since these can happen very frequently, and since they essentially block the current CPU while they are running, kernel hardware interrupt handlers are written to be as fast and simple as possible. If long or complex processing needs to be done, these tasks are deferred using a mechanism call softirqs . These are scheduled independently, can run on any CPU, can even run concurrently (none of that is true of hardware interrupt handlers). The part about hard IRQs blocking the current CPU, and the part about softirqs being able to run anywhere are not exactly correct, there can be limitations, and some hard IRQs can interrupt others. As an example, a "data received" hardware interrupt from a network card could simply store the information "card ethX needs to be serviced" somewhere and schedule a softirq . The softirq would be the thing that triggers the actual packet routing. si represents the time spent in these softirqs . A good read about the softirq mechanism (with a bit of history too) is Matthew Wilcox's I'll Do It Later: Softirqs, Tasklets, Bottom Halves, Task Queues, Work Queues and Timers (PDF, 64k). st , "steal time", is only relevant in virtualized environments. It represents time when the real CPU was not available to the current virtual machine — it was "stolen" from that VM by the hypervisor (either to run another VM, or for its own needs). The CPU time accounting document from IBM has more information about steal time, and CPU accounting in virtualized environments. (It's aimed at zSeries type hardware, but the general idea is the same for most platforms.)
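If you want to watch where that interrupt and steal time is going, the usual counters are exposed under /proc, and sysstat's mpstat breaks the same percentages out per CPU (assuming the sysstat package is installed):

    watch -n1 'cat /proc/interrupts'   # hardware interrupt counts per IRQ and per CPU
    watch -n1 'cat /proc/softirqs'     # softirq counts (NET_RX, TIMER, ...)
    mpstat -P ALL 1                    # %irq, %soft and %steal per CPU, once per second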
{ "source": [ "https://unix.stackexchange.com/questions/18918", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9610/" ] }
18,925
I read some resources about the mount command for mounting devices on Linux, but none of them is clear enough (at least for me). On the whole this what most guides state: $ mount (lists all currently mounted devices) $ mount -t type device directory (mounts that device) for example (to mount a USB drive): $ mount -t vfat /dev/sdb1 /media/disk What's not clear to me: How do I know what to use for "device" as in $ mount -t type device directory ? That is, how do I know that I should use "/dev/sdb1" in this command $ mount -t vfat /dev/sdb1 /media/disk to mount my USB drive? what does the "-t" parameter define here? type? I read the man page ( $ man mount ) a couple of times, but I am still probably missing something. Please clarify.
You can use fdisk to have an idea of what kind of partitions you have, for example:

    fdisk -l

shows:

    Device Boot       Start         End      Blocks   Id  System
    /dev/sda1   *        63   204796619   102398278+   7  HPFS/NTFS
    /dev/sda2     204797952   205821951      512000   83  Linux
    /dev/sda3     205821952   976773119   385475584   8e  Linux LVM

That way you know that you have sda1, 2 and 3 partitions. The -t option is the filesystem type; it can be NTFS, FAT, EXT. In my example, sda1 is ntfs, so it should be something like:

    mount -t ntfs /dev/sda1 /mnt/

USB devices are usually vfat and Linux are usually ext.
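To answer the "how do I know which device" part without guessing, two commands list block devices together with their filesystem types (lsblk may be missing on very old systems), and once the type is known mount can often detect it by itself:

    lsblk -f                # disks/partitions with FSTYPE, LABEL, UUID and mountpoint
    blkid /dev/sdb1         # prints e.g. TYPE="vfat" for that one partition

    mount /dev/sdb1 /media/disk           # no -t: let mount autodetect the type
    mount -t auto /dev/sdb1 /media/disk   # the explicit spelling of the same thing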
{ "source": [ "https://unix.stackexchange.com/questions/18925", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9610/" ] }
19,008
There is a list of IP addresses in a .txt file, ex.: 1.1.1.1 2.2.2.2 3.3.3.3 Behind every IP address there is a server, and on every server there is an sshd running on port 22. Not every server is in the known_hosts list (on my PC, Ubuntu 10.04 LTS/bash). How can I run commands on these servers, and collect the output? Ideally, I'd like to run the commands in parallel on all the servers. I'll be using public key authentication on all the servers. Here are some potential pitfalls: The ssh prompts me to put the given servers ssh key to my known_hosts file. The given commands might return a nonzero exit code, indicating that the output is potentially invalid. I need to recognize that. A connection might fail to be established to a given server, for example because of a network error. There should be a timeout, in case the command runs for longer than expected or the server goes down while running the command. The servers are AIX/ksh (but I think that doesn't really matter.
There are several tools out there that allow you to log in to and execute series of commands on multiple machines at the same time. Here are a couple: pssh pdsh clusterssh clusterit mussh ansible ad-hoc
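If installing a tool is not an option, a plain shell loop can cover the pitfalls listed in the question (host-key prompts, connection timeouts, exit codes, parallelism). This is only a sketch; ips.txt, the remote command uname -a and the out/ log directory are placeholders, and GNU coreutils' timeout could additionally cap the runtime of each ssh:

    #!/bin/sh
    mkdir -p out
    while read -r ip; do
      {
        ssh -n -o BatchMode=yes -o ConnectTimeout=5 \
               -o StrictHostKeyChecking=no \
               "$ip" 'uname -a' >"out/$ip.log" 2>&1 \
          && echo "$ip OK" || echo "$ip FAILED (rc=$?)"
      } &
    done < ips.txt
    wait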
{ "source": [ "https://unix.stackexchange.com/questions/19008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
19,014
sed on AIX is not doing what I think it should. I'm trying to replace multiple spaces with a single space in the output of IOSTAT: # iostat System configuration: lcpu=4 drives=8 paths=2 vdisks=0 tty: tin tout avg-cpu: % user % sys % idle % iowait 0.2 31.8 9.7 4.9 82.9 2.5 Disks: % tm_act Kbps tps Kb_read Kb_wrtn hdisk9 0.2 54.2 1.1 1073456960 436765896 hdisk7 0.2 54.1 1.1 1070600212 435678280 hdisk8 0.0 0.0 0.0 0 0 hdisk6 0.0 0.0 0.0 0 0 hdisk1 0.1 6.3 0.5 63344916 112429672 hdisk0 0.1 5.0 0.2 40967838 98574444 cd0 0.0 0.0 0.0 0 0 hdiskpower1 0.2 108.3 2.3 2144057172 872444176 # iostat | grep hdisk1 hdisk1 0.1 6.3 0.5 63345700 112431123 #iostat|grep "hdisk1"|sed -e"s/[ ]*/ /g" h d i s k 1 0 . 1 6 . 3 0 . 5 6 3 3 4 5 8 8 0 1 1 2 4 3 2 3 5 4 sed should search & replace (s) multiple spaces (/[ ]*/) with a single space (/ /) for the entire group (/g)... but it's not only doing that... its spacing each character. What am I doing wrong? I know its got to be something simple... AIX 5300-06 edit: I have another computer that has 10+ hard drives. I'm using this as a parameter to another program for monitoring purposes. The problem I ran into was that "awk '{print $5}' didn't work because I'm using $1, etc in the secondary stage and gave errors with the Print command. I was looking for a grep/sed/cut version. What seems to work is: iostat | grep "hdisk1 " | sed -e's/ */ /g' | cut -d" " -f 5 The []s were "0 or more" when I thought they meant "just one". Removing the brackets got it working. Three very good answers really quickly make it hard to choose the "answer".
/[ ]*/ matches zero or more spaces, so the empty string between characters matches. If you're trying to match "one or more spaces", use one of these: ... | sed 's/ */ /g' ... | sed 's/ \{1,\}/ /g' ... | tr -s ' '
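For what the question's edit describes (pulling field 5 for one disk), awk can do the match and the field splitting in one step, which also sidesteps the $1 quoting clash if you keep the awk program in single quotes; hdisk1 is the example disk from the question:

    iostat | awk '/^hdisk1 /{print $5}'
    # or pass the disk name in from the shell:
    disk=hdisk1
    iostat | awk -v d="$disk" '$1 == d {print $5}'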
{ "source": [ "https://unix.stackexchange.com/questions/19014", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1936/" ] }
19,058
How do you rename all files/subdirs in the current folder? Lets say, I have many files and subdirs that are with spaces and I want to replace all the spaces with an underscore. File 1 File 2 File 3 Dir 1 Dir 3 should be renamed to File_1 File_2 File_3 Dir_1 Dir_3
If you need to rename files in subdirectories as well, and your find supports the -execdir predicate, then you can do find /search/path -depth -name '* *' \ -execdir bash -c 'mv -- "$1" "${1// /_}"' bash {} \; Thank to @glenn jackman for suggesting -depth option for find and to make me think. Note that on some systems (including GNU/Linux ones), find may fail to find files whose name contains spaces and also sequences of bytes that don't form valid characters (typical with media files with names with non-ASCII characters encoded in a charset different from the locale's). Setting the locale to C (as in LC_ALL=C find... ) would address the problem.
{ "source": [ "https://unix.stackexchange.com/questions/19058", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9988/" ] }
19,104
There are Linux programs, for example vlc, that recommend typing ctrl + c twice to kill their execution from a terminal if the program didn't stop after the first one. Why would typing ctrl + c twice work when the first time didn't work?
What it does is entirely application specific. When you press ctrl + c , the terminal emulator sends a SIGINT signal to the foreground application, which triggers the appropriate "signal handler". The default signal handler for SIGINT terminates the application. But any program can install its own signal handler for SIGINT (including a signal handler that does not stop the execution at all). Apparently, vlc installs a signal handler that attempts to do some cleanup / graceful termination upon the first time it is invoked, and falls back to the default behavior of instantly terminating execution when it is invoked for a second time.
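You can reproduce the vlc-style behaviour in a few lines of shell, which makes the mechanism easy to see; this is only an illustration of a two-stage SIGINT handler, not what vlc actually does internally:

    #!/bin/bash
    interrupted=0
    trap 'if [ "$interrupted" -eq 0 ]; then
            interrupted=1
            echo "first ^C: cleaning up, press again to quit"
          else
            echo "second ^C: exiting now"
            exit 130
          fi' INT
    while :; do sleep 1; done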
{ "source": [ "https://unix.stackexchange.com/questions/19104", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8667/" ] }
19,124
Consider

    echo \ # this is a comment
    foo

This gives:

    $ sh foo.sh
     # this is a comment
    foo.sh: line 2: foo: command not found

After some searching on the web, I found a solution by DigitalRoss on sister site Stack Overflow. So one can do

    echo `: this is a comment` \
    foo

or alternatively

    echo $(: this is a comment) \
    foo

However, DigitalRoss didn't explain why these solutions work. I'd appreciate an explanation. He replied with a comment: There used to be a shell goto command which branched to labels specified like : here. The goto is gone but you can still use the : whatever syntax ... : is a sort of parsed comment now. But I'd like more details and context, including a discussion of portability. Of course, if anyone has other solutions, that would be good too. See also the earlier question How to comment multi-line commands in shell scripts?

Take-home message from the discussion below: The `: this is a comment` is just a command substitution. The output of : this is a comment is nothing, and that gets put in the place of `: this is a comment`. A better choice is the following:

    echo `# this is a comment` \
    foo
Comments end at the first newline (see shell token recognition rule 10), without allowing continuation lines, so this code has foo in a separate command line:

    echo # this is a comment \
    foo

As for your first proposal, the backslash isn't followed by a newline, you're just quoting the space: it's equivalent to

    echo ' # this is a comment'
    foo

$(: this is a comment) substitutes the output of the command : this is a comment. If the output of that command is empty, this is effectively a highly confusing way to insert a comment in the middle of a line. There's no magic going on: : is an ordinary command, the colon utility, which does nothing. The colon utility is mostly useful when the shell syntax requires a command but you happen to have nothing to do.

    # Sample code to compress files that don't look compressed
    case "$1" in
      *.gz|*.tgz|*.bz2|*.zip|*.jar|*.od?)
        : # do nothing, the file is already compressed
        ;;
      *)
        bzip2 -9 "$1"
        ;;
    esac

Another use case is an idiom for setting a variable if it's not already set.

    : "${foo:=default value}"

The remark about goto is a historical one. The colon utility dates back from even before the Bourne shell, all the way to the Thompson shell, which had a goto instruction. The colon then meant a label; a colon is a fairly common syntax for goto labels (it's still present in sed).
{ "source": [ "https://unix.stackexchange.com/questions/19124", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4671/" ] }
19,129
Currently I'm running tar czf to combine backup files. The files are in a specific directory. But the number of files is growing. Using tar czf takes too much time (more than 20 minutes and counting). I need to combine the files more quickly and in a scalable fashion. I've found genisoimage, readom and mkisofs. But I don't know which is fastest and what the limitations are for each of them.
You should check whether most of your time is being spent on CPU or on I/O. Either way, there are ways to improve it:

A: don't compress

You didn't mention "compression" in your list of requirements, so try dropping the "z" from your argument list: tar cf. This might speed things up a bit. There are other techniques to speed up the process, like using "-N" to skip files you have already backed up before.

B: back up the whole partition with dd

Alternatively, if you're backing up an entire partition, take a copy of the whole disk image instead. This would save processing and a lot of disk head seek time. tar and any other program working at a higher level have the overhead of having to read and process directory entries and inodes to find where the file content is, and of doing more disk head seeks, reading each file from a different place on the disk. To back up the underlying data much faster, use:

    dd bs=16M if=/dev/sda1 of=/another/filesystem

(This assumes you're not using RAID, which may change things a bit.)
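If you do still want compression but the CPU is the bottleneck, a parallel compressor keeps the single-pass tar layout; pigz (and pbzip2) are multi-threaded drop-in replacements if they happen to be installed. The path below is a placeholder:

    tar cf - /path/to/files | pigz -p 4 > backup.tar.gz   # 4 threads, adjust to your CPU count
    # or let GNU tar invoke it for you:
    tar -I pigz -cf backup.tar.gz /path/to/files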
{ "source": [ "https://unix.stackexchange.com/questions/19129", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9964/" ] }
19,149
In bash, if I use $ find ./ -name "*.sqlite" , it will list all the sqlite files in current directory. I also want to see the modified time of the files, can anybody give me some help?
You can add to find the following expression: -printf '%Tc %p\n' to see something like Sun Aug 14 06:29:38 2011 ./.nx/config/host.nxs or -printf '%TD %TT %p\n' for 08/14/11 06:29:38.2184481010 ./.nx/config/host.nxs or -printf '%T+ %p\n' if you have GNU find , to see 2011-08-14+06:29:38.2184481010 ./.nx/config/host.nxs This last one, if available, is also useful for time sorting purposes. See manual page of find , where it talks about printf .
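Building on the last format, piping through sort gives the matches ordered by modification time (GNU find assumed, as in the answer):

    find ./ -name '*.sqlite' -printf '%T+ %p\n' | sort      # oldest first
    find ./ -name '*.sqlite' -printf '%T+ %p\n' | sort -r   # newest first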
{ "source": [ "https://unix.stackexchange.com/questions/19149", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10028/" ] }
19,150
I am using AIX 7.1. I made some shell scripts that use /bin/bsh. Inside the script, I connect to a server via FTP and have many get commands because I would like to avoid the wildcard problem. I just declare every filename and get them one by one, for example:

    get PDF_02378230_20110609.PDF
    get PDF_02432482_20110609.PDF
    get PDF_02432565_20110609.PDF
    get PDF_02432573_20110609.PDF
    get PDF_02432581_20110609.PDF
    get PDF_02432599_20110609.PDF
    get PDF_02432607_20110609.PDF
    get PDF_02432615_20110609.PDF
    get PDF_02432623_20110609.PDF
    get PDF_02432649_20110609.PDF
    get PDF_02432656_20110609.PDF
    get PDF_02432672_20110609.PDF
    get PDF_02432755_20110609.PDF
    get PDF_02432763_20110609.PDF
    get PDF_02432821_20110609.PDF
    get PDF_02432920_20110609.PDF
    get PDF_02433175_20110609.PDF
    get PDF_02433266_20110609.PDF
    get PDF_02433290_20110609.PDF
    get PDF_02433308_20110609.PDF
    get PDF_02433373_20110609.PDF
    get PDF_02433399_20110609.PDF

There could be up to 100,000 files to be transferred in one shell script. I run these shell scripts and let them FTP the files down. But in some shell scripts there are 1 or 2 files that never get fetched: when I count the downloaded files, there might be 1 or 2 fewer files than the number of "get" commands in the shell script. I cannot find an exception or error inside the log. The truth is, I cannot go through the whole log line by line because it is too long. But I have found out which file was missing, and when I searched the log file for the missing filename, I could not find any trace of it.
{ "source": [ "https://unix.stackexchange.com/questions/19150", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9079/" ] }
19,161
Is there any package which shows PID of a window by clicking on it?
Yes. Try xprop and you are looking for the value of _NET_WM_PID : xprop _NET_WM_PID | cut -d' ' -f3 {click on window}
{ "source": [ "https://unix.stackexchange.com/questions/19161", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8467/" ] }
19,296
I had a really bad lockup of my X server and had to do a Sys Rq + r to release my keyboard from X and get into a console. I was able to kill the process that was locking up my system, and continue my work in my still running X server. Now whenever I e.g. push Alt + F4 to kill a window, my system switches to the 4th console instead of killing the active window. So it seems that my keyboard still is in released mode. How do I undo my previous Sys Rq + r command, such that I can continue my work in my running X server?
I found the solution myself just after asking this question. To switch back the console in which X is running (usually tty7), from ASCII mode to RAW mode execute the following command: sudo kbd_mode -s -C /dev/tty7 And now everything works as expected again. :) More information available in the question: What does raw/unraw keyboard mode mean?
{ "source": [ "https://unix.stackexchange.com/questions/19296", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4819/" ] }
19,317
Can I get less not to monochrome its output? E.g., the output from git diff is colored, but git diff | less is not.
Use: git diff --color=always | less -r --color=always is there to tell git to output color codes even if the output is a pipe (not a tty). And -r is there to tell less to interpret those color codes and other escape sequences. Use -R for ANSI color codes only.
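If you want this behaviour every time rather than per invocation, git can be told once which pager flags to use; these are standard git config keys:

    git config --global core.pager 'less -R'   # make less pass ANSI colour codes through
    git config --global color.ui auto          # colour when writing to a terminal
    # after that, a plain `git diff` pages with colours preserved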
{ "source": [ "https://unix.stackexchange.com/questions/19317", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1594/" ] }
19,333
Let's say I create a user named "bogus" using the adduser command. How can I make sure this user will NOT be a viable login option, without disabling the account. In short, I want the account to be accessible via su - bogus , but I do not want it to be accessible via a regular login prompt. Searching around, it seems I need to disable that user's password, but doing passwd -d bogus didn't help. In fact, it made things worse, because I could now login to bogus without even typing a password. Is there a way to disable regular logins for a given a account? Note: Just to be clear, I know how to remove a user from the menu options of graphical login screens such as gdm, but these methods simply hide the account without actually disabling login. I'm looking for a way to disable regular login completely, text-mode included.
passwd -l user is what you want. That will lock the user account. You'll still be able to su - user, but you'll have to do so as root. Alternatively, you can accomplish the same thing by prepending a ! to the user's password in /etc/shadow (this is all passwd -l does behind the scenes). And passwd -u will undo this.
{ "source": [ "https://unix.stackexchange.com/questions/19333", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5025/" ] }
19,344
I have a directory that is unpacked, but is in a folder. How can I move the contents up one level? I am accessing CentOS via SSH.
With the folder called 'myfolder' and you sitting one level up in the file hierarchy (the place you want the contents to go), the command would be: mv myfolder/* . So for example, if the data was in /home/myuser/myfolder, then from /home/myuser/ run the command.
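One thing * will not match is hidden files, so dotfiles would stay behind; a way to catch them too in bash:

    shopt -s dotglob          # make * match hidden names as well
    mv myfolder/* .
    shopt -u dotglob
    rmdir myfolder            # the folder is now empty; remove it if you like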
{ "source": [ "https://unix.stackexchange.com/questions/19344", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
19,369
Under linux, I launch a software by typing, e.g., fluidplot. How can I find the installation path for this software?
You can use:

    which fluidplot

to see where it is executing from (if it's in your $PATH). Or:

    find / -name fluidplot 2> /dev/null

to look for a file named fluidplot, discarding the permission errors from virtual filesystems. Binaries are usually in /sbin, /usr/sbin, /usr/local/bin, or in a hidden directory under ~. From the manual:

    NAME
        which - shows the full path of (shell) commands.
    SYNOPSIS
        which [options] [--] programname [...]

Full manual: https://linux.die.net/man/1/which
{ "source": [ "https://unix.stackexchange.com/questions/19369", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5997/" ] }
19,430
Code find / -name netcdf Output find: `/root/.dbus': Permission denied find: `/root/.gconf': Permission denied find: `/root/.gconfd': Permission denied find: `/root/.gnome': Permission denied find: `/root/.gnome2': Permission denied find: `/root/.gnome2_private': Permission denied
Those messages are sent to stderr, and pretty much only those messages are generally seen on that output stream. You can close it or redirect it on the command-line. $ find / -name netcdf 2>&- or $ find / -name netcdf 2>/dev/null Also, if you are going to search the root directory (/), then it is often good to nice the process so find doesn't consume all the resources. $ nice find / -name netcdf 2>&- This decreases the priority of the process allowing other processes more time on the CPU. Of course if nothing else is using the CPU, it doesn't do anything. :) To be technical, the NI value (seen from ps -l ) increase the PRI value. Lower PRI values have a higher priority. Compare ps -l with nice ps -l .
{ "source": [ "https://unix.stackexchange.com/questions/19430", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9431/" ] }
19,439
Is there a variant of cat that outputs syntax-highlighted lines of code when used on a source file? An idea: maybe vi[m] or another editor can be asked to dump the syntax-highlighted contents of said files to stdout and exit immediately?
Passing the file through pygmentize -f terminal will attempt to detect the type from the filename and highlight it appropriately.
{ "source": [ "https://unix.stackexchange.com/questions/19439", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9294/" ] }
19,451
I know that these commands help to get the syntax and options for other commands, but my question is: how do they differ from each other?
help is a bash command. It uses internal bash structures to store and retrieve information about bash commands. man is a macro set for the troff (via groff) processor. The output of processing a single file is sent to a pager by the man command by default. info is a text-only viewer for archives in the info format output of Texinfo .
{ "source": [ "https://unix.stackexchange.com/questions/19451", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10084/" ] }
19,467
Is there a simpler way to do this? grep -v foo file | grep -v bar There're probably very elegant ways to do it with egrep, but how to go with plain old grep? EDIT: grep -v 'foo\|bar' file seems to work only with GNU grep. I'm on Solaris. Any solution for that?
You can have multiple tests in a single grep command by passing each expression with a -e argument instead of as the first non-option argument as you normally see: $ grep -v -e foo -e bar file That says the same thing your command does: print lines from file that contain neither "foo" nor "bar". Keep in mind that removing -v completely inverts this; it changes the logical operation, too! You get all lines that contain "foo" or "bar".
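On Solaris, where plain grep doesn't understand GNU's \| alternation, an extended-regular-expression grep gives you a single-pattern alternative; on most releases either of these should work:

    /usr/xpg4/bin/grep -v -E 'foo|bar' file
    egrep -v 'foo|bar' file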
{ "source": [ "https://unix.stackexchange.com/questions/19467", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4098/" ] }
19,470
I want my cron-run reporting script to notify me in case there are updates for my packages. Is the a way to make apt-get give me the list of available updates but don't do anything more?
apt For modern versions of apt there is a specific switch for this: apt list --upgradable apt-get For the old apt-get command the -u switch shows a list of packages that are available for upgrade: # apt-get -u upgrade --assume-no From the apt-get man page : -u --show-upgraded Show upgraded packages; Print out a list of all packages that are to be upgraded. Configuration Item: APT::Get::Show-Upgraded. --assume-no Automatic "no" to all prompts. <== To prevent it from starting to install
{ "source": [ "https://unix.stackexchange.com/questions/19470", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10142/" ] }
19,480
How do I display the line number while doing grep on a file? For example:

    grep CONFIG_PM_ADVANCED_DEBUG /boot/config-`uname -r`
There is the option -n , and many more in the manual page, worth reading.
{ "source": [ "https://unix.stackexchange.com/questions/19480", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10143/" ] }
19,485
How can I monitor incoming HTTP requests to port 80 ? I have set up web hosting on my local machine using DynDNS and Nginx . I wanted to know how many request are made on my server every day. Currently I'm using this command: netstat -an | grep 80
You may use tcpdump . # tcpdump filter for HTTP GET sudo tcpdump -s 0 -A 'tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420' # tcpdump filter for HTTP POST sudo tcpdump -s 0 -A 'tcp dst port 80 and (tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x504f5354)' For a solution using tshark see: https://serverfault.com/questions/84750/monitoring-http-traffic-using-tcpdump
{ "source": [ "https://unix.stackexchange.com/questions/19485", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10143/" ] }
19,491
I am using the following command to grep a character-set range for the hexadecimal codes 0900 (instead of अ) to 097F (instead of व). How can I use the hexadecimal codes in place of अ and व?

    bzcat archive.bz2 | grep -v '<[अ-व]*\s' | tr '[:punct:][:blank:][:digit:]' '\n' | uniq | grep -o '^[अ-व]*$' | sort -f | uniq -c | sort -nr | head -50000 | awk '{print "<w f=\""$1"\">"$2"</w>"}' > hindi.xml

I get the following output:

    <w f="399651">और</w>
    <w f="264423">एक</w>
    <w f="213707">पर</w>
    <w f="74728">कर</w>
    <w f="44281">तक</w>
    <w f="35125">कई</w>
    <w f="26628">द</w>
    <w f="23981">इन</w>
    <w f="22861">जब</w>
    ...

I just want to use hexadecimal codes instead of अ and व in the above command. If using hexadecimal codes is not possible at all, can I use Unicode code points instead of hexadecimal codes for the character set ('अ-व')? I am using Ubuntu 10.04.
Look at grep: Find all lines that contain Japanese kanjis. Text is usually encoded in UTF-8, so you have to use the hex values of the bytes used in the UTF-8 encoding.

    grep "["$'\xe0\xa4\x85'"-"$'\xe0\xa4\xb5'"]"

and grep '[अ-व]' are equivalent: they perform locale-based character class / bracket expression matching (that is, the matching depends on the sorting rules of the Devanagari script; it is NOT "any char between \u0905 and \u0935" but instead "anything sorting between Devanagari A and Devanagari VA", and there may be differences). ($'...' is the "ANSI-C escape string" syntax for bash, ksh, and zsh. It is just an easier way to type the characters. You can also use the \uXXXX and \UXXXXXXXX escapes to directly ask for code points in bash and zsh.) On the other hand, you have this (note -P):

    grep -P "\xe0\xa4[\x85-\xb5]"

which will do a binary match on those byte values.
{ "source": [ "https://unix.stackexchange.com/questions/19491", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10152/" ] }
19,530
The script below works in bash but not in zsh. I think it is because in the variable OPTS , I am "expanding" (not sure if this is the right word) the variable $EXCLUDE , and this syntax doesn't work in zsh. How would I replace that line to make it work on zsh? SRC="/path_to_source" DST="/path_to_dest" EXCLUDE=".hg" OPTS="-avr --delete --progress --exclude=${EXCLUDE} --delete-excluded" rsync $OPTS $SRC $DST
The problem here is that $OPTS is not split into several arguments on the rsync command line. In zsh syntax, use: rsync ${=OPTS} $SRC $DST (an alternative is to simulate standard shell behavior globably with the option -o shwordsplit …) From the manpage: One commonly encountered difference [in zsh ] is that variables substituted onto the command line are not split into words. See the description of the shell option SH_WORD_SPLIT in the section 'Parameter Expansion' in zshexpn(1) . In zsh , you can either explicitly request the splitting (e.g. ${=foo} ) or use an array when you want a variable to expand to more than one word. See the section 'Array Parameters' in zshparam(1) .
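The array alternative that the quoted manual mentions avoids the splitting question entirely, since each option stays its own word; a sketch using the variables from the question:

    SRC="/path_to_source"
    DST="/path_to_dest"
    EXCLUDE=".hg"
    opts=(-avr --delete --progress --exclude=$EXCLUDE --delete-excluded)
    rsync $opts $SRC $DST    # zsh passes each array element as a separate argument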
{ "source": [ "https://unix.stackexchange.com/questions/19530", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
19,547
If you are root, and you issue rm -rf / Then how far can the command go? Can you recover data from this kind of an action? Even after the binaries are gone, would the running processes still be active? What would it take to make the same physical machine boot again? What files would you need to restore to make this happen? I could try this on a VM and see, but I want to know the rationale behind what to expect if I do this.
This command does nothing, at least on the OS I use (Solaris) with which this security feature was first implemented: # rm -rf / rm of / is not allowed On other *nix, especially the Linux family, if a recent enough Gnu rm is provided, you would need to add the --no-preserve-root option to enable the command to complete (or at least start). How far would this command go is undefined. It depends on plenty of more or less unpredictable events. Generally, processes can run even after their binaries have been removed.
{ "source": [ "https://unix.stackexchange.com/questions/19547", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9763/" ] }
19,608
I'm using curl in this syntax: curl -o myfile.jpg http://example.com/myfile.jpg If I run this command twice, I get two files: myfile.jpg myfile-1.jpg How can I tell Curl that I want it to overwrite the file if it exists?
Instead of using -o option to write to a file, use your shell to direct the output to the file: curl http://example.com/myfile.jpg > myfile.jpg
{ "source": [ "https://unix.stackexchange.com/questions/19608", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
19,620
I'm coming from this question: https://superuser.com/questions/359799/how-to-make-freebsd-box-accessible-from-internet I want to understand this whole process of port forwarding . I read so many things, but am failing to understand the very basic concept of port forwarding itself. What I have: a freebsd server sitting at my home. netgear router This is what I am trying to achieve: to be able to access freebsd server from a windows machine over internet to be able to open a webbrowser and access internet. I also want to access this freebsd box from a ubuntu machine that I have. It will be great if someone can please help me. Here is the netgear router setup that I did for port forwarding.
I'll start with the raw facts: You have: A - your FreeBSD box, B - your router and C - some machine with Internet access. This is what it looks like:

    .-----.    .-----.                      .-----.
    |  A  | == |  B  | - - ( Internet ) - - |  C  |
    '-----'    '-----'                      '-----'
    \_________ ________/
              v
              `- this is your LAN

Notice how your router normally works: it allows connections from machines on your LAN to the Internet (simply speaking). So if A (or any other machine on the LAN) wants to access the Internet, it will be allowed (again, just talking about basic understanding and configuration):

    .-----.    .-----.                      .-----.
    |  A  | == |  B  | - - ( Internet ) - - |  C  |
    '-----'    '-----'                      '-----'
       `-->----'  `--->--->---^

And the following is not allowed by default:

    .-----.    .-----.                      .-----.
    |  A  | == |  B  | - - ( Internet ) - - |  C  |
    '-----'    '-----'                      '-----'
       `--<----'  `---<--- - - - - --<---<-----'

(That is, the router protects the machines on your LAN from being accessed from the Internet.) Notice that the router is the only part of your LAN that is seen from the Internet 1). Port forwarding is what allows the third schema to take place. It consists in telling the router which connections from C 2) should go to which machine on the LAN. This is done based on port numbers - that is why it is called port forwarding. You configure it by instructing the router that all the connections coming in on a given port from the Internet should go to a certain machine on the LAN. Here's an example for port 22 forwarded to machine A:

    .------.    .-------.                      .------.
    |  A   | == |   B   | - - ( Internet ) - - |  C   |
    |      |    |       |                      |      |
    '------'    '-|22|--'                      '-|22|-'
       `--<-22------'  `---<---- - - - - - --<-22---'

Such connections through the Internet occur based on IP addresses. So a slightly more precise representation of the above example would be:

    .------.    .-------.                              .------.
    |  A   | == |   B   | - - - - - ( Internet ) - - - |  C   |
    |      |    |       |                              |      |
    '------'    '-|22|--'                              '-|22|-'
       `--<-A:22----'  `--<-YourIP:22 - - - - --<-YourIP:22--'

If you do not have an Internet connection with a static IP, then you'd have to somehow learn what IP is currently assigned to your router by the ISP. Otherwise, C will not know what IP it has to connect to in order to get to your router (and further, to A). To solve this in an easy way, you can use a service called dynamic DNS. This would make your router periodically send information to a special DNS server that will keep track of your IP and provide you with a domain name. There are quite a few free dynamic DNS providers. Many routers come with configuration options to easily contact those.

1) This is, again, a simplification - the actual device that is seen from the Internet is the modem, which can often be integrated with the router but might also be a separate box.

2) or any other machine with an Internet connection.

Now for what you want: Simply allowing ssh access to your machine from the Internet is a bad idea. There are thousands of bots set up by crackers that search the Internet for machines with an open SSH port. They typically "knock" on the default SSH port of as many IPs as they can, and once they find an SSH daemon running somewhere, they try to gain bruteforce access to the machine. This is not only a risk of potential break-in, but also of network slow-downs while the machine is being bruteforced.
If you really need such access, you should at least:

ensure that you have strong passwords for all the user accounts,
disallow root access over SSH (you can always log in as a normal user and then su or sudo),
change the default port on which your SSH server runs,
introduce a mechanism that blocks repeated SSH login attempts (with increasing time-to-wait for subsequent attempts; I don't remember exactly what it is called, but I had it enabled some time ago on FreeBSD and recall it was quite easy; try searching some FreeBSD forums about securing SSH and you'll find it),
if possible, run the ssh daemon only when you know you will be accessing the machine in the near future, and turn it off afterwards,
get used to going through your system logs; if you begin noticing anything suspicious, introduce additional security mechanisms like iptables rules or port knocking.
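Most of the points in that list map onto a handful of standard sshd_config directives; a sketch in which every value is just an example (a tool such as fail2ban or sshguard would cover the rate-limiting item):

    # /etc/ssh/sshd_config
    Port 2222                    # move off the default port
    PermitRootLogin no           # log in as a normal user, then su/sudo
    PasswordAuthentication no    # keys only, if you can manage it
    AllowUsers youruser          # whitelist the one account you actually use
    MaxAuthTries 3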
{ "source": [ "https://unix.stackexchange.com/questions/19620", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1358/" ] }
19,654
I would like to change a file extension from *.txt to *.text. I tried using the basename command, but I'm having trouble changing more than one file. Here's my code:

    files=`ls -1 *.txt`
    for x in $files
    do
        mv $x "`basename $files .txt`.text"
    done

I'm getting this error:

    basename: too many arguments
    Try `basename --help' for more information
Straight from Greg's Wiki:

    # Rename all *.txt to *.text
    for file in *.txt; do
        mv -- "$file" "${file%.txt}.text"
    done

*.txt is a globbing pattern, using * as a wildcard to match any string. *.txt matches all filenames ending with '.txt'. -- marks the end of the option list. This avoids issues with filenames starting with hyphens. ${file%.txt} is a parameter expansion, replaced by the value of the file variable with .txt removed from the end. Also see the entry on why you shouldn't parse ls. If you have to use basename, your syntax would be:

    for file in *.txt; do
        mv -- "$file" "$(basename -- "$file" .txt).text"
    done
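Where a rename utility is installed, the same batch rename is a one-liner; check which variant you have first, since the Perl script and the util-linux binary take different arguments (and the util-linux one replaces only the first occurrence of the substring):

    rename 's/\.txt$/.text/' *.txt     # Perl rename (Debian/Ubuntu style)
    rename .txt .text *.txt            # util-linux rename (Fedora/CentOS style)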
{ "source": [ "https://unix.stackexchange.com/questions/19654", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7879/" ] }
19,701
I am setting up a yum repository, and need to debug some of the URLs in the yum.conf file. I need to know why is Scientific Linux trying to grab this URL, when I was expecting it to grab another URL: # yum install package http://192.168.1.100/pub/scientific/6.1/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404" Trying other mirror. Error: Cannot retrieve repository metadata (repomd.xml) for repository: sl. Please verify its path and try again The yum.conf(5) manpage gives some information about these variables: Variables There are a number of variables you can use to ease maintenance of yum's configuration files. They are available in the values of several options including name, baseurl and commands. $releasever This will be replaced with the value of the version of the package listed in distroverpkg. This defaults to the version of 'redhat-release' package. $arch This will be replaced with your architecture as listed by os.uname()[4] in Python. $basearch This will be replaced with your base architecture in yum. For example, if your $arch is i686 your $basearch will be i386. $YUM0-$YUM9 These will be replaced with the value of the shell environment variable of the same name. If the shell environment variable does not exist then the configuration file variable will not be replaced. Is there a way to view these variables by using the yum commandline utility? I would prefer to not hunt down the version of the 'redhat-release' package, or manually get the value of os.uname()[4] in Python.
When this answer was written in 2011, json wasn't installed for python by default for all the versions of RHEL/CentOS at that time so I used pprint to print the stuff nicely. It is now 2020 and all current versions of RHEL/CentOS have json by default for python. The answer has been updated to use json and updated to include RHEL/CentOS 8 by modifying @sysadmiral's answer for Fedora. RHEL/CentOS 8: /usr/libexec/platform-python -c 'import dnf, json; db = dnf.dnf.Base(); print(json.dumps(db.conf.substitutions, indent=2))' RHEL/CentOS 6 and 7 python -c 'import yum, json; yb = yum.YumBase(); print json.dumps(yb.conf.yumvar, indent=2)' RHEL/CentOS 4 and 5 # if you install python-simplejson python -c 'import yum, simplejson as json; yb = yum.YumBase(); print json.dumps(yb.conf.yumvar, indent=2)' # otherwise python -c 'import yum, pprint; yb = yum.YumBase(); pprint.pprint(yb.conf.yumvar, width=1)' Example output: # CentOS 8: # --- [root@0928d3917e32 /]# /usr/libexec/platform-python -c 'import dnf, json; db = dnf.dnf.Base(); print(json.dumps(db.conf.substitutions, indent=2))' Failed to set locale, defaulting to C { "arch": "x86_64", "basearch": "x86_64", "releasever": "8" } [root@0928d3917e32 /]# # CentOS 7: # --- [root@c41adb7f40c2 /]# python -c 'import yum, json; yb = yum.YumBase(); print json.dumps(yb.conf.yumvar, indent=2)' Loaded plugins: fastestmirror, ovl { "uuid": "cb5f5f60-d45c-4270-8c36-a4e64d2dece4", "contentdir": "centos", "basearch": "x86_64", "infra": "container", "releasever": "7", "arch": "ia32e" } [root@c41adb7f40c2 /]# # CentOS 6: # --- [root@bfd11c9a0880 /]# python -c 'import yum, json; yb = yum.YumBase(); print json.dumps(yb.conf.yumvar, indent=2)' Loaded plugins: fastestmirror, ovl { "releasever": "6", "basearch": "x86_64", "arch": "ia32e", "uuid": "3e0273f1-f5b6-481b-987c-b5f21dde4310", "infra": "container" } [root@bfd11c9a0880 /]# Original answer below: If you install yum-utils , that will give you yum-debug-dump which will write those variables and more debugging info to a file. There is no option to write to stdout, it will always write to some file which really isn't that helpful. This is obviously not a great solution so here's a python one-liner you can copy and paste which will print those variables to stdout. python -c 'import yum, pprint; yb = yum.YumBase(); pprint.pprint(yb.conf.yumvar, width=1)' This works on CentOS 5 and 6, but not 4. yum is written in python, so the yum python module is already on your server, no need to install anything exra. Here's what it looks like on CentOS 5: [root@somebox]# python -c 'import yum, pprint; yb = yum.YumBase(); pprint.pprint(yb.conf.yumvar, width=1)' {'arch': 'ia32e', 'basearch': 'x86_64', 'releasever': '5', 'yum0': '200', 'yum5': 'foo'} [root@somebox]#
{ "source": [ "https://unix.stackexchange.com/questions/19701", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4/" ] }
19,772
I found an obfuscated e-mail address that can be decrypted using this command: echo "[email protected]" | tr '[a-z]' '[n-za-m]' How does that output a valid e-mail address? What is that command doing?
tr 's man page explains it pretty well; it's a filter that converts characters from one set to another. The first set specified is [a-z] , which is a shorthand way of typing [abcdefghijklmnopqrstuvwxyz] . The second is [n-za-m] , which turns into [nopqrstuvwxyzabcdefghijklm] . tr reads each character from stdin, and if it appears in the first set, it replaces it with the character in the same position in the second set (this means [ and ] are getting replaced with themselves, so including them was pointless, but a lot of people do it by mistake because regular expressions use them to represent character classes so they think tr requires them). So a simpler example: $ echo abc | tr ab xy xyc a turned into x , b turned into y , and c was left unchanged because it wasn't in the first set. All the user did in this case is run their e-mail address through the same filter (since it's symmetric -- a maps to n and n back to a , etc.) to get the rotated version; you running it through again swaps all the characters back to their originals Sidenote: This particular swap, where each letter is replaced by the one 13 characters away from it in the alphabet, is called ROT13 (rotate 13); it was popular on newsgroups as a way to hide things people might not want to see
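If you want to do this conversion regularly, a small shell helper is handy; this is just a sketch — applying it twice gets you back the original text:
rot13() { tr 'A-Za-z' 'N-ZA-Mn-za-m'; }
echo 'hello' | rot13            # uryyb
echo 'hello' | rot13 | rot13    # hello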
{ "source": [ "https://unix.stackexchange.com/questions/19772", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
19,791
How do I set up the firewall on a system in a LAN so that some ports are only open to connections from the local area network, and not from the outside world? For example, I have a box running Scientific Linux 6.1 (a RHEL based distro), and I want its SSH server to only accept connections from localhost or LAN. How do I do this?
With the kernel's iptables completely empty ( iptables -F ), this will do what you ask: # iptables -A INPUT -p tcp --dport 22 -s 192.168.0.0/24 -j ACCEPT # iptables -A INPUT -p tcp --dport 22 -s 127.0.0.0/8 -j ACCEPT # iptables -A INPUT -p tcp --dport 22 -j DROP This says that all LAN addresses are allowed to talk to TCP port 22, that localhost gets the same consideration (yes, 127.* not just 127.0.0.1), and packets from every other address not matching those first two rules get unceremoniously dropped into the bit bucket . You can use REJECT instead of DROP if you want an active rejection (TCP RST) instead of making TCP port 22 a black hole for packets. If your LAN doesn't use the 192.168.0.* block, you will naturally need to change the IP and mask on the first line to match your LAN's IP scheme. These commands may not do what you want if your firewall already has some rules configured. (Say iptables -L as root to find out.) What frequently happens is that one of the existing rules grabs the packets you're trying to filter, so that appending new rules has no effect. While you can use -I instead of -A with the iptables command to splice new rules into the middle of a chain instead of appending them, it's usually better to find out how the chains get populated on system boot and modify that process so your new rules always get installed in the correct order. RHEL 7+ On recent RHEL type systems, the best way to do that is to use firewall-cmd or its GUI equivalent. This tells the OS's firewalld daemon what you want, which is what actually populates and manipulates what you see via iptables -L . RHEL 6 and Earlier On older RHEL type systems, the easiest way to modify firewall chains when ordering matters is to edit /etc/sysconfig/iptables . The OS's GUI and TUI firewall tools are rather simplistic, so once you start adding more complex rules like this, it's better to go back to good old config files. Beware, once you start doing this, you risk losing your changes if you ever use the OS's firewall tools to modify the configuration, since it may not know how to deal with handcrafted rules like these. Add something like this to that file: -A RH-Firewall-1-INPUT -p tcp --dport 22 -s 192.168.0.0/24 -j ACCEPT -A RH-Firewall-1-INPUT -p tcp --dport 22 -s 127.0.0.0/8 -j ACCEPT -A RH-Firewall-1-INPUT -p tcp --dport 22 -j DROP Where you add it is the tricky bit. If you find a line in that file talking about --dport 22 , simply replace it with the three lines above. Otherwise, it should probably go before the first existing line ending in -j ACCEPT . Generally, you'll need to acquire some familiarity with the way iptables works, at which point the correct insertion point will be obvious. Save that file, then say service iptables restart to reload the firewall rules. Be sure to do this while logged into the console, in case you fat-finger the edits! You don't want to lock yourself out of your machine while logged in over SSH. The similarity to the commands above is no coincidence. Most of this file consists of arguments to the iptables command. The differences relative to the above are that the iptables command is dropped and the INPUT chain name becomes the special RHEL-specific RH-Firewall-1-INPUT chain. (If you care to examine the file in more detail, you'll see earlier in the file where they've essentially renamed the INPUT chain. Why? Couldn't say.)
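On RHEL/CentOS 7 and later the same policy can be expressed through firewalld instead of raw iptables rules. This is only a sketch assuming the interface sits in the default public zone; adjust the zone name and the subnet to match your setup:
# firewall-cmd --permanent --zone=public --remove-service=ssh
# firewall-cmd --permanent --zone=public \
    --add-rich-rule='rule family="ipv4" source address="192.168.0.0/24" service name="ssh" accept'
# firewall-cmd --reload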
{ "source": [ "https://unix.stackexchange.com/questions/19791", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1375/" ] }
19,804
I'd like to essentially tar/gz a directory on a remote machine and save the file to my local computer without having to connect back into my local machine from the remote one. Is there a way to do this over SSH? The tar file doesn't need to be stored on the remote machine, only on the local machine. Is this possible?
You can do it with an ssh command, just tell tar to create the archive on its standard output: ssh remote.example.com 'cd /path/to/directory && tar -cf - foo | gzip -9' >foo.tgz Another approach, which is more convenient if you want to do a lot of file manipulations on the other machine but is overkill for a one-shot archive creation, is to mount the remote machine's filesystem with SSHFS (a FUSE filesystem). You should enable compression at the SSH level. mkdir ~/net/remote.example.com sshfs -C remote.example.com:/ ~/net/remote.example.com tar -czf foo.tgz -C ~/net/remote.example.com/path/to/directory foo
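The same trick works in the other direction if you ever need to push a local directory to the remote machine without creating a local archive first — a sketch, with /tmp/foo.tgz as an arbitrary destination path:
tar -C /path/to/directory -czf - foo | ssh remote.example.com 'cat > /tmp/foo.tgz'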
{ "source": [ "https://unix.stackexchange.com/questions/19804", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
19,840
I was wondering whether (and, of course, how) it’s possible to tell tar to extract multiple files in a single run. I’m an experienced Unix user for several years and of course I know that you can use for or find or things like that to call tar once for each archive you want to extract, but I couldn’t come up with a working command line that caused my tar to extract two .tar.gz files at once. (And no, there’s nothing wrong with for , I’m merely asking whether it’s possible to do without.) I’m asking this question rather out of curiosity, maybe there’s a strange fork of tar somewhere that supports this someone knows how to use the -M parameter that tar suggested to me when I tried tar -zxv -f a.tgz -f b.tgz we’re all blind and it’s totally easy to do — but I couldn’t find any hint in the web that didn’t utilize for or find or xargs or the like. Please don’t reply with tar -zxvf *.tar.gz (because that does not work) and only reply with “doesn’t work” if you’re absolutely sure about it (and maybe have a good explanation why , too). Edit: I was pointed to an answer to this question on Stack Overflow which says in great detail that it’s not possible without breaking current tar syntax, but I don’t think that’s true. Using tar -zxv -f a.tgz -f b.tgz or tar -zxv --all-args-are-archives *.tar.gz would break no existing syntax, imho.
This is possible, the syntax is pretty easy: $ cat *.tar | tar -xvf - -i The -i option ignores the EOF at the end of the tar archives, from the man page: -i, --ignore-zeros ignore blocks of zeros in archive (normally mean EOF)
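For the .tar.gz files in the question, the same idea should work with GNU tar, since gzip happily decompresses concatenated compressed streams — a sketch:
$ cat *.tar.gz | tar -xzvf - -i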
{ "source": [ "https://unix.stackexchange.com/questions/19840", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10382/" ] }
19,867
For example, if I want to have a file with the name gitconfig (no leading . ) be recognised by vim as being of filetype=gitconfig , is there a means of indicating this in a comment or something similar within the file itself? Note, I want this to work across systems, so modifying vim's startup files is not preferred.
This sounds like the modeline feature (see the on-line help). In gitconfig you could have a modeline like the following, near the beginning or end of the file: # vi: ft=gitconfig This requires modelines to be enabled and since they can be a security hazard they are disabled by default on many systems though. Another approach which might be slightly more work is to make a .vim file containing au BufRead,BufNewFile */gitconfig setfiletype gitconfig and drop it in ~/.vim/ftdetect on all your systems.
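Creating that ftdetect file from the shell might look like this (the gitconfig.vim filename is arbitrary; any .vim file inside ftdetect/ is sourced):
mkdir -p ~/.vim/ftdetect
printf 'au BufRead,BufNewFile */gitconfig setfiletype gitconfig\n' > ~/.vim/ftdetect/gitconfig.vim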
{ "source": [ "https://unix.stackexchange.com/questions/19867", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2459/" ] }
19,874
I'm using emacs for my daily javascript editing, to switch between buffers I use C-x LEFT and C-x RIGHT and I'm fine with this (even if I find it difficult to know the path of the file I'm modifying). My problems: at the startup I always have *scratch* and *Messages* opened, I thought that putting (kill-buffer "*scratch*") in my .emacs would solve the issue, but it's not, do you have a suggestion? when I open files I always do TAB autocomplete, so each time I'm creating a new *Messages* buffer containing the options for the completion, how do I prevent this from creating, or better, how do I make emacs kill it, after I've made my choice? Do say your opinion if you think I'm doing something that's not "the way it should be" with me navigating as I said at the top.
This drove me mad.. until I fixed it. Now there's no scratch , messages , or completions buffers to screw with your flow. Enjoy! Place this in your .emacs: ;; Makes *scratch* empty. (setq initial-scratch-message "") ;; Removes *scratch* from buffer after the mode has been set. (defun remove-scratch-buffer () (if (get-buffer "*scratch*") (kill-buffer "*scratch*"))) (add-hook 'after-change-major-mode-hook 'remove-scratch-buffer) ;; Removes *messages* from the buffer. (setq-default message-log-max nil) (kill-buffer "*Messages*") ;; Removes *Completions* from buffer after you've opened a file. (add-hook 'minibuffer-exit-hook '(lambda () (let ((buffer "*Completions*")) (and (get-buffer buffer) (kill-buffer buffer))))) ;; Don't show *Buffer list* when opening multiple files at the same time. (setq inhibit-startup-buffer-menu t) ;; Show only one active window when opening multiple files at the same time. (add-hook 'window-setup-hook 'delete-other-windows) Bonus: ;; No more typing the whole yes or no. Just y or n will do. (fset 'yes-or-no-p 'y-or-n-p)
{ "source": [ "https://unix.stackexchange.com/questions/19874", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4855/" ] }
19,875
As per the accepted answer to this question , I'm attempting to use modelines in vim to force filetype detection in some files. For example, at the top of a file named gitconfig (note there is no leading . ), I have the following line: # vim: set filetype=gitconfig : modeline is enabled on my system. However, when I open the file in vim, set filetype? returns conf , rather than the expected gitconfig . Is it possible that other parts of my vim configuration (e.g. filetype.vim) are causing this strange behaviour? Edited in response to comments: set compatible? returns nocompatible set modeline? returns modeline verbose set filetype? returns: filetype=conf Last set from /usr/share/vim/vim73/filetype.vim I don't understand why the system wide filetype plugin would be overriding what I have set in the file itself. One final note: this is the version of Vim 7.3 shipped with OSX. The latest version of MacVim running on the same system using the same .vimrc behaves as expected, with set ft? returning filetype=gitconfig .
So, after some digging, it transpires that the system vimrc shipped with OSX sets the modelines (note the trailing 's') variable to 0. This variable controls the number of lines in a file which are checked for set commands. Setting modelines to a non-zero value in my .vimrc solved the problem. Full output, for the curious: the output of vim --version prompted me to check the system vimrc: % vim --version VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Jun 24 2011 20:00:09) Compiled by [email protected] Normal version without GUI. Features included (+) or not (-): ... system vimrc file: "$VIM/vimrc" user vimrc file: "$HOME/.vimrc" user exrc file: "$HOME/.exrc" fall-back for $VIM: "/usr/share/vim" Compilation: gcc -c -I. -D_FORTIFY_SOURCE=0 -Iproto -DHAVE_CONFIG_H -arch i386 -arch x86_64 -g -Os -pipe Linking: gcc -arch i386 -arch x86_64 -o vim -lncurses Looking at the system vimrc: % cat /usr/share/vim/vimrc " Configuration file for vim set modelines=0 " CVE-2007-2438 ... Led me to the modelines variable. It appears that MacVim does not source this system file (perhaps looking for a system GVIMRC instead? :help startup isn't clear). VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Jul 27 2011 19:46:24) MacOS X (unix) version Included patches: 1-260 Compiled by XXXXX Huge version with MacVim GUI. Features included (+) or not (-): ... system vimrc file: "$VIM/vimrc" user vimrc file: "$HOME/.vimrc" user exrc file: "$HOME/.exrc" system gvimrc file: "$VIM/gvimrc" user gvimrc file: "$HOME/.gvimrc" system menu file: "$VIMRUNTIME/menu.vim" fall-back for $VIM: "/Applications/MacVim.app/Contents/Resources/vim" Compilation: gcc -c -I. -Iproto -DHAVE_CONFIG_H -DFEAT_GUI_MACVIM -Wall -Wno-unknown-pragmas -p ipe -DMACOS_X_UNIX -no-cpp-precomp -g -O2 -D_FORTIFY_SOURCE=1 Linking: gcc -L. -Wl,-syslibroot,/Developer/SDKs/MacOSX10.6.sdk -L/usr/local/lib -o V im -framework Cocoa -framework Carbon -lncurses -liconv -framework Cocoa -fstack-prote ctor -L/usr/local/lib -L/System/Library/Perl/5.10/darwin-thread-multi-2level/CORE -lperl -lm - lutil -lc -framework Python -framework Ruby
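If you hit the same situation, a quick way to confirm and work around it from the shell — assuming the system vimrc lives at the path shown by vim --version:
grep -n 'modelines' /usr/share/vim/vimrc      # confirm the distro sets modelines=0
echo 'set modeline modelines=5' >> ~/.vimrc   # re-enable scanning a few lines in your own vimrc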
{ "source": [ "https://unix.stackexchange.com/questions/19875", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2459/" ] }
19,907
I have some database dumps from a Windows system on my box. They are text files. I'm using cygwin to grep through them. These appear to be plain text files; I open them with text editors such as notepad and wordpad and they look legible. However, when I run grep on them, it will say binary file foo.txt matches . I have noticed that the files contain some ascii NUL characters, which I believe are artifacts from the database dump. So what makes grep consider these files to be binary? The NUL character? Is there a flag on the filesystem? What do I need to change to get grep to show me the line matches?
If there is a NUL character anywhere in the file, grep will consider it a binary file. There is a workaround like this: cat file | tr -d '\000' | yourgrep — it eliminates all NUL bytes first and then searches through the file.
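Two concrete forms of that (dump.txt and the pattern are placeholders); the second relies on GNU grep's -a option, which simply tells grep to treat binary files as text:
tr -d '\000' < dump.txt | grep 'pattern'
grep -a 'pattern' dump.txt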
{ "source": [ "https://unix.stackexchange.com/questions/19907", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/394/" ] }
19,918
Sometimes when I want to umount a device, e.g. sudo umount /dev/loop0 I will get the message umount: /mnt: device is busy. (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1)) I usually solve this issue by closing a console window (in my case xfce4-terminal) and then umount . What does this problem mean? Is there some smarter solution?
It means that some process has a working directory or an open file handle underneath the mount point. The best thing to do is to end the offending process, change its working directory or close the file handle before unmounting. There is an alternative on Linux though. Using umount -l calls a "lazy" unmount. The filesystem will still be mounted but you won't be able to see or use it, except for processes that are already using it. When the offending program exits (through whatever means) the system will "finish" unmounting the filesystem.
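To find the offending process before resorting to a lazy unmount, fuser and lsof are the usual tools — a sketch, with /mnt standing in for your mount point:
fuser -vm /mnt            # list processes using the mount
lsof +D /mnt 2>/dev/null  # same idea, showing the open files
fuser -km /mnt            # last resort: kill those processes (be careful)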
{ "source": [ "https://unix.stackexchange.com/questions/19918", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6215/" ] }
19,945
I'm trying to use Vim more and more when I can. One of my biggest grip between Vim and an IDE like Aptana is the ability to auto indent. Is there a means of auto formatting code (HTML, CSS, PHP) so it is properly indented? If so how do you install this into vim? I don't understand plugins very much. I tried reviewing this thread and it confused me more: How to change vim auto-indent behavior?
To indent the whole file automatically: gg =G Explained: gg - go to beginning of the file G - go to end of the file = - indent
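If you want to apply this to many files without opening each one interactively, the same keystrokes can be driven from the shell, assuming your vimrc already enables filetype-based indentation — a sketch:
for f in *.html; do vim -c 'normal! gg=G' -c 'wq' "$f"; done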
{ "source": [ "https://unix.stackexchange.com/questions/19945", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7768/" ] }
20,026
I did something like convert -page A4 -compress A4 *.png CH00.pdf But the 1st page is much larger than the subsequent pages. This happens even though the image dimensions are similar. These images are scanned & cropped thus may have slight differences in dimensions I thought -page A4 should fix the size of the pages?
Last time I used convert for such a task I explicitly specified the size of the destination via resizing: $ i=150; convert a.png b.png -compress jpeg -quality 70 \ -density ${i}x${i} -units PixelsPerInch \ -resize $((i*827/100))x$((i*1169/100)) \ -repage $((i*827/100))x$((i*1169/100)) multipage.pdf The convert command doesn't always use DPI as default density/page format unit, thus we explicitly specify DPI with the -units option (otherwise you may get different results with different versions/input format combinations). The new size (specified via -resize ) is the dimension of a DIN A4 page in pixels. The resize argument specifies the maximal page size. What resolution and quality to pick exactly depends on the use case - I selected 150 DPI and average quality to save some space while it doesn't look too bad when printed on paper. Note that convert by default does not change the aspect ratio with the resize operation: Resize will fit the image into the requested size. It does NOT fill, the requested box size. ( ImageMagick manual ) Depending on the ImageMagick version and the involved input formats it might be ok to omit the -repage option. But sometimes it is required and without that option the PDF header might contain too small dimensions. In any case, the -repage shouldn't hurt. The computations use integer arithmetic since bash only supports that. With zsh the expressions can be simplified - i.e. replaced with $((i*8.27))x$((i*11.69)) . Lineart Images If the PNG files are bi-level (black & white a.k.a lineart) images then the img2pdf tool yields superior results over ImageMagick convert . That means img2pdf is faster and yields smaller PDFs. Example: $ img2pdf -o multipage.pdf a.png b.png or: $ img2pdf --pagesize A4 -o multipage.pdf a.png b.png
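To check afterwards that the pages really came out as A4, poppler's pdfinfo is handy; an A4 page should report roughly 595 x 842 points:
pdfinfo multipage.pdf | grep 'Page size'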
{ "source": [ "https://unix.stackexchange.com/questions/20026", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10247/" ] }
20,035
When I do str="Hello World\n===========\n" I get the \n printed out too. How can I have newlines then?
In bash you can use the syntax str=$'Hello World\n===========\n' Single quotes preceded by a $ is a new syntax that allows to insert escape sequences in strings. Also printf builtin allows to save the resulting output to a variable printf -v str 'Hello World\n===========\n' Both solutions do not require a subshell. If in the following you need to print the string, you should use double quotes, like in the following example: echo "$str" because when you print the string without quotes, newline are converted to spaces.
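A quick demonstration of why the quotes matter when printing — without them the shell's word splitting collapses the newlines to spaces:
str=$'Hello World\n===========\n'
echo "$str"   # newlines preserved
echo $str     # flattened to: Hello World ===========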
{ "source": [ "https://unix.stackexchange.com/questions/20035", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10247/" ] }
20,104
Let's say user has Directory1 and it contains File1 File2 CantBeDeletedFile How to make so the user would never be allowed to delete the CantBeDeletedFile ? If I change the ownership of Directory1 and remove write permissions users wouldn't be able to delete any file. They also wouldn't be able to add new files etc. I just want to be able to set some files which would never be deleted. More specific description. I am creating user profiles. I am creating application launcher files in their Desktop . So I want to set some launcher files (.desktop) and make them so user can only launch them and they couldn't rename nor delete, just launch. Currently if user owns the directory which contain any file. He can delete. If there is no generic way for all *nix, it's a Linux and ext4 FS.
(I dislike intruding users' home, I think they should be allowed to do whatever they want to do with they homes… but anyway…) This should work on linux (at least). I'm assuming user is already a member of the group user . A solution is to change ownership of Directory1 and set the sticky bit on the directory: chown root:user Directory1 chmod 1775 Directory1 Then use: chown root Directory1/CantBeDeletedFile Now, user won't be able to remove this file due to the sticky bit¹. The user is still able to add/remove their own files in Directory1 . But notice that they won't be able to delete Directory1 because it will never be emptied. — 1. When the sticky bit is enabled on a directory, users (other than the owner) can only remove their own files inside a directory. This is used on directories like /tmp whose permissions are 1777 = rwxrwxrwt .
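To sanity-check the setup, and as a heavier-handed alternative on ext2/3/4 (this is not what the answer above uses — the immutable attribute blocks everyone, including root, until it is cleared):
ls -ld Directory1                        # should show drwxrwxr-t ... root user
rm Directory1/CantBeDeletedFile          # as the ordinary user this should fail: Operation not permitted
chattr +i Directory1/CantBeDeletedFile   # alternative on ext filesystems: make the file immutable
lsattr Directory1/CantBeDeletedFile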
{ "source": [ "https://unix.stackexchange.com/questions/20104", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9746/" ] }
20,157
The following examples show that a newline is added to a here-string . Why is this done? xxd -p <<<'a' # output: 610a xxd -p <<<'a ' # output: 610a0a
The easy answer is because ksh is written that way (and bash is compatible). But there's a reason for that design choice. Most commands expect text input. In the unix world, a text file consists of a sequence of lines, each ending in a newline . So in most cases a final newline is required. An especially common case is to grab the output of a command with a command susbtitution, process it in some way, then pass it to another command. The command substitution strips final newlines; <<< puts one back. tmp=$(foo) tmp=${tmp//hello/world} tmp=${tmp#prefix} bar <<<$tmp Bash and ksh can't manipulate binary data anyway (it can't cope with null characters), so it's not surprising that their facilities are geared towards text data. The <<< here-string syntax is mostly only for convenience anyway, like << here-documents. If you need to not add a final newline, use echo -n (in bash) or printf and a pipeline.
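When the extra newline is a problem, printf gives exact control over what goes down the pipe ($var and somecmd are placeholders):
printf 'a' | xxd -p           # 61, no trailing 0a
printf '%s' "$var" | somecmd  # pass a variable's contents without appending anything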
{ "source": [ "https://unix.stackexchange.com/questions/20157", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2343/" ] }
20,161
Is there a way to make sed ask me for confirmation before each replace? Something similar to 'c' when using replace inside vim. Does sed do this at all?
Doing it with sed would probably not be possible as it's a non-interactive stream editor. Wrapping sed in a script would require far too much thinking. It is easier to just do it with vim : vim -c '%s/PATTERN/REPLACEMENT/gc' -c 'wq' file.in Since it was mentioned in comments below, here's how this would be used on multiple files matching a particular filename globbing pattern in the current directory: for fname in file*.txt; do vim -c '%s/PATTERN/REPLACEMENT/gc' -c 'wq' "$fname" done Or, if you first want to make sure that the file really contains a line that matches the given pattern first, before performing the substitution, for fname in file*.txt; do grep -q 'PATTERN' "$fname" && vim -c '%s/PATTERN/REPLACEMENT/gc' -c 'wq' "$fname" done The above two shell loops modified into find commands that do the same things but for all files with a particular name somewhere in or under some top-dir directory, find top-dir -type f -name 'file*.txt' \ -exec vim -c '%s/PATTERN/REPLACEMENT/gc' -c 'wq' {} \; find top-dir -type f -name 'file*.txt' \ -exec grep -q 'PATTERN' {} \; \ -exec vim -c '%s/PATTERN/REPLACEMENT/gc' -c 'wq' {} \; Or, using the original shell loops and having find feed pathnames into them: find top-dir -type f -name 'file*.txt' -exec sh -c ' for pathname do vim -c "%s/PATTERN/REPLACEMENT/gc" -c "wq" "$pathname" done' sh {} + find top-dir -type f -name 'file*.txt' -exec sh -c ' for pathname do grep -q "PATTERN" "$pathname" && vim -c "%s/PATTERN/REPLACEMENT/gc" -c "wq" "$pathname" done' sh {} + You do, in any case, not want to do something like for filename in $( grep -rl ... ) since it would require that grep finishes running before even starting the first iteration of loop, which is inelegant , and the pathnames returned by grep would be split into words on whitespaces, and these words would undergo filename globbing (this disqualifies pathnames that contains spaces and special characters). Related: Understanding the -exec option of `find`
{ "source": [ "https://unix.stackexchange.com/questions/20161", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6207/" ] }
20,193
Each time putty is closing the session after some time if it is idle. There is no time parameter on putty, so how can I keep my putty ssh session always Alive?
Enable SSH keep-alives by setting "Seconds between keepalives" (on the Connection panel of PuTTY's configuration) to a positive value. A value of 300 should suffice in most cases. (5 minutes.) This causes PuTTY to send SSH null packets to the remote host periodically, so that the session doesn't time out. Note that we don't want the SO_KEEPALIVE option lower on that panel. That is a much lower-level mechanism which is best used only when the application-level protocol doesn't have its own keepalive mechanism. SSH does, so we shouldn't use TCP keepalives in this case. There are other things that can cause connections to drop, but this is a solid first thing to try. If it doesn't work, you'd need to look into these other things: VPN timeouts, router timeouts, settings changes on the remote SSH server, flaky connections, etc.
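If the client end is an OpenSSH command-line client rather than PuTTY, the equivalent knob is ServerAliveInterval — a sketch:
ssh -o ServerAliveInterval=300 user@host
# or persistently, in ~/.ssh/config:
#   Host *
#       ServerAliveInterval 300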
{ "source": [ "https://unix.stackexchange.com/questions/20193", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3373/" ] }
20,262
I am using find and getting a list of files I want to grep through. How do I pipe that list to grep ?
Well, the generic case that works with any command that writes to stdout is to use xargs , which will let you attach any number of command-line arguments to the end of a command: $ find … | xargs grep 'search' Or to embed the command in your grep line with backticks or $() , which will run the command and substitute its output: $ grep 'search' $(find …) Note that these commands don't work if the file names contain whitespace, or certain other “weird characters” ( \'" for xargs, \[*? for $(find …) ). However, in the specific case of find the ability to execute a program on the given arguments is built-in: $ find … -exec grep 'search' {} \; Everything between -exec and ; is the command to execute; {} is replaced with the filename found by find . That will execute a separate grep for each file; since grep can take many filenames and search them all, you can change the ; to + to tell find to pass all the matching filenames to grep at once: $ find … -exec grep 'search' {} \+
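Where GNU find and xargs are available, the NUL-separated variant is the robust way to combine them, since it survives spaces and other odd characters in filenames:
$ find . -name '*.log' -print0 | xargs -0 grep -l 'search'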
{ "source": [ "https://unix.stackexchange.com/questions/20262", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9110/" ] }
20,285
Let's say I have a script called script , that reads from stdin and spits out some results to the screen. If I wanted to feed it contents of one file, I would have typed: $ ./script < file1.txt But what if I want to feed the contents of the multiple files to the script the same way, is it at all possible? The best I came up with so far was: cat file1.txt file2.txt > combined.txt && ./script < combined.txt Which uses two commands and creates a temp file. Is there a way to do the same thing but bypassing creating the combined file?
You can use cat and a pipe: cat file1 file2 file3 ... fileN | ./script Your example, using a pipe and no temp file: cat file1.txt file2.txt | ./script
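If you'd rather keep the script reading from a redirection instead of a pipe, bash process substitution does the same thing — illustrative only, and bash-specific:
./script < <(cat file1.txt file2.txt)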
{ "source": [ "https://unix.stackexchange.com/questions/20285", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9148/" ] }
20,322
I have two different files: File1 /home/user1/ /home/user2/bin /home/user1/a/b/c File2 <TEXT1> <TEXT2> I want to replace the <TEXT1> of File2 with the contents of File1 using sed . I tried this command, but not getting proper output: cat File2|sed "s/<TEXT1>/$(cat File1|sed 's/\//\\\//g'|sed 's/$/\\n/g'|tr -d "\n")/g" You can use other tools also to solve this problem.
Here's a sed script solution (easier on the eyes than trying to get it into one line on the command line): /<TEXT1>/ { r File1 d } Running it: $ sed -f script.sed File2 /home/user1/ /home/user2/bin /home/user1/a/b/c <TEXT2>
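The same script can be given on the command line; the split across several -e expressions is needed because r consumes everything up to the end of its line (this works with GNU sed at least):
$ sed -e '/<TEXT1>/{r File1' -e 'd' -e '}' File2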
{ "source": [ "https://unix.stackexchange.com/questions/20322", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5250/" ] }
20,357
I think I read something a while back about this, but I can't remember how it's done. Essentially, I have a service in /etc/init.d which I'd like to start automatically at boot time. I remember it has something to do with symlinking the script into the /etc/rc.d directory, but I can't remember at the present. What is the command for this? I believe I'm on a Fedora/CentOS derivative.
If you are on a Red Hat based system, as you mentioned, you can do the following: Create a script and place it in /etc/init.d (e.g. /etc/init.d/myscript ). The script should have the following format: #!/bin/bash # chkconfig: 2345 20 80 # description: Description comes here.... # Source function library. . /etc/init.d/functions start() { # code to start app comes here # example: daemon program_name & } stop() { # code to stop app comes here # example: killproc program_name } case "$1" in start) start ;; stop) stop ;; restart) stop start ;; status) # code to check status of app comes here # example: status program_name ;; *) echo "Usage: $0 {start|stop|status|restart}" esac exit 0 The format is pretty standard and you can view existing scripts in /etc/init.d . You can then use the script like so /etc/init.d/myscript start or service myscript start . The chkconfig man page explains the header of the script: "This says that the script should be started in levels 2, 3, 4, and 5, that its start priority should be 20, and that its stop priority should be 80." The example start, stop and status code uses helper functions defined in /etc/init.d/functions Enable the script $ chkconfig --add myscript $ chkconfig --level 2345 myscript on Check the script is indeed enabled - you should see "on" for the levels you selected. $ chkconfig --list | grep myscript
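For the record, what chkconfig --add creates are the S/K symlinks into the rc.d directories that the question remembers; you can inspect them, and on systems without chkconfig you would make them by hand (names and priorities here are illustrative):
ls -l /etc/rc3.d/ | grep myscript        # e.g. S20myscript -> ../init.d/myscript
# manual equivalent for runlevel 3:
# ln -s ../init.d/myscript /etc/rc3.d/S20myscript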
{ "source": [ "https://unix.stackexchange.com/questions/20357", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
20,370
Is there a simple linux command that will tell me what my display manager is? I'm using Xfce. Are different desktop environments usually affiliated with different display managers?
Unfortunately the configuration differs for each distribution: Debian/Ubuntu /etc/X11/default-display-manager RedHat & Fedora /etc/sysconfig/desktop see Fedora docs: Switching desktop environments OpenSuSe /etc/sysconfig/displaymanager
{ "source": [ "https://unix.stackexchange.com/questions/20370", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7826/" ] }
20,379
Can someone tell me the difference between a Desktop Install, a Basic Server install, and a Minimal Install? During installation, it doesn't give a description and I can't find documentation on it either. This is for a CentOS 6 installation.
As you've already noticed, Red Hat's description is vague about what each suite actually includes. Below is a list of the package groups that each suite will install. You can get more information about any package group by running yum groupinfo foo-bar . The names listed below differ from what yum grouplist will list, but the groupinfo command still works on them. I got this by mounting http://mirror.centos.org/centos-6/6/os/x86_64/images/install.img and looking at /usr/lib/anaconda/installclasses/rhel.py inside the image. Desktop : base, basic-desktop, core, debugging, desktop-debugging, desktop-platform, directory-client, fonts, general-desktop, graphical-admin-tools, input-methods, internet-applications, internet-browser, java-platform, legacy-x, network-file-system-client, office-suite, print-client, remote-desktop-clients, server-platform, x11 Minimal Desktop : base, basic-desktop, core, debugging, desktop-debugging, desktop-platform, directory-client, fonts, input-methods, internet-browser, java-platform, legacy-x, network-file-system-client, print-client, remote-desktop-clients, server-platform, x11 Minimal : core Basic Server : base, console-internet, core, debugging, directory-client, hardware-monitoring, java-platform, large-systems, network-file-system-client, performance, perl-runtime, server-platform Database Server : base, console-internet, core, debugging, directory-client, hardware-monitoring, java-platform, large-systems, network-file-system-client, performance, perl-runtime, server-platform, mysql-client, mysql, postgresql-client, postgresql, system-admin-tools Web Server : base, console-internet, core, debugging, directory-client, java-platform, mysql-client, network-file-system-client, performance, perl-runtime, php, postgresql-client, server-platform, turbogears, web-server, web-servlet Virtual Host : base, console-internet, core, debugging, directory-client, hardware-monitoring, java-platform, large-systems, network-file-system-client, performance, perl-runtime, server-platform, virtualization, virtualization-client, virtualization-platform Software Development Workstation : additional-devel, base, basic-desktop, core, debugging, desktop-debugging, desktop-platform, desktop-platform-devel, development, directory-client, eclipse, emacs, fonts, general-desktop, graphical-admin-tools, graphics, input-methods, internet-browser, java-platform, legacy-x, network-file-system-client, performance, perl-runtime, print-client, remote-desktop-clients, server-platform, server-platform-devel, technical-writing, tex, virtualization, virtualization-client, virtualization-platform, x11
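To poke at those groups on a running system (the output format varies a little between yum versions, so treat this as a sketch):
yum -v grouplist          # the verbose listing also shows each group's id
yum groupinfo base        # details for one of the group ids listed above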
{ "source": [ "https://unix.stackexchange.com/questions/20379", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9786/" ] }
20,385
I posted a question and noticed people weren't distinguishing correctly between many of these things: Windows Managers vs Login Managers Vs Display Managers Vs Desktop Environment. Can someone please clear this up, i.e. tell us the difference between them and how they are related perhaps? What category does Xorg fall under? What about Gdm/Kdm/Xdm? People also talk about X. What is X?
From the bottom up: Xorg, XFree86 and X11 are display servers . This creates the graphical environment. [gkx]dm (and others) are display managers . A login manager is a synonym. This is the first X program run by the system if the system (not the user) is starting X and allows you to log on to the local system, or network systems. A window manager controls the placement and decoration of windows. That is, the window border and controls are the decoration. Some of these are stand alone (WindowMaker, sawfish, fvwm, etc). Some depend on an accompanying desktop environment. A desktop environment such as XFCE, KDE, GNOME, etc. are suites of applications designed to integrate well with each other to provide a consistent experience. In theory (and mostly so in practice) any of those components are interchangeable. You can run kmail using GNOME with WindowMaker on Xorg.
{ "source": [ "https://unix.stackexchange.com/questions/20385", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7826/" ] }
20,396
I find that I often do the following: %> cd bla/bla %> ls I would like it that whenever I cd into a directory it automatically does an ls . I fiddled with my .bashrc for a while, but couldn't figure out how to make it happen.
You can do this with a function: $ cdls() { cd "$@" && ls; } The && means ' cd to a directory, and if successful (e.g. the directory exists), run ls '. Using the && operator is better than using a semicolon ; operator in between the two commands, as with { cd "$@"; ls; } . This second command will run ls regardless if the cd worked or not. If the cd failed, ls will print the contents of your current directory, which will be confusing for the user. As a best practice, use && and not ; . $ cdls /var/log CDIS.custom fsck_hfs.log monthly.out system.log $ pwd /var/log In general, it is a bad practice to rename a command which already exists, especially for a commonly called command like cd . Instead, create a new command with a different name. If you overwrite cd with a function or alias which is also named cd , what would happen when you enter a directory with 100,000 files? There are many utilities that use cd , and they may get confused by this unusual behavior. If you use a shared account (Such as root when you are working with other system administrators), it can be very dangerous to replace an existing command because the environment is different from what people expect.
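If you use bash's programmable completion, you can also give the new command the same directory-name completion that cd has — a small optional extra:
cdls() { cd "$@" && ls; }
complete -d cdls    # complete cdls arguments with directory names (bash)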
{ "source": [ "https://unix.stackexchange.com/questions/20396", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10600/" ] }
20,399
I have created a simple systemd service file for a custom application. The application works well when I run it manually, but my CPU gets maxed out when I run it with systemd. I'm trying do track down where my problem is, but I don't know where to find the output (or how to configure systemd to put the output somewhere). Here is my service file: [Unit] Description=Syncs files with a server when they change Wants=network.target After=network.target [Service] ExecStart=/usr/local/bin/filesync-client --port 2500 WorkingDirectory=/usr/local/lib/node_modules/filesync-client Restart=always [Install] WantedBy=multi-user.target Throughout the application, I output to stdout and stderr. How can I read the output of my daemon? Edit: I found man systemd.exec , which mentioned the StandardOutput= option, but I'm not sure how to use it. From the man page : StandardOutput= Controls where file descriptor 1 (STDOUT) of the executed processes is connected to. Takes one of inherit , null , tty , syslog , kmsg , kmsg+console , syslog+console or socket . If set to inherit the file descriptor of standard input is duplicated for standard output. If set to null standard output will be connected to /dev/null , i.e. everything written to it will be lost. If set to tty standard output will be connected to a tty (as configured via TTYPath= , see below). If the TTY is used for output only the executed process will not become the controlling process of the terminal, and will not fail or wait for other processes to release the terminal. syslog connects standard output to the syslog(3) system logger. kmsg connects it with the kernel log buffer which is accessible via dmesg(1). syslog+console and kmsg+console work similarly but copy the output to the system console as well. socket connects standard output to a socket from socket activation, semantics are similar to the respective option of StandardInput= . This setting defaults to inherit. Does this mean that these are my only options? I would like, for example, to put output in /dev/shm or something. I suppose I could use a Unix domain socket and write a simple listener, but this seems a little unnecessary. I just need this for debugging, and I'll probably end up removing most of the logs and change the output to syslog.
Update As mikemaccana notes, the systemd journal is now the standard logging device for most distros. To view the stdout and stderr of a systemd unit use the journalctl command. sudo journalctl -u [unit] Original Answer By default stdout and stderr of a systemd unit are sent to syslog. If you're using the full systemd, this will be accesible via journalctl . On Fedora, it should be /var/log/messages but syslog will put it where your rules say. Due to the date of the post, and assuming most people that are exposed to systemd are via fedora, you were probably hit by the bug described here: https://bugzilla.redhat.com/show_bug.cgi?id=754938 It has a good explanation of how it all works too =) (This was a bug in selinux-policy that caused error messages to not be logged, and was fixed in selinux-policy-3.10.0-58.fc16 )
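Assuming the unit file was installed as filesync-client.service (the name is a guess based on the question), typical ways to read its output would be:
journalctl -u filesync-client.service        # everything the unit wrote to stdout/stderr
journalctl -u filesync-client.service -f     # follow new output, like tail -f
journalctl -u filesync-client.service -b     # only messages since the current boot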
{ "source": [ "https://unix.stackexchange.com/questions/20399", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8471/" ] }
20,460
In what order are the dated ordered by? Certainly not alphanumeric order. ls -lt sorts by modification time. But I need creation time.
Most unices do not have a concept of file creation time. You can't make ls print it because the information is not recorded. If you need creation time, use a version control system : define creation time as the check-in time. If your unix variant has a creation time, look at its documentation. For example, on Mac OS X (the only example I know of), use ls -tU . Windows also stores a creation time, but it's not always exposed to ports of unix utilities, for example Cygwin ls doesn't have an option to show it. The stat utility can show the creation time, called “birth time” in GNU utilities, so under Cygwin you can show files sorted by birth time with stat -c '%W %n' * | sort -k1n . Note that the ctime ( ls -lc ) is not the file creation time , it's the inode change time. The inode change time is updated whenever anything about the file changes (contents or metadata) except that the ctime isn't updated when the file is merely read (even if the atime is updated). In particular, the ctime is always more recent than the mtime (file content modification time) unless the mtime has been explicitly set to a date in the future.
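On newer Linux systems, GNU stat can report the birth time directly when the filesystem records it (ext4 does); it prints - when the information is unavailable:
stat -c '%w %n' somefile    # human-readable birth time, or '-' if not stored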
{ "source": [ "https://unix.stackexchange.com/questions/20460", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9431/" ] }
20,469
I wanted to know the difference between the following two commands 2>&1 > output.log and 2>&1 | tee output.log I saw one of my colleague use second option to redirect. I know what 2>&1 does, my only question is what is the purpose of using tee where a simple redirection ">" operator can be used?
Looking at the two commands separately: utility 2>&1 >output.log Here, since redirections are processed in a left-to-right manner, the standard error stream would first be redirected to wherever the standard output stream goes (possibly to the console), and then the standard output stream would be redirected to a file. The standard error stream would not be redirected to that file. The visible effect of this would be that you get what's produced on standard error on the screen and what's produced on standard output in the file. utility 2>&1 | tee output.log Here, you redirect standard error to the same place as the standard output stream. This means that both streams will be piped to the tee utility as a single intermingled output stream, and that this standard output data will be saved to the given file by tee . The data would additionally be reproduced by tee in the console (this is what tee does, it duplicates data streams). Which ever one of these is used depends on what you'd like to achieve. Note that you would not be able to reproduce the effect of the second pipeline with just > (as in utility >output.log 2>&1 , which would save both standard output and error in the file by first redirecting standard output to the output.log file and then redirecting standard error to where standard output is now going). You would need to use tee to get the data in the console as well as in the output file. Additional notes: The visible effect of the first command, utility 2>&1 >output.log would be the same as utility >output.log I.e., the standard output goes to the file and standard error goes to the console. If a further processing step was added to the end of each of the above commands, there would be a big difference though: utility 2>&1 >output.log | more_stuff utility >output.log | more_stuff In the first pipeline, more_stuff would get what's originally the standard error stream from utility as its standard input data, while in the second pipeline, since it's only the resulting standard output stream that is ever sent across a pipe, the more_stuff part of the pipeline would get nothing to read on its standard input.
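A quick way to see the difference for yourself, using a compound command that writes one line to each stream:
{ echo out; echo err >&2; } 2>&1 > file.log        # 'err' appears on the terminal, 'out' lands in file.log
{ echo out; echo err >&2; } 2>&1 | tee file.log    # both lines appear on the terminal and in file.log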
{ "source": [ "https://unix.stackexchange.com/questions/20469", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10654/" ] }
20,483
Is there any way to find out from terminal which process is causing high CPU Usage ? It would also be useful to order processes in descending order of cpu Usage
top will display what is using your CPU. If you have it installed, htop allows you more fine-grained control, including filtering by—in your case—CPU
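For a one-shot, script-friendly listing sorted by CPU usage (procps ps on Linux):
ps -eo pid,user,%cpu,%mem,comm --sort=-%cpu | head -n 10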
{ "source": [ "https://unix.stackexchange.com/questions/20483", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10611/" ] }
20,487
I want to stop internet on my system using iptables so what should I do? iptables -A INPUT -p tcp --sport 80 -j DROP or iptables -A INPUT -p tcp --dport 80 -j DROP ?
Reality is you're asking 2 different questions. --sport is short for --source-port --dport is short for --destination-port also the internet is not simply the HTTP protocol which is what typically runs on port 80. I Suspect you're asking how to block HTTP requests. to do this you need to block 80 on the outbound chain. iptables -A OUTPUT -p tcp --dport 80 -j DROP will block all outbound HTTP requests, going to port 80, so this won't block SSL, 8080 (alt http) or any other weird ports, to do those kinds of things you need L7 filtering with a much deeper packet inspection.
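If the goal is really "no web browsing" rather than just port 80, HTTPS needs blocking too; a sketch using the multiport match, plus the matching -D to undo it:
iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -j DROP
iptables -D OUTPUT -p tcp -m multiport --dports 80,443 -j DROP   # remove the rule again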
{ "source": [ "https://unix.stackexchange.com/questions/20487", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2063/" ] }
20,490
I understand that in Windows as well as Linux and Unix, a program|application|software can be installed in any directory. Also if packages are installed using the distribution's packaging system, it'll place files in the correct location. But at times, a software installation prompts for a path to place files. In case of a Linux distro where is this default place ( C:\Program Files or C:\progra~1 equivalent)? Is it different for various distributions? If yes, where would this be for RHEL , Suse and Ubuntu ?
The Linux Documentation Project has a description of the Linux filesystem hierarchy where they explain the different folders and their (partly historical) meaning. As xenoterracide already pointed out /bin and /opt are the standard directories which can be compared to "Program Files" on Windows. /bin contains several useful commands that are of use to both the system administrator as well as non-privileged users. It usually contains the shells like bash , csh , etc.... and commonly used commands like cp , mv , rm , cat , ls . ( quoted from TLDP ) /opt is reserved for all the software and add-on packages that are not part of the default installation. For example, StarOffice, Kylix, Netscape Communicator and WordPerfect packages are normally found here. ( quoted from TLDP )
{ "source": [ "https://unix.stackexchange.com/questions/20490", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7318/" ] }