8,214
I usually use grep when developing, and there are some extensions that I always want to exclude (like *.pyc ). Is it possible to create a ~/.egreprc or something like that, and add filtering to exclude pyc files from all results? Is this possible, or will I have to create an alias for using grep in this manner, and call the alias instead of grep ?
No, there's no rc file for grep. GNU grep 2.4 through 2.21 applied options from the environment variable GREP_OPTIONS, but more recent versions no longer honor it. For interactive use, define an alias in your shell initialization file (.bashrc or .zshrc). I use a variant of the following:

    alias regrep='grep -Er --exclude=*~ --exclude=*.pyc --exclude-dir=.bzr --exclude-dir=.git --exclude-dir=.svn'

If you call the alias grep, and you occasionally want to call grep without the options, type \grep. The backslash bypasses the alias.
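If you want something closer to an rc file, GNU grep can also read exclusion globs from a file with --exclude-from. A minimal sketch, where the file name ~/.grep-exclude and the extra --exclude-dir are just examples, not anything grep looks for on its own:

    # ~/.grep-exclude contains one glob per line, e.g.
    #   *.pyc
    #   *~
    alias grep='grep --exclude-from="$HOME/.grep-exclude" --exclude-dir=.git'

Check that your grep version supports --exclude-from before relying on it; the point is only that the pattern list can live in a file you edit once, instead of in the alias itself.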
{ "source": [ "https://unix.stackexchange.com/questions/8214", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2689/" ] }
8,342
Suppose I have export MY_VAR=0 in ~/.bashrc. I have a gnome terminal open, and in this terminal I change $MY_VAR's value to 200. So, if I do echo $MY_VAR in this terminal, 200 is shown. Now I open another tab in my gnome terminal and do echo $MY_VAR... and instead of 200, I get 0. What should I do to persist the 200 value when a terminal modifies an environment variable, making this modification (setting to 200) available to all subsequent sub-shells and such? Is this possible?
A copy of the environment propagates to sub-shells, so this works:

    $ export MY_VAR=200
    $ bash
    $ echo $MY_VAR
    200

but since it's a copy, you can't get that value back up to the parent shell — not by changing the environment, at least. It sounds like you actually want to go a step further, which is to make something which acts like a global variable, shared by "sibling" shells initiated separately from the parent — like your new tab in Gnome Terminal. Mostly, the answer is "you can't, because environment variables don't work that way". However, there's another answer, which is, well, you can always hack something up. One approach would be to write the value of the variable to a file, like ~/.myvar, and then include that in ~/.bashrc. Then, each new shell will start with the value read from that file. You could go a step further — make ~/.myvar be in the format MYVAR=200, and then set PROMPT_COMMAND='source ~/.myvar', which would cause the value to be re-read every time you get a new prompt. It's still not quite a shared global variable, but it's starting to act like it. It won't activate until a prompt comes back, though, which depending on what you're trying to do could be a serious limitation. And then, of course, the next thing is to automatically write changes to ~/.myvar. That gets a little more complicated, and I'm going to stop at this point, because really, environment variables were not meant to be an inter-shell communication mechanism, and it's better to just find another way to do it.
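A minimal sketch of that file-backed hack — the file name ~/.myvar and the helper function name are only examples:

    # in ~/.bashrc
    [ -f ~/.myvar ] && source ~/.myvar       # pick up the last saved value
    PROMPT_COMMAND='source ~/.myvar'         # re-read it before every prompt

    # hypothetical helper to change the value "everywhere"
    setmyvar() {
        echo "export MY_VAR=$1" > ~/.myvar
        source ~/.myvar
    }

Other shells only pick the change up when they draw a new prompt, so a long-running command in another tab won't see it until it finishes.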
{ "source": [ "https://unix.stackexchange.com/questions/8342", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2689/" ] }
8,351
I'm a bit lost with virt-manager / libvirt / KVM. I've got a working KVM VM (Windows XP) which works nicely. The VM is backed by a 4GB file or so (a .img). Now I want to do something very simple: I want to duplicate my VM. I thought "OK, no problem, let's copy the 4GB file and copy the XML file." But then the libvirt FAQ states in all uppercase: "you SHOULD NOT CARE WHERE THE XML IS STORED". OK fine, I shouldn't care. But then how do I duplicate my VM? I want to create a new VM that is a copy of that VM.
The most convenient is simply:

    # virt-clone --connect=qemu://example.com/system -o this-vm -n that-vm --auto-clone

which will make a copy of this-vm, named that-vm, and takes care of duplicating storage devices. Nothing new here except details. More to the point, what the FAQ is saying is that the XML domain descriptions are not directly editable; you need to go through libvirt. To replicate the steps taken by the virt-clone command, you could:

    source_vm=vm_name
    new_vm=new_vm_name

    # You cannot "clone" a running vm, stop it. suspend and destroy
    # are also valid options for less graceful cloning
    virsh shutdown "$source_vm"

    # copy the storage image
    cp /var/lib/libvirt/images/{"$source_vm","$new_vm"}.img

    # dump the xml for the original
    virsh dumpxml "$source_vm" > "/tmp/$new_vm.xml"

    # hardware addresses need to be removed, libvirt will assign
    # new addresses automatically
    sed -i /uuid/d "/tmp/$new_vm.xml"
    sed -i '/mac address/d' "/tmp/$new_vm.xml"

    # and actually rename the vm
    # (this also updates the storage path)
    sed -i "s/$source_vm/$new_vm/" "/tmp/$new_vm.xml"

    # finally, create the new vm
    virsh define "/tmp/$new_vm.xml"
    virsh start "$source_vm"
    virsh start "$new_vm"
{ "source": [ "https://unix.stackexchange.com/questions/8351", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5213/" ] }
8,414
I'd like to be able to tail the output of a server log file that has messages like INFO, SEVERE, etc., and if it's SEVERE, show the line in red; if it's INFO, in green. What kind of alias can I set up for a tail command that would help me do this?
Try out multitail¹. This is an übergeneralization of tail -f. You can watch multiple files in separate windows, highlight lines based on their content, and more.

    multitail -c /path/to/log

The colors are configurable. If the default color scheme doesn't work for you, write your own in the config file. For example, call multitail -cS amir_log /path/to/log with the following ~/.multitailrc:

    colorscheme:amir_log
    cs_re:green:INFO
    cs_re:red:SEVERE

Another solution, if you're on a server where it's inconvenient to install non-standard tools, is to combine tail -f with sed or awk to add color selection control sequences. This requires tail -f to flush its standard output without delay even when its standard output is a pipe; I don't know if all implementations do this.

    tail -f /path/to/log | awk '
      /INFO/ {print "\033[32m" $0 "\033[39m"}
      /SEVERE/ {print "\033[31m" $0 "\033[39m"}
    '

or with sed

    tail -f /path/to/log | sed --unbuffered \
        -e 's/\(.*INFO.*\)/\o033[32m\1\o033[39m/' \
        -e 's/\(.*SEVERE.*\)/\o033[31m\1\o033[39m/'

If your sed isn't GNU sed, replace \o033 by a literal escape character and remove --unbuffered. Yet another possibility is to run tail -f in an Emacs shell buffer and use Emacs's syntax coloring abilities.

¹ The historical website vanished in early 2021. The latest version is still available in many distributions, e.g. Arch, Debian.
{ "source": [ "https://unix.stackexchange.com/questions/8414", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3115/" ] }
8,430
How can I remove all empty directories in a subtree? I used something like

    find . -type d -exec rmdir {} 2>/dev/null \;

but it needs to be run multiple times in order to remove directories that contain only empty directories. Moreover, it's quite slow, especially under cygwin.
Combining GNU find options and predicates, this command should do the job:

    find . -type d -empty -delete

-type d restricts to directories
-empty restricts to empty ones
-delete removes each directory

The tree is walked from the leaves without the need to specify -depth, as it is implied by -delete.
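If your find lacks -empty or -delete (some non-GNU implementations), a rough single-pass equivalent — sketched here, not tested on every platform — is:

    find . -depth -type d -exec rmdir {} + 2>/dev/null

rmdir only removes directories that are already empty, so non-empty ones are simply skipped (hence the discarded error output), and -depth makes find visit children before their parents, which is what lets one pass clean out nested empty directories.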
{ "source": [ "https://unix.stackexchange.com/questions/8430", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5116/" ] }
8,469
Sometimes I run an app in the gnome-terminal, but then I suddenly have to restart gnome or something. I guess the answer to the question is also useful when I want to disconnect from SSH while something is happening. Gnome's terminal process tree looks like this:

    gnome-terminal
      bash
        some-boring-process

Can I 'detach' bash from gnome-terminal (or detach some-boring-process from bash and redirect its output somewhere)? If I just kill gnome-terminal, bash will be killed too, as will all its subprocesses.
If some-boring-process is running in your current bash session:

1. halt it with ctrl-z to give you the bash prompt
2. put it in the background with bg
3. note the job number, or use the jobs command
4. detach the process from this bash session with disown -h %1 (substitute the actual job number there)

That doesn't do anything to redirect the output — you have to think of that when you launch your boring process. [Edit] There seems to be a way to redirect it: https://gist.github.com/782263

But seriously, look into screen. I have shells on a remote server that have been running for months. Looks like this:

    $ sleep 999999
    ^Z
    [1]+  Stopped                 sleep 999999
    $ bg
    [1]+ sleep 999999 &
    $ disown -h %1
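For the output question, the simplest option is to decide at launch time. A small sketch — the log file name is just an example:

    nohup some-boring-process > ~/boring.log 2>&1 &
    disown -h

nohup makes the process ignore the hangup signal sent when the terminal goes away and sends its output to a file, so it survives closing gnome-terminal (and bash) without any later surgery.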
{ "source": [ "https://unix.stackexchange.com/questions/8469", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5239/" ] }
8,476
I ran mv a-folder /home/me on a machine and half way through the move the destination device filled up. a-folder was made up of folders and files in various subdirectories. Does mv leave the source folder intact until the move has successfully completed? The source and destination folders were on different filesystems. The reason I ask is that I ran this command on the wrong machine, so if the source folder is intact then that makes my life a lot easier :)
{ "source": [ "https://unix.stackexchange.com/questions/8476", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2163/" ] }
8,485
We have a large file system on which a full du (disk usage) summary takes over two minutes. I'd like to find a way to speed up a disk usage summary for arbitrary directories on that file system. For small branches I've noticed that du results seem to be cached somehow, as repeat requests are much faster, but on large branches the speed-up becomes negligible. Is there a simple way of speeding up du, or more aggressively caching results for branches that haven't been modified since the previous search? Or is there an alternative command that can deliver disk usage summaries faster?
What you are seeing when you rerun a du command is the effect of disk buffering. Once you read a block, its disk buffer is kept in the buffer cache until that memory is needed for something else. For du you need to read the directory and the inode for each file in the directory. The du results are not cached in this case, but can be derived with far less disk IO. While it would be possible to force the system to cache this information, overall performance would suffer, as the required buffer space would not be available for actively accessed files. The directory itself has no idea how large a file is, so each file's inode needs to be accessed. To keep a cached value up to date, it would have to be updated every time a file changed size. As a file can be listed in 0 or more directories, this would require each file's inode to know which directories it is listed in. This would greatly complicate the inode structure and reduce IO performance. Also, since du allows you to get results assuming different block sizes, the cache would need to increment or decrement a value for each possible block size, further slowing performance.
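If somewhat stale numbers are acceptable, one workaround is to precompute the summary on a schedule and query the saved output instead of rerunning du — a rough sketch, where the paths, interval and flags are all assumptions:

    # crontab entry: refresh the summary hourly (paths are examples)
    0 * * * * du -xk /srv/data > /var/tmp/du.cache 2>/dev/null

    # later, look up an arbitrary subdirectory in the cached output
    awk '$2 == "/srv/data/some/dir"' /var/tmp/du.cache

As written this only handles paths without embedded whitespace, and the figures are only as fresh as the last cron run.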
{ "source": [ "https://unix.stackexchange.com/questions/8485", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4292/" ] }
8,503
I have an xmms2d process running, but two possible executable files (in different directories, both in the executable path) that could have spawned it. I suspect that one of those is corrupted, because sometimes this program works and sometimes it doesn't. The process running now works, so I want to delete (or rename) the other one.

    ps ax | grep "xmms"

returns

    8505 ?        SLl    2:38 xmms2d -v

without path information. Given the PID, could I find whether it was run from /usr/bin/xmms2d or /usr/local/bin/xmms2d? Thanks!
Try this:

    ls -l /proc/8505/exe

Or if you don't want to parse the output of ls, just do:

    readlink /proc/8505/exe

or

    realpath /proc/8505/exe
{ "source": [ "https://unix.stackexchange.com/questions/8503", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2333/" ] }
8,518
How can I get my own IP address and save it to a variable in a shell script?
I believe the "modern tools" way to get your ipv4 address is to parse ip rather than ifconfig, so it'd be something like:

    ip4=$(/sbin/ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1)
    ip6=$(/sbin/ip -o -6 addr list eth0 | awk '{print $4}' | cut -d/ -f1)

or something like that.
{ "source": [ "https://unix.stackexchange.com/questions/8518", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3608/" ] }
8,528
If I do the following:

    sudo su -
    //enter password
    exit
    exit
    //login again straight away
    sudo su -

the second invocation of sudo does not request a password, because even though I have logged out again, I am still within some time limit, meaning that I do not need to be prompted for my password again. Because I am trying out some new privs to make sure they work, this is really slowing me down while I wait for the timeout to happen. Is there a command I can run to reset the timeout? I don't want to change the timeout or affect other users, by the way!
    sudo -k

will kill the timeout timestamp. You can even put the command afterwards, like

    sudo -k test_my_privileges.sh

From man sudo:

    -K  The -K (sure kill) option is like -k except that it removes the user's time stamp
        entirely and may not be used in conjunction with a command or other option. This
        option does not require a password.

    -k  When used by itself, the -k (kill) option to sudo invalidates the user's time stamp
        by setting the time on it to the Epoch. The next time sudo is run a password will
        be required. This option does not require a password and was added to allow a user
        to revoke sudo permissions from a .logout file.

        When used in conjunction with a command or an option that may require a password,
        this option will cause sudo to ignore the user's cached credentials. As a result,
        sudo will prompt for a password (if one is required by the security policy) and
        will not update the user's cached credentials.

You can also change it permanently. From man sudoers:

    timestamp_timeout
        Number of minutes that can elapse before sudo will ask for a passwd again. The
        timeout may include a fractional component if minute granularity is insufficient,
        for example 2.5. The default is 5. Set this to 0 to always prompt for a password.
        If set to a value less than 0 the user's timestamp will never expire. This can be
        used to allow users to create or delete their own timestamps via sudo -v and
        sudo -k respectively.
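If you do decide to change the timeout permanently for just your own account (so other users are unaffected), a per-user Defaults line in sudoers is one option — a sketch, with the username jsmith as a placeholder:

    # edit with: visudo
    Defaults:jsmith timestamp_timeout=0    # always prompt jsmith for a password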
{ "source": [ "https://unix.stackexchange.com/questions/8528", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2163/" ] }
8,581
I would like to have the root account in safety even if my unprivileged user is compromised. On Ubuntu you can only use sudo for "security reasons" by default. However I am not sure it is any safer than just using login on a text-mode console. There are too many things that can go wrong if an attacker can run code as my normal user. For example adding aliases, adding stuff to my PATH, setting LD_PRELOAD and X11 keyloggers, just to mention a few. The only advantage I can see is the timeout, so I never forget to log out. I have the same doubts about su, but it doesn't even have a time limit. Some operations (especially IO redirection) are more convenient with su, but security-wise this seems to be worse. Login on a text-mode console seems to be the safest. Since it is started by init, if an attacker can control PATH or LD_PRELOAD he is already root. The keypress events can't be intercepted by programs running on X. I don't know if programs running on X can intercept [ctrl]+[alt]+[f1] (and open a fullscreen window that looks like a console) or whether it is safe like [ctrl]+[alt]+[del] on Windows. Besides that, the only problem I see is the lack of timeout. So am I missing something? Why did the Ubuntu guys decide to only allow sudo? What can I do to improve the security of any of the methods? What about SSH? Traditionally root can't log in through SSH. But using the above logic, wouldn't this be the safest thing to do:

1. allow root through SSH
2. switch to text-mode
3. log in as root
4. ssh to the other machine
5. log in as root?
Security is always about making trade-offs. Just like the proverbial server which is in a safe, unplugged, at the bottom of the ocean, root would be most secure if there were no way to access it at all.

LD_PRELOAD and PATH attacks like those you describe assume that there is an attacker with access to your account already, or at least to your dotfiles. Sudo doesn't protect against that very well at all — if they have your password, after all, no need to try tricking you for later... they can just use sudo now.

It's important to consider what Sudo was designed for originally: delegation of specific commands (like those to manage printers) to "sub-administrators" (perhaps grad students in a lab) without giving away root completely. Using sudo to do everything is the most common use I see now, but it's not necessarily the problem the program was meant to solve (hence the ridiculously complicated config file syntax).

But, sudo-for-unrestricted-root does address another security problem: manageability of root passwords. At many organizations, these tend to be passed around like candy, written on whiteboards, and left the same forever. That leaves a big vulnerability, since revoking or changing access becomes a big production number. Even keeping track of what machine has what password is a challenge — let alone tracking who knows which one.

Remember that most "cyber-crime" comes from within. With the root password situation described, it's hard to track down who did what — something sudo with remote logging deals with pretty well.

On your home system, I think it's really more a matter of the convenience of not having to remember two passwords. It's probable that many people were simply setting them to be the same — or worse, setting them to be the same initially and then letting them get out of sync, leaving the root password to rot.

Using passwords at all for SSH is dangerous, since password-sniffing trojaned ssh daemons are put into place in something like 90% of the real-world system compromises I've seen. It's much better to use SSH keys, and this can be a workable system for remote root access as well.

But the problem there is now you've moved from password management to key management, and ssh keys aren't really very manageable. There's no way of restricting copies, and if someone does make a copy, they have all the attempts they want to brute-force the passphrase. You can make policy saying that keys must be stored on removable devices and only mounted when needed, but there's no way of enforcing that — and now you've introduced the possibility of a removable device getting lost or stolen.

The highest security is going to come through one-time keys or time/counter-based cryptographic tokens. These can be done in software, but tamper-resistant hardware is even better. In the open source world, there's WiKiD, YubiKey, or LinOTP, and of course there's also the proprietary heavyweight RSA SecurID. If you're in a medium-to-large organization, or even a security-conscious small one, I highly recommend looking into one of these approaches for administrative access. It's probably overkill for home, though, where you don't really have the management hassles — as long as you follow sensible security practices.
{ "source": [ "https://unix.stackexchange.com/questions/8581", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4477/" ] }
8,584
I'm setting up a cron job that will back up a MySQL database I have on my server, but I don't want it to keep overwriting the same file over and over again. Instead, I want to have an array of backups to choose from, done automatically. For example:

    ## Cronjob, run May 21st, 2011:
    mysqldump -u username -ppasword database > /path/to/file/21-03-2011.sql

    ## SAME Cronjob, run May 28th, 2011:
    mysqldump -u username -ppasword database > /path/to/file/28-03-2011.sql

And so on. Is there any way that I can use the system date and/or time as some kind of variable in my cron job? If not, what are your suggestions to accomplish the same?
You could try something like this (as glenn jackmann notes below, you have to escape all % characters):

    15 11 * * * touch "/tmp/$(date +\%d-\%m-\%Y).sql"

to see if your particular cron will run the command out of crontab as a script in and of itself, or if you need to write a script that figures out the date as a string and then runs your mysqldump command. Without escaping the %, "cron" on Redhat Enterprise Linux 5.0 (I think) kept giving me errors about not finding a matching ). This is because everything after an unescaped % is sent to standard input of the command. I would also take the recommendation to use the ISO 8601 date format (yyyy-mm-dd, which is %F) to make the file names order by date when sorted lexically.
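Putting that together for the original mysqldump job — a sketch, with the schedule, credentials and path as placeholders:

    # daily at 03:00; %F expands to yyyy-mm-dd, and every % must be escaped in a crontab
    0 3 * * * mysqldump -u username -ppassword database > "/path/to/file/$(date +\%F).sql"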
{ "source": [ "https://unix.stackexchange.com/questions/8584", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5317/" ] }
8,607
Every now and then I execute some python scripts which take quite long to run. I execute them like this:

    $ time python MyScript.py

How can I play a sound as soon as the execution of the script is done? I use Ubuntu 10.10 (Gnome desktop).
Append any command that plays a sound; this could be as simple as

    $ time mycommand; printf '\7'

or as complex as

    $ time mycommand && paplay itworked.ogg || paplay bombed.ogg

(Commands assume pulseaudio is installed; substitute your sound player, which will depend on your desktop environment.)
{ "source": [ "https://unix.stackexchange.com/questions/8607", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4784/" ] }
8,646
On my fedora VM, when running with my user account I have /usr/local/bin in my path:

    [justin@justin-fedora12 ~]$ env | grep PATH
    PATH=/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/justin/bin

And likewise when running su:

    [justin@justin-fedora12 ~]$ su -
    Password:
    [root@justin-fedora12 justin]# env | grep PATH
    PATH=/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/justin/bin

However, when running via sudo, this directory is not in the path:

    [root@justin-fedora12 justin]# exit
    [justin@justin-fedora12 ~]$ sudo bash
    [root@justin-fedora12 ~]# env | grep PATH
    PATH=/usr/kerberos/sbin:/usr/kerberos/bin:/sbin:/bin:/usr/sbin:/usr/bin

Why would the path be different when running via sudo?
Take a look at /etc/sudoers. The default file in Fedora (as well as in RHEL, and also Ubuntu and similar) includes this line:

    Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin

which ensures that your path is clean when running binaries under sudo. This helps protect against some of the concerns noted in this question. It's also convenient if you don't have /sbin and /usr/sbin in your own path.
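If you want a one-off sudo invocation to see your own PATH anyway, passing the variable explicitly works — a sketch, where my-local-script is just a placeholder for something in /usr/local/bin:

    sudo env "PATH=$PATH" my-local-script

For a permanent change, the secure_path line itself can be extended via visudo instead.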
{ "source": [ "https://unix.stackexchange.com/questions/8646", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3635/" ] }
8,656
Why are there so many places to put a binary in Linux? There are at least these five:

    /bin/
    /sbin/
    /usr/bin/
    /usr/local/bin/
    /usr/local/sbin/

And on my office box, I do not have write permissions to some of these. What type of binary goes into which of these bins?
/bin (and /sbin) were intended for programs that needed to be on a small / partition before the larger /usr, etc. partitions were mounted. These days, it mostly serves as a standard location for key programs like /bin/sh, although the original intent may still be relevant for e.g. installations on small embedded devices.

/sbin, as distinct from /bin, is for system management programs (not normally used by ordinary users) needed before /usr is mounted.

/usr/bin is for distribution-managed normal user programs. There is a /usr/sbin with the same relationship to /usr/bin as /sbin has to /bin.

/usr/local/bin is for normal user programs not managed by the distribution package manager, e.g. locally compiled packages. You should not install them into /usr/bin because future distribution upgrades may modify or delete them without warning.

/usr/local/sbin, as you can probably guess at this point, is to /usr/local/bin as /usr/sbin is to /usr/bin.

In addition, there is also /opt, which is for monolithic non-distribution packages, although before they were properly integrated various distributions put Gnome and KDE there. Generally you should reserve it for large, poorly behaved third-party packages such as Oracle.
{ "source": [ "https://unix.stackexchange.com/questions/8656", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/912/" ] }
8,677
I have often started to think about this but never found a good answer. Why are these two Unix directories not /user and /temp instead? All the other directories under root seem to be exactly what one would guess them to be, but these two seem odd, I would have always guessed them as user and temp . Is there some historical reason for the spellings?
Yup there were reasons. They are pronounced user and temp. passwd is similar, as is resolv.conf. Unix is an expert friendly, user antagonistic operating system. I was a student when 300 Baud modems were the norm. I was the envy of my fellow students, since I had a Silent 700 terminal from Control Data where I was working. You could see the delay from typing each character and waiting for it to be echoed. Every character counted; I also see it as fostering the start of leet speak. The hjkl from vi have a history which few know. vi was developed by Bill Joy when he was a grad student at UCB during these same years. The ADM 3a terminals in Cory Hall had arrow keys above those letters
{ "source": [ "https://unix.stackexchange.com/questions/8677", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/912/" ] }
8,690
What is the difference between the halt and shutdown commands?
Generally, one uses the shutdown command. It allows a time delay and warning message before shutdown or reboot, which is important for system administration of multiuser shell servers; it can provide the users with advance notice of the downtime. As such, the shutdown command has to be used like this to halt/switch off the computer immediately (on Linux and FreeBSD at least):

    shutdown -h now

Or to reboot it with a custom, 30 minute advance warning:

    shutdown -r +30 "Planned software upgrades"

After the delay, shutdown tells init to change to runlevel 0 (halt) or 6 (reboot). (Note that omitting -h or -r will cause the system to go into single-user mode (runlevel 1), which kills most system processes but does not actually halt the system; it still allows the administrator to remain logged in as root.) Once system processes have been killed and filesystems have been unmounted, the system halts/powers off or reboots automatically. This is done using the halt or reboot command, which syncs changes to disks and then performs the actual halt/power off or reboot.

On Linux, if halt or reboot is run when the system has not already started the shutdown process, it will invoke the shutdown command automatically rather than directly performing its intended action. However, on systems such as FreeBSD, these commands first log the action in wtmp and then will immediately perform the halt/reboot themselves, without first killing processes or unmounting filesystems.
{ "source": [ "https://unix.stackexchange.com/questions/8690", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2897/" ] }
8,707
I used to think SCP is a tool to copy files over SSH, and copying files over SSH is called SFTP, which is itself a synonym to FISH. But now as I was looking for a Total Commander plugin to do this in Windows, I've noticed that on its page it says "Allows access to remote servers via secure FTP (FTP via SSH). Requires SSH2. This is NOT the same as SCP!". If it's not the same then what am I misunderstanding?
SFTP isn't the FTP protocol over ssh, but an extension to the SSH protocol included in SSH2 (and some SSH1 implementations). SFTP is a file transfer protocol similar to FTP, but it uses the SSH protocol as the network protocol (and benefits from leaving SSH to handle the authentication and encryption). SCP is only for transferring files, and can't do other things like list remote directories or remove files, which SFTP does do. FISH appears to be yet another protocol that can use either SSH or RSH to transfer files.

UPDATE (2021/03/09): According to the release notes of OpenSSH 8.0/8.0p1 (2019-04-17):

    The scp protocol is outdated, inflexible and not readily fixed. We recommend the use of
    more modern protocols like sftp and rsync for file transfer instead.
{ "source": [ "https://unix.stackexchange.com/questions/8707", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2119/" ] }
8,736
What is SSH - the protocol?
What is ssh - the unix utility, and how does it work?
How is the SSH protocol related to SFTP?
What is sshd?
Does the command su use ssh or sshd?
The SSH protocol is defined by what the ssh and sshd programs accept. (There is a standard defined for it, but it's an after-the-fact thing and is mostly ignored when one of the implementations adds new features.) Since there are multiple implementations (OpenSSH, F-Secure, PuTTY, etc.), occasionally you'll find that one of them doesn't support the same protocol as the others. Basically, it defines authentication negotiation and creation of a multiplexed data stream. This stream can carry one or more (with OpenSSH and ControlMaster) terminal sessions and zero or more tunnels (forwarding socket connections from either local or remote to the other side; X11 forwarding is a special case of remote forwarding). It also defines "subsystems" that can be used over the stream; terminal sessions are the basic subsystem, but others can be defined. sftp is one of these.

ssh the utility uses the SSH protocol to talk to sshd on another machine. How it works depends on what version it is (see above), but the gist of it is that it attempts to figure out which version of the SSH protocol to use, then it and sshd negotiate supported authentication methods, then it tries to authenticate you using one of those methods (asking for the remote user password / private key password / S/Key phrase as necessary), and on successful authentication sets up a multiplexed stream with the sshd.

sshd, as said above, implements the server side of the SSH protocol.

sftp is a (at present, the only standard) subsystem defined in most sshd implementations. When the SFTP subsystem is requested, sshd connects sftp-server to the subsystem session; the sftp program then talks to it, similarly to ftp but with file transfers multiplexed on the stream instead of using separate connections as with ftp.

su has nothing to do with ssh, sshd, or sftp, except insofar as there may be PAM modules to arrange for the multiplexed stream to be available within the shell or program run by it.
{ "source": [ "https://unix.stackexchange.com/questions/8736", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4950/" ] }
8,750
Is it possible to find all php files within a certain directory that have been modified on a certain date? I'm using

    find /var/www/html/dir/ -mtime -28 | grep '\.php'

to get files modified within the last 28 days, but I only need files that have been modified on the following date: 2011-02-08
On recent versions of find (e.g. GNU 4.4.0) you can use the -newermt option. For example, to find all files that have been modified on the 2011-02-08:

    find /var/www/html/dir/ -type f -name "*.php" -newermt 2011-02-08 ! -newermt 2011-02-09

Also note that you don't need to pipe into grep to find php files, because find can do that for you in the -name option. Take a look at this SO answer for more suggestions: How to use 'find' to search for files created on a specific date?
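On an older find without -newermt, a common workaround is to compare against two reference files created with touch — a sketch; the /tmp file names are arbitrary:

    touch -t 201102080000 /tmp/date_start
    touch -t 201102090000 /tmp/date_end
    find /var/www/html/dir/ -type f -name "*.php" -newer /tmp/date_start ! -newer /tmp/date_end

-newer compares modification times against those files, so this brackets the target day.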
{ "source": [ "https://unix.stackexchange.com/questions/8750", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1245/" ] }
8,760
Is there a linux command or some way to look at logs from bottom up rather than from top towards bottom. I know about tail -n <number of lines> , but is there something that I can actually scroll and go from bottom up?
Some systems have tac, which is a whimsically-named backward cat. Without that, you can still do something like

    awk '{print NR ":" $0}' $file | sort -t: -k 1nr,1 | sed 's/^[0-9][0-9]*://'
{ "source": [ "https://unix.stackexchange.com/questions/8760", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4709/" ] }
8,840
Is it possible to get the time when file was opened last time and sort all files in a directory by those times?
This depends on exactly what you mean by "opened", but in general, yes. There are three timestamps normally recorded:

mtime — updated when the file contents change. This is the "default" file time in most cases.
ctime — updated when the file or its metadata (owner, permissions) change
atime — updated when the file is read

So, generally, what you want to see is the atime of a file. You can get that with stat or with ls. You can use ls -lu to do this, although I prefer to use ls -l --time=atime (which should be supported in almost all modern Linux distributions) because I don't use it often, and when I do I can remember it better. And to sort by time, add the -t flag to ls. So there you go.

There is a big caveat, though. Updating the atime every time a file is read causes a lot of usually-unnecessary IO, slowing everything down. So, most Linux distributions now default to the noatime filesystem mount option, which basically kills atimes, or else relatime, which only updates atimes once a limit has passed (normally once per day) or if the file was actually modified since the previous read. You can find if these options are active by running the mount command.

Also, note that access times are by inode, not by filename, so if you have hardlinks, reading from one will update all names that refer to the same file. And, be aware that c is not "creation"; creation isn't tracked by Unix/Linux filesystems, which seems strange but actually makes sense because the filesystem has no way of knowing if it is the original — maybe the file was created forty years ago and copied here. And, in fact, many file editors work by making copies over the original. If you need that information, it's best to use a version control system like git.

A little update, a decade later: some filesystems, like btrfs, include a fourth timestamp:

birth, or sometimes but not consistently, btime — this is explicitly a "file creation timestamp" ... at least, in the statx(2) system call man page.

It seems to be pretty sparsely documented overall. It really suffers from the same problem that made it get left out in the first place: what exactly does it mean? Consider:

    $ echo > foo; stat foo|grep Birth
     Birth: 2022-04-30 02:38:19.084919920 -0400
    $ cp foo foo.tmp; mv foo.tmp foo; stat foo|grep Birth
     Birth: 2022-04-30 02:39:00.950269045 -0400

Here, I've created a file, and then copied it to a temporary file, and then moved that back. This changes the "birth" time. Is that right? Maybe! I'm not sure that's really useful, at least not for humans. But anyway, for completeness: that's a thing you might have available on a modern system.
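Putting those pieces together — a short sketch of checking whether atimes are being recorded at all and then listing a directory by access time; the mount point and directory are just examples:

    # see whether / is mounted with noatime or relatime
    mount | grep ' / '

    # newest-accessed files first
    ls -ltu /var/log

    # or per-file, via GNU stat (%x is the access time)
    stat -c '%x  %n' /var/log/*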
{ "source": [ "https://unix.stackexchange.com/questions/8840", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5289/" ] }
8,914
I have noticed that subsequent runs of grep on the same query (and also a different query, but on the same file) are much faster than the first run (the effect is easily noticeable when searching through a big file). This suggests that grep uses some sort of caching of the structures used for search, but I could not find a reference on the Internet. What mechanism enables grep to return results faster in subsequent searches?
Not grep as such, but the filesystem itself often caches recently read data, causing later runs to go faster since grep is effectively searching in memory instead of disk.
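You can see that cache at work by timing the same search twice and, as root, dropping the page cache in between — a sketch; the path and pattern are just examples, and dropping caches will temporarily slow the whole system:

    time grep -r pattern /var/log > /dev/null    # cold cache: reads from disk
    time grep -r pattern /var/log > /dev/null    # warm cache: much faster

    # as root, discard cached pages and the first timing comes back
    echo 3 > /proc/sys/vm/drop_caches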
{ "source": [ "https://unix.stackexchange.com/questions/8914", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/912/" ] }
8,916
I am always very hesitant to run kill -9 , but I see other admins do it almost routinely. I figure there is probably a sensible middle ground, so: When and why should kill -9 be used? When and why not? What should be tried before doing it? What kind of debugging a "hung" process could cause further problems?
Generally, you should use kill (short for kill -s TERM, or on most systems kill -15) before kill -9 (kill -s KILL) to give the target process a chance to clean up after itself. (Processes can't catch or ignore SIGKILL, but they can and often do catch SIGTERM.) If you don't give the process a chance to finish what it's doing and clean up, it may leave corrupted files (or other state) around that it won't be able to understand once restarted.

strace/truss, ltrace and gdb are generally good ideas for looking at why a stuck process is stuck. (truss -u on Solaris is particularly helpful; I find ltrace too often presents arguments to library calls in an unusable format.) Solaris also has useful /proc-based tools, some of which have been ported to Linux (pstack is often helpful).
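A common pattern that follows this advice is to escalate only if the polite signal doesn't work — a sketch, with the PID variable and the wait time as placeholders:

    kill -15 "$pid"                                 # ask nicely (SIGTERM)
    sleep 10                                        # give it time to clean up
    kill -0 "$pid" 2>/dev/null && kill -9 "$pid"    # still alive? force it (SIGKILL)

kill -0 sends no signal at all; it just checks whether the process still exists, so SIGKILL is only sent when the graceful attempt has failed.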
{ "source": [ "https://unix.stackexchange.com/questions/8916", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3169/" ] }
8,986
Is there a Linux command like cat that joins files with the same number of lines horizontally?
paste may do the trick.

    % cat t1
    a
    b
    c
    c
    d
    f
    g
    % cat t2
    h
    i
    j
    k
    l
    m
    n
    % paste t1 t2
    a	h
    b	i
    c	j
    c	k
    d	l
    f	m
    g	n

At least some of the time, you don't need to have a "key" to concatenate the lines.
{ "source": [ "https://unix.stackexchange.com/questions/8986", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
9,024
I'd like to know how to reuse the last output from the console, i.e.:

    pv-3:method Xavier$ python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()"
    /Library/Python/2.6/site-packages
    pv-3:method Xavier$ cd **LASTOUTPUT**
Assuming history expansion is enabled, that you're running Bash or some other shell that supports it, that the command is idempotent, and that waiting for it to run a second time is not an issue, you could use the !! form of history expansion to get the last command line again, running the previous command inside a command substitution:

    % python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()"
    /usr/lib/python2.7/site-packages
    % cd $(!!)
    cd $(python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()")
    % pwd
    /usr/lib/python2.7/site-packages
{ "source": [ "https://unix.stackexchange.com/questions/9024", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5613/" ] }
9,123
I find myself repeating a lot of: mkdir longtitleproject cd longtitleproject Is there a way of doing it in one line without repeating the directory name? I'm on bash here.
This is the one-liner that you need. No other config needed:

    mkdir longtitleproject && cd $_

The $_ variable, in bash, is the last argument given to the previous command. In this case, the name of the directory you just created. As explained in man bash:

    _      At shell startup, set to the absolute pathname used to invoke the shell or shell
           script being executed as passed in the environment or argument list. Subsequently,
           expands to the last argument to the previous command, after expansion. Also set to
           the full pathname used to invoke each command executed and placed in the
           environment exported to that command. When checking mail, this parameter holds
           the name of the mail file currently being checked.

"$_" is the last argument of the previous command. Use cd $_ to retrieve the last argument of the previous command instead of cd !$, because cd !$ gives the last argument of the previous command in the shell history:

    cd ~/
    mkdir folder && cd !$

you end up home (or ~/)

    cd ~/
    mkdir newfolder && cd $_

you end up in newfolder under home!! (or ~/newfolder)
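If you type this a lot, a tiny shell function in ~/.bashrc does the same thing with one word — a sketch; the name mkcd is just a convention, not a standard command:

    mkcd() {
        mkdir -p -- "$1" && cd -P -- "$1"
    }

Then mkcd longtitleproject creates the directory (including any missing parents) and changes into it; the -- guards against names that start with a dash.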
{ "source": [ "https://unix.stackexchange.com/questions/9123", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5613/" ] }
9,152
I have several script that will launch all the apps and files related to a specific project. But, it will launch multiple emacs instances, rather than simply cause the current emacs to open the requested files. I'd rather the current emacs simply opened the project text files in a new buffer. Any ideas how I can do that?
M-x server-start inside the Emacs session, then use emacsclient -n file1 file2 ... to add files to the existing Emacs. There are additional options you might want to use, e.g. -c to open the files in a new window (frame).
{ "source": [ "https://unix.stackexchange.com/questions/9152", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3127/" ] }
9,170
I often use vim's / search command to verify my regular expressions (just to see what they match). After that I usually use the :%s replace command, where I use that regexp from the search as the string to be replaced, e.g. I first look for such a string:

    /TP-\(\d\{5\}\)-DD-\d\{3\}

It matches exactly what I want, so I do my replace:

    :%s/TP-\(\d\{5\}\)-DD-\d\{3\}/\1/g

But I have to write the entire regexp again here. Usually that regexp is much longer, that's why I'm looking for a solution: Is there any existing shortcut or vim script for pasting that search pattern directly into the replace command? I use vim in the terminal (no gvim).
In general, an empty regular expression means to use the previously entered regular expression, so :%s//\1/g should do what you want.
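Applied to the example from the question, the flow looks like this; the Ctrl-R alternative (standard vim behaviour, shown here as a sketch) pastes the stored search pattern into the command line in case you want to see or tweak it before substituting:

    " search first and check the matches
    /TP-\(\d\{5\}\)-DD-\d\{3\}
    " an empty pattern in :s reuses the last search
    :%s//\1/g
    " or press Ctrl-R then / on the : command line to insert the search register
    :%s/<C-r>//\1/g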
{ "source": [ "https://unix.stackexchange.com/questions/9170", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5361/" ] }
9,247
How can I get the list of all files under current directory along with their modification date and sorted by that date? Now I know how to achieve that with find , stat and sort , but for some weird reason the stat is not installed on the box and it's unlikely that I can get it installed. Any other option? PS: gcc is not installed either
My shortest method uses zsh:

    print -rl -- **/*(.Om)

(add the D glob qualifiers if you also want to list the hidden files or the files in hidden directories).

If you have GNU find, make it print the file modification times and sort by that. I assume there are no newlines in file names.

    find . -type f -printf '%T@ %p\n' | sort -k 1 -n | sed 's/^[^ ]* //'

If you have Perl (again, assuming no newlines in file names):

    find . -type f -print | perl -l -ne '
        $_{$_} = -M;            # store file age (mtime - now)
        END {
            $,="\n";
            print sort {$_{$b} <=> $_{$a}} keys %_;  # print by decreasing age
        }'

If you have Python (again, assuming no newlines in file names):

    find . -type f -print | python -c 'import os, sys; times = {}
    for f in sys.stdin.readlines(): f = f[0:-1]; times[f] = os.stat(f).st_mtime
    for f in sorted(times.iterkeys(), key=lambda f:times[f]): print f'

If you have SSH access to that server, mount the directory over sshfs on a better-equipped machine:

    mkdir mnt
    sshfs server:/path/to/directory mnt
    zsh -c 'cd mnt && print -rl **/*(.Om)'
    fusermount -u mnt

With only POSIX tools, it's a lot more complicated, because there's no good way to find the modification time of a file. The only standard way to retrieve a file's times is ls, and the output format is locale-dependent and hard to parse. If you can write to the files, and you only care about regular files, and there are no newlines in file names, here's a horrible kludge: create hard links to all the files in a single directory, and sort them by modification time.

    set -ef                   # disable globbing
    IFS='
'                             # split $(foo) only at newlines
    set -- $(find . -type f)  # set positional arguments to the file names
    mkdir links.tmp
    cd links.tmp
    i=0
    list=
    for f; do                 # hard link the files to links.tmp/0, links.tmp/1, …
      ln "../$f" $i
      i=$(($i+1))
    done
    set +f
    for f in $(ls -t [0-9]*); do   # for each file, in reverse mtime order:
      eval 'list="${'$(($f+1))'}   # prepend the file name to $list
$list"'
    done
    printf %s "$list"         # print the output
    rm -f [0-9]*              # clean up
    cd ..
    rmdir links.tmp
{ "source": [ "https://unix.stackexchange.com/questions/9247", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1946/" ] }
9,252
I know that using the command: lsof -i TCP (or some variant of parameters with lsof) I can determine which process is bound to a particular port. This is useful say if I'm trying to start something that wants to bind to 8080 and some else is already using that port, but I don't know what. Is there an easy way to do this without using lsof? I spend time working on many systems and lsof is often not installed.
netstat -lnp will list the pid and process name next to each listening port. This will work under Linux, but not all others (like AIX). Add -t if you want TCP only.

    # netstat -lntp
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp        0      0 0.0.0.0:24800           0.0.0.0:*               LISTEN      27899/synergys
    tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      3361/python
    tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      2264/mysqld
    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      22964/apache2
    tcp        0      0 192.168.99.1:53         0.0.0.0:*               LISTEN      3389/named
    tcp        0      0 192.168.88.1:53         0.0.0.0:*               LISTEN      3389/named

etc.
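On newer Linux systems where netstat itself may be missing (it ships in the older net-tools package), ss from iproute2 reports the same thing — a sketch:

    # listening TCP sockets with the owning process (root needed to see other users' processes)
    ss -lntp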
{ "source": [ "https://unix.stackexchange.com/questions/9252", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
9,314
I created a startup script to start/restart/stop a group of applications. I used the lib /etc/init.d/functions in my script. It is working well on my system, but it not working for my client; he is getting the error: No such file or directory /etc/init.d/functions Right now I don't know which linux distro my client uses. Is the init.d/functions file different for different Linux distros? If so, how can I find it?
It's specific to whatever distribution you're running. Debian and Ubuntu have /lib/lsb/init-functions ; SuSE has /etc/rc.status ; none of them are compatible with the others. In fact, some distributions don't use /etc/init.d at all, or use it in an incompatible way (Slackware and Arch occur to me off the top of my head; there are others).
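If the script has to run on several distributions anyway, one common approach is to probe for whichever helper library exists and fall back to plain sh functions — a sketch; the fallback function names and bodies are only placeholders, not something every init system provides:

    if [ -r /etc/init.d/functions ]; then
        . /etc/init.d/functions          # Red Hat / Fedora / CentOS
    elif [ -r /lib/lsb/init-functions ]; then
        . /lib/lsb/init-functions        # Debian / Ubuntu (LSB)
    else
        # minimal stand-ins so the rest of the script still runs
        log_success_msg() { echo "$@"; }
        log_failure_msg() { echo "$@" >&2; }
    fi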
{ "source": [ "https://unix.stackexchange.com/questions/9314", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5754/" ] }
9,332
I would like to create a " /dev/null " directory (or a "blackhole" directory) such that any files written to it are not really written, but just disappear. I have an application that writes out large temporary files to a directory. I have no control over the name of the files and I don't really care about the content of these files. I could write a script that periodically clobbers these files, but the files are written out very quickly and fill my disk. I'm looking for something cleverer. I want the application to "think" that it is writing out these files, when in fact, the writes are just being discarded at the other end. Also see this old related thread.
This isn't supported out-of-the-box on any unix I know, but you can do pretty much anything with FUSE . There's at least one implementation of nullfs¹ , a filesystem where every file exists and behaves like /dev/null (this isn't the only implementation I've ever seen). ¹ Not to be confused with the *BSD nullfs , which is analogous to bindfs .
{ "source": [ "https://unix.stackexchange.com/questions/9332", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/872/" ] }
9,356
I want to print lines from a file backwards without using tac command. Is there any other solution to do such thing with bash?
Using sed to emulate tac:

    sed '1!G;h;$!d' "${inputfile}"
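If you prefer awk, an equivalent that buffers the file in memory and prints it back in reverse — a sketch:

    awk '{ lines[NR] = $0 } END { for (i = NR; i >= 1; i--) print lines[i] }' "${inputfile}"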
{ "source": [ "https://unix.stackexchange.com/questions/9356", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
9,432
On occasion I've seen comments online along the lines of "make sure you set 'bs=' because the default value will take too long," and my own extremely-unscientific experiences of, "well that seemed to take longer than that other time last week" seem to bear that out. So whenever I use 'dd' (typically in the 1-2GB range) I make sure to specify the bytes parameter. About half the time I use the value specified in whatever online guide I'm copying from; the rest of the time I'll pick some number that makes sense from the 'fdisk -l' listing for what I assume is the slower media (e.g. the SD card I'm writing to). For a given situation (media type, bus sizes, or whatever else matters), is there a way to determine a "best" value? Is it easy to determine? If not, is there an easy way to get 90-95% of the way there? Or is "just pick something bigger than 512" even the correct answer? I've thought of trying the experiment myself, but (in addition to being a lot of work) I'm not sure what factors impact the answer, so I don't know how to design a good experiment.
dd dates from back when it was needed to translate old IBM mainframe tapes, and the block size had to match the one used to write the tape or data blocks would be skipped or truncated. (9-track tapes were finicky. Be glad they're long dead.) These days, the block size should be a multiple of the device sector size (usually 4KB, but on very recent disks may be much larger and on very small thumb drives may be smaller, but 4KB is a reasonable middle ground regardless) and the larger the better for performance. I often use 1MB block sizes with hard drives. (We have a lot more memory to throw around these days too.)
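As a concrete illustration of that advice — query the device's sector size, then use a large block size that is a multiple of it. A sketch; the device and image names are examples, and status=progress needs a reasonably recent GNU dd:

    # physical sector size of the target disk (often 512 or 4096)
    blockdev --getpbsz /dev/sdX

    # 1MB blocks are a multiple of either common sector size
    dd if=image.img of=/dev/sdX bs=1M status=progress conv=fsync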
{ "source": [ "https://unix.stackexchange.com/questions/9432", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
9,456
I can sudo, but I don't have the root password so I can't su root . Using sudo, can I change the root password?
So you want to run something like sudo passwd root ?
{ "source": [ "https://unix.stackexchange.com/questions/9456", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1594/" ] }
9,466
I used crontab -e to add the following line to my crontab: * * * * * echo hi >> /home/myusername/test Yet, I don't see that the test file is written to. Is this a permission problem, or is crontab not working correctly? I see that the cron process is running. How can I debug this? Edit - Ask Ubuntu has a nice question about crontab , unfortunately that still doesn't help me. Edit 2 - Hmm, it seems my test file has 214 lines, which means for the last 214 minutes it has been written to every minute. I'm not sure what was the problem, but it's evidently gone.
There are implementations of cron (not all of them, and I don't remember which offhand, but I've encountered one under Linux) that check for updated crontab files every minute on the minute, and do not consider new entries until the next minute. Therefore, a crontab can take up to two minutes to fire up for the first time. This may be what you observed.
{ "source": [ "https://unix.stackexchange.com/questions/9466", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1594/" ] }
9,468
How to get the char at a given position of a string in shell script?
In bash, with "Parameter Expansion" ${parameter:offset:length}:

    $ var=abcdef
    $ echo ${var:0:1}
    a
    $ echo ${var:3:1}
    d

The same parameter expansion can be used to assign a new variable:

    $ x=${var:1:1}
    $ echo $x
    b

Edit: Without parameter expansion (not very elegant, but that's what came to me first):

    $ charpos() { pos=$1;shift; echo "$@"|sed 's/^.\{'$pos'\}\(.\).*$/\1/';}
    $ charpos 8 what ever here
    r
{ "source": [ "https://unix.stackexchange.com/questions/9468", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3608/" ] }
9,496
I wrote the following script to diff the outputs of two directories with all the same files in them, as such:

    #!/bin/bash

    for file in `find . -name "*.csv"`
    do
       echo "file = $file";
       diff $file /some/other/path/$file;
       read char;
    done

I know there are other ways to achieve this. Curiously though, this script fails when the files have spaces in them. How can I deal with this? Example output of find:

    ./zQuery - abc - Do Not Prompt for Date.csv
Short answer (closest to your answer, but handles spaces)

    OIFS="$IFS"
    IFS=$'\n'
    for file in `find . -type f -name "*.csv"`
    do
         echo "file = $file"
         diff "$file" "/some/other/path/$file"
         read line
    done
    IFS="$OIFS"

Better answer (also handles wildcards and newlines in file names)

    find . -type f -name "*.csv" -print0 | while IFS= read -r -d '' file; do
        echo "file = $file"
        diff "$file" "/some/other/path/$file"
        read line </dev/tty
    done

Best answer (based on Gilles' answer)

    find . -type f -name '*.csv' -exec sh -c '
      file="$0"
      echo "$file"
      diff "$file" "/some/other/path/$file"
      read line </dev/tty
    ' exec-sh {} ';'

Or even better, to avoid running one sh per file:

    find . -type f -name '*.csv' -exec sh -c '
      for file do
        echo "$file"
        diff "$file" "/some/other/path/$file"
        read line </dev/tty
      done
    ' exec-sh {} +

Long answer

You have three problems:

1. By default, the shell splits the output of a command on spaces, tabs, and newlines
2. Filenames could contain wildcard characters which would get expanded
3. What if there is a directory whose name ends in *.csv?

1. Splitting only on newlines

To figure out what to set file to, the shell has to take the output of find and interpret it somehow, otherwise file would just be the entire output of find. The shell reads the IFS variable, which is set to <space><tab><newline> by default. Then it looks at each character in the output of find. As soon as it sees any character that's in IFS, it thinks that marks the end of the file name, so it sets file to whatever characters it saw until now and runs the loop. Then it starts where it left off to get the next file name, and runs the next loop, etc., until it reaches the end of output. So it's effectively doing this:

    for file in "zquery" "-" "abc" ...

To tell it to only split the input on newlines, you need to do IFS=$'\n' before your for ... find command. That sets IFS to a single newline, so it only splits on newlines, and not spaces and tabs as well.

If you are using sh or dash instead of ksh93, bash or zsh, you need to write IFS=$'\n' like this instead:

    IFS='
'

That is probably enough to get your script working, but if you're interested to handle some other corner cases properly, read on...

2. Expanding $file without wildcards

Inside the loop where you do

    diff $file /some/other/path/$file

the shell tries to expand $file (again!). It could contain spaces, but since we already set IFS above, that won't be a problem here. But it could also contain wildcard characters such as * or ?, which would lead to unpredictable behavior. (Thanks to Gilles for pointing this out.) To tell the shell not to expand wildcard characters, put the variable inside double quotes, e.g.

    diff "$file" "/some/other/path/$file"

The same problem could also bite us in

    for file in `find . -name "*.csv"`

For example, if you had these three files

    file1.csv
    file2.csv
    *.csv

(very unlikely, but still possible), it would be as if you had run

    for file in file1.csv file2.csv *.csv

which will get expanded to

    for file in file1.csv file2.csv file1.csv file2.csv

causing file1.csv and file2.csv to be processed twice. Instead, we have to do

    find . -name "*.csv" -print | while IFS= read -r file; do
        echo "file = $file"
        diff "$file" "/some/other/path/$file"
        read line </dev/tty
    done

read reads lines from standard input, splits the line into words according to IFS and stores them in the variable names that you specify. Here, we're telling it not to split the line into words, and to store the line in $file.
Also note that read line has changed to read line </dev/tty . This is because inside the loop, standard input is coming from find via the pipeline. If we just did read , it would be consuming part or all of a file name, and some files would be skipped. /dev/tty is the terminal where the user is running the script from. Note that this will cause an error if the script is run via cron, but I assume this is not important in this case. Then, what if a file name contains newlines? We can handle that by changing -print to -print0 and using read -d '' on the end of a pipeline: find . -name "*.csv" -print0 | while IFS= read -r -d '' file; do echo "file = $file" diff "$file" "/some/other/path/$file" read char </dev/tty done This makes find put a null byte at the end of each file name. Null bytes are the only characters not allowed in file names, so this should handle all possible file names, no matter how weird. To get the file name on the other side, we use IFS= read -r -d '' . Where we used read above, we used the default line delimiter of newline, but now, find is using null as the line delimiter. In bash , you can't pass a NUL character in an argument to a command (even builtin ones), but bash understands -d '' as meaning NUL delimited . So we use -d '' to make read use the same line delimiter as find . Note that -d $'\0' , incidentally, works as well, because bash not supporting NUL bytes treats it as the empty string. To be correct, we also add -r , which says don't handle backslashes in file names specially. For example, without -r , \<newline> are removed, and \n is converted into n . A more portable way of writing this that doesn't require bash or zsh or remembering all the above rules about null bytes (again, thanks to Gilles): find . -name '*.csv' -exec sh -c ' file="$0" echo "$file" diff "$file" "/some/other/path/$file" read char </dev/tty ' exec-sh {} ';' * 3. Skipping directories whose names end in .csv find . -name "*.csv" will also match directories that are called something.csv . To avoid this, add -type f to the find command. find . -type f -name '*.csv' -exec sh -c ' file="$0" echo "$file" diff "$file" "/some/other/path/$file" read line </dev/tty ' exec-sh {} ';' As glenn jackman points out, in both of these examples, the commands to execute for each file are being run in a subshell, so if you change any variables inside the loop, they will be forgotten. If you need to set variables and have them still set at the end of the loop, you can rewrite it to use process substitution like this: i=0 while IFS= read -r -d '' file; do echo "file = $file" diff "$file" "/some/other/path/$file" read line </dev/tty i=$((i+1)) done < <(find . -type f -name '*.csv' -print0) echo "$i files processed" Note that if you try copying and pasting this at the command line, read line will consume the echo "$i files processed" , so that command won't get run. To avoid this, you could remove read line </dev/tty and send the result to a pager like less . NOTES I removed the semi-colons ( ; ) inside the loop. You can put them back if you want, but they are not needed. These days, $(command) is more common than `command` . This is mainly because it's easier to write $(command1 $(command2)) than `command1 \`command2\`` . read char doesn't really read a character. It reads a whole line so I changed it to read line .
{ "source": [ "https://unix.stackexchange.com/questions/9496", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3115/" ] }
9,501
How to check what shell I am using in a terminal? What is the shell I am using in MacOS?
Several ways, from most to least reliable (and most-to-least "heavy"): ps -p$$ -ocmd= . (On Solaris, this may need to be ps -p$$ -ofname= and on macOS and on BSD should be ps -p$$ -ocommand= .) Check for $BASH_VERSION , $ZSH_VERSION , and other shell-specific variables. Check $SHELL ; this is a last resort, as it specifies your default shell and not necessarily the current shell.
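For example, here is a minimal sketch combining those checks, with the shell-specific variables tested first and ps as the fallback (it reports whatever interpreter is actually running the snippet):
if [ -n "$ZSH_VERSION" ]; then
    echo "zsh $ZSH_VERSION"
elif [ -n "$BASH_VERSION" ]; then
    echo "bash $BASH_VERSION"
else
    # fall back to asking the kernel which command this process is running
    ps -p $$ -o comm=
fi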
{ "source": [ "https://unix.stackexchange.com/questions/9501", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5837/" ] }
9,509
I can never remember what the conversion is from something like rw-r--r-- to 644 . Is there a simple web based converter between the 2?
This site provides an interactive way to see which permission bits are set as you toggle the various checkboxes: http://permissions-calculator.org/ (screenshot of the calculator omitted).
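If you just want the conversion for a file you already have, GNU stat can print both forms at once (this assumes GNU coreutils; BSD and macOS stat use different flags). The mapping itself is plain octal: r=4, w=2, x=1 for each of user/group/other, so rw-r--r-- is (4+2) (4) (4) = 644.
$ stat -c '%a %A' somefile
644 -rw-r--r--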
{ "source": [ "https://unix.stackexchange.com/questions/9509", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2076/" ] }
9,575
When would you use a hard link over a symbolic (soft) link, or vice versa?
The different semantics between hard and soft links make them suitable for different things. Hard links: indistinguishable from other directory entries, because every directory entry is a hard link the "original" can be moved or deleted without breaking other hard links to the same inode only possible within the same filesystem permissions must be the same as those on the "original" (permissions are stored in the inode, not the directory entry) can only be made to files, not directories Symbolic links (soft links): are simply records that point to another file path ( ls -l will show what path a symlink points to) will break if the original is moved or deleted (in some cases it is actually desirable for a link to point to whatever file currently occupies a particular location) can point to a file in a different filesystem can point to a directory on some file system formats, it is possible for the symlink to have different permissions than the file it points to (this is uncommon)
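A quick sketch that makes the difference visible from a shell (the file names are just examples); ls -li prints the inode number in the first column:
$ echo hello > original
$ ln original hardlink      # hard link: another directory entry for the same inode
$ ln -s original symlink    # soft link: a tiny file that records the path "original"
$ ls -li original hardlink symlink
# original and hardlink show the same inode and a link count of 2;
# symlink has its own inode and is listed as "symlink -> original"
$ rm original
$ cat hardlink              # still prints "hello": the data lives on while any hard link remains
$ cat symlink               # fails: the path the symlink records no longer exists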
{ "source": [ "https://unix.stackexchange.com/questions/9575", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1594/" ] }
9,597
I'm trying to pipe grep output to rm , but it outputs useless stuff. Is any switch required for rm ? Or can rm can be provided a regexp directly? ls | grep '^\[Daruchini'| rm rm: missing operand Try `rm --help' for more information.
You need to use xargs to turn standard input into arguments for rm . $ ls | grep '^Dar' | xargs rm (Beware of special characters in filenames; with GNU grep, you might prefer $ ls | grep -Z '^Dar' | xargs -0 rm ) Also, while the shell doesn't use regexps, that's a simple pattern: $ rm Dar* (meanwhile, I think I need more sleep.)
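If there are many such files, or the names might contain newlines, a find-based variant avoids parsing ls output entirely ( -maxdepth and -delete are GNU/BSD extensions, not POSIX):
$ find . -maxdepth 1 -type f -name 'Dar*' -delete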
{ "source": [ "https://unix.stackexchange.com/questions/9597", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5871/" ] }
9,600
When you use a / forward search or a ? backward search in less, all instances of the file get highlighted. After I've found the instance of the word I'm looking for, what is the most correct way to unhighlight something? Currently I just press / then mash gibberish into the input field. No results = no highlights! I'm looking for something akin to vim's :nohl feature, in less.
You can use Alt + u to remove the highlight on last search results. You can highlight them again with Alt + u , it's a toggle. Switching off the highlight does not switch off the status column , showing marks on each line containing a match , if the column is enabled using options -J or --status-column or keys - J . To hide the status column , use - + J . To show the status column, use - J . (Technically, Alt + u it's equivalent to ESC u on terminal level - that is why the Alt -key is not mentioned in the man page.)
{ "source": [ "https://unix.stackexchange.com/questions/9600", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2098/" ] }
9,605
I want to detect from a shell script (more specifically .zshrc) if it is controlled through SSH. I tried the HOST variable but it's always the name of the computer which is running the shell. Can I access the hostname where the SSH session is coming from? Comparing the two would solve my problem. Every time I log in there is a message stating the last login time and host: Last login: Fri Mar 18 23:07:28 CET 2011 from max on pts/1 Last login: Fri Mar 18 23:11:56 2011 from max This means the server has this information.
Here are the criteria I use in my ~/.profile : If one of the variables SSH_CLIENT or SSH_TTY is defined, it's an ssh session. If the login shell's parent process name is sshd , it's an ssh session. if [ -n "$SSH_CLIENT" ] || [ -n "$SSH_TTY" ]; then SESSION_TYPE=remote/ssh # many other tests omitted else case $(ps -o comm= -p "$PPID") in sshd|*/sshd) SESSION_TYPE=remote/ssh;; esac fi (Why would you want to test this in your shell configuration rather than your session startup?)
{ "source": [ "https://unix.stackexchange.com/questions/9605", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4477/" ] }
9,665
I'm wanting to create a tar archive of a specific directory (with its subdirectories of course). But when I do it, using the tar command, I get a list of files that were included, for example: a calendar_final/._style.css a calendar_final/style.css As you can see, there are two versions of the same file. This goes for every file, and there are many. How do I exclude the temporary files, with the ._ prefix, from the tar archive?
You posted in a comment that you are working on a Mac OS X system. This is an important clue to the purpose of these ._* files. These ._* archive entries are chunks of AppleDouble data that contain the extra information associated with the corresponding file (the one without the ._ prefix). They are generated by the Mac OS X–specific copyfile(3) family of functions. The AppleDouble blobs store access control data (ACLs) and extended attributes (commonly, Finder flags and “resource forks”, but xattrs can be used to store any kind of data). The system-supplied Mac OS X archive tools ( bsdtar (also symlinked as tar ), gnutar , and pax ) will generate a ._* archive member for any file that has any extended information associated with it; in “unarchive” mode, they will also decode those archive members and apply the resulting extended information to the associated file. This creates a “full fidelity” archive for use on Mac OS X systems by preserving and later extracting all the information that the HFS+ filesystem can store. The corresponding archive tools on other systems do not know to give special handling to these ._* files, so they are unpacked as normal files. Since such files are fairly useless on other systems, they are often seen as “junk files”. Correspondingly, if a non–Mac OS X system generates an archive that includes normal files that start with ._ , the Mac OS X unarchiving tools will try to decode those files as extended information. There is, however an undocumented(?) way to make the system-supplied Mac OS X archivers behave like they do on other Unixy systems: the COPYFILE_DISABLE environment variable. Setting this variable (to any value, even the empty string), will prevent the archivers from generating ._* archive members to represent any extended information associated with the archived files. Its presence will also prevent the archivers from trying to interpret such archive members as extended information. COPYFILE_DISABLE=1 tar czf new.tar.gz … COPYFILE_DISABLE=1 tar xzf unixy.tar.gz … You might set this variable in your shell’s initialization file if you want to work this way more often than not. # disable special creation/extraction of ._* files by tar, etc. on Mac OS X COPYFILE_DISABLE=1; export COPYFILE_DISABLE Then, when you need to re-enable the feature (to preserve/restore the extended information), you can “unset” the variable for individual commands: (unset COPYFILE_DISABLE; tar czf new-osx.tar.gz …) The archivers on Mac OS X 10.4 also do something similar, though they use a different environment variable: COPY_EXTENDED_ATTRIBUTES_DISABLE
{ "source": [ "https://unix.stackexchange.com/questions/9665", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
9,683
I think most are familiar with the which command, and I use it frequently. I just ran into a situation where I'm curious not just which command is first in my path, but how many and where all the commands in all my paths are. I tried the which man page (typing man which made me laugh), but didn't see anything.
On some systems, which -a shows all matches. If your shell is bash or zsh¹, you can use type instead: type foo shows the first match and type -a foo shows all matches. The three commands type , which and whence do mostly the same thing; they differ between shells and operating systems in availability, options, and what exactly they report. type is always available and shows all possible command-like names (aliases, keywords, shell built-ins, functions, and external commands). The only fully portable way to display all matches is to parse $PATH yourself. Here's a shell script that does this. If you make it a shell function, make sure to enclose the function body in parentheses (so that the change to IFS and set -f don't escape the function), and change exit to return . #!/bin/sh set -f # disable globbing IFS=: # break words at : only not_found=1 for d in $PATH; do if [ -f "$d/$x" ] && [ -x "$d/$x" ]; then printf '%s\n' "$d/$x" not_found=0 fi done exit $not_found ¹ Or ksh 93, according to the documentation, though ksh 93s+ 2008-01-31 only prints the first match when I try.
{ "source": [ "https://unix.stackexchange.com/questions/9683", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4954/" ] }
9,714
Why do we need fakeroot command at all? Can't we simply use the sudo or su commands? The man page says: fakeroot - run a command in an environment faking root privileges for file manipulation About.com says: Gives a fake root environment. This package is intended to enable something like: dpkg-buildpackage -rfakeroot i.e. to remove the need to become root for a package build. This is done by setting LD_PRELOAD to libfakeroot.so , which provides wrappers around getuid , chown , chmod , mknod , stat , ..., thereby creating a fake root environment. If you don't understand any of this, you do not need fakeroot ! My question is, what special purpose does it solve that a simple su or sudo don't? For example, for repacking all installed packages in ubuntu we give following command: $ fakeroot -u dpkg-repack `dpkg --get-selections | grep install | cut -f1` Can we do the above command with sudo or su instead of fakeroot like this: $ sudo dpkg-repack `dpkg --get-selections | grep install | cut -f1` EDIT: Running: $ sudo dpkg-repack `dpkg --get-selections | grep install | cut -f1` gives me this error: control directory has bad permissions 700 (must be >=0755 and <=0775) Any reason why?
Imagine that you are a developer/package maintainer, etc. working on a remote server. You want to update the contents of a package and rebuild it, download and customize a kernel from kernel.org and build it, etc. While trying to do those things, you'll find out that some steps require you to have root rights ( UID and GID 0) for different reasons (security, overlooked permissions, etc). But it is not possible to get root rights, since you are working on a remote machine (and many other users have the same problem as you). This is exactly what fakeroot does: it presents a fake effective UID and GID of 0 to the programs that ask for them. In practice you never get real root privileges (unlike with su and sudo , which you mention).
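You can watch the fakery from a shell, with no root rights involved at any point (demo.txt is just an example file name):
$ fakeroot sh -c 'touch demo.txt && chown root:root demo.txt && ls -l demo.txt'
# inside fakeroot, ls reports demo.txt as owned by root:root
$ ls -l demo.txt
# outside fakeroot, the file is still owned by your own user: nothing really changed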
{ "source": [ "https://unix.stackexchange.com/questions/9714", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5071/" ] }
9,784
I have a variable which contains the multiline output of a command. What's the most efficient way to read the output line by line from the variable? For example: jobs="$(jobs)" if [ "$jobs" ]; then # read lines from $jobs fi
You can use a while loop with process substitution: while read -r line do echo "$line" done < <(jobs) An optimal way to read a multiline variable is to set a blank IFS variable and printf the variable in with a trailing newline: # printf '%s\n' "$var" is necessary because with printf '%s' "$var", if the # variable doesn't end with a newline, the while loop will completely miss # the last line of the variable. while IFS= read -r line do echo "$line" done < <(printf '%s\n' "$var") Note: As per shellcheck SC2031, the use of process substitution is preferable to a pipe to avoid [subtly] creating a subshell. Also, please realize that naming the variable jobs may cause confusion, since that is also the name of a common shell command.
{ "source": [ "https://unix.stackexchange.com/questions/9784", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/903/" ] }
9,819
E.g. I'm seeing this in /var/log/messages : Mar 01 23:12:34 hostname shutdown: shutting down for system halt Is there a way to find out what caused the shutdown? E.g. was it run from console, or someone hit power button, etc.?
Try the following commands: Display list of last reboot entries: last reboot | less Display list of last shutdown entries: last -x | less or more precisely: last -x | grep shutdown | less You won't know who did it however. If you want to know who did it, you will need to add a bit of code which means you'll know next time. I've found this resource online. It might be useful to you: How to find out who or what halted my system
{ "source": [ "https://unix.stackexchange.com/questions/9819", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1946/" ] }
9,832
I am wondering if there is any historical or practical reason why the umount command is not unmount .
This dates all the way back to the very first edition of Unix , where all the standard file names were only at most 6 characters long (think passwd ), even though this version supported a whopping 8 characters in a file name . Most commands had an associated source file ending in .c (e.g. umount.c ), which left only 6 characters for the base name. A 6-character limitation might also have been a holdover from an earlier development version, or inherited from a then-current IBM system that did have a 6-character limitation. (Early C implementations had a 6-character limit on identifiers — longer identifiers were accepted but the compiler only looked at the first 6 characters, so foobar1 and foobar2 were the same variable.) (I thought I remembered a umount man page that listed the spelling as a bug of unknown origin, but I can't find it now.)
{ "source": [ "https://unix.stackexchange.com/questions/9832", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4078/" ] }
9,837
I'm allowing a friend a local account on my machine, exclusively for SCP. Can I specify his account's shell as /bin/true , or in any other way limit the account, while still allowing SCP?
You can set that user's shell to rssh or scponly , which are designed precisely for that purpose: rssh is a restricted shell for use with OpenSSH, allowing only scp and/or sftp. It now also includes support for rdist, rsync, and cvs. scponly is an alternative 'shell' (of sorts) for system administrators who would like to provide access to remote users to both read and write local files without providing any remote execution privileges. When you run scp, the OpenSSH daemon fires off an scp process with the -f option. When you run sftp, the OpenSSH daemon fires off an sftp-server process. In either case, the subprocess is executed through the user's shell, so that shell must support at least these commands, with a Bourne-like syntax. Any Bourne-style shell will do, as will csh (I think its quoting rules are compatible enough for what sshd uses). Rssh and scponly allow these commands and nothing else. /bin/true would not even run these commands.
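Once one of them is installed, restricting the account is just a matter of changing its login shell, along these lines (the binary path varies by distribution, "friend" is the example account name, and you may also need to list the shell in /etc/shells depending on your setup):
$ sudo usermod -s /usr/bin/scponly friend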
{ "source": [ "https://unix.stackexchange.com/questions/9837", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
9,870
I have an Ubuntu server running on EC2 (which I didn't install myself, just picked up an AMI). So far I'm using putty to work with it, but I am wondering how to work on it with GUI tools (I'm not familiar with Linux UI tools, but I want to learn). Silly me, I'm missing the convenience of Windows Explorer. I currently have only Windows at home. How do I set up GUI tools to work with a remote server? Should I even do this, or should I stick to the command line? Do the answers change if I have a local linux machine to play with?
You can use X11 forwarding over SSH; make sure the option X11Forwarding yes is enabled in /etc/ssh/sshd_config on the remote server, and either enable X11 forwarding by hand with ssh -X remoteserver or add a line saying ForwardX11 yes to the relevant host entry in ~/.ssh/config . Of course, that requires a working X display at the local end, so if you're using Windows you're going to have to install something like Xming, then set up X11 forwarding in PuTTY as demonstrated in guides such as "Using PuTTY and Xming to Connect to CSE", "X11 Forwarding using Xming and PuTTY", or "Use Linux over Windows with Xming". ETA: Reading again and seeing your clarifications in the comments, SFTP might suit your needs even better, as it will let you 'mount' remote folders as if they're regular network drives, using one of the various SFTP/SSHFS clients available for Windows XP through Windows 8.
{ "source": [ "https://unix.stackexchange.com/questions/9870", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1594/" ] }
9,899
I have a ton of files and dirs in a subdirectory I want to move to the parent directory. There are already some files and dirs in the target directory which need to be overwritten. Files that are only present in the target should be left untouched. Can I force mv to do that? It ( mv * .. ) complains mv: cannot move `xyz' to `../xyz': Directory not empty What am I missing?
You will have to copy them to the destination and then delete the source, using the commands cp -r * .. followed by rm -rf * . I don't think you can "merge" directories using mv .
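If rsync is available, a hedged alternative is to let it do the merge-and-overwrite, then remove the emptied source (run from the parent directory; subdir is a placeholder name):
$ rsync -a subdir/ . && rm -rf subdir
The trailing slash on subdir/ means "the contents of subdir", which is what makes this a merge rather than a nested copy.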
{ "source": [ "https://unix.stackexchange.com/questions/9899", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/258/" ] }
9,918
Is there some better solution for printing unique lines other than a combination of sort and uniq ?
To print each identical line only once, in any order: sort -u To print only the unique lines, in any order: sort | uniq -u To print each identical line only once, in the order of their first occurrence: (for each line, print the line if it hasn't been seen yet, then in any case increment the seen counter) awk '!seen[$0] {print} {++seen[$0]}' To print only the unique lines, in the order of their first occurrence: (record each line in seen , and also in lines if it's the first occurrence; at the end of the input, print the lines in order of occurrence but only the ones seen only once; note that a plain for (i in lines) loop would not preserve that order, since awk's for-in iteration order is unspecified) awk '!seen[$0]++ {lines[n++]=$0} END {for (j=0; j<n; j++) if (seen[lines[j]]==1) print lines[j]}'
{ "source": [ "https://unix.stackexchange.com/questions/9918", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2671/" ] }
9,940
The ISP I work at is setting up an internal IPv6 network in preparation for eventually connecting to the IPv6 internet. As a result, several of the servers in this network now try to connect to security.debian.org via its IPv6 address by default when running apt-get update , and that results in having to wait for a lengthy timeout whenever I'm downloading updates of any sort. Is there a way to tell apt to either prefer IPv4 or ignore IPv6 altogether?
Add -o Acquire::ForceIPv4=true when running apt-get . If you want to make the setting persistent just create /etc/apt/apt.conf.d/99force-ipv4 and put Acquire::ForceIPv4 "true"; in it: echo 'Acquire::ForceIPv4 "true";' | sudo tee /etc/apt/apt.conf.d/99force-ipv4 Config options Acquire::ForceIPv4 and Acquire::ForceIPv6 were added to version 0.9.7.9~exp1 (see bug 611891 ) which is available since Ubuntu Saucy (released in October 2013) and Debian Jessie (released in April 2015).
{ "source": [ "https://unix.stackexchange.com/questions/9940", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4949/" ] }
9,944
My question is with regards to booting a Linux system from a separate /boot partition. If most configuration files are located on a separate / partition, how does the kernel correctly mount it at boot time? Any elaboration on this would be great. I feel as though I am missing something basic. I am mostly concerned with the process and order of operations. Thanks! EDIT: I think what I needed to ask was more along the lines of the dev file that is used in the root kernel parameter. For instance, say I give my root param as root=/dev/sda2. How does the kernel have a mapping of the /dev/sda2 file?
Linux initially boots with a ramdisk (called an initrd , for "INITial RamDisk") as / . This disk has just enough on it to be able to find the real root partition (including any driver and filesystem modules required). It mounts the root partition onto a temporary mount point on the initrd , then invokes pivot_root(8) to swap the root and temporary mount points, leaving the initrd in a position to be umount ed and the actual root filesystem on / .
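On Debian/Ubuntu systems you can look inside that ramdisk to see the init script and the storage/filesystem modules it carries (lsinitramfs comes with initramfs-tools; other distributions ship different tools):
$ lsinitramfs /boot/initrd.img-$(uname -r) | less
# look for the top-level "init" script and the driver modules under lib/modules/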
{ "source": [ "https://unix.stackexchange.com/questions/9944", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4856/" ] }
9,957
I want to know if there's any way to check if my program can output terminal output using colors or not. Running commands like less and looking at the output from a program that outputs using colors, the output is displayed incorrectly, like this: [ESC[0;32m0.052ESC[0m ESC[1;32m2,816.00 kbESC[0m]
The idea is for my application to know not to color the output if the program can't print, say, logging output from through a cron job to a file, no need to log colored output, but when running manually, i like to view the output colored What language are you writing your application in? The normal approach is to check if the output device is a tty, and if it is, check if that type of terminal supports colors. In bash , that would look like # check if stdout is a terminal... if test -t 1; then # see if it supports colors... ncolors=$(tput colors) if test -n "$ncolors" && test $ncolors -ge 8; then bold="$(tput bold)" underline="$(tput smul)" standout="$(tput smso)" normal="$(tput sgr0)" black="$(tput setaf 0)" red="$(tput setaf 1)" green="$(tput setaf 2)" yellow="$(tput setaf 3)" blue="$(tput setaf 4)" magenta="$(tput setaf 5)" cyan="$(tput setaf 6)" white="$(tput setaf 7)" fi fi echo "${red}error${normal}" echo "${green}success${normal}" echo "${green}0.052${normal} ${bold}${green}2,816.00 kb${normal}" # etc. In C, you have to do a lot more typing, but can achieve the same result using isatty and the functions listed in man 3 terminfo .
{ "source": [ "https://unix.stackexchange.com/questions/9957", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5980/" ] }
9,971
How can I find the time since a Linux system was first installed, provided that nobody has tried to hide it?
sudo tune2fs -l /dev/sda1 | grep 'Filesystem created:' This will tell you when the file system was created (it works for ext2/ext3/ext4 filesystems). Replace /dev/sda1 with whichever partition holds your root filesystem; the first column of df / shows the exact device to use.
{ "source": [ "https://unix.stackexchange.com/questions/9971", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
9,981
How can I delete a line if it is longer than, e.g., 2048 characters?
sed '/^.\{2048\}./d' input.txt > output.txt
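An equivalent with awk, if a length test reads more naturally to you than counting characters in a regex; like the sed version, it keeps lines of at most 2048 characters:
awk 'length($0) <= 2048' input.txt > output.txt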
{ "source": [ "https://unix.stackexchange.com/questions/9981", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
9,997
What resources exist for portable shell programming? The ultimate answer is to test on all targeted platforms, but that's rarely practical. The POSIX / Single UNIX specification is a start, but it tells you neither what the level of support of each implementation is, nor what common extensions exist. You can read the documentation of each implementation, but that's very time consuming and not completely accurate. It seems to me that an ideal format would be some kind of community-annotated version of the POSIX spec, where each feature is annotated by its support level amongst the different implementations. Is there such a thing? Or are there other useful resources? For example, there are Sven Mascheck's shell portability pages , but they only cover syntactic elements and a few built-ins, and only old shells. I'm looking for a more comprehensive resource.
The autoconf manual has a section on portable shell programming . Although that's not specifically targeting POSIX, it's probably the most complete collection of what to do and not to do when attempting to write portable shell code.
{ "source": [ "https://unix.stackexchange.com/questions/9997", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/885/" ] }
10,026
I have a directory that's got several gigabytes and several thousand small files. I want to copy it over the network with scp more than once. CPU time on the source and destination machines is cheap, but the network overhead added by copying each file individually is huge. I would tar/gzip it up and ship it over, but the source machine is short on disk. Is there a way for me to pipe the output of tar -czf <output> <directory> to scp? If not, is there another easy solution? My source machine is ancient (SunOS) so I'd rather not go installing things on it.
You can pipe tar across an ssh session: $ tar czf - <files> | ssh user@host "cd /wherever && tar xvzf -"
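The same trick works in the other direction if you would rather pull from the old machine; running gzip explicitly also avoids relying on the remote tar supporting the z option (old SunOS tar may not):
$ ssh user@host "cd /wherever && tar cf - . | gzip" | tar xzvf -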
{ "source": [ "https://unix.stackexchange.com/questions/10026", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6008/" ] }
10,041
With the ls command, is it possible to show only the files created after a specific date, hour...? I'm asking because I have a directory with thousands of files. I want to see all files that were created since yesterday. I use ls -ltr but I have to wait to see all files... Is there an equivalent of DIRECTORY/SINCE=date from OpenVMS ?
You can use the find command to find all files that have been modified within a certain number of days. For example, to find all files in the current directory that have been modified in the last 24 hours, use: find . -maxdepth 1 -mtime -1 Note that to find files modified more than 24 hours ago you need -mtime +0 rather than -mtime +1 : find rounds file ages down to whole 24-hour periods, so -mtime +1 only matches files more than 48 hours old.
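If you want an actual calendar date rather than "N days ago", GNU find also understands -newermt (the dates below are only examples; this is a GNU extension, not POSIX):
find . -maxdepth 1 -newermt '2011-03-23'
find . -maxdepth 1 -newermt '2011-03-23 14:00'    # a full timestamp also works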
{ "source": [ "https://unix.stackexchange.com/questions/10041", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/393/" ] }
10,050
In Linux, in /proc/PID/fd/X , the links for file descriptors that are pipes or sockets have a number, like: l-wx------ 1 user user 64 Mar 24 00:05 1 -> pipe:[6839] l-wx------ 1 user user 64 Mar 24 00:05 2 -> pipe:[6839] lrwx------ 1 user user 64 Mar 24 00:05 3 -> socket:[3142925] lrwx------ 1 user user 64 Mar 24 00:05 4 -> socket:[3142926] lr-x------ 1 user user 64 Mar 24 00:05 5 -> pipe:[3142927] l-wx------ 1 user user 64 Mar 24 00:05 6 -> pipe:[3142927] lrwx------ 1 user user 64 Mar 24 00:05 7 -> socket:[3142930] lrwx------ 1 user user 64 Mar 24 00:05 8 -> socket:[3142932] lr-x------ 1 user user 64 Mar 24 00:05 9 -> pipe:[9837788] Like on the first line: 6839. What is that number representing?
That's the inode number for the pipe or socket in question. A pipe is a unidirectional channel, with a write end and a read end. In your example, it looks like FD 5 and FD 6 are talking to each other, since the inode numbers are the same. (Maybe not, though. See below.) More common than seeing a program talking to itself over a pipe is a pair of separate programs talking to each other, typically because you set up a pipe between them with a shell: shell-1$ ls -lR / | less Then in another terminal window: shell-2$ ...find the ls and less PIDs with ps; say 4242 and 4243 for this example... shell-2$ ls -l /proc/4242/fd | grep pipe l-wx------ 1 user user 64 Mar 24 12:18 1 -> pipe:[222536390] shell-2$ ls -l /proc/4243/fd | grep pipe l-wx------ 1 user user 64 Mar 24 12:18 0 -> pipe:[222536390] This says that PID 4242's standard output (FD 1, by convention) is connected to a pipe with inode number 222536390, and that PID 4243's standard input (FD 0) is connected to the same pipe. All of which is a long way of saying that ls 's output is being sent to less 's input. Getting back to your example, FD 1 and FD 2 are almost certainly not talking to each other. Most likely this is the result of tying stdout (FD 1) and stderr (FD 2) together, so they both go to the same destination. You can do that with a Bourne shell like this: $ some-program 2>&1 | some-other-program So, if you poked around in /proc/$PID_OF_SOME_OTHER_PROGRAM/fd , you'd find a third FD attached to a pipe with the same inode number as is attached to FDs 1 and 2 for the some-program instance. This may also be what's happening with FDs 5 and 6 in your example, but I have no ready theory how these two FDs got tied together. You'd have to know what the program is doing internally to figure that out.
{ "source": [ "https://unix.stackexchange.com/questions/10050", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6013/" ] }
10,077
Is there some way I can check which of my processes the kernel has killed? Sometimes I log onto my server and find that something that should've run all night just stopped 8 hours in and I'm unsure if it's the applications doing or the kernels.
If the kernel killed a process (because the system ran out of memory), there will be a kernel log message. Check in /var/log/kern.log (on Debian/Ubuntu, other distributions might send kernel logs to a different file, but usually under /var/log under Linux). Note that if the OOM-killer (out-of-memory killer) triggered, it means you don't have enough virtual memory. Add more swap (or perhaps more RAM). Some process crashes are recorded in kernel logs as well (e.g. segmentation faults). If the processes were started from cron, you should have a mail with error messages. If the processes were started from a shell in a terminal, check the errors in that terminal. Run the process in screen to see the terminal again in the morning. This might not help if the OOM-killer triggered, because it might have killed the cron or screen process as well; but if you ran into the OOM-killer, that's the problem you need to fix.
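A couple of quick checks for the OOM-killer case (the log file path is the Debian/Ubuntu one mentioned above; adjust for your distribution):
$ dmesg | grep -i -E 'out of memory|killed process'
$ grep -i 'killed process' /var/log/kern.log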
{ "source": [ "https://unix.stackexchange.com/questions/10077", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3125/" ] }
10,095
I want to change the password I assigned to root on my Debian webserver to something longer and more secure. How do I do that? I haven’t forgotten/lost the current password, I just want to change it.
Ah, use the passwd program as root : sudo passwd root Or, if you’re running as root already (which you shouldn’t be), just: passwd The root argument can be omitted, because when you execute passwd it defaults to the current user (which is root, as only root can change the root password).
{ "source": [ "https://unix.stackexchange.com/questions/10095", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5646/" ] }
10,121
Once upon a time, DISPLAY=:0.0 totem /path/to/movie.avi after ssh 'ing into my desktop from my laptop would cause totem to play movie.avi on my desktop. Now it gives the error: No protocol specified Cannot open display: I reinstalled Debian squeeze when it went stable on both computers, and I guess I broke the config. I've googled on this, and cannot for the life of me figure out what I'm supposed to be doing. (VLC has an HTTP interface that works, but it isn't as convenient as ssh.) The same problem arises when I try to run this from a cron job.
(Adapted from Linux: wmctrl cannot open display when session initiated via ssh+screen ) DISPLAY and AUTHORITY An X program needs two pieces of information in order to connect to an X display. It needs the address of the display, which is typically :0 when you're logged in locally or :10 , :11 , etc. when you're logged in remotely (but the number can change depending on how many X connections are active). The address of the display is normally indicated in the DISPLAY environment variable. It needs the password for the display. X display passwords are called magic cookies . Magic cookies are not specified directly: they are always stored in X authority files, which are a collection of records of the form “display :42 has cookie 123456 ”. The X authority file is normally indicated in the XAUTHORITY environment variable. If $XAUTHORITY is not set, programs use ~/.Xauthority . You're trying to act on the windows that are displayed on your desktop. If you're the only person using your desktop machine, it's very likely that the display name is :0 . Finding the location of the X authority file is harder, because with gdm as set up under Debian squeeze or Ubuntu 10.04, it's in a file with a randomly generated name. (You had no problem before because earlier versions of gdm used the default setting, i.e. cookies stored in ~/.Xauthority .) Getting the values of the variables Here are a few ways to obtain the values of DISPLAY and XAUTHORITY : You can systematically start a screen session from your desktop, perhaps automatically in your login scripts (from ~/.profile ; but do it only if logging in under X: test if DISPLAY is set to a value beginning with : (that should cover all the cases you're likely to encounter)). In ~/.profile : case $DISPLAY in :*) screen -S local -d -m;; esac Then, in the ssh session: screen -d -r local You could also save the values of DISPLAY and XAUTHORITY in a file and recall the values. In ~/.profile : case $DISPLAY in :*) export | grep -E '(^| )(DISPLAY|XAUTHORITY)=' >~/.local-display-setup.sh;; esac In the ssh session: . ~/.local-display-setup.sh screen You could detect the values of DISPLAY and XAUTHORITY from a running process. This is harder to automate. You have to figure out the PID of a process that's connected to the display you want to work on, then get the environment variables from /proc/$pid/environ ( eval export $(</proc/$pid/environ tr \\0 \\n | grep -E '^(DISPLAY|XAUTHORITY)=') ¹). Copying the cookies Another approach (following a suggestion by Arrowmaster ) is to not try to obtain the value of $XAUTHORITY in the ssh session, but instead to make the X session copy its cookies into ~/.Xauthority . Since the cookies are generated each time you log in, it's not a problem if you keep stale values in ~/.Xauthority . There can be a security issue if your home directory is accessible over NFS or other network file system that allows remote administrators to view its contents. They'd still need to connect to your machine somehow, unless you've enabled X TCP connections (Debian has them off by default). So for most people, this either does not apply (no NFS) or is not a problem (no X TCP connections). To copy cookies when you log into your desktop X session, add the following lines to ~/.xprofile or ~/.profile (or some other script that is read when you log in): case $DISPLAY:$XAUTHORITY in :*:?*) # DISPLAY is set and points to a local display, and XAUTHORITY is # set, so merge the contents of `$XAUTHORITY` into ~/.Xauthority. 
XAUTHORITY=~/.Xauthority xauth merge "$XAUTHORITY";; esac ¹ In principle this lacks proper quoting, but in this specific instance $DISPLAY and $XAUTHORITY won't contain any shell metacharacter.
{ "source": [ "https://unix.stackexchange.com/questions/10121", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6027/" ] }
10,150
I have a system that came with a firewall already in place. The firewall consists of over 1000 iptables rules. One of these rule is dropping packets I don't want dropped. (I know this because I did iptables-save followed by iptables -F and the application started working.) There are way too many rules to sort through manually. Can I do something to show me which rule is dropping the packets?
You could add a TRACE rule early in the chain to log every rule that the packet traverses. I would consider using iptables -L -v -n | less to let you search the rules. I would look for port, address, and interface rules that apply. Given that you have so many rules, you are likely running a mostly closed firewall and are missing a permit rule for the traffic. How is the firewall built? It may be easier to look at the builder rules than the built rules.
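As a sketch of the TRACE approach: the rules go in the raw table, and the port here is only an example, so match it to whatever traffic your application sends. Each rule the packet traverses is then reported in the kernel log (dmesg/syslog) with a TRACE: prefix, though depending on your kernel you may need the appropriate netfilter log module loaded (e.g. ipt_LOG or nf_log_ipv4):
iptables -t raw -A PREROUTING -p tcp --dport 8000 -j TRACE
iptables -t raw -A OUTPUT -p tcp --dport 8000 -j TRACE
# reproduce the problem, read the TRACE: lines in the kernel log, then remove the rules:
iptables -t raw -D PREROUTING -p tcp --dport 8000 -j TRACE
iptables -t raw -D OUTPUT -p tcp --dport 8000 -j TRACE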
{ "source": [ "https://unix.stackexchange.com/questions/10150", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2180/" ] }
10,214
/proc/sys/vm/swappiness is nice, but I want a knob that is per process like /proc/$PID/oom_adj . So that I can make certain processes less likely than others to have any of their pages swapped out. Unlike memlock() , this doesn't prevent a program from being swapped out. And like nice , the user by default can't make their programs less likely, but only more likely to get swapped. I think I'd call this /proc/$PID/swappiness_adj .
You can configure swappiness per cgroup: http://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.txt http://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt For an easier introduction to cgroups, with examples, see https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/ch01.html
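A minimal cgroup v1 sketch (the memory controller is assumed to be mounted at /sys/fs/cgroup/memory, which is distribution-dependent, and $PID is the process you care about; note that cgroup v2 dropped memory.swappiness):
# create a group whose members the kernel should be very reluctant to swap
sudo mkdir /sys/fs/cgroup/memory/noswap
echo 0 | sudo tee /sys/fs/cgroup/memory/noswap/memory.swappiness
# move the process into that group (use the tasks file on older kernels)
echo "$PID" | sudo tee /sys/fs/cgroup/memory/noswap/cgroup.procs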
{ "source": [ "https://unix.stackexchange.com/questions/10214", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4096/" ] }
10,226
Is it possible to do a multiline pattern match using sed , awk or grep ? Take for example, I would like to get all the lines between { and } So it should be able to match 1. {} 2. {.....} 3. {..... .....} Initially the question used <p> as an example. Edited the question to use { and } .
While I agree with the advice above, that you'll want to get a parser for anything more than tiny or completely ad-hoc, it is (barely ;-) possible to match multi-line blocks between curly braces with sed. Here's a debugging version of the sed code sed -n '/[{]/,/[}]/{ p /[}]/a\ end of block matching brace }' *.txt Some notes: -n means 'no default print lines as processed'. 'p' means now print the line. The construct /[{]/,/[}]/ is a range expression. It means scan until you find something that matches the first pattern (/[{]/) AND then scan until you find the 2nd pattern (/[}]/) THEN perform whatever actions you find in between the { } in the sed code. In this case 'p' and the debugging code. (not explained here, use it, mod it or take it out as works best for you). You can remove the /[}]/a\ end of block debugging when you prove to your satisfaction that the code is really matching blocks delimited by {,}. This code sample will skip over anything not inside a curly brace pair. It will, as noted by others above, be easily confused if you have any extra {,} embedded in strings, reg-exps, etc., OR where the opening and closing braces are on the same line (with thanks to fred.bear). I hope this helps.
{ "source": [ "https://unix.stackexchange.com/questions/10226", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
10,231
My server program received a SIGTERM and stopped (with exit code 0). I am surprised by this, as I am pretty sure that there was plenty of memory for it. Under what conditions does linux (busybox) send a SIGTERM to a process?
I'll post this as an answer so that there's some kind of resolution if this turns out to be the issue. An exit status of 0 means a normal exit from a successful program. An exiting program can choose any integer between 0 and 255 as its exit status. Conventionally, programs use small values. Values 126 and above are used by the shell to report special conditions, so it's best to avoid them. At the C API level, programs report a 16-bit status¹ that encodes both the program's exit status and the signal that killed it, if any. In the shell, a command's exit status (saved in $? ) conflates the actual exit status of the program and the signal value: if a program is killed by a signal, $? is set to a value greater than 128 (with most shells, this value is 128 plus the signal number; ATT ksh uses 256 + signal number and yash uses 384 + signal number, which avoids the ambiguity, but the other shells haven't followed suit). In particular, if $? is 0, your program exited normally. Note that this includes the case of a process that receives SIGTERM, but has a signal handler for it, and eventually exits normally (perhaps as an indirect consequence of the SIGTERM signal, perhaps not). To answer the question in your title, SIGTERM is never sent automatically by the system. There are a few signals that are sent automatically like SIGHUP when a terminal goes away, SIGSEGV/SIGBUS/SIGILL when a process does things it shouldn't be doing, SIGPIPE when it writes to a broken pipe/socket, etc. And there are a few signals that are sent due to a key press in a terminal, mainly SIGINT for Ctrl + C , SIGQUIT for Ctrl + \ and SIGTSTP for Ctrl + Z , but SIGTERM is not one of those. If a process receives SIGTERM, some other process sent that signal. ¹ roughly speaking
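You can see the difference from a shell; a process that really dies from SIGTERM does not report status 0 (bash shown here, and as noted above other shells may encode the signal differently):
$ sleep 100 & pid=$!
$ kill -TERM "$pid"; wait "$pid"; echo "$?"
143
That 143 is 128 + 15 (SIGTERM), nothing like the 0 your server reported.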
{ "source": [ "https://unix.stackexchange.com/questions/10231", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6001/" ] }
10,233
I am usually connecting to the remote server with ssh user@server.com -p 11000 and then typing the password for that user each time. How should I avoid entering the password each time I connect using ssh ?
First , put this in ~/.ssh/config : Host server HostName server.com Port 11000 User user You will be able to ssh server , then type the password. Second , check in ~/.ssh/ to see if you have files named id_rsa and id_rsa.pub . If not, you don't have any key set up, so you have to generate a pair using ssh-keygen . You can give the keys a password or not. The generated file id_rsa.pub should look like this: ssh-rsa lotsofrandomtext user@local Third , ssh to the server, create the file ~/.ssh/authorized_keys if it doesn't exist. Then append the contents of the ~/.ssh/id_rsa.pub that you generated earlier here. This might mean copying the file contents to your clipboard, then opening ~/.ssh/authorized_keys in a text editor and pasting the thing. Alternatively, use the command ssh-copy-id server (replace server with the name in ~/.ssh/config ). This will do the same thing as above. At times I have seen ssh-copy-id getting stuck, so I don't really like it. You should now be able to ssh with just ssh server , unless you have chosen to protect your private key with a passphrase. Generally if you don't use a passphrase, you should protect your private key by other means (e.g. full disk encryption). Fourth (only needed if you protect your private key with a passphrase), put this in ~/.bashrc : start_ssh_agent() { # Try to use an existing agent save=~/.ssh-agent if [[ -e "$save" ]] then . "$save" > /dev/null fi # No existing agent, start a new one if [[ -z "$SSH_AGENT_PID" || ! -e "/proc/$SSH_AGENT_PID" ]] then ssh-agent > "$save" . "$save" > /dev/null ssh-add fi } start_ssh_agent With this, you will only need to enter the passphrase once per computer boot.
{ "source": [ "https://unix.stackexchange.com/questions/10233", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6004/" ] }
10,241
I have a bash script which enumerates through every *.php file in a directory and applies iconv to it. This gets output in STDOUT. Since adding the -o parameter ( in my experience ) actually writes a blank file probably before the conversion takes place, how can I adjust my script so it does the conversion, then overwrites the input file? for file in *.php do iconv -f cp1251 -t utf8 "$file" done
This isn't working because iconv first creates the output file (since the file already exists, it truncates it), then starts reading its input file (which is now empty). Most programs behave this way. Create a new, temporary file for the output, then move it into place. for file in *.php do iconv -f cp1251 -t utf8 -o "$file.new" "$file" && mv -f "$file.new" "$file" done If your platform's iconv doesn't have -o , you can use a shell redirection to the same effect. for file in *.php do iconv -f cp1251 -t utf8 "$file" >"$file.new" && mv -f "$file.new" "$file" done Colin Watson's sponge utility (included in Joey Hess's moreutils ) automates this: for file in *.php do iconv -f cp1251 -t utf8 "$file" | sponge "$file" done This answer applies not just to iconv but to any filter program. A few special cases are worth mentioning: GNU sed and Perl -p have a -i option to replace files in place. If your file is extremely large, your filter is only modifying or removing some parts but never adding things (e.g. grep , tr , sed 's/long input text/shorter text/' ), and you like living dangerously, you may want to genuinely modify the file in place (the other solutions mentioned here create a new output file and move it into place at the end, so the original data is unchanged if the command is interrupted for any reason).
{ "source": [ "https://unix.stackexchange.com/questions/10241", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2547/" ] }
10,251
I've got a full album flac and a cue file for it. How can I split this into a flac per track? I'm a KDE user, so I would prefer a KDE/Qt way. I would like to see command line and other GUI answers as well, but they are not my preferred method.
Shnsplit can read a cue file directly, which also means it can access the other data from the cue file (not just the breakpoints) and generate nicer filenames than split-*.flac : shnsplit -f file.cue -t %n-%t -o flac file.flac Granted, this makes it more difficult to use cuetag.sh if the original flac file is in the same directory.
{ "source": [ "https://unix.stackexchange.com/questions/10251", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29/" ] }
10,263
I need to concatenate two strings in bash, so that: string1=hello string2=world mystring=string1+string2 echo mystring should produce helloworld
simply concatenate the variables: mystring="$string1$string2"
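A few closely related forms, in case they help (the += operator needs bash 3.1 or later; the braces matter when a letter, digit or underscore follows the variable name directly):
mystring="$string1$string2"     # plain concatenation: helloworld
mystring+="!"                   # append to an existing variable (bash 3.1+)
mystring="${string1}world"      # braces keep the variable name unambiguous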
{ "source": [ "https://unix.stackexchange.com/questions/10263", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6001/" ] }
10,267
Today I learned that I can use perl -c filename to find unmatched curly brackets {} in arbitrary files, not necessarily Perl scripts. The problem is, it doesn't work with other types of brackets () [] and maybe <>. I have also experimented with several Vim plugins that claim to help find unmatched brackets, but so far with no luck. I have a text file with quite a few brackets and one of them is missing! Is there any program / script / vim plugin / whatever that can help me identify the unmatched bracket?
In Vim you can use [ and ] to quickly travel to nearest unmatched bracket of the type entered in the next keystroke. So [ { will take you back up to the nearest unmatched "{"; ] ) would take you ahead to the nearest unmatched ")", and so on.
{ "source": [ "https://unix.stackexchange.com/questions/10267", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/250/" ] }
10,362
In ps xf 26395 pts/78 Ss 0:00 \_ bash 27016 pts/78 Sl+ 0:04 | \_ unicorn_rails master -c config/unicorn.rb 27042 pts/78 Sl+ 0:00 | \_ unicorn_rails worker[0] -c config/unicorn.rb In htop , it shows up like: Why does htop show more process than ps?
By default, htop lists each thread of a process separately, while ps doesn't. To turn off the display of threads, press H , or use the "Setup / Display options" menu, "Hide userland threads". This puts the following line in your ~/.htoprc or ~/.config/htop/htoprc (you can alternatively put it there manually): hide_userland_threads=1 (Also hide_kernel_threads=1 , toggled by pressing K , but it's 1 by default.) Another useful option is “Display threads in a different color” in the same menu ( highlight_threads=1 in .htoprc ), which causes threads to be shown in a different color (green in the default theme). In the first line of the htop display, there's a line like “Tasks: 377, 842 thr, 161 kthr; 2 running”. This shows the total number of processes, userland threads, kernel threads, and threads in a runnable state. The numbers don't change when you filter the display, but the indications “thr” and “kthr” disappear when you turn off the inclusion of user/kernel threads respectively. When you see multiple processes that have all characteristics in common except the PID and CPU-related fields (NIce value, CPU%, TIME+, ...), it's highly likely that they're threads in the same process.
{ "source": [ "https://unix.stackexchange.com/questions/10362", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1875/" ] }
10,370
I can do this: $ pwd /home/beau $ ln -s /home/beau/foo/bar.txt /home/beau/bar.txt $ readlink -f bar.txt /home/beau/foo/bar.txt But I'd like to be able to do this: $ pwd /home/beau $ cd foo $ ln -s bar.txt /home/beau/bar.txt $ readlink -f /home/beau/bar.txt /home/beau/foo/bar.txt Is this possible? Can I somehow resolve the relative pathname and pass it to ln ?
If you create a symbolic link to a relative path, it will store it as a relative symbolic link, not absolute like your example shows. This is generally a good thing. Absolute symbolic links don't work when the filesystem is mounted elsewhere. The reason your example doesn't work is that it's relative to the parent directory of the symbolic link and not where ln is run. You can do: $ pwd /home/beau $ ln -s foo/bar.txt bar.txt $ readlink -f /home/beau/bar.txt /home/beau/foo/bar.txt Or for that matter: $ cd foo $ ln -s foo/bar.txt ../bar.txt It's important to realise that the first argument after ln -s is stored as the target of the symlink. It can be any arbitrary string (with the only restrictions that it can't be empty and can't contain the NUL character), but at the time the symlink is being used and resolved, that string is understood as a relative path to the parent directory of the symlink (when it doesn't start with / ).
{ "source": [ "https://unix.stackexchange.com/questions/10370", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5358/" ] }
10,408
I need to learn about AIX , and I only have a laptop with Fedora 14/VirtualBox on it. Is there any chance that I could run an AIX guest in my VirtualBox? My laptop has an Intel(R) Core(TM)2 Duo CPU T7100 @ 1.80GHz, and I read that it only runs on RISC architecture. So there's no way I can run it on my laptop?
The best way to learn AIX would be to obtain an account on a machine that's running it. Really, part of what sets AIX apart from other unices is that it's designed for high-end systems (with lots of processors, fancy virtualization capabilities and so on). You won't learn as much by running it in a virtual machine. If you really want to run an x86 version of AIX on your laptop, you'll have to get an old PS/2 version that runs on an x86 CPU. I don't know if AIX will run on VirtualBox's emulated hardware (PS/2 is peculiar, it's the same problem as running OSX in a VM), but there are hints that it might ( user claiming to run an AIX guest ). It seems that AIX can run in Virtual PC . Qemu can emulate PowerPC processors, and it is apparently possible to run a recent, PowerPC version of AIX : see these guides on running AIX 4.3.3 , AIX 5.1 , and AIX 7.2 on Qemu. In summary, getting AIX in a VM would be costly (it's not free software), difficult, and not very useful. Try and get an account on some big iron, or get a second-hand system (if you can afford it).
{ "source": [ "https://unix.stackexchange.com/questions/10408", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
10,418
I've been asked to estimate the power consumption of the servers I run for my laboratory. I thought that I'd ask if there was some handy Linux commandline to get the power consumption of the server. It looks like powertop is useful for minimizing power consumption but it does not seem to show information that server A is using B watts. Is there something buried in the /proc system that would help me out?
If your computer actually keeps track of power (e.g. notebook), then on kernel 3.8.11 you can use the command below. It returns power measured in microwatts. cat /sys/class/power_supply/BAT0/power_now This works on kernel 3.8.11 (Ubuntu Quantal mainline generic).
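Since the value is in microwatts, a small awk one-liner makes it readable (same path as above; note that some batteries expose current_now and voltage_now instead of power_now):
awk '{ printf "%.2f W\n", $1 / 1000000 }' /sys/class/power_supply/BAT0/power_now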
{ "source": [ "https://unix.stackexchange.com/questions/10418", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6209/" ] }
10,421
I think I may be overlooking a relatively fundamental point regarding shell. Output from the ls command by default separates output with newlines, but the shell displays the output on a single line. Can anyone explain this to me? I had always presumed that the output was simply separated by spaces, but now that I see the output separated by newlines, I would expect the output to be displaying on separate lines. Example: cpoweradm@debian:~/lpi103-4$ ls text* text1 text2 text3 od shows that the output is separated by newlines: cpoweradm@debian:~/lpi103-4$ ls text* | od -c 0000000 t e x t 1 \n t e x t 2 \n t e x t 0000020 3 \n 0000022 If newlines are present, then why doesn't the output display as: text1 text2 text3
When you pipe the output, ls acts differently. This fact is hidden away in the info documentation : If standard output is a terminal, the output is in columns (sorted vertically) and control characters are output as question marks; otherwise, the output is listed one per line and control characters are output as-is. To prove it, try running ls and then ls | less This means that if you want the output to be guaranteed to be one file per line, regardless of whether it is being piped or redirected, you have to run ls -1 ( -1 is the number one) Or, you can force ls | less to output in columns by running ls -C ( -C is a capital C)
{ "source": [ "https://unix.stackexchange.com/questions/10421", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5270/" ] }
10,428
I have a development server, which is only accessible from 127.0.0.1:8000, not 192.168.1.x:8000. As a quick hack, is there a way to set up something to listen on another port (say, 8001) so that from the local network I could connect to 192.168.1.x:8001 and it would tunnel the traffic between the client and 127.0.0.1:8000?
Using ssh is the easiest solution. ssh -g -L 8001:localhost:8000 -f -N user@remote-server.com This forwards the local port 8001 on your workstation to the localhost address on remote-server.com port 8000. -g means allow other clients on my network to connect to port 8001 on my workstation. Otherwise only local clients on your workstation can connect to the forwarded port. -N means all I am doing is forwarding ports, don't start a shell. -f means fork into background after a successful SSH connection and log-in. Port 8001 will stay open for many connections, until ssh dies or is killed. If you happen to be on Windows, the excellent SSH client PuTTY can do this as well. Use 8001 as the local port and localhost:8000 as the destination, and add a local port forwarding in settings. You can add it after a successful connect with PuTTY.
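If ssh is not an option, a small TCP relay on the machine that hosts the dev server does the same job; a rough sketch, assuming socat is installed:
socat TCP-LISTEN:8001,fork,reuseaddr TCP:127.0.0.1:8000
Clients on the LAN can then connect to 192.168.1.x:8001 and socat hands each connection off to 127.0.0.1:8000.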
{ "source": [ "https://unix.stackexchange.com/questions/10428", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6211/" ] }
10,438
Is it possible to add a list of hosts that are only specific to a certain user? Perhaps a user-specific hosts file? This mechanism should also complement the entries in the /etc/hosts file.
The functionality you are looking for is implemented in glibc. You can define a custom hosts file by setting the HOSTALIASES environment variable. The names in this file will be picked up by gethostbyname (see documentation ). Example (tested on Ubuntu 13.10): $ echo 'g www.google.com' >> ~/.hosts $ export HOSTALIASES=~/.hosts $ wget g -O /dev/null Some limitations: HOSTALIASES only works for applications using getaddrinfo(3) or gethostbyname(3) For setuid / setgid / setcap applications, libc sanitizes the environment, which means that the HOSTALIASES setting is lost. ping is setuid root or is given the net_raw capability upon execution (because it needs to listen for ICMP packets), so HOSTALIASES will not work with ping unless you're already root before you call ping .
{ "source": [ "https://unix.stackexchange.com/questions/10438", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6217/" ] }
10,524
I have this input: sdkxyosl 1 safkls 2 asdf--asdfasxy_asd 5 dkd8k jasd 29 sdi44sw 43 asasd afsdfs 10 rklyasd 4 I need this output: sdi44sw 43 dkd8k jasd 29 asasd afsdfs 10 asdf--asdfasxy_asd 5 rklyasd 4 safkls 2 sdkxyosl 1 So i need to sort the lines by the last column. I don't know how many columns are in one line. I just can't figure it out, how to do it. I don't have "perl powers". I just have ~average scripting powers with sed, awk, cut, etc.. Does somebody know how to do it?
The following command line uses awk to prepend the last field of each line of file.txt, does a reverse numerical sort, then uses cut to remove the added field: awk '{print $NF,$0}' file.txt | sort -nr | cut -f2- -d' '
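To see why this works, here is the intermediate stage for the sample input, before sort and cut run; the last field is simply copied to the front so sort has a fixed key to work with:
$ awk '{print $NF,$0}' file.txt
1 sdkxyosl 1
2 safkls 2
5 asdf--asdfasxy_asd 5
29 dkd8k jasd 29
43 sdi44sw 43
10 asasd afsdfs 10
4 rklyasd 4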
{ "source": [ "https://unix.stackexchange.com/questions/10524", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
10,525
Like most users, I have a bunch of aliases set up to give a default set of flags for frequently used programs. For instance, alias vim='vim -X' alias grep='grep -E' alias ls='ls -G' The problem is that if I want to use which to see where my vim / grep / ls /etc is coming from, the alias gets in the way: $ which vim vim: aliased to vim -X This is useful output, but not what I'm looking for in this case; I know vim is aliased to vim -X but I want to know where that vim is coming from. Short of temporarily un-defining the alias just so I can use which on it, is there an easy way to have which 'unwrap' the alias and run itself on that? Edit: It seems that which is a shell-builtin with different behaviors across different shells. In Bash, SiegeX's suggestion of the --skip-alias flag works; however, I'm on Zsh. Does something similar exist there?
which is actually a bad way to do things like this, as it makes guesses about your environment based on $SHELL and the startup files (it thinks) that shell uses; not only does it sometimes guess wrong, but you can't generally tell it to behave differently. ( which on my Ubuntu 10.10 doesn't understand --skip-alias as mentioned by @SiegeX, for example.) type uses the current shell environment instead of poking at your config files, and can be told to ignore parts of that environment, so it shows you what will actually happen instead of what would happen in a reconstruction of your default shell. In this case, type -P will bypass any aliases or functions: $ type -P vim /usr/bin/vim You can also ask it to peel off all the layers, one at a time, and show you what it would find: $ type -a vim vim is aliased to `vim -X' vim is /usr/bin/vim (Expanding on this from the comments:) The problem with which is that it's usually an external program instead of a shell built-in, which means it can't see your aliases or functions and has to try to reconstruct them from the shell's startup/config files. (If it's a shell built-in, as it is in zsh but apparently not bash , it is more likely to use the shell's environment and do the right thing.) type is a POSIX-compliant command which is required to behave as if it were a built-in (that is, it must use the environment of the shell it's invoked from including local aliases and functions), so it usually is a built-in. It isn't generally found in csh / tcsh , although in most modern versions of those which is a shell builtin and does the right thing; sometimes the built-in is what instead, and sometimes there's no good way to see the current shell's environment from csh / tcsh at all.
{ "source": [ "https://unix.stackexchange.com/questions/10525", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/502/" ] }
10,589
I pressed something and accidentally swapped my two screens. My left one is actually considered as the right one, and vice versa. How can I swap them back? Edit - Specifically, I'm using Gnome, though we might also want to keep this question generic. Edit 2 - It appears that my driver isn't compatible with xrandr. I'm attaching the log from /var/log/Xorg.0.log here
Your desktop environment probably has a way, but you don't say which one you're using (if any). If your display driver is compatible with the XRandR extension , which is the standard X.org method for managing display resolutions and arrangements, you can use the command-line utility xrandr . I think the proprietary NVidia driver bypasses XRandR, so if you're using it, you'll have to use a dedicated NVidia tool. Run xrandr (with no argument) to see your monitor (screen) arrangement. You'll see lines like these: DVI-0 connected 1600x1200+1600+0 (normal left inverted right x axis y axis) 408mm x 306mm DVI-1 connected 1600x1200+0+0 (normal left inverted right x axis y axis) 408mm x 306mm This example means that I have two monitors called DVI-0 and DVI-1 , and DVI-1 is at the top left (position +0+0 ) while DVI-0 is to its right (position +1600+0 ). To swap them, I would run xrandr --output DVI-0 --left-of DVI-1
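Note that xrandr changes are not persistent across sessions. One common approach is to put the same call in a script your session runs at startup, for example ~/.xprofile, assuming your display manager sources it:
xrandr --output DVI-0 --left-of DVI-1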
{ "source": [ "https://unix.stackexchange.com/questions/10589", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1594/" ] }
10,646
There's a built-in Unix command repeat whose first argument is the number of times to repeat a command, where the command (with any arguments) is specified by the remaining arguments to repeat . For example, % repeat 100 echo "I will not automate this punishment." will echo the given string 100 times and then stop. I'd like a similar command – let's call it forever – that works similarly except the first argument is the number of seconds to pause between repeats, and it repeats forever. For example, % forever 5 echo "This will get echoed every 5 seconds forever and ever." I thought I'd ask if such a thing exists before I write it. I know it's like a 2-line Perl or Python script, but maybe there's a more standard way to do this. If not, feel free to post a solution in your favorite scripting language. PS: Maybe a better way to do this would be to generalize repeat to take both the number of times to repeat (with -1 meaning infinity) and the number of seconds to sleep between repeats. The above examples would then become: % repeat 100 0 echo "I will not automate this punishment." % repeat -1 5 echo "This will get echoed every 5 seconds forever."
Try the watch command. Usage: watch [-dhntv] [--differences[=cumulative]] [--help] [--interval=<n>] [--no-title] [--version] <command> So that: watch -n1 command will run the command every second (well, technically, every one second plus the time command takes to run, since watch, at least in the procps and busybox implementations, just sleeps one second between two runs of command), forever. If you want watch to pass the command to exec instead of sh -c , use the -x option: watch -n1 -x command On macOS, you can get watch from Mac Ports : port install watch Or you can get it from Homebrew : brew install watch
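If watch is not available either, a plain shell loop gives you the forever command from the question; a minimal sketch:
while true; do
  echo "This will get echoed every 5 seconds forever and ever."
  sleep 5
done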
{ "source": [ "https://unix.stackexchange.com/questions/10646", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6329/" ] }
10,685
How do I find out which Ubuntu package provides a particular file, in this case System.Windows.Forms.dll? I haven't found any concise explanation of this.
So you're looking for a package containing a file called System.Windows.Forms.dll . You can search: on your machine: apt-file search System.Windows.Forms.dll (the apt-file package must be installed) online: at packages.ubuntu.com . Both methods lead you to (as of Ubuntu 14.04): libmono-system-windows-forms4.0-cil and libmono-winforms2.0-cil . Install it with: sudo apt-get install libmono-system-windows-forms4.0-cil
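If the file belongs to a package that is already installed, dpkg can answer the same question without apt-file (it only searches installed packages, though):
$ dpkg -S System.Windows.Forms.dll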
{ "source": [ "https://unix.stackexchange.com/questions/10685", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1594/" ] }
10,689
I like to keep my bash_profile in a git repository and clone it to whatever machines I have shell access to. Since I'm in tmux most of the time I have a user@host string in the status line, rather than its traditional spot in the shell prompt. Not all sites I use have tmux installed, though, or I may not always be using it. I'd like to detect when I'm not in a tmux session and adjust my prompt accordingly. So far my half-baked solution in .bash_profile looks something like this: _display_host_unless_in_tmux_session() { # ??? } export PROMPT_COMMAND='PS1=$(_display_host_unless_in_tmux_session)${REST_OF_PROMPT}' (Checking every time probably isn't the best approach, so I'm open to suggestions for a better way of doing this. Bash scripting is not my forte.)
Tmux sets the TMUX environment variable in tmux sessions, and sets TERM to screen . This isn't a 100% reliable indicator (for example, you can't easily tell if you're running screen inside tmux or tmux inside screen ), but it should be good enough in practice. if ! { [ "$TERM" = "screen" ] && [ -n "$TMUX" ]; } then PS1="@$HOSTNAME $PS1" fi If you need to integrate that in a complex prompt set via PROMPT_COMMAND (which is a bash setting, by the way, so shouldn't be exported): if [ "$TERM" = "screen" ] && [ -n "$TMUX" ]; then PS1_HOSTNAME= else PS1_HOSTNAME="@$HOSTNAME" fi PROMPT_COMMAND='PS1="$PS1_HOSTNAME…"' If you ever need to test whether tmux is installed: if type tmux >/dev/null 2>/dev/null; then # you can start tmux if you want fi By the way, this should all go into ~/.bashrc , not ~/.bash_profile (see Difference between .bashrc and .bash_profile ). ~/.bashrc is run in every bash instance and contains shell customizations such as prompts and aliases. ~/.bash_profile is run when you log in (if your login shell is bash). Oddly, bash doesn't read ~/.bashrc in login shells, so your ~/.bash_profile should contain case $- in *i*) . ~/.bashrc;; esac
{ "source": [ "https://unix.stackexchange.com/questions/10689", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3649/" ] }
10,698
I have a shell script that's reading from standard input . In rare circumstances, there will be no one ready to provide input, and the script must time out . In case of timeout, the script must execute some cleanup code. What's the best way to do that? This script must be very portable , including to 20th century unix systems without a C compiler and to embedded devices running busybox, so Perl, bash, any compiled language, and even the full POSIX.2 can't be relied on. In particular, $PPID , read -t and perfectly POSIX-compliant traps are not available. Writing to a temporary file is also excluded; the script might run even if all filesystems are mounted read-only. Just to make things more difficult, I also want the script to be reasonably fast when it doesn't time out. In particular, I also use the script in Windows (mainly in Cygwin), where fork and exec are particularly slow, so I want to keep their use to a minimum. In a nutshell, I have trap cleanup 1 2 3 15 foo=`cat` and I want to add a timeout. I can't replace cat with the read built-in. In case of timeout, I want to execute the cleanup function. Background: this script is guessing the encoding of the terminal by printing some 8-bit characters and comparing the cursor position before and after. The beginning of the script tests that stdout is connected to a supported terminal, but sometimes the environment is lying (e.g. plink sets TERM=xterm even if it's called with TERM=dumb ). The relevant part of the script looks like this: text='Éé' # UTF-8; shows up as Ãé on a latin1 terminal csi='␛['; dsr_cpr="${csi}6n"; dsr_ok="${csi}5n" # ␛ is an escape character stty_save=`stty -g` cleanup () { stty "$stty_save"; } trap 'cleanup; exit 120' 0 1 2 3 15 # cleanup code stty eol 0 eof n -echo # Input will end with `0n` # echo-n is a function that outputs its argument without a newline echo-n "$dsr_cpr$dsr_ok" # Ask the terminal to report the cursor position initial_report=`tr -dc \;0123456789` # Expect ␛[42;10R␛[0n for y=42,x=10 echo-n "$text$dsr_cpr$dsr_ok" final_report=`tr -dc \;0123456789` cleanup # Compute and return initial_x - final_x How can I modify the script so that if tr hasn't read any input after 2 seconds, it is killed and the script executes the cleanup function?
What about this: foo=`{ { cat 1>&3; kill 0; } | { sleep 2; kill 0; } } 3>&1` That is: run the output-producing command and sleep in the same process group, a process group just for them. Whichever command returns first kills the whole process group. Would anyone wonder: Yes, the pipe is not used; it's bypassed using the redirections. The sole purpose of it is to have the shell run the two process in the same process group. As Gilles pointed out in his comment, this won't work in a shell script because the script process would be killed along with the two subprocesses. One way¹ to force a command to run in a separate process group is to start a new interactive shell: #!/bin/sh foo=`sh -ic '{ cat 1>&3; kill 0; } | { sleep 2; kill 0; }' 3>&1 2>/dev/null` [ -n "$foo" ] && echo got: "$foo" || echo timeouted But there might be caveats with this (e.g. when stdin is not a tty?). The stderr redirection is there to get rid of the "Terminated" message when the interactive shell is killed. Tested with zsh , bash and dash . But what about oldies? B98 suggests the following change, working on Mac OS X, with GNU bash 3.2.57, or Linux with dash: foo=`sh -ic 'exec 3>&1 2>/dev/null; { cat 1>&3; kill 0; } | { sleep 2; kill 0; }'` – 1. other than setsid which appears to be non-standard.
{ "source": [ "https://unix.stackexchange.com/questions/10698", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/885/" ] }
10,706
Okay, so I'm trying to set up a login for logging in with ssh, using a public key. First, I am using puttygen to generate an rsa-ssh2 public key using a passphrase. I followed the directions and generated the key. I saved the private key in its own file for putty and puttygen also generated the public key. In putty I set it up to use the private key file and use rsa-ssh2 etc... So I c/p'd the ssh2 public key output from puttygen and on the server, I put that into username/.ssh/authorized_keys So I tried to then login through putty and first it prompted me for my username instead of asking for the passphrase, and then when I entered it in (I tried both username and passphrase) it said my public key was invalid. I thought maybe I somehow c/p'd or formatted the info into authorized_keys wrong, so I went back and double checked. I made sure it was all on one line, properly spaced etc... I also checked in the following file /etc/ssh/ssh_config and I have the following: IdentityFile ~/.ssh/id_rsa I tried renaming my authorized_keys file to id_rsa and no joy. I tried changing that line in ssh_config to IdentityFile ~/.ssh/authorized_keys ...and no joy. I went back to thinking maybe my public key was malformed or that putty wasn't configured properly so I asked a friend to make a temp account for me on his server and add my public key and I was able to login through putty just fine...when I connected to his server, it prompted me for the passphrase for my key and logged me in just fine. So he suggested I look at that stuff above but no joy and he doesn't know what else to check soo...I guess I'm appealing to the experts here :P thoughts?
{ "source": [ "https://unix.stackexchange.com/questions/10706", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
10,723
I just started using Ubuntu as my main OS and I wanted to learn about things I should not do, and learn from the bad things people have done in the past. I came across these emails about horror stories of things UNIX & Linux sysadmins had done on their own systems when they were new. Many of them involved the use of the mknod command to both destroy and fix a problem. I've never heard of this command before and the man page within Ubuntu is not very helpful. So my question is, what is this command used for, and what are some examples where it is useful in day to day use?
mknod was originally used to create the character and block devices that populate /dev/ . Nowadays software like udev automatically creates and removes device nodes on the virtual filesystem when the corresponding hardware is detected by the kernel, but originally /dev was just a directory in / that was populated during install. So yes, in case of a near complete disaster causing the /dev virtual filesystem not to load and/or udev failing spectacularly, using mknod to painstakingly repopulate at least a rudimentary device tree to get something back up can be done... But yeah, that's sysadmin horror story time. Personally, I recommend a rescue USB stick or CD. Aside from creating named pipes, I can't think of a single possible day-to-day use for it that an end user would need to concern themselves with -- and even that is stretching the definition of 'day to day use'.
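For completeness, the classic invocations look like this; a sketch recreating /dev/null (a character device with major 1, minor 3 on Linux) and creating a named pipe, although mkfifo is the usual tool for the latter nowadays:
# mknod /dev/null c 1 3
# chmod 666 /dev/null
$ mknod mypipe p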
{ "source": [ "https://unix.stackexchange.com/questions/10723", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6330/" ] }
10,735
I need to allow a non-root user to run a server listening on port tcp/80. Is there any way to do this?
setcap 'cap_net_bind_service=+ep' /path/to/program this will work for specific processes. But to allow a particular user to bind to ports below 1024 you will have to add him to sudoers. Have a look at this discussion for more.
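To confirm that the capability was actually applied, getcap should list cap_net_bind_service for the binary (the path below is just a placeholder):
$ getcap /path/to/program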
{ "source": [ "https://unix.stackexchange.com/questions/10735", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4041/" ] }
10,736
After having some problems with my NAS, I switched to Debian/Lenny. I've managed to install and configure most of the software I need, but I've hit a brick wall with Samba. I can access the shares and read all the files, but if I try and send anything across it tells me there's not enough space. I'm using Windows, so I opened a command prompt and ran > dir \\MyNAS.home\Public 1 File(s) 44,814,336 bytes 12 Dir(s) 507, 998, 060, 544 bytes free The free space reported is correct (~500GB), so what's the problem? The following is my smb.conf: [global] workgroup = MEDUS realm = WORKGROUP netbios name = MyNAS map to guest = bad user server string = My Book Network Storage load printers = no printing = bsd printcap name = /dev/null disable spoolss = yes log file = /var/log/samba/log.smbd max log size = 50 dead time = 15 security = share auth methods = guest, sam_ignoredomain, winbind:ntdomain encrypt passwords = yes passdb backend = smbpasswd:/opt/etc/samba/smbpasswd create mask = 0664 directory mask = 0775 local master = no domain master = no preferred master = no socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536 min receivefile size = 128k use sendfile = yes dns proxy = no idmap uid = 10000-65000 idmap gid = 10000-65000 don't descend = /proc, /dev, /etc admin users = null passwords = yes guest account = nobody unix extensions = no [Public] path=/shares/internal/PUBLIC guest ok = yes read only = no dfree cache time = 10 dfree command = /opt/etc/samba/dfree The dfree command parameters I added myself, in an attempt to fix the problem (which didn't work). However, I suspect that the NAS is reporting the correct disk space anyway, as evident from the results of the command I used above. I've also tried playing around with the block size command, to no avail. I was able to create an empty text file on the share, and I repeatedly edited and saved the file -- it stopped at around 130 bytes. Does anyone have any idea what the problem might be?
{ "source": [ "https://unix.stackexchange.com/questions/10736", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6216/" ] }
10,745
(The linux equivalent of TimeThis.exe) Something like: timethis wget foo.com Receiving foo.com ... wget foo.com took 3 seconds.
Try just time instead of timethis . Although be aware that there's often a shell builtin version of time and a binary version, which will give results in different formats: $ time wget -q -O /dev/null https://unix.stackexchange.com/ real 0m0.178s user 0m0.003s sys 0m0.005s vs $ \time wget -q -O /dev/null https://unix.stackexchange.com/ 0.00user 0.00system 0:00.17elapsed 4%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (0major+613minor)pagefaults 0swaps Unlike your "timethis" program, you get three values back. That's broken down in What is "system time" when using "time" in command line , but in short: real means "wall-clock time", while user and sys show CPU clock time, split between regular code and system calls.
{ "source": [ "https://unix.stackexchange.com/questions/10745", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1594/" ] }
10,776
I have a partition /dev/sda1. Disk utility shows it has the capacity of 154 GB. df -h shows Filesystem Size Used Avail Use% Mounted on /dev/sda1 123G 104G 14G 89% / devtmpfs 1006M 280K 1006M 1% /dev none 1007M 276K 1006M 1% /dev/shm none 1007M 216K 1006M 1% /var/run none 1007M 0 1007M 0% /var/lock none 1007M 0 1007M 0% /lib/init/rw Why are the results different? Where are the missing 31 GB?
One reason the partition capacities can differ is that some space is reserved for root, in the event the partitions become full. If there is no space reserved for root, and the partitions become full, the system cannot function. However, this difference is usually only a few percent (5% by default on ext filesystems), so that does not explain the gap in your case. From the man page for df If an argument is the absolute file name of a disk device node containing a mounted file system, df shows the space available on that file system rather than on the file system containing the device node (which is always the root file system). So df is really showing the size of your filesystem, which is usually the size of the device, but this may not be true in your case. Does your filesystem extend over the whole of your partition? Does resize2fs /dev/sda1 make any difference? This command tries to increase your filesystem to cover the entire partition. But make sure you have a backup if you try this.
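Two quick checks before resizing, assuming an ext2/3/4 filesystem: look at the filesystem's own block count and reserved blocks, then compare with the partition size the kernel reports:
# tune2fs -l /dev/sda1 | grep -i -e 'block count' -e 'block size'
# cat /proc/partitions
tune2fs shows the filesystem size in blocks (Block count times Block size) along with the Reserved block count, while /proc/partitions lists the partition size in 1 KiB blocks.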
{ "source": [ "https://unix.stackexchange.com/questions/10776", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6215/" ] }
10,806
How do I configure Ctrl-Left and Ctrl-Right as previous/next word shortcuts for bash (currently alt-b and alt-f)?
The correct answer depends on which terminal you are using. For Gnome Terminal or recent versions of xterm, put this in ~/.inputrc: "\e[1;5C": forward-word "\e[1;5D": backward-word For PuTTY, put this in your ~/.inputrc: "\eOC": forward-word "\eOD": backward-word For rxvt, put this in your ~/.inputrc: "\eOc": forward-word "\eOd": backward-word You can probably get away with putting all of those together in ~/.inputrc. In all cases, you also need to put this in your ~/.bashrc (or ~/.zshrc): export INPUTRC=~/.inputrc If that doesn't work, or you have a different terminal, go to your terminal and type Ctrl + V Ctrl + -> . Then use that instead of "\e[1;5C" or "\eOC" above. Repeat for Ctrl + <- . Note that you need to write the keyboard escape sequences using the inputrc syntax , e.g. \C means control \e means escape (which appears as ^[ when typing it using Ctrl+V above)
{ "source": [ "https://unix.stackexchange.com/questions/10806", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1594/" ] }
10,814
I wanted to put my work files (code) in /usr/local/src , but I think it's already a folder that has some other semantic meaning. What is that? Should I put source code there, or is there a better place? Edit - I am the sole user and admin of the machine, and I don't want to use my home directory because it's on an NFS drive.
According to Linux FHS , /usr is the location where Distribution-based items are placed and /usr/local is the location where you'd place your own localized changes ( /usr/local will be empty after a base install). So, for example, if you wanted to recompile an Ubuntu package from source, their package manager would place the source for the package in /usr/src/{package dir} . If you downloaded a program not managed by your distribution and wanted to compile/install it, FHS dictates that you do that in /usr/local/src . EDIT: Short answer, yes, put your code in /usr/local/src .
{ "source": [ "https://unix.stackexchange.com/questions/10814", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1594/" ] }
10,823
I sometimes want to pipe the color-coded output from a process, e.g. grep... but when I pipe it to another process, e.g. sed, the color codes are lost... Is there some way to keep these codes intact? Here is an example which loses the colored output: echo barney | grep barney | sed -n 1,$\ p
Many programs that generate colored output detect if they're writing to a TTY, and switch off colors if they aren't. This is because color codes are annoying when you only want to capture the text, so they try to "do the right thing" automatically. The simplest way to capture color output from a program like that is to tell it to write color even though it's not connected to a TTY. You'll have to read the program's documentation to find out if it has that option. (e.g., grep has the --color=always option.) You could also use the expect script unbuffer to create a pseudo-tty like this: echo barney | unbuffer grep barney | sed -n 1,$\ p
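With GNU grep, the --color=always option mentioned above is usually all you need; applied to the original example that would look something like:
$ echo barney | grep --color=always barney | sed -n '1,$p'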
{ "source": [ "https://unix.stackexchange.com/questions/10823", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2343/" ] }