Dataset columns: source_id (int64, range 1–4.64M), question (string, length 0–28.4k), response (string, length 0–28.8k), metadata (dict).
1,960
Why would you use VNC (or for that matter NX) instead of just using ssh -X (-Y)? I read that VNC uses less bandwidth, but are there any other differences or advantages between the respective tools?
Aside from bandwidth and latency issues (which can vary a bit), the big difference is the functionality each provides. VNC exports a whole session, desktop and all, while ssh runs a single program and shows its windows on your workstation. The VNC server exports a session that survives even when you disconnect your screen, and you can reconnect to it later with all the windows still open. This is not possible with an ssh X tunnel: when your X server dies, the windows go away.
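For illustration, a minimal sketch of the two approaches; the host name, user name and the particular VNC implementation (e.g. TigerVNC) are assumptions, not part of the question:

ssh -X user@remotehost xterm    # single program over an X11 tunnel; windows die with the connection
ssh user@remotehost vncserver :1    # whole desktop session on display :1 that survives disconnects
vncviewer remotehost:1              # attach to it, detach, and reattach later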
{ "source": [ "https://unix.stackexchange.com/questions/1960", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/991/" ] }
1,974
Using bash, how can I make the pc speaker beep? Something like echo 'beepsound' > /dev/pcspkr would be nice.
I usually use the little utility beep, installed on many systems. This command will try different approaches to create a system sound. From the beep manpage, there are three ways of creating a sound:

1. The traditional method of producing a beep in a shell script is to write an ASCII BEL (\007) character to standard output, by means of a shell command such as echo -ne '\007'. This only works if the calling shell's standard output is currently directed to a terminal device of some sort; if not, the beep will produce no sound and might even cause unwanted corruption in whatever file the output is directed to.

2. A slightly more reliable method is to open /dev/tty and send your BEL character there. This is robust against I/O redirection, but still fails in the case where the shell script wishing to generate a beep does not have a controlling terminal, for example because it is run from an X window manager.

3. A third approach is to connect to your X display and send it a bell command. This does not depend on a Unix terminal device, but does (of course) require an X display.

beep will simply try these three methods.
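As a rough sketch, the three methods plus the beep utility look like this; the -f/-l pitch and length flags and the xkbbell tool name are assumptions based on common Linux packages, so check your local manpages:

echo -ne '\007'              # 1. ASCII BEL to standard output (needs a terminal on stdout)
echo -ne '\007' > /dev/tty   # 2. BEL sent to the controlling terminal (survives redirection)
xkbbell                      # 3. ring the bell via the X display (xkbbell comes with xkbutils on many systems)
beep -f 750 -l 200           # let beep pick the best method itself; 750 Hz for 200 ms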
{ "source": [ "https://unix.stackexchange.com/questions/1974", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1327/" ] }
1,993
I have a package that's installed on my PC as a dependency of another package. I would like to have the package marked as explicitly installed, but without actually re-installing it or downloading any files. Is this possible? I do not have any packages cached in /var/cache/pacman/pkg, which is one of the reasons I want to change the package's status without a re-install. Even if I had the packages cached, running pacman -S <package> would mean the whole install process is run, which I also want to avoid.
I found the answer on the Arch Linux forums. Since pacman 3.4 you can use # pacman -D to modify only the database. So: # pacman -D --asexplicit <pkgs> will mark <pkgs> as explicitly installed. The pacman man page further describes this command.
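A hedged example (the package name is a placeholder):

sudo pacman -D --asexplicit somepackage       # mark as explicitly installed; touches only the local database
pacman -Qi somepackage | grep 'Install Reason'  # verify the new install reason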
{ "source": [ "https://unix.stackexchange.com/questions/1993", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1327/" ] }
2,007
I want to fetch a user's login time and login date. Is there any Unix command that provides a user's login date and login time? I want to do this in a shell script: the username is accepted from the end user, and after checking that the user exists, I would like to fetch that user's login date and login time into separate variables and then display them using the echo command.
For past logins: last "$USER_NAME" Also, the command who lists current logins. If you're looking for the date of the user's last login, some systems provide it directly, for example lastlog -u "$USER_NAME" on Linux or lastlogin "$USER_NAME" on FreeBSD. It's also available in the output of finger , but not in an easy-to-parse form. In any case, it's available in the output of last (on many unix variants, last -n 1 "$USER_NAME" shows the last login; otherwise you can do last "$USER_NAME" | head -n 1 ). Note that last login may not correspond to the last logout (e.g. a user remained connected from one origin for a long time and made a quick network login recently).
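A minimal script sketch along those lines; the awk field numbers are an assumption, since the column layout of last differs between systems, so adjust them for your platform:

#!/bin/sh
printf 'Enter username: '
read user
if ! id "$user" >/dev/null 2>&1; then
    echo "User $user does not exist" >&2
    exit 1
fi
line=$(last -n 1 "$user" | head -n 1)
login_date=$(echo "$line" | awk '{print $4, $5, $6}')   # e.g. "Mon Sep 20"
login_time=$(echo "$line" | awk '{print $7}')           # e.g. "11:43"
echo "Login date: $login_date"
echo "Login time: $login_time"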
{ "source": [ "https://unix.stackexchange.com/questions/2007", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1608/" ] }
2,010
Sometimes my SSH session disconnects with a Write failed: Broken pipe message. What does it mean? And how can I keep my session open? I know about screen, but that's not the answer I'm looking for. I think this is an sshd config option.
It's possible that your server closes connections that are idle for too long. You can update either your client (ServerAliveInterval) or your server (ClientAliveInterval).

ServerAliveInterval: Sets a timeout interval in seconds after which if no data has been received from the server, ssh(1) will send a message through the encrypted channel to request a response from the server. The default is 0, indicating that these messages will not be sent to the server. This option applies to protocol version 2 only.

ClientAliveInterval: Sets a timeout interval in seconds after which if no data has been received from the client, sshd(8) will send a message through the encrypted channel to request a response from the client. The default is 0, indicating that these messages will not be sent to the client. This option applies to protocol version 2 only.

To update your server (and restart your sshd):
echo "ClientAliveInterval 60" | sudo tee -a /etc/ssh/sshd_config

Or client-side:
echo "ServerAliveInterval 60" >> ~/.ssh/config
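How to restart sshd varies by distribution; two common, hedged examples (the service may be named ssh or sshd on your system):

sudo systemctl restart sshd    # systemd-based systems
sudo service ssh restart       # older Debian/Ubuntu-style init scripts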
{ "source": [ "https://unix.stackexchange.com/questions/2010", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1431/" ] }
2,089
Does it depend on what file system I use? For example, ext2/ext3/ext4, but also what happens when I insert one of those "Joliet" CD-ROMs with ISO 9660? I've heard that POSIX contains some sort of spec for the charset encoding of filenames? Essentially, what I wonder is: if I got a UTF-8 encoded filename, what processing/conversion do I need to do before I pass it to a file I/O API in Linux?
As noted by others, there isn't really an answer to this: filenames and paths do not have an encoding; the OS only deals with sequences of bytes. Individual applications may choose to interpret them as being encoded in some way, but this varies. Specifically, Glib (used by Gtk+ apps) assumes that all file names are UTF-8 encoded, regardless of the user's locale. This may be overridden with the environment variables G_FILENAME_ENCODING and G_BROKEN_FILENAMES. On the other hand, Qt defaults to assuming that all file names are encoded in the current user's locale. An individual application may choose to override this assumption, though I do not know of any that do, and there is no external override switch. Modern Linux distributions are set up such that all users are using UTF-8 locales and paths on foreign filesystem mounts are translated to UTF-8, so this difference in strategies generally has no effect. However, if you really want to be safe, you cannot assume any structure about filenames beyond "NUL-terminated, '/'-delimited sequence of bytes". (Also note: locale may vary by process. Two different processes run by the same user may be in different locales simply by having different environment variables set.)
{ "source": [ "https://unix.stackexchange.com/questions/2089", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1759/" ] }
2,107
In the bash terminal I can hit Control + Z to suspend any running process... then I can type fg to resume the process. Is it possible to suspend a process if I only have its PID? And if so, what command should I use? I'm looking for something like: suspend-process $PID_OF_PROCESS and then to resume it with resume-process $PID_OF_PROCESS
You can use kill to stop the process. For a 'polite' stop to the process (prefer this for normal use), send SIGTSTP: kill -TSTP [pid] For a 'hard' stop, send SIGSTOP: kill -STOP [pid] Note that if the process you are trying to stop by PID is in your shell's job table, it may remain listed there as stopped until the process is fg'd again. To resume execution of the process, send SIGCONT: kill -CONT [pid]
{ "source": [ "https://unix.stackexchange.com/questions/2107", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1327/" ] }
2,126
I notice a weird (well, according to me) thing about passwords. For example, if I type an incorrect password during login, there will be a few seconds' delay before the system tells me so. When I try to sudo with a wrong password I also have to wait before the shell says "Sorry, try again". I wonder why it takes so long to "recognize" an incorrect password? This has been seen on several distributions I use (and even OS X), so I think it's not a distribution-specific thing.
This is a security measure; it's not actually taking that long to check the password. It addresses two vulnerabilities:

1. It throttles login attempts, meaning someone can't hammer the system as fast as it can go trying to crack it (a million attempts a second? I don't know).

2. If the system rejected you as soon as it determined your credentials were incorrect, you could use the amount of time it took to invalidate your credentials to help guess whether part of your credentials were correct, dramatically reducing the guessing time.

To prevent these two things the system simply takes a certain amount of time to respond. I think you can configure the wait time with PAM (see Michael's answer). Security Engineering (3rd ed., Amazon | 2nd ed., free) gives a much better explanation of these problems. See chapter 2 (PDF) — particularly §2.4 and §2.5.3.3.
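On Linux-PAM systems the delay can typically be tuned; a hedged sketch (module availability and file locations vary by distribution):

# /etc/login.defs — delay in seconds after a failed login
FAIL_DELAY      3

# or per-service via the pam_faildelay module (delay in microseconds), e.g. in /etc/pam.d/login:
auth optional pam_faildelay.so delay=3000000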
{ "source": [ "https://unix.stackexchange.com/questions/2126", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/250/" ] }
2,150
I have two big files (6GB each). They are unsorted, with linefeeds ( \n ) as separators. How can I diff them? It should take under 24h.
The most obvious answer is just to use the diff command, and it is probably a good idea to add the --speed-large-files parameter:

diff --speed-large-files a.file b.file

You mention unsorted files, so maybe you need to sort the files first:

sort a.file > a.file.sorted
sort b.file > b.file.sorted
diff --speed-large-files a.file.sorted b.file.sorted

You could save creating an extra output file by piping the second sort's output directly into diff:

sort a.file > a.file.sorted
sort b.file | diff --speed-large-files a.file.sorted -

Obviously these will run best on a system with plenty of available memory, and you will likely need plenty of free disk space too. It wasn't clear from your question whether you have tried these before. If so, it would be helpful to know what went wrong (took too long, etc.). I have always found that the stock sort and diff commands tend to do at least as well as custom commands unless there are some very domain-specific properties of the files that make it possible to do things differently.
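If the sorts themselves become the bottleneck, GNU sort's buffer-size and temporary-directory options can help; the 4G value and the /scratch path below are just placeholders:

sort -S 4G -T /scratch a.file > a.file.sorted
sort -S 4G -T /scratch b.file | diff --speed-large-files a.file.sorted -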
{ "source": [ "https://unix.stackexchange.com/questions/2150", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1797/" ] }
2,161
I am trying to create a directory that will house all and only my PDFs compiled from LaTeX. I like keeping each project in a separate folder, all housed in a big folder called LaTeX. So I tried running: rsync -avn *.pdf ~/LaTeX/ ~/Output/ which should find all the PDFs in ~/LaTeX/ and transfer them to the output folder. This doesn't work. It tells me it's found no matches for "*.pdf". If I leave out this filter, the command lists all the files in all the project folders under LaTeX. So it's a problem with the *.pdf filter. I tried replacing ~/ with the full path to my home directory, but that didn't have an effect. I'm using zsh. I tried doing the same thing in bash, and even with the filter it listed every single file in every subdirectory... What's going on here? Why isn't rsync understanding my PDF-only filter? OK, so update: now I'm trying rsync -avn --include="*/" --include="*.pdf" LaTeX/ Output/ and this gives me the whole file list. I guess because everything matches the first pattern...
TL;DR: rsync -am --include='*.pdf' --include='*/' --exclude='*' ~/LaTeX/ ~/Output/

Rsync copies the source(s) to the destination. If you pass *.pdf as sources, the shell expands this to the list of files with the .pdf extension in the current directory. No recursive traversal happens because you didn't pass any directory as a source. So you need to run rsync -a ~/LaTeX/ ~/Output/, but with a filter to tell rsync to copy .pdf files only. Rsync's filter rules can seem daunting when you read the manual, but you can construct many examples with just a few simple rules.

Inclusions and exclusions:
- Excluding files by name or by location is easy: --exclude=*~, --exclude=/some/relative/location (relative to the source argument, e.g. this excludes ~/LaTeX/some/relative/location).
- If you only want to match a few files or locations, include them, include every directory leading to them (for example with --include=*/), then exclude the rest with --exclude='*'. This is because:
  - If you exclude a directory, this excludes everything below it. The excluded files won't be considered at all.
  - If you include a directory, this doesn't automatically include its contents. In recent versions, --include='directory/***' will do that.
  - For each file, the first matching rule applies (and anything never matched is included).

Patterns:
- If a pattern doesn't contain a /, it applies to the file name sans directory.
- If a pattern ends with /, it applies to directories only.
- If a pattern starts with /, it applies to the whole path from the directory that was passed as an argument to rsync.
- * matches any substring of a single directory component (i.e. never matches /); ** matches any path substring.
- If a source argument ends with a /, its contents are copied (rsync -r a/ b creates b/foo for every a/foo). Otherwise the directory itself is copied (rsync -r a b creates b/a).

Thus here we need to include *.pdf, include directories containing them, and exclude everything else:

rsync -a --include='*.pdf' --include='*/' --exclude='*' ~/LaTeX/ ~/Output/

Note that this copies all directories, even the ones that contain no matching file or subdirectory containing one. This can be avoided with the --prune-empty-dirs option (it's not a universal solution since you then can't copy a directory even by matching it explicitly, but that's a rare requirement):

rsync -am --include='*.pdf' --include='*/' --exclude='*' ~/LaTeX/ ~/Output/
{ "source": [ "https://unix.stackexchange.com/questions/2161", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12/" ] }
2,179
After installing new software, an already opened terminal with zsh won't know about the new commands and cannot generate auto-complete for those. Apparently opening a new terminal fixes the problem, but can the index (or whatever you call it) be rebuilt so that auto-complete will work in the old terminal? I tried compinit, but that didn't help. Also, is there a way that is not shell-dependent? It would be nice to have a way to verify the answer as well (other than uninstalling something and reinstalling it). What I mean is: after typing a few characters of a command's name, I can press Tab, and zsh should do the rest to pull up the full name.
To rebuild the cache of executable commands, use rehash or hash -rf. Make sure you haven't unset the hash_list_all option (it causes even fewer disk accesses but makes the cache update less often). If you don't want to have to type a command, you can tell zsh not to trust its cache when completing by putting the following line in your ~/.zshrc¹: zstyle ":completion:*:commands" rehash 1 There is a performance cost, but it is negligible on a typical desktop setting today. (It isn't if you have $PATH on NFS, or a RAM-starved system.) The zstyle command itself is documented in the zshmodules man page. The style values are documented in the zshcompsys and zshcompwid man pages, or you can read the source (here, of the _command_names function). If you wanted some readable documentation… if you find some, let me know! ¹ requires zsh ≥ 4.3.3, thanks Chris Johnsen
{ "source": [ "https://unix.stackexchange.com/questions/2179", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/250/" ] }
2,207
It seems to me a swap file is more flexible.
A swap file is more flexible but also more fallible than a swap partition. A filesystem error could damage the swap file. A swap file can be a pain for the administrator, since the file can't be moved or deleted. A swap file can't be used for hibernation. A swap file was slightly slower in the past, though the difference is negligible nowadays. The advantage of a swap file is not having to decide the size in advance. However, under Linux, you still can't resize a swap file online: you have to unregister it, resize, then reregister (or create a different file and remove the old one). So there isn't that much benefit to a swap file under Linux, compared to a swap partition. It's mainly useful when you temporarily need more virtual memory, rather than as a permanent fixture.
{ "source": [ "https://unix.stackexchange.com/questions/2207", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1875/" ] }
2,212
A couple of weeks ago I attended a talk on Git by someone who seemed to be from a Windows background. I say "seemed to be" because he kept saying "dash" when referring to command-line options. I then recalled something that I found curious in my early days of learning Linux; that is, when referring to options, the resident Unix-heads always said "minus". That is: rm -rf /var/tmp/bogus/junk Would be said "arr em minus arr ef" as opposed to "arr em dash arr ef". Why is this?
Two of the most important UNIX books, The UNIX Programming Environment and The C Programming Language, both refer to it as minus. The Unix Programming Environment, page 13: Options follow the command name on the command line, and are usually made up of an initial minus sign (-) and a single letter. The C Programming Language, 2nd Edition, page 116: A common convention for C programs on UNIX systems is that an argument that begins with a minus sign introduces an optional flag. Many UNIX users will have read one or both of these books, so may have taken the terminology from there. Calling it a minus makes sense, because the character you are typing is a hyphen-minus (-). A dash (—) is longer. The reason for saying "minus" rather than "hyphen" is probably twofold: fewer people know what a hyphen is, and some utilities accept options starting with +, so it's logical to think of plus and minus. Also, many word processing programs convert a double hyphen-minus (--) into a dash (—), so saying "dash" when you mean "minus" could lead to confusion when discussing GNU long options, e.g. --help.
{ "source": [ "https://unix.stackexchange.com/questions/2212", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1804/" ] }
2,244
I have a large JSON file that is on one line, and I want to use the command line to be able to count the number of occurrences of a word in the file. How can I do that?
$ tr ' ' '\n' < FILE | grep WORD | wc -l Where tr replaces spaces with newlines, grep filters all resulting lines matching WORD and wc counts the remaining ones. One can even save the wc part using the -c option of grep: $ tr ' ' '\n' < FILE | grep -c WORD The -c option is defined by POSIX. If it is not guaranteed that there are spaces between the words, you have to use some other character (as delimiter) to replace. For example alternative tr parts are tr '"' '\n' or tr "'" '\n' if you want to replace double or single quotes. Of course, you can also use tr to replace multiple characters at once (think different kinds of whitespace and punctuation). In case you need to count WORD but not prefixWORD, WORDsuffix or prefixWORDsuffix, you can enclose the WORD pattern in begin/end-of-line markers: grep -c '^WORD$' Which is equivalent to word-begin/end markers, in our context: grep -c '\<WORD\>'
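With GNU grep you can also skip the tr step entirely and count matches directly; a hedged alternative (the -o and -w flags are extensions, not strictly POSIX, but present in GNU and BSD grep):

grep -ow 'WORD' FILE | wc -l    # -o prints each match on its own line, -w matches whole words only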
{ "source": [ "https://unix.stackexchange.com/questions/2244", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/698/" ] }
2,273
I have two *.avi files: sequence1.avi sequence2.avi How do I merge these two files using a command-line or GUI?
There's a dedicated tool to do this, avimerge: avimerge -o cd.avi -i cd1.avi cd2.avi If it's not installed, install the transcode package; avimerge is part of it: https://manpages.debian.org/jessie/transcode/avimerge.1.en.html http://manpages.ubuntu.com/manpages/bionic/man1/avimerge.1.html
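If avimerge isn't available, mencoder (from the MPlayer package) can usually do the same stream copy; a hedged example with the file names from the question:

mencoder -oac copy -ovc copy -o merged.avi sequence1.avi sequence2.avi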
{ "source": [ "https://unix.stackexchange.com/questions/2273", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1327/" ] }
2,291
I'm on a CentOS machine. I updated and installed some packages a few weeks back, but I don't remember the name of every package or the names of every dependency. I used yum . Can I list the packages on my system by the date they were last installed or updated?
To list all packages and their install dates, latest first: rpm -qa --last
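Since the question mentions yum, note that reasonably recent yum versions also keep their own transaction history; a hedged example (the transaction ID is a placeholder):

yum history                 # list past transactions with dates
yum history info 42         # details of one transaction
rpm -qa --last | head -20   # just the 20 most recently installed packages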
{ "source": [ "https://unix.stackexchange.com/questions/2291", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4/" ] }
2,300
How can I check what hardware I have? (With BIOS version etc.)
If your system supports procfs, you can get a lot of information about your running system. It's an interface to the kernel's data structures, so it will also contain information about your hardware. For example, to get details about the CPU you could run: cat /proc/cpuinfo For more information see man proc. More hardware information can be obtained through the kernel ring buffer log messages with dmesg. For example, this will give you a short summary of recently attached hardware and how it is integrated into the system. These are some basic "interfaces" you will have on every distribution to obtain hardware information. Other small tools to gather hardware information are: lspci - PCI hardware lsusb - USB hardware Depending on your distribution you will also have access to one of these two tools to get a detailed overview of your hardware configuration: lshw hwinfo (SuSE-specific but available on other distributions too) The "gate" to your hardware is through the Desktop Management Interface (DMI). This framework exposes your system information to software and is used by lshw, for example. A tool to interact directly with the DMI is dmidecode, available as a package on most distributions. It comes with biosdecode, which also shows you the complete available BIOS information.
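A hedged quick-start sequence using the tools above (run as root; output formats differ from machine to machine):

sudo dmidecode -t bios      # BIOS vendor, version and release date
sudo dmidecode -t system    # manufacturer, product name, serial number
lspci -v                    # PCI devices with driver details
lsusb                       # USB devices
sudo lshw -short            # one-line-per-device hardware summary, if lshw is installed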
{ "source": [ "https://unix.stackexchange.com/questions/2300", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1924/" ] }
2,342
I've noticed that throughout the Internet, within forums and blog posts, Unix always has a * in the word, whether it is *nix or Un*x, as I noticed at the welcoming banner at the Unix StackExchange site. Why is this like this?
Most of these answers are far too late to the game, as the * usage was used on Usenet and elsewhere to refer to the multiplicity of Unixoid systems. This was significantly before "the suits" even knew what was happening in cyberspace and didn't understand it if they did. I found a reference in the comp.risks archive dated May 1987 where the title Concerning UN*X (in)security was already so pedestrian as to warrant no explanation. By this time Xenix had been long on the market as were various "*ix" based variants which were decidedly "unix" but not "Unix".
{ "source": [ "https://unix.stackexchange.com/questions/2342", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/900/" ] }
2,366
Is there any program or script available for decrypting the Linux shadow file?
Passwords on a Linux system are not encrypted, they are hashed, which is a huge difference. It is not possible to reverse a hash function by definition. For further information see the Hash Wikipedia entry. Which hash function is used depends on your system configuration; MD5 and blowfish are common examples of hash functions used. So the "real" password of a user is never stored on the system. If you log in, the string you enter as the password will be hashed and checked against your /etc/shadow file. If it matches, you obviously entered the correct password. Anyway, there are still some attack vectors against the password hashes. You could keep a dictionary of popular passwords and try them automatically; there are a lot of dictionaries available on the internet. Another approach would be to just try out all possible combinations of characters, which will consume a huge amount of time. This is known as a brute force attack. Rainbow tables are another nice attack vector against hashes. The idea behind this concept is to simply precalculate all possible hashes and then just look up a hash in the tables to find the corresponding password. There are several distributed computing projects to create such tables; the size differs depending on the characters used and is mostly several TB. To minimize the risk of such lookup tables it is common practice, and the default behaviour on Unix/Linux, to add a so-called "salt" to the password hash. You hash your password, add a random salt value to the hash and hash this new string again. You need to save the new hash and the salt to be able to check whether an entered value is the correct password. The huge advantage of this method is that you would have to create new lookup tables for each unique salt. A popular tool to execute dictionary or brute force attacks against user passwords of different operating systems is John the Ripper (or JtR). See the project homepage for more details: John the Ripper is a fast password cracker, currently available for many flavors of Unix, Windows, DOS, BeOS, and OpenVMS. Its primary purpose is to detect weak Unix passwords.
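For illustration, the classic John the Ripper workflow against a local shadow file looks roughly like this (run as root; the output file name is arbitrary):

unshadow /etc/passwd /etc/shadow > mypasswd   # merge passwd and shadow into JtR's input format
john mypasswd                                  # run the default cracking modes
john --show mypasswd                           # display any passwords found so far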
{ "source": [ "https://unix.stackexchange.com/questions/2366", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1903/" ] }
2,386
I have Eucalyptus installed on my Linux machine, and I've noticed that for processes owned by the eucalyptus user, ps reports the userid instead of the username. For example:

$ sudo -i -u eucalyptus
$ ps u
USER       PID %CPU %MEM   VSZ  RSS TTY   STAT START  TIME COMMAND
107      29764  0.0  0.0 19376 2104 pts/2 S    11:43  0:00 -bash
107      30198  0.0  0.0 15256 1180 pts/2 R+   11:44  0:00 ps u

What would cause this to happen? Note that there's a proper entry in /etc/passwd:

$ grep eucalyptus /etc/passwd
eucalyptus:x:107:115::/var/lib/eucalyptus:/bin/bash

Also note that ls properly reports the ownership of files by the eucalyptus account:

$ touch foo
$ ls -l foo
-rw-r--r-- 1 eucalyptus eucalyptus 0 2010-09-23 11:47 foo
ps uses the uid when the username is longer than 8 characters.
{ "source": [ "https://unix.stackexchange.com/questions/2386", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/663/" ] }
2,432
When I open up my terminal it says "you have mail"; does anyone have any idea why? I am running OS X, but since it too is based on Unix and relies on files such as bashrc, bash_profile etc., I thought somebody here might know, and I'm not sure it's a platform-specific problem!
It sounds like something has sent mail on (and to) the machine using the local mail exchanger. Most likely the email is an automated message from some installed package. Once you log in, type mail on the terminal to read and (presumably) delete the relevant mail. (Inside mail , use ? to find out what the commands are.) Once you've read or deleted any unread mail, you won't see the "You have mail" message again until/unless something else sends mail in the same way. Odds are once you know what's sending you the mail, you can find a configuration option to change where it sends it to.
{ "source": [ "https://unix.stackexchange.com/questions/2432", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
2,434
I've been a Linux user for a while, and I have a pretty decent understanding of most of the common command-line utilities. However, ones that come up again and again in relation to programming are grep, awk, and sed. About the only thing I've used grep for is piping stuff into it to find files in log files, the output of ps, etc. I haven't used awk or sed at all. Are there any good tutorials for these utilities?
AWK is particularly well suited for tabular data and has a lower learning curve than some alternatives.
- AWK: A Tutorial and Introduction
- An AWK Primer (alt link)
- RegularExpressions.info
- sed tutorial
- grep tutorial
- info sed, info grep and info awk or info gawk
{ "source": [ "https://unix.stackexchange.com/questions/2434", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/131/" ] }
2,445
In Ubuntu, I want to copy a big file from my hard drive to a removable drive by rsync . For some other reason, the operation cannot complete in a single run. So I am trying to figure out how to use rsync to resume copying the file from where it left off last time. I have tried to use the option --partial or --inplace , but together with --progress , I found rsync with --partial or --inplace actually starts from the beginning instead of from what was left last time. Manually stopping rsync early and checking the size of the received file also confirmed what I found. But with --append , rsync starts from what was left last time. I am confused as I saw on the man page --partial , --inplace , and --append seem to relate to resuming copying from what was left last time. Is someone able to explain the difference? Why don't --partial or --inplace work for resuming copying? Is it true that for resuming copying, rsync has to work with the --append option? Also, if a partial file was left by mv or cp , not by rsync, will rsync --append correctly resume copying the file?
To resume an interrupted copy, you should use rsync --append. From the man page's explanation of --append: This causes rsync to update a file by appending data onto the end of the file, which presumes that the data that already exists on the receiving side is identical with the start of the file on the sending side. [...] Implies --inplace, [...] Option --inplace makes rsync (over)write the destination file contents directly; without --inplace, rsync would: 1. create a new file with a temporary name, 2. copy updated content into it, 3. swap it with the destination file, and 4. finally delete the old copy of the destination file. The normal mode of operation mainly prevents conflicts with applications that might have the destination file open, and a few other mishaps which are duly listed in the rsync manpage. Note that, if a copy/update operation fails in steps 1–3 above, rsync will delete the temporary destination file; the --partial option disables this behavior and rsync will leave partially-transferred temporary files on the destination filesystem. Thus, resuming a single file copy operation will not gain much unless you called the first rsync with --partial or --partial-dir (same effect as --partial, but it additionally instructs rsync to create all temporary files in a specific directory).
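Putting it together, a hedged example of a resumable single-file copy (the file and mount-point names are placeholders):

rsync --append --progress bigfile.iso /media/removable/
# if the copy is interrupted, simply re-run the same command;
# the existing partial file on the destination is extended rather than re-sent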
{ "source": [ "https://unix.stackexchange.com/questions/2445", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
2,447
Assume that we have two disks, one master SATA and one master ATA. How will they show up in /dev?
Depending on your SATA driver and your distribution's configuration, they might show up as /dev/hda and /dev/hdb , or /dev/hda and /dev/sda , or /dev/sda and /dev/sdb . Distributions and drivers are moving towards having everything hard disk called sd? , but PATA drivers traditionally used hd? and a few SATA drivers also did. The device names are determined by the udev configuration. For example, on Ubuntu 10.04, the following lines from /lib/udev/rules.d/60-persistent-storage.rules make all ATA hard disks appear as /dev/sd* and all ATA CD drives appear as /dev/sr* : # ATA devices with their own "ata" kernel subsystem KERNEL=="sd*[!0-9]|sr*", ENV{ID_SERIAL}!="?*", SUBSYSTEMS=="ata", IMPORT{program}="ata_id --export $tempnode" # ATA devices using the "scsi" subsystem KERNEL=="sd*[!0-9]|sr*", ENV{ID_SERIAL}!="?*", SUBSYSTEMS=="scsi", ATTRS{vendor}=="ATA", IMPORT{program}="ata_id --export $tempnode"
{ "source": [ "https://unix.stackexchange.com/questions/2447", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1924/" ] }
2,459
How can I create a file that even the root user can't delete?
Simple answer: you can't; root can do everything. You can set the "i" (immutable) attribute with chattr (at least if you are on ext{2,3,4}), which makes a file unchangeable, but root can just unset the attribute and delete the file anyway. More complex (and ugly, hackish) workaround: put the directory you want to be unchangeable for root on a remote server and mount it via NFS or SMB. If the server does not offer write permissions, that locks out the local root account. Of course the local root account could still copy the files over locally, unmount the remote share, put the copy in place and change that. You cannot lock root out from deleting your files. If you cannot trust your root to keep files intact, you have a social problem, not a technical one.
{ "source": [ "https://unix.stackexchange.com/questions/2459", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1903/" ] }
2,464
I just know that ls -t and ls -f give different sortings of the files and subdirectories under a directory. What are the differences between timestamp, modification time, and created time of a file? How do I get and change these kinds of information with commands? In terms of which kind of information do people say a file is "newer" than another? What kinds of changes to this information will not make the file "different"? For example, I saw someone write: By default, the rsync program only looks to see if the files are different in size and timestamp. It doesn't care which file is newer; if it is different, it gets overwritten. You can pass the '--update' flag to rsync which will cause it to skip files on the destination if they are newer than the file on the source, but only so long as they are the same type of file. What this means is that if, for example, the source file is a regular file and the destination is a symlink, the destination file will be overwritten, regardless of timestamp. On a side note, does "file type" here mean only regular file and symlink, not types such as pdf, jpg, htm, txt etc.?
There are 3 kinds of "timestamps":
- Access - the last time the file was read
- Modify - the last time the file was modified (content has been modified)
- Change - the last time metadata of the file was changed (e.g. permissions)

To display this information, you can use stat, which is part of the coreutils. stat will also show you some more information like the device, inodes, links, etc. Remember that this sort of information depends highly on the filesystem and mount options. For example, if you mount a partition with the noatime option, no access information will be written. A utility to change the timestamps is touch. There are arguments to decide which timestamp to change (e.g. -a for access time, -m for modification time) and to influence the parsing of a newly given timestamp. See man touch for more details. touch can come in handy in combination with cp -u ("copy only when the SOURCE file is newer than the destination file or when the destination file is missing") or for the creation of empty marker files.
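For illustration (GNU coreutils assumed; the file name and date string are placeholders):

stat myfile                             # show access, modify and change times
touch -a myfile                         # update only the access time to 'now'
touch -m -d '2010-09-23 12:00' myfile   # set the modification time explicitly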
{ "source": [ "https://unix.stackexchange.com/questions/2464", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
2,523
I'm looking for a clean and easy way to share a tmux session with another user on the same machine. I've tried the -S socket-path option, but it requires opening up all permissions of the socket-path before someone else can connect to the session. It works, but it's a little cumbersome. For example: # Me $ tmux -S /tmp/pair $ chmod 777 /tmp/pair # Another user $ tmux -S /tmp/pair attach This works, but both users now share the same tmux configuration (the configuration of the user who initiated the session). Is there a way to allow the two users to use their own tmux config and their own individual tmux key bindings? For bonus points, ideally, it would also be nice to give read-only access of the tmux session to other users.
From https://github.com/zolrath/wemux : wemux enhances tmux to make multi-user terminal multiplexing both easier and more powerful. It allows users to host a wemux server and have clients join in either: Mirror Mode gives clients (another SSH user on your machine) read-only access to the session, allowing them to see you work, or Pair Mode allows the client and yourself to work in the same terminal (shared cursor) Rogue Mode allows the client to pair or work independently in another window (separate cursors) in the same tmux session. It features multi-server support as well as user listing and notifications when users attach/detach. It is a shellscript wrapper over tmux - no compiling necessary.
{ "source": [ "https://unix.stackexchange.com/questions/2523", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2038/" ] }
2,544
I have a local mirror (created with debmirror), and when I run apt-get update after a few days, I get this: E: Release file expired, ignoring file:/home/wena/.repo_bin/dists/sid/Release (invalid since 14h 31min 45s) How do I work around that?
Add this to the command: -o Acquire::Check-Valid-Until=false For example: sudo apt-get -o Acquire::Check-Valid-Until=false update
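To make the change permanent instead of per-invocation, you can drop it into an apt configuration snippet; a hedged example (the file name is arbitrary):

# /etc/apt/apt.conf.d/99no-check-valid-until
Acquire::Check-Valid-Until "false";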
{ "source": [ "https://unix.stackexchange.com/questions/2544", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
2,577
When moving large directories using mv , is there a way to view the progress (%)? The cp command on gentoo had a -g switch that showed the progress.
You can build patched cp and mv binaries which then both support the -g switch to show progress. There are instructions and patches at this page. However: the page instructs you to do $ sudo cp src/cp /usr/bin/cp $ sudo cp src/mv /usr/bin/mv which overwrites the original cp and mv. This has two disadvantages: firstly, if an updated coreutils package arrives on your system, they are overwritten. Secondly, if the patched version has a problem, it might break scripts relying on standard cp and mv. I would rather do something like this: $ sudo cp src/cp /usr/local/bin/cpg $ sudo cp src/mv /usr/local/bin/mvg which copies the files to /usr/local/bin, which is intended for user-compiled programs, and gives them a different name. So when you want a progress bar, you say mvg -g bigfile /mnt/backup and use mv normally. You can also do alias mvg="/usr/local/bin/mvg -g" and then you only need to say mvg bigfile /mnt/backup to get the progress bar directly.
{ "source": [ "https://unix.stackexchange.com/questions/2577", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2088/" ] }
2,588
The situation is, I have an MP3 player mpg321 that accepts a list of files as argument. I keep my music in a directory named "music", in which there are a few more directories. I just want to play all of them, so I run the program with mpg321 $(find /music -iname "*\.mp3") . The problem is, some file names have whitespace in them, and the program breaks those names into smaller parts and complains about missing files. Wrapping the result of find in quotes mpg321 "$(find /music -iname "*\.mp3")" does not help because all will become one big "file name", which is obviously not found. How can I do this then? If that matters, I am using bash , but will be switching to zsh soon.
Try using find's -print0 or -printf option in combination with xargs like this: find /music -iname "*\.mp3" -print0 | xargs -0 mpg321 How this works is explained by find's manual page : -print0 True; print the full file name on the standard output, followed by a null character (instead of the newline character that -print uses). This allows file names that contain newlines or other types of white space to be correctly interpreted by programs that process the find output. This option corresponds to the -0 option of xargs.
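An alternative that avoids the pipe entirely is to let find invoke the player itself; the -exec ... {} + form is POSIX and also handles whitespace in file names safely:

find /music -iname '*.mp3' -exec mpg321 {} +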
{ "source": [ "https://unix.stackexchange.com/questions/2588", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/250/" ] }
2,658
Using swap space instead of RAM can drastically slow down a PC. So why, when I have more than enough RAM available, does my Linux system (Arch) use the swap? Check out my conky output below. Also, could this be the cause of speed and system-responsiveness issues I'm having? Output of free -m:

$ free -m
             total       used       free     shared    buffers     cached
Mem:          1257       1004        252          0         51        778
-/+ buffers/cache:         174       1082
Swap:          502        144        357
It is normal for Linux systems to use some swap even if there is still RAM free. The Linux kernel will move to swap memory pages that are very seldom used (e.g., the getty instances when you only use X11, and some other inactive daemon). Swap space usage becomes an issue only when there is not enough RAM available, and the kernel is forced to continuously move memory pages to swap and back to RAM, just to keep applications running. In this case, system monitor applications would show a lot of disk I/O activity. For comparison, my Ubuntu 10.04 system, with two users logged in with X11 sessions both running GNOME desktop, uses ~600MB of swap and ~1GB of RAM (not counting buffers and fs cache), so I'd say that your figures for swap usage look normal.
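If you want the kernel to be less eager to swap, the vm.swappiness tunable can be lowered; a hedged example (10 is just a common choice, and the setting resets at reboot unless added to /etc/sysctl.conf):

sysctl vm.swappiness              # show the current value (often 60 by default)
sudo sysctl -w vm.swappiness=10   # prefer keeping pages in RAM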
{ "source": [ "https://unix.stackexchange.com/questions/2658", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1327/" ] }
2,661
I used dd to back up an 80GB drive: dd if=/dev/sdb of=~/sdb.img Now I need to access some files on that drive, but I don't want to copy the .img back over the drive. mount ~/sdb.img /mnt/sdb doesn't work either. It returns: mount: you must specify the filesystem type I tried to find the filesystem type with file -s:

fox@shoebox $ file -s sdb.img
sdb.img: x86 boot sector; partition 1: ID=0x12, starthead 1, startsector 63, 10233342 sectors; partition 2: ID=0xc, active, starthead 0, startsector 10233405, 72517410 sectors; partition 3: ID=0xc, starthead 0, startsector 82750815, 73545570 sectors, code offset 0xc0

Is it possible to mount sdb.img, or must I use dd to restore the drive?
When you use dd on /dev/sdb instead of /dev/sdb1 or /dev/sdb2, you copy all the partitions from that drive into one file. You must mount each partition separately. To mount a partition from a file, you must first find out where in the file that partition resides. Using your output from file -s sdb.img we find the start sector for each partition:

Partition  Start sector
1          63
2          10233405
3          82750815

To mount a single partition, where X is the start sector of that partition, run:

mount ~/sdb.img /mnt/sdb -o offset=$((X*512))

So to mount the second partition, you will have to run:

mount ~/sdb.img /mnt/sdb2 -o offset=$((10233405*512))

Side note: make sure that /mnt/sdb2 exists before you run this. Have fun!

Update: in the answer, I assumed that the sector size of your image was 512; please see this question on how to calculate that.
{ "source": [ "https://unix.stackexchange.com/questions/2661", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1945/" ] }
2,668
I answered this question , assuming that the *.img file had a sector size of 512 . How do I query a device, or the image of a device, to find the correct sector size?
fdisk -l (that's lower L in the parameter) will show you, among other information, the sector size too.

$ sudo fdisk -l

Disk /dev/sda: 150.3 GB, 150323855360 bytes
255 heads, 63 sectors/track, 18275 cylinders, total 293601280 sectors
Units = sectors of 1 * 512 = 512 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63      208844      104391   83  Linux
/dev/sda2          208845   209712509   104751832+  83  Linux

This shows that the sector size is 512 bytes.

EDIT: Newer versions of fdisk, e.g. fdisk (from package util-linux 2.20.1), will also show you the logical and physical sector sizes. For example, relevant output from a "WDC WD10EFRX 1TB drive":

Disk /dev/sdn: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
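On reasonably recent util-linux systems you can also query the sector sizes directly, without parsing fdisk output; a hedged example:

sudo blockdev --getss /dev/sda     # logical sector size in bytes
sudo blockdev --getpbsz /dev/sda   # physical sector size in bytes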
{ "source": [ "https://unix.stackexchange.com/questions/2668", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1327/" ] }
2,677
I accidentally deleted a file from my laptop. I'm using Fedora. Is it possible to recover the file?
I would advise against immediately installing some utility. Basically your biggest enemy here is disk writes. You want to avoid them at all costs right now. Your best bet is an auto-backup created by your editor, if it exists. If not, I would try the following trick using grep, if you remember some unique string in your .tex file: $ sudo grep -i -a -B100 -A100 'string' /dev/sda1 > file.txt Replace /dev/sda1 with the device that the file was on and replace 'string' with the unique string in your file. This could take some time. But basically, what this does is search for the string on the device, return 100 lines before and after that line, and put them in file.txt. If you need more lines returned, just adjust the -B and -A options as appropriate. You might get a bunch of extra garbage returned, but you should be able to get your text back. Good luck.
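Since writes are the enemy here, it can also help to remount the affected filesystem read-only first, if the system lets you; a hedged example (it will fail if files on that filesystem are open for writing):

sudo mount -o remount,ro /home    # or whichever filesystem held the deleted file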
{ "source": [ "https://unix.stackexchange.com/questions/2677", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1267/" ] }
2,690
I have to download a file from this link . The file download is a zip file which I will have to unzip in the current folder. Normally, I would download it first, then run the unzip command. wget http://www.vim.org/scripts/download_script.php?src_id=11834 -O temp.zip unzip temp.zip But in this way, I need to execute two commands, wait for the completion of first one to execute the next one, also, I must know the name of the file temp.zip to give it to unzip . Is it possible to redirect output of wget to unzip ? Something like unzip < `wget http://www.vim.org/scripts/download_script.php?src_id=11834` But it didn't work. bash: `wget http://www.vim.org/scripts/download_script.php?src_id=11834 -O temp.zip`: ambiguous redirect Also, wget got executed twice, and downloaded the file twice.
You have to download your files to a temp file, because (quoting the unzip man page): Archives read from standard input are not yet supported, except with funzip (and then only the first member of the archive can be extracted). Just bring the commands together: wget "http://www.vim.org/scripts/download_script.php?src_id=11834" -O temp.zip unzip temp.zip rm temp.zip But in order to make it more flexible you should probably put it into a script so you save some typing and in order to make sure you don't accidentally overwrite something you could use the mktemp command to create a safe filename for your temp file: #!/bin/bash TMPFILE=`mktemp` PWD=`pwd` wget "$1" -O $TMPFILE unzip -d $PWD $TMPFILE rm $TMPFILE
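As a hedged alternative, if you have libarchive's bsdtar available it can read a zip archive from a pipe, which avoids the temp file entirely:

wget -qO- "http://www.vim.org/scripts/download_script.php?src_id=11834" | bsdtar -xvf -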
{ "source": [ "https://unix.stackexchange.com/questions/2690", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2045/" ] }
2,692
What are the bare minimum components for a Linux OS to be functional, and that I can use as a base to expand and improve as I learn Linux and my understanding and needs grow?
If you mean learning Linux as in getting to know the source code, you may want to try Linux From Scratch.
{ "source": [ "https://unix.stackexchange.com/questions/2692", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2141/" ] }
2,732
Is there a site someplace that lists the contents of /proc and what each entry means?
The documentation for Linux's implementation of /proc is in Documentation/filesystems/proc.txt in the kernel documentation. Beware that /proc is one of the areas where *ixes differ most. It started out as a System V specific feature, was then greatly extended by Linux, and is now in the process of being deprecated by things like /sys . The BSDs — including OS X — haven't adopted it at all. Therefore, if you write a program or script that accesses things in /proc , there is a good chance it won't work on other *ixes.
{ "source": [ "https://unix.stackexchange.com/questions/2732", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1942/" ] }
2,751
Is it possible for my Linux box to become infected with a malware? I haven't heard of it happening to anyone I know, and I've heard quite a few times that it isn't possible. Is that true? If so, what's up with Linux Anti-Virus (security) software?
First, it's certainly possible to have viruses under Unix and Unix-like operating systems such as Linux. The inventor of the term computer virus , Fred Cohen, did his first experiments under 4.3BSD. A How-To document exists for writing Linux viruses , although it looks like it hasn't had an update since 2003. Second, source code for sh-script computer viruses has floated around for better than 20 years. See Tom Duff's 1988 paper , and Doug McIllroy's 1988 paper . More recently, a platform-independent LaTeX virus got developed for a conference. Runs on Windows and Linux and *BSD. Naturally, its effects are worse under Windows... Third, a handful of real, live computer viruses for (at least) Linux have appeared, although it's not clear if more than 2 or 3 of these (RST.a and RST.b) ever got found "in the wild". So, the real question is not Can Linux/Unix/BSD contract computer viruses? but rather, Given how large the Linux desktop and server population is, why doesn't that population have the kind of amazing plague of viruses that Windows attracts? I suspect that the reason has something to do with the mild protection given by traditional Unix user/group/other discretionary protections, and the fractured software base that Linux supports. I mean, my server still runs Slackware 12.1, but with a custom-compiled kernel and lots of re-compiled packages. My desktop runs Arch, which is a rolling release. Even though they both run "Linux", they don't have much in common. The state of viruses on linux may actually be the normal equilibrium. The situation on Windows might be the "dragon king", really unusual situation. The Windows API is insanely baroque, Win32, NT-native API, magic device names like LPT , CON , AUX that can work from any directory, the ACLs that nobody understands, the tradition of single-user, nay, single root user, machines, marking files executable by using part of the file name ( .exe ), all of this probably contributes to the state of malware on Windows.
{ "source": [ "https://unix.stackexchange.com/questions/2751", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1327/" ] }
2,857
If I'm logged in to a system via SSH, is there a way to copy a file back to my local system without firing up another terminal or screen session and doing scp or something similar or without doing SSH from the remote system back to the local system?
Master connection

It's easiest if you plan in advance. Open a master connection the first time. For subsequent connections, route slave connections through the existing master connection. In your ~/.ssh/config, set up connection sharing to happen automatically:

ControlMaster auto
ControlPath ~/.ssh/control:%h:%p:%r

If you start an ssh session to the same (user, port, machine) as an existing connection, the second session will be tunneled over the first. Establishing the second connection requires no new authentication and is very fast. So while you have your active connection, you can quickly:
- copy a file with scp or rsync;
- mount a remote filesystem with sshfs.

Forwarding

On an existing connection, you can establish a reverse ssh tunnel. On the ssh command line, create a remote forwarding by passing -R 22042:localhost:22 where 22042 is a randomly chosen number that's different from any other port number on the remote machine. Then ssh -p 22042 localhost on the remote machine connects you back to the source machine; you can use scp -P 22042 foo localhost: to copy files. You can automate this further with RemoteForward 22042 localhost:22. The problem with this is that if you connect to the same computer with multiple instances of ssh, or if someone else is using the port, you don't get the forwarding. If you haven't enabled a remote forwarding from the start, you can do it on an existing ssh session. Type Enter ~C Enter -R 22042:localhost:22 Enter. See "Escape characters" in the manual for more information. There is also some interesting information in this Server Fault thread.

Copy-paste

If the file is small, you can type it out and copy-paste from the terminal output. If the file contains non-printable characters, use an encoding such as base64.

remote.example.net$ base64 <myfile
(copy the output)
local.example.net$ base64 -d >myfile
(paste the clipboard contents)
Ctrl + D

More conveniently, if you have X forwarding active, copy the file on the remote machine and paste it locally. You can pipe data in and out of xclip or xsel. If you want to preserve the file name and metadata, copy-paste an archive.

remote.example.net$ tar -czf - myfile | xsel
local.example.net$ xsel | tar -xzf -
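For reference, the connection-sharing settings above normally live under a Host stanza; a minimal sketch of the client config assumed here:

# ~/.ssh/config
Host *
    ControlMaster auto
    ControlPath ~/.ssh/control:%h:%p:%r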
{ "source": [ "https://unix.stackexchange.com/questions/2857", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2180/" ] }
2,865
I know that it can, in some circumstances, be difficult to move a Windows installation from one computer to another (physically move the hard drive), but how does that work on Linux? Aren't most of the driver modules loaded at bootup? So theoretically would it be that much of a hassle? Obviously, xorg configs would change and proprietary ATI drivers and such would have to be recompiled (maybe?). Is there more to it than I'm thinking of? Assume the two computers are from the same era, e.g. both i7s but slightly different hardware. Update: Thanks for the answers. This is mostly for my own curiosity. I have my Linux system up and running at work, but eventually I'd like to move to a computer that I can get dual video cards into so I can run more than two monitors. But not any time soon.
Moving or cloning a Linux installation is pretty easy, assuming the source and target processors are the same architecture (e.g. both x86, both x64, both arm…).

Moving

When moving, you have to take care of hardware dependencies. However most users won't encounter any difficulty other than xorg.conf (and even then modern distributions tend not to need it) and perhaps the bootloader. If the disk configuration is different, you may need to reconfigure the bootloader and filesystem tables (/etc/fstab, /etc/crypttab if you use cryptography, /etc/mdadm.conf if you use md RAID). For the bootloader, the easiest way is to pop the disk into the new machine, boot your distribution's live CD/USB and use its bootloader reparation tool. Note that if you're copying the data rather than physically moving the disk (for example because one or both systems dual boot with Windows), it's faster and easier to copy whole partitions (with (G)Parted or dd). If you have an xorg.conf file to declare display-related options (e.g. in relation with a proprietary driver), it will need to be modified if the target system has a different graphics card or a different monitor setup. You should also install the proprietary driver for the target system's graphics card before moving, if applicable. If you've declared module options or blacklists in /etc/modprobe.d, they may need to be adjusted for the target system.

Cloning

Cloning an installation involves the same hardware-related issues as moving, but there are a few more things to take care of to give the new machine a new identity.
- Edit /etc/hostname to give the new machine a new name.
- Search for other occurrences of the host name under /etc. Common locations are /etc/hosts (alias for 127.0.0.1) and /etc/mailname or other mail system configuration.
- Regenerate the ssh host key.
- Make any necessary change to the networking configuration (such as a static IP address).
- Change the UUID of RAID volumes (not necessary, but recommended to avoid confusion), e.g. mdadm -U uuid.

See also a step-by-step cloning guide targeted at Ubuntu. My current desktop computer installation was cloned from its predecessor by unplugging one of two RAID-1 mirrored disks, moving it into the new computer, creating a RAID-1 volume on the already present disk, letting the mirror resynchronize, and making the changes outlined above where applicable.
{ "source": [ "https://unix.stackexchange.com/questions/2865", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1571/" ] }
2,897
I have the following setup in .bashrc for coloring of listings. export CLICOLOR=1 export LS_COLORS='no=00:fi=00:di=00;34:ln=01;36:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.gz=01;31:*.bz2=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.avi=01;35:*.fli=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.ogg=01;35:*.mp3=01;35:*.wav=01;35:'; This site shows the code for colors, and I want to change the directory color to `light color'. But making di as follows doesn't affect it. di=04;94 The interesting thing is that even after my commenting out LS_COLORS, I can see colored output as long as I have CLICOLOR=1. What should I do to make directory color to Light blue (94)? What's it for CLICOLOR and LS_COLORS? Why coloring works without LS_COLORS?
There are several different implementations of color for ls, and you've conflated some of them. On FreeBSD and Mac OS X , ls shows colors if the CLICOLOR environment variable is set or if -G is passed on the command line. The actual colors are configured through the LSCOLORS environment variable (built-in defaults are used if this variable is not set). To show directories in light blue, use export LSCOLORS=Exfxcxdxbxegedabagacad With GNU ls , e.g. on Linux, ls shows colors if --color is passed on the command line. The actual colors are configured through the LS_COLORS environment variable, which can be set with the dircolors command (built-in defaults are used if this variable is not set).
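With GNU ls, a minimal example of the light-blue request from the question (94 is the bright-blue foreground code; terminals without the bright colors can use bold blue, 01;34, instead):
export LS_COLORS='di=94'
alias ls='ls --color=auto'
Entries not listed in LS_COLORS should fall back to the built-in defaults, but in practice you would usually start from the output of dircolors and only change the di= entry.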
{ "source": [ "https://unix.stackexchange.com/questions/2897", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1090/" ] }
2,919
Is there a way to disconnect from an SSH session that has become unresponsive without killing the whole terminal? Specifically I'm using konsole, and the machine I'm working with sometimes hangs, but doesn't actually die (thus killing the connection). So SSH just hangs and I have to close the terminal and open a new one to try to ssh back into it or do anything else. Is there a way to effectively ctrl+c out of an ssh session?
One way is to use the ssh escape character. By default this is "~", but it can be set manually with -e option when invoking ssh or via EscapeChar in your ssh config. To kill the hung session this will often work: ~. As pointed out by Gilles this is only recognized immediately after hitting Enter .
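Related: immediately after pressing Enter , typing ~? makes ssh print the full list of escape sequences it supports — among them ~. (terminate the connection), ~^Z (suspend ssh), and ~# (list forwarded connections).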
{ "source": [ "https://unix.stackexchange.com/questions/2919", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1571/" ] }
2,928
I've read in so many websites that, in Linux, symbolic links (soft links, symlinks) are just like pointers that reference another file, which may be located anywhere (like Windows shortcuts). However, when I check the disk usage of a folder in which there are symbolic links, there's a mismatch between what my file manager says and what du reports. However, if I type du -L ( -L, --dereference; dereference all symbolic links from the man page), the output of du -L and the size that my file manager reports are the same . My question is : if I have a softlink to a big file in, for example, my separate home partition, will I have any problems? Example : My /var/tmp folder is now plain empty. Let's create a file: $ cat /some/file.txt > file.txt $ du -ac 164 ./file.txt 168 . 168 total And my file manager (Thunar, in this case) reports Size: 1 item, totalling 163.0 kB All right. Now, lets create a really big file in /tmp and a symlink to it: $ cat /dir/really_big.txt > /tmp/heavy.txt $ du -a | grep heavy.txt 408 ./heavy.txt $ ln -s /tmp/heavy.txt heavy.txt $ du -ac 164 ./file.txt 0 ./heavy.txt 168 . 168 total Everything is fine for now. But if I open my file manager: Size: 2 items, totalling 570.3 kB And, finally: $ du -acL 164 ./file.txt 408 ./heavy.txt 576 . 576 total If the partition in which /var/tmp is located is 1 GiB big, and I create a link in it to a 1 GiB file, ¿will my hard disk die? I know that du will output 168 and Thunar 1 GiB, but I don't know which is right.
Symbolic links do take room, of course, but just the room it takes to store the name and target plus a few bytes for other metadata. The space taken by a symbolic link does not depend on the space taken by the target (after all, the target is not even required to exist). Plain du reports the space taken by a directory tree on the disk. du -L reports the space that would be taken by a directory tree if all symbolic links were replaced by their target. The former is usually the useful information; for example, it's the space you'd recover if you deleted the tree, and it's (approximately) the space you need to back up the tree. du on a directory tree shows (usually) a little more than the total of the file sizes. That's due to two things. First, du also counts directories, which take a little room to store file names and metadata. Second, du counts the disk space taken by a file, which can be different from the file size: the most common effect is that files take up an integer number of blocks (4kB on a typical Linux installation), so a 1-byte file can show as 4kB in du output; but compression (such as the primitive form provided by sparse files on just about every unix filesystem) can make the file size larger than its disk usage. From the numbers you give, it appears that Thunar reports the sum of the sizes of the files in the directory tree, following symbolic links . It's actually saying so in a subtle way — it's claiming that the total size is 570.3 kB, not that the disk usage is 570.3 kB. What is not at all apparent from the user interface or documentation is that Thunar follows symbolic links when computing the size. Which one is “right” is a subjective matter. du reports disk usage. Thunar reports total size following symbolic links. Creating a symbolic link has a negligible impact on disk usage, but by definition does change total-size-following-symbolic-links that Thunar reports.
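A quick way to see this on the example from the question (exact numbers will differ):
$ ls -ld heavy.txt      (the symlink itself is only a handful of bytes)
$ du heavy.txt          (disk usage of the link: essentially zero)
$ du -L heavy.txt       (disk usage of the file it points to)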
{ "source": [ "https://unix.stackexchange.com/questions/2928", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2221/" ] }
2,942
I've noticed lot of admins change default ssh port. Is there any rational reason to do so?
The most likely reason is to make it harder for people randomly trying to brute force any SSH login they can find. My internet-facing machine uses the default SSH port, and my logs used to be filled with stuff like this (excerpted from an actual log file): sshd[16359]: Invalid user test from 92.241.180.96 sshd[16428]: Invalid user oracle from 92.241.180.96 sshd[16496]: Invalid user backup from 92.241.180.96 sshd[16556]: Invalid user ftpuser from 92.241.180.96 sshd[16612]: Invalid user nagios from 92.241.180.96 sshd[16649]: Invalid user student from 92.241.180.96 sshd[16689]: Invalid user tomcat from 92.241.180.96 sshd[16713]: Invalid user test1 from 92.241.180.96 sshd[16742]: Invalid user test from 92.241.180.96 sshd[16746]: Invalid user cyrus from 92.241.180.96 sshd[16774]: Invalid user temp from 92.241.180.96 sshd[16790]: Invalid user postgres from 92.241.180.96 sshd[16806]: Invalid user samba from 92.241.180.96 These days I use DenyHosts to block IPs that fail to authenticate too many times, but it's probably just as easy to just switch ports; virtually all brute force attacks of this kind aren't going to bother scanning to see if your sshd is listening on another port, they'll just assume you're not running one and move on
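If you do decide to change the port, it is a one-line edit in /etc/ssh/sshd_config on the server (2222 below is only an example; pick any unused port), followed by a restart of the SSH daemon — and remember to open the new port in any firewall before logging out of your current session:
Port 2222
$ sudo service ssh restart      (or: sudo systemctl restart sshd, depending on the distribution)
Afterwards connect with ssh -p 2222 host and copy files with scp -P 2222 file host: .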
{ "source": [ "https://unix.stackexchange.com/questions/2942", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2226/" ] }
2,976
There are several ways to execute a script. The ones I know are: /path/to/script # using the path (absolute or relative) . script # using the . (dot) source script # using the `source` command Are there any other way? What are the differences between them? Are there situations that I must use one and not another?
Another way is by calling the interpreter and passing the path to the script to it: /bin/sh /path/to/script The dot and source are equivalent. (EDIT: no, they're not: as KeithB points out in a comment on another answer, "." only works in bash related shells, where "source" works in both bash and csh related shells.) It executes the script in-place (as if you copied and pasted the script right there). This means that any functions and non-local variables in the script remain. It also means if the script does a cd into a directory, you'll still be there when its done. The other ways of running a script will run it in its own subshell. Variables in the script are not still alive when it's done. If the script changed directories, then it doesn't affect the calling environment. /path/to/script and /bin/sh script are slightly different. Typically, a script has a "shebang" at the beginning that looks like this: #! /bin/bash This is the path to the script interpreter. If it specifies a different interpreter than you do when you execute it, then it may behave differently (or may not work at all). For example, Perl scripts and Ruby scripts begin with (respectively): #! /bin/perl and #! /bin/ruby If you execute one of those scripts by running /bin/sh script , then they will not work at all. Ubuntu actually doesn't use the bash shell, but a very similar one called dash. Scripts that require bash may work slightly wrong when called by doing /bin/sh script because you've just called a bash script using the dash interpreter. Another small difference between calling the script directly and passing the script path to the interpreter is that the script must be marked executable to run it directly, but not to run it by passing the path to the interpreter. Another minor variation: you can prefix any of these ways to execute a script with eval, so, you can have eval sh script eval script eval . script and so on. It doesn't actually change anything, but I thought I'd include it for thoroughness.
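A small experiment that makes the subshell-versus-in-place difference concrete (the script name is arbitrary):
$ printf 'cd /tmp\n' > godir.sh
$ sh godir.sh; pwd       (ran in its own subshell — your directory is unchanged)
$ . godir.sh; pwd        (sourced — you are now in /tmp)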
{ "source": [ "https://unix.stackexchange.com/questions/2976", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/250/" ] }
2,983
Is there something out there for parallel archiving of files? Tar is great, but I don't use tape archives, and it's more important to me that the archiving happens quickly (with compression like bzip2) since I have smp.
I think you are looking for pbzip2: PBZIP2 is a parallel implementation of the bzip2 block-sorting file compressor that uses pthreads and achieves near-linear speedup on SMP machines. Have a look at the project homepage or check your favorite package repository.
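A hedged usage sketch (paths are placeholders; -p sets how many processors to use):
$ tar -cf archive.tar mydir/
$ pbzip2 -p4 archive.tar            (produces archive.tar.bz2, just as bzip2 would)
Recent versions can also sit in a pipeline, e.g. tar -c mydir/ | pbzip2 -c > mydir.tar.bz2 , and pbzip2 -d decompresses too (in parallel for archives that pbzip2 itself created).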
{ "source": [ "https://unix.stackexchange.com/questions/2983", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/92/" ] }
2,987
How do I convert an epoch timestamp to a human readable format on the cli? I think there's a way to do it with date but the syntax eludes me (other ways welcome).
On *BSD: date -r 1234567890 On Linux (specifically, with GNU coreutils ≥5.3): date -d @1234567890 With older versions of GNU date, you can calculate the relative difference to the UTC epoch: date -d '1970-01-01 UTC + 1234567890 seconds' If you need portability, you're out of luck. The only time you can format with a POSIX shell command (without doing the calculation yourself) line is the current time. In practice, Perl is often available: perl -le 'print scalar localtime $ARGV[0]' 1234567890
{ "source": [ "https://unix.stackexchange.com/questions/2987", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29/" ] }
3,004
I have a DVD+-RW drive that has quit working. Apparently many users of this laptop model experience the same problem under windows and are required to edit the registry to correct the problem. So where should I look to make a similar edit?
Thankfully, there is no Linux equivalent of the Windows registry. Configuration is kept in (mostly) text files: The system configuration is in text files under /etc . The system state, which in Windows ends up mixed with configuration data, lives under /var. User configuration and state lives in “dot files”, i.e., files and directories whose name begins with a . in your home directory. You can't simply transpose a registry edit to a configuration in another operating system: registry edits are completely Windows-specific. You'll have to understand what the registry edit is doing and transpose it to Linux. It's likely that you'll end up modifying a file under /etc , but there are too many potential candidates to list here (also, it might depend on your distribution).
{ "source": [ "https://unix.stackexchange.com/questions/3004", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1389/" ] }
3,011
Does w3m offer a keyboard shortcut to go back one page? I couldn't find anything in the man pages.
It's B ( Shift - B ). It's the shortcut for previous buffer in w3m jargon. See the Manual or a short introduction here .
{ "source": [ "https://unix.stackexchange.com/questions/3011", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2245/" ] }
3,019
How to know the size of a directory? Including subdirectories and files.
du -s directory_name Or to get human readable output: du -sh directory_name The -s option means that it won't list the size for each subdirectory, only the total size.
{ "source": [ "https://unix.stackexchange.com/questions/3019", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2252/" ] }
3,026
I found this question , but I'm sorry I don't quite understand the settings on the two variables ServerAliveInterval and ClientAliveInterval mentioned in the accepted response. If my local server is timing out, should I set this value to zero? Will it then never time out? Should I instead set it to 300 seconds or something? My question is simply, some of my connections time out when I suspend & then unsuspend my laptop with the response Write failed: Broken pipe and some don't. How can I correctly configure a local sshd so that they don't fail with a broken pipe?
ServerAliveInterval : number of seconds that the client will wait before sending a null packet to the server (to keep the connection alive). ClientAliveInterval : number of seconds that the server will wait before sending a null packet to the client (to keep the connection alive). Setting a value of 0 (the default) will disable these features, so your connection could drop if it is idle for too long. ServerAliveInterval seems to be the most common strategy to keep a connection alive. To prevent the broken pipe problem, here is the ssh config I use in my .ssh/config file: Host myhostshortcut HostName myhost.com User barthelemy ServerAliveInterval 60 ServerAliveCountMax 10 The above setting works in the following way: the client waits idle for 60 seconds (the ServerAliveInterval time) and then sends a "no-op null packet" to the server, expecting a response. If no response comes, it keeps trying up to 10 times (ServerAliveCountMax), i.e. for 600 seconds. If the server still doesn't respond, the client disconnects the ssh connection. ClientAliveCountMax on the server side might also help. It is the limit on how long a client is allowed to stay unresponsive before being disconnected. The default value is 3, as in three ClientAliveInterval periods.
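For completeness, the server-side knobs live in /etc/ssh/sshd_config (values below are only an example) and take effect after reloading sshd:
ClientAliveInterval 60
ClientAliveCountMax 3
With these, the server probes an idle client every 60 seconds and drops the connection after 3 unanswered probes, i.e. roughly 180 seconds of unresponsiveness.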
{ "source": [ "https://unix.stackexchange.com/questions/3026", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2094/" ] }
3,033
Ubuntu 10.10 comes with Perl 5.10.1 pre-installed, but strangely enough perldoc isn't part of the package. $ perldoc You need to install the perl-doc package to use this program. How can I make this feature available?
I feel a bit stupid saying it, but did you try installing perl-doc ? # apt-get install perl-doc
{ "source": [ "https://unix.stackexchange.com/questions/3033", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/446/" ] }
3,037
I'm looking for an easy way (a command or series of commands, probably involving find ) to find duplicate files in two directories, and replace the files in one directory with hardlinks of the files in the other directory. Here's the situation: This is a file server which multiple people store audio files on, each user having their own folder. Sometimes multiple people have copies of the exact same audio files. Right now, these are duplicates. I'd like to make it so they're hardlinks, to save hard drive space.
rdfind does exactly what you ask for (and in the order johny why lists). Makes it possible to delete duplicates, replace them with either soft or hard links. Combined with symlinks you can also make the symlink either absolute or relative. You can even pick checksum algorithm (sha256, md5, or sha1). Since it is compiled it is faster than most scripted solutions: time on a 15 GiB folder with 2600 files on my Mac Mini from 2009 returns this 9.99s user 3.61s system 66% cpu 20.543 total (using md5). Available in most package handlers (e.g. MacPorts for Mac OS X).
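A hedged example of the hard-link mode for the scenario in the question (directory names are placeholders; rdfind treats the first directory on the command line as the highest-ranked one, whose copies are kept as the originals):
$ rdfind -dryrun true -makehardlinks true /srv/audio/alice /srv/audio/bob
$ rdfind -makehardlinks true /srv/audio/alice /srv/audio/bob
The first invocation only reports what would happen; drop -dryrun true to actually replace the duplicates with hard links. A results.txt report is written to the current directory.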
{ "source": [ "https://unix.stackexchange.com/questions/3037", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1974/" ] }
3,051
I tried to create a script by echo 'ing the contents into a file, instead of opening it with a editor echo -e "#!/bin/bash \n /usr/bin/command args" > .scripts/command The output : bash: !/bin/bash: event not found I've isolated this strange behavior to the bang . $ echo ! ! $ echo "!" bash: !: event not found $ echo \#! #! $ echo \#!/bin/bash bash: !/bin/bash: event not found Why is bang causing this? What are these "events" that bash refers to? How do I get past this problem and print "#!/bin/bash" to the screen or my file?
Try using single quotes. echo -e '#!/bin/bash \n /usr/bin/command args' > .scripts/command echo '#!' echo '#!/bin/bash' The problem is occurring because bash is searching its history for !/bin/bash. Using single quotes escapes this behaviour.
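A related approach that avoids history expansion and the portability quirks of echo -e is printf, again with single quotes doing the escaping work:
printf '%s\n' '#!/bin/bash' '/usr/bin/command args' > .scripts/command
printf repeats the format string for each argument, so every argument lands on its own line. Alternatively, set +H turns history expansion off for the current interactive session (and set -H turns it back on).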
{ "source": [ "https://unix.stackexchange.com/questions/3051", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1327/" ] }
3,052
Is ~/.bashrc the only place to specify user specific environment variables, aliases, modifications to PATH variable, etc? I ask because it seems that ~/.bashrc seems to be bash -only, but other shells exist too…
The file $HOME/.profile is used by a number of shells, including bash, sh, dash, and possibly others. From the bash man page: When bash is invoked as an interactive login shell, ... it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. csh and tcsh explicitly don't look at ~/.profile but those shells are kinda antiquated.
{ "source": [ "https://unix.stackexchange.com/questions/3052", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1327/" ] }
3,063
I need to run a command with administrative privileges. Someone said I should run a command as root. How do I do this?
The main two commandline possibilities are: Use su and enter the root password when prompted. Put sudo in front of the command, and enter your password when prompted. Running a shell command as root sudo (preferred when not running a graphical display) This is the preferred method on most systems, including Ubuntu, Linux Mint, (arguably) Debian, and others. If you don't know a separate root password, use this method. Sudo requires that you type your own password. (The purpose is to limit the damage if you leave your keyboard unattended and unlocked, and also to ensure that you really wish to run that command and it wasn't e.g. a typo.) It is often configured to not ask again for a few minutes so you can run several sudo commands in succession. Example: sudo service apache restart If you need to run several commands as root, prefix each of them with sudo . Sometimes, it is more convenient to run an interactive shell as root. You can use sudo -i for that: $ sudo -i # command 1 # command 2 ... # exit Instead of sudo -i , you can use sudo -s . The difference is that -i re i nitializes the environment to sane defaults, whereas -s uses your configuration files for better or for worse. For more information, see the sudo website , or type man sudo on your system. Sudo is very configurable; for example it can be configured to let a certain user only execute certain commands as root. Read the sudoers man page for more information; use sudo visudo to edit the sudoers file. su The su command exists on most unix-like systems. It lets you run a command as another user, provided you know that user's password. When run with no user specified, su will default to the root account. Example: su -c 'service apache restart' The command to run must be passed using the -c option. Note that you need quotes so that the command is not parsed by your shell, but passed intact to the root shell that su runs. To run multiple commands as root, it is more convenient to start an interactive shell. $ su # command 1 # command 2 ... # exit On some systems, you need to be in group number 0 (called wheel ) to use su . (The point is to limit the damage if the root password is accidentally leaked to someone.) Logging in as root If there is a root password set and you are in possession of it, you can simply type root at the login prompt and enter the root password. Be very careful, and avoid running complex applications as root as they might do something you didn't intend. Logging in directly as root is mainly useful in emergency situations, such as disk failures or when you've locked yourself out of your account. Single User Mode Single user mode, or run-level 1, also gives you root privileges. This is intended primarily for emergency maintenance situations where booting into a multi-user run-level is not possible. You can boot into single user mode by passing single or emergency on the kernel command line. Note that booting into single-user mode is not the same as booting the system normally and logging in as root. Rather, the system will only start the services defined for run-level 1. Typically, this is the smallest number of services required to have a usable system. You can also get to single user mode by using the telinit command: telinit 1 ; however, this command requires you to already have gotten root privileges via some other method in order to run. On many systems booting into single user mode will give the user access to a root shell without prompting for a password. 
Notably, systemd -based systems will prompt you for the root password when you boot this way. Other programs Calife Calife lets you run commands as another user by typing your own password, if authorized. It is similar to the much more widespread sudo (see above). Calife is more light-weight than sudo but also less configurable. Op Op lets you run commands as another user, including root. This not a full-blown tool to run arbitrary commands: you type op followed by a mnemonic configured by the system administrator to run a specific command. Super Super lets you run commands as another user, including root. The command must have been allowed by the system administrator. Running a graphical command as root See also Wikipedia . PolicyKit (preferred when using GNOME) Simply prefix your desired command with the command pkexec . Be aware that while this works in most cases, it does not work universally. See man pkexec for more information. KdeSu, KdeSudo (preferred when using KDE) kdesu and kdesudo are graphical front-ends to su and sudo respectively. They allow you to run X Window programs as root with no hassle. They are part of KDE . Type kdesu -c 'command --option argument' and enter the root password, or type kdesudo -c 'command --option argument' and enter your password (if authorized to run sudo ). If you check the “keep password” option in KdeSu, you will only have to type the root password once per login session. Other programs Ktsuss Ktsuss (“keep the su simple, stupid”) is a graphical version of su. Beesu Beesu is a graphical front-end to the su command that has replaced Gksu in Red Hat-based operating systems. It has been developed mainly for RHEL and Fedora. Obsolete methods gksu and gksudo gksu and gksudo are graphical front-ends to su and sudo respectively. They allow you to run X Window programs as root with no hassle. They are part of Gnome . Type gksu command --option argument and enter the root password, or type gksudo command --option argument and enter your password (if authorized to run sudo ). gksu and gksudo are obsolete. They have been replaced by PolicyKit in GNOME, and many distributions (such as Ubuntu) no longer install them by default. You should not depend on them being available or working properly. Manually via one of the shell-based methods Use one of the methods in the "running a shell command as root section". You will need to ensure that neither the DISPLAY environment variable nor the XAUTHORITY environment get reset during the transition to root. This may require additional configuration of those methods that is outside the scope of this question. Overall, this is a bad idea, mostly because graphical applications will read and write configuration files as root, and when you try to use those applications again as your normal user, those applications won't have permission to read their own configurations. Editing a file as root See How do I edit a file as root?
{ "source": [ "https://unix.stackexchange.com/questions/3063", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1327/" ] }
3,077
When I do ls -l | grep ^d it lists only directories in the current directory. What I'd like to know is what does the caret ^ in ^d mean?
Andy's answer is correct, as seen in the man page: Anchoring The caret ^ and the dollar sign $ are meta-characters that respectively match the empty string at the beginning and end of a line. The reason it works is the -l flag to ls makes it use the long-listing format. The first thing shown in each line is the human-readable permissions for the file, and the first character of that is either d for a directory or - for a file
{ "source": [ "https://unix.stackexchange.com/questions/3077", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1679/" ] }
3,109
Sometimes, I would like to unmount a usb device with umount /run/media/theDrive , but I get a drive is busy error. How do I find out which processes or programs are accessing the device?
Use lsof | grep /media/whatever to find out what is using the mount. Also, consider umount -l (lazy umount) to prevent new processes from using the drive while you clean up.
{ "source": [ "https://unix.stackexchange.com/questions/3109", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1327/" ] }
3,127
Mail logs are incredibly difficult to read. How could I output a blank line between each line printed on the command line? For example, say I'm grep-ing the log. That way, multiple wrapped lines aren't being confused.
sed G # option: g G Copy/append hold space to pattern space. G is not often used, but is nice for this purpose. sed maintains two buffer spaces: the “pattern space” and the “hold space”. The lines processed by sed usually flow through the pattern space as various commands operate on its contents ( s/// , p , etc.); the hold space starts out empty and is only used by some commands. The G command appends a newline and the contents of the hold space to the pattern space. The above sed program never puts anything in the hold space, so G effectively appends just a newline to every line that is processed.
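Applied to the mail-log scenario from the question (the log path is a placeholder), usage is simply:
$ grep something /var/log/mail.log | sed G
Each matching line is printed followed by a blank line; sed 'G;G' would insert two blank lines instead.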
{ "source": [ "https://unix.stackexchange.com/questions/3127", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29/" ] }
3,192
I've heard the term "mounting" when referring to devices in Linux. What is its actual meaning? How it handling now unlike older versions? I haven't done that manually via the command-line. Can you give the steps (commands) for mounting a simple device in Linux?
Unix systems have a single directory tree. All accessible storage must have an associated location in this single directory tree. This is unlike Windows where (in the most common syntax for file paths) there is one directory tree per storage component (drive). Mounting is the act of associating a storage device to a particular location in the directory tree. For example, when the system boots, a particular storage device (commonly called the root partition) is associated with the root of the directory tree, i.e., that storage device is mounted on / (the root directory). Let's say you now want to access files on a CD-ROM. You must mount the CD-ROM on a location in the directory tree (this may be done automatically when you insert the CD). Let's say the CD-ROM device is /dev/cdrom and the chosen mount point is /media/cdrom . The corresponding command is mount /dev/cdrom /media/cdrom After that command is run, a file whose location on the CD-ROM is /dir/file is now accessible on your system as /media/cdrom/dir/file . When you've finished using the CD, you run the command umount /dev/cdrom or umount /media/cdrom (both will work; typical desktop environments will do this when you click on the “eject” or ”safely remove” button). Mounting applies to anything that is made accessible as files, not just actual storage devices. For example, all Linux systems have a special filesystem mounted under /proc . That filesystem (called proc ) does not have underlying storage: the files in it give information about running processes and various other system information; the information is provided directly by the kernel from its in-memory data structures.
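To make a mount permanent, or mountable by ordinary users, it is normally recorded in /etc/fstab ; a sketch of an entry for the CD-ROM example above:
/dev/cdrom   /media/cdrom   iso9660   ro,user,noauto   0   0
With the user option in place, a regular user can then simply run mount /media/cdrom — no root needed, and no device name, since it is looked up in fstab.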
{ "source": [ "https://unix.stackexchange.com/questions/3192", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2269/" ] }
3,196
Can you explain the evolution hierarchy of operating systems (Linux and Windows) from Unix?
This is a highly simplified history of Unix and its derivatives . Windows does not figure in it because its history is essentially separate. Once upon a time operating systems were complex and unwieldy. One day in the late 1960s, Ken Thompson , Dennis Ritchie and a few of their colleagues at AT&T Bell Labs decided to write a simpler version of Multics to run games on their PDP-7 , and thus Unix was born. AT&T held the rights to the code, and licenses were expensive. Many other companies sublicensed Unix and sold their own version. Major players included DEC , HP , IBM , Sun . Unix variants added their own extensions, often nicking ideas from each other and from academia. Meanwhile, in Berkeley , a number of academics were unhappy with the licensing situation and decided to create a version of Unix that didn't include any AT&T-licensed code. Thus in the early 1980s the Berkeley Software Distribution, or BSD , became a free variant of Unix. BSD first ran on minicomputers such as the PDP-11 and VAXen . Meanwhile, on the East coast , Richard Stallman threw a fit when he couldn't get the source code to his printer driver. He founded the GNU (GNU's Not Unix) project in 1983 intending to make a free Unix-like operating system, only better. After a little hesitation, the kernel of this operating system was chosen to be Hurd , which is going to be usable any decade now. Many components of the GNU project are included in all current free unices, in particular the compiler GCC . Meanwhile, in Finland, Linus Torvalds went on a hacking binge in summer 1991. When he woke up, he realized that he'd written an operating system for his PC , and he decided to share it by putting it on an FTP server in a directory called linux . The success exceeded his expectations. Many people created software distributions including the Linux kernel, many GNU programs, the X Window System , and other free software. These distributions ( Slackware , Debian , Red Hat , SUSE , Gentoo , Ubuntu , etc.) are what people generally refer to when they say “Linux”. Most Linux distributions consist mostly of free-as-in-speech software, though software that is merely free-as-in-beer is often included when no free equivalent exists. Other currently existing unices include the various forks of BSD (you get a choice of FreeBSD , NetBSD and OpenBSD , all being free, open and developed through the 'net), as well as a diminishing number of commercial variants targeted towards servers: AIX , HP-UX , Solaris , and a few very minor contenders. Another proprietary unix-based operating system is Mac OS X running on Apple desktops, laptops and PDAs .
{ "source": [ "https://unix.stackexchange.com/questions/3196", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2269/" ] }
3,212
If I open a terminal like xterm I will have a shell. Then if I use ssh or zsh I will have another "level" of shell. Is there a way to know how many times I have to Ctrl+D or type exit to exit all of them? My real intention is to exit everything except the "root" shell. It will also be nice to know what effect(s) terminal multiplexers (like screen ) have on the solution. PS: Please feel free to change the title, I don't know if those are the correct terms.
You have in fact hit upon the correct term¹. There is an environment variable SHLVL which all major interactive shells (bash, tcsh, zsh) increment by 1 when they start. So if you start a shell inside a shell, SHLVL increases by 1. This doesn't directly answer your concern, however, because SHLVL carries over things like terminal emulators. For example, in my typical configuration, $SHLVL is 2 in an xterm, because level 1 corresponds to the shell that runs my X session ( ~/.xinitrc or ~/.xsession ). What I do is to display $SHLVL in my prompt, but only if the parent process of the shell is another shell (with heuristics like “if its name ends in sh plus optional punctuation and digits, it's a shell”). That way, I have an obvious visual indication in the uncommon case of a shell running under another shell. Maybe you would prefer to detect shells that are running directly under a terminal emulator. You can do this fairly accurately: these are the shells whose parent process has a different controlling terminal, so that ps -o tty= -p$$ and ps -o tty= -p$PPID produce different output. You might manually reset SHLVL to 1 in these shells, or set your own TERMSHLVL to 1 in these shells (and incremented otherwise). ¹ Although one wouldn't think it looking at the manual pages: none of the three shells that support it include the word “level” in their documentation of SHLVL .
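A minimal illustration for bash — put something like this in ~/.bashrc to see the level in every prompt:
PS1='[$SHLVL] \u@\h:\w\$ '
Starting a shell inside the shell then shows [2], [3], and so on, and each exit (or Ctrl+D) brings the number back down, so you can tell when the next exit would be the last one. (As noted above, the bottom level is often not 1 but whatever the terminal emulator or session inherited.)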
{ "source": [ "https://unix.stackexchange.com/questions/3212", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/250/" ] }
3,218
When I ssh into my VPS, I have irssi running in screen. When someone sends a unicode character (such as © or €), irssi displays garbage when I use it via the screen in a ssh session. If I connect to that irssi using irssi's proxy module, from irssi running on my local computer, it shows up correctly. Likewise, if I run ghci on my VPS (outside a screen) and enter in one of those characters, it crashes. So, obviously, there is a character encoding issue of some sort with my connection to my VPS, either in ssh or the system setup. How can I find out what is causing this, and solve it? Details: Client system Arch Linux x64 UTF-8 encoding VPS system Ubuntu Server 10.04 Unknown encoding used. How do I find this? (I just have to look in my /etc/rc.conf for Arch)
Running the locale command will give you information about your locale settings; the character encoding is given by the LC_CTYPE setting. Under Ubuntu, the default locale settings are given in /etc/default/locale . You can change the character encoding by setting LC_CTYPE in your ~/.profile on the VPS, e.g. export LC_CTYPE=en_US.UTF-8 You'll have to make sure that the en_US.UTF-8 locale is available. Ubuntu only generates locale data for requested locales. All English locales should be available if you have the package language-pack-en-base installed. You can manually request their generation with sudo locale-gen en You can also add entries to /var/lib/locales/supported.d/local to make sure a particular locale is installed (e.g., add the line en_US.UTF-8 UTF-8 ).
{ "source": [ "https://unix.stackexchange.com/questions/3218", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/131/" ] }
3,229
Is it possible to do a tail -f (or similar) on a file, and grep it at the same time? I wouldn't mind other commands just looking for that kind of behavior.
Using GNU tail and GNU grep , I am able to grep a tail -f using the straight-forward syntax: tail -f /var/log/file.log | grep search_term
{ "source": [ "https://unix.stackexchange.com/questions/3229", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29/" ] }
3,247
I want to understand what mounting is. It is used in different contexts and situations and I can't find resources which: Describe the mount concept Explain the actions taken by the computer/OS/utility when a mount is performed How and in which situations mount is used Which features in the Linux mount command are of frequent use and some examples ( I hear mount applied to diverse entities directories, flash drives, network card, etc )
As fschnitt points out, a comprehensive answer to this would likely be a chapter in a systems administration manual, so I'll try just to sketch the basic concepts. Ask new questions if you need more detail on specific points. In UNIX, all files in the system are organized into a single directory tree structure (as opposed to Windows, where you have a separate directory tree for each drive). There is a "root" directory, which is denoted by / , which corresponds to the top directory on the main drive/partition (in the Windows world, this would be C: ). Any other directory and file in the system can be reached from the root, by walking down sub-directories. How can you make other drives/partitions visible to the system in such a unique tree structure? You mount them: mounting a drive/partition on a directory (e.g., /media/usb ) means that the top directory on that drive/partition becomes visible as the directory being mounted. Example: if I insert a USB stick in Windows I get a new drive, e.g., F: ; if in Linux I mount it on directory /media/usb , then the top directory on the USB stick (what I would see by opening the F: drive in Windows) will be visible in Linux as directory /media/usb . In this case, the /media/usb directory is called a "mount point". Now, drives/partitions/etc. are traditionally called "(block) devices" in the UNIX world, so you always speak of mounting a device on a directory. By abuse of language, you can just say "mount this device" or "unmount that directory". I think I've only covered your point 1., but this could get you started for more specific questions. Further reading: * http://ultra.pr.erau.edu/~jaffem/tutorial/file_system_basics.htm
{ "source": [ "https://unix.stackexchange.com/questions/3247", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1325/" ] }
3,259
I find dead.letter files from time to time in my $HOME directory. What they are for?
Either a program tried to send mail and failed (this is more likely), or you were in the middle of writing mail and broke out, so the client saved the draft in dead.letter . From the mail man page: Normally, when you abort a message with two interrupt characters (usually control-C), mail copies the partial letter to the file dead.letter in your home directory.
{ "source": [ "https://unix.stackexchange.com/questions/3259", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/305/" ] }
3,264
I know that Apple's Terminal.app provides a bash shell. Are there any differences between this and a bash on Linux?
Terminal is a terminal emulator. It interprets various control sequences sent by programs (control characters like CR, LF, BS and longer control sequences for commands like “clear screen”, “move cursor up 3 lines”, etc.). Terminal is the same kind of program as xterm , rxvt , Konsole , or GNOME Terminal . Almost all modern terminal emulators support the “xterm” control sequences, so they are generally highly compatible (and most programs use the ncurses library and its terminfo database to abstract over the actual control sequences). bash is a shell. It interprets commands that usually involve running other programs. In normal, interactive use the shell’s input comes from a user via a terminal emulator. The terminal emulator and the shell are connected via a “pseudo tty” device (e.g. /dev/pts/24 , or /dev/ttyp9 ). Because the tty devices are the only interface between Terminal and bash , they are completely independent. You can use bash with iTerm instead of Terminal , and you can use zsh instead of bash inside a Terminal window. The version of bash installed on your Mac OS X and Linux systems may be different, but should be fairly easy to install pretty much whatever version of bash you want on either system. You might look at MacPorts , homebrew , or Fink for ways to install recent versions of bash (and other shells) on Mac OS X. Whatever Linux distribution you are using surely comes with packages for common shells.
{ "source": [ "https://unix.stackexchange.com/questions/3264", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1465/" ] }
3,284
Sadly, I only learned about this last year by stumbling upon it randomly on the internet. I use it so infrequently that I always forget what it is by the time I need it again. How do you change to your previous directory?
The shortcut is - Try cd - If you want to use this in your prompt, you have to refer to it with ~- . See the example: [echox@kaffeesatz ~]$ cd /tmp [echox@kaffeesatz tmp]$ ls cron.iddS32 serverauth.CfIgeXuvka [echox@kaffeesatz tmp]$ cd - /home/echox [echox@kaffeesatz ~]$ ls ~- cron.iddS32 serverauth.CfIgeXuvka
{ "source": [ "https://unix.stackexchange.com/questions/3284", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1425/" ] }
3,297
I have a script which outputs the following text. This is the output from a Netopia 2210-02 ADSL2 modem. ADSL Line State: Up ADSL Startup Attempts: 1 ADSL Modulation: DMT ADSL Data Path: Fast Datapump Version: DSP 7.2.3.0, HAL 7.2.1.0 SNR Margin: 8.20 9.00 dB Line Attenuation: 57.50 31.00 dB Output Power: 17.09 12.34 dBm Errored Seconds: 0 0 Loss of Signal: 0 476 Loss of Frame: 0 0 CRC Errors: 57921 416 Data Rate: 2880 1024 How can I remove the end-of-line character for each line? I would like the output to look something like this (Yes it's ugly): ADSL Line State: Up ADSL Startup Attempts: 1 ADSL Modulation: DMT ADSL Data Path: Fast Datapump Version: DSP 7.2.3.0, HAL 7.2.1.0 SNR Margin: 8.20 9.00 dB Line Attenuation: 57.50 31.00 dB Output Power: 17.09 12.34 dBm Errored Seconds: 0 0 Loss of Signal: 0 476 Loss of Frame: 0 0 CRC Errors: 57921 416 Data Rate: 2880 1024 I tried some solutions like this, but they don't work: # (This simply outputs the contents of the script, unmodified) stefanl@hosta:~/Work/Cacti $ ./script | sed -e 's/$//' I also tried using tr . I was expecting the following command to replace each newline character with the space character. This would take the multiple lines and combine them into one long single line. Instead, this only displays the last line of output. It seems to overwrite each subsequent line with the next line of output. stefanl@hosta:~/Work/Cacti $ ./script | tr '\n' ' ' Data Rate: 2880 1024stefanl@hosta:~/Work/Cacti $ stefanl@hosta:~/Work/Cacti $ Update : Upon further examination, it looks like each line is preceded by a return character. This shows up as ^M when using less . So, I added two tr statements. One to delete newline characters, one to delete the return character. ./script | | tr -d '\n' | tr -d '\r'
sed won't work (easily) because it operates on lines one at a time; you could do it, but it would involve copying the whole input into the hold buffer. tr actually should work the way you pasted it; are you sure the newlines are \n s? You can simplify it a bit by deleting the newlines instead of converting them to spaces: tr -d '\n'
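For the record, one tr invocation can delete both characters at once, since the argument to -d is a set of characters rather than a literal string:
./script | tr -d '\r\n'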
{ "source": [ "https://unix.stackexchange.com/questions/3297", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4/" ] }
3,304
In Windows I can open "My Computer" and click on the "Webcam" icon to get a feed from my webcam. I can also take snapshots of that feed. Can I do the same in Ubuntu? Without installing any extra applications like Photo Booth .
Since you want an answer "without installing any extra applications like Photobooth," I've tried to give a solution that doesn't depend on very much. Also I'm assuming that your webcam uses " Video4Linux2 " and that it is /dev/video0 . If this is a modern webcam and if you only have one, these are pretty good assumptions. From the command line: $ gst-launch-0.10 v4l2src device=/dev/video0 ! xvimagesink Note that "v4l2src" contains a lowercase L and not the number 1. On your system the command may be gst-launch or something starting with gst-launch but with a different version number. Tab completion should help you find the exact command name. This tool is in the gstreamer0.10-tools package on my Ubuntu system, which is a dependency of libgstreamer, which is a dependency of a large number of the apps on my Ubuntu system and is likely present in the default installation. Other Applications If you don't mind installing other applications, here is how you can do this in a few other applications. All of them can easily be installed via apt-get or another package manager of your choosing: VLC : $ vlc v4l2:///dev/video0 Also, you can do this from the VLC GUI by going to File->Open Capture Device mplayer : mplayer tv://device=/dev/video01 (from Stefan in the comments) Cheese : This is a photobooth-like app that is very simple to use.
{ "source": [ "https://unix.stackexchange.com/questions/3304", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1945/" ] }
3,320
What are the fundamental differences between the mainstream *NIX shells and what scenarios might prompt you to use one over the other? I understand that some of it probably comes down to user preference but I've only ever used Bash and I'm interested to hear where another shell might be useful. Also, is there an impact on user-written shell scripts when running under one shell or another or is it simply a matter of changing the shell at the top of the file? My instinct says it's not that easy.
For interactive use, there are two main contenders, Bash and zsh , plus the straggler tcsh and the newcomer fish . Bash is the official shell of the GNU project and the default shell on most Linux distributions. On other unices that don't ship with a decent interactive shell as part of the base installation, I think Bash is what people tend to choose, in a self-reinforcement “Bash is everywhere so I'll use it too” loop. See also Why is Bash everywhere? (with a lot of historical information). Zsh has almost every feature of Bash and many more (useful!) features. Its main downside is being less well-known, which as a practical matter means you're less likely to find it already installed on a system someone else set up and there is less third-party documentation about it. See also What zsh features do you use? , What features are in zsh and missing from Bash, or vice versa? , What are the practical differences between Bash and Zsh? . Tcsh was once (up to the early 1990s) the shell with the best interactive features, like its predecessor csh. That made it popular for interactive use (but not for scripting ). Zsh caught up with tcsh and fairly quickly improved further, and Bash caught up (with programmable completion) in the early 2000s, while tcsh has barely made any progress in the past 15 years. Therefore there is little reason to learn tcsh now. Fish tries to be cleaner than its predecessors. It has some neat features (simpler syntax, syntax coloring on the command line) but lacks others (whatever the author doesn't like). The fish community is a lot smaller than even zsh's, making the effects even more acute. For scripting, there are several languages you might want to target, depending on how portable you want your scripts to be. Anything that pretends to be Unix-like has a Bourne -derived shell as /bin/sh . There are still some commercial unices around where /bin/sh is not POSIX compliant. Almost every now-running unix has an sh executable that is at least compliant with at least POSIX.2-1992 and usually at least POSIX:2001 a.k.a. Single Unix v3 . This shell might live in a different directory such as /usr/bin/posix or /usr/xpg6/bin . POSIX emulation layers also exist for just about every system that's powerful enough to support it, making it an attractive target. Many unix systems have ksh93 , which brings some very useful features that POSIX sh lacks (arrays, associative arrays, extended globs ( *(foo) , @(foo|bar) , …), null globs ( ~(N)foo* ), …). Ksh was initially commercial software (it became free in 2000, after some habits had set), and many free unices (Linux, *BSD) got into the habit of only providing a much older free clone ( pdksh ) lacking many of these useful features. Pdksh is now getting displaced by mksh outside of OpenBSD, but even mksh falls short of implementing all ksh93 features. Today, you can't count on ksh93 being available everywhere, especially on Linux where Bash is the norm. Bash is always available on Linux (except some embedded variants) and often on other unices. It has most of ksh93's useful features, though sometimes with a different syntax. Zsh has most of ksh93 and Bash's useful features. Its core syntax is cleaner but incompatible with Bourne. Except for macOS, don't count on zsh being available on a system you didn't install. For more advanced scripting, you can turn to Perl or Python . 
These languages have proper data structures, decent text manipulation features, decent process combination and communication mechanisms, and tons of available libraries. Most unix systems have them, either bundled with the OS or installed by the administrator (because there are so many Perl and Python scripts out there that it's a rare system that doesn't have at least one of each).
{ "source": [ "https://unix.stackexchange.com/questions/3320", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/451/" ] }
3,322
I know there are many differences between OSX and Linux, but what makes them so totally different, that makes them fundamentally incompatible?
The whole ABI is different, not just the binary format (Mach-O versus ELF) as sepp2k mentioned. For example, while both Linux and Darwin/XNU (the kernel of OS X) use sc on PowerPC and int 0x80 / sysenter / syscall on x86 for syscall entry, there's not much more in common from there on. Darwin directs negative syscall numbers at the Mach microkernel and positive syscall numbers at the BSD monolithic kernel ­— see xnu/osfmk/mach/syscall_sw.h and xnu/bsd/kern/syscalls.master . Linux's syscall numbers vary by architecture — see linux/arch/powerpc/include/asm/unistd.h , linux/arch/x86/include/asm/unistd_32.h , and linux/arch/x86/include/asm/unistd_64.h — but are all nonnegative. So obviously syscall numbers, syscall arguments, and even which syscalls exist are different. The standard C runtime libraries are different too; Darwin mostly inherits FreeBSD's libc, while Linux typically uses glibc (but there are alternatives, like eglibc and dietlibc and uclibc and Bionic). Not to mention that the whole graphics stack is different; ignoring the whole Cocoa Objective-C libraries, GUI programs on OS X talk to WindowServer over Mach ports, while on Linux, GUI programs usually talk to the X server over UNIX domain sockets using the X11 protocol. Of course there are exceptions; you can run X on Darwin, and you can bypass X on Linux, but OS X applications definitely do not talk X. Like Wine, if somebody put the work into implementing a binary loader for Mach-O trapping every XNU syscall and converting it to appropriate Linux syscalls writing replacements for OS X libraries like CoreFoundation as needed writing replacements for OS X services like WindowServer as needed then running an OS X program "natively" on Linux could be possible. Years ago, Kyle Moffet did some work on the first item, creating a prototype binfmt_mach-o for Linux, but it was never completed, and I know of no other similar projects. (In theory this is quite possible, and similar efforts have been done many times; in addition to Wine, Linux itself has support for running binaries from other UNIXes like HP-UX and Tru64, and the Glendix project aims to bring Plan 9 compatiblity to Linux.) Somebody has put in the effort to implement a Mach-O binary loader and API translator for Linux! shinh/maloader - GitHub takes the Wine-like approach of loading the binary and trapping/translating all the library calls in userspace. It completely ignores syscalls and all graphical-related libraries, but is enough to get many console programs working. Darling builds upon maloader, adding libraries and other supporting runtime bits.
{ "source": [ "https://unix.stackexchange.com/questions/3322", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1571/" ] }
3,334
I'm new to vi . I started vi on my Ubuntu machine and I can't quit. I see the editor and I can write text, at the bottom line there is a label "recording". How do I quit the vi editor?
vim is a modal editor . Hit the ESC key to get into Normal (command) mode then type :q and press Enter . To quit without saving any changes, type :q! and press Enter . See also Getting out in Vim documentation.
{ "source": [ "https://unix.stackexchange.com/questions/3334", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1615/" ] }
3,340
I use Ubuntu Server 10.10 and I would like to see what processes are running. I know that PostgreSQL is running on my machine but I can not see it with the top or ps commands, so I assume that they aren't showing all of the running processes. Is there another command which will show all running processes or is there any other parameters I can use with top or ps for this?
From the ps man page: -e Select all processes. Identical to -A. Thus, ps -e will display all of the processes. The common options for "give me everything" are ps -ely or ps aux , the latter is the BSD-style. Often, people then pipe this output to grep to search for a process, as in xenoterracide's answer. In order to avoid also seeing grep itself in the output, you will often see something like: ps -ef | grep [f]oo where foo is the process name you are looking for. However, if you are looking for a particular process, I recommend using the pgrep command if it is available. I believe it is available on Ubuntu Server. Using pgrep means you avoid the race condition mentioned above. It also provides some other features that would require increasingly complicated grep trickery to replicate. The syntax is simple: pgrep foo where foo is the process for which you are looking. By default, it will simply output the Process ID (PID) of the process, if it finds one. See man pgrep for other output options. I found the following page very helpful: http://mywiki.wooledge.org/ProcessManagement
{ "source": [ "https://unix.stackexchange.com/questions/3340", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1615/" ] }
3,378
I would like to do some basic editing on existing PDF file. More specifically: Add chapters/bookmarks Change page numbering However, I cannot find any tool, GUI or command line, which would offer this functionality. Is there any free-open alternative tools?
I use pdftk mainly. But here are some others to consider: pdfsam (PDF Split and Merge) : "pdfsam is an open source tool (GPL license) designed to handle pdf files" PDF Slicer : "A simple application to extract, merge, rotate and reorder pages of PDF documents." PDFJam "A small collection of shell scripts which provide a simple interface to much of the functionality of the excellent pdfpages PDF file package (by Andreas Matthias) for pdfLaTeX ." (You can also use pdfLaTeX directly.) jPDFTweak : "jPDF Tweak is a Java Swing application that can combine, split, rotate, reorder, watermark, encrypt, sign, and otherwise tweak PDF files." Inkscape: is a vector graphics editor that can both import PDF pages into its native SVG format, and also export as PDF. Calibre: Open source ebook management software that can convert PDFs to other formats, and manipulate them in other ways. Comes with command line tools such as pdfmanipulate which can be useful. Ghostscript of course can do a lot of things with PDF files too.
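For the bookmark/chapter part of the question specifically, pdftk's metadata round-trip can do it; a hedged sketch with placeholder file names:
$ pdftk input.pdf dump_data output meta.txt
(edit meta.txt and add one block per bookmark, e.g.:)
BookmarkBegin
BookmarkTitle: Chapter 1
BookmarkLevel: 1
BookmarkPageNumber: 5
$ pdftk input.pdf update_info meta.txt output with-bookmarks.pdf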
{ "source": [ "https://unix.stackexchange.com/questions/3378", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/305/" ] }
3,396
Is it possible for a user without root access to mount an arbitrary ISO image? If so, how?
You can do this without root access using the fuse module fuseiso . After fuse and fuseiso have been installed, you can do as a normal user fuseiso cdimage.iso ~/somedirectory to mount it. You may also need to add your user to the fuse group if you get permission errors when trying to use fuseiso .
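To detach the image again later, unmount it with the FUSE helper (assuming the same mount point as above):
$ fusermount -u ~/somedirectory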
{ "source": [ "https://unix.stackexchange.com/questions/3396", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1942/" ] }
3,454
If I have really long output from a command (single line) but I know I only want the first [x] (let's say 8) characters of the output, what's the easiest way to get that? There aren't any delimiters.
One way is to use cut : command | cut -c1-8 This will give you the first 8 characters of each line of output. Since cut is part of POSIX, it is likely to be on most Unices.
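A quick sanity check of the idea, using a throwaway echo in place of your real command:
$ echo "0123456789abcdef" | cut -c1-8
01234567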
{ "source": [ "https://unix.stackexchange.com/questions/3454", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29/" ] }
3,467
Is it "resource configuration", by any chance?
As is often the case with obscure terms, the Jargon File has an answer: [Unix: from runcom files on the CTSS system 1962-63, via the startup script /etc/rc] Script file containing startup instructions for an application program (or an entire operating system), usually a text file containing commands of the sort that might have been invoked manually once the system was running but are to be executed automatically each time the system starts up. Thus, it would seem that the "rc" part stands for "runcom", which I believe can be expanded to "run commands". In fact, this is exactly what the file contains, commands that bash should run.
{ "source": [ "https://unix.stackexchange.com/questions/3467", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/912/" ] }
3,490
I am using gtk-recordmydesktop to record the video output to my desktop. However, the videos have no sound. All the tutorials I found regarding this involved getting sound recorded from a microphone, while I am interested in getting the sound output recorded. How can I do this? The official FAQ says "The solution is in your mixer's settings. Keep playing with it ;)." which doesn't clarify anything. How can I get the sound output recorded, while being able to hear it myself also?
I managed to get it going with the steps on the Ubuntu Forums; for clarity, here is what I did:
1. sudo apt-get install gtk-recordmydesktop pavucontrol
2. Opened the PulseAudio Volume Control dialog: Applications > Sound & Video > PulseAudio Volume Control
3. Opened gtk-recordmydesktop
4. In gtk-rmd advanced preferences, "Sound" tab, set "Device" to pulse
5. In gtk-rmd, start a recording
6. In Volume Control, go to the Recording tab and change the recordmydesktop entry to 'Monitor of '
This is what seems to have worked for me.
{ "source": [ "https://unix.stackexchange.com/questions/3490", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/131/" ] }
3,497
Is it possible to change the name of a buffer in vim? Specifically, I'm using Conque Shell to open shells in vim (each shell is in a buffer) and with multiple shells, I see: 10: bash - 1 11: bash - 2 in my buffer list. I would like to rename these buffers with more meaningful names (e.g., "mercurial" instead of "bash - 2"). Is it possible?
You can use :file newname to change the buffer name. From :help :file_f : Sets the current file name to {name} . The optional ! avoids truncating the message, as with :file . If the buffer did have a name, that name becomes the alternate-file name. An unlisted buffer is created to hold the old name.
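For example, to give the second Conque shell from the question a friendlier name (the buffer number 11 is just illustrative):
:buffer 11
:file mercurial
:ls
The :ls listing should then show mercurial instead of bash - 2 .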
{ "source": [ "https://unix.stackexchange.com/questions/3497", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1416/" ] }
3,505
I've got deb http://debian-multimedia.org squeeze main in " /etc/apt/sources.list ", but wajig update && wajig install acroread results in: E: Package ‘acroread’ has no installation candidate What’s happening? Are there alternative repos?
NOTE: The 9.x branch of reader has been EOL'd as of June 26, 2013 . If you need native Adobe Reader support on Linux, 9.x is your only option! 10 doesn't list Linux as being supported , and likely never will. More on it too here: Adobe abandons Linux . Many may question the relevance of needing Adobe Reader but there are several use cases that the open source versions of reading tools simply do not provide. Signing documents, filling out forms, and printing are just a few of these use cases where your only option is to use Adobe Reader! To install Adobe Reader on Wheezy or higher you can use the following steps. Step #1 - Download Adobe maintains all the official versions of Adobe Reader on their FTP site so you can simply go there and download the latest version, packaged as a .deb file. The primary URL for all versions of Adobe Reader 9.x versions of Adobe Reader If you go to the 2nd URL above you'll get to a page that looks like this: From this page you can select whatever happens to be the latest version of Reader at the time you're attempting to do this. For this example we'll be downloading 9.5.5 , so we select that link. This will take us to another page with the link, "enu". This denotes that we're downloading the English version of the tool. Apparently they only offer the package in this language. I'm not 100% on this particular point, but no matter, we press on. At this point we should be at this URL: ftp://ftp.adobe.com/pub/adobe/reader/unix/9.x/9.5.5/enu/ From here we can download the .deb file. I typically do this using wget like so: $ wget ftp://ftp.adobe.com/pub/adobe/reader/unix/9.x/9.5.5/enu/AdbeRdr9.5.5-1_i386linux_enu.deb After doing this we should have the file, AdbeRdr9.5.5-1_i386linux_enu.deb . Now we're ready to install it. Step #2 - Installation The file we just downloaded is the 32-bit version of Adobe Reader. Adobe only provides Reader as a 32-bit binary, there is no 64-bit variant, but this is perfectly fine, we just need to install it a bit differently than most .deb packages. First we need to add the 32-bit architecture to our system (multiarch), then update. $ sudo dpkg --add-architecture i386 $ sudo apt-get update Now attempt to install Adobe Reader with either dpkg and apt-get OR gdebi . If you pick the first option, it will require you to tell apt-get to fix any broken installed packages. This would seem to be a hack, but it basically gets apt to do the heavy lifting for us and install/fix any missing or broken packages with relatively little fuss. Alternatively, using the second method, gdebi will automatically resolve the dependencies. Using dpkg and apt-get : $ sudo dpkg -i AdbeRdr9.5.5-1_i386linux_enu.deb $ sudo apt-get install -f Using gdebi : $ sudo apt-get install gdebi $ sudo gdebi AdbeRdr9.5.5-1_i386linux_enu.deb Now, attempting to launch acroread with $ acroread gives /opt/Adobe/Reader9/Reader/intellinux/bin/acroread: error while loading shared libraries: libxml2.so.2: cannot open shared object file: No such file or directory Adobe forgot a dependency. We can figure out which package to install using apt-file . $ apt-file search libxml2.so.2 which gives libxml2 . So we do $ apt-get install libxml2:i386 to install the i386 version of libxml2 . Now invoke acroread using a non-root account. $ acroread Here is a screenshot of Acrobat Reader running on Debian Wheezy. NOTE: Adobe installs Acrobat Reader in /opt , which is icky, and in violation of the FHS. References [SOLVED] how to install Adobe Acrobat Reader? 
DebianAMD64FAQ Adobe Reader 9.5.4 on 64 bit Linux
{ "source": [ "https://unix.stackexchange.com/questions/3505", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
3,510
The bash builtin command set , if invoked without arguments, will print all shell and environment variables, but also all defined functions. This makes the output unusable for humans and difficult to grep . How can I make the bash builtin command set print only the variables and not the functions? Are there other commands which print only the shell variables, without the functions? Note: bash differentiates between shell variables and environment variables; see Difference between environment variables and exported environment variables in bash .
"Are there other commands which prints only the shell variables, without the functions?" In man bash, in section SHELL BUILTIN COMMANDS (in the set section) it says: "In posix mode, only shell variables are listed." (set -o posix; set) note: () syntax spawns a subshell, if you don't like forking just use the more verbose version set -o posix; set; set +o posix
{ "source": [ "https://unix.stackexchange.com/questions/3510", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1170/" ] }
3,514
I am using ffmpeg to get the meta info of an audio clip, but I am unable to grep it. $ ffmpeg -i 01-Daemon.mp3 |grep -i Duration FFmpeg version SVN-r15261, Copyright (c) 2000-2008 Fabrice Bellard, et al. configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib --mandir=/usr/share/man --arch=i386 --extra-cflags=-O2 ... I checked; this ffmpeg output is directed to stderr. $ ffmpeg -i 01-Daemon.mp3 2> /dev/null So I think that grep is unable to read the error stream to catch matching lines. How can we enable grep to read the error stream? Using the nixCraft link, I redirected the standard error stream to the standard output stream, and then grep worked. $ ffmpeg -i 01-Daemon.mp3 2>&1 | grep -i Duration Duration: 01:15:12.33, start: 0.000000, bitrate: 64 kb/s But what if we do not want to redirect stderr to stdout?
If you're using bash why not employ anonymous pipes, in essence shorthand for what phunehehe said: ffmpeg -i 01-Daemon.mp3 2> >(grep -i Duration)
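A related trick, in case you only want grep to see stderr while stdout stays out of the pipe entirely (the command name is a placeholder):
$ somecommand 2>&1 >/dev/null | grep -i duration    # stderr goes to the pipe, stdout is discarded
The order matters: 2>&1 first points stderr at the pipe, then >/dev/null redirects only stdout.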
{ "source": [ "https://unix.stackexchange.com/questions/3514", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2045/" ] }
3,533
What could cause touch to fail with this error message? touch: cannot touch `foo': No such file or directory Note that an error due to incorrect permissions looks different: touch: cannot touch `foo': Permission denied
Following sequence causes this error message: $ mkdir foo $ cd foo In another terminal: $ rm -r foo In the previous terminal: $ touch x touch: cannot touch `x': No such file or directory Of course, other events that also result in invalidating the current working directory (CWD) of a process that tries to create a file there also yield this error message.
{ "source": [ "https://unix.stackexchange.com/questions/3533", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/663/" ] }
3,568
I'd like to log in as a different user without logging out of the current one (on the same terminal). How do I do that?
How about using the su command? $ whoami user1 $ su - user2 Password: $ whoami user2 $ exit logout If you want to log in as root, there's no need to specify username: $ whoami user1 $ su - Password: $ whoami root $ exit logout Generally, you can use sudo to launch a new shell as the user you want; the -u flag lets you specify the username you want: $ whoami user1 $ sudo -u user2 zsh $ whoami user2 There are more circuitous ways if you don't have sudo access, like ssh username@localhost, but sudo is probably simplest, provided that it's installed and you have permission to use it.
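If you want the other user's full login environment rather than just a shell, sudo can do that as well (user2 is a placeholder):
$ sudo -i -u user2    # login shell as user2, with user2's environment and home directory
$ sudo -s             # non-login shell as root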
{ "source": [ "https://unix.stackexchange.com/questions/3568", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
3,575
Is it possible to see the progress (amount copied and transfer rate) of a cp operation while it runs?
The standard coreutils cp command doesn't support this. There's a Gentoo patch floating around that adds it for different versions, although it's not included in Gentoo anymore for some reason; the version for coreutils 6.10 is in their bugzilla , and I'm sure there are lots of others around. If you don't want to patch cp , you need to use some other command. For example, rsync has a --progress flag, so you can do: rsync --progress source destination If instead of copying you cat the data and then redirect stdout to the destination (i.e. cat source > destination ), then you can use a program that measures pipe throughput and insert it in the middle ( cat source | SOME-PROGRAM > destination ); there are a couple mentioned in this related question . The one I recommended there was pv (Pipe Viewer): If you give it the --rate flag it will show the transfer rate
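A minimal pv sketch, assuming pv is installed and using placeholder file names (the progress line shown is only illustrative):
$ pv /path/to/source > /path/to/destination
 219MiB 0:00:05 [43.7MiB/s] [==========>            ] 45% ETA 0:00:06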
{ "source": [ "https://unix.stackexchange.com/questions/3575", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1253/" ] }
3,586
So, for example, when I type man ls I see LS(1) . But if I type man apachectl I see APACHECTL(8) and if I type man cd I end up with cd(n) . I'm wondering what the significance of the numbers in the parentheses are, if they have any.
The number corresponds to what section of the manual that page is from; 1 is user commands, while 8 is sysadmin stuff. The man page for man itself ( man man ) explains it and lists the standard ones: MANUAL SECTIONS The standard sections of the manual include: 1 User Commands 2 System Calls 3 C Library Functions 4 Devices and Special Files 5 File Formats and Conventions 6 Games et. al. 7 Miscellanea 8 System Administration tools and Daemons Distributions customize the manual section to their specifics, which often include additional sections. There are certain terms that have different pages in different sections (e.g. printf as a command appears in section 1, as a stdlib function appears in section 3); in cases like that you can pass the section number to man before the page name to choose which one you want, or use man -a to show every matching page in a row: $ man 1 printf $ man 3 printf $ man -a printf You can tell what sections a term falls in with man -k (equivalent to the apropos command). It will do substring matches too (e.g. it will show sprintf if you run man -k printf ), so you need to use ^term to limit it: $ man -k '^printf' printf (1) - format and print data printf (1p) - write formatted output printf (3) - formatted output conversion printf (3p) - print formatted output printf [builtins] (1) - bash built-in commands, see bash(1) Note that the section can sometimes include a subsection (e.g., the p in 1p and 3p above). The p subsection is for POSIX specifications; the x subsection is for X Window System documentation.
{ "source": [ "https://unix.stackexchange.com/questions/3586", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2151/" ] }
3,593
Say I have a file with the following: bob john sue Now these directly correspond to (in this case) URL patterns such as http://example.com/persons/bob.tar , john.tar , sue.tar . I would like to take these lines and run them through xargs . I don't know what is passed to the command being executed, though. How do I access the parameter, either from the prompt (say I want to simply echo each line, like cat file | xargs echo $PARAM ) or from a bash script?
Michael's answer is right, and should sort out your problem. Running cat file | xargs -I % curl http://example.com/persons/%.tar will download the files bob.tar, john.tar and sue.tar as expected. BUT: cat here is useless; rather, use: <file xargs -I % curl http://example.com/persons/%.tar
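Before hitting the network, a dry run that just prefixes the command with echo shows exactly what xargs would execute:
$ <file xargs -I % echo curl http://example.com/persons/%.tar
curl http://example.com/persons/bob.tar
curl http://example.com/persons/john.tar
curl http://example.com/persons/sue.tar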
{ "source": [ "https://unix.stackexchange.com/questions/3593", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/163/" ] }
3,595
In Gentoo there is the file /var/lib/portage/world that contains packages that I explicitly installed. By explicit I mean, packages that I choose, not including anything installed by default, or pulled in by the dependencies. Is there a similar file or a command to find that information in Ubuntu?
Just the code aptitude search '~i !~M' -F '%p' --disable-columns | sort -u > currentlyinstalled.txt wget -qO - http://mirror.pnl.gov/releases/precise/ubuntu-12.04.3-desktop-amd64.manifest \ | cut -f1 | sort -u > defaultinstalled.txt comm -23 currentlyinstalled.txt defaultinstalled.txt Explanation One way to think about this problem is to break this into three parts: How do I get a list of packages not installed as dependencies? How do I get a list of the packages installed by default? How can I get the difference between these two lists? How do I get a list of packages not installed as dependencies? The following command seems to work on my system: $ aptitude search '~i !~M' -F '%p' --disable-columns | sort -u > currentlyinstalled.txt Similar approaches can be found in the links that Gilles posted as a comment to the question. Some sources claim that this will only work if you used aptitude to install the packages; however, I almost never use aptitude to install packages and found that this still worked. The --disable-columns prevents aptitude from padding lines of package names with blanks that would hinder the comparison below. The | sort -u sorts the file and removes duplicates. This makes the final step much easier. How do I get a list of the packages installed by default? Note: This section starts out with a 'wrong path' that I think is illustrative. The second piece of code is the one that works. This is a bit trickier. I initially thought that a good approximation would be all of the packages that are dependencies of the meta-packages ubuntu-minimal, ubuntu-standard, ubuntu-desktop, and the various linux kernel related packages. A few results on google searches seemed to use this approach. To get a list of these dependencies, I first tried the following (which didn't work): $ apt-cache depends ubuntu-desktop ubuntu-minimal ubuntu-standard linux-* | awk '/Depends:/ {print $2}' | sort -u This seems to leave out some packages that I know had to come by default. I still believe that this method should work if one constructs the right list of metapackages. However, it seems that Ubuntu mirrors contain a "manifest" file that contains all of the packages in the default install. The manifest for Ubuntu 12.04.3 is here: http://mirror.pnl.gov/releases/precise/ubuntu-12.04.3-desktop-amd64.manifest If you search through this page (or the page of a mirror closer to you): http://mirror.pnl.gov/releases/precise/ You should be able to find the ".manifest" file that corresponds to the version and architecture you are using. To extract just the package names I did this: wget -qO - http://mirror.pnl.gov/releases/precise/ubuntu-12.04.3-desktop-amd64.manifest | cut -f1 | sort -u > defaultinstalled.txt The list was likely already sorted and unique, but I wanted to be sure it was properly sorted to make the next step easier. I then put the output in defaultinstalled.txt . How can I get the difference between these two lists? This is the easiest part since most Unix-like systems have many tools to do this. The comm tool is one of many ways to do this: comm -23 currentlyinstalled.txt defaultinstalled.txt This should print the list of lines that are unique to the first file. Thus, it should print a list of installed packages not in the default install.
{ "source": [ "https://unix.stackexchange.com/questions/3595", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/250/" ] }
3,651
So SSH has these files that configure settings for a specific user: ~/.ssh/authorized_keys ~/.ssh/config ~/.ssh/id_rsa ~/.ssh/id_rsa.pub ~/.ssh/known_hosts I'd like to globalise some of these files, like config and known_hosts , so that other users (including root) could share the configured hosts. What would be the best way to do this?
For ~/.ssh/config you can place relevant system-wide settings in /etc/ssh/ssh_config according to the man page : ssh(1) obtains configuration data from the following sources in the following order: command-line options user's configuration file (~/.ssh/config) system-wide configuration file (/etc/ssh/ssh_config) For each parameter, the first obtained value will be used. The configuration files contain sections separated by “Host” specifications, and that section is only applied for hosts that match one of the patterns given in the specification. Note that only the first value will be used, which means that the user can always override the system-wide configuration options locally. For ~/.ssh/known_hosts you can use /etc/ssh/ssh_known_hosts or another file specified by the GlobalKnownHostsFile configuration option: GlobalKnownHostsFile Specifies a file to use for the global host key database instead of /etc/ssh/ssh_known_hosts. I'm unsure if it is possible for the other files, but I imagine you could work something out with symlinks if you really wanted to share private keys among users as well.
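A small sketch of what such a system-wide section might look like in /etc/ssh/ssh_config (host name and values are invented):
Host build-server
    HostName build.example.com
    User deploy
    Port 2222
Every local user then gets this alias and these defaults unless they override them in their own ~/.ssh/config .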
{ "source": [ "https://unix.stackexchange.com/questions/3651", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1327/" ] }
3,672
I want to delete all files with a given name in all the subdirectories of my home directory. I tried: rm -r file in my home directory, but it didn't work because that file doesn't exist in that directory.
find . -name "filename" -delete
{ "source": [ "https://unix.stackexchange.com/questions/3672", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1898/" ] }
3,675
sha1sum outputs a hex-encoded form of the actual SHA. I would like to see a base64-encoded variant instead: possibly some command that outputs the binary digest that I can pipe, like so: echo -n "message" | <some command> | base64 , or if it outputs base64 directly that's fine too.
If you have the command line utility from OpenSSL , it can produce a digest in binary form, and it can even translate to base64 (in a separate invocation). printf %s foo | openssl dgst -binary -sha1 | openssl base64 -A -sha256 , -sha512 , etc are also supported.
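If openssl is not available, a rougher sketch with coreutils plus xxd (which usually ships with vim) gives the same result:
$ printf %s message | sha1sum | awk '{print $1}' | xxd -r -p | base64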
{ "source": [ "https://unix.stackexchange.com/questions/3675", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29/" ] }
3,685
Is there any way to find out if the OS I'm running (actually installing) is running in a VMware virtual machine? I need to disable NTP settings if the automated install is done on a virtual machine, but keep them enabled if installing on bare metal.
Linux adds the hypervisor flag to /proc/cpuinfo if the kernel detects running on some sort of a hypervisor.
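A minimal check built on that flag, plus a DMI lookup that needs root and often names the vendor on VMware guests:
$ grep -qw hypervisor /proc/cpuinfo && echo "virtual machine" || echo "bare metal (or flag not exposed)"
$ sudo dmidecode -s system-product-name    # typically reports something like "VMware Virtual Platform" on VMware guests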
{ "source": [ "https://unix.stackexchange.com/questions/3685", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2142/" ] }
3,747
I used history | less to get the lines of previous commands, and from the numbers on the left-hand side I found the line I wanted to repeat (e.g. 22). I typed !22 at the command prompt and it worked, executing the command from that line exactly as I had run it at the time. I cannot figure out where else the exclamation mark can be used, what it represents in terms of actions taken by bash, and when to use it. The documentation does not give me an explanation that feels 'tangible'.
! invokes history expansion, a feature that originally appeared in the C shell , back in the days before you could count on terminals to have arrow keys. It's especially useful if you add the current command number to the prompt ( PS1="\!$ " ) so you can quickly look at your screen to get numbers for past commands. Now that you can use arrow keys and things like Ctrl-R to search the command history, I don't see much use for the feature. One variant of it you might still find useful is !! , which re-executes the previous command. On its own, I don't find ! ! Enter any faster than just ↑ Enter , but it can be helpful when combined into a larger command. Example: A common pilot error on sudo based systems is to forget the sudo prefix on a command that requires extra privileges. A novice retypes the whole command. The diligent student edits the command from the shell's command history. The enlightened one types sudo !! . Processing ! in this way is enabled in Bash by default in interactive shells and can be disabled with set +o histexpand or set +H . You can disable it in Zsh with set -K .
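A few other history designators that are still handy in an interactive bash session (paths here are made up):
$ mkdir -p /tmp/some/long/path
$ cd !$        # !$ expands to the last argument of the previous command
$ !-2          # re-run the command two entries back in the history
$ !mkdir       # re-run the most recent command that started with "mkdir"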
{ "source": [ "https://unix.stackexchange.com/questions/3747", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1325/" ] }
3,770
I've got 14 files that are all parts of one text. I'd like to merge them into one file. How do I do that?
This is technically what cat ("concatenate") is supposed to do, even though most people just use it for outputting files to stdout. If you give it multiple filenames it will output them all sequentially, and then you can redirect that into a new file; in the case of all files just use ./* (or /path/to/directory/* if you're not in the directory already) and your shell will expand it to all the filenames (excluding hidden ones by default). $ cat ./* > merged-file Make sure you don't use the csh or tcsh shells for that which expand the glob after opening the merged-file for output, and that merged-file doesn't exist before hand, or you'll likely end up with an infinite loop that fills up the filesystem. The list of files is sorted lexically. If using zsh , you can change the order (to numeric, or by age, size...) with glob qualifiers. To include files in sub-directories, use: find . ! -path ./merged-file -type f -exec cat {} + > merged-file Though beware the list of files is not sorted and hidden files are included. -type f here restricts to regular files only as it's unlikely you'll want to include other types of files. With GNU find , you can change it to -xtype f to also include symlinks to regular files. With the zsh shell, cat ./**/*(-.) > merged-file Would do the same ( (-.) achieving the equivalent of -xtype f ) but give you a sorted list and exclude hidden files (add the D qualifier to bring them back). zargs can be used there to work around argument list too long errors.
{ "source": [ "https://unix.stackexchange.com/questions/3770", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2119/" ] }
3,773
For a bash script, I can use "$@" to access the arguments. What's the equivalent when I use an alias?
Aliases don't take arguments. They work more or less like simple text replacement, meaning all words after the alias (the ones that look like "arguments" to the alias) just get left at the end of the expanded alias. For instance, if you were to alias ls to ls -la , then typing ls foo bar would really execute ls -la foo bar on the command line. Which is probably fine in that example, but if foob is an alias to foo | bar , then foob abc def expands to foo | bar abc def , and there's no way to arrange those two words to be used as arguments to the left-hand side of the pipeline. One might attempt changing the alias to something like foo "$@" | bar or so, but that would expand to foo "$@" | bar abc def and use the positional parameters of the outer context in the expansion of "$@" . That's probably not what you want. If you want to have actual control over how the arguments are interpreted, then you could write a function like so: my_program_wrapper() { local first_arg=$1 \ second_arg=$2 shift 2 # get rid of the first two arguments # ... /path/to/my_program "$@" }
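Concretely, for the foo | bar example above, the function replacement could be as small as this (foo and bar are stand-ins for real commands):
foob() { foo "$@" | bar; }
Calling foob abc def then runs foo abc def | bar , which is what the alias could not express.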
{ "source": [ "https://unix.stackexchange.com/questions/3773", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1090/" ] }
3,782
By default Kate inserts 2 spaces on Tab press but switches to real tabs starting from the fourth Tab level. Can I disable this and use spaces always, regardless to the depth? I want this because I use Kate to code Scala, and using space pairs instead of tabs is a convention there.
{ "source": [ "https://unix.stackexchange.com/questions/3782", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2119/" ] }
3,809
What should I do if I want to be able to run a given program regardless of my current directory? Should I create a symbolic link to the program in the /bin folder?
If you just type export PATH=$PATH:</path/to/file> at the command line it will only last for the length of the session. If you want to change it permanently add export PATH=$PATH:</path/to/file> to your ~/.bashrc file (just at the end is fine).
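Note that the entry added to PATH should be the directory that contains the program, not the program file itself. To verify the change (the program name is a placeholder):
$ export PATH="$PATH:/path/to/directory"
$ echo "$PATH"              # the new directory should appear at the end
$ command -v myprogram      # should now print the program's full path
$ source ~/.bashrc          # reload the file after editing it, instead of opening a new shell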
{ "source": [ "https://unix.stackexchange.com/questions/3809", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2504/" ] }
3,842
I use the watch command to see the contents of my directory changing as a script runs on it (via watch ls dir/ ) It's a great tool, except that I can't seem to scroll down or up to see all of the contents once the number of entries fills the vertical length of the screen. Is there a way to do this?
watch is great, but this is one of the things it can't do. You can use tail to show the latest entries: watch "ls -rtx dir/ | tail -n $(($LINES - 2))"
{ "source": [ "https://unix.stackexchange.com/questions/3842", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/446/" ] }
3,886
What are the differences between $ nohup foo and $ foo & and $ foo & $ disown
Let's first look at what happens if a program is started from an interactive shell (connected to a terminal) without & (and without any redirection). So let's assume you've just typed foo : The process running foo is created. The process inherits stdin, stdout, and stderr from the shell. Therefore it is also connected to the same terminal. If the shell receives a SIGHUP , it also sends a SIGHUP to the process (which normally causes the process to terminate). Otherwise the shell waits (is blocked) until the process terminates or gets stopped. Now, let's look what happens if you put the process in the background, that is, type foo & : The process running foo is created. The process inherits stdout/stderr from the shell (so it still writes to the terminal). The process in principle also inherits stdin, but as soon as it tries to read from stdin, it is halted. It is put into the list of background jobs the shell manages, which means especially: It is listed with jobs and can be accessed using %n (where n is the job number). It can be turned into a foreground job using fg , in which case it continues as if you would not have used & on it (and if it was stopped due to trying to read from standard input, it now can proceed to read from the terminal). If the shell received a SIGHUP , it also sends a SIGHUP to the process. Depending on the shell and possibly on options set for the shell, when terminating the shell it will also send a SIGHUP to the process. Now disown removes the job from the shell's job list, so all the subpoints above don't apply any more (including the process being sent a SIGHUP by the shell). However note that it still is connected to the terminal, so if the terminal is destroyed (which can happen if it was a pty, like those created by xterm or ssh , and the controlling program is terminated, by closing the xterm or terminating the SSH connection), the program will fail as soon as it tries to read from standard input or write to standard output. What nohup does, on the other hand, is to effectively separate the process from the terminal: It closes standard input (the program will not be able to read any input, even if it is run in the foreground. it is not halted, but will receive an error code or EOF ). It redirects standard output and standard error to the file nohup.out , so the program won't fail for writing to standard output if the terminal fails, so whatever the process writes is not lost. It prevents the process from receiving a SIGHUP (thus the name). Note that nohup does not remove the process from the shell's job control and also doesn't put it in the background (but since a foreground nohup job is more or less useless, you'd generally put it into the background using & ). For example, unlike with disown , the shell will still tell you when the nohup job has completed (unless the shell is terminated before, of course). So to summarize: & puts the job in the background, that is, makes it block on attempting to read input, and makes the shell not wait for its completion. disown removes the process from the shell's job control, but it still leaves it connected to the terminal. One of the results is that the shell won't send it a SIGHUP . Obviously, it can only be applied to background jobs, because you cannot enter it when a foreground job is running. nohup disconnects the process from the terminal, redirects its output to nohup.out and shields it from SIGHUP . One of the effects (the naming one) is that the process won't receive any sent SIGHUP . 
It is completely independent from job control and could in principle be used also for foreground jobs (although that's not very useful).
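Putting it together, typical invocations look like this (long_running_job is a placeholder):
$ nohup long_running_job > job.log 2>&1 &    # immune to SIGHUP; output goes to job.log instead of nohup.out
$ long_running_job & disown                  # backgrounded and removed from the shell's job list, but still tied to the terminal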
{ "source": [ "https://unix.stackexchange.com/questions/3886", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1170/" ] }