25,903
I just ran across a screenshot of someone's terminal: Is there a list of all of the characters which can be used in a Bash prompt, or can someone get me the character for the star and the right arrow?
You can use any printable character; bash doesn't mind. You'll probably want to configure your terminal to support Unicode (in the form of UTF-8 ). There are a lot of characters in Unicode, so here are a few tips to help you search through the Unicode charts: You can try to draw the character on Shapecatcher . It tries to recognize a Unicode character in what you draw. You can try to figure out which block the character is in. For example, that weird-looking symbol and that star would be in a block of miscellaneous symbols; characters like Ǫ and ı are latin letters with modifiers; ∉ is a mathematical symbol, and so on. You can try to think of a word in the description of the character and look for it in a list of unicode symbol names and descriptions. Gucharmap or Kcharselect can help. P.S. On Shapecatcher, I got U+2234 THEREFORE for ∴ , U+2192 RIGHTWARDS ARROW for → , U+263F MERCURY for ☿ and U+2605 BLACK STAR for ★ . In a bash script, up to bash 4.1, you can write a byte by its code point, but not a character. If you want to avoid non-ASCII characters to make your .bashrc resilient to file encoding changes, you'll need to enter the bytes corresponding to these characters in the UTF-8 encoding. You can see the hexadecimal values by running echo ∴ → ☿ ★ | hexdump -C in a UTF-8 terminal, e.g. ∴ is encoded by \xe2\x88\xb4 in UTF-8. if [[ $LC_CTYPE =~ '\.[Uu][Tt][Ff]-?8' ]]; then PS1=$'\\[\e[31m\\]\xe2\x88\xb4\\[\e[0m\\]\n\xe2\x86\x92 \xe2\x98\xbf \\~ \\[\e[31m\\]\xe2\x98\x85 $? \\[\e[0m\\]' fi Since bash 4.2, you can use \u followed by 4 hexadecimal digits in a $'…' string. PS1=$'\\[\e[31m\\]\u2234\\[\e[0m\\]\n\u2192 \u263f \\~ \\[\e[31m\\]\u2605 $? \\[\e[0m\\]'
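A quick way to confirm that the escapes above produce the intended glyphs is to print them directly. This is a minimal sketch, assuming bash 4.2 or later and a UTF-8 locale; the code points are the ones identified above:

printf '\u2234 \u2192 \u263f \u2605\n'   # should print: ∴ → ☿ ★
locale | grep LC_CTYPE                   # confirm the effective locale really is a UTF-8 one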
{ "source": [ "https://unix.stackexchange.com/questions/25903", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
25,921
How would one run a specific command for each file that was found by using the find command? For the purpose of the question let's say that I would simply like to delete each file found by find .
Edit: While the following answer explains the general usage case, I should note that deleting files and directories is a special case. Instead of using the -execdir rm {} \; construct, just use -delete , as in: find -iname '*.txt' -delete This handles a bunch of edge cases you might not think about, including what order files and directories need to be deleted in to not run into errors. For other use cases... The best way to handle running commands on the results of a find is usually to use the various -exec options to the find command. In particular you should try to use -execdir whenever possible since it runs inside the directory of the file that was found and is generally safer (in the sense of preventing stupid mistakes being disastrous) than other options. The -exec options are followed by the command you would like to run with {} denoting the spot where the file found by find should be included and are terminated by either \; to run the command once for each file or + to replace {} with a list of arguments of all the matches. Note that the semicolon terminator is escaped so that it is not understood by the shell to be a separator leading to a new command. Let's say you were finding all text files: find -iname '*.txt' -execdir rm {} \; Here is the relevant bit from the find manual ( man find ): -exec command ; Execute command; true if 0 status is returned. All following arguments to find are taken to be arguments to the command until an argument consisting of ‘;’ is encountered. The string ‘{}’ is replaced by the current file name being processed everywhere it occurs in the arguments to the command, not just in arguments where it is alone, as in some versions of find. Both of these constructions might need to be escaped (with a ‘\’) or quoted to protect them from expansion by the shell. See the EXAMPLES section for examples of the use of the -exec option. The specified command is run once for each matched file. The command is executed in the starting directory. There are unavoidable security problems surrounding use of the -exec action; you should use the -execdir option instead. -exec command {} + This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end; the total number of invocations of the command will be much less than the number of matched files. The command line is built in much the same way that xargs builds its command lines. Only one instance of ‘{}’ is allowed within the command. The command is executed in the starting directory. -execdir command ; -execdir command {} + Like -exec, but the specified command is run from the subdirectory containing the matched file, which is not normally the directory in which you started find. This is a much more secure method for invoking commands, as it avoids race conditions during resolution of the paths to the matched files. As with the -exec action, the ‘+’ form of -execdir will build a command line to process more than one matched file, but any given invocation of command will only list files that exist in the same subdirectory. If you use this option, you must ensure that your $PATH environment variable does not reference ‘.’; otherwise, an attacker can run any commands they like by leaving an appropriately-named file in a directory in which you will run -execdir. The same applies to having entries in $PATH which are empty or which are not absolute directory names.
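To complement the manual excerpt, here is a hedged sketch of the difference between the two terminators; the pattern and commands are only examples:

find . -iname '*.txt' -execdir rm {} \;          # runs rm once per matched file, in that file's directory
find . -iname '*.txt' -exec grep -l TODO {} +    # batches many file names into as few grep invocations as possible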
{ "source": [ "https://unix.stackexchange.com/questions/25921", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12951/" ] }
25,945
How do you check if $* is empty? In other words, how to check if there were no arguments provided to a command?
To check if there were no arguments provided to the command, check the value of the $# variable: if [ $# -eq 0 ]; then >&2 echo "No arguments provided" exit 1 fi If you want to use $* ( not preferable ), then: if [ "$*" == "" ]; then >&2 echo "No arguments provided" exit 1 fi Some explanation: The second approach is not preferable because in positional parameter expansion * expands to the positional parameters, starting from one. When the expansion occurs within double quotes, it expands to a single word with the value of each parameter separated by the first character of the IFS special variable. That means a string is constructed, so there is extra overhead. On the other hand, # expands to the number of positional parameters. Example: $ command param1 param2 Here, the value of $# is 2 and the value of $* is the string "param1 param2" (without quotes), if IFS is unset; that is because when IFS is unset, the parameters are separated by spaces. For more details, run man bash and read the topic named Special Parameters.
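As a small usage sketch (the script name is hypothetical), the preferred check usually sits at the very top of a script:

#!/bin/sh
# require-args.sh: refuse to run without arguments
if [ "$#" -eq 0 ]; then
    echo "usage: $0 <arg>..." >&2
    exit 1
fi
echo "got $# argument(s)"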
{ "source": [ "https://unix.stackexchange.com/questions/25945", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3149/" ] }
26,047
I'm wondering where a new path has to be added to the PATH environment variable. I know this can be accomplished by editing .bashrc (for example), but it's not clear how to do this. This way: export PATH=~/opt/bin:$PATH or this? export PATH=$PATH:~/opt/bin
The simple stuff PATH=$PATH:~/opt/bin or PATH=~/opt/bin:$PATH depending on whether you want to add ~/opt/bin at the end (to be searched after all other directories, in case there is a program by the same name in multiple directories) or at the beginning (to be searched before all other directories). You can add multiple entries at the same time. PATH=$PATH:~/opt/bin:~/opt/node/bin or variations on the ordering work just fine. Don't put export at the beginning of the line as it has additional complications (see below under “Notes on shells other than bash”). If your PATH gets built by many different components, you might end up with duplicate entries. See How to add home directory path to be discovered by Unix which command? and Remove duplicate $PATH entries with awk command to avoid adding duplicates or remove them. Some distributions automatically put ~/bin in your PATH if it exists, by the way. Where to put it Put the line to modify PATH in ~/.profile , or in ~/.bash_profile if that's what you have. (If your login shell is zsh and not bash, put it in ~/.zprofile instead.) The profile file is read by login shells, so it will only take effect the next time you log in. (Some systems configure terminals to read a login shell; in that case you can start a new terminal window, but the setting will take effect only for programs started via a terminal, and how to set PATH for all programs depends on the system.) Note that ~/.bash_rc is not read by any program, and ~/.bashrc is the configuration file of interactive instances of bash. You should not define environment variables in ~/.bashrc . The right place to define environment variables such as PATH is ~/.profile (or ~/.bash_profile if you don't care about shells other than bash). See What's the difference between them and which one should I use? Don't put it in /etc/environment or ~/.pam_environment : these are not shell files, you can't use substitutions like $PATH in there. In these files, you can only override a variable, not add to it. Potential complications in some system scripts You don't need export if the variable is already in the environment: any change of the value of the variable is reflected in the environment.¹ PATH is pretty much always in the environment; all unix systems set it very early on (usually in the very first process, in fact). At login time, you can rely on PATH being already in the environment, and already containing some system directories. If you're writing a script that may be executed early while setting up some kind of virtual environment, you may need to ensure that PATH is non-empty and exported: if PATH is still unset, then something like PATH=$PATH:/some/directory would set PATH to :/some/directory , and the empty component at the beginning means the current directory (like .:/some/directory ). if [ -z "${PATH-}" ]; then export PATH=/usr/local/bin:/usr/bin:/bin; fi Notes on shells other than bash In bash, ksh and zsh, export is special syntax, and both PATH=~/opt/bin:$PATH and export PATH=~/opt/bin:$PATH do the right thing. In other Bourne/POSIX-style shells such as dash (which is /bin/sh on many systems), export is parsed as an ordinary command, which implies two differences: ~ is only parsed at the beginning of a word, except in assignments (see How to add home directory path to be discovered by Unix which command? for details); $PATH outside double quotes breaks if PATH contains whitespace or \[*? .
So in shells like dash, export PATH=~/opt/bin:$PATH sets PATH to the literal string ~/opt/bin: followed by the value of PATH up to the first space. PATH=~/opt/bin:$PATH (a bare assignment) doesn't require quotes and does the right thing. If you want to use export in a portable script, you need to write export PATH="$HOME/opt/bin:$PATH" , or PATH=~/opt/bin:$PATH; export PATH (or PATH=$HOME/opt/bin:$PATH; export PATH for portability to even the Bourne shell that didn't accept export var=value and didn't do tilde expansion). ¹ This wasn't true in Bourne shells (as in the actual Bourne shell, not modern POSIX-style shells), but you're highly unlikely to encounter such old shells these days.
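Tying together the notes above about duplicate entries and where to put the line, here is a hedged sketch for ~/.profile that only prepends the directory when it is not already present; the directory name is just an example:

case ":$PATH:" in
  *":$HOME/opt/bin:"*) ;;                # already in PATH, do nothing
  *) PATH="$HOME/opt/bin:$PATH" ;;
esac
export PATH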
{ "source": [ "https://unix.stackexchange.com/questions/26047", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8115/" ] }
26,053
Given a directory of font files (TTF and OTF) I'd like to inspect each font and determine what style (regular, italic, bold, bold-italic) it is. Is there a command line tool for unix flavored operating systems that can do this? Or does anyone know how to extract the metadata from a TTF or OTF font file?
I think you're looking for otfinfo . There doesn't seem to be an option to get at the Subfamily directly, but you could do: otfinfo --info *.ttf | grep Subfamily Note that a number of the fonts I looked at use "Oblique" instead of "Italic".
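If you want each font labelled individually rather than grepping the whole batch, a small loop is enough. A hedged sketch; it assumes otfinfo --info prints Family: and Subfamily: lines, which is what it did for the fonts I checked:

for f in *.ttf *.otf; do
    [ -e "$f" ] || continue
    printf '== %s ==\n' "$f"
    otfinfo --info "$f" | grep -E '^(Family|Subfamily):'
done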
{ "source": [ "https://unix.stackexchange.com/questions/26053", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13017/" ] }
26,063
While reading up on how to set up grub , I came across an article claiming that I need to use one of the following two syntaxes, echo \(hd0,0\) >> /boot/grub/grub.conf or echo '(hd0,0)' >> /boot/grub/grub.conf because, at the command line, parentheses are interpreted in a special way. What is special about the parentheses? How are they interpreted?
Parentheses denote a subshell in bash. To quote the man bash page: (list) list is executed in a subshell environment (see COMMAND EXECUTION ENVIRONMENT below). Variable assignments and builtin commands that affect the shell's environment do not remain in effect after the command completes. The return status is the exit status of list. where a list is just a normal sequence of commands. This is actually quite portable and not specific to just bash though. The POSIX Shell Command Language spec has the following description for the (compound-list) syntax: Execute compound-list in a subshell environment; see Shell Execution Environment . Variable assignments and built-in commands that affect the environment shall not remain in effect after the list finishes.
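A tiny demonstration of the "do not remain in effect" part; this is only an illustrative sketch:

x=outer
( x=inner; cd /; echo "inside:  x=$x pwd=$PWD" )
echo "outside: x=$x pwd=$PWD"    # the assignment and the cd did not leak out of the subshell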
{ "source": [ "https://unix.stackexchange.com/questions/26063", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13023/" ] }
26,074
I'm going to be doing a fair amount of PHP work shortly, and I'm interested in learning RoR, so I installed Linux Mint 12 in my VirtualBox. The most frustrating aspect of the switch, so far, has been dealing with Linux permissions. It seems like I can't do anything useful (like, say, copy the Symfony2 tarball from my Downloads directory to my document root and extract it) without posing as the root via sudo. Is there an easy way to tell linux to give me unfettered access to certain directories without simply blowing open all of their permissions?
Two options come to my mind: Own the directory you want by using chown : sudo chown your_username directory (replace your_username with your username and directory with the directory you want.) The other thing you can do is work as root as long as you KNOW WHAT YOU ARE DOING . To use root do: sudo -s and then you can do anything without having to type sudo before every command.
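For the document-root case in the question, the chown approach usually needs to be recursive; the path below is only an example and may differ on your system:

sudo chown -R "$USER" /var/www/html    # take ownership of the docroot and everything under it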
{ "source": [ "https://unix.stackexchange.com/questions/26074", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13026/" ] }
26,082
So there are: https://www.scientificlinux.org/distributions/6x/61/ hashes, but not for the latest: SL-61-x86_64-2011-11-09-Install-DVD.iso Where can I find up-to-date hashes for it? [of course over HTTPS, not FTP or HTTP!!]
{ "source": [ "https://unix.stackexchange.com/questions/26082", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
26,086
I recently installed ratpoison as my first real foray into tiling window managers. The first problem I ran into is that the Ctrl-t keybinding used for ratpoison commands conflicts with those of other software, such as the new tab command in Mozilla Firefox (which is also Ctrl-t), or activating the menu in Debian's aptitude. In fact, it seems that even shortcuts with no apparent conflict are also affected. For instance, the Page Up/Down keys no longer scrolls, and Backspace can't bring me to the previous page in Firefox. I saw a sample .ratpoisonrc that changes the Ctrl-t key to "less" but what key is that on the keyboard?? Also, why do some other keys stop working (i.e. like Page Up Backspace, etc.)? Thanks.
{ "source": [ "https://unix.stackexchange.com/questions/26086", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1375/" ] }
26,133
I want two jobs to run sometime every day, serially, in exactly the order I specify. Will this crontab reliably do what I want? @daily job1 @daily job2 I'm assuming they run one after the other, but I was unable to find the answer by searching the Web or from any of these manpages: cron(1) , crontab(1) , crontab(5) . The crontab above obviously won't do what I want if cron runs things scheduled with @daily in parallel or in an unpredictable order. I know I can simply make one shell script to fire them off in order, I'm just curious how cron is supposed to work (and I'm too lazy to gather test data or read the source code). Cron is provided by the cron package. OS is Ubuntu 10.04 LTS (server).
After a quick glance at the source (in Debian squeeze, which I think is the same version), it does look like entries within a given file and with the same times are executed in order. For this purpose, @daily and 0 0 * * * are identical (in fact @daily is identical to 0 0 * * * in this cron). I would not rely on this across the board. It's possible that one day someone will decide that cron should run jobs in parallel, to take advantage of these 32-core CPUs that have 31 cores running idle. This might be done when implementing this 20-year old todo item encountered in the cron source: All of these should be flagged and load-limited; i.e., instead of @hourly meaning "0 * * * *" it should mean "close to the front of every hour but not 'til the system load is low". (…) (vix, jan90) It's very easy to write @daily job1; job2 here. If it's important that the jobs execute in order, make it a direct consequence of what you write. Additionally, making the order explicit removes the risk that a future administrator will reorder the lines thinking that it won't matter.
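If the ordering (and failure handling) should be explicit, it can be spelled out in the crontab entry itself. A hedged sketch with made-up job paths:

@daily /usr/local/bin/job1 && /usr/local/bin/job2    # job2 runs only if job1 succeeded
# or, to always run both in order regardless of job1's exit status:
# @daily /usr/local/bin/job1; /usr/local/bin/job2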
{ "source": [ "https://unix.stackexchange.com/questions/26133", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8383/" ] }
26,175
Say, I have a command command which prints huge number of lines to stdout: line1 line2 ..... lineN I want to save the output to disk, but not as a single file, but as a sequence of files each having 1000 lines of stdout: file0001.txt: ------------- line1 .... line1000 file0002.txt: ------------- line1001 .... line2000 etc I've tried to google the answer, but every time google points me to tee command, which is useless in this situation. Probably, I'm entering wrong queries.
Once you are done saving the file, you could always split it into multiple pieces based on the number of lines: split -l 1000 output_file or, even better, just try command | split -l 1000 - This will split the output stream into files of 1000 lines each (1000 lines is also the default when the -l option is omitted). The command below additionally lets you set a prefix for the file names that get generated as the output is split: command | split -l 1000 - small-
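If your split is the GNU implementation, a couple of extra options (hedged here, since they are GNU-specific) make the output names friendlier:

command | split -l 1000 -d --additional-suffix=.txt - part-
# produces part-00.txt, part-01.txt, ... instead of the default xaa, xab, ...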
{ "source": [ "https://unix.stackexchange.com/questions/26175", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13068/" ] }
26,182
I don't understand the need for an rsync server in daemon mode. What are the benefits from it if I can use rsync with SSH or telnet?
Many, but I will cite a few off the top of my head. What if ssh/rsh are not available on the remote server, or if they are broken by configuration or stricter network rules? Using rsh/ssh would still require rsync on the remote side (in either the sender or receiver role): the remote end has to fork the rsync binary locally and establish the connection with the rsync process running on the local side. rsh/ssh would merely provide a connection tunnel; as far as rsync is concerned, rsync is communicating with the other rsync process over the pipe(s). Having a daemon mode rsync process would make the server a true ftp look-alike server where some of the filesystems can be made available through rsync modules. Everything else can be avoided. Say I want to make available only /usr/local and /var for download and refuse any rsync client's request for other downloads. I can use discretion at the host level or at the filesystem (modules) level to allow either upload or download (read only). Can control host/user level access, authentication, authorization, logging and filesystem (structure) modules for download/upload specifically through a configuration file. The daemon ( rsync --daemon ) need not be restarted or HUPped every time a change is made to the configuration file. Can also control how many clients can connect to the rsync server process at a time. This is good, since I do not want my rsyncd server process to hog down the host completely over CPU or disk-based I/O operations. chroot functionality can be made available through the configuration for rsyncd in daemon mode. I can use this as a pretty neat security feature if I want to avoid clients connecting to my rsyncd for any of the files/filesystems that must be secured on the host and should not have outside access. I can outright deny some of the options used by the rsync client and refuse to honour them at the server end, such as not allowing the --delete option. Can have an option to run some commands/scripts before and after the rsync process. An example would be reporting and storing the rsync stats in post-transfer mode. These are some of them, but I am sure the expert users of rsync can throw more light on this.
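To make the module idea concrete, here is a hedged sketch of a minimal read-only /etc/rsyncd.conf ; all names, paths and the network range are examples:

uid = nobody
gid = nogroup
use chroot = yes
max connections = 4

[local]
    path = /usr/local
    comment = read-only export of /usr/local
    read only = yes
    hosts allow = 192.168.1.0/24

A client could then pull from it with something like rsync -av rsync://server/local/ /some/destination/ .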
{ "source": [ "https://unix.stackexchange.com/questions/26182", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12704/" ] }
26,205
Why does Unix time start at 1970-01-01? Why not 1971-01-01 or any other date?
I wouldn't have known the answer except google was there for me: From Here (needs free subscription): Linux is following the tradition set by Unix of counting time in seconds since its official "birthday," -- called "epoch" in computing terms -- which is Jan. 1, 1970. A more complete explanation can be found in this Wired News article . It explains that the early Unix engineers picked that date arbitrarily, because they needed to set a uniform date for the start of time, and New Year's Day, 1970, seemed most convenient.
{ "source": [ "https://unix.stackexchange.com/questions/26205", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10922/" ] }
26,235
I'd like to know if there is a way that I could cat file like php.ini and remove all lines starting with ; For example, if the file contained this: ; - Show all errors, except for notices ; ;error_reporting = E_ALL & ~E_NOTICE ; ; - Show only errors ; ;error_reporting = E_COMPILE_ERROR|E_ERROR|E_CORE_ERROR ; ; - Show all errors except for notices ; error_reporting = E_ALL & ~E_NOTICE and I ran the correct command cat | {remove comments command} , then I would end up with: error_reporting = E_ALL & ~E_NOTICE Note - I assumed that cat would be the best way to do this but I'm actually fine with the answer using another utility like awk , sed , egrep , etc.
You can use: sed -e '/^;/d' php.ini
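For completeness, two hedged variants of the same idea; grep can do the filtering as well, and sed can edit the file in place (the -i option is a GNU/BSD extension, so check your sed):

grep -v '^;' php.ini              # print every line that does not start with ;
sed -i.bak -e '/^;/d' php.ini     # edit in place, keeping a php.ini.bak backup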
{ "source": [ "https://unix.stackexchange.com/questions/26235", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
26,245
I have a lot of commands I routinely need to execute, often with the slightest variation. Right now I'm storing them all in .bash_history and use CTRL - R to access them, but I wonder if there's a better way. What I'm looking for: Easy to add a new command Easy to search and re-execute a wanted command Avoid unwanted commands in suggestions Unfortunately, bash_history is not so strong on the third demand: if I do a few cd and ls , it fills the history file quickly. I have recently learned about HIST_SIZE and that you can configure the history to avoid duplicates or certain commands, but before configuring all that, I wanted to make sure it is the best way.
I find the following readline commands very useful: history-search-backward and history-search-forward (be aware they are different from the usual reverse-search-history and forward-search-history , tied to Ctrl - R and Ctrl - S ). I have these commands bound to Ctrl - Up and Ctrl - Down by putting the following lines into ~/.inputrc : "\e[1;5A": history-search-backward "\e[1;5B": history-search-forward How they work: type a few characters of the beginning of the command, press Ctrl - Up and the most recent command starting with that prefix will be shown; press again to see the next older one, and so on. When you are satisfied, after possibly modifying the command, press Enter to execute it.
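If you would rather keep this in your shell startup file than in ~/.inputrc , bash's bind builtin accepts the same syntax. A hedged equivalent for ~/.bashrc (the escape sequences assume a terminal that sends \e[1;5A / \e[1;5B for Ctrl - Up / Ctrl - Down ):

bind '"\e[1;5A": history-search-backward'
bind '"\e[1;5B": history-search-forward'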
{ "source": [ "https://unix.stackexchange.com/questions/26245", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5586/" ] }
26,284
I've noticed that, if I add \n to a pattern for substituting using sed , it does not match. Example: $ cat > alpha.txt This is a test Please do not be alarmed $ sed -i'.original' 's/a test\nPlease do not/not a test\nBe/' alpha.txt $ diff alpha.txt{,.original} $ # No differences printed out How can I get this to work?
In the simplest calling of sed , it has one line of text in the pattern space, ie. 1 line of \n delimited text from the input. The single line in the pattern space has no \n ... That's why your regex is not finding anything. You can read multiple lines into the pattern-space and manipulate things surprisingly well, but with a more than normal effort.. Sed has a set of commands which allow this type of thing... Here is a link to a Command Summary for sed . It is the best one I've found, and got me rolling. However forget the "one-liner" idea once you start using sed's micro-commands. It is useful to lay it out like a structured program until you get the feel of it... It is surprisingly simple, and equally unusual. You could think of it as the "assembler language" of text editing. Summary: Use sed for simple things, and maybe a bit more, but in general, when it gets beyond working with a single line, most people prefer something else... I'll let someone else suggest something else.. I'm really not sure what the best choice would be (I'd use sed, but that's because I don't know perl well enough.) sed '/^a test$/{ $!{ N # append the next line when not on the last line s/^a test\nPlease do not$/not a test\nBe/ # now test for a successful substitution, otherwise #+ unpaired "a test" lines would be mis-handled t sub-yes # branch_on_substitute (goto label :sub-yes) :sub-not # a label (not essential; here to self document) # if no substituion, print only the first line P # pattern_first_line_print D # pattern_ltrunc(line+nl)_top/cycle :sub-yes # a label (the goto target of the 't' branch) # fall through to final auto-pattern_print (2 lines) } }' alpha.txt Here it is the same script, condensed into what is obviously harder to read and work with, but some would dubiously call a one-liner sed '/^a test$/{$!{N;s/^a test\nPlease do not$/not a test\nBe/;ty;P;D;:y}}' alpha.txt Here is my command "cheat-sheet" : # label = # line_number a # append_text_to_stdout_after_flush b # branch_unconditional c # range_change d # pattern_delete_top/cycle D # pattern_ltrunc(line+nl)_top/cycle g # pattern=hold G # pattern+=nl+hold h # hold=pattern H # hold+=nl+pattern i # insert_text_to_stdout_now l # pattern_list n # pattern_flush=nextline_continue N # pattern+=nl+nextline p # pattern_print P # pattern_first_line_print q # flush_quit r # append_file_to_stdout_after_flush s # substitute t # branch_on_substitute w # append_pattern_to_file_now x # swap_pattern_and_hold y # transform_chars
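If the multi-line sed machinery above feels heavy for the original two-line replacement, a common alternative is to slurp the whole file at once. A hedged sketch; the perl switches and the GNU sed label trick below are the usual idioms:

perl -0777 -pi.original -e 's/a test\nPlease do not/not a test\nBe/' alpha.txt
sed -i.original ':a;N;$!ba; s/a test\nPlease do not/not a test\nBe/' alpha.txt    # GNU sed only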
{ "source": [ "https://unix.stackexchange.com/questions/26284", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2372/" ] }
26,364
The distribution is an Ubuntu server running the 2.6.35-30 Linux kernel. I would like to have a directory that sits completely in memory. Is this possible without root privileges?
Linux provides a tmpfs device which any user can use, /dev/shm . It is not mounted to a specific directory by default, but you can still use it as one. Simply create a directory in /dev/shm and then symlink it to wherever you want. You can give the created directory any permissions you choose, so that other users can't access it. This is a RAM-backed device, so what's there is in memory by default. You can create any directories you need inside /dev/shm . Naturally, files placed here will not survive a reboot, and if your machine starts swapping, /dev/shm won't help you. The Solaris parallel to /dev/shm is /tmp , which is a "swap" type partition, and also memory based. As with /dev/shm , arbitrary users may create files in /tmp on Solaris. OpenBSD has the capability to use a memory based mount as well, but does not have one available by default. The mount_mfs command is available to the super user. I'm not sure about other *BSDs.
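A hedged sketch of the suggested setup; the directory and link names are just examples:

mkdir /dev/shm/"$USER"-work         # no root needed: /dev/shm is world-writable, like /tmp
chmod 700 /dev/shm/"$USER"-work     # keep other users out
ln -s /dev/shm/"$USER"-work ~/ramdir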
{ "source": [ "https://unix.stackexchange.com/questions/26364", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9495/" ] }
26,501
What's the difference between patch -p0 and patch -p1 ? Is there any difference at all?
The most common way to create a patch is to run the diff command or some version control's built-in diff -like command. Sometimes, you're just comparing two files, and you run diff like this: diff -u version_by_alice.txt version_by_bob.txt >alice_to_bob.patch Then you get a patch that contains changes for one file and doesn't contain a file name at all. When you apply that patch, you need to specify which file you want to apply it to: patch <alice_to_bob.patch version2_by_alice.txt Often, you're comparing two versions of a whole multi-file project contained in a directory. A typical invocation of diff looks like this: diff -ru old_version new_version >some.patch Then the patch contains file names, given in header lines like diff -ru old_version/dir/file new_version/dir/file . You need to tell patch to strip the prefix ( old_version or new_version ) from the file name. That's what -p1 means: strip one level of directory. Sometimes, the header lines in the patch contain the file name directly with no lead-up. This is common with version control systems; for example cvs diff produces header lines that look like diff -r1.42 foo . Then there is no prefix to strip, so you must specify -p0 . In the special case when there are no subdirectories in the trees that you're comparing, no -p option is necessary: patch will discard all the directory part of the file names. But most of the time, you do need either -p0 or -p1 , depending on how the patch was produced.
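As a practical rule of thumb (hedged, since patch implementations differ slightly): look at the --- / +++ header lines and count how many leading path components are not part of the tree you are patching. For a diff -ru old_version new_version patch applied from inside the tree:

cd new_version
patch -p1 --dry-run < ../some.patch    # GNU patch: preview what would happen
patch -p1 < ../some.patch              # strip the leading old_version/ or new_version/ component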
{ "source": [ "https://unix.stackexchange.com/questions/26501", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7768/" ] }
26,534
[USER@SERVER ~] sleep 3 & [1] 5232 [USER@SERVER ~] [1]+ Done sleep 3 [USER@SERVER ~] How do I /dev/null these two messages?: [1] 5232 [1]+ Done sleep 3 p.s.: so I need the output of the process, but not the mentioned two lines!
It's not the program output, it's some useful shell information. Anyway, those messages can be hidden by using a subshell and output redirection: ( sleep 3 & ) > /dev/null 2>&1
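One caveat worth hedging: the > /dev/null 2>&1 part also discards the command's own output. The job-control lines come from the interactive shell's job table, so the subshell alone is enough to silence them while keeping the output you care about:

( some_command & )    # no "[1] 5232" / "Done" lines; stdout and stderr still go to the terminal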
{ "source": [ "https://unix.stackexchange.com/questions/26534", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
26,548
How can I write all the scrollback in a tmux session to a file? capture-pane can grab the current screen, but not the entire scrollback.
For those looking for a simple answer: Use prefix + : , then type in capture-pane -S -3000 + Return . (Replace -3000 with however many lines you'd like to save, or with - for all lines.) This copies those lines into a buffer. Then, to save the buffer to a file, just use prefix + : again, and type in save-buffer filename.txt + return . (By default Prefix is Ctrl + B .)
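If you would rather do it non-interactively (from a script or another shell), the same thing fits on one line. A hedged sketch; the target pane and file name are examples, and -p (print to stdout) needs a reasonably recent tmux:

tmux capture-pane -p -S -3000 -t mysession:0.0 > scrollback.txt    # use -S - to start from the very beginning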
{ "source": [ "https://unix.stackexchange.com/questions/26548", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11252/" ] }
26,576
I know that I could delete the last three chars with: echo -ne '\b\b\b' But how can I delete a full line? I mean I don't want to use: echo -ne '\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b' ...etc... to delete a long line.
You can use \b or \r to move the cursor back and then overwrite the printed character with a new character. Note that neither \b nor \r deletes the printed characters. It just moves the cursor back. \b moves the cursor back one character and \r moves the cursor to the beginning of the line. Example: both echo -e 'foooo\b\b\b\b\bbar' and echo -e 'foooo\rbar' will print: baroo If you want the characters deleted then you have to use the following workaround: echo -e 'fooooo\r \rbar' output: bar Excerpt from man echo : If -e is in effect, the following sequences are recognized: \0NNN the character whose ASCII code is NNN (octal) \\ backslash \a alert (BEL) \b backspace \c produce no further output \f form feed \n new line \r carriage return \t horizontal tab \v vertical tab NOTE: your shell may have its own version of echo, which usually super‐ sedes the version described here. Please refer to your shell's docu‐ mentation for details about the options it supports.
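An alternative that avoids padding with spaces is the terminal's own "erase to end of line" control sequence. A hedged sketch (assumes an ANSI/VT100-compatible terminal; printf is also more portable than echo -e ):

printf 'fooooo\r\033[Kbar\n'    # \r returns to column 0, ESC [ K erases from the cursor to the end of the line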
{ "source": [ "https://unix.stackexchange.com/questions/26576", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
26,577
Possible Duplicate: Does anybody here have experience in automating some tasks in web applications using curl? There are a number of GUI-based tools to test Web Services (e.g. soapUI). Is there any other command-line-driven tool that can be used to test a web service? Not just the connectivity, but also to capture and compare the result returned by a web service.
{ "source": [ "https://unix.stackexchange.com/questions/26577", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7003/" ] }
26,595
I have a text file thousands of lines long (roughly 148,000) that consists of a lot of sequences like this: b 29. b 52. c 84. c 83. c 94. c 93. c 61. b 38. c 81. c 92. c 28. c 37. c 27. ... and since the file is so large, I want to be able to search for patterns like this (non-functional one-liner): grep "b\ 34.\nc53.\nb\ 54.\na\ 45.\nd\ 44.\nd\ 63.\nd\ 64.\n" filename It seems like awk is a good choice. How can I do that, and print line numbers for matches too?
{ "source": [ "https://unix.stackexchange.com/questions/26595", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1389/" ] }
26,598
I had a problem (new to me) last week. I have a ext4 (Fedora 15) filesystem. The application that runs on the server suddenly stopped. I couldn't find the problem at first look. df showed 50% available space. After searching for about an hour I saw a forum post where the guy used df -i . The option looks for inodes usage. The system was out of inodes, a simple problem that I didn't realize. The partition had only 3.2M inodes. Now, my questions are: Can I make the system have more inodes? Should/can it be set when formatting the disk? With the 3.2M inodes, how many files could I have?
It seems that you have a lot more files than would normally be expected. I don't know of a way to change the inode table size dynamically. I'm afraid you need to back up your data, create a new filesystem, and restore your data. To create a new filesystem with such a large inode table, you need to use the '-N' option of mke2fs(8). I'd recommend using the '-n' option first (which does not create the fs, but displays the useful information) so that you can see the estimated number of inodes. Then, if you need to, use '-N' to create your filesystem with a specific number of inodes.
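A hedged sketch of that sequence; the device name is a placeholder, and the second command destroys whatever is currently on it:

mke2fs -n -t ext4 /dev/sdXN             # dry run: shows what would be used, including the inode count
mke2fs -N 20000000 -t ext4 /dev/sdXN    # recreate the filesystem with roughly 20 million inodes
df -i                                   # afterwards, verify the new inode capacity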
{ "source": [ "https://unix.stackexchange.com/questions/26598", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13239/" ] }
26,601
I wanted to know if there is any way of reading from two input files in a nested while loop one line at a time. For example, lets say I have two files FileA and FileB . FileA: [jaypal:~/Temp] cat filea this is File A line1 this is File A line2 this is File A line3 FileB: [jaypal:~/Temp] cat fileb this is File B line1 this is File B line2 this is File B line3 Current Sample Script: [jaypal:~/Temp] cat read.sh #!/bin/bash while read lineA do echo $lineA while read lineB do echo $lineB done < fileb done < filea Execution: [jaypal:~/Temp] ./read.sh this is File A line1 this is File B line1 this is File B line2 this is File B line3 this is File A line2 this is File B line1 this is File B line2 this is File B line3 this is File A line3 this is File B line1 this is File B line2 this is File B line3 Problem and desired output: This loops over FileB completely for each line in FileA. I tried using continue, break, exit but none of them are meant for achieving the output I am looking for. I would like the script to read just one line from File A and then one line from FileB and exit the loop and continue with second line of File A and second line of File B. Something similar to the following script - [jaypal:~/Temp] cat read1.sh #!/bin/bash count=1 while read lineA do echo $lineA lineB=`sed -n "$count"p fileb` echo $lineB count=`expr $count + 1` done < filea [jaypal:~/Temp] ./read1.sh this is File A line1 this is File B line1 this is File A line2 this is File B line2 this is File A line3 this is File B line3 Is this possible to achieve with while loop?
If you can guarantee that some character will never occur in the first file then you can use paste. For example you know for sure that @ will never occur: paste -d@ file1 file2 | while IFS="@" read -r f1 f2 do printf 'f1: %s\n' "$f1" printf 'f2: %s\n' "$f2" done Note that it is enough if the character is guaranteed to not occur in the first file. This is because read will ignore IFS when filling the last variable. So even if @ occurs in the second file it will not be split. Example using some bash features for arguably cleaner code and paste using default delimiter tab: while IFS=$'\t' read -r f1 f2 do printf 'f1: %s\n' "$f1" printf 'f2: %s\n' "$f2" done < <(paste file1 file2) Bash features used: ansi c string ( $'\t' ) and process substitution ( <(...) ) to avoid the while loop in a subshell problem . If you cannot be certain that any character will never occur in both files then you can use two file descriptors . while true do read -r f1 <&3 || break read -r f2 <&4 || break printf 'f1: %s\n' "$f1" printf 'f2: %s\n' "$f2" done 3<file1 4<file2 Not tested much. Might break on empty lines. File descriptors number 0, 1, and 2 are already used for stdin, stdout, and stderr, respectively. File descriptors from 3 and up are (usually) free. The bash manual warns from using file descriptors greater than 9, because they are "used internally". Note that open file descriptors are inherited to shell functions and external programs. Functions and programs inheriting an open file descriptor can read from (and write to) the file descriptor. You should take care to close all file descriptors which are not required before calling a function or external program. Here is the same program as above with the actual work (the printing) separated from the meta-work (reading line by line from two files in parallel). work() { printf 'f1: %s\n' "$1" printf 'f2: %s\n' "$2" } while true do read -r f1 <&3 || break read -r f2 <&4 || break work "$f1" "$f2" done 3<file1 4<file2 Now we pretend that we have no control over the work code and that code, for whatever reason, tries to read from file descriptor 3. unknowncode() { printf 'f1: %s\n' "$1" printf 'f2: %s\n' "$2" read -r yoink <&3 && printf 'yoink: %s\n' "$yoink" } while true do read -r f1 <&3 || break read -r f2 <&4 || break unknowncode "$f1" "$f2" done 3<file1 4<file2 Here is an example output. Note that the second line from the first file is "stolen" from the loop. f1: file1 line1 f2: file2 line1 yoink: file1 line2 f1: file1 line3 f2: file2 line2 Here is how you should close the file descriptors before calling external code (or any code for that matter). while true do read -r f1 <&3 || break read -r f2 <&4 || break # this will close fd3 and fd4 before executing anycode anycode "$f1" "$f2" 3<&- 4<&- # note that fd3 and fd4 are still open in the loop done 3<file1 4<file2
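A slightly more compact variant of the two-descriptor loop, hedged as bash-specific ( read -u reads from a numbered descriptor, and the loop stops as soon as either file runs out of lines):

while read -r -u 3 f1 && read -r -u 4 f2; do
    printf 'f1: %s\nf2: %s\n' "$f1" "$f2"
done 3<file1 4<file2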
{ "source": [ "https://unix.stackexchange.com/questions/26601", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11209/" ] }
26,645
I've been working on *nix for a few years now, and one of the things I just can't get used to is octal permissions in code. Is there some other reason than line length to prefer chmod 644 ... over chmod u=rw,go=r ... ? PS: I'm not looking for an explanation of octal permissions. I know how they work, and it's well explained in the manual. I'm asking why octal seems to be preferred over the more human-readable form.
Using the octal codes has two advantages I can think of, neither of which is that huge: They're shorter, easier to type. A few things only understand them, and if you routinely use them you'll not be scratching your head (or running to documentation) when you run into one. E.g., you have to use octal for chmod in Perl or C. Sometimes really simple utilities won't handle the "friendly" versions; especially in non-GNU userlands. Further, some utilities spit out octal. For example, if you run umask to see what your current umask is, it'll spit it out in octal (though in bash, umask -S does symbolic). So, in short, I'd say the only reason to prefer them is to type fewer characters, but that even if you elect not to use them, you should know how they map so that you can figure out an octal code if you run into one of the things that only does octal. But you don't need to immediately know that 5 maps to rx , you only need to be able to figure that out.
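To make the mapping concrete, these two commands set identical permissions, and the octal form is the one the programmatic interfaces expect:

chmod 644 file           # octal: owner rw, group r, others r
chmod u=rw,go=r file     # symbolic: the same permissions
# in Perl, for instance, the mode must be a number:  chmod 0644, "file";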
{ "source": [ "https://unix.stackexchange.com/questions/26645", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3645/" ] }
26,654
In vim I use yy and p all the time to yank and paste lines. However, if I want to replace one line in multiple places in the file, I can't use yy p dd p because the dd deletes the line to the clipboard / register. While I should probably know how to use registers better, I feel like there is probably a way to p that replaces the current line.
By default, the paste commands use the " (“unnamed”) register . Effectively, any command that writes to a register also writes to the unnamed register, so yanks, deletes, and changes all affect it. This is why your yank-delete-paste sequence pastes the deleted text instead of the yanked text. The 0 register can help here. Any yank commands that do not specify a register put the yanked text in register 0 (in addition to " ). It is not affected by delete or change operations, so you can use it to paste a yanked line multiple times even if you do intermediate deletes or changes. yy : Registers 0 and " both now have the yanked line. Move to a line to replace. dd : Register " now has the deleted line, but register 0 still has the yanked line. "0P : Paste the originally yanked line from register 0 . Move to the next line to replace. dd"0P (same as above) (Due to the way cursor positioning works when replacing the last line of a buffer, you will want to use "0p instead of "0P .) This is very close to Bruce Ediger’s answer , except that you do not have to specify a register when initially yanking. Using one or more named registers can be very handy though if you need to (for example) replace some lines with AAA , but other lines with BBB (put AAA in register a , and BBB in register b (or leave one of them in register 0 ), then paste them accordingly). You can also paste from 0 in line-wise visual mode ( V ) to save a keystroke: V"0p . If you do not like having to type "0 , you might find a mapping more convenient: noremap <Leader>p "0p noremap <Leader>P "0P vnoremap <Leader>p "0p An alternate approach is to delete to the _ (“blackhole”) register . When you delete to it, the " register is not affected, so your yank-delete-paste sequence can still paste the yanked text from the unnamed register. yy : Register 0 and " both now have the yanked line. "_dd : No change to the registers. P : Paste the originally yanked text from register " . Again, you might find a mapping more convenient: noremap <Leader>d "_d
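For the AAA/BBB case mentioned above, a hedged sketch of the normal-mode keystrokes (register names and lines are examples):

"ayy    (yank the AAA line into register a)
"byy    (yank the BBB line into register b)
dd"aP   (on a target line: delete it and paste register a in its place)
dd"bP   (elsewhere: delete it and paste register b)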
{ "source": [ "https://unix.stackexchange.com/questions/26654", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
26,661
I'm getting confused about the role word splitting plays in zsh . I have not been exposed to this concept when programming in C, Python or MATLAB, and this has triggered my interest of why word splitting seems to be something specific to shell programming. I have read about word splitting on this and other sites before, but haven't found a clear explanation of the concept. Wikipedia has a definition of word splitting but does not seem to have references on how it applies to Unix shells. Here's an example of my confusion in zsh : In the Z Shell FAQ , I read the following: 3.1: Why does $var where var="foo bar" not do what I expect? In most Bourne-shell derivatives, multiple-word variables such as var="foo bar" are split into words when passed to a command or used in a for foo in $var loop. By default, zsh does not have that behaviour: the variable remains intact. (This is not a bug! See below.) The option SH_WORD_SPLIT exists to provide compatibility. However, in the Z Shell Manual , I read the following: SH_WORD_SPLIT (-y) <K> <S> Causes field splitting to be performed on unquoted parameter expansions. Note that this option has nothing to do with word splitting. (See Parameter Expansion.) Why does it say that SH_WORD_SPLIT has nothing to do with word splitting? Isn't word splitting precisely what this is all about?
Early shells had only a single data type: strings. But it is common to manipulate lists of strings, typically when passing multiple file names as arguments to a program. Another common use case for splitting is when a command outputs a list of results: the command's output is a string, but the desired data is a list of strings. To store a list of file names in a variable, you would put spaces between them. Then a shell script like this files="foo bar qux" myprogram $files called myprogram with three arguments, as the shell split the string $files into words. At the time, spaces in file names were either forbidden or widely considered Not Done. The Korn shell introduced arrays: you could store a list of strings in a variable. The Korn shell remained compatible with the then-established Bourne shell, so bare variable expansions kept undergoing word splitting, and using arrays required some syntactic overhead. You would write the snippet above files=(foo bar qux) myprogram "${files[@]}" Zsh had arrays from the start, and its author opted for a saner language design at the expense of backward compatibility. In zsh (under the default expansion rules) $var does not perfom word splitting; if you want to store a list of words in a variable, you are meant to use an array; and if you really want word splitting, you can write $=var . files=(foo bar qux) myprogram $files These days, spaces in file names are something you need to cope with, both because many users expect them to work and because many scripts are executed in security-sensitive contexts where an attacker may be in control of file names. So automatic word splitting is often a nuisance; hence my general advice to always use double quotes, i.e. write "$foo" , unless you understand why you need word splitting in a particular use case. (Note that bare variable expansions undergo globbing as well.) In my answer, I used the term “word splitting”. This is also called “field splitting”, because what constitutes a word (also called field) can be configured by setting the IFS variable: any character in IFS is considered a word separator, and a word is a sequence of characters that are not word separators. By default, IFS contains basic whitespace characters (ASCII space, tab and newline — not carriage return, unbreakable space, etc.). The zsh manual uses “word splitting” only to refer to a step in parsing shell code, which has nothing to do with the field/word splitting that is part of the expansion that happens after variable and command substitutions.
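A hedged zsh snippet that makes the difference visible ( print -l prints one argument per line):

var='foo bar'
print -l $var     # one argument: foo bar
print -l $=var    # two arguments: foo and bar (explicit splitting)
files=(foo bar)   # the idiomatic zsh way: a real array
print -l $files   # two arguments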
{ "source": [ "https://unix.stackexchange.com/questions/26661", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
26,666
I'm getting a lot of mail in my root user's mail account. This appears to be mostly reports and errors from things like cron scripts. I'm trying to work though and solve these things, possibly even have them be piped to some sort of "dashboard" - but until then how can I have these messages go to my personal e-mail account instead?
Any user, including root, can forward their local email by putting the forwarding address in a file called ~/.forward . You can have multiple addresses there, all on one line and separated by commas. If you want both local delivery and forwarding, put root@localhost as one of the addresses. The system administrator can define email aliases in the file /etc/aliases . This file contains lines like root: user@example.com, /root/mailbox ; the effect is the same as having user@example.com, /root/mailbox in ~root/.forward . You may need to run a program such as newaliases after changing /etc/aliases . Note that the workings of .forward and /etc/aliases depend on your MTA . Most MTAs implement the main features provided by the traditional sendmail, but check your MTA's documentation.
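A hedged, concrete version of both approaches, run as root; the address is a placeholder you would replace with your own:

echo 'me@example.com' > /root/.forward    # per-user: forward everything addressed to root
# or system-wide: add a line such as  root: me@example.com  to /etc/aliases, then rebuild the database
newaliases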
{ "source": [ "https://unix.stackexchange.com/questions/26666", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
26,671
I'm running Ubuntu 10.04. Is there a way I can get a daily report of who has logged onto the box, what time, and even - this may be asking too much - a report of the commands they used? This is a low-usage box and so I think this would be a nice way to see what activity is happening on it. Along these same lines, I heard it was not possible to track when things are done on the box via non-interactive shells, such as rsync or just remotely executing single commands via ssh. Is that true, or is there a way to log and track this as well?
The information of who logged in when is available in /var/log/auth.log (or other log files on other distributions). There are multiple log monitoring programs that can extract the information you configure as relevant. On any sane system, every user authentication is logged. To log every command invocation (but not their arguments), use process accounting , provided by the acct package on Ubuntu. If the accounting subsystem is up and running, then lastcomm shows information about finished processes.
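Hedged examples of the pieces mentioned above, using Ubuntu package and file names:

last | head                                # recent logins, read from /var/log/wtmp
grep 'sshd.*Accepted' /var/log/auth.log    # SSH authentications, including non-interactive ones (scp, rsync, single commands)
sudo apt-get install acct                  # process accounting: provides accton, sa and lastcomm
lastcomm | head                            # recently finished commands, once accounting is running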
{ "source": [ "https://unix.stackexchange.com/questions/26671", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
26,675
Being new to Linux administration, I'm a little confused about the following commands: useradd usermod groupadd groupmod I've just finished reading the user administration book in the Linux/Unix Administrator's handbook, but some things are still a little hazy. Basically useradd seems straight forward enough: useradd -c "David Hilbert" -d /home/math/hilbert -g faculty -G famous -m -s /bin/sh hilbert I can add "David Hilbert" with username hilbert , setting his default directory, shell, and groups. And I think that -g is his primary/default group and -G are his other groups. So these are my next questions: Would this command still work if the groups faculty and famous did not exist? Would it just create them? If not, what command do I use to create new groups? If I remove the user hilbert and there are no other users in those groups, will they still exist? Should I remove them? After I run the useradd command above, how do I remove David from the famous group, and reassign his primary group to hilbert which does not yet exist?
The usermod command will allow you to change a user's primary group, supplementary groups or a number of other attributes. The -g switch controls the primary group. For your other questions... If you specify a group, groupname , that does not exist during the useradd stage, you will receive an error - useradd: unknown group groupname The groupadd command creates new groups. The group will remain if you remove all users contained within. You don't necessarily have to remove the empty group. Create the hilbert group via groupadd hilbert . Then move David's primary group using usermod -g hilbert hilbert . (Please note that the first hilbert is the group name and the second hilbert is the username. This is important in cases where you are moving a user to a group with a different name.) You may be complicating things a bit here, though. In many Linux distributions, a simple useradd hilbert will create the user hilbert and a group of the same name as the primary group. I would add the supplementary groups, specified together, using the -G switch.
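A hedged walk-through of the whole sequence from the question; all names come from the example above:

sudo groupadd faculty                # groups must exist before useradd references them
sudo groupadd famous
sudo useradd -c "David Hilbert" -d /home/math/hilbert -g faculty -G famous -m -s /bin/sh hilbert
sudo gpasswd -d hilbert famous       # later: remove hilbert from the famous group
sudo groupadd hilbert
sudo usermod -g hilbert hilbert      # make the new hilbert group his primary group
id hilbert                           # verify the result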
{ "source": [ "https://unix.stackexchange.com/questions/26675", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
26,676
I think I understand the differences between an interactive, a login and a batch shell. See the following links for more help: What is the difference between a 'Login' and an 'Interactive' bash shell (from the sister site: Server Fault ) Difference between Login Shell and Non-Login Shell? 2.1: Types of shell: interactive and login shells (from A User's Guide to the Z-Shell ) My question is, how can I test with a command/condition if I am on an interactive, a login or a batch shell? I am looking for a command or condition (that returns true or false ) and that I could also place in an if statement. For example: if [[ condition ]] echo "This is a login shell" fi
I'm assuming a bash shell, or similar, since there is no shell listed in the tags. To check if you are in an interactive shell: [[ $- == *i* ]] && echo 'Interactive' || echo 'Not interactive' To check if you are in a login shell: shopt -q login_shell && echo 'Login shell' || echo 'Not login shell' By "batch", I assume you mean "not interactive", so the check for an interactive shell should suffice.
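For a plain POSIX sh (where shopt is not available), a rough sketch of the same two checks; note that the login-shell test here is only an approximation based on $0 starting with a dash:
case $- in *i*) echo 'Interactive' ;; *) echo 'Not interactive' ;; esac
case $0 in -*) echo 'Login shell' ;; *) echo 'Probably not a login shell' ;; esac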
{ "source": [ "https://unix.stackexchange.com/questions/26676", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
26,685
I saw some body split their window to 2x2, I just want to know how to do that? I know the 'split' command in Screen can only split the window horizontally.
The latest version of GNU screen allows you split the window vertically without any external patches. Here is one way to get it and use it: Checkout/clone/download the source Build it in an easy sequence of ./autogen.sh , ./configure , make and install . I didn't have any problems with dependencies on Mountain Lion. To get a vertical split use: C-a | // Create a split C-a <Tab> // Move to the split C-a c // Create a new window within the split I don't think this is a reason to switch to tmux any more like others have been suggesting.
{ "source": [ "https://unix.stackexchange.com/questions/26685", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13101/" ] }
26,688
What effect does setting a partition to active still have with common boot loaders. For instance if you install grub on the mbr, do you still need to specify the /boot as active, or does grub just select it anyway. I am asking because it say old bootloaders would load the volume boot record of the active partition. Is it still done this way with linux and grub, depending on where you install grub? This question was posted under superuser before in case any one was looking for it, but I felt it was more appropriate to move it here
{ "source": [ "https://unix.stackexchange.com/questions/26688", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11213/" ] }
26,695
I frequently edited the .bashrc file to export new environment variables. Rather than close the console and start a new one to refresh the env variables, is there a convenient way to refresh?
Within the same window, you can simply type bash to start a new one. This is equivalent to closing the window and re-opening a new one. Alternatively, you can type source ~/.bashrc to source the .bashrc file.
{ "source": [ "https://unix.stackexchange.com/questions/26695", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12218/" ] }
26,710
What's the difference between two dd commands that have different bs and count values, as long as they multiply to the same? For example: dd if=/dev/random of=aa bs=1G count=2 dd if=/dev/random of=aa bs=2G count=1
As far as the end result is concerned, they will do the same. The difference is in how dd would process data. And actually, both your examples are quite extreme in that regard: the bs parameter tells dd how much data it should buffer into the memory before outputting it. So, essentially, the first command would try to read 2GB in two chunks of 1GB, and the latter would try to read whole 2GB at one go and then output it to the aa file.
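As a practical aside (not something the question asked for): you rarely need either extreme, since a moderate block size keeps memory use small while still being fast. A sketch, using /dev/urandom so the read does not block waiting for entropy:
dd if=/dev/urandom of=aa bs=1M count=2048    # 2 GiB written in 1 MiB chunks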
{ "source": [ "https://unix.stackexchange.com/questions/26710", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13101/" ] }
26,715
I’m running a Debian Squeeze web server. I’ve installed memcached on it, and configured memcached to listen on a Unix domain socket (at /tmp/memcached.sock ), as it only needs to receive messages from the website, which lives on the same server. It seems to be working fine, but I’d also like to communicate with memcached via the shell, to check that it’s doing what I think it’s doing. memcached accepts messages via a simple ASCII protocol (if I understand correctly). If it was listening on TCP/IP, I could send messages to it via e.g. nc : $ echo "stats settings" | nc localhost 11211 But I can’t figure out how to send that text to the domain socket instead. On my laptop (which runs OS X Lion), both nc and telnet have options ( -U and -u respectively) to use domain sockets. However, on my Debian Squeeze web server, these options aren’t present.
With netcat-openbsd , there is a -U option. If you don't have it, you probably have netcat-traditional installed instead; I'd suggest switching. Example command: nc -U /var/run/socket
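For the memcached case in the question, a sketch of an interactive session (socket path taken from the question):
nc -U /tmp/memcached.sock
stats settings        # typed into the connection; memcached replies, ending with END
quit                  # closes the connection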
{ "source": [ "https://unix.stackexchange.com/questions/26715", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5646/" ] }
26,723
I want to print the odd-numbered and even-numbered lines from files. I found this shell script which makes use of echo. #!/bin/bash # Write a shell script that, given a file name as the argument will write # the even numbered line to a file with name evenfile and odd numbered lines # in a text file called oddfile. # ------------------------------------------------------------------------- # Copyright (c) 2001 nixCraft project <http://cyberciti.biz/fb/> # This script is licensed under GNU GPL version 2.0 or above # ------------------------------------------------------------------------- # This script is part of nixCraft shell script collection (NSSC) # Visit http://bash.cyberciti.biz/ for more information. # ------------------------------------------------------------------------- file=$1 counter=0 eout="evenfile.$$" # even file name oout="oddfile.$$" # odd file name if [ $# -eq 0 ] then echo "$(basename $0) file" exit 1 fi if [ ! -f $file ] then echo "$file not a file" exit 2 fi while read line do # find out odd or even line number isEvenNo=$( expr $counter % 2 ) if [ $isEvenNo -ne 0 ] then # even match echo $line >> $eout else # odd match echo $line >> $oout fi # increase counter by 1 (( counter ++ )) done < $file echo "Even file - $eout" echo "Odd file - $oout" But isn't there a way to do it in one line? Yes, use awk, I read. Even-numbered lines: awk 'NR % 2' filename odd-numbered lines: awk 'NR % 2 == 1' filename But it doesn't work for me. Both produce the same output, according to diff. When compared to the original file, they are both indeed half as long, and they both contain the odd-numbered lines. Am I doing something wrong?
I prefer to be POSIX compatible, whenever possible, so I thought I'd post this alternative method. I often use these to mangle text, before xargs pipelines. Print Even Numbered Lines, sed -n 'n;p' Print Odd Numbered Lines, sed -n 'p;n' Although I often use awk , it's overkill for this type of task.
{ "source": [ "https://unix.stackexchange.com/questions/26723", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1389/" ] }
26,728
I wish to prepend a timestamp to each line of output from a command. For example: foo bar baz would become [2011-12-13 12:20:38] foo [2011-12-13 12:21:32] bar [2011-12-13 12:22:20] baz ...where the time being prefixed is the time at which the line was printed. How can I achieve this?
moreutils includes ts which does this quite nicely: command | ts '[%Y-%m-%d %H:%M:%S]' It eliminates the need for a loop too, every line of output will have a timestamp put on it. $ echo -e "foo\nbar\nbaz" | ts '[%Y-%m-%d %H:%M:%S]' [2011-12-13 22:07:03] foo [2011-12-13 22:07:03] bar [2011-12-13 22:07:03] baz You want to know when that server came back up you restarted? Just run ping | ts , problem solved :D. Note : Use [%Y-%m-%d %H:%M:%.S] for microsecond precision.
{ "source": [ "https://unix.stackexchange.com/questions/26728", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
26,743
I would like to force GNU screen to reflow to the existing terminal width when I reattach a session. It seems to me this worked properly before I upgraded a machine to CentOS 6, but I cannot figure out how to restore it. ( TERM=xterm ) Whenever I reattach a session, regardless of state when I detached it, it launches at 80 columns, resizing my terminal (PuTTY, in this case) along with it. I'm launching & reattaching with: screen -aA -R <session> My .screenrc contains only the following, and a few irrelevant key bindings: term xterm defscrollback 10000 # status line at the bottom hardstatus on hardstatus alwayslastline hardstatus string "${-}%{.0c}%-w%{.y0}%f%n %t%{-}%+w %=%{..G}[%H] %{..Y} %D %M %d, %Y %c | Load: %l" caption splitonly "%{.yK}%3n t" caption string "%{.c0}%3n %t" vbell off # Fix fullscreen programs altscreen on
After you reattach, Ctrl-a F runs the "fit" command to resize the current window. If you reattach using the -A option it should resize all windows when you reattach. Are there others still attached to the screen session when you are attaching? For instance, are you having to use -x to reattach instead of -r? You can detach others when you reattach with "screen -D -r" instead of "screen -x", and I'd expect this to automatically refit windows.
{ "source": [ "https://unix.stackexchange.com/questions/26743", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4248/" ] }
26,784
The following few threads on this site and StackOverflow were helpful for understanding how IFS works: What is IFS in context of for looping? How to loop over the lines of a file Bash, read line by line from file, with IFS But I still have some short questions. I decided to ask them in the same post since I think it may help better future readers: Q1. IFS is typically discussed in the context of "field splitting". Is field splitting the same as word splitting ? Q2: The POSIX specification says : If the value of IFS is null, no field splitting shall be performed. Is setting IFS= the same as setting IFS to null? Is this what is meant by setting it to an empty string too? Q3: In the POSIX specification, I read the following: If IFS is not set, the shell shall behave as if the value of IFS is <space>, <tab> and <newline> Say I want to restore the default value of IFS . How do I do that? (more specifically, how do I refer to <tab> and <newline> ?) Q4: Finally, how would this code: while IFS= read -r line do echo $line done < /path_to_text_file behave if we we change the first line to while read -r line # Use the default IFS value or to: while IFS=' ' read -r line
Yes, they are the same. Yes. In bash, and similar shells, you could do something like IFS=$' \t\n' . Otherwise, you could insert the literal control codes by using [space] CTRL+V [tab] CTRL+V [enter] . If you are planning to do this, however, it's better to use another variable to temporarily store the old IFS value, and then restore it afterwards (or temporarily override it for one command by using the var=foo command syntax). The first code snippet will put the entire line read, verbatim, into $line , as there are no field separators to perform word splitting for. Bear in mind however that since many shells use cstrings to store strings, the first instance of a NUL may still cause the appearance of it being prematurely terminated. The second code snippet may not put an exact copy of the input into $line . For example, if there are multiple consecutive field separators, they will be made into a single instance of the first element. This is often recognised as loss of surrounding whitespace. The third code snippet will do the same as the second, except it will only split on a space (not the usual space, tab, or newline).
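A small sketch of the "override for one command" and "save and restore" patterns mentioned in point 3 (bash syntax):
line='foo:bar:baz'
IFS=: read -r a b c <<< "$line"   # IFS is changed only for this read
echo "$a | $b | $c"
old_IFS=$IFS
IFS=,
# ... commands that should split on commas ...
IFS=$old_IFS                      # restore the previous value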
{ "source": [ "https://unix.stackexchange.com/questions/26784", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
26,788
Say I have a shell variable $string that holds some text with several newlines, e.g.: string="this is a test" I would like to convert this string into a new string new_string where all line breaks are converted into spaces: new_string="this is a test" I tried: print $string | sed 's/\n/ /g' but it didn't work I'm also wondering if there is a way of doing this using perl -0777 's/\n/ /g' or maybe the command tr ?
If you only want to remove the new lines in the string, you don't need to use sed . You can use just $ echo "$string" | tr '\n' ' ' as others had pointed. But if you want to convert new lines into spaces on a file using sed , then you can use: $ sed -i ':a;N;$!ba;s/\n/\t/g' file_with_line_breaks or even awk : $ awk '$1=$1' ORS=' ' file_with_line_breaks > new_file_with_spaces
{ "source": [ "https://unix.stackexchange.com/questions/26788", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
26,790
The program is located in /usr/bin/mail . Upon execution, Version 8.1.2 01/15/2001 is shown. Entering list produces: Commands are: next, alias, print, type, Type, Print, visual, top, touch, preserve, delete, dp, dt, undelete, unset, mail, mbox, pipe, |, more, page, More, Page, unread, Unread, !, copy, chdir, cd, save, source, set, shell, version, group, write, from, file, folder, folders, ?, z, headers, help, =, Reply, Respond, reply, respond, edit, echo, quit, list, xit, exit, size, hold, if, else, endif, alternates, ignore, discard, retain, saveignore, savediscard, saveretain, core, #, inc, new Entering ? produces: Mail Command Description ------------------------- -------------------------------------------- t [message list] type message(s). n goto and type next message. e [message list] edit message(s). f [message list] give head lines of messages. d [message list] delete message(s). s [message list] <file> append message(s) to file. u [message list] undelete message(s). R [message list] reply to message sender(s). r [message list] reply to message sender(s) and all recipients. p [message list] print message list. pre [message list] make messages go back to /var/mail. m <recipient list> mail to specific recipient(s). q quit, saving unresolved messages in mbox. x quit, do not remove system mailbox. h print out active message headers. ! shell escape. | [msglist] command pipe message(s) to shell command. pi [msglist] command pipe message(s) to shell command. cd [directory] chdir to directory or home if none given fi <file> switch to file (%=system inbox, %user=user's system inbox). + searches in your folder directory for the file. set variable[=value] set Mail variable. Entering z shows the end of the list of messages - but that command is not presented in the ? help page. What program is this? Are there tutorials for its use? What are some common commands and helpful tricks for its use? How can the message list be navigated (the opposite of z ) or refreshed? Clarification : This question is about the interactive program and not the script-able command - i.e. the result of typing mail with no flags or parameters into a terminal.
This page describes the interactive command in detail, and is in fact a fairly thorough tutorial. Describes commands such as z and z- : If there is more than a screenful of messages, then z will show the next screenful, and z- will show the previous screenful.
{ "source": [ "https://unix.stackexchange.com/questions/26790", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
26,805
Say I have a folder with three files: foo1 foo2 bar 1. If I run list_of_files=$(print foo*) echo $list_of_files I get: foo1 foo2 2. If I run list_of_files=$(print bar*) echo $list_of_files I get: bar 3. However, if I run list_of_files=$(print other*) echo $list_of_files I get: zsh: no matches found: other* (the variable $list_of_files is empty though) Is there a way to ask zsh to not complain if it can't match a glob expansion? My goal is to use the mechanism above to silently collect a list of files that match a given glob pattern.
Turn on the null_glob option for your pattern with the N glob qualifier. list_of_files=(*(N)) If you're doing this on all the patterns in a script or function, turn on the null_glob option: setopt null_glob This answer has bash and ksh equivalents. Do not use print or command substitution! That generates a string consisting of the file names with spaces between them, instead of a list of strings. (See What is word splitting? Why is it important in shell programming? )
{ "source": [ "https://unix.stackexchange.com/questions/26805", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
26,826
Can less follow (by pressing F) a piped input (similarly to a file)? For a file that is being written to, the command less <file> will follow the file when pressing F. But if I have a command that pipes output directly into less, like this command | less pressing F will do nothing. So it looks like pipes cannot be followed like files can? Or maybe it has to do with command also writing to STDERR? The effect I'm trying to achieve is always see the latest output of the command: just like keeping PageDown pressed! A related remark holds for G (go to end): when piping directly to less, it won't work.
Pressing F or G makes less try to reach input EOF. If the input is a pipe, less hangs until the pipe is closed on the other side (and not "does nothing"). This can be worked around by saving the command output to a temporary file in the background, and then by using it as input for less : command > /tmp/x & less +F /tmp/x; kill %; rm /tmp/x There is no option to do this in less only; however, I admit it would be useful.
{ "source": [ "https://unix.stackexchange.com/questions/26826", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13335/" ] }
26,836
What concise command can I use to find all files that do NOT contain a text string? I tried this (using -v to invert grep's parameters) with no luck: find . -exec grep -v -l shared.php {} \; Someone said this would work: find . ! -exec grep -l shared.php {} \; But it does not seem to work for me. This page has this example: find ./logs -size +1c > t._tmp while read filename do grep -q "Process Complete" $filename if [ $? -ne 0 ] ; then echo $filename fi done < t._tmp rm -f t_tmp But that's cumbersome and not at all concise. ps: I know that grep -L * will do this, but how can I use the find command in combination with grep to excluded files is what I really want to know. pss: Also I'm not sure how to have grep include subdirectories with the grep -L * syntax, but I still want to know how to use it with find :)
Your find should work if you change -v -l (files that have any line not matching) to -L (files with no lines matching), but you could also use grep 's recursive ( -r ) option: grep -rL shared.php .
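If you specifically want find in the pipeline (as the question asks), a sketch that combines the two; grep -L prints the names of files with no matching lines:
find . -type f -exec grep -L shared.php {} +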
{ "source": [ "https://unix.stackexchange.com/questions/26836", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
26,887
I am trying to get all the processes listening for a network connection on Mac OS X. netstat does not have the -p option and I am trying with lsof lsof -i -sTCP:LISTEN gives me a fair list of listening processes but not all. I can for example telnet to port 10080 where I have a process listening for a connection but this is not shown in the output of lsof . What am I missing? $ telnet localhost 10080 Trying ::1... Connected to localhost. Escape character is '^]'. ^] telnet> Connection closed. but $ sudo lsof -n -i | grep 10080 $
sudo lsof -iTCP -sTCP:LISTEN sudo lsof -iTCP -sTCP:LISTEN -P sudo lsof -iTCP -sTCP:LISTEN -P -n sudo lsof -iTCP -sTCP:LISTEN -n All return the same 32 entries ( ... | wc -l ) on my heavily used Lion MBP. -P -n prevents lsof from doing name resolution, and it doesn't block. Missing either one of these, it can be very slow. For UDP: sudo lsof -iUDP -P -n | egrep -v '(127|::1)' . Without -n and -P , it takes a long time. Reminder: This does not include firewall settings.
{ "source": [ "https://unix.stackexchange.com/questions/26887", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10616/" ] }
26,924
I'm having an issue generating a public key that the openssl PEM_read_bio_RSA_PUBKEY() function can consume. I keep getting errors. Obviously I cannot simply use the ASCII string in the ssh-keygen <>.pub key file as it is in SSH file format or I perhaps SubjectPublicKeyInfo structure. Here's the key gen code: ssh-keygen -t rsa -b 1024 -C "Test Key" I found a converter in php on the web which will convert the contents of the public key into a base64 PEM ASCII string format. However the function still doesn't like it. The Openssl documentation states: “RSA_PUBKEY() function which process a public key using an EVP_PKEY structure” “RSA_PUBKEY functions also process an RSA public key using an RSA structure” How do I get my OpenSSH public key into either format that the OpenSSL function will consume it?
OK! So I walked into this thinking "Easy, I got this." Turns out there's a whole lot more to it than even I thought. The first issue is that (according to the man pages for OpenSSL, man 3 pem ), OpenSSL is expecting the RSA key to be in PKCS#1 format. Clearly, this isn't what ssh-keygen is working with. You have two options (from searching around). If you have OpenSSH v. 5.6 or later (I did not on my laptop), you can run this: ssh-keygen -f key.pub -e -m pem The longer method of doing this is to break apart your SSH key into its various components (the blog entry I found some of this in accuses OpenSSH of being "proprietary", I prefer to call it "unique") and then use an ASN1 library to swap things around. Fortunately for you, someone wrote the code to do this: https://gist.github.com/1024558
{ "source": [ "https://unix.stackexchange.com/questions/26924", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13387/" ] }
26,934
When I try to use sftp to transfer a directory containing files, I get an error message: skipping non-regular file directory_name The directory contains a couple of files and two subdirectories. What am I doing wrong?
sftp , like cp and scp , requires that when you copy a folder (and its contents, obviously), you have to explicitly tell it you want to transfer the folder recursively with the -r option. So, add -r to the command.
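With a reasonably recent OpenSSH (roughly 5.4 or newer), a sketch of both forms; treat the paths as placeholders:
sftp -r user@server:/remote/dir /local/destination
# or, inside an interactive sftp session:
get -r directory_name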
{ "source": [ "https://unix.stackexchange.com/questions/26934", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12997/" ] }
26,975
When we use clear command or Ctrl + L in terminal, it clears terminal but we can still scroll back to view the last used commands. Is there a way to completely clear the terminal?
You can use tput reset . Besides reset and tput reset you can use following shell script. #!/bin/sh echo -e \\033c This sends control characters Esc-C to the console which resets the terminal. Google Keywords: Linux Console Control Sequences man console_codes says: The sequence ESC c causes a terminal reset, which is what you want if the screen is all garbled. The oft-advised "echo ^V^O" will only make G0 current, but there is no guarantee that G0 points at table a). In some distributions there is a program reset(1) that just does "echo ^[c". If your terminfo entry for the console is correct (and has an entry rs1=\Ec), then "tput reset" will also work.
{ "source": [ "https://unix.stackexchange.com/questions/26975", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11906/" ] }
26,980
I believe that if there is any output from a cronjob it is mailed to the user who the job belongs to. I think you can also add something like [email protected] at the top of the cron file to change where the output is sent to. Can I set an option so that cron jobs system-wide will be emailed to root instead of to the user who runs them? (i.e. so that I don't have to set this in each user's cron file)
{ "source": [ "https://unix.stackexchange.com/questions/26980", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
27,005
Suppose I want to encrypt a file so that only I can read it, by knowing my SSH private key password. I am sharing a repo where I want to encrypt or obfuscate sensitive information. By that, I mean that the repo will contain the information but I will open it only in special cases. Suppose I am using SSH-agent, is there some easy way to encrypt the file for only me to open it later? I cannot see why I should use GPG for this, question here ; basically I know the password and I want to only decrypt the file by the same password as my SSH key. Is this possible?
I think your requirement is valid, but on the other hand it is also difficult, because you are mixing symmetric and asymmetric encryption. Please correct me if I'm wrong. Reasoning: The passphrase for your private key is to protect your private key and nothing else. This leads to the following situation: You want to use your private key to encrypt something that only you can decrypt. Your private key isn't intended for that, your public key is there to do that. Whatever you encrypt with your private key can be decrypted by your public key (signing), that's certainly not what you want. (Whatever gets encrypted by your public key can only be decrypted by your private key.) So you need to use your public key to encrypt your data, but for that, you don't need your private key passphrase for that. Only if you want to decrypt it you would need your private key and the passphrase. Conclusion: Basically you want to re-use your passphrase for symmetric encryption. The only program you would want to give your passphrase is ssh-agent and this program does not do encryption/decryption only with the passphrase. The passphrase is only there to unlock your private key and then forgotten. Recommendation: Use openssl enc or gpg -e --symmetric with passphrase-protected keyfiles for encryption. If you need to share the information, you can use the public key infrastucture of both programs to create a PKI/Web of Trust. With openssl, something like this: $ openssl enc -aes-256-ctr -in my.pdf -out mydata.enc and decryption something like $ openssl enc -aes-256-ctr -d -in mydata.enc -out mydecrypted.pdf Update: It is important to note that the above openssl commands do NOT prevent the data from being tampered with. A simple bit flip in the enc file will result in corrupted decrypted data as well. The above commands cannot detected this, you need to check this for instance with a good checksum like SHA-256. There are cryptographic ways to do this in an integrated way, this is called a HMAC (Hash-based Message Authentication Code).
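A sketch of the gpg variant and of a simple tamper check to pair with the openssl commands above (assumes gpg and sha256sum are available):
gpg --symmetric --cipher-algo AES256 -o mydata.gpg my.pdf    # prompts for a passphrase
gpg -d -o mydecrypted.pdf mydata.gpg                         # decrypt it again
sha256sum mydata.enc > mydata.enc.sha256                     # record a checksum of the openssl output
sha256sum -c mydata.enc.sha256                               # later: detect modification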
{ "source": [ "https://unix.stackexchange.com/questions/27005", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
27,013
Is it possible to easily format seconds as a human-readable time in bash? I don't want to format it as a date, but as the number of days/hours/minutes, etc...
You can use something like this: function displaytime { local T=$1 local D=$((T/60/60/24)) local H=$((T/60/60%24)) local M=$((T/60%60)) local S=$((T%60)) (( $D > 0 )) && printf '%d days ' $D (( $H > 0 )) && printf '%d hours ' $H (( $M > 0 )) && printf '%d minutes ' $M (( $D > 0 || $H > 0 || $M > 0 )) && printf 'and ' printf '%d seconds\n' $S } Examples: $ displaytime 11617 3 hours 13 minutes and 37 seconds $ displaytime 42 42 seconds $ displaytime 666 11 minutes and 6 seconds
{ "source": [ "https://unix.stackexchange.com/questions/27013", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9191/" ] }
27,017
Is there a command to set settings in sshd_config , instead of manually editing the file? I would prefer it to do this way, because it's easier to automate. Otherwise I'd have to grep config with my script.
{ "source": [ "https://unix.stackexchange.com/questions/27017", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6442/" ] }
27,027
I have a directory tree that I would like to shred with the Linux 'shred' utility. Unfortunately, shred has no -R option for recursive shredding. How can I shred an entire directory tree recursively?
Use the find command to execute shred recursively: find <dir> -type f -exec shred {} \;
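A couple of useful variations, as a sketch (flags from GNU shred; -u also removes each file after overwriting it, and + batches many files per shred invocation):
find <dir> -type f -exec shred -u {} +
find <dir> -depth -type d -exec rmdir {} +    # optionally clean up the now-empty directories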
{ "source": [ "https://unix.stackexchange.com/questions/27027", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3056/" ] }
27,054
I've created a bash script but when I try to execute it, I get #!/bin/bash no such file or directory I need to run the command: bash script.sh for it to work. How can I fix this?
This kind of message is usually due to a bogus shebang line, either an extra carriage return at the end of the first line or a BOM at the beginning of it. Run: $ head -1 yourscript | od -c and see how it ends. This is wrong: 0000000 # ! / b i n / b a s h \r \n This is wrong too: 0000000 357 273 277 # ! / b i n / b a s h \n This is correct: 0000000 # ! / b i n / b a s h \n Use dos2unix (or sed , tr , awk , perl , python …) to fix your script if this is the issue. Here is one that will remove both a BOM and trailing CRs: sed -i '1s/^.*#//;s/\r$//' brokenScript Note that the shell you are using to run the script will slightly affect the error messages that are displayed. Here are three scripts just showing their name ( echo $0 ) and having the following respective shebang lines: correctScript: 0000000 # ! / b i n / b a s h \n scriptWithBom: 0000000 357 273 277 # ! / b i n / b a s h \n scriptWithCRLF: 0000000 # ! / b i n / b a s h \r \n Under bash, running them will show these messages: $ ./correctScript ./correctScript $ ./scriptWithCRLF bash: ./scriptWithCRLF: /bin/bash^M: bad interpreter: No such file or directory $ ./scriptWithBom ./scriptWithBom: line 1: #!/bin/bash: No such file or directory ./scriptWithBom Running the bogus ones by explicitly calling the interpreter allows the CRLF script to run without any issue: $ bash ./scriptWithCRLF ./scriptWithCRLF $ bash ./scriptWithBom ./scriptWithBom: line 1: #!/bin/bash: No such file or directory ./scriptWithBom Here is the behavior observed under ksh : $ ./scriptWithCRLF ksh: ./scriptWithCRLF: not found [No such file or directory] $ ./scriptWithBom ./scriptWithBom[1]: #!/bin/bash: not found [No such file or directory] ./scriptWithBom and under dash : $ ./scriptWithCRLF dash: 2: ./scriptWithCRLF: not found $ ./scriptWithBom ./scriptWithBom: 1: ./scriptWithBom: #!/bin/bash: not found ./scriptWithBom
{ "source": [ "https://unix.stackexchange.com/questions/27054", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6184/" ] }
27,087
When ever I need to kill a background process I do ps -e | grep <process_name> Which prints something like this 1766 ? 00:00:13 conky , Then I use the process ID to kill it like so kill 1766 . Is there any way I can simplify this ? Make it quicker ? reduce the amount of typing ?
(TL,DR: pgrep , pkill ) Many unix variants come with the pgrep and its companion pkill : Solaris , Linux (part of the standard process utilities , may be absent from embedded Linux systems), FreeBSD , OpenBSD , NetBSD , … but only from MacPorts on OS X , not AIX , and only recently in HP-UX . The pgrep utility shows the process ID of processes matched by name, user and a few other criteria. The argument to pgrep is interpreted as a regexp that must match part of the process's executable's name (unless you pass an option to change this). If you call pkill instead of pgrep , the utility sends a signal instead of displaying the process IDs. Another similar utility is pidof . On Linux , it's provided by SysVinit or BusyBox (so you'll often find it on an embedded Linux system that doesn't have pgrep ); there are also ports on other unix variants. The pidof utility has fewer options, it mostly only matches whole executable file names. Its companion utility killall sends a signal to the matched programs¹. ¹ Beware that killall has a different meaning on Solaris and possibly other unix variants; do not type killall as root on Solaris.
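For the conky example from the question, the whole ps/grep/kill dance collapses to one command; a sketch (-x matches the exact process name, -f would match against the full command line instead):
pgrep -l conky      # check first: list matching PIDs and names
pkill -x conky      # send SIGTERM to processes named exactly "conky"
pkill -9 -x conky   # last resort: SIGKILL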
{ "source": [ "https://unix.stackexchange.com/questions/27087", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10611/" ] }
27,139
I want to run a script to simply change the current working directory: #!/bin/bash cd web/www/project But, after I run it, the current pwd remains unchanged! How can I do that?
It is an expected behavior. The script is run in a subshell, and cannot change the parent shell working directory. Its effects are lost when it finishes. To change the current shell's directory permanently you should use the source command, also aliased simply as . , which runs a script in the current shell environment instead of a sub shell. The following commands are identical: . script or source script
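Two common ways people package this, as a sketch (the function and alias names here are made up):
# in ~/.bashrc: a function runs in the current shell, so the cd sticks
proj() { cd ~/web/www/project || return; }
# or keep the script and source it through an alias
alias proj='. ~/bin/goto-project.sh'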
{ "source": [ "https://unix.stackexchange.com/questions/27139", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10881/" ] }
27,143
I have configured SSH to be on port 20000. When I try: svn co svn+ssh://server.com:20000/home/svn/proj1 proj1 --username jm I get svn: To better debug SSH connection problems, remove the -q option from 'ssh' in the [tunnels] section of your Subversion configuration file. svn: Network connection closed unexpectedly I think I need to tell SVN to use port 20000 also? I am on Ubuntu 11.10
You can define a new 'tunnel' in your Subversion configuration ( ~/.subversion/config ). Find the section [tunnels] there and define something like: [tunnels] foo = ssh -p 20000 Afterwards you can contact your repository via the URL svn+foo://server.com/home/svn/proj1 proj1 .
{ "source": [ "https://unix.stackexchange.com/questions/27143", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10247/" ] }
27,250
What does the letter S mean below? The file in question is a folder. I read here that an upper-case S can represent that the setgid bit is active for a binary executable . But this is a folder. Does it still mean that the setgid bit is activated for it? If so, what does that mean?
That means that any file dropped into the folder will take on the folder's owning group. For example: Suppose you have a folder called "shared" which belongs to user "intrpc" and group "users", and some other user drops a file into it. As a result, the file will belong to that user, but its group will be "users" (the folder's group), regardless of that user's primary group. On most systems, if a directory's set-group-ID bit is set, newly created subfiles inherit the same group as the directory, and newly created subdirectories inherit the set-group-ID bit of the parent directory. You can read about it here . Why is the letter uppercase (from the link you gave)? setgid has no effect if the group does not have execute permissions. setgid is represented with a lower-case "s" in the output of ls. In cases where it has no effect it is represented with an upper-case "S".
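To set or clear the bit yourself, a quick sketch:
chmod g+s shared/    # turn the setgid bit on for the directory
chmod g-s shared/    # turn it off again
chmod 2775 shared/   # numeric form: leading 2 is setgid, 775 is rwxrwxr-x
ls -ld shared/       # shows drwxrwsr-x (lower-case s) when group execute is also set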
{ "source": [ "https://unix.stackexchange.com/questions/27250", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
27,279
I deleted my /dev/null. How can I restore it?
mknod /dev/null c 1 3 chmod 666 /dev/null Use these command to create /dev/null or use null(4) manpage for further help.
{ "source": [ "https://unix.stackexchange.com/questions/27279", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
27,289
How can I run a cron command with existing environmental variables? If I am at a shell prompt I can type echo $ORACLE_HOME and get a path. This is one of my environmental variables that gets set in my ~/.profile . However, it seems that ~/.profile does not get loaded fron cron scripts and so my scripts fail because the $ORACLE_HOME variable is not set. In this question the author mentions creating a ~/.cronfile profile which sets up variables for cron, and then he does a workaround to load all his cron commands into scripts he keeps in his ~/Cron directory. A file like ~/.cronfile sounds like a good idea, but the rest of the answer seems a little cumbersome and I was hoping someone could tell me an easier way to get the same result. I suppose at the start of my scripts I could add something like source ~/.profile but that seems like it could be redundant. So how can I get make my cron scripts load the variables from my interactive-shell profile?
In the crontab, before you command, add . $HOME/.profile . For example: 0 5 * * * . $HOME/.profile; /path/to/command/to/run Cron knows nothing about your shell; it is started by the system, so it has a minimal environment. If you want anything, you need to have that brought in yourself.
{ "source": [ "https://unix.stackexchange.com/questions/27289", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
27,323
I'm sure this question has been asked again and again elsewhere (I did not find anything specific to CentOS vs RHEL in SE), but I would still like to ask and confirm a few specific points. I am well aware that CentOS removes all RH trademarks, logos, etc. and is based on the same codes with packages built by the community. Are the packages built for CentOS exactly the same? Will the contents of the packages and the behavior of the programs be identical to those found on RHEL? What is RHN other than a medium for license registration? What is it to CentOS? I'm an Ubuntu desktop user. Attended a RH299 course which did not really touch anything about the support aspect (i.e. RHN). Other than that I've no professional Linux knowledge or experience. EDIT I did read the CentOS 6.2 release notes , but I found the details unsatisfactory. The release notes mentions packages modified , removed or added to upstream. But it neither explains nor links to any document detailing what exactly is different in the modified packages. Granted the branding packages are self-explanatory, but it mentions packages like kernel , ntp , anaconda , etc. which have nothing to do with branding as far as I'm aware.
CentOS is very close to being RHEL without the branding and support. In particular, the library versions are the same, so binaries that work on one will work on the other. The administration tools are the same and configured in similar ways. However, there are a few differences, as the two distributions sometimes apply different minor patches. For example, in this question , it was apparent that RHEL 5 and CentOS 5 apply different rules to identify files under /etc/cron.d . In other words, at the level of your course, you can treat CentOS and RHEL as interchangeable. But if you needed to look up the precise behavior of a program in a corner of the man page, you may encounter differences.
{ "source": [ "https://unix.stackexchange.com/questions/27323", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8305/" ] }
27,350
Why is the chown command root-only? Why can't non-root users use chown to give away files they own?
Most unix systems prevent users from “giving away” files, that is, users may only run chown if they have the target user and group privileges. Since using chown requires owning the file or being root (users can never appropriate other users' files), only root can run chown to change a file's owner to another user. The reason for this restriction is that giving away a file to another user can allow bad things to happen in uncommon, but still important situations. For example: If a system has disk quotas enabled, Alice could create a world-writable file under a directory accessible only by her (so no one else could access that world-writable file), and then run chown to make that file owned by another user Bill. The file would then count under Bill's disk quota even though only Alice can use the file. If Alice gives away a file to Bill, there is no trace that Bill didn't create that file. This can be a problem if the file contains illegal or otherwise compromising data. Some programs require that their input file belongs to a particular user in order to authenticate a request (for example, the file contains some instructions that the program will perform on behalf of that user). This is usually not a secure design, because even if Bill created a file containing syntactically correct instructions, he might not have intended to execute them at this particular time. Nonetheless, allowing Alice to create a file with arbitrary content and have it taken as input from Bill can only make things worse.
{ "source": [ "https://unix.stackexchange.com/questions/27350", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13549/" ] }
27,351
I wanted to record a linux session so I could use it as documentation for a "how to install" guide. I found something on the internet that suggested that the script command would be good for this, and so I started it and ran through my installation. Of course, I didn't read closely enough to realize that the script command actually records keystrokes , so when I go to create my documentation it's full of lines that look like this: $ make test[K[K[K[Kinstall[1@s[1@u[1@d[1@o[1@ I know that I can use script replay to play the script back, but what I really want to do is run something like scriptreplay but pipe the list of commands that would get executed to a file (I don't want to actually run them). Is this possible? I know about the history command, which I probably should have used instead, but I don't have access to the session's history anymore.
{ "source": [ "https://unix.stackexchange.com/questions/27351", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13548/" ] }
27,362
I'd like to compress and package everything, including files and folders in current directory, into a single ZIP file on Ubuntu. What would be the most convenient command for this (and name of the tool needed to be installed if any)? Edit: What if I need to exclude one folder or several files?
Install zip and use zip -r foo.zip . You can use the flags -0 (none) to -9 (best) to change compressionrate Excluding files can be done via the -x flag. From the man-page: -x files --exclude files Explicitly exclude the specified files, as in: zip -r foo foo -x \*.o which will include the contents of foo in foo.zip while excluding all the files that end in .o. The backslash avoids the shell filename substitution, so that the name matching is performed by zip at all directory levels. Also possible: zip -r foo foo [email protected] which will include the contents of foo in foo.zip while excluding all the files that match the patterns in the file exclude.lst. The long option forms of the above are zip -r foo foo --exclude \*.o and zip -r foo foo --exclude @exclude.lst Multiple patterns can be specified, as in: zip -r foo foo -x \*.o \*.c If there is no space between -x and the pattern, just one value is assumed (no list): zip -r foo foo -x\*.o See -i for more on include and exclude.
{ "source": [ "https://unix.stackexchange.com/questions/27362", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8882/" ] }
27,419
Why can't I copy with scp when I'm using * characters in the path? scp SERVERNAME:/DIR/* . What configuration does SCP need in order to allow * in the path? UPDATE: the problem is not on server side; pscp is trying to use SCPv1, and that's why the error message:
You need to pass a literal escape to scp to avoid the remote machine treating * as a glob (notice that it is doubly quoted): scp 'SERVERNAME:/DIR/\*' .
{ "source": [ "https://unix.stackexchange.com/questions/27419", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
27,428
I came across the following command: sudo chown `id -u` /somedir and I wonder: what is the meaning of the ` symbol. I noticed for instance that while the command above works well, the one below does not: sudo chown 'id -u' /somedir
This is a backtick . A backtick is not a quotation sign. It has a very special meaning. Everything you type between backticks is evaluated (executed) by the shell before the main command (like chown in your examples), and the output of that execution is used by that command, just as if you'd type that output at that place in the command line. So, what sudo chown `id -u` /somedir effectively runs (depending on your user ID ) is: sudo chown 1000 /somedir \ \ \ \ \ \ \ `-- the second argument to "chown" (target directory) \ \ `-- your user ID, which is the output of "id -u" command \ `-- "chown" command (change ownership of file/directory) `-- the "run as root" command; everything after this is run with root privileges Have a look at this question to learn why, in many situations, it is not a good idea to use backticks. Btw, if you ever wanted to use a backtick literally, e.g. in a string, you can escape it by placing a backslash ( \ ) before it.
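As a side note, modern shells also support the $( ... ) form of command substitution, which does the same thing but nests and reads more easily; a sketch of the equivalent command:
sudo chown "$(id -u)" /somedir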
{ "source": [ "https://unix.stackexchange.com/questions/27428", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13571/" ] }
27,586
I know I can open multiple files with vim by doing something like vim 2011-12*.log , but how can I switch between files and close the files one at a time? Also, how can I tell the file name of the current file that I'm editing?
First of all, in vim you can enter : (colon) and then help help , ala :help for a list of self-help topics, including a short tutorial. Within the list of topics, move your cursor over the topic of interest and then press ctrl ] and that topic will be opened. A good place for you to start would be the topic |usr_07.txt| Editing more than one file Ok, on to your answer. After starting vim with a list of files, you can move to the next file by entering :next or :n for short. :wnext is short for write current changes and then move to next file; :wn is an abbreviation for :wnext . There's also an analogous :previous , :wprevious and :Next . (Note that :p is shorthand for :print . The shorthand for :previous is :prev or :N .) To see where you are in the file list, enter :args and the file currently being edited will appear in [] (brackets). Example: vim foo.txt bar.txt :args result: [foo.txt] bar.txt
{ "source": [ "https://unix.stackexchange.com/questions/27586", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
27,588
I regularly ssh to a centos 5 box. Somehow they keys are mapped so that control+d will log me out of my current shell. If I am sudo'ed to another use it puts me back to the previous user. If I am not sudo'ed it just disconnects me. How can I keep this from happening? I regularly use control+d to cancel out of the python interpreter and sometimes I accidentally press it more than once.
You're looking for the IGNOREEOF environment variable if you use bash : IGNOREEOF Controls the action of an interactive shell on receipt of an EOF character as the sole input. If set, the value is the number of consecutive EOF characters which must be typed as the first characters on an input line before bash exits. If the variable exists but does not have a numeric value, or has no value, the default value is 10. If it does not exist, EOF signifies the end of input to the shell. So export IGNOREEOF=42 and you'll have to press Ctrl+D forty-two times before it actually quits your shell. POSIX set has an -o ignoreeof setting too. So consult your shell's documentation to see if your shell has this (it should), and to check its exact semantics.
{ "source": [ "https://unix.stackexchange.com/questions/27588", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
27,594
I understand that if you want to modify who can use sudo and what they can do with it that you should use visudo . I know I'm not supposed to directly modify the /etc/sudoers file myself. What is it that visudo does that directly modifying the file doesn't do? What can go wrong?
visudo checks the file syntax before actually overwriting the sudoers file. If you use a plain editor, mess up the syntax, and save... sudo will (probably) stop working, and, since /etc/sudoers is only modifiable by root , you're stuck (unless you have another way of gaining root). Additionally it ensures that the edits will be one atomic operation. This locking is important if you need to ensure nobody else can mess up your carefully considered config changes. For editing other files as root besides /etc/sudoers there is the sudoedit command which also guard against such editing conflicts.
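Two related sketches: checking an existing sudoers file for syntax errors, and safely editing some other root-owned file (sudoedit needs to be permitted by your sudoers policy):
visudo -c                        # parse /etc/sudoers and any included files, report errors
sudoedit /etc/ssh/sshd_config    # edit a temporary copy as yourself, installed as root on save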
{ "source": [ "https://unix.stackexchange.com/questions/27594", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11683/" ] }
27,596
How can I tell the lpr command (cups) that my file is actually a pdf? lpr file.pdf won't print anything.
{ "source": [ "https://unix.stackexchange.com/questions/27596", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11975/" ] }
27,636
From a Linux SSH shell, type /etc/init.d/network restart to restart the network service. I expect my SSH connection to die since the network service goes down. But it doesn't. Very cool. But how does Linux achieve this? How does it keep my SSH connection alive across the service restart?
It does this by doing nothing special. The network restarts in less time than the TCP connection takes to time out, so the TCP connection survives the "outage" the same way it would survive any transient network outage. The only reason Windows doesn't do the same thing is because Windows specifically resets TCP connections when a network interface goes down. This is, at least arguably, a pretty boneheaded thing to do because TCP was specifically designed to survive transient network outages.
{ "source": [ "https://unix.stackexchange.com/questions/27636", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13661/" ] }
27,666
Sometimes you run a program from the terminal, say, lxpanel † . The terminal won't drop you back to the prompt, it'll hang. You can press Ctrl + C to get back to the prompt, but that will kill lxpanel . However, pressing Alt + F2 (which pops up a window to take a command) and running lxpanel works gracefully. Why is this? What is different between running a command from the terminal and from the 'run' window that appears when you press Alt + F2 ? † lxpanel here was just used as an example. I have experienced this with multiple programs
By default the terminal will run the program in the foreground, so you won't end up back at the shell until the program has finished. This is useful for programs that read from stdin and/or write to stdout -- you generally don't want many of them running at once. If you want a program to run in the background, you can start it like this: $ lxpanel & Or if it's already running, you can suspend it with Ctrl + Z and then run bg to move it into the background. Either way you will end up with a new shell prompt, but the program is still running and its output will appear in the terminal (so it can suddenly show up while you're in the middle of typing) Some programs (typically daemons) will fork a separate process when they start, and then let the main process immediately exit. This lets the program keep running without blocking your shell
{ "source": [ "https://unix.stackexchange.com/questions/27666", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13678/" ] }
27,734
Just as a curiosity; something went wrong with a Linux machine, making the root file system show up as "64Z". A few commands work, like top , df , and kill , but others like reboot come up with "command not found" (since it can't read the root filesystem), and chmod comes up with a segmentation fault. Is there any way to restart the system anyway, i.e. without the reboot program? I tried kill -PWR 1 (sending SIGPWR to init), but this didn't seem to do anything. It's mostly an academic curiosity. The labmate who was doing whatever large-database work that caused the failure will be physically restarting the machine soon.
Try to reboot with magic sysrq key: echo b > /proc/sysrq-trigger For more information read wiki or kernel documentation .
{ "source": [ "https://unix.stackexchange.com/questions/27734", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4272/" ] }
27,762
Is there a way to replace /etc configuration files from a package, overwriting my local changes? I've tried apt-get install --reinstall mypackage but it doesn't update the files. How can I do this?
A related serverfault question describes how to restore package conffiles if you've removed them, and requires that you track down the actual .deb file. All you need to do: Find the list of conffiles provided by the package: dpkg --status <package> (look under the Conffiles: section). Remove those conffiles yourself. Reinstall the package. If you've found the .deb file, dpkg -i --force-confmiss <package_deb>.deb Alternatively, passing the dpkg option via apt should work: apt-get install --reinstall -o Dpkg::Options::="--force-confmiss" <package>
{ "source": [ "https://unix.stackexchange.com/questions/27762", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
27,780
I am running a screen session and I'd like to change it's name. I know that when starting a new screen session I can use the '-S' option to give it a name. How do I change that name once the session has already started?
There is a screen command to do this. From the manual : Command: sessionname [ name ] (none) Rename the current session. Note that for screen -list the name shows up with the process-id prepended. If the argument name is omitted, the name of this session is displayed. Caution : The $STY environment variable still reflects the old name. This may result in confusion. The default is constructed from the tty and host names. To access the screen command line, use Prefix : , where Prefix is typically Ctrl-a . So you will most likely have to do so: Ctrl-a : sessionname [name]
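You can also rename a session from outside it by sending the command with -X; a sketch:
screen -S oldname -X sessionname newname
screen -ls                                  # confirm the new name in the session list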
{ "source": [ "https://unix.stackexchange.com/questions/27780", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/283057/" ] }
27,843
I've become pretty proficient with a number of bash shortcut keys that make my bash-ing faster: C-a/C-e, C-u, C-w, M-f/M-b, C-r etc. One common task that I haven't found a good shortcut for though is when I want to delete the last segment of a path: Say I have ls ~/projects/arcaneweb/libraries and I realize I actually meant ls ~/projects/arcaneweb/sources Is there a way to just delete libraries , saving a load of keystrokes?
A single shortcut: M-backspace ( Alt + Backspace ).
{ "source": [ "https://unix.stackexchange.com/questions/27843", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13759/" ] }
27,886
I want to list (or delete, or do some other operation) on certain files in a directory, like this: $ ls /opt/somedir/ aa bb cc aa.txt bb.txt cc.txt $ ls /opt/somedir/(aa|bb|cc) ## pseudo-bash :p aa bb cc How can I achieve this (without cd-ing to the directory first)?
Use curly braces for it: ls /opt/somedir/{aa,bb,cc} For more information read about brace expansion .
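Brace expansion is performed by the shell before the command runs, so you can preview the result with echo , and the braces combine with further suffixes:
echo /opt/somedir/{aa,bb,cc}         # /opt/somedir/aa /opt/somedir/bb /opt/somedir/cc
ls /opt/somedir/{aa,bb,cc}{,.txt}    # lists aa, aa.txt, bb, bb.txt, cc and cc.txt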
{ "source": [ "https://unix.stackexchange.com/questions/27886", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13769/" ] }
27,923
As part of the program I wrote, I constantly read and write data from files. I noticed that as part of doing so, I am inadvertently creating swap .swp files. What do you think is going on? What would cause swap files to appear if you had to reproduce the problem?
The .swp file is not a swap file in the OS sense. It is a state file. It keeps your changes since the last save (except the last 200 characters), buffers that you have saved, unsaved macros and the undo structure. You can read more in VIM's help: vim +help\ swap-file . If there is a crash (power failure, OS crash, etc.), then you can recover your changes using this swap-file. After saving the changes from the swap file to the original file, you will need to exit vim and remove the swap file yourself.
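For example, to recover after a crash and then clean up the stale swap file (the file name here is only an example):
vim -r              # list the swap files vim knows how to recover from
vim -r file.txt     # recover the unsaved changes for that file
rm .file.txt.swp    # remove the swap file once the recovered buffer is saved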
{ "source": [ "https://unix.stackexchange.com/questions/27923", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1651/" ] }
27,955
I tried to run an example java program using the following command line. However, I do not know what the trailing part < /dev/null & is used for. java -cp /home/weka.jar weka.classifiers.trees.J48 -t train_file >& log < /dev/null &
< /dev/null is used to instantly send EOF to the program, so that it doesn't wait for input ( /dev/null , the null device, is a special file that discards all data written to it, but reports that the write operation succeeded, and provides no data to any process that reads from it, yielding EOF immediately). & is a special type of command separator used to background the preceding process. Without knowing the program being called, I do not directly know why it is required to run it in this way.
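A common variant of the same idea, if you also want the program to keep running after the terminal is closed, is to add nohup (a sketch based on the command in the question):
nohup java -cp /home/weka.jar weka.classifiers.trees.J48 -t train_file > log 2>&1 < /dev/null &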
{ "source": [ "https://unix.stackexchange.com/questions/27955", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9502/" ] }
27,959
I have a "command" text file that issues a data file download command on each line. I send the command file to bash. However, a small percentage of the downloads fail. Here is the algorithm I use to find out what's missing: After downloading, I go back through the command file and check if each download file exists. If the download doesn't exist, I copy the command line into a new command file. I am left with a new command file for the remaining downloads. Here is the bash script I implemented the algorithm with:
#!/bin/bash
while read line
do
    for item in $line
    do
        if [[ $item == *out_fname* ]]; then
            splitline=(${item//=/ })
            target_file=${splitline[1]}
            if [ ! -f $target_file ]; then
                echo $line >> stillneed.txt
            fi
        fi
    done
done < "$@"
Question: This works well, but is there a better algorithm or implementation (maybe using something other than bash)? What I did was just have bash do what a human would have to do. But it seems Unix always has a better way of doing things...
< /dev/null is used to instantly send EOF to the program, so that it doesn't wait for input ( /dev/null , the null device, is a special file that discards all data written to it, but reports that the write operation succeeded, and provides no data to any process that reads from it, yielding EOF immediately). & is a special type of command separator used to background the preceding process. Without knowing the program being called, I do not directly know why it is required to run it in this way.
{ "source": [ "https://unix.stackexchange.com/questions/27959", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2953/" ] }
28,144
I'm aware that it is good to run yum update from time to time to keep my CentOS server updated, and I even see that there is a way to automate this update . My question is, is it necessary to restart the server after the update? Is it a good idea? What happens if I keep updating and never restart?
You don't have to restart the server unless you are getting a message (from yum) that explicitly encourages you to do so. But you can't use the new kernel that was updated until you restart the system (unless you are using something like Ksplice , which is a technology that switches the old kernel with the new one without the need for a reboot). So in the end, it's your decision if you want to reboot. I would suggest that unless there's a major security update for the kernel you shouldn't reboot during work hours. Otherwise, if the server is idle (and you don't need it for the next 30 minutes or so, because in some rare circumstances updates could interfere with the boot process), I would suggest you reboot it.
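On CentOS/RHEL, the yum-utils package ships a helper that lists processes still running with files an update has since replaced, which can help you decide whether a reboot (or just a service restart) is worth it:
yum install yum-utils
needs-restarting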
{ "source": [ "https://unix.stackexchange.com/questions/28144", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8146/" ] }
28,155
How can I use find to find all files that have a .xls or .csv extension? I have seen a -regex option but I don't know how to use it.
Why not simply use this: find -name "*.xls" -o -name "*.csv" You don't need regex for this. If you absolutely want to use regex simply use find -regex ".*\.\(xls\|csv\)"
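One caveat: if you add an explicit action such as -print or -exec , group the name tests first, otherwise the action attaches only to the last test:
find . \( -name "*.xls" -o -name "*.csv" \) -print
find . \( -name "*.xls" -o -name "*.csv" \) -exec ls -l {} +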
{ "source": [ "https://unix.stackexchange.com/questions/28155", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
28,158
Is there any tool that can get lines which file A contains, but file B doesn't? I could write a simple little script with, e.g., perl, but if something like that already exists, it will save me time from now on.
Yes. The standard grep tool for searching files for text strings can be used to subtract all the lines in one file from another. grep -F -x -v -f fileB fileA This works by using each line in fileB as a pattern ( -f fileB ) and treating it as a plain string to match (not a regular expression) ( -F ). You force the match to happen on the whole line ( -x ) and print out only the lines that don't match ( -v ). Therefore you are printing out the lines in fileA that don't contain the same data as any line in fileB. The downside of this solution is that it doesn't take line order into account and if your input has duplicate lines in different places you might not get what you expect. The solution to that is to use a real comparison tool such as diff . You could do this by creating a diff with enough context lines to span the whole file, then parsing it for just the lines that would be removed if converting file A to file B. (Note this command also removes the diff formatting after it gets the right lines.) diff -U $(wc -l < fileA) fileA fileB | sed -n 's/^-//p' > fileC
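If both files are sorted (or can be sorted), comm does the same subtraction and copes well with large inputs; a sketch using bash process substitution:
comm -23 <(sort fileA) <(sort fileB)    # lines that appear only in fileA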
{ "source": [ "https://unix.stackexchange.com/questions/28158", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11318/" ] }
28,181
I'd like to run a script if the Gnome session is locked and unlocked. Is there a way that I can intercept this and perform certain actions when the desktop is locked or unlocked?
Gnome-screensaver emits some signals on dbus when something happens. Here is the documentation (with some examples). You could write a script that runs: dbus-monitor --session "type='signal',interface='org.gnome.ScreenSaver'" and that does what you need anytime dbus-monitor prints a line about the screen being locked/unlocked. Here is a bash command to do what you need:
dbus-monitor --session "type='signal',interface='org.gnome.ScreenSaver'" |
  while read x; do
    case "$x" in
      *"boolean true"*)  echo SCREEN_LOCKED;;
      *"boolean false"*) echo SCREEN_UNLOCKED;;
    esac
  done
Just replace echo SCREEN_LOCKED and echo SCREEN_UNLOCKED with what you need.
{ "source": [ "https://unix.stackexchange.com/questions/28181", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
28,198
We are hosting an application on remote server. We need to test it with a limited network bandwidth (for users with bad Internet access). Can I limit my internet bandwidth? For instance: 128 KB per second. This question focuses on system-wide or container-wide solutions on Linux. See Limiting a specific shell's internet bandwidth usage for process- or session-specific solutions.
You can throttle the network bandwidth on the interface using the command called tc . The man page is available at http://man7.org/linux/man-pages/man8/tc.8.html For a simple script, try wondershaper . An example using tc: tc qdisc add dev eth0 root tbf rate 1024kbit latency 50ms burst 1540
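To inspect what is currently applied to the interface, and to remove the throttle again afterwards:
tc qdisc show dev eth0
tc qdisc del dev eth0 root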
{ "source": [ "https://unix.stackexchange.com/questions/28198", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13905/" ] }
28,363
When I ls -la , it prints many attributes. Something like this: -rwSrwSr-- 1 www-data www-data 45 2012-01-04 05:17 README Shamefully, I have to confess I don't know the exact meaning of each attributes. For example, what's the meaning of big S in the string -rwSrwSr-- ? What's the following 1 ? I know others roughly.
The documentation of the ls command answers these questions. On most unix variants, look up the ls man page ( man ls or online). On Linux, look up the Info documentation ( info ls ) or online . The letter s denotes that the setuid (or setgid, depending on the column) bit is set. When an executable is setuid, it runs as the user who owns the executable file instead of the user who invoked the program. The letter s replaces the letter x . It's possible for a file to be setuid but not executable; this is denoted by S , where the capital S alerts you that this setting is probably wrong because the setuid bit is (almost always) useless if the file is not executable. When a directory has setuid (or setgid) permissions, any files created in that directory will be owned by the user (or group) matching the owner (or group) of the directory. The number after the permissions is the hard link count. A hard link is a path to a file (a name, in other words). Most files have a single path, but you can make more with the ln command. (This is different from symbolic links: a symbolic link says “oh, actually, this file is elsewhere, go to <location>”.) Directories have N+2 hard links where N is the number of subdirectories, because they can be accessed from their parent, from themselves (through the . entry), and from each subdirectory (through the .. entry).
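As a small illustration of the setgid-on-directory behaviour (the directory and group names are only examples):
mkdir /srv/shared
chgrp www-data /srv/shared
chmod 2775 /srv/shared       # the leading 2 sets the setgid bit
ls -ld /srv/shared           # permissions show as drwxrwsr-x
touch /srv/shared/newfile    # newfile is created group-owned by www-data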
{ "source": [ "https://unix.stackexchange.com/questions/28363", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5056/" ] }
28,366
I have recently installed Arch Linux and GNOME 3 on my new laptop and have a problem whereby nm-applet is not showing in gnome-panel , unless I manually restart the NetworkManager daemon.
The documentation of the ls command answers these questions. On most unix variants, look up the ls man page ( man ls or online). On Linux, look up the Info documentation ( info ls ) or online . The letter s denotes that the setuid (or setgid, depending on the column) bit is set. When an executable is setuid, it runs as the user who owns the executable file instead of the user who invoked the program. The letter s replaces the letter x . It's possible for a file to be setuid but not executable; this is denoted by S , where the capital S alerts you that this setting is probably wrong because the setuid bit is (almost always) useless if the file is not executable. When a directory has setuid (or setgid) permissions, any files created in that directory will be owned by the user (or group) matching the owner (or group) of the directory. The number after the permissions is the hard link count. A hard link is a path to a file (a name, in other words). Most files have a single path, but you can make more with the ln command. (This is different from symbolic links: a symbolic link says “oh, actually, this file is elsewhere, go to <location>”.) Directories have N+2 hard links where N is the number of subdirectories, because they can be accessed from their parent, from themselves (through the . entry), and from each subdirectory (through the .. entry).
{ "source": [ "https://unix.stackexchange.com/questions/28366", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13937/" ] }
28,384
Say I have process 1 and process 2 . Both have a file descriptor corresponding to the integer 4. In each process however the file descriptor 4 points to a totally different file in the Open File Table of the kernel: How is that possible? Isn't a file descriptor supposed to be the index to a record in the Open File Table?
The file descriptor, i.e. the 4 in your example, is the index into the process-specific file descriptor table , not the open file table. The file descriptor entry itself contains an index to an entry in the kernel's global open file table, as well as file descriptor flags.
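On Linux you can look at a process's private descriptor table under /proc ; a quick sketch in a shell:
exec 4> /tmp/example.log    # open fd 4 in this shell
ls -l /proc/$$/fd           # fd 4 is a symlink to /tmp/example.log here,
                            # while fd 4 in another process may point elsewhere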
{ "source": [ "https://unix.stackexchange.com/questions/28384", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12217/" ] }
28,403
I have learned to use tar without '-' for options, like tar cvfz dir.tar.gz Directory/ but I recently came accross the slightly different tar -czvf syntax (I think the 'f' must be the last option in this case). Both work on linux and Mac OS. Is there a recommended syntax, with ou without '-' which is more portable accross unix flavors ?
tar is one of those ancient commands from the days when option syntax hadn't been standardized. Because all useful invocations of tar require specifying an operation before providing any file name, most tar implementations interpret their first argument as an option even if it doesn't begin with a - . Most current implementations accept a - ; the only exception that I'm aware of is Minix . Older versions of POSIX and Single Unix included a tar command with no - before the operation specifier. Single Unix v2 had both traditional archivers cpio and tar , but very few flags could be standardized because existing implementations were too different, so the standards introduced a new command, pax , which is the only standard archiver since Single Unix v3 . If you want standard compliance, use pax , but beware that many Linux distributions don't include it in their base installation, and there's no pax in Minix . If you want portability in practice, use tar cf filename.tar .
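For reference, here is the equivalent archive creation and extraction with the standardized pax interface, next to the portable dash-less tar spelling:
pax -wf dir.tar Directory/    # write (create) an archive
pax -rf dir.tar               # read (extract) it again
tar cf dir.tar Directory/     # portable tar invocation, no leading dash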
{ "source": [ "https://unix.stackexchange.com/questions/28403", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2917/" ] }
28,410
I want to program a script that allows what is said in the title. So basically I gunzip the initrd, then unpack the cpio, open vi to allow editing, save, pack with cpio, and gzip again, so nothing fancy here (at least I hope that, I am not good at shell scripting). Now after gunzipping the archive the ending .gzip or .gz is left out so that I can't use $1 as the name. How should I delete the ending so that I can use a new variable foo, for further processing? This is probably not a very elegant way, but I hope it works :)
#!/bin/bash
# This script should make it possible to edit the preseed file
# within an initrd gzipped cpio archive, without unpacking and packing it
# manually
mkdir temporarydirectory
# $1 will be the initrd (cpio archive which is compressed with gzip)
mv $1 temporarydirectory
cd temporarydirectory
gunzip $1
cpio -id < $1 # here is where I need to cut off the gzip ending
rm $1 # again with the gzip ending cut off
vim preseed.cfg
find . | cpio -H newc -o > $1 # again without gzip ending
gzip $1 # here the same
mv $1 .. # here the gzip ending is used again
cd ..
rm -r temporarydirectory
tar is one of those ancient commands from the days when option syntax hadn't been standardized. Because all useful invocations of tar require specifying an operation before providing any file name, most tar implementations interpret their first argument as an option even if it doesn't begin with a - . Most current implementations accept a - ; the only exception that I'm aware of is Minix . Older versions of POSIX and Single Unix included a tar command with no - before the operation specifier. Single Unix v2 had both traditional archivers cpio and tar , but very few flags could be standardized because existing implementations were too different, so the standards introduced a new command, pax , which is the only standard archiver since Single Unix v3 . If you want standard compliance, use pax , but beware that many Linux distributions don't include it in their base installation, and there's no pax in Minix . If you want portability in practice, use tar cf filename.tar .
{ "source": [ "https://unix.stackexchange.com/questions/28410", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14008/" ] }
28,425
I'm using Mac OS X. When I SSH into servers I find the ll command useful, but it's not available on my local machine. How can I install it?
MacOS: alias ll='ls -lG' Linux: alias ll='ls -l --color=auto' Stick that in the appropriate startup file for your shell , e.g. ~/.bashrc or ~/.zshrc . To apply the setting, source the file, or quit and restart your terminal.
{ "source": [ "https://unix.stackexchange.com/questions/28425", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5056/" ] }
28,463
I need to run some program within crontab , but how can the program know about dbus session id ? it's only available for programs launched by session managers.
The problem is somewhat similar to accessing the X display and finding the location of the X cookie file . (Also, refer to these questions if you want to launch a GUI program on the user's display.) Dbus stores the session address in a file in ~/.dbus/session-bus . The name of the file is $machine_id-$display_number , where $machine_id is a randomly generated number stored in /var/lib/dbus/machine-id and $display_number is the X display number ( $DISPLAY is :$display_number or :$display_number.$screen_number ). The file in ~/.dbus/session-bus is parseable by a shell and contains definitions for DBUS_SESSION_BUS_ADDRESS and DBUS_SESSION_BUS_PID .
dbus_session_file=~/.dbus/session-bus/$(cat /var/lib/dbus/machine-id)-0
if [ -e "$dbus_session_file" ]; then
  . "$dbus_session_file"
  export DBUS_SESSION_BUS_ADDRESS DBUS_SESSION_BUS_PID
  dbus-send …
fi
Beware that there's no guarantee that the dbus daemon is still available. The user may have logged out. An alternative method is to find the PID of a process in the desktop session, and obtain the dbus address from its environment.
export $(</proc/$pid/environ tr \\0 \\n | grep -E '^DBUS_SESSION_BUS_ADDRESS=')
If the crontab is running as root and you want to communicate with the session of whatever user is logged in on the console, see Can I launch a graphical program on another user's desktop as root?
{ "source": [ "https://unix.stackexchange.com/questions/28463", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11318/" ] }
28,503
There are some commands which filter or act on input, and then pass it along as output, I think usually to stdout - but some commands will just take the stdin and do whatever they do with it, and output nothing. I'm most familiar with OS X and so there are two that come to mind immediately are pbcopy and pbpaste - which are means of accessing the system clipboard. Anyhow, I know that if I want to take stdout and spit the output to go to both stdout and a file then I can use the tee command. And I know a little about xargs , but I don't think that's what I'm looking for. I want to know how I can split stdout to go between two (or more) commands. For example: cat file.txt | stdout-split -c1 pbcopy -c2 grep -i errors There is probably a better example than that one, but I really am interested in knowing how I can send stdout to a command that does not relay it and while keeping stdout from being "muted" - I'm not asking about how to cat a file and grep part of it and copy it to the clipboard - the specific commands are not that important. Also - I'm not asking how to send this to a file and stdout - this may be a "duplicate" question (sorry) but I did some looking and could only find similar ones that were asking about how to split between stdout and a file - and the answers to those questions seemed to be tee , which I don't think will work for me. Finally, you may ask "why not just make pbcopy the last thing in the pipe chain?" and my response is 1) what if I want to use it and still see the output in the console? 2) what if I want to use two commands which do not output stdout after they process the input? Oh, and one more thing - I realize I could use tee and a named pipe ( mkfifo ) but I was hoping for a way this could be done inline, concisely, without a prior setup :)
You can use tee and process substitution for this: cat file.txt | tee >(pbcopy) | grep errors This will send all the output of cat file.txt to pbcopy , and you'll only get the result of grep on your console. You can put multiple processes in the tee part: cat file.txt | tee >(pbcopy) >(do_stuff) >(do_more_stuff) | grep errors
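The same pattern also accepts ordinary file names alongside the process substitutions, so you can keep a copy on disk at the same time:
cat file.txt | tee >(pbcopy) copy-of-file.txt | grep errors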
{ "source": [ "https://unix.stackexchange.com/questions/28503", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
28,514
I'm frustrated that this was removed/altered in gnome-shell3. There are certain key bindings for resizing and moving windows like alt+right click etc, that I'd like back. I've tried to use the system settings but to no avail. Has anyone else worked with this and got it to work?
In more recent gnome versions (e.g., gnome-shell), you need to use this instead: gsettings set org.gnome.desktop.wm.preferences resize-with-right-button true Gnome defaults to using the Super ("Windows") key for window actions, so the above alone will enable moving (super-leftdrag) and resizing (super-rightdrag). To use the Alt key instead of the Super key do: gsettings set org.gnome.desktop.wm.preferences mouse-button-modifier '<Alt>' (note that using the Alt key for window operations will interfere with some apps, like Inkscape, that use alt-click and alt-drag for app related actions)
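It is worth checking the current values first, and gsettings can also put a key back to its default if you change your mind:
gsettings get org.gnome.desktop.wm.preferences mouse-button-modifier
gsettings reset org.gnome.desktop.wm.preferences resize-with-right-button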
{ "source": [ "https://unix.stackexchange.com/questions/28514", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13677/" ] }
28,526
I've been running the useradd {user} command to add users to my system, though I plan on running this in an automated environment, and it might end up being run again, even though the user already exists. Is there a way that I can run this only if the user doesn't already exist? The user doesn't have a home folder.
id -u somename returns a non-zero exit code when the user does not exist. You can test it quite simply... ( &>/dev/null just suppresses the normal output/warning) id -u somename &>/dev/null || useradd somename
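A variant using getent , which also sees users that come from LDAP/NIS rather than only /etc/passwd :
if ! getent passwd somename > /dev/null; then
    useradd somename
fi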
{ "source": [ "https://unix.stackexchange.com/questions/28526", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14057/" ] }
28,548
What is the state-of-the-art method for automatically executing custom scripts upon USB device plug-in under current Linux distributions like Debian/CentOS/Fedora? For example if you want to automatically mount/copy some files/umount a USB mass storage device based on its UUID (or device ID etc.).
Put a line like this in a file in /etc/udev/rules.d : KERNEL=="sd*", ATTRS{vendor}=="Yoyodyne", ATTRS{model}=="XYZ42", ATTRS{serial}=="123465789", RUN+="/pathto/script" Add a clause like NAME="subdir/mydisk%n" if you want to use a custom entry path under /dev . Run udevadm info -a -n sdb to see what attributes you can match against ( attribute=="value" ; replace sdb by the device name automatically assigned to the disk, corresponding to the new entry created in /dev when you plug it in). Note that you can use ATTRS clauses from any one stanza: you can pick any stanza, but the ATTRS clauses must all come from the same stanza, you can't mix and match. You can mix ATTRS clauses with other types of clauses listed in a different stanza.
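After editing the rules file, reload the rules and watch the events while you plug the device in, to confirm that your match fires:
udevadm control --reload-rules
udevadm trigger
udevadm monitor --environment    # shows the properties of each event as it happens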
{ "source": [ "https://unix.stackexchange.com/questions/28548", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1131/" ] }
28,555
For example, I have git installed on my system. But I don't remember where I installed it, so which command is fit to find this out?
If it is in your path, then you can run either type git or which git . The which command has had problems getting the proper path (confusion between environment and dot files). For type , you can get just the path with the -p argument. If it is not in your path, then it's best to look for it with locate -b git It will find anything named 'git'. It'll be a long list, so might be good to qualify it with locate -b git | fgrep -w bin .
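Two related spellings worth knowing: command -v is the POSIX equivalent of type , and type -p prints only the path of the executable on disk:
command -v git
type -p git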
{ "source": [ "https://unix.stackexchange.com/questions/28555", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11962/" ] }
28,603
I would like to password protect or encrypt a directory and all the files within it (for the whole directory tree below it). I do not want to bother the whole home directory, I want a specific directory with some files and folders in it. I would like to be able to encrypt the directory or decrypt it using a password. Command line would be nicest to use. I don't want to have to create a new file as an encrypted version and then, delete the previous ones which are the non-encrypted version.
Use encfs (available as a package on most distributions). To set up:
mkdir ~/.encrypted ~/encrypted
encfs ~/.encrypted ~/encrypted   # enter a passphrase
mv existing-directory ~/encrypted
The initial call to encfs sets up an encrypted filesystem. After that point, every file that you write under ~/encrypted is not stored directly on the disk, it is encrypted and the encrypted data is stored under ~/.encrypted . The encfs command leaves a daemon running, and this daemon handles the encryption (and decryption when you read a file from under ~/encrypted ). In other words, for files under ~/encrypted , actions such as reads and writes do not translate directly to reading or writing from the disk. They are performed by the encfs process, which encrypts and decrypts the data and uses the ~/.encrypted directory to store the ciphertext. When you've finished working with your files for the time being, unmount the filesystem so that the data can't be accessed until you type your passphrase again:
fusermount -u ~/encrypted
After that point, ~/encrypted will be an empty directory again. When you later want to work on these files again, mount the encrypted filesystem:
encfs ~/.encrypted ~/encrypted   # enter your passphrase
This, again, makes the encrypted files in ~/.encrypted accessible under the directory ~/encrypted . You can change the mount point ~/encrypted as you like: encfs ~/.encrypted /somewhere/else (but mount the encrypted directory only once at a time). You can copy or move the ciphertext (but not while it's mounted) to a different location or even to a different machine; all you need to do to work on the files is pass the location of the ciphertext as the first argument to encfs and the location of an empty directory as the second argument.
{ "source": [ "https://unix.stackexchange.com/questions/28603", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1325/" ] }