13,073
Is there a way to update only selected programs in pacman? I'm running ArchLinux on my netbook, and the complete upgrade of my system takes up more temporary space than I have on my system, so I'd like to just update one program at a time
Pacman's install command really means 'synchronize', so the command to install a new package and the command to upgrade a single package are the same: pacman -S packagename . This will upgrade the package.
{ "source": [ "https://unix.stackexchange.com/questions/13073", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7209/" ] }
13,085
My wife is sitting at her home desktop, alpha , which is running a recent version of Ubuntu. I am on a bus, using ConnectBot on my G1 phone, and can SSH into alpha from wherever I am. For complicated reasons, I cannot use IM, email, or the telephone in order to contact her. (E.g. I don't want to wake the baby, my IM client is broken, my email server is down.) My only option is to ssh into alpha remotely and try to somehow make something appear on the screen. She's using KDE; how can I make something pop up to get her attention and let her know I'm trying to communicate with her? I thought it was possible to remotely trigger something (like xmessage) to appear on her screen even though my SSH session doesn't have an X display. EDIT: To clarify, my phone is not running any flavor of X, so X-over-ssh is not possible. Would the following work? $ export DISPLAY=:0 $ xmessage "test"
You can tell an X program which display to use with the DISPLAY environment variable, as long as you know which display alpha is currently showing. Almost certainly the only display is :0 , unless you've manually fiddled with it, so if you run: $ export DISPLAY=:0 Then any X applications you run will be displayed on alpha 's monitor. xmessage is a good choice for showing messages; there's also xdialog . If you have libnotify installed, you can use notify-send to popup a message in the corner of the screen:
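A minimal sketch of that last approach (the message text is arbitrary): DISPLAY=:0 notify-send "Ping" "Trying to reach you over SSH" . notify-send takes a summary and an optional body; like xmessage, it needs DISPLAY pointing at her session.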
{ "source": [ "https://unix.stackexchange.com/questions/13085", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7433/" ] }
13,091
I have a flash drive, let's name it FLASH . When I'm on my Mac and FLASH is plugged in (and automatically mounted), I want a specific script to execute and ~/Documents to be automatically copied to /Volumes/FLASH/Documents (the Mac mounts drives at /Volumes ). When this same drive FLASH (with the new Documents folder added earlier in the Mac situation) is plugged into an Ubuntu machine, I want it to automatically copy FLASH/Documents to ~/Documents (or automatically execute a script after mounting). How should I do this in these different scenarios? I don't want to use third-party applications for this; I prefer using core/builtin tools available on both platforms.
{ "source": [ "https://unix.stackexchange.com/questions/13091", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2689/" ] }
13,093
Is there a way to add/update a file in a tar.gz archive? Basically, I have an archive which contains a file at /data/data/com.myapp.backup/./files/settings.txt and I'd like to pull that file from the archive (already done) and push it back into the archive once the edit has been done. How can I accomplish this? Is it problematic because of the . in the path?
To pull your file from your archive, you can use tar xzf archive.tar.gz my/path/to/file.txt . Note that the directories in the file's path will be created as well. Use tar t (i.e. tar tzf archive.tar.gz ) to list the files in the archive. tar does not support "in-place" updating of files. However, you can add files to the end of an archive, even if they have the same path as a file already in the archive. In that case, both copies of the file will be in the archive, and the file added later will override the earlier one when extracted. The command to use for this is tar r (or tar u to only add files that are newer than the copy in the archive). The . in the path should not be a problem. There is a catch, though: you can't add to a compressed archive. So you would have to do: gunzip archive.tar.gz tar rf archive.tar data/data/com.myapp.backup/./files/settings.txt gzip archive.tar Which is probably not what you want to hear, since it means rewriting the entire archive twice over. If it's not a very large archive, it might be better to untar the whole thing and then re-tar it after editing. Alternatively, you could use an uncompressed archive.
{ "source": [ "https://unix.stackexchange.com/questions/13093", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
13,104
I copied many files to my new linux host. I see that all files have the owner and group both set to 515 . What does that mean?
You probably did a copy that preserved the original owner and group of these files. Internally, Linux stores a file's owner and group as numeric IDs (in your case, the number 515). Each ID is then mapped to a user or group name listed in /etc/passwd or /etc/group ; in those files you can find the name of the user or group along with the ID assigned to it. Most likely the ID 515 is not listed in your /etc/passwd or /etc/group , and for this reason the raw ID itself is shown. You can change the owner and group to an existing owner and group with the commands chown and chgrp respectively.
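A quick way to check whether an ID is known to the system, and to reassign ownership (the user and group names below are placeholders):
    getent passwd 515    # no output means no user has ID 515
    getent group 515     # likewise for groups
    chown -R someuser:somegroup /path/to/copied/files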
{ "source": [ "https://unix.stackexchange.com/questions/13104", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6797/" ] }
13,192
As far as I understand they are libraries, but what is the difference between the two?
A .a file is a static library, while a .so file is a shared object (dynamic) library similar to a DLL on Windows. There's some detailed information about the differences between the two on this page .
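To make the difference concrete, here is roughly how each kind is built from a C source file foo.c (a sketch of the usual GNU toolchain invocations):
    gcc -c foo.c                             # compile to an object file, foo.o
    ar rcs libfoo.a foo.o                    # static library: an archive of object files, copied into programs at link time
    gcc -shared -fPIC -o libfoo.so foo.c     # shared object: loaded by the dynamic linker at run time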
{ "source": [ "https://unix.stackexchange.com/questions/13192", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7183/" ] }
13,240
Is there any way to specify that sudo should preserve certain environment variables for specified commands only? For some purposes I'd like my $HOME env. variable preserved when I run certain commands. For other purposes and other commands, I want it reset. Can this be done with /etc/sudoers ? Edit: Thank you for the answers. I wonder if I might ask a follow-up question, which is "Why, then, does this not work?" In the example I'm trying to get working, I want sudo nano to read my $HOME/.nanorc . If I use this: Defaults:simon env_keep=HOME it works perfectly. If I use this: Defaults!/bin/nano env_keep=HOME or this: Cmnd_Alias NANO = /usr/bin/nano,/bin/nano,/bin/rnano Defaults!NANO env_keep=HOME it's not working at all. Any suggestions as to why? (I'm on Debian testing, btw.) (Note: I don't think it's nano specific, btw -- I can reproduce the behaviour with a one-line bash script that simply echo s $HOME ).
To override env_keep only for /path/to/command (when invoked through any rule, by any user) and for the user joe (no matter what command he is running): Defaults!/path/to/command env_keep=HOME Defaults:joe env_keep=HOME You can use -= or += to remove or add an entry to the env_keep list.
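To check which Defaults entries actually apply to you (handy when a rule doesn't seem to take effect), run sudo -l : along with your allowed commands, it prints the matching Defaults entries for your user on that host.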
{ "source": [ "https://unix.stackexchange.com/questions/13240", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5454/" ] }
13,245
I'm searching for a GUI [or CLI] tool that can produce output like the Total Commander compare-content function does. Are there any that do that? It's a very handy tool and I miss it.
{ "source": [ "https://unix.stackexchange.com/questions/13245", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
13,247
If I nmap a host, it outputs an "Uptime" too. What does it mean, and is it trustworthy? Does it give correct times?
{ "source": [ "https://unix.stackexchange.com/questions/13247", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
13,254
What could DUP mean when using ping?
DUP means duplicate packet. From man ping : Duplicate and Damaged Packets ping will report duplicate and damaged packets. Duplicate packets should never occur, and seem to be caused by inappropriate link-level retransmissions. Duplicates may occur in many situations and are rarely (if ever) a good sign, although the presence of low levels of duplicates may not always be cause for alarm. Damaged packets are obviously serious cause for alarm and often indicate broken hardware somewhere in the ping packet's path (in the network or in the hosts). There are different reasons for this. Did you capture your network traffic with an interface in promiscuous mode? Sometimes that is the cause of duplicated packets.
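A duplicate reply is flagged right in the output; an affected run looks something like this (address and timings are made up):
    64 bytes from 192.0.2.1: icmp_seq=1 ttl=64 time=2.31 ms
    64 bytes from 192.0.2.1: icmp_seq=1 ttl=64 time=2.98 ms (DUP!)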
{ "source": [ "https://unix.stackexchange.com/questions/13254", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
13,258
I have a problem with dark-blue color in vim or ls output. Because I'm using black background color, words colored in dark-blue are almost completely invisible. How can I address this problem?
You can modify the color theme of vim with the background option. Use set background=dark in your current session or set it permanently in your vimrc. The output of ls is configured with /etc/DIR_COLORS . See the manpage for more information. The settings can be overwritten with a ~/.dir_colors file in your home directory (on Ubuntu: ~/.dircolors - see the entry in ~/.bashrc ). An entry like DIR 01;36 will render directories in cyan, which is much more readable on a dark background.
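With GNU coreutils you can dump the default color database, edit it, and load it from your shell startup files (a sketch):
    dircolors -p > ~/.dir_colors           # write the default database
    $EDITOR ~/.dir_colors                  # e.g. change DIR 01;34 to DIR 01;36
    eval "$(dircolors ~/.dir_colors)"      # sets LS_COLORS; put this line in ~/.bashrc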
{ "source": [ "https://unix.stackexchange.com/questions/13258", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
13,290
I have always learned that the init process is the ancestor of all processes. Why does process 2 have a PPID of 0? $ ps -ef | head -n 3 UID PID PPID C STIME TTY TIME CMD root 1 0 0 May14 ? 00:00:01 /sbin/init root 2 0 0 May14 ? 00:00:00 [kthreadd]
First, “ancestor” isn't the same thing as “parent”. The ancestor can be the parent's parent's … parent's parent, and the kernel only keeps track of one level. However, when a process dies, its children are adopted by init, so you will see a lot of processes whose parent is 1 on a typical system. Modern Linux systems additionally have a few processes that execute kernel code, but are managed as user processes, as far as scheduling is concerned. (They don't obey the usual memory management rules since they're running kernel code.) These processes are all spawned by kthreadd (it's the init of kernel threads). You can recognize them by the fact that /proc/2/exe (normally a symbolic link to the process executable) can't be read. Also, ps lists them with a name between square brackets (which is possible for normal user processes, but unusual). Most processes whose parent process ID is 2 are kernel processes, but there are also a few kernel helper processes with PPID 2 (see below). Processes 1 ( init ) and 2 ( kthreadd ) are created directly by the kernel at boot time, so they don't have a parent. The value 0 is used in their ppid field to indicate that. Think of 0 as meaning “the kernel itself” here. Linux also has some facilities for the kernel to start user processes whose location is indicated via a sysctl parameter in certain circumstances. For example, the kernel can trigger module loading events (e.g. when new hardware is discovered, or when some network protocols are first used) by calling the program in the kernel.modprobe sysctl value. When a program dumps core, the kernel calls the program indicated by kernel.core_pattern if any. Those processes are user processes, but their parent is registered as kthreadd .
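You can see this structure for yourself; with procps ps, the following lists kthreadd's children and shows that its executable link is unreadable:
    ps --ppid 2 -o pid,comm | head
    readlink /proc/2/exe || echo "unreadable, as expected for a kernel thread"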
{ "source": [ "https://unix.stackexchange.com/questions/13290", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7533/" ] }
13,323
$ ps aux | grep -i ssh USER 4364 0.0 0.0 9004 1032 ? Ss 12:20 0:00 ssh -v -fND localhost:4000 USERNAME@SERVER-IP-ADDRESS $ pgrep localhost:4000 Why doesn't this work?
By default, pgrep(1) will only match against the process name. If you want to match against the full command line, use the -f option: $ pgrep -f localhost:4000
{ "source": [ "https://unix.stackexchange.com/questions/13323", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
13,338
I have a very long export PATH=A:B:C ... . Can I split it across multiple lines to have a more organized one, as follows? export PATH = A: B: C:
You can do: export PATH="A" export PATH="$PATH:B" export PATH="$PATH:C" Each subsequent line appends onto the previously defined path. This is generally a good habit, as it avoids trashing the existing path. If you want the new component to take precedence, swap the order: export PATH="A" export PATH="B:$PATH" export PATH="C:$PATH" Alternatively, you might be able to do: export PATH=A:\ B:\ C where \ marks a line continuation. Haven't tested this method.
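If you are using bash, another option that avoids the untested backslash continuations is to build the list in an array and join it (a sketch; the directories are placeholders):
    path_parts=(
        /opt/a/bin
        /opt/b/bin
        /opt/c/bin
    )
    PATH=$(IFS=:; printf '%s' "${path_parts[*]}")   # join the entries with colons
    export PATH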
{ "source": [ "https://unix.stackexchange.com/questions/13338", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1090/" ] }
13,369
This keyboard has only one super key, so I want to remap the menu key to make up for that.
Use xev to find the keycode for the key you want to remap. For example if I press Menu key it tells me that that is keycode 135 . Next in my ~/.xmodmaprc file, I add a line like this: keycode 135 = Super_R ... to make it the right hand windows key. Then all that remains is to activate the key remaps. This usually happens automatically on login to your x session, but if your Desktop Environment doesn't do that you can run it manually as xmodmap ~/.xmodmaprc from a command line or whatever script gets run when you login.
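Note that depending on your setup, assigning the keysym alone may not be enough: the keysym usually also has to be attached to the mod4 modifier before window managers treat it as Super. A sketch of a complete ~/.xmodmaprc (keycode 135 is the example from above; check yours with xev):
    ! remap the Menu key to right Super
    keycode 135 = Super_R
    add mod4 = Super_R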
{ "source": [ "https://unix.stackexchange.com/questions/13369", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/839/" ] }
13,377
I often download tarballs with wget from sourceforge.net. The downloaded files then are named, e.g SQliteManager-1.2.4.tar.gz?r=http:%2F%2Fsourceforge.net%2Fprojects%2Fsqlitemanager%2Ffiles%2F&ts=1305711521&use_mirror=switch When I try to tar xzf SQliteManager-1.2.4.tar.gz\?r\=http\:%2F%2Fsourceforge.net%2Fprojects%2Fsqlitemanager%2Ffiles%2F\&ts\=1305711521\&use_mirror\=switch I receive the following error message: tar (child): Cannot connect to SQliteManager-1.2.4.tar.gz?r=http: resolve failed gzip: stdin: unexpected end of file tar: Child returned status 128 tar: Error is not recoverable: exiting now After renaming the file to foo.tar.gz the extraction works perfect. Is there a way, that i am not forced to rename each time the target file before extracting?
The reason for the error you are seeing can be found in the GNU tar documentation : If the archive file name includes a colon (‘:’), then it is assumed to be a file on another machine[...] That is, it is interpreting SQliteManager-1.2.4.tar.gz?r=http as a host name and trying to resolve it to an IP address, hence the "resolve failed" error. That same documentation goes on to say: If you need to use a file whose name includes a colon, then the remote tape drive behavior can be inhibited by using the ‘--force-local’ option.
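Putting that together (the long query-string name is abbreviated here):
    tar --force-local -xzf 'SQliteManager-1.2.4.tar.gz?r=...'
    # or sidestep tar's filename parsing entirely by feeding the archive on stdin:
    tar -xzf - < 'SQliteManager-1.2.4.tar.gz?r=...'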
{ "source": [ "https://unix.stackexchange.com/questions/13377", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7571/" ] }
13,391
I currently have a strange problem on debian (wheezy/amd64). I have created a chroot to install a server (I can't give any more detail about it, sorry). Let's call its path /chr_path/ . To make things easy, I initialized this chroot with a debootstrap (also wheezy/amd64). All seemed to work well inside the chroot but when I started the installer script of my server I got : zsh: Not found /some_path/perl (the installer includes a perl binary for some reasons) Naturally, I checked the /some_path/ location and I found the "perl" binary. file in the chroot environment returns : /some_path/perl ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped The file exists, seems ok, has correct rights. I can use file , ls , vim on it but as soon as I try to execute it - ./perl for example - I get : zsh: Not found ./perl . This situation is quite incomprehensible to me. Moreover : I can execute other basic binaries (/bin/ls,...) in the chroot without getting errors I have the same problems for other binaries that came with the project When I try to execute the binary from the main root ( /chr_path/some_path/perl ), it works. I have tried to put one of the binaries next to a copy of my ls . I checked that the access rights were the same but this didn't change anything (one was working, and the other wasn't)
When you fail to execute a file that depends on a “loader”, the error you get may refer to the loader rather than the file you're executing. The loader of a dynamically-linked native executable is the part of the system that's responsible for loading dynamic libraries. It's something like /lib/ld.so or /lib/ld-linux.so.2 , and should be an executable file. The loader of a script is the program mentioned on the shebang line, e.g. /bin/sh for a script that begins with #!/bin/sh . (Bash and zsh give a message “bad interpreter” instead of “command not found” in this case.) The error message is rather misleading in not indicating that the loader is the problem. Unfortunately, fixing this would be hard because the kernel interface only has room for reporting a numeric error code, not for also indicating that the error in fact concerns a different file. Some shells do the work themselves for scripts (reading the #! line on the script and re-working out the error condition), but none that I've seen attempt to do the same for native binaries. ldd won't work on the binaries either because it works by setting some special environment variables and then running the program, letting the loader do the work. strace wouldn't provide any meaningful information either, since it wouldn't report more than what the kernel reports, and as we've seen the kernel can't report everything it knows. This situation often arises when you try to run a binary for the right system (or family of systems) and superarchitecture but the wrong subarchitecture. Here you have ELF binaries on a system that expects ELF binaries, so the kernel loads them just fine. They are i386 binaries running on an x86_64 processor, so the instructions make sense and get the program to the point where it can look for its loader. But the program is a 32-bit program (as the file output indicates), looking for the 32-bit loader /lib/ld-linux.so.2 , and you've presumably only installed the 64-bit loader /lib64/ld-linux-x86-64.so.2 in the chroot. You need to install the 32-bit runtime system in the chroot: the loader, and all the libraries the programs need. From Debian wheezy onwards, if you want both i386 and x86_64 support, start with an amd64 installation and activate multiarch support: run dpkg --add-architecture i386 then apt-get update and apt-get install libc6:i386 zlib1g:i386 … (if you want to generate a list of the dependencies of Debian's perl package, to see what libraries are likely to be needed, you can use aptitude search -F %p '~Rdepends:^perl$ ~ri386' ). You can pull in a collection of common libraries by installing the ia32-libs package (you need to enable multiarch support first). On Debian amd64 up to wheezy, the 32-bit loader is in the libc6-i386 package. You can install a bigger set of 32-bit libraries by installing ia32-libs .
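You can confirm which loader a binary asks for, and whether the chroot actually provides it, with something like:
    readelf -l /some_path/perl | grep 'program interpreter'   # prints e.g. [Requesting program interpreter: /lib/ld-linux.so.2]
    ls /lib/ld-linux.so.2    # run inside the chroot; absent until the 32-bit runtime is installed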
{ "source": [ "https://unix.stackexchange.com/questions/13391", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2249/" ] }
13,432
For some reason, I have a few open sessions on an SSH server that I don't know about. I assume they're leftovers from when my pipe broke. $ users user1 user2 user3 me me me me Is there a way to log me out across all sessions?
You could try killing off the individual processes that are still running as you, or just purge the system of everything running as you: pkill -u username
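If you want to keep your current session alive and only kill the stale ones, you can target terminals individually: who -u shows which pts/N each session owns, and pkill -t kills by controlling terminal (the pts number below is an example):
    who -u           # note the pts/N of each stale session
    pkill -t pts/2   # kill everything attached to that terminal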
{ "source": [ "https://unix.stackexchange.com/questions/13432", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7596/" ] }
13,464
So say I'm in a directory that is deep within a file structure, such as /home/cory/projects/foo/bar/bash/baz I know that in either the current folder baz or one of the folders above me (such as home , cory , projects , foo , bar , or bash ) there is a file named happy . How can I find that file without searching a lot of extra folders?
#!/bin/bash function upsearch () { test / == "$PWD" && return || test -e "$1" && echo "found: " "$PWD" && return || cd .. && upsearch "$1" } This function will walk up from the current dir. Note, that it is a function, and will change the directory, while traversing. It will stop in the directory, where it finds the file, and will walk up to the root / if not. You might want to change it to a script instead of a function, and maybe jump back, if the file isn't found in the root directory. If you never want to cd to the dir: upsearch () { local slashes=${PWD//[^\/]/} local directory="$PWD" for (( n=${#slashes}; n>0; --n )) do test -e "$directory/$1" && echo "$directory/$1" && return directory="$directory/.." done } This will lead to results like /home/cory/a/b/c/../../../happy if the file is in /home/cory/. If you need a cleaned path, you could do some cd "$directory" echo "$PWD" cd - on success. To restrict the search to regular files and excludes directories, symbolic links and the like, you can change the tests to -f instead of -e.
{ "source": [ "https://unix.stackexchange.com/questions/13464", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4143/" ] }
13,466
Say I have a file: # file: 'test.txt' foobar bash 1 bash foobar happy foobar I only want to know what words appear after "foobar", so I can use this regex: "foobar \(\w\+\)" The parenthesis indicate that I have a special interest in the word right after foobar. But when I do a grep "foobar \(\w\+\)" test.txt , I get the entire lines that match the entire regex, rather than just "the word after foobar": foobar bash 1 foobar happy I would much prefer that the output of that command looked like this: bash happy Is there a way to tell grep to only output the items that match the grouping (or a specific grouping) in a regular expression?
GNU grep has the -P option for perl-style regexes, and the -o option to print only what matches the pattern. These can be combined using look-around assertions (described under Extended Patterns in the perlre manpage ) to remove part of the grep pattern from what is determined to have matched for the purposes of -o . $ grep -oP 'foobar \K\w+' test.txt bash happy $ The \K is the short-form (and more efficient form) of (?<=pattern) which you use as a zero-width look-behind assertion before the text you want to output. (?=pattern) can be used as a zero-width look-ahead assertion after the text you want to output. For instance, if you wanted to match the word between foo and bar , you could use: $ grep -oP 'foo \K\w+(?= bar)' test.txt or (for symmetry) $ grep -oP '(?<=foo )\w+(?= bar)' test.txt
{ "source": [ "https://unix.stackexchange.com/questions/13466", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4143/" ] }
13,573
To create a tar file for a directory, the tar command with create , verbose and file options can be typed thus: $ tar -cvf my.tar my_directory/ But it also works to do it this way: $ tar cvf my.tar my_directory/ That is, without the dash (-) preceding the options. Why would you ever pass a dash (-) to the option list?
There are several different patterns for options that have been used historically in UNIX applications. Several old ones, like tar , use a positional scheme: command options arguments as for example tar uses: tar <flags>f <file operated on> <paths of files to manipulate> In a first attempt to avoid the confusion, tar and a few other programs with the old flags-arguments style allowed delimiting the flags with dashes, but most of us old guys simply ignored that. Some other commands have a more complicated command line syntax, like dd(1) which uses flags, equal signs, pathnames, arguments and a partridge in a pear tree, all with wild abandon. In BSD and later versions of unix, this had more or less converged to single-character flags marked with '-', but this began to present a couple of problems: the flags could be hard to remember sometimes you actually wanted to use a name with '-' and especially with GNU tools, there began to be limitations imposed by the number of possible flags. So GNU tools added GNU long options like --output . Then Sun decided that the extra '-' was redundant and started using long-style flags with single '-'s. And that's how it came to be the mess it is now.
{ "source": [ "https://unix.stackexchange.com/questions/13573", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7662/" ] }
13,591
Usability testing of a shell script I wrote found that people had different expectations on how to answer a question that expected ‘yes’ as the answer. See variations in the below code example. Surely there must be a better way that what I came up with? What is your readable and shorter form take on this? read -p 'Answer this question with yes: ' answer if [ "$answer" = 'Y' -o "$answer" = 'YES' -o "$answer" = 'Yes' -o "$answer" = 'y' -o "$answer" = 'yes' -o some-alternate-condition ]; then echo 'Surely this can be written better?' fi
The UNIX standard provides example code for this using the locale utility : if printf "%s\n" "$response" | grep -Eq "$(locale yesexpr)" then affirmative processing goes here else non-affirmative processing goes here fi The value for 'yesexpr' in the POSIX locale (and on English locales on real systems) is "^[yY]" . It is to be interpreted as an extended regular expression. See also noexpr.
{ "source": [ "https://unix.stackexchange.com/questions/13591", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7670/" ] }
13,676
Using a common command line tool like sed or awk, is it possible to join all lines that end with a given character, like a backslash? For example, given the file: foo bar \ bash \ baz dude \ happy I would like to get this output: foo bar bash baz dude happy
a shorter and simpler sed solution: sed ' : again /\\$/ { N s/\\\n// t again } ' textfile or one-liner if using GNU sed : sed ':x; /\\$/ { N; s/\\\n//; tx }' textfile
{ "source": [ "https://unix.stackexchange.com/questions/13676", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4143/" ] }
13,711
I am having some issues in using an answer provided on this site for this question about a sed command to replace a blank line with two other lines of content , and it was brought up if the sed command on Mac OS (10.6.7 for me) is different. I don't think that it is, but was wondering if others on this site thought differently.
The behavior of shell utilities does differ in minor ways between unix variants. There are many unix variants, with a complex history . There are standardisation efforts such as the POSIX standard and its superset the Single UNIX specification . Most systems nowadays implement POSIX:2001, also known as the Single UNIX Specification version 3 , with minor deviations and many extensions. The Single Unix specification is not a tutorial, but version 3 is readable if you already have an idea of what a command is doing. You can consult it to know if some feature is standard or an extension of a particular system. A majority of unix users use Linux and haven't used any other variant. Linux comes with GNU utilities, which often have many extensions to the standard. So you'll find quite a lot of code out there that works on Linux but not on other unices, because it relies on those extensions. Regarding sed, consult the sed Single Unix specification for the minimum that every system is supposed to support, the man page on your system for what your implementation supports, and the GNU sed manual for what most people out there use. One of the nonstandard extensions in GNU sed is supporting multiple commands run together. For example, this GNU sed program prints all lines containing an a , but changes b into c first: sed -ne '/a/ {s/b/c/g; p}' { and } are actually separate commands, so for full portability, you need to specify them either on separate lines (in a file) or in separate -e arguments (on the command line). The lack of a command separator after { and the use of ; as a command separator are common extensions. The lack of a command separator before } is a less common extension. This is standard-compliant: sed -n -e '/a/ {' -e 's/b/c/g' -e p -e '}' This is nonstandard but commonly accepted: sed -ne '/a/ { s/b/c/g; p; }' Another nonstandard but common extension is the use of \n to mean a newline in a s replacement text (the use in a regexp is standard). The portable method is to include backslash-newline in the sed script. Another common extension is \+ , \? and \| in regexps to mean one or more, at most one and alternation; portable basic regular expressions have none of these. For example, the first command is a non-portable way of replacing contiguous sequences of whitespace by a newline; the second command is a standards-compliant equivalent. sed -e 's/ \+/\n/' sed -e 's/ */\ /'
{ "source": [ "https://unix.stackexchange.com/questions/13711", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7723/" ] }
13,724
I am having a very hard time understanding how one uses file descriptors in shell scripts. I know the basics such as exec 5 > /tmp/foo So fd 5 is attached to foo for writing. exec 6 < /tmp/bar … for reading. exec 5>&- … close fd. Now what does this do? #!/bin/bash exec 5 > /tmp/foo exec 6 < /tmp/bar cat <&6 | while read a do echo $a >&5 done As I understand it, &5 closes the fd, so how is the output still being re-directed successfully after each call? This is a copy pasta from : Here It claims using this over a simple echo $a > file would make it much quicker, I however fail to understand. I would appreciate any links to a decent tutorial. My google powers seem to be failing me.
First, note that the syntax for closing is 5>&- or 6<&- , depending on whether the file descriptor is being read for writing or for reading. There seems to be a typo or formatting glitch in that blog post. Here's the commented script. exec 5>/tmp/foo # open /tmp/foo for writing, on fd 5 exec 6</tmp/bar # open /tmp/bar for reading, on fd 6 cat <&6 | # call cat, with its standard input connected to # what is currently fd 6, i.e., /tmp/bar while read a; do # echo $a >&5 # write to fd 5, i.e., /tmp/foo done # There's no closing here. Because all the inputs and outputs are going to the same place in this simple example, the use of extra file descriptors is not necessary. You could write cat </tmp/bar | while read a; do echo $a done >/tmp/foo Using explicit file descriptors becomes useful when you want to write to multiple files in turn. For example, consider a script that outputs data to a data output file and logging data to a log file and possibly error messages as well. That means three output channels: one for data, one for logs and one for errors. Since there are only two standard descriptors for output, a third is needed. You can call exec to open the output files: exec >data-file exec 3>log-file echo "first line of data" echo "this is a log line" >&3 … if something_bad_happens; then echo error message >&2; fi exec >&- # close the data output file echo "output file closed" >&3 The remark about efficiency comes in when you have a redirection in a loop, like this (assume the file is empty to begin with): while …; do echo $a >>/tmp/bar; done At each iteration, the program opens /tmp/bar , seeks to the end of the file, appends some data and closes the file. It is more efficient to open the file once and for all: while …; do echo $a; done >/tmp/bar When there are multiple redirections happening at different times, calling exec to perform redirections rather than wrapping a block in a redirection becomes useful. exec >/tmp/bar while …; do echo $a; done You'll find several other examples of redirection by browsing the io-redirection tag on this site .
{ "source": [ "https://unix.stackexchange.com/questions/13724", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4290/" ] }
13,731
I have a list of numbers in a file, one per line. How can I get the minimum, maximum, median and average values? I want to use the results in a bash script. Although my immediate situation is for integers, a solution for floating-point numbers would be useful down the line, but a simple integer method is fine.
You can use the R programming language . Here is a quick and dirty R script: #! /usr/bin/env Rscript d<-scan("stdin", quiet=TRUE) cat(min(d), max(d), median(d), mean(d), sep="\n") Note the "stdin" in scan which is a special filename to read from standard input (that means from pipes or redirections). Now you can redirect your data over stdin to the R script: $ cat datafile 1 2 4 $ ./mmmm.r < datafile 1 4 2 2.333333 Also works for floating points: $ cat datafile2 1.1 2.2 4.4 $ ./mmmm.r < datafile2 1.1 4.4 2.2 2.566667 If you don't want to write an R script file you can invoke a true one-liner (with linebreak only for readability) in the command line using Rscript : $ Rscript -e 'd<-scan("stdin", quiet=TRUE)' \ -e 'cat(min(d), max(d), median(d), mean(d), sep="\n")' < datafile 1 4 2 2.333333 Read the fine R manuals at http://cran.r-project.org/manuals.html . Unfortunately the full reference is only available in PDF. Another way to read the reference is by typing ?topicname in the prompt of an interactive R session. For completeness: there is an R command which outputs all the values you want and more. Unfortunately in a human friendly format which is hard to parse programmatically. > summary(c(1,2,4)) Min. 1st Qu. Median Mean 3rd Qu. Max. 1.000 1.500 2.000 2.333 3.000 4.000
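If R isn't available, a plain sort+awk pipeline can compute the same four values, assuming one number per line (works for integers and floats):
    sort -n datafile | awk '
        { a[NR] = $1; s += $1 }
        END {
            printf "min=%s max=%s\n", a[1], a[NR]
            m = (NR % 2) ? a[(NR+1)/2] : (a[NR/2] + a[NR/2+1]) / 2
            printf "median=%s mean=%s\n", m, s/NR
        }'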
{ "source": [ "https://unix.stackexchange.com/questions/13731", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2343/" ] }
13,732
Recently I have been exploring the enchanted /dev folder. I want to write some random data to an audio device in order to generate some noise. I am using ALSA. So I instruct cat to pipe some random data to the playback file in the /dev folder... cat file-of-random-data > /dev/snd/pcmC0D0p then I receive what seems to be an error from cat cat: write error: File descriptor in bad state How can I fix this so I can hear some delicious static play from my sound card?
I think the reason this isn't working for you is because that interface has been deprecated. You normally can't write audio using /dev/dsp anymore, at least without being tricky. There is a program that will accomplish this for you on your system: padsp . This will map the /dev/audio or /dev/dsp file to the new Audio Server system. Fire up the terminal and get into root mode with sudo su . Then, I'm going to cat /dev/urandom and pipe the output into padsp and use the tee command to send the data to /dev/audio . You'll get a ton of garbage in your terminal, so you may want to redirect to /dev/null . Once you're in superuser, try this command: cat /dev/urandom | padsp tee /dev/audio > /dev/null You may even want to try with other devices, like your mouse: Use: /dev/psaux , for instance or the usb driver. You can even run your memory through it: /dev/mem Hope this clarifies why it wasn't working before. Personally, I found the mouse and memory to be way more interesting than playing random static!
{ "source": [ "https://unix.stackexchange.com/questions/13732", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7138/" ] }
13,751
I'm currently facing a problem on a linux box where as root I have commands returning error because inotify watch limit has been reached. # tail -f /var/log/messages [...] tail: cannot watch '/var/log/messages': No space left on device # inotifywatch -v /var/log/messages Establishing watches... Failed to watch /var/log/messages; upper limit on inotify watches reached! Please increase the amount of inotify watches allowed per user via '/proc/sys/fs/inotify/max_user_watches'.` I googled a bit and every solution I found is to increase the limit with: sudo sysctl fs.inotify.max_user_watches=<some random high number> But I was unable to find any information of the consequences of raising that value. I guess the default kernel value was set for a reason but it seems to be inadequate for particular usages. (e.g., when using Dropbox with a large number of folder, or software that monitors a lot of files) So here are my questions: Is it safe to raise that value and what would be the consequences of a too high value? Is there a way to find out what are the currently set watches and which process set them to be able to determine if the reached limit is not caused by a faulty software?
Is it safe to raise that value and what would be the consequences of a too high value? Yes, it's safe to raise that value and below are the possible costs [ source ]: Each used inotify watch takes up 540 bytes (32-bit system), or 1 kB (double - on 64-bit) [sources: 1 , 2 ] This comes out of kernel memory , which is unswappable. Assuming you set the max at 524288 and all were used (improbable), you'd be using approximately 256MB/512MB of 32-bit/64-bit kernel memory. Note that your application will also use additional memory to keep track of the inotify handles, file/directory paths, etc. -- how much depends on its design. To check the max number of inotify watches: cat /proc/sys/fs/inotify/max_user_watches To set max number of inotify watches Temporarily: Run sudo sysctl fs.inotify.max_user_watches= with your preferred value at the end. Permanently ( more detailed info ): put fs.inotify.max_user_watches=524288 into your sysctl settings. Depending on your system they might be in one of the following places: Debian/RedHat: /etc/sysctl.conf Arch: put a new file into /etc/sysctl.d/ , e.g. /etc/sysctl.d/40-max-user-watches.conf you may wish to reload the sysctl settings to avoid a reboot: sysctl -p (Debian/RedHat) or sysctl --system (Arch) Check to see if the max number of inotify watches have been reached: Use tail with the -f (follow) option on any old file, e.g. tail -f /var/log/dmesg : - If all is well, it will show the last 10 lines and pause; abort with Ctrl-C - If you are out of watches , it will fail with this somewhat cryptic error : tail: cannot watch '/var/log/dmsg': No space left on device To see what's using up inotify watches find /proc/*/fd -lname anon_inode:inotify | cut -d/ -f3 | xargs -I '{}' -- ps --no-headers -o '%p %U %c' -p '{}' | uniq -c | sort -nr The first column indicates the number of inotify fds (not the number of watches though) and the second shows the PID of that process [sources: 1 , 2 ].
{ "source": [ "https://unix.stackexchange.com/questions/13751", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7755/" ] }
13,776
Given file path, how can I determine which process creates it (and/or reads/writes to it)?
The lsof command (already mentioned in several answers) will tell you what process has a file open at the time you run it. lsof is available for just about every unix variant. lsof /path/to/file lsof won't tell you about file that were opened two microseconds ago and closed one microsecond ago. If you need to watch a particular file and react when it is accessed, you need different tools. If you can plan a little in advance, you can put the file on a LoggedFS filesystem. LoggedFS is a FUSE stacked filesystem that logs all accesses to files in a hierarchy. The logging parameters are highly configurable. FUSE is available on all major unices . You'll want to log accesses to the directory where the file is created. Start with the provided sample configuration file and tweak it according to this guide . loggedfs -l /path/to/log_file -c /path/to/config.xml /path/to/directory tail -f /path/to/log_file Many unices offer other monitoring facilities. Under Linux, you can use the relatively new audit subsystem . There isn't much literature about it (but more than about loggedfs); you can start with this tutorial or a few examples or just with the auditctl man page . Here, it should be enough to make sure the daemon is started, then run auditctl : auditctl -w /path/to/file (I think older systems need auditctl -a exit,always -w /path/to/file ) and watch the logs in /var/log/audit/audit.log .
{ "source": [ "https://unix.stackexchange.com/questions/13776", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6080/" ] }
13,802
Is there a way to execute a command in a different directory without having to cd to it? I know that I could simply cd in and cd out, but I'm just interested in the possibilities of forgoing the extra steps :)
I don't know if this counts, but you can make a subshell: $ (cd /var/log && cp -- *.log ~/Desktop) The directory is only changed for that subshell, so you avoid the work of needing to cd - afterwards.
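If you do this often, a small wrapper function keeps it readable (a sketch; in_dir is a made-up name, and its body is a subshell so the cd never leaks into your session):
    in_dir() (
        cd -- "$1" || exit
        shift
        "$@"
    )
    in_dir /var/log ls -l
Note that globs in the arguments are expanded in your current directory before the function runs, so for patterns like *.log you still want the explicit subshell form above.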
{ "source": [ "https://unix.stackexchange.com/questions/13802", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
13,848
Is there an easy command that I can use to zero out the last 1MB of a hard drive? For the start of the drive I would dd if=/dev/zero of=/dev/sdx bs=1M count=1 . The seek option for dd looks promising, but does someone have an easy way to determine exactly how far I should seek? I have a hardware RAID appliance, that stores some of the RAID configuration at the end of the drive. I need the RAID appliance to see the drives as un-configured, so I want to remove the RAID configuration without having to spend the time to do a full wipe of the drives. I have a dozen 2TB drives, and a full erase of all of those drives would take a long time.
The simplest way on Linux to get the size of the disk is with blockdev --getsz : sudo -s dd bs=512 if=/dev/zero of=/dev/sdx count=2048 seek=$((`blockdev --getsz /dev/sdx` - 2048))
{ "source": [ "https://unix.stackexchange.com/questions/13848", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1209/" ] }
13,858
If I have a root folder with some restrictive permission, let's say 600, and if the child folders/files have 777 permission will everybody be able to read/write/execute the child file even though the root folder has 600?
The precise rule is: you can traverse a directory if and only if you have execute permission on it. So for example to access dir/subdir/file , you need execute permission on dir and dir/subdir , plus the permissions on file for the type of access you want. Getting into corner cases, I'm not sure whether it's universal that you need execute permission on the current directory to access a file through a relative path (you do on Linux). The way you access a file matters. For example, if you have execute permissions on /foo/bar but not on /foo , but your current directory is /foo/bar , you can access files in /foo/bar through a relative path but not through an absolute path. You can't change to /foo/bar in this scenario; a more privileged process has presumably done cd /foo/bar before going unprivileged. If a file has multiple hard links, the path you use to access it determines your access constraints. Symbolic links change nothing. The kernel uses the access rights of the calling process to traverse them. For example, if sym is a symbolic link to the directory dir , you need execute permission on dir to access sym/foo . The permissions on the symlink itself may or may not matter depending on the OS and filesystem (some respect them, some ignore them). Removing execute permission from the root directory effectively restricts a user to a part of the directory tree (which a more privileged process must change into). This requires access control lists to be any use. For example, if / and /home are off-limits to joe ( setfacl -m user:joe:0 / /home ) and /home/joe is joe 's home directory, then joe won't be able to access the rest of the system (including running shell scripts with /bin/sh or dynamically linked binaries that need to access /lib , so you'd need to go deeper for practical use, e.g. setfacl -m user:joe:0 /*; setfacl -d user:joe /bin /lib ). Read permission on a directory gives the right to enumerate the entries. Giving execute permission without giving read permission is occasionally useful: the names of entries serve as passwords to access them. I can't think of any use in giving read or write permission to a directory without execute permission.
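A small experiment makes the execute-without-read case tangible (run as an unprivileged user on throwaway paths):
    mkdir -p secret; echo hi > secret/known.txt
    chmod 111 secret         # --x--x--x: traversable but not listable
    ls secret                # fails: Permission denied
    cat secret/known.txt     # works -- but only if you already know the name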
{ "source": [ "https://unix.stackexchange.com/questions/13858", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7813/" ] }
13,904
Accidentally I managed to copy-paste a paragraph in vim a zillion times. How do I select and then delete the text from my current position to the end of file? In Windows, this would be Ctrl-Shift-End, followed by Delete.
VGx Enter Visual Mode, go to End of File, delete. Alternatively, you can do: Vggx To delete from the current position to the beginning of the file.
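The same deletions work without visual mode, using delete with a motion: dG deletes from the current line to the end of the file, and dgg deletes from the current line to the beginning.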
{ "source": [ "https://unix.stackexchange.com/questions/13904", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1594/" ] }
13,907
I've got an extreme problem, and all of the solutions I can imagine are complicated. According to my UNIX/Linux experience there must be an easy way. I want to delete the first 31 bytes of each file in /foo/ . Each file is long enough. Well, I'm sure somebody will deliver me a suprisingly easy solution I just can't imagine. Maybe awk?
for file in /foo/* do if [ -f "$file" ] then dd if="$file" of="$file.truncated" bs=31 skip=1 && mv "$file.truncated" "$file" fi done or, faster, thanks to Gilles' suggestion: for file in /foo/* do if [ -f $file ] then tail +32c $file > $file.truncated && mv $file.truncated $file fi done Note: Posix tail specifies "-c +32" instead of "+32c" but Solaris default tail doesn't like it: $ /usr/bin/tail -c +32 /tmp/foo > /tmp/foo1 tail: cannot open input /usr/xpg4/bin/tail is fine with both syntaxes. If you want to keep the original file permissions, replace ... && mv "$file.truncated" "$file" by ... && cat "$file.truncated" "$file" && rm "$file.truncated"
{ "source": [ "https://unix.stackexchange.com/questions/13907", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7799/" ] }
13,953
I'm trying to run a minecraft server on my unRAID server. The server will run in the shell, and then sit there waiting for input. To stop it, I need to type 'stop' and press enter, and then it'll save the world and gracefully exit, and I'm back in the shell. That all works if I run it via telnetting into the NAS box, but I want to run it directly on the box. This is what I previously had as a first attempt: #define USER_SCRIPT_LABEL Start Minecraft server #define USER_SCRIPT_DESCR Start minecraft server. needs sde2 mounted first cd /mnt/disk/sde2/MCunraid screen -d -m -S minecraft /usr/lib/java/bin/java -Xincgc -Xmx1024M -jar CraftBukkit.jar MCunraid is the folder where I have the Craftbukkit.jar and all the world files etc. If I type that screen line in directly, the screen does setup detached and the server launches. If I execute that line from within the script it doesn't seem to set up a screen for stopping the server, I need to 'type' in STOP and then press enter. My approach was screen -S minecraft -X stuff "stop $(echo -ne '\r')" to send to screen 'minecraft' the text s-t-o-p and a carriage return. But that doesn't work, even if I type it directly onto the command line. But if I 'screen -r' I can get to the screen with the server running, then type 'stop' and it shuts down properly. The server runs well if I telnet in and do it manually, just need to run it without being connected from my remote computer.
I can solve at least part of the problem: why the stop part isn't working. Experimentally, when you start a Screen session in detached mode ( screen -d -m ), no window is selected, so input later sent with screen -X stuff is just lost. You need to explicitly specify that you want to send the keystrokes to window 0 ( -p 0 ). This is a good idea anyway, in case you happen to create other windows in that Screen session for whatever reason. screen -S minecraft -p 0 -X stuff "stop^M" (Screen translates ^M into control-M, which is the character sent by the Enter key.) The problem with starting the session from a script is likely related to unMENU.
{ "source": [ "https://unix.stackexchange.com/questions/13953", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7864/" ] }
13,968
How to show top five CPU consuming processes with ps?
Why use ps when you can do it easily with the top command? If you must use ps , try this: ps aux | sort -nrk 3,3 | head -n 5 If you want something that's truly top-esque with constant updates, use watch watch "ps aux | sort -nrk 3,3 | head -n 5"
{ "source": [ "https://unix.stackexchange.com/questions/13968", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7869/" ] }
13,972
I just ran df -h a minute ago and noticed a filesystem has been added that I'm not familiar with. Does anyone know why /run exists? Is this something that's been added by the kernel? By Arch Linux ? run 10M 236K 9.8M 3% /run
Apparently, many tools (among them udev) will soon require a /run/ directory that is mounted early (as tmpfs). Arch developers introduced /run last month to prepare for this. The udev runtime data moved from /dev/.udev/ to /run/udev/. The /run mountpoint is supposed to be a tmpfs mounted during early boot, available and writable to for all tools at any time during bootup, it replaces /var/run/, which should become a symlink some day. [1] There is more detail here: http://www.h-online.com/open/news/item/Linux-distributions-to-include-run-directory-1219006.html [1] From thread on the Arch Projects ML
{ "source": [ "https://unix.stackexchange.com/questions/13972", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29/" ] }
13,996
Many times I have an SSH session that doesn't respond anymore (for example, when I lose internet connection and then reconnect). Ctrl + C , Ctrl + D , Ctrl + Z and a zillion of key presses don't have any effect. Most of the time I already have tmux or byobu running already, so I can just start another terminal and reconnect. However it does feel cumbersome. How can I disconnect SSH from the current terminal?
Use the "escape character" (normally, the tilde ~ ) to control an SSH session: ~ followed by . closes the SSH connection; ~ followed by Ctrl + Z suspends the SSH process; ~ followed by another ~ sends a literal ~ . You can set the escape character using the -e option to ssh . Additionally, remember to press Enter before ~ : the escape character is only recognized as the first character on a line. You can also type ~ followed by ? to get a list of the supported escape sequences from the ssh client. (Thanks to the comment by Lukasz Stelmach .)
{ "source": [ "https://unix.stackexchange.com/questions/13996", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/250/" ] }
14,010
I ran into something a couple of weeks ago that I'd never seen before: A filesystem (ext3 I believe) installed to a storage device without a partition. In essence /dev/sdb was the entire filesystem. I know many filesystems can be extended into empty space, so doing this allows extending without dealing with LVM or some other kind of volume manager, but are there any other advantages for setting up storage this way? The specific case I saw was as the ephemeral data volume for a number crunching server, the boot and root volumes were traditional partitions on a different storage device entirely.
Pro: you don't waste one disk sector on a partition table. (Yay.) Pro: the disk can be used in an operating system that doesn't support PC-style partitions. (Like you're going to use one.) Con: this is unusual and may confuse co-sysadmins. (See?) Con: if you install another operating system, it might think that the disk contains garbage and make it easy to accidentally overwrite it by selecting the wrong disk — whereas operating systems generally leave alone partitions whose type they don't understand. Irrelevant: extending the filesystem is not easier if it's directly on the disk than if it's in a partition, nor vice versa. (Being on LVM would make it easier.) Conclusion: it works, but it's not a good idea.
{ "source": [ "https://unix.stackexchange.com/questions/14010", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/366/" ] }
14,014
I want to connect to my home server from work using NFS. I tried sshfs but some people say it's not as reliable as NFS. I know sshfs traffic is encrypted. But what about NFS? Can someone sniff my traffic and view the files I'm copying? I'm using NFSv4 in my LAN and it works great.
If you use NFSv4 with sec=krb5p , then it is secure. (That means use Kerberos 5 for authentication, and encrypt the connection for privacy.) But if you use NFS v3, or NFS v4 with sec=sys , then no, it's not secure at all. There might also be some concern with exposing the kerberos and rpc ports to the internet at large, just in case of unknown vulnerabilities.
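On the server side, requiring Kerberos with privacy is a matter of the sec= option in /etc/exports (the export path and subnet here are placeholders):
    /srv/export 192.0.2.0/24(rw,sync,sec=krb5p)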
{ "source": [ "https://unix.stackexchange.com/questions/14014", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2309/" ] }
14,027
So I was looking at this answer on stackoverflow and realized that my fonts aren't covering a whole lot of the utf-8 unicode spectrum (as I get lots of squares). Does anyone know a font that will cover all of that post?
The hands-down most comprehensive coverage would be Roman Czyborra's GNU Unicode Font project. It is intended to collect a complete and free 8×16/16×16 pixel Unicode font. It currently covers 34,445 characters (out of ~40,000+ defined characters). Most distributions have GNU Unifont in their repositories. Ed Trager has written a Unicode Font Guide For Free/Libre Open Source Operating Systems which collates geographic coverage of fonts and their associated licensing. The guide was last updated in 2008. Other fonts with good Unicode support include: DejaVu GNU FreeFont , worth noting is that the Serif contains the most glyphs in this family : Serif 10537 / Sans 6272 / Mono 4178
{ "source": [ "https://unix.stackexchange.com/questions/14027", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29/" ] }
14,056
I have seen on many blogs, using this command to enable IP forwarding while using many network security/sniffing tools on linux echo 1 > /proc/sys/net/ipv4/ip_forward Can anyone explain me in layman terms, what essentially does this command do? Does it turn your system into router?
"IP forwarding" is a synonym for "routing." It is called "kernel IP forwarding" because it is a feature of the Linux kernel. A router has multiple network interfaces. If traffic comes in on one interface that matches a subnet of another network interface, a router then forwards that traffic to the other network interface. So, let's say you have two NICs, one (NIC 1) is at address 192.168.2.1/24, and the other (NIC 2) is 192.168.3.1/24. If forwarding is enabled, and a packet comes in on NIC 1 with a "destination address" of 192.168.3.8, the router will resend that packet out of the NIC 2. It's common for routers functioning as gateways to the Internet to have a default route whereby any traffic that doesn't match any NICs will go through the default route's NIC. So in the above example, if you have an internet connection on NIC 2, you'd set NIC 2 as your default route and then any traffic coming in from NIC 1 that isn't destined for something on 192.168.2.0/24 will go through NIC 2. Hopefully there's other routers past NIC 2 that can further route it (in the case of the Internet, the next hop would be your ISP's router, and then their providers upstream router, etc.) Enabling ip_forward tells your Linux system to do this. For it to be meaningful, you need two network interfaces (any 2 or more of wired NIC cards, Wifi cards or chipsets, PPP links over a 56k modem or serial, etc.). When doing routing, security is important and that's where Linux's packet filter, iptables , gets involved. So you will need an iptables configuration consistent with your needs. Note that enabling forwarding with iptables disabled and/or without taking firewalling and security into account could leave you open to vulnerabilites if one of the NICs is facing the Internet or a subnet you don't have control over.
{ "source": [ "https://unix.stackexchange.com/questions/14056", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3198/" ] }
14,077
How can I check whether my CPU supports the AES-NI instruction set under Linux/UNIX?
Look in /proc/cpuinfo . If you have the aes flag then your CPU has AES support. You can use this command: grep aes /proc/cpuinfo If you have some output, which will be like flags : a bunch of flags aes another bunch of flags , then you have AES.
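If you just want a yes/no answer in a script, GNU grep's word matching avoids false positives from other flag names that happen to contain "aes":

    grep -qw aes /proc/cpuinfo && echo 'AES-NI supported' || echo 'AES-NI not found'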
{ "source": [ "https://unix.stackexchange.com/questions/14077", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
14,113
I'd like to set the terminal title to user@host so I can easily tell which machine I'm connected to from the window title. Is there a way to do this from SSH or from GNOME Terminal?
Yes. Here's an example for bash using PS1 that should be distro-agnostic: Specifically, the escape sequence \[\e]0; __SOME_STUFF_HERE__ \a\] is of interest. I've edited this to be set in a separate variable for more clarity. # uncomment for a colored prompt, if the terminal has the capability; turned # off by default to not distract the user: the focus in a terminal window # should be on the output of commands, not on the prompt force_color_prompt=yes if [ -n "$force_color_prompt" ]; then if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then # We have color support; assume it's compliant with Ecma-48 # (ISO/IEC-6429). (Lack of such support is extremely rare, and such # a case would tend to support setf rather than setaf.) color_prompt=yes else color_prompt= fi fi TITLEBAR='\[\e]0;\u@\h\a\]' # Same thing.. but with octal ASCII escape chars #TITLEBAR='\[\033]2;\u@\h\007\]' if [ "$color_prompt" = yes ]; then PS1="${TITLEBAR}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\W\[\033[00m\]\$ " else PS1="${TITLEBAR}\u@\h:\W\$ " fi unset color_prompt force_color_prompt Also note that there can be many ways of setting an xterm's title, depending on which terminal program you are using, and which shell. For example, if you're using KDE's Konsole, you can override the title setting by going to Settings -> Configure Profiles -> Edit Profile -> Tabs and setting the Tab title format and Remote tab title format settings. Additionally, you may want to check out: this "How to change the title of an xterm" FAQ for other shells this "Prompt Magic" tip for a good reference of the escape sequences that work in bash. this Bash Prompt HOWTO for a reference on ANSI Color escape sequences.
{ "source": [ "https://unix.stackexchange.com/questions/14113", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
14,120
I need to extract a single file from a ZIP file which I know the path to. Is there a command like the following: unzip -d . myarchive.zip path/to/zipped/file.txt Unfortunately, the above command extracts and recreates the entire path to the file at ./path/to/zipped/file.txt . Is there a way for me to simply pull the file out into a specified directory?
You can extract just the text to standard output with the -p option: unzip -p myarchive.zip path/to/zipped/file.txt >file.txt This won't extract the metadata (date, permissions, …), only the file contents (obviously, it only works for regular files, not symlinks, devices, directories...). That's the price to pay for the convenience of not having to move the file afterwards. Alternatively, mount the archive as a directory and just copy the file. With AVFS : mountavfs cp -p ~/.avfs"$PWD/myarchive.zip#"/path/to/zipped/file.txt . Or with fuse-zip : mkdir myarchive.d fuse-zip myarchive.zip myarchive.d cp -p myarchive.d/path/to/zipped/file.txt . fusermount -u myarchive.d; rmdir myarchive.d
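If you do want the file extracted to disk with its metadata, but without the directory tree, unzip's -j option "junks" the stored path components:

    # extract just file.txt into the current directory, keeping its timestamp
    unzip -j myarchive.zip path/to/zipped/file.txt -d .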
{ "source": [ "https://unix.stackexchange.com/questions/14120", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
14,129
Is there a way to set gtk-application-prefer-dark-theme for an application? This is normally set in the code by the application; apps such as Eye of GNOME and Totem turn it on. I want, as a user, to turn it on on a per-application basis. For gnome-terminal I normally use a white-text-on-black-background color scheme, and having the dark window border would improve the overall look. I also want to turn it on for vlc.
With gtk+ ≥ 3.12 you can load a specific theme and its variant (dark, light) on a per-application 1 basis via the environment variable GTK_THEME=theme:variant . As per the gtk+ reference manual : GTK_THEME. If set, makes GTK+ use the named theme instead of the theme that is specified by the gtk-theme-name setting [...] It is also possible to specify a theme variant to load, by appending the variant name with a colon, like this: GTK_THEME=Adwaita:dark. So, to load 2 the dark variant you would run: GTK_THEME=Adwaita:dark gedit Likewise, to achieve the opposite (when the default theme is dark), you load the light variant: GTK_THEME=Adwaita:light gedit Note that if you want to use it via a custom launcher ( .desktop file) you'll have to prepend env to the command in the Exec line: Exec=env GTK_THEME=Adwaita:dark eog %U 1: Worth noting that - as per the devs decision - newer gnome-terminal has its own configuration via menu > preferences and it ignores the theme. Also, since this is rather new stuff, some gtk+ 3 applications might not (yet) honor the GTK_THEME environment variable. 2: This doesn't seem to work if you already have a running instance of that application e.g. if nautilus is already running in dark mode then running GTK_THEME=Adwaita:light nautilus will open a new nautilus window but still in dark mode. I don't know if this is a feature or a bug...
{ "source": [ "https://unix.stackexchange.com/questions/14129", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4354/" ] }
14,143
I have several directories mounted through sshfs . I sometimes get disconnects from the server (not configurable by me). I usually mount the directories like this sshfs [email protected]:/home/user /mnt/example When a server disconnects, the sshfs subsystem doesn't umount / free the directory but instead locks it inaccessible. The mount is still visible when typing mount . When I type ls /mnt/example the process gets locked (also Ctrl + c doesn't help). I therefore do sudo umount -l /mnt/example # find pid of corresponding process: ps aux | grep example.com kill -9 <pid of locked sshfs process> Is there a better way to deal with this? Obviously sshfs should do the umount and clean up... Ideally it would reconnect automatically.
You can run sshfs with the "reconnect" option. We use sshfs with PAM/automount to share server files for each workstation in our network. We use -o reconnect as parameter for sshfs, mostly because our users suspended their computers and on wake sshfs would not reconnect (or respond, or anything). For example: sshfs [email protected]:/home/mvaldez/REMOTE /home/mvaldez/RemoteDocs -o reconnect,idmap=user,password_stdin,dev,suid Just a note, if the remote computer is really down, sshfs may become unresponsive for a long time.
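For reference, a sketch of the equivalent /etc/fstab entry (host, paths and key file are placeholders); the ServerAliveInterval/ServerAliveCountMax ssh options make the client notice a dead connection quickly so that reconnect can kick in:

    # /etc/fstab
    user@host:/home/user  /mnt/example  fuse.sshfs  reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,IdentityFile=/home/user/.ssh/id_rsa,_netdev  0 0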
{ "source": [ "https://unix.stackexchange.com/questions/14143", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6958/" ] }
14,149
When I edit files with vim there is always this in the cmd line area, under the status line: all the buffers' names. Is there any way I can remove it?
{ "source": [ "https://unix.stackexchange.com/questions/14149", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7842/" ] }
14,159
I've been trying to figure out the size of a window for use in a small script. My current technique is using wmctrl -lG to find out the dimensions. However, the problem is this: The x and y figures it gives are for the top left of the window decorations, while the height and width are for just the content area. This means that if the window decorations add 20px of height and 2px of width, wmctrl will report a window as being 640x480, even if it takes up 660x482 on screen. This is a problem because my script's next step would be to use that area to tell ffmpeg to record the screen. I would like to avoid hardcoding in the size of the window decorations from my current setup. What would suit is either a method to get the size of the window decorations so I can use them to figure out the position of the 640x480 content area, or a way to get the position of the content area directly, not that of the window decorations.
The following script will give you the top-left screen coordinates and size of the window (without any decoration); the output of xwininfo -id $(xdotool getactivewindow) contains enough information for you. #!/bin/bash # Get the coordinates of the active window's # top-left corner, and the window's size. # This excludes the window decoration. unset x y w h eval $(xwininfo -id $(xdotool getactivewindow) | sed -n -e "s/^ \+Absolute upper-left X: \+\([0-9]\+\).*/x=\1/p" \ -e "s/^ \+Absolute upper-left Y: \+\([0-9]\+\).*/y=\1/p" \ -e "s/^ \+Width: \+\([0-9]\+\).*/w=\1/p" \ -e "s/^ \+Height: \+\([0-9]\+\).*/h=\1/p" ) echo -n "$x $y $w $h"
{ "source": [ "https://unix.stackexchange.com/questions/14159", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/131/" ] }
14,160
When I open this ssh tunnel: ssh -nXNT -p 22 localhost -L 0.0.0.0:8984:remote:8983 I get this error when trying to access the HTTP server running on localhost:8984: channel 1: open failed: administratively prohibited: open failed What does this error mean, and on which machine can you fix the problem?
channel 1: open failed: administratively prohibited: open failed The above message refers to your SSH server rejecting your SSH client's request to open a side channel. This typically comes from -D , -L or -w , as separate channels in the SSH stream are required to ferry the forwarded data across. Since you are using -L (also applicable to -D ), there are two options in question that are causing your SSH server to reject this request: AllowTcpForwarding (as Steve Buzonas mentioned) PermitOpen These options can be found in /etc/ssh/sshd_config . You should ensure that: AllowTCPForwarding is either not present, is commented out, or is set to yes PermitOpen is either not present, is commented out, or is set to any [1] Additionally, if you are using an SSH key to connect, you should check that the entry corresponding to your SSH key in ~/.ssh/authorized_keys does not have no-port-forwarding or permitopen statements [2] . Not relevant to your particular command, but somewhat relevant to this topic as well, is the PermitTunnel option if you're attempting to use the -w option. [1] Full syntax in the sshd_config(5) manpage. [2] Full syntax in the authorized_keys(5) manpage.
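Concretely, the permissive settings in /etc/ssh/sshd_config look like this (reload or restart sshd after editing):

    AllowTcpForwarding yes
    PermitOpen any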
{ "source": [ "https://unix.stackexchange.com/questions/14160", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2323/" ] }
14,165
Is there a command that will list all partitions along with their labels? sudo fdisk -l and sudo parted -l don't show labels by default. EDIT: (as per comment below) I'm talking about ext2 labels - those that you can set in gparted upon partitioning. EDIT2: The intent is to list unmounted partitions (so I know which one to mount).
With udev, you can use ls -l /dev/disk/by-label to show the symlinks by label to at least some partition device nodes. Not sure what the logic of inclusion is, possibly the existence of a label.
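Two other common ways to list labels, which also work for unmounted partitions:

    lsblk -o NAME,SIZE,FSTYPE,LABEL,MOUNTPOINT   # tree of block devices with labels
    sudo blkid                                   # prints LABEL= and UUID= for each partition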
{ "source": [ "https://unix.stackexchange.com/questions/14165", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8069/" ] }
14,187
I'm trying to copy a file from one of my local machines to a remote machine. Copying a file with a size of up to 1405 bytes works fine. When I try to scp a larger file, the file gets copied but the scp process hangs and doesn't exit. I have to hit Ctrl-C to return to the shell. I have observed the same behavior with FTP as well. Any ideas about what might be causing this?
This definitely sounds like MTU problems (as @Konerak pointed out); here is how I would test it: ip link set eth0 mtu 1400 This temporarily sets the allowed size for network packets to 1400 on the network interface eth0 (you might need to adjust the name). Your system will then split all packets above this size before sending them on to the network. If this fixes the scp command, you need to find the problem within the network or make this ugly fix permanent ;)
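To confirm an MTU problem without changing any settings, you can probe the path with ping on Linux: -M do forbids fragmentation, and 1472 bytes of payload plus 28 bytes of IP/ICMP headers makes a full 1500-byte packet:

    ping -c 3 -M do -s 1472 remote-host   # succeeds only if the path carries MTU 1500
    ping -c 3 -M do -s 1400 remote-host   # if only this one works, the path MTU is smaller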
{ "source": [ "https://unix.stackexchange.com/questions/14187", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29843/" ] }
14,191
How do I copy an entire directory into a directory of the same name without replacing the content in the destination directory? (instead, I would like to add to the contents of the destination folder)
Use rsync , and pass -u if you want to only update files that are newer in the original directory, or --ignore-existing to skip all files that already exist in the destination. rsync -au /local/directory/ host:/remote/directory/ rsync -a --ignore-existing /local/directory/ host:/remote/directory/ (Note the / on the source side: without it rsync would create /remote/directory/directory .)
{ "source": [ "https://unix.stackexchange.com/questions/14191", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
14,266
My RPROMPT is set to display svn info using vcs_info . It reads RPROMPT=${vcs_info_msg_0_} . vcs_info is called from precmd() . However, RPROMPT doesn't update when I change directories; it only changes if I reload the prompt (either by source ~/.zshrc or prompt ). Is there any way to change this behaviour?
Try putting single quotes around the variable value at assignment to delay evaluation: RPROMPT='${vcs_info_msg_0_}'
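Note that for zsh to expand the parameter each time the prompt is redrawn, the prompt_subst option must also be enabled, so the full recipe is roughly:

    setopt prompt_subst
    RPROMPT='${vcs_info_msg_0_}'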
{ "source": [ "https://unix.stackexchange.com/questions/14266", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
14,270
I have two processes foo and bar , connected with a pipe: $ foo | bar bar always exits 0; I'm interested in the exit code of foo . Is there any way to get at it?
bash and zsh have an array variable that holds the exit status of each element (command) of the last pipeline executed by the shell. If you are using bash , the array is called PIPESTATUS (case matters!) and the array indicies start at zero: $ false | true $ echo "${PIPESTATUS[0]} ${PIPESTATUS[1]}" 1 0 If you are using zsh , the array is called pipestatus (case matters!) and the array indices start at one: $ false | true $ echo "${pipestatus[1]} ${pipestatus[2]}" 1 0 To combine them within a function in a manner that doesn't lose the values: $ false | true $ retval_bash="${PIPESTATUS[0]}" retval_zsh="${pipestatus[1]}" retval_final=$? $ echo $retval_bash $retval_zsh $retval_final 1 0 Run the above in bash or zsh and you'll get the same results; only one of retval_bash and retval_zsh will be set. The other will be blank. This would allow a function to end with return $retval_bash $retval_zsh (note the lack of quotes!).
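If you only care whether anything in the pipeline failed, rather than which element, both shells also offer pipefail, which makes $? reflect the last command in the pipeline to exit non-zero:

    set -o pipefail
    false | true
    echo $?    # prints 1 instead of 0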
{ "source": [ "https://unix.stackexchange.com/questions/14270", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73/" ] }
14,300
How do I move an existing pane into another window in tmux when I have multiple windows, and vice versa? I'm coming from screen , where I can switch to the pane and then switch windows until I get to the one I want; tmux does not seem to allow this.
The command to do this is join-pane in tmux 1.4. join-pane [-dhv] [-l size | -p percentage] [-s src-pane] [-t dst-pane] (alias: joinp) Like split-window, but instead of splitting dst-pane and creating a new pane, split it and move src-pane into the space. This can be used to reverse break-pane. To simplify this, I have these binds in my .tmux.conf for that: # pane movement bind-key j command-prompt -p "join pane from:" "join-pane -s '%%'" bind-key s command-prompt -p "send pane to:" "join-pane -t '%%'" The first grabs the pane from the target window and joins it to the current, the second does the reverse. You can then reload your tmux session by running the following from within the session: $ tmux source-file ~/.tmux.conf
{ "source": [ "https://unix.stackexchange.com/questions/14300", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8012/" ] }
14,312
How can I restrict a user on the SSH server to allow them only the privileges for SSH TUNNELING? I.e., so that they cannot run commands even if they log in via SSH. My Linux servers are Ubuntu 11.04 and OpenWrt.
On the server side, you can restrict this by setting their user shell to /bin/true . This will allow them to authenticate, but not actually run anything since they don't get a shell to run it in. This means they will be limited to whatever subset of things SSH is able to offer them. If it offers port forwarding, they will still be able to do that. On the client side, you will probably want to connect with the -N option. This stops the client from asking for a remote command such as a shell; it just stops after the authentication part is done. Thanks to commenters for pointing this out.
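If the user authenticates with a key, the same restriction (optionally limited to specific tunnel destinations) can be enforced per-key in ~/.ssh/authorized_keys; the key blob and destination below are placeholders:

    no-pty,no-agent-forwarding,no-X11-forwarding,permitopen="localhost:5432" ssh-rsa AAAA... tunnel-only-key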
{ "source": [ "https://unix.stackexchange.com/questions/14312", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
14,345
I have a unix installation that's supposed to be usable both as a chroot and as a standalone system. If it's running as a chroot, I don't want to run any service (cron, inetd, and so on), because they would conflict with the host system or be redundant. How do I write a shell script that behaves differently depending on whether it's running in a chroot? My immediate need is a modern Linux system, with /proc mounted in the chroot, and the script is running as root, but more portable answers are welcome as well. (See How do I tell I'm running in a chroot if /proc is not mounted? for the case of Linux without /proc .) More generally, suggestions that work for other containment methods would be interesting. The practical question is, is this system supposed to be running any services? (The answer being no in a chroot, and yes in a full-fledged virtual machines; I don't know about intermediate cases such as jails or containers.)
What I've done here is to test whether the root of the init process (PID 1) is the same as the root of the current process. Although /proc/1/root is always a link to / (unless init itself is chrooted, but that's not a case I care about), following it leads to the “master” root directory. This technique is used in a few maintenance scripts in Debian, for example to skip starting udev after installation in a chroot. if [ "$(stat -c %d:%i /)" != "$(stat -c %d:%i /proc/1/root/.)" ]; then echo "We are chrooted!" else echo "Business as usual" fi (By the way, this is yet another example of why chroot is useless for security if the chrooted process has root access. Non-root processes can't read /proc/1/root , but they can follow /proc/1234/root if there is a running process with PID 1234 running as the same user.) If you do not have root permissions, you can look at /proc/1/mountinfo and /proc/$$/mountinfo (briefly documented in filesystems/proc.txt in the Linux kernel documentation ). This file is world-readable and contains a lot of information about each mount point in the process's view of the filesystem. The paths in that file are restricted by the chroot affecting the reader process, if any. If the process reading /proc/1/mountinfo is chrooted into a filesystem that's different from the global root (assuming pid 1's root is the global root), then no entry for / appears in /proc/1/mountinfo . If the process reading /proc/1/mountinfo is chrooted to a directory on the global root filesystem, then an entry for / appears in /proc/1/mountinfo , but with a different mount id. Incidentally, the root field ( $4 ) indicates where the chroot is in its master filesystem. [ "$(awk '$5=="/" {print $1}' </proc/1/mountinfo)" != "$(awk '$5=="/" {print $1}' </proc/$$/mountinfo)" ] This is a pure Linux solution. It may be generalizable to other Unix variants with a sufficiently similar /proc (Solaris has a similar /proc/1/root , I think, but not mountinfo ).
{ "source": [ "https://unix.stackexchange.com/questions/14345", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/885/" ] }
14,353
In vim, if I'm working on a Python script, I will commonly type: :! python this_script.py to execute the script. Is there a shortcut for the name of the current file? If not, can I easily make one? I'm new at vim, and I'm not sure how to google for this.
You can just use % for current file. This command should serve your purpose: :! python %
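If you run the script often, a mapping in your vimrc saves typing; this example (the key choice is arbitrary) writes the buffer first so you always run the saved version:

    nnoremap <F5> :w<CR>:!python %<CR>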
{ "source": [ "https://unix.stackexchange.com/questions/14353", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5045/" ] }
14,354
I'm writing an application to read/write to/from a serial port in Fedora14, and it works great when I run it as root. But when I run it as a normal user I'm unable to obtain the privileges required to access the device (/dev/ttySx). That's kind of crappy because now I can't actually debug the damn thing using Eclipse. I've tried running Eclipse with sudo but it corrupts my workspace and I can't even open the project. So I'd like to know if it's possible to lower the access requirements to write to /dev/ttySx so that any normal user can access it. Is this possible?
The right to access a serial port is determined by the permissions of the device file (e.g. /dev/ttyS0 ). So all you need to do is either arrange for the device to be owned by you, or (better) put yourself in the group that owns the device, or (if Fedora supports it, which I think it does) arrange for the device to belong to the user who's logged in on the console. For example, on my system (not Fedora), /dev/ttyS0 is owned by the user root and the group dialout , so to be able to access the serial device, I would add myself to the dialout group: usermod -a -G dialout $USER
{ "source": [ "https://unix.stackexchange.com/questions/14354", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8030/" ] }
14,384
How can I find out whether my CPU supports 64-bit operating systems under Linux, e.g. Ubuntu or Fedora?
Execute: grep flags /proc/cpuinfo Find 'lm' flag. If it's present, it means your CPU is 64bit and it supports 64bit OS. 'lm' stands for long mode. Alternatively, execute: grep flags /proc/cpuinfo | grep " lm " Note the spaces in " lm " . If it gives any output at all, your CPU is 64bit. Update: You can use the following in terminal too: lshw -C processor | grep width This works on Ubuntu, not sure if you need to install additional packages for Fedora.
{ "source": [ "https://unix.stackexchange.com/questions/14384", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
14,388
Are there any GUI's for Linux that doesn't use X11? Since X has very poor security :O e.g.: Ubuntu, Fedora - what else are there? Goal: having a Desktop Environment without X. - what are the solutions? (e.g.: watch Flash with Google Chrome, Edit docs with LibreOffice, etc., not using text-based webbrowsers) Maybe with framebuffers? But how? :O
Note: apart from this paragraph, this answer was last updated in 2016. Since then, Wayland has become a viable alternative to X11, although it's still mostly used as a backend for X11. No. X is the only usable GUI on Linux. There have been competing projects in the past, but none that gained any traction. Writing something like X is hard, and it takes a lot of extra work to get something usable in practice: you need hardware drivers, and you need applications. Since existing applications speak X11, you need either a translation layer (so… have you written something new, or just a new X server?) or to write new applications from scratch. There is one ongoing project that aims to supplant X: Mir . It's backed by Canonical, who want to standardize on it for Ubuntu — but it hasn't gained a lot of traction outside Ubuntu, so it may not succeed more than Wayland (which was designed for 3D performance, not for security) did. Mir does aim to improve on the X security model by allowing applications limited privileges (e.g. applications have to have some kind of privilege to mess with other applications' input and output); whether that scales when people want to take screenshots and define input methods remains to be seen. You can run a few graphical applications on Linux without X with SVGAlib . However that doesn't bring any extra security either (in addition to numerous other problems, such as poor hardware support, poor usability, and small number of applications). SVGAlib has had known security holes, and it doesn't get much attention, so it probably has many more. X implementations get a lot more attention, so you can at least mostly expect that the implementation matches the security model. X has a very easily understood security model: any application that's connected to the X server can do anything. (That's a safe approximation, but a fairly realistic one.) You can build a more secure system on top of this, simply by isolating untrusted applications: put them in their own virtual environment, displaying on their own X server, and show that X server's display in a window. You'll lose functionality from these applications, for example you have to run things like window managers and clipboard managers in the host environment. There's at least one usable project based on this approach: Qubes .
{ "source": [ "https://unix.stackexchange.com/questions/14388", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
14,409
I've got a question concerning the block size and cluster size. Based on what I have read about this, I assume the following: The block size is the physical size of a block, mostly 512 bytes. There is no way to change this. The cluster size is the minimal size of a block that is read and writable by the OS. If I create a new filesystem e.g. ext3, I can specify this minimal block size with the switch -b. Almost all programs like dumpe2fs, mke2fs use block size as a name for cluster size. If I have got the following output: $ stat test File: `test' Size: 13 Blocks: 4 IO Block: 2048 regular file Device: 700h/1792d Inode: 15 Links: 1 Is it correct that the size is the actual space in bytes, blocks are the physically used blocks (512 bytes for each) and IO block relates to the block size specified when creating the FS?
I think you're confused, possibly because you've read several documents that use different terminology. Terms like “block size” and “cluster size” don't have a universal meaning, even within the context of filesystem literature. Filesystems For ext2 or ext3 , the situation is relatively simple: each file occupies a certain number of blocks . All blocks on a given filesystem have the same size, usually one of 1024, 2048 or 4096 bytes. A file¹ whose size is between N blocks plus one byte and N+1 blocks occupies N+1 blocks. That block size is what you specify with mke2fs -b . There is no separate notion of clusters. The FAT filesystem used in particular by MS-DOS and early versions of Windows has a similarly simple space allocation. What ext2 calls blocks, FAT calls clusters ; the concept is the same. Some filesystems have a more sophisticated allocation scheme: they have fixed-size blocks, but can use the same block to store the last few bytes of more than one file. This is known as block suballocation ; Reiserfs and Btrfs do it, but not ext3 or even ext4. Utilities Unix utilities often use the word “block” to mean an arbitrarily-sized unit, typically 512 bytes or 1kB. This usage is unrelated to any particular filesystem or disk hardware. Historically, the 512B block did come about because disks and filesystems at the time often operated in 512B chunks, but the modern usage is just arbitrary. Traditional unix utilities and interfaces still use 512B blocks sometimes, though 1kB blocks are now often preferred . You need to check the documentation of each utility to know what size of block it's using (some have a switch, e.g. du -B or df -B on Linux). In the GNU/Linux stat utility, the blocks figure is the number of 512B blocks used by the file. The IO Block figure is the preferred size for file input-output, which is in principle unrelated but usually an indication of the underlying filesystem's block size (or cluster size if that's what you want to call it). Here, you have a 13-byte file, which is occupying one block on the ext3 filesystem with a block size of 2048; therefore the file occupies 4 512-byte units (called “blocks” by stat ). Disks Most disks present an interface that shows the disk as a bunch of sectors . The disk can only write or read a whole sector, not individual bits or bytes. Most hard disks have 512-byte sectors, though 4kB-sector disks started appearing a couple of years ago. The disk sector size is not directly related to the filesystem block size, but having a block be a whole number of sectors is better for performance. ¹ Exception: sparse files save space .
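To see the block size your ext2/ext3 filesystem was actually created with, you can read it from the superblock (the device name here is an example):

    sudo tune2fs -l /dev/sda1 | grep -i 'block size'
    # or, for the filesystem containing the current directory:
    stat -f .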
{ "source": [ "https://unix.stackexchange.com/questions/14409", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8060/" ] }
14,479
Could somebody explain to me the difference between kill and killall ? Why doesn't killall see what ps shows? # ps aux |grep db2 root 1123 0.0 0.8 841300 33956 pts/1 Sl 11:48 0:00 db2wdog db2inst1 1125 0.0 3.5 2879496 143616 pts/1 Sl 11:48 0:02 db2sysc root 1126 0.0 0.6 579156 27840 pts/1 S 11:48 0:00 db2ckpwd root 1127 0.0 0.6 579156 27828 pts/1 S 11:48 0:00 db2ckpwd root 1128 0.0 0.6 579156 27828 pts/1 S 11:48 0:00 db2ckpwd # killall db2ckpwd db2ckpwd: no process found # kill -9 1126 # kill -9 1127 # kill -9 1128 System is SuSe 11.3 (64 bit); kernel 2.6.34-12; procps version 3.2.8; killall from PSmisc 22.7; kill from GNU coreutils 7.1
Is this on Linux? There are actually a few subtly different versions of the command name that are used by ps , killall , etc. The two main variants are: 1) the long command name, which is what you get when you run ps u ; and 2) the short command name, which is what you get when you run ps without any flags. Probably the biggest difference happens if your program is a shell script or anything that requires an interpreter, e.g. Python, Java, etc. Here's a really trivial script that demonstrates the difference. I called it mycat : #!/bin/sh cat After running it, here's the two different types of ps . Firstly, without u : $ ps -p 5290 PID TTY ... CMD 5290 pts/6 ... mycat Secondly, with u : $ ps u 5290 USER PID ... COMMAND mikel 5290 ... /bin/sh /home/mikel/bin/mycat Note how the second version starts with /bin/sh ? Now, as far as I can tell, killall actually reads /proc/<pid>/stat , and grabs the second word in between the parens as the command name, so that's really what you need to be specifying when you run killall . Logically, that should be the same as what ps without the u flag says, but it would be a good idea to check. Things to check: what does cat /proc/<pid>/stat say the command name is? what does ps -e | grep db2 say the command name is? do ps -e | grep db2 and ps au | grep db2 show the same command name? Notes If you're using other ps flags too, then you might find it simpler to use ps -o comm to see the short name and ps -o cmd to see the long name. You also might find pkill a better alternative. In particular, pkill -f tries to match using the full command name, i.e. the command name as printed by ps u or ps -o cmd .
{ "source": [ "https://unix.stackexchange.com/questions/14479", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7221/" ] }
14,483
If I am in remote directory on different server via ssh I want to do to directory like /var/lib/edumate/backup without typing the whole path. Is there any way to do that in mc?
In MC, favourites (bookmarks) are stored in the directory hotlist. Hotlist The Directory hotlist command shows the labels of the directories in the directory hotlist. The Midnight Commander will change to the directory corresponding to the selected label. From the hotlist dialog, you can remove already created label/directory pairs and add new ones. To add new directories quickly, you can use the Add to hotlist command ( Ctrl + x , h ), which adds the current directory into the directory hotlist, asking just for the label for the directory. This makes cd to often used directories faster. You may consider using the CDPATH variable as described in internal cd command description. [1] You can also use the hotlist to store frequently accessed ftp sites: if you access a site regularly, add it to your directory hotlist for faster access. Go to Command menu - Directory Hotlist - add by either typing it in, or if you are connected in a panel already, simply Add Current. Access the list with Ctrl + \ .[2] http://linux.die.net/man/1/mc http://www.trembath.co.za/mctutorial.html
{ "source": [ "https://unix.stackexchange.com/questions/14483", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7221/" ] }
14,489
Why would someone choose FreeBSD over Linux? What are the advantages of FreeBSD compared to Linux? (My shared hosting provider uses FreeBSD.)
If you want to know what's different so you can use the system more efficiently, here is a commonly referenced introduction to BSD to people coming from a Linux background . If you want more of the historical context for this decision, I'll just take a guess as to why they chose FreeBSD. Around the time of the first dot-com bubble, FreeBSD 4 was extremely popular with ISPs. This may or may not have been related to the addition of kqueue . The Wikipedia page describes the feelings for FreeBSD 4 thusly: "…widely regarded as one of the most stable and high performance operating systems of the whole Unix lineage." FreeBSD in particular has added other features over time which would appeal to hosting providers, such as jail and ZFS support. Personally, I really like the BSD systems because they just feel like they fit together better than most Linux distros I've used. Also, the documentation provided directly in the various handbooks, etc. is outstanding. If you're going to be using FreeBSD, I highly recommend the FreeBSD Handbook .
{ "source": [ "https://unix.stackexchange.com/questions/14489", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2880/" ] }
14,560
I have a 4GB SD card with some family pictures on it that I need to recover. When I insert the card into my card reader, it shows up as an unknown 32MB device (as /dev/sde ) and cannot be mounted. When inserting back into the camera (a Nikon D60), it says the cards needs to be formatted (as does inserting it into a Windows machine). I want to recover all of the pictures on the card (there were others before the family pictures) because I don't know how many I took or their exact sizes (but I believe they were all JPEGs). The card should be formatted as a FAT32 filesystem. What Linux or Unix utilities are available to recover the files? Can I do it myself or do I need to seek professional help? Edit: It appears that my card reader has damaged the card in some way, making it unreadable and and unformattable. When I checked another card that was the exact same (save for no files), it "ruined" the second one. I would like to use the second card again, so is there a tool to format a damaged card that doesn't know (or cannot report properly) how large it is?
First, from your experience with the second card, it seems that your reader is damaged and now damages the cards you insert into it. Stop using that reader immediately, and try to recover the card with another reader. If your data is at all valuable, try to get a brand-name reader with better quality than a bottom-price one. If the card is merely partly unreadable and not completely unreadable, first try to copy what you can from the card to an image file. Don't use dd for this as it'll stop reading on the first error. Use tools such as dd_rescue or ddrescue . Both tools try to grab as much data as possible from the disk. Example usage ( /dev/sdc being the device corresponding to the card; if you don't know which one it is, run cat /proc/partitions and pick the one that seems to have the right size): ddrescue -dr3 /dev/sdc card.image logfile Since it looks like the filesystem structure is damaged (your OSes offer to format the drive because they don't see a valid filesystem on it), you'll have to try to recover the files individually. Fortunately, image files start with a recognizable header, and there are many existing carving tools that recognize images: Foremost , MagicRescue , PhotoRec (from the makers of TestDisk ), RecoverJPEG , … Most of these tools are available on typical unix distributions. But if you prefer, you can run a special-purpose distribution or other live CD including recovery tools such as SysRescueCD , Knoppix , CAINE …
{ "source": [ "https://unix.stackexchange.com/questions/14560", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
14,589
I am trying to install a 3rd-party RPM package on RHEL5 which depends on version 3.4 of sqlite. According to Yum I already have 3.3.6 installed. Is there a way to list the installed packages that depend on sqlite 3.3.6?
The rpm option you want is: rpm -q --whatrequires sqlite (Note: --installed , which has been suggested elsewhere, is not a valid rpm option.)
{ "source": [ "https://unix.stackexchange.com/questions/14589", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2163/" ] }
14,640
I know I have done this before, so I'm sure it's possible, I just forget how to do it. There's a way to tell convert to grab a specific page of a PDF, and I'd like to keep the format of that page as PDF.
ImageMagick is a tool for bitmap images, which most PDFs aren't. If you use it, it will rasterize the data, which is often not desirable. Pdftk can extract one or more pages from a PDF file. pdftk A=input.pdf cat A42 A43 output pages_42_43.pdf If you have a LaTeX installation with PDFLaTeX, you can use pdfpages . There's a shell wrapper for pdfpages, pdfjam . pdfjam -o pages_42_43.pdf input.pdf 42,43 Another possibility (overkill here, but useful for requirements more complex than one page) is Python with the PyPdf library; note that getPage is zero-based, so indices 41 and 42 correspond to pages 42 and 43. #!/usr/bin/env python import sys from pyPdf import PdfFileWriter, PdfFileReader input = PdfFileReader(sys.stdin) output = PdfFileWriter() for i in [41, 42]: output.addPage(input.getPage(i)) output.write(sys.stdout)
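If you have qpdf available, it offers a concise alternative that uses 1-based page numbers:

    qpdf --empty --pages input.pdf 42-43 -- pages_42_43.pdf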
{ "source": [ "https://unix.stackexchange.com/questions/14640", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1389/" ] }
14,684
I can use the "script" command to record an interactive session at the command line. However, this includes all control characters and colour codes. I can remove control characters (like backspace) with "col -b", but I can't find a simple way to remove the colour codes. Note that I want to use the command line in the normal way, so don't want to disable colours there - I just want to remove them from the script output. Also, I know can play around and try find a regexp to fix things up, but I am hoping there is a simpler (and more reliable - what if there's a code I don't know about when I develop the regexp?) solution. To show the problem: spl62 tmp: script Script started, file is typescript spl62 lepl: ls add-licence.sed build-example.sh commit-test push-docs.sh add-licence.sh build.sh delete-licence.sed setup.py asn build-test.sh delete-licence.sh src build-doc.sh clean doc-src test.ini spl62 lepl: exit Script done, file is typescript spl62 tmp: cat -v typescript Script started on Thu 09 Jun 2011 09:47:27 AM CLT spl62 lepl: ls^M ^[[0m^[[00madd-licence.sed^[[0m ^[[00;32mbuild-example.sh^[[0m ^[[00mcommit-test^[[0m ^[[00;32mpush-docs.sh^[[0m^M ^[[00;32madd-licence.sh^[[0m ^[[00;32mbuild.sh^[[0m ^[[00mdelete-licence.sed^[[0m ^[[00msetup.py^[[0m^M ^[[01;34masn^[[0m ^[[00;32mbuild-test.sh^[[0m ^[[00;32mdelete-licence.sh^[[0m ^[[01;34msrc^[[0m^M ^[[00;32mbuild-doc.sh^[[0m ^[[00;32mclean^[[0m ^[[01;34mdoc-src^[[0m ^[[00mtest.ini^[[0m^M spl62 lepl: exit^M Script done on Thu 09 Jun 2011 09:47:29 AM CLT spl62 tmp: col -b < typescript Script started on Thu 09 Jun 2011 09:47:27 AM CLT spl62 lepl: ls 0m00madd-licence.sed0m 00;32mbuild-example.sh0m 00mcommit-test0m 00;32mpush-docs.sh0m 00;32madd-licence.sh0m 00;32mbuild.sh0m 00mdelete-licence.sed0m 00msetup.py0m 01;34masn0m 00;32mbuild-test.sh0m 00;32mdelete-licence.sh0m 01;34msrc0m 00;32mbuild-doc.sh0m 00;32mclean0m 01;34mdoc-src0m 00mtest.ini0m spl62 lepl: exit Script done on Thu 09 Jun 2011 09:47:29 AM CLT
The following script should filter out all ANSI/VT100/xterm control sequences (based on ctlseqs ). Minimally tested, please report any under- or over-match. #!/usr/bin/env perl ## uncolor — remove terminal escape sequences such as color changes while (<>) { s/ \e[ #%()*+\-.\/]. | \e\[ [ -?]* [@-~] | # CSI ... Cmd \e\] .*? (?:\e\\|[\a\x9c]) | # OSC ... (ST|BEL) \e[P^_] .*? (?:\e\\|\x9c) | # (DCS|PM|APC) ... ST \e. //xg; print; } Known issues: Doesn't complain about malformed sequences. That's not what this script is for. Multi-line string arguments to DCS/PM/APC/OSC are not supported. Bytes in the range 128–159 may be parsed as control characters, though this is rarely used. Here's a version which parses non-ASCII control characters (this will mangle non-ASCII text in some encodings including UTF-8). #!/usr/bin/env perl ## uncolor — remove terminal escape sequences such as color changes while (<>) { s/ \e[ #%()*+\-.\/]. | (?:\e\[|\x9b) [ -?]* [@-~] | # CSI ... Cmd (?:\e\]|\x9d) .*? (?:\e\\|[\a\x9c]) | # OSC ... (ST|BEL) (?:\e[P^_]|[\x90\x9e\x9f]) .*? (?:\e\\|\x9c) | # (DCS|PM|APC) ... ST \e.|[\x80-\x9f] //xg; print; }
{ "source": [ "https://unix.stackexchange.com/questions/14684", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8220/" ] }
14,746
In vi, I can use o or O to add a blank line and go into insertion mode. But what if I want to stay in command mode, is there a command for this? In googling, I'm seeing suggestions to add stuff to my vimrc, but it seems like there should be an easier way (that will always work.)
According to the VIM FAQ you can use the :put command: 12.15. How do I insert a blank line above/below the current line without entering insert mode? You can use the ":put" ex command to insert blank lines. For example, try :put ='' :put! ='' For more information, read :help :put but then really it's easier to add: map <Enter> o<ESC> map <S-Enter> O<ESC> to your .vimrc . This way you can press Enter or Shift-Enter in normal mode to insert a blank line below or above current line. Of course substitute <Enter> and <S-Enter> with your preferred keys.
{ "source": [ "https://unix.stackexchange.com/questions/14746", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5045/" ] }
14,838
I am working with some text that is full of stuff between brackets [] that I don't want. Since I can delete the brackets myself, I don't need the one-liner to do that for me, but I do need a one-liner that will remove everything between them. What about parentheses () instead of brackets?
Replace [some text] by the empty string. Assuming you don't want to parse nested brackets, the some text can't contain any brackets. sed -e 's/\[[^][]*\]//g' Note that in the bracket expression [^][] to match anything but [ or ] , the ] must come first. Normally a ] would end the character set, but if it's the first character in the set (here, after the ^ complementation character), the ] stands for itself. If you do want to parse nested brackets, or if the bracketed text can span multiple lines, sed isn't the right tool.
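For the parentheses case mentioned in the question, the same idea works: in a sed basic regular expression, unescaped ( and ) match themselves. And although sed is awkward for nesting, looping the substitution strips nested groups from the inside out:

    sed -e 's/([^()]*)//g'                    # remove (...) when there is no nesting
    sed -e ':a' -e 's/([^()]*)//g' -e 'ta'    # repeat until nothing matches, handling nesting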
{ "source": [ "https://unix.stackexchange.com/questions/14838", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1389/" ] }
14,858
I'm looking for a package that provides a specific binary, so I can install it. how can I search to find out what packages provide this binary? (note: I know there's at least one tool that does this, but I have forgotten its name.)
Since pacman 5.0, there is built-in functionality for searching the database with the -F option. First update the database: sudo pacman -Fy Then you can see which package contains $filename with pacman -F $filename if you are searching for an exact file name or full path, or pacman -Fx $expr to have $expr interpreted as a regular expression. Since you knew you were looking for an equivalent of apt-file , you could have looked it up in the Pacman Rosetta . Alternatively, you can use pkgfile . Install it with pacman -S pkgfile , then run sudo pkgfile -u to update the database. To see what package contains $filename , run pkgfile $filename
{ "source": [ "https://unix.stackexchange.com/questions/14858", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29/" ] }
14,867
As I read in Wikipedia, Unix started as a revolutionary operating system written mostly in C, allowing it to be ported to and used on different hardware. Descendants of Unix are mentioned next, mostly BSD. Clones of Unix, Minix/Linux, are discussed as well. But what happened to the original Unix operating system? Does it still exist as an operating system, or is it nothing more than a standard like POSIX nowadays? Do note that I am aware of this answer but it has no mention of the fate of the original Unix beyond the derived works.
We can distinguish UNIX the trademark from Unix the code-base. AT&T Unix was initially developed at Bell Labs, owned by AT&T . This Unix team became AT&T's Unix System Laboratories ( USL ) and produced Unix System V (Roman numeral for five) or SysV for short. The University of California at Berkeley (UCB) also licenced Unix for academic use; their Computer Systems Research Group (CSRG) later made many important changes and additions (notably TCP/IP) in their Berkeley Software Distribution ( BSD ) which were later incorporated into many descendants of Unix, leading to the BSD vs SysV split. Ultimately a lot of the BSD changes were back-ported into SysV (which we can consider the main "ancestral Unix" code base). Along the way, many different businesses have licenced this code-base (at various stages in its development) and used it as the basis of their proprietary Unix operating systems - AIX, HPUX, IRIX, Solaris, Ultrix and dozens of others. Novell (Attachmate) USL was purchased by Novell . At this time, the ancestral Unix code was known as Unix System V Release 4 - or SVR4 for short. Novell named their product Unixware to complement the name of their legacy network OS Netware. Novell have since been acquired by Attachmate. The Santa Cruz Operation Novell eventually sold their Unix business to an old SVR3.2 licensee, The Santa Cruz Operation ( SCO ), whose main business up to that point was selling a product named OpenServer that was based on Unix SVR3.2. Novell (since bought by Attachmate) still own some rights‡ to Unix but do not do any work on the source code. Caldera / The SCO Group / TSG Group Inc The Santa Cruz Operation later sold their Unix business to a Linux company, Caldera, who later renamed themselves to The SCO Group (sometimes referred to as new SCO or SCOG ) and who had a disastrous failure of leadership leading to chapter-11 bankruptcy and sale of the Unix business to UnXis , a business formed for this purpose. Subsequently The SCO Group were reorganised into TSG Group Inc and TSG Operations Inc. They have no role regarding maintenance of the ancestral Unix code base. In August 2012 TSG Group Inc converted to chapter 7 bankruptcy. UnXis / Xinuos So now UnXis are responsible for marketing and developing/maintaining Unixware - the ancestral AT&T Unix code base. Because the Santa Cruz Operation (old SCO) originally ported† Unix to the x86 platform, I believe x86 and x86_64 are the only target platforms that UnXis directly support. On June 12 2013, UnXis announced it had been renamed Xinuos. Xinuos no longer seriously support installation of their ancestral Unix products on bare metal. Their SVR3.2 and SVR5 based products are offered as virtual machines (VM) to be run under their "OpenServer 10" product, which is a derivative of FreeBSD. So the body of code with the greatest claim to be the prime descendant of the ancestral Unix code is now, in practical terms, not much more than a compatibility layer between other operating systems and legacy applications. † Microsoft licensed Unix and ported it to the 16-bit Zilog Z8000 - old SCO purchased Xenix from them and ported it to the 16-bit 8086 architecture (used by IBM for their original IBM PC). Old SCO later ported SVR3.2 to x86 as 32-bit SCO-Unix, later renamed OpenServer. ‡ Novell's rights were contested, somewhat futilely, by The SCO Group (now named TSG Group Inc), the bankrupt remnants of the old Linux company Caldera. It is not yet clear whether TSG Group Inc have finally discontinued this and related litigation; the last activity in a related case against IBM was in January 2018, and TSG Group Inc are not commercially active.
{ "source": [ "https://unix.stackexchange.com/questions/14867", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8305/" ] }
14,879
I'm trying to inject keystrokes into the input daemon so as to simulate typing from a Bash script. Is this possible, and if so, how can I make it happen?
If you are operating at the X level (as in Gilles' question), then use xdotool like so: xdotool key KEYSTROKE_SPECIFIER Where KEYSTROKE_SPECIFIER can be something like "a" or "F2" or "control+j" EDIT: I missed your response to Gilles' question, sorry. I'll leave this response here as a solution for the X-case.
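Since the question is about simulating typing, note that xdotool can also send whole strings rather than single keystrokes:

    xdotool type --delay 100 'echo hello'   # type the string, 100 ms between keys
    xdotool key Return                      # then press Enter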
{ "source": [ "https://unix.stackexchange.com/questions/14879", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
14,885
This issue has been bugging me for a while, and although I've thought I've found my answer through EnvWatcher , unfortunately it only works on Bash. And I use zsh. I would like to replicate the things env-watcher does, to a lesser degree maybe, but I need to know if there is a command by which I could reset a shell to it's clean startup stage. As for example, I'd like to have the following workflow. source some-functions alias another-thing export SVN_EDITOR=vim RESET-ZSH # none of the above are valid any more Is there such a built in possibility, or am I chasing butterflies?
You could just exec zsh , which will give you a fresh zsh and re-run the init functions. Note that you'd need to exec zsh -l for a login zsh to keep its "login shell" status. I don't know how well it preserves command history (it seems to work for me, but if you use multiple shells in different terminals you might get 'crosstalk' between the two shells' history)
{ "source": [ "https://unix.stackexchange.com/questions/14885", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4250/" ] }
14,895
I source the bashrcs of a few of my friends, so I end up having duplicate entries in my $PATH variable. I am not sure whether that is why commands take long to start. How does $PATH work internally in bash? Does having more PATH entries slow my startup time?
Having more entries in $PATH doesn't directly slow your startup, but it does slow each time you first run a particular command in a shell session (not every time you run the command, because bash maintains a cache). The slowdown is rarely perceptible unless you have a particularly slow filesystem (e.g. NFS, Samba or other network filesystem, or on Cygwin). Duplicate entries are also a little annoying when you review your $PATH visually, you have to wade through more cruft. It's easy enough to avoid adding duplicate entries. case ":$PATH:" in *":$new_entry:"*) :;; # already there *) PATH="$new_entry:$PATH";; # or PATH="$PATH:$new_entry" esac Side note: sourcing someone else's shell script means executing code that he's written. In other words, you're giving your friends access to your account whenever they want. Side note: .bashrc is not the right place to set $PATH or any other environment variable. Environment variables should be set in ~/.profile . See Which setup files should be used for setting up environment variables with bash? , Difference between .bashrc and .bash_profile .
{ "source": [ "https://unix.stackexchange.com/questions/14895", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2832/" ] }
14,961
I have eth0 and wlan0 according to ifconfig and I can ping google.com . How can I find out (as a normal user, not root ) which interface is active, as in, which interface did the ping (or whatever, ping is not mandatory) use? I am using Ubuntu 11.04 or Fedora 14.
You can use route to find your default route: $ route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.1.0 * 255.255.255.0 U 1 0 0 eth0 link-local * 255.255.0.0 U 1000 0 0 eth0 default 192.168.1.1 0.0.0.0 UG 0 0 0 eth0 The Iface column in the line with destination default tells you which interface is used.
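On modern Linux, the ip tool answers the question even more directly, by asking the kernel which route (and therefore which interface) a given destination would use:

    ip route get 8.8.8.8
    # e.g.: 8.8.8.8 via 192.168.1.1 dev eth0 src 192.168.1.7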
{ "source": [ "https://unix.stackexchange.com/questions/14961", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
15,024
When running umount /path I get: umount: /path: device is busy. The filesystem is huge, so lsof +D /path is not a realistic option. lsof /path , lsof +f -- /path , and fuser /path all return nothing. fuser -v /path gives: USER PID ACCESS COMMAND /path: root kernel mount /path which is normal for all unused mounted file systems. umount -l and umount -f is not good enough for my situation. How do I figure out why the kernel thinks this filesystem is busy?
It seems the cause for my issue was the nfs-kernel-server was exporting the directory. The nfs-kernel-server probably goes behind the normal open files and thus is not listed by lsof and fuser . When I stopped the nfs-kernel-server I could umount the directory. I have made a page with examples of all solutions so far here: http://oletange.blogspot.com/2012/04/umount-device-is-busy-why.html
{ "source": [ "https://unix.stackexchange.com/questions/15024", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2972/" ] }
15,075
root@SERVER ~$ df Filesystem 512-blocks Free %Used Iused %Iused Mounted on /dev/YXCV 655360 365632 45% 6322 13% / /dev/ASDF 3801088 670648 83% 41759 32% /usr /dev/ASR 1048576 500496 53% 5555 9% /var How can I pipe the df command's output to only display lines that have more usage than 80%? e.g.: it would only display: /dev/ASDF 3801088 670648 83% 41759 32% /usr
Assuming you don't have device names containing spaces (which are a pain when it comes to parsing the output of df ): df -P | awk '0+$5 >= 80 {print}' Adapt the field number if you want to use your implementation's df output format rather than the POSIX format. Without the 0+ , the comparison would be lexical ( 9% would then be greater than 80 ). By using the + binary arithmetic operator, we force $5 to be converted to a number (so 9% becomes 9 ) and the comparison to be numerical. Using the + unary operator (as in awk '+$5 >= 80' ) works in some awk implementations but not in traditional ones (the ones written by A, W and K) where that operator is just ignored.
{ "source": [ "https://unix.stackexchange.com/questions/15075", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
15,101
Reading the details of CVE-2009-4487 (which is about the danger of escape sequences in log files) I am a bit surprised. To quote CVE-2009-4487 : nginx 0.7.64 writes data to a log file without sanitizing non-printable characters, which might allow remote attackers to modify a window's title, or possibly execute arbitrary commands or overwrite files, via an HTTP request containing an escape sequence for a terminal emulator. Clearly, this is not really about a security hole in nginx, but in the terminal emulators. Sure, perhaps cat ing a log file to the terminal happens only by accident, but grep ing a logfile is quite common. less perhaps sanitizes escape sequences, but who knows what shell commands don't change escape sequences ... I tend to agree with the Varnish response : The wisdom of terminal-response-escapes in general have been questioned at regular intervals, but still none of the major terminal emulation programs have seen fit to discard these sequences, probably in a misguided attempt at compatibility with no longer used 1970'es technology. [..] Instead of blaming any and all programs which writes logfiles, it would be much more productive, from a security point of view, to get the terminal emulation programs to stop doing stupid things, and thus fix this and other security problems once and for all. Thus my questions: How can I secure my xterm, such that it is not possible anymore to execute commands or overwrite files via escape sequences? What terminal emulators for X are secured against this attack?
VT100 terminals (which all modern terminal emulators emulate to some extent) supported a number of problematic commands, but modern emulators or distributions disable the more problematic and less useful ones. Here's a non-exhaustive list of potentially risky escape sequences (not including the ones that merely make the display unreadable in some way):

- The arbitrary log file commands in rxvt and Eterm reported by H.D. Moore. These are indeed major bugs, fortunately long fixed.

- The answerback command, also known as Return Terminal Status, invoked by ENQ ( Ctrl+E ). This inserts text into the terminal as if the user had typed it. However, this text is not under control of the attacker: it is the terminal's own name, typically something like xterm or screen. On my system (Debian squeeze), xterm returns the empty string by default (this is controlled by the answerbackString resource).

- The Send Device Attributes commands, ESC [ c and friends. The terminal responds with ESC [ … c (where the … can contain digits and ASCII punctuation marks only). This is a way of querying some terminal capabilities, mostly obsolete but perhaps used by old applications. Again, the terminal's response is indistinguishable from user input, but it is not under control of the attacker. The control sequence might look like a function key, but only if the user has an unusual configuration (none of the usual settings I've encountered has a valid function key escape sequence that's a prefix of the terminal response).

- The various device control functions (DCS escapes, beginning with ESC P ). I don't know what harm can be done through DECUDK (set user-defined keys) on a typical terminal emulator. DECRQSS (Request Status String) is yet another command to which the terminal responds with an escape sequence, this time beginning with \eP ; this can be problematic since \eP is a valid key ( Alt + Shift + P ). Xterm has two more experimental features: ESC P + p … and ESC P + q … , to get and set termcap strings. From the description, this might be used at least to modify the effect of function keys.

- Several status report commands: ESC [ … n (Device Status Report). The terminal responds with an escape sequence. Most of these escape sequences don't correspond to function key escape sequences. One looks problematic: the report to ESC [ 6 n is of the form ESC [ x ; y R where x and y are digit sequences, and this could look like F3 with some modifiers.

- Window manipulation commands ESC [ … t . Some of these allow the xterm window to be resized, iconified, etc., which is disruptive. Some of these cause the terminal to respond with an escape sequence. Most of these escape sequences look low-risk, however there are two dangerous commands: the answers to ESC [ 2 0 t and ESC [ 2 1 t include the terminal window's icon label and title respectively, and the attacker can choose these. At least under Debian squeeze, xterm ignores these commands by default; they can be enabled by setting the allowWindowOps resource, or selectively through the disallowedWindowOps resource. Gnome-terminal under Ubuntu 10.04 implements even the title answerbacks by default. I haven't checked other terminals or versions.

- Commands to set the terminal title or icon name. Under xterm and most other X terminals, they are ESC ] digit ; title ESC \ . Under Screen, the escape sequence is ESC k title ESC \ . I find the concern over these commands overrated. While they do allow some amount of mischief, any web page has the same issue.
Acting on a window based solely on its title and not on its class is akin to opening a file whose name was given to you by an untrusted party, or not quoting a variable expansion in a shell script, or patting a rabid dog on the nose — don't complain if you get bitten.

I find Varnish's response disingenuous. It feels like it's either trying to shift the blame, or in security nazi mode (any security concern, genuine or not, justifies blackballing a feature).

    The wisdom of terminal-response-escapes in general have been questioned at regular intervals, but still none of the major terminal emulation programs have seen fit to discard these sequences, probably in a misguided attempt at compatibility with no longer used 1970'es technology. (…) Instead of blaming any and all programs which writes logfiles, it would be much more productive, from a security point of view, to get the terminal emulation programs to stop doing stupid things, and thus fix this and other security problems once and for all.

Many of the answerbacks are useful features: an application does need to know things like the cursor position and the window size. Setting the window title is also very useful. It would be possible to rely entirely on ioctl calls for these, however this would have required additional code and utilities to make these ioctl calls and translate them into unix-style text passing on file descriptors. Changing these interfaces now would be a lot of work, for little benefit.

Text files are not supposed to contain non-printing characters such as control characters. Log files are generally expected to be text files. Log files should not contain control characters. If you're worried that a file might contain escape sequences, open it in an editor, or view it with less without the -r or -R option, or view it through cat -v .
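A couple of defensive habits when displaying logs of unknown provenance, as a sketch (suspicious.log is a placeholder name):

    # Make control characters visible instead of letting the terminal interpret them
    cat -v suspicious.log | less

    # Or strip everything except tab, LF, CR and printable ASCII before displaying
    tr -cd '\11\12\15\40-\176' < suspicious.log

Note that the tr approach also discards legitimate non-ASCII text, so treat it as a blunt instrument.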
{ "source": [ "https://unix.stackexchange.com/questions/15101", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1131/" ] }
15,138
If I use pubkey auth from e.g. an Ubuntu 11.04 machine, how can I tell the ssh client to use only password auth to a server? (I need this for testing passwords on a server where I normally log in with a key.) I found a way:

    mv ~/.ssh/id_rsa ~/.ssh/id_rsa.backup
    mv ~/.ssh/id_rsa.pub ~/.ssh/id_rsa.pub.backup

and now I get prompted for a password, but are there any official ways?
Disable PubkeyAuthentication and also set PreferredAuthentications to password so that alternative methods like gssapi-with-mic aren't used:

    ssh -o PubkeyAuthentication=no -o PreferredAuthentications=password example.com

You need to make sure that the client isn't configured to disallow password authentication.
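If you need this regularly for a particular host, the same options can live in the client configuration instead (a sketch; example.com is the placeholder host from above):

    # ~/.ssh/config
    Host example.com
        PubkeyAuthentication no
        PreferredAuthentications password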
{ "source": [ "https://unix.stackexchange.com/questions/15138", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
15,176
I stumbled upon this website which talks about this. So when downloading a whole website by getting the gzipped version, what is the right command? I've tested this command, but I don't know if wget is really getting the gzipped version:

    wget --header="accept-encoding: gzip" -m -Dlinux.about.com -r -q -R gif,png,jpg,jpeg,GIF,PNG,JPG,JPEG,js,rss,xml,feed,.tar.gz,.zip,rar,.rar,.php,.txt -t 1 http://linux.about.com/
If you request gzip'ed content (using the accept-encoding: gzip header, which is correct), then it's my understanding that wget can't then read the content. So you will end up with a single, gzipped file on disk, for the first page you hit, but no other content. i.e. you can't use wget to request gzipped content and to recurse the entire site at the same time.

I think there's a patch that allows wget to support this function but it's not in the default distribution version.

If you include the -S flag you can tell if the web server is responding with the correct type of content. For example,

    wget -S --header="accept-encoding: gzip" wordpress.com
    --2011-06-17 16:06:46--  http://wordpress.com/
    Resolving wordpress.com (wordpress.com)... 72.233.104.124, 74.200.247.60, 76.74.254.126
    Connecting to wordpress.com (wordpress.com)|72.233.104.124|:80... connected.
    HTTP request sent, awaiting response...
      HTTP/1.1 200 OK
      Server: nginx
      Date: Fri, 17 Jun 2011 15:06:47 GMT
      Content-Type: text/html; charset=UTF-8
      Connection: close
      Vary: Accept-Encoding
      Last-Modified: Fri, 17 Jun 2011 15:04:57 +0000
      Cache-Control: max-age=190, must-revalidate
      Vary: Cookie
      X-hacker: If you're reading this, you should visit automattic.com/jobs and apply to join the fun, mention this header.
      X-Pingback: http://wordpress.com/xmlrpc.php
      Link: <http://wp.me/1>; rel=shortlink
      X-nananana: Batcache
      Content-Encoding: gzip
    Length: unspecified [text/html]

The content encoding clearly states gzip, however for linux.about.com (currently),

    wget -S --header="accept-encoding: gzip" linux.about.com
    --2011-06-17 16:12:55--  http://linux.about.com/
    Resolving linux.about.com (linux.about.com)... 207.241.148.80
    Connecting to linux.about.com (linux.about.com)|207.241.148.80|:80... connected.
    HTTP request sent, awaiting response...
      HTTP/1.1 200 OK
      Date: Fri, 17 Jun 2011 15:12:56 GMT
      Server: Apache
      Set-Cookie: TMog=B6HFCs2H20kA1I4N; domain=.about.com; path=/; expires=Sat, 22-Sep-12 14:19:35 GMT
      Set-Cookie: Mint=B6HFCs2H20kA1I4N; domain=.about.com; path=/
      Set-Cookie: zBT=1; domain=.about.com; path=/
      Vary: *
      PRAGMA: no-cache
      P3P: CP="IDC DSP COR DEVa TAIa OUR BUS UNI"
      Cache-Control: max-age=-3600
      Expires: Fri, 17 Jun 2011 14:12:56 GMT
      Connection: close
      Content-Type: text/html
    Length: unspecified [text/html]

It's returning text/html. Because some older browsers still have issues with gzip encoded content, many sites only enable it based on browser identification. They often turn it off by default and only turn it on when they know the browser can support it - and they usually don't include wget in that list. This means you may find wget never returns gzip content even if the site appears to do so for your browser.
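If you only need a single compressed page rather than a recursive mirror, a workable sketch is to fetch it and decompress by hand (this assumes the server actually honors the header; example.com is a placeholder):

    wget -O page.html.gz --header='Accept-Encoding: gzip' http://example.com/
    gunzip page.html.gz

    # curl can decompress transparently, though it doesn't recurse either
    curl --compressed -o page.html http://example.com/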
{ "source": [ "https://unix.stackexchange.com/questions/15176", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8427/" ] }
15,197
I can find the subdirectories of a directory by ls -d -- */ or ls -l | grep "^d" but both of these seem indirect, and I would imagine that there would be a standard way to find just the directories. Is there a right way to do this? And if not, will either of these lead to undesirable behavior on edge cases? (Symbolic links, hidden directories, etc.)
The answer will depend more on what you intend to do with the output than on what you are looking for. If you just want to see a list for visual reference at the terminal, your first solution is actually pretty nice. If you want to process the output you should consider using another method. One of the most robust ways to get a list to feed into another program is to use find.

    find . -maxdepth 1 -type d

The reason this is good for feeds is that find can output the data separated by nulls using -print0 or properly escape strings as arguments to other programs using -exec . For reference on why this is better than parsing the output of ls, see ParsingLS on Greg's Wiki.
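For instance, a sketch of feeding the list to another program safely (du is only an example consumer; -mindepth 1 is added so that "." itself is excluded from the results):

    # Null-separated names survive spaces and newlines in directory names
    find . -mindepth 1 -maxdepth 1 -type d -print0 | xargs -0 du -sh

    # Or let find invoke the program itself
    find . -mindepth 1 -maxdepth 1 -type d -exec du -sh {} +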
{ "source": [ "https://unix.stackexchange.com/questions/15197", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5045/" ] }
15,216
Have I got enough (or perhaps too many) ;; terminators in this bash script containing nested case 's?

    case "$1" in
    OK)
        # Nothing needs done
        echo -n "OK:1"
        ;;
    CRITICAL)
        case "$2" in
        SOFT)
            case "$3" in
            1|2) soft_reset_pool.sh ;;
            esac
            ;;
        HARD)
            case "$3" in
            1|2) hard_reset_pool.sh ;;
            3|4) hard_reset_service ;;
            esac
            ;;
        esac
        ;;
    esac

It's been a few years since I did any bash in anger.
It's easier to keep track if you use sensible indentation on the ;; s.

    case "$1" in
    OK)
        # Nothing needs done
        echo -n "OK:1"
        ;;
    CRITICAL)
        case "$2" in
        SOFT)
            case "$3" in
            1|2) soft_reset_pool.sh ;;
            esac
            ;;
        HARD)
            case "$3" in
            1|2) hard_reset_pool.sh ;;
            3|4) hard_reset_service ;;
            esac
            ;;
        esac
        ;;
    esac

They match up fine, every case label ) having a terminating ;;

That said, sometimes it's easier and/or clearer to collapse everything into a non-nested case :

    case "$1:$2:$3" in
    OK:*)
        # Nothing needs done
        echo -n "OK:1"
        ;;
    CRITICAL:SOFT:[12]) soft_reset_pool.sh ;;
    CRITICAL:HARD:[12]) hard_reset_pool.sh ;;
    CRITICAL:HARD:[34]) hard_reset_service ;;
    esac
{ "source": [ "https://unix.stackexchange.com/questions/15216", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1347/" ] }
15,308
I can get all jpg images by using: find . -name "*.jpg" But how can I add png files to the results as well?
Use the -o flag between different parameters.

    find ./ -type f \( -iname \*.jpg -o -iname \*.png \)

works like a charm.

NOTE: There must be a space between the bracket and its contents or it won't work.

Explanation:

- -type f - only search for files (not directories)
- \( & \) - are needed for the -type f to apply to all arguments
- -o - logical OR operator
- -iname - like -name , but the match is case insensitive
{ "source": [ "https://unix.stackexchange.com/questions/15308", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8209/" ] }
15,316
How can I move tmux's status bar to the top? Can't find it on the man page.
Add set-option -g status-position top to ~/.tmux.conf . (See @ChrisJohnsen's comment above)
{ "source": [ "https://unix.stackexchange.com/questions/15316", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8489/" ] }
15,330
There's a few daemons I disable from starting on boot-up. As an example, I use the following: sudo update-rc.d -f postgresql remove I'm not even sure if that command is correct and I don't remember where I got it from. Anyways, whenever I upgrade postgresql , the setting is lost (i.e. the daemon starts up on reboot).
update-rc.d was initially used by package upgrade scripts. remove is called on package uninstall and removes all links, defaults is called on package install, enable or disable might be used depending on debconf and are useful to sysadmins. The cleanups remove does are not in fact useful to disable a service. From the man page: A common system administration error is to delete the links with the thought that this will "disable" the service, i.e., that this will prevent the service from being started. However, if all links have been deleted then the next time the package is upgraded, the package's postinst script will run update-rc.d again and this will reinstall links at their factory default locations. The correct way to disable services is to configure the service as stopped in all runlevels in which it is started by default. In the System V init system this means renaming the service's symbolic links from S to K. sudo update-rc.d postgresql disable will do what you want, because it keeps the rc.d symlinks but with the K (killed, stopped) prefix. Revert it with an enable . Some services support being disabled from their /etc/defaults/$service file, but sadly there are exceptions. Other ways to disable a service are to chmod -x the /etc/init.d/$service file, or to insert an exit 0 at the top of it.
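A quick sketch of the workflow, using the postgresql example from the question (runlevel 2 is Debian/Ubuntu's default multi-user runlevel):

    # Disable: the S* links in the start runlevels are renamed to K* links
    sudo update-rc.d postgresql disable

    # Verify the result
    ls -l /etc/rc2.d/ | grep postgresql

    # Revert
    sudo update-rc.d postgresql enable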
{ "source": [ "https://unix.stackexchange.com/questions/15330", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
15,348
I am developing a Nodejs application that the user interacts with via HTTP on localhost. There are practically no parameters and the daemon has virtually no dependencies and it just needs to be up by log-in time. I would like to follow the idioms on each platform for start-up scripts, and that means Upstart on Ubuntu and systemd on Fedora. Are there any good tutorials for writing systemd system files? Are there any 'best practices' to be aware of? I have found these resources: Fedora wiki page about systemd Blog about writing systemd system files Systemd on Wikipedia I am mostly looking for an API of sorts as a reference, as well as a basic format to follow.
The following example is inspired by this link, which actually does not mention all steps and is listed just to credit the source: http://patrakov.blogspot.com/2011/01/writing-systemd-service-files.html

Step 1: I created this file (note location) which essentially fires a bash process with an extended argument. You could fire your own command which could be different from bash.

    [root@y500-fedora ~]# cat /etc/systemd/system/foo.service
    [Unit]
    Description=foo

    [Service]
    ExecStart=/bin/bash -c "while true; do /bin/inotifywait -qq --event close_write /sys/class/backlight/acpi_video0/brightness; su myusername -c '/bin/xbacklight -display :0 -set $(cat /sys/class/backlight/acpi_video0/brightness)'; done"

    [Install]
    WantedBy=multi-user.target

Step 2: Reload systemd:

    systemctl daemon-reload

Enable the new service so it starts at boot (similarly you can disable it):

    systemctl enable foo

Step 3 (optional): It should start automatically at next reboot into multi-user mode (run level 3), but if you want to start it right away:

    systemctl start foo
    systemctl status foo   # optional, just to verify

Update: For completeness, I should add that ubuntu bionic seems to have a very thorough man page. RTFM here
{ "source": [ "https://unix.stackexchange.com/questions/15348", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8471/" ] }
15,405
mail -s "subject" [email protected] <test.html works, but only for plain text email. What is the correct way to send HTML email using the Linux command mail ?
There are many different versions of mail around. When you go beyond mail -s subject to1@address1 to2@address2 <body (for sending, that's all POSIX guarantees — and even -s didn't exist in the old days), they tend to have different command line options. Adding an additional header isn't always easy.

With some mailx implementations, e.g. from mailutils on Ubuntu or Debian's bsd-mailx, it's easy, because there's an option for that.

    mailx -a 'Content-Type: text/html' -s "Subject" to@address <test.html

With the Heirloom mailx, there's no convenient way. One possibility to insert arbitrary headers is to set editheaders=1 and use an external editor (which can be a script).

    ## Prepare a temporary script that will serve as an editor.
    ## This script will be passed to ed.
    temp_script=$(mktemp)
    cat <<'EOF' >>"$temp_script"
    1a
    Content-Type: text/html
    .
    $r test.html
    w
    q
    EOF

    ## Call mailx, and tell it to invoke the editor script
    EDITOR="ed -s $temp_script" heirloom-mailx -S editheaders=1 -s "Subject" to@address <<EOF
    ~e
    .
    EOF

    rm -f "$temp_script"

With a general POSIX mailx, I don't know how to get at headers.

If you're going to use any mail or mailx, keep in mind that:

- This isn't portable even within a given Linux distribution. For example, both Ubuntu and Debian have several alternatives for mail and mailx.
- When composing a message, mail and mailx treat lines beginning with ~ as commands. If you pipe text into mail, you need to arrange for this text not to contain lines beginning with ~ .

If you're going to install software anyway, you might as well install something more predictable than mail / Mail / mailx. For example, mutt. With Mutt, you can supply most headers in the input with the -H option, but not Content-Type , which needs to be set via a mutt option.

    mutt -e 'set content_type=text/html' -s 'hello' 'to@address' <test.html

Or you can invoke sendmail directly. There are several versions of sendmail out there, but they all support sendmail -t to send a mail in the simplest fashion, reading the list of recipients from the mail. (I think they don't all support Bcc: .) On most systems, sendmail isn't in the usual $PATH , it's in /usr/sbin or /usr/lib .

    cat <<'EOF' - test.html | /usr/sbin/sendmail -t
    To: to@address
    Subject: hello
    Content-Type: text/html

    EOF
{ "source": [ "https://unix.stackexchange.com/questions/15405", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8533/" ] }
15,415
I want to do something like this:

    if cmd1 && cmd2
        echo success
    else
        echo epic fail
    fi

How should I do it?
These should do what you need:

    cmd1 && cmd2 && echo success || echo epic fail

or

    if cmd1 && cmd2; then
        echo success
    else
        echo epic fail
    fi
{ "source": [ "https://unix.stackexchange.com/questions/15415", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6001/" ] }
15,449
I have a key bound to setup panes to my liking, but sometimes things get messed up or out of hand and I want to close all the panes and rerun the script. Is there a simple tmux command to close all panes except the one I am currently in?
You can use the "kill-pane" command.

    kill-pane [-a] [-t target-pane]
        (alias: killp)
        Destroy the given pane. If no panes remain in the containing window, it is also destroyed. The -a option kills all but the pane given with -t.

So, for example if you want to kill all the panes except for pane 0:

    kill-pane -a -t 0

If you don't know what your pane numbers are you can use the "display-panes" command:

    display-panes [-t target-client]
        (alias: displayp)
        Display a visible indicator of each pane shown by target-client. See the display-panes-time, display-panes-colour, and display-panes-active-colour session options. While the indicator is on screen, a pane may be selected with the '0' to '9' keys.
{ "source": [ "https://unix.stackexchange.com/questions/15449", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8546/" ] }
15,473
I am developing a daemon that needs to store lots of application data, and I noticed that on my system (Fedora 15), there is a /usr/local/etc directory. I've decided to install my daemon to /usr/local/bin , and I need a place for my config files. I didn't see this on Wikipedia . Is this non-standard or is this in fact the standard place for programs installed to /usr/local/bin to store config files? Reason being, I want to market this to sys-admins, and getting something like this wrong is not a great selling-point...
/usr/local is usually for applications built from source. i.e. I install most of my packages using something like apt , but if I download a newer version of something or a piece of software not part of my distribution, I would build it from source and put everything into the `/usr/local' hierarchy. This allows for separation from the rest of the distribution. If you're developing a piece of software for others, you should design it so that it can be installed anywhere people want, but it should default to the regular FHS specified system directories when they specify the prefix to be /usr ( /etc , /usr/bin , etc.) i.e. /usr/local is for your personal use, it shouldn't be the only place to install your software. Have a good read of the FHS , and use the standard Linux tools to allow your source to be built and installed anywhere so that package builders for the various distributions can configure them as required for their distribution, and users can put it into /usr/local if they desire or the regular system directories if they wish.
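For an autotools-style project (an assumption about your build system), honoring this just means respecting the standard prefix variables, for example:

    # Sysadmin building from source into /usr/local (the usual default prefix)
    ./configure --prefix=/usr/local --sysconfdir=/usr/local/etc
    make && sudo make install

    # Distribution packager targeting the regular system hierarchy
    ./configure --prefix=/usr --sysconfdir=/etc

Your daemon would then read its configuration from whatever sysconfdir was set at build time, rather than hard-coding /usr/local/etc.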
{ "source": [ "https://unix.stackexchange.com/questions/15473", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8471/" ] }
15,509
After a recent upgrade to Fedora 15, I'm finding that a number of tools are failing with errors along the lines of:

    tail: inotify resources exhausted
    tail: inotify cannot be used, reverting to polling

It's not just tail that's reporting problems with inotify, either. Is there any way to interrogate the kernel to find out what process or processes are consuming the inotify resources? The current inotify-related sysctl settings look like this:

    fs.inotify.max_user_instances = 128
    fs.inotify.max_user_watches = 8192
    fs.inotify.max_queued_events = 16384
It seems that if the process creates an inotify instance via inotify_init(), the resulting file that represents the filedescriptor in the /proc filesystem is a symlink to a (non-existing) 'anon_inode:inotify' file.

    $ cd /proc/5317/fd
    $ ls -l
    total 0
    lrwx------ 1 puzel users 64 Jun 24 10:36 0 -> /dev/pts/25
    lrwx------ 1 puzel users 64 Jun 24 10:36 1 -> /dev/pts/25
    lrwx------ 1 puzel users 64 Jun 24 10:36 2 -> /dev/pts/25
    lr-x------ 1 puzel users 64 Jun 24 10:36 3 -> anon_inode:inotify
    lr-x------ 1 puzel users 64 Jun 24 10:36 4 -> anon_inode:inotify

Unless I misunderstood the concept, the following command should show you the list of processes (their representation in /proc), sorted by number of inotify instances they use.

    $ for foo in /proc/*/fd/*; do readlink -f $foo; done | grep inotify | sort | uniq -c | sort -nr

Finding the culprits

Via the comments below @markkcowan mentioned this:

    $ find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -exec sh -c 'cat $(dirname {})/../cmdline; echo ""' \; 2>/dev/null
{ "source": [ "https://unix.stackexchange.com/questions/15509", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4989/" ] }
15,531
I set up my environment to create a core dump of everything that crashes, however when I run a program with SUID set on a different user than the executing user it doesn't create a core dump. Any ideas why this might be? I couldn't find it anywhere on the web; I think it's some sort of security feature but I would like to have it disabled...

Problem:

    $ cd /tmp
    $ cat /etc/security/limits.conf | grep core
    *       -    core    unlimited
    root    -    core    unlimited
    $ ls -l ohai
    -rwsr-sr-x 1 root root 578988 2011-06-23 23:29 ohai
    $ ./ohai
    ...
    Floating point exception
    $ sudo -i
    # ./ohai
    ...
    Floating point exception (core dumped)
    # chmod -s ohai
    # exit
    $ ./ohai
    ...
    Floating point exception (core dumped)

Edit: To make it work as securely as possible I now have the following script to set up the environment:

    mkdir -p /var/coredumps/
    chown root:adm /var/coredumps/
    chmod 772 /var/coredumps/

    echo "kernel.core_pattern = /var/coredumps/core.%u.%e.%p" >> /etc/sysctl.conf
    echo "fs.suid_dumpable = 2" >> /etc/sysctl.conf

    echo -e "*\t-\tcore\tunlimited" >> /etc/security/limits.conf
    echo -e "root\t-\tcore\tunlimited" >> /etc/security/limits.conf

Now all that's left to do is add an ACL to /var/coredumps so users can only add files and never modify or read them again. The only downside is that I would still have a problem with chroot'ed applications, which would need a bind mount or something like that.
The memory of a setuid program might (is likely to, even) contain confidential data. So the core dump would have to be readable by root only. If the core dump is owned by root, I don't see an obvious security hole, though the kernel would have to be careful not to overwrite an existing file.

Linux disables core dumps for setxid programs. To enable them, you need to do at least the following (I haven't checked that this is sufficient):

- Enable setuid core dumps in general by setting the fs.suid_dumpable sysctl to 2, e.g. with echo 2 >/proc/sys/fs/suid_dumpable . (Note: 2, not 1; 1 means “I'm debugging the system as a whole and want to remove all security”.)
- Call prctl(PR_SET_DUMPABLE, 1) from the program.
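A minimal sketch of the second step in C (hedged: this only flips the dumpable flag back on; whether a dump is actually written still depends on the sysctl above, the ulimit -c setting, and the core pattern):

    /* Re-enable core dumps from inside a setuid program.
       Run this after any privilege changes that cleared the flag. */
    #include <stdio.h>
    #include <sys/prctl.h>

    int main(void)
    {
        if (prctl(PR_SET_DUMPABLE, 1, 0, 0, 0) != 0) {
            perror("prctl(PR_SET_DUMPABLE)");
            return 1;
        }
        /* ... a crash from here on can produce a core file ... */
        return 0;
    }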
{ "source": [ "https://unix.stackexchange.com/questions/15531", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8579/" ] }
15,546
I have the following new logrotate configuration:

    /var/log/nexus/nexus.log {
        rotate 7
        missingok
        compress
        delaycompress
        copytruncate
        daily
    }

When I run logrotate -d nexus , I get the following:

    reading config file nexus
    reading config info for /var/log/nexus/nexus.log

    Handling 1 logs

    rotating pattern: /var/log/nexus/nexus.log  after 1 days (7 rotations)
    empty log files are rotated, old logs are removed
    considering log /var/log/nexus/nexus.log
      log does not need rotating

My /var/log/nexus/ folder contains the following:

    nexus.log
    oldlogs.tar.gz

Why isn't logrotate rotating the nexus.log file? What I was expecting was that the nexus.log file would have been truncated and a new file, something like nexus.log-201106241000, would have been created.
Most likely, the log file is less than a day old and/or has been rotated within the last day, and logrotate remembers the history. If you add -f it'll force a rotation if you really want to (although not 100% sure how that interacts with -d ). You can look at the history; the location depends on your distribution, but it might be /var/lib/logrotate/status . That file shows when logs were last rotated.
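A sketch of both checks (this assumes the config lives in /etc/logrotate.d/nexus; the status file path varies by distribution):

    # Force a rotation regardless of the recorded history
    sudo logrotate -f /etc/logrotate.d/nexus

    # See when logrotate believes the log was last rotated
    grep nexus /var/lib/logrotate/status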
{ "source": [ "https://unix.stackexchange.com/questions/15546", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2163/" ] }
15,594
Is there a way to turn on line numbering for nano?
The only thing coming close to what you want is the option to display your current cursor position. You activate it with the --constantshow option (manpage: "Constantly show the cursor position") or by pressing Alt+C in an open text file.
{ "source": [ "https://unix.stackexchange.com/questions/15594", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7768/" ] }
15,601
Is there a way to find out what will program do when it receives kill signal HUP? Without simply running the command ofc :D For example, killall -HUP pppd will restart pppd killall -HUP firefox will just kill firefox
Read its documentation. That's the only way. As Keith already wrote , the original meaning of SIGHUP was that the user had lost access to the program, and so interactive programs should die. Daemons — programs that don't interact directly with the user — have no need for this behavior and instead often reload their configuration files when they receive SIGHUP. But these are just conventions. If you have the source, you can read that, too. Or if you only have the binary, you can try disassembling it, look for sigaction calls that set up a signal handler for SIGHUP , and try to figure out what those signal handlers are doing. It'll be easier arranging not to send SIGHUP to that program in the first place. At any point in time, a given process is in one of three states with respect to a particular signal: ignoring it, performing the default action or running a custom handler. Many unices allow you to see the signal mask of a process with ps , e.g. with ps s on Linux. That can tell you if the process is ignoring the signal or will die instantly on SIGHUP, but if the process has set a handler you can't tell what the handler does.
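On Linux you can also read the dispositions straight from /proc (a sketch; 1234 is a placeholder PID):

    grep -E '^Sig(Blk|Ign|Cgt)' /proc/1234/status

SigIgn and SigCgt are hex bitmasks of the ignored and caught signals. SIGHUP is signal 1, which maps to the lowest bit, so an odd final hex digit in SigCgt means the process installed its own HUP handler; as explained above, that still tells you nothing about what the handler does.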
{ "source": [ "https://unix.stackexchange.com/questions/15601", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2893/" ] }
15,611
Possible Duplicate: Why do we use su - and not just su? I understand that root doesn't have to be a superuser . But in the case that it is ... what is the difference between sudo su - and sudo su root ?
There are two questions there:

Difference between su - username and su username

If - (or -l ) is specified, su simulates a real login. The environment is cleared except for a few select variables ( TERM notably, DISPLAY and XAUTHORITY on some systems). Otherwise the environment is left as it is, except for PATH , which is reset.

Difference between passing no user name and specifying root

This might be system-dependent. On Linux with shadow as the package providing su , if no username is specified, then su first tries to see if user root has a passwd entry. If it does, it uses that. If it doesn't, it tries uid 0. Not sure about other Unix-like operating systems.
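A quick way to observe the first difference (hedged: the exact behavior depends on your su implementation; FOO is an arbitrary test variable):

    # Without "-": the caller's environment survives, except PATH
    FOO=bar su root -c 'echo "FOO=$FOO PATH=$PATH"'

    # With "-": simulated login, so FOO is gone and you get root's login environment
    FOO=bar su - root -c 'echo "FOO=$FOO PATH=$PATH"'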
{ "source": [ "https://unix.stackexchange.com/questions/15611", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1594/" ] }
15,655
Please note that I don't ask how . I already know options like pv and rsync -P . I want to ask why doesn't cp implement a progress bar, at least as a flag ?
The tradition in unix tools is to display messages only if something goes wrong. I think this is both for design and practical reasons. The design is intended to make it obvious when something goes wrong: you get an error message, and it's not drowned in not-actually-informative messages. The practical reason is that in unix's very early days, there still were teleprinters ; that is, the output from programs would be printed on paper, and you don't want to print progress bars. Whatever the reason, the tradition of only displaying useful messages has stuck in the unix world. Modern tools have sometimes introduced progress bars; in rsync's case, the main motivation is that rsync is often performed over the network, and networks are a lot flakier than local disks, so the progress bar is more useful. The same reasoning applies to wget.
{ "source": [ "https://unix.stackexchange.com/questions/15655", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4903/" ] }