Dataset fields: source_id (int64, ranging 1 to 4.64M), question (string, 0 to 28.4k characters), response (string, 0 to 28.8k characters), metadata (dict).
10,825
I often find myself in the following position: I've started typing a long command at the bash prompt, but half-way through I find out I need to check something with another command. This is a problem when I'm at the console (no X), which is often the case, because then I only have the following unsatisfactory ways to do it, that I know of: Hit ctrl + alt + F2 and log in on another virtual console, and find out what I wanted, then go back and continue ctrl + a , type echo + space + enter , find out what I wanted, press ↑ until I find my command, ctrl + a , del x 5, ctrl + e , and continue Highlight what I've typed so far with my mouse (if gpm is running, which it usually is), press ctrl + c to interrupt, find out what I wanted while being careful not to use my mouse to highlight stuff, then press middle mouse button on a new prompt and continue Pray to the command line gods that the half-written command will have no adverse effects but simply fail, and gingerly press enter, then find out what I wanted, press uparrow until I get my command back, and continue Jump into my time machine, travel back in time and remind myself to start screen before starting to type command, travel back to the present, press ctrl + a c , find out what I wanted, press ctrl + a ctrl+a , and continue So what I want to know is, is there some more elegant way of doing this? A sort of subshell-command or similar? I'm most interested in methods that do not require any preparations or setup to work.
A somewhat faster version of alex's Ctrl + A Ctrl + K (which moves to the front of the line and then cuts everything forward) is to just use Ctrl + U , which cuts backward on bash, and the entire line (regardless of your current position) on zsh. Then you use Ctrl + Y to paste it again
{ "source": [ "https://unix.stackexchange.com/questions/10825", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6339/" ] }
10,826
In shell, how can I read the bytes of a binary file I have, and print the output as hexadecimal numbers?
Use hexdump(1) $ hexdump -x /usr/bin/hexdump 0000000 feca beba 0000 0300 0001 0700 0080 0300 0000010 0000 0010 0000 5080 0000 0c00 0000 0700 0000020 0000 0300 0000 00a0 0000 b06f 0000 0c00 0000030 0000 1200 0000 0a00 0100 0010 0000 107c 0000040 0000 0c00 0000 0000 0000 0000 0000 0000 0000050 0000 0000 0000 0000 0000 0000 0000 0000 ...
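If hexdump is not available, od (POSIX) or xxd (shipped with vim) produce a similar byte-by-byte hex listing; for example, on some file of your choice:
    od -A x -t x1 somefile
    xxd somefile | head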
{ "source": [ "https://unix.stackexchange.com/questions/10826", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3608/" ] }
10,852
Technically, unless pam is set up to check your shell with pam_shells neither of these can actually prevent your login, if you're not on the shell. On my system they are even different sizes, so I suspect they actually do something. So what's the difference? why do they both exist? Why would I use one over the other? -rwxr-xr-x 1 root root 21K Feb 4 17:01 /bin/false -rwxr-xr-x 1 root root 4.7K Mar 2 14:59 /sbin/nologin
When /sbin/nologin is set as the shell, if user with that shell logs in, they'll get a polite message saying 'This account is currently not available.' This message can be changed with the file /etc/nologin.txt . /bin/false is just a binary that immediately exits, returning false, when it's called, so when someone who has false as shell logs in, they're immediately logged out when false exits. Setting the shell to /bin/true has the same effect of not allowing someone to log in but false is probably used as a convention over true since it's much better at conveying the concept that person doesn't have a shell. Looking at nologin 's man page, it says it was created in 4.4 BSD (early 1990s) so it came long after false was created. The use of false as a shell is probably just a convention carried over from the early days of UNIX. nologin is the more user-friendly option, with a customizable message given to the user trying to log in, so you would theoretically want to use that; but both nologin and false will have the same end result of someone not having a shell and not being able to ssh in.
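For example, to disable interactive logins for an existing account (using a hypothetical user alice; the nologin path varies by distribution, e.g. /sbin/nologin or /usr/sbin/nologin):
    sudo usermod -s /usr/sbin/nologin alice
    getent passwd alice    # confirm the shell field changed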
{ "source": [ "https://unix.stackexchange.com/questions/10852", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29/" ] }
10,883
Recently I accidentally did rm on a set of files and it got me thinking where exactly these files end up? That is to say, when working with a GUI, deleted files go to the Trash. What's the equivalent for rm and is there a way of undoing an rm command?
Nowhere, gone, vanished. Well, more specifically, the file gets unlinked. The data is still sitting there on disk, but the link to it is removed. It used to be possible to retrieve the data, but nowadays the metadata is cleared and nothing's recoverable. There is no Trash can for rm , nor should there be. If you need a Trash can, you should use a higher-level interface. There is a command-line utility in trash-cli on Ubuntu, but most of the time GUI file managers like Nautilus or Dolphin are used to provide a standard Trash can. The Trash can itself is standardized, so files trashed in Dolphin will be visible in the Trash from Nautilus. Files are usually moved to somewhere like ~/.local/share/Trash/files/ when trashed. The rm command on UNIX/Linux is comparable to del on DOS/Windows, which also deletes and does not move files to the Recycle Bin. Another thing to realize is that moving a file across filesystems (for example, from your hard disk to a USB disk) is really 1) a copy of the file data followed by 2) unlinking the original file. You wouldn't want your Trash to be filled up with these extra copies.
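A rough sketch of the trash-cli workflow mentioned above, assuming the package is installed:
    trash-put notes.txt   # move to the Trash instead of deleting
    trash-list            # see what is in the Trash
    trash-restore         # interactively restore a trashed file
    trash-empty           # permanently delete everything in the Trash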
{ "source": [ "https://unix.stackexchange.com/questions/10883", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5770/" ] }
10,893
Ken Thompson, the creator of Unix, was once asked what he'd do if he had it to do over again. He said, "I'd spell creat with an 'e'." What is Ken referring to? Is there a "creat" command?
It's a Unix system call that creates a file: At a Unix shell prompt, type man 2 creat to learn more. Man pages are also available online these days: Linux's creat(2) POSIX generic man-page for the function/syscall: creat(3p) .
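A minimal C sketch of the call; creat(path, mode) is equivalent to open(path, O_WRONLY|O_CREAT|O_TRUNC, mode):
    #include <fcntl.h>
    #include <stdio.h>

    int main(void) {
        int fd = creat("example.txt", 0644);  /* create (or truncate) example.txt */
        if (fd == -1) {
            perror("creat");
            return 1;
        }
        return 0;
    }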
{ "source": [ "https://unix.stackexchange.com/questions/10893", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
10,922
Is there a way to temporarily suspend history tracking in bash, so as to enter a sort of "incognito" mode? I'm entering stuff into my terminal that I don't want recorded, sensitive financial info.
This will prevent bash from saving any new history when exiting the shell: unset HISTFILE Specifically, according to man bash : If HISTFILE is unset, or if the history file is unwritable, the history is not saved. Note that if you re-set HISTFILE , history will be saved normally. This only affects the moment the shell session ends. Alternatively, if you want to toggle it off and then back on again during a session, it may be easier to use set : set +o history # temporarily turn off history # commands here won't be saved set -o history # turn it back on
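A related bash feature: if HISTCONTROL is set to ignorespace (or ignoreboth), any command typed with a leading space is never added to the history in the first place (the command below is just a placeholder):
    HISTCONTROL=ignorespace
     some-sensitive-command    # note the leading space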
{ "source": [ "https://unix.stackexchange.com/questions/10922", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
10,956
Do they refer to the same thing or is root just a location in filesystem (its ultimate base) and superuser a privileged user (sort of equivalent of windows administrator account) ? Do they need the same password ? Is superuser the kernel itself?
'root' is traditionally the name given to the user account with superuser level rights. In this respect they are one and the same, though there is no rule that I know of that says that the superuser account must be called root. It may be that the account was named 'root' due in part to the fact that only the superuser has write permission to the root directory (/) The Windows Administrator account is not analogous to the Unix superuser account since there are restrictions on what a Windows Administrator can do. The analog to root on Windows NT based OSes is the SYSTEM account, which cannot be used by an interactive user.
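What the kernel actually checks is the numeric user ID: any account with UID 0 has superuser rights, whatever it happens to be named. You can verify this with:
    id root                         # uid=0(root) gid=0(root) ...
    awk -F: '$3 == 0' /etc/passwd   # list all accounts with UID 0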
{ "source": [ "https://unix.stackexchange.com/questions/10956", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3845/" ] }
11,004
Sometimes I start editing configuration files in /etc using Vim, but forget to use sudo to start Vim. The inevitable result then is that after finishing my edits I encounter the dreaded notice that I don't have the permission to save the file. Mostly the edits are small enough that I just exit Vim and do the whole thing again as root. I could of course save to a location I can write to and then copy as root, but that is also somewhat annoying. But I'm sure there is an easier way to become root or use sudo from inside Vim, without having to discard the changes. If the method would not rely on sudo being set up for the user that would be even better.
sudo cannot change the effective user of an existing process, it always creates a new process that has the elevated privileges and the original shell is unaffected. This is a fundamental of UNIX design. I most often just save the file to /tmp as a workaround. If you really want to save it directly you might try using a feature of Vim where it can pipe a file to another process. Try saving with this command: :w !sudo dd of=% Tested and works. Vim will then ask you to reload the file, but it's unnecessary: you can just press o to avoid reloading and losing your undo history. You can even save this to a Vim command/function or even bind it to a key for easy access, but I'll leave that as an exercise to the reader.
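A widely used variant of the same trick pipes the buffer through tee instead of dd:
    :w !sudo tee % > /dev/null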
{ "source": [ "https://unix.stackexchange.com/questions/11004", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2703/" ] }
11,005
Does OpenBSD use bcrypt by default? Why doesn't every modern Linux Distribution use BCRYPT? http://codahale.com/how-to-safely-store-a-password/ https://secure.wikimedia.org/wikipedia/en/wiki/Bcrypt WHY????
{ "source": [ "https://unix.stackexchange.com/questions/11005", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
11,018
Say I have file named ugly_name.tar , which when extracted, becomes ugly_name directory. What command can I use such that the resulting directory name is pretty_name instead?
This should work: mkdir pretty_name && tar xf ugly_name.tar -C pretty_name --strip-components 1 -C changes to the specified directory before unpacking (or packing). --strip-components removes the specified number of directories from the filenames stored in the archive. Note that this is not really portable. GNU tar and at least some of the BSD tars have the --strip-components option, but doesn't seem to exist on other unix-like platforms. The dumb way of doing this would work pretty much everywhere though. tar xf ugly_name.tar && mv ugly_name pretty_name
{ "source": [ "https://unix.stackexchange.com/questions/11018", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
11,044
I have a symlink ~/link -> ~/a/really/long/path When I do cd ~/link cd .. it takes me to ~ but I want to go to ~/a/really/long Is there a way to do this? I am using bash.
Bash (as well as ksh, zsh, and even ash) tracks directory changes so that cd /foo/bar && cd .. always takes you to /foo even if bar is a symlink. Pass the -P option to cd to ignore the tracked change and follow the “physical” directory structure: cd -P .. See help cd or man builtins for documentation about the bash builtin cd . If you really dislike the directory tracking feature, you can turn it off with set -P in bash ( set -o no_chase_link in zsh).
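To see both views of where you are, pwd accepts the same -L/-P distinction:
    cd ~/link
    pwd       # logical path: ~/link
    pwd -P    # physical path: ~/a/really/long/path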
{ "source": [ "https://unix.stackexchange.com/questions/11044", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4249/" ] }
11,067
I have a fairly large file (35Gb), and I would like to filter this file in situ (i.e. I don't have enough disk space for another file), specifically I want to grep and ignore some patterns — is there a way to do this without using another file? Let's say I want to filter out all the lines containing foo: for example...
At the system call level this should be possible. A program can open your target file for writing without truncating it and start writing what it reads from stdin. When reading EOF, the output file can be truncated. Since you are filtering lines from the input, the output file write position should always be less than the read position. This means you should not corrupt your input with the new output. However, finding a program that does this is the problem. dd(1) has the option conv=notrunc that does not truncate the output file on open, but it also does not truncate at the end, leaving the original file contents after the grep contents (with a command like grep pattern bigfile | dd of=bigfile conv=notrunc ) Since it is very simple from a system call perspective, I wrote a small program and tested it on a small (1MiB) full loopback filesystem. It did what you wanted, but you really want to test this with some other files first. It's always going to be risky overwriting a file. overwrite.c /* This code is placed in the public domain by camh */ #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <errno.h> int main(int argc, char **argv) { int outfd; char buf[1024]; int nread; off_t file_length; if (argc != 2) { fprintf(stderr, "usage: %s <output_file>\n", argv[0]); exit(1); } if ((outfd = open(argv[1], O_WRONLY)) == -1) { perror("Could not open output file"); exit(2); } while ((nread = read(0, buf, sizeof(buf))) > 0) { if (write(outfd, buf, nread) == -1) { perror("Could not write to output file"); exit(4); } } if (nread == -1) { perror("Could not read from stdin"); exit(3); } if ((file_length = lseek(outfd, 0, SEEK_CUR)) == (off_t)-1) { perror("Could not get file position"); exit(5); } if (ftruncate(outfd, file_length) == -1) { perror("Could not truncate file"); exit(6); } close(outfd); exit(0); } You would use it as: grep pattern bigfile | overwrite bigfile I'm mostly posting this for others to comment on before you try it. Perhaps someone else knows of a program that does something similar that is more tested.
{ "source": [ "https://unix.stackexchange.com/questions/11067", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6484/" ] }
11,100
I am using Linux as guest OS in VirtualBox. I deleted huge number of files from its filesystem. Now i want to shrink the filesystem image file (vdi). The shrinking works by compressing filesystem image wherever it has "null" value in disk. It seems an application called zerofree can write "null" into free space of filesystem in such a way that it becomes sparse. But the instructions say it works only on ext2/ext3. I have ext4 on my guest OS. Why won't it work on ext 4 (reason cited is "extents", but can someone shed more light on it) ? Will it work, If i mount the ext 4 as ext 3 and then remount as ext 4 ? Any other tools that can do similiar thing as zerofree on ext ?
The page you reference ( http://intgat.tigress.co.uk/rmy/uml/index.html ) states: The utility also works on ext3 or ext4 filesystems. So I'm not sure where you're getting that it doesn't work on ext4 filesystems. Note that the zerofree utility is different from the zerofree kernel patch that is mentioned on the same page (which indeed does not seem to have a version for ext4). Update: At least in the case of VirtualBox, I don't think you need this utility at all. In my testing, on a stock Ubuntu 10.04 install on ext4, you can just zero out the filesystem like so: $ dd if=/dev/zero of=test.file ...wait for the virtual disk to fill, then $ rm test.file and shut the VM down. Then on your VirtualBox host do: $ VBoxManage modifyhd --compact yourImage.vdi and you'll recover all the unused space.
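If you do want to run zerofree, a sketch of typical usage (the device name here is illustrative, and the filesystem must be unmounted or mounted read-only at the time):
    sudo zerofree -v /dev/sdb1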
{ "source": [ "https://unix.stackexchange.com/questions/11100", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5763/" ] }
11,102
Could you advise me what to write in crontab so that it runs some job (for testing I will use /usr/bin/chromium-browser ) every 15 seconds?
You can't go below one minute granularity with cron. What you can do is, every minute, run a script that runs your job, waits 15 seconds and repeats. The following crontab line will start some_job every 15 seconds. * * * * * for i in 0 1 2; do some_job & sleep 15; done; some_job This script assumes that the job will never take more than 15 seconds. The following slightly more complex script takes care of not running the next instance if one took too long to run. It relies on date supporting the %s format (e.g. GNU or Busybox, so you'll be ok on Linux). If you put it directly in a crontab, note that % characters must be written as \% in a crontab line. end=$(($(date +%s) + 45)) while true; do some_job & [ $(date +%s) -ge $end ] && break sleep 15 wait done [ $(date +%s) -ge $(($end + 15)) ] || some_job I will however note that if you need to run a job as often as every 15 seconds, cron is probably the wrong approach. Although unices are good with short-lived processes, the overhead of launching a program every 15 seconds might be non-negligible (depending on how demanding the program is). Can't you run your application all the time and have it execute its task every 15 seconds?
{ "source": [ "https://unix.stackexchange.com/questions/11102", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6215/" ] }
11,110
When I run ls -l , the following is displayed: >: ls -l total 320 -rw-r--r-- 1 foo staff 633 5 Apr 13:23 A.class -rw-r--r-- 1 foo staff 296 5 Apr 13:24 A.java ... What does the total signify? Size? If so, what size?
{ "source": [ "https://unix.stackexchange.com/questions/11110", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3682/" ] }
11,125
I'm using Debian squeeze, and running LVM on top of software RAID 1. I just accidentally just discovered that most of the links under /dev/mapper are missing, though my system seems to be still functioning correctly. I'm not sure what happened. The only thing I can imagine that caused it was my failed attempt to get a LXC fedora container to work. I ended up deleting a directory /cgroup/laughlin , corresponding to the container, but I can't imagine why that should have caused the problem. /dev/mapper looked (I made some changes, see below) approximately like orwell:/dev/mapper# ls -la total 0 drwxr-xr-x 2 root root 540 Apr 12 05:08 . drwxr-xr-x 22 root root 4500 Apr 12 05:08 .. crw------- 1 root root 10, 59 Apr 8 10:32 control lrwxrwxrwx 1 root root 7 Mar 29 08:28 debian-root -> ../dm-0 lrwxrwxrwx 1 root root 8 Apr 12 03:32 debian-video -> ../dm-23 debian-video corresponds to a LV I had just created. However, I have quite a number of VGs on my system, corresponding to 4 VGs spread across 4 disks. vgs gives orwell:/dev/mapper# vgs VG #PV #LV #SN Attr VSize VFree backup 1 2 0 wz--n- 186.26g 96.26g debian 1 7 0 wz--n- 465.76g 151.41g olddebian 1 12 0 wz--n- 186.26g 21.26g testdebian 1 3 0 wz--n- 111.75g 34.22g I tried running /dev/mapper# vgscan --mknodes and some devices were created (see output below), but they aren't symbolic links to the dm devices as they should be, so I'm not sure if this is useless or worse. Would they get in the way of recreation of the correct links? Should I delete these devices again? I believe that udev creates these links, so would a reboot fix this problem, or would I get an unbootable system? What should I do to fix this? Are there any diagnostics/sanity checks I should run to make sure there aren't other problems I haven't noticed? Thanks in advance for any assistance. orwell:/dev/mapper# ls -la total 0 drwxr-xr-x 2 root root 540 Apr 12 05:08 . drwxr-xr-x 22 root root 4500 Apr 12 05:08 .. brw-rw---- 1 root disk 253, 1 Apr 12 05:08 backup-local_src brw-rw---- 1 root disk 253, 2 Apr 12 05:08 backup-video crw------- 1 root root 10, 59 Apr 8 10:32 control brw-rw---- 1 root disk 253, 15 Apr 12 05:08 debian-boot brw-rw---- 1 root disk 253, 16 Apr 12 05:08 debian-home brw-rw---- 1 root disk 253, 22 Apr 12 05:08 debian-lxc_laughlin brw-rw---- 1 root disk 253, 21 Apr 12 05:08 debian-lxc_squeeze lrwxrwxrwx 1 root root 7 Mar 29 08:28 debian-root -> ../dm-0 brw-rw---- 1 root disk 253, 17 Apr 12 05:08 debian-swap lrwxrwxrwx 1 root root 8 Apr 12 03:32 debian-video -> ../dm-23 brw-rw---- 1 root disk 253, 10 Apr 12 05:08 olddebian-etch_template brw-rw---- 1 root disk 253, 13 Apr 12 05:08 olddebian-fedora brw-rw---- 1 root disk 253, 8 Apr 12 05:08 olddebian-feisty brw-rw---- 1 root disk 253, 9 Apr 12 05:08 olddebian-gutsy brw-rw---- 1 root disk 253, 4 Apr 12 05:08 olddebian-home brw-rw---- 1 root disk 253, 11 Apr 12 05:08 olddebian-lenny brw-rw---- 1 root disk 253, 7 Apr 12 05:08 olddebian-msi brw-rw---- 1 root disk 253, 5 Apr 12 05:08 olddebian-oldchresto brw-rw---- 1 root disk 253, 3 Apr 12 05:08 olddebian-root brw-rw---- 1 root disk 253, 14 Apr 12 05:08 olddebian-suse brw-rw---- 1 root disk 253, 6 Apr 12 05:08 olddebian-vgentoo brw-rw---- 1 root disk 253, 12 Apr 12 05:08 olddebian-wsgi brw-rw---- 1 root disk 253, 20 Apr 12 05:08 testdebian-boot brw-rw---- 1 root disk 253, 18 Apr 12 05:08 testdebian-home brw-rw---- 1 root disk 253, 19 Apr 12 05:08 testdebian-root
These days /dev is on tmpfs and is created from scratch each boot by udev . You can safely reboot and these links will come back. You should also find LVM symlinks to the /dev/dm-X nodes in the /dev/<vg> directories, one directory for each volume group. However, those nodes re-created by vgscan --mknodes will also work fine, assuming they have the right major/minor numbers - and it's a safe assumption they were created properly. You can probably also get udev to re-create the symlinks using udevadm trigger with an appropriate match, testing with --dry-run until it is right. It hardly seems worth the effort though when a reboot will fix it too.
{ "source": [ "https://unix.stackexchange.com/questions/11125", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4671/" ] }
11,128
I have some sql dumps that I am looking at the differences between. diff can obviously show me the difference between two lines, but I'm driving myself nuts trying to find which values in the long list of comma-separated values are actually the ones causing the lines to be different. What tool can I use to point out the exact character differences between two lines in certain files?
There's wdiff , the word-diff for that. On desktop, meld can highlight the differences within a line for you.
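For example, with two hypothetical dump files:
    wdiff dump1.sql dump2.sql | less
    git diff --no-index --word-diff dump1.sql dump2.sql   # git's word diff also works on files outside a repository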
{ "source": [ "https://unix.stackexchange.com/questions/11128", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/394/" ] }
11,163
I'm trying to run rsync -a --files-from=~/.rsync_file_list ~/destination and it tells me: rsync error: syntax or usage error (code 1) at options.c(1652) [client=3.0.7] . Can anyone enlighten me as to what I'm doing wrong? The file ~/.rsync_file_list just contains a list of file names prefaced with ~/ , separated by newlines (though I've also tried listing them all on the same line, with the same result). If I run rsync -a ~/file ~/file2 ~/file3 ~/destination it works just fine. So what am I missing about the --files-from option?
Okay, I found the problem. The file containing file names has to contain only file names; no paths, relative or otherwise; After specifying --files-from=FILE , rsync requires a source directory in which to find the files listed. So the command should be rsync -a --files-from=~/.rsync_file_list $HOME/ /destination . .rsync_file_list should read: file 1 file 2 file 3
{ "source": [ "https://unix.stackexchange.com/questions/11163", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2333/" ] }
11,172
Transmission is intermittently hanging on my NAS. If I send SIGTERM, it doesn't disappear from the process list and a <defunct> label appears next to it. If I send a SIGKILL, it still doesn't disappear and I can't terminate the parent because the parent is init . The only way I can get rid of the process and restart Transmission is to reboot. I realize the best thing I can do is try and fix Transmission (and I've tried), but I'm a novice at compiling and I wanted to make sure my torrents finished before I start messing around with it.
You cannot kill a <defunct> process (also known as zombie process) as it is already dead. The system keeps zombie processes for the parent to collect the exit status. If the parent does not collect the exit status then the zombie processes will stay around forever. The only way to get rid of those zombie processes is by killing the parent. If the parent is init then you can only reboot. Zombie processes take up almost no resources so there is no performance cost in letting them linger. Although having zombie processes around usually means there is a bug in some of your programs. Init should usually collect all children. If init has zombie children then there is a bug in init (or somewhere else, but a bug it is). http://en.wikipedia.org/wiki/Zombie_process
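You can list zombies and their parent PIDs like this; the PPID column tells you which process is failing to reap its children:
    ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'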
{ "source": [ "https://unix.stackexchange.com/questions/11172", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6216/" ] }
11,204
There are specific lines that I want to remove from a file. Let's say it's line 20-37 and then line 45. How would I do that without specifying the content of those lines?
With sed , like so: sed '20,37d; 45d' < input.txt > output.txt If you wanted to do this in-place: sed --in-place '20,37d; 45d' file.txt
{ "source": [ "https://unix.stackexchange.com/questions/11204", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
11,238
I tried to rm -rf a folder, and got "device or resource busy". In Windows, I would have used LockHunter to resolve this. What's the linux equivalent? (Please give as answer a simple "unlock this" method, and not complete articles like this one . Although they're useful, I'm currently interested in just ASimpleMethodThatWorks™)
The tool you want is lsof , which stands for list open files . It has a lot of options, so check the man page, but if you want to see all open files under a directory: lsof +D /path That will recurse through the filesystem under /path , so beware doing it on large directory trees. Once you know which processes have files open, you can exit those apps, or kill them with the kill(1) command.
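fuser (from psmisc) is another option; with -m it lists every process using any file on the filesystem containing the given path, which is a common cause of "device or resource busy" on mount points:
    fuser -vm /path
    # fuser -km /path   # kill those processes (use with care)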
{ "source": [ "https://unix.stackexchange.com/questions/11238", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1594/" ] }
11,262
I've not used dd all that much, but so far it's not failed me yet. Right now, I've had a dd going for over 12 hours - I'm writing an image back to the disk it came from - and I'm getting a little worried, as I was able to dd from the disk to the image in about 7 hours. I'm running OSX 10.6.6 on a MacBook with a Core 2 Duo at 2.1ghz/core with 4gb RAM. I'm reading from a .dmg on a 7200rpm hard drive (the boot drive), and I'm writing to a 7200rpm drive connected over a SATA-to-USB connector. I left the blocksize at default, and the image is about 160gb. EDIT: And, after 14 hours of pure stress, the dd worked perfectly after all. Next time, though, I'm going to run it through pv and track it with strace . Thanks to everyone for all your help.
You can send dd a certain signal using the kill command to make it output its current status. The signal is INFO on BSD systems (including OSX) and USR1 on Linux. In your case: kill -INFO $PID You can find the process id ( $PID above) with the ps command; or see pgrep and pkill alternatives on mac os x for more convenient methods. More simply, as AntoineG points out in his answer , you can type ctrl-T at the shell running dd to send it the INFO signal. As an example on Linux, you could make all active dd processes output status like this: pkill -USR1 -x dd After outputting its status, dd will continue copying.
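Two further options (image and device names below are illustrative): GNU dd from coreutils 8.24 or later has a built-in progress display (not in the BSD dd shipped with OS X), and pv can be placed in the pipeline from the start:
    dd if=image.dmg of=/dev/disk2 bs=1M status=progress   # GNU dd only
    pv image.dmg | dd of=/dev/disk2 bs=1m                 # anywhere pv is installed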
{ "source": [ "https://unix.stackexchange.com/questions/11262", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5286/" ] }
11,305
grep --before-context 5 shows 5 lines before the match. I want to show everything before the match. Doing grep --before-context 99999999 would work but it is not very... professional. How to show all the file up to the match?
Sed is better for that. Just do: sed '/PATTERN/q' FILE It works like this: For each line, we look if it matches /PATTERN : if yes, we print it and quit otherwise, we print it This is the most efficient solution, because as soon as it sees PATTERN , it quits. Without q , sed would continue to read the rest of the file, and do nothing with it. For big files it can make a difference. This trick can also be used to emulate head : sed 10q FILE
{ "source": [ "https://unix.stackexchange.com/questions/11305", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2305/" ] }
11,311
Specifically: I did sudo mkdir /work , and would like to verify it indeed sits on my harddrive and not mapped to some other drive. How do I check where this folder is physically located?
The df(1) command will tell you the device that a file or directory is on: df /work The first field has the device that the file or directory is on. e.g. $ df /root Filesystem 1K-blocks Used Available Use% Mounted on /dev/sda1 1043289 194300 795977 20% / If the device is a logical volume, you will need to determine which block device(s) the logical volume is on. For this, you can use the lvs(8) command: # df /usr Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/orthanc-usr 8256952 4578000 3259524 59% /usr # lvs -o +devices /dev/mapper/orthanc-usr LV VG Attr LSize Origin Snap% Move Log Copy% Convert Devices usr orthanc -wi-ao 8.00g /dev/sda3(0) The last column tells you that the logical volume usr in the volume group orthanc ( /dev/mapper/orthanc-usr ) is on the device /dev/sda3 . Since a volume group can span multiple physical volumes, you may find that you have multiple devices listed. Another type of logical block device is a md (Multiple Devices, and used to be called meta-disk I think) device, such as /dev/md2 . To look at the components of a md device, you can use mdadm --detail or look in /proc/mdstat # df /srv Filesystem 1K-blocks Used Available Use% Mounted on /dev/md2 956626436 199340344 757286092 21% /srv # mdadm --detail /dev/md2 ...details elided... Number Major Minor RaidDevice State 0 8 3 0 active sync /dev/sda3 1 8 19 1 active sync /dev/sdb3 You can see that /dev/md2 is on the /dev/sda3 and /dev/sdb3 devices. There are other methods that block devices can be nested (fuse, loopback filesystems) that will have their own methods for determining the underlying block device, and you can even nest multiple layers so you have to work your way down. You'll have to take each case as it comes.
{ "source": [ "https://unix.stackexchange.com/questions/11311", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1594/" ] }
11,343
Does anyone know of any linux tool specifically designed to treat files as sets and perform set operations on them? Like difference, intersection, etc?
Assuming elements are strings of characters other than NUL and newline (beware that newline is valid in file names though), you can represent a set as a text file with one element per line and use some of the standard Unix utilities. Set Membership $ grep -Fxc 'element' set # outputs 1 if element is in set # outputs >1 if set is a multi-set # outputs 0 if element is not in set $ grep -Fxq 'element' set # returns 0 (true) if element is in set # returns 1 (false) if element is not in set $ awk '$0 == "element" { s=1; exit }; END { exit !s }' set # returns 0 if element is in set, 1 otherwise. $ awk -v e='element' '$0 == e { s=1; exit } END { exit !s }' Set Intersection $ comm -12 <(sort set1) <(sort set2) # outputs intersect of set1 and set2 $ grep -xF -f set1 set2 $ sort set1 set2 | uniq -d $ join -t <(sort A) <(sort B) $ awk '!done { a[$0]; next }; $0 in a' set1 done=1 set2 Set Equality $ cmp -s <(sort set1) <(sort set2) # returns 0 if set1 is equal to set2 # returns 1 if set1 != set2 $ cmp -s <(sort -u set1) <(sort -u set2) # collapses multi-sets into sets and does the same as previous $ awk '{ if (!($0 in a)) c++; a[$0] }; END{ exit !(c==NR/2) }' set1 set2 # returns 0 if set1 == set2 # returns 1 if set1 != set2 $ awk '{ a[$0] }; END{ exit !(length(a)==NR/2) }' set1 set2 # same as previous, requires >= gnu awk 3.1.5 Set Cardinality $ wc -l < set # outputs number of elements in set $ awk 'END { print NR }' set $ sed '$=' set Subset Test $ comm -23 <(sort -u subset) <(sort -u set) | grep -q '^' # returns true iff subset is not a subset of set (has elements not in set) $ awk '!done { a[$0]; next }; { if !($0 in a) exit 1 }' set done=1 subset # returns 0 if subset is a subset of set # returns 1 if subset is not a subset of set Set Union $ cat set1 set2 # outputs union of set1 and set2 # assumes they are disjoint $ awk 1 set1 set2 # ditto $ cat set1 set2 ... setn # union over n sets $ sort -u set1 set2 # same, but doesn't assume they are disjoint $ sort set1 set2 | uniq $ awk '!a[$0]++' set1 set2 # ditto without sorting Set Complement $ comm -23 <(sort set1) <(sort set2) # outputs elements in set1 that are not in set2 $ grep -vxF -f set2 set1 # ditto $ sort set2 set2 set1 | uniq -u # ditto $ awk '!done { a[$0]; next }; !($0 in a)' set2 done=1 set1 Set Symmetric Difference $ comm -3 <(sort set1) <(sort set2) | tr -d '\t' # assumes not tab in sets # outputs elements that are in set1 or in set2 but not both $ sort set1 set2 | uniq -u $ cat <(grep -vxF -f set1 set2) <(grep -vxF -f set2 set1) $ grep -vxF -f set1 set2; grep -vxF -f set2 set1 $ awk '!done { a[$0]; next }; $0 in a { delete a[$0]; next }; 1; END { for (b in a) print b }' set1 done=1 set2 Power Set All possible subsets of a set displayed space separated, one per line: $ p() { [ "$#" -eq 0 ] && echo || (shift; p "$@") | while read r; do printf '%s %s\n%s\n' "$1" "$r" "$r"; done; } $ p $(cat set) (assumes elements don't contain SPC, TAB (assuming the default value of $IFS ), backslash, wildcard characters). 
Set Cartesian Product $ while IFS= read -r a; do while IFS= read -r b; do echo "$a, $b"; done < set1; done < set2 $ awk '!done { a[$0]; next }; { for (i in a) print i, $0 }' set1 done=1 set2 Disjoint Set Test $ comm -12 <(sort set1) <(sort set2) # does not output anything if disjoint $ awk '++seen[$0] == 2 { exit 1 }' set1 set2 # returns 0 if disjoint # returns 1 if not Empty Set Test $ wc -l < set # outputs 0 if the set is empty # outputs >0 if the set is not empty $ grep -q '^' set # returns true (0 exit status) unless set is empty $ awk '{ exit 1 }' set # returns true (0 exit status) if set is empty Minimum $ sort set | head -n 1 # outputs the minimum (lexically) element in the set $ awk 'NR == 1 { min = $0 }; $0 < min { min = $0 }; END { print min }' # ditto, but does numeric comparison when elements are numerical Maximum $ sort test | tail -n 1 # outputs the maximum element in the set $ sort -r test | head -n 1 $ awk '$0 > max { max = $0 }; END { print max }' # ditto, but does numeric comparison when elements are numerical All available at http://www.catonmat.net/blog/set-operations-in-unix-shell-simplified/
{ "source": [ "https://unix.stackexchange.com/questions/11343", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6596/" ] }
11,369
How can I delete a line in VI? Here what I am doing right now: Open up the terminal alt + ctrl + t vi a.txt I move my cursor to the line which I wan to delete, then what key-combination is should use to delete line in vi editor ?
Pressing dd will remove that line (actually it will cut it). So you can paste it via p .
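A few related variants:
    5dd    delete (cut) five lines starting at the cursor
    dG     delete from the current line to the end of the file
    :10d   delete line 10 with an ex command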
{ "source": [ "https://unix.stackexchange.com/questions/11369", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6162/" ] }
11,376
I have seen -- used in the compgen command. For example: compgen -W "foo bar baz" -- b What is the meaning of the -- in there?
More precisely, a double dash ( -- ) is used in most Bash built-in commands and many other commands to signify the end of command options, after which only positional arguments are accepted. Example use: Let's say you want to grep a file for the string -v . Normally -v will be considered the option to reverse the matching meaning (only show lines that do not match), but with -- you can grep for the string -v like this: grep -- -v file
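The same idiom helps with files whose names start with a dash:
    rm -- -v          # remove a file literally named -v
    touch -- --help   # create a file literally named --help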
{ "source": [ "https://unix.stackexchange.com/questions/11376", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/872/" ] }
11,402
In vim, when I hit ESC to return to command mode, the cursor moves one character to the left. This is not what I would hope for; occasionally I immediately hit l to move back to that spot, perhaps to delete a character. Is there a reason for this behavior? Is this convenient for a pattern of usage that I'm missing?
In insert mode, the cursor is between characters, or before the first or after the last character. In normal mode, the cursor is over a character (newlines are not characters for this purpose). This is somewhat unusual: most editors always put the cursor between characters, and have most commands act on the character after (not, strictly speaking, under ) the cursor. This is perhaps partly due to the fact that before GUIs, text terminals always showed the cursor on a character (underline or block, perhaps blinking). This abstraction fails in insert mode because that requires one more position (posts vs fences). Switching between modes has to move the cursor by a half-character, so to speak. The i command moves left, to put the cursor before the character it was over. The a command moves right. Going out of insert mode (by pressing Esc ) moves the cursor left if possible (if it's at the beginning of the line, it's moved right instead). I suppose the Esc behavior sort of makes sense. Often, you're typing at the end of the line, and there Esc can only go left. So the general behavior is the most common behavior. Think of the character under the cursor as the last interesting character, and of the insert command as a . You can repeat a Esc without moving the cursor, except that you'll be bumped one position right if you start at the beginning of a non-empty line.
{ "source": [ "https://unix.stackexchange.com/questions/11402", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5045/" ] }
11,454
Is there any intrinsic difference between a builtin command and another command which can nominally do the same thing? e.g. Do builtins get "special" treatment? ... is there less overhead running them? ... or are they just simply 'built in', like the dashboard of your car? ... and is there a definitive (current) list of these builtins?
From your comments, you seem to be confused about exactly what a shell is. The kernel is responsible for managing the system. It's the part that actually loads and runs programs, accesses files, allocates memory, etc. But the kernel has no user interface; you can only communicate with it by using another program as an intermediary. A shell is a program that prints a prompt, reads a line of input from you, and then interprets it as one or more commands to manipulate files or run other programs. Before the invention of the GUI, the shell was the primary user interface of an OS. On MS-DOS, the shell was called command.com and people wouldn't usually change it. On Unix, however, there have long been multiple shells that users could pick from. They can be divided into 3 types. The Bourne-compatible shells use the syntax derived from the original Bourne shell . C shells use the syntax from the original C shell . Then there are nontraditional shells that invent their own syntax, or borrow one from some programming language, and are generally much less popular than the first two types. A built-in command is simply a command that the shell carries out itself, instead of interpreting it as a request to load and run some other program. This has two main effects. First, it's usually faster, because loading and running a program takes time. Of course, the longer the command takes to run, the less significant the load time is compared to the overall run time (because the load time is fairly constant). Secondly, a built-in command can affect the internal state of the shell. That's why commands like cd must be built-in, because an external program can't change the current directory of the shell. Other commands, like echo , might be built-in for efficiency, but there's no intrinsic reason they can't be external commands. Which commands are built-in depends on the shell that you're using. You'll have to consult its documentation for a list (e.g., bash 's built-in commands are listed in Chapter 4 of its manual ). The type command can tell you if a command is built-in (if your shell is POSIX-compatible ), because POSIX requires that type be a built-in. If which is not a built-in in your shell, then it probably won't know about your shell's built-ins, but will just look for external programs.
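For example, in bash:
    type cd       # cd is a shell builtin
    type ls       # ls is /bin/ls (or an alias, if you have one)
    compgen -b    # list every builtin this bash knows about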
{ "source": [ "https://unix.stackexchange.com/questions/11454", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2343/" ] }
11,470
My problem: I have a Python program, and the user launches it using sudo . Sometimes I have to get the user's home directory, and I can do this only by knowing their name: import pwd pwd.getpwnam(username) So: how can I get the name of the user that launched the program?
When you fire off something with sudo a couple of environment variables get set, specifically I think you are looking for SUDO_UID . These should be accessible to any program running through the usual channels of accessing environment variables. You can see the other things set by cheating like this from a shell: sudo env | grep SUDO
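In Python that might look like the sketch below; sudo also sets SUDO_USER, which saves you the numeric-ID lookup:
    import os, pwd

    username = os.environ.get('SUDO_USER')
    if username is None:                      # not running under sudo
        username = pwd.getpwuid(os.getuid()).pw_name
    home = pwd.getpwnam(username).pw_dir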
{ "source": [ "https://unix.stackexchange.com/questions/11470", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4289/" ] }
11,477
I followed these instructions to build Shadow, which provides the groupadd command. I am now getting an error when trying this: $ groupadd automake1.10 groupadd: 'automake1.10' is not a valid group name I checked alphanumeric names, and they work okay.
See the source code, specifically libmisc/chkname.c . Shadow is pretty conservative: names must match the regexp [_a-z][-0-9_a-z]*\$? and may be at most GROUP_NAME_MAX_LENGTH characters long (configure option, default 16; user names can usually go up to 32 characters, subject to compile-time determination). Debian relaxes the check a lot. As of squeeze, anything but whitespace and : is allowed. See bug #264879 and bug #377844 . POSIX requires allowing letters of either case, digits and ._- ( like in file names ). POSIX doesn't set any restriction if you don't care about portability. A number of recommended restrictions come from usage: Colons, newlines and nulls are right out; you just can't use them in /etc/passwd or /etc/group . An name consisting solely of digits is a bad idea — chown and chgrp are supposed to treat a digit sequence as a name if it's in the user/group database, but other applications may treat any number as a numerical id. An initial - or a . in a user name is strongly not recommended, because many applications expect to be able to pass $user.$group to an external utility (e.g. chown $user.$group /path/to/file )¹. A . in a group name should cause less trouble, but I'd still recommend against it. / is likely to cause trouble too, because some programs expect to be able to use user names in file names. Any character that the shell would expand is probably risky. Non-ASCII characters should be ok if you don't care about sharing with systems that may use different encodings. ¹ All modern implementations expect chown $user:$group , but support chown $user.$group for backward compatibility, and there are too many applications out there that pass a dot to remove that compatibility support.
{ "source": [ "https://unix.stackexchange.com/questions/11477", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
11,544
According to the Filesystem Hierarchy Standard , /opt is for "the installation of add-on application software packages". /usr/local is "for use by the system administrator when installing software locally". These use cases seem pretty similar. Software not included with distributions usually is configured by default to install in either /usr/local or /opt with no particular rhyme or reason as to which they chose. Is there some difference I'm missing, or do both do the same thing, but exist for historical reasons?
While both are designed to contain files not belonging to the operating system, /opt and /usr/local are not intended to contain the same set of files. /usr/local is a place to install files built by the administrator, typically by using the make command (e.g., ./configure; make; make install ). The idea is to avoid clashes with files that are part of the operating system, which would either be overwritten or overwrite the local ones otherwise (e.g., /usr/bin/foo is part of the OS while /usr/local/bin/foo is a local alternative). All files under /usr are shareable between OS instances, although this is rarely done with Linux. This is a part where the FHS is slightly self-contradictory, as /usr is defined to be read-only, but /usr/local/bin needs to be read-write for local installation of software to succeed. The SVR4 file system standard, which was the FHS' main source of inspiration, is recommending to avoid /usr/local and use /opt/local instead to overcome this issue. /usr/local is a legacy from the original BSD. At that time, the source code of /usr/bin OS commands were in /usr/src/bin and /usr/src/usr.bin , while the source of locally developed commands was in /usr/local/src , and their binaries in /usr/local/bin . There was no notion of packaging (outside tarballs). On the other hand, /opt is a directory for installing unbundled packages (i.e. packages not part of the Operating System distribution, but provided by an independent source), each one in its own subdirectory. They are already built whole packages provided by an independent third party software distributor. Unlike /usr/local stuff, these packages follow the directory conventions (or at least they should). For example, someapp would be installed in /opt/someapp , with one of its command being /opt/someapp/bin/foo , its configuration file would be in /etc/opt/someapp/foo.conf , and its log files in /var/opt/someapp/logs/foo.access .
{ "source": [ "https://unix.stackexchange.com/questions/11544", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5164/" ] }
11,602
I have several .htm files which open in Gedit without any warning/error, but when I open these same files in Jedit , it warns me of invalid UTF-8 encoding... The HTML meta tag states "charset=ISO-8859-1". Jedit allows a List of fallback encodings and a List of encoding auto-detectors (currently "BOM XML-PI"), so my immediate problem has been resolved. But this got me thinking about: What if the meta data wasn't there? When the encoding information is just not available, is there a CLI program which can make a "best-guess" of which encodings may apply? And, although it is a slightly different issue; is there a CLI program which tests the validity of a known encoding?
The file command makes "best-guesses" about the encoding. Here demonstrated on a file containing a german umlaut encoded in utf-8: $ file umlaut-utf8.txt umlaut-utf8.txt: UTF-8 Unicode text And the same umlaut in two other encodings: $ file umlaut-iso88591.txt umlaut-utf16.txt umlaut-iso88591.txt: ISO-8859 text umlaut-utf16.txt: Little-endian UTF-16 Unicode text, with no line terminators And all three mashed together for an invalid encoding: $ file umlaut-mixed.txt umlaut-mixed.txt: data You can use the -i parameter to output in mime type: $ file -i * umlaut-iso88591.txt: text/plain; charset=iso-8859-1 umlaut-mixed.txt: application/octet-stream; charset=binary umlaut-utf16.txt: text/plain; charset=utf-16le umlaut-utf8.txt: text/plain; charset=utf-8 (on mac it is -I . because apple devs think different.) The file command is quite limited. It looks over some of the bytes and tries to guess what the encoding might be. If it recognizes a pattern it will say that it is this or that encoding. If it does not recognize a pattern, or if the recognized patterns contradict each other, it will say "data" (or binary in mime type). Which practically means no valid encoding recognized. This is similar to how you might be able to recognize a text as being spanish or french based on the distribution of characters and umlauts. If you were given a text where the distribution of characters makes no sense then you might conclude that it is an "invalid" text. But it might be a language you just haven't seen before. Compare this to Lorem Ipsum. A text made to look like a natural text but is actually nonsense: https://en.wikipedia.org/wiki/Lorem_ipsum Here is an example where file was not able to recognize the correct encoding: view file containing DOS text (box-drawing characters, CRLF line terminators) and escape sequences Here is more information about the file command: http://www.linfo.org/file_command.html How I created the files: $ echo ä > umlaut-utf8.txt You can copy this line and run it. It should create a file containing the umlaut in utf8. Check the hex dump: $ hexdump -C umlaut-utf8.txt 00000000 c3 a4 0a |...| 00000003 Convert to the other encodings: $ iconv -f utf8 -t iso88591 umlaut-utf8.txt > umlaut-iso88591.txt $ iconv -f utf8 -t utf16 umlaut-utf8.txt > umlaut-utf16.txt The hex dumps: $ hexdump -C umlaut-iso88591.txt 00000000 e4 0a |..| 00000002 $ hexdump -C umlaut-utf16.txt 00000000 ff fe e4 00 0a 00 |......| 00000006 Compare with https://en.wikipedia.org/wiki/Ä#Computer_encoding Create something "invalid" by mixing all three: $ cat umlaut-iso88591.txt umlaut-utf8.txt umlaut-utf16.txt > umlaut-mixed.txt
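For the second half of the question (testing whether a file is valid in a known encoding), iconv can serve as a validator: converting an encoding to itself exits non-zero at the first invalid sequence.
    iconv -f UTF-8 -t UTF-8 umlaut-mixed.txt > /dev/null && echo valid || echo invalid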
{ "source": [ "https://unix.stackexchange.com/questions/11602", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2343/" ] }
11,657
Is it necessary to defrag drives in Ubuntu? If so, how do I do it and how often should it be done?
Defragmenting is (or was) recommended under Windows because it had a poor filesystem implementation. Simple techniques such as allocating blocks for files in groups rather than one by one keep fragmentation down under Linux. Typical Linux filesystems only gain significantly from defragmentation on a nearly-full filesystem or with unusual write patterns. Most users don't need it , though heavy file sharers could benefit from it (filling a file in little bits in the middle is not the case ext3 was optimized for; if you're concerned about fragmentation and your bittorrent or other file sharing client offers that option, tell it to preallocate all files before starting to download). At the moment, there is no production-ready defragmentation tool for the common filesystems on Linux ( ext3 and ext4 ). If you installed Ubuntu 9.10 or newer, or converted an existing installation, you have an ext4 filesystem, which supports extents , further reducing fragmentation. For those cases where fragmentation does arise, an ext4 defragmentation tool is in the works, but it's not ready yet . Note that in general, the Linux philosophy and especially the Ubuntu philosophy is that common maintenance tasks should happen automatically without your needing to intervene .
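If you want to measure fragmentation rather than guess, e2fsprogs ships filefrag, and recent releases also include e4defrag for ext4 (availability depends on your distribution version; the path below is illustrative):
    sudo filefrag -v /path/to/some/large/file   # show how many extents one file uses
    sudo e4defrag -c /home                      # report a fragmentation score without changing anything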
{ "source": [ "https://unix.stackexchange.com/questions/11657", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6756/" ] }
11,707
I am unable to edit text files using vim in cygwin. I have to press i many times to insert text. Sometimes it works and sometimes doesn't. Whenever I move cursor up down I have to press I many times. What could be the problem? Does backspace work in cygwin?
Cygwin vim ships with vim's default configuration, which leaves vim in vi compatibility mode where it tries to emulate the original vi as closely as possible. Among other limitations, arrow keys do not work in that mode, and backspace just moves the cursor left rather than erasing a character. Creating an empty ~/.vimrc is sufficient to disable vi compatibility mode: touch ~/.vimrc Having said that, i to enter insert mode should work anyway. You'll need to provide more details on where and how you're running vim. Also, are you actually running the vim that comes with Cygwin, or the native Windows version of vim? Update: You can add the settings below to ~/.vimrc to make it behave like a typical default vim setup: set nocompatible set backspace=indent,eol,start set backup set history=50 set ruler set background=dark set showcmd set incsearch syntax on set hlsearch If vim does not pick up your vimrc file, it may be looking for a .virc file instead. In this case, rename the file and the changes will be applied.
{ "source": [ "https://unix.stackexchange.com/questions/11707", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6797/" ] }
11,733
Normally you would write: diff file1 file2 But I would like to diff a file and output from the command (here I make command a trivial one): diff file1 <(cat file2 | sort) Ok, this work when I enter this manually at shell prompt, but when I put exactly the same line in shell script, and then run the script, I get error. So, the question is -- how to do this correctly? Of course I would like avoid writing the output to a temporary file.
I suspect your script and your shell are different. Perhaps you have #!/bin/sh at the top of your script as the interpreter but you are using bash as your personal shell. You can find out what shell you run in a terminal by running echo $SHELL . An easier way to do this which should work across most shells would be to use a pipe redirect instead of the file read operator you give. The symbol '-' is a standard nomenclature for reading STDIN and can frequently be used as a replacement for a file name in an argument list: cat file2 | sort | diff file1 - Or to avoid a useless use of cat : sort < file2 | diff file1 -
{ "source": [ "https://unix.stackexchange.com/questions/11733", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5884/" ] }
11,765
On Redhat systems cron logs into /var/log/cron file. What is the equivalent of that file on Debian systems?
Under Ubuntu, cron writes logs via rsyslogd to /var/log/syslog . You can redirect messages from cron to another file by uncommenting one line in /etc/rsyslog.d/50-default.conf . I believe, the same applies to Debian.
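Concretely, the line to uncomment in /etc/rsyslog.d/50-default.conf looks roughly like this:
    cron.*    /var/log/cron.log
and then restart the daemon, e.g. sudo service rsyslog restart.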
{ "source": [ "https://unix.stackexchange.com/questions/11765", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6215/" ] }
11,801
I need to replace all white spaces inside my text with commas. I'm currently using this line but it doesn't work: I get as output a text file which is exactly the same as the original one: sed 's/[:blank:]+/,/g' orig.txt > modified.txt Thanks
With GNU sed : sed -e 's/\s\+/,/g' orig.txt > modified.txt Or with perl : perl -pne 's/\s+/,/g' < orig.txt > modified.txt Edit: To exclude newlines in Perl you could use a double negative 's/[^\S\n]+/,/g' or match against just the white space characters of your choice 's/[ \t\r\f]+/,/g' .
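For portability, a sketch of a POSIX-only variant (not from the answer above) that uses a bracketed character class with the BRE repetition operator; note the double brackets, which the command in the question was missing:

sed 's/[[:blank:]]\{1,\}/,/g' orig.txt > modified.txt

[[:blank:]] matches spaces and tabs but not newlines, so the line structure is preserved.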
{ "source": [ "https://unix.stackexchange.com/questions/11801", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2504/" ] }
11,835
When I convert a pdf file to bunch of jpg files using convert -quality 100 file.pdf page_%04d.jpg I have appreciable quality loss. However if I do the following, there is no (noticeable) quality loss: Start gscan2pdf, choose file-> import (and choose file.pdf). Then go to the temporary directory of gscan2pdf. There are many pnm files (one for every page of the pdf-file). Now I do for file in *.pnm; do convert $file $file.jpg done The resulting jpg-files are (roughly) of the same quality as the original pdf (which is what I want). Now my question is, if there is a simple command line way to convert the pdf file to a bunch of jpg files without noticeable quality loss? (The solution above is too complicated and time consuming).
It's not clear what you mean by "quality loss". That could mean a lot of different things. Could you post some samples to illustrate? Perhaps cut the same section out of the poor quality and good quality versions (as a PNG to avoid further quality loss). Perhaps you need to use -density to do the conversion at a higher dpi: convert -density 300 file.pdf page_%04d.jpg (You can prepend -units PixelsPerInch or -units PixelsPerCentimeter if necessary. My copy defaults to ppi.) Update: As you pointed out, gscan2pdf (the way you're using it) is just a wrapper for pdfimages (from poppler ). pdfimages does not do the same thing that convert does when given a PDF as input. convert takes the PDF, renders it at some resolution, and uses the resulting bitmap as the source image. pdfimages looks through the PDF for embedded bitmap images and exports each one to a file. It simply ignores any text or vector drawing commands in the PDF. As a result, if what you have is a PDF that's just a wrapper around a series of bitmaps, pdfimages will do a much better job of extracting them, because it gets you the raw data at its original size. You probably also want to use the -j option to pdfimages , because a PDF can contain raw JPEG data. By default, pdfimages converts everything to PNM format, and converting JPEG > PPM > JPEG is a lossy process. So, try pdfimages -j file.pdf page You may or may not need to follow that with a convert to .jpg step (depending on what bitmap format the PDF was using). I tried this command on a PDF that I had made myself from a sequence of JPEG images. The extracted JPEGs were byte-for-byte identical to the source images. You can't get higher quality than that.
{ "source": [ "https://unix.stackexchange.com/questions/11835", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5289/" ] }
11,856
I am getting output from a program that first produces one line that is a bunch of column headers, and then a bunch of lines of data. I want to cut various columns of this output and view it sorted according to various columns. Without the headers, the cutting and sorting is easily accomplished via the -k option to sort along with cut or awk to view a subset of the columns. However, this method of sorting mixes the column headers in with the rest of the lines of output. Is there an easy way to keep the headers at the top?
Stealing Andy's idea and making it a function so it's easier to use: # print the header (the first line of input) # and then run the specified command on the body (the rest of the input) # use it in a pipeline, e.g. ps | body grep somepattern body() { IFS= read -r header printf '%s\n' "$header" "$@" } Now I can do: $ ps -o pid,comm | body sort -k2 PID COMMAND 24759 bash 31276 bash 31032 less 31177 less 31020 man 31167 man ... $ ps -o pid,comm | body grep less PID COMMAND 31032 less 31177 less
{ "source": [ "https://unix.stackexchange.com/questions/11856", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3445/" ] }
11,871
A lot of the time I edit a file with nano , try to save and get a permission error because I forgot to run it as root. Is there some quick way I can become root with sudo from within the editor, without having to re-open and re-edit the file?
No, you can't give a running program permissions that it doesn't have when it starts, that would be the security hole known as 'privilege escalation'¹. Two things you can do: Save to a temporary file in /tmp or wherever, close the editor, then dump the contents of the temp file into the file you were editing. sudo cp $TMPFILE $FILE . Note that it is not recommended to use mv for this because of the change in file ownership and permissions it is likely to cause, you just want to replace the file content not the file placeholder itself. Background the editor with Ctrl + z , change the file ownership or permissions so you can write to it, then use fg to get back to the editor and save. Don't forget to fix the permissions! ¹ Some editors are actually able to do this by launching a new process with different permissions and passing the data off to that process for saving. See for example this related question for other solutions in advanced editors that allow writing the file buffer to a process pipe. Nano does not have the ability to launch a new process or pass data to other processes, so it's left out of this party.
{ "source": [ "https://unix.stackexchange.com/questions/11871", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3125/" ] }
11,874
What user and group should I chown it to? All admins are in 'admins' group. How do I chmod it?
{ "source": [ "https://unix.stackexchange.com/questions/11874", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
11,889
I'd like to be able to paste the X selection using the keyboard. Currently I have to use the middle mouse button to do this. I gather that faking a middle mouse button press is fairly easy to do, but such a solution would also require moving the mouse pointer to the location of the text caret. Is there a better way to do this?
On some default linux setups, Shift + Insert will perform an X-selection-paste . As you noted, this is distinctly different from the X-clipboard-paste command, the binding for which often varies by application. If that doesn't work, here are a couple of other keys to try: Ctrl + V Ctrl + Shift + V Ctrl + Shift + Insert No go? Your Desktop Environment or Window Manager probably doesn't have them configured, and it's complicated because, even under the banner of one DE or WM, each toolkit (e.g. GTK, Qt, etc.) may well have different default bindings. Some programs (e.g. gvim ) even have their own internal copy registers that are not necessarily synced to the graphical environment they run in. To top it off, even when a program does use the X-clipboard system, X has multiple systems to choose from. The two most basic are the selection buffer, which always has whatever the last thing selected was (except when it doesn't), and the copy buffer, which things usually need to be specifically copied into. To do an explicit copy into the latter system you can try any of these on for size: Ctrl + C Shift + Ctrl + C Ctrl + Insert If none of that is just magically working for you, there are two ways you can go. There's an app for that! ™ Use one of the various clipboard manager programs to handle this for you. The most popular seem to be Parcellite and Glippy , but you can check out other alternatives here . See also this question about advanced clipboard managers Hack it yourself. So let's say you want to hack it. Short of writing your own code and tapping into the X API, the hacker tools for the job are a couple of little command line utilities that give you a window into the mind of X. Just a small window mind you, the whole view is too scary. The first tool is xsel . This little jobber will spit out whatever is in X's selection buffer at any given time. Now you need to get that into your program. There are two options for this. One is xdotool which allows you to mimic sending events to the Xorg input system. You can use its type method like xdotool type foo_bar to mimic typing 'foo_bar' at the cursor. Combined, you get something like this: $ xdotool type $(xsel) The other one is xvkbd which sends keyboard events from a lower subsystem. You can pipe keystrokes into it on STDIN. Combined with xsel , you get something like this: $ xsel | xvkbd -xsendevent -file - Great. Now for that keybinding to run this stuff. If you run Gnome-2, you can add a custom shortcut in System -> Preferences -> Keyboard shortcuts . If you use a different DE or WM this exercise is left up to the reader. The last note is that when binding commands to keyboard shortcuts it is often necessary to only have one command, not two commands connected with a pipe like we use above. You can accomplish this by invoking your piped command as a command string argument to a new shell like this: sh -c 'xsel | xvkbd -xsendevent -file -' sh -c 'xdotool type "$(xsel)"'
{ "source": [ "https://unix.stackexchange.com/questions/11889", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1535/" ] }
11,907
Why does a shebang need a path? Wrong #!ruby Correct #!/usr/local/bin/ruby #!/usr/bin/env ruby The operating system should have the information regarding the path for a registered command, and why does it still expect it to be given?
Probably to keep the kernel simpler. I don't think the kernel ever searches your path to find an executable. That's handled by the C library. #! processing is done in the kernel, which doesn't use the standard C library. Also, I don't think the kernel has a notion of what your path is. $PATH is an environment variable, and only processes have an environment. The kernel doesn't. I suppose it could access the environment of the process that did the exec, but I don't think anything currently in the kernel ever accesses environment variables like that.
{ "source": [ "https://unix.stackexchange.com/questions/11907", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6200/" ] }
11,914
I would like to generate a statistic from my git repository to create a time-chart: <commit> <timestamp> <changed-lines> 35abf648cfc 2011-04-04t17:23:58 +20 -4 93acb668f32 2011-04-04t17:59:01 -4 +1 so I can plot a "nice" graph for that with gnuplot or so. Where <changed-lines> might be anything that is generatable, i.e. +20 -3 (added 20 lines, removed 3 lines) or just 23 or whatever. Important is, that lines are counted -- changed files would not be useful in my scenario. It would be good, if I could apply this only on a part of the repo, because some directories contain nasty binary files, which would destroy a statistics. I guess git log might come in there somehow, but I have no idea where to start...
{ "source": [ "https://unix.stackexchange.com/questions/11914", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5590/" ] }
11,929
I use this sleep 900; <command> on my shell. Just wanted to know is there is some alternate/better way that you use?
You are searching for at ( at@wikipedia )? usr@srv % at now + 15 min at> YOUR COMMAND HERE You can define multiple commands that should be executed in 15 min; separate them with a return. Confirm all commands with control+d .
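A non-interactive variant, sketched here with a placeholder command, is to feed the job to at on standard input instead of typing it at the at> prompt:

echo 'wget -q http://example.com/file' | at now + 15 minutes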
{ "source": [ "https://unix.stackexchange.com/questions/11929", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/912/" ] }
11,939
$ cat data.txt aaaaaa aaaaaa cccccc aaaaaa aaaaaa bbbbbb $ cat data.txt | uniq aaaaaa cccccc aaaaaa bbbbbb $ cat data.txt | sort | uniq aaaaaa bbbbbb cccccc $ The result that I need is to display all the lines from the original file removing all the duplicates (not just the consecutive ones), while maintaining the original order of statements in the file . Here, in this example, the result that I actually was looking for was aaaaaa cccccc bbbbbb How can I perform this generalized uniq operation in general?
perl -ne 'print unless $seen{$_}++' data.txt Or, if you must have a useless use of cat : cat data.txt | perl -ne 'print unless $seen{$_}++' Here's an awk translation, for systems that lack Perl: awk '!seen[$0]++' data.txt cat data.txt | awk '!seen[$0]++'
{ "source": [ "https://unix.stackexchange.com/questions/11939", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/912/" ] }
11,946
As a comment in I'm confused as to why "| true" in a makefile has the same effect as "|| true" user cjm wrote: Another reason to avoid | true is that if the command produced enough output to fill up the pipe buffer, it would block waiting for true to read it. Do we have some way of finding out what the size of the pipe buffer is?
The capacity of a pipe buffer varies across systems (and can even vary on the same system). I am not sure there is a quick, easy, and cross platform way to just lookup the capacity of a pipe. Mac OS X, for example, uses a capacity of 16384 bytes by default, but can switch to 65336 byte capacities if large write are made to the pipe, or will switch to a capacity of a single system page if too much kernel memory is already being used by pipe buffers (see xnu/bsd/sys/pipe.h , and xnu/bsd/kern/sys_pipe.c ; since these are from FreeBSD, the same behavior may happen there, too). One Linux pipe(7) man page says that pipe capacity is 65536 bytes since Linux 2.6.11 and a single system page prior to that (e.g. 4096 bytes on (32-bit) x86 systems). The code ( include/linux/pipe_fs_i.h , and fs/pipe.c ) seems to use 16 system pages (i.e. 64 KiB if a system page is 4 KiB), but the buffer for each pipe can be adjusted via a fcntl on the pipe (up to a maximum capacity which defaults to 1048576 bytes, but can be changed via /proc/sys/fs/pipe-max-size )). Here is a little bash / perl combination that I used to test the pipe capacity on my system: #!/bin/bash test $# -ge 1 || { echo "usage: $0 write-size [wait-time]"; exit 1; } test $# -ge 2 || set -- "$@" 1 bytes_written=$( { exec 3>&1 { perl -e ' $size = $ARGV[0]; $block = q(a) x $size; $num_written = 0; sub report { print STDERR $num_written * $size, qq(\n); } report; while (defined syswrite STDOUT, $block) { $num_written++; report; } ' "$1" 2>&3 } | (sleep "$2"; exec 0<&-); } | tail -1 ) printf "write size: %10d; bytes successfully before error: %d\n" \ "$1" "$bytes_written" Here is what I found running it with various write sizes on a Mac OS X 10.6.7 system (note the change for writes larger than 16KiB): % /bin/bash -c 'for p in {0..18}; do /tmp/ts.sh $((2 ** $p)) 0.5; done' write size: 1; bytes successfully before error: 16384 write size: 2; bytes successfully before error: 16384 write size: 4; bytes successfully before error: 16384 write size: 8; bytes successfully before error: 16384 write size: 16; bytes successfully before error: 16384 write size: 32; bytes successfully before error: 16384 write size: 64; bytes successfully before error: 16384 write size: 128; bytes successfully before error: 16384 write size: 256; bytes successfully before error: 16384 write size: 512; bytes successfully before error: 16384 write size: 1024; bytes successfully before error: 16384 write size: 2048; bytes successfully before error: 16384 write size: 4096; bytes successfully before error: 16384 write size: 8192; bytes successfully before error: 16384 write size: 16384; bytes successfully before error: 16384 write size: 32768; bytes successfully before error: 65536 write size: 65536; bytes successfully before error: 65536 write size: 131072; bytes successfully before error: 0 write size: 262144; bytes successfully before error: 0 The same script on Linux 3.19: /bin/bash -c 'for p in {0..18}; do /tmp/ts.sh $((2 ** $p)) 0.5; done' write size: 1; bytes successfully before error: 65536 write size: 2; bytes successfully before error: 65536 write size: 4; bytes successfully before error: 65536 write size: 8; bytes successfully before error: 65536 write size: 16; bytes successfully before error: 65536 write size: 32; bytes successfully before error: 65536 write size: 64; bytes successfully before error: 65536 write size: 128; bytes successfully before error: 65536 write size: 256; bytes successfully before error: 65536 write size: 512; bytes successfully before error: 65536 
write size: 1024; bytes successfully before error: 65536 write size: 2048; bytes successfully before error: 65536 write size: 4096; bytes successfully before error: 65536 write size: 8192; bytes successfully before error: 65536 write size: 16384; bytes successfully before error: 65536 write size: 32768; bytes successfully before error: 65536 write size: 65536; bytes successfully before error: 65536 write size: 131072; bytes successfully before error: 0 write size: 262144; bytes successfully before error: 0 Note: The PIPE_BUF value defined in the C header files (and the pathconf value for _PC_PIPE_BUF ), does not specify the capacity of pipes, but the maximum number of bytes that can be written atomically (see POSIX write(2) ). Quote from include/linux/pipe_fs_i.h : /* Differs from PIPE_BUF in that PIPE_SIZE is the length of the actual memory allocation, whereas PIPE_BUF makes atomicity guarantees. */
{ "source": [ "https://unix.stackexchange.com/questions/11946", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3125/" ] }
11,953
Is there any way to make a log file for maintaining some data in /var/log/ with the help of some library function or system call in C on Linux? I would also like to know the standards that we should follow to write and process logs. Thanks
The standard way to log from a C program is syslog . Start by including the header file: #include <syslog.h> Then early in your program, you should configure syslog by calling openlog : openlog("programname", 0, LOG_USER); The first argument is the identification or the tag, which is automatically added at the start of each message. Put your program's name here. The second argument is the options you want to use, or 0 for the normal behavior. The full list of options is in man 3 syslog . One you might find useful is LOG_PID , which makes syslog also record the process id in the log message. Then, each time you want to write a log message, you call syslog : syslog(LOG_INFO, "%s", "Message"); The first argument is the priority. The priority ranges from DEBUG (least important) to EMERG (only for emergencies) with DEBUG , INFO , and ERR being the most commonly used. See man 3 syslog for your options. The second and third arguments are a format and a message, just like printf. Which log file this appears in depends on your syslog settings. With a default setup, it probably goes into /var/log/messages . You can set up a custom log file by using one of the facilities in the range LOG_LOCAL0 to LOG_LOCAL7 . You use them by changing: openlog("programname", 0, LOG_USER); to openlog("programname", 0, LOG_LOCAL0); or openlog("programname", 0, LOG_LOCAL1); etc. and adding a corresponding entry to /etc/syslog.conf , e.g. local1.info /var/log/programname.log and restarting the syslog server, e.g. pkill -HUP syslogd The .info part of local1.info above means that all messages that are INFO or more important will be logged, including INFO , NOTICE , ERR (error), CRIT (critical), etc., but not DEBUG . Or, if you have rsyslog , you could try a property-based filter , e.g. :syslogtag, isequal, "programname:" /var/log/programname.log The syslogtag should contain a ":". Or, if you are planning on distributing your software to other people, it's probably not a good idea to rely on using LOG_LOCAL or an rsyslog filter. In that case, you should use LOG_USER (if it's a normal program) or LOG_DAEMON (if it's a server), write your startup messages and error messages using syslog , but write all of your log messages to a file outside of syslog . For example, Apache HTTPd logs to /var/log/apache2/* or /var/log/httpd/* , I assume using regular open / fopen and write / printf calls.
{ "source": [ "https://unix.stackexchange.com/questions/11953", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5861/" ] }
11,969
When I try to launch any GUI application through a terminal in my RHEL 5.4, it takes a very long time for the GUI to come up. Also, sometimes it just hangs. I see the following messages in the terminal : Launching a SCIM daemon with Socket FrontEnd... GTK IM Module SCIM : Cannot connect to Panel! What might be the issue?
{ "source": [ "https://unix.stackexchange.com/questions/11969", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6956/" ] }
11,983
I see POSIX mentioned often and everywhere, and I had assumed it to be the baseline UNIX standard.. until I noticed the following excerpt on a Wikipedia page: The Open Group The Open Group is most famous as the certifying body for the UNIX trademark, and its publication of the Single UNIX Specification technical standard , which extends the POSIX standards and is the official definition of a UNIX system . If the official definition of a UNIX system is an extension of POSIX, then what exactly is POSIX? ,,, It surely seems to be a touchstone of the UNIX world, but I don't know how it fits into the overall picture.
POSIX was first standardized in 1988, long before the Single UNIX Specification. It was one of the attempts at unifying all the various UNIX forks and UNIX-like systems. POSIX is an IEEE Standard, but as the IEEE does not own the UNIX® trademark, the standard is not UNIX® though it is based on the existing UNIX API at that time. The first standard POSIX.1 is formally known as IEEE std 1003.1-1988.[ 1 ] IEEE charged a substantial fee to obtain a copy of the standard. The Open Group released the Single UNIX Specification (SUSv2) in 1997 based on IEEE's work on the POSIX standard. SUSv3 was released in 2001 from a joint working group between IEEE and The Open Group known as the Austin Group. SUSv3 is also known as POSIX:2001[ 2 ]. There are now also POSIX:2004 and POSIX:2008, which is the core of SUSv4. As for what UNIX® is, UNIX® is whatever the current registered trademark holder says it is. Since 1994, that is The Open Group. Novell acquired the UNIX® systems business from AT&T/USL which is where UNIX® was born. In 1994, they sold the right to the UNIX® trademark to X/Open[ 3 ] now known as The Open Group. They then sold the UNIX® source code to SCO as UNIXWARE®.[ 3 ] UNIX® itself has forked many times[ 4 ][ 5 ] partly due to AT&T's licensing model. Purchasing UNIX® gave you the complete source of the operating system and the full tool-chain to build it. Modifications to the source can be distributed and used by anyone who owned a license to UNIX® from AT&T. The license fee was in the thousands. BSD was a project at Berkeley which added a number of enhancements to the UNIX® operating system. BSD code was released under a much more liberal license than AT&T's source and did not require a license fee or even a requirement to be distributed with source, unlike the GPL that the GNU Project and Linux use. This has caused a good part of the BSD code to be included with various commercial UNIX forks. By around 4.3BSD, they had nearly replaced any need for the original AT&T UNIX® source code. FreeBSD/NetBSD/OpenBSD are all forks of 4.3BSD that are a complete operating system and have none of the original AT&T source code. Nor do they have the right to the UNIX® trademark, but much of their code is used by commercial UNIX operating systems. The Socket API used on UNIX was developed on BSD and the Unix Fast Filesystem code was borrowed and used on various UNIX Operating Systems like Solaris with their own enhancements. Linux was developed in 1991, but was developed from scratch unlike BSD and uses the existing GNU Project which is a clean-room implementation of much of the UNIX user-space. It implements much of POSIX for compatibility and is UNIX-like in design, but it does not have the close connection to AT&T or UNIX® that the BSDs have.
{ "source": [ "https://unix.stackexchange.com/questions/11983", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2343/" ] }
12,015
Let's say one creates a file like so: touch myFile You enter some text in it with vim or whatever, and then use cat myFile to spit the contents out into the terminal. Now, what happens when I use cat on any image? Say, cat myPNG.png I just get a bunch of garbage. It just made me think about what the cat command is attempting to do, and where all of this "garbage" comes from. Just curious.
It may be useful to explain how files work at the lowest level: A file is a stream of bytes, zero or more in length. A byte is 8 bits. Since there are 256 combinations of 8 bits, that means a byte is any number from 0 to 255. So every file is, at its lowest level, a big hunk of numbers ranging from 0 to 255. It is completely up to programs and users to decide what the numbers "mean." If we want to store text, then it's probably a good idea to use the numbers as code, where each number is assigned a letter. That's what ASCII and Unicode do. If we want to display text, then it's probably a good idea to build a device or write a program that can take these numbers and display a bitmap looking like the corresponding ASCII/Unicode code. That's what terminals and terminal emulators do. Of course, for graphics, we probably want the numbers to represent pixels and their colors. Then we'll need a program that goes through the file, reads all the bytes, and renders the picture accordingly. A terminal emulator is expecting the bytes to be ASCII/Unicode numbers and is going to behave differently, for the same chunk of bytes (or file).
{ "source": [ "https://unix.stackexchange.com/questions/12015", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6950/" ] }
12,032
Is is possible to open a new-window with its working directory set to the one I am currently in. I am using zsh , if it matters.
Starting in tmux 1.9 the default-path option was removed, so you need to use the -c option with new-window , and split-window (e.g. by rebinding the c , " , and % bindings to include -c '#{pane_current_path}' ). See some of the other answers to this question for details. A relevant feature landed in the tmux SVN trunk in early February 2012. In tmux builds that include this code, tmux key bindings that invoke new-window will create a new window with the same current working directory as the current pane’s active processes (as long as the default-path session option is empty; it is by default). The same is true for the pane created by the split-window command when it is invoked via a binding. This uses special platform-specific code, so only certain OSes are supported at this time: Darwin (OS X), FreeBSD, Linux, OpenBSD, and Solaris. This should be available in the next release of tmux (1.7?). With tmux 1.4, I usually just use tmux neww in a shell that already has the desired current working directory. If, however, I anticipate needing to create many windows with the same current working directory (or I want to be able to start them with the usual <prefix> c key binding), then I set the default-path session option via tmux set-option default-path "$PWD" in a shell that already has the desired current working directory (though you could obviously do it from any directory and just specify the value instead). If default-path is set to a non-empty value, its value will be used instead of “inheriting” the current working directory from command-line invocations of tmux neww . The tmux FAQ has an entry titled “How can I open a new window in the same directory as the current window?” that describes another approach; it is a bit convoluted though.
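A rough sketch of the rebinding mentioned for tmux >= 1.9 (these exact bindings are an assumption about a typical setup, not quoted from the answer); the same lines, minus the leading tmux, can go in ~/.tmux.conf:

tmux bind-key c new-window -c '#{pane_current_path}'
tmux bind-key '"' split-window -v -c '#{pane_current_path}'
tmux bind-key % split-window -h -c '#{pane_current_path}'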
{ "source": [ "https://unix.stackexchange.com/questions/12032", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/898/" ] }
12,038
Is it possible to highlight (set a background colour) for the whole line of the prompt in zsh ? In my emacs config I have the line on which the cursor sits a slightly different colour to the window background, which is a great visual aid. I'm wondering whether it's possible to do the same in my terminal/zsh prompt, so that it effectivly "draws a line" under everthing that's been run. I've tried setting PROMPT='%{$bg[grey]%}# ' in my .zshrc but the highlight only extends as far as I type, not to the edge of the terminal. Is what I'm trying to achieve possible?
{ "source": [ "https://unix.stackexchange.com/questions/12038", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28538/" ] }
12,040
I have 2 questions. During Linux installation we specify memory space for 2 mount points - root and swap. Are there any other mount points created without the users notice? Is this statement correct: "mounting comes into the picture only when dealing with different partitions. i.e, you cannot mount, say, /proc unless it's a different partition"?
There are misconceptions behind your questions. Swap is not mounted. Mounting isn't limited to partitions. Partitions A partition is a slice¹ of disk space that's devoted to a particular purpose. Here are some common purposes for partitions. A filesystem , i.e. files organized as a directory tree and stored in a format such as ext2, ext3, FFS, FAT, NTFS, … Swap space, i.e. disk space used for paging (and storing hibernation images ). Direct application access. Some databases store their data directly on a partition rather than on a filesystem to gain a little performance. (A filesystem is a kind of database anyway.) A container for other partitions. For example, a PC extended partition , or a disk slice containing BSD partitions, or an LVM physical volume (containing eventually logical volumes which can themselves be considered partitions), … Filesystems Filesystems present information in a hierarchical structure. Here are some common kinds of filesystems: Disk-backed filesystems, such as ext2, ext3, FFS, FAT, NTFS, … The backing need not be directly on a disk partition, as seen above. For example, this could be an LVM logical volume, or a loop mount . Memory-backed filesystems, such as Solaris and Linux's tmpfs . Filesystems that present information from the kernel, such as proc and sysfs on Linux. Network filesystems, such as NFS , Samba , … Application-backed filesystems, of which FUSE has a large collection . Application-backed filesystems can do just about anything: make an FTP server appear as a filesystem, give an alternate view of a filesystem where file names are case-insensitive or converted to a different encoding, show archive contents as if they were directories, … Mounting Unix presents files in a single hierarchy, usually called “the filesystem” (but in this answer I'll not use the word “filesystem” in this sense to keep confusion down). Individual filesystems must be grafted onto that hierarchy in order to access them.³ You make a filesystem accessible by mounting it. Mounting associates the root directory of the filesystem you're mounting with an existing directory in the file hierarchy. A directory that has such an association is known as a mount point. For example, the root filesystem is mounted at boot time (before the kernel starts any process²) to the / directory. The proc filesystem over which some unix variants such as Solaris and Linux expose information about processes is mounted on /proc , so that /proc/42/environ designates the file /42/environ on the proc filesystem, which (on Linux, at least) contains a read-only view of the environment of process number 42. If you have a separate filesystem e.g. for /home , then /home/john/myfile.txt designates the file whose path is /john/myfile.txt from the root of the home filesystem. Under Linux, it's possible for the same filesystem to be accessible through more than one path, thanks to bind mounts . A typical Linux system has many mounted filesystems. (This is an example; different distributions, versions and setups will lead to different filesystems being mounted.) / : the root filesystem, mounted before the kernel loads the first process. The bootloader tells the kernel what to use as the root filesystem (it's usually a disk partition but could be something else such as an NFS export). /proc : the proc filesystem, with process and kernel information. /sys : the sysfs filesystem, with information about hardware devices.
/dev : an in-memory filesystem where device files are automatically created by udev based on available hardware. /dev/pts : a special-purpose filesystem containing device files for running terminal emulators . /dev/shm : an in-memory filesystem used for internal purposes by the system's standard library. Depending on what system components you have running, you may see other special-purpose filesystems such as binfmt_misc (used by the foreign executable file format kernel subsystem ), fusectl (used by FUSE ), nfsd (used by the kernel NFS server), … Any filesystem explicitly mentioned in /etc/fstab (and not marked noauto ) is mounted as part of the boot process. Any filesystem automatically mounted by HAL (or equivalent functionality) following the insertion of a removable device such as a USB key. Any filesystem explicitly mounted with the mount command. ¹ Informally speaking here. ² Initrd and such are beyond the scope of this answer. ³ This is unlike Windows, which has a separate hierarchy for each filesystem, e.g. c: or \\hostname\sharename .
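To see how this looks on a running system, you can list the current mount table (findmnt is Linux-specific, from util-linux; mount is available nearly everywhere):

findmnt                 # tree view of every mount point
mount                   # traditional flat listing
cat /proc/self/mounts   # the kernel's own view of the table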
{ "source": [ "https://unix.stackexchange.com/questions/12040", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6962/" ] }
12,068
In order to find out how long certain operations within a Bash (v4+) script take, I would like to parse the output from the time command "separately" and (ultimately) capture it within a Bash variable ( let VARNAME=... ). Now, I am using time -f '%e' ... (or rather command time -f '%e' ... because of the Bash built-in), but since I already redirect the output of the executed command I'm really lost as to how I would go about capturing the output of the time command. Basically the problem here is to separate the output of time from the output of the executed command(s). What I want is the functionality of counting the amount of time in seconds (integers) between starting a command and its completion. It doesn't have to be the time command or the respective built-in. Edit: given the two useful answers below, I wanted to add two clarifications. I do not want to throw away the output of the executed command, but it will not really matter whether it ends up on stdout or stderr. I would prefer a direct approach over an indirect one (i.e. catching output directly as opposed to storing it in intermediate files). The solution using date so far comes closest to what I want.
To get the output of time into a var use the following: usr@srv $ mytime="$(time ( ls ) 2>&1 1>/dev/null )" usr@srv $ echo "$mytime" real 0m0.006s user 0m0.001s sys 0m0.005s You can also just ask for a single time type, e.g. utime: usr@srv $ utime="$( TIMEFORMAT='%lU';time ( ls ) 2>&1 1>/dev/null )" usr@srv $ echo "$utime" 0m0.000s To get the time you can also use date +%s.%N , so take it before and after execution and calculate the diff: START=$(date +%s.%N) command END=$(date +%s.%N) DIFF=$(echo "$END - $START" | bc) # echo $DIFF
{ "source": [ "https://unix.stackexchange.com/questions/12068", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5462/" ] }
12,075
I have a server log that outputs a specific line of text into its log file when the server is up. I want to execute a command once the server is up, and hence do something like the following: tail -f /path/to/serverLog | grep "server is up" ...(now, e.g., wget on server)? What is the best way to do this?
A simple way would be awk. tail -f /path/to/serverLog | awk ' /Printer is on fire!/ { system("shutdown -h now") } /new USB high speed/ { system("echo \"New USB\" | mail admin") }' And yes, both of those are real messages from a kernel log. Perl might be a little more elegant to use for this and can also replace the need for tail. If using perl, it will look something like this: open(my $fd, "<", "/path/to/serverLog") or die "Can't open log"; while(1) { if(eof $fd) { sleep 1; $fd->clearerr; next; } my $line = <$fd>; chomp($line); if($line =~ /Printer is on fire!/) { system("shutdown -h now"); } elsif($line =~ /new USB high speed/) { system("echo \"New USB\" | mail admin"); } }
{ "source": [ "https://unix.stackexchange.com/questions/12075", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3445/" ] }
12,107
It's a situation that has happened quite often to me: after I press (with a different intention) Ctrl-S in a terminal, the interaction (input or output) with it is frozen. It's probably a kind of "scroll lock" or whatever. How do I unfreeze the terminal after this? (This time, I have been working with apt-shell inside a bash inside urxvt --not sure which of them is responsible for the special handling of Ctrl-S : I was searching the history of commands backwards with C-r , as usual for readline, but then I wanted to go "back" forwards through the history with the usual--at least in Emacs-- C-s ( 1 , 2 , 3 ), but that caused the terminal to freeze. Well, scrolling/paging to view past things still works in the terminal, but no interaction with the processes run there.)
Press Ctrl - Q to unfreeze. This stop/start scheme is software flow control , which is implemented by the OS's terminal device driver rather than the shell or terminal emulator. It can be configured with the stty command. To disable it altogether, put stty -ixon in a shell startup script such as ~/.bashrc or ~/.zshrc . To instead just allow any key to get things flowing again, use stty ixany .
{ "source": [ "https://unix.stackexchange.com/questions/12107", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4319/" ] }
12,118
I just added and modified a .desktop file in my /home/user/.local/share/applications folder. Is there any way to refresh the icon and caption in the list of applications without logging out?
You can restart the gnome-shell by pressing Alt + F2 and then typing in either " restart " or just " r " and pressing Enter . Otherwise I've noticed that it automatically refreshes .desktop files after waiting a little while.
{ "source": [ "https://unix.stackexchange.com/questions/12118", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4330/" ] }
12,195
I set up my ssh stuff with the help of this guide , and it used to work well (I could run hg push without being asked for a passphrase). What could have happened between then and now, considering that I'm still using the same home directory. $ cat .hg/hgrc [paths] default = ssh://[email protected]/tshepang/bloog $ hg push Enter passphrase for key '/home/wena/.ssh/id_rsa': pushing to ssh://[email protected]/tshepang/bloog searching for changes ...
You need to use an ssh agent. Short answer: try $ ssh-add before pushing. Supply your passphrase when asked. If you aren't already running an ssh agent you will get the following message: Could not open a connection to your authentication agent. In that situation, you can start one and set your environment up thusly eval $(ssh-agent) Then repeat the ssh-add command. It's worth taking a look at the ssh agent manpage .
{ "source": [ "https://unix.stackexchange.com/questions/12195", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/688/" ] }
12,197
I happen to know about rsync, and I use rsync to sync between my Mac and a Linux server as follows. rsync -r -t -v MAC LINUX rsync -r -t -v LINUX MAC I expected to run the first command to sync, but I needed the second command also when a change is made in LINUX. Am I missing something? Does rsync have an option to sync between two directories?
You want bi-directional sync. Take a look at unison, which does this: http://www.cis.upenn.edu/~bcpierce/unison/ For example, on Debian/Ubuntu: $ sudo apt-get install unison $ unison MAC/ LINUX/ If you have trouble with permissions (example ext4 -> FAT): $ unison -perms 0 vlc-2.2.0/ /media/sf_vlc/vlc Contacting server... Looking for changes Reconciling changes vlc-2.2.0 vlc new dir ----> / [f] Proceed with propagating updates? [] y Propagating updates
{ "source": [ "https://unix.stackexchange.com/questions/12197", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1090/" ] }
12,203
When I use the -a option as is asked and answered in Preserve the permissions with rsync , I got a lot of "rsync: failed to set permissions on" errors. rsync: failed to set permissions on "/ata/text/RCS/jvlc,v": Operation not permitted (1) rsync: failed to set permissions on "/ata/text/RCS/jvm,v": Operation not permitted (1) rsync: failed to set permissions on ... Why is this? The files are normal files with permission of 0664.
Most likely, rsync on the destination end is not running as a user with permission to chmod those files (which would have to be either the file's owner or root).
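One possible workaround, given here as an assumption rather than something stated in the answer, is to keep most of what -a does but stop asking rsync to preserve ownership and permissions (the paths are placeholders):

rsync -rltDv --no-perms --no-owner --no-group source/ user@host:/destination/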
{ "source": [ "https://unix.stackexchange.com/questions/12203", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1090/" ] }
12,227
Is there a way to give a particular name to a unix screen session? For instance, say I'm running the same program multiple times, each with different parameters and I want to tell which one is which.
You can name a session when starting it with the -S name option. From within a running screen, you can change it by typing Ctrl + A , : followed by sessionname name (1) . You can view running screen sessions with screen -ls , and connect to one by name with screen -xS name (1): name is an arbitrary string which will become the new session name. If the session name contains whitespace, quote it with single or double quotes. Within a single screen session, you can also name each window. Do this by typing Ctrl + A , A then the name you want. You can view an interactive list of named windows by typing Ctrl + A , " , and select the one you want to switch to from that list. Naming both screens and terminals within screens is really helpful for remembering what they are and why you started them in the first place.
{ "source": [ "https://unix.stackexchange.com/questions/12227", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3149/" ] }
12,247
Is there any way in unix to find out who accessed a certain file in the last week? It may be a user or some script that FTPs it to some other place. Can I get a list of user names who accessed a certain file? How can I find out who is accessing a particular file?
Unless you have extremely unusual logging policies in place, who accessed what file is not logged (that would be a huge amount of information). You can find out who was logged in at what time in the system logs; the last command gives you login history, and other logs such as /var/log/auth.log will tell you how users authenticated and from where they logged in (which terminal, or which host if remotely). The date at which a file was last read is called its access time, or atime for short . All unix filesystems can store it, but many systems don't record it, because it has a (usually small) performance penalty. ls -ltu /path/to/file or stat /path/to/file shows the file's access time. If a user accessed the file and wasn't trying to hide his tracks, his shell history (e.g. ~/.bash_history ) may have clues. To find out what or who has a file open now, use lsof /path/to/file . To log what happens to a file in the future, there are a few ways: Use inotifywait . inotifywait -me access /path/to will print a line /path/to/ ACCESS file when someone reads file . This interface won't tell you who accessed the file; you can call lsof /path/to/file as soon as this line appears, but there's a race condition (the access may be over by the time lsof gets going). LoggedFS is a stackable filesystem that provides a view of a filesystem tree, and can perform fancier logging of all accesses through that view. To configure it, see LoggedFS configuration file syntax . You can use Linux's audit subsystem to log a large number of things, including filesystem accesses. Make sure the auditd daemon is started, then configure what you want to log with auditctl . Each logged operation is recorded in /var/log/audit/audit.log (on typical distributions). To start watching a particular file: auditctl -w /path/to/file If you put a watch on a directory, the files in it and its subdirectories recursively are also watched.
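As a hypothetical follow-up to the auditctl example (the key name file-watch is arbitrary), you can tag the watch and later pull the matching events back out of the audit log:

auditctl -w /path/to/file -p rwa -k file-watch   # watch reads, writes and attribute changes
ausearch -k file-watch                           # list recorded accesses, including uid and pid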
{ "source": [ "https://unix.stackexchange.com/questions/12247", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7054/" ] }
12,260
I have Alice's public key. I want to send Alice an RSA encrypted message. How can I do it using the openssl command? The message is: Hi Alice! Please bring malacpörkölt for dinner!
In the openssl manual ( openssl man page), search for RSA , and you'll see that the command for RSA encryption is rsautl . Then read the rsautl man page to see its syntax. echo 'Hi Alice! Please bring malacpörkölt for dinner!' | openssl rsautl -encrypt -pubin -inkey alice.pub >message.encrypted The default padding scheme is the original PKCS#1 v1.5 (still used in many procotols); openssl also supports OAEP (now recommended) and raw encryption (only useful in special circumstances). Note that using openssl directly is mostly an exercise. In practice, you'd use a tool such as gpg (which uses RSA, but not directly to encrypt the message).
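For completeness, the matching decryption step would look roughly like this, assuming Alice keeps her private key in alice.key:

openssl rsautl -decrypt -inkey alice.key -in message.encrypted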
{ "source": [ "https://unix.stackexchange.com/questions/12260", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
12,283
In unix/linux, any number of consecutive forwardslashes in a path is generally equivalent to a single forwardslash. eg. $ cd /home/shum $ pwd /home/shum $ cd /home//shum $ pwd /home/shum $ cd /home///shum $ pwd /home/shum Yet for some reason two forwardslashes at the beginning of an absolute path are treated specially. eg. $ cd ////home $ pwd /home $ cd /// $ pwd / $ cd // $ pwd // $ cd home//shum $ pwd //home/shum Any other number of consecutive forwardslashes anywhere else in a path gets truncated, but two at the beginning will remain, even if you then navigate around the filesystem relative to it. Why is this? Is there any difference between /... and //... ?
For the most part, repeated slashes in a path are equivalent to a single slash . This behavior is mandated by POSIX and most applications follow suit. The exception is that “a pathname that begins with two successive slashes may be interpreted in an implementation-defined manner” (but ///foo is equivalent to /foo ). Most unices don't do anything special with two initial slashes. Linux, in particular, doesn't. Cygwin does: //hostname/path accesses a network drive (SMB). What you're seeing is not, in fact, Linux doing anything special with // : it's bash's current directory tracking. Compare: $ bash -c 'cd //; pwd' // $ bash -c 'cd //; /bin/pwd' / Bash is taking the precaution that the OS might be treating // specially and keeping it. Dash does the same. Ksh and zsh don't when they're running on Linux; I guess (I haven't checked) they have a compile-time setting.
{ "source": [ "https://unix.stackexchange.com/questions/12283", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7314/" ] }
12,342
From the command line, what is the easiest way to show the contents of multiple files? My directory looks like below. ./WtCgikkCFHmmuXQXp0FkZjVrnJSU64Jb9WSyZ52b ./xdIwVHnHY7dnuM9zcPDYQGZFdoVORPyMVD2IzjgM ./GZnATXO1e5Hh3Bz1bhgJjjwheIjjZqtnXR0hfOyj ./mWz7ehBNoTZmtDh8JG6sxw2lMJFwIovPzxDGECUY ./JN65F5v3RL2ilHPqNSx9N9D4lvVpqpbJ9lASd8TJ ./At9PS4y4nTiXUO0Z0USnbYkTPBla1msQRpwuruqE ./YiPyMZPCaUDZTiTczAvWII9bJrUqLXCFtH2pXEA2 ./JoakdlbRFPwAvWp1d4n8RvMoyMeizCoiriL2Sn2U ./wFPWZUus8Yu7UtESGABLCoqDg36cT90USO0xuyUr ./qseI9PgV1EJfZCDyGGeVytajqG7JeX0r7eA5S1JW ./zgFJpNgXyCsaVh38aCuMGuzHwIbwSNB6rQDdh27x ./.htaccess Now I'd like to view the contents of all files except .htaccess . It might look something like: WtCgikkCFHmmuXQXp0FkZjVrnJSU64Jb9WSyZ52b: Contents of file WtCgikkCFHmmuXQXp0FkZjVrnJSU64Jb9WSyZ52b. xdIwVHnHY7dnuM9zcPDYQGZFdoVORPyMVD2IzjgM: Contents of file xdIwVHnHY7dnuM9zcPDYQGZFdoVORPyMVD2IzjgM. [...] I think this should be doable with a combination of find, xargs and cat, but I haven't figured out how. Thanks for your time!
The standard head command and some implementations of tail print a header with the file name if you pass them more than one file argument (POSIX tail accepts only 0 or 1 file argument). To print the whole file, use tail -n +1 (print from the first line onwards, i.e. everything). Here, it looks like you want to see every file except the single one whose name begins with a dot. Dot files are “hidden” under unix: they don't appear in the default output of ls or in a wildcard match. So matching every non-hidden file is done with just * . tail -n +1 -- * Or if your tail can't take more than one argument: head -n 999999 -- * (some implementations of head also support head -n -0 -- * , but that's not standard either). The GNU implementations of head / tail also accept a -v / --verbose option that ensures the header is printed even when only one filename is given. -- is needed to cover the cases where one of the file names begins with a - . Beware that a file called - would still be taken by head / tail as meaning stdin. Using ./* would work around it but would mean the ./ prefix would be included in the header on output.
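If you do want the find-based spelling hinted at in the question, something along these lines should work (sketch only; -maxdepth is a GNU find extension, not POSIX):

find . -maxdepth 1 -type f ! -name '.*' -exec tail -n +1 -- {} +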
{ "source": [ "https://unix.stackexchange.com/questions/12342", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3775/" ] }
12,352
I'm trying to use wget to save the text of a web page. I run: wget "http://www.finance.yahoo.com/q/op?s=GOOG" > goog.txt to try and save the web page to goog.txt, but instead wget tells me: Saving to: `op?s=GOOG' Why is wget acting like this, and how can I get the expected behavior?
Use the -O option: wget "http://www.finance.yahoo.com/q/op?s=GOOG" -O goog.txt
{ "source": [ "https://unix.stackexchange.com/questions/12352", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77170/" ] }
12,356
And how does it also autocomplete aliases?
Depending on the command: Someone may have written a function to generate possible completions of arguments, including options. You'll find functions for some commands in /etc/bash_completion.d/* (or a different location on some systems). These functions are registered with the complete built-in (e.g. complete -F _find find tells bash to call the _find function when you press Tab on a find command). They use the compgen built-in to tell bash “here are the possible completions”. For some commands, bash will call the command with the argument --help and parse the output. Such commands can be registered with the complete built-in, e.g. complete -F _longopt ls . _longopt is in fact a completion generation function, that happens to parse a command's output rather than use a fixed list. (There are other more specialized completion functions that parse a command's output to generate possible completions; look in /etc/bash_completion.d/* for examples.) For things like aliases, the completion function looks them up in bash's internal tables. The complete built-in has options for that, e.g. -A for aliases.
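As a toy illustration of that machinery (the command name hello and its options are made up), a hand-written completion function and its registration look like this:

_hello() {
    # offer the listed words as completions for the word currently being typed
    COMPREPLY=( $(compgen -W '--help --version --verbose' -- "${COMP_WORDS[COMP_CWORD]}") )
}
complete -F _hello hello

After sourcing those lines, typing hello --<Tab> offers the three options.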
{ "source": [ "https://unix.stackexchange.com/questions/12356", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3493/" ] }
12,359
Is there any port monitoring tool to watch the packets written on the port? I especially want to check if my program written in Java works so I need some kind of tool to see if my little application is writing the messages to the port. How do I do this?
socat is a tool to connect (nearly) everything to (nearly) everything, and tee can duplicate streams. In your usecase you could connect your serial port /dev/ttyS0 to a PTY /tmp/ttyV0 , then point your application to the PTY, and have socat tee out Input and Output somewhere for you to observe. Googling "socat serial port pty tee debug" will point you to several "standard procedure" examples, one being: socat /dev/ttyS0,raw,echo=0 \ SYSTEM:'tee in.txt |socat - "PTY,link=/tmp/ttyV0,raw,echo=0,waitslave" |tee out.txt' The files in.txt and out.txt will then contain the captured data. Updated after comments: The socat syntax looks confusing at first, but in fact, it's just 2 nested statements. A small price to pay to such a powerful, versatile tool. If you need to setup your serial port, or send other ioctls, do so before calling socat, as socat cannot proxy them. The single-purpose-tool interceptty from 2006 has slightly simpler syntax, but can only intercept TTYs (while proxying ioctls), and is probably not in your package manager. (Most Linux distros never added it to their repos)
{ "source": [ "https://unix.stackexchange.com/questions/12359", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5926/" ] }
12,383
I am using Ubuntu, and I would like to be able to type less compressed_text_file.gz and page the contents of the text file in uncompressed form. Is there a way to do this?
You can configure the key bindings and set many settings for less in a file called ~/.lesskey . Once you've created the file, run the lesskey command ; it generates a file called ~/.less which less reads when it starts. The setting you want is LESSOPEN . It's an input formatter for less. The less package comes with a sample formatter in /bin/lesspipe ; it decompresses gzipped files, shows content listings for many multi-file archive formats, and converts several formatted texts formats to plain text. In your ~/.lesskey : #env LESSOPEN=|/bin/lesspipe %s
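If you'd rather not maintain a ~/.lesskey file, setting the variable from your shell startup file has the same effect (the path to lesspipe varies by distribution, so adjust it to what your system ships):

export LESSOPEN='|/usr/bin/lesspipe %s'   # path varies; /bin/lesspipe on some systems

On Debian and Ubuntu the documented form is eval "$(lesspipe)" in ~/.bashrc, and many systems also ship zless as a ready-made wrapper for exactly this case.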
{ "source": [ "https://unix.stackexchange.com/questions/12383", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1010/" ] }
12,439
I want to set my terminal up so stderr is printed in a different color than stdout ; maybe red. This would make it easier to tell the two apart. Is there a way to configure this in .bashrc ? If not, is this even possible? Note : This question was merged with another that asked for stderr , stdout and the user input echo to be output in 3 different colours . Answers may be addressing either question.
This is a harder version of Show only stderr on screen but write both stdout and stderr to file . The applications running in the terminal use a single channel to communicate with it; the applications have two output ports, stdout and stderr, but they're both connected to the same channel. You can connect one of them to a different channel, add color to that channel, and merge the two channels, but this will cause two problems: The merged output may not be exactly in the same order as if there had been no redirection. This is because the added processing on one of the channel takes (a little) time, so the colored channel may be delayed. If any buffering is done, the disorder will be worse. Terminals use color changing escape sequences to determine the display color, e.g. ␛[31m means “switch to red foreground”. This means that if some output destined to stdout arrives just as some output for stderr is being displayed, the output will be miscolored. (Even worse, if there's a channel switch in the middle of an escape sequence, you'll see garbage.) In principle, it would be possible to write a program that listens on two ptys¹, synchronously (i.e. won't accept input on one channel while it's processing output on the other channel), and immediately outputs to the terminal with appropriate color changing instructions. You'd lose the ability to run programs that interact with the terminal. I don't know of any implementation of this method. Another possible approach would be to cause the program to output the proper color changing sequences, by hooking around all the libc functions that call the write system call in a library loaded with LD_PRELOAD . See sickill's answer for an existing implementation, or Stéphane Chazelas's answer for a mixed approach that leverages strace . In practice, if that's applicable, I suggest redirecting stderr to stdout and piping into a pattern-based colorizer such as colortail or multitail , or special-purpose colorizers such as colorgcc or colormake . ¹ pseudo-terminals. Pipes wouldn't work because of buffering: the source could write to the buffer, which would break the synchronicity with the colorizer.
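For a quick ad-hoc session, bash process substitution gives a rough approximation of the redirect-and-colorize idea (mycommand is a placeholder; the ordering and escape-sequence caveats above still apply):

# mycommand is a placeholder for whatever you are running
mycommand 2> >(while IFS= read -r line; do printf '\033[31m%s\033[0m\n' "$line" >&2; done)

stdout passes through untouched while each stderr line is wrapped in red.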
{ "source": [ "https://unix.stackexchange.com/questions/12439", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
12,453
uname -m gives i686 and uname -m gives i686 i386 output in Red Hat Enterprise Linux Server release 5.4 (Tikanga) machine. I need to install Oracle Database 10g Release 2 on that machine. So, how can I decide whether kernel architecture is 32bit or 64bit?
i386 and i686 are both 32-bit. x86_64 is 64-bit example for 64 bit: behrooz@behrooz:~$ uname -a Linux behrooz 2.6.32-5-amd64 #1 SMP Mon Mar 7 21:35:22 UTC 2011 **x86_64** GNU/Linux EDIT: See is my linux ARM 32 or 64 bit? for ARM
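A cross-check that avoids parsing uname output is getconf (note it reports the userland word size, so a 32-bit userspace on a 64-bit kernel still prints 32):

$ getconf LONG_BIT
64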
{ "source": [ "https://unix.stackexchange.com/questions/12453", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2914/" ] }
12,469
What are the main advantages of using Debian instead of Ubuntu?
Debian has some features that you could consider "advantages" depending on your needs and use cases. Stability. The Debian Stable branch has been tested extensively, generally for at least a year, as the Testing branch. The only updates Stable get are mission critical bug fixes and security fixes. This makes it an extremely stable platform (i.e., well-tested and little change). A tier-ed branch system for releases allowing you to pick the level of stability/up-to-dateness you need. Stable, Testing, and Unstable (plus backports, where select packages and libraries are ported from Testing to Stable). This provides a great deal of flexibility in how you decide to upgrade or stay with a certain version of a package or an entire release. The Debian Social Contract . A commitment to free software and the free software community. For the community, by the community. Debian is your way. You get a tremendous amount of choice and configuration options. There is no one "typical" Debian install. Debian is on your terms. Maturity - The Debian project has been around for a long time and is a stable part of the free and open source software ecosystem. Debian has been ported to many different hardware architectures. The current Stable release has 11 different ports. Ubuntu on the other hand is focused on the x86, and amd64 platforms. A LOT of packages. As in 29,000 worth. There's an old saying, if the project exists there's a .deb for it.
{ "source": [ "https://unix.stackexchange.com/questions/12469", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6215/" ] }
12,482
I have a scanned PDF file in which two different real pages appear together on one virtual page. The resolution is with good quality. The problem is I have to zoom when reading and drag from left to the right. Is there some command ( convert , pdftk , ...) or script that can convert this pdf file with normal pages (one page from book = one page in pdf file)?
Here's a small Python script using the old PyPdf library that does the job neatly. Save it in a script called un2up (or whatever you like), make it executable ( chmod +x un2up ), and run it as a filter ( un2up <2up.pdf >1up.pdf ). #!/usr/bin/env python import copy, sys from pyPdf import PdfFileWriter, PdfFileReader input = PdfFileReader(sys.stdin) output = PdfFileWriter() for p in [input.getPage(i) for i in range(0,input.getNumPages())]: q = copy.copy(p) (w, h) = p.mediaBox.upperRight p.mediaBox.upperRight = (w/2, h) q.mediaBox.upperLeft = (w/2, h) output.addPage(p) output.addPage(q) output.write(sys.stdout) Ignore any deprecation warnings; only the PyPdf maintainers need be concerned with those. If the input is oriented in an unusual way, you may need to use different coordinates when truncating the pages. See Why my code not correctly split every page in a scanned pdf? Just in case it's useful, here's my earlier answer which uses a combination of two tools plus some manual intervention: Pdfjam (at least version 2.0), based on the pdfpages LaTeX package, to crop the pages; Pdftk , to put the left and right halves back together. Both tools are needed because as far as I can tell pdfpages isn't able to apply two different transformations to the same page in one stream. In the call to pdftk , replace 42 by the number of pages in the input document ( 2up.pdf ). pdfjam -o odd.pdf --trim '0cm 0cm 14.85cm 0cm' --scale 1.141 2up.pdf pdfjam -o even.pdf --trim '14.85cm 0cm 0cm 0cm' --scale 1.141 2up.pdf pdftk O=odd.pdf E=even.pdf cat $(i=1; while [ $i -le 42 ]; do echo O$i E$i; i=$(($i+1)); done) output all.pdf In case you don't have pdfjam 2.0, it's enough to have a PDFLaTeX installation with the pdfpages package (on Ubuntu: you need texlive-latex-recommended and perhaps (on Ubuntu: texlive-fonts-recommended ), and use the following driver file driver.tex : \batchmode \documentclass{minimal} \usepackage{pdfpages} \begin{document} \includepdfmerge[trim=0cm 0cm 14.85cm 0cm,scale=1.141]{2up.pdf,-} \includepdfmerge[trim=14.85cm 0cm 0cm 0cm,scale=1.141]{2up.pdf,-} \end{document} Then run the following commands, replacing 42 by the number of pages in the input file (which must be called 2up.pdf ): pdflatex driver pdftk driver.pdf cat $(i=1; pages=42; while [ $i -le $pages ]; do echo $i $(($pages+$i)); i=$(($i+1)); done) output 1up.pdf
{ "source": [ "https://unix.stackexchange.com/questions/12482", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6215/" ] }
12,532
I recently realized we can use cat as much as dd , and it's actually faster than dd I know that dd was useful in dealing with tapes where block size actually mattered in correctness, not just performance. In these days, though, are there situations where dd can do something cat can't? (Here I would regard a less than 20% performance difference irrelevant.) Concrete examples would be nice!
In appearance, dd is a tool from an IBM operating system that's retained its foreign appearance (its parameter passing), which performs some very rarely-used functions (such as EBCDIC to ASCII conversions or endianness reversal… not a common need nowadays). I used to think that dd was faster for copying large blocks of data on the same disk (due to more efficient use of buffering), but this isn't true , at least on today's Linux systems. I think some of dd 's options are useful when dealing with tapes, where reading is really performed in blocks (tape drivers don't hide the blocks on the storage medium the way disk drivers do). But I don't know the specifics. One thing dd can do that can't (easily) be done by any other POSIX tool is taking the first N bytes of a stream. Many systems can do it with head -c 42 , but head -c , while common, isn't in POSIX (and isn't available today on e.g. OpenBSD). ( tail -c is POSIX.) Also, even where head -c exists, it might read too many bytes from the source (because it uses stdio buffering internally), which is a problem if you're reading from a special file where just reading has an effect, or from a pipe when the rest of the data should be left for another process to read. (Current GNU coreutils read the exact count with head -c , but FreeBSD and NetBSD use stdio.) More generally, dd gives an interface to the underlying file API that is unique amongst Unix tools: only dd can overwrite or truncate a file at any point or seek in a file. (This is dd 's unique ability, and it's a big one; oddly enough dd is best known for things that other tools can do.) Most Unix tools overwrite their output file, i.e. erase its contents and start it over from scratch. This is what happens when you use > redirection in the shell as well. You can append to a file's contents with >> redirection in the shell, or with tee -a . If you want to shorten a file by removing all data after a certain point , this is supported by the underlying kernel and C API through the truncate function, but not exposed by any POSIX command line tool except dd : dd if=/dev/null of=/file/to/truncate seek=1 bs=123456 # truncate file to 123456 bytes (Many modern systems provide a truncate utility, however.) If you want to overwrite data in the middle of a file, again, this is possible in the underlying API by opening the file for writing without truncating (and calling lseek to move to the desired position if necessary), but only dd can open a file without truncating or appending, or seek from the shell ( more complex example ). # zero out the second kB block in the file (i.e. bytes 1024 to 2047) dd if=/dev/zero of=/path/to/file bs=1024 seek=1 count=1 conv=notrunc So… As a system tool, dd is pretty much useless, even dangerous . As a text (or binary file) processing tool, it's quite valuable!
{ "source": [ "https://unix.stackexchange.com/questions/12532", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2130/" ] }
12,535
I'm trying to copy-paste some text from vim. I'm doing v to enter visual mode, then y once I selected my block. It appears to copy the text into vim's clipboard, because p will paste it. But in another program (e.g. Chrome), right-click->paste doesn't paste the correct text. How do I copy text to the correct clipboard?
The following will work only if vim --version indicates that you have +xterm_clipboard feature. If not, you will have to install extra packages or recompile vim with that feature added. There are actually two options for this: "+y copies to the "usual" clipboard buffer (so you can paste using Ctrl+V, right click and select "Paste" etc), while "*y copies to the X11 selection - you can paste from this buffer using middle click. Note that "* and "+ work both ways. So if you have selected some text in another application, you can paste it into vim using "*p and if you have copied some text (using, say, Ctrl-C) then you can paste it into vim using "+p .
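To confirm the feature is actually compiled in before relying on those registers, either of these works:

vim --version | grep clipboard
:echo has('xterm_clipboard')   " run from inside vim; 1 means available

Look for +xterm_clipboard in the first case; a leading - means vim was built without it.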
{ "source": [ "https://unix.stackexchange.com/questions/12535", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1594/" ] }
12,578
How can I list installed packages by installation date? I need to do this on debian/ubuntu. Answers for other distributions would be nice as well. I installed a lot of stuff to compile a certain piece of code, and I want to get a list of the packages that I had to install.
RPM-based distributions like Red Hat are easy: rpm -qa --last On Debian and other dpkg-based distributions, your specific problem is easy too: grep install /var/log/dpkg.log Unless the log file has been rotated, in which case you should try: grep install /var/log/dpkg.log /var/log/dpkg.log.1 In general, dpkg and apt don't seem to track the installation date, going by the lack of any such field in the dpkg-query man page. And eventually old /var/log/dpkg.log.* files will be deleted by log rotation, so that way isn't guaranteed to give you the entire history of your system. One suggestion that appears a few times (e.g. this thread ) is to look at the /var/lib/dpkg/info directory. The files there suggest you might try something like: ls -t /var/lib/dpkg/info/*.list | sed -e 's/\.list$//' | head -n 50 To answer your question about selections, here's a first pass. build list of packages by dates $ find /var/lib/dpkg/info -name "*.list" -exec stat -c $'%n\t%y' {} \; | \ sed -e 's,/var/lib/dpkg/info/,,' -e 's,\.list\t,\t,' | \ sort > ~/dpkglist.dates build list of installed packages $ dpkg --get-selections | sed -ne '/\tinstall$/{s/[[:space:]].*//;p}' | \ sort > ~/dpkglist.selections join the 2 lists $ join -1 1 -2 1 -t $'\t' ~/dpkglist.selections ~/dpkglist.dates \ > ~/dpkglist.selectiondates For some reason it's not printing very many differences for me, so there might be a bug or an invalid assumption about what --get-selections means. You can obviously limit the packages either by using find . -mtime -<days> or head -n <lines> , and change the output format as you like, e.g. $ find /var/lib/dpkg/info -name "*.list" -mtime -4 | \ sed -e 's,/var/lib/dpkg/info/,,' -e 's,\.list$,,' | \ sort > ~/dpkglist.recent $ join -1 1 -2 1 -t $'\t' ~/dpkglist.selections ~/dpkglist.recent \ > ~/dpkglist.recentselections to list only the selections that were installed (changed?) in the past 4 days. You could probably also remove the sort commands after verifying the sort order used by dpkg --get-selections and make the find command more efficient.
{ "source": [ "https://unix.stackexchange.com/questions/12578", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1262/" ] }
12,593
I am trying to remove all files and subdirectories in a directory. I used rm -r to remove all files, but I want to remove all files and subdirectories, excluding the top directory itself. For example, I have a top directory like images . It contains the files header.png , footer.png and a subdirectory. Now I want to delete header.png , footer.png and the subdirectory, but not images . How can I do this in linux?
If your top-level directory is called images , then run rm -r images/* . This uses the shell glob operator * to run rm -r on every file or directory within images .
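Note that * doesn't match hidden entries, so dot files directly under images would survive. If those have to go as well, one option (assuming GNU find) is:

find images -mindepth 1 -delete

which removes everything below images, hidden or not, while leaving the directory itself in place.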
{ "source": [ "https://unix.stackexchange.com/questions/12593", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/337051/" ] }
12,601
I have a screen instance running, and I would need to execute some code inside the screen , and get the result out to my script. The first part is quite easy, I just screen -S session_name -X eval 'stuff "$cmd"\015' . (I modified a line I found in a script ) The second part, getting out the output, is trickier. How can I get the whole output, whatever it's size?
You could start screen with the -L option. This will cause screen to create a file screenlog.n (the n part is numerical, starting with a zero) in the current working directory. In your case this would look something like: screen -S session_name -L -X eval 'stuff "$cmd"\015' As long as you remember to clean up afterwards, this should match what you are after. The last line of the log can easily be obtained with tail -1 screenlog.0 , or the entire log can be parsed however you wish.
{ "source": [ "https://unix.stackexchange.com/questions/12601", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4001/" ] }
12,699
I am asking this question on behalf of another user who raised the issue in the Ubuntu chat room. Do journaling filesystems guarantee that no corruption will occur if a power failure occurs? If this answer depends on the filesystem, please indicate which ones do protect against corruption and which ones don't.
There are no guarantees. A Journaling File System is more resilient and is less prone to corruption, but not immune. All a journal is is a list of operations which have recently been done to the file system. The crucial part is that the journal entry is made before the operations take place. Most operations have multiple steps. Deleting a file, for example might entail deleting the file's entry in the file system's table of contents and then marking the sectors on the drive as free. If something happens between the two steps, a journaled file system can tell immediately and perform the necessary clean up to keep everything consistent. This is not the case with a non-journaled file system which has to look at the entire contents of the volume to find errors. While this journaling is much less prone to corruption than not journaling, corruption can still occur. For example, if the hard drive is mechanically malfunctioning or if writes to the journal itself are failing or interrupted. The basic premise of journaling is that writing a journal entry is much quicker, usually, than the actual transaction it describes will be. So, the period between the OS ordering a (journal) write and the hard drive fulfilling it is much shorter than for a normal write: a narrower window for things to go wrong in, but there's still a window. Further reading
{ "source": [ "https://unix.stackexchange.com/questions/12699", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1049/" ] }
12,719
Is there a way to 'time out' a root shell (for example, in gnome-terminal ) so that after a certain amount of time not issuing any commands, the shell exits? I'm searching for a solution that works in bash on Fedora, and in ksh on OpenBSD.
You can set the TMOUT variable to a number of seconds that you wish bash to wait before automatically logging out the shell if no command is run.
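For example, to have idle root shells exit after five minutes, something like this in /root/.bashrc (bash) or /root/.profile (ksh) does it; marking the variable readonly stops the user from simply unsetting it:

TMOUT=300
readonly TMOUT
export TMOUT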
{ "source": [ "https://unix.stackexchange.com/questions/12719", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
12,736
When I use the shebang #!/usr/bin/env python to run a script, how does the system know which python to use? if I look for a python bin path in the environment variables I find nothing. env | grep -i python
The shebang expects a full path to the interpreter to use so the following syntax would be incorrect: #!python Setting a full path like this might work: #!/usr/local/bin/python but would be non portable as python might be installed in /bin , /opt/python/bin , or wherever other location. Using env #!/usr/bin/env python is a method allowing a portable way to specify to the OS a full path equivalent to the one where python is first located in the PATH .
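Because env just does a PATH lookup, you can check from the same shell which binary a given shebang will end up running:

command -v python      # prints the path env will find first
env python -V          # asks that interpreter for its version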
{ "source": [ "https://unix.stackexchange.com/questions/12736", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7278/" ] }
12,755
I have a machine running Ubuntu which I SSH to from my Fedora 14 machine. I want to forward X from the Ubuntu machine back to Fedora so I can run graphical programs remotely. Both machines are on a LAN. I know that the -X option enables X11 forwarding in SSH, but I feel like I am missing some of the steps. What are the required steps to forward X from a Ubuntu machine to Fedora over SSH?
X11 forwarding needs to be enabled on both the client side and the server side. On the client side , the -X (capital X) option to ssh enables X11 forwarding, and you can make this the default (for all connections or for a specific connection) with ForwardX11 yes in ~/.ssh/config . On the server side , X11Forwarding yes must be specified in /etc/ssh/sshd_config . Note that the default is no forwarding (some distributions turn it on in their default /etc/ssh/sshd_config ), and that the user cannot override this setting. The xauth program must be installed on the server side. If there are any X11 programs there, it's very likely that xauth will be there. In the unlikely case xauth was installed in a nonstandard location, it can be called through ~/.ssh/rc (on the server!). Note that you do not need to set any environment variables on the server. DISPLAY and XAUTHORITY will automatically be set to their proper values. If you run ssh and DISPLAY is not set, it means ssh is not forwarding the X11 connection. To confirm that ssh is forwarding X11, check for a line containing Requesting X11 forwarding in the output of ssh -v -X . Note that the server won't reply either way, a security precaution of hiding details from potential attackers.
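Once both sides are configured, a quick end-to-end test is to run a trivial X client over the connection (ubuntu-host is a placeholder, and xeyes is assumed to be installed on the Ubuntu machine):

ssh -X ubuntu-host xeyes

A window appearing on the Fedora desktop means forwarding works; an "Error: Can't open display" message means DISPLAY never got set on the remote side.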
{ "source": [ "https://unix.stackexchange.com/questions/12755", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4856/" ] }
12,762
I am using tcsh. bash and zsh and other suggestions won't help here. I have several aliases that are named the same thing as another command, so if I did unalias it, typing the same thing would now do something different. Most of the time I want the aliased command, which is why I have them. However, sometimes I want the unaliased command. Without actually unaliasing and redefining the command, is there a simple way to tell tcsh to use the unaliased command instead? For example, vi is aliased to vim, but sometimes I want to just use vi. cd is aliased to change my window title, but sometimes I want to leave it alone. Obviously I could type /usr/bin/vi but since cd is a shell built-in command, there is no equivalent. Is there a general solution?
You can use a backslash: % alias ls ls -a % ls # ls -a output here % \ls # plain ls output here For shell builtins, there turns out to be a gotcha: a leading backslash prevents both aliases and builtins from being used, but an internal backslash suppresses aliasing only. % alias cd pushd % cd /tmp /tmp /tmp % c\d % dirs ~ /tmp (I'm tempted to call that another argument against using the csh family of shells.)
{ "source": [ "https://unix.stackexchange.com/questions/12762", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7284/" ] }
12,812
Question more or less says it all. I'm aware that /^$/d will remove all blank lines, but I can't see how to say 'replace two or more blank lines with a single blank line' Any ideas?
If you aren't firing vim or sed for some other use, cat actually has an easy builtin way to collapse multiple blank lines, just use cat -s . If you were already in vim and wanted to stay there, you could do this with the internal search and replace by issuing: :%s!\n\n\n\+!^M^M!g (The ^M is the visual representation of a newline, you can enter it by hitting Ctrl + v Enter ), or save yourself the typing by just shelling out to cat: :%!cat -s .
{ "source": [ "https://unix.stackexchange.com/questions/12812", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5709/" ] }
12,815
I often see that programs specify pid and lock files. And I'm not quite sure what they do. For example, when compiling nginx: --pid-path=/var/run/nginx.pid \ --lock-path=/var/lock/nginx.lock \ Can somebody shed some light on this one?
pid files are written by some programs to record their process ID while they are starting. This has multiple purposes: It's a signal to other processes and users of the system that that particular program is running, or at least started successfully. It allows one to write a script really easy to check if it's running and issue a plain kill command if one wants to end it. It's a cheap way for a program to see if a previous running instance of it did not exit successfully. Mere presence of a pid file doesn't guarantee that that particular process id is running, of course, so this method isn't 100% foolproof but "good enough" in a lot of instances. Checking if a particular PID exists in the process table isn't totally portable across UNIX-like operating systems unless you want to depend on the ps utility, which may not be desirable to call in all instances (and I believe some UNIX-like operating systems implement ps differently anyway). The idea with lock files is the following: the purpose is for two (well-behaved) separate instances of a program, which may be running concurrently on one system, don't access something else at the same time. The idea is before the program accesses its resource, it checks for presence of a lock file, and if the lock file exists, either error out or wait for it to go away. When it doesn't exist, the program wanting to "acquire" the resource creates the file, and then other instances that might come across later will wait for this process to be done with it. Of course, this assumes the program "acquiring" the lock does in fact release it and doesn't forget to delete the lock file. This works because the filesystem under all UNIX-like operating systems enforces serialization , which means only one change to the filesystem actually happens at any given time. Sort of like locks with databases and such. Operating systems or runtime platforms typically offer synchronization primitives and it's usually a better decision to use those instead. There may be situations such as writing something that's meant to run on a wide variety of operating systems past and future without a reliable library to abstract the calls (such as possibly sh or bash based scripts meant to work in a wide variety of unix flavors) - in that case this scheme may be a good compromise.
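A minimal shell sketch of the lock-file handshake described above, using the flock utility from util-linux (the lock path and the work inside are made up):

(
  flock -n 9 || { echo "another instance is already running" >&2; exit 1; }
  # ... exclusive access to the shared resource here ...
) 9>/var/lock/myjob.lock    # made-up path; pick one your user can write to

flock performs the "check, then acquire" step atomically, which is exactly the part that naive presence-based lock files get wrong.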
{ "source": [ "https://unix.stackexchange.com/questions/12815", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2657/" ] }
12,829
I logged in for the first time, opened terminal, and typed in ‘hostname’. It returned ‘localhost.localdomain.com’. Then I logged as the root user in terminal using the command, ‘su –‘, provided the password for the root user and used the command ‘hostname etest’ where etest is the hostname I’d like my machine to have. To test if I got my hostname changed correctly, I typed ‘hostname’ again at terminal and it returned etest. However, when I restart my machine, the hostname reverts back to ‘localhost.localdomain.com’. Here are the entire series of commands I used in terminal. [thomasm@localhost ~]$ hostname localhost.localdomain [thomasm@localhost ~]$ su - Password: [root@localhost ~]# hostname etest [root@localhost ~]# hostname etest I had run into the same problem when I set up RHEL and Ubuntu OS’s with VMPlayer.
On RHEL and derivatives like CentOS, you need to edit two files to change the hostname. The system sets its hostname at bootup based on the HOSTNAME line in /etc/sysconfig/network . The nano text editor is installed by default on RHEL and its derivatives, and its usage is self-evident: # nano /etc/sysconfig/network You also have to change the name in the /etc/hosts file. If you do not, certain commands will suddenly start taking longer to run. They are trying to find the local host IP from the hostname, and without an entry in /etc/hosts , it has to go through the full network name lookup process before it can move on. Depending on your DNS setup, this can mean delays of a minute or so! Having changed those two files, you can either run the hostname command to change the run-time copy of the hostname (which again, was set from /etc/sysconfig/network ) or just reboot. Ubuntu differs in that the static copy of the hostname is stored in /etc/hostname . For that matter, many aspects of network configuration are stored in different places and with different file formats on Ubuntu as compared to RHEL.
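For instance (the domain and address here are made up; substitute the machine's real ones), /etc/sysconfig/network would carry a line like

HOSTNAME=etest.example.com

and /etc/hosts would map the name to an address, one common layout being

127.0.0.1    localhost.localdomain localhost
192.168.1.10 etest.example.com etest

After editing both files, hostname etest (or a reboot) makes the running system agree with them.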
{ "source": [ "https://unix.stackexchange.com/questions/12829", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7318/" ] }
12,842
Suppose I have two users Alice and Bob and a group GROUPNAME and a folder foo , both users are members of GROUPNAME (using Linux and ext3). If I save as user Alice a file under foo , the permissions are: -rw-r--r-- Alice Alice . However, is it possible to achieve that every file saved under some subdirectory of foo has permissions -rwxrwx--- Alice GROUPNAME (i.e. owner Alice, group GROUPNAME)?
You can control the assigned permission bits with umask , and the group by making the directory setgid to GROUPNAME . $ umask 002 # allow group write; everyone must do this $ chgrp GROUPNAME . # set directory group to GROUPNAME $ chmod g+s . # files created in directory will be in group GROUPNAME Note that you have to do the chgrp / chmod for every subdirectory; it doesn't propagate automatically for existing directories. On OS X, subsequently created directories under a setgid directory will be setgid , although the latter will be in group GROUPNAME . On most Linux distros, the setgid bit will propagate to new subdirectories. Also note that umask is a process attribute and applies to all files created by that process and its children (which inherit the umask in effect in their parent at fork() time). Users may need to set this in ~/.profile , and may need to watch out for things unrelated to your directory that need different permissions. modules may be useful if you need different settings when doing different things. You can control things a bit better if you can use POSIX ACLs; it should be possible to specify both a permissions mask and a group, and have them propagate sensibly. Support for POSIX ACLs is somewhat variable, though.
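If the filesystem is mounted with ACL support, the POSIX-ACL variant mentioned in the last paragraph looks roughly like this on the shared directory (the default ACL plays the same role for permissions that the setgid bit plays for group ownership):

setfacl -m g:GROUPNAME:rwx foo
setfacl -d -m g:GROUPNAME:rwx foo

New files created under foo then inherit a GROUPNAME entry automatically; the creating program's mode argument still caps the effective permissions through the ACL mask.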
{ "source": [ "https://unix.stackexchange.com/questions/12842", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5289/" ] }
12,858
I have a fedora guest OS in VMware. I want to expand /boot partition, so I add another virtual disk to this VM, and try to clone the disk. After dd if=/dev/sda1 of=/dev/sdb1 , blkid report that /dev/sda1 and /dev/sdb1 have same UUID/GUID. It's weird that there're 2 same UUIDs in the universe, how to change one of them to another UUID value? Update 2017-01-25 Subject changed, UUID here means filesystem UUID, not partition UUID. Since it's filesystem UUID, filesystem specific utils are needed to change the UUID, or use hexeditor to modify raw data on disk ( DANGEROUS, not recommended unless you know what you are doing ).
To generate a random new UUID, one can use: $ uuidgen To actually change the UUID is file system dependent. Assuming ext-family filesystem # tune2fs -U <output of uuidgen> /dev/sdb1 Or if you're confident uuidgen is going to work: # tune2fs -U $(uuidgen) /dev/sdb1 Assuming btrfs filesystem # btrfstune -U $(uuidgen) /dev/sdb1 The UUID is stored in the superblock, so a byte-for-byte copy of the filesystem will have the same UUID.
{ "source": [ "https://unix.stackexchange.com/questions/12858", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7332/" ] }
12,902
I'd like to find the files in the current directory that contain the text "chrome". $ find . -exec grep chrome find: missing argument to `-exec' What am I doing wrong?
You missed a ; (escaped here as \; to prevent the shell from interpreting it) or a + and a {} : find . -exec grep chrome {} \; or find . -exec grep chrome {} + find will execute grep and will substitute {} with the filename(s) found. The difference between ; and + is that with ; a single grep command for each file is executed whereas with + as many files as possible are given as parameters to grep at once.
{ "source": [ "https://unix.stackexchange.com/questions/12902", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1594/" ] }
12,918
I've been looking at GNU Emacs for a few months now, on and off (mainly off), and I've really only gone as far as testing a few basic things which I especially want in an editor... I'm slowly getting to realize its topography, and it is starting to make (good) sense.... The main thing I've noticed is that it seems to work exactly the same in the X-GUI version as it does in the X-Terminal version (and I suspect it would be pretty much the same in a non-GUI environment... I originally thought that I would feel very uncomfortable working in a non-GUI editor, and that has been the case, but the more I dabble in the Emacs waters, the less significant that need becomes... so I am now looking at it from the other end of the stick... I'm turning my focus to working primarily in the Terminal version.. My question is: Aside from the obvious GUI-menu (which has turned out to be quite unnecessary), is there any notable difference between the versions (X-GUI, X-Terminal, and no-GUI) ?*
There used to be more restrictions, but since GNU Emacs 23, the text mode interface can do most of what the GUI interface can do. Also, since GNU Emacs 23, you can combine X frames and text mode frames in the same Emacs instance. Running in a terminal limits the input key combinations Emacs can recognize, because the terminal emulator often doesn't transmit distinct escape sequences for all key combinations. Most terminal emulators don't support all combinations of modifiers with ASCII characters (things like C-S-a or C-; or modifiers other than Ctrl , Shift and Meta / Alt ). You can't distinguish tab from C-i or backspace from DEL (or C-h depending on the terminal emulator setup). There is a proposed standard for encoding escape sequences in a systematic way but many popular terminals don't support it . In a terminal, you get bold, perhaps italics and underline, and however many colors the terminal supports. Under X, Emacs can use multiple fonts , and display images . Whether that's useful or not is mostly a personal preference. Don't knock it until you've tried LaTeX font-locking (in AUCTeX ) and rendering of mathematical symbols and diagrams through x-symbol (I tried, and didn't like it). If you use Emacs as a browser , image support is a plus (or not). In a terminal, you're limited by the terminal's support for encodings (but most at least support basic Unicode features nowadays). The X interface lets Emacs choose its own fonts and mix them in fontsets ; this is useful if you edit multilingual documents that aren't covered by a single font. I don't have enough experience with non-latin languages to say whether Emacs is better than your typical terminal emulator at coping with “difficult” languages (combining characters, double-width, left-to-right (which Emacs 23 doesn't support anyway, Emacs 24 should)). There's obviously mouse support in the GUI interface. In the text interface, you can turn on mouse support if running in a terminal emulator under X with xterm-mouse-mode . You can get X clipboard support as well. The GUI version has a few extra features like tooltips , mouse avoidance , and mouse-activated context menus . You can use the menu bar with either interface. The X version can put up icons at the top of the frame (the tool bar ), not that I've ever seen any use for them. You also don't get dialog boxes or scroll bars in text modes. You don't get multiple-frame convenience such as speedbars or an ediff control frame.
{ "source": [ "https://unix.stackexchange.com/questions/12918", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2343/" ] }
12,942
I've got two files _jeter3.txt and _jeter1.txt I've checked they are both sorted on the 20th column using sort -c sort -t ' ' -c -k20,20 _jeter3.txt sort -t ' ' -c -k20,20 _jeter1.txt #no errors but there is an error when I want to join both files it says that the second file is not sorted: join -t ' ' -1 20 -2 20 _jeter1.txt _jeter3.txt > /dev/null join: File 2 is not in sorted order I don't understand why. cat /etc/*-release #FYI openSUSE 11.0 (i586) VERSION = 11.0 UPDATE : using ' sort -f ' and join -i (both case insensitive) fixes the problem. But it doesn't explain my initial problem. UPDATE : versions of sort & join: > join --version join (GNU coreutils) 6.11 Copyright (C) 2008 Free Software Foundation, Inc. (...) > sort --version sort (GNU coreutils) 6.11 Copyright (C) 2008 Free Software Foundation, Inc. (...)
I got the same error with Ubuntu 11.04, with sort and join both in version (GNU coreutils) 8.5. They are clearly incompatible. In fact the sort command seems bugged: there is no difference with or without the -f ( --ignore-case ) option. When sorting, aaB is always before aBa . Non alphanumeric characters seems also always ignored ( abc is before ab-x ) Join seems to expect the opposite... But I have a solution In fact, this is linked to the collation sequence: using LANG=en_EN sort -k 1,1 <myfile> ... then LANG=en_EN join ... eliminates the message. Internationalisation is the root of evil... (nobody documents it clearly).
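The usual way to sidestep locale-dependent collation entirely is to force the C locale on both commands, so they cannot disagree:

LC_ALL=C sort -t ' ' -k20,20 _jeter1.txt > 1.sorted
LC_ALL=C sort -t ' ' -k20,20 _jeter3.txt > 3.sorted
LC_ALL=C join -t ' ' -1 20 -2 20 1.sorted 3.sorted

C collation is a plain bytewise comparison, so sort and join see exactly the same ordering.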
{ "source": [ "https://unix.stackexchange.com/questions/12942", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5921/" ] }
12,956
I have a 64-bit (amd64 a.k.a. x86_64) Debian or Ubuntu installation. I need to run 32-bit (i386/i686) programs occasionally, or to compile programs for a 32-bit system. How can I do this with a minimum of fuss? Bonus: what if I want to run or test with an older or newer release of the distribution?
For current releases Current Debian and Ubuntu have multiarch support: You can mix x86_32 (i386) and x86_64 (amd64) packages on the same system in a straightforward way. This is known as multiarch support - see Ubuntu or Debian wiki more information. See warl0ck's answer for a simple, up-to-date answer. For old releases In older releases, Debian and Ubuntu ship with a number of 32-bit libraries on amd64. Install the ia32-libs package to have a basic set of 32-bit libraries, and possibly other packages that depend on this one. Your 32-bit executables should simply run if you have all the required libraries. For development, install gcc-multilib , and again possibly other packages that depend on it such as g++-multilib . You may find binutils-multiarch useful as well, and ia32-libs-dev on Debian. Pass the -m32 option to gcc to compile for ix86. Note that uname -m will still show x64_64 if you're running a 64-bit kernel, regardless of what 32-bit user mode components you have installed. Schroot described below takes care of this. Schroot This section is a guide to installing a Debian-like distribution “inside” another Linux distribution. It is worded in terms of installing a 32-bit Ubuntu inside a 64-bit Ubuntu, but should apply with minor modifications to other situations, such as installing Debian unstable inside Debian stable or vice versa. Introduction The idea is to install an alternate distribution in a subtree and run from that. You can install a 32-bit system on a 64-bit system that way, or a different release of your distribution, or a testing environment with different sets of packages installed. The chroot command and system call starts a process with a view of the filesystem that's restricted to a subtree of the directory tree. Debian and Ubuntu ship schroot , a utility that wraps around this feature to create a more usable sub-environment. Install the schroot package ( Debian ) and the debootstrap package ( Debian ). Debootstrap is only needed for the installation of the alternate distribution and can be removed afterwards. Set up schroot This example describes how to set up a 32-bit Ubuntu 10.04LTS (lucid lynx) alternate environment. A similar setup should work with other releases of Debian and Ubuntu. Create a file /etc/schroot/chroot.d/lucid32 with the following contents: [lucid32] description=Ubuntu 10.04LTS 32-bit directory=/32 type=directory personality=linux32 users=yourusername groups=users,admin The line directory=/32 tells schroot where we'll put the files of the 32-bit installation. The line username=yourusername says the user yourusername will be allowed to use the schroot. The line groups=users,admin says that users in either group will be allowed to use the schroot; you can also put a users=… directive. Install the new distribution Create the directory and start populating it with debootstrap. Debootstrap downloads and installs a core set of packages for the specified distribution and architecture. mkdir /32 debootstrap --arch i386 lucid /32 http://archive.ubuntu.com/ubuntu You almost have a working system already; what follows is minor enhancements. Schroot automatically overwrites several files in /32/etc when you run it, in particular the DNS configuration in /etc/resolv.conf and the user database in /etc/passwd and other files (this can be overridden, see the documentation). 
There are a few more files you may want to copy manually once and for all: cp -p /etc/apt/apt.conf /32/etc/apt/ # for proxy settings cp -p /etc/apt/sources.list /32/etc/apt/ # for universe, security, etc cp -p /etc/environment /32/etc/ # for proxy and locale settings cp -p /etc/sudoers /32/etc/ # for custom sudo settings There won't be a file /etc/mtab or /etc/fstab in the chroot. I don't recommend using the mount command manually in the chroot, do it from outside. But do create a good-enough /etc/mtab to make commands such as df work reasonably. ln -s /proc/mounts /32/etc/mtab With the directory type, schroot will perform bind mounts of a number of directories, i.e. those directories will be shared with the parent installation: /proc , /dev , /home , /tmp . Services in the chroot As described here, a schroot is not suitable for running daemons. Programs in the schroot will be killed when you exit the schroot. Use a “plain” schroot instead of a “directory” schroot if you want it to be more permanent, and set up permanent bind mounts in /etc/fstab on the parent installation. On Debian and Ubuntu, services start automatically on installation. To avoid this (which could disrupt the services running outside the chroot, in particular because network ports are shared), establish a policy of not running services in the chroot. Put the following script as /32/usr/sbin/policy-rc.d and make it executable ( chmod a+rx /32/usr/sbin/policy-rc.d ). #!/bin/sh ## Don't start any service if running in a chroot. ## See /usr/share/doc/sysv-rc/README.policy-rc.d.gz if [ "$(stat -c %d:%i /)" != "$(stat -c %d:%i /proc/1/root/.)" ]; then exit 101 fi Populate the new system Now we can start using the chroot. You'll want to install a few more packages at this point. schroot -c lucid32 sudo apt-get update apt-get install lsb-core nano ... You may need to generate a few locales, e.g. locale-gen en_US en_US.utf8 If the schroot is for an older release of Ubuntu such as 8.04 (hardy), note that the package ubuntu-standard pulls in an MTA. Select nullmailer instead of the default postfix (you may want your chroot to send mail but you definitely don't want it to receive any). Going further For more information, see the schroot manual , the schroot FAQ and the schroot.conf manual . Schroot is part of the Debian autobuilder (buildd) project . There may be additional useful tips on the Ubuntu community page about debootstrap . Virtual machine If you need complete isolation of the alternate environment, use a virtual machine such as KVM ( qemu-kvm ) or VirtualBox .
{ "source": [ "https://unix.stackexchange.com/questions/12956", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/885/" ] }
12,976
I have a directory with thousands of files. How can I move 100 of the files (any files will do) to another location.
for file in $(ls -p | grep -v / | tail -100) do mv $file /other/location done That assumes file names don't contain blanks, newline (assuming the default value of $IFS ), wildcard characters ( ? , * , [ ) or start with - .
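If the names may contain blanks or newlines, a sketch that sidesteps word splitting altogether (relies on GNU extensions: find -print0, head -z, xargs -0 and mv -t, so it needs reasonably recent findutils/coreutils):

find . -maxdepth 1 -type f -print0 | head -z -n 100 | xargs -r0 mv -t /other/location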
{ "source": [ "https://unix.stackexchange.com/questions/12976", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7372/" ] }
12,985
Quite often in the course of troubleshooting and tuning things I find myself thinking about the following Linux kernel settings: net.core.netdev_max_backlog net.ipv4.tcp_max_syn_backlog net.core.somaxconn Other than fs.file-max , net.ipv4.ip_local_port_range , net.core.rmem_max , net.core.wmem_max , net.ipv4.tcp_rmem , and net.ipv4.tcp_wmem , they seems to be the important knobs to mess with when you are tuning a box for high levels of concurrency. My question: How can I check to see how many items are in each of those queues ? Usually people just set them super high, but I would like to log those queue sizes to help predict future failure and catch issues before they manifest in a user noticeable way.
I too have wondered this and was motivated by your question! I've collected how close I could come to each of the queues you listed with some information related to each. I welcome comments/feedback, any improvement to monitoring makes things easier to manage! net.core.somaxconn net.ipv4.tcp_max_syn_backlog net.core.netdev_max_backlog $ netstat -an | grep -c SYN_RECV Will show the current global count of connections in the queue, you can break this up per port and put this in exec statements in snmpd.conf if you wanted to poll it from a monitoring application. From: netstat -s These will show you how often you are seeing requests from the queue: 146533724 packets directly received from backlog TCPBacklogDrop: 1029 3805 packets collapsed in receive queue due to low socket buffer fs.file-max From: http://linux.die.net/man/5/proc $ cat /proc/sys/fs/file-nr 2720 0 197774 This (read-only) file gives the number of files presently opened. It contains three numbers: The number of allocated file handles, the number of free file handles and the maximum number of file handles. net.ipv4.ip_local_port_range If you can build an exclusion list of services (netstat -an | grep LISTEN) then you can deduce how many connections are being used for ephemeral activity: netstat -an | egrep -v "MYIP.(PORTS|IN|LISTEN)" | wc -l Should also monitor (from SNMP): TCP-MIB::tcpCurrEstab.0 It may also be interesting to collect stats about all the states seen in this tree(established/time_wait/fin_wait/etc): TCP-MIB::tcpConnState.* net.core.rmem_max net.core.wmem_max You'd have to dtrace/strace your system for setsockopt requests. I don't think stats for these requests are tracked otherwise. This isn't really a value that changes from my understanding. The application you've deployed will probably ask for a standard amount. I think you could 'profile' your application with strace and configure this value accordingly. (discuss?) net.ipv4.tcp_rmem net.ipv4.tcp_wmem To track how close you are to the limit you would have to look at the average and max from the tx_queue and rx_queue fields from (on a regular basis): # cat /proc/net/tcp sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode 0: 00000000:0FB1 00000000:0000 0A 00000000:00000000 00:00000000 00000000 500 0 262030037 1 ffff810759630d80 3000 0 0 2 -1 1: 00000000:A133 00000000:0000 0A 00000000:00000000 00:00000000 00000000 500 0 262029925 1 ffff81076d1958c0 3000 0 0 2 -1 To track errors related to this: # netstat -s 40 packets pruned from receive queue because of socket buffer overrun Should also be monitoring the global 'buffer' pool (via SNMP): HOST-RESOURCES-MIB::hrStorageDescr.1 = STRING: Memory Buffers HOST-RESOURCES-MIB::hrStorageSize.1 = INTEGER: 74172456 HOST-RESOURCES-MIB::hrStorageUsed.1 = INTEGER: 51629704
{ "source": [ "https://unix.stackexchange.com/questions/12985", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7376/" ] }
12,986
I want to take a backup of the whole partition layout of a hard drive, including logical drives, so that I can restore that layout to another disk. I do not want to copy the contents of the partitions, only the layout. For the primary and extended partitions, it's easy: dd if=/dev/sda of=partitiontable.bin bs=1 skip=446 count=64 # backup dd if=partitiontable.bin of=/dev/sda bs=1 seek=446 count=64 # restore But when it comes to the layout of the logical partitions, I wonder if there exists among the standard tools a similar way of saving the layout? I guess the main problem is finding the offsets to the locations of the EBRs, because with that, dd will do the rest. Keep in mind I need to be able to put everything back to a (possibly) blank disk and thereby restore the same layout. Using partitioning tools like fdisk or parted is fine, but I must be able to automate their use (scripting) and they should not depend on any X-related packages -- command line only. My backup plan is doing it manually in a little python script using the struct module, but I rather hoped there was an easier way.
You can use sfdisk for this task even in GPT partitioned disks*. Save: sfdisk -d /dev/sdX > part_table Restore keeping the same disk & partition IDs**: sfdisk /dev/sdX < part_table Restore generating new disk & partition IDs**: grep -v ^label-id part_table | sed -e 's/, *uuid=[0-9A-F-]*//' | sfdisk /dev/sdY Notes *: For GPT partition tables , this requires sfdisk from util-linux 2.26 or later. It was re-written from scratch on top of libfdisk . **: by default sfdisk will copy the disk and partition IDs unchanged, rather than generating new ones. So the new disk will be a clone of the original, not just another disk with the same layout. Note that Linux's /dev/disk/by-uuid/ looks at filesystem UUIDs, though, not UUIDs in the partition table. sfdisk will generate new UUIDs if you delete the references to partitions ids ( , uuid=... ) and the reference to the disk id ( label-id: ... ) from the dump .
{ "source": [ "https://unix.stackexchange.com/questions/12986", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6339/" ] }
12,998
I was trying to run chmod -R 777 ./ but ended up typing chmod -R 777 / and set 777 on my entire machine. What can go wrong? How can I fix it?
Problems? Yes, lots. Can it be fixed? Sure. Faster than reinstalling? Probably not. My recommendation is to reinstall. Keep a backup of the existing system, and restore the package list and the contents of files in /etc and /var . For /usr/local , you can probably restore permissions manually. For /home and /srv , you'll have to restore the permissions from backups. If this is a system with multiple local users, note that making some files world-readable has revealed some things that should have remained confidential. Your password list is now compromised: local users have had access to the hashed password list and could try to brute-force them. Notify your users of this. All private user data (ssh keys, stored passwords, e-mail, whatever else users might consider confidential) has been exposed to all local users. Notify your users of this. If you really want to try repairing (more of a learning exercise than a practical recovery route), first restore the permissions of a few files. Note that while most files are now too open, a few are missing necessary setuid bits. Here are steps you should take before anything else. Note that this is not an exhaustive list, just an attempt at making the system barely functional. chmod -R go-w / chmod 440 /etc/sudoers chmod 640 /etc/shadow /etc/gshadow chmod 600 /etc/ssh/*_key /etc/ssh*key # whichever matches chmod 710 /etc/ssl/private /etc/cups/ssl chmod 1777 /tmp /var/tmp /var/lock chmod 4755 /bin/su /usr/bin/passwd /usr/bin/sudo /usr/bin/sudoedit chmod 2755 /var/mail /var/spool/mail Then you'll need to restore all the permissions everywhere. For files under /usr , you can reinstall the packages with one of the following commands, depending on your distribution: If you're using Debian, Ubuntu, or another distribution based on APT, you can execute apt-get --reinstall install If you're using Arch Linux, you can execute pacman -S $(pacman -Qq --dbpath /newarch/var/lib/pacman) --root /newarch --dbpath /newarch/var/lib/pacman , assuming that you're in a Live CD and your Arch install is mounted at /newarch . For files under /etc and /var , that won't work, as many of them will be left as they are: you'll have to replicate the permissions on a working installation. For files under /srv and /home , you'll have to restore from backups anyway. As you can see, you might as well reinstall.
{ "source": [ "https://unix.stackexchange.com/questions/12998", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7386/" ] }
13,008
I'm trying to automate a curl script and eventually make it loop also. I found that I can't use any loop like: for i in `cat authors` do curl *line here* "www.test.com/autors?string="$i"&proc=39" etc or even env glob variables for CURL Is there a way to loop a get request on curl for one variable? curl -H "Host: www.test.com" -G "www.test.com/autors?string="author+name"&proc=39" All I get is the traffic graph CURL has and no good result. I'm using bash
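A minimal sketch of one way to do this in bash, assuming the author names sit one per line in a file named authors (the file name and URL are taken from the question). Quoting the URL keeps the & from sending the command to the background, --data-urlencode takes care of spaces in the names, and -G turns the data into a GET query string:

# authors: one author name per line
while IFS= read -r author; do
  curl -G 'http://www.test.com/autors' \
       --data-urlencode "string=$author" \
       --data 'proc=39'
done < authors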
{ "source": [ "https://unix.stackexchange.com/questions/13008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7391/" ] }
13,019
Between Debian 5 and 6, the default suggested value for kernel.printk in /etc/sysctl.conf was changed from kernel.printk = 4 4 1 7 to kernel.printk = 3 4 1 3 . I understand that the first value corresponds to what is going to the console. What are the next 3 values for? Do the numerical values have the same meaning as the syslog log levels? Or do they have different definitions? Am I missing some documentation in my searching, or is the only location to figure this out the kernel source.
Sysctl settings are documented in Documentation/sysctl/*.txt in the kernel source tree. On Debian, install linux-doc to have the documentation in usr/share/doc/linux-doc-*/Documentation/ (most distributions have a similar package). From Documentation/sysctl/kernel.txt : The four values in printk denote: console_loglevel , default_message_loglevel , minimum_console_loglevel and default_console_loglevel respectively. These values influence printk() behavior when printing or logging error messages. See man 2 syslog for more info on the different loglevels. console_loglevel : messages with a higher priority than this will be printed to the console default_message_loglevel : messages without an explicit priority will be printed with this priority minimum_console_loglevel : minimum (highest) value to which console_loglevel can be set default_console_loglevel : default value for console_loglevel I don't find any clear prose explanation of what default_console_loglevel is used for. In the Linux kernel source , the kernel.printk sysctl sets console_printk . The default_console_loglevel field doesn't seem to be used anywhere.
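The running values can be inspected or changed without editing /etc/sysctl.conf (the sysctl and the proc file are two views of the same knob, and dmesg -n touches only the first field, console_loglevel):

$ cat /proc/sys/kernel/printk
3       4       1       3
# sysctl -w kernel.printk="3 4 1 7"
# dmesg -n 4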
{ "source": [ "https://unix.stackexchange.com/questions/13019", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1209/" ] }