34,196
I have often wondered why the ~ ( tilde ) character represents the home directory of a user. Is there a reason behind this, or is it just because tilde is an infrequently used character?
Quoting Wikipedia : On Unix-like operating systems (including BSD, GNU/Linux and Mac OS X), tilde often indicates the current user's home directory: for example, if the current user's home directory is /home/bloggsj , then cd , cd ~ , cd /home/bloggsj or cd $HOME are equivalent. This practice derives from the Lear-Siegler ADM-3A terminal in common use during the 1970s, which happened to have the tilde symbol and the word "Home" (for moving the cursor to the upper left) on the same key. You can find photos of the Lear-Siegler ADM-3A keyboard on this site. This terminal is also the source of the movement commands used in the vi editor: h , j , k , l for left, down, up, right.
{ "source": [ "https://unix.stackexchange.com/questions/34196", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6820/" ] }
34,202
Is it possible to execute a script if there is no permission to read it? In root mode, I made a script and I want the other user to execute this script but not read it. I did chmod to forbid read and write but allow execute, however in user mode, I saw the message that says: permission denied.
The issue is that the script is not what is running, but the interpreter ( bash , perl , python , etc.), and the interpreter needs to read the script. This is different from a "regular" program, like ls , in that the program is loaded directly by the kernel, just as the interpreter itself is. Since the kernel itself is reading the program file, it doesn't need to worry about read access. The interpreter, on the other hand, needs to read the script file just like any other file.
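A quick way to see this in practice (hypothetical paths; a sketch, not part of the original answer):
# as root: install a script that others may execute but not read
printf '#!/bin/sh\necho hello\n' > /usr/local/bin/hello.sh
chmod 711 /usr/local/bin/hello.sh
# as an ordinary user: execution fails with a permission-denied style error,
# because /bin/sh (not the kernel) has to open and read the script
/usr/local/bin/hello.sh
# a compiled binary with the same mode 711 still runs: the kernel reads it for you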
{ "source": [ "https://unix.stackexchange.com/questions/34202", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16610/" ] }
34,228
What is the shortcut key for scrolling inside a terminal? If I hit the Up or Down arrow or PageUp or PageDown , it will only traverse through command history, not let me traverse the previous part displayed in the terminal. Especially when I run Matlab in a terminal, I have the same problem. My OS is Ubuntu.
Shift + PgUp / PgDn / Home / End will scroll in gnome-terminal and Terminal.
{ "source": [ "https://unix.stackexchange.com/questions/34228", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
34,248
Is there a way to find all symbolic links that don't point anywhere? find ./ -type l will give me all symbolic links, but makes no distinction between links that go somewhere and links that don't. I'm currently doing: find ./ -type l -exec file {} \; | grep broken But I'm wondering what alternate solutions exist.
I'd strongly suggest not to use find -L for the task (see below for explanation). Here are some other ways to do this: If you want to use a "pure find " method, and assuming the GNU implementation of find , it should rather look like this: find . -xtype l ( xtype is a test performed on a dereferenced link) portably (though less efficiently), you can also exec test -e from within the find command: find . -type l ! -exec test -e {} \; -print Even some grep trick could be better (i.e., safer ) than find -L , but not exactly such as presented in the question (which greps in entire output lines, including filenames): find . -type l -exec sh -c 'file -b "$1" | grep -q "^broken"' sh {} \; -print The find -L trick quoted by solo from commandlinefu looks nice and hacky, but it has one very dangerous pitfall : All the symlinks are followed. Consider directory with the contents presented below: $ ls -l total 0 lrwxrwxrwx 1 michal users 6 May 15 08:12 link_1 -> nonexistent1 lrwxrwxrwx 1 michal users 6 May 15 08:13 link_2 -> nonexistent2 lrwxrwxrwx 1 michal users 6 May 15 08:13 link_3 -> nonexistent3 lrwxrwxrwx 1 michal users 6 May 15 08:13 link_4 -> nonexistent4 lrwxrwxrwx 1 michal users 11 May 15 08:20 link_out -> /usr/share/ If you run find -L . -type l in that directory, all /usr/share/ would be searched as well (and that can take really long) 1 . For a find command that is "immune to outgoing links", don't use -L . 1 This may look like a minor inconvenience (the command will "just" take long to traverse all /usr/share ) – but can have more severe consequences. For instance, consider chroot environments: They can exist in some subdirectory of the main filesystem and contain symlinks to absolute locations. Those links could seem to be broken for the "outside" system, because they only point to proper places once you've entered the chroot. I also recall that some bootloader used symlinks under /boot that only made sense in an initial boot phase, when the boot partition was mounted as / . So if you use a find -L command to find and then delete broken symlinks from some harmless-looking directory, you might even break your system...
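A quick way to try the GNU and portable variants side by side (illustrative filenames; assumes /etc/hostname exists on your system):
$ ln -s /nonexistent/target broken_link
$ ln -s /etc/hostname good_link
$ find . -xtype l                               # GNU find: only the dangling link
./broken_link
$ find . -type l ! -exec test -e {} \; -print   # portable variant, same result
./broken_link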
{ "source": [ "https://unix.stackexchange.com/questions/34248", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/298/" ] }
34,273
stdout on one CentOS server needs to be piped to stdin on another CentOS server. Is this possible? Update ScottPack, MikeyB and jofel all have valid answers. I awarded the answer to Scott because, even though my question didn't specify security as a requirement, it's always nice to be safe. However, the other two fellows' suggestions will also work.
This is an unabashed yes. When one uses ssh to execute a command on a remote server, it performs some kind of fancy internal input/output redirection. In fact, I find this to be one of the subtly nicer features of OpenSSH. Specifically, if you use ssh to execute an arbitrary command on a remote system, then ssh will map STDIN and STDOUT to that of the command being executed. For the purposes of an example, let's assume you want to create a backup tarball, but don't want to, or can't, store it locally. Let's have a gander at this syntax: $ tar -cf - /path/to/backup/dir | ssh remotehost "cat - > backupfile.tar" We're creating a tarball and writing it to STDOUT , normal stuff. Since we're using ssh to execute a remote command, ssh 's STDIN (fed by the pipe) gets mapped to the STDIN of cat , which we then redirect to a file.
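The same mapping works in the other direction as well; for example (hypothetical host and file names), to pull data from the remote machine into a local pipeline:
$ ssh remotehost "tar -cf - /path/to/backup/dir" > local-backup.tar
$ ssh remotehost "cat /var/log/syslog" | grep -i error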
{ "source": [ "https://unix.stackexchange.com/questions/34273", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4232/" ] }
34,325
I need to be able to alphabetically sort the output of find before piping it to a command. Entering | sort | between didn't work, so what could I do? find folder1 folder2 -name "*.txt" -print0 | xargs -0 myCommand
Use find as usual and delimit your lines with NUL. GNU sort can handle these with the -z switch: find . -print0 | sort -z | xargs -r0 yourcommand
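Applied to the command from the question, that becomes (assuming GNU sort and xargs):
find folder1 folder2 -name "*.txt" -print0 | sort -z | xargs -r0 myCommand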
{ "source": [ "https://unix.stackexchange.com/questions/34325", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3208/" ] }
34,334
So I have 4 GB RAM + 4 GB swap. I want to create a user with limited RAM and swap: 3 GB RAM and 1 GB swap. Is such a thing possible? Is it possible to start applications with limited RAM and swap available to them without creating a separate user (and not installing any special apps - having just a default Debian/CentOS server configuration, and not using sudo)? Update: So I opened a terminal and typed the ulimit command into it: ulimit -v 1000000 which should be roughly a 976.6 MB limitation. Next I called ulimit -a and saw that the limitation is "on". Then I started a bash script that compiles and starts my app in nohup , a long one: nohup ./cloud-updater-linux.sh >& /dev/null & ... but after some time I saw the app running well past the limit (which would be ok if no limitations were applied - it downloaded some large lib, and started to compile it). But I thought I applied limitations to the shell and all processes launched with/from it with ulimit -v 1000000 ? What did I get wrong? How can I make a terminal and all subprocesses it launches be limited in RAM usage?
ulimit is made for this. You can set up defaults for ulimit on a per-user or per-group basis in /etc/security/limits.conf . ulimit -v KBYTES sets the max virtual memory size. I don't think you can give a max amount of swap; it's just a limit on the amount of virtual memory the user can use. So your limits.conf would have the line (for a maximum of about 4G of memory) luser hard as 4000000 UPDATE - CGroups The limits imposed by ulimit and limits.conf are per process. I definitely wasn't clear on that point. If you want to limit the total amount of memory a user uses (which is what you asked), you want to use cgroups . In /etc/cgconfig.conf : group memlimit { memory { memory.limit_in_bytes = 4294967296; } } This creates a cgroup that has a max memory limit of 4GiB. In /etc/cgrules.conf : luser memory memlimit/ This will cause all processes run by luser to be run inside the memlimit cgroup created in cgconfig.conf .
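If you only want to constrain a single command rather than a whole user, a simple trick is to set the ulimit in a subshell so the rest of your session is unaffected (a sketch using the script name from the question):
( ulimit -v 1000000; exec nohup ./cloud-updater-linux.sh >/dev/null 2>&1 ) &
# the limit applies to the subshell and everything it launches, not to your terminal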
{ "source": [ "https://unix.stackexchange.com/questions/34334", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12631/" ] }
34,379
cd ~ does the same thing as cd $HOME , which is also the same as cd /home/tandu . However, cd ~not-tandu changes to /home/not-tandu . Is this purely a syntactic choice? How is this handled by the kernel (or the cd executable)? Is there a special case for ~ to add the slash if everything else is omitted? That is to say, ~/ and ~ change to the same directory, but ~a is one directory up. The same cannot be said for any other directory you change to.
~ is an alias for $HOME provided by a number of shells, but $HOME is more universal. $HOME actually asks the shell to insert (substitute) the environment variable HOME here. There are quite a number of different environment variables that can be substituted, try running env for a list. Note that ~ is not always recognized when it's not at the beginning of a word. Try these two commands for comparison: ls /~ ls /$HOME The first gets passed to the ls executable as /~ which then tries to look at a file called ~ in the root directory, the second expands $HOME and becomes //home/user which is then passed to the ls executable as a command-line argument. All POSIX systems (POSIX is the standard for how UNIX and Linux systems operate) allow multiple slashes to be treated the same as one slash so //home/user is the same as saying /home/user . ~username is a shortcut for telling the shell to look up username in the passwd file and return their home directory. There is no equivalent environment variable. All of these substitutions are done by the shell and are supported by most of them, but only environment variables like $HOME are guaranteed to be supported by all shells. Also, cd is actually a built-in command. It's a special directive that tells the shell itself to change directories. It's not like other shell built-ins that can be implemented as a separate executable like echo is because it's used to change a fundamental attribute of the shell process. echo is merely a shell built-in for performance reasons, but in the good old days of UNIX, was only available as its own executable /bin/echo .
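A few quick experiments make the difference visible (the username tandu is taken from the question):
$ echo ~ ~root
/home/tandu /root
$ echo "~"                           # quoting suppresses tilde expansion
~
$ getent passwd root | cut -d: -f6   # the passwd lookup that ~root performs
/root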
{ "source": [ "https://unix.stackexchange.com/questions/34379", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14414/" ] }
34,462
I recently found out that if I edit GRUB before booting and add rw init=/bin/bash , I end up with a root shell. Since I want to understand everything, I would like to know why this happens. I mean, is it a bug? Is it a feature? Is it there to help admins fix things, since it only works if you have physical access to the computer? Is it provided by GRUB or the actual kernel?
This is a feature, and is used for system maintenance: it allows a sysadmin to recover a system from messed-up initialization files or change a forgotten password. This post in the Red Hat mailing list explains some things: In Unix-like systems, init is the first process to be run, and the ultimate ancestor of all processes ever run. It's responsible for running all the init scripts. You're telling the Linux kernel to run /bin/bash as init, rather than the system init. [...] Thus, you are not exploiting anything, you are just using a standard kernel feature. Besides, as noted in a comment, the rw flag is separate from init= , it just tells the system to mount the root file system as read-write (so you can e.g. edit the misconfigured file or change a password).
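Once you are in that root shell, a typical maintenance session looks something like this (a sketch; the exact steps depend on what you need to fix):
mount -o remount,rw /   # only needed if you did not pass rw on the kernel line
passwd                  # e.g. reset a forgotten root password
sync
exec /sbin/init         # hand control back to the normal init, or simply reboot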
{ "source": [ "https://unix.stackexchange.com/questions/34462", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
34,464
I'm having a very specific problem, however any help will aid in understanding X's relationship to the keyboard. I'd like to be able to launch the ElectricSheep program on top of music playing from XBMC. I've already got the launch script set up, and I can launch ElectricSheep with no problems. The problem occurs when I try to close it. If I launch ElectricSheep without XBMC running, pressing escape closes it. If XBMC is running (or even if I include a line in the script to kill xbmc before launching), it grabs all keyboard input, making my only route out of ElectricSheep either to kill it from an ssh session or to kill X itself. If I run xev while XBMC is running, it receives no input. Is there any way to launch an application and explicitly give it the X keyboard? Thanks for any help!
{ "source": [ "https://unix.stackexchange.com/questions/34464", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15198/" ] }
34,549
I have a number of tiff files named: sw.001.tif sw.002.tif ... and I want to remove the .tif at the end of each of the files. How can I use the rename command to do this?
perl 's rename (as typically found on Debian where it's also called prename ), or this derivative ( rename package on Debian): rename 's/\.tif$//' *.tif util-linux rename (as typically found on Red Hat, rename.ul on Debian): rename -- .tif '' *.tif (note that that one would rename blah.tiffany.tif to blahfany.tif )
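If neither rename variant is available, a plain shell loop does the same job (POSIX sh):
for f in *.tif; do mv -- "$f" "${f%.tif}"; done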
{ "source": [ "https://unix.stackexchange.com/questions/34549", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15417/" ] }
34,646
Having a file of the following contents: 1111,2222,3333,4444 aaaa,bbbb,cccc,dddd I seek to get a file equal to the original but lacking the n-th column, like, for n = 2 (or maybe it is 3) 1111,2222,4444 aaaa,bbbb,dddd or, for n = 0 (or maybe it is 1) 2222,3333,4444 bbbb,cccc,dddd A real file can be gigabytes long, with tens of thousands of columns. As always in such cases, I suspect command line magicians can offer an elegant solution... :-) In my actual real case I need to drop the first 2 columns, which can be done by dropping the first column twice in a row, but I suppose it would be more interesting to generalise a bit.
I believe this is specific to cut from the GNU coreutils: $ cut --complement -f 3 -d, inputfile 1111,2222,4444 aaaa,bbbb,dddd Normally you specify the fields you want via -f, but by adding --complement you reverse the meaning, naturally. From 'man cut': --complement complement the set of selected bytes, characters or fields One caveat: if any of the columns contain a comma, it will throw cut off, because cut isn't a CSV parser in the same way that a spreadsheet is. Many parsers have different ideas about how to handle escaping commas in CSV. For the simple CSV case, on the command line, cut is still the way to go.
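For the asker's actual case of dropping the first two columns, the same idea works, and awk is an alternative if --complement is unavailable (GNU cut assumed for the first form):
cut --complement -f1,2 -d, inputfile
awk -F, '{ for (i = 3; i <= NF; i++) printf "%s%s", $i, (i < NF ? "," : "\n") }' inputfile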
{ "source": [ "https://unix.stackexchange.com/questions/34646", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2119/" ] }
34,677
In bash : $ type : : is a shell builtin $ type true true is a shell builtin Looks like they are the same, but they don't give the same system trace: $ strace : strace: :: command not found $ strace true execve("/bin/true", ["true"], [/* 82 vars */]) = 0 [snip] exit_group(0) = ? I tried diffing strace bash -c : 2>:.txt and strace bash -c true 2>true.txt , but couldn't find any differences between them except for the memory locations. In dash : $ type : : is a special shell builtin $ type true true is a shell builtin OK, so they are not the same. help : and help true aren't very useful, and they return the same in bash and dash . Is there any practical difference at all between them, except that : saves three bytes and makes scripts less readable?
There's no real difference in behavior. Both commands do nothing and exit with a successful status. : emphasizes doing nothing; true emphasizes the successful status. strace true works because true is both a shell builtin and an external command ( /bin/true ); : is only a shell builtin (there's no /bin/: -- though there could be, and probably was on very old Unix systems). In bash, try type -a : type -a true The reasons that both exist are historical. If I recall correctly, some very early shells didn't have a comment syntax, so the do-nothing : command was used instead. There is some internal difference in dash . Looking through the source, available at git://git.kernel.org/pub/scm/utils/dash/dash.git, shows some different code paths in eval.c , but I haven't been able to produce any visibly different behavior other than the word special in the output of type : .
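Two places where the do-nothing builtin is idiomatic (small usage sketches):
while :; do date; sleep 60; done   # infinite loop, equivalent to `while true`
: "${EDITOR:=vi}"                  # run an expansion purely for its side effect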
{ "source": [ "https://unix.stackexchange.com/questions/34677", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3645/" ] }
34,718
Is there a simple command that takes a disk's device node as input, and tells me where (and whether) that disk is mounted? Is it possible to get the mount point by itself, so I can pass it to another command? I'm working on a Debian Squeeze live system with a minimal install (I can install extra packages if need be).
On Linux, you can now use the findmnt command from util-linux (since version 2.18): $ findmnt -S /dev/VG_SC/home TARGET SOURCE FSTYPE OPTIONS /home /dev/mapper/VG_SC-home ext4 rw,relatime,errors=remount-ro,data=ordered Or lsblk (also from util-linux , since 2.19): $ lsblk /dev/VG_SC/home NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT VG_SC-home 254:2 0 200G 0 lvm /home That one is also useful to find all the file systems mounted under a specific device (disk or partition...): $ lsblk /dev/sda2 NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda2 8:2 0 59.5G 0 part ├─linux-debian64 (dm-1) 252:1 0 15G 0 lvm └─linux-mint (dm-2) 252:2 0 15G 0 lvm / To get the mountpoint only: $ findmnt -nr -o target -S /dev/storage/home /home $ lsblk -o MOUNTPOINT -nr /dev/storage/home /home Note that findmnt above returns a failure exit status if the device is not mounted, whereas lsblk does not. So: if mountpoint=$(findmnt -nr -o target -S "$device"); then printf '"%s" is mounted on "%s"\n' "$device" "$mountpoint" else printf '"%s" does not appear to be directly mounted\n' "$device" fi
{ "source": [ "https://unix.stackexchange.com/questions/34718", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11908/" ] }
34,742
What are the differences between CIFS and SAMBA? When would you use one over the other? Are there any performance differences between the two?
SAMBA was originally SMB Server – but the name had to be changed due to SMB Server being an actual product. SMB was the predecessor to CIFS. SMB (Server Message Block) and CIFS (Common Internet File System) are protocols. Samba implements the CIFS network protocol. This is what allows Samba to communicate with (newer) MS Windows systems. Typically you will see it referred to as SMB/CIFS. However, CIFS is the extension of the SMB protocol, so if someone is sharing out SMB via Samba to a legacy system still using NetBIOS, it will typically connect to the Samba server via ports 137, 138 and 139, while CIFS is strictly port 445. So to answer your question directly, Samba provides CIFS file shares. The time when you might use SMB over CIFS is if you are providing access to Windows 2000 systems or earlier, or you just want to connect to port 139 instead of 445. If you truly want to know about CIFS, one of the definitive books is available free online: Implementing CIFS - The Common Internet Filesystem If you want to get deeper into Samba, this book is available online free as well: Using Samba 2nd Edition Though there is a newer edition out, it is not free online as far as I am aware.
{ "source": [ "https://unix.stackexchange.com/questions/34742", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13133/" ] }
34,751
My problem is that with lsof -p pid I can find out the list of open files of a process whose process id is pid. But is there a way to find out the file offset of each accessed file? Please give me some suggestions.
On linux, you can find the position of the file descriptor number N of process PID in /proc/$PID/fdinfo/$N . Example: $ cat /proc/687705/fdinfo/36 pos: 26088 flags: 0100001 The same file can be opened several times with different positions using several file descriptors, so you'll have to choose the relevant one in the case there are more than one. Use: $ readlink /proc/$PID/fd/$N to know what is the file to which the corresponding file descriptor is attached (it might not be a file, in this case the symlink is dangling).
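To dump the offset of every open file of a process in one go (a small sketch; set PID to the process you are inspecting):
for fd in /proc/$PID/fdinfo/*; do
    n=${fd##*/}
    printf '%s\tpos=%s\n' "$(readlink /proc/$PID/fd/$n)" "$(awk '/^pos:/ {print $2}' "$fd")"
done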
{ "source": [ "https://unix.stackexchange.com/questions/34751", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13093/" ] }
34,766
I have been using public key authentication on my servers for a while now, but I am experiencing issues on a new 'client' trying to connect to github . I have read many threads to verify that my permissions are set up correctly and have generated a new key for github. The problem I am facing is that ssh is asking for my passphrase even though I did not set a passphrase. I have even re-made the key to be 100% sure that I did not enter a passphrase. ssh -vvv gives the following related output: debug1: Offering public key: /home/me/.ssh/github.pub debug2: we sent a publickey packet, wait for reply debug3: Wrote 368 bytes for a total of 1495 debug1: Remote: Forced command: gerve mygithubusername c3:71:db:34:98:30:6d:c2:ca:d9:51:a8:c6:1b:fc:f7 debug1: Remote: Port forwarding disabled. debug1: Remote: X11 forwarding disabled. debug1: Remote: Agent forwarding disabled. debug1: Remote: Pty allocation disabled. debug1: Server accepts key: pkalg ssh-rsa blen 277 debug2: input_userauth_pk_ok: fp c3:71:db:34:98:30:6d:c2:ca:d9:51:a8:c6:1b:fc:f7 debug3: sign_and_send_pubkey debug1: PEM_read_PrivateKey failed debug1: read PEM private key done: type <unknown> Enter passphrase for key '/home/me/.ssh/github.pub': I have searched to figure out why it is telling me PEM_read_PrivateKey failed, but I cannot find a solution. I do not use an agent or anything. I configure my ~/.ssh/config file similar to the following: Host github Host github.com Hostname github.com User git PubkeyAuthentication yes IdentityFile /home/me/.ssh/github.pub Thanks in advance.
When you use the IdentityFile option in your ~/.ssh/config you point to the private, not the public , key. From man ssh_config : IdentityFile Specifies a file from which the user's DSA, ECDSA or RSA authentication identity is read. The default is ~/.ssh/identity for protocol version 1, and ~/.ssh/id_dsa, ~/.ssh/id_ecdsa and ~/.ssh/id_rsa for protocol version 2. So, your ~/.ssh/config entry should look like: Host github.com Hostname github.com User git PubkeyAuthentication yes IdentityFile /home/me/.ssh/github
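With the config fixed, you can confirm the key is picked up without a passphrase prompt; GitHub closes the connection after authenticating, so output roughly like the following is the expected success case:
$ ssh -T git@github.com
Hi mygithubusername! You've successfully authenticated, but GitHub does not provide shell access.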
{ "source": [ "https://unix.stackexchange.com/questions/34766", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10805/" ] }
34,795
I'm a bit confused on some of the results I am seeing from ps and free . On my server, this is the result of free -m [root@server ~]# free -m total used free shared buffers cached Mem: 2048 2033 14 0 73 1398 -/+ buffers/cache: 561 1486 Swap: 2047 11 2036 My understanding of how Linux manages memory, is that it will store disk usage in RAM, so that each subsequent access is quicker. I believe this is indicated by the "cached" columns. Additionally, various buffers are stored in RAM, indicated in the "buffers" column. So if I understand correctly, the "actual" usage is supposed to be the "used" value of "-/+ buffers/cache", or 561 in this case. So assuming all of that is correct, the part that throws me is the results of ps aux . My understanding of the ps results, is that the 6th column (RSS), represents the size in kilobytes the process uses for memory. So when I run this command: [root@server ~]# ps aux | awk '{sum+=$6} END {print sum / 1024}' 1475.52 Shouldn't the result be the "used" column of "-/+ buffers/cache" from free -m ? So, how can I properly determine the memory usage of a process in Linux? Apparently my logic is flawed.
Shamelessly copy/pasting my answer from serverfault just the other day :-) The linux virtual memory system isn't quite so simple. You can't just add up all the RSS fields and get the value reported as used by free . There are many reasons for this, but I'll hit a couple of the biggest ones. When a process forks, both the parent and the child will show with the same RSS. However, Linux employs copy-on-write so that both processes are really using the same memory. Only when one of the processes modifies the memory will it actually be duplicated. This will cause the free number to be smaller than the top RSS sum. The RSS value doesn't include shared memory. Because shared memory isn't owned by any one process, top doesn't include it in RSS. This will cause the free number to be larger than the top RSS sum. There are many other reasons the numbers might not add up. This answer is just trying to make the point that memory management is very complex, and you can't just add/subtract individual values to get total memory usage.
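If you want a per-process figure that accounts for sharing more honestly, the kernel's PSS (proportional set size) values in smaps are a common approximation (Linux-specific, not part of the original answer; needs permission to read the process's smaps):
awk '/^Pss:/ { kb += $2 } END { print kb " kB" }' /proc/$PID/smaps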
{ "source": [ "https://unix.stackexchange.com/questions/34795", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16926/" ] }
34,874
I want to use a CLI tool for file comparison and need the line number printed before each output line, so that I can jump to the differing line, because I use a tool that understands where to jump if the line begins like this: :line-number: regular line contents So I tried diff , and reading the documentation it seems like it might be possible: -D, --ifdef=NAME output merged file with `#ifdef NAME' diffs --GTYPE-group-format=GFMT format GTYPE input groups with GFMT --line-format=LFMT format all input lines with LFMT --LTYPE-line-format=LFMT format LTYPE input lines with LFMT These format options provide fine-grained control over the output of diff, generalizing -D/--ifdef. LTYPE is `old', `new', or `unchanged'. GTYPE is LTYPE or `changed'. GFMT (only) may contain: %< lines from FILE1 %> lines from FILE2 %= lines common to FILE1 and FILE2 %[-][WIDTH][.[PREC]]{doxX}LETTER printf-style spec for LETTER LETTERs are as follows for new group, lower case for old group: F first line number L last line number N number of lines = L-F+1 E F-1 M L+1 %(A=B?T:E) if A equals B then T else E LFMT (only) may contain: %L contents of line %l contents of line, excluding any trailing newline %[-][WIDTH][.[PREC]]{doxX}n printf-style spec for input line number Both GFMT and LFMT may contain: %% % %c'C' the single character C %c'\OOO' the character with octal code OOO C the character C (other characters represent themselves) but there is no example or explanation of this complicated switch. Is it possible to get such output from diff ? If so, how?
Yes, it is possible. When using these options, the default is just to print out every line. This is very verbose, and not what you want. diff --unchanged-line-format="" will eliminate lines that are unchanged, so now only the old and new lines are produced. diff --unchanged-line-format="" --new-line-format=":%dn: %L" will now show the new lines prefixed by :<linenumber>: and a space, but still print the old lines. Assuming you want to eliminate them, diff --unchanged-line-format="" --old-line-format="" --new-line-format=":%dn: %L" If you want the old lines rather than the new ones to be printed, swap them around.
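A tiny worked example (throwaway files) showing the :line-number: format the asker wanted:
$ printf 'a\nb\nc\n' > old; printf 'a\nB\nc\nd\n' > new
$ diff --unchanged-line-format="" --old-line-format="" --new-line-format=":%dn: %L" old new
:2: B
:4: d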
{ "source": [ "https://unix.stackexchange.com/questions/34874", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14866/" ] }
34,933
I can connect to Linux machines from Windows using PuTTY/SSH. I want to do the other way round - connect to a Windows machine from Linux. Is this possible?
It depends on how you want to connect. You can create shares on the Windows machine and use smb/cifs to connect to the share. The syntax depends on whether you are in a domain or not. # mount -t cifs //server/share /mnt/server --verbose -o user=UserName,dom=DOMAIN You also have the ability to mount the IPC$ and administrative shares. You can look into Inter-Process Communication for what you can do via the IPC$ share. There is always: RDP VNC telnet ssh Linux on Windows With the last 3 you need to install additional software. Kpym (telnet / ssh server) MobaSSH (ssh server) Cygwin (run a Linux environment inside Windows) DamnSmall Linux - inside Windows (like Cygwin, run DSL inside Windows) VNC can be run from a stand-alone binary or installed. RealVNC TightVNC For RDP most Linux systems either already have rdesktop installed or it is available in the package manager. Using rdesktop you only have to enable RDP connections to your Windows system and then you will be able to use RDP for a full GUI Windows console.
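For the RDP route, a typical invocation looks like this (hostname, user and geometry are examples; xfreerdp is a more modern alternative to rdesktop):
rdesktop -u Administrator -g 1280x800 windowshost
xfreerdp /u:Administrator /v:windowshost /size:1280x800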
{ "source": [ "https://unix.stackexchange.com/questions/34933", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15530/" ] }
35,088
How can one read passwords in bash scripts in such a way that the password is not shown on the terminal? (Changing the font to black-on-black is easily worked around by copy & paste, so it's not a solution.)
From help read : -s do not echo input coming from a terminal For example, to prompt the user and read an arbitrary password into the variable passwd , IFS= read -s -p 'Password please: ' passwd
{ "source": [ "https://unix.stackexchange.com/questions/35088", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9689/" ] }
35,129
I found that pidstat would be a good tool to monitor processes. I want to calculate the average memory usage of a particular process. Here is some example output: 02:34:36 PM PID minflt/s majflt/s VSZ RSS %MEM Command 02:34:37 PM 7276 2.00 0.00 349212 210176 7.14 scalpel (This is part of the output from pidstat -r -p 7276 .) Should I use the Resident Set Size (RSS) or Virtual Size (VSZ) information to calculate the average memory consumption? I have read a few thing on Wikipedia and on forums but I am not sure to fully understand the differences. Plus, it seems that none of them are reliable. So, how can I monitor a process to get its memory usage? Any help on this matter would be useful.
RSS is how much memory this process currently has in main memory (RAM). VSZ is how much virtual memory the process has in total. This includes all types of memory, both in RAM and swapped out. These numbers can get skewed because they also include shared libraries and other types of memory. You can have five hundred instances of bash running, and the total size of their memory footprint won't be the sum of their RSS or VSZ values. If you need to get a more detailed idea about the memory footprint of a process, you have some options. You can go through /proc/$PID/maps and weed out the stuff you don't like. If it's shared libraries, the calculation could get complex depending on your needs (which I think I remember). If you only care about the heap size of the process, you can always just parse the [heap] entry in the maps file. The size the kernel has allocated for the process heap may or may not reflect the exact number of bytes the process has asked to be allocated. There are minute details, kernel internals and optimisations which can throw this off. In an ideal world, it'll be as much as your process needs, rounded up to the nearest multiple of the system page size ( getconf PAGESIZE will tell you what it is — on PCs, it's probably 4,096 bytes). If you want to see how much memory a process has allocated , one of the best ways is to forgo the kernel-side metrics. Instead, you instrument the C library's heap memory (de)allocation functions with the LD_PRELOAD mechanism. Personally, I slightly abuse valgrind to get information about this sort of thing. (Note that applying the instrumentation will require restarting the process.) Please note, since you may also be benchmarking runtimes, that valgrind will make your programs very slightly slower (but probably within your tolerances).
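If you take the valgrind route mentioned above, the massif tool is the usual way to profile heap usage over time (it slows the program down considerably):
valgrind --tool=massif ./yourprogram its-args
ms_print massif.out.<pid>    # <pid> is filled in by valgrind when it writes the file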
{ "source": [ "https://unix.stackexchange.com/questions/35129", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16757/" ] }
35,149
I have a Lenovo Thinkpad T420 with Linux Mint 12 and gnome-shell on it. It has an Intel HD 3000 graphics card. When I'm at home, I have another screen plugged in (19" 4:3) and everything works fine (extended desktop), except that I would like to have the Gnome 3 bars + Shell on the right screen. I can't figure out how to do it. Thanks in advance
Open the System Settings > Displays control applet. It's not evident - at all - but you can drag the miniature of the top black panel onto the display you want to mark as primary. Panels, the activity overlay and everything else will migrate to that display.
{ "source": [ "https://unix.stackexchange.com/questions/35149", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17095/" ] }
35,180
I've been using screen -dRaA -S x to open up a single session between different workstations as I move about. Handy. Is it possible to connect multiple times to a single session, though, without disconnecting others? When I have two machines I'm quickly moving between even reconnecting starts to slow me down.
Try screen -aAxR -S x . The -x option is what does what you want: it lets you attach to a session that is already attached elsewhere, without detaching the other connections.
{ "source": [ "https://unix.stackexchange.com/questions/35180", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4192/" ] }
35,183
We have some new hardware in our office which runs its own customized Linux OS. How do I go about figuring which distro it's based on?
A question very close to this one was posted on Unix.Stackexchange HERE Giles has a pretty complete | cool answer for the ways he describes. # cat /proc/version Linux version 2.6.32-71.el6.x86_64 ([email protected]) (gcc version 4.4.4 20100726 (Red Hat 4.4.4-13) (GCC) ) #1 SMP Fri May 20 03:51:51 BST 2011 # uname -a Linux system1.doofus.local 2.6.32-71.el6.x86_64 #1 SMP Fri May 20 03:51:51 BST 2011 x86_64 x86_64 x86_64 GNU/Linux # cat /etc/issue CentOS Linux release 6.0 (Final) Kernel \r on an \m cat /proc/config.gz cat /usr/src/linux/config.gz cat /boot/config* Though I did some checking and this was not very reliable except on SUSE. # zcat /proc/config.gz | grep -i kernel CONFIG_SUSE_KERNEL=y # CONFIG_KERNEL_DESKTOP is not set CONFIG_LOCK_KERNEL=y Release Files in /etc ( from Unix.com ) Novell SuSE---> /etc/SuSE-release Red Hat--->/etc/redhat-release, /etc/redhat_version Fedora-->/etc/fedora-release Slackware--->/etc/slackware-release, /etc/slackware-version Old Debian--->/etc/debian_release, /etc/debian_version New Debian--->/etc/os-release Mandrake--->/etc/mandrake-release Yellow dog-->/etc/yellowdog-release Sun JDS--->/etc/sun-release Solaris/Sparc--->/etc/release Gentoo--->/etc/gentoo-release There is also a bash script at the Unix.com link someone wrote to automate checking. Figuring out what package manager you have is a good clue. rpm yum apt-get zypper +many more Though this is by no means foolproof as the vendor could use anything they want. It really just gives you a place to start. # dmesg | less Linux version 2.6.32.12-0.7-default (geeko@buildhost) (gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) ) #1 SMP 2010-05-20 11:14:20 +0200 pretty much the same information as cat /proc/version & uname
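On many current systems the quickest first checks are (where available):
lsb_release -a          # needs the lsb-release package
cat /etc/os-release     # the systemd-era file listed above for newer Debian, now common across distributions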
{ "source": [ "https://unix.stackexchange.com/questions/35183", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17114/" ] }
35,206
I am working with Vim and trying to set up a search and replace command to do some replacements where I can re-use the regular expression that is part of my search string. A simple example would be a line where I want to replace (10) with {10} , where 10 can be any number. I came this far .s/([0-9]*)/what here??/ which matches exactly the part that I want. Now the replacement, I tried .s/([0-9]*)/{\0}/ But, this gives as output {(10)} Then, I tried .s/(\zs[0-9]*\ze)/{\0}/ However, that gave me ({10}) , which is also close, but not what I want. I think I need some other kind of marking/back-referencing instead of this \0 , but I don't know where to look. So the question is, can this be done in vim, and if so, how?
\0 is the whole match. To use only part of it you need to set it like this and use \1 .s/(\([0-9]*\))/{\1}/ More detailed instruction you can find here or in vim help.
{ "source": [ "https://unix.stackexchange.com/questions/35206", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14084/" ] }
35,292
I know of this command: find /path/to/mountpoint -inum <inode number> but it is a very slow search, I feel like there has to be a faster way to do this. Does anybody know a faster method?
For an ext4 filesystem, you can use debugfs as in the following example: $ sudo debugfs -R 'ncheck 393094' /dev/sda2 2>/dev/null Inode Pathname 393094 /home/enzotib/examples.desktop The answer is not immediate, but seems to be faster than find . The output of debugfs can be easily parsed to obtain the file names: $ sudo debugfs -R 'ncheck 393094' /dev/sda2 | cut -f2 | tail -n2 > filenames
{ "source": [ "https://unix.stackexchange.com/questions/35292", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17166/" ] }
35,311
I am using this command on a 5GB archive tar -zxvf archive.tar.gz /folder/in/archive is this the correct way to do this? It seems to be taking forever with no command line output...
tar stores relative paths by default . GNU tar even says so if you try to store an absolute path: tar -cf foo.tar /home/foo tar: Removing leading `/' from member names If you need to extract a particular folder, have a look at what's in the tar file: tar -tvf foo.tar And note the exact filename. In the case of my foo.tar file, I could extract /home/foo/bar by saying: tar -xvf foo.tar home/foo/bar # Note: no leading slash So no, the way you posted isn't (necessarily) the correct way to do it. You have to leave out the leading slash. If you want to simulate absolute paths, do cd / first and make sure you're the superuser. Also, this does the same: tar -C / -xvf foo.tar home/foo/bar # -C is the ‘change directory’ option There are very obvious, good reasons why tar converts paths to relative ones. One is the ability to restore an archive in places other than its original source. The other is security. You could extract an archive, expect its files to appear in your current working directory, and instead overwrite system files (or your own work) elsewhere by mistake. Note: if you use the -P option, tar will archive absolute paths. So it always pays to check the contents of big archives before extracting.
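Putting it together for the archive in the question - listing first to confirm the exact stored path, then extracting into a scratch directory (the scratch directory is just an example):
tar -tzf archive.tar.gz | grep '^folder/in/archive/'
mkdir -p /tmp/extract && tar -xzf archive.tar.gz -C /tmp/extract folder/in/archive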
{ "source": [ "https://unix.stackexchange.com/questions/35311", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11897/" ] }
35,333
The terminal is a very fast and convenient way to quickly access directories and files (faster than finding and clicking on a directory). One thing that it cannot show in text mode is "pictures". What is the best way to view pictures (like the image thumbnails you see in Nautilus) when you are working in the terminal (e.g. a command like nautilus or any program - but it should be fast and convenient)?
The way to "double-click" on a file from the command line is xdg-open . If you're on Gnome (probably, if you're using Nautilus), you can use eog directly, or any other image program ( feh is quite good). feh <image-name> If you want to consult image-name file easilly.
{ "source": [ "https://unix.stackexchange.com/questions/35333", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8932/" ] }
35,338
What is the difference between the following commands: su sudo -s sudo -i sudo bash I know for su I need to know the root password, and for sudo I have to be in the sudoers file, but once executed what is difference? I know there is a difference between su and sudo -s because my home directory is /root after I execute su , but my home directory is still /home/myname after sudo -s . But I suspect this is just a symptom of an underlying difference that I'm missing.
With su , you become another user — root by default, but potentially another user. If you say su - , your environment gets replaced with that user's login environment as well, so that what you see is indistinguishable from logging in as that user. There is no way the system can tell what you do while su 'd to another user from actions by that user when they log in. Things are very different with sudo : Commands you run through sudo execute as the target user — root by default, but changeable with -u — but it logs the commands you run through it, tagging them with your username so blame can be assigned afterward. :) sudo is very flexible. You can limit the commands a given user or group of users are allowed to run, for example. With su , it's all or nothing. This feature is typically used to define roles. For instance, you could define a "backups" group allowed to run dump and tar , each of which needs root access to properly back up the system disk. I mention this here because it means you can give someone sudo privileges without giving them sudo -s or sudo bash abilities. They have only the permissions they need to do their job, whereas with su they have run of the entire system. You have to be careful with this, though: if you give someone the ability to say sudo vi , for example, they can shell out of vi and have effectively the same power as with sudo -s . Because it takes the sudoer's password instead of the root password, sudo isolates permission between multiple sudoers. This solves an administrative problem with su , which is that when the root password changes, all those who had to know it to use su had to be told. sudo allows the sudoers' passwords to change independently. In fact, it is common to password-lock the root user's account on a system with sudo to force all sysadmin tasks to be done via sudo . In a large organization with many trusted sudoers, this means when one of the sysadmins leaves, you don't have to change the root password and distribute it to those admins who remain. The main differences between sudo bash and sudo -s are: -s is shorter than bash You can say sudo -s some-command to run some-command under your default shell, but with superuser privileges. It's basically shorthand for sudo $SHELL -c some-command . You can instead pass the commands to the shell's standard input, like sudo -s < my-shell-script . You could use this with a heredoc to send several commands to a single sudo call, avoiding the need to type sudo repeatedly. Even without these extra command arguments, sudo -s still differs from sudo bash in that it might run a different shell than bash , since it looks first in the SHELL environment variable, and then if that is unset, at your user's login shell setting, typically in /etc/passwd . The shell run by sudo -s inherits your current user environment. If what you actually want is a clean environment, like you get just after login, what you want instead is sudo -i , a relatively recent addition to sudo . Roughly speaking, sudo -i is to sudo -s as su - is to su : it resets all but a few key environment variables and sends you back to your user's home directory. If you don't also give it commands to run under that shell via standard input or sudo -i some-command , it will run that shell as an interactive login shell, so your user's shell startup scripts (e.g. .bash_profile ) get run again. All of this makes sudo -i considerably more secure than sudo -s . Why? 
Because if someone can modify your environment before sudo -s , they could cause unintended commands to be executed. The most obvious case is modifying SHELL , but it can also happen less directly, such as via PAGER if you say man foo while under sudo -s . You might say, "If they can modify PAGER , they can modify PATH , and then they can just substitute an evil sudo program," but someone sufficiently paranoid can say /usr/bin/sudo /bin/bash to avoid that trap. You're probably not so paranoid that you also avoid the traps in all the other susceptible environment variables, though. Did you also remember to check EDITOR , for example, before running any VCS command? Thus sudo -i . Because sudo -i also changes your working directory to your user's home directory, you might still want to use sudo -s for those situations where you know you want to remain in the same directory you were cd 'd into when you ran sudo . It's still safer to sudo -i and cd back to where you were, though. Another variant of all this that you sometimes see is sudo su , which is approximately equivalent to sudo -s . Likewise, sudo su - is functionally quite close to sudo -i . Since sudo and su are competing commands, it's a little odd to pair them like this, so I recommend that you use the sudo flags instead.
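As a concrete illustration of the "backups" role mentioned earlier, a sudoers entry edited via visudo might look like this (group name and command paths are examples, not a recommendation):
%backups ALL = (root) /usr/sbin/dump, /bin/tar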
{ "source": [ "https://unix.stackexchange.com/questions/35338", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16212/" ] }
35,369
Here is an example of using cut to break input into fields using a space delimiter, and obtaining the second field: cut -f2 -d' ' How can the delimiter be defined as a tab, instead of a space?
Two ways: Press Ctrl + V and then Tab to use "verbatim" quoted insert . cut -f2 -d' ' infile or write it like this to use ANSI-C quoting : cut -f2 -d$'\t' infile The $'...' form of quotes isn't part of the POSIX shell language ( not yet ), but works at least in ksh, mksh, zsh and Busybox in addition to Bash.
{ "source": [ "https://unix.stackexchange.com/questions/35369", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17204/" ] }
35,416
I have a bunch of PNG images in a directory. I have an application called pngout that I run to compress these images. This application is called by a script I wrote. The problem is that this script processes one file at a time, something like this: FILES=(./*.png) for f in "${FILES[@]}" do echo "Processing $f file..." # take action on each file; $f stores the current file name ./pngout -s0 $f R${f/\.\//} done Processing just one file at a time takes a lot of time. After running this app, I see that the CPU is at just 10%. So I discovered that I can divide these files into 4 batches, put each batch in a directory and fire four processes from four terminal windows, so I have four instances of my script processing those images at the same time, and the job takes 1/4 of the time. The second problem is that I lost time dividing the images into batches, copying the script to four directories, opening 4 terminal windows, bla bla... How do I do that with one script, without having to divide anything? I mean two things: first, how do I fire a process into the background from a bash script? (just add & to the end?) Second: how do I stop sending tasks to the background after sending the fourth task and make the script wait until the tasks end? I mean, just sending a new task to the background as one task ends, always keeping 4 tasks in parallel? If I do not do that, the loop will fire zillions of tasks to the background and the CPU will clog.
If you have a copy of xargs that supports parallel execution with -P , you can simply do printf '%s\0' *.png | xargs -0 -I {} -P 4 ./pngout -s0 {} R{} For other ideas, the Wooledge Bash wiki has a section in the Process Management article describing exactly what you want.
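If you would rather stay in plain bash without xargs, a job-control loop keeps four workers in flight (wait -n needs bash 4.3 or newer; output naming mirrors the question's script):
for f in ./*.png; do
    while [ "$(jobs -rp | wc -l)" -ge 4 ]; do wait -n; done   # block while 4 jobs are running
    ./pngout -s0 "$f" "R${f#./}" &
done
wait    # let the last jobs finish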
{ "source": [ "https://unix.stackexchange.com/questions/35416", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45335/" ] }
35,508
I know that the eject command can be used to eject almost any hardware component attached, but can it be used to eject USB drives? Is it possible to eject USB drives and external HDD's with the eject command?
Yes. For example: sudo eject /dev/sda Other answers here that indicate that you require mechanical ejection hardware are incorrect. Unmounting is not the same thing as ejecting. If you unmount a volume, you can immediately mount it back, because the underlying device is still available. In some situations, this could present a security risk. By ejecting the device, you effectively disable any further access to it. Only a reset of the USB subsystem (e.g. a reboot) will reload the device; otherwise, you must physically disconnect the USB device and reconnect it in order to access it again. Before ejecting, this command will unmount all volumes on the device that were mounted. If volumes are in use, this command will fail as with unmounting, except that some volumes might be unmounted and some might remain mounted.
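On desktop systems with udisks2 you can also unmount and power the device down politely (device names are examples):
udisksctl unmount -b /dev/sda1
udisksctl power-off -b /dev/sda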
{ "source": [ "https://unix.stackexchange.com/questions/35508", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16533/" ] }
35,618
In a larger script to post-process some simulation data I had the following line: parallel bnzip2 -- *.bz2 Which, if I understand parallel correctly (and I may not), should run n-core threads of the program over all files with the listed extension. You may notice that I misspelled the command bunzip2 . I would expect a warning or error message here, but it fails silently. Is this intended? How do I not get bit by this in the future? Update: It is possible that I have a different parallel installed than I think I do: > parallel --version` parallel: invalid option -- '-' parallel [OPTIONS] command -- arguments for each argument, run command with argument, in parallel parallel [OPTIONS] -- commands run specified commands in parallel A man page of parallel on my system gives: parallel(1) parallel(1) NAME parallel - run programs in parallel .... AUTHOR Tollef Fog Heen Which seems this is not the GNU version.
You have been hit by the confusion with Tollef's parallel from moreutils. See https://www.gnu.org/software/parallel/history.html You can install GNU Parallel simply by: wget http://git.savannah.gnu.org/cgit/parallel.git/plain/src/parallel chmod 755 parallel cp parallel sem Watch the intro videos for GNU Parallel to learn more: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
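With GNU Parallel installed, the task from the question would typically be written with the ::: argument separator (and a correctly spelled bunzip2):
parallel bunzip2 ::: *.bz2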
{ "source": [ "https://unix.stackexchange.com/questions/35618", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7093/" ] }
35,624
I'm using Debian 6 and Dropbox. I followed these commands to install it. When I run ~/.dropbox-dist/dropboxd — Dropbox works and stuff. Problem is that when I close terminal or, even worse — reboot, Dropbox stops working and I need to run that daemon again. How can I have the computer automatically start that daemon in the background?
If you're running the daemon from your own account, start it at boot time with Cron . Run crontab -e to edit your crontab file and add the line @reboot ~/.dropbox-dist/dropboxd
{ "source": [ "https://unix.stackexchange.com/questions/35624", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8908/" ] }
35,627
Is there a standard way to reverse the output order of a terminal output, i.e.: ~/Developer $ command0 -bash: comman0: command not found ~/Developer $ command2 -bash: command2: command not found ~/Developer $ command3 -bash: command3: command not found ~/Developer $ would be displayed as: ~/Developer $ -bash: command3: command not found ~/Developer $ command3 -bash: command2: command not found ~/Developer $ command2 -bash: comman0: command not found ~/Developer $ comman0 I feel always having your prompt at the bottom is counterintuitive; a more effective way of presenting the output would be to reverse the output order. How might I go about implementing this? Specifically, where is the output portion of the OS X Terminal program defined?
{ "source": [ "https://unix.stackexchange.com/questions/35627", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14877/" ] }
35,639
I'd like to write a statement to dmesg. How can I do this?
Write to /dev/kmsg (not /proc/kmsg as suggested by @Nils ). See linux/kernel/printk/printk.c devkmsg_writev for the kernel-side implementation and systemd/src/journal/journald-kmsg.c server_forward_kmsg for an example of usage.
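For example (needs root, since /dev/kmsg is normally not writable by ordinary users):
echo "deploy finished at $(date -u)" | sudo tee /dev/kmsg
dmesg | tail -n 1    # the injected line shows up with a normal kernel timestamp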
{ "source": [ "https://unix.stackexchange.com/questions/35639", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12886/" ] }
35,728
Is it possible to customise the bash prompt to show the if there are any background jobs? I find it easy to forget that there are background jobs. Say if the prompt was... $ Is there a way to make it show the number of background jobs? For example, if there were two background jobs sent to the background using CTRL+Z , the prompt would be... 2 $
Put \j in your prompt. From the bash manual : \j The number of jobs currently managed by the shell Just remember that prompts do go stale and jobs can finish at any time, so if you have left the terminal idle, you'll want to redisplay the prompt. At the cost of requiring an extra process just to print your prompt, you can make the \j only appear if any jobs exist. PROMPT_COMMAND='hasjobs=$(jobs -p)' PS1='${hasjobs:+\j }\$ '
{ "source": [ "https://unix.stackexchange.com/questions/35728", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1932/" ] }
35,746
What command line should I use to convert from AVI to MP4 without destroying the frame size, keeping the file about as small as the original (or only a little bigger), and the same thing for MP4 to AVI? Whenever I tried converting, the result became something like 2 GB.
Depending on how your original file was encoded, it may not be possible to keep the file size. This command should keep frame sizes and rates intact while making an mp4 file: ffmpeg -i infile.avi youroutput.mp4 And this command will give you information about your input file - the frame size, codecs used, bitrate, etc.: ffmpeg -i infile.avi You can also play with the acodec and vcodec options when you generate your output. Remember also that mp4 and avi files can use various codecs and your mileage may vary according to which codec you pick.
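If the output comes out much larger than the source, re-encoding with an explicit quality target usually helps (assumes an ffmpeg build with libx264 and AAC support; raise -crf for smaller files at lower quality):
ffmpeg -i infile.avi -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k out.mp4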
{ "source": [ "https://unix.stackexchange.com/questions/35746", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17266/" ] }
35,777
The default prompt will be [root@monu dev]# Can we change the "root@monu" part and give it a desired name?
This is changed by changing the environment variable PS1 . You can see the current value of PS1 with: [root@monu dev]# echo $PS1 You will see that the value is \u@\h \w\$ , where: \u : username \h : hostname \w : current working directory \$ : a # symbol for the root user, a $ symbol for other users If you want the change to be permanent, you have to add the line that sets PS1 to ~/.bashrc , since it gets executed every time a user logs in.
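For example, to replace the root@monu part with a name of your choice (the name here is just an illustration):
PS1='mybox \w\$ '    # try it out in the current shell
# to keep it across logins, add that same line to ~/.bashrc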
{ "source": [ "https://unix.stackexchange.com/questions/35777", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17276/" ] }
35,782
Often I will work out of the root of a rails directory, where I find myself typing out the same path twice every time I want to move a file: mv app/views/layouts/application.html.erb app/views/layouts/application.html.haml The haml is just one of many examples where I need to change the file name without modifying the directory it currently lives in and without changing directory. Is there a way to achieve this?
Use brace expansion : mv very/long/path/to/filename.{old,new} would expand to mv very/long/path/to/filename.old very/long/path/to/filename.new
{ "source": [ "https://unix.stackexchange.com/questions/35782", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14877/" ] }
35,789
I just opened a legacy shell script (written in old ksh88 on Solaris) and found the following repeated all throughout the code: [ -f $myfile ] && \rm -f $myfile The escaping backslash strikes me as odd. I know it is deliberate, since this kind of (apparently useless) escaping is repeated all throughout the code. The original author is long gone, I cannot get in touch with him to ask him. Is this simply a funny idiosyncrasy of the author or is it some sort of deprecated good practice that made sense at some point in time? Or maybe is actually the recommended way of doing things and I'm missing something altogether?
This is used for alias protection: $ ls .bashrc a b $ alias ls alias ls='ls $LS_OPTIONS' $ \ls a b
{ "source": [ "https://unix.stackexchange.com/questions/35789", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4098/" ] }
35,832
The md5sum program does not provide checksums for directories. I want to get a single MD5 checksum for the entire contents of a directory, including files in sub-directories. That is, one combined checksum made out of all the files. Is there a way to do this?
The right way depends on exactly why you're asking: Option 1: Compare Data Only If you just need a hash of the tree's file contents, this will do the trick: $ find -s somedir -type f -exec md5sum {} \; | md5sum This first summarizes all of the file contents individually, in a predictable order, then passes that list of file names and MD5 hashes to be hashed itself, giving a single value that only changes when the content of one of the files in the tree changes. Unfortunately, find -s only works with BSD find(1), used in macOS, FreeBSD, NetBSD and OpenBSD. To get something comparable on a system with GNU or SUS find(1), you need something a bit uglier: $ find somedir -type f -exec md5sum {} \; | sort -k 2 | md5sum We've mimicked the behavior of BSD find -s by adding a call to sort . The -k 2 bit tells it to skip over the MD5 hash, so it only sorts the file names, which are in field 2 through end-of-line by sort 's reckoning. There's a weakness with this version of the command, which is that it's liable to become confused if you have any filenames with newlines in them, because it'll look like multiple lines to the sort call. The find -s variant doesn't have that problem, because the tree traversal and sorting happen within the same program, find . In either case, the sorting is necessary to avoid false positives: the most common Unix/Linux filesystems don't maintain the directory listings in a stable, predictable order. You might not realize this from using ls and such, which silently sort the directory contents for you. Calling find without sorting its output in some way will cause the order of lines in the output to match whatever order the underlying filesystem returns them, which will cause this command to give a changed hash value if the order of files given to it as input changes, even if the data remain identical. You may well ask whether the -k 2 bit in the GNU sort command above is necessary. Given that the hash of the file's data is an adequate proxy for the file's name as long as the contents have not changed, we will not get false positives if we drop this option, allowing us to use the same command with both GNU and BSD sort . However, realize that there is a small chance (1:2 128 with MD5) that the exact ordering of file names does not match the partial order that doing without -k 2 can give if there is ever a hash collision. Keep in mind, however, if such small chances of a mismatch matter to your application, this whole approach is probably out of the question for you. You might need to change the md5sum commands to md5 or some other hash function. If you choose another hash function and need the second form of the command for your system, you might need to adjust the sort command accordingly. Another trap is that some data summing programs don't write out a file name at all, a prime example being the old Unix sum program. This method is somewhat inefficient, calling md5sum N+1 times, where N is the number of files in the tree, but that's a necessary cost to avoid hashing file and directory metadata. Option 2: Compare Data and Metadata If you need to be able to detect that anything in a tree has changed, not just file contents, ask tar to pack the directory contents up for you, then send it to md5sum : $ tar -cf - somedir | md5sum Because tar also sees file permissions, ownership, etc., this will also detect changes to those things, not just changes to file contents. 
This method is considerably faster, since it makes only one pass over the tree and runs the hash program only once. As with the find based method above, tar is going to process file names in the order the underlying filesystem returns them. It may well be that in your application, you can be sure you won't cause this to happen. I can think of at least three different usage patterns where that is likely to be the case. (I'm not going to list them, because we're getting into unspecified behavior territory. Each filesystem can be different here, even from one version of the OS to the next.) If you find yourself getting false positives, I'd recommend going with the find | cpio option in Gilles' answer .
{ "source": [ "https://unix.stackexchange.com/questions/35832", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
35,851
AFAIK dmesg shows information about kernel and kernel modules, and /var/log/messages also shows information produced by kernel and modules. So what's the difference? Does /var/log/messages ⊂ output of dmesg ? More Info that may be helpful: - There is a kernel ring buffer , which I think is the very and only place to store kernel log data. - Article " Kernel logging: APIs and implementation " on IBM DeveloperWorks described APIs and the bird-view picture.
dmesg prints the contents of the ring buffer. This information is also sent in real time to syslogd or klogd , when they are running, and ends up in /var/log/messages ; when dmesg is most useful is in capturing boot-time messages from before syslogd and/or klogd started, so that they will be properly logged.
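A quick way to see this side by side: dmesg | tail shows the last messages from the kernel ring buffer, while grep 'kernel:' /var/log/messages | tail shows the same messages as recorded by syslog (the exact log path and message tag can vary between distributions).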
{ "source": [ "https://unix.stackexchange.com/questions/35851", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12224/" ] }
35,929
If we don't know the root password and don't have root access to the machine, how can we change the root password?
Here are a few ways I can think of, from the least intrusive to the most intrusive. Without Rebooting With sudo: if you have sudo permissions to run passwd , you can do: sudo passwd root Enter your password, then enter a new password for root twice. Done. Editing files : this works in the unlikely case you don't have full sudo access, but you do have access to edit /etc/{passwd,shadow} . Open /etc/shadow , either with sudoedit /etc/shadow , or with sudo $EDITOR /etc/shadow . Replace root's password field (all the random characters between the second and third colons : ) with your own user's password field. Save. The local has the same password as you. Log in and change the password to something else. These are the easy ones. Reboot Required Single User mode : This was just explained by Renan. It works if you can get to GRUB (or your boot loader) and you can edit the Linux command line. It doesn't work if you use Debian, Ubuntu, and some others. Some boot loader configurations require a password to do so, and you must know that to proceed. Without further ado: Reboot. Enter boot-time password, if any. Enter your boot loader's menu. If single user mode is available, select that (Debian calls it ‘Recovery mode’). If not, and you run GRUB: Highlight your normal boot option. Press e to enter edit mode. You may be asked for a GRUB password there. Highlight the line starting with kernel or linux . Press e . Add the word ‘single’ at the end. (don't forget to prepend a space!) Press Enter and boot the edited stanza. Some GRUBs use Ctrl - X , some use b . It says which one it is at the bottom of the screen. Your system will boot up in single user mode. Some distributions won't ask you for a root password at this point (Debian and Debian-based ones do). You're root now. Change your password: mount / -o remount,rw passwd # Enter your new password twice at the prompts mount / -o remount,ro sync # some people sync multiple times. Do what pleases you. reboot and reboot , or, if you know your normal runlevel, say telinit 2 (or whatever it is). Replacing init : superficially similar to the single user mode trick, with largely the same instructions, but requires much more prowess with the command line. You boot your kernel as above, but instead of single , you add init=/bin/sh . This will run /bin/sh in place of init , and will give you a very early shell with almost no amenities. At this point your aim is to: Mount the root volume. Get passwd running. Change your password with the passwd command. Depending on your particular setup, these may be trivial (identical to the instructions for single user mode), or highly non-trivial: loading modules, initialising software RAID, opening encrypted volumes, starting LVM, et cetera. Without init , you aren't running dæmons or any other processes but /bin/sh and its children, so you're pretty literally on your own. You also don't have job control, so be careful what you type. One misplaced cat and you may have to reboot if you can't get out of it. Rescue Disk : this one's easy. Boot a rescue disk of your choice. Mount your root filesystem. The process depends on how your volumes are layered, but eventually boils down to: # do some stuff to make your root volume available. # The rescue disk may, or may not do it automatically. 
mkdir /tmp/my-root mount /dev/$SOME_ROOT_DEV /tmp/my-root $EDITOR /tmp/my-root/etc/shadow # Follow the `/etc/shadow` editing instructions near the top cd / umount /tmp/my-root reboot Obviously, $SOME_ROOT_DEV is whatever block device name is assigned to your root filesystem by the rescue disk and $EDITOR is your favourite editor (which may have to be vi on the rescue system). After the reboot , allow the machine to boot normally; root's password will be that of your own user. Log in as root and change it immediately. Other Ways Obviously, there are countless variations to the above. They all boil down to two steps: Get root access to the computer (catch-22 — and the real trick) Change root's password somehow.
{ "source": [ "https://unix.stackexchange.com/questions/35929", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17276/" ] }
35,935
I know that I can log to file by adding > log.txt to the end of a command but how can I log to file AND to console?
Try using tee : pipe the output with | tee log.txt instead of redirecting it with > log.txt .
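For example (the script name is only a placeholder; 2>&1 also sends stderr through the pipe): ./myscript.sh 2>&1 | tee log.txt prints to the console and writes log.txt at the same time; use tee -a to append instead of overwriting.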
{ "source": [ "https://unix.stackexchange.com/questions/35935", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6425/" ] }
35,956
I've found only puf (Parallel URL fetcher) but I couldn't get it to read urls from a file; something like puf < urls.txt does not work either. The operating system installed on the server is Ubuntu.
Using GNU Parallel , $ parallel -j ${jobs} wget < urls.txt or xargs from GNU Findutils , $ xargs -n 1 -P ${jobs} wget < urls.txt where ${jobs} is the maximum number of wget you want to allow to run concurrently (setting -n to 1 to get one wget invocation per line in urls.txt ). Without -j / -P , parallel will run as many jobs at a time as CPU cores (which doesn't necessarily make sense for wget bound by network IO), and xargs will run one at a time. One nice feature that parallel has over xargs is keeping the output of the concurrently-running jobs separated, but if you don't care about that, xargs is more likely to be pre-installed.
{ "source": [ "https://unix.stackexchange.com/questions/35956", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17487/" ] }
36,033
Is it possible to do it in one liner? I have an output like this: "First line" - Description " Second line" - Description "Third line" - Description " Fourth line" - Description This input is generated automatically. I want to replace the first occurrence of " (quotation mark + space) with " (quotation mark) for each line. If I apply the substitution globally, it will also change every occurrence of line" - to line"- , so I was wondering if it is possible to use a sed one liner to accomplish this. I have tried using ^ like this sed -r ':a;N;$!ba;s/(\^\" )/\"/g' But it's not working, it doesn't replace anything. I tried sed -r ':a;N;$!ba;s/(^|\" )/\"/g' and it replaces all the occurrences. I've just started to use sed, so I don't really know if I'm doing something wrong. What am I missing here?
You're overthinking it. sed replaces only the first instance on a line by default (without the /g modifier), although you still want to anchor because you don't so much want the first instance in the line as the one at the start of the line; and you usually don't need the explicit line actions you're trying to use (why?). sed 's/^" /"/'
{ "source": [ "https://unix.stackexchange.com/questions/36033", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17521/" ] }
36,044
If a person has root access to a particular RHEL machine, will they be able to retrieve the password of the other users?
TL;DR: No, passwords are stored as hashes which can (in general) not be recovered. Linux doesn't store plain-text passwords anywhere by default . They are hashed or otherwise encrypted through a variety of algorithms. So, in general, no, this isn't possible with stored data. If you have passwords stored somewhere other than the /etc/passwd database, they may be stored in a way that allows this. htpasswd files can contain weakly encrypted passwords, and other applications may store weaker hashes or plain text passwords for various (typically bad) reasons. Also, user configuration files may contain unencrypted passwords or weakly protected passwords for various reasons - fetchmail grabbing content from another service, .netrc , or simple automated things may include the password. If the passwords are hashed or encrypted with an older, weak algorithm (3DES, MD5) it would be possible to work out reasonably efficiently / cheaply what the password was - albeit through attacking the data rather than just reversing the transformation. (eg: things like http://project-rainbowcrack.com/ or http://www.openwall.com/john/ ) Since you are root it is also possible to attack the user password at another level - replace the login binary, or sudo, or part of PAM, etc, with something that will capture the password when it is entered. So, in specific, no, but in general having root access does make it easier to get at the users' details through various side-channels.
{ "source": [ "https://unix.stackexchange.com/questions/36044", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17276/" ] }
36,055
I'm trying to get shared memory information from a linux box. I'm looking for shmmax, shmmni, shmall, msgmax, msgmni, semmsl, semmns etc. How to get all those values from a Perl script. any help is appreciated?
On Linux you don't need a special API for this: the kernel exposes these limits under /proc/sys/kernel ( shmmax , shmmni , shmall , msgmax , msgmni , and the sem file, whose four fields are semmsl, semmns, semopm and semmni), and ipcs -l prints a readable summary of the same limits. From a Perl script you can simply open and read those files (or parse the output of ipcs -l ).
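A minimal check from the shell, to see which files such a script would have to read: cat /proc/sys/kernel/shmmax /proc/sys/kernel/shmmni /proc/sys/kernel/shmall ; cat /proc/sys/kernel/msgmax /proc/sys/kernel/msgmni ; cat /proc/sys/kernel/sem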
{ "source": [ "https://unix.stackexchange.com/questions/36055", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16270/" ] }
36,201
Is it possible to view pdf documents without having gdm (or similar) running? Rationale: I'm working on a remote server (assume no X forwarding) processing some data, creating some plots (assume pdf files). And I would like to view them without having to scp and open them on my machine. (There may be other use cases, probably.)
Not a real viewer, but as first aid a converter may also help: pdftotext file.pdf - | less pdftohtml -stdout -i file.pdf | lynx -stdin pdftotext and pdftohtml are part of the Poppler package.
{ "source": [ "https://unix.stackexchange.com/questions/36201", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17609/" ] }
36,241
Why would my new CentOS Virtual Machine not start the interface eth0 on startup? I have to start it manually every time. How can I fix this?
Make sure ONBOOT="yes" is in /etc/sysconfig/network-scripts/ifcfg-eth0. If you're using NetworkManager, make sure that service starts on boot ( chkconfig NetworkManager on ), otherwise, if you're using the old network service, make sure it starts on boot ( chkconfig network on ).
{ "source": [ "https://unix.stackexchange.com/questions/36241", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17539/" ] }
36,310
I have a bash script looping through the results of a find and performing an ffmpeg encoding of some FLV files. Whilst the script is running the ffmpeg output seems to be interupted and is outputting some strange looking errors like the one below. I've no idea what is going on here. Can anyone point me in the right direction? It's as though the loop is still running when it shouldn't be and interupting the ffmpeg process. The specific error is: frame= 68 fps= 67 q=28.0 00000000000000000000000000001000size= 22kB time=00:00:00.50 bitrate= 363.2kbits/s dup=1 drop=0 Enter command: <target> <time> <command>[ <argument>] Parse error, at least 3 arguments were expected, only 1 given in string 'om/pt_br/nx/R3T4N2_HD3D_demoCheckedOut.flv' Some more details from the ffmpeg output: [buffer @ 0xa30e1e0] w:800 h:600 pixfmt:yuv420p tb:1/1000000 sar:0/1 sws_param:flags=2 [libx264 @ 0xa333240] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.1 Cache64 [libx264 @ 0xa333240] profile High, level 3.1 [libx264 @ 0xa333240] 264 - core 122 r2184 5c85e0a - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=5 deblock=1:0:0 analyse=0x3:0x113 me=umh subme=8 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=2 b_bias=0 direct=3 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=50 rc=cbr mbtree=1 bitrate=500 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=500 vbv_bufsize=1000 nal_hrd=none ip_ratio=1.40 aq=1:1.00 Output #0, mp4, to './mp4s/pt_br/teamcenter/tc8_interactive/videos/8_SRM_EN.mp4': Metadata: audiodelay : 0 canSeekToEnd : true encoder : Lavf54.3.100 Stream #0:0: Video: h264 (![0][0][0] / 0x0021), yuv420p, 800x600, q=-1--1, 500 kb/s, 30k tbn, 29.97 tbc Stream #0:1: Audio: aac (@[0][0][0] / 0x0040), 44100 Hz, mono, s16, 128 kb/s Stream mapping: Stream #0:1 -> #0:0 (vp6f -> libx264) Stream #0:0 -> #0:1 (mp3 -> libfaac) Press [q] to stop, [?] for help error parsing debug value0 00000000000000000000000000000000size= 13kB time=00:00:00.-3 bitrate=-3165.5kbits/s dup=1 drop=0 debug=0 frame= 68 fps= 67 q=28.0 00000000000000000000000000001000size= 22kB time=00:00:00.50 bitrate= 363.2kbits/s dup=1 drop=0 Enter command: <target> <time> <command>[ <argument>] Parse error, at least 3 arguments were expected, only 1 given in string 'om/pt_br/nx/R3T4N2_HD3D_demoCheckedOut.flv' The script is as follows #!/bin/bash LOGFILE=encodemp4ize.log echo '' > $LOGFILE STARTTIME=date echo "Started at `$STARTTIME`" >> $LOGFILE rsync -avz flvs/ mp4s/ --exclude '*.flv' #find flvs/ -name "*.flv" > flv-files # The loop find flvs/ -name "*.flv" | while read f do FILENAME=`echo $f | sed 's#flvs/##'` MP4FILENAME=`echo $FILENAME | sed 's#.flv#.mp4#'` ffmpeg -i "$f" -vcodec libx264 -vprofile high -preset slow -b:v 500k -maxrate 500k -bufsize 1000k -threads 0 -acodec libfaac -ab 128k "./mp4s/$MP4FILENAME" echo "$f MP4 done" >> $LOGFILE done
Your question is actually Bash FAQ #89 : just add </dev/null to prevent ffmpeg from reading its standard input. I've taken the liberty of fixing up your script for you because it contains a lot of potential errors. A few of the important points: Filenames are tricky to handle, because most filesystems allow them to contain all sorts of unprintable characters normal people would see as garbage. Making simplifying assumptions like "file names contain only 'normal' characters" tends to result fragile shell scripts that appear to work on "normal" file names and then break the day they run into a particularly nasty file name that doesn't follow the script's assumptions. On the other hand, correctly handling file names can be such a bother that you may find it not worth the effort if the chance of encountering a weird file name is expected to be near zero (i.e. you only use the script on your own files and you give your own files "simple" names). Sometimes it is possible to avoid this decision altogether by not parsing file names at all. Fortunately, that is possible with find(1) 's -exec option. Just put {} in the argument to -exec and you don't have to worry about parsing find output. Using sed or other external processes to do simple string operations like stripping extensions and prefixes is inefficient. Instead, use parameter expansions which are part of the shell (no external process means it will be faster). Some helpful articles on the subject are listed below: Bash FAQ 73 : Parameter expansions Bash FAQ 100 : String manipulations Use $( ) , and don't use `` anymore: Bash FAQ 82 . Avoid using UPPERCASE variable names. That namespace is generally reserved by the shell for special purposes (like PATH ), so using it for your own variables is a bad idea. And now, without further ado, here's a cleaned up script for you: #!/bin/sh logfile=encodemp4ize.log echo "Started at $(date)." > "$logfile" rsync -avz --exclude '*.flv' flvs/ mp4s/ find flvs/ -type f -name '*.flv' -exec sh -c ' for flvsfile; do file=${flvsfile#flvs/} < /dev/null ffmpeg -i "$flvsfile" -vcodec libx264 -vprofile high \ -preset slow -b:v 500k -maxrate 500k -bufsize 1000k \ -threads 0 -acodec libfaac -ab 128k \ "mp4s/${file%flv}"mp4 printf %s\\n "$flvsfile MP4 done." >> "$logfile" done ' _ {} + Note: I used POSIX sh because you didn't use or need any bash -specific features in your original.
{ "source": [ "https://unix.stackexchange.com/questions/36310", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17648/" ] }
36,380
I'm using OpenBox window manager without any desktop environment . xdg-open behaves strangely. It opens everything with firefox . $ xdg-settings --list Known properties: default-web-browser Default web browser I'm looking for a simple program; something like reading every *.desktop file in /usr/share/applications/ folder and automatically setting xdg settings.
Why not to use utilities from xdg itself? To make Thunar the default file-browser, i.e. the default application for opening folders. $ xdg-mime default Thunar.desktop inode/directory to use xpdf as the default PDF viewer: $ xdg-mime default xpdf.desktop application/pdf This should create an entry [Default Applications] application/pdf=xpdf.desktop in your local MIME database ~/.config/mimeapps.list . Your PDF files should be opened with xpdf now.
{ "source": [ "https://unix.stackexchange.com/questions/36380", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13428/" ] }
36,403
Occasionally I need to specify a "path-equivalent" of one of the standard IO streams ( stdin , stdout , stderr ). Since 99% of the time I work with Linux, I just prepend /dev/ to get /dev/stdin , etc., and this " seems to do the right thing". But, for one thing, I've always been uneasy about such a rationale (because, of course, "it seems to work" until it doesn't). Furthermore, I have no good sense for how portable this maneuver is. So I have a few questions: In the context of Linux, is it safe (yes/no) to equate stdin , stdout , and stderr with /dev/stdin , /dev/stdout , and /dev/stderr ? More generally, is this equivalence "adequately portable "? I could not find any POSIX references.
It's been available on Linux back into its prehistory. It is not POSIX, although many actual shells (including AT&T ksh and bash ) will simulate it if it's not present in the OS; note that this simulation only works at the shell level (i.e. redirection or command line parameter, not as explicit argument to e.g. open() ). That said, it should be available on most commercial Unix systems, one way or another (sometimes it's spelled /dev/fd/N for various integers N , but most systems with that will provide symlinks as Linux and *BSD do).
{ "source": [ "https://unix.stackexchange.com/questions/36403", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10618/" ] }
36,467
When I edit a file in "vi" editor the inode value of the file is changing. But when edited with cat command the inode value is not changing.
Most likely you have the backup option on and backupcopy set to "no" or "breakhardlink": with those settings Vim writes the buffer to a new file and renames it over the original, so the file gets a new inode. Setting backupcopy to "yes" (for example set backupcopy=yes in ~/.vimrc ) makes Vim overwrite the file in place and keep the inode; appending or overwriting with cat and shell redirection modifies the existing file directly, which is why its inode stays the same.
{ "source": [ "https://unix.stackexchange.com/questions/36467", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17276/" ] }
36,531
What's the Netscape format of wget 's cookies.txt ? I need to mirror a website that requires login. I use a Chrome extension that returns cookies in that format, I save them in cookies.txt , import with wget command but to no use, it just downloads the content like I'm not logged in at all. I appreciate any help.
The format is Netscape format as stated in the man page and this format is: The layout of Netscape's cookies.txt file is such that each line contains one name-value pair. An example cookies.txt file may have an entry that looks like this: .netscape.com TRUE / FALSE 946684799 NETSCAPE_ID 100103 Each line represents a single piece of stored information. A tab is inserted between each of the fields. From left-to-right, here is what each field represents: domain - The domain that created AND that can read the variable. flag - A TRUE/FALSE value indicating if all machines within a given domain can access the variable. This value is set automatically by the browser, depending on the value you set for domain. path - The path within the domain that the variable is valid for. secure - A TRUE/FALSE value indicating if a secure connection with the domain is needed to access the variable. expiration - The UNIX time that the variable will expire on. UNIX time is defined as the number of seconds since Jan 1, 1970 00:00:00 GMT. name - The name of the variable. value - The value of the variable. (From " The Unofficial Cookie FAQ ", edited for clarity)
{ "source": [ "https://unix.stackexchange.com/questions/36531", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17374/" ] }
36,540
I'm working from the URL I found here: http://web.archive.org/web/20160404025901/http://jaybyjayfresh.com/2009/02/04/logging-in-without-a-password-certificates-ssh/ My ssh client is Ubuntu 64 bit 11.10 desktop and my server is Centos 6.2 64 bit. I have followed the directions. I still get a password prompt on ssh. I'm not sure what to do next.
Make sure the permissions on the ~/.ssh directory and its contents are proper. When I first set up my ssh key auth, I didn't have the ~/.ssh folder properly set up, and it yelled at me. Your home directory ~ , your ~/.ssh directory and the ~/.ssh/authorized_keys file on the remote machine must be writable only by you: rwx------ and rwxr-xr-x are fine, but rwxrwx--- is no good¹, even if you are the only user in your group (if you prefer numeric modes: 700 or 755 , not 775 ). If ~/.ssh or authorized_keys is a symbolic link, the canonical path (with symbolic links expanded) is checked . Your ~/.ssh/authorized_keys file (on the remote machine) must be readable (at least 400), but you'll need it to be also writable (600) if you will add any more keys to it. Your private key file (on the local machine) must be readable and writable only by you: rw------- , i.e. 600 . Also, if SELinux is set to enforcing, you may need to run restorecon -R -v ~/.ssh (see e.g. Ubuntu bug 965663 and Debian bug report #658675 ; this is patched in CentOS 6 ). ¹ Except on some distributions (Debian and derivatives) which have patched the code to allow group writability if you are the only user in your group.
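If you need to tighten them, something along these lines usually does it ( id_rsa is just the default private-key name): on the remote machine chmod go-w ~ ; chmod 700 ~/.ssh ; chmod 600 ~/.ssh/authorized_keys , and on the local machine chmod 600 ~/.ssh/id_rsa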
{ "source": [ "https://unix.stackexchange.com/questions/36540", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17539/" ] }
36,580
The command id can be used to look up a user's uid , for example: $ id -u ubuntu 1000 Is there a command to lookup up a username from a uid ? I realize this can be done by looking at the /etc/passwd file but I'm asking if there is an existing command to to this, especially if the user executing it is not root. I'm not looking for the current user's username, i.e. I am not looking for whoami or logname . This also made me wonder if on shared web hosting this is a security feature, or am I just not understanding something correctly? For examination, the /etc/passwd file from a shared web host: root:x:0:0:root:/root:/bin/bash bin:x:1:1:bin:/bin:/sbin/nologin daemon:x:2:2:daemon:/sbin:/sbin/nologin adm:x:3:4:adm:/var/adm:/sbin/nologin lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin sync:x:5:0:sync:/sbin:/bin/sync shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown halt:x:7:0:halt:/sbin:/sbin/halt mail:x:8:12:mail:/var/spool/mail:/sbin/nologin news:x:9:13:news:/etc/news: uucp:x:10:14:uucp:/var/spool/uucp:/sbin/nologin operator:x:11:0:operator:/root:/sbin/nologin games:x:12:100:games:/usr/games:/sbin/nologin gopher:x:13:30:gopher:/var/gopher:/sbin/nologin ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin nobody:x:99:99:Nobody:/:/sbin/nologin nscd:x:28:28:NSCD Daemon:/:/sbin/nologin vcsa:x:69:69:virtual console memory owner:/dev:/sbin/nologin pcap:x:77:77::/var/arpwatch:/sbin/nologin rpc:x:32:32:Portmapper RPC user:/:/sbin/nologin mailnull:x:47:47::/var/spool/mqueue:/sbin/nologin smmsp:x:51:51::/var/spool/mqueue:/sbin/nologin oprofile:x:16:16:Special user account to be used by OProfile:/home/oprofile:/sbin/nologin sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin dbus:x:81:81:System message bus:/:/sbin/nologin avahi:x:70:70:Avahi daemon:/:/sbin/nologin rpcuser:x:29:29:RPC Service User:/var/lib/nfs:/sbin/nologin haldaemon:x:68:68:HAL daemon:/:/sbin/nologin xfs:x:43:43:X Font Server:/etc/X11/fs:/sbin/nologin avahi-autoipd:x:100:104:avahi-autoipd:/var/lib/avahi-autoipd:/sbin/nologin named:x:25:25:Named:/var/named:/sbin/nologin mailman:x:32006:32006::/usr/local/cpanel/3rdparty/mailman/mailman:/usr/local/cpanel/bin/noshell dovecot:x:97:97:dovecot:/usr/libexec/dovecot:/sbin/nologin mysql:x:101:105:MySQL server:/var/lib/mysql:/bin/bash cpaneleximfilter:x:32007:32009::/var/cpanel/userhomes/cpaneleximfilter:/usr/local/cpanel/bin/noshell nagios:x:102:106:nagios:/var/log/nagios:/bin/sh ntp:x:38:38::/etc/ntp:/sbin/nologin myuser:x:1747:1744::/home/myuser:/usr/local/cpanel/bin/jailshell And here is a sample directory listing of /tmp/ drwx------ 3 root root 1024 Apr 16 02:09 spamd-22217-init/ drwxr-xr-x 2 665 664 1024 Apr 4 00:05 update-cache-44068ab4/ drwxr-xr-x 4 665 664 1024 Apr 17 15:17 update-extraction-44068ab4/ -rw-rw-r-- 1 665 664 43801 Apr 17 15:17 variable.zip -rw-r--r-- 1 684 683 4396 Apr 17 07:01 wsdl-13fb96428c0685474db6b425a1d9baec We can see root is the owner of some files, and root is also showing up in /etc/passwd , however the other users/groups all show up as numbers.
Try getent passwd "$uid" | cut -d: -f1
{ "source": [ "https://unix.stackexchange.com/questions/36580", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
36,705
I am already a little bit familiar with Linux distros like Debian or Ubuntu (yeah, very similar) but I wanted to try Red Hat based - CentOS 6.2. I have installed it on my Windows 7 host in VirtualBox and tried to play with it a little. I have come across a small problem, namely: the default eth0 interface is down by default. I use the option with NAT (the virtual machine is 'behind' the host). Even if I bring the interface up with ifconfig eth0 up , it does not work right away. I get this after bringing the interface up: eth0 Link encap:Ethernet HWaddr 08:00:27:0F:00:8A inet6 addr: fe80::a00:27ff:fe0f:8a/64 Scope:Link UP BROADCAST RUNNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carriers:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 b) TX bytes:468 (468.0 b) Interrupt:19 Base address:0xd020 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) virbr0 Link encap:Ethernet HWaddr 52:54:00:75:C2:9B inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 [root@centos ~]# _ What should be done more to configure the network on CentOS machine?
Edit /etc/sysconfig/network-scripts/ifcfg-$IFNAME . Change the ONBOOT line's value to yes . $IFNAME will be eth0 on many EL6 boxes, but on boxes using the Consistent Network Device Naming scheme, it might be something else, like en3p1 . This scheme is optional in EL6 but the default in EL7 and newer. Use the command ip link to get a list of network interfaces, including the ones that are currently down. In your future installs, pay more attention. You blew past an option in the network configuration section that let you tell it to bring the interface up on boot. This on-boot option is off by default in EL6 and later, whereas in previous versions, it was on by default. To make the network interface come up on first boot at install time, go to the Configure → General tab in the network configuration screen, then check the box labeled Automatically connect to the network when available . As to why they changed this, I'd guess security reasons. It gives you a chance to tighten things down a bit from the default setup before bringing up the network interface for the first time, exposing the box to the outside world.
{ "source": [ "https://unix.stackexchange.com/questions/36705", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15387/" ] }
36,734
Related to another question , in order to fuzzily detect binary files, is there a way to detect ␀ bytes in sed ?
Example: Prove I'm sending a NUL byte, followed by a newline: $ echo -e \\0 | hexdump -C 00000000 00 0a |..| 00000002 Now I change the NUL byte to an ! exclamation mark: $ echo -e \\0 | sed 's/\x00/!/' | hexdump -C 00000000 21 0a |!.| So the trick is using \x00 as NUL-byte.
{ "source": [ "https://unix.stackexchange.com/questions/36734", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3645/" ] }
36,745
Can anyone explain why the semi-colon is necessary in order for the LANG to be seen as updated by bash? Doesn't work: > LANG=Ja_JP bash -c "echo $LANG" en_US Works: > LANG=Ja_JP ; bash -c "echo $LANG" Ja_JP I'm working with both bash 4.1.10 on linux and the same version under cygwin
Parameter and other types of expansions are performed when the command is read, before it is executed. The first version, LANG=Ja_JP bash -c "echo $LANG" , is a single command. After it is parsed as such, $LANG is expanded to en_US before anything is executed. Once bash is finished processing the input, it forks a process, adds LANG=Ja_JP to the environment as expected, and then executes bash -c echo en_US . You can prevent expansion with single quotes, i.e. LANG=Ja_JP bash -c 'echo $LANG' outputs Ja_JP . Note that when you have a variable assignment as part of a command, the assignment only affects the environment of that command and not that of your shell. The second version, LANG=Ja_JP; bash -c "echo $LANG" is actually two separate commands executed in sequence. The first is a simple variable assignment without a command, so it affects your current shell. Thus, your two snippets are fundamentally different despite the superficial distinction of a single ; . Completely off-topic, but might I recommend appending a .UTF-8 when setting LANG . There's no good reason nowadays not to be using Unicode in the 21st century.
{ "source": [ "https://unix.stackexchange.com/questions/36745", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17834/" ] }
36,769
I often import MySQL databases, and this can take a while. There is no progress indicator whatsoever. Can one be shown, somehow? Either records imported, MB imported, or tables imported... anything is better than just waiting. Anybody any idea? I use this command: mysql -uuser -p -hhost database < largefile.sql Files are between 40-300 MB, and the host is within the local network.
There is a nice tool called pv . # On Ubuntu/Debian system $ sudo apt-get install pv # On Redhat/CentOS $ sudo yum install pv then e.g. you can use it like this $ zcat dbpackfile.sql.gz | pv -cN zcat | mysql -uuser -ppass dbname NOTE: Please check this blog http://blog.larsstrand.no/2011/12/tip-pipe-viewer.html for more insights. NOTE: Even better solution with FULL progress bar. To do it you have to use two built-in pv options. One is --progress to indicate the progress bar and the second is --size to tell pv how large the overall file is. pv --progress --size UNPACKED-FILE-SIZE-IN-BYTES ..the problem is with the .gz original file size. You need to somehow get the unpacked original file size without unpacking the file, otherwise you will lose precious time by unpacking it twice (first time for pv and second time for zcat ). But fortunately, gzip -l reports the uncompressed size of our gzipped file. Unfortunately, it is in a table format so you need to extract it before it can be used. All together it can be seen below: gzip -l /path/to/our/database.sql.gz | sed -n 2p | awk '{print $2}' NOTE: This is the list of most common archive tools and methods to extract the number of uncompressed bytes from those archives: tar -tvf database.sql.tar | awk '{print $3}' | paste -sd+ | bc unzip -Zt database.sql.zip | awk '{print $3}' unrar l database.sql.rar | tail -n2 | head -n1 | awk '{ print $1 }' 7z l database.sql.7z | tail -n1 | awk '{ print $3 }' Uff.. so the last thing you need to do is just combine it all together. zcat /path/to/our/database.sql.gz | pv --progress --size `gzip -l /path/to/our/database.sql.gz | sed -n 2p | awk '{print $2}'` | mysql -uuser -ppass dbname To make it even nicer you can add a progress NAME like this zcat /path/to/our/database.sql.gz | pv --progress --size `gzip -l /path/to/our/database.sql.gz | sed -n 2p | awk '{print $2}'` --name ' Importing.. ' | mysql -uuser -ppass dbname Final result: Importing.. : [===========================================>] 100% For quick usage, you can create a custom function. mysql_import() { zcat $2 | pv --progress --size `gzip -l $2 | sed -n 2p | awk '{print $2}'` --name ' Importing.. ' | mysql -uuser -ppass $1 } ..and then use it like this: mysql_import dbname /path/to/our/database.sql.gz NOTE: If you don't know where to put it, read this answer: https://unix.stackexchange.com/a/106606/20056 NOTE: You can add functions among aliases, e.g. in the ~/.bash_aliases file.
{ "source": [ "https://unix.stackexchange.com/questions/36769", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14584/" ] }
36,798
Is there an easy way to keep a folder synced with a directory listing via HTTP? Edit : Thanks for the tip with wget! I created a shell script and added it as a cron job: remote_dirs=( "http://example.com/" "…") # Add your remote HTTP directories here local_dirs=( "~/examplecom" "…") for (( i = 0 ; i < ${#local_dirs[@]} ; i++ )) do cd "${local_dirs[$i]}" wget -r -l1 --no-parent -A "*.pdf" -nd -nc ${remote_dirs[$i]} done # Explanation: # -r to download recursively # -l1 to include only one directory depth # --no-parent to exclude parent directories # -A "*.pdf" to accept only .pdf files # -nd to prevent wget to create directories for everything # -N to make wget to download only new files Edit 2: As mentioned below one could also use --mirror ( -m ), which is the shorthand for -r -N .
wget is a great tool. Use wget -m http://somesite.com/directory -m --mirror Turn on options suitable for mirroring. This option turns on recursion and time-stamping, sets infinite recursion depth and keeps FTP directory listings. It is currently equivalent to -r -N -l inf --no-remove-listing.
{ "source": [ "https://unix.stackexchange.com/questions/36798", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17855/" ] }
36,815
We have one central server which functions as an internet gateway. This server is connected to the internet, and using iptables we forward traffic and share the internet connection among all computers in the network. This works just fine. However, sometimes internet gets really slow. Most likely one of the users is downloading videos or other large files. I want to pinpoint the culprit. I'm thinking of installing a tool that can monitor the network traffic that passes through the server, by IP. Preferably in real time as well as an accumulated total (again by IP). Any tool that is recommended for this? Preferably something in the Ubuntu repositories.
ntop is probably the best solution for doing this. It is designed to run long term and capture exactly what you're looking for. It can show you which clients are receiving/sending the most traffic, where they're receiving/sending to, what protocols and ports are being used etc. It then uses a web GUI to navigate and display this information. ntop is a fairly well known tool, so I would be highly surprised if it's not in Ubuntu's package repository.
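If it is packaged for your release, installation is typically just sudo apt-get install ntop , after which you can browse to its web interface (by default ntop listens on port 3000, i.e. http://yourserver:3000 ).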
{ "source": [ "https://unix.stackexchange.com/questions/36815", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16766/" ] }
36,838
Why isn't Red Hat Enterprise Linux Desktop free? Isn't it a Linux OS? If it is, so why is not free? http://www.redhat.com/products/enterprise-linux/desktop/
The reason that a Linux distribution is "free" is that many of the pieces of software it includes are covered by the GNU General Public License (GPL for short). There are two different types of "free": freedom to see and modify the source code ("libre") free of charge ("gratis") The GPL is about the first "freedom", not the second. Provided Red Hat release the source code, then they are probably complying with the license. Further reading: What is Free Software? Gratis versus Libre References: GNU General Public License A Quick Guide to the GPLv3 Does the GPL allow me to sell copies of the program for money? Red Hat source RPMS
{ "source": [ "https://unix.stackexchange.com/questions/36838", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17482/" ] }
36,841
Right now, I know how to: find open files limit per process: ulimit -n count all opened files by all processes: lsof | wc -l get maximum allowed number of open files: cat /proc/sys/fs/file-max My question is: Why is there a limit of open files in Linux?
The reason is that the operating system needs memory to manage each open file, and memory is a limited resource - especially on embedded systems. As root user you can change the maximum of the open files count per process (via ulimit -n ) and per system (e.g. echo 800000 > /proc/sys/fs/file-max ).
{ "source": [ "https://unix.stackexchange.com/questions/36841", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12224/" ] }
36,845
I'm trying to setup two network profiles in Centos. One for at home, one for at work. The home profile has a fixed IP address, fixed gateway and DNS server addresses. The work profile depends on DHCP. I've created a 'home' and a 'work' directory in /etc/sysconfig/networking/profiles. Each has the following files containing the proper configuration: > -rw-r--r-- 2 root root 422 Apr 17 20:17 hosts > -rw-r--r-- 5 root root 223 Apr 17 20:18 ifcfg-eth0 > -rw-r--r-- 1 root root 101 Apr 17 20:17 network > -rw-r--r-- 2 root root 73 Apr 17 20:18 resolv.conf There was already a 'default' profile, which contains the same files. Then I issued these commands: system-config-network-cmd --profile work --activate service network restart I was expecting these files to get copied from the profiles/work directory to /etc/sysconfig/ and /etc/sysconfig/networking-scripts . And most files do get copied, except for ifcfg-eth0 . Stangely enough that files seems to be overwritten with the current settings when I issue system-config-network-cmd . The other files are also touched, but there contents stays in tact. The system is Centos 5.7 running on a virtual pc within a windows 7 machine. Here is the output for ifconfig: # ifconfig eth0 Link encap:Ethernet HWaddr 00:03:FF:6F:2E:AB inet addr:192.168.1.200 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::203:ffff:fe6f:2eab/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:4199761 errors:7 dropped:0 overruns:0 frame:0 TX packets:1733750 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2316624688 (2.1 GiB) TX bytes:415533386 (396.2 MiB) Interrupt:9 Can someone tell what I'm missing here?
{ "source": [ "https://unix.stackexchange.com/questions/36845", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17874/" ] }
36,871
I have an executable for the perforce version control client ( p4 ). I can't place it in /opt/local because I don't have root privileges. Is there a standard location where it needs to be placed under $HOME ? Does the File System Hierarchy have a convention that says that local executables/binaries need to be placed in $HOME/bin ? I couldn't find such a convention mentioned on the Wikipedia article for the FHS . Also, if there indeed is a convention, would I have to explicitly include the path to the $HOME/bin directory or whatever the location of the bin directory is?
In general, if a non-system installed and maintained binary needs to be accessible system-wide to multiple users, it should be placed by an administrator into /usr/local/bin . There is a complete hierarchy under /usr/local that is generally used for locally compiled and installed software packages. If you are the only user of a binary, installing into $HOME/bin or $HOME/.local/bin is the appropriate location since you can install it yourself and you will be the only consumer. If you compile a software package from source, it's also appropriate to create a partial or full local hierarchy in your $HOME or $HOME/.local directory. Using $HOME , the full local hierarchy would look like this. $HOME/bin Local binaries $HOME/etc Host-specific system configuration for local binaries $HOME/games Local game binaries $HOME/include Local C header files $HOME/lib Local libraries $HOME/lib64 Local 64-bit libraries $HOME/man Local online manuals $HOME/sbin Local system binaries $HOME/share Local architecture-independent hierarchy $HOME/src Local source code When running configure , you should define your local hierarchy for installation by specifying $HOME as the prefix for the installation defaults. ./configure --prefix=$HOME Now when make && make install are run, the compiled binaries, packages, man pages, and libraries will be installed into your $HOME local hierarchy. If you have not manually created a $HOME local hierarchy, make install will create the directories needed by the software package. Once installed in $HOME/bin , you can either add $HOME/bin to your $PATH or call the binary using the absolute $PATH . Some distributions will include $HOME/bin in your $PATH by default. You can test this by either echo $PATH and seeing if $HOME/bin is there, or put the binary in $HOME/bin and executing which binaryname . If it comes back with $HOME/bin/binaryname , then it is in your $PATH by default.
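If $HOME/bin is not already in your $PATH , you can add it yourself, for example with this line in ~/.profile or ~/.bashrc : export PATH="$HOME/bin:$PATH"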
{ "source": [ "https://unix.stackexchange.com/questions/36871", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17698/" ] }
36,922
How can I configure a shortcut key to send a text string to the current program? The purpose is to type common entries quicker (email address, street address, phone number, username, favorite quote, etc). I don't need any further automation than just entering the text. Gentoo Linux (3.2.12-gentoo) Xfce Desktop Environment (Version 4.8)
One simple approach is xdotool , like xdotool type 'text'
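To attach it to a shortcut key in Xfce, you can typically go to Settings → Keyboard → Application Shortcuts and bind a key combination to a command such as xdotool type 'user@example.com' (the address is just a placeholder for whatever text you want inserted).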
{ "source": [ "https://unix.stackexchange.com/questions/36922", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9249/" ] }
36,949
I am trying to perform a mathematical operation with sed , but it continues to treat my variables as strings. The input is of this kind: $ echo 12 | sed 's/[0-9]*/&+3/' $ 12+3 I'd like to have 15 as output. I need to do the operation and replace its mathematical result in only one passage, because I am running the program as a Python daemon, and I want to avoid passages like redirecting stdout on files, open those files, perform operations, extract the result, do the replacement. To me, sed seems the best to perform all in one line. I've tried to cast both input and output in various ways like $ echo 12 | sed 's/[0-9]*/int(&+3)/' $ echo 12 | sed 's/[0-9]*/\int(&+3)/' $ echo 12 | sed 's/[0-9]*/\int(&+3)/' but the result was always a printing of the second field.
If you honestly want to use sed, then this is the way to go: s/[0-9]/<&/g s/0//g; s/1/|/g; s/2/||/g; s/3/|||/g; s/4/||||/g; s/5/|||||/g; s/6/||||||/g s/7/|||||||/g; s/8/||||||||/g; s/9/|||||||||/g : tens s/|</<||||||||||/g t tens s/<//g s/+//g : minus s/|-|/-/g t minus s/-$// : back s/||||||||||/</g s/<\([0-9]*\)$/<0\1/ s/|||||||||/9/; s/||||||||/8/; s/|||||||/7/; s/||||||/6/; s/|||||/5/; s/||||/4/ s/|||/3/; s/||/2/; s/|/1/ s/</|/g t back Input: 1+2 100+250 100-250 Output: 3 350 -150 Your mission, should you choose to accept it, is to implement multiplication.
{ "source": [ "https://unix.stackexchange.com/questions/36949", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17922/" ] }
36,982
Is it possible to set up system mail on a linux box to be sent via a different smtp server - maybe even with authentication? If so, how do I do this? If that's unclear, let give an example. If I'm at the command line and type: cat body.txt | mail -s "just a test" [email protected] is it possible to have that be sent via an external SMTP server, like G-mail ? I'm not looking for "a way to send mail from gmail from the command line" but rather an option to configure the entire system to use a specific SMTP server, or possibly one account on an SMTP server (maybe overriding the from address).
I found sSMTP very simple to use. In Debian based systems: apt-get install ssmtp Then edit the configuration file in /etc/ssmtp/ssmtp.conf A sample configuration to use your gmail for sending e-mails: # root is the person who gets all mail for userids < 1000 [email protected] # Here is the gmail configuration (or change it to your private smtp server) mailhub=smtp.gmail.com:587 [email protected] AuthPass=yourGmailPass UseTLS=YES UseSTARTTLS=YES Note : Make sure the "mail" command is present in your system. mailutils package should provide this one in Debian based systems. Update : There are people (and bug reports for different Linux distributions) reporting that sSMTP will not accept passwords with a 'space' or '#' character. If sSMTP is not working for you, this may be the case.
{ "source": [ "https://unix.stackexchange.com/questions/36982", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
37,000
Is it possible to test effective permissions of a file for a specific user? I normally do this by su user and then accessing the file, but I now want to test this on an user with no shell (i.e. a System user)
The sudo command can run anything as a particular user with the -u option. Instead of worrying about shells, just try to cat (or execute, whatever) your file as your target user: $ sudo -u apache cat .ssh/authorized_keys cat: .ssh/authorized_keys: Permission denied
{ "source": [ "https://unix.stackexchange.com/questions/37000", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10317/" ] }
37,064
I would like to know which are the standard commands available in every Linux system. For example if you get a debian/ubuntu/redhat/suse/arch/slackware etc, you will always find there commands like: cd, mkdir, ls, echo, grep, sed, awk, ping etc. I know that some of the mentioned commands are shell-builtin but others are not but they are still always there (based on my knowledge and experience so far). On the other hand commands like gawk, parted, traceroute and other quite famous commands are not installed by default in different Linux distributions. I made different web searches but I haven't found a straight forward answer to this. The purpose is that I would like to create a shell script and it should make some sanity checks if the commands used in the script are available in the system. If not, it should prompt the user to install the needed binaries.
Unfortunately there is no guarantee of anything being available. However, most systems will have GNU coreutils . That alone provides about 105 commands. You can probably rely on those unless it's an embedded system, which might use BusyBox instead. You can probably also rely on bash , cron , GNU findutils , GNU grep , gzip , iproute2 , iputils , man-db , module-init-tools , net-tools , passwd ( passwd or shadow ), procps , tar , and util-linux . Note that some programs might have some differences between distributions. For example /usr/bin/awk might be gawk or mawk . /bin/sh might be dash or bash in POSIX mode . On some older systems, /usr/bin/host does not have the same syntax as the BIND version , so it might be better to use dig . If you're looking for some standards, the Linux Standard Base defines some commonly found programs , but not all distributions claim to conform to the standard, and some only do so if you install an optional LSB compatibility package. As an example of this, some systems I've seen don't come with lsb_release in a default install. As well as this, the list of commands standardized by POSIX could be helpful. Another approach to your problem is to package your script using each distribution's packaging tools (e.g. RPM for Red Hat, DEB for Debian, etc.) and declare a dependency on any other programs or packages you need. It's a bit of work, but it means users will see a friendlier error message, telling them not just what's missing, but what packages they need to install. More info: RPM - Adding Dependency Information to a Package (historical) RPM - Dependencies Debian - Declaring Relationships Between Packages PKGBUILD - Dependencies
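If the goal is simply to have your script check its own prerequisites at run time, a portable sketch (the command names are only examples) is: for cmd in gawk parted traceroute; do command -v "$cmd" >/dev/null 2>&1 || { echo "Please install $cmd" >&2; exit 1; }; done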
{ "source": [ "https://unix.stackexchange.com/questions/37064", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17971/" ] }
37,069
What is the difference between the following methods of chaining commands? cmd1; cmd2 cmd1 && cmd2
Assume there is command1 && command2 . In this case command2 will be executed if and only if command1 returned a zero exit status. ; is just a command separator, so command2 will be executed regardless of what command1 returned. $> [[ "a" = "b" ]] && echo ok $> [[ "a" = "b" ]]; echo ok ok
{ "source": [ "https://unix.stackexchange.com/questions/37069", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17737/" ] }
37,122
I created a Debian VM on VirtualBox with two interfaces: a NAT one (for accessing the internet) and a host-only one. However, I do not know how to make both interfaces work at the same time. If I define the host-only one as adapter 1, I can access my VM from the host but not the internet; if I define the NAT one as adapter 1, I can access the internet but cannot reach my guest Debian. So, how could I make both interfaces work together? Note : I am still trying to map some port from my host to the SSH port of my guest OS, so there is no need to suggest that :) EDIT : This is the output of ifconfig when the first adapter is the host-only one: eth0 Link encap:Ethernet HWaddr 08:00:27:f6:b2:45 inet addr:192.168.56.101 Bcast:192.168.56.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fef6:b245/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:495 errors:0 dropped:0 overruns:0 frame:0 TX packets:206 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:48187 (47.0 KiB) TX bytes:38222 (37.3 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:8 errors:0 dropped:0 overruns:0 frame:0 TX packets:8 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:560 (560.0 B) TX bytes:560 (560.0 B) This is the output of netstat -nr when the first adapter is the host-only one: Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 192.168.56.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 This is the output of ifconfig when the first adapter is the NAT one: eth0 Link encap:Ethernet HWaddr 08:00:27:f6:b2:45 inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fef6:b245/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:53 errors:0 dropped:0 overruns:0 frame:0 TX packets:59 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:6076 (5.9 KiB) TX bytes:5526 (5.3 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:16 errors:0 dropped:0 overruns:0 frame:0 TX packets:16 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1664 (1.6 KiB) TX bytes:1664 (1.6 KiB) This is the output of netstat -nr when the first adapter is the NAT one: Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 0.0.0.0 10.0.2.2 0.0.0.0 UG 0 0 0 eth0
The solution was pretty simple: I just had to add the following lines into the Debian virtual machine 's /etc/network/interfaces file: allow-hotplug eth1 iface eth1 inet dhcp The second line instructs the interface to obtain an IP via DHCP. The first line loads the interface at boot time. To apply the changes to a running system, invoke: ifup eth1 The name for the eth1 interface may vary, use ifconfig -a to list all available interfaces. EDIT : full /etc/network/interfaces : # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback # The primary network interface allow-hotplug eth0 iface eth0 inet dhcp allow-hotplug eth1 iface eth1 inet dhcp
{ "source": [ "https://unix.stackexchange.com/questions/37122", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3084/" ] }
37,164
It took me hours to solve this SSH problem with one of my class accounts on my school's servers. I couldn't ssh into one particular class account without entering my password, while passwordless authentication worked with my other class accounts. The .ssh/ directory and all of its contents had the same, correct permissions as the other class accounts. Turns out the problem was the permissions set on my own home directory. Passwordless authentication did not work when the permissions on my HOME directory were set to 770 (regardless of the permissions set for .ssh/), but it worked with permissions set to 755 or 700. Anyone know why SSH does this? Is it because the home directory permissions are too permissive? Why does SSH refuse to authenticate with the public/private keys when the home directory is set more permissive than 700?
This is the default behavior for SSH. It protects user keys by enforcing rwx------ on $HOME/.ssh and ensuring only the owner has write permissions to $HOME . If a user other than the respective owner has write permission on the $HOME directory, they could maliciously modify the permissions on $HOME/.ssh , potentially hijacking the user keys, known_hosts , or something similar. In summary, the following permissions on $HOME will be sufficient for SSH to work. rwx------ rwxr-x--- rwxr-xr-x SSH will not work correctly and will send warnings to the log facilities if any variation of g+w or o+w exists on the $HOME directory. However, the administrator can override this behavior by defining StrictModes no in the sshd_config (or similar) configuration file, though it should be clear that this is not recommended .
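If you need to fix things up by hand, something along these lines is usually enough to satisfy sshd's checks (adjust to taste):
chmod go-w "$HOME"                      # e.g. 755 or 700 on the home directory
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME/.ssh/authorized_keys"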
{ "source": [ "https://unix.stackexchange.com/questions/37164", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17995/" ] }
37,168
I have the following in my /etc/fuse.conf file: # Set the maximum number of FUSE mounts allowed to non-root users. # The default is 1000. # #mount_max = 1000 # Allow non-root users to specify the 'allow_other' or 'allow_root' # mount options. # user_allow_other But when I try to mount a remote path with the option allow_other : > sshfs name@server:/remote/path /local/path -o allow_other I get: fusermount: failed to open /etc/fuse.conf: Permission denied fusermount: option allow_other only allowed if 'user_allow_other' is set in /etc/fuse.conf I have triple-checked that the option user_allow_other is uncommented in my fuse.conf , as I copied above. I have also executed sudo adduser my_user_name fuse (not sure if this is needed though), but I still get the same problem. Why is it not parsing the /etc/fuse.conf file correctly?
A better solution might be to add the user to the fuse group, i.e.: addgroup <username> fuse
{ "source": [ "https://unix.stackexchange.com/questions/37168", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
37,181
I either had this somewhere 20 years ago or I dreamed about it. Basically: If I type blobblob I get blobblob: command not found Fair enough. I would like it so that when my shell gets those errors - command not found - it checks to see if a directory exists with that name ('blobblob') and if it does it cd 's to that directory. I'm sure there are some reasons for not doing this or for doing it with caution. I just think it would be pretty neat though and I would like to give it a try by finding out how somewhere (like here!). I have no idea how to do the kind of shell programming this might imply.
Bash: shopt -s autocd Zsh: setopt autocd tcsh: set implicitcd Also, 'autojump' is a useful tool. Once installed it remembers directories so that you can type j abc and if you've visited abc before, say x/d/f/g/t/abc then it will cd to there! https://github.com/joelthelion/autojump
{ "source": [ "https://unix.stackexchange.com/questions/37181", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10043/" ] }
37,221
Is it possible to list the largest files on my hard drive? I frequently use df -H to display my disk usage, but this only gives the percentage full, GBs remaining, etc. I do a lot of data-intensive calculations, with a large number of small files and a very small number of very large files. Since most of my disk space used is in a very small number of files, it can be difficult to track down where these large files are. Deleting a 1 kB file does not free much space, but deleting a 100 GB file does. Is there any way to sort the files on the hard drive in terms of their size? Thanks.
With standard available tools: To list the top 10 largest directory trees under the current directory: du . | sort -nr | head -n10 To list the largest directories in the current directory: du -s * | sort -nr | head -n10 UPDATE These days I usually use a more readable form (as Jay Chakra explains in another answer) and leave off the | head -n10 , simply letting it scroll off the screen. The last line has the largest file or directory (tree). Sometimes, e.g. when you have lots of mount points in the current directory, instead of using -x or multiple --exclude=PATTERN , it is handier to mount the filesystem on an unused mount point (often /mnt ) and work from there. Mind you that when working with large (NFS) volumes, you can cause a substantial load on the storage backend (filer) when running du over lots of (sub)directories. In that case it is better to consider setting a quota on the volume.
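If you really want individual files rather than directory totals, one possible approach (assuming GNU du and sort for the -h options) is:
find . -type f -exec du -h {} + | sort -rh | head -n10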
{ "source": [ "https://unix.stackexchange.com/questions/37221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9605/" ] }
37,234
Now that Google Drive is available, how do we mount it to a Linux filesystem? Similar solutions exist for Amazon S3 and Rackspace Cloud Files .
Grive and inSync are file sync tools that sync a local file system with a remote Google Drive. You cannot "mount" Google Drive using these tools. For mounting, use google-drive-ocamlfuse , a FUSE-based filesystem for Google Drive. Installation instructions and more details about configuration and authorization are at the Installation of FUSE filesystem over Google Drive wiki page (on GitHub). The project's GitHub homepage also has the readme file for the google-drive-ocamlfuse source code. Here are distro-specific instructions to mount Google Drive with google-drive-ocamlfuse.
{ "source": [ "https://unix.stackexchange.com/questions/37234", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5655/" ] }
37,260
Is it possible to change the font attributes of the output of echo in either zsh or bash? What I would like is something akin to: echo -n "This is the font: normal " echo -n $font=italic "italic," echo -n $font=bold "bold," echo -n "and" echo -n $font=small "small". so that it prints: "This is the font: normal, italic , bold , small " within a line of text.
On most if not all terminal emulators, you can't set different font sizes or different fonts, only colors and a few attributes (bold, underlined, standout). In bash (or in zsh or any other shell), you can use the terminal escape sequences directly (apart from a few exotic ones, all terminals follow xterm's lead these days). CSI is ESC [ , written $'\e[' in bash. The escape sequence to change attributes is CSI Ps m . echo $'\e[32;1mbold red\e[0mplain\e[4munderlined' Zsh has a convenient function for that. autoload -U colors colors echo $bold_color$fg[red]bold red${reset_color}plain$'\e['$color[underline]munderlined Or can do it as part of prompt expansion , also done with print -P , or the % parameter expansion flag : print -P '%F{red}%Bbold%b red%f %Uunderline%u'
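A somewhat more portable way to look up the right sequences for your terminal is tput, which reads them from the terminfo database, for example:
bold=$(tput bold); red=$(tput setaf 1); ul=$(tput smul); reset=$(tput sgr0)
echo "${bold}${red}bold red${reset}plain ${ul}underlined${reset}"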
{ "source": [ "https://unix.stackexchange.com/questions/37260", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10526/" ] }
37,313
I want to find all lines in several files that match one of two patterns. I tried to find the patterns I'm looking for by typing grep (foo|bar) *.txt but the shell interprets the | as a pipe and complains when bar isn't an executable. How can I grep for multiple patterns in the same set of files?
First, you need to protect the pattern from expansion by the shell. The easiest way to do that is to put single quotes around it. Single quotes prevent expansion of anything between them (including backslashes); the only thing you can't do then is have single quotes in the pattern. grep -- 'foo*' *.txt (also note the -- end-of-option-marker to stop some grep implementations including GNU grep from treating a file called -foo-.txt for instance (that would be expanded by the shell from *.txt ) to be taken as an option (even though it follows a non-option argument here)). If you do need a single quote, you can write it as '\'' (end string literal, literal quote, open string literal). grep -- 'foo*'\''bar' *.txt Second, grep supports at least¹ two syntaxes for patterns. The old, default syntax ( basic regular expressions ) doesn't support the alternation ( | ) operator, though some versions have it as an extension, but written with a backslash. grep -- 'foo\|bar' *.txt The portable way is to use the newer syntax, extended regular expressions . You need to pass the -E option to grep to select it (formerly that was done with the egrep separate command²) grep -E -- 'foo|bar' *.txt Another possibility when you're just looking for any of several patterns (as opposed to building a complex pattern using disjunction) is to pass multiple patterns to grep . You can do this by preceding each pattern with the -e option. grep -e foo -e bar -- *.txt Or put patterns on several lines: grep -- 'foo bar' *.txt Or store those patterns in a file, one per line and run grep -f that-file -- *.txt Note that if *.txt expands to a single file, grep won't prefix matching lines with its name like it does when there are more than one file. To work around that, with some grep implementations like GNU grep , you can use the -H option, or with any implementation, you can pass /dev/null as an extra argument. ¹ some grep implementations support even more like perl-compatible ones with -P , or augmented ones with -X , -K for ksh wildcards... ² while egrep has been deprecated by POSIX and is sometimes no longer found on some systems, on some other systems like Solaris when the POSIX or GNU utilities have not been installed, then egrep is your only option as its /bin/grep supports none of -e , -f , -E , \| or multi-line patterns
{ "source": [ "https://unix.stackexchange.com/questions/37313", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8474/" ] }
37,327
I use Ubuntu 12 beta on a Lenovo Z575. I noticed that the disk spins down a few seconds after the last operation. When I am working, e.g. in vim, and write quite often, it spins up and down frequently. This causes vim to freeze for a second. I used hdparm but it didn't change anything: hdparm -S 24 /dev/sda # 2 minutes standby time and I see (and hear) that the disk is idle or working: hdparm -C /dev/sda drive state is: standby # or... drive state is: active/idle I have laptop-mode-tools already installed.
{ "source": [ "https://unix.stackexchange.com/questions/37327", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17258/" ] }
37,329
We have an issue with a folder becoming unwieldy with hundreds of thousands of tiny files. There are so many files that performing rm -rf returns an error and instead what we need to do is something like: find /path/to/folder -name "filenamestart*" -type f -exec rm -f {} \; This works but is very slow and constantly fails from running out of memory. Is there a better way to do this? Ideally I would like to remove the entire directory without caring about the contents inside it.
Using rsync is surprisingly fast and simple. mkdir empty_dir rsync -a --delete empty_dir/ yourdirectory/ @sarath's answer mentioned another fast choice: Perl!  Its benchmarks are faster than rsync -a --delete . cd yourdirectory perl -e 'for(<*>){((stat)[9]<(unlink))}' or, without the stat (it's debatable whether it is needed; some say it may be faster with it, and others say it's faster without it): cd yourdirectory perl -e 'for(<*>){unlink}' Sources: https://stackoverflow.com/questions/1795370/unix-fast-remove-directory-for-cleaning-up-daily-builds http://www.slashroot.in/which-is-the-fastest-method-to-delete-files-in-linux https://www.quora.com/Linux-why-stat+unlink-can-be-faster-than-a-single-unlink/answer/Kent-Fredric?srid=O9EW&share=1
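If you would rather stay with find, two variants that avoid spawning one rm process per file are (GNU find assumed for -delete):
find /path/to/folder -name "filenamestart*" -type f -delete
find /path/to/folder -name "filenamestart*" -type f -exec rm -f {} +
The second form batches many file names into each rm invocation instead of running rm once per file.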
{ "source": [ "https://unix.stackexchange.com/questions/37329", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/625/" ] }
37,350
I have many files named sequence_1_0001.jpg sequence_1_0002.jpg sequence_1_0003.jpg ... and files named sequence_1_0001.hmf sequence_1_0002.hmf sequence_1_0003.hmf ... and files named sequence_2_0001.jpg sequence_2_0002.jpg sequence_2_0003.jpg ... and sequence_2_0001.hmf sequence_2_0002.hmf sequence_2_0003.hmf ... I just want to remove the files that begin with 'sequence_1' and end in '.hmf', but I don't want to remove them one by one, since there are thousands of files. How can I specify to the rm command that I want to remove all files that begin with the prefix 'sequence_1' and end in '.hmf'? I'm currently working with a RedHat Linux system, but I'd like to know how to do it on other distributions as well.
rm sequence_1*.hmf removes files beginning with sequence_1 and ending with .hmf . Globbing is the process in which your shell takes a pattern and expands it into a list of filenames matching that pattern. Do not confuse it with regular expressions, which is different. If you spend most of your time in bash , the Wooledge Wiki has a good page on globbing (pathname expansion) . If you want maximum portability, you'll want to read the POSIX spec on pattern matching as well / instead. In the unlikely case you run into an "Argument list too long" error, you can take a look at BashFAQ 95 , which addresses this. The simplest workaround is to break up the glob pattern into multiple smaller chunks, until the error goes away. In your case, you could probably get away with splitting the match by prefix digits 0 through 9, as follows: for c in {0..9}; do rm sequence_1_"$c"*.hmf; done rm sequence_1*.hmf # catch-all case
{ "source": [ "https://unix.stackexchange.com/questions/37350", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15417/" ] }
37,411
Recently, I stumbled upon a multiline comment type I have never seen before - here is a script example: echo a # : aaa : ddd # echo b This seems to work, and even vim syntax-highlights it. What is this style of commenting called and how do I find more info about it?
That is not a multi-line comment. # is a single line comment. : (colon) is not a comment at all, but rather a shell built-in command that is basically a NOP , a null operation that does nothing except return true, like true (and thus setting $? to 0 as a side effect). However since it is a command, it can accept arguments, and since it ignores its arguments, in most cases it superficially acts like a comment. The main problem with this kludge is the arguments are still expanded, leading to a host of unintended consequences. The arguments are still affected by syntax errors, redirections are still performed so : > file will truncate file , and : $(dangerous command) substitutions will still run. The least surprising completely safe way to insert comments in shell scripts is with # . Stick to that even for multi-line comments. Never attempt to (ab)use : for comments. There is no dedicated multi-line comment mechanism in shell that is analogous to the slash-star /* */ form in C -like languages. For the sake of completeness, but not because it is recommended practice, I will mention that it is possible to use here-documents to do multi-line "comments": : <<'end_long_comment' This is an abuse of the null command ':' and the here-document syntax to achieve a "multi-line comment". According to the POSIX spec linked above, if any character in the delimiter word ("end_long_comment" in this case) above is quoted, the here-document will not be expanded in any way. This is **critical**, as failing to quote the "end_long_comment" will result in the problems with unintended expansions described above. All of this text in this here-doc goes to the standard input of :, which does nothing with it, hence the effect is like a comment. There is very little point to doing this besides throwing people off. Just use '#'. end_long_comment
{ "source": [ "https://unix.stackexchange.com/questions/37411", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7121/" ] }
37,453
I've been using Debian since 2010 for some home purposes and it has been stable. Is Debian still a good option if I need a server for heavy network, CPU, disk and memory usage? Last month I heard some admins say that RedHat is the most stable for bulk operations and that CentOS is a free version of RHEL. Their opinion is that CentOS is the best free distro. CentOS is getting very popular in my country (Dominican Republic) and I've wondered if Debian is falling behind. Can RedHat, Debian, CentOS or Suse be used for bulk-operations servers?
This kind of question cannot possibly be answered objectively. For many reasons: The word stable could mean literally anything. It's easy to find benchmarks ( random example off Google ) comparing certain particular aspects of computing, but to go as far as declare a distro more "stable" or "performant" or any other broad term like this is a bit far fetched. There's a big difference between a vanilla install of a distribution and a tweaked one. With proper hacking, Debian, Red Hat, SuSE or any other distro can be made to behave the way you want. In any case, should you encounter any stability/performance issue , you'll find ways to overcome them regardless of the distro you're using. Most of the work that makes a system stable happens in the kernel, that is Linux. Now this may lead distros to act a bit differently, since each ships with separate versions of the kernel, activating certain modules or not. However since installing your own kernel is always an option (again, only do this after profiling your system and detecting issues there), this is not inherent to the distro itself, but to different instances of the kernel. It is a bit misguided to imagine that distros will compete on that level. They usually compete on the level of what admin tools they offer (package management is the best example), the quality of help and documentation (Ubuntu targets the casual desktop user, where Red Hat addresses the seasoned corporate sysadmin) or the quality of their commercial support. My personal advice to you is not to get dragged into these meaningless flamewars (my distro is better than yours). Ultimately, it's a matter of personal preferences. Try something for yourself and you'll quickly realize that even though each distro acts a bit differently, there's virtually nothing one can do that the others can't. It helps knowing someone in real life who's already familiar with one distro (in your case CentOS). Also, Debian is waaaaay more stable than RHEL or CentOS.
{ "source": [ "https://unix.stackexchange.com/questions/37453", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18185/" ] }
37,508
I've never really thought about how the shell actually executes piped commands. I've always been told that the "stdout of one program gets piped into the stdin of another," as a way of thinking about pipes. So naturally, I thought that in the case of say, A | B , A would run first, then B gets the stdout of A , and uses the stdout of A as its input. But I've noticed that when people search for a particular process in ps , they'd include grep -v "grep" at the end of the command to make sure that grep doesn't appear in the final output. This means that in the command ps aux | grep "bash" | grep -v "grep" it is implied that ps knew that grep was running and therefore is in the output of ps . But if ps finishes running before its output gets piped to grep , how did it know that grep was running? flamingtoast@FTOAST-UBUNTU: ~$ ps | grep ".*" PID TTY TIME CMD 3773 pts/0 00:00:00 bash 3784 pts/0 00:00:00 ps 3785 pts/0 00:00:00 grep
Piped commands run concurrently. When you run ps | grep … , it's the luck of the draw (or a matter of details of the workings of the shell combined with scheduler fine-tuning deep in the bowels of the kernel) as to whether ps or grep starts first, and in any case they continue to execute concurrently. This is very commonly used to allow the second program to process data as it comes out from the first program, before the first program has completed its operation. For example grep pattern very-large-file | tr a-z A-Z begins to display the matching lines in uppercase even before grep has finished traversing the large file. grep pattern very-large-file | head -n 1 displays the first matching line, and may stop processing well before grep has finished reading its input file. If you read somewhere that piped programs run in sequence, flee this document. Piped programs run concurrently and always have.
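You can see the concurrency for yourself with a couple of toy pipelines:
yes | head -n 3      # head exits after 3 lines; yes is killed by SIGPIPE, so this returns immediately
sleep 5 | echo hi    # "hi" is printed right away, while sleep is still running
If the left-hand command really had to finish first, neither of these would behave the way it does.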
{ "source": [ "https://unix.stackexchange.com/questions/37508", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17995/" ] }
37,539
Moving a tried-and-true vsftpd configuration onto a new server with Fedora 16, I ran into a problem. All seems to go as it should, but user authentication fails. I cannot find any entry in any log that indicates what happened. Here is the full config file: anonymous_enable=NO local_enable=YES write_enable=YES local_umask=022 dirmessage_enable=YES xferlog_enable=YES connect_from_port_20=YES xferlog_file=/var/log/vsftpd.log xferlog_std_format=YES idle_session_timeout=0 data_connection_timeout=0 nopriv_user=ftpsecure connect_from_port_20=YES listen=YES chroot_local_user=YES chroot_list_enable=NO ls_recurse_enable=YES listen_ipv6=NO pam_service_name=vsftpd userlist_enable=YES tcp_wrappers=YES FTP challenges me for a username and password, I provide them, Login Incorrect. I have verified, this user is able to login from ssh. Something is screwed up with pam_service . Anonymous (if changed to allowed) seems to work well. SELinux is disabled. Ftpsecure appears to be configured fine... I am at a complete loss! Here are the log files I examined with no success: /var/log/messages /var/log/xferlog #empty /var/log/vsftpd.log #empty /var/log/secure Found something in /var/log/audit/audit.log : type=USER_AUTH msg=audit(1335632253.332:18486): user pid=19528 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:authentication acct="kate" exe="/usr/sbin/vsftpd" hostname=ip68-5-219-23.oc.oc.cox.net addr=68.5.219.23 terminal=ftp res=failed' Perhaps I should look at /var/log/wtf-is-wrong.help :-) Further info: /etc/pam.d/vsftpd looks like this: #%PAM-1.0 session optional pam_keyinit.so force revoke auth required pam_listfile.so item=user sense=deny file=/etc/vsftpd/ftpusers onerr=succeed auth required pam_shells.so auth include password-auth account include password-auth session required pam_loginuid.so session include password-auth
Whew. I solved the problem. It amounts to a config bug within /etc/pam.d/vsftpd Because ssh sessions succeeded while ftp sessions failed, I went to /etc/pam.d/vsftpd, removed everything that was there and instead placed the contents of ./sshd to match the rules precisely. All worked! By method of elimination, I found that the offending line was: auth required pam_shells.so Removing it allows me to proceed. Turns out, "pam_shells is a PAM module that only allows access to the system if the user's shell is listed in /etc/shells." I looked there and sure enough, no bash, no nothing. This is a bug in the vsftpd configuration in my opinion, as nowhere in the documentation does it have you editing /etc/shells. Thus the default installation and instructions do not work as stated. I'll go find where I can submit the bug now.
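For reference, the check that pam_shells performs can be reproduced and fixed by hand, for example (as root):
cat /etc/shells                                                   # list the currently allowed shells
grep -qxF /bin/bash /etc/shells || echo /bin/bash >> /etc/shells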
{ "source": [ "https://unix.stackexchange.com/questions/37539", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12387/" ] }
37,625
I would like to know how to install .tar.bz and .tar.bz2 packages on Debian. Could anyone help me understand how to achieve that?
Firstly, according to the File System Hierarchy Standards , the location of this installed package should be /opt if it is a binary install and /usr/local if it's a from-source install. Pure binaries These are ready-to-use binaries. Normally they just need to be extracted to be installed. A binary package is going to be easy: sudo tar --directory=/opt -xvf <file>.tar.[bz2|gz] add the directory to your path: export PATH=$PATH:/opt/[package_name]/bin and you are done. From sources A source package is going to be more troublesome (by far) and though they can roughly be processed with the method below, each package is different : download the package to /usr/local/src tar xf <file>.tar.[bz2|gz] cd <package name> read the README file (this almost certainly exists). most Open Source projects use autoconf/automake, the instructions should be in the README . Probably this step will go: ./configure && make && make install (run the commands separately for sanity if something goes wrong though). If there are any problems in the install then you'll have to ask specific questions. You might have problems of incorrect versions of libraries or missing dependencies. There's a reason that Debian packages everything up for you. And there is a reason Debian stable runs old packages - finding all the corner cases of installing packages on more than a dozen different architectures and countless different hardware/systems configurations is difficult. When you install something on your own you might run into one of these problems!
{ "source": [ "https://unix.stackexchange.com/questions/37625", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18276/" ] }
37,633
I'm setting up a few Ubuntu boxes, and using Opscode's Chef as a configuration tool. It would be fairly easy to install public keys for each user on each of these servers, and disable password authentication. However, the users should also have sudo privileges, which by default requires a password. If I want to use the users' public keys as a method of access management and allow the users sudo privileges, does that mean I should also set up the users with NOPASSWD: ALL in visudo , or is there a way that a user can change their own password if they only have public key authentication?
Sudo, in its most common configuration, requires the user to type their password. Typically, the user already used their password to authenticate into the account, and typing the password again is a way to confirm that the legitimate user hasn't abandoned their console and been hijacked. In your setup, the user's password would be used only for authentication to sudo. In particular, if a user's SSH key is compromised, the attacker would not be able to elevate to root privileges on the server. The attacker could plant a key logger into the account, but this key logger would be detectable by other users, and could even be watched for automatically. A user normally needs to know their current password to change it to a different password. The passwd program verifies this (it can be configured not to, but this is not useful or at all desirable in your scenario). However, root can change any user's password without knowing the old one; hence a user with sudo powers can change his own password without entering it at the passwd prompt by running sudo passwd $USER . If sudo is configured to require the user's password, then the user must have typed the password to sudo anyway. You can disable password authentication selectively. In your situation, you would disable password authentication in ssh, and possibly in other services. Most services on most modern unices (including Ubuntu) use PAM to configure authentication methods. On Ubuntu, the PAM configuration files live in /etc/pam.d . To disable password authentication, comment out the auth … pam_unix.so line in /etc/pam.d/common-auth . Furthermore, make sure you have PasswordAuthentication no in /etc/ssh/sshd_config to disable sshd's built-in password authentication. You may want to allow some administrative users to log in with a password, or to allow password authentication on the console. This is possible with PAM (it's pretty flexible), but I couldn't tell you how off the top of my head; ask a separate question if you need help.
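For completeness, if you do decide to waive the sudo password instead, the usual form of the sudoers entry (always edited with visudo; "alice" is just a placeholder user) is:
alice ALL=(ALL) NOPASSWD: ALL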
{ "source": [ "https://unix.stackexchange.com/questions/37633", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
37,660
I don't quite understand how the computer reads this command. cat file1 file2 1> file.txt 2>&1 If I understand, 2>&1 simply redirects Standard Error to Standard Output. By that logic, the command reads to me as follows: concatenate files file1 and file2 . send stdout from this operation to file.txt . send stderr to stdout . end? I'm not sure what the computer's doing. By my logic, the command should be cat file1 file2 2>&1 > file.txt but this is not correct.
I find it easier to think of using assignments. > is like = & is like $ You start out with 1 = /dev/tty 2 = /dev/tty then your first example, 1> file.txt 2>&1 , does 1 = file.txt 2 = $1 # and currently $1 = file.txt leaving you with 1 = file.txt 2 = file.txt If you did it the other way, again you start with 1 = /dev/tty 2 = /dev/tty then 2>&1 > file.txt does 2 = $1 # and currently $1 = /dev/tty 1 = file.txt so the end result is 1 = file.txt 2 = /dev/tty and you've only redirected stdout , not stderr .
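You can watch the difference with any command that writes to both streams; /etc/hostname and /nonexistent below are only examples:
ls /etc/hostname /nonexistent > out.txt 2>&1    # both the listing and the error end up in out.txt
ls /etc/hostname /nonexistent 2>&1 > out.txt    # the error still appears on the terminal; only the listing goes to out.txt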
{ "source": [ "https://unix.stackexchange.com/questions/37660", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18283/" ] }
37,724
From what I understand, the right place to put your own scripts is /usr/local/bin (for instance a script I use to back up some files). I notice that this folder is currently (by default) owned by root, and my normal user has no access to it. I am the only user on this computer. Shall I change this whole folder to my own user? Or is there another proper way to arrange permissions of /usr/local/bin ?
By default, the owner and group of /usr/local and all subdirectories (including bin ) should be root.root and the permissions should be rwxr-xr-x . This means that users of the system can read and execute in (and from) this directory structure, but cannot create or edit files there. Only the root account (or an administrator using sudo ) should be able to create and edit files in this location. Even though there is only one user on the system, it's generally a bad idea to change permissions of this directory structure to writable to any user other than root . I would suggest placing your script/binary/executable into /usr/local/bin using the root account. It's a good habit to get into. You could also place the script/binary/executable into $HOME/bin and make sure $HOME/bin is in your $PATH. See this question for more discussion: Where should a local executable be placed?
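In practice that means installing the script with root privileges while keeping the directory itself owned by root, for example (backup.sh is just a placeholder name):
sudo install -m 0755 -o root -g root backup.sh /usr/local/bin/backup.sh
mkdir -p ~/bin && cp backup.sh ~/bin/ && chmod +x ~/bin/backup.sh    # the per-user alternative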
{ "source": [ "https://unix.stackexchange.com/questions/37724", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16766/" ] }
37,729
I have a device that needs a block of memory that is reserved solely for it, without the OS intervening. Is there any way to tell BIOS or the OS that a block of memory is reserved, and it must not use it? I am using this device on an openSUSE machine.
What you're asking for is called DMA. You need to write a driver to reserve this memory. Yes, I realize you said you didn't want the OS to intervene, and a driver becomes part of the OS, but in absence of a driver's reservation, the kernel believes all memory belongs to it. (Unless you tell the kernel to ignore the memory block, per Aaron's answer, that is.) Chapter 15 (PDF) of " Linux Device Drivers, 3/e " by Rubini, Corbet and Kroah-Hartmann covers DMA and related topics. If you want an HTML version of this, I found the second-edition version of the chapter elsewhere online. Beware that the 2nd edition is over a decade old now, having come out when kernel 2.4 was new. There's been a lot of work on the memory management subsystem of the kernel since those days, so it may not apply very well any more.
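If all you want is to keep the kernel's memory manager away from a physical range (rather than doing full DMA from a driver), one commonly used knob is the memmap boot parameter; for example, something like the following on the kernel command line marks 16 MiB starting at the 1 GiB boundary as reserved (the range is purely illustrative, and the $ usually needs escaping in your bootloader configuration):
memmap=16M$0x40000000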
{ "source": [ "https://unix.stackexchange.com/questions/37729", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17046/" ] }
37,779
I am trying to get qemu-kvm to boot from my live usb stick. Is this possible?
qemu-kvm -hdb <device> , where <device> is the USB stick (e.g. /dev/sdb ), should do it (tested with Ubuntu 12.04 on a USB stick and it works). You will need write permission to the device (i.e. be root or change its permissions).
{ "source": [ "https://unix.stackexchange.com/questions/37779", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16130/" ] }
37,782
I am using cygwin in my windows machine. I am trying to do a find and it is giving parameter format not correct. Why is that? $ ls bootstrap.jar catalina-tasks.xml catalina.bat catalina.sh commons-daemon-native.tar.gz commons-daemon.jar cpappend.bat digest.bat digest.sh setclasspath.bat setclasspath.sh shutdown.bat shutdown.sh startup.bat startup.sh tomcat-juli.jar tomcat-native.tar.gz tool-wrapper.bat tool-wrapper.sh version.bat version.sh $ find . -name "version.sh" FIND: Parameter format not correct Should I install anything while installing cygwin or am I doing something wrong?
Your PATH is bad. It has Windows system directories before Cygwin directories, or maybe doesn't have Cygwin directories at all. This message comes from the Windows command find (that it reports its name as FIND in uppercase is a hint). When you start a Cygwin shell, you usually need to set the PATH . I recommend that you start a login shell (if I recall correctly, that's what the default Cygwin system menu entries do). Your Cygwin PATH should have /usr/local/bin , /usr/bin and /bin (at least) ahead of any non-Cygwin directory.
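A quick way to confirm and work around the problem in the current session (before repairing your startup files) is:
which find    # if this prints something under /cygdrive/c/Windows, the Windows binary is winning
export PATH="/usr/local/bin:/usr/bin:/bin:$PATH"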
{ "source": [ "https://unix.stackexchange.com/questions/37782", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
37,790
I have multiple files that contain ascii text information in the first 5-10 lines, followed by well-tabulated matrix information. In a shell script, I want to remove these first few lines of text so that I can use the pure matrix information in another program. How can I use bash shell commands to do this? If it's any help, I'm using RedHat and an Ubuntu linux systems.
As long as the file is not a symlink or hardlink, you can use sed, tail, or awk. Example below. $ cat t.txt 12 34 56 78 90 sed $ sed -e '1,3d' < t.txt 78 90 You can also use sed in-place without a temp file: sed -i -e 1,3d yourfile . This won't echo anything, it will just modify the file in-place. If you don't need to pipe the result to another command, this is easier. tail $ tail -n +4 t.txt 78 90 awk $ awk 'NR > 3 { print }' < t.txt 78 90
{ "source": [ "https://unix.stackexchange.com/questions/37790", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15417/" ] }
37,793
There are often times that I want my computer to do a single task, but not right now. For example, I could have it notify me in 30 minutes that it is time to leave work. Or maybe I want it to run a complicated test 2 hours from now when I'm sure most everyone else will be gone from the office. I know I could create a cron job to run at a specific time of day, but that seems like a lot of work when all I want is something simple like "Run this script in 10 minutes", besides I'd have to figure out what time it will actually be X minutes/hours/days from now, and then delete the cron job once it finished. Of course I could just write this script and run it in the background: sleep X do_task But that just seems so clunky: I either need a new script for each task, or I need to write and maintain a script generic enough to do what I want, not to mention I have to figure out how many seconds are in the minutes, hours, or days I want. Is there not an already established solution to this problem?
I use a simple script with at : #!/bin/bash # email reminder notes using at(1)... read -p "Time of message? [HH:MM] " time read -p "Date of message? [dd.mm.yy] " date read -p "Message body? " message at "$time" "$date" <<EOF echo "$message" | mailx -s "REMINDER" [email protected] EOF You could just as easily pipe the $message to notify-send or dzen if you wanted a desktop notification instead of an email.
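For one-off jobs you don't even need the wrapper script, since at accepts relative times directly; the commands below are only placeholders:
echo "touch /tmp/its-time" | at now + 30 minutes
echo "./run_long_test.sh > /tmp/test.log 2>&1" | at now + 2 hours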
{ "source": [ "https://unix.stackexchange.com/questions/37793", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4143/" ] }
37,829
I am trying to understand character special files. From Wikipedia , I understand that these files "provide an interface" for devices that transmit data one character at a time. My understanding is that the system somehow calls the character device instead of calling the device driver directly. But how does the file provide this interface? Is it an executable that translates the system call? Can someone explain what's going on?
They are actually just that - interfaces. Encoded by a "major" and "minor" number they provide a hook to the kernel. They come in two flavors (well, three, but named pipes are out of the scope of this explanation for now): Character Devices and Block Devices. Block Devices tend to be storage devices, capable of buffering output and storing data for later retrieval. Character Devices are things like audio or graphics cards, or input devices like keyboard and mouse. In each case, when the kernel loads the correct driver (either at boot time, or via programs like udev ) it scans the various buses to see if any devices handled by that driver are actually present on the system. If so, it sets up a device that 'listens' on the appropriate major/minor number. (For instance, the Digital Signal Processor of the first audio card found by your system gets the major/minor number pair of 14/3; the second gets 14,35, etc.) It's up to udev to create an entry in /dev named dsp as a character device marked major 14 minor 3. (In significantly older or minimum-footprint versions of Linux, /dev/ may not be dynamically loaded but just contain all possible device files statically.) Then, when a userspace program tries to access a file that's marked as a 'character special file' with the appropriate major/minor number (for instance, your audio player trying to send digital audio to /dev/dsp ), the kernel knows that this data needs to be transmitted via the driver that major/minor number is attached to; presumably said driver knows what to do with it in turn.
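You can see those major/minor numbers with ls: for a character special file the first character of the mode is c and the size column is replaced by the major, minor pair (the date and permissions below will of course vary):
$ ls -l /dev/null
crw-rw-rw- 1 root root 1, 3 May  1 10:00 /dev/null
Creating such a node by hand is rarely needed these days (udev does it for you), but it is just: sudo mknod /dev/mydsp c 14 3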
{ "source": [ "https://unix.stackexchange.com/questions/37829", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18366/" ] }