source_id (int64, 1 to 4.64M) | question (string, 0 to 28.4k chars) | response (string, 0 to 28.8k chars) | metadata (dict)
---|---|---|---|
44,754 | How can I list files whose filename ends in a single digit before the .txt extension? I have tried ls *+([[:digit:]]).txt but this is true for abc12.txt and abc2.txt . But I need to get only abc2.txt . How can I do that? Is there any short form of :digit: that will do this? | How about: ls -d -- *[!0-9][0-9].txt The ! at the beginning of the group complements its meaning. As noted in the comments, this is bash's doing, try e.g.: printf "%s\n" *[!0-9][0-9].txt | {
"source": [
"https://unix.stackexchange.com/questions/44754",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21642/"
]
} |
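A quick scratch test of the glob from the answer above; this is just a sketch, assuming bash and a throwaway directory:
mkdir -p /tmp/globtest && cd /tmp/globtest   # hypothetical scratch directory
touch abc2.txt abc12.txt
printf '%s\n' *[!0-9][0-9].txt               # should print only abc2.txt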
44,898 | Is it possible to make ls -l output the size field with digits grouped by thousands? If so, how? For instance: $ ls -l
-rw-rw---- 1 dahl dahl 43,210,052 2012-01-01 21:52 test.py (Note the commas in the size). Maybe by modifying the LC_NUMERIC setting inside the locale I'm using (en_US.utf8)? I'm on Kubuntu 12.04 LTS. | Block size - GNU Coreutils says A block size specification preceded by ' causes output sizes to be displayed with thousands separators. (Note well that just specifying a block size is not enough). So depending on what you want, you could try BLOCK_SIZE="'1" ls -l
BLOCK_SIZE="'1kB" ls -l or ls -l --block-size="'1"
ls -l --block-size="'1kB" you can make it permanent using export BLOCK_SIZE="'1"
export BLOCK_SIZE="'1kB" or alias ls="ls --block-size=\"'1\""
alias ls="ls --block-size=\"'1kB\"" | {
"source": [
"https://unix.stackexchange.com/questions/44898",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21709/"
]
} |
44,906 | chmod 777 -R /mnt
rm -rf /mnt/*
rm: cannot remove 'omitted': Read-only file system
rm: cannot remove 'omitted': Read-only file system Please show me how I can do this? | As the error message says: the filesystem on which omitted is located is read-only. You can't do anything to modify that filesystem, including removing files. You can check the mount point of the filesystem by running df omitted . It is probably /mnt given the command you're running. You can remount the filesystem as read-write by running mount -o remount,rw /mnt However it would be a good idea to find out why the filesystem was mounted as read-only in the first place. This may be an indication that you should not be deleting those files. Run mount | grep /mnt to see what options were specified when mounting that filesystem. For an ext2/ext3/ext4 filesystem, if the options did not include ro (read-only) but included errors=remount-ro , it looks like the filesystem was damaged and was automatically remounted as read-only to limit the damage; you will find more information in the kernel logs. Note that your command attempts to remove the mount point itself, but this is harmless: you won't have permission to do it anyway. By the way, I strongly urge you not to use chmod 777 . It is extremely rare to actually need these permissions, and they can cause a lot of harm (especially when you typo the argument, but even when not). If you try to remove a file and get a "permission denied" error, all you need to do is give yourself permission to write to the containing directory: generally, that's chmod -R u+w /path/to/toplevel/directory . | {
"source": [
"https://unix.stackexchange.com/questions/44906",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21663/"
]
} |
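A compact version of the diagnose-then-remount sequence described above, assuming the affected filesystem is the one mounted at /mnt:
df /mnt                          # confirm which filesystem the files live on
mount | grep ' /mnt '            # look for "ro" or "errors=remount-ro" among the options
sudo mount -o remount,rw /mnt    # only after checking why it went read-only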
44,956 | There are tools providing coloured output: dwdiff -c File1 File2 # word level diff
grep --color=always # we all know this guy
... The question is: how to convert the coloured output of an arbitrary program into a coloured HTML file? Other output formats might be suitable as well (LaTeX would be great).
I think HTML is a good starting point, as it's easy to convert it to other formats. (For those curious how to keep terminal colour codes, please follow this answer: https://unix.stackexchange.com/a/10832/9689 ... | unbuffer command_with_colours arg1 arg2 | ... - the tool unbuffer is part of expect ) | The answer to this question is probably what you want. It links to these tools, which do the conversion you're looking for: Perl package HTML::FromANSI aha , a C-language program ( github repo ) | {
"source": [
"https://unix.stackexchange.com/questions/44956",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9689/"
]
} |
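A sketch of the full pipeline, assuming the aha converter and the unbuffer tool (from expect) are installed; the grep pattern and file name are placeholders:
unbuffer grep --color=always 'pattern' file.txt | aha > colored.html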
44,967 | I'm looking for the difference between cp -r and cp -a . What does "recursive" mean in terms of copying files from a folder? | Recursive means that cp copies the contents of directories, and if a directory has subdirectories they are copied (recursively) too. Without -R , the cp command skips directories. -r is identical with -R on Linux, it differs in some edge cases on some other unix variants. By default, cp creates a new file which has the same content as the old file, and the same permissions but restricted by the umask ; the copy is dated from the time of the copy, and belongs to the user doing the copy. With the -p option, the copy has the same modification time, the same access time, and the same permissions as the original. It also has the same owner and group as the original, if the user doing the copy has the permission to create such files. The -a option means -R and -p , plus a few other preservation options. It attempts to make a copy that's as close to the original as possible: same directory tree, same file types, same contents, same metadata (times, permissions, extended attributes, etc.). | {
"source": [
"https://unix.stackexchange.com/questions/44967",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21750/"
]
} |
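A small demonstration of the difference, using a hypothetical scratch tree:
mkdir -p src/sub && echo hello > src/sub/file
cp -R src copy_r    # copies the tree; the copies get new timestamps and the copier's ownership
cp -a src copy_a    # copies the tree and preserves times, permissions and other metadata
ls -l copy_r/sub copy_a/sub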
44,985 | I'm running pdftoppm to convert a user-provided PDF into a 300DPI image. This works great, except if the user provides a PDF with a very large page size. pdftoppm will allocate enough memory to hold a 300DPI image of that size in memory, which for a 100 inch square page is 100*300 * 100*300 * 4 bytes per pixel = 3.5GB. A malicious user could just give me a silly-large PDF and cause all kinds of problems. So what I'd like to do is put some kind of hard limit on memory usage for a child process I'm about to run--just have the process die if it tries to allocate more than, say, 500MB of memory. Is that possible? I don't think ulimit can be used for this, but is there a one-process equivalent? | If your process doesn't spawn more children that consume the most memory, you may use the setrlimit function. A more common user interface for that is the ulimit command of the shell: $ ulimit -Sv 500000 # Set ~500 mb limit
$ pdftoppm ... This will only limit "virtual" memory of your process, taking into account (and limiting) the memory the process being invoked shares with other processes, and the memory mapped but not reserved (for instance, Java's large heap). Still, virtual memory is the closest approximation for processes that grow really large, making the said errors insignificant. If your program spawns children, and it is they that allocate memory, it becomes more complex, and you should write auxiliary scripts to run processes under your control. I wrote in my blog, why and how . | {
"source": [
"https://unix.stackexchange.com/questions/44985",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33430/"
]
} |
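One way to confine the limit to the single child process is to set it inside a subshell; a sketch, assuming bash, with placeholder pdftoppm arguments:
( ulimit -Sv 500000; pdftoppm -r 300 input.pdf output )   # the limit dies with the subshell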
44,995 | I'm learning shell scripting for a diploma in IT that I'm currently doing. I'm trying to write a small script that adds two numbers, as shown in one of the tutorials we were given. echo "Enter two numbers"
read num1 num2
sum = 'expr $num1 + $num2'
echo "The sum is = $sum" However when I give it the execution permission and run the script, it gives me this error. sum: =. No such file or directory.
sum: expr $num1 + $num2: No such file or directory I tried running this on both Ubuntu and Fedora but the same error occurs. Can anyone please tell me what I'm missing here? | First you have to get rid of the spaces for the assignment, e.g. sum='expr $num1 + $num2' then you have to change ' to a ` or, even better, to $() : sum=$(expr "$num1" + "$num2") Instead of using expr you can also do the calculation directly in your shell: sum=$((num1 + num2)) | {
"source": [
"https://unix.stackexchange.com/questions/44995",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21761/"
]
} |
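Putting the fixes together, the whole script would look roughly like this (bash arithmetic version):
#!/bin/bash
echo "Enter two numbers"
read num1 num2
sum=$((num1 + num2))
echo "The sum is = $sum"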
45,025 | I have a process originally running in the foreground. I suspended it with Ctrl + Z , and then resumed it in the background with bg <jobid> . I wonder how to suspend a process running in the background? How can I bring a background process to foreground? Edit: The process outputs to stderr, so how shall I issue the command fg <jobid> while the process is outputting to the terminal? | As Tim said, type fg to bring the last process back to foreground. If you have more than one process running in the background, do this: $ jobs
[1] Stopped vim
[2]- Stopped bash
[3]+ Stopped vim 23 fg %3 to bring the vim 23 process back to foreground. To suspend the process running in the background, use: kill -STOP %job_id The SIGSTOP signal stops (pauses) a process
in essentially the same way Ctrl + Z does. example: kill -STOP %3 . sources: How to send signals to processes in Linux and Unix How to manage background and foreground jobs . | {
"source": [
"https://unix.stackexchange.com/questions/45025",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
45,042 | Possible Duplicate: Why am I still getting a password prompt with ssh with public key authentication? I have ssh access to two servers. One old one and one new one. For the old one I use the tutorial SSH login without password to login without typing the password every time. For the new machine I followed the tutorial again, but this time it is not working. I looked at the debug output from ssh ( -v option) and it seems to me that the new server does not accept my public key. But I checked and both authorized_keys are the same; I even used md5sum . What could be the problem and how could I fix this? Debug output for old server where it does work (snippet): debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /home/NICK/.ssh/id_rsa
debug1: Server accepts key: pkalg ssh-rsa blen 277 Debug output for new server where it does not work (snippet): debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /home/NICK/.ssh/id_rsa
debug1: Authentications that can continue: publickey,password
debug1: Trying private key: /home/NICK/.ssh/id_dsa [ UPDATE] Ownership of authorized_keys on remote NICK@server-new:~/.ssh$ ls -l
total 4
-rwx------ 1 NICK NICK 404 2012-08-08 16:11 authorized_keys Complete debug output for the not working server: OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
debug1: Reading configuration data /home/NICK/.ssh/config
debug1: /home/NICK/.ssh/config line 1: Applying options for foo2
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to foo-serv2.cs.bar.it [XXX.XXX.XXX.XXX] port 22.
debug1: Connection established.
debug1: identity file /home/NICK/.ssh/id_rsa type 1
debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
debug1: identity file /home/NICK/.ssh/id_rsa-cert type -1
debug1: identity file /home/NICK/.ssh/id_dsa type -1
debug1: identity file /home/NICK/.ssh/id_dsa-cert type -1
debug1: identity file /home/NICK/.ssh/id_ecdsa type -1
debug1: identity file /home/NICK/.ssh/id_ecdsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.5p1 Debian-4ubuntu6
debug1: match: OpenSSH_5.5p1 Debian-4ubuntu6 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Server host key: RSA XXX
debug1: Host 'foo-serv2.cs.bar.it' is known and matches the RSA host key.
debug1: Found key in /home/NICK/.ssh/known_hosts:34
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /home/NICK/.ssh/id_rsa
debug1: Authentications that can continue: publickey,password
debug1: Trying private key: /home/NICK/.ssh/id_dsa
debug1: Trying private key: /home/NICK/.ssh/id_ecdsa
debug1: Next authentication method: password | Did you make sure that the ownership and mode of your ~/.ssh directory on the remote side is correct? It should be owned by you, and have 0700 permissions, i.e. chmod 700 ~/.ssh . Also chmod go-w ~ as this is checked also - because anyone with write permission on your home directory can change the permissions of the .ssh directory. | {
"source": [
"https://unix.stackexchange.com/questions/45042",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11381/"
]
} |
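The permission fixes described above, run on the remote host, would look roughly like this (default paths assumed):
chmod go-w ~                        # home directory must not be group/world writable
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys    # commonly needed as well when sshd uses StrictModes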
45,153 | I noticed that I can read text files without an extension .txt just fine. How come? Should I save these files with or without the .txt extension? Also, what about .ini files? I usually use them like this: config.ini , should I remove the extension here too? Any general resources on how Linux handles file extensions would be useful. | UNIX/Linux does not have the same early DOS / CP/M heritage that Windows does. So extensions are generally less significant to most UNIX utilities and tools. I usually use a command-line only environment. Extensions in such an environment under Linux aren't really significant except as a convenience to the operator or user. (I don't have enough experience with KDE or GNOME to know how their filemanagers deal with extensions.) But such convenience is usually important. If config.ini is really in Microsoft-standard ".ini" format, I'd let the extension stand. Plain old text files usually carry no extension in Linux, but this isn't universal for all programs' configuration files. The programmer usually gets to decide that. I think ".txt" is useful under Linux if you want to emphasize that it's NOT a configuration file or other machine-readable document. However, in source distributions, the convention is to name such files all caps without an extension (i.e. README, INSTALL, COPYING, etc.) There are some standards and conventions but nothing stopping you from naming anything whatever you want, unless you are sharing things with others. In Windows, naming a file .exe indicates to the shell (usually explorer.exe ) that it's an executable file. UNIX builds this knowledge into the file system's permissions. If the proper x bits (see man chmod ) are set, it is recognized as executable by shells and kernel functions (I believe). Beyond this, Linux doesn't care, most shells won't care, and most programs look in the file to find its "type." Of course, there's the nice command file which can analyze the file and tell you what it is with a degree of certainty. I believe if it can't match the data in the file with any known type, and if it contains only printable ASCII/Unicode characters, then it assumes it's a text file. @Bruce Ediger below is absolutely correct. There is nothing in the kernel or filesystem level, i.e. Linux itself, enforcing or caring that the contents of a file need to match up with its name, or the program that is supposed to understand it. This doesn't mean it's not possible to create a shell or launcher utility to do things based on filename. | {
"source": [
"https://unix.stackexchange.com/questions/45153",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17008/"
]
} |
45,160 | I only know about: 1) Internetworking with TCP/IP: Vol.II, Design, Implementation, and Internals 2) TCP/IP Illustrated, Vol. 2: The Implementation but these are quite dated. I am particularly interested in Open source implementations. Any ideas? EDIT: I found another book 1) TCP/IP Architecture, Design and Implementation in Linux | UNIX/Linux does not have the same early DOS / CP/M heritage that Windows does. So extensions are generally less significant to most UNIX utilities and tools. I usually use a command-line only environment. Extensions in such an environment under Linux aren't really significant except as a convenience to the operator or user. (I don't have enough experience with KDE or GNOME to know how their filemanagers deal with extensions.) But such convenience is usually important. If config.ini is really in Microsoft-standard ".ini" format, I'd let the extension stand. Plain old text files usually carry no extension in Linux, but this isn't universal for all programs configuration files. The programmer usually gets to decide that. I think ".txt" is useful under Linux if you want to emphasize that it's NOT a configuration file or other machine-readable document. However, in source distributions, the convention is to name such files all caps without an extension (i.e. README, INSTALL, COPYING, etc.) There are some standards and conventions but nothing stopping you from naming anything whatever you want, unless you are sharing things with others. In Windows, naming a file .exe indicates to the shell (usually explorer.exe ) that it's an executable file. UNIX builds this knowledge into the file system's permissions. If the proper x bits (see man chmod ) are set, it is recognized as executable by shells and kernel functions (I believe). Beyond this, Linux doesn't care, most shells won't care, and most programs look in the file to find it's "type." Of course, there's the nice command file which can analyze the file and tell you what it is with a degree of certainty. I believe if it can't match the data in the file with any known type, and if it contains only printable ASCII/Unicode characters, then it assumes its a text file. @Bruce Ediger below is absolutely correct. There is nothing in the kernel or filesystem level, i.e. Linux itself, enforcing or caring that the contents of a file needs to match up with its name, or the program that is supposed to understand it. This doesn't mean it's not possible to create a shell or launcher utility to do things based on filename. | {
"source": [
"https://unix.stackexchange.com/questions/45160",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6522/"
]
} |
45,212 | I have a bunch of files as follows: 04602635_b0294.DAT20120807164534
04602637_b0297.DAT20120807164713
04602638_b0296.DAT20120807164637
04602639_b0299.DAT20120807164819
04602640_b0298.DAT20120807164748
04602641_b0300.DAT20120807164849
04602650_b0301.DAT20120807164921
04602652_b0302.DAT20120807164956 I need to rename them to exclude the prefix. It needs to look like this.. b0294.DAT20120807164534
b0297.DAT20120807164713
b0296.DAT20120807164637
b0299.DAT20120807164819
b0298.DAT20120807164748
b0300.DAT20120807164849
b0301.DAT20120807164921
b0302.DAT20120807164956 EDIT I forgot to add that I am using Solaris. | for file in * ; do
echo mv -v "$file" "${file#*_}"
done Run this first to confirm that everything is OK. If it is, remove echo from the command and it will rename the files as you want. "${file#*_}" is standard parameter substitution in the shell. It removes all characters before the first _ symbol (including the symbol itself). For more details look here . | {
"source": [
"https://unix.stackexchange.com/questions/45212",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7726/"
]
} |
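A quick illustration of the parameter substitution used in the loop above (POSIX sh):
file=04602635_b0294.DAT20120807164534
echo "${file#*_}"    # prints b0294.DAT20120807164534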
45,275 | I have about 1k+ mails in an inbox (old cronjob stdout). How do I delete them in bulk? I'm on Solaris 8 and I have only mail available, no pine or mutt or similar "UI"-based client. Inline help and man page only give d # to delete a specific mail. I've tried for example d 1 - 100 but no luck. And I don't feel like doing d 1000 times. Any ideas how to clean up this inbox? I'd actually like to purge all mails older than x days. | While mail may not be able to, and you don't have pine or mutt you probably do have mailx . And mailx can d 5-10 or d * . | {
"source": [
"https://unix.stackexchange.com/questions/45275",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21953/"
]
} |
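A sketch of the interactive mailx session this describes, assuming it opens the default system mailbox:
mailx      # then, at the mailx prompt:
d *        # delete every message
q          # quit and commit the deletions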
45,305 | Example: I am using the tar -zxvf command but I don't know what 'x' stands for. How can I check this single parameter without having to scroll all the way through man tar ? | Search: x is for extract . After you are inside man, type /-x then Enter to search for info about the -x parameter. Press n to jump to the next -x match, and N for the previous one. Search with regex: For large man pages, or common terms, a little regex can be used to narrow the search. If you just want the main entry, you can use /^ *-x to remove most extraneous matches. This works as most man pages are formatted with the entry indented with spaces. ^ * matches the start of line, with zero to many spaces. -x is the search string. | {
"source": [
"https://unix.stackexchange.com/questions/45305",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20334/"
]
} |
45,340 | I'm trying to run my first "process" program, but I get the following error : ./fork.c: line 4: syntax error near unexpected token `('
./fork.c: line 4: `int main()' I'm pretty sure that the code is correct: #include <sys/types.h>
#include <stdio.h>
int main() {
pid_t pid;
printf("Hello fork()\n");
switch(pid=fork()) {
case -1: printf("Error by fork().....\n"); exit(0);
case 0: printf("I'm the child process \n"); break;
default: printf("I'm the dad \n"); break;
}
exit(0);
} What is wrong? | You can't just run ./fork.c . It's not a program; it's the source for a program. Using ./ assumes that the file is a script (which it isn't) and treats it accordingly. However, as noted in another answer, there are compilers (like Tiny C Compiler ) that can execute C code without explicitly compiling it. Since it's a C program, you have to compile the program. Try cc -o fork fork.c then ./fork ; it worked here. | {
"source": [
"https://unix.stackexchange.com/questions/45340",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21631/"
]
} |
45,404 | A colleague suggested creating a random key via the following command: tr -dc A-Za-z0-9_\!\@\#\$\%\^\&\*\(\)-+= < /dev/urandom | head -c 32 | xargs It gave me the error: tr: Illegal byte sequence I'm concerned that I do not have /dev/urandom on my system. I tried googling to figure out how to install this file, but I have come up empty. I tried locate urandom and also came up empty. (well actually, it found the man page, but that doesn't help) How do I make urandom available on my Mac OSX system? (Lion) | Based on the error message that you get, I don't think /dev/urandom is the problem. If it were, I'd expect an error like no such file or directory . I searched for the error message you got and found this, which seems like it might be relevant to your issue: nerdbynature.de 2010-04-11 tr-Illegal-byte-sequence (Web Archive's 2019-09 snapshot) Basically, specify the locale by prepending the tr command with LC_CTYPE=C (or LC_ALL=C , see comments): LC_CTYPE=C tr -dc A-Za-z0-9_\!\@\#\$\%\^\&\*\(\)-+= < /dev/urandom | head -c 32 | xargs | {
"source": [
"https://unix.stackexchange.com/questions/45404",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22030/"
]
} |
45,441 | What command can be used to force release everything in swap partition back to memory ? Presume that I have enough memory. | From this Ask Ubuntu question : You can also clear your swap by running swapoff -a and then swapon -a as root instead of rebooting to achieve the same effect. Thus: $ free -tm
...
Swap: 6439 196 6243
...
$ sudo swapoff -a
$ sudo swapon -a
$ free -tm
...
Swap: 6439 0 6439
... As noted in a comment, if you don't have enough memory, swapoff will result in "out of memory" errors and on the kernel killing processes to recover RAM. | {
"source": [
"https://unix.stackexchange.com/questions/45441",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
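A slightly safer sequence, checking that there is room in RAM before emptying swap (GNU/Linux procps tools assumed):
free -m                              # make sure free/available RAM exceeds swap "used"
sudo swapoff -a && sudo swapon -a    # swapon only runs if swapoff succeeded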
45,447 | All I can find about XkbOptions was: Option "XKbOptions" "grp:alt_shift_toggle" Seems I can only use alt + shift combination to switch keyboard layout, any other keys that I can use ? | From man xkeyboard-config : Key(s) to change layout
Option                        Description
grp:switch                    Right Alt (while pressed)
grp:lswitch                   Left Alt (while pressed)
grp:lwin_switch               Left Win (while pressed)
grp:rwin_switch               Right Win (while pressed)
grp:win_switch                Any Win key (while pressed)
grp:caps_switch               Caps Lock (while pressed), Alt+Caps Lock does the original capslock action
grp:rctrl_switch              Right Ctrl (while pressed)
grp:toggle                    Right Alt
grp:lalt_toggle               Left Alt
grp:caps_toggle               Caps Lock
grp:shift_caps_toggle         Shift+Caps Lock
grp:shift_caps_switch         Caps Lock (to first layout), Shift+Caps Lock (to last layout)
grp:win_menu_switch           Left Win (to first layout), Right Win/Menu (to last layout)
grp:lctrl_rctrl_switch        Left Ctrl (to first layout), Right Ctrl (to last layout)
grp:alt_caps_toggle           Alt+Caps Lock
grp:shifts_toggle             Both Shift keys together
grp:alts_toggle               Both Alt keys together
grp:ctrls_toggle              Both Ctrl keys together
grp:ctrl_shift_toggle         Ctrl+Shift
grp:lctrl_lshift_toggle       Left Ctrl+Left Shift
grp:rctrl_rshift_toggle       Right Ctrl+Right Shift
grp:ctrl_alt_toggle           Alt+Ctrl
grp:alt_shift_toggle          Alt+Shift
grp:lalt_lshift_toggle        Left Alt+Left Shift
grp:alt_space_toggle          Alt+Space
grp:menu_toggle               Menu
grp:lwin_toggle               Left Win
grp:rwin_toggle               Right Win
grp:lshift_toggle             Left Shift
grp:rshift_toggle             Right Shift
grp:lctrl_toggle              Left Ctrl
grp:rctrl_toggle              Right Ctrl
grp:sclk_toggle               Scroll Lock
grp:lctrl_lwin_rctrl_menu     LeftCtrl+LeftWin (to first layout), RightCtrl+Menu (to second layout) | {
"source": [
"https://unix.stackexchange.com/questions/45447",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
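One way to try an option without editing xorg.conf is setxkbmap in a running X session; a sketch, where the layout list us,ru is only an example:
setxkbmap -option ''                              # clear previously set options first
setxkbmap -layout us,ru -option grp:caps_toggle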
45,583 | When I run a command like ls */*/*/*/*.jpg , I get the error -bash: /bin/ls: Argument list too long I know why this happens: it is because there is a kernel limit on the amount of space for arguments to a command. The standard advice is to change the command I use, to avoid requiring so much space for arguments (e.g., use find and xargs ). What if I don't want to change the command?
What if I want to keep using the same command?
How can I make things "just work", without getting this error?
What solutions are available? | On Linux, the maximum amount of space for command arguments is 1/4th of the amount of available stack space. So, a solution is to increase the amount of space available for the stack. Short version: run something like ulimit -s 65536 Longer version: The default amount of space available for the stack is something like 8192 KB. You can see the amount of space available, as follows: $ ulimit -s
8192 Choose a larger number, and set the amount of space available for the stack. For instance, if you want to try allowing up to 65536 KB for the stack, run this: $ ulimit -s 65536 You may need to play around with how large this needs to be, using trial-and-error. In many cases, this is a quick-and-dirty solution that will eliminate the need to modify the command and work out the syntax of find , xargs , etc. (though I realize there are other benefits to doing so). I believe that this is Linux-specific. I suspect it probably won't help on any other Unix operating system (not tested). | {
"source": [
"https://unix.stackexchange.com/questions/45583",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9812/"
]
} |
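To avoid changing the limit for the whole shell session, the raise can be confined to a subshell; a sketch, assuming bash on Linux:
( ulimit -s 65536 && ls */*/*/*/*.jpg )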
45,626 | I want to output a file's contents while they change, for example if I have the file foobar and I do: magic_command foobar The current terminal should display the file's contents and wait until, I don't know, I press ^C. Then if from another terminal I do: echo asdf >> foobar The first terminal should display the newly added line in addition to the original file contents (of course, given that I didn't press ^C). | You can use the tail command with -f : tail -f /var/log/syslog It's a good solution for showing changes in real time. | {
"source": [
"https://unix.stackexchange.com/questions/45626",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18979/"
]
} |
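If the file might be rotated or replaced while you watch it, GNU tail's -F (follow by name and keep retrying) is often more robust than -f :
tail -F foobar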
45,646 | I expect to get some flak for this, but I can't find the answer anywhere. It seems like it should be so obvious. Sometimes, when I type a bad command in a bash terminal, the cursor just jumps down to the next line without any error or anything. I can't tell what I did wrong. It's like I'm stuck in the program. Reenactment: $ tidy Me: "Oops! That's not what I meant to type..." :q Me: "That didn't work..." :exit
:quit
exit
quit
/exit
/quit
-exit
-quit
-wtf??? I know I screwed up but how do I get back to the prompt without closing the terminal? | You can always try the obvious things like ^C , ^D (eof), Escape etc., but if all else fails I usually end up suspending the command with ^Z (Control-Z) which puts me back into the shell. I then do a ps command and note the PID (process id) of the command and then issue a kill thePID ( kill -9 thePID if the former didn't work) command to terminate the application. Note that this is not a tidy (no pun intended) way to terminate the application/command and you run the risk of perhaps not saving some data etc. An example (I'd have used tidy but I don't have it installed): $ gnuplot
G N U P L O T
Version 4.2 patchlevel 6
....
Send bug reports and suggestions to <http://sourceforge.net/projects/gnuplot>
Terminal type set to 'wxt'
gnuplot>
gnuplot> ##### typed ^Z here
[1]+ Stopped gnuplot
$ ps
PID TTY TIME CMD
1681 pts/1 00:00:00 tcsh
1690 pts/1 00:00:00 bash
1708 pts/1 00:00:00 gnuplot
1709 pts/1 00:00:00 ps
$ kill 1708 ###### didn't kill the command as ps shows
$ ps
PID TTY TIME CMD
1681 pts/1 00:00:00 tcsh
1690 pts/1 00:00:00 bash
1708 pts/1 00:00:00 gnuplot
1710 pts/1 00:00:00 ps
$ kill -9 1708 ### -9 did the trick
$
[1]+ Killed gnuplot
$ ps
PID TTY TIME CMD
1681 pts/1 00:00:00 tcsh
1690 pts/1 00:00:00 bash
1711 pts/1 00:00:00 ps | {
"source": [
"https://unix.stackexchange.com/questions/45646",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22172/"
]
} |
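If you know the stuck program's name, pgrep/pkill (assumed installed, from the procps tools) can replace the ps-and-kill dance:
pkill gnuplot      # sends SIGTERM first
pkill -9 gnuplot   # SIGKILL, only if it ignores the polite request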
45,673 | Somehow I happened to swap out 14 GB of memory. After having killed the culprit, I have tons of free memory again, so I thought I could bring in the important data again. So with 5 GB out of 32 GB used and 14 GB of swap space used, I ran swapoff -a .... and 4 hours later about half of the work was finished. This means less than 1 MB/s, while I can easily copy 200 MB/s. My swap is encrypted but so are all normal partitions and with aes-ni it leads to no noticeable CPU load (and filling the swap space took only a few minutes). I see that there's no special reason to optimize swapoff , however I wonder how it could get that slow? Just adding some more data: My main memory is 32 GB and I have 32 GB swap space on each of 4 harddisks (surely an overkill, but who cares?). The whole swap space can be (decrypted and) read in less than 5 minutes: time -p sudo sh -c 'for i in /dev/mapper/cryptswap?; do md5sum $i & done; wait'
014a2b7ef300e11094134785e1d882af /dev/mapper/cryptswap1
a6d8ef09203c1d8d459109ff93b6627c /dev/mapper/cryptswap4
05aff81f8d276ddf07cf26619726a405 /dev/mapper/cryptswap3
e7f606449327b9a016e88d46049c0c9a /dev/mapper/cryptswap2
real 264.27 Reading a part of a partition can't be slower than reading it all. Yet reading about 1/10th of it takes about 100 times longer. I observed that during swapoff both the CPU was mostly idle (maybe 10% of one core) and so were the disks ("measured" by the LEDs). I also saw that the swap spaces were turned off one after the other. | First, let's look at what you can expect from your hard drive. Your hard drive can do 200 MB/s sequentially . When you factor seek times in, it can be much slower. To pick an arbitrary example, take a look at the specs for one of Seagate's modern 3TB disks, the ST3000DM001 : Max sustained data rate: 210 MB/s Seek average read: <8.5 ms Bytes per sector: 4,096 If you never need to seek, and if your swap is near the edge of the disk, you can expect to see up to the max rate = 210 MB/s But if your swap data is entirely fragmented, in the worst case scenario, you'd need to seek around for every sector you read. That means that you only get to read 4 KB every 8.5 ms, or 4 KB / 0.0085 = 470 KB/s So right off the bat, it's not inconceivable that you are in fact running up against hard drive speeds. That said, it does seem silly that swapoff would run so slowly and have to read pages out of order, especially if they were written quickly (which implies in-order). But that may just be how the kernel works. Ubuntu bug report #486666 discusses the same problem: The swap is being removed at speed of 0.5 MB/s, while the
hard drive speed is 60 MB/s;
No other programs are using harddrive a lot, system is not under
high load etc.
Ubuntu 9.10 on quad core.
Swap partition is encrypted.
Top (atop) shows near 100% hard drive usage
DSK | sdc | busy 88% | read 56 | write 0 | avio 9 ms |
but the device transfer is low (kdesysguard)
0.4 MiB/s on /dev/sdc reads, and 0 on writes One of the replies was: It takes a long time to sort out because it has to rearrange and flush the
memory, as well as go through multiple decrypt cycles, etc. This is quite
normal The bug report was closed unresolved. Mel Gorman's book " Understanding the Linux Virtual Memory Manager " is a bit out of date, but agrees that this is a slow operation: The function responsible for deactivating an area is, predictably
enough, called sys_swapoff() . This function is mainly concerned with
updating the swap_info_struct . The major task of paging in each
paged-out page is the responsibility of try_to_unuse() which is extremely expensive. There's a bit more discussion from 2007 on the linux-kernel mailing list with the subject " speeding up swapoff " -- although the speeds they're discussing there are a bit higher than what you are seeing. It's an interesting question that probably gets generally ignored, since swapoff is rarely used. I think that if you really wanted to track it down, the first step would be trying to watch your disk usage patterns more carefully (maybe with atop , iostat , or even more powerful tools like perf or systemtap ). Things to look for might be excessive seeking, small I/O operations, constant rewriting and movement of data, etc. | {
"source": [
"https://unix.stackexchange.com/questions/45673",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5116/"
]
} |
45,676 | In bash all I know is that rmdir directoryname will remove the directory but only if it's empty. Is there a way to force remove subdirectories? | The following command will do it for you. Use caution though if this isn't your intention as this also removes files in the directory and subdirectories. rm -rf directoryname | {
"source": [
"https://unix.stackexchange.com/questions/45676",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22122/"
]
} |
45,684 | What is the difference between ~/.profile and ~/.bash_profile ? | The .profile was the original profile configuration for the Bourne shell (a.k.a., sh ). bash , being a Bourne-compatible shell, will read and use it. The .bash_profile , on the other hand, is only read by bash . It is intended for commands that are incompatible with the standard Bourne shell. | {
"source": [
"https://unix.stackexchange.com/questions/45684",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20620/"
]
} |
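A common arrangement, sketched here, is to keep shared settings in ~/.profile and have ~/.bash_profile source it before adding bash-only settings:
# ~/.bash_profile
if [ -r ~/.profile ]; then
    . ~/.profile
fi
# bash-specific settings go below this line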
45,711 | I have two files which look identical to me (including trailing whitespaces and newlines) but diff still says they differ. Even when I do a diff -y side by side comparison the lines look exactly the same. The output from diff is the whole 2 files. Any idea what's causing it? | Odd .. can you try cmp ? You may want to use the ' -b ' option too. cmp man page - Compare two files byte by byte. This is one of the nice things about Unix/Linux .. so many tools :) | {
"source": [
"https://unix.stackexchange.com/questions/45711",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21778/"
]
} |
45,721 | I am trying to use rsync to maintain a backup copy of my Aperture library. When I run the sync command to see what would happen this is the outcome: rsync --dry-run -r "/Volumes/Data/Aperture Library.aplibrary" "/Volumes/Backup"
skipping non-regular file "Aperture Library.aplibrary/Database/BigBlobs.apdb"
skipping non-regular file "Aperture Library.aplibrary/Database/Faces.db"
skipping non-regular file "Aperture Library.aplibrary/Database/History.apdb"
skipping non-regular file "Aperture Library.aplibrary/Database/ImageProxies.apdb"
skipping non-regular file "Aperture Library.aplibrary/Database/Library.apdb"
skipping non-regular file "Aperture Library.aplibrary/Database/Properties.apdb" The "file" command says that at least the ".db" file is a "SQLite 3.x database". How can I sync these files with rsync? | It would seem those files are symlinks. To copy them as symlinks, use --links (or -l ). To hard copy the files they are pointing to, use --copy-links (or -L ). For details see the SYMBOLIC LINKS section in man rsync . | {
"source": [
"https://unix.stackexchange.com/questions/45721",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3243/"
]
} |
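Applied to the original command, keeping the symlinks as symlinks would look roughly like this (paths taken from the question):
rsync --dry-run -rl "/Volumes/Data/Aperture Library.aplibrary" "/Volumes/Backup"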
45,781 | I've been working on a script that automates setting up a development environment for Raspberry Pi development (step by step details that work are here ). The script is linked in that article but convenience you can find it here also. Now when run this script install and sets up the environment without error but you have to enter your sudo password more than once due to sudo's time-out value by default. So I started experimenting by removing all the sudo lines and running the whole script via sudo at the command line like so: kemra102@ubuntuvm:~$ sudo ./pi_dev_env_install.sh This works fine as expected and gets most of the way through until this point: ./pi_dev_env_install: 68: ./pi_dev_env_install.sh: Syntax error: "(" unexpected Now this line worked fine previously when not running the whole script with sudo. There is nothing about this line running as sudo that should stop it working to my knowledge, does anyone have any ideas? | The script does not begin with a shebang line, so the system executes it with /bin/sh . On Ubuntu, /bin/sh is dash , a shell designed for fast startup and execution with only standard features. When dash reaches line 68, it sees a syntax error: that parenthesis doesn't mean anything to it in context. Since dash (like all other shells) is an interpreter, it won't complain until the execution reaches the problematic line. So even if the script successfully started at some point in your testing, it would have aborted once line 68 was reached. The shebang line must be the very first thing in the file. Since you use bash features, the first line of the file must be #!/bin/bash or #!/usr/bin/env bash . | {
"source": [
"https://unix.stackexchange.com/questions/45781",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2885/"
]
} |
45,820 | I want to know how to umount my USB drive via command line. I am using Ubuntu 12.04 LTS 32-bit. | Suppose your usb drive is mounted to /media/usb then it would be sufficient to do sudo umount /media/usb Suppose the your usb is /dev/sdb1 then you could also do sudo umount /dev/sdb1 You may also have a look at the anwers of one of my questions, how to umount all attached usb devices with a single command: Umount all attached usb disks with a single command | {
"source": [
"https://unix.stackexchange.com/questions/45820",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19803/"
]
} |
45,828 | I like tree ; it's a nice way to display my files and the size of folders/directories. But the -h option only shows the size of the directory, not the cumulative size of its contents. /media/
├── [ 16K] 64D9-E862
│   └── [8.0K] downloads I know for a fact that my external drive has more than 16kB in it. How can I fix that with tree 1.5? Better yet how do I upgrade to 1.6? | Only for tree 1.6 and above You might want to look at: man tree --du For each directory report its size as the accumulation of sizes of all its files and sub-directories (and
their files, and so on). The total amount of used space is also given in the final report (like the 'du -c'
command.) This option requires tree to read the entire directory tree before emitting it, see BUGS AND NOTES
below. Implies -s. So you should use: tree --du -h | {
"source": [
"https://unix.stackexchange.com/questions/45828",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13107/"
]
} |
45,879 | In Ubuntu one can add a repository via following command - sudo add-apt-repository ppa:yannubuntu/boot-repair As Ubuntu is based on Debian code base, I was expecting that the same would work in Debian too, but it doesn't. What is the reason for this? Is there some other shell command I can use to achieve the same? Note: I know I can edit /etc/apt/sources.list , but I want to achieve this from the shell. I also want to know why the same command won't work when the code base is the same. | Debian Jessie and later (2014-) As pointed out by @voltagex in the comments, it can now be found in the software-properties-common package: sudo apt-get install software-properties-common Debian Wheezy and earlier: The program add-apt-repository is available in Debian. It's in the python-software-properties package: sudo apt-get install python-software-properties It was added to that package in version 0.75. The current version in Debian Stable ('squeeze") is 0.60, so it doesn't have it. The version currently in Debian Testing ("wheezy") is 0.82.7.1debian1, so it's available there. | {
"source": [
"https://unix.stackexchange.com/questions/45879",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19506/"
]
} |
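On a recent Debian the sequence from the question then becomes, as a sketch (the PPA name is the one from the question; note that Ubuntu PPAs are not guaranteed to work on Debian):
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:yannubuntu/boot-repair
sudo apt-get update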
45,899 | I was just wondering why the Linux NFS server is implemented in the kernel as opposed to a userspace application? I know a userspace NFS daemon exists, but it's not the standard method for providing NFS server services. I would think that running NFS server as a userspace application would be the preferred approach as it can provide added security having a daemon run in userspace instead of the kernel. It also would fit with the common Linux principle of doing one thing and doing it well (and that daemons shouldn't be a job for the kernel). In fact the only benefit I can think of running in the kernel would be a performance boost from context switching (and that is a debatable reason). So is there any documented reason why it is implemented the way it is? I tried googling around but couldn't find anything. There seems to be a lot of confusion; please note I am not asking about mounting filesystems, I am asking about providing the server side of a network filesystem . There is a very distinct difference. Mounting a filesystem locally requires support for the filesystem in the kernel, providing it does not (eg samba or unfs3). | unfs3 is dead as far as I know; Ganesha is the most active userspace NFS server project right now, though it is not completely mature. Although it serves different protocols, Samba is an example of a successful
file server that operates in userspace. I haven't seen a recent performance comparison. Some other issues: Ordinary applications look files up by pathname, but nfsd needs to be able to
look them up by filehandle. This is tricky and requires support from the
filesystem (and not all filesystems can support it). In the past it was not
possible to do this from userspace, but more recent kernels have added name_to_handle_at(2) and open_by_handle_at(2) system calls. I seem to recall blocking file-locking calls being a problem; I'm not sure
how userspace servers handle them these days. (Do you tie up a server thread
waiting on the lock, or do you poll?) Newer file system semantics (change attributes, delegations, share locks)
may be implemented
more easily in kernel first (in theory--they mostly haven't been yet). You don't want to have to check permissions, quotas, etc., by hand--instead
you want to change your uid and rely on the common kernel vfs code to do
that. And Linux has a system call ( setfsuid(2) ) that should do that. For
reasons I forget, I think that's proved more complicated to use in servers
than it should be. In general, a kernel server's strengths are closer integration with the vfs and the exported filesystem. We can make up for that by providing more kernel interfaces (such as the filehandle system calls), but that's not easy. On the other hand, some of the filesystems people want to export these days (like gluster) actually live mainly in userspace. Those can be exported by the kernel nfsd using FUSE--but again extensions to the FUSE interfaces may be required for newer features, and there may be performance issues. Short version: good question! | {
"source": [
"https://unix.stackexchange.com/questions/45899",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4358/"
]
} |
45,913 | I frequently use the program nohup so that my processes are immune to hangups. So if I want to make the program program immune to hangups, I use the command nohup program & where & puts the process in the background. When starting, nohup gives me the message: nohup: appending output to `nohup.out' Is there any way to send the output to a file other than nohup.out ? Often I want to run many processes in the same directory using nohup , but if I do this, all the output gets lumped together in a single nohup.out file. The manual page (for example, here ) does not seem to have an option for specifying the log file. Can you please confirm this? Also, do you have any thoughts of how I can work around this problem? | GNU coreutils nohup man page indicates that you can use normal redirection: If standard input is a terminal, redirect it from /dev/null . If standard output is a terminal, append output to nohup.out if possible, $HOME/nohup.out otherwise. If standard error is a terminal, redirect it to standard output. To save output to FILE, use nohup COMMAND > FILE . Edit: I didn't read your link at first; you may have a different version of nohup , although this section suggests that you can still use normal redirection: nohup.out The output file of the nohup execution if
standard output is a terminal and if the
current directory is writable. You can redirect standard output and standard error to different files: nohup myprogram > myprogram.out 2> myprogram.err & or to the same file: nohup myprogram > myprogram.out 2>&1 & (don't forget the & at the end to put into the background) | {
"source": [
"https://unix.stackexchange.com/questions/45913",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9605/"
]
} |
45,924 | I have a ksh script that must work on both linux and solaris. I'm trying to color the output of specific commands. It works on linux (specifically RHEL6), but not on solaris (SunOS 5.10). Command on linux (the output "test" is correctly colored red): [amartin@linuxbox:~]$ echo "test" | sed 's,.*,\x1B[31m&\x1B[0m,'
test Command on solaris (the output "test" is not colored): [amartin@sunbox:~]$ echo "test" | sed 's,.*,\x1B[31m&\x1B[0m,'
x1B[31mtestx1B[0m Is there a way to craft this command such that the output is red, without the raw codes in the output? I can't change the 'echo' command because that's just a fill in for the command I'm actually running. | \xNN is an escape sequence in GNU sed, but it is not standard, and in particular it is not available on Solaris. You can include a literal escape character in your script, but that would make it hard to read and edit. You can use printf to generate an escape character. It understands octal escapes, not hexadecimal. esc=$(printf '\033')
echo "test" | sed "s,.*,${esc}[31m&${esc}[0m," You can call tput to generate the replacement text in the call to sed. This command looks up escape sequences in the terminfo database. In theory, using tput makes your script more portable, but in practice you're unlikely to encounter a terminal that doesn't use ANSI escape codes . echo "test" | sed "s,.*,$(tput setaf 1)&$(tput sgr0)," | {
"source": [
"https://unix.stackexchange.com/questions/45924",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22328/"
]
} |
45,925 | Under different Unix/Linux systems I've observed different double click behavior in X terminal applications (e.g. xterm). Sometimes a double click selects everything left and right until the next non-alphabetic character (e.g. it selects the word under the cursor). Sometimes everything until the next blank/eol is selected (e.g. full paths under the cursor are selected). How can I configure the double click behavior - say - in xterm (because it is available on most systems)? Currently, I find the 2nd mode more convenient for most use cases. | You do it with X resources. I have a file, .Xresources , that contains these xterm-related resources: XTerm*VT100.cutNewLine: false
XTerm*VT100.cutToBeginningOfLine: false
XTerm*VT100.charClass: 33:48,35:48,37:48,42:48,45-47:48,64:48,95:48,126:48 In my .xinitrc file, I have some line that merge in those resources: if [ -f $userresources ]; then
/usr/X11/bin/xrdb -merge $userresources
fi Those lines make xterm double-clicks and triple-clicks do what I like: Double-click considers a "word" to include slash (/), dot (.), asterisk (*) and some other non-alphanumeric characters. That's the "charClass" resource. I had to do some tedious fiddling with that charClass to get it to do what I want. That mostly lets you double-click on URLs and fully- or partially-qualified paths to highlight them. The other two lines make triple-click start from the word under the mouse, and go to the end of the line, but not include any new-line. That way, you can triple click on a command you just executed, paste it in another window, and because it has no new-line, you can edit it before running it in the other window. The Arch Wiki has an article on X resources , including a section on xterm resources, but those xterm resources aren't complete. | {
"source": [
"https://unix.stackexchange.com/questions/45925",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1131/"
]
} |
45,926 | I need to display date and time in a desired format in Unix/Linux. My desired format is:
dd/mm/yyyy hh:mm:ss:ms in Unix or Linux. I got close using the following command: echo $(date +%x_%r) This returns: 08/20/2012_02:26:14 PM Any suggestions? | date +%x_%H:%M:%S:%N if you need to print only the first two digits as ms: date +%x_%H:%M:%S:%N | sed 's/\(:[0-9][0-9]\)[0-9]*$/\1/' to store it in a variable: VAR=$(date +%x_%H:%M:%S:%N | sed 's/\(:[0-9][0-9]\)[0-9]*$/\1/') | {
"source": [
"https://unix.stackexchange.com/questions/45926",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22329/"
]
} |
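Since %x is locale-dependent, the requested dd/mm/yyyy form can be spelled out explicitly; %3N (milliseconds) is a GNU date extension:
date '+%d/%m/%Y %H:%M:%S:%3N'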
45,941 | I've got a CI server with a command-line interface that allows me to remotely kick-off a job ( jenkins CI server and the jenkins-cli.jar tool). After I kick the job off I tail -f the log (sorry for the messy command): ssh -t my-jenkins-host.com "tail -f \"/var/lib/jenkins/jobs/$job_name/builds/\`ls -ltr /var/lib/jenkins/jobs/$job_name/builds/ | grep '^l' | tail -n 1|awk '{print \$9}'\`/log\"" After the job successfully completes, usually after at least 5 minutes, I get the following line on the output: Finished: SUCCESS Is there a good way to stop tailing the log at this point? i.e. is there like a tail_until 'some line' my-file.log command? BONUS: extra credit if you can supply an answer that returns 0 when SUCCESS is matched, 1 when FAILURE is matched, and your solution works on mac! (which i believe is bsd based) | You can pipe the tail -f into sed , telling it to quit when it sees the line you're searching for: tail -f /path/to/file.log | sed '/^Finished: SUCCESS$/ q' sed will output each line it processes by default, and exit after it sees that line. The tail process will stop when it tries to write the next line and sees its output pipe is broken | {
"source": [
"https://unix.stackexchange.com/questions/45941",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5784/"
]
} |
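For the bonus part, the matching can move into awk so that the pipeline's exit status reflects the result; a sketch that should work with both GNU and BSD awk (the log path is a placeholder):
tail -f /path/to/file.log | awk '/^Finished: SUCCESS$/ {exit 0} /^Finished: FAILURE$/ {exit 1}'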
46,051 | I want to run time command to measure time of several commands. What I want to do is: Use the time command to measure the time it takes to run multiple commands together Write only the time output to a file Write the stderr of all commands I am measuring to stderr , not to the file What I do NOT want to do is: Write the several commands into a separate script Why? because all of this is already a script that I am generating programatically, and creating ANOTHER temporary script would be more mess than I want. What I have tried so far: /usr/bin/time --output=outtime -p echo "a"; echo "b"; Doesn't work, time is run only on the first one. /usr/bin/time --output=outtime -p ( echo "a"; echo "b"; ) Doesn't work, ( is unexpected token. /usr/bin/time --output=outtime -p { echo "a"; echo "b"; } Doesn't work, "no such file or directory". /usr/bin/time --output=outtime -p ' echo "a"; echo "b";' Doesn't work, "no such file or directory". time ( echo "a"; echo "b"; ) 2>outtime Doesn't work, since it redirects all STDERR into outtime ; I want only the time output there. And of course, time --output=outime echo "a"; Doesn't work, since --output=outime: command not found . How can I do it? | Use sh -c 'commands' as the command, e.g.: /usr/bin/time --output=outtime -p sh -c 'echo "a"; echo "b"' | {
"source": [
"https://unix.stackexchange.com/questions/46051",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10393/"
]
} |
46,066 | I always get messed up when I need to use cp or mv : "do I need the -R option when working with a dir?" In GNU coreutils cp does need -R and mv doesn't. I just can not find any reason why cp needs the -R option for copying dirs and mv doesn't. I think that cp ing dirs without -R (but behaving recursively like there's -R and like mv does) wouldn't cause any problems except breaking someone's habits on using the tool. Do you know any explanation? Maybe it had a reason a long time ago? Additional question: why don't coreutils developers make cp copy dirs recursively by default? | A directory is (conceptually) a special "file" which contains a list of names, and the inode numbers those names point to. Some of the names can be subdirectories. There is a special entry .. which points to the parent directory. So, it's clear, changing the name of a file is easy: you just change the name in the directory entry, nothing else. This holds whether the file is actually a file, or is a "file" used to store another directory's contents. Indeed, the same rename syscall does both. Copying, however, is a much less trivial operation. You could just copy the directory "file", but then you'd have two directories where all the files are the same (they'd be hardlinks). If you had a system that allows hardlinks to directories, those would be too, but since no modern system allows that, at least to non-root, you have to do that copy for each subdirectory. You can actually ask cp for this behavior with cp -lR : -l for hard link, -R for that recursion. But leaving everything linked is likely not what you want. Instead, you want cp to copy each file. That's a fairly expensive operation: each file must be read into memory, and written back out to disk in a second location. It actually takes several syscalls, to open, read, write, and close the files, and that has to be repeated for each file. Traditional filesystems work this way on disk, too. There isn't any way to copy a bunch of files, other than to go through each one individually and copy it, and those are the types of filesystems that were in use when the basic command line utilities were designed. | {
"source": [
"https://unix.stackexchange.com/questions/46066",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15607/"
]
} |
46,211 | I'm using the original EeePC with screen resolution of 800x480 . Some screens and dialogs do not fit into that resolution, so I have to use Alt-Drag to move windows around to reveal the bottom part of the window, but this doesn't work for a particular application I'm going to use since it's basically a fullscreen DirectX app running via wine, so some buttons are just cut off by the edge of the screen. Is there a way to make Xorg desktop to render at higher resolution ( 1024px wide or so) and then transparently scaled down to the display's native resolution, so the applications think the resolution is bigger? I do not care much about output getting blurred or text getting too small. Alternatively, is there a way to switch the video adapter to the resolution above physical resolution of the LCD screen and have the screen/video adapter itself to handle scaling (as used to be possible with CRT monitors)? I'm using Lubuntu 12.04 so I guess have Compiz installed. There is Scale plugin in Compiz , but I don't think it does what I need. | In short, you want something like xrandr --output LVDS --scale 1.28x1.28 (replacing LVDS with the desired output name, as seen in the output of running xrandr by itself). Give it a try. Some sites said that this doesn't work on some systems that are using KMS (kernel mode setting); if so, that's a bug that's hopefully fixed. See these links for some more info on using xrandr to scale a screen like this: Increase (scale) LCD resolution under Ubuntu Having a bigger resolution than the native one? Fun with xrandr and tiny Netbook screens :) | {
"source": [
"https://unix.stackexchange.com/questions/46211",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18315/"
]
} |
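A hedged sketch of the full round trip, assuming an xrandr new enough to support --scale: run xrandr with no arguments first to learn the real output name (LVDS, LVDS1, eDP-1, ... vary by driver), then scale, and later undo it:
xrandr
xrandr --output LVDS --scale 1.28x1.28
xrandr --output LVDS --scale 1x1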
46,235 | As I understand this, firewalls (assuming default settings) deny all incoming traffic that has no prior corresponding outgoing traffic. Based on Reversing an ssh connection and SSH Tunneling Made Easy , reverse SSH tunneling can be used to get around pesky firewall restrictions. I would like to execute shell commands on a remote machine. The remote machine has its own firewall and is behind an additional firewall (router). It has an IP address like 192.168.1.126 (or something similar). I am not behind a firewall and I know the remote machine's IP address as seen from the Internet (not the 192.168.1.126 address). Additionally, I can ask someone to execute ssh (something) as root on the remote machine first. Could anyone explain me, step by step, how reverse SSH tunneling works to get around the firewalls (local and remote machines' firewalls and the additional firewall between them)? What is the role of the switches ( -R , -f , -L , -N )? | I love explaining this kind of thing through visualization. :-) Think of your SSH connections as tubes. Big tubes. Normally, you'll reach through these tubes to run a shell on a remote computer. The shell runs in a virtual terminal (tty) through that tube. But you know this part already. Think of your tunnel as another tube within a tube. You still have the big SSH connection, but the -L or -R option lets you set up a smaller tube inside it. Your ssh remote shell actually communicates with you using one of these smaller, embedded tubes attached to stdio. Every tube has a beginning and an end. The big tube, your SSH connection, started with your SSH client and ends up at the SSH server you connected to. All the smaller tubes have the same endpoints, except that the role of "start" or "end" is determined by whether you used -L or -R (respectively) to create them. (You haven't said, but I'm going to assume that the "remote" machine you've mentioned, the one behind the firewall, can access the Internet using Network Address Translation (NAT). This is kind of important, so please correct this assumption if it is false.) When you create a tunnel, you specify an address and port on which it will answer (or "bind"), and an address and port to which it will be delivered. The -L option tells the tunnel to bind on the local side of the tunnel (the host running your client). The -R option tells the tunnel to bind on the remote side (the SSH server). So... To be able to SSH from the Internet into a host behind a firewall, you need the target host to open an SSH connection to a host on the outside and include a -R tunnel whose "entry" point is the "remote" side of its connection. Of the two models shown above, you want the one on the right. From the firewalled host: ssh -f -N -T -R22222:localhost:22 yourpublichost.example.com This tells the client on your target host to establish a tunnel with a -R emote entry point. Anything that attaches to port 22222 on the far end of the tunnel will actually reach "localhost port 22", where "localhost" is from the perspective of the exit point of the tunnel (i.e. your ssh client in this case, on the target host). The other options are: -f tells ssh to background itself after it authenticates, so you don't have to sit around running something like sleep on the remote server for the tunnel to remain alive. -N says that you want an SSH connection, but you don't actually want to run any remote commands. If all you're creating is a tunnel, then including this option saves resources. 
-T disables pseudo-tty allocation, which is appropriate because you're not trying to create an interactive shell. There will be a password challenge unless you have set up a key for a passwordless login. (Note that if you intend to leave a connection open long term, unattended, possibly having it automatically refresh the connection when it goes down (by parsing ssh -O check <remotehost> ), I recommend using a separate, unique SSH key for it that you set up for just this tunnel/customer/server, especially if you are using RemoteForward. Trust no one.) Now that the -R service tunnel is active, you can connect to it from yourpublichost , establish a connection to the firewalled host through the tunnel: ssh -p 22222 username@localhost You'll get a host key challenge, as you've probably never hit this host before. Then you'll get a password challenge for the username account (unless you've set up keys for passwordless login). If you're going to be accessing this host on a regular basis, you can also simplify access by adding a few lines to your ~/.ssh/config file on yourpublichost : host firewalledhost
User firewalleduser
Hostname localhost
Port 22222 Adjust firewalledhost and firewalleduser to suit. The firewalleduser field must match your username on the remote server, but firewalledhost can be any name that suits you, the name doesn't have to match anything resolvable, since your connection is governed by Hostname and Port . Alternately, if you want to reach this from elsewhere on the Internet, you might add the following to your ~/.ssh/config : host firewalledhost
ProxyCommand ssh -fWlocalhost:22222 yourpublichost The -W option is used to open a connection to a remote host in order to continue the SSH conversation. It implies -N and -T . See also: Expose the reverse endpoint on a non-localhost IP Tips on using ControlMaster to maintain your tunnel | {
"source": [
"https://unix.stackexchange.com/questions/46235",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4928/"
]
} |
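One hedged addition for long-lived tunnels of this kind: ssh keep-alive options let the client notice a dead connection instead of holding a stale tunnel, for example
ssh -f -N -T -o ServerAliveInterval=30 -o ServerAliveCountMax=3 -R22222:localhost:22 yourpublichost.example.com
The interval values are only an example; tools such as autossh wrap the same command line and restart it automatically.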
46,244 | I know I can do this to get a list of directory names: find . -type d -maxdepth 1 The output looks like this: .
./foo
./bar I prefer the listing without ./ . Is there a way to get find to output just the raw names? I tried sending the list to stat to format it but that just gives me the same result: find . -type d -maxdepth 1 -print0 | xargs -0 stat -f '%N' | With GNU find you can use the -printf option: find . -maxdepth 1 -type d -printf '%f\n' As noted by PaweΕ in the comments, if you don't want the current directory to be listed add -mindepth 1 , e.g.: find . -mindepth 1 -maxdepth 1 -type d -printf '%f\n' | {
"source": [
"https://unix.stackexchange.com/questions/46244",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4288/"
]
} |
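For comparison, a plain shell sketch produces the same listing without find at all (bash or any POSIX shell assumed); hidden directories are skipped unless the glob is extended:
for d in */; do printf '%s\n' "${d%/}"; done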
46,253 | My filesystem is read-only for several days.
Searching for something in my /var/log/messages I found this. ****EMITTING CHANGED for /sys/devices/pci0000:00/0000:00:11/host0/target0:0:0/0:0:0:0/block/sda
****Refreshing ATA SMART data for /sys/devices/pci0000:00/0000:00:11/host0/target0:0:0/0:0:0:0/block/sda
helper(pid 3495):launched job udisks-helper-ata-smart-collect on /dev/sda
helper(pid 3495):completed with exit code 0 I ran this health test: # smartctl -d ata -H /dev/sda and got the following output:
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED what can I do to prevent my filesystem to get readonly? | With GNU find you can use the -printf option: find . -maxdepth 1 -type d -printf '%f\n' As noted by PaweΕ in the comments, if you don't want the current directory to be listed add -mindepth 1 , e.g.: find . -mindepth 1 -maxdepth 1 -type d -printf '%f\n' | {
"source": [
"https://unix.stackexchange.com/questions/46253",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18251/"
]
} |
46,276 | Is it possible to use the find command to find all the "non-binary" files in a directory? Here's the problem I'm trying to solve. I've received an archive of files from a windows user. This archive contains source code and image files. Our build system doesn't play nice with files that have windows line endings. I have a command line program ( flip -u ) that will flip line endings between *nix and windows. So, I'd like to do something like this find . -type f | xargs flip -u However, if this command is run against an image file, or other binary media file, it will corrupt the file. I realize I could build a list of file extensions and filter with that, but I'd rather have something that's not reliant on me keeping that list up to date. So, is there a way to find all the non-binary files in a directory tree? Or is there an alternate solution I should consider? | I'd use file and pipe the output into grep or awk to find text files, then extract just the filename portion of file 's output and pipe that into xargs. something like: file * | awk -F: '/ASCII text/ {print $1}' | xargs -d'\n' -r flip -u Note that the grep searches for 'ASCII text' rather than any just 'text' - you probably don't want to mess with Rich Text documents or unicode text files etc. You can also use find (or whatever) to generate a list of files to examine with file : find /path/to/files -type f -exec file {} + | \
awk -F: '/ASCII text/ {print $1}' | xargs -d'\n' -r flip -u The -d'\n' argument to xargs makes xargs treat each input line as a separate argument, thus catering for filenames with spaces and other problematic characters. i.e. it's an alternative to xargs -0 when the input source doesn't or can't generate NULL-separated output (such as find 's -print0 option). According to the changelog, xargs got the -d / --delimiter option in Sep 2005 so should be in any non-ancient linux distro (I wasn't sure, which is why I checked - I just vaguely remembered it was a "recent" addition). Note that a linefeed is a valid character in filenames, so this will break if any filenames have linefeeds in them. For typical unix users, this is pathologically insane, but isn't unheard of if the files originated on Mac or Windows machines. Also note that file is not perfect. It's very good at detecting the type of data in a file but can occasionally get confused. I have used numerous variations of this method many times in the past with success. | {
"source": [
"https://unix.stackexchange.com/questions/46276",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22521/"
]
} |
46,301 | Is there such a thing as list of available D-Bus services?
I've stumbled upon a few, like those provided by NetworkManager, Rhythmbox, Skype, HAL. I wonder if I can find a rather complete list of provided services/interfaces. | On QT setups (short commands and clean, human readable output) you can run: qdbus will list list the services available on the session bus and qdbus --system will list list the services available on the system bus. On any setup you can use dbus-send dbus-send --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.ListNames Just like qdbus , if --session or no message bus is specified, dbus will send to the login session message bus. So the above will list the services available on the session bus. Use --system if you want instead to use the system wide message bus: dbus-send --system --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.ListNames You could also use DFeet if you prefer a graphical tool (see the other answers for more GUI options). | {
"source": [
"https://unix.stackexchange.com/questions/46301",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22538/"
]
} |
46,302 | I would like to setup RAID1 so that a ramdisk in the RAID configuration has occasional synchronisation with a physical disk (that is very battery intensive to run, so I hope to let it spinout). Is there a way I can set the commission frequency on the RAID config such that it will only burst-write to the HDD every, let's say, 5 minutes? | On QT setups (short commands and clean, human readable output) you can run: qdbus will list list the services available on the session bus and qdbus --system will list list the services available on the system bus. On any setup you can use dbus-send dbus-send --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.ListNames Just like qdbus , if --session or no message bus is specified, dbus will send to the login session message bus. So the above will list the services available on the session bus. Use --system if you want instead to use the system wide message bus: dbus-send --system --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.ListNames You could also use DFeet if you prefer a graphical tool (see the other answers for more GUI options). | {
"source": [
"https://unix.stackexchange.com/questions/46302",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22539/"
]
} |
46,304 | Relating to question : What if 'kill -9' does not work? I have following situation : zombie process with threads, not collected by init : [root@Arch64]# ps auxH | grep java
gwpl 569 0.0 0.0 0 0 ? Zl 04:23 0:00 [java] <defunct>
gwpl 569 5.5 49.0 1466648 375572 ? Rl 07:25 23:55 [java] <defunct>
gwpl 569 16.0 49.0 1466648 375572 ? Rl 12:27 20:54 [java] <defunct>
gwpl 569 17.9 49.0 1466648 375572 ? Rl 12:47 19:48 [java] <defunct>
root 10466 0.0 0.0 6740 628 pts/0 S+ 14:38 0:00 grep java
[root@Arch64]# pstree -s 569
init---java---3*[{java}] Can I do anything about that ? Or is it init bug as suggested in comment to https://unix.stackexchange.com/a/11173/9689 ? If it's a bug, what should I dump to help fixing it? Above listing uses following status codes: Zl , Rl , S+ . Here is cheatsheet from man ps to decode them: PROCESS STATE CODES
(...)
R Running or runnable (on run queue)
S Interruptible sleep (waiting for an event to complete)
(...)
Z Defunct ("zombie") process, terminated but not reaped by its parent.
For BSD formats and when the stat keyword is used, additional characters may be displayed:
(...)
L has pages locked into memory (for real-time and custom IO)
(...)
+ is in the foreground process group | On QT setups (short commands and clean, human readable output) you can run: qdbus will list list the services available on the session bus and qdbus --system will list list the services available on the system bus. On any setup you can use dbus-send dbus-send --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.ListNames Just like qdbus , if --session or no message bus is specified, dbus will send to the login session message bus. So the above will list the services available on the session bus. Use --system if you want instead to use the system wide message bus: dbus-send --system --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.ListNames You could also use DFeet if you prefer a graphical tool (see the other answers for more GUI options). | {
"source": [
"https://unix.stackexchange.com/questions/46304",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9689/"
]
} |
46,322 | Possible Duplicate: How to remove all empty directories in a subtree? I create directories very often, scattered over my home directory, and I find it very hard to locate and delete them. I want any alias/function/script to find/locate and delete all empty directories in my home directory. | The find command is the primary tool for recursive file system operations.
Use the -type d expression to tell find you're interested in finding directories only (and not plain files). The GNU version of find supports the -empty test, so $ find . -type d -empty -print will print all empty directories below your current directory. Use find ~ -… or find "$HOME" -… to base the search on your home directory (if it isn't your current directory). After you've verified that this is selecting the correct directories, use -delete to delete all matches: $ find . -type d -empty -delete | {
"source": [
"https://unix.stackexchange.com/questions/46322",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22240/"
]
} |
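A hedged sketch for find implementations that lack -delete: with -depth, children are removed before their parents are tested, so one pass usually suffices:
find . -depth -type d -empty -exec rmdir {} \;
GNU find's -delete implies -depth, which is why the shorter command above behaves the same way.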
46,372 | I could use either form to execute the cat command: cat file_name
cat < file_name The result is the same Then I want to execute man in the format of stdin man < file_name While file_name contains: # file_name
cat But it pops up What manual page do you want? instead of execute man cat . I want to know why cat could accept stdin as arguments but man cannot. And what's the difference between command line arguments and stdin ? | Your question is closely related to how the shell you are using parses user input on the command line. If the first word on the command line is a program, located in a special folder (mostly defined by PATH ) and no more special characters are given (depends of the shell you are using), all subsequent words separated by spaces or tabs are passed to the program in a special form i.e. an array. With each word as one element in the array. How the program, you are going to invoke interprets the arguments (located in the array) depends on how it is programmed. There are some quasi standards of how the syntax of the arguments should look like but in general the programmer is entire free. So the first argument can be interpreted as a name of a file or whatever the programmer thoughts of at the time he wrote the program. In the case you add the special character < or > to your command line, the shell dosn't append < and > nor subsequent words to the array that will be passed to the program. With < or > given the shell starts to make fancy things, supported by the underlying kernel (keyword piping ). To grasp what's going on you must understand what STDIN and STDOUT (since it's not immediately related i omit STDERR ) are. Everything visible you see on your terminal (in most cases a part of your display) is either written by the shell or any other program you have invoked previously to a special file (in unix everything is a file ). This file has a special id and is called STDOUT . If a program wants to read data from the keyboard it dosn't poll the keyboard directly (at least in most cases) but reads from a special file called STDIN . Internally this file is connected to your standard input device, your keyboard in most cases. If the shell reads < or > in a parsed command line it manipulates STDIN or STDOUT in a particular kind for the time the corresponding program is running. STDIN and STDOUT dosn't point to the terminal or the standard input device any longer but rather to the subsequent filename on the command line. In the case of the two lines cat file_name
cat < file_name the observed behavior is identical because the corresponding developer makes cat to either read data from STDIN or read the data from the file, whose name is given as the first command line argument (which is the first element in the array the shell passes to cat ). Subsequently cat writes the whole content of file_name or STDIN to the terminal since we don't instruct the shell to manipulate STDOUT . Remember that in the second line your shell manipulates STDIN in this way, that it doesn't point to your standard input device anylonger but points to a file called file_name in your current working directory. In the other case of the line man < file_name man is not meant to read anything from STDIN if it's called with no argument i.e. an empty array. So the line man < file_name equals man For example man will read something from STDIN , too if you pass -l - to man . With this option given on the command line you can display the content of anything man reads from STDIN on your terminal. So man -l - < file_name would work also (but be careful man is not just a pager but also parses the input of the file and so the file content and the displayed content could differ). So how STDIN , STDOUT and the command line arguments are interpreted is all up to the corresponding developer. I hope my answer could clear things up. | {
"source": [
"https://unix.stackexchange.com/questions/46372",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11695/"
]
} |
46,486 | I have few .mdf images, that can be mounted with Alcohol 120% , but on Linux, is that possible? I've tried things similar to mount -o loop -t iso9660 XX.mdf /mnt/iso , but that doesn't work here, I got ISOFS: Unable to identify CD-ROM format . | Try first to convert it into an iso file, with mdf2iso (you have to install it) like this : mdf2iso your_file.mdf Linux cannot mount mdf file (which is a closed format) natively.
Or, you can try to rename it into "your_file.iso" and mount it with the command you gave, but it's not working with every mdf image. Or if you're using an X Server, you can try the software acetoneiso which is basically some sort of Daemon Tools / Alcohol 120% for Linux. | {
"source": [
"https://unix.stackexchange.com/questions/46486",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
46,541 | I have a directory with crash logs, and I'd like to use a conditional statement in a bash script based on a find command. The log files are stored in this format: /var/log/crashes/app-2012-08-28.log
/var/log/crashes/otherapp-2012-08-28.log I want the if statement to only return true if there is a crash log for a specific app which has been modified in the last 5 minutes. The find command that I would use is: find /var/log/crashes -name app-\*\.log -mmin -5 I'm not sure how to incorporate that into an if statement properly. I think this might work: if [ test `find /var/log/crashes -name app-\*\.log -mmin -5` ] then
service myapp restart
fi There are a few areas where I'm unclear: I've looked at the if flags but I'm not sure which one, if any, that I should use. Do I need the test directive or should I just process against the results of the find command directly, or maybe use find... | wc -l to get a line count instead? Not 100% necessary to answer this question, but test is for testing against return codes that commands return? And they are sort of invisible - outside of stdout / stderr ? I read the man page but I'm still pretty unclear about when to use test and how to debug it. | [ and test are synonyms (except [ requires ] ), so you don't want to use [ test : [ -x /bin/cat ] && echo 'cat is executable'
test -x /bin/cat && echo 'cat is executable' test returns a zero exit status if the condition is true, otherwise nonzero. This can actually be replaced by any program to check its exit status, where 0 indicates success and non-zero indicates failure: # echoes "command succeeded" because echo rarely fails
if /bin/echo hi; then echo 'command succeeded'; else echo 'command failed'; fi
# echoes "command failed" because rmdir requires an argument
if /bin/rmdir; then echo 'command succeeded'; else echo 'command failed'; fi However, all of the above examples only test against the program's exit status, and ignore the program's output. For find , you will need to test if any output was generated. -n tests for a non-empty string: if [[ -n $(find /var/log/crashes -name "app-*.log" -mmin -5) ]]
then
service myapp restart
fi A full list of test arguments is available by invoking help test at the bash commandline. If you are using bash (and not sh ), you can use [[ condition ]] , which behaves more predictably when there are spaces or other special cases in your condition. Otherwise it is generally the same as using [ condition ] . I've used [[ condition ]] in this example, as I do whenever possible. I also changed `command` to $(command) , which also generally behaves similarly, but is nicer with nested commands. | {
"source": [
"https://unix.stackexchange.com/questions/46541",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10287/"
]
} |
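On GNU find, -quit is a hedged refinement of the same test: it stops at the first matching log instead of scanning the whole directory,
if [ -n "$(find /var/log/crashes -name 'app-*.log' -mmin -5 -print -quit)" ]; then
    service myapp restart
fi
-print -quit is GNU-specific; the [[ -n $(find ...) ]] form above behaves identically apart from doing a full scan.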
46,645 | The unix sysadmin where I'm working is reluctant to give me access to change my login shell from ksh to bash . He has given various excuses, the funniest being that since they write all their scripts for ksh they won't work if I try to run them. I don't know where he gets these ideas, but since I can't convince him, is there any alternative that I have? ( chsh is installed on these machines, but we use public/private keypairs for logging in, and I don't have any password, so when chsh prompts me for a password I have nothing to give it. ) | When you log in, the file ~/.profile is read by the login shell (ksh for you). You can instruct that login shell to replace itself by bash. You should take some precautions: Only replace the login shell if it's interactive. This is important: otherwise, logging in in graphic mode may not work (this is system-dependent: some but not all systems read ~/.profile when logging in through xdm or similar), and idioms such as ssh foo '. ~/.profile; mycommand' will fail. Check that bash is available, so that you can still log in if the executable isn't there for some reason. You have a choice whether to run bash as a login shell or not. The only major difference in making it a login shell is that it'll load ~/.bash_profile or ~/.profile . So if you run bash as login shell, be very careful to have a ~/.bash_profile or take care not to execute bash recursively from ~/.profile . There is no real advantage of having ~/.profile executed by bash rather than ksh, so I'd recommend not doing it. Also set the SHELL environment variable to bash, so that programs such as terminal emulators will invoke that shell. Here's code to switch to bash. Put it at the end of ~/.profile . case $- in
*i*)
# Interactive session. Try switching to bash.
if [ -z "$BASH" ]; then # do nothing if running under bash already
bash=$(command -v bash)
if [ -x "$bash" ]; then
export SHELL="$bash"
exec "$bash"
fi
fi
esac | {
"source": [
"https://unix.stackexchange.com/questions/46645",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3358/"
]
} |
46,715 | I am trying to grep the ongoing tail of file log and get the n th word from a line. Example file: $ cat > test.txt <<EOL
Beam goes blah
John goes hey
Beam goes what?
John goes forget it
Beam goes okay
Beam goes bye
EOL
^C Now if I do a tail : $ tail -f test.txt
Beam goes blah
John goes hey
Beam goes what?
John goes forget it
Beam goes okay
Beam goes bye
^C If I grep that tail : $ tail -f test.txt | grep Beam
Beam goes blah
Beam goes what?
Beam goes okay
Beam goes bye
^C But if I awk that grep : $ tail -f test.txt | grep Beam | awk '{print $3}' Nothing no matter how long I wait. I suspect it's something to do with the way the stream works. Anyone have any clue? | It's probably output buffering from grep. you can disable that with grep --line-buffered . But you don't need to pipe output from grep into awk. awk can do regexp pattern matching all by itself. tail -f test.txt | awk '/Beam/ {print $3}' | {
"source": [
"https://unix.stackexchange.com/questions/46715",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2372/"
]
} |
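If a longer pipeline is genuinely needed, one hedged workaround is to force line buffering at each stage, for example with GNU coreutils' stdbuf:
tail -f test.txt | stdbuf -oL grep Beam | awk '{print $3; fflush()}'
The single-awk version above avoids the buffering issue entirely, so this is only for cases where grep (or another filter) must stay in the pipe.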
46,786 | I would like to compress a text file using gzip command line tool while keeping the original file. By default running the following command gzip file.txt results in modifying this file and renaming it file.txt.gz . instead of this behavior I would like to have this new compressed file in addition to the existing one file.txt . For now I am using the following command to do that gzip -c file.txt > file.txt.gz It works but I am wondering why there is no easier solution to do such a common task ? Maybe I missed the option doing that ? | For GNU gzip 1.6 or above, FreeBSD and derivatives or recent versions of NetBSD, see don_cristi's answer . With any version, you can use shell redirections as in: gzip < file.txt > file.txt.gz When not given any argument, gzip reads its standard input, compresses it and writes the compressed version to its standard output. As a bonus, when using shell redirections, you don't have to worry about files called "--help" or "-" (that latter one still being a problem for gzip -c -- ). Another benefit over gzip -c file.txt > file.txt.gz is that if file.txt can't be opened, the command will fail without creating an empty file.txt.gz (or overwriting an existing file.txt.gz ) and without running gzip at all. A significant difference compared to gzip -k though is that there will be no attempt at copying the file.txt 's metadata (ownership, permissions, modification time, name of uncompressed file) to file.txt.gz . Also if file.txt.gz already existed, it will silently override it unless you have turned the noclobber option on in your shell (with set -o noclobber for instance in POSIX shells). | {
"source": [
"https://unix.stackexchange.com/questions/46786",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20996/"
]
} |
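For reference, the shorter form referred to above is the -k / --keep option (GNU gzip 1.6 and later, also available on FreeBSD):
gzip -k file.txt
It writes file.txt.gz, keeps file.txt, and carries over the usual metadata, unlike the redirection form.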
46,789 | Is there any variable that cron sets when it runs a program ? If the script is run by cron, I would like to skip some parts; otherwise invoke those parts. How can I know if the Bash script is started by cron ? | I'm not aware that cron does anything to its environment by default that can be of use here, but there are a couple of things you could do to get the desired effect. 1) Make a hard or soft link to the script file, so that, for example, myscript and myscript_via_cron point to the same file. You can then test the value of $0 inside the script when you want to conditionally run or omit certain parts of the code. Put the appropriate name in your crontab, and you're set. 2) Add an option to the script, and set that option in the crontab invocation. For example, add an option -c , which tells the script to run or omit the appropriate parts of the code, and add -c to the command name in your crontab. And of course, cron can set arbitrary environment variables, so you could just put a line like RUN_BY_CRON="TRUE" in your crontab, and check its value in your script. | {
"source": [
"https://unix.stackexchange.com/questions/46789",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
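A minimal sketch of the environment-variable approach (names are arbitrary). In the crontab:
RUN_BY_CRON=TRUE
*/5 * * * * /path/to/myscript
and in the script:
if [ "$RUN_BY_CRON" = "TRUE" ]; then echo "started by cron"; fi
Checking [ -t 0 ] (true only when stdin is a terminal) is another rough signal, but it also fails for any non-interactive invocation, not just cron.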
46,792 | When using Emacs (version 24.1 if that makes any difference), I would like to have a way to remove all the automatic line breaking inserted by the auto-fill minor mode. Disabling auto-fill-mode is enough to avoid inserting new automatic line breaks, but does nothing about cancelling the existing ones. I found a way to do what I want, but I wonder if it's the best way to go about it. I'm describing it here with the hope that it'll help explain my issue better: C-u 1000 C-x f : Set the current-fill-column to an arbitrary large value. C-x h : Select all text in buffer M-q : Re-arrange line breaks according to the new current-fill-column value. If the value is large enough, this will emulate the behavior I'm looking for. M-x auto-fill-mode : Disable Auto Fill mode. I wonder if this is a good way to go about it or whether there's a better way. | I'm not aware that cron does anything to its environment by default that can be of use here, but there are a couple of things you could do to get the desired effect. 1) Make a hard or soft link to the script file, so that, for example, myscript and myscript_via_cron point to the same file. You can then test the value of $0 inside the script when you want to conditionally run or omit certain parts of the code. Put the appropriate name in your crontab, and you're set. 2) Add an option to the script, and set that option in the crontab invocation. For example, add an option -c , which tells the script to run or omit the appropriate parts of the code, and add -c to the command name in your crontab. And of course, cron can set arbitrary environment variables, so you could just put a line like RUN_BY_CRON="TRUE" in your crontab, and check its value in your script. | {
"source": [
"https://unix.stackexchange.com/questions/46792",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4098/"
]
} |
46,827 | I have a function, and I want to execute a key command, but I get the error Trailing characters: function! MyFunction()
if condition
<C-W><C-W>
else
:some_other_command
endif
endfunction It doesn't like the <C-W><C-W> What can I use instead? | The general answer is to use the :normal command, like :exe "normal \<C-W>\<C-w>" The :execute approach is the readable way to get :normal to recognize special characters like control-key combinations. The other approach is :normal ^W^W where each ^W is one character inserted by typing Ctrl-v Ctrl-w . | {
"source": [
"https://unix.stackexchange.com/questions/46827",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22782/"
]
} |
46,915 | In FreeBSD and also in Linux, how can I get the numerical chmod value of a file? For example, 644 instead of -rw-r--r-- ? I need an automatic way for a Bash script. | You can get the value directly using a stat output format, e.g. BSD/OS X: stat -f "%OLp" <file> or in Linux stat --format '%a' <file> and in busybox stat -c '%a' <file> | {
"source": [
"https://unix.stackexchange.com/questions/46915",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7753/"
]
} |
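A small sketch of using the value in a script (GNU stat syntax assumed):
mode=$(stat --format '%a' "$file")
if [ "$mode" != 644 ]; then chmod 644 "$file"; fi
On a mixed BSD/Linux environment the two stat invocations above can be selected with a case on uname.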
46,918 | I'm trying to set up a web server for my first time. I'm running Ubuntu 12.04.1 and i've installed LAMP. I have also set up a static IP for the server, 192.168.0.111 and reserved it in the router settings. So far so good. Now to the problem. I forwarded port 80 to the servers IP address, but the server is not responding to any connections. If i try to access the public IP through a webbrowser when the port is NOT forwarded, i get an error saying "Could not connect to host". If i do forward the port i get a timeout error. Correct me if i'm wrong, but that should mean the port is being forwarded to the server, but what happens after that i don't know. Am i right? And any suggestions about how to troubleshoot this? EDIT: I should also mention that Apache does in fact work and if i type in 127.0.0.1 in a browser, the page does indeed load. I just can't access it from the outside world. | You can get the value directly using a stat output format, e.g. BSD/OS X: stat -f "%OLp" <file> or in Linux stat --format '%a' <file> and in busybox stat -c '%a' <file> | {
"source": [
"https://unix.stackexchange.com/questions/46918",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22820/"
]
} |
46,969 | I'm trying to compress a folder ( /var/www/ ) to ~/www_backups/$time.tar where $time is the current date. This is what I have: cd /var/www && sudo tar -czf ~/www_backups $time" I am completely lost and I've been at this for hours now. Not sure if -czf is correct. I simply want to copy all of the content in /var/www into a $time.tar file, and I want to maintain the file permissions for all of the files. Can anyone help me out? | To tar and gzip a folder, the syntax is: tar czf name_of_archive_file.tar.gz name_of_directory_to_tar Adding - before the options ( czf ) is optional with tar . The effect of czf is as follows: c β create an archive file (as opposed to extract, which is x ) f β filename of the archive file z β filter archive through gzip (remove this option to create a .tar file) If you want to tar the current directory, use . to designate that. To construct filenames dynamically, use the date utility (look at its man page for the available format options). For example: cd /var/www &&
tar czf ~/www_backups/$(date +%Y%m%d-%H%M%S).tar.gz . This will create a file named something like 20120902-185558.tar.gz . On Linux, chances are your tar also supports BZip2 compression with the j rather than z option. And possibly others. Check the man page on your local system. | {
"source": [
"https://unix.stackexchange.com/questions/46969",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22820/"
]
} |
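Two hedged variations on the same command: -C lets tar read the directory without a cd, and listing the archive afterwards is a quick sanity check (replace the archive name with whatever the first command produced):
tar czf ~/www_backups/$(date +%Y%m%d-%H%M%S).tar.gz -C /var/www .
tar tzf ~/www_backups/20120902-185558.tar.gz | head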
47,151 | Suppose that I have a folder containing .txt , .pdf , and other files. I would like to list the "other" files (i.e., files not having the extensions .txt or .pdf ). Do you have any advice on how to do this? I know how to list files not having a given extension. For example, if I want to list all files except the .txt files, then either find -not -iname "*.txt" or ls | grep -v '\.txt$' | column seem to work. But, how can I list everything except .txt files or .pdf files? It seems that I need to use some sort of logical "or" in find or grep . | Assuming one has GNU ls , this is possibly the simplest way: ls -I "*.txt" -I "*.pdf" If you want to iterate across all the subdirectories: ls -I "*.txt" -I "*.pdf" -R | {
"source": [
"https://unix.stackexchange.com/questions/47151",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9605/"
]
} |
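An equivalent sketch with find, for systems whose ls lacks -I (add -type f to limit the output to regular files):
find . -mindepth 1 -maxdepth 1 ! -name '*.txt' ! -name '*.pdf'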
47,163 | I have two different computers with different RAID5 arrays. When I run the command mdadm --detail /dev/md0 one drive reports "active" while the other reports "clean" for the "state" field. What is the difference? Should I worry about either state? Both seem to work fine. | From the RAID arrays documentation in the Linux kernel: clean - no pending writes, but otherwise active.
When written to inactive array, starts without resync
If a write request arrives then
if metadata is known, mark 'dirty' and switch to 'active'.
if not known, block and switch to write-pending
If written to an active array that has pending writes, then fails.
active
fully active: IO and resync can be happening.
When written to inactive array, starts with resync So, no, you don't need to worry about either state; both are normal operation. If you saw inactive , that you'd have to worry about. mdadm includes a raid monitor daemon that will alert you (via email by default) of any conditions you need to be aware of, in particular a failed disk. You should make sure it is configured & running. | {
"source": [
"https://unix.stackexchange.com/questions/47163",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16055/"
]
} |
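Two quick checks that complement mdadm --detail on a Linux software-RAID host:
cat /proc/mdstat
mdadm --monitor --scan --daemonise --mail=root
The first shows array state and any resync progress; the second is one way of starting the monitoring daemon mentioned above, though most distributions already launch it from their own service scripts, so the exact flags here are only an example.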
47,178 | When creating directories, mkdir -m <mode> <dir> provides for creating one or more directories with the given mode/permissions set (atomically). Is there an equivalent for creating files, on the command line? Something akin to: open("file", O_WRONLY | O_APPEND | O_CREAT, 0777); Is using touch followed by a chmod my only option here? Edit: After trying out teppic's suggestion to use install , I ran it through strace to see how close to atomic it was. The answer is, not very: $ strace install -m 777 /dev/null newfile
...
open("newfile", O_WRONLY|O_CREAT|O_EXCL, 0666) = 4
fstat(4, {st_mode=S_IFREG|0666, st_size=0, ...}) = 0
...
fchmod(4, 0600) = 0
close(4) = 0
...
chmod("newfile", 0777) = 0
... Still, it's a single shell command and one I didn't know before. | You could use the install command (part of GNU coreutils) with a dummy file, e.g. install -b -m 755 /dev/null newfile The -b option backs up newfile if it already exists. You can use this command to set the owner as well. | {
"source": [
"https://unix.stackexchange.com/questions/47178",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22934/"
]
} |
47,185 | Looking through the GNU Coreutils , I spotted the factor command, that I had never noticed before. Reading the man page: Print the prime factors of each specified integer NUMBER. If none
are specified on the command line, read them from standard input. Is there a practical use for factor , or is it just a demonstration / toy package? | Wikipedia, "Factor (Unix)" with an interesting take: factor first appeared on 5th edition Research Unix in 1974, as a "user maintained" utility (section 6 of the manual). In the 7th edition in 1979, it was moved into the main "commands" section of the manual (section 1). From there, the factor utility was copied to all other variants of Unix, including commercial Unixes and BSD. In some variants of Unix, it is classified as a "game" more than a serious utility, and therefore documented in section 6. So it would seem that some user(s) liked to play around with prime factors and wrote factor - and once it existed, there probably was no good reason not to include it as a command in subsequent Unix versions. So the "practical uses" of factor may depend on what you consider practical - if you are into prime number theory, it is probably a great tool/game/whatever. | {
"source": [
"https://unix.stackexchange.com/questions/47185",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20470/"
]
} |
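A quick illustration of the command itself (GNU coreutils output format):
$ factor 42
42: 2 3 7
$ echo 57 | factor
57: 3 19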
47,208 | When I do a lspci -k on my Kubuntu with a 3.2.0-29-generic kernel I can see something like this: 01:00.0 VGA compatible controller: NVIDIA Corporation G86 [Quadro NVS 290] (rev a1)
Subsystem: NVIDIA Corporation Device 0492
Kernel driver in use: nvidia
Kernel modules: nvidia_current, nouveau, nvidiafb There is a kernel driver nvidia and kernel modules nvidia_current , nouveau , nvidiafb . Now I wondered what might be the difference between Kernel drivers and Kernel modules? | A kernel module is a bit of compiled code that can be inserted into the kernel at run-time, such as with insmod or modprobe . A driver is a bit of code that runs in the kernel to talk to some hardware device. It "drives" the hardware. Most every bit of hardware in your computer has an associated driver.ΒΉ A large part of a running kernel is driver code.Β² A driver may be built statically into the kernel file on disk.Β³ A driver may also be built as a kernel module so that it can be dynamically loaded later. (And then maybe unloaded.) Standard practice is to build drivers as kernel modules where possible, rather than link them statically to the kernel, since that gives more flexibility. There are good reasons not to, however: Sometimes a given driver is absolutely necessary to help the system boot up. That doesn't happen as often as you might imagine, due to the initrd feature. Statically built drivers may be exactly what you want in a system that is statically scoped, such as an embedded system . That is to say, if you know in advance exactly which drivers will always be needed and that this will never change, you have a good reason not to bother with dynamic kernel modules. If you build your kernel statically and disable Linux's dynamic module loading feature, you prevent run-time modification of the kernel code. This provides additional security and stability at the expense of flexibility. Not all kernel modules are drivers. For example, a relatively recent feature in the Linux kernel is that you can load a different process scheduler . Another example is that the more complex types of hardware often have multiple generic layers that sit between the low-level hardware driver and userland, such as the USB HID driver , which implements a particular element of the USB stack , independent of the underlying hardware. Asides: One exception to this broad statement is the CPU chip, which has no "driver" per se . Your computer may also contain hardware for which you have no driver. The rest of the code in an OS kernel provides generic services like memory management , IPC , scheduling , etc. These services may primarily serve userland applications, as with the examples linked previously, or they may be internal services used by drivers or other intra-kernel infrastructure. The one in /boot , loaded into RAM at boot time by the boot loader early in the boot process . | {
"source": [
"https://unix.stackexchange.com/questions/47208",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17859/"
]
} |
47,230 | I need to execute multiple commands using nohup . Each command should be executed after the previous command. I used this command as an example: nohup wget $url && wget $url2 > /dev/null 2>&1 & However that command did not work. What command should I use for this purpose? | Wrap it in sh -c : nohup sh -c 'wget "$0" && wget "$1"' "$url1" "$url2" > /dev/null & | {
"source": [
"https://unix.stackexchange.com/questions/47230",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19087/"
]
} |
47,271 | I'm trying to force GNU screen to create a "virtual" terminal, without attaching to it, execute script inside and NOT terminate session once script ends. I tried many combinations, including: screen -dmS udplistener /share/Sys/autorun/start_udp_listeners.sh or screen -S udplistener -X /share/Sys/autorun/start_udp_listeners.sh and none of them worked. I either get session without executed script, script executes, but session is terminated once it finishes or I'm getting "No screen session found" error. What I'm basically trying to do is to run UDP listener, written in PHP and make it work in infinte loop (don't break listening). Yes -- I could run PHP script with & at the end, forcing PHP CLI to run as daemon. The problem is, that I'm using a piece of junk called server (QNAP -- never, ever buy this junk!) which does seems to be ignoring this. As soon as I logoff SSH session, scripts stops. So screen seems to be the only option. But I can't understand, why it terminates session once executed command or script ends? EDIT : I've also tried example found in the Internet: screen -dmS name
screen -S name -p windowname -X stuff 'mc
' No luck! After attaching to it ( screen -R name ) I see that Midnight Commander HASN'T been executed, though the example's author said it would be. | To keep screen busy after the script completes, just keep something persistent running in a window. The simplest choice for that "something" is probably an interactive shell. Here's one way to do it (assuming bash as the choice of interactive shell): screen -dmS session_name sh -c '/share/Sys/autorun/start_udp_listeners.sh; exec bash' -dm : starts screen in detached mode -S : sets session name for screen for easier retrieval later on sh -c '...' : instead of simply running your script, which will terminate, use sh -c to run multiple commands exec bash : after the script terminates, the sh from above will switch over to an interactive shell ( bash ), which should never exit until something external terminates it. This will keep screen open as long as the bash instance is alive. | {
"source": [
"https://unix.stackexchange.com/questions/47271",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20549/"
]
} |
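After starting the detached session, a quick way to confirm the script is still running is:
screen -ls
screen -r session_name
The first lists detached sessions; the second reattaches to the one created above (detach again with Ctrl-a d).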
47,304 | I have an existing CentOS installation which I'd like to install extra packages to. The packages to be installed were supplied to me in a list, one package per line, which looks like: ....
Cluster_Administration-en-US.noarch
ElectricFence.x86_64
GConf2.i386
GConf2.x86_64
GConf2-devel.i386
GConf2-devel.x86_64
Global_File_System-en-US.noarch
ImageMagick.i386
... Using this text file, is there a way to install every package listed? I suspect the list is actually a list of 'all' packages which could have been installed when the operating system was originally set up. | Yes, do this: yum -y install $(cat file_name) | {
"source": [
"https://unix.stackexchange.com/questions/47304",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22978/"
]
} |
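If the list might contain comments or blank lines, a hedged variant filters it first before handing the names to yum:
grep -v '^#' file_name | xargs -r yum -y install
With a clean one-package-per-line file, the $(cat file_name) form above is equivalent.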
47,330 | Possible Duplicate: What does a kernel source tree contain? Is this related to Linux kernel headers? I know that if I want to compile my own Linux kernel I need the Linux kernel headers, but what exactly are they good for? I found out that under /usr/src/ there seem to be dozens of C header files. But what is their purpose, aren't they included in the kernel sources directly? | The header files define an interface: they specify how the functions in the source file are defined. They are used so that a compiler can check if the usage of a function is correct as the function signature (return value and parameters) is present in the header file.
For this task the actual implementation of the function is not necessary. You could do the same with the complete kernel sources but you will install a lot of unnecessary files. Example: if I want to use the function int foo(double param); in a program I do not need to know how the implementation of foo is, I just need to know that it accepts a single param ( double ) and returns an integer. | {
"source": [
"https://unix.stackexchange.com/questions/47330",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17859/"
]
} |
47,344 | There is a yes command in unix/linux which basically infinitely prints y to the stdout . What is the point of it, and how is it useful? | yes can be used to send an affirmative (or negative; e.g. yes n)
response to any command that would otherwise request one, thereby
causing the command to run non-interactively. The yes command in conjunction with the head command can be used to
generate large volume files for means of testing. It can also be used to test how well a system handles high loads, as
using yes results in 100% processor usage, for systems with a single
processor (for a multiprocessor system, a process must be run for each
processor). This, for example, can be useful for investigating whether
a system's cooling system will be effective when the processor is
running at 100%. In 2006, the yes command received publicity for being a means to test
whether or not a user's MacBook is affected by the Intermittent
Shutdown Syndrome. By running the yes command twice via Terminal under
Mac OS X, users were able to max out their computer's CPU, and thus
see if the failure was heat related via wikipedia: http://en.wikipedia.org/wiki/Yes_(Unix) | {
"source": [
"https://unix.stackexchange.com/questions/47344",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23000/"
]
} |
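Two small illustrations of the uses described above (the second command is a generic example, not from the quoted text):
yes | head -n 1000000 > bigfile.txt
yes n | rm -i *.tmp
The first quickly generates a large test file; the second feeds an automatic "n" to every prompt from rm -i.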
47,359 | What is the purpose of the .xsession file in the home folder? What should be put in there? The desktop environments don't use that file and for the X startup from the tty there is .xinitrc . | If you log in in text mode then start a GUI session with xinit or with the wrapper script startx , then xinit does the following things: Start an X server (typically through the script /etc/X11/xinit/xserverrc ). Usually run some scripts in /etc/X11 (typically /etc/X11/xinit/xinitrc ), depending on how it's set up. Run ~/.xinitrc , if it exists. If it doesn't exist, run a default client (traditionally xterm ). Once ~/.xinitrc terminates, kill the X server. If you log in in graphical mode on an X display manager (xdm, gdm, kdm, wdm, lightdm, β¦), traditionally, what is executed after you log in is some scripts in /etc/X11 then ~/.xsession . ~/.xsession has the role of ~/.profile and ~/.xinitrc combined: it's supposed to perform the initial startup of your session (e.g. define environment variables), then launch programs specific to the GUI (usually at least window manager). Nowadays, most X display managers give you a choice of a session. Choosing a particular session launched a specific desktop environment, session manager, window manager. What is executed then is only that DE/SM/WM and whatever programs it chooses to start based on whatever configuration files it chooses to read. Many environments provide a βcustom sessionβ that reads the traditional ~/.xsession . | {
"source": [
"https://unix.stackexchange.com/questions/47359",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11397/"
]
} |
47,363 | I've compiled the last emacs version from the source code (v24.2) because the version installed on my machine is (quite) old for me (v21.3). I've done the usual: $configure --prefix=$HOME
make
make install Now I am testing emacs and realized that it still launches the previous version ... while my $HOME/bin path is supposed to override the system one (since it is prepended to $PATH in my .bashrc file). My first thought was to see the which command output. And surprise, it gives the path to the new emacs. I can't understand where is the discrepancy here. In the same session here is the different outputs: $ emacs --version
GNU Emacs 21.3.1
$ `which emacs` --version
GNU Emacs 24.2.1 I have no alias involving emacs. At all. $ alias | grep emacs
$ Any idea what is going on please? | The three possibilities that come to mind for me: An alias exists for emacs (which you've checked) A function exists for emacs The new emacs binary is not in your shell's PATH hashtable. You can check if you have a function emacs : bash-3.2$ declare -F | fgrep emacs
declare -f emacs And remove it: unset -f emacs Your shell also has a PATH hashtable which contains a reference to each binary in your PATH. If you add a new binary with the same name as an existing one elsewhere in your PATH, the shell needs to be informed by updating the hashtable: hash -r Additional explanation: which doesn't know about functions, as it is not a bash builtin: bash-3.2$ emacs() { echo 'no emacs for you'; }
bash-3.2$ emacs
no emacs for you
bash-3.2$ which emacs
/usr/bin/emacs
bash-3.2$ `which emacs` --version | head -1
GNU Emacs 22.1.1 New binary hashtable behaviour is demonstrated by this script. bash-3.2$ PATH=$HOME/bin:$PATH
bash-3.2$ cd $HOME/bin
bash-3.2$ cat nofile
cat: nofile: No such file or directory
bash-3.2$ echo echo hi > cat
bash-3.2$ chmod +x cat
bash-3.2$ cat nofile
cat: nofile: No such file or directory
bash-3.2$ hash -r
bash-3.2$ cat nofile
hi
bash-3.2$ rm cat
bash-3.2$ cat nofile
bash: /Users/mrb/bin/cat: No such file or directory
bash-3.2$ hash -r
bash-3.2$ cat nofile
cat: nofile: No such file or directory Although I didn't call it, which cat would always return the first cat in my PATH, because it doesn't use the shell's hashtable. | {
"source": [
"https://unix.stackexchange.com/questions/47363",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14118/"
]
} |
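A short check that usually pinpoints which of the three cases applies (bash assumed):
type -a emacs
It lists any alias, any function and every matching file in PATH, in lookup order; hash -d emacs forgets just the one stale hashtable entry instead of flushing the whole table with hash -r.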
47,367 | How can I bulk replace the prefix for many files? I have a lot of files like TestSRConnectionContext.h TestSRConnectionContext.m I would like to change all them to CLConnectionContext.h CLConnectionContext.m How would I do this? | for name in TestSR*
do
newname=CL"$(echo "$name" | cut -c7-)"
mv "$name" "$newname"
done This uses bash command substitution to remove the first 6 characters from the input filename via cut , prepends CL to the result, and stores that in $newname . Then it renames the old name to the new name. This is performed on every file. cut -c7- specifies that only characters after index 7 should be returned from the input. 7- is a range starting at index 7 with no end; that is, until the end of the line. Previously, I had used cut -b7- , but -c should be used instead to handle character encodings that could have multiple bytes per character, like UTF-8. | {
"source": [
"https://unix.stackexchange.com/questions/47367",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23014/"
]
} |
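Two hedged alternatives to the cut pipeline: bash parameter expansion strips a known prefix directly, and the Perl rename utility (sometimes installed as prename) does the whole job in one line:
for name in TestSR*; do mv "$name" "CL${name#TestSR}"; done
rename 's/^TestSR/CL/' TestSR*
The second form assumes the Perl flavour of rename; the util-linux version would instead be rename TestSR CL TestSR*.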
47,407 | Say I have a huge text file (>2GB) and I just want to cat the lines X to Y (e.g. 57890000 to 57890010). From what I understand I can do this by piping head into tail or viceversa, i.e. head -A /path/to/file | tail -B or alternatively tail -C /path/to/file | head -D where A , B , C and D can be computed from the number of lines in the file, X and Y . But there are two problems with this approach: You have to compute A , B , C and D . The commands could pipe to each other many more lines than I am interested in reading (e.g. if I am reading just a few lines in the middle of a huge file) Is there a way to have the shell just work with and output the lines I want? (while providing only X and Y )? | I suggest the sed solution, but for the sake of completeness, awk 'NR >= 57890000 && NR <= 57890010' /path/to/file To cut out after the last line: awk 'NR < 57890000 { next } { print } NR == 57890010 { exit }' /path/to/file Speed test (here on macOS, YMMV on other systems): 100,000,000-line file generated by seq 100000000 > test.in Reading lines 50,000,000-50,000,010 Tests in no particular order real time as reported by bash 's builtin time 4.373 4.418 4.395 tail -n+50000000 test.in | head -n10
5.210 5.179 6.181 sed -n '50000000,50000010p;57890010q' test.in
5.525 5.475 5.488 head -n50000010 test.in | tail -n10
8.497 8.352 8.438 sed -n '50000000,50000010p' test.in
22.826 23.154 23.195 tail -n50000001 test.in | head -n10
25.694 25.908 27.638 ed -s test.in <<<"50000000,50000010p"
31.348 28.140 30.574 awk 'NR<57890000{next}1;NR==57890010{exit}' test.in
51.359 50.919 51.127 awk 'NR >= 57890000 && NR <= 57890010' test.in These are by no means precise benchmarks, but the difference is clear and repeatable enough* to give a good sense of the relative speed of each of these commands. *: Except between the first two, sed -n p;q and head|tail , which seem to be essentially the same. | {
"source": [
"https://unix.stackexchange.com/questions/47407",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
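Building on the line-range answer above: if you only want to supply X and Y, the sed variant can be parameterised with shell variables. A sketch, with X and Y assumed to hold the first and last line numbers:
X=57890000; Y=57890010
sed -n "${X},${Y}p;${Y}q" /path/to/file
The trailing ${Y}q tells sed to quit at line Y, so it does not keep reading the rest of a multi-gigabyte file.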
47,434 | I am keen to know the difference between curl and wget . Both are used to get files and documents but what the key difference between them. Why are there two different programs? | The main differences are: wget 's major strong side compared to curl is its ability to download recursively. wget is command line only. There's no lib or anything, but curl 's features are powered by libcurl. curl supports FTP , FTPS , HTTP , HTTPS , SCP , SFTP , TFTP , TELNET , DICT , LDAP , LDAPS , FILE , POP3 , IMAP , SMTP , RTMP and RTSP . wget supports HTTP , HTTPS and FTP . curl builds and runs on more platforms than wget . wget is released under a free software copyleft license (the GNU GPL). curl is released under a free software permissive license (a MIT derivate). curl offers upload and sending capabilities. wget only offers plain HTTP POST support. You can see more details at the following link: curl vs Wget | {
"source": [
"https://unix.stackexchange.com/questions/47434",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20620/"
]
} |
47,436 | Tuxfiles says the following about the Linux directory structure: /var : This directory contains variable data that changes constantly when the system is running. FHS on /var says the following: /var contains variable data files. This includes spool directories and files, administrative and logging data, and transient and temporary files. They then go on to say that things like logs, mail and the spooler are put in that folder. Traditionally A stock installation of Apache or Nginx or Arch on Ubuntu Linux will place the directory at /var/www/ . It doesn't seem to me like the ideal place to put a directory with files or otherwise content that is supposed to be almost permanent. Why is it so often put into /var ? More subjectively, is this where it should ideally go, according to the directory structure? | Usage of /var/www is confusing only at first sight. According to the FHS, web server data should go to /srv . That is the main rule. However, it also says that deciding about the structure of /srv is the sole responsibility of the local administrator! Therefore packages must not put anything into /srv , and the default document root must not be /srv , because the (apache) package does not know what is in /srv and below it. Maybe a subversion repository with clear text password and other things as well. So there must be a default outside of /srv . That default become /var/www . /var/www is mostly a placeholder. Packages use /usr/share for static HTML content, or /var/lib for dynamic variable content. Many people mistakenly thought that they should then put HTML into /var/www . That is a problem, because packages occasionally use that too. So recently they invented /var/www/html for packages. Hopefully people will not start to use that because then again they have to invent a new directory... and so on. Summary: you should use /srv and configure your Apache virtual hosts accordingly. | {
"source": [
"https://unix.stackexchange.com/questions/47436",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11724/"
]
} |
47,555 | It's years I use Linux systems on a daily basis, and I never had major problems by updating a system when it was running, but I still wonder why this is possibile. Let me make an example. Suppose a program "A" from a certain package is running on a system. This program, at a certain point, needs to open another file ("B") from the same package. After that, program "A" closes "B" because it doesn't need it anymore. Suppose now I update the package "A" and "B" belong to. "A" is not directly affected by this operations, at least for the moment, since it is running in RAM and the update just replaced "A" on the hard disk. Suppose "B" has been replaced on the filesystem, too. Now "A" needs to read "B" again for some reason. The question is: is it possible that "A" could find an incompatible version of "B" and crash or malfunction in some other way? Why nobody update their systems by rebooting with a live CD or some similar procedure? | Updating Userland is Rarely a Problem You can often update packages on a live system because: Shared libraries are stored in memory, not read from disk on each call, so the old versions will remain in use until the application is restarted. Open files are actually read from file-descriptors , not the file names, so the file contents remain available to the running applications even when moved/renamed/deleted until the sectors are over-written or the file descriptors are closed. Packages that require reloading or restarting are usually handled properly by the package manager if the package has been well-designed. For example, Debian will restart certain services whenever libc6 is upgraded. Generally, unless you're updating your kernel and aren't using ksplice, then programs or services may need to be restarted to take advantage of an update. However, there's rarely a need to reboot a system to update anything in userland, although on desktops it's occasionally easier than restarting individual services. See Also http://en.wikipedia.org/wiki/Ring_%28computer_security%29#Supervisor_mode | {
"source": [
"https://unix.stackexchange.com/questions/47555",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
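A practical companion to the answer above: you can usually spot processes that are still running against old, since-replaced files by looking for deleted mappings. A rough sketch (output format differs between lsof versions, and Debian's checkrestart or needrestart automate the same check):
lsof 2>/dev/null | grep -i 'deleted'
Restarting the listed services picks up the updated files; only a kernel update normally calls for a reboot.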
47,557 | In bash, I know that it is possible to write a for loop in which some loop control variable i iterates over specified integers. For example, I can write a bash shell script that prints the integers between 1 and 10: #!/bin/bash
for i in {1..10}
do
echo $i
done Is it possible to instead iterate over a loop control variable that is a string, if I provide a list of strings? For example, suppose that I have a string fname that represents a file name. I want to call a set of commands for each file name. For example, I might want to print the contents of fname using a command like this: #!/bin/bash
for fname in {"a.txt", "b.txt", "c.txt"}
do
echo $fname
done In other words, on the first iteration, fname should have the value fname="a.txt" , while on the second iteration, fname should have the value fname="b.txt" , and so on. Unfortunately, it seems that the above syntax is not quite correct. I would like to obtain the output: a.txt b.txt c.txt but when I try the above code, I obtain this output: {a.txt, b.txt, c.txt} Can you please help me determine the correct syntax, so that I can iteratively change the value/contents of the variable fname ? Thank you for your time. | The correct syntax is as follows: #!/bin/bash
for fname in a.txt b.txt c.txt
do
echo $fname
done | {
"source": [
"https://unix.stackexchange.com/questions/47557",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9605/"
]
} |
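A small extension of the accepted syntax above: if the list of names is built elsewhere or reused, a bash array keeps the quoting manageable. A sketch:
files=("a.txt" "b.txt" "c.txt")
for fname in "${files[@]}"; do
    echo "$fname"
done
Quoting "${files[@]}" preserves names that contain spaces.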
47,584 | This question is a sequel of sorts to my earlier question . The users on this site kindly helped me determine how to write a bash for loop that iterates over string values. For example, suppose that a loop control variable fname iterates over the strings "a.txt" "b.txt" "c.txt" . I would like to echo "yes!" when fname has the value "a.txt" or "c.txt" , and echo "no!" otherwise. I have tried the following bash shell script: #!/bin/bash
for fname in "a.txt" "b.txt" "c.txt"
do
echo $fname
if [ "$fname" = "a.txt" ] | [ "$fname" = "c.txt" ]; then
echo "yes!"
else
echo "no!"
fi
done I obtain the output: a.txt no! b.txt no! c.txt yes! Why does the if statement apparently yield true when fname has the value "a.txt" ? Have I used | incorrectly? | If you want to say OR use double pipe ( || ). if [ "$fname" = "a.txt" ] || [ "$fname" = "c.txt" ] (The original OP code using | was simply piping the output of the left side to the right side, in the same way any ordinary pipe works.) After many years of comments and misunderstanding, allow me to clarify. To do OR you use || . Whether you use [ or [[ or test or (( all depends on what you need on a case by case basis. It's wrong to say that one of those is preferred in all cases. Sometimes [ is right and [[ is wrong. But that's not what the question was. OP asked why | didn't work. The answer is because it should be || instead. | {
"source": [
"https://unix.stackexchange.com/questions/47584",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9605/"
]
} |
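For completeness, the same OR test can also be written with bash's [[ ]] keyword, where || goes inside a single test. A sketch:
if [[ "$fname" = "a.txt" || "$fname" = "c.txt" ]]; then
    echo "yes!"
else
    echo "no!"
fi
Which form to prefer is situational, as the answer notes; [[ ]] is bash/ksh/zsh only, while two [ ] tests joined by || are POSIX.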
47,688 | I use logrotate to rotate Apache access-, error- and rewrite-logs. My config file looks like this: /apache/*log {
compress
dateext
rotate 365
size=+300M
olddir /log/old/apache
notifempty
missingok
lastaction
/bin/apache reload
endscript
} My problem is that whenever a rotation occurs, Apache has to be reloaded because Apache no longer writes to the just-rotated logfile.
Is there a way to avoid Apache reloads every time logrotate does a rotation? | The reason that apache needs a reload is that once it's opened a file, it gets a filehandle to it, and it will keep writing to that filehandle. When you move the file, it doesn't see that, it just keeps writing to the same handle. When you do a reload, it'll open the file again and get a new handle. To avoid the reload, instead of moving the file, you can copy it and empty the old file. That way apache can keep writing to the same filehandle. You do this by adding the option "copytruncate" to the logrotate config file, like this: /apache/*log {
copytruncate
compress
dateext
rotate 365
size=+300M
olddir /log/old/apache
notifempty
missingok
} | {
"source": [
"https://unix.stackexchange.com/questions/47688",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20867/"
]
} |
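If you switch to copytruncate as suggested above, it is easy to dry-run the new configuration before the next real rotation. A sketch, with the config path assumed:
logrotate -d /etc/logrotate.d/apache
-d runs logrotate in debug mode, reporting what would be rotated without touching any files.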
47,695 | I have 2 graphics cards on my laptop. One is IGP and another discrete. I've written a shell script to to turn off the discrete graphics card. How can I convert it to systemd script to run it at start-up? | There are mainly two approaches to do that: With script If you have to run a script, you don't convert it but rather run the script via a systemd service: Therefore you need two files: the script and the .service file (unit configuration file). Make sure your script is executable and the first line (the shebang ) is #!/bin/sh . Then create the .service file in /etc/systemd/system (a plain text file, let's call it vgaoff.service ). For example: the script: /usr/bin/vgaoff the unit file: /etc/systemd/system/vgaoff.service Now, edit the unit file. Its content depends on how your script works: If vgaoff just powers off the gpu, e.g.: exec blah-blah pwrOFF etc then the content of vgaoff.service should be: [Unit]
Description=Power-off gpu
[Service]
Type=oneshot
ExecStart=/usr/bin/vgaoff
[Install]
WantedBy=multi-user.target If vgaoff is used to power off the GPU and also to power it back on, e.g.: start() {
exec blah-blah pwrOFF etc
}
stop() {
exec blah-blah pwrON etc
}
case $1 in
start|stop) "$1" ;;
esac then the content of vgaoff.service should be: [Unit]
Description=Power-off gpu
[Service]
Type=oneshot
ExecStart=/usr/bin/vgaoff start
ExecStop=/usr/bin/vgaoff stop
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target Without script For the most trivial cases, you can do without the script and execute a certain command directly: To power off: [Unit]
Description=Power-off gpu
[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo OFF > /whatever/vga_pwr_gadget/switch"
[Install]
WantedBy=multi-user.target To power off and on: [Unit]
Description=Power-off gpu
[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo OFF > /whatever/vga_pwr_gadget/switch"
ExecStop=/bin/sh -c "echo ON > /whatever/vga_pwr_gadget/switch"
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target Enable the service Once you're done with the files, enable the service: systemctl enable vgaoff.service It will start automatically on next boot. You could even enable and start the service in one go with systemctl enable --now vgaoff.service as of systemd v.220 (on older setups you'll have to start it manually). For more details see systemd.service manual page. Troubleshooting How to see full log of a systemd service? systemd service exit codes and status information explanation | {
"source": [
"https://unix.stackexchange.com/questions/47695",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5459/"
]
} |
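After enabling a unit such as the vgaoff.service above, it is worth checking that it actually ran. A short sketch of the usual checks, unit name taken from the answer:
systemctl start vgaoff.service     # run it once by hand, without rebooting
systemctl status vgaoff.service
journalctl -u vgaoff.service
With Type=oneshot plus RemainAfterExit=yes, status shows the unit as active (exited) after a successful start.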
47,710 | For use in a shell script, I'm looking for a command-line way to get the destination of a symbolic link. The closest I've come so far is stat -N src , which outputs src -> dst . Of course I could parse the output and get dst , but I wonder if there is some direct way of getting the destination. | Another option would be to use the specifically designed command readlink if available. E.g. $ readlink -f `command -v php`
/usr/bin/php7.1 | {
"source": [
"https://unix.stackexchange.com/questions/47710",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15654/"
]
} |
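One caveat on the readlink answer above: readlink -f canonicalises the whole path and follows links recursively, while plain readlink prints only the immediate target. A sketch, with illustrative paths:
readlink /usr/bin/php       # e.g. /etc/alternatives/php
readlink -f /usr/bin/php    # e.g. /usr/bin/php7.1
Where available, realpath behaves much like readlink -f.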
47,724 | Is it possible to change the write permissions on a file from inside emacs, without killing/re-opening the buffer? Sometimes I forget to modify the permissions on a file before opening it. I can modify the permissions from inside emacs ( M-! chmod u+w filename ) but this doesn't update the buffer which remains write protected and refuses to modify the file. Is there a way to update permissions inside the buffer? Bonus point if I can assign this to a shortcut! | After changing the file mode, and before doing any edit, run M-x revert-buffer to reload the file. If the file is now writable, the buffer will no longer be read-only. Alternatively, type C-x C-q ( read-only-mode ). This makes the buffer no longer read-only. You can edit and even save, but you'll get a confirmation prompt asking whether you want to overwrite the read-only file. | {
"source": [
"https://unix.stackexchange.com/questions/47724",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4098/"
]
} |
47,771 | Possible Duplicate: Is it possible to follow a command (run repeatedly)? as one would follow a file using tail -f? I would like to monitor files that are being downloaded to a directory in real time on screen in bash. Is there an easy way in Linux to do the equivalent of tail -f but on a directory, perhaps using ls? | Use the "watch" command: watch ls This will run the "ls" command every 2 seconds. | {
"source": [
"https://unix.stackexchange.com/questions/47771",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8977/"
]
} |
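A small refinement of the watch answer above: watch can also highlight changes and run a whole pipeline, which suits watching a download directory. A sketch, directory assumed:
watch -d -n 1 'ls -lt ~/Downloads | head'
-d highlights what changed between refreshes and -n 1 refreshes every second instead of the default two.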
47,793 | Is it possible to type a bash command inside vi and get the stdout? I find it often tedious to close and reopen vi just because I want to look something up in the shell. | Yes, e.g if you want to do ls , try: :!ls To spawn a shell, use :shell | {
"source": [
"https://unix.stackexchange.com/questions/47793",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20867/"
]
} |
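Related to the answer above: besides running a command with :!ls, you can pull its output straight into the buffer with :r, for example :r !ls inserts the directory listing below the current line and :r !date inserts the current date. These are standard ex commands, so they work in plain vi as well as vim.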
47,805 | I need to print the directory structure of our production system and I would like to remove some specific directories from the tree. How do we specify multiple ignore patterns for the tree command? | You simply provide all the patterns to the -I option, separated by | . From the manpage: -P pattern
List only those files that match the wild-card pattern. Note:
you must use the -a option to also consider those files beginβ
ning with a dot `.' for matching. Valid wildcard operators are
`*' (any zero or more characters), `?' (any single character),
`[...]' (any single character listed between brackets (optional
- (dash) for character range may be used: ex: [A-Z]), and
`[^...]' (any single character not listed in brackets) and `|'
separates alternate patterns.
-I pattern
Do not list those files that match the wild-card pattern. So, for example tree -I 'test*|docs|bin|lib' skips the 'docs', 'bin', and 'lib', directories, and any directory with 'test' in the name, wherever they may lie within the directory hierarchy. Obviously, you can apply wildcards for much more powerful matching. | {
"source": [
"https://unix.stackexchange.com/questions/47805",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21500/"
]
} |
47,814 | I search the terminal command history by pressing Ctrl r but what if: This is an old command
This is an | less -S older command I press Ctrl r and then I type "this is an" and the old command comes up but not the older one. How can I search all the "this is an" commands? Is it possible to pipe all similar commands to grep or something? If I set -o vi , how do I undo it? | To search for a command in the history press ctrl+r multiple times ;-) You can also grep through the history using: history | grep YOUR_STRING | {
"source": [
"https://unix.stackexchange.com/questions/47814",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15221/"
]
} |
47,832 | E.g. I have a file (produced with echo -e "var1\tvar2\t\var3\tvar4" > foo ) that are output as: $ cat foo
case elems meshing nlsys
uniform 2350 0.076662 2.78
non-conformal 348 0.013332 0.55
scale 318 0.013333 0.44
smarter 504 0.016666 0.64
submodel 360 .009999 0.40
unstruct-quad 640 0.019999 0.80
unstruct-tri 1484 0.01 0.88 I'd prefer the output like this (here I used vim and :set tabstop=14 ): case elems meshing nlsys
uniform 2350 0.076662 2.78
non-conformal 348 0.013332 0.55
scale 318 0.013333 0.44
smarter 504 0.016666 0.64
submodel 360 .009999 0.40
unstruct-quad 640 0.019999 0.80
unstruct-tri 1484 0.01 0.88 I can get the same functionality with cat if I use $ tabs=15 in bash (see this question ). Is there a program that does this kind of formatting automatically? I don't want to experiment with the tabs value before cat ing a file. | I usually use the column program for this, it's in a package called bsdmainutils on Debian: column -t foo Output: case elems meshing nlsys
uniform 2350 0.076662 2.78
non-conformal 348 0.013332 0.55
scale 318 0.013333 0.44
smarter 504 0.016666 0.64
submodel 360 .009999 0.40
unstruct-quad 640 0.019999 0.80
unstruct-tri 1484 0.01 0.88 Excerpt from column(1) on my system: ...
-t Determine the number of columns the input contains and create a
table. Columns are delimited with whitespace, by default, or
with the characters supplied using the -s option. Useful for
pretty-printing displays.
... | {
"source": [
"https://unix.stackexchange.com/questions/47832",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6958/"
]
} |
47,858 | How can I search a wild card name in all subfolders? What would be the equivalent of DOS command: dir *pattern* /s in *nix? | You can use find . If, for example, you wanted to find all files and directories that had abcd in the filename, you could run: find . -name '*abcd*' | {
"source": [
"https://unix.stackexchange.com/questions/47858",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22558/"
]
} |
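Two variations on the find answer above that often come in handy. A sketch:
find . -iname '*abcd*'            # case-insensitive match
find . -type f -name '*abcd*'     # match regular files only, skip directories
Quoting the pattern matters, otherwise the shell may expand the * before find sees it.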
47,880 | The package qqq.deb installs the program qqq that should run from uqqq user account. The package consist of the qqq program, qqq.conf config file and /etc/init.d/qqq initscript. How should the package manage the creation of user uqqq ? Are there any best practices or official guidelines about this? Just create the user automatically uqqq in postinst; Create the user automatically on first startup from /etc/init.d/qqq script; Create the user automatically on first startup of qqq program (without arguments) Don't create any user accounts, refuse to start unless the user is explicitly created by administrator (for example, using qqq --create-user ); Don't create any user accounts, run unsafely from root by default; Interactively ask in postinst, init.d script or the qqq itself whether to create a user. Should the package remove the user account when uninstalled? | The Debian wiki has some more comprehensive and specific guidance than the already-mentioned Debian Policy Manual. See AccountHandlingInMaintainerScripts : The adduser program does the right thing if called with the --system option. It is thus usually only necessary to call adduser --system $USERNAME in your postinst to create the account with logins disabled, a primary group of nogroup and a home directory under /home. If you want other options, add them as you want to. It should normally not be necessary to cross-check with getent whether an account already exists since adduser --system generally does the right thing. If not, please report a bug against adduser to keep your maintainer scripts simple. The advice it provides on deleting accounts is inconclusive. However, I will note that the corresponding advice for fedora does not equivocate. Do not remove users or groups We never remove users or groups created by packages. There's no sane way to check if files owned by those users/groups are left behind (and even if there would, what would we do with them?) and leaving those behind with ownerships pointing to now nonexistent users/groups may result in security issues when a semantically unrelated user/group is created later and reuses the UID/GID. Also, in some setups deleting the user/group might not be possible or/nor desirable (eg. when using a shared, remote user/group database). Cleanup of unused users/groups is left to the system administrators to take care of if they so desire. | {
"source": [
"https://unix.stackexchange.com/questions/47880",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17594/"
]
} |
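To make the quoted wiki advice concrete, here is a minimal postinst sketch for the hypothetical qqq package from the question; the home directory and extra options are assumptions and depend on what the daemon needs:
#!/bin/sh
set -e
if [ "$1" = "configure" ]; then
    adduser --system --quiet --home /var/lib/qqq --no-create-home uqqq
fi
#DEBHELPER#
adduser --system can be re-run on upgrades: if the system user already exists it just reports that and exits successfully, which is why the wiki says no getent cross-check is needed.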
47,909 | I try to transfer files from remote computer using ssh to my computer : scp My_file.txt user_id@server:/Home This should put My_file.txt in the home folder on my own computer, right?
I get scp/Home: permission denied Also when I try: ...@server:/Desktop , in order to copy the files from the remote computer to my desktop. What am I doing wrong? | Your commands are trying to put the new Document to the root ( / ) of your machine. What you want to do is to transfer them to your home directory (since you have no permissions to write to / ). If path to your home is something like /home/erez try the following: scp My_file.txt user_id@server:/home/erez/ You can substitute the path to your home directory with the shortcut ~/ , so the following will have the same effect: scp My_file.txt user_id@server:~/ You can even leave out the path altogether on the remote side; this means your home directory. scp My_file.txt user_id@server: That is, to copy the file to your desktop you might want to transfer it to /home/erez/Desktop/ : scp My_file.txt user_id@server:/home/erez/Desktop/ or using the shortcut: scp My_file.txt user_id@server:~/Desktop/ or using a relative path on the remote side, which is interpreted relative to your home directory: scp My_file.txt user_id@server:Desktop/ As @ckhan already mentioned, you also have to swap the arguments, it has to be scp FROM TO So if you want to copy the file My_file.txt from the server user_id@server to your desktop you should try the following: scp user_id@server:/path/to/My_file.txt ~/Desktop/ If the file My_file.txt is located in your home directory on the server you may again use the shortcut: scp user_id@server:~/My_file.txt ~/Desktop/ | {
"source": [
"https://unix.stackexchange.com/questions/47909",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23256/"
]
} |
47,918 | Assuming a simple grep such as: $ ps aux | grep someApp
1000 11634 51.2 0.1 32824 9112 pts/1 SN+ 13:24 7:49 someApp This provides much information, but as the first line of the ps command is missing there is no context for the info. I would prefer that the first line of ps be shown as well: $ ps aux | someMagic someApp
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
1000 11634 51.2 0.1 32824 9112 pts/1 SN+ 13:24 7:49 someApp Of course, I could add a regex to grep specifically for ps: $ ps aux | grep -E "COMMAND|someApp" However, I would prefer a more general solution as there are other cases in which I would like to have the first line as well. Seems like this would be a good use case for a "stdmeta" file descriptor . | Good way Normally you can't do this with grep but you can use other tools. AWK was already mentioned but you can also use sed , like this: sed -e '1p' -e '/youpattern/!d' How it works: Sed utility works on each line individually, running specified commands on each of them. You can have multiple commands, specifying several -e options. We can prepend each command with a range parameter that specifies if this command should be applied to specific line or not. "1p" is a first command. It uses p command which normally prints all the lines. But we prepend it with a numerical value that specifies the range it should be applied to. Here, we use 1 which means first line. If you want to print more lines, you can use x,yp where x is first line to print, y is last line to print. For example to print first 3 lines, you would use 1,3p Next command is d which normally deletes all the lines from buffer. Before this command we put yourpattern between two / characters. This is the other way (first was to specify which lines as we did with p command) of addressing lines that the command should be running at. This means the command will only work for the lines that match yourpattern . Except, we use ! character before d command which inverts its logic. So now it will remove all the lines that do not match specified pattern. At the end, sed will print all the lines that are left in buffer. But we removed lines that do not match from the buffer so only matching lines will be printed. To sum up: we print 1st line, then we delete all the lines that do not match our pattern from input. Rest of the lines are printed (so only lines that do match the pattern). First line problem As mentioned in comments, there is a problem with this approach. If specified pattern matches also first line, it will be printed twice (once by p command and once because of a match). We can avoid this in two ways: Adding 1d command after 1p . As I already mentioned, d command deletes lines from buffer and we specify it's range by number 1, which means it will only delete 1st line. So the command would be sed -e '1p' -e '1d' -e '/youpattern/!d' Using 1b command, instead of 1p . It's a trick. b command allows us to jump to other command specified by a label (this way some commands can be omitted). But if this label is not specified (as in our example) it just jumps to the end of commands, ignoring rest of the commands for our line. So in our case, last d command won't remove this line from buffer. Full example: ps aux | sed -e '1b' -e '/syslog/!d' Using semicolon Some sed implementations can save you some typing by using semicolon to separate commands instead of using multiple -e options. So if you don't care about being portable the command would be ps aux | sed '1b;/syslog/!d' . It works at least in GNU sed and busybox implementations. Crazy way Here's, however, rather crazy way to do this with grep. It's definitely not optimal, I'm posting this just for learning purposes, but you may use it for example, if you don't have any other tool in your system: ps aux | grep -n '.*' | grep -e '\(^1:\)\|syslog' How it works First, we use -n option to add line numbers before each line. 
We want to number all the lines, so we are matching .* - anything, even an empty line. As suggested in comments, we can also match '^', the result is the same. Then we are using extended regular expressions so we can use the \| special character, which works as OR. So we match if the line starts with 1: (first line) or contains our pattern (in this case it's syslog ). Line numbers problem Now the problem is, we are getting these ugly line numbers in our output. If this is a problem, we can remove them with cut , like this: ps aux | grep -n '.*' | grep -e '\(^1:\)\|syslog' | cut -d ':' -f2- The -d option specifies the delimiter, -f specifies the fields (or columns) we want to print. So we want to cut each line at every : character and print only the 2nd and all subsequent columns. This effectively removes the first column with its delimiter, and this is exactly what we need. | {
"source": [
"https://unix.stackexchange.com/questions/47918",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9760/"
]
} |
47,932 | I don't have much experience using tee, so I hope this is not very basic. After viewing one of the answers to this question I came across a strange behaviour with tee . In order for me to output the first line, and a found line, I can use this: ps aux | tee >(head -n1) | grep syslog
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
syslog 806 0.0 0.0 34600 824 ? Sl Sep07 0:00 rsyslogd -c4 However, the first time I ran this (in zsh) the result was in the wrong order, the column headers were below the grep results (this did not happen again however), so I tried to swap the commands around: ps aux | tee >(grep syslog) | head -n1
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND Only the first line is printed, and nothing else! Can I use tee to redirect to grep, or am I doing this in the wrong manner? As I was typing this question, the second command actually worked once for me, I ran it again five times and then back to the one line result. Is this just my system? (I am running zsh within tmux). Finally, why with the first command is "grep syslog" not shown as a result (there is only one result)? For control here is the grep without the tee ps aux | grep syslog
syslog 806 0.0 0.0 34600 824 ? Sl Sep07 0:00 rsyslogd -c4
henry 2290 0.0 0.1 95220 3092 ? Ssl Sep07 3:12 /usr/bin/pulseaudio --start --log-target=syslog
henry 15924 0.0 0.0 3128 824 pts/4 S+ 13:44 0:00 grep syslog Update: It seems that head is causing the whole command to truncate (as indicated in the answer below) the below command is now returning the following: ps aux | tee >(grep syslog) | head -n1
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
syslog 806 | $ ps aux | tee >(head -n1) | grep syslog
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
syslog 806 0.0 0.0 34600 824 ? Sl Sep07 0:00 rsyslogd -c4 The grep and head commands start at about the same time, and both receive the same input data at their own leisure, but generally, as data becomes available. There are some things that can introduce the 'unsynchronized' output which flips lines; for example: The multiplexed data from tee actually gets sent to one process before the other, depending primarily on the implementation of tee . A simple tee implementation will read some amount of input, and then write it twice: Once to stdout and once to its argument. This means that one of those destinations will get the data first. However, pipes are all buffered. It is likely that these buffers are 1 line each, but they might be larger, which can cause one of the receiving commands to see everything it needs for output (ie. the grep ped line) before the other command ( head ) has received any data at all. Notwithstanding the above, it's also possible that one of these commands receives the data but is unable to do anything with it in time, and then the other command receives more data and processes it quickly. For example, even if head and grep are sent the data one line at a time, if head doesn't know how to deal with it (or gets delayed by kernel scheduling), grep can show its results before head even gets a chance to. To demonstrate, try adding a delay: ps aux | tee >(sleep 1; head -n1) | grep syslog This will almost certainly output the grep output first. $ ps aux | tee >(grep syslog) | head -n1
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND I believe you often only get one line here, because head receives the first line of input and then closes its stdin and exits. When tee sees that its stdout has been closed, it then closes its own stdin (output from ps ) and exits. This could be implementation-dependent. Effectively, the only data that ps gets to send is the first line (definitely, because head is controlling this), and maybe some other lines before head & tee close their stdin descriptors. The inconsistency with whether the second line appears is introduced by timing: head closes stdin, but ps is still sending data. These two events are not well-synchronized, so the line containing syslog still has a chance of making it to tee 's argument (the grep command). This is similar to the explanations above. You can avoid this problem altogether by using commands that wait for all input before closing stdin/exiting. For example, use awk instead of head , which will read and process all its lines (even if they cause no output): ps aux | tee >(grep syslog) | awk 'NR == 1' But note that the lines can still appear out-of-order, as above, which can be demonstrated by: ps aux | tee >(grep syslog) | (sleep 1; awk 'NR == 1') Hope this wasn't too much detail, but there are a lot of simultaneous things interacting with each other. Separate processes run simultaneously without any synchronization, so their actions on any particular run can vary; sometimes it helps to dig deep into the underlying processes to explain why. | {
"source": [
"https://unix.stackexchange.com/questions/47932",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18056/"
]
} |
47,940 | I have a fresh Mint 13 Maya (MATE edition) with Compiz enabled. I also have a two-monitor set-up with the nVidia drivers with TwinView. I have set up 4 horizontal workspaces via CompizConfig Settings Manager. Here is the behavior I try to eliminate: I open a window, say, browser, and put it to workspace number 2. When I switch to workspace 3 or 4, then back to workspace 2, the window is gone. It has jumped to workspace 1 somehow. This annoys a lot. Could anyone help? Here are some details: If you stay on workspace 2 windows never jump If you switch to workspace 1 and back to worspace 2 - windows do not jump. They seem to jump only if you switch to the right I have 2 monitors, and this behavior occurs only with windows displayed on the left monitor | $ ps aux | tee >(head -n1) | grep syslog
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
syslog 806 0.0 0.0 34600 824 ? Sl Sep07 0:00 rsyslogd -c4 The grep and head commands start at about the same time, and both receive the same input data at their own leisure, but generally, as data becomes available. There are some things that can introduce the 'unsynchronized' output which flips lines; for example: The multiplexed data from tee actually gets sent to one process before the other, depending primarily on the implementation of tee . A simple tee implementation will read some amount of input, and then write it twice: Once to stdout and once to its argument. This means that one of those destinations will get the data first. However, pipes are all buffered. It is likely that these buffers are 1 line each, but they might be larger, which can cause one of the receiving commands to see everything it needs for output (ie. the grep ped line) before the other command ( head ) has received any data at all. Notwithstanding the above, it's also possible that one of these commands receives the data but is unable to do anything with it in time, and then the other command receives more data and processes it quickly. For example, even if head and grep are sent the data one line at a time, if head doesn't know how to deal with it (or gets delayed by kernel scheduling), grep can show its results before head even gets a chance to. To demonstrate, try adding a delay: ps aux | tee >(sleep 1; head -n1) | grep syslog This will almost certainly output the grep output first. $ ps aux | tee >(grep syslog) | head -n1
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND I believe you often only get one line here, because head receives the first line of input and then closes its stdin and exits. When tee sees that its stdout has been closed, it then closes its own stdin (output from ps ) and exits. This could be implementation-dependent. Effectively, the only data that ps gets to send is the first line (definitely, because head is controlling this), and maybe some other lines before head & tee close their stdin descriptors. The inconsistency with whether the second line appears is introduced by timing: head closes stdin, but ps is still sending data. These two events are not well-synchronized, so the line containing syslog still has a chance of making it to tee 's argument (the grep command). This is similar to the explanations above. You can avoid this problem altogether by using commands that wait for all input before closing stdin/exiting. For example, use awk instead of head , which will read and process all its lines (even if they cause no output): ps aux | tee >(grep syslog) | awk 'NR == 1' But note that the lines can still appear out-of-order, as above, which can be demonstrated by: ps aux | tee >(grep syslog) | (sleep 1; awk 'NR == 1') Hope this wasn't too much detail, but there are a lot of simultaneous things interacting with each other. Separate processes run simultaneously without any synchronization, so their actions on any particular run can vary; sometimes it helps to dig deep into the underlying processes to explain why. | {
"source": [
"https://unix.stackexchange.com/questions/47940",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23274/"
]
} |
48,018 | Why is Perl installed by default with most Linux distributions? | The answer is/isn't sexy, depending on your point of view. Perl is very useful. Lots of the system utilities are written in or depend on perl. Most systems won't operate properly if Perl is uninstalled. A few years ago FreeBSD went through a lot of effort to remove Perl as a dependency for the base system. It wasn't an easy task. | {
"source": [
"https://unix.stackexchange.com/questions/48018",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
48,059 | Please suggest me any particular unnecessary file that I can clean to back everything to normal condition(temporarily). (i.e. any log or archieve or anything ). My var/log has only 40MB and Home directory has 3GB of space(so I believe that's not a problem). Other than that what I can clean up to make space. [user@host]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_inamivm-lv_root
18G 17G 0 100% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
/dev/sda1 485M 71M 389M 16% /boot I am in a debian machine. UPDATE1: output of cd /; du -sxh * 6.1M bin
61M boot
156K dev
22M etc
3.3G home
306M lib
18M lib64
16K lost+found
4.0K media
4.0K mnt
408K opt
du: cannot access `proc/18605/task/18605/fd/4': No such file or directory
du: cannot access `proc/18605/task/18605/fdinfo/4': No such file or directory
du: cannot access `proc/18605/fd/4': No such file or directory
du: cannot access `proc/18605/fdinfo/4': No such file or directory
0 proc
208K root
9.7M sbin
0 selinux
4.0K srv
0 sys
8.0K tmp
536M usr
187M var Update2 Output of ls -la / dr-xr-xr-x. 22 root root 4096 Aug 7 08:42 .
dr-xr-xr-x. 22 root root 4096 Aug 7 08:42 ..
-rw-r--r--. 1 root root 0 Aug 7 08:42 .autofsck
dr-xr-xr-x. 2 root root 4096 Mar 28 16:53 bin
dr-xr-xr-x. 5 root root 1024 Mar 28 16:54 boot
drwxr-xr-x. 16 root root 3580 Sep 9 03:13 dev
drwxr-xr-x. 69 root root 4096 Aug 23 09:19 etc
drwxr-xr-x. 9 root root 4096 Jun 29 16:10 home
dr-xr-xr-x. 8 root root 4096 Mar 7 2012 lib
dr-xr-xr-x. 9 root root 12288 Mar 28 16:53 lib64
drwx------. 2 root root 16384 Mar 7 2012 lost+found
drwxr-xr-x. 2 root root 4096 Sep 23 2011 media
drwxr-xr-x. 2 root root 4096 Sep 23 2011 mnt
drwxr-xr-x. 3 root root 4096 Mar 7 2012 opt
dr-xr-xr-x. 355 root root 0 Aug 7 08:42 proc
dr-xr-x---. 5 root root 4096 Aug 17 18:27 root
dr-xr-xr-x. 2 root root 4096 May 2 09:13 sbin
drwxr-xr-x. 7 root root 0 Aug 7 08:42 selinux
drwxr-xr-x. 2 root root 4096 Sep 23 2011 srv
drwxr-xr-x. 13 root root 0 Aug 7 08:42 sys
drwxrwxrwt. 3 root root 4096 Sep 13 03:37 tmp
drwxr-xr-x. 13 root root 4096 Mar 28 17:53 usr
drwxr-xr-x. 18 root root 4096 Mar 7 2012 var | daisy's answer to use a graphical tool to visually find large files and directories is probably the best method. However, do note that "graphical tool" does not mean "requires an X server"! The wonderful ncdu program provides the graphical output in the CLI, and works perfectly on remote servers via SSH: $ ncdu /
. 43.7GiB [##########] /home
. 5.9GiB [# ] /usr
1.1GiB [ ] /lib
. 1.1GiB [ ] /var
736.9MiB [ ] /opt
. 324.6MiB [ ] /tmp
218.4MiB [ ] /boot
. 63.8MiB [ ] /etc
10.0MiB [ ] /sbin
8.8MiB [ ] /bin
3.3MiB [ ] /lib32
. 1.0MiB [ ] /run
64.0KiB [ ] /build
! 16.0KiB [ ] /lost+found
8.0KiB [ ] /media
8.0KiB [ ] /mnt
8.0KiB [ ] /.config
4.0KiB [ ] /dev
4.0KiB [ ] /lib64
e 4.0KiB [ ] /srv
e 4.0KiB [ ] /selinux
! 4.0KiB [ ] /root
e 4.0KiB [ ] /cdrom
. 0.0 B [ ] /proc
. 0.0 B [ ] /sys
@ 0.0 B [ ] initrd.img.old
@ 0.0 B [ ] initrd.img
@ 0.0 B [ ] vmlinuz.old Then, after entering /var/ for instance: . 395.3MiB [##########] /tmp
. 365.0MiB [######### ] /cache
. 297.8MiB [####### ] /lib
16.1MiB [ ] /backups
. 8.0MiB [ ] /log
. 56.0KiB [ ] /spool
40.0KiB [ ] /games
8.0KiB [ ] /www
e 4.0KiB [ ] /opt
e 4.0KiB [ ] /mail
e 4.0KiB [ ] /local
e 4.0KiB [ ] /crash
@ 0.0 B [ ] lock
@ 0.0 B [ ] run Install easily on Debian or Ubuntu: $ sudo apt-get install ncdu Install easily on CentOS as root: # yum install ncdu | {
"source": [
"https://unix.stackexchange.com/questions/48059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22698/"
]
} |
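If installing ncdu is not an option (a full root filesystem can get in the way of package installation), the same hunt works with GNU coreutils alone. A sketch:
du -xh / 2>/dev/null | sort -rh | head -n 20
-x keeps du on one filesystem so /proc, network mounts and so on are skipped, and sort -rh orders the human-readable sizes largest first.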
48,101 | I have a server running with the timezone set to UTC . It seemed like that was generally a good practice (please correct me if I'm wrong). Anyhow, one of the servers I connect to, in order to scp files, is running on EDT and stores files that I need to copy in the format /path/to/filename/data20120913 I looked at trying to rsync files using something like find's -mtime -1 flag for files modified in the last day, but I didn't have any luck. I don't mind just using scp to copy the current day's file, but as of right now there is a 4-hour window where running date +%Y%m%d will give a different day on each server and that bugs me a little. Looking through man date I see that I can have the time output as UTC , but I don't see a way to have it output as another timezone like EDT I suppose I could also use something like the GNU date extension date -d 20100909 +%s to get the date in seconds from the epoch, apply a manual 4 * 60 * 60 second calculation, and see about rendering that as a date - but then when daylight time kicks in it will still be an hour off. Is there a simpler way to output the date in a YYYYMMDD format for EDT on a server that is set to UTC ? | You can set a timezone for the duration of the query, thusly: TZ=America/New_York date Note the whitespace between the TZ setting and the date command. In Bourne-like and rc -like shell, that sets the TZ variable only for the command line. In other shells ( csh , tcsh , fish ), you can always use the env command instead: env TZ=America/New_York date tl;dr On Linux systems. timezones are defined in files in the /usr/share/zoneinfo directory. This structure is often referred to as the "Olson database" to honor its founding contributor. The rules for each timezone are defined as text file lines which are then compiled into a binary file. The lines so compiled, define the zone name; a range of data and time during which the zone applies; an offset from UTC for the standard time; and the notation for defining how transition to-and-from daylight saving time occurs, if applicable. For example, the directory "America" contains the requisite information for New York in the file America/New_York as used, above. Beware that the specification of a non-existent zone (file name) is silently ignored and UTC times are reported. For example, this reports an incorrect time: TZ="America/New York" date ### WRONG ### The Single UNIX Specification, version-3, known as SUSv3 or POSIX-2001, notes that for portability, the character string that identifies the timezone description should begin with a colon character. Thus, we can also write: TZ=":America/New_York" date
TZ=":America/Los_Angeles" date As an alternative method to the specification of timezones using a pathname to a description file, SUSv3 describes the POSIX model. In this format, a string is defined as: std offset [dst[offset][,start-date[/time],end-date[/time]]] where std is the standard component name and dst is the daylight saving one. Each name consists of three or more characters. The offset is positive for timezones west of the prime meridian and negative for those east of the meridian. The offset is added to the local time to obtain UTC (formerly known as GMT). The start and end time fields indicate when the standard/daylight transitions occur. For example, in the Eastern United States, standard time is 5-hours earlier than UTC, and we can specify EST5EDT in lieu of America/New_York . These alternatives are not always recognized, however, especially for zones outside of the United States and are best avoided. HP-UX (an SUSv3 compliant UNIX) uses textual rules in /usr/lib/tztab and the POSIX names like EST5EDT, CST6CDT, MST7MDT, PST8PDT. The file includes all of the historical rules for each time zone, akin to the Olson database. NOTE: You should be able to find all of the timezones by inspecting the following directory: /usr/share/zoneinfo . | {
"source": [
"https://unix.stackexchange.com/questions/48101",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10287/"
]
} |
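Tying the accepted answer back to the original file-naming problem: the per-command TZ setting combines with a date format string, so a UTC server can compute the remote side's calendar day directly. A sketch:
TZ=America/New_York date +%Y%m%d
fname="data$(TZ=America/New_York date +%Y%m%d)"
Unlike a hand-rolled 4-hour offset, this follows daylight-saving transitions automatically.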
48,106 | I just saw this in an init script: echo $"Stopping Apache" What is that dollar-sign for? My research so far: I found this in the bash manual: extquote If set, $'string' and $"string" quoting is performed within ${parameter} expansions enclosed in double quotes. This option is enabled by default. ...but I'm not finding any difference between strings with and without the $ prefix: $ echo "I am in $PWD"
I am in /var/shared/home/southworth/qed
$ echo $"I am in $PWD"
I am in /var/shared/home/southworth/qed
$ echo $"I am in ${PWD}"
I am in /var/shared/home/southworth/qed
$ echo "I am in ${PWD}"
I am in /var/shared/home/southworth/qed
$ echo 'I am in ${PWD}'
I am in ${PWD}
$ echo $'I am in ${PWD}'
I am in ${PWD}
$ echo $'I am in $PWD'
I am in $PWD | There are two different things going on here, both documented in the bash manual $' Dollar-sign single quote is a special form of quoting: ANSI C Quoting Words of the form $'string' are treated specially. The word expands to string, with backslash-escaped characters replaced as specified by the ANSI C standard. $" Dollar-sign double-quote is for localization: Locale translation A double-quoted string preceded by a dollar sign (β$β) will cause the string to be translated according to the current locale. If the current locale is C or POSIX, the dollar sign is ignored. If the string is translated and replaced, the replacement is double-quoted. | {
"source": [
"https://unix.stackexchange.com/questions/48106",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5639/"
]
} |
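A quick demonstration of the two quoting forms described above. A sketch; note that the output of the $"..." line depends on whether a message catalogue for the current locale exists, and in a C/POSIX locale it behaves like an ordinary double-quoted string:
echo $'Tab:\there\nand a new line'    # ANSI-C escapes are expanded
echo $"Stopping Apache"               # candidate for locale translation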
48,138 | I'm looking for a way to limit a process's disk I/O to a set speed limit. Ideally the program would work similar to this: $ limitio --pid 32423 --write-limit 1M Limiting process 32423 to 1 megabyte per second hard drive writing speed. | That is certainly not a trivial task and can't be done in userspace. Fortunately, it is possible to do on Linux, using the cgroup mechanism and its blkio controller . Setting up cgroups is somewhat distribution-specific, as they may already be mounted or even used somewhere. Here's the general idea, however (assuming you have the proper kernel configuration): mount -t tmpfs cgroup_root /sys/fs/cgroup
mkdir -p /sys/fs/cgroup/blkio
mount -t cgroup -o blkio none /sys/fs/cgroup/blkio Now that you have blkio controller set, you can use it: mkdir -p /sys/fs/cgroup/blkio/limit1M/
echo "X:Y 1048576" > /sys/fs/cgroup/blkio/limit1M/blkio.throttle.write_bps_device Now you have a cgroup limit1M that limits write speed on device with major/minor numbers X:Y to 1MB/s. As you can see, this limit is per device. All you have to do now is to put some process inside of that group and it should be limited: echo $PID > /sys/fs/cgroup/blkio/limit1M/tasks I don't know if/how this can be done on other operating systems. | {
"source": [
"https://unix.stackexchange.com/questions/48138",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23373/"
]
} |
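One practical detail for the cgroup answer above: the X:Y in blkio.throttle.write_bps_device are the major and minor numbers of the block device, which lsblk or stat will show. A sketch, device name assumed:
lsblk -o NAME,MAJ:MIN /dev/sda
stat -c '%t:%T' /dev/sda    # same numbers, printed in hexadecimal
The limit generally has to be set on the whole device (for example 8:0), not on an individual partition.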
48,227 | I tried to delete some directory, but $ rm DE.aspx_files -r
rm: cannot remove `DE.aspx_files': Directory not empty But listing its content returns none $ ls DE.aspx_files
$ Added: Actually $ ls -la DE.aspx_files
total 4
drwx------ 1 ting ting 4096 Sep 14 20:48 .
drwx------ 1 ting ting 0 Sep 13 22:34 ..
-rw------- 1 ting ting 0 Sep 13 22:34 .fuse_hidden0001d4bf00000006 When I try to rm .fuse_hidden0001d4bf00000006 , it is deleted, but another new .fuse_hidden0001d4bf00000007 created. So I wonder what happened, and how to fix this problem? Note: it is a newly bought external portable HDD, and I just copy some files to it using a data recovery program. OS: Ubuntu 12.04 Thanks! | Hidden Files You may have hidden files. You can find them with ls -la to make sure you're okay with really deleting them first. Then you can delete the files before running rm -r or rmdir as needed. Forcing the Recursive Delete You can also just do rm -rf to force the recursive deletion even if the target directory contains files. All the usual warnings apply, but it will get the job done regardless of what your directory contains--as long as you have permissions to delete the files and directories, of course. | {
"source": [
"https://unix.stackexchange.com/questions/48227",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
48,235 | I've copied a large file to a USB disk mounted on a Linux system with async. This returns to a command prompt relatively quickly, but when I type sync , of course, it all has to go to disk, and that takes a long time. I understand that it's going to be slow, but is there somewhere where I can watch a counter go down to zero? Watching buffers in top doesn't help. | Looking at /proc/meminfo will show the Dirty number shrinking over time as all the data spools out; some of it may spill into Writeback as well. That will be a summary against all devices, but in the cases where one device on the system is much slower than the rest you'll usually end up where everything in that queue is related to it. You'll probably find the Dirty number large when you start and the sync finishes about the same time it approaches 0. Try this to get an interactive display: watch -d grep -e Dirty: -e Writeback: /proc/meminfo With regular disks I can normally ignore Writeback , but I'm not sure if it's involved more often in the USB transfer path. If it just bounces up and down without a clear trend to it, you can probably just look at the Dirty number. | {
"source": [
"https://unix.stackexchange.com/questions/48235",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2511/"
]
} |