23,130
I need to inspect all sub-directories and report how many files (without further recursion) they contain: directoryName1 numberOfFiles directoryName2 numberOfFiles
This does it in a safe and portable way. It won't get confused by strange filenames. for f in *; do [ -d ./"$f" ] && find ./"$f" -maxdepth 1 -exec echo \; | wc -l && echo $f; done Note that it will print the number of files first, then the directory name on a separate line. If you wish to keep OP's format you will need further formatting, e.g. for f in *; do [ -d ./"$f" ] && find ./"$f" -maxdepth 1 -exec echo \;|wc -l|tr '\n' ' ' && echo $f; done|awk '{print $2"\t"$1}' If you have a specific set of subdirectories you're interested in, you can replace the * with them. Why is this safe? (and therefore script-worthy) Filenames can contain any character except / . There are a few characters that are treated specially either by the shell or by the commands. Those include spaces, newlines, and dashes. Using the for f in * construct is a safe way of getting each filename, no matter what it contains. Once you have the filename in a variable, you still have to avoid things like find $f . If $f contained the filename -test , find would complain about the option you just gave it. The way to avoid that is by using ./ in front of the name; this way it has the same meaning, but it no longer starts with a dash. Newlines and spaces are also a problem. If $f contained "hello, buddy" as a filename, find ./$f becomes find ./hello, buddy . You're telling find to look at ./hello, and buddy . If those don't exist, it will complain, and it will never look in ./hello, buddy . This is easy to avoid - use quotes around your variables. Finally, filenames can contain newlines, so counting newlines in a list of filenames will not work; you'll get an extra count for every filename with a newline. To avoid this, don't count newlines in a list of files; instead, count newlines (or any other character) that represent a single file. This is why the find command has simply -exec echo \; and not -exec echo {} \; . I only want to print a single newline per file for the purpose of tallying the files.
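A minimal sketch (not part of the original answer) that prints each directory name followed by its file count in the OP's format; -mindepth 1 keeps the directory itself out of the tally:
for f in */; do
  printf '%s %d\n' "${f%/}" "$(find "./$f" -mindepth 1 -maxdepth 1 -exec echo \; | wc -l)"
done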
{ "source": [ "https://unix.stackexchange.com/questions/23130", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11696/" ] }
23,138
My terminal setup is gnome-terminal + tmux + zsh with vi bindings. In applications like vim or even in zsh's command-line vi editing mode, I need to frequently hit the ESC key but there is a small delay before the effects of this key take place. See GNU Screen makes Vim ESC key slow . After some experimentation, I found that hitting the ESC key and immediately another key (say b ) has the same effect as hitting Alt+b . I don't know why this is the case (probably for legacy reasons when there was no Alt ? I don't know). Either way, I have two Alt keys and I don't want this behaviour with my ESC key. I have tried with C+[ and it's the same problem with that too. I'm not sure who is responsible for this, gnome-terminal or tmux or my OS itself (Ubuntu Natty). Any ideas on how to address this would be great. Update : I checked without tmux on a different terminal (LXTerminal) and the delay is present there too.
Here's an actual fix. Add the following to .tmux.conf : set -s escape-time 0 As mentioned in the comments: the setting may not take effect until the tmux server is restarted; tmux kill-server kills the server, and a fresh one (with the new setting) is started the next time you run tmux. Alternatively, you can reload the configuration file from the command prompt inside tmux by typing your tmux prefix (default Ctrl + B ) followed by : and entering source-file ~/.tmux.conf .
{ "source": [ "https://unix.stackexchange.com/questions/23138", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/898/" ] }
23,145
I want to know how I can run a command for a specified time, say one minute, and if it doesn't complete execution by then I should be able to stop it.
Use timeout : NAME timeout - run a command with a time limit SYNOPSIS timeout [OPTION] DURATION COMMAND [ARG]... timeout [OPTION] (Just in case, if you don't have this command or if you need to be compatible with very very old shells and have several other utterly specific requirements… have a look at this question ;-))
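A usage sketch (GNU coreutils timeout; the command name is a placeholder, as the answer only quotes the synopsis):
# stop the command if it has not finished after one minute
timeout 1m ./long_running_task
# also send SIGKILL 10 seconds after the deadline if SIGTERM is ignored;
# GNU timeout exits with status 124 when the time limit was hit
timeout -k 10 1m ./long_running_task || echo "timed out or failed"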
{ "source": [ "https://unix.stackexchange.com/questions/23145", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9630/" ] }
23,156
I am running out of disk space and noted that I have a large /var/cache directory. Can I safely remove this? (using Arch Linux, BTW).
From http://www.lindevdoc.org/wiki//var/cache Sorry for the (very) late answer, but I believe it's important to include this bit for future reference. The part that directly answers this question is the statement that the system can delete the contents of /var/cache at any time. The /var/cache directory contains cached files, i.e. files that were generated and can be re-generated any time, but they are worth storing to save time of recomputing them. Any application can create a file or directory here. It is assumed that files stored here are not critical, so the system can delete the contents of /var/cache either periodically, or when its contents get too large. Any application should take into account that the file stored here can disappear any time, and be ready to recompute its contents (with some time penalty). So yes, you may remove these files without expecting anything bad to happen.
{ "source": [ "https://unix.stackexchange.com/questions/23156", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
23,163
I understand and accept the premise that defensive 1 shell scripting is both prudent and, in the longer term, more sustainable. Many of the answers to text processing questions here follow this principle by building into the answers contingencies for unorthodox filenames; that might contain spaces, dashes and new lines. How prevalent are new lines in filenames? Specifically: Do any applications create filenames that include newlines by default? Are there any situations where it would be desirable to create such filenames? Or are they predominantly an instance of user error? [1] Meaning planning for and managing the broadest possible range of scenarios and contingencies... Question inspired by the (rather plaintive) comment on this question .
I've never seen a file name with a newline other than ones deliberately created to test applications that manipulate file names. File names containing newlines can appear because: Some bug or user error (e.g. a bad copy-paste) resulted in an unintended file name. Some filesystem corruption affected a file name. Someone deliberately created a “strange” file name to exploit a security hole, where an application put more trust in the file names it was passed than it should have. POSIX defines a filename as “a name consisting of 1 to {NAME_MAX} bytes used to name a file. The characters composing the name may be selected from the set of all character values excluding the slash character and the null byte. The filenames dot and dot-dot have special meaning.” There is no guarantee that every filesystem will accept “strange” file names (the only guaranteed characters are ASCII letters, digits, period, hyphen and underscore , i.e. A-Z , a-z , 0-9 and ._- , with hyphen forbidden in first position), but most native filesystems on modern unices do.
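A deliberate test case (a sketch, not from the original answer): bash/ksh/zsh $'…' quoting can create such a name, and find can locate it:
touch $'name with\na newline'   # create a file whose name contains a newline
find . -name $'*\n*'            # list names containing a newline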
{ "source": [ "https://unix.stackexchange.com/questions/23163", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6761/" ] }
23,164
What command line tools exist to list and manage X server grabs? (That's grab as in key and pointer grabs, i.e. restricting the use of a particular key or mouse button to a particular application, or constricting the mouse pointer to remain in a particular screen area.) I'm looking for a hypothetical xgrab utility that would show things like Key 0x00f00ba5 0x123 0 Button 0x00f00ba5 2 Pointer 0x00abcdef meaning that there has been a call to XGrabKey(display, 0x123, 0, 0x00f00ba5, ...) and so on with XGrabButton , XGrabPointer , XGrabKeyboard , XGrabServer (if possible). The display format doesn't matter, what I want is some way to see who's grabbing what, and possibly some way of revoking these grabs (if it's possible, I'm not sure if the X11 API allows that).
Recent versions of X (X.org server ≥1.11) support several debugging keysyms, introduced in this commit . When triggered, these perform actions related to grabs. By default ( at least in recent versions ), these are disabled (absent from the default keymap). However, if you have xdotool installed, it is possible to call them, by executing on the command-line: xdotool key NameOfKey where NameOfKey is the keysym you want to activate. For example, to print a list of active grabs to the X server log, use xdotool key XF86LogGrabInfo . Relevant keysyms are: XF86LogGrabInfo : prints a list of active grabs to the X server log XF86Ungrab : breaks all active grabs, without killing the application that holds the grabs XF86ClearGrab : kills all processes that hold active grabs Note that XF86LogGrabInfo only lists active grabs, not passive grabs such as a grab on a key which isn't currently pressed. If you want to get information about a passive grab, you need to activate the grab: run xdotool key XF86LogGrabInfo while the key chord or mouse button combination you're interested in is pressed. Do something like: Run sleep 1; xdotool key XF86LogGrabInfo Within 1 second, press the key chord or mouse button combination. After 1 second, release the key/button. Check the “Active grab …” information in the X server log (often /var/log/Xorg.0.log ).
{ "source": [ "https://unix.stackexchange.com/questions/23164", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/885/" ] }
23,179
I run the following script: VAR="Test" sh -c 'echo "Hello $VAR"' But I get: # ./test.sh Hello How can I send the variable VAR of my script to the shell created with sh -c '...' ?
Either use export to turn it into an environment variable, or pass it directly to the command. Passing it directly: VAR="Test" sh -c 'echo "Hello $VAR"' Using export: VAR="Test"; export VAR; sh -c 'echo "Hello $VAR"' Avoid using double quotes around the shell code to allow interpolation, as that introduces command injection vulnerabilities like in: sh -c " echo 'Hello $VAR' " causing a reboot if called when $VAR contains something like ';reboot #
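A third pattern worth knowing (a sketch, not part of the original answer): pass the value as a positional parameter, so the inner shell never interpolates untrusted data into its code:
VAR="Test"
sh -c 'echo "Hello $1"' sh "$VAR"   # $0 is set to "sh", $1 to the value of VAR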
{ "source": [ "https://unix.stackexchange.com/questions/23179", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11791/" ] }
23,199
I'm mostly curious, but why aren't network interfaces in /dev? Are there any other kinds of devices that aren't represented as a node under /dev?
On many devices, the main operations are to send bytes from the computer to a peripheral, or to receive bytes from a peripheral on the computer. Such devices are similar to pipes and work well as character devices . For operations that aren't reading and writing (such as flow control on a serial line), the device provides ad-hoc commands called ioctl . Some devices are very much like regular files: they're made of a finite number of bytes, and what you write at a given position can later be read from the same position. These devices are called block devices . Network interfaces are more complex: what they read and write is not bytes but packets. While it would still be possible to use the usual interface with read and write , it would be awkward: presumably each call to write would send a packet, and each call to read would receive a packet (and if the buffer is too small for the packet to fit, the packet would be lost). Network interfaces could exist as devices providing only ioctl . In fact, this is what some unix variants do, but not Linux. There is some advantage to this approach; for example, on Linux, network interfaces could leverage udev . But the advantages are limited, which is why it hasn't been done. Most network-related applications don't care about individual network interfaces, they work at a higher level. For example, a web browser wants to make TCP connections, and a web server wants to listen for TCP connections. For this purpose, what would be useful is devices for high-level network protocols, e.g. { echo $'GET http://www.google.com/ HTTP/1.0\r'; echo $'Host: www.google.com\r'; echo $'\r' >&0; cat; } <>/dev/tcp/www.google.com/80 In fact ksh and bash provide such an interface for TCP and UDP clients. In general, however, network applications are more complex than file-accessing applications. While most data exchanges are conducted with calls analogous to read and write , establishing the connection requires more information than just a file name. For example, listening for TCP connections takes two steps: one to be performed when the server starts listening, and one to be performed each time a client connects. Such extra steps don't fit well into the file API, which is the main reason why networking has its own API. Another class of devices that typically doesn't have entries in /dev on Linux (but does on some other unix variants) is video adapters. In principle, simple video adapters could be exposed as framebuffer devices, which could be block devices made of blocks representing the color of each pixel. Accelerated video adapters could be represented as character devices onto which applications send commands. Here, the drawback of the device interface is that it's slow: the displaying application (in practice, an X server) would need to make kernel calls whenever displaying anything. What happens instead is that the X server mostly writes directly to the memory of the video adapter, because it's faster.
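For reference, a working form of the /dev/tcp idea from the answer, as a bash-specific sketch (the host is only an example):
exec 3<>/dev/tcp/www.google.com/80                           # open a TCP connection on fd 3
printf 'GET / HTTP/1.0\r\nHost: www.google.com\r\n\r\n' >&3  # send the request
cat <&3                                                      # read the response
exec 3>&-                                                    # close the descriptor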
{ "source": [ "https://unix.stackexchange.com/questions/23199", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3639/" ] }
23,227
I have noticed there are two alternative ways of building loops in zsh : for x (1 2 3); do echo $x; done for x in 1 2 3; do echo $x; done They both print: 1 2 3 My question is, why the two syntaxes? Is $x iterating through a different type of object in each of them? Does bash make a similar distinction? Addendum: Why does the following work?: #!/bin/zsh a=1 b=2 c=5 d=(a b c) for x in $d; do print $x;done but this one doesn't?: #!/bin/zsh a=1 b=2 c=5 d=(a b c) # It complains with "parse error near `$d'" for x $d; do print $x;done
Several forms of complex commands such as loops have alternate forms in zsh. These forms are mostly inspired by the C shell , which was fairly common when zsh was young but has now disappeared. These alternate forms act exactly like the normal forms, they're just a different syntax. They're slightly shorter, but less clear. The standard form for the for command is for x in 1 2 3; do echo $x; done , and the standard form for the while command is while test …; do somecommand; done . Ksh, bash and zsh have an alternate form of for : for ((i = 0; i < 42; i++)); do somecommand; done , which mimics the for loops of languages like Pascal or C, to enumerate integers. Other exotic forms that exist in zsh are specific to zsh (but often inspired by csh).
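A small comparison sketch (not from the original answer); the zsh short form uses parentheses, which also explains the addendum, since for x $d has neither the in keyword nor parentheses:
for x in 1 2 3; do echo $x; done   # standard form
for x (1 2 3) echo $x              # zsh short form
for x in $d; do print $x; done     # addendum: works
for x ($d) print $x                # addendum: also works in zsh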
{ "source": [ "https://unix.stackexchange.com/questions/23227", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
23,277
I can't find any documentation on how postfix stores email. Where is it stored, and in what format? I'm using Ubuntu server 11
Probably /var/mail/[username] or the more traditional /var/spool/mail/[username] . The normal format, called "mbox", uses a line that starts with "From " to indicate the start of each message - this is one reason why many email clients will change "From " in the body of the message to ">From ". You can also configure it to use "maildir", in which /var/mail/[username] is a directory and every email message is a file in that directory.
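If maildir delivery is wanted instead of the mbox default, one possible sketch (home_mailbox is a real Postfix parameter, but check your version's documentation):
postconf -e 'home_mailbox = Maildir/'   # deliver to ~/Maildir/ for each user
postfix reload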
{ "source": [ "https://unix.stackexchange.com/questions/23277", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3970/" ] }
23,291
I have two servers. Both servers are in CentOS 5.6. I want to SSH from Server 1 to Server 2 using a private key I have (OpenSSH SSH-2 Private Key). I don't know how to do it over unix. But what I did on windows using Putty was to feed my OpenSSH private key to putty-gen and generate a private key in PPK format. However, I would be creating a bash script from server 1 that will execute some commands on server 2 via SSH. How do I SSH to Server 2 using my private key file from Server 1?
You need your SSH public key and you will need your ssh private key. Keys can be generated with ssh-keygen . The private key must be kept on Server 1 and the public key must be stored on Server 2. This is completely described in the manpage of openssh, so I will quote a lot of it. You should read the section 'Authentication'. Also the openSSH manual should be really helpful: http://www.openssh.org/manual.html Please be careful with ssh because this affects the security of your server. From man ssh : ~/.ssh/identity ~/.ssh/id_dsa ~/.ssh/id_rsa Contains the private key for authentication. These files contain sensitive data and should be readable by the user but not accessible by others (read/write/execute). ssh will simply ignore a private key file if it is accessible by others. It is possible to specify a passphrase when generating the key which will be used to encrypt the sensitive part of this file using 3DES. ~/.ssh/identity.pub ~/.ssh/id_dsa.pub ~/.ssh/id_rsa.pub Contains the public key for authentication. These files are not sensitive and can (but need not) be readable by anyone. This means you can store your private key in your home directory in .ssh. Another possibility is to tell ssh via the -i parameter switch to use a special identity file. Also from man ssh : -i identity_file Selects a file from which the identity (private key) for RSA or DSA authentication is read. The default is ~/.ssh/identity for protocol version 1, and ~/.ssh/id_rsa and ~/.ssh/id_dsa for protocol version 2. Identity files may also be specified on a per-host basis in the configuration file. It is possible to have multiple -i options (and multiple identities specified in configuration files). This is for the private key. Now you need to introduce your public key on Server 2. Again a quote from man ssh : ~/.ssh/authorized_keys Lists the public keys (RSA/DSA) that can be used for logging in as this user. The format of this file is described in the sshd(8) manual page. This file is not highly sensitive, but the recommended permissions are read/write for the user, and not accessible by others. The easiest way to achieve that is to copy the file to Server 2 and append it to the authorized_keys file: scp -p your_pub_key.pub user@host: ssh user@host host$ cat your_pub_key.pub >> ~/.ssh/authorized_keys Authorisation via public key must be allowed for the ssh daemon, see man sshd_config . Usually this can be done by adding the following statement to the config file: PubkeyAuthentication yes
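If ssh-copy-id is available, the same setup can be done in fewer steps (a sketch; user and host names are placeholders):
ssh-keygen -t rsa -f ~/.ssh/id_rsa               # generate the key pair on Server 1, if needed
ssh-copy-id -i ~/.ssh/id_rsa.pub user@server2    # install the public key on Server 2
ssh -i ~/.ssh/id_rsa user@server2 'uptime'       # run a command on Server 2, e.g. from a script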
{ "source": [ "https://unix.stackexchange.com/questions/23291", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6101/" ] }
23,303
I want to diff two sets of mod_rewrite rules. The set of lines are about 90% identical, but the order is so different that diff basically says they are completely different. How can I see which lines are truly different between two files, regardless of their line number?
sort can be used to get the files into the same order so diff can compare them and identify the differences. If you have process substitution, you can use that and avoid creating new sorted files. diff <(sort file1) <(sort file2)
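Two related sketches (not from the original answer): comm makes the per-file differences explicit, and temporary files work in shells without process substitution:
comm -3 <(sort file1) <(sort file2)   # column 1: only in file1, column 2: only in file2
# without process substitution:
sort file1 > file1.sorted
sort file2 > file2.sorted
diff file1.sorted file2.sorted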
{ "source": [ "https://unix.stackexchange.com/questions/23303", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/394/" ] }
23,363
I'm unable to return to the GUI with Ctrl + Alt + F7 (or any of the 12 function keys). I have some unsaved work and I don't want to lose it. Are there any other key combinations that will allow me to switch back? Here is what I did: I pressed Ctrl + Alt + F1 and it showed a text-based login screen as usual Then I pressed Ctrl + Alt + F7 and it showed a screen full of text (I can't remember what they were) Then I pressed Ctrl + Alt + F8 and it showed log messages that resemble /var/log/messages . Some entries are from automount , some from sendmail , and none are errors. Pressing any of the Ctrl + Alt + Fn combinations now has no effect. The caps-lock and num-lock LEDs no longer respond to their corresponding keys. I can use the mouse to highlight the text on the screen, but nothing else. Any idea what happened? I can still login to the system via SSH. GUI applications that I was using (e.g. opera ) are still running and consuming tiny amounts of CPU as usual, as reported by top . Is it possible to switch back to the GUI via the command line? If possible, I don't want to restart X, because doing so will kill all the GUI applications. System info: Red Hat Enterprise Linux Client release 5.7 Linux 2.6.18-238.12.1.el5 SMP x86_64 gnome-desktop: 2.16.0-1.fc6 xorg-x11-server-Xorg: 1.1.1-48.76.el5_7.5 Thanks to Shawn I was able to get back using chvt 9 . Further experiments show that if I go to the 8th virtual terminal (either by Ctrl + Alt + F8 or chvt 8 ), I will not be able to switch to any other terminals using Ctrl + Alt + Fx keys. Not sure if this is a bug.
chvt allows you to change your virtual terminal. From man chvt : The command chvt N makes /dev/ttyN the foreground terminal. (The corresponding screen is created if it did not exist yet. To get rid of unused VTs, use deallocvt(1).) The key combination (Ctrl-)LeftAlt-FN (with N in the range 1-12) usually has a similar effect.
{ "source": [ "https://unix.stackexchange.com/questions/23363", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5902/" ] }
23,383
Currently, when using the ifconfig command, the following IP addresses are shown: own IP, broadcast and mask. Is there a way to show the related gateway IP address as well (on the same screen with all the others, not by using 'route' command)?
You can with the ip command, and given that ifconfig is in the process of being deprecated by most distributions it's now the preferred tool. An example: $ ip route show 212.13.197.0/28 dev eth0 proto kernel scope link src 212.13.197.13 default via 212.13.197.1 dev eth0
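If only the gateway is wanted, a possible refinement (same ip tool; the addresses above are just examples):
ip route show default                            # only the default route(s)
ip route show default | awk '{print $3; exit}'   # the gateway address alone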
{ "source": [ "https://unix.stackexchange.com/questions/23383", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11876/" ] }
23,389
I'd like to contribute to an open source project by providing translated strings. One of their requirements is that contributors must use UTF-8 as the encoding for the PO files . I'm using Vim 7.3 on Linux. How can I be sure that Vim's encoding is set to UTF-8, so that I can edit and save the PO file the right way?
When Vim reads an existing file, it tries to detect the file encoding. When writing out the file, Vim uses the file encoding that it detected (except when you tell it differently). So a file detected as UTF-8 is written as UTF-8, a file detected as Latin-1 is written as Latin-1, and so on. By default, the detection process is crude. Every file that you open with Vim will be assumed to be Latin-1, unless it detects a Unicode byte-order mark at the top. A UTF-8 file without a byte-order mark will be hard to edit because any multibyte characters will be shown in the buffer as character sequences instead of single characters. Worse, Vim, by default, uses Latin-1 to represent the text in the buffer. So a UTF-8 file with a byte-order mark will be corrupted by down-conversion to Latin-1. The solution is to configure Vim to use UTF-8 internally. This is, in fact, recommended in the Vim documentation, and the only reason it is not configured that way out of the box is to avoid creating enormous confusion among users who expect Vim to operate basically as a Latin-1 editor. In your .vimrc , add set encoding=utf-8 and restart Vim. Or instead, set the LANG environment variable to indicate that UTF-8 is your preferred character encoding. This will affect not just Vim but any software which relies on LANG to determine how it should represent text. For example, to indicate that text should appear in English ( en ), as spoken in the United States ( US ), encoded as UTF-8 ( utf-8 ), set LANG=en_US.utf-8 . Now Vim will use UTF-8 to represent the text in the buffer. Plus, it will also make a more determined effort to detect the UTF-8 encoding in a file. Besides looking for a byte-order mark, it will also check for UTF-8 without a byte-order mark before falling back to Latin-1. So it will no longer corrupt a file coded in UTF-8, and it should properly display the UTF-8 characters during the editing session. For more information on how Vim detects the file encoding, see the fileencodings option in the Vim documentation . For more information on setting the encoding that Vim uses internally, see the encoding option . If you need to override the encoding used when writing a file back to disk, see the fileencoding option .
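A sketch of how the permanent setup might look from the shell (the fileencodings value is a common choice, not something the answer mandates):
cat >> ~/.vimrc <<'EOF'
set encoding=utf-8
set fileencodings=ucs-bom,utf-8,latin1
EOF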
{ "source": [ "https://unix.stackexchange.com/questions/23389", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8115/" ] }
23,394
By default, man uses less to output text. How can I tell it to just output to stdout? My terminal emulator has a scroll bar and search function and I want to use those instead of the arrow keys.
Actually it uses whatever is specified in the MANPAGER or the PAGER environment variable. Depending on your man implementation and version there could be also a command line switch to specify the pager. With the man-db implementation I use all the below ways work: MANPAGER=cat man man PAGER=cat man man MANOPT='-P cat' man man man -P cat man To set it permanently, just add it to your ~/.bashrc (or other initialization file used by your shell): export MANPAGER=cat That works with some older man implementations too, while MANOPT is man-db specific: export MANOPT='-P cat' (Better do not set PAGER that way. That one is used by many other applications too.) There could be also a global configuration file. man-db has /etc/man_db.conf or /etc/manpath.config . There you can set: DEFINE pager cat But unfortunately that is taken in consideration only if neither MANPAGER nor PAGER is set.
{ "source": [ "https://unix.stackexchange.com/questions/23394", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
23,419
In Vim, is there an option to write regexes in the same style as Awk? For example, /sp\{0,\}/ would become /sp{0,}/
Preceding your pattern with \v will make the pattern “magic”, and symbols like { and [ have an interpreted meaning (and literals need to be escaped). So /\vsp{1,} would find what you wanted (I just tested it). You can make this a sort of default by remapping / to /\v with the following lines in your vimrc: nnoremap / /\v vnoremap / /\v See :help pattern for more.
{ "source": [ "https://unix.stackexchange.com/questions/23419", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9110/" ] }
23,426
I have several projects that require me to change versions of Java/Grails/Maven. I'm trying to handle this with some scripts that would make the changes. For example: #!/bin/sh export JAVA_HOME=/cygdrive/c/dev/Java/jdk1.5.0_22 export PATH=$JAVA_HOME/bin:$PATH export GRAILS_HOME=/cygdrive/c/dev/grails-1.0.3 export PATH=$GRAILS_HOME/bin:$PATH export MAVEN_HOME=/cygdrive/c/dev/apache-maven-2.0.11 export PATH=$MAVEN_HOME/bin:$PATH which java which grails which mvn When this executes, it successfully changes the PATH within the context of the script, but then the script ends, and no change has been accomplished. How can I run a script in a way to change the PATH in for the shell in which I am currently working? I'm using Cygwin.
You have to use source or eval , or spawn a new shell. When you run a shell script, a new child shell is spawned. This child shell will execute the script commands. The parent shell environment will remain untouched by anything that happens in the child shell. There are a lot of different techniques to manage this situation: Prepare a file sourcefile containing a list of commands to source in the current shell: export JAVA_HOME=/cygdrive/c/dev/Java/jdk1.5.0_22 export PATH=$JAVA_HOME/bin:$PATH and then source it: source sourcefile note that there is no need for a sha-bang at the beginning of the sourcefile , but it will work with it. Prepare a script evalfile.sh that prints the commands to set the environment: #!/bin/sh echo "export JAVA_HOME=/cygdrive/c/dev/Java/jdk1.5.0_22" echo "export PATH=\$JAVA_HOME/bin:\$PATH" and then eval it: eval "$(evalfile.sh)" Configure and run a new shell: #!/bin/sh export JAVA_HOME=/cygdrive/c/dev/Java/jdk1.5.0_22 export PATH=$JAVA_HOME/bin:$PATH exec /bin/bash note that when you type exit in this shell, you will return to the parent one. Put an alias in your ~/.bashrc : alias prepare_environ='export JAVA_HOME=/cygdrive/c/dev/Java/jdk1.5.0_22; export PATH=$JAVA_HOME/bin:$PATH;' and call it when needed: prepare_environ
{ "source": [ "https://unix.stackexchange.com/questions/23426", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5045/" ] }
23,452
I'm in a network using a proxy. I've got machines using lots of scripts here and there accessing each other over HTTP. The network is 10.0.0.0/8. My proxy is 10.1.1.1:81, so I set it up accordingly: export http_proxy=http://10.1.1.1:81/ I want to exclude my own range to be accessed with the proxy. I tried any combination available. export no_proxy='10.*' export no_proxy='10.*.*.*' export no_proxy='10.0.0.0/8' None of the above work! I'm testing with wget and it always tries to query the proxy, whatever IP address I want to connect to. Since lots of scripts lie everywhere in all systems the --no-proxy option is actually not an option. I want to set it system wide.
You're looking at it the wrong way. The no_proxy environment variable lists the domain suffixes, not the prefixes. From the documentation : no_proxy : This variable should contain a comma-separated list of domain extensions proxy should not be used for. So for IPs, you have two options: 1) Add each IP in full: printf -v no_proxy '%s,' 10.1.{1..255}.{1..255}; export no_proxy="${no_proxy%,}"; 2) Rename wget to wget-original and write a wrapper script (called wget ) that looks up the IP for the given URL's host, and determines if it should use the proxy or not: #!/bin/bash ip=''; for arg; do # parse arg; if it's a URL, determine the IP address done; if [[ "$ip" =~ ^10\.1\. ]]; then wget-original --no-proxy "$@"; else wget-original "$@"; fi;
{ "source": [ "https://unix.stackexchange.com/questions/23452", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5115/" ] }
23,501
I need to use wget to download a file to the directory /var/cache/foobar/ (so, as an example, if I download stackexchange-site-list.txt , it'd be downloaded to /var/cache/foobar/stackexchange-site-list.txt ) Is this possible? curl would also be an option, but I'd prefer to not use curl , since it's not installed by default.
wget -P /var/cache/foobar/ [...] wget --directory-prefix=/var/cache/foobar/ [...]
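A concrete sketch with the example file from the question (the URL is a placeholder):
wget -P /var/cache/foobar/ 'https://example.com/stackexchange-site-list.txt'
# the file ends up as /var/cache/foobar/stackexchange-site-list.txt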
{ "source": [ "https://unix.stackexchange.com/questions/23501", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7510/" ] }
23,545
How can I use text-processing tools to insert a new line after every N lines? Example for N=2: INPUT: sadf asdf yxcv cxv eqrt asdf OUTPUT: sadf asdf yxcv cxv eqrt asdf
With awk : awk ' {print;} NR % 2 == 0 { print ""; }' inputfile With sed ( GNU extension): sed '0~2 a\\' inputfile With bash : #!/bin/bash lines=0 while IFS= read -r line do printf '%s\n' "${line}" ((lines++ % 2)) && echo done < "$1"
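If N varies, the awk version parameterizes nicely (a sketch, not part of the original answer):
awk -v n=3 '{print} NR % n == 0 {print ""}' inputfile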
{ "source": [ "https://unix.stackexchange.com/questions/23545", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
23,547
I install xenomai by sudo apt-get install xenomai-* It installed dctrl-tools libxenomai-dev libxenomai1 linux-patch-xenomai xenomai-doc xenomai-runtime. But when I check /boot/grub/grub.cfg , it seems it didn't change anything. How do I boot xenomai on Ubuntu 10.04? Should I followed Building Debian packages 's Building a Xenomai patched Linux kernel package? But it uses kernel 2.6.35, which is newer than mine (2.6.32). Thank you~
{ "source": [ "https://unix.stackexchange.com/questions/23547", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10826/" ] }
23,560
Is the "bootable flag" needed in today's distributions? If not, then why is it still in the installers? What is it exactly?
The boot flag is from ancient times, when you would mark an MBR partition record as bootable to indicate where the boot loader resided. On modern OSes this is widely unused, as the MBR consists of a minimal first-stage loader which either bootstraps into its own partition or jumps to another area on the disk where the boot loader code is kept. (An MBR contains, among other things, executable boot code and the partition table. See also this link to an article about the MBR.) As an example, GRUB is written into the MBR and boots whatever partition you choose. See also this (quite small) Wikipedia page about the boot flag: en.wikipedia.org/wiki/Boot_flag
{ "source": [ "https://unix.stackexchange.com/questions/23560", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
23,572
I have set up a VM using turnkey linux redmine and I'm trying to SSH into the server to install some more items. It doesn't appear to be recognizing the sudo command. Every time I try to sudo something I get an error saying: -bash: sudo: command not found I read somewhere else to type 'whereis sudo' and the output was: sudo:
It looks from http://www.turnkeylinux.org/redmine like Redmine, unlike Ubuntu, does not use sudo by default. What username are you using to SSH in? If it's root , then you don't need to use sudo , as everything you do when SSHed in to the Redmine system is done as root . If it's something else, like admin , then you could try using the su command to get a root shell in which to run commands as root .
{ "source": [ "https://unix.stackexchange.com/questions/23572", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
23,576
I am working through SSH on a WD My Book World Edition. Basically I would like to start at a particular directory level, and recursively remove all sub-directories matching .Apple* . How would I go about that? I tried rm -rf .Apple* and rm -fR .Apple* , but neither deleted directories matching that name within sub-directories.
find is very useful for selectively performing actions on a whole tree. find . -type f -name ".Apple*" -delete Here, the -type f makes sure it's a file, not a directory, and may not be exactly what you want since it will also skip symlinks, sockets and other things. You can use ! -type d , which literally means not directories, but then you might also delete character and block devices. I'd suggest looking at the -type predicate on the man page for find . To do it strictly with a wildcard, you need advanced shell support. Bash v4 has the globstar option , which lets you recursively match subdirectories using ** . zsh and ksh also support this pattern. Using that, you can do rm -rf **/.Apple* . This is not POSIX-standard, and not very portable, so I would avoid using it in a script, but for a one-time interactive shell action, it's fine.
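Since the question asks about directories rather than plain files, a possible sketch for that case (not part of the original answer; -prune stops find from descending into a directory it is about to remove):
find . -type d -name '.Apple*' -prune -exec rm -rf {} +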
{ "source": [ "https://unix.stackexchange.com/questions/23576", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11936/" ] }
23,579
I've been using OpenBSD for quite a while now. All I do, however, is go from one release to the next, always just doing an update. I configured the system so it works as my router and firewall, and it works quite well like that. But I never update packages. All I do is just move on to the next release. Coming from the Linux world, I'm used to applying updates a few times a week, but how do I do that on *BSD? Or is this not part of the *BSD philosophy?
OpenBSD is binary-centric. Patching the base system (e.g., because of a security flaw in the kernel) requires rebuilding the system from source or running syspatch . You can update the package binaries (if any updates/changes are available) by executing pkg_add : pkg_add -Uu The OpenBSD team recommends using the packages over building from ports - The OpenBSD packages and ports system FreeBSD can be updated via packages or ports .
{ "source": [ "https://unix.stackexchange.com/questions/23579", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1290/" ] }
23,634
Why are most Linux programs written in C? Why are they not written with C++, which is newer?
There have been many discussions about this. Mainly, the reason is a philosophical one. C was invented as a simple language for system development (not so much application development). There are many arguments for using C++, but there are about as many for not using C++ and sticking to C. In the end, it's a historical issue. Most application stuff is written in C, because most Kernel stuff is written in C. And since back then most stuff was written in C, people tend to use the original languages. At this point, someone might ask "OK, so why is the kernel written in C and not ported to C++?" . This has been discussed on kerneltrap some time ago. One nice explanation that can be quoted from this thread is a response by yoshi314 (quoting directly): that's because nearly every c++ app needs a separate c++ standard library to operate. so they would have to port it to kernel, and expect an extra overhead everywhere. c++ is more complex language and that means that compiler creates more complex code from it. because of that, finding that a problem stems from compiler bug,rather than code error is easier in c. also c language is more barebone, and it's easier to follow its assembly representation, which is often easy to predict. c++ is more versatile, but c is more suited for lowlevel or embedded stuff. On the other hand, "most of Linux programs" is quite misleading. Take a look at graphical applications. Python is getting more and more ground especially in GUI environments on Linux. About the same thing that's happening with Windows and .NET.
{ "source": [ "https://unix.stackexchange.com/questions/23634", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11906/" ] }
23,659
I am not able to use - in variables in shell. Is there a way to be able to use it, because I have one script which depends on such named variables: $export a-b=c -bash: export: `a-b=c': not a valid identifier $export a_b=c First throws the given error and second works fine.
I've never met a Bourne-style shell that allowed - in a variable name. Only ASCII letters (of either case), _ and digits are supported, and the first character must not be a digit. If you have a program that requires an environment variable that doesn't match the shell restrictions, launch it with the env program. env 'strange-name=some value' myprogram Note that some shells (e.g. modern dash , mksh, zsh) remove variables whose name they don't like from the environment. ( Shellshock has caused people to be more cautious about environment variable names, so restrictions are likely to become tighter over time, not more permissive.) So if you need to pass a variable whose name contains special character to a program, pass it directly, without a shell in between ( env 'strange-name=some value' sh -c'…; myprogram' may or may not work).
{ "source": [ "https://unix.stackexchange.com/questions/23659", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10968/" ] }
23,675
Why isn't there a unified package manager that acts as an interface between the end-user and the underlying low-level package manager ( apt , yast , pacman , etc.)? Is it hard to do and therefore not practical, or is there a genuine obstacle making it impossible to do?
First of all, there is. The problem is not that there is no unified package manager, the problem is there are ten of them – seriously. Let's take my favorite: poldek . It's a user front end for package management that can run on several different distros and manage either rpm or deb packages. Poldek doesn't do the stuff rpm does (it leaves that to rpm) and just sends the right commands without the user having to figure out all that mess. But the problems don't stop there. Everybody has a different idea of what a user front end is supposed to look like and how it should function and what options it should expose. So other people have written their own. Actually many of the package front end managers people use in common distros today are able to handle more than one backend. In the end, however, the problem (or advantage) is people like things to function exactly the way they want, not in some meta-fashion that tries to satisfy everybody only to fail to really make anybody happy. This is the reason we have umpteen gazillion distros in the first place. It's the reason we have so many different Desktop Environments and Window Managers (and the fact those are actually different kinds of things at all). There are still outstanding proposals for ways of writing universal packages or having a manager that understands them all or having an api for converting one to the other ... but in the end Unix is best when used according to its philosophy ... each tool does one thing and does it well . Any time you have a tool that tries to do more than one thing, it ends up being not as good at one of them. For example, poldek sucks at handling deb package dependencies.
{ "source": [ "https://unix.stackexchange.com/questions/23675", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11975/" ] }
23,692
This is probably something basic but I'm not able to make it work. I'm trying to use DU to get a total size of files minus certain directories. I need to exclude one specific directory called uploads but not every directory called uploads . For example, my file structure looks a bit like this: /store /uploads /junk_to_ignore /more_junk_to_ignore /user_one /uploads /user_two I can run the following command: du -ch --exclude=uploads* and it gives me the file size minus all the "uploads" directories. However, in trying to exclude certain directories (and all its sub-directories) I fail. I've tried variations of: du -ch --exclude=./uploads* du -ch --exclude='/full/path/to/uploads/*' but can't seem to figure it out. How do I exclude a specific directory?
You've almost found it :) du -ch --exclude=./relative/path/to/uploads Note no asterisk at the end. The asterisk means all subdirectories under "uploads" should be omitted - but not the files directly in that directory.
{ "source": [ "https://unix.stackexchange.com/questions/23692", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11986/" ] }
23,726
I realize that this is not an entirely unix/linux related question. But since this is something I'll do on linux, I hope someone has an answer. I have an online excel file ( .xlsx ) which gets updated periodically (by someone else). I want to write a script and put it in as a cronjob in order to to process that excel sheet. But to do that, I need to convert that into a text file (so a .csv ) with semicolon separated columns. It can't be comma separated unfortunately since some columns have commas in them. Is it at all possible to do this conversion from shell? I have Open office installed and I can do this by using its GUI, but want to know if it is possible to do this from command line. Thanks! PS: I have a Mac machine as well, so if some solution can work there, thats good as well. :)
OpenOffice comes with the unoconv program to perform format conversions on the command line. unoconv -f csv filename.xlsx For more complex requirements, you can parse XLSX files with Spreadsheet::XLSX in Perl or openpyxl in Python. For example, here's a quickie script to print out a worksheet as a semicolon-separated CSV file (warning: untested, typed directly in the browser): perl -MSpreadsheet::XLSX -e ' $\ = "\n"; $, = ";"; my $workbook = Spreadsheet::XLSX->new()->parse($ARGV[0]); my $worksheet = ($workbook->worksheets())[0]; my ($row_min, $row_max) = $worksheet->row_range(); my ($col_min, $col_max) = $worksheet->col_range(); for my $row ($row_min..$row_max) { print map {$worksheet->get_cell($row,$_)->value()} ($col_min..$col_max); } ' filename.xlsx >filename.csv
{ "source": [ "https://unix.stackexchange.com/questions/23726", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12007/" ] }
23,744
This doesn't work: tar xf /tmp/foo.tar.gz foo/bar tar: foo/bar: Not found in archive It's not obvious to me what would do this beyond extracting it in place and moving the files over.
From man tar : -C directory In c and r mode, this changes the directory before adding the following files. In x mode, change directories after opening the archive but before extracting entries from the archive. i.e, tar xC /foo/bar -f /tmp/foo.tar.gz should do the job. (on FreeBSD, but GNU tar is basically the same in this respect, see "Changing the Working Directory" in its manual )
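Putting it together for the example in the question (a sketch assuming foo/bar is meant to be the destination directory, which must exist first):
mkdir -p foo/bar
tar xf /tmp/foo.tar.gz -C foo/bar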
{ "source": [ "https://unix.stackexchange.com/questions/23744", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3125/" ] }
23,763
Is there a reliable way to check how many colors my terminal emulator supports? If echo $TERM prints xterm , does that unequivocally tell me how many colors my terminal emulator supports? How could I check this information reliably?
The value of $TERM does not give much information about the number of supported colors. Many terminals advertise themselves as xterm , and might support any number of colors (2, 8, 16, 88 and 256 are common values). You can query the value of each color with the OSC 4 ; c ; ? BEL control sequence . If the color number c is supported, and if the terminal understands this control sequence, the terminal will answer back with the value of the color. If the color number is not supported or if the terminal doesn't understand this control sequence, the terminal answers nothing. Here's a bash/zsh snippet to query whether color 42 is supported (redirect to/from the terminal if necessary): printf '\e]4;%d;?\a' 42 if read -d $'\a' -s -t 1; then … # color 42 is supported Among popular terminals, xterm and terminals based on the VTE library (Gnome-terminal, Terminator, Xfce4-terminal, …) support this control sequence; rxvt, konsole, screen and tmux don't. I don't know of a more direct way.
{ "source": [ "https://unix.stackexchange.com/questions/23763", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
23,833
Often times I will ssh into a new client's box to make changes to their website configuration without knowing much about the server configuration. I have seen a few ways to get information about the system you're using, but are there some standard commands to tell me what version of Unix/Linux I'm on and basic system information (like if it is a 64-bit system or not), and that sort of thing? Basically, if you just logged into a box and didn't know anything about it, what things would you check out and what commands would you use to do it?
If I need to know what it is, say Linux/Unix and 32/64 bit: uname -a This would give me almost all the information I need. If I further need to know what release it is, say (CentOS 5.4, 5.5 or 5.6) on a Linux box, I would check the file /etc/issue to see its release info (or, for Debian / Ubuntu, /etc/lsb-release ). An alternative way is to use the lsb_release utility: lsb_release -a Or do rpm -qa | grep centos-release (or redhat-release ) for RHEL-derived systems.
{ "source": [ "https://unix.stackexchange.com/questions/23833", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
23,900
When using commands in bash I like the double- Tab option to display the available commands. Some commands have more possible matches than others. Example: here is just some of the output after typing git and pressing Tab Tab : $ git git git-effort git-quick-stats git-alias git-extras git-reauthor git-archive-file git-feature git-rebase-patch git-authors git-filechange-search git-receive-pack git-back git-force-clone git-refactor git-blametool git-fork git-release git-branch_ git-fresh-branch git-rename-branch git-branchz git-graft git-rename-tag git-bug git-gs_blametool git-repl git-bulk git-gs_branch_ git-reset-file git-changelog git-gs_changes git-root git-changes git-gs_diffc git-rscp git-chore git-guilt git-scp git-clang-format git-ignore git-sed git-clang-format-6.0 git-ignore-io git-setup git-clear git-info git-shell git-clear-soft gitk git-show-merged-branches git_commit_r git-lfs git-show-tree git-commits-since git-line-summary git-show-unmerged-branches git-contrib git-local-commits git-squash git-count git-lock git-stamp Is there a way I can pipe the output of the double- Tab to somewhere, like to grep ? I found a related post ( How does TAB auto-complete find options to complete? ), but I'm still not sure how to implement piping the output to grep .
For commands use compgen -c . The word compgen apparently stands for "completion generator". From help compgen (emphasis added): compgen: compgen [-abcdefgjksuv] [-o option] [-A action] [-G globpat] [-W wordlist] [-F function] [-C command] [-X filterpat] [-P prefix] [-S suffix] [word] Display possible completions depending on the options. Intended to be used from within a shell function generating possible completions. If the optional WORD argument is supplied, matches against WORD are generated. Exit Status: Returns success unless an invalid option is supplied or an error occurs. For extra details, search the man bash pages for compgen , since it's a built-in bash shell command. Example usage: $ compgen -c bas basename base64 bashbug bash This output you can simply pipe to grep . Example: $ compgen -c bas | grep bug bashbug
{ "source": [ "https://unix.stackexchange.com/questions/23900", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
23,961
I'm writing a bash script where I want to exit if the user is not root. The conditional works fine, but the script does not exit. [[ `id -u` == 0 ]] || (echo "Must be root to run script"; exit) I've tried using && instead of ; but neither work.
You could do that this way: [[ $(id -u) -eq 0 ]] || { echo >&2 "Must be root to run script"; exit 1; } ("ordinary" conditional expression with an arithmetic binary operator in the first statement), or: (( $(id -u) == 0 )) || { echo >&2 "Must be root to run script"; exit 1; } (arithmetic evaluation for the first test). Notice the change () -> {} - the curly brackets do not spawn a subshell. (Search man bash for "subshell".)
{ "source": [ "https://unix.stackexchange.com/questions/23961", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11897/" ] }
24,005
With the following .ssh/config configuration: ControlMaster auto ControlPath /tmp/ssh_mux_%h_%p_%r ControlPersist 4h How to close the persisting connection before the 4 hours? I know you can make new connections, but how to close them (all)? Maybe there is a way to show all the persisted connections and handle them individually but I can not find it.
From the manual : -O ctl_cmd Control an active connection multiplexing master process. When the -O option is specified, the ctl_cmd argument is interpreted and passed to the master process. Valid commands are: check (check that the master process is running), forward (request forwardings without command execution), cancel (cancel forwardings), exit (request the master to exit), and stop (request the master to stop accepting further multiplexing requests). Older versions only have check and exit , but that's enough for your purpose. ssh -O exit host.example.com If you want to delete all connections (not just the connection to a particular host) in one fell swoop, then fuser /tmp/ssh_mux_* or lsof /tmp/ssh_mux_* will list the ssh clients that are controlling each socket. Use fuser -HUP -k /tmp/ssh_mux_* to kill them all cleanly (using SIGHUP as the signal is best as it lets the clients properly remove their socket).
{ "source": [ "https://unix.stackexchange.com/questions/24005", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9294/" ] }
24,014
Is there a tool to create a gif animation from a set of png files? I tried the convert command from the ImageMagick suite, but this doesn't always succeed. Also, I have several issues with this: I can't tell what the progress is. No matter what I try, the -delay flag doesn't change the frame rate of the gif animation. convert determines the frame order based upon the alphabetical order of the file names. This means that name500.png will be placed right after name50.png and not after name450.png . I can fix this by adding 0's, but this is annoying.
convert is a handy command line tool to do that. cd to the folder containing your png -files and run this command: convert -delay 10 -loop 0 *.png animation.gif Source: http://ubuntuforums.org/showthread.php?t=1132058
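To address the frame-ordering complaint from the question, one possible sketch (GNU ls -v sorts numerically; this assumes the frame names contain no whitespace):
convert -delay 10 -loop 0 $(ls -v name*.png) animation.gif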
{ "source": [ "https://unix.stackexchange.com/questions/24014", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7140/" ] }
24,026
There is a directory A whose contents are changed frequently by other people. I have made a personal directory B where I keep all the files that have ever been in A . Currently, I just occasionally run rsync to get the files to be backed up from A to B . However, I fear the possibility that some files will get added in A , and then removed from A before I get the chance to copy them over to B . What is the best way to prevent this from occurring? Ideally, I'd like to have my current backup script run every time the contents of A get changed.
If you have inotify-tools installed you can use inotifywait to trigger an action if a file or directory is written to: #!/bin/sh dir1=/path/to/A/ while inotifywait -qqre "attrib,modify,close_write,move,move_self,create,delete,delete_self" "$dir1"; do /run/backup/to/B done Where the -qq switch is completely silent, -r is recursive (if needed) and -e is the event to monitor. From man inotifywait : attrib The metadata of a watched file or a file within a watched directory was modified. This includes timestamps, file permissions, extended attributes etc. modify A watched file or a file within a watched directory was written to. close_write A watched file or a file within a watched directory was closed, after being opened in writeable mode. This does not necessarily imply the file was written to. move A file or directory was moved from or to a watched directory. Note that this is actually implemented simply by listening for both moved_to and moved_from, hence all close events received will be output as one or both of these, not MOVE. move_self A watched file or directory was moved. After this event, the file or directory is no longer being watched. create A file or directory was created within a watched directory. delete A file or directory within a watched directory was deleted. delete_self A watched file or directory was deleted. After this event the file or directory is no longer being watched. Note that this event can occur even if it is not explicitly being listened for.
{ "source": [ "https://unix.stackexchange.com/questions/24026", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1898/" ] }
24,138
How can I tell if two files are hard-linked from the command line? e.g. something link this: $ ls fileA fileB fileC $ is-hardlinked fileA fileB yes $ is-hardlinked fileA fileC no
On most filesystems¹, a file is uniquely determined by its inode number, so all you need to check is whether the two files have the same inode number and are on the same filesystem. Ash, ksh, bash and zsh have a construct that does the check for you: the file equality operator -ef . [ fileA -ef fileB ] && ! [ fileA -ef fileC ] For more advanced cases, ls -i /path/to/file lists a file's inode number. df -P /path/to/file shows what filesystem the file is on (if two files are in the same directory, they're on the same filesystem). If your system has the stat command, it can probably show the inode and filesystem numbers ( stat varies from system to system, check your documentation). If you want a quick glance of hard links inside a directory, try ls -i | sort (possibly piped to awk ). ¹ All native unix filesystems, and a few others such as NTFS, but possibly not exotic cases like CramFS.
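For the quick glance mentioned above, a small sketch using GNU stat (hard links show the same device and inode pair):
stat -c '%d %i %n' fileA fileB fileC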
{ "source": [ "https://unix.stackexchange.com/questions/24138", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4192/" ] }
24,140
So pulling open a file with cat and then using grep to get matching lines only gets me so far when I am working with the particular log set that I am dealing with. I need a way to match lines to a pattern, but only to return the portion of the line after the match. The portion before and after the match will consistently vary. I have played with using sed or awk , but have not been able to figure out how to filter the line to either delete the part before the match, or just return the part after the match; either will work. This is an example of a line that I need to filter: 2011-11-07T05:37:43-08:00 <0.4> isi-udb5-ash4-1(id1) /boot/kernel.amd64/kernel: [gmp_info.c:1758](pid 40370="kt: gmp-drive-updat")(tid=100872) new group: <15,1773>: { 1:0-25,27-34,37-38, 2:0-33,35-36, 3:0-35, 4:0-9,11-14,16-32,34-38, 5:0-35, 6:0-15,17-36, 7:0-16,18-36, 8:0-14,16-32,34-36, 9:0-10,12-36, 10-11:0-35, 12:0-5,7-30,32-35, 13-19:0-35, 20:0,2-35, down: 8:15, soft_failed: 1:27, 8:15, stalled: 12:6,31, 20:1 } The portion I need is everything after "stalled". The background behind this is that I can find out how often something stalls: cat messages | grep stalled | wc -l What I need to do is find out how many times a certain node has stalled (indicated by the portion before each colon after "stalled"). If I just grep for that (i.e. 20:) it may return lines that have soft fails, but no stalls, which doesn't help me. I need to filter only the stalled portion so I can then grep for a specific node out of those that have stalled. For all intents and purposes, this is a FreeBSD system with standard GNU core utils, but I cannot install anything extra to assist.
The canonical tool for that would be sed . sed -n -e 's/^.*stalled: //p' Detailed explanation: -n means not to print anything by default. -e is followed by a sed command. s is the pattern replacement command. The regular expression ^.*stalled: matches the pattern you're looking for, plus any preceding text ( .* meaning any text, with an initial ^ to say that the match begins at the beginning of the line). Note that if stalled: occurs several times on the line, this will match the last occurrence. The match, i.e. everything on the line up to stalled: , is replaced by the empty string (i.e. deleted). The final p means to print the transformed line. If you want to retain the matching portion, use a backreference: \1 in the replacement part designates what is inside a group \(…\) in the pattern. Here, you could write stalled: again in the replacement part; this feature is useful when the pattern you're looking for is more general than a simple string. sed -n -e 's/^.*\(stalled: \)/\1/p' Sometimes you'll want to remove the portion of the line after the match. You can include it in the match by including .*$ at the end of the pattern (any text .* followed by the end of the line $ ). Unless you put that part in a group that you reference in the replacement text, the end of the line will not be in the output. As a further illustration of groups and backreferences, this command swaps the part before the match and the part after the match. sed -n -e 's/^\(.*\)\(stalled: \)\(.*\)$/\3\2\1/p' To get the part after the first occurrence of the string instead of last (for those lines where the string can occur several times), a common trick is to replace that string once with a newline character (which is the one character that won't occur inside a line), and then remove everything up to that newline: sed -n ' /stalled: / { s//\ / s/.*\n//p }' With some sed implementations, the first s command can be written s//\n/ though that's not standard/portable.
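Applied to the counting problem in the question, you can feed the stripped lines to grep -c; node 20 and the file name messages are just the placeholders used above:

# count the log lines in which node 20 appears in the stalled list
sed -n -e 's/^.*stalled: //p' messages | grep -Ec '(^| )20:'

Since everything before stalled: has been deleted, a match on 20: in lines like the example can no longer come from the down or soft_failed sections.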
{ "source": [ "https://unix.stackexchange.com/questions/24140", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1853/" ] }
24,182
I usually use mount to check which filesystems are mounted. I also know there is some connection between mount and /etc/mtab but I'm not sure about the details. After reading How to check if /proc/ is mounted I get more confused. My question is: How to get the most precise list of mounted filesystems? Should I just use mount , or read the contents of /etc/mtab , or contents of /proc/mounts ? What would give the most trustworthy result?
The definitive list of mounted filesystems is in /proc/mounts . If you have any form of containers on your system, /proc/mounts only lists the filesystems that are in your present container. For example, in a chroot , /proc/mounts lists only the filesystems whose mount point is within the chroot. ( There are ways to escape the chroot, mind. ) There's also a list of mounted filesystems in /etc/mtab . This list is maintained by the mount and umount commands. That means that if you don't use these commands (which is pretty rare), your action (mount or unmount) won't be recorded. In practice, it's mostly in a chroot that you'll find /etc/mtab files that differ wildly from the state of the system. Also, mounts performed in the chroot will be reflected in the chroot's /etc/mtab but not in the main /etc/mtab . Actions performed while /etc/mtab is on a read-only filesystem are also not recorded there. The reason why you'd sometimes want to consult /etc/mtab in preference to or in addition to /proc/mounts is that because it has access to the mount command line, it's sometimes able to present information in a way that's easier to understand; for example you see mount options as requested (whereas /proc/mounts lists the mount and kernel defaults as well), and bind mounts appear as such in /etc/mtab .
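As a small illustration (not from the answer above), /proc/mounts is plain whitespace-separated text, so it is easy to pick apart with awk; /home here is just an example mount point:

# device, mount point and filesystem type of every mount
awk '{print $1, $2, $3}' /proc/mounts
# exit status 0 if /home is currently mounted, 1 otherwise
awk '$2 == "/home" {found=1} END {exit !found}' /proc/mounts

One caveat: mount points containing spaces show up with octal escapes such as \040 in this file.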
{ "source": [ "https://unix.stackexchange.com/questions/24182", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12224/" ] }
24,242
This is a simple question. I'm trying to copy all of the files in the current directory that start with "axis-2" and end with ".jar" into a target directory, let's say it's ~/MyDirectory. My first thought was to try cp '^axis2.*jar$' ~/MyDirectory But this isn't working. I'm not even sure I can use regular expressions with cp. I also haven't really used regular expressions in a while and my syntax could be totally off. When I try this cp just outputs a "No such file or directory" error message. Does anyone have any suggestions of how to go about this? Thanks!
The UNIX shell uses glob patterns , not regular expressions. So, if you want to match file names starting with axis2 and ending with .jar , you use: cp axis2*.jar /destination/directory
{ "source": [ "https://unix.stackexchange.com/questions/24242", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6131/" ] }
24,260
I'm trying to read a text file and do something with each line, using a bash script. So, I have a list that looks like this: server1 server2 server3 server4 I thought I could loop over this using a while loop, like so: while read server; do ssh $server "uname -a" done < /home/kenny/list_of_servers.txt The while loop stops after 1 run, thus only running uname -a on server1 However, with a for loop using cat it works fine: for server in $(cat /home/kenny/list_of_servers.txt) ; do ssh $server "uname -a" done Even more baffling to me is that this also works: while read server; do echo $server done < /home/kenny/list_of_servers.txt Why does my first example stop after the first iteration?
The for loop is fine here. But note that this is because the file contains machine names, which do not contain any whitespace characters or globbing characters. for x in $(cat file); do … does not work to iterate over the lines of file in general, because the shell first splits the output from the command cat file anywhere there is whitespace, and then treats each word as a glob pattern so \[?* are further expanded. You can make for x in $(cat file) safe if you work on it: set -f IFS=' ' for x in $(cat file); do … Related reading: Looping through files with spaces in the names? ; How can I read line by line from a variable in bash? ; Why is while IFS= read used so often, instead of IFS=; while read.. ? Note that when using while read , the safe syntax to read lines is while IFS= read -r line; do … . Now let's turn to what goes wrong with your while read attempt. The redirection from the server list file applies to the whole loop. So when ssh runs, its standard input comes from that file. The ssh client can't know when the remote application might want to read from its standard input. So as soon as the ssh client notices some input, it sends that input to the remote side. The ssh server there is then ready to feed that input to the remote command, should it want it. In your case, the remote command never reads any input, so the data ends up discarded, but the client side doesn't know anything about that. Your attempt with echo worked because echo never reads any input, it leaves its standard input alone. There are a few ways you can avoid this. You can tell ssh not to read from standard input, with the -n option. while read server; do ssh -n $server "uname -a" done < /home/kenny/list_of_servers.txt The -n option in fact tells ssh to redirect its input from /dev/null . You can do that at the shell level, and it'll work for any command. while read server; do ssh $server "uname -a" </dev/null done < /home/kenny/list_of_servers.txt A tempting method to avoid ssh's input coming from the file is to put the redirection on the read command: while read server </home/kenny/list_of_servers.txt; do … . This will not work, because it causes the file to be opened again each time the read command is executed (so it would read the first line of the file over and over). The redirection needs to be on the whole while loop so that the file is opened once for the duration of the loop. The general solution is to provide the input to the loop on a file descriptor other than standard input. The shell has constructs to ferry input and output from one descriptor number to another. Here, we open the file on file descriptor 3, and redirect the read command's standard input from file descriptor 3. The ssh client ignores open non-standard descriptors, so all is well. while read server <&3; do ssh $server "uname -a" done 3</home/kenny/list_of_servers.txt In bash, the read command has a specific option to read from a different file descriptor, so you can write read -u3 server . Related reading: File descriptors & shell scripting ; When would you use an additional file descriptor?
{ "source": [ "https://unix.stackexchange.com/questions/24260", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/466/" ] }
24,293
Is there a Unix command to get the absolute (and canonicalized) path from a relative path which may contain symbolic links?
You can use the readlink utility, with the -f option: -f, --canonicalize canonicalize by following every symlink in every component of the given name recursively; all but the last component must exist Some distributions, for example those that use GNU coreutils and FreeBSD , also come with a realpath(1) utility that basically just calls realpath(3) and does pretty much the same thing.
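A quick made-up example, assuming /usr/share and /usr/lib exist, showing why symlinks have to be resolved before .. is applied:

$ ln -s /usr/share doc-link
$ readlink -f ./doc-link/../lib
/usr/lib
$ realpath ./doc-link/../lib
/usr/lib

A naive textual cleanup of ./doc-link/../lib would give ./lib, which is not the same directory at all.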
{ "source": [ "https://unix.stackexchange.com/questions/24293", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2573/" ] }
24,330
I don't like having the middle mouse button paste, because I often end up with uncompilable code in Eclipse. How can I turn this off (in all programs)? I'm running Fedora.
This solution will work globally and preserve the middle mouse functionality under Xorg. Install xbindkeys xsel xdotool Place this in ~/.xbindkeysrc "echo -n | xsel -n -i; pkill xbindkeys; xdotool click 2; xbindkeys" b:2 + Release Reload xbindkeys -p Run xbindkeys on startup, pkill xbindkeys to stop.
{ "source": [ "https://unix.stackexchange.com/questions/24330", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12315/" ] }
24,337
According to " Linux: The Complete Reference 6th Edition " (pg. 44), you can pipe only STDERR using the |& redirection symbols. I've written a pretty simple script to test this: #!/bin/bash echo "Normal Text." echo "Error Text." >&2 I run this script like this: ./script.sh |& sed 's:^:\t:' Presumably, only the lines printed to STDERR will be indented. However, it doesn't actually work like this, as I see: Normal Text. Error Text. What am I doing wrong here?
I don't know what text your book uses, but the bash manual is clear (if you're a little familiar with redirections already): If |& is used, command1’s standard error, in addition to its standard output, is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 | . This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command. So if you don't want to mix standard output and standard error, you'll have to redirect standard output somewhere else. See How to grep standard error stream (stderr)? { ./script.sh 2>&1 >&3 | sed 's:^:\t:'; } 3>&1 Both fd 1 and 3 of script.sh and sed will point to the original stdout destination however. If you want to be a good citizen, you can close those fd 3 which those commands don't need: { ./script.sh 2>&1 >&3 3>&- | sed 's:^:\t:' 3>&-; } 3>&1 bash and ksh93 can condense the >&3 3>&- to >&3- (fd move).
{ "source": [ "https://unix.stackexchange.com/questions/24337", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
24,347
I've noticed that some applications put their configuration files to ~/.config/appname while others use ~/.appname (the classic way, AFAIK) for this. What's the sense in this distinction and what could be better to consider for an application of mine? UPDATE: Looks like my (XUbuntu 11.10 default) $XDG_CONFIG_HOME is set to ~/ and the most of the applications in my system (like Mozilla Firefox, Adobe Flash Player, Midnight Commander, Opera, Wine, etc.) comply to this. But there are still many applications (like Compiz, Deadbeef, VLC, Qt Creator, Google Chrome, XFCE, etc.) using ~/.config/ instead. Another suspicious thing is that directories in ~/.config/ are not themselves hidden (no dot in their names) - aren't application config dirs expected to have constant own names without depending on the location ($XDG_CONFIG_HOME value)?
A complement to jasonwryan's great answer, addressing some of your issues: Your $XDG_CONFIG_HOME is not set to ~/ . It simply isn't set. So applications that follow the XDG Specification use the default ~/.config The dirs inside /.config are not hidden because they don't have to. The whole point of using a ~/.config dir is to un-clutter the user's $HOME . Since they are already in a separate, hidden dir, there's no need to be hidden inside there. Software that does not follow the spec (unfortunately still the vast majority) use a hidden dir for their settings (like ~/.myapp ) as an attempt not to clutter the user's $HOME . It (kinda) works, but it is still a bad approach when, for example, you try to backup your settings and your "big data" (like Pictures, Videos, Music) separately. Having all settings in a single place, without mixing with user's data, is a much better approach As for "having constant names regardless of where XDG_CONFIG_HOME points to" , they already do: it is appname without the leading dot. Remember: the ones using $HOME/.appname are the ones that ignore XDG Spec. They use a hardcoded path. As for your applications, please use the XDG Standard ! I beg you, and your users will thank you for not cluttering their home directory any further.
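If your application is scripted (or you just want the lookup logic spelled out), a minimal sketch of honouring the spec might be as follows; appname and settings.conf are placeholders:

# Use $XDG_CONFIG_HOME if set and non-empty, otherwise fall back to ~/.config as the spec requires
conf_dir="${XDG_CONFIG_HOME:-$HOME/.config}/appname"
mkdir -p "$conf_dir"
conf_file="$conf_dir/settings.conf"

The same two-step rule (explicit variable first, ~/.config as the default) is what the compliant applications in your list are doing internally.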
{ "source": [ "https://unix.stackexchange.com/questions/24347", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2119/" ] }
24,355
I have a project wherein I need to update configuration files each time an EC2 instance is booted with the Public DNS address of the current instance. I'll be using Perl or Sed for this, so that's not really the question, but the real question is: is there a way that I can determine the instance's public DNS address? Is there an EC2 api that I can access from the instance to determine it?
There is. From inside the instance, you can run: curl http://169.254.169.254/latest/meta-data/public-ipv4 To get the public DNS hostname, you can change that to: curl http://169.254.169.254/latest/meta-data/public-hostname You can get the private IP for the instance, too: curl http://169.254.169.254/latest/meta-data/local-ipv4 As a side note, you can double-check it against a non-AWS site on the internet, like http://ip4.me #!/bin/bash pubip=$( curl http://ip4.me 2>/dev/null | sed -e 's#<[^>]*>##g' | grep '^[0-9]' ) echo $pubip That will work, generally, to check the "public IP" of any NATed system, or to find your public proxy IP, etc. And here's a good link to read up on the types of information you can get from Amazon's API: http://www.ducea.com/2009/06/01/howto-update-dns-hostnames-automatically-for-your-amazon-ec2-instances/
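Since the question mentions updating configuration files with sed, a rough way of gluing the two together might look like this; the config path and the PUBLIC_DNS placeholder token are invented for the example:

#!/bin/bash
# fetch the public hostname from the instance metadata service
public_dns=$(curl -s http://169.254.169.254/latest/meta-data/public-hostname)
# substitute it wherever the config file contains the PUBLIC_DNS token
sed -i "s/PUBLIC_DNS/${public_dns}/g" /etc/myapp/app.conf

Run from a boot-time script, this rewrites the file each time the instance comes up with a new hostname.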
{ "source": [ "https://unix.stackexchange.com/questions/24355", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
24,383
Can someone explain in short what "recursive DNS query" means and how it can be considered bad ?
TL;DR : Recursive queries are part of the way the internet and DNS work, but not all DNS servers should be receiving recursive queries, and when the ones that shouldn't respond do respond you can get problems. Longer version: Recursion, n: See under Recursion. A recursive DNS query happens when the DNS server you asked for the address of, say, unix.stackexchange.com doesn't know the answer itself, so it has to check with another server. Normally this is actually how DNS works -- the DNS server of your ISP does not have the entire internet's domain records permanently memorized for obvious reasons, so the following exchange happens under the hood: You: Hey, browser, show me http://unix.stackexchange.com Browser: Sure thing! ... Hm. I don't actually know what IP address that is. Hey, OS, can you tell me where to find unix.stackexchange.com? OS: Sure thing... Hmm. It's not in my own hosts file. Lemme just check my resolver configuration... Hey, ISP's DNS server, can you tell me where to find unix.stackexchange.com? ISP's DNS server: Sure thing! ... Hmmm. That one isn't in my list of authoritative domains, and right now I don't have that answer cached. Hey, internet root servers, can you tell me who is authoritative for stackexchange.com? Internet Root Servers: Sure thing! According to our records, you want ns1.serverfault.com, ns2.serverfault.com, or ns3.serverfault.com. ISP's DNS server: Thanks, Internet Root Servers! Hi there, ns2.serverfault.com, can you tell me where to find unix.stackexchange.com? ns2.serverfault.com : Sure thing! That's address 64.34.119.12 ISP's DNS server : Great, thanks! OS, the number you're looking for is 64.34.119.12. OS: Great, thanks! Browser, you need address 64.34.119.12 Browser: Great, thanks! Okay, calling up the page now. You: Yay, thanks Browser! Now bear in mind that there are actually two types of name servers queried here -- authoritative DNS servers (the so called "root" servers that told your ISP's DNS server where to find SE.com's DNS server, and SE.com's authoritative DNS server) and recursing or forwarding DNS servers (your ISP's DNS server). Normally, the former type is not supposed to respond to recursive queries, especially not from outside their own domain. Smaller ISPs sometimes save on costs by having their primary authoritative name server be the same server as their primary forwarding nameserver, but that's somewhat unsafe policy - especially if you don't configure your server to refuse recursive queries from outside your own IP range. Further reading here on Wikipedia .
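If you want to see the difference in practice, dig makes it visible; 8.8.8.8 is a public recursive resolver, and the authoritative server name is taken from the example above, so adjust as needed:

# a recursive resolver chases the answer down for you
dig @8.8.8.8 unix.stackexchange.com +short
# asking an authoritative-only server about a zone it does not own should get
# REFUSED or an empty answer, and the flags line will lack "ra" (recursion available)
dig @ns1.serverfault.com example.org

A server that happily answers the second kind of query for anyone on the internet is an open resolver, which is where the "bad" part of the question comes in.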
{ "source": [ "https://unix.stackexchange.com/questions/24383", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6960/" ] }
24,419
I had this on my Ubuntu setup and since I switched to Fedora I want to set it and I forgot how... The idea is simple : I don't want the terminal to show me suggestions when I double tab , instead I want it to cycle through every possible suggestion with each press on tab ... this can be done in Vim also. So when I type gedit a and press tab it will show me every file with a first letter a .
This is actually a readline feature called menu-complete . You can bind it to tab (replacing the default complete ) by running: bind TAB:menu-complete You probably want to add that to your ~/.bashrc . Alternatively, you could configure it for all readline completions (not just bash) in ~/.inputrc . You may also find bind -p (show current bindings, note that shows tab as "\C-i" ) and bind -l (list all functions that can be bound) useful, as well as the bash manual's line editing section and readline's documentation .
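For the ~/.inputrc route mentioned above, the equivalent lines would be something like the following (the Shift-Tab binding is optional and needs a reasonably recent readline):

# ~/.inputrc
TAB: menu-complete
# cycle backwards with Shift-Tab, terminal permitting
"\e[Z": menu-complete-backward

After editing the file, start a new shell or press Ctrl-X Ctrl-R to re-read it.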
{ "source": [ "https://unix.stackexchange.com/questions/24419", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12310/" ] }
24,441
Possible Duplicate: How do I do a ls and then sort the results by date created? Is there a command in Linux which displays when the file was created ? I see that ls -l gives the last modified time, but can I get the created time/date?
The stat command may output this - (dash). I guess it depends on the filesystem you are using. stat calls it the "Birth time". On my ext4 fs it is empty, though. %w Time of file birth, human-readable; - if unknown %W Time of file birth, seconds since Epoch; 0 if unknown stat foo.txt File: `foo.txt' Size: 239 Blocks: 8 IO Block: 4096 regular file Device: 900h/2304d Inode: 121037111 Links: 1 Access: (0644/-rw-r--r--) Uid: ( 1000/ adrian) Gid: ( 100/ users) Access: 2011-10-26 13:57:15.000000000 -0600 Modify: 2011-10-26 13:57:15.000000000 -0600 Change: 2011-10-26 13:57:15.000000000 -0600 Birth: -
{ "source": [ "https://unix.stackexchange.com/questions/24441", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
24,500
I want to try a simple script: flag=false while !$flag do read x if [ "$x" -eq "true" ] then flag=true fi echo "${x} : ${flag}" done But when I run it, if I type true, I will see that x="true" and flag="true", but the loop doesn't end. What is wrong with the script? How can I properly invert a boolean variable?
There are two errors in your script. The first is that you need a space between ! and $flag , otherwise the shell looks for a command called !$flag . The second error is that -eq is for integer comparisons, but you're using it on a string. Depending on your shell, either you'll see an error message and the loop will continue forever because the condition [ "$x" -eq "true" ] cannot be true, or every non-integer value will be treated as 0 and the loop will exit if you enter any string (including false ) other than a number different from 0. While ! $flag is correct, it's a bad idea to treat a string as a command. It would work, but it would be very sensitive to changes in your script, since you'd need to make sure that $flag can never be anything but true or false . It would be better to use a string comparison here, like in the test below. flag=false while [ "$flag" != "true" ] do read x if [ "$x" = "true" ] then flag=true fi echo "${x} : ${flag}" done There's probably a better way to express the logic you're after. For example, you could make an infinite loop and break it when you detect the termination condition. while true; do read -r x if [ "$x" = "true" ]; then break; fi echo "$x: false" done
{ "source": [ "https://unix.stackexchange.com/questions/24500", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12364/" ] }
24,509
I'm looking for the simplest method to print the longest line in a file. I did some googling and surprisingly couldn't seem to find an answer. I frequently print the length of the longest line in a file, but I don't know how to actually print the longest line. Can anyone provide a solution to print the longest line in a file? Thanks in advance.
cat ./text | awk ' { if ( length > x ) { x = length; y = $0 } }END{ print y }' UPD : summarizing all the advices in the comments awk 'length > max_length { max_length = length; longest_line = $0 } END { print longest_line }' ./text
{ "source": [ "https://unix.stackexchange.com/questions/24509", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8713/" ] }
24,557
I want to find all directories whose names match a specific string so I can do another find on the files contained within. I don't want to waste time on ./my-search-term/dir/my-search-term etc. How can I stop recursing when I've found the first my-search-term directory?
The -prune action makes find not recurse into the directory. You can combine it with another action such as -exec (the order of -prune and -exec doesn't matter, as long as -prune is executed either way). find . -name my-search-term -prune -exec find {} … \; Note that nesting find inside a find -exec can be a little problematic: you can't use -exec in the inner find , because the terminator would be seen as a terminator by the outer find . You can work around that by invoking a shell, but beware of quoting. find . -name my-search-term -prune -exec sh -c ' find "$@" … -exec … {\} + ' _ {} +
{ "source": [ "https://unix.stackexchange.com/questions/24557", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9738/" ] }
24,565
Several command line tools use the -h or --human-readable option to print file size in a human readable format (i.e., 36G vs 37550836 ). Why is this option needed and not the default? Aren't these tools mainly for output to humans?
Because they didn't exist originally, and the default behavior is backwards compatible. Also, because they don't exist on all unix variants, and the default behavior is compatible with other unix variants. For many tools, because they are intended to be parseable by other tools. This is rarely the case for ls , but parsing the output of du or df is relatively common. (Mind, for df , you should use df -P when parsing.) Because some humans prefer the 37550836 format, because when you see a bunch of such numbers, their relative size is visually clear (number of digits).
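As a small illustration of the "parseable by other tools" point, the plain numeric output is what keeps quick pipelines like these working, whereas -h would hand awk strings such as 36G that it would silently truncate to 36 (paths are examples):

# free space, in 1024-byte blocks, on the filesystem holding /home
df -P /home | awk 'NR==2 {print $4}'
# size of the current directory in kilobytes, for feeding to another script
du -sk . | awk '{print $1}'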
{ "source": [ "https://unix.stackexchange.com/questions/24565", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6234/" ] }
24,586
Usually when I find a command I want to alias, I echo it to my .bashrc like so: [up button pressed to last command, then line edited so that it reads] $echo "command-i-just-did" >> ~/.bashrc There may be a better way to do this. But anyway, just now I overwrote the entire .rc file by using a single chevron. However, since the .bashrc is still current, it's still accepting my old aliases (for now of course). So is there a way to recover it?
alias without parameter outputs the definitions of currently defined aliases. declare -f outputs the definitions of currently defined functions. export -p outputs the definitions of currently exported variables. All those commands output definitions ready to be reused; you can redirect their outputs directly to a new ~/.bashrc . All lists will contain a lot of elements defined elsewhere, for example in /etc/profile and /etc/bash_completion . So you will have to clean up the list manually.
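Putting that together, a one-shot dump might look like this; the scratch file name is arbitrary, and you review it before letting it replace anything:

{ alias; declare -f; export -p; } > ~/bashrc.recovered
# edit ~/bashrc.recovered, remove what came from /etc/profile and friends, then:
# mv ~/bashrc.recovered ~/.bashrc

Do this in the still-open shell that has your old definitions, before you close it.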
{ "source": [ "https://unix.stackexchange.com/questions/24586", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1389/" ] }
24,598
I am using this command on Ubuntu but it is starting on port 8080 and I don't have another server running so I'd like it to start on port 80. I saw ways that you could set up a bash script to do something like this, but isn't there a command line flag or something simpler to specify the port? python -m SimpleHTTPServer
sudo python -m SimpleHTTPServer 80 for python 3.x version, you may need : sudo python -m http.server 80 Ports below 1024 require root privileges. As George added in a comment, running this command as root is not a good idea - it opens up all kinds of security vulnerabilities. However, it answers the question.
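If you would rather not run Python as root at all, one common Linux-specific workaround (a sketch, not the approach above) is to keep the server on an unprivileged port and redirect port 80 to it:

# run the server unprivileged on 8080
python -m SimpleHTTPServer 8080 &
# needs root once, but the Python process itself stays unprivileged
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080

Note that PREROUTING only affects traffic arriving from other machines; connections from the local machine itself would need a similar rule in the OUTPUT chain.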
{ "source": [ "https://unix.stackexchange.com/questions/24598", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
24,626
I often want to make some quick date calculations, such as: What is the difference between these two dates? What is the date n weeks after this other date? I usually open a calendar and count the days, but I think there should be a program/script that I can use to do these kinds of calculations. Any suggestions?
The "n weeks after a date" is easy with GNU date(1): $ date -d 'now + 3 weeks' Tue Dec 6 23:58:04 EST 2011 $ date -d 'Aug 4 + 3 weeks' Thu Aug 25 00:00:00 EST 2011 $ date -d 'Jan 1 1982 + 11 weeks' Fri Mar 19 00:00:00 EST 1982 I don't know of a simple way to calculate the difference between two dates, but you can wrap a little logic around date(1) with a shell function. datediff() { d1=$(date -d "$1" +%s) d2=$(date -d "$2" +%s) echo $(( (d1 - d2) / 86400 )) days } $ datediff '1 Nov' '1 Aug' # Note: answer should be 92 days but in my timezone, DST starts between the dates. 91 days Swap d1 and d2 if you want the date calculation the other way, or get a bit fancier to make it not matter. Furthermore, in case there is a non-DST to DST transition in the interval, one of the days will be only 23 hours long; you can compensate by adding ½ day to the sum. echo $(( (((d1-d2) > 0 ? (d1-d2) : (d2-d1)) + 43200) / 86400 )) days
{ "source": [ "https://unix.stackexchange.com/questions/24626", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10788/" ] }
24,630
If I have a large file and need to split it into 100 megabyte chunks I will do split -b 100m myImage.iso That usually gives me something like xaa xab xac xad And to get them back together I have been using cat x* > myImage.iso Seems like there should be a more efficient way than reading through each line of code in a group of files with cat and redirecting the output to a new file. Like a way of just opening two files, removing the EOF marker from the first one, and connecting them - without having to go through all the contents. Windows/DOS has a copy command for binary files. The help mentions that this command was designed to be able to combine multiple files. It works with this syntax: ( /b is for binary mode) copy /b file1 + file2 + file3 outputfile Is there something similar or a better way to join large files on Linux than cat? Update It seems that cat is in fact the right way and best way to join files. Glad to know I was using the right command all along :) Thanks everyone for your feedback.
That's just what cat was made for. Since it is one of the oldest GNU tools, I think it's very unlikely that any other tool does that faster/better. And it's not piping - it's only redirecting output.
{ "source": [ "https://unix.stackexchange.com/questions/24630", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
24,684
I was skimming through an /etc/rc.d/init.d/sendmail file (I know this is hardly ever used, but I'm studying for an exam), and I've become a bit confused about the && and the || operators. I've read where they can be used in statements such as: if [ test1 ] && [ test2 ]; then echo "both tests are true" elif [ test1 ] || [ test2 ]; then echo "one test is true" fi However, this script shows single line statements such as: [ -z "$SMQUEUE" ] && SMQUEUE="QUEUE" [ -f /usr/sbin/sendmail ] || exit 0 These seem to be using the && and || operators to elicit responses based on tests, but I haven't been able to dig up documenation regarding this particular use of these operators. Can anyone explain what these do in this particular context?
The right side of && will only be evaluated if the exit status of the left side is zero (i.e. true). || is the opposite: it will evaluate the right side only if the left side exit status is non-zero (i.e. false). You can consider [ ... ] to be a program with a return value. If the test inside evaluates to true, it returns zero; it returns nonzero otherwise. Examples: $ false && echo howdy! $ true && echo howdy! howdy! $ true || echo howdy! $ false || echo howdy! howdy! Extra notes: If you do which [ , you might see that [ actually does point to a program! It's usually not actually the one that runs in scripts, though; run type [ to see what actually gets run. If you want to try using the program, just give the full path like so: /bin/[ 1 = 1 .
{ "source": [ "https://unix.stackexchange.com/questions/24684", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12041/" ] }
24,694
Related subject: blocking facebook.com outside facebook.com domain This is from default.filter ################################################################################# # # shockwave-flash: Kill embedded Shockwave Flash objects. # Note: Better just block "/.*\.swf$"! # ################################################################################# FILTER: shockwave-flash Kill embedded Shockwave Flash objects. s|<object [^>]*macromedia.*</object>|<!-- Squished Shockwave Object -->|sigU s|<embed [^>]*(application/x-shockwave-flash\|\.swf).*>(.*</embed>)?|<!-- Squished Shockwave Flash Embed -->|sigU This is how you implement it in the .action file ############################################################################# # Kill embedded Shockwave SWF objects ############################################################################# {+filter{shockwave-flash}} .funny-games.biz/ Works fine, but... I am failing to achieve my wanted result .filter: ################################################################################# # # trace-widget: Get rid of particularly annoying so-called sharing buttons. # ################################################################################# FILTER: trace-widget Kill embedded spying buttons. s|<script [^>]*.twitter.*</script>|<!-- Squished Twitter Object -->|sigU .action: #---------------------------------------------------------------------------- # Deny access for Facebook Google and Twitter scripts #---------------------------------------------------------------------------- {+filter{trace-widget}} / What is wrong with it? I am puzzled on how it can be applied for frames and scripts like: Twitter: <script type="text/javascript" src="http://platform.twitter.com/widgets.js"></script> Google: g+ analytics etc. <g:plusone annotation="inline"></g:plusone> <script type="text/javascript"> (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })(); </script> Facebook: <script src="http://static.ak.fbcdn.net/connect.php/js/FB.Share" type="text/javascript"></script> <a name="fb_share" type="button_count" share_url="..." href="http://www.facebook.com/sharer.php">Share</a> <iframe frameborder='0' id='facebook_like' scrolling='no' src='https://www.facebook.com/plugins/like.php?href=...'></iframe> Your help is deeply appreciated. Update (working filter rules) Facebook: s|<a [^>]*(sharer.php).*>(.*</a>)|<!-- Squished Facebook Object -->|sigU s|<iframe [^>]*(like.php).*>(.*</iframe>)|<!-- Squished Facebook Frame -->|sigU (this would be better if a facebook.com and fbcdn.net domains be added to these rules so that it won't block any other PHP or JS or other contents of the current website) Google: (not always working - cutroni.com) s|<script [^>]*(plusone.js).*>(.*</script>)|<!-- Squished Google Button -->|sigU Twitter: (work with fenopy.eu but not with The Pirate Bay HTTPS pages https://thepiratebay.org/ ) (not always working - cutroni.com) s|<script [^>]*(widgets.js).*>(.*</script>)|<!-- Squished Twitter Object -->|sigU Your help, for a better code, is deeply appreciated. Edit: Not f'd — you won't find me on Facebook fsf.org/fb (Just for fun xD) s|<a [^>]*(sharer.php).*>(.*</a>)|<a href="http://www.fsf.org/fb"><img src="http://img804.imageshack.us/img804/7822/dislike50.png" alt="Not f'd" /></a>|sigU
The right side of && will only be evaluated if the exit status of the left side is zero (i.e. true). || is the opposite: it will evaluate the right side only if the left side exit status is non-zero (i.e. false). You can consider [ ... ] to be a program with a return value. If the test inside evaluates to true, it returns zero; it returns nonzero otherwise. Examples: $ false && echo howdy! $ true && echo howdy! howdy! $ true || echo howdy! $ false || echo howdy! howdy! Extra notes: If you do which [ , you might see that [ actually does point to a program! It's usually not actually the one that runs in scripts, though; run type [ to see what actually gets run. If you wan to try using the program, just give the full path like so: /bin/[ 1 = 1 .
{ "source": [ "https://unix.stackexchange.com/questions/24694", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12453/" ] }
24,704
I recently went through Unpacking kernel-source rpm off-system (OpenSuse)? ; and as it took > 10 h on my machine, imagine my surprise that after doing the process described there, I find no Module.symvers anywhere! When I search for "generate Module.symvers", I get this: NOTE: "modules_prepare" will not build Module.symvers even if CONFIG_MODVERSIONS is set; therefore, a full kernel build needs to be executed to make module versioning work. ( Linux Kernel Documentation :: kbuild : modules.txt ) ... but I don't really get it - didn't the kernel get built in the previous step (described in the link given above)? I sure know it took > 10 h for CC to generate a whole bunch of *.o files, and LD to link them, so something must have been built. But if so, where, then, is Module.symvers? In more explicit terms, exactly what command should I call to generate Module.symvers? I know that make prepare will not work - but what is the command that will?
The Module.symvers is (re)generated when you (re)compile modules. Run make modules , and you should get a Module.symvers file at the root of the kernel tree. Note that if you only ran make and not make modules , you haven't built any modules yet. The symbols from the kernel itself ( vmlinux or one of the architecture-dependent image formats) are in System.map .
{ "source": [ "https://unix.stackexchange.com/questions/24704", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8069/" ] }
24,711
I'm writing a program that will test programs written by students. I'm afraid that I can't trust them and I need to make sure that it won't end up badly for the computer running it. I was thinking about making some crash-test user with limited access to system resources and running programs as that user, but from what I have found on the net so far, making a virtual system would be the safest option... Can someone help me with choosing the right approach? Security is a big concern for me. On the other hand, I don't want a solution that is overkill, or to waste much time trying to learn something I don't really need.
Virtual machine can give you highest security without reboot, but lowest performance. Another option, for even higher security than a virtual machine: boot a "live" CD/DVD/pendrive without access to the hard drive (temporarily disable the HDD in BIOS; if you can't, at least do not mount the drive / unmount it, if mounted automatically - but this is much less secure) A docker container is a bit less secure alternative to a full virtual machine. Probably the crucial difference (in terms of security) between these two is that systems running in docker actually use the kernel of your host system. There are programs such as isolate that will create a special, secured environment - this is generally called a sandbox - those are typically chroot-based, with additional supervision - find one that fits you. A simple chroot will be least secure (esp. in regards to executing programs), though maybe a little faster, but ... You'll need to build/copy a whole separate root tree and use bind mounts for /dev etc. (see Note 1 below!). So in general, this approach cannot be recommended, especially if you can use a more secure, and often easier to set up, sandbox environment. Note 0: To the aspect of a "special user", like the nobody account: This gives hardly any security, much less than even a simple chroot . A nobody user can still access files and programs that have read and execute permissions set for other . You can test it with su -s /bin/sh -c 'some command' nobody . And if you have any configuration/history/cache file accessible to anybody (by a mistake or minor security hole), a program running with nobody 's permissions can access it, grep for confidential data (like "pass=" etc.) and in many ways send it over the net or whatever. Note 1: As Gilles pointed in a comment below , a simple chroot environment will give very little security against exploits aiming at privilege escalation. A sole chroot makes sense security-wise, only if the environment is minimal, consisting of security-confirmed programs only (but there still remains the risk of exploiting potential kernel-level vulnerabilities), and all the untrusted programs running in the chroot are running as a user who does not run any process outside the chroot. What chroot does prevent against (with the restrictions mentioned here), is direct system penetration without privilege escalation. However, as Gilles noted in another comment , even that level of security might get circumvented, allowing a program to break out of the chroot.
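If you do end up with the dedicated-user route despite the caveats in Note 0, at least pair it with resource limits and a timeout, along these lines; the user name, limits and program path are all placeholders:

#!/bin/bash
# run one submission with CPU, memory, process and wall-clock caps
sudo -u sandboxuser bash -c '
    ulimit -t 5          # at most 5 seconds of CPU time
    ulimit -v 262144     # at most ~256 MB of virtual memory (bash counts in KB)
    ulimit -f 1024       # files it writes may not exceed ~1 MB
    ulimit -u 50         # at most 50 processes, to blunt fork bombs
    timeout 10 ./student_program < input.txt > output.txt
'

This limits resource abuse (runaway CPU, memory or output), but it does nothing about the information-disclosure problems described above, so treat it as a complement to, not a substitute for, a proper sandbox.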
{ "source": [ "https://unix.stackexchange.com/questions/24711", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12459/" ] }
24,721
I want to compare two floating point numbers in a shell script. The following code is not working: #!/bin/bash min=12.45 val=10.35 if (( $val < $min )) ; then min=$val fi echo $min
Bash does not understand floating point arithmetic. It treats numbers containing a decimal point as strings. Use awk or bc instead. #!/bin/bash min=12.45 val=10.35 if [ 1 -eq "$(echo "${val} < ${min}" | bc)" ] then min=${val} fi echo "$min" If you intend to do a lot of math operations, it's probably better to rely on python or perl.
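The same test written with awk instead of bc, since both are mentioned above; awk's exit status is used directly as the test:

#!/bin/bash
min=12.45
val=10.35
# exit 0 (true) when val < min, exit 1 otherwise
if awk "BEGIN {exit !($val < $min)}"
then
    min=$val
fi
echo "$min"

The double quotes let the shell substitute the two numbers into the awk program before awk runs.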
{ "source": [ "https://unix.stackexchange.com/questions/24721", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12188/" ] }
24,739
Suppose I want to execute a sequence of four commands that I have executed before. If the first one is 432 in the command-history, then I could do: $ !432; !433; !434; !435 I'm curious, is there a more efficient way to accomplish this?
If it refers to commands run just recently, a more efficient way is to reference them with negative numbers: !-4; !-3; !-2; !-1 Also, once you do it, your last history entry will contain the whole chain of commands, so you can repeat it with !! . Edit: If you haven't already, get familiar with the great builtin function fc , mentioned by Gilles . (Use help fc .) It turns out that you can also use negative numbers with it, so you could do the same as above using eval "`fc -ln -4 -1`" This has one caveat, though: after this, the eval line is stored in the history as the last command. So if you run this again, you'll fall into a loop! A safer way of doing this is to use the default fc operation mode: forwarding the selected range of commands to an editor and running them once you exit from it. Try: fc -4 -1 You can even reverse the order of the range of commands: fc -1 -4
{ "source": [ "https://unix.stackexchange.com/questions/24739", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5045/" ] }
24,910
I would like to specify a range of files (in lexicographical order) with two integers (e.g. 2 to 57) in zsh by globbing. For example: "pick the files 2 to 57 in lexicographical order under the path that matches some globbing pattern". I thought using square brackets would do it for x in /foo/bar/*[2-57]; do print $x; done but zsh apparently thinks I am asking for the files 2 to 5 (or something like that) instead of files 2 to 57 . Any thoughts why? How can I accomplish this?
[2-57] is a character set consisting of 2 , 3 , 4 , 5 and 7 , in zsh and every other wildcard and regexp syntax out there. Your glob pattern *[2-57] matches every filename whose last character is one of those five digits. I think you are misremembering the syntax of the [m,n] glob qualifier . Glob qualifiers always go in parentheses at the end of the pattern, and the range separator is a comma. The pattern *([2,57]) expands to the 2nd, 3rd, …, 57th matches. The default expansion order is lexicographic (with some special magic to sort numbers in numeric order if the numeric_glob_sort option is set); you can control it with the o or O glob qualifier (e.g. *(om[2,57]) to match the 57 most recent file except the one most recent file). for x in /foo/bar/*([2,57]); do print $x; done Not what you asked for, but related and possibly useful to future readers: if you want to enumerate files 2 to 57 whether they exist or not, you can use a range brace expression . This feature also exists in bash and ksh. echo hello{2..57} And if you want to match files whose name contains a number between 2 and 57, you can use the pattern <2-57> . This is specific to zsh. $ ls file1 file2 file3 file57 file58 $ echo file<2-57> file2 file3 file57 Note that a pattern like *<2-57> is likely not to do what you expect, because the * could match digits too. For example, file58 matches *<2-57> , with file5 matching the * part and 8 matching the <2-57> part. The pattern *[^0-9]<2-57> avoids this issue.
{ "source": [ "https://unix.stackexchange.com/questions/24910", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
24,926
I know about vnstat and that tracks over the hour/day/months. I would like to see the network speed right now. As in for a duration of a second or at most a minute. Kind of like how top can show me the current CPU usage. How might I check that? My port is not called eth0 , it's em2 , so I hope that doesn't conflict with anything.
[2-57] is a character set consisting of 2 , 3 , 4 , 5 and 7 , in zsh and every other wildcard and regexp syntax out there. Your glob pattern *[2-57] matches every filename whose last character is one of those five digits. I think you are misremembering the syntax of the [m,n] glob qualifier . Glob qualifiers always go in parentheses at the end of the pattern, and the range separator is a comma. The pattern *([2,57]) expands to the 2nd, 3rd, …, 57th matches. The default expansion order is lexicographic (with some special magic to sort numbers in numeric order if the numeric_glob_sort option is set); you can control it with the o or O glob qualifier (e.g. *(om[2,57]) to match the 57 most recent file except the one most recent file). for x in /foo/bar/*([2,57]); do print $x; done Not what you asked for, but related and possibly useful to future readers: if you want to enumerate files 2 to 57 whether they exist or not, you can use a range brace expression . This feature also exists in bash and ksh. echo hello{2..57} And if you want to match files whose name contains a number between 2 and 57, you can use the pattern <2-57> . This is specific to zsh. $ ls file1 file2 file3 file57 file58 $ echo file<2-57> file2 file3 file57 Note that a pattern like *<2-57> is likely not to do what you expect, because the * could match digits too. For example, file58 matches *<2-57> , with file5 matching the * part and 8 matching the <2-57> part. The pattern *[^0-9]<2-57> avoids this issue.
{ "source": [ "https://unix.stackexchange.com/questions/24926", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
24,931
echo '<h1>hello, world</h1>' | firefox cat index.html | firefox These commands don't work. If firefox can read stdin, I can send HTML to firefox via a pipe. Is it possible to make firefox read stdin?
The short answer is, you're better off writing a temporary file and opening that. Getting pipes to work properly is more complicated and probably won't give you any extra advantages. That said, here's what I've found. If your firefox command is actually starting Firefox instead of talking with an already-running Firefox instance, you can do this: echo '<h1>hello, world</h1>' | firefox /dev/fd/0 Which tells Firefox explicitly to read its standard input, which is where the pipe is putting its data. But if Firefox is already running, the firefox command is just going to pass that name to the main Firefox process, which will read its own standard input, which probably won't give it anything and certainly isn't connected to your pipe. Furthermore, when reading from a pipe, Firefox buffers things pretty heavily, so it's not going to update the page each time you give it a new line of HTML, if that's what you're going for. Try closing Firefox and running: cat | firefox /dev/fd/0 (N.B. you do actually need the cat here.) Paste some long lines into your shell window repeatedly until Firefox decides to update the page, and you can see how much data it takes. Now send an End-Of-File signal by hitting Ctrl+D on a new line, and watch Firefox update instantly. But then you can't add any more data. So best is probably: echo '<h1>hello, world</h1>' >my_temporary_file; firefox my_temporary_file
{ "source": [ "https://unix.stackexchange.com/questions/24931", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7971/" ] }
24,952
How can I immediately detect when new files were added to a folder within a bash script? I would like the script to process files as soon as they are created in the folder. Are there any methods aside from scheduling a cron job that checks for new files each minute or so?
You should consider using inotifywait , as an example: inotifywait -m /path -e create -e moved_to | while read dir action file; do echo "The file '$file' appeared in directory '$dir' via '$action'" # do something with the file done In Ubuntu, inotifywait is provided by the inotify-tools package . As of version 3.13 (current in Ubuntu 12.04) inotifywait will include the filename without the -f option. Older versions may need to be coerced. What is important to note is that the -e option to inotifywait is the best way to do event filtering. Also, your read command can assign the positional output into multiple variables that you can choose to use or ignore. There is no need to use grep/sed/awk to preprocess the output.
{ "source": [ "https://unix.stackexchange.com/questions/24952", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11536/" ] }
24,954
The xargs command always confuses me. Is there a general rule for it? Consider the two examples below: $ \ls | grep Cases | less prints the files that match 'Cases', but changing the command to touch will require xargs : $ \ls | grep Cases | touch touch: missing file operand Try `touch --help' for more information. $ \ls | grep Cases | xargs touch
The difference is in what data the target program is accepting. If you just use a pipe, it receives data on STDIN (the standard input stream) as a raw pile of data that it can sort through one line at a time. However some programs don't accept their commands on standard in, they expect it to be spelled out in the arguments to the command. For example touch takes a file name as a parameter on the command line like so: touch file1.txt . If you have a program that outputs filenames on standard out and want to use them as arguments to touch , you have to use xargs which reads the STDIN stream data and converts each line into space separated arguments to the command. These two things are equivalent: # touch file1.txt # echo file1.txt | xargs touch Don't use xargs unless you know exactly what it's doing and why it's needed. It's quite often the case that there is a better way to do the job than using xargs to force the conversion. The conversion process is also fraught with potential pitfalls like escaping and word expansion etc.
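For the touch example from the question, one of those better ways is to skip parsing ls output altogether and let find hand the names to touch itself, which sidesteps the whitespace and globbing pitfalls:

find . -maxdepth 1 -name '*Cases*' -exec touch {} +

If you really do need xargs with arbitrary file names, the usual pairing is find ... -print0 | xargs -0 touch (with GNU tools).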
{ "source": [ "https://unix.stackexchange.com/questions/24954", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/446/" ] }
25,005
Possible Duplicate: How to de-unzip, de-tar -xvf — de-unarchive in a messy folder? This is a pretty annoying occurrence. Sometimes, I download an archive ( tar.gz, tar.bz2, zip, rar, etc ) and run tar xf [file] (or similar) in the file's directory. In rare occasions, all the files extract in the current working directory instead of a sub-directory. This can lead to hundreds of files and hundreds of patterns that can't simply be removed using a pattern matching solution. Is there a way to get the file contents of an archive and then delete all files on that list in the current working directory?
You can list the content of the archive and then pass the list to rm using xargs Example for a tarball (test it without the rm first): tar tfz archive.tar.gz | xargs rm -rf
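One caveat: plain xargs splits on whitespace, so the command above will misbehave on names containing spaces. With GNU xargs you can split on newlines instead, or use a read loop; in either case, test by replacing rm -rf with echo first:

# GNU xargs: treat each line of tar's output as one name
tar tfz archive.tar.gz | xargs -d '\n' rm -rf
# shell loop alternative
tar tfz archive.tar.gz | while IFS= read -r f; do rm -rf -- "$f"; done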
{ "source": [ "https://unix.stackexchange.com/questions/25005", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6266/" ] }
25,049
More than once I've accidentally run a number of commands and polluted my bash history. How do I close my terminal without saving my bash history? I'm using Fedora.
Your shell's history is saved in the file indicated by the HISTFILE variable. So: unset HISTFILE This also applies to zsh, but not to ksh which keeps saving to the file indicated by $HISTFILE when the shell starts (and conversely, you decide to save your history in ksh once you've started the shell).
{ "source": [ "https://unix.stackexchange.com/questions/25049", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
25,055
To get to my machine in my office, at the moment I am doing this: me@home:~$ ssh unix.university.com me@unix:~$ ssh unix.department.university.com me@department:~$ ssh office-machine.department.university.com me@office-machine:~$ echo "This is very annoying" Is there an easy way of automating this process, perhaps a single command that I can use at my end?
You can use the ssh client to execute ssh on the remote machine upon login. ssh -t unix.university.com \ ssh -t unix.department.university.com \ ssh -t office-machine.department.university.com (The reason I include -t in the invocations is because ssh was giving me errors re: stdin not being a terminal when I tried it on my own machine; your machine may be different.) When you exit from the last shell, the process will chain-exit, saving you typing Ctrl-D over and over again.
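A different way to automate the same chain, provided the hops run a reasonably recent OpenSSH (with -W support) and you are allowed to set it up, is to describe it once in ~/.ssh/config; the host aliases here are made up:

# ~/.ssh/config
Host uni
    HostName unix.university.com
Host dept
    HostName unix.department.university.com
    ProxyCommand ssh -W %h:%p uni
Host office
    HostName office-machine.department.university.com
    ProxyCommand ssh -W %h:%p dept

After that, a single ssh office lands you on the office machine, and scp or rsync to office work through the same tunnel.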
{ "source": [ "https://unix.stackexchange.com/questions/25055", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6723/" ] }
25,138
I have the following file: id name age 1 ed 50 2 joe 70 I want to print just the id and age columns. Right now I just use awk : cat file.tsv | awk '{ print $1, $3 }' However, this requires knowing the column numbers. Is there a way to do it where I can use the name of the column (specified on the first row), instead of the column number?
Maybe something like this: $ cat t.awk NR==1 { for (i=1; i<=NF; i++) { ix[$i] = i } } NR>1 { print $ix[c1], $ix[c2] } $ awk -f t.awk c1=id c2=name input 1 ed 2 joe $ awk -f t.awk c1=age c2=name input 50 ed 70 joe If you want to specify the columns to print on the command line, you could do something like this: $ cat t.awk BEGIN { split(cols,out,",") } NR==1 { for (i=1; i<=NF; i++) ix[$i] = i } NR>1 { for(i=1; i <= length(out); i++) printf "%s%s", $ix[out[i]], OFS print "" } $ awk -f t.awk -v cols=name,age,id,name,id input ed 1 ed 50 1 joe 2 joe 70 2 (Note the -v switch to get the variable defined in the BEGIN block.)
{ "source": [ "https://unix.stackexchange.com/questions/25138", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12627/" ] }
25,139
I have noticed that the Russian timezone is not up-to-date. The GMT offset is set to +3 hours now, whereas the current GMT offset is +4 hours. Could someone tell me how I can manually edit the tzdata file to bring it up to date?
Maybe something like this: $ cat t.awk NR==1 { for (i=1; i<=NF; i++) { ix[$i] = i } } NR>1 { print $ix[c1], $ix[c2] } $ awk -f t.awk c1=id c2=name input 1 ed 2 joe $ awk -f t.awk c1=age c2=name input 50 ed 70 joe If you want to specify the columns to print on the command line, you could do something like this: $ cat t.awk BEGIN { split(cols,out,",") } NR==1 { for (i=1; i<=NF; i++) ix[$i] = i } NR>1 { for(i=1; i <= length(out); i++) printf "%s%s", $ix[out[i]], OFS print "" } $ awk -f t.awk -v cols=name,age,id,name,id input ed 1 ed 50 1 joe 2 joe 70 2 (Note the -v switch to get the variable defined in the BEGIN block.)
{ "source": [ "https://unix.stackexchange.com/questions/25139", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
25,173
I know that I can use something like cat test.txt | pr -w 80 to wrap lines to 80 characters wide, but that puts a lot of space on the top and bottom of the printed lines and it does not work right on some systems What's the best way to force a text file with long lines to be wrapped at a certain width? Bonus points if you can keep it from breaking words.
You are looking for fold -w 80 -s text.txt -w sets the maximum line width (80 is the usual value). -s makes it break lines at spaces rather than in the middle of words, which covers the bonus requirement. This is the standard invocation, but there are other systems which need "-c" instead of "-w".
{ "source": [ "https://unix.stackexchange.com/questions/25173", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10287/" ] }
25,203
I've moved a server from one mainboard to another due a disk controller failure. Since then I've noticed that constantly a 25% of one of the cores goes always to IRQ however I haven't managed myself to know which is the IRQ responsible for that. The kernel is a Linux 2.6.18-194.3.1.el5 (CentOS). mpstat -P ALL shows: 18:20:33 CPU %user %nice %sys %iowait %irq %soft %steal %idle intr/s 18:20:33 all 0,23 0,00 0,08 0,11 6,41 0,02 0,00 93,16 2149,29 18:20:33 0 0,25 0,00 0,12 0,07 0,01 0,05 0,00 99,49 127,08 18:20:33 1 0,14 0,00 0,03 0,04 0,00 0,00 0,00 99,78 0,00 18:20:33 2 0,23 0,00 0,02 0,03 0,00 0,00 0,00 99,72 0,02 18:20:33 3 0,28 0,00 0,15 0,28 25,63 0,03 0,00 73,64 2022,19 This is the /proc/interrupts cat /proc/interrupts CPU0 CPU1 CPU2 CPU3 0: 245 0 0 7134094 IO-APIC-edge timer 8: 0 0 49 0 IO-APIC-edge rtc 9: 0 0 0 0 IO-APIC-level acpi 66: 67 0 0 0 IO-APIC-level ehci_hcd:usb2 74: 902214 0 0 0 PCI-MSI eth0 169: 0 0 79 0 IO-APIC-level ehci_hcd:usb1 177: 0 0 0 7170885 IO-APIC-level ata_piix, b4xxp 185: 0 0 0 59375 IO-APIC-level ata_piix NMI: 0 0 0 0 LOC: 7104234 7104239 7104243 7104218 ERR: 0 MIS: 0 How can I identify which IRQ is causing the high CPU usage? Edit: Output from dmesg | grep -i b4xxp wcb4xxp 0000:30:00.0: probe called for b4xx... wcb4xxp 0000:30:00.0: Identified Wildcard B410P (controller rev 1) at 00012000, IRQ 177 wcb4xxp 0000:30:00.0: VPM 0/1 init: chip ver 33 wcb4xxp 0000:30:00.0: VPM 1/1 init: chip ver 33 wcb4xxp 0000:30:00.0: Hardware echo cancellation enabled. wcb4xxp 0000:30:00.0: Port 1: TE mode wcb4xxp 0000:30:00.0: Port 2: TE mode wcb4xxp 0000:30:00.0: Port 3: TE mode wcb4xxp 0000:30:00.0: Port 4: TE mode wcb4xxp 0000:30:00.0: Did not do the highestorder stuff wcb4xxp 0000:30:00.0: new card sync source: port 3
Well, since you're specifically asking how to know which IRQ is responsible for the number in mpstat , you can assume it's not the local interrupt timer (LOC), since those numbers are fairly equal, and yet mpstat shows some of those cpus at 0 %irq. That leaves IRQ 0, which is the system timer, and which you can't do anything about, and IRQ 177, which is tied to your b4xxp driver. My guess is that IRQ 177 would be your culprit. If this is causing a problem, and you would like to change the behavior your see, try: disabling the software that uses that card, and see if the interrupts decrease. removing that card from the system, and unloading the driver, and see if there's improvement. move that card to another slot and see if that helps. check for updated drivers or patches for the software. If it's not a problem, and you were just curious, then carry on. :)
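A quick way to confirm which counter is actually climbing while the load is present (a sketch, assuming the usual procps watch is available):
    watch -n1 'grep -E "CPU|177:" /proc/interrupts'
If the count on the 177 line grows at roughly the intr/s rate mpstat reports for CPU3, that IRQ is the one responsible.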
{ "source": [ "https://unix.stackexchange.com/questions/25203", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12669/" ] }
25,216
I'd like to see the absolute size in bytes of each file that has been compressed into single zip file. Having read the zip man page, I'm not sure that that utility can do it. This is on Mac OS X. Something like: $zip list myarchive.zip file1.jpg 100 bytes compressed 3000 bytes uncompressed file2.jpg 130 bytes compressed 3440 bytes uncompressed
You can use the unzip utility with the -v flag: unzip -v files.zip Archive: files.zip Length Method Size Cmpr Date Time CRC-32 Name -------- ------ ------- ---- ---------- ----- -------- ---- 0 Stored 0 0% 11-23-2011 15:02 00000000 file1 0 Stored 0 0% 11-23-2011 15:02 00000000 file2 -------- ------- --- ------- 0 0 0% 2 files Note: The file sizes here are 0 because I made test files of zero length.
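If the Info-ZIP zipinfo utility is installed (it usually ships alongside unzip, including on OS X), its long listing should show both the compressed and uncompressed size per file; a sketch:
    zipinfo -l myarchive.zip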
{ "source": [ "https://unix.stackexchange.com/questions/25216", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12676/" ] }
25,236
I have VirtualBox instance of Centos 5. The screen size is quite small (800*600) and I'd like to increase it to 1280*1080. Under the Gnome preferences for "Screen Resolution" I only get the option for 600*800 or 640*480. I've tried editing my xorg.conf (based on this tutorial http://paulsiu.wordpress.com/2008/09/08/creating-and-managing-centos-virtual-machine-under-virtualbox/ ) but it doesn't seem to have made a difference. Here is a snippet from the edited section: Section "Screen" Identifier "Screen0" Device "Card0" Monitor "Monitor0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 Modes "1280x800" EndSubSection EndSection Does anyone know how to do this?
A maximum resolution of 800x600 suggests that your X server inside the virtual machine is using the SVGA driver. SVGA is the highest resolution for which there is standard support; beyond that, you need a driver. VirtualBox emulates a graphics adapter that is specific to VirtualBox, it does not emulate a previously existing hardware component like most other subsystems. The guest additions include a driver for that adapter. Insert the guest additions CD from the VirtualBox device menu, then run the installation program. Log out, restart the X server (send Ctrl+Alt+Backspace from the VirtualBox menu), and you should have a screen resolution that matches your VirtualBox window. If you find that you still need manual tweaking of your xorg.conf , the manual has some pointers. There's a limit to how high you can get, due to the amount of memory you've allocated to the graphics adapter in the VirtualBox configuration. 8MB will give you up to 1600x1200 in 32 colors. Going beyond that is mostly useful if you use 3D.
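The video memory is set per VM from the host side; a sketch of raising it to 32 MB with VBoxManage, assuming the VM is called "centos5" (a made-up name, substitute your own) and is powered off:
    VBoxManage modifyvm "centos5" --vram 32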
{ "source": [ "https://unix.stackexchange.com/questions/25236", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10337/" ] }
25,311
Is it possible to create a target directory, similar to mkdir -p , where I can define a non-existent target directory within my tar command, and tar will create the directory for me? I know I can redirect the output to a directory using tar -C /target/dir , but this doesn't work if the target directory is non-existent.
mkdir -p /target/dir && tar -xf archive.tar -C /target/dir Here archive.tar stands for whatever archive and extraction options you are using; mkdir -p creates the target directory (and any missing parents) if it does not exist yet, and the && runs tar only if that succeeded.
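If you do this often, a small shell function keeps it to one command (a sketch; the function name is made up):
    untar_into() {
        # usage: untar_into archive.tar /target/dir
        mkdir -p -- "$2" && tar -xf "$1" -C "$2"
    }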
{ "source": [ "https://unix.stackexchange.com/questions/25311", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8157/" ] }
25,327
If a run the watch command containing an alias, it will not expand the alias. I have tried both with single quote and double quotes, in fact given the following alias: # alias ll alias ll='ls -l --color=tty' The following command will fail # watch ll sh: ll: command not found Shouldn't command line expansion work in this case?
Aliases are only expanded as the first argument, or after another alias with a trailing space on the end of the command. From bash 's help alias : A trailing space in VALUE causes the next word to be checked for alias substitution when the alias is expanded. To do this, try the following: alias watch='watch ' alias ll='ls -l --color=tty' watch ll Bear in mind that some versions of watch strip colours by default, on some versions this can be stopped by using --color or -G .
{ "source": [ "https://unix.stackexchange.com/questions/25327", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10790/" ] }
25,340
I have a problem with the following wget command: wget -nd -r -l 10 http://web.archive.org/web/20110726051510/http://feedparser.org/docs/ It should download recursively all of the linked documents on the original web but it downloads only two files ( index.html and robots.txt ). How can I achieve recursive download of this web?
wget by default honours the robots.txt standard for crawling pages, just like search engines do, and for archive.org, it disallows the entire /web/ subdirectory. To override, use -e robots=off , wget -nd -r -l 10 -e robots=off http://web.archive.org/web/20110726051510/http://feedparser.org/docs/
{ "source": [ "https://unix.stackexchange.com/questions/25340", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6215/" ] }
25,344
I am redirecting grep results to a file, and then using cat to show its contents on the screen. I want to know how many lines of results I have in my results file and then add it to some counter. What will be the best way? Any relevant flag to grep or cat ?
If you have already collected the grep output in a file, you could output a numbered list with: cat -n myfile If you only want the number of lines, simply do: wc -l myfile There is absolutely no reason to do: cat myfile | wc -l ...as this needlessly does I/O (the cat ) that wc has to repeat. Besides, you have two processes where one suffices. If you want to grep to your terminal and print a count of the matches at the end, you can do: grep whatever myfile | tee /dev/tty | wc -l Note: /dev/tty is the controlling terminal. From the tty(4) man page : The file /dev/tty is a character file with major number 5 and minor number 0, usually of mode 0666 and owner.group root.tty . It is a synonym for the controlling terminal of a process, if any. In addition to the ioctl(2) requests supported by the device that tty refers to, the ioctl(2) request TIOCNOTTY is supported.
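If all you need is the number, grep can do the tallying itself: grep -c counts matching lines (not individual matches, should a line match more than once):
    grep -c whatever myfile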
{ "source": [ "https://unix.stackexchange.com/questions/25344", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
25,361
Why is a signed integer used to represent timestamps? There is a clearly defined start at 1970 that's represented as 0, so why would we need numbers before that? Are negative timestamps used anywhere?
Early versions of C didn't have unsigned integers. (Some programmers used pointers when they needed unsigned arithmetic.) I don't know which came first, the time() function or unsigned types, but I suspect the representation was established before unsigned types were universally available. And 2038 was far enough in the future that it probably wasn't worth worrying about. I doubt that many people thought Unix would still exist by then. Another advantage of a signed time_t is that extending it to 64 bits (which is already happening on some systems) lets you represent times several hundred billion years into the future without losing the ability to represent times before 1970. (That's why I oppose switching to a 32-bit unsigned time_t ; we have enough time to transition to 64 bits.)
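A small C sketch of why 2038 is the magic year for a signed 32-bit time_t (ctime prints local time, so the exact wall-clock time shown depends on your timezone):
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        printf("sizeof(time_t) = %zu bytes\n", sizeof(time_t));
        time_t t = 0x7fffffff;    /* the largest value a signed 32-bit counter can hold */
        printf("%s", ctime(&t));  /* 2038-01-19 03:14:07 UTC, shown in local time */
        return 0;
    }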
{ "source": [ "https://unix.stackexchange.com/questions/25361", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2742/" ] }
25,372
I have a script which calls two commands: long_running_command | print_progress The long_running_command prints progress but I'm unhappy with it. I'm using print_progress to make it nicer (namely, I print the progress in a single line). The problem: Connection a pipe to stdout also activates a 4K buffer, so the nice print program gets nothing ... nothing ... nothing ... a whole lot ... :) How can I disable the 4K buffer for the long_running_command (no, I do not have the source)?
You can use the unbuffer command (which comes as part of the expect package), e.g. unbuffer long_running_command | print_progress unbuffer connects to long_running_command via a pseudoterminal (pty), which makes the system treat it as an interactive process, therefore not using the 4-kiB buffering in the pipeline that is the likely cause of the delay. For longer pipelines, you may have to unbuffer each command (except the final one), e.g. unbuffer x | unbuffer -p y | z
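On systems with GNU coreutils there is also stdbuf, which asks the command's C library to line-buffer its stdout instead of block-buffering it; it only helps programs that use ordinary stdio buffering, so it is lighter-weight but less universal than unbuffer:
    stdbuf -oL long_running_command | print_progress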
{ "source": [ "https://unix.stackexchange.com/questions/25372", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9688/" ] }
25,425
I read the following in A User's Guide to the Z-Shell : A synonym for ‘true’ is ‘:’; it’s often used in this form to give arguments which have side effects but which shouldn’t be used — something like : ${param:=value} which is a common idiom in all Bourne shell derivatives. In the parameter expansion, $param is given the value value if it was empty before, and left alone otherwise. Since that was the only reason for the parameter expansion, you use : to ignore the argument. Actually, the shell blithely builds the command line — the colon, followed by whatever the value of $param is, whether or not the assignment happened — then executes the command; it just so happens that ‘:’ takes no notice of the arguments it was given. but I don't understand it. I get that : means true , but there are two colons in the expression. As a minor question, why is this idiom used so much in all Bourne shell derivatives? What purpose does it serve? Note: I am interested in what this idiom does in both bash and in zsh . Thanks
Let's break this down into pieces. This code runs the command : with some arguments. The command : does nothing and ignores its arguments. Therefore the whole command line does nothing, except whatever side effects happen in the arguments. The syntax ${parameter_name:=value} exists in all non-antique Bourne-style shells, including ash, bash, ksh and zsh. It sets the parameter to a default if necessary. It is equivalent to if [ -z "$parameter_name" ]; then parameter_name=value; fi … ${parameter_name} In other words, if parameter_name is not set or is set to an empty value, then set it to the indicated value; and then run the command, using the new parameter value. There is a variant, ${parameter_name=value} , which leaves the parameter empty if it was empty, only using the indicated value if the parameter was unset. You'll find this syntax documented under “parameter expansion” in the POSIX spec , and the dash, bash, ksh and zsh manuals. There are variations on this syntax, in particular ${parameter_name:-value} which let you use a default value for this expansion only, without assigning to the parameter. In summary, : ${parameter_name:=value} is a concise way of writing if [ -z "$parameter_name" ]; then parameter_name=value; fi
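A concrete, self-contained example of the idiom as it might appear in a script (EDITOR and the fallback vi are just illustrative choices):
    : "${EDITOR:=vi}"     # give EDITOR a default only if it is unset or empty
    echo "launching $EDITOR"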
{ "source": [ "https://unix.stackexchange.com/questions/25425", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
25,444
I just installed Ubuntu 10.04.3 to dual-boot with Windows. I can connect to the Internet using a wired connection, but not through wireless. The first time I booted, the wireless notification bar said "Device not ready". The next time I booted, the wireless notification didn't show any available networks, but did not say "Device not ready". The third time I booted, the Internet icon does not show at all. I am conjecturing that this is a driver / hardware issue, but don't have enough experience to know. My question is: What is the next troubleshooting step (if any) based on the steps I have already taken? Background Info: The hardware Wi-Fi button is in the on position :-) but the indicator light is not toggling with the button. i.e., when I have the wireless device in the on position while running Windows, the light is blue. When it's off, the light is orange. In Ubuntu, it is showing orange in both positions. When I log on to the system, I get a dialog box that says something like, 'The configuration defaults for GNOME Power Manager have not been installed correctly, etc.' (I don't know how this affects the process, but hopefully you do :-) Summary of troubleshooting steps (with outputs listed far below): Stepped through the pertinent troubleshooting steps at the Ubuntu Community Documentation a. Checked for device recognition with sudo lshw -C network (output below) b. Checked for a connection to the router with iwconfig (output below) c. Checked ip assignment with ifconfig (output below) Ran the "Hardware Drivers" utility in System-->Administration (it said 'There are no proprietary drivers on this computer') Identified my card and driver with lspci -vvnn | grep 14e4 (output below) Checked linuxwireless.org to see if the driver is supported (it is) What's my next step? 1.a -- Output from sudo lshw -C network *-network description: Ethernet interface product: MCP67 Ethernet vendor: nVidia Corporation physical id: a bus info: pci@0000:00:0a.0 logical name: eth0 version: a2 serial: 00:1b:24:d0:dc:21 size: 100MB/s capacity: 100MB/s width: 32 bits clock: 66MHz capabilities: pm msi ht bus_master cap_list ethernet physical mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=forcedeth driverversion=0.64 duplex=full ip=192.168.1.6 latency=0 link=yes maxlatency=20 mingnt=1 multicast=yes port=MII speed=100MB/s resources: irq:27 memory:f6488000-f6488fff ioport:30f8(size=8) memory:f6489c00-f6489cff memory:f6489800-f648980f *-network description: Network controller product: BCM4311 802.11b/g WLAN vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:03:00.0 version: 02 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=b43-pci-bridge latency=0 resources: irq:19 memory:f6000000-f6003fff *-network DISABLED description: Wireless interface physical id: 1 logical name: wlan0 serial: 00:1a:73:bb:d7:35 capabilities: ethernet physical wireless configuration: broadcast=yes multicast=yes wireless=IEEE 802.11bg 1.b -- Output from iwconfig lo no wireless extensions. eth0 no wireless extensions. 
wlan0 IEEE 802.11bg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=0 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off 1.c -- Output from ifconfig eth0 Link encap:Ethernet HWaddr 00:1b:24:d0:dc:21 inet addr:192.168.X.X Bcast:192.168.X.XXX Mask:255.255.XXX.X inet6 addr: fe80::21b:24ff:xxxX:xxXX/XX Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:4966 errors:0 dropped:0 overruns:0 frame:0 TX packets:4068 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:4503525 (4.5 MB) TX bytes:777899 (777.8 KB) Interrupt:27 Base address:0x2000 lo Link encap:Local Loopback inet addr:127.0.X.X Mask:255.0.X.X inet6 addr: ::X/XXX Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:1958 errors:0 dropped:0 overruns:0 frame:0 TX packets:1958 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:117940 (117.9 KB) TX bytes:117940 (117.9 KB) 3 -- Output from lspci -vvnn | grep 14e4 03:00.0 Network controller [0280]: Broadcom Corporation BCM4311 802.11b/g WLAN [14e4:4311] (rev 02)
{ "source": [ "https://unix.stackexchange.com/questions/25444", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9483/" ] }
25,463
I read from somewhere that Android uses the Linux Kernel. Is it really true? I thought the Linux Kernel was meant for desktop operating systems.
Architecture of Android Android relies on Linux for core system services such as security, memory management, process management, the network stack, and the driver model. The kernel also acts as an abstraction layer between the hardware and the rest of the software stack. At the time of writing, the latest Android runs Linux version 3.10 ( source ). As for your second sentence: the Linux kernel is not meant only for desktop operating systems. Its use cases range from desktop OSes to servers, mainframes and supercomputers, to embedded devices. Linux is a widely ported operating system kernel. Due to its low cost and ease of customization, it is used on a highly diverse range of computer architectures, in devices ranging from hand-held devices and mobile phones to mainframes and supercomputers. On another note: Palm (later acquired by HP) used a Linux-derived operating system, webOS , in its line of Palm Pre smartphones. Several network firewalls and routers from makers such as Cisco/Linksys use a customized Linux kernel. There are tons of devices out there running embedded Linux .
{ "source": [ "https://unix.stackexchange.com/questions/25463", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12762/" ] }
25,475
I just created a new user on my system using useradd -d /home/users/john -m john (using Ubuntu 11.04). This worked fine, but when I changed to john , my bash is not fully functional. That is, it has no autocomplete, I cannot use the arrow keys (e.g. UP to get the last command), and instead of showing my current directory it only shows $ . I loaded a .bashrc but this appears to be completely ignored. What could cause this? PS: this user is already in use on the system and I'd rather not remove it and add it again, if possible.
Probably john's shell is not /bin/bash , but /bin/sh . On Ubuntu, that's a shell intended to execute scripts fast, with no fancy interactive features such as command line edition. Check last column of grep john /etc/passwd or getent passwd john . You might want to run chsh -s /bin/bash john to change user's shell.
{ "source": [ "https://unix.stackexchange.com/questions/25475", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
25,546
I want to create a bash alias for grep that adds line numbers: alias grep='grep -n' But that, of course, adds line numbers to pipelines as well. Most of the time (and no exceptions come to mind) I don't want line numbers within a pipeline (at least internally, probably OK if it's last), and I don't really want to add a sed/awk/cut to the pipeline just to take them out. Perhaps my requirements could be simplified to "only add line numbers if grep is the only command on the line." Is there any way to do this without a particularly ugly alias?
You could use a function in bash (or any POSIX shell) like this: grep() { if [ -t 1 ] && [ -t 0 ]; then command grep -n "$@" else command grep "$@" fi } The [ -t 1 ] part uses the [ command (also known as test ) to check if stdout is associated with a tty. The [ -t 0 ] checks standard input, as well, since you specified to only add line numbers if grep is the only command in the pipeline.
{ "source": [ "https://unix.stackexchange.com/questions/25546", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11750/" ] }
25,559
Let's say I have text file like: R1 12 324 3453 36 457 4 7 8 R2 34 2342 2525 25 25 26 26 2 2 R3 23 2342 32 52 54 543 643 63 R4 25 234 2342 4 234242 I want to use awk to process these lines differently, like awk '/R1/ { print "=>" $0} /R2/ { print "*" $0} ' and I want to also print all the rest of the lines as they are (without making duplicates of the lines I've already processed), basically I need an /ELSE/ { print $0} at the end of my awk line. Is there such a thing?
Simplified Approach with awk awk '/R1/ {print "=>" $0;next} /R2/{print "*" $0;next} 1' text.file [jaypal:~/Temp] cat text.file R1 12 324 3453 36 457 4 7 8 R2 34 2342 2525 25 25 26 26 2 2 R3 23 2342 32 52 54 543 643 63 R4 25 234 2342 4 234242 [jaypal:~/Temp] awk '/R1/ { print "=>" $0;next} /R2/{print "*" $0;next}1' text.file =>R1 12 324 3453 36 457 4 7 8 *R2 34 2342 2525 25 25 26 26 2 2 R3 23 2342 32 52 54 543 643 63 R4 25 234 2342 4 234242 [jaypal:~/Temp] Breakout of Pattern {Action} Statements: /R1/ { print "=>" $0;next} : This means lines having /R1/ the action of printing => will be done. next means the rest of the awk statements will be ignored and next line will be looked at. /R2/{print "*" $0;next} : This means lines matching the pattern /R2/ the action of printing * will be done. When awk processing starts, the first pattern {action} statement will be ignored as the pattern /R1/ will not be true for lines having /R2/ . So second pattern {action} statement will done on the line. next would again mean that we don't want any more processing and awk will duly go to the next line. 1 prints all lines. When just a condition is supplied with no {action} , awk defaults to using {print} . Here the condition is 1 which is interpreted as true, so it always succeeds. If we get to this point, it's because the first and second pattern {action} statements were ignored or by-passed (for lines not containing /R1/ and /R2/ ), so the default print action will be done for the remaining lines.
{ "source": [ "https://unix.stackexchange.com/questions/25559", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3850/" ] }
25,569
On a standard Linux distribution (e.g. Ubuntu) there is usually /etc/group and /etc/group- , where the second one is only readable by root. man group only describes /etc/group . Thus my question: What is the purpose of /etc/group- ?
It is a backup of the previous copy of the file, that is, the version of the file before the last change. It is kept because it is a very important file. You can delete it, but backups are a "good thing". You can easily verify this. Try # groupadd test # diff /etc/group /etc/group- There are other files that are backed up the same way, viz. /etc/passwd- and /etc/shadow- . All the user and group management utilities like useradd, usermod, userdel, groupmod, groupdel etc. create/update these backup files after successful execution of the command.
{ "source": [ "https://unix.stackexchange.com/questions/25569", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1131/" ] }
25,592
Is there a way to create out of thin air, a file that is a sequence of numbers, starting at a given number, one per line? something like magic_command start 100 lines 5 > b.txt and then, b.txt would be 100 101 102 103 104
There is already a command for this: seq 100 104 will print these numbers on separate lines: 100 101 102 103 104 So just direct this output into a file: seq 100 104 > my_file.txt and seq 100 2 104 will print in increments of two, namely: 100 , 102 , 104
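If seq is unavailable for some reason, bash (and zsh) brace expansion can produce the same list, although the endpoints must be literal numbers rather than variables:
    printf '%s\n' {100..104} > b.txt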
{ "source": [ "https://unix.stackexchange.com/questions/25592", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45335/" ] }
25,601
This is mostly out of curiosity: I'm trying to understand how event handling works at a low level, so please don't just refer me to software that'll do it for me. If, for example, I want to write a program in C/C++ that reacts to mouse clicks, I assume I need to use a system call to hook some function to the kernel, or maybe you need to just constantly check the status of the mouse, I don't know. I assume it's possible, since just about everything is possible in C/C++ at such a low level; I'm mostly interested in how it works, even though I'll probably never have to implement it myself. The question is how it works in Linux: are there certain system calls, C libraries, etc.?
If you're writing a real-world program that uses the mouse in Linux, you're most likely writing an X application, and in that case you should ask the X server for mouse events. Qt , GTK , and libsdl are some popular C libraries that provide functions for accessing mouse, keyboard, graphics, timers, and other features needed to write GUI programs. Ncurses is a similar library for terminal applications. But if you're exploring your system, or you can't use X for whatever reason, here is how it works at the kernel interface. A core idea in the UNIX philosophy is that "everything is a file". More specifically, as many things as possible should be accessible through the same system calls that you use to work with files. And so the kernel interface to the mouse is a device file. You open() it, optionally call poll() or select() on it to see if there's incoming data, and read() to read the data. In pre-USB times, the specific device file was often a serial port, e.g. /dev/ttyS0 , or a PS/2 port, /dev/psaux . You talked to the mouse using whatever hardware protocol was built into the mouse. These days, the /dev/input/* subsystem is preferred, as it provides a unified, device-independent way of handling many different input devices. In particular, /dev/input/mice will give you events from any mouse attached to your system, and /dev/input/mouseN will give you events from a particular mouse. In most modern Linux distributions, these files are created dynamically when you plug in a mouse. For more information about exactly what you would read or write to the mouse device file, you can start with input/input.txt in the kernel documentation. Look in particular at sections 3.2.2 (mousedev) and 3.2.4 (evdev), and also sections 4 and 5.
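A minimal C sketch of reading the evdev interface directly (the node /dev/input/event2 is a guess; check /proc/bus/input/devices to see which eventN belongs to your mouse, and note that reading it normally requires root):
    #include <fcntl.h>
    #include <linux/input.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/dev/input/event2", O_RDONLY);  /* hypothetical node for the mouse */
        if (fd < 0) { perror("open"); return 1; }
        struct input_event ev;
        while (read(fd, &ev, sizeof ev) == (ssize_t)sizeof ev) {
            if (ev.type == EV_KEY && ev.code == BTN_LEFT && ev.value == 1)
                printf("left button pressed\n");
            else if (ev.type == EV_REL)
                printf("relative motion: code=%d value=%d\n", ev.code, ev.value);
        }
        close(fd);
        return 0;
    }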
{ "source": [ "https://unix.stackexchange.com/questions/25601", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12839/" ] }
25,602
Possible Duplicate: ext4: How to account for the filesystem space? I have a ~2TB ext4 USB external disk which is about half full: $ df Filesystem 1K-blocks Used Available Use% Mounted on /dev/sdc 1922860848 927384456 897800668 51% /media/big I'm wondering why the total size (1922860848) isn't the same as Used+Available (1825185124)? From this answer I see that 5% of the disk might be reserved for root, but that would still only take the total used to 1921328166, which is still off. Is it related to some other filesystem overhead? In case it's relevant, lsof -n | grep deleted shows no deleted files on this disk, and there are no other filesystems mounted inside this one. Edit: As requested, here's the output of tune2fs -l /dev/sdc tune2fs 1.41.14 (22-Dec-2010) Filesystem volume name: big Last mounted on: /media/big Filesystem UUID: 5d9b9f5d-dae7-4221-9096-cbe7dd78924d Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: (none) Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 122101760 Block count: 488378624 Reserved block count: 24418931 Free blocks: 480665205 Free inodes: 122101749 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 907 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 Flex block group size: 16 Filesystem created: Wed Nov 23 14:13:57 2011 Last mount time: Wed Nov 23 14:14:24 2011 Last write time: Wed Nov 23 14:14:24 2011 Mount count: 2 Maximum mount count: 20 Last checked: Wed Nov 23 14:13:57 2011 Check interval: 15552000 (6 months) Next check after: Mon May 21 13:13:57 2012 Lifetime writes: 144 MB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: 68e954e4-59b1-4f59-9434-6c636402c3db Journal backup: inode blocks
There's no missing space. The 5% reservation is rounded down to the nearest significant figure. 1k Blocks: 1922860848 Reserved 1k Blocks: (24418931 * 4) = 97675724 Total blocks used: 927384456 + 897800668 + 97675724 = 1922860848 Edit: Regarding your comment on the difference between df blocks and 'Block Count' blocks. The 4k block difference is (1953514496 - 1922860848)/4 = 7663412 The majority of the 'difference' is made up of the "Inode blocks per group" parameter, which is 512. Since there are 32768 blocks per group, that puts the number of groups at 488378624 / 32768, which is 14904 rounded down. Multiplied by the 512 blocks each group's inode table takes up, that gives 7630848 blocks. That leaves 7663412 - 7630848 = 32564 unaccounted for. I assume those blocks make up your journal size, but I'm not too sure on that one!
{ "source": [ "https://unix.stackexchange.com/questions/25602", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12840/" ] }
25,629
The command below may take minutes, depending on the file size. Is there a more efficient method? sed -i 1d large_file
Try ed instead: ed <<< $'1d\nwq' large_file If that “large” means about 10 million lines or more, better use tail . It cannot edit in place, but its performance makes that lack forgivable: tail -n +2 large_file > large_file.new Edit to show some time differences: ( awk code by Jaypal added to have execution times on the same machine (CPU 2.2GHz).) bash-4.2$ seq 1000000 > bigfile.txt # further file creations skipped bash-4.2$ time sed -i 1d bigfile.txt time 0m4.318s bash-4.2$ time ed -s <<< $'1d\nwq' bigfile.txt time 0m0.533s bash-4.2$ time perl -pi -e 'undef$_ if$.==1' bigfile.txt time 0m0.626s bash-4.2$ time { tail -n +2 bigfile.txt > bigfile.new && mv -f bigfile.new bigfile.txt; } time 0m0.034s bash-4.2$ time { awk 'NR>1 {print}' bigfile.txt > newfile.txt && mv -f newfile.txt bigfile.txt; } time 0m0.328s
{ "source": [ "https://unix.stackexchange.com/questions/25629", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1875/" ] }
25,639
What I want to achieve is to be able to record my terminal sessions to a file automatically whenever I use Yakuake/Konsole. It's easy to achieve if at the start of my session I do: script -f /home/$USER/bin/shell_logs/$(date +"%d-%b-%y_%H-%M-%S")_shell.log But I want to run the above automatically whenever I start Yakuake or open a new tab. Using .bashrc does not work because it creates an endless loop: 'script' opens a new session, which in turn reads .bashrc and starts another 'script', and so on. So presumably I need to script Yakuake/Konsole somehow to run 'script' once as a new tab gets opened. The question is how?
If someone wants to record their terminal sessions automatically--including SSH sessions(!)--using the script utility, here is how. Add the following line at the end of .bashrc in your home directory, or otherwise /etc/bash.bashrc if you only want to record all users' sessions. We test for shell's parent process not being script and then run script . For Linux: test "$(ps -ocommand= -p $PPID | awk '{print $1}')" == 'script' || (script -f $HOME/$(date +"%d-%b-%y_%H-%M-%S")_shell.log) For BSD and macOS, change script -f to script -F : test "$(ps -ocommand= -p $PPID | awk '{print $1}')" == 'script' || (script -F $HOME/$(date +"%d-%b-%y_%H-%M-%S")_shell.log) That's all! Now when you open a new terminal you'll see: Script started, file is /home/username/file_name.log script will write your sessions to a file in your home directory naming them something like 30-Nov-11_00-11-12_shell.log as a result. More customization: You can append your sessions to one large file rather than creating a new one for every session with script -a /path/to/single_log_file You can adjust where the files are written to by changing the path after script -f (Linux) or script -F (BSD and macOS) This answer assumes that you have script installed, of course. On Debian-based distributions, script is part of the bsdutils package.
{ "source": [ "https://unix.stackexchange.com/questions/25639", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12564/" ] }
25,681
Often when I mistype a command such as ls (e.g. I hit ENTER before I type 's') there is a long (~2s) delay after the terminal displays: bash: l: command not found... I can understand the reasons for a similar delay after an incorrect password is entered, per Why is there a big delay after entering a wrong password? . But why delay after an unrecognized command? Does FAIL_DELAY in /etc/login.defs affect this also?
After some research I have found this: try uninstalling the command-not-found package with yum remove command-not-found and then installing it again with yum install command-not-found (just in case you have that package installed on your system). If that doesn't help, try adding this to your ~/.bashrc file: unset command_not_found_handle
{ "source": [ "https://unix.stackexchange.com/questions/25681", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12870/" ] }
25,821
How can I introduce a conditional OR into grep? Something like grepping a file's type for (JPEG OR JPG), and then sending only those files into the photos folder, for example. I know how to send the file where I want it and get the file type; I just need some help with the grep part. I'm on OS X, which IMO seems to have modified/customized *nix utilities compared to what I'm used to in a *nix environment. So hopefully the answers can be as generic/portable as possible.
I'm also fairly new to regex, but since no one else has answered I'll give it a shot. The pipe operator "|" is used as an OR operator. The following regex should get you going somewhere. .+((JPG)$|(JPEG)$) (Match anything one or more times followed by either "JPG" or "JPEG" at the end) Extended answer (edited after learning a bit about (e)grep): Assume you have a folder with the following files in it: test.jpeg, test.JpEg, test.JPEG, test.jpg, test.JPG, test.notimagefile, test.gif (Not that creative with names...) First we start by defining what we know about our pattern: We know that we are looking for the end of the name. Ergo we use the "$" operator to require that each line end with the defined pattern. We know that the pattern needs to be either JPEG or JPG. For this we use the pipe "|" as an OR operator. Our pattern is now: ((JPEG)|(JPG))$ (Match any line ending with EITHER "JPEG" or "JPG") However we see that in this example, the only difference is the optional "E". For this we can use the "?" operator (meaning optional). We write: (JP(E)?G)$ (Match any file ending with a pattern like: "J", followed by "P", followed by an optional "E", followed by "G"). However we might also like to match files with lowercase letters in the file name. For this we introduce the character class "[...]", meaning match any one of the enclosed characters. We write: ([jJ][pP]([eE])?[gG])$ (Match any file ending with a pattern like: "j" or "J", followed by "p" or "P", followed by an optional "e" or "E", followed by "g" or "G") (This could also be done using the "-i" option in grep, but I took this as an exercise in regex.) Finally, since we (hopefully) start to see a pattern, we can omit the unnecessary parentheses. Since there is only one optional letter ("E"), we can omit that pair. Also, since the file only has this pattern to end on, we can omit the starting and ending parentheses. Thus we simply get: [jJ][pP][eE]?[gG]$ Finally, let's assume you also want to find files with the ".gif" filetype; we can add this as a second alternative: ([jJ][pP][eE]?[gG])|([gG][iI][fF])$ (Here I've again added extra parentheses for readability/grouping. Feel free to remove them if they seem obfuscating.) Finally, I used ls and a pipeline to send all file names to (e)grep: ls | egrep '([jJ][pP][eE]?[gG])|([gG][iI][fF])$' Result: test.gif test.JPG test.JpEg test.JPEG test.jpg test.JPG Second edit: Using the -i option and omitting parentheses we can shorten it down to only: ls | egrep -i 'jpe?g|gif$'
{ "source": [ "https://unix.stackexchange.com/questions/25821", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3488/" ] }