source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
15,662 | I have a text file which I want to split into 64 unequal parts, according to the 64 hexagrams of the Yi Jing. Since the passage for each hexagram begins with some digit(s), a period, and two newlines, the regex should be pretty easy to write. But how do I actually split the text file into 64 new files according to this regex? It seems like more of a task for perl . But maybe there's a more obvious way that I'm just totally missing. | This would be csplit except that the regex has to be a single line. That also makes sed difficult; I'd go with Perl or Python. You could see if csplit foo.txt '/^[0-9][0-9]*\.$/' '{64}' is good enough for your purposes. ( csplit requires a POSIX BRE, so it can't use \d or + , among others.) | {
"source": [
"https://unix.stackexchange.com/questions/15662",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1389/"
]
} |
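For reference, a slightly fuller csplit invocation of the kind suggested in the answer above, assuming GNU csplit and a made-up input file name; -f and -b name the output pieces, -z drops empty pieces, -s silences the size report, and '{*}' repeats the pattern as many times as it matches: csplit -s -z -f hexagram_ -b '%02d.txt' yijing.txt '/^[0-9][0-9]*\.$/' '{*}'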
15,693 | If I start an app with this command: /path/to/my/command >> /var/log/command.log And the command doesn't return, is there a way, from another prompt, to see what the STDOUT redirect is set to? I'm looking for something like either cat /proc/PID/redirects or ps -??? | grep PID but any method will do. | Check out the file descriptor #1 (STDOUT) in /proc/$PID/fd/ . The kernel represents this file descriptor as a symbolic link to the file it is redirected to. $ readlink -f /proc/20361/fd/1
/tmp/file | {
"source": [
"https://unix.stackexchange.com/questions/15693",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2163/"
]
} |
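Building on the /proc approach in the answer above, a small loop can dump every open descriptor of a process at once; this is only a sketch, and 20361 is the made-up PID reused from the answer:
pid=20361                       # hypothetical PID; substitute the real one
for fd in /proc/"$pid"/fd/*; do
    printf 'fd %s -> %s\n' "${fd##*/}" "$(readlink -f "$fd")"
done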
15,715 | I am trying to figure out a decent way to copy what I have in a tmux buffer into my clipboard. I have tried a couple of different things like bind-key p select-pane -t 2 \; split-window 'xsel -i -b' \; paste-buffer which gets me fairly close; all I have to do is hit control-d after I do prefix-p. I tried fixing that by doing bind-key p select-pane -t 2 \; split-window 'xsel -i -b << HERE\; tmux paste-buffer\; echo HERE' But that just doesn't work. In fact if I pare this down to just bind-key p select-pane -t 2 \; split-window 'xsel -i -b << HERE' it doesn't do anything so I am assuming that split-window doesn't like << in a shell command. Any ideas? Edit:
You can skip the select-pane -t 2 if you want; it isn't really important. I just use a specific layout and pane 2 is the one I prefer to split when I'm doing something else, so that goes into my bindings involving splits by default. | This should work: # move x clipboard into tmux paste buffer
bind C-p run "tmux set-buffer \"$(xclip -o)\"; tmux paste-buffer"
# move tmux copy buffer into x clipboard
bind C-y run "tmux save-buffer - | xclip -i" | {
"source": [
"https://unix.stackexchange.com/questions/15715",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8546/"
]
} |
15,722 | Right now, when I run man (something) , less is used to view it. However, on Mac OS X, I really like scrolling with touchpad instead of up / down or page-up / page-down buttons. Is there a way to just print text into the terminal instead of using less or more ? | This should work: # move x clipboard into tmux paste buffer
bind C-p run "tmux set-buffer \"$(xclip -o)\"; tmux paste-buffer"
# move tmux copy buffer into x clipboard
bind C-y run "tmux save-buffer - | xclip -i" | {
"source": [
"https://unix.stackexchange.com/questions/15722",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4764/"
]
} |
15,855 | How can I 'cat' a man page like I would 'cat' a file to get just a dump of the contents? | First of all, the man files are usually just gzipped text files stored somewhere in your file system. Since your mileage will vary in finding them and you probably want the processed and formatted version that man gives you instead of the source, you can just dump them with the man tool. By looking at man man , I see that you can change the program used to view man pages with the -P flag like this: man -P cat command_name It's also worth noting that man automatically detects when you pipe its output instead of viewing it on the screen, so if you are going to process it with something else you can skip straight to that step like so: man command_name | grep search_string or to dump TO a file: man command_name > formatted_man_page.txt | {
"source": [
"https://unix.stackexchange.com/questions/15855",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6960/"
]
} |
15,895 | I'm using busybox in an embedded system, and I would like to check its version. How do I check the busybox version from within busybox? | Invoke the busybox binary as busybox , and you get a line with the Busybox version, a few more lines of fluff, and the list of utilities included in the binary. busybox | head -1 Most utilities show a usage message if you call them with --help , with the version number in the first line. ls --help 2>&1 | head -1 | {
"source": [
"https://unix.stackexchange.com/questions/15895",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3608/"
]
} |
15,911 | I wrote a program that calls setuid(0) and execve("/bin/bash",NULL,NULL) . Then I did chown root:root a.out && chmod +s a.out When I execute ./a.out I get a root shell. However when I do gdb a.out it starts the process as a normal user, and launches a user shell. So... can I debug a setuid root program? | You can only debug a setuid or setgid program if the debugger is running as root. The kernel won't let you call ptrace on a program running with extra privileges. If it did, you would be able to make the program execute anything, which would effectively mean you could e.g. run a root shell by calling a debugger on /bin/su . If you run Gdb as root, you'll be able to run your program, but you'll only be observing its behavior when run by root. If you need to debug the program when it's not started by root, start the program outside Gdb, make it pause in some fashion before getting to the troublesome part, and attach the process inside Gdb ( attach 1234 , where 1234 is the process ID). | {
"source": [
"https://unix.stackexchange.com/questions/15911",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1578/"
]
} |
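A rough sketch of the attach-from-outside workflow described above, assuming the a.out binary from the question and that it has been made to pause (for example with a sleep or a getchar) before the code you want to inspect:
./a.out &              # run the setuid program normally, outside the debugger
pid=$!
sudo gdb -p "$pid"     # only root may ptrace a privileged process
# inside gdb: set breakpoints as needed, then type "continue"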
15,950 | I have a directory called uploads . It contains a bunch of files, plus a few subdirectories which in turn contain files. Is there a way I can (in one step) do the following: List ONLY the files in the root uploads directory -- I do not want to see the subfolder names or their contents; and Do NOT list any files that start with t_ I know about the -d flag, but it doesn't get me quite what I want. | This sounds like a job for find . Use -maxdepth to only return the current directory, not recursively search inside subfolders Use -type f to only return files and not directories or device nodes or whatever else Use a combination of -not and -name to avoid the files with names you don't want It might come together like this: find /path/to/uploads -maxdepth 1 -type f -not -name 't_*' | {
"source": [
"https://unix.stackexchange.com/questions/15950",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5740/"
]
} |
15,985 | Let's say I have a bash script with the following: #!/bin/sh
gedit
rm *.temp When I execute it using sh ./test.sh , gedit pops-up but the rm part does not run until after I close gedit . I want the script to continue running even if gedit isn't closed; like the gedit isn't blocking the bash execution. The example I gave is just an example (putting the rm first won't work in a real situation). | The term you are looking for is called "backgrounding" a job. When you run a command either in your shell or in a script you can add a flag at the end to send it to the background and continue running new commands or the rest of the script. In most shells including sh , this is the & character. #!/bin/sh
gedit &
rm ./*.temp That way, the shell doesn't wait for the termination of gedit and both rm and gedit will run concurrently. The term "blocking" usually has to do with input/output streams, often to files or a device. It's also used in compiled languages in something similar to the sense you used, but in bash and similar shell scripting, the terminology (and function!) is rather different. | {
"source": [
"https://unix.stackexchange.com/questions/15985",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8783/"
]
} |
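If the script later needs to know when the backgrounded program finished, the shell's $! and wait builtins combine naturally with & ; a minimal sketch reusing the gedit example:
#!/bin/sh
gedit &            # start gedit without blocking the script
gedit_pid=$!       # remember its process ID
rm ./*.temp        # runs immediately, while gedit is still open
wait "$gedit_pid"  # optional: block here until gedit eventually exits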
15,989 | If I want to tcpdump DNS requests by clients (on an OpenWrt 10.04 router), then I root@ROUTER:/etc# tcpdump -n -i br-lan dst port 53 2>&1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br-lan, link-type EN10MB (Ethernet), capture size 96 bytes
22:29:38.989412 IP 192.168.1.200.55919 > 192.168.1.1.53: 5697+ A? foo.org. (25)
22:29:39.538981 IP 192.168.1.200.60071 > 192.168.1.1.53: 17481+ PTR? 150.33.87.208.in-addr.arpa. (44)
^C
2 packets captured
3 packets received by filter
0 packets dropped by kernel That's fully ok. But. Why can't I pipe the tcpdumps output in realtime? root@ROUTER:/etc# tcpdump -n -i br-lan dst port 53 2>&1 | awk '/\?/ {print $3}'
^C
root@ROUTER:/etc# If I awk, etc. anything after tcpdump, I don't get ANY output. Why is that? Why can't I process the output of tcpdump with pipelining in realtime? (so that e.g.: in the example in only outputs the 3rd column) Are there any solutions for this? | Straight out of man tcpdump -l Make stdout line buffered. Useful if you want to see the data while
capturing it. E.g.,
tcpdump -l | tee dat
or
tcpdump -l > dat & tail -f dat
Note that on Windows,``line buffered'' means ``unbuffered'', so that
WinDump will write each character individually if -l is specified.
-U is similar to -l in its behavior, but it will cause output to be
``packet-buffered'', so that the output is written to stdout at the
end of each packet rather than at the end of each line; this is
buffered on all platforms, including Windows. | {
"source": [
"https://unix.stackexchange.com/questions/15989",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6960/"
]
} |
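Applied to the pipeline from the question, that just means adding -l so tcpdump line-buffers its standard output (br-lan is the interface name taken from the question):
tcpdump -l -n -i br-lan dst port 53 2>&1 | awk '/\?/ {print $3}'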
15,992 | I have a PDF file that needs a blank page inserted into it every so often. The pattern is unpredictable, so I need a command that will allow me to fit one in wherever necessary. How can i do this? | From http://blog.chewearn.com/2008/12/18/rearrange-pdf-pages-with-pdftk/ pdftk A=src.pdf B=blank.pdf cat A1 B1 A2-end output res.pdf Hope you like this script, just save it as pdfInsertBlankPageAt.sh , add execute permissions, and run. ./pdfInsertBlankPageAt 5 src.pdf res.pdf #!/bin/bash
if [ $# -ne 3 ]
then
echo "Usage example: ./pdfInsertBlankPageAt 5 src.pdf res.pdf"
exit 1 # exit with a nonzero status on bad arguments
else
pdftk A=$2 B=blank.pdf cat A1-$(($1-1)) B1 A$1-end output $3
fi cat A1 B1 A2-end means that the output file will contain the first page of document A ( src.pdf ) followed by the first page of document B ( blank.pdf ) followed by the rest (pages 2 to end) of document A. This operation is called concatenation; the Linux cat command is very often used to display text, but it is interesting when used with more than one argument. To create blank.pdf , see How do I create a blank PDF from the command line? | {
"source": [
"https://unix.stackexchange.com/questions/15992",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1389/"
]
} |
15,998 | I have a bash script with the following: #!/bin/bash -e
egrep "^username" /etc/passwd >/dev/null
if[ $? -eq 0 ]
then
echo "doesn't exist"
fi This script will not run without the -e. What does the -e do for this script? Also, what does the $? do in this context? | Your post actually contains 2 questions. The -e flag instructs the script to exit on error. If there is an error it will exit right away. The $? is the exit status of the last command. In Linux an exit status of 0 means that the command was successful. Any other status would mean an error occurred. To apply these answers to your script: egrep "^username" /etc/passwd >/dev/null would look for the username in the /etc/passwd file. If it finds it then the exit status $? will be equal to 0 . If it doesn't find it the exit status will be something else (not 0 ). Here, you will want to execute the echo "doesn't exist" part of the code. Unfortunately there is an error in your script, and you would execute that code if the user exists - change the line to if [ $? -ne 0 ] to get the logic right. However if the user doesn't exist, egrep will return an error code, and due to the -e option the shell will immediately exit after that line, so you would never reach that part of the code. | {
"source": [
"https://unix.stackexchange.com/questions/15998",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8794/"
]
} |
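A sketch of a version that avoids both pitfalls by testing the command directly in the if condition (a failing command inside an if test does not trigger the -e exit, and no $? juggling is needed; "username" is the placeholder account name from the question):
#!/bin/bash -e
if grep -q '^username:' /etc/passwd; then
    echo "user exists"
else
    echo "doesn't exist"
fi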
16,024 | I want to assign the result of an expression
(i.e., the output from a command)
to a variable and then manipulate it – for example,
concatenate it with a string, then echo it.
Here's what I've got: #!/bin/bash
cd ~/Desktop;
thefile= ls -t -U | grep -m 1 "Screen Shot";
echo "Most recent screenshot is: "$thefile; But that outputs: Screen Shot 2011-07-03 at 1.55.43 PM.png
Most recent screenshot is: So, it looks like that isn't getting assigned to $thefile , and is being printed as it's executed. What am I missing? | A shell assignment is a single word, with no space after the equal sign. So what you wrote assigns an empty value to thefile ; furthermore, since the assignment is grouped with a command, it makes thefile an environment variable and the assignment is local to that particular command, i.e. only the call to ls sees the assigned value. You want to capture the output of a command, so you need to use command substitution : thefile=$(ls -t -U | grep -m 1 "Screen Shot") (Some literature shows an alternate syntax thefile=`ls …` ; the backquote syntax is equivalent to the dollar-parentheses syntax except that quoting inside backquotes is weird sometimes, so just use $(…) .) Other remarks about your script: Combining -t (sort by time) with -U (don't sort with GNU ls ) doesn't make sense; just use -t . Rather than using grep to match screenshots, it's clearer to pass a wildcard to ls and use head to capture the first file: thefile=$(ls -td -- *"Screen Shot"* | head -n 1) It's generally a bad idea to parse the output of ls . This could fail quite badly if you have file names with nonprintable characters. However, sorting files by date is difficult without ls , so it's an acceptable solution if you know you won't have unprintable characters or backslashes in file names. Always use double quotes around variable substitutions , i.e. here write echo "Most recent screenshot is: $thefile" Without double quotes, the value of the variable is reexpanded, which will cause trouble if it contains whitespace or other special characters. You don't need semicolons at the end of a line. They're redundant but harmless. In a shell script, it's often a good idea to include set -e . This tells the shell to exit if any command fails (by returning a nonzero status). If you have GNU find and sort (in particular if you're running non-embedded Linux or Cygwin), there's another approach to finding the most recent file: have find list the files and their dates, and use sort and read (here assuming bash or zsh for -d '' to read a NUL-delimited record) to extract the youngest file. IFS=/ read -rd '' ignored thefile < <(
find -maxdepth 1 -type f -name "*Screen Shot*" -printf "%T@/%p\0" |
sort -rnz) If you're willing to write this script in zsh instead of bash, there's a much easier way to catch the newest file, because zsh has glob qualifiers that permit wildcard matches not only on names but also on file metadata. The (om[1]) part after the pattern is the glob qualifiers; om sorts matches by increasing age (i.e. by modification time, newest first) and [1] extracts the first match only. The whole match needs to be in parentheses because it's technically an array, since globbing returns a list of files, even if the [1] means that in this particular case the list contains (at most) one file. #!/bin/zsh
set -e
cd ~/Desktop
thefile=(*"Screen Shot"*(om[1]))
print -r "Most recent screenshot is: $thefile" | {
"source": [
"https://unix.stackexchange.com/questions/16024",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8802/"
]
} |
16,057 | I'm using openconnect to connect to vpn. After entering my credentials, I get this: POST https://domain.name/...
Got CONNECT response: HTTP/1.1 200 OK
CSTP connected. DPD 30, Keepalive 30
Connected tun0 as xxx.xxx.xxx.xxx, using SSL
Established DTLS connection Running ifconfig shows I have a new network interface tun0 with a certain ip address. Question: How do I make ssh use only the network interface tun0 so that I can access computers on that private network? Edit: My network configuration ( route -n ) seems to be this: 172.16.194.0 0.0.0.0 255.255.255.0 U 0 0 0 vmnet8
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
172.16.25.0 0.0.0.0 255.255.255.0 U 0 0 0 vmnet1
169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0
0.0.0.0 192.168.0.1 0.0.0.0 UG 100 0 0 eth0 | It's not the ssh client that decides through which interface TCP
packets should go, it's the kernel. In short, SSH asks the kernel to
open a connection to a certain IP address, and the kernel decides
which interface is to be used by consulting the routing tables. (The following assumes you're on GNU/Linux; the general concept is the
same for all Unices, but the specifics of the commands to run and the
way the output is formatted may vary.) You can display the kernel routing tables with the commands route -n and/or ip route show . OpenConnect should have added a line for the tun0 interface;
connections to any address matching that line will be routed through
that interface. For example, running route -n on my laptop I get
the following output: Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.30.0.1 0.0.0.0 UG 0 0 0 eth0
10.30.0.0 0.0.0.0 255.255.255.0 U 1 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0 This means that connections to hosts in the 192.168.122.0/24 (i.e., addresses 192.168.122.0 to 192.168.122.255 according to CIDR notation ) network
will be routed through interface virbr0 ; those to 169.254.0.0/16 and
10.30.0.0/24 will go through eth0 , and anything else (the 0.0.0.0
line) will be routed through eth0 to the gateway host 10.30.0.1. | {
"source": [
"https://unix.stackexchange.com/questions/16057",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5683/"
]
} |
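If the VPN did not push a route for the network you need to reach, you can add one manually; a hedged example with a made-up corporate subnet:
sudo ip route add 10.20.30.0/24 dev tun0   # send that subnet through the VPN interface
ip route get 10.20.30.7                    # confirm which interface a destination will use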
16,059 | I'm writing a module for the Linux kernel. When this module is ready, I'd like it to run in any device running the kernel. How can I test it in an Android phone? Do I need to recompile the whole tree for that phone with the module in to test it? If so, how can I do that? Is there some kind of insmod for Android? | It's not the ssh client that decides through which interface TCP
packets should go, it's the kernel. In short, SSH asks the kernel to
open a connection to a certain IP address, and the kernel decides
which interface is to be used by consulting the routing tables. (The following assumes you're on GNU/Linux; the general concept is the
same for all Unices, but the specifics of the commands to run and the
way the output is formatted may vary.) You can display the kernel routing tables with the commands route -n and/or ip route show . OpenConnect should have added a line for the tun0 interface;
connections to any address matching that line will be routed through
that interface. For example, running route -n on my laptop I get
the following output: Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.30.0.1 0.0.0.0 UG 0 0 0 eth0
10.30.0.0 0.0.0.0 255.255.255.0 U 1 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0 This means that connections to hosts in the 192.168.122.0/24 (i.e., addresses 192.168.122.0 to 192.168.122.255 according to CIDR notation ) network
will be routed through interface virbr0 ; those to 169.254.0.0/16 and
10.30.0.0/24 will go through eth0 , and anything else (the 0.0.0.0
line) will be routed through eth0 to the gateway host 10.30.0.1. | {
"source": [
"https://unix.stackexchange.com/questions/16059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7531/"
]
} |
16,062 | Since Unix is 40 years old, Unix is older than the invention of the computer mouse. (Actually, only 3 years, if Unix is from 1969 and the mouse from 1972.) How in the world did a new user do anything on Unix without copy & paste? I know they always had a text editor with copy/paste, but everything I do on Linux is copy from web browser, and paste (from the CLIPBOARD) into vim or gedit or gnome terminal. You're the same, right? I just can't imagine loading up a man file into vim, copying & pasting code from it into a temporary buffer, and then having bash execute that buffer. Maybe they never left emacs; is that the answer? | Copy-paste is older than the mouse. The first unix editor, ed , had the t command to copy a bunch of lines to a different location. In vi, there are various commands to cut, yank and paste text. To copy text between files, you would save the text to copy in a temporary file and import that temporary file in the target document, e.g. with w and r in ed ( :w and :r in vi). To include the output of a command in a file, you would redirect its output ( mycommand >file or mycommand >>file ) and import that file into your document; vi introduced the ! command and friends to directly insert the output without requiring a temporary file. Loading a man page into Vim or Emacs and copy-pasting from it is routine for Vim/Emacs users. Web browsers didn't exist until Unix was old enough to drink, but the same principle applies anywhere: the clipboard is older than window environments. What window environments brought was cross-application copy-paste, which could be done with only a little more effort through files. | {
"source": [
"https://unix.stackexchange.com/questions/16062",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8827/"
]
} |
16,093 | When configuring cron to run a command every other day using the "Day of Month" field, like so: 1 22 */2 * * COMMAND it runs every time the day of month is odd: 1,3,5,7,9 and so on. How can I configure cron to run on days of month that are even like 2,6,8,10 and so on (without specifying it literally, which is problematic as every month has a different number of days in the month)? | The syntax you tried is actually ambiguous. Depending on how many days are in the month, some months it will run on odd days and some on even. This is because the way it is calculated takes the total number of possibilities and divides them up. You can override this strange-ish behavior by manually specifying the day range and using either an odd or even number of days. Since even day scripts would never run on the 31st day of longer months, you don't lose anything using 30 days as the base for even-days, and by specifying to divide it up as if there were 31 days you can force odd-day execution. The syntax would look like this: # Will only run on odd days:
0 0 1-31/2 * * command
# Will only run on even days:
0 0 2-30/2 * * command Your concern about months not having the same number of days is not important here because no months have MORE days than this, and for poor February, the date range just won't ever match the last day or two, but it will do no harm having it listed. The only 'gotcha' for this approach is that if you are on an odd day cycle, following months with 31 days your command will also run on the first of the month. Likewise if you are forcing an even cycle, each leap year will cause one three-day cycle at the end of February. You cannot really get around the fact that any regular pattern of "every other day" is not always going to fall on even or odd days in every month and any way you force this you will either have an extra run or be missing a run between months with mismatched day counts. | {
"source": [
"https://unix.stackexchange.com/questions/16093",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8847/"
]
} |
16,101 | Currently, I have the following in my .zshrc: bindkey '^[[A' up-line-or-search
bindkey '^[[B' down-line-or-search However, this only seems to match the content of my current input before a space character occurs. For example, sudo ls / will match every line in my history that begins with sudo , while I would like it to only match lines that match my entire input. (i.e. sudo ls /etc would match, but not sudo cat /var/log/messages ) What do I need to change in order to gain the desired behavior? Here is my entire .zshrc in case it is relevant: https://gist.github.com/919566 | zsh provides this functionality by using history-beginning-search-backward
history-beginning-search-forward Ex. bindkey "^[[A" history-beginning-search-backward
bindkey "^[[B" history-beginning-search-forward Find exact Key code by ctrl + v KEY Ex. ctrl + v UP ctrl + v DOWN ctrl + v PageUp ctrl + v PageDown etc. In case if you are using mac the below works on OSX catalina. bindkey "\e[5~" history-search-backward
bindkey "\e[6~" history-search-forward | {
"source": [
"https://unix.stackexchange.com/questions/16101",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8852/"
]
} |
16,109 | I am doing integer comparison in bash (trying to see if the user is running as root), and I found two different ways of doing it: Double equals: if [ $UID == 0 ]
then
fi -eq if [ $UID -eq 0 ]
then
fi I understand that there's no >= or <= in bash, only -ge and -le, so why is there a == if there's a -eq ? Is there a difference in the way it compares both sides? | == is a bash -specific alias for = , which performs a string (lexical) comparison instead of the -eq numeric comparison. (It's backwards from Perl: the word-style operators are numeric, the symbolic ones lexical.) | {
"source": [
"https://unix.stackexchange.com/questions/16109",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8471/"
]
} |
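A quick illustration of how the two kinds of comparison can disagree, a difference that stays invisible as long as both operands are a plain 0 like $UID in the question:
[ 01 -eq 1 ] && echo "numerically equal"   # prints: -eq compares integer values
[ 01 == 1 ]  && echo "same string"         # prints nothing: == compares characters
[ 10 \< 9 ]  && echo "sorts first"         # prints: string ordering, not numeric ordering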
16,131 | From a bash script, is there some way to get the ID of the X window under the mouse pointer? (edit) I need the process to be non-interactive. | xdotool exposes the pointer location ( xdotool getmouselocation ), and recent versions (since 2.20110530.1) indicate which window is at that location as well . None of xwininfo , wmctrl or older versions of xdotool appear to have a way to match a window by a screen position where it's visible. The underlying X library call is XQueryPointer (corresponding to a QueryPointer message). Here's a simple Python wrapper script around this call (using ctypes ). Error checking largely omitted. Assumes you're using screen 0 (if you didn't know that displays could have more than one screen, ignore this). #! /usr/bin/env python
import sys
from ctypes import *
Xlib = CDLL("libX11.so.6")
display = Xlib.XOpenDisplay(None)
if display == 0: sys.exit(2)
w = Xlib.XRootWindow(display, c_int(0))
(root_id, child_id) = (c_uint32(), c_uint32())
(root_x, root_y, win_x, win_y) = (c_int(), c_int(), c_int(), c_int())
mask = c_uint()
ret = Xlib.XQueryPointer(display, c_uint32(w), byref(root_id), byref(child_id),
byref(root_x), byref(root_y),
byref(win_x), byref(win_y), byref(mask))
if ret == 0: sys.exit(1)
print child_id.value Usage example: xwininfo -tree -id $(XQueryPointer) | {
"source": [
"https://unix.stackexchange.com/questions/16131",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2343/"
]
} |
16,138 | Assuming that the grep tool should be used, I'd like to search for the text string "800x600" throughout the entire file system. I tried: grep -r 800x600 / but it doesn't work. What I believe my command should do is grep recursively through all files/folders under root for the text "800x600" and list the search results. What am I doing wrong? | I normally use this style of command to run grep over a number of files: find / -xdev -type f -print0 | xargs -0 grep -H "800x600" What this actually does is make a list of every file on the system, and then for each file, execute grep with the given arguments and the name of each file. The -xdev argument tells find that it must ignore other filesystems - this is good for avoiding special filesystems such as /proc . However it will also ignore normal filesystems too - so if, for example, your /home folder is on a different partition, it won't be searched - you would need to say find / /home -xdev ... . -type f means search for files only, so directories, devices and other special files are ignored (it will still recurse into directories and execute grep on the files within - it just won't execute grep on the directory itself, which wouldn't work anyway). And the -H option to grep tells it to always print the filename in its output. find accepts all sorts of options to filter the list of files. For example, -name '*.txt' processes only files ending in .txt. -size -2M means files that are smaller than 2 megabytes. -mtime -5 means files modified in the last five days. Join these together with -a for and and -o for or , and use '(' parentheses ')' to group expressions (in quotes to prevent the shell from interpreting them). So for example: find / -xdev '(' -type f -a -name '*.txt' -a -size -2M -a -mtime -5 ')' -print0 | xargs -0 grep -H "800x600" Take a look at man find to see the full list of possible filters. | {
"source": [
"https://unix.stackexchange.com/questions/16138",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8866/"
]
} |
16,192 | I was reading this Q&A: How to loop over the lines of a file? What is the IFS variable? And what is its usage in the context of for -loops? | IFS stands for Internal Field Separator - it's a character that separates fields. In the example you posted, it is set to the newline character ( \n ); so after you set it, for will process text line by line. In that example, you could change the value of IFS (to some letter that you have in your input file) and check how text will be split. BTW, from the answer you posted the second solution is the recommended one... As @jasonwryan noticed, it's Internal , not Input ; the name Input came from awk , in which there is also OFS - Output Field Separator . | {
"source": [
"https://unix.stackexchange.com/questions/16192",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7768/"
]
} |
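Two small self-contained bash examples of how changing IFS alters word splitting (the passwd lookup just uses the root entry as a safe example):
# split $PATH on ":"; the subshell keeps the IFS change local
( IFS=:; for dir in $PATH; do printf '%s\n' "$dir"; done )
# read the fields of one /etc/passwd line into separate variables
IFS=: read -r user pass uid gid gecos home shell < <(grep '^root:' /etc/passwd)
echo "$user has shell $shell"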
16,199 | How should I transfer files from Windows to Ubuntu installed on Virtualbox? When I plug in a USB drive, it only pops up in Windows. How can I see it in Ubuntu? | There are 2 ways which I normally use. Option 1:
Before booting up Ubuntu, inside the Virtualbox Ubuntu VM settings, specify a shared folder. Then after logging in to Ubuntu, create a new directory, for example /media/vboxshared , and mount that drive using the command sudo mount -t vboxsf SHARENAME /media/vboxshared . Enter your password when it prompts for the password. Option 2:
Before booting up Ubuntu, add a new Network adapter and select 'Bridged Adapter'. Then after logging in to Ubuntu, run the command ifconfig -a | more to get the IP address of that new network adapter. In Windows, use WinSCP or FileZilla to transfer the file to Ubuntu | {
"source": [
"https://unix.stackexchange.com/questions/16199",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8554/"
]
} |
16,226 | I'm trying to validate/verify that the rsa key, ca-bundle, and certificate stored here are ok. They are not being served by a webserver. How can I verify them? | Assuming your certificates are in PEM format, you can do: openssl verify cert.pem If your "ca-bundle" is a file containing additional intermediate certificates in PEM format: openssl verify -untrusted ca-bundle cert.pem If your openssl isn't set up to automatically use an installed set of root certificates (e.g. in /etc/ssl/certs ), then you can use -CApath or -CAfile to specify the CA. | {
"source": [
"https://unix.stackexchange.com/questions/16226",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29/"
]
} |
16,243 | I use ubuntu 11.04. Is there a way to set coloring for output after tab completion listing of a cd, ls, or etc. ? i.e. myshell@root$ cd ~/user/files/ I hit tab... myfile myfoo mybar <-- this output is colored? I hope you enjoyed my diagram. | With readline 6.3 and later you can add set colored-stats on to ~/.inputrc . See https://tiswww.case.edu/php/chet/readline/rluserman.html : colored-stats If set to ` on ', Readline displays possible completions using different colors to indicate their file type. The color definitions are taken from the value of the LS_COLORS environment variable. The default is ` off '. You can use http://geoff.greer.fm/lscolors/ to generate both LS_COLORS (which is used by GNU ls and colored-stats ) and LSCOLORS (which is used by BSD ls ). | {
"source": [
"https://unix.stackexchange.com/questions/16243",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7768/"
]
} |
16,255 | I'm getting more familiar with tweaking the way things look in a shell via the prompt ( .bashrc ), but I'm now trying to change the initial stuff displayed when I first log in. On my EC2 instance, this is what I see when I log in: __| __|_ ) Fedora 8
_| ( / 32-bit
___|\___|___|
Welcome to:
Wowza Media Server 2 for Amazon EC2
Version: 2.0.0.08 On my home Mint computer, here's what I see when I log in: Welcome to Linux Mint 11 Katya (GNU/Linux 2.6.38-8-generic x86_64)
Welcome to Linux Mint
* Documentation: http://www.linuxmint.com
Last login: Tue Jun 21 17:44:05 2011 Where is this defined? How can I tweak it for some mad ASCII art ACTION ? | The text displayed before the login prompt is stored in /etc/issue (there's a related file, /etc/motd , that's displayed after the user logs in, before their shell is started). It's just a normal text file, but it accepts a bunch of escape sequences: \b -- Baudrate of the current line. \d -- Current date. \s -- System name, the name of the operating system. \l -- Name of the current tty line. \m -- Architecture identifier of the machine, eg. i486 \n -- Nodename of the machine, also known as the hostname. \o -- Domainname of the machine. \r -- Release number of the OS, eg. 1.1.9. \t -- Current time. \u -- Number of current users logged in. \U -- The string "1 user" or "N users", where N is the number of current users logged in. \v -- Version of the OS, eg. the build-date etc. On my machine I have: This is \n (\s \m \r) \t \l Which is rendered as: This is etudes-1 (Linux x86_64 2.6.39-gentoo) 17:43:10 tty1 | {
"source": [
"https://unix.stackexchange.com/questions/16255",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5614/"
]
} |
16,279 | A lot of command-line utilities can take their input either from a pipe or as a filename argument. For long shell scripts, I find starting the chain off with a cat makes it more readable, especially if the first command would need multi-line arguments. Compare sed s/bla/blaha/ data \
| grep blah \
| grep -n babla and cat data \
| sed s/bla/blaha/ \
| grep blah \
| grep -n babla Is the latter method less efficient? If so, is the difference enough to care about if the script is run, say, once a second? The difference in readability is not huge. | The "definitive" answer is of course brought to you by The Useless Use of cat Award . The purpose of cat is to concatenate (or "catenate") files. If it's only one file, concatenating it with nothing at all is a waste of time, and costs you a process. Instantiating cat just so your code reads differently makes for just one more process and one more set of input/output streams that are not needed. Typically the real hold-up in your scripts is going to be inefficient loops and actuall processing. On most modern systems, one extra cat is not going to kill your performance, but there is almost always another way to write your code. Most programs, as you note, are able to accept an argument for the input file. However, there is always the shell builtin < that can be used wherever a STDIN stream is expected which will save you one process by doing the work in the shell process that is already running. You can even get creative with WHERE you write it. Normally it would be placed at the end of a command before you specify any output redirects or pipes like this: sed s/blah/blaha/ < data | pipe But it doesn't have to be that way. It can even come first. For instance your example code could be written like this: < data \
sed s/bla/blaha/ |
grep blah |
grep -n babla If script readability is your concern and your code is messy enough that adding a line for cat is expected to make it easier to follow, there are other ways to clean up your code. One that I use a lot that helps make scripts easiy to figure out later is breaking up pipes into logical sets and saving them in functions. The script code then becomes very natural, and any one part of the pipline is easier to debug. function fix_blahs () {
sed s/bla/blaha/ |
grep blah |
grep -n babla
}
fix_blahs < data You could then continue with fix_blahs < data | fix_frogs | reorder | format_for_sql . A pipleline that reads like that is really easy to follow, and the individual components can be debuged easily in their respective functions. | {
"source": [
"https://unix.stackexchange.com/questions/16279",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
16,300 | I want to determine which process has the other end of a UNIX socket. Specifically, I'm asking about one that was created with socketpair() , though the problem is the same for any UNIX socket. I have a program parent which creates a socketpair(AF_UNIX, SOCK_STREAM, 0, fds) , and fork() s. The parent process closes fds[1] and keeps fds[0] to communicate. The child does the opposite, close(fds[0]); s=fds[1] . Then the child exec() s another program, child1 . The two can communicate back and forth via this socketpair. Now, let's say I know who parent is, but I want to figure out who child1 is. How do I do this? There are several tools at my disposal, but none can tell me which process is on the other end of the socket. I have tried: lsof -c progname lsof -c parent -c child1 ls -l /proc/$(pidof server)/fd cat /proc/net/unix Basically, I can see the two sockets, and everything about them, but cannot tell that they are connected. I am trying to determine which FD in the parent is communicating with which child process. | Since kernel 3.3, it is possible using ss or lsof-4.89 or above — see Stéphane Chazelas's answer . In older versions, according to the author of lsof , it was impossible to find this out: the Linux kernel does not expose this information. Source: 2003 thread on comp.unix.admin . The number shown in /proc/$pid/fd/$fd is the socket's inode number in the virtual socket filesystem. When you create a pipe or socket pair, each end successively receives an inode number. The numbers are attributed sequentially, so there is a high probability that the numbers differ by 1, but this is not guaranteed (either because the first socket was N and N +1 was already in use due to wrapping, or because some other thread was scheduled between the two inode allocations and that thread created some inodes too). I checked the definition of socketpair in kernel 2.6.39 , and the two ends of the socket are not correlated except by the type-specific socketpair method . For unix sockets, that's unix_socketpair in net/unix/af_unix.c . | {
"source": [
"https://unix.stackexchange.com/questions/16300",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8928/"
]
} |
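On a recent system (kernel 3.3+ with a modern ss , as noted above), the inode numbers can be followed by hand; this is only a sketch, and the program name parent plus file descriptor 3 are assumptions standing in for your actual process and descriptor:
ls -l /proc/$(pidof parent)/fd/3     # prints something like: ... -> socket:[123456]
ss -xp | grep 123456                 # that line lists the local and peer socket inodes
ss -xp | grep PEER_INODE             # substitute the peer inode to find which process (child1) owns the other end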
16,303 | I defined some environment variables in my .profile like this: MY_HOME="/home/my_user" but the variable does not seem to evaluate unless I strip off the quotes and re-source the file. I believe the quotes are necessary if there will be spaces, and single quotes are used if escapes are not desired. Can someone clarify the significance of single and double quotes in variable definitions? How about front-ticks and back-ticks? | I think you're confused about terminology. An "environment variable" is merely a shell variable that any child processes will inherit. What you're doing in your example is creating a shell variable. It's not in the environment until you export it: MY_HOME="/home/my_user"
export MY_HOME puts a variable named "MY_HOME" in almost all shells (csh, tcsh excepted). In this particular case, the double-quotes are superfluous. They have no effect. Double-quotes group substrings, but allows whatever shell you use to do variable substitution. Single-quoting groups substrings and prevents substitution. Since your example assignment does not have any variables in it, the double-quotes could have appeared as single-quotes. V='some substrings grouped together' # assignment
X="Put $V to make a longer string" # substitution and then assignment
Y=`date` # run command, assign its output
Z='Put $V to make a longer string' # no substition, simple assignment Nothing is in the environment until you export it. | {
"source": [
"https://unix.stackexchange.com/questions/16303",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8884/"
]
} |
16,311 | Could someone explain to me what a socket is? I see it in many acronyms in context of SSL, etc. Also, why is it called a socket? Is it purely because it was what a name they invented? Or was it the first name they came up with? | A socket is just a logical endpoint for communication. They exist on the transport layer. You can send and receive things on a socket, you can bind and listen to a socket. A socket is specific to a protocol, machine, and port, and is addressed as such in the header of a packet. Beej's guides to Network Programming and Inter-Process Communication both have good information on how to use sockets, and even answer this exact question . | {
"source": [
"https://unix.stackexchange.com/questions/16311",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7768/"
]
} |
16,333 | In Bash, I learned that the ending signal can be changed by here document. But by default how can I signal the end of stdin input? I happened to find that with cat and chardet , their stdin inputs can be signaled as finished by Ctrl + D . But I seems to remember that Ctrl + D and Ctrl + C are similar to ending execution of a running command. So am I wrong? | Ctrl+D , when typed at the start of a line on a terminal, signifies the end of the input. This is not a signal in the unix sense: when an application is reading from the terminal and the user presses Ctrl+D , the application is notified that the end of the file has been reached (just like if it was reading from a file and had passed the last byte). Ctrl+C does send a signal, SIGINT . By default SIGINT (the interrupt signal) kills the foreground application, but the application can catch the signal and react in some different way (for example, the shell itself catches the signal and aborts the line you've begun typing, but it doesn't exit, it shows a new prompt and waits for a new command line). You can change the characters associated with end-of-file and SIGINT with the stty command, e.g. stty eof a would make a the end-of-file character, and stty intr ^- would disable the SIGINT character. This is rarely useful. | {
"source": [
"https://unix.stackexchange.com/questions/16333",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
16,357 | For a command, if using - as an argument in place of a file name will mean STDIN or STDOUT. But in this example, it creates a file with the name - : echo hello > - How can I make - in this example mean STDOUT? Conversely, how can I make - mean a file named - in examples such as: cat - | Using - as a filename to mean stdin/stdout is a convention that a lot of programs use. It is not a special property of the filename. The kernel does not recognise - as special so any system calls referring to - as a filename will use - literally as the filename. With bash redirection, - is not recognised as a special filename, so bash will use that as the literal filename. When cat sees the string - as a filename, it treats it as a synonym for stdin. To get around this, you need to alter the string that cat sees in such a way that it still refers to a file called - . The usual way of doing this is to prefix the filename with a path - ./- , or /home/Tim/- . This technique is also used to get around similar issues where command line options clash with filenames, so a file referred to as ./-e does not appear as the -e command line option to a program, for example. | {
"source": [
"https://unix.stackexchange.com/questions/16357",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
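A few concrete commands showing the same idea, plus the /dev/stdout device that most modern systems provide for the first half of the question:
echo hello > ./-           # creates (or overwrites) a file literally named "-"
cat ./-                    # the path prefix stops cat from treating it as stdin
rm ./-
echo hi | cat -            # a bare "-" still means standard input to cat
echo hello > /dev/stdout   # explicit standard output, on systems where the device exists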
16,407 | I have 2 GPU's in my netbook. How do I know which one I'm actually using at any given moment? | I've just gone through a hell of a time trying to get my discrete graphics to work in Ubuntu and answering this questions was constantly a challenge, since the lspci method mentioned earlier can sometimes say that both are [VGA controller] I think the following command should give you an indication of your active chip: $ glxinfo|egrep "OpenGL vendor|OpenGL renderer"
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Sandybridge Mobile For me this is telling me that my intel graphics are running the show. If you're using an nvidia chip, and you're using the bumblebee package, you can put optirun in front of that line and it should tell you that you're running the NVidia chip (optirun is basically telling the computer to use the discrete chip to run whatever command follows, but everything else is still using the integrated chip) $ optirun glxinfo|egrep "OpenGL vendor|OpenGL renderer"
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GT 555M/PCIe/SSE2 glxheads also tells you some useful information about which graphics card is in use (mostly repeats glxinfo in a more compact and easy to read form tho), and it gives you a nice rendering of a rotating triangle. | {
"source": [
"https://unix.stackexchange.com/questions/16407",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6960/"
]
} |
16,422 | When I run the command: sudo apt-get install build-essential I get the following error message: Reading Package Lists... Done
Building Dependency Tree... Done
E: Couldn't find package build-essential | I believe this still should work. sudo yum groupinstall 'Development Tools' | {
"source": [
"https://unix.stackexchange.com/questions/16422",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8985/"
]
} |
16,443 | I have two text files. The first one has content: Languages
Recursively enumerable
Regular while the second one has content: Minimal automaton
Turing machine
Finite I want to combine them into one file column-wise. So I tried paste 1 2 and its output is: Languages Minimal automaton
Recursively enumerable Turing machine
Regular Finite However I would like to have the columns aligned well such as Languages Minimal automaton
Recursively enumerable Turing machine
Regular Finite I was wondering if it would be possible to achieve that without manual handling? Added: Here is another example, where Bruce's method almost nails it, except for some slight misalignment that I wonder about: $ cat 1
Chomsky hierarchy
Type-0
—
$ cat 2
Grammars
Unrestricted
$ paste 1 2 | pr -t -e20
Chomsky hierarchy Grammars
Type-0 Unrestricted
— (no common name) | You just need the column command, and tell it to use tabs to separate columns paste file1 file2 | column -s $'\t' -t To address the "empty cell" controversy, we just need the -n option to column : $ paste <(echo foo; echo; echo barbarbar) <(seq 3) | column -s $'\t' -t
foo 1
2
barbarbar 3
$ paste <(echo foo; echo; echo barbarbar) <(seq 3) | column -s $'\t' -tn
foo 1
2
barbarbar 3 My column man page indicates -n is a "Debian GNU/Linux extension." My Fedora system does not exhibit the empty cell problem: it appears to be derived from BSD and the man page says "Version 2.23 changed the -s option to be non-greedy" | {
"source": [
"https://unix.stackexchange.com/questions/16443",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
16,455 | From reading the man pages on the read() and write() calls it appears that these calls get interrupted by signals regardless of whether they have to block or not. In particular, assume a process establishes a handler for some signal. a device is opened (say, a terminal) with the O_NONBLOCK not set (i.e. operating in blocking mode) the process then makes a read() system call to read from the device and as a result executes a kernel control path in kernel-space. while the precess is executing its read() in kernel-space, the signal for which the handler was installed earlier is delivered to that process and its signal handler is invoked. Reading the man pages and the appropriate sections in SUSv3 'System Interfaces volume (XSH)' , one finds that: i. If a read() is interrupted by a signal before it reads any data (i.e. it had to block because no data was available), it returns -1 with errno set to [EINTR]. ii. If a read() is interrupted by a signal after it has successfully read some data (i.e. it was possible to start servicing the request immediately), it returns the number of bytes read. Question A): Am I correct to assume that in either case (block/no block) the delivery and handling of the signal is not entirely transparent to the read() ? Case i. seems understandable since the blocking read() would normally place the process in the TASK_INTERRUPTIBLE state so that when a signal is delivered, the kernel places the process into TASK_RUNNING state. However when the read() doesn't need to block (case ii.) and is processing the request in kernel-space, I would have thought that the arrival of a signal and its handling would be transparent much like the arrival and proper handling of a HW interrupt would be. In particular I would have assumed that upon delivery of the signal, the process would be temporarily placed into user mode to execute its signal handler from which it would return eventually to finish off processing the interrupted read() (in kernel-space) so that the read() runs its course to completion after which the process returns back to the point just after the call to read() (in user-space), with all of the available bytes read as a result. But ii. seems to imply that the read() is interrupted, since data is available immediately, but it returns returns only some of the data (instead of all). This brings me to my second (and final) question: Question B): If my assumption under A) is correct, why does the read() get interrupted, even though it does not need to block because there is data available to satisfy the request immediately?
In other words, why is the read() not resumed after executing the signal handler, eventually resulting in all of the available data (which was available after all) to be returned? | Summary: you're correct that receiving a signal is not transparent, neither in case i (interrupted without having read anything) nor in case ii (interrupted after a partial read). To do otherwise in case i would require making fundamental changes both to the architecture of the operating system and the architecture of applications. The OS implementation view Consider what happens if a system call is interrupted by a signal. The signal handler will execute user-mode code. But the syscall handler is kernel code and does not trust any user-mode code. So let's explore the choices for the syscall handler: Terminate the system call; report how much was done to the user code. It's up to the application code to restart the system call in some way, if desired. That's how unix works. Save the state of the system call, and allow the user code to resume the call. This is problematic for several reasons: While the user code is running, something could happen to invalidate the saved state. For example, if reading from a file, the file might be truncated. So the kernel code would need a lot of logic to handle these cases. The saved state can't be allowed to keep any lock, because there's no guarantee that the user code will ever resume the syscall, and then the lock would be held forever. The kernel must expose new interfaces to resume or cancel ongoing syscalls, in addition to the normal interface to start a syscall. This is a lot of complication for a rare case. The saved state would need to use resources (memory, at least); those resources would need to be allocated and held by the kernel but be counted against the process's allotment. This isn't insurmountable, but it is a complication. Note that the signal handler might make system calls that themselves get interrupted; so you can't just have a static resource allotment that covers all possible syscalls. And what if the resources cannot be allocated? Then the syscall would have to fail anyway. Which means the application would need to have code to handle this case, so this design would not simplify the application code. Remain in progress (but suspended), create a new thread for the signal handler. This, again, is problematic: Early unix implementations had a single thread per process. The signal handler would risk overstepping on the syscall's shoes. This is an issue anyway, but in the current unix design, it's contained. Resources would need to be allocated for the new thread; see above. The main difference with an interrupt is that the interrupt code is trusted, and highly constrained. It's usually not allowed to allocate resources, or run forever, or take locks and not release them, or do any other kind of nasty things; since the interrupt handler is written by the OS implementer himself, he knows that it won't do anything bad. On the other hand, application code can do anything. The application design view When an application is interrupted in the middle of a system call, should the syscall continue to completion? Not always. For example, consider a program like a shell that's reading a line from the terminal, and the user presses Ctrl+C , triggering SIGINT. The read must not complete, that's what the signal is all about. Note that this example shows that the read syscall must be interruptible even if no byte has been read yet. 
So there must be a way for the application to tell the kernel to cancel the system call. Under the unix design, that happens automatically: the signal makes the syscall return. Other designs would require a way for the application to resume or cancel the syscall at its leasure. The read system call is the way it is because it's the primitive that makes sense, given the general design of the operating system. What it means is, roughly, “read as much as you can, up to a limit (the buffer size), but stop if something else happens”. To actually read a full buffer involves running read in a loop until as many bytes as possible have been read; this is a higher-level function, fread(3) . Unlike read(2) which is a system call, fread is a library function, implemented in user space on top of read . It's suitable for an application that reads for a file or dies trying; it's not suitable for a command line interpreter or for a networked program that must throttle connections cleanly, nor for a networked program that has concurrent connections and doesn't use threads. The example of read in a loop is provided in Robert Love's Linux System Programming: ssize_t ret;
while (len != 0 && (ret = read (fd, buf, len)) != 0) {
if (ret == -1) {
if (errno == EINTR)
continue;
perror ("read");
break;
}
len -= ret;
buf += ret;
} It takes care of case i and case ii and few more. | {
"source": [
"https://unix.stackexchange.com/questions/16455",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6918/"
]
} |
16,500 | When I establish an OpenVPN client connection over the Internet to our corporate OpenVPN server, it pushes several static routes. Unfortunately, these routes break some connectivity within my local network environment, as they collide with my own routes. How can I refuse those routes? | After extensive study of the openvpn manual, I have found an answer to my question: If you don't want the routes to be executed automatically, but to be handled by your own tool, use the following option: --route-noexec
Don't add or remove routes automatically. Instead pass routes to --route-up script using environmental variables. If you are accepting everything that is pushed by the server except the routes, use the following option: --route-nopull
When used with --client or --pull, accept options pushed by server EXCEPT for routes.
When used on the client, this option effectively bars the server from adding routes to the client's routing table, however note that
this option still allows the server to set the TCP/IP properties of the client's TUN/TAP interface. | {
"source": [
"https://unix.stackexchange.com/questions/16500",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3766/"
]
} |
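As a rough illustration of the --route-nopull approach described above, the client-side configuration could look like the sketch below; the 10.8.0.0/24 subnet is an assumed example, not something from the original setup.
# client.conf (sketch) — accept everything pushed by the server except routes
client
route-nopull
# then re-add only the routes you actually want, e.g. the VPN subnet itself
route 10.8.0.0 255.255.255.0
Alternatively, --route-noexec together with a --route-up script gives you full manual control over what ends up in the routing table.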
16,530 | In the magic sysrq key combinations, there is the combination alt+sysrq+r which, according to wikipedia, does the following: Switch the keyboard from raw mode, the mode used by programs such as
X11 and svgalib, to XLATE mode What is raw mode? and what is XLATE mode? Can I switch back to raw mode once I have switched to XLATE mode? How can I tell which mode my keyboard is in? | When you press a key on your keyboard, it sends a numeric code to the computer, called a scan code. The scan code tells the computer which key was pressed; for example, on a typical US keyboard, the A key sends the scan code 30 when you press it (and 158 when you release it). The keyboard driver reports these codes directly to applications when the keyboard is in raw mode (“raw” means unprocessed, straight off-the-keyboard). A few programs use raw mode and do their own keyboard processing; the X server is the most prominent one. Most programs expect that when you press the A key, the program reads the character a (ASCII 97), and that when you press Shift + A the program reads A (ASCII 65), and when you press Ctrl + A the program reads the Ctrl+A character (ASCII 1). Keys that don't have associated characters send escape sequences, e.g. \e[A for Left , where \e is the ESC character (ASCII 27). The keyboard driver performs this translation when the keyboard is in ASCII mode, also called XLATE mode (short for “translate”). XLATE mode lets applications do character input, at the cost of not having access to such nuances as “Left Shift key pressed” or Ctrl+Shift+A as distinct from Ctrl+A. The kbd_mode lets you switch between modes, and shows the current mode if you invoke it without any argument. The magic SysRq key combinations are meant to recover from various nasty situations. One of the key combinations is Alt+SysRq+K to kill all programs on the current virtual console; if that program put the keyboard in raw mode, then you won't be able to type at the login prompt (which will often appear, depending on your system configuration). Pressing Alt+SysRq+R restores the usual (outside X) ASCII mode where you can type characters. | {
"source": [
"https://unix.stackexchange.com/questions/16530",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1170/"
]
} |
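For reference, the kbd_mode utility from the kbd package can show and change the mode from a shell on a virtual console; a quick sketch (the exact output wording may differ between versions):
$ kbd_mode          # print the current keyboard mode
$ sudo kbd_mode -s  # switch to raw (scancode) mode
$ sudo kbd_mode -a  # switch back to XLATE (ASCII) mode
$ sudo kbd_mode -u  # UTF-8 variant of the translated mode
It acts on the console it is run from, so it is mostly useful for recovering a console that a crashed program left in raw mode.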
16,559 | I currently work remotely for a number of organizations which need me to access their servers regularly for maintenance and upgrades. Is it safe for me to add my RSA public SSH key to the .ssh/authorized_keys file to allow me to login much faster/without having to lookup passwords? I operate under the presumption that it'd be impossible to generate the private key from the public one, but am I right in my assumption? Taking it one step further, would there be any security risk whatsoever in posting the contents of my RSA public key to the internet? I know that if my private key is obtained, I'm in a lot of trouble, but other than that, are there any real possible security threats? | Yes, it is impossible to recover the private key from the public key. If it was possible, RSA would be fundamentally broken, and this would be major news (breaking RSA would not only break a lot of Internet communication security, but also allow all kinds of banking fraud, amongst others). Logging in with a public key instead of a password in fact tends to increase security. If your password isn't strong enough, it can be brute-forced by an attacker with sufficient bandwidth. If the attacker doesn't have a copy of your private key file, the RSA key effectively cannot be brute-forced (a 1024-bit key is equivalent to something like a 160-character password made of random case-sensitive letters and digits). Someone who watches over your shoulder might be able to see your password and your key passphrase, but with a key they'd also need to get the key. Private keys aren't always more secure than passwords. If the attacker obtains a copy of your private key files (for example by stealing your laptop or your backup media), she can try to brute-force the passphrase, and she can do it at high speed since you have no way to limit the rate (unlike password guesses that need to be made online). If your passphrase is good enough and you notice the theft immediately, you'll still have time to revoke the key. A public key introduces an element of privacy exposure: if someone knows that you've used the same public key to log into A and to log into B, they know the same person logged into A and B. Merely possessing the public key makes you a suspect that you also have the private key, so you lose some anonimity. But that's usually minor, especially if you're just storing the key in ~/.ssh where only system administrators (who also know what IP address you logged in from) can see it. These security considerations aside, a private key has many practical advantages. You don't need to type your password so often, and in particular can run automated scripts that don't prompt you once you've entered your key in ssh-agent or the like. You don't need to type your password so often, so you can afford to make it higher-entropy (longer, harder to type). You don't need to type your password so often, so there's less risk that it'll be snooped by a human observer or camera. | {
"source": [
"https://unix.stackexchange.com/questions/16559",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5614/"
]
} |
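A minimal sketch of the usual workflow, assuming OpenSSH and a remote account you can already log into with a password (the host and user names are placeholders):
$ ssh-keygen -t rsa -b 4096        # pick a good passphrase when prompted
$ ssh-copy-id user@server.example  # appends ~/.ssh/id_rsa.pub to the remote authorized_keys
$ ssh user@server.example          # should now authenticate with the key
Only the .pub file ever needs to leave your machine; the private key stays in ~/.ssh and should be protected by the passphrase.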
16,566 | This is a followup to my earlier question . I am validating the number of fields in /etc/passwd using this handy snippet. In the following example, the users 'fieldcount1' and 'fieldcount2' have the wrong number of fields: $ awk -F: ' NF!=7 {print}' /etc/passwd
fieldcount1:x:1000:100:fieldcount1:/home/fieldcount1:/bin/bash::::
fieldcount2:blah::blah:1002:100:fieldcount2:/home/fieldcount2:/bin/bash:
$ echo $?
0 As you'll notice, awk will exit with a return status of 0. From its standpoint, there are no problems here. I would like to incorporate this awk statement into a shell script. I would like to print all lines which are in error, and set the return code to be 1 (error). I can try to force a particular exit status, but then awk only prints a single line: $ awk -F: ' NF!=7 {print ; exit 1}' /etc/passwd
fieldcount1:x:1000:100:fieldcount1:/home/fieldcount1:/bin/bash::::
$ echo $?
1 Can I force awk to exit with a return status of '1', and print all lines which match? | Keep the status in a variable and use it in an END block. awk -F: 'NF != 7 {print; err = 1}
END {exit err}' /etc/passwd | {
"source": [
"https://unix.stackexchange.com/questions/16566",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4/"
]
} |
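A sketch of how this might be wrapped in a shell script, as the question asks (the error message text is made up for the example):
#!/bin/bash
# print all malformed lines and fail if there were any
if ! awk -F: 'NF != 7 {print; err = 1} END {exit err}' /etc/passwd; then
    echo "ERROR: /etc/passwd contains lines with the wrong number of fields" >&2
    exit 1
fi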
16,571 | My understanding of the way ~/.ssh/config works is that each 'Host ' line takes effect for any host matching after that point in the config file. I have a number of personal servers and work servers that I need to connect to. I'm trying to do something like the following: # General Settings
ControlMaster auto
ControlPath ~/.ssh/controlmaster/%r@%h:%p
ForwardAgent yes
ForwardX11 yes
GSSAPIAuthentication no
PubkeyAuthentication yes
# Personal Servers
Host *
User harleypig
IdentityFile ~/.ssh/personal_id_rsa
Host host1
Hostname host1.com
Host host2
Hostname host2.com
# Work Servers
Host *
User alan.young
IdentityFile ~/.ssh/work_id_rsa
Host work1
Hostname work1.companyserver.com
Host work2
Hostname work2.companyserver.com
Host *
User devuser
Host dev1
Hostname dev1.companyserver.com
Host dev2
Hostname dev2.companyserver.com The docs seem to indicate that host1 and host2 should use 'personal_id_rsa' and the user harleypig. work1, work2, dev1 and dev2 should use 'work_id_rsa' and the first two should be the user 'alan.young' and dev1 and dev2 should be the user 'devuser' However, this is not happening. Whatever 'Host *' I put first is what all of the following hosts try to connect with. Am I misunderstanding or missing something? | From the ssh_config manual : Since the first obtained value for each parameter is used, more host-specific declarations should be given near the beginning of the file, and general defaults at the end. So in your example, all hosts will use User harleypig and IdentityFile ~/.ssh/personal_id_rsa . Think of Host directives with wildcards as fallbacks: use the following settings only if they haven't been set yet. You need to write something like this: Host host1
Hostname host1.com
Host host2
Hostname host2.com
Host host*
User harleypig
IdentityFile ~/.ssh/personal_id_rsa You can put multiple patterns on a Host line if a given set of host aliases can't be matched with wildcards, e.g. Host host* more* outlier . | {
"source": [
"https://unix.stackexchange.com/questions/16571",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9032/"
]
} |
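The answer shows the personal hosts; continuing the same pattern ("most specific first, general defaults last") for the work and dev machines might look roughly like this — a sketch only, reusing the names from the question:
Host work1
Hostname work1.companyserver.com
Host work2
Hostname work2.companyserver.com
Host work*
User alan.young
Host dev1
Hostname dev1.companyserver.com
Host dev2
Hostname dev2.companyserver.com
Host dev*
User devuser
Host work* dev*
IdentityFile ~/.ssh/work_id_rsa
# general settings go last, as fallbacks for every host
Host *
ControlMaster auto
ForwardAgent yes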
16,640 | How can I get the size of a file in a bash script? How do I assign this to a bash variable so I can use it later? | Your best bet if on a GNU system: stat --printf="%s" file.any From man stat : %s total size, in bytes In a bash script : #!/bin/bash
FILENAME=/home/heiko/dummy/packages.txt
FILESIZE=$(stat -c%s "$FILENAME")
echo "Size of $FILENAME = $FILESIZE bytes." NOTE: see @chbrown's answer for how to use stat in terminal on Mac OS X. | {
"source": [
"https://unix.stackexchange.com/questions/16640",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9140/"
]
} |
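Two hedged variations on the same idea: on BSD and Mac OS X the stat syntax differs, and wc -c works almost everywhere (though some implementations read the whole file to count):
FILESIZE=$(stat -f%z "$FILENAME")   # BSD / Mac OS X stat
FILESIZE=$(wc -c < "$FILENAME")     # portable fallback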
16,669 | I have a huge file tree. Some files have same name but in different case, e.g., some_code.c and Some_Code.c . So when I'm trying to copy it to an NTFS/FAT filesystem, it asks me about whether I want it to replace the file or skip it. Is there any way to automatically rename such files, for example, by adding (1) to the name of conflict file (as Windows 7 does)? | Many GNU tools such as cp , mv and tar support creating backup files when the target exists. That is, when copying foo to bar , if there is already a file called bar , the existing bar will be renamed, and after the copy bar will contain the contents of foo . By default, bar is renamed to bar~ , but the behavior can be modified: # If a file foo exists in the target, then…
cp -r --backup source target # rename foo → foo~
cp -r --backup=t source target # rename foo → foo.~1~ (or foo.~2~, etc) There are other variants, such as creating numbered backups only when one already exists. See the coreutils manual for more details. | {
"source": [
"https://unix.stackexchange.com/questions/16669",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7312/"
]
} |
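For example, using numbered backups so that nothing is ever overwritten (a sketch; the directory names are invented):
$ cp -r --backup=numbered source/ target/
# an existing target file foo is renamed to foo.~1~ (then foo.~2~, and so on) before being replaced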
16,694 | Here's my usage case: I'm often connected to other computers over SSH for work and I often need to copy and paste documents/text from the server to locally running editors for writing examples and sharing text. Often, if the text is small enough, I'll simply copy the output from my terminal program (gnome-terminal at the moment) and paste it. However, when it comes to entire documents, my options are quite limited. I can either copy the document chunk-by-chunk, or scp it to the local machine. Is there a way to use a program such as xclip which will allow me to copy remote stdin to the local X server's clipboard? Something to the effect of: cat myconffile.conf | sed {...} | copy-over-ssh-to-local-clipboard would be awesome. Does something exist to make this possible? | If you run ssh with X forwarding, this is transparent: remote commands (including xclip ) have access to your X server (including its keyboard). Make sure you have ForwardX11 yes in your ~/.ssh/config and X11Forwarding yes in the server sshd_config (depending on your distributions, these options may be on or off by default). <myconffile.conf sed {...} | xclip -i There are other ways of working on remote files that may be more convenient, for example mounting remote directories on your local machine with SSHfs , or opening remote files in Emacs with Tramp . If you have ssh and FUSE set up and SSHfs installed, SSHfs is as easy as mkdir ~/net/myserver; sshfs myserver:/ ~/net/myserver . If you have ssh set up and Emacs installed, Tramp is as easy as opening /myserver:/path/to/file . | {
"source": [
"https://unix.stackexchange.com/questions/16694",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5614/"
]
} |
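A sketch of the whole round trip (the host name is a placeholder); run the first command locally and the second on the server inside that session:
local$ ssh -X user@remotehost
remote$ sed {...} myconffile.conf | xclip -selection clipboard
Here -selection clipboard targets the desktop clipboard; plain xclip -i uses the primary selection, which is pasted with the middle mouse button.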
16,705 | On my Lenovo T400 and Ubuntu, the light for hard drive writing keeps flashing. I was wondering if in Linux it is possible to find out what processes are doing I/O to the hard drive? Just like with top , you can find out what processes are using most CPU and memory. | iotop (simple top-like I/O monitor) is a good tool for what you want. It also allows one to display the accumulated amount of I/O on any of the DISK READ, DISK WRITE, SWAPIN, and IO (overall percentage). This is through a nifty interface: You just press a on the keyboard, and it will sort the hungriest processes on top. Reversing the order, you just press r . If you want to sort by other columns, you just press the left/right key. Like top , the presentation is rather busy. Another thing is that it doesn't have the myriad options that top has (e.g. I can't choose to hide any of the columns I'm uninterested in), but the tool is more than good enough for its specific purpose. | {
"source": [
"https://unix.stackexchange.com/questions/16705",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
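A few invocation options worth knowing (a sketch; run it as root, since iotop needs elevated privileges on most setups):
$ sudo iotop -o       # only show processes/threads actually doing I/O right now
$ sudo iotop -o -a    # same, but with accumulated totals since iotop started
$ sudo iotop -b -n 3  # non-interactive batch mode, three iterations, handy for logging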
16,743 | So I have a git repository that I cloned from an upstream source on github. I made a few changes to it (that are uncommitted in the master branch). What I want to do is push my changes onto my github page as a new branch and have github still see it as a fork. Is that possible? I'm fairly new to git and github. Did my question even make sense? The easiest way that I can think of (which I'm sure is the most aroundabout way), is to fork the repo on github. Clone it locally to a different directory. Add the upstream origin repo. Create a branch in that new forked repo. Copy my code changes by hand into the new local repo. And then push it back up to my github. Is this a common use case that there's a simpler way of doing it without duplicating directories? I guess I'm asking here as opposed to SO since I'm on linux using command line git and the people here give better answers imo =] | You can do it all from your existing repository (no need to clone the fork into a new (local) repository, create your branch, copy your commits/changes, etc.). Get your commits ready to be published. Refine any existing local commits (e.g. with git commit --amend and/or git rebase --interactive ). Commit any of your uncommitted changes that you want to publish (I am not sure if you meant to imply that you have some commits on your local master and some uncommitted changes, or just some uncommitted changes; incidentally, uncommitted changes are not “on a branch”, they are strictly in your working tree). Rename your master branch to give it the name you want for your “new branch”. This is not strictly necessary (you can push from any branch to any other branch), but it will probably reduce confusion in the long run if your local branch and the branch in your GitHub fork have the same name. git branch -m master my-feature Fork the upstream GitHub repository (e.g.) github.com:UpstreamOwner/repostory_name.git as (e.g.) github.com:YourUser/repository_name.git . This is done on the GitHub website (or a “client” that uses the GitHub APIs), there are no local Git commands involved. In your local repository (the one that was originally cloned from the upstream GitHub repository and has your changes in its master ), add your fork repository as a remote: git remote add -f github github.com:YourUser/repository_name.git Push your branch to your fork repository on GitHub. git push github my-feature Optionally, rename the remotes so that your fork is known as “origin” and the upstream as “upstream”. git remote rename origin upstream
git remote rename github origin One reason to rename the remotes would be because you want to be able to use git push without specifying a repository (it defaults to “origin”). | {
"source": [
"https://unix.stackexchange.com/questions/16743",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1571/"
]
} |
16,828 | I need some clarification/confirmation/elaboration on the different roles DAC, ACL and MAC play in Linux file security. After some research from the documentation, this is my understanding of the stack: SELinux must allow you access to the file object. If the file's ACLs (e.g., setfacl , getfacl for an ACL mount) explicitly allows/denies access to the object, then no further processing is required. Otherwise, it is up to the file's permissions (rwxrwxrwx DAC model). Am I missing something? Are there situations where this is not the case? | When a process performs an operation to a file, the Linux kernel performs the check in the following order: Discretionary Access Control (DAC) or user dictated access control. This includes both classic UNIX style permission checks and POSIX Access Control Lists (ACL) . Classical UNIX checks compare the current process UID and GID versus the UID and GID of the file being accessed with regards to which modes have been set (Read/Write/eXecute). Access Control List extends classic UNIX checks to allow more options regarding permission control. Mandatory Access Control (MAC) or policy based access control. This is implemented using Linux Security Modules (LSM) which are not real modules anymore (they used to be but it was dropped). They enable additionnal checks based on other models than the classical UNIX style security checks. All of those models are based on a policy describing what kind of opeartions are allowed for which process in which context. Here is an example for inodes access (which includes file access) to back my answer with links to an online Linux Cross Reference . The " function_name (filename:line)" given are for the 3.14 version of the Linux kernel. The function inode_permission ( fs/namei.c:449 ) first checks for read permission on the filesystem itself ( sb_permission in fs/namei.c:425 ), then calls __inode_permission ( fs/namei.c:394 ) to check for read/write/execute permissions and POSIX ACL on an inode in do_inode_permission ( fs/namei.c:368 ) (DAC) and then LSM-related permissions (MAC) in security_inode_permission ( security/security.c:550 ). There was only one exception to this order (DAC then MAC): it was for the mmap checks. But this has been fixed in the 3.15 version of the Linux kernel ( relevant commit ). | {
"source": [
"https://unix.stackexchange.com/questions/16828",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2372/"
]
} |
16,832 | I have a directory in Desktop named Project. I tried opening it using the cd command but it is not opening. What's the reason? Check the screenshot. The directory has all the required permissions. I tried with the root user also, but facing the same problem. | It looks to me like the Project directory has a space at the start of it, so try: cd " Project" If it's something other than a space (say a tab) then cd *Project should do the trick. | {
"source": [
"https://unix.stackexchange.com/questions/16832",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2063/"
]
} |
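A quick way to make such invisible characters visible from bash (a sketch; the second line is what you would expect to see if the name really starts with a space):
$ printf '%q\n' */
\ Project/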
16,883 | What is the maximum value of the Process ID? Also, is it possible to change a Process ID? | On Linux, you can find the maximum PID value for your system with this: $ cat /proc/sys/kernel/pid_max This value can also be written using the same file, however the value can only be extended up to a theoretical maximum of 32768 (2^15) for 32 bit systems or 4194304 (2^22) for 64 bit: $ echo 32768 > /proc/sys/kernel/pid_max It seems to be normative practice on most 64 bit systems to set this value to the same value as found on 32 bit systems, but this is by convention rather than a requirement. From man 5 proc : /proc/sys/kernel/pid_max This file (new in Linux 2.5) specifies the value at which PIDs wrap around (i.e., the value in this file is one greater than the maximum PID). The default value for this file, 32768, results in the same range of PIDs as on earlier kernels. On 32-bit platfroms, 32768 is the maximum value for pid_max . On 64-bit systems, pid_max can be set to any value up to 2^22 ( PID_MAX_LIMIT , approximately 4 million). And no, you cannot change the PID of a running process. It gets assigned as a sequential number by the kernel at the time the process starts and that is it's identifier from that time on. The only thing you could do to get a new one is have your code fork a new process and terminate the old one. | {
"source": [
"https://unix.stackexchange.com/questions/16883",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9146/"
]
} |
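The same value can be read and set through sysctl, which is often more convenient than echoing into /proc (a sketch; the numbers shown are the usual default and the 64-bit ceiling mentioned above):
$ sysctl kernel.pid_max
kernel.pid_max = 32768
$ sudo sysctl -w kernel.pid_max=4194304
To make the change persistent across reboots, the line kernel.pid_max = 4194304 can go into /etc/sysctl.conf.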
16,885 | Currently I am running a command like this, to get the most requested content: grep "17\/Jul\/2011" other_vhosts_access.log | awk '{print $8}' | sort | uniq -c | sort -nr I want to now see the user agent strings, but the problem is they include several spaces. Here is a typical log file line. The UA is the last section delimited by quotation marks: example.com:80 [ip] - - [17/Jul/2011:23:59:59 +0100] "GET [url] HTTP/1.1" 200 6449 "[referer]" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.122 Safari/534.30" Is there a better tool than awk for this? | If that format is consistent and the field is really wrapped in double quotes you can use either awk or cut with " as the field delimiter: awk -F\" '{print $6}' or: cut -d\" -f 6 | {
"source": [
"https://unix.stackexchange.com/questions/16885",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5485/"
]
} |
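Tying it back to the counting pipeline from the question, a sketch using the same log file name:
$ awk -F\" '{print $6}' other_vhosts_access.log | sort | uniq -c | sort -nr | head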
16,890 | Here are details of the machine I want to access using its hostname: $ hostname
hostname
$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 hostname.company.local hostname It's a default Debian 6 (Squeeze) install, so I didn't fiddle with anything yet. This is what I get from a machine (running Debian Unstable) trying to access above machine: $ ping hostname
ping: unknown host hostname
$ ping hostname.company.local
ping: unknown host hostname.company.local
$ cat /etc/resolv.conf
nameserver 192.168.2.21
nameserver 192.168.2.51
search company.local | On the Internet, including local networks, machines call each other by IP addresses . In order to access machine B from machine A using the name of machine B, machine A has to have some way to map the name of B to its IP address. There are three ways to declare machine names on A: a hosts file . This is a simple text file that maps names to addresses. the domain name system (DNS) . This is the method used on the global Internet. For example, when you load this page in a browser, the first thing your computer does is to make a DNS request to know the address of unix.stackexchange.com . other name databases such as NIS , LDAP or Active Directory . These are used in some corporate networks, but not very often (many networks that use NIS, LDAP or AD for user databases use DNS for machine names). If your network uses one of these, you have a professional network administrator and should ask him what to do. There are many ways in which these can work in practice; it's impossible to cover them all. In this answer, I'll describe a few common situations. Hosts file The hosts file method has the advantage that it doesn't require any special method. It can be cumbersome if you have several machines, because you have to update every machine when the name of one machine changes. It's not suitable if the IP address of B is assigned dynamically (so that you get a different one each time you connect to the network). A hosts file is a simple list of lines mapping names to IP addresses. It looks like this: 127.0.0.1 localhost localhost.localdomain
198.51.100.42 darkstar darkstar.bands On unix systems, the hosts file is /etc/hosts . On Windows, it's c:\windows\system32\drivers\etc\hosts . Just about every operating system that you can connect to the Internet has a similar file; Wikipedia has a list . To add an entry for B in the hosts file of A: Determine the IP address of B. On B, run the command ifconfig (if the command is not found, try /sbin/ifconfig ). The output will contain lines like this: eth1 Link encap:Ethernet HWaddr 01:23:45:67:89:ab
inet addr:10.3.1.42 Bcast:10.3.1.255 Mask:255.255.255.0 In this example, the IP address of B is 10.3.1.42. If there are several inet addr: lines, pick the one that corresponds to your network card, never the lo entry or a tunnel or virtual entry. Edit the hosts file on A. If A is running some unix system, you'll need to edit /etc/hosts as the super user; see How do I run a command as the system administrator (root) . DHCP+DNS on home or small office networks This method is by far the simplest if you have the requisite equipment. You only need to configure one device, and all your computers will know about each other's names. This method assumes your computers get their IP addresses over DHCP , which is a method for computers to automatically retrieve an IP address when they connect to the network. If you don't know what DHCP is, they probably do. If your network has a home router , it's the best place to configure names for machines connected to that router. First, you need to figure out the MAC address of B. Each network device has a unique MAC address. On B, run the command ifconfig -a (if the command is not found, try /sbin/ifconfig -a ). The output will contain lines like this: eth1 Link encap:Ethernet HWaddr 01:23:45:67:89:ab In this example the MAC address is 01:23:45:67:89:ab . You must pick the HWaddr line that corresponds to the network port that's connected to the router via a cable (or the wifi card if you're connected over wifi). If you have several entries and you don't know which is which, plug the cable and see which network device receives an IP address ( inet addr line just below). Now, on your router's web interface, look for a setting like “DHCP”. The name and location of the setting is completely dependent on the router model, but most have a similar set of basic settings. Here's what it looks like on a Tomato firmware : Enter the MAC address, an IP address and the desired name. You can pick any IP address on your local network's address range. Most home routers are preconfigured for an address range of the form 192.168. x . y or 10. x . y . z . For example, on the Tomato router shown above, in the “Network” tab, there's a “router IP address” setting with the value 10.3.0.1 and a “subnet mask” setting with the value 255.255.255.0, which means that computers on the local network must have an address of the form 10.3.0. z . There's also a range of addresses for automatically assigned DHCP addresses (10.3.0.129–10.3.0.254); for your manually assigned DHCP address, pick one that isn't in this range. Now connect B to the network, and it should get the IP address you specified and it'll be reachable by the specified name from any machine in the network. Make your own DNS server with Dnsmasq If you don't have a capable home router, you can set up the same functionality on any Linux machine. I'll explain how to use Dnsmasq to set up DNS . There are many other similar programs; I chose Dnsmasq because it's easy to configure and lightweight (it's what the Tomato router illustrated above uses, for example). Dnsmasq is available on most Linux and BSD distributions for PCs, servers and network equipment. Pick a computer that's always on, that has a static IP address, and that's running some kind of Linux or BSD; let's call it S (for server). On S, install the dnsmasq package (if it's not already there). Below I'll assume that the configuration file is /etc/dnsmasq.conf ; the location may vary on some distribution. Now you need to do several things. 
Tell Dnsmasq to serve your host names in addition to the ones it gets from the Internet. The simplest way is to enter the names and IP addresses in /etc/hosts (see the “Hosts file” section above), and make sure that /etc/dnsmasq.conf does not have the no-hosts directive uncommented. (Lines that begin with a # are commented out.) You can put the names in a different file; if you do, put a line addn-hosts=/path/to/hosts/file in /etc/dnsmasq.conf . Tell Dnsmasq how to obtain IP addresses for names of machines on the Internet. If you're running Debian, Ubuntu or a derivative, install the resolvconf package. In most common cases, everything will work out of the box. If your network administrator or your ISP gave you the addresses of DNS servers, enter them in /etc/dnsmasq.conf , for example: server=8.8.8.8
server=8.8.4.4 If you don't know what your current DNS settings are, look in the file /etc/resolv.conf . If you see a line like nameserver 8.8.8.8 , put a line server=8.8.8.8 in /etc/dnsmasq.conf .
After you've changed /etc/dnsmasq.conf , restart Dnsmasq. The command to do that depends on the distribution; typical possibilities include restart dnsmasq or /etc/init.d/dnsmasq restart . Tell S to use the Dnsmasq service for all host name requests. Edit the file /etc/resolv.conf (as root), remove every nameserver line, and put nameserver 127.0.0.1 instead. If you're using resolvconf on Debian or Ubuntu, the /etc/resolv.conf may be suboptimal if you installed the resolvconf package with the network up and running. Make sure that the files base , head and tail in the /etc/resolvconf/resolv.conf.d/ directory don't contain any nameserver entries, then run resolvconf -u (as root). Tell the other machines to use S as the DNS server. Edit /etc/resolv.conf and replace all nameserver lines with a single nameserver 10.3.0.2 where 10.3.0.2 is the IP address of S (see above for how to find out S's IP address). You can also use Dnsmasq as a DHCP server, so that machines can obtain the address corresponding to their name automatically. This is beyond the scope of this answer; consult the Dnsmasq documentation (it's not difficult). Note that there can only be a single DHCP server on a given local network (the exact definition of local network is beyond the scope of this answer). Names on the global Internet So far, I've assumed a local network. What if you want to give a name to a machine that's in a different corner of the world? You can still use any of the techniques above, except that the parts involving DHCP are only applicable within a local network. Alternatively, if your machines have public IP addresses, you can register your own public name for them. (You can assign a private IP address to a public name, too; it's less common and less useful, but there's no technical difficulty.) Getting your own domain name You can get your own domain name and assign IP addresses to host names inside this domain. You need to register the domain name with a domain name provider; this typically costs $10–$15/year (for the cheapest domains). Use your domain name provider's web interface to assign addresses to host names. Dynamic DNS If your machines have a dynamic IP address, you can use the dynamic DNS protocol to update the IP address associated to the machine's name when the address changes. Not all domain name providers support dynamic DNS, so shop before you buy. For personal use, No-IP provides a free dynamic DNS service, if you use their own domains (e.g. example.ddns.net ). | {
"source": [
"https://unix.stackexchange.com/questions/16890",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/688/"
]
} |
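For orientation, a minimal /etc/dnsmasq.conf along the lines described above might look like this — a sketch only; the addresses are the examples used in the answer plus the usual Google public DNS servers:
# serve names from an extra hosts file in addition to /etc/hosts
addn-hosts=/etc/hosts.dnsmasq
# upstream DNS servers for everything else
server=8.8.8.8
server=8.8.4.4
# listen on the loopback interface and on the LAN address of S
listen-address=127.0.0.1,10.3.0.2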
16,939 | For example, I'm editing this code: <html>
<body>
<script>
var a = 10;
a += 100;
</script>
</body>
</html> now I need to indent the script line: <html>
<body>
<script>
var a = 10;
a += 100;
</script>
</body>
</html> How could I do this without moving cursor to the begin of each line and press Tab? | Press V to switch to VISUAL LINE mode and highlight the lines you want to indent by pressing j . Then press > to indent them. So the complete command would be Vjjj> . Alternatively, put your cursor on the <script> tag and use 4>> to indent four lines. | {
"source": [
"https://unix.stackexchange.com/questions/16939",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8209/"
]
} |
16,990 | Per man definition, this command gets the input from a file. $ command -r FILENAME Suppose that FILENAME is a file containing a list of filenames, as it was generated using ls > FILENAME . How can I, instead, feed the command with the result of ls directly? In my head something like this should be possible: $ ls | command -r But it doesn't, the output of ls doesn't get hooked as an argument. Output: Usage: command -r FILENAME
error: -r option requires an argument How could I obtain the desired effect? | This is dependent on the command. Some commands that read from a file expect that the file be a regular file, whose size is known in advance, and which can be read from any position and rewinded. This is unlikely if the contents of the file is a list of file names: then the command will probably be content with a pipe which it will just read sequentially from start to finish. There are several ways to feed data via a pipe to a command that expects a file name. Many commands treat - as a special name, meaning to read from standard input rather than opening a file. This is a convention, not an obligation. ls | command -r - Many unix variants provide special files in /dev that designate the standard descriptors. If /dev/stdin exists, opening it and reading from it is equivalent to reading from standard input; likewise /dev/fd/0 if it exists. ls | command -r /dev/stdin
ls | command -r /dev/fd/0 If your shell is ksh, bash or zsh, you can make the shell deal with the business of allocating some file descriptor. The main advantage of this method is that it's not tied to standard input, so you can use standard input for something else, and you can use it more than once. command -r <(ls) If the command expects the name to have a particular form (typically a particular extension), you can try to fool it with a symbolic link. ln -s /dev/fd/0 list.foo
ls | command -r list.foo Or you can use a named pipe. mkfifo list.foo
ls >list.foo &
command -r list.foo Note that generating a list of files with ls is problematic because ls tends to mangle file names when they contain unprintable characters. printf '%s\n' * is more reliable — it'll print every byte literally in file names. File names containing newlines will still cause trouble, but that's unavoidable if the command expects a list of file names separated by newlines. | {
"source": [
"https://unix.stackexchange.com/questions/16990",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8115/"
]
} |
17,023 | Coming from the Windows world, I have found the majority of the folder directory names to be quite intuitive: \Program Files contains files used by programs (surprise!) \Program Files (x86) contains files used by 32-bit programs on 64-bit OSes \Users (formerly Documents and Settings ) contains users' files, i.e. documents and settings \Users\USER\Application Data contains application-specific data \Users\USER\Documents contains documents belonging to the user \Windows contains files that belong to the operation of Windows itself \Windows\Fonts stores font files (surprise!) \Windows\Temp is a global temporary directory et cetera. Even if I had no idea what these folders did, I could guess with good accuracy from their names. Now I'm taking a good look at Linux, and getting quite confused about how to find my way around the file system. For example: /bin contains binaries. But so do /sbin , /usr/bin , /usr/sbin , and probably more that I don't know about. Which is which?? What is the difference between them? If I want to make a binary and put it somewhere system-wide, where do I put it? /media contains external media file systems. But so does /mnt . And neither of them contain anything on my system at the moment; everything seems to be in /dev . What's the difference? Where are the other partitions on my hard disk, like the C: and D: that were in Windows? /home contains the user files and settings. That much is intuitive, but then, what is supposed to go into /usr ? And how come /root is still separate, even though it's a user with files and settings? /lib contains shared libraries, like DLLs. But so does /usr/lib . What's the difference? What is /etc ? Does it really stand for "et cetera", or something else? What kinds of files should go in there -- global or local? Is it a catch-all for things no one knew where to put, or is there a particular use case for it? What are /opt , /proc , and /var ? What do they stand for and what are they used for? I haven't seen anything like them in Windows*, and I just can't figure out what they might be for. If anyone can think of other standard places that might be good to know about, feel free to add it to the question; hopefully this can be a good reference for people like me, who are starting to get familiar with *nix systems. *OK, that's a lie. I've seen similar things in WinObj, but obviously not on a regular basis. I still don't know what these do on Linux, though. | Linux distributions use the FHS: http://www.pathname.com/fhs/pub/fhs-2.3.html You can also try man hier . I'll try to sum up answers your questions off the top of my head, but I strongly suggest that you read through the FHS: /bin is for non-superuser system binaries /sbin is for superuser (root) system binaries /usr/bin & /usr/sbin are for non-critical shared non-superuser or superuser binaries, respectively /mnt is for temporarily mounting a partition /media is for mounting many removable media at once /dev contains your system device files; it's a long story :) The /usr folder, and its subfolders, can be shared with other systems, so that they will have access to the same programs/files installed in one place. Since /usr is typically on a separate filesystem, it doesn't contain binaries that are necessary to bring the system online. /root is separate because it may be necessary to bring the system online without mounting other directories which may be on separate partitions/hard drives/servers Yes, /etc stands for "et cetera". 
Configuration files for the local system are stored there. /opt is a place where you can install programs that you download/compile. That way you can keep them separate from the rest of the system, with all of the files in one place. /proc contains information about the kernel and running processes /var contains variable size files like logs, mail, webpages, etc. To access a system, you generally don't need /var, /opt, /usr, /home; some of the potentially largest directories on a system. One of my favorites, which some people don't use, is /srv. It's for data that is being hosted via services like http/ftp/samba. I've seen /var used for this a lot, which isn't really its purpose. | {
"source": [
"https://unix.stackexchange.com/questions/17023",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6252/"
]
} |
17,027 | It is common to set the resolution of text consoles (that are usually available by Ctrl-Alt-F1 thru Ctrl-Alt-F6) by using a vga=... kernel parameter.
I'm using Ubuntu 10.04 Lucid, the output of uname -a is: Linux 2.6.32-33-generic #70-Ubuntu SMP Thu Jul 7 21:13:52 UTC 2011 x86_64 GNU/Linux To identify the available modes I use sudo hwinfo --framebuffer , which reports: 02: None 00.0: 11001 VESA Framebuffer
[Created at bios.464]
Unique ID: rdCR.R1b4duaxSqA
Hardware Class: framebuffer
Model: "NVIDIA G73 Board - p456h1 "
Vendor: "NVIDIA Corporation"
Device: "G73 Board - p456h1 "
SubVendor: "NVIDIA"
SubDevice:
Revision: "Chip Rev"
Memory Size: 256 MB
Memory Range: 0xc0000000-0xcfffffff (rw)
Mode 0x0300: 640x400 (+640), 8 bits
Mode 0x0301: 640x480 (+640), 8 bits
Mode 0x0303: 800x600 (+800), 8 bits
Mode 0x0305: 1024x768 (+1024), 8 bits
Mode 0x0307: 1280x1024 (+1280), 8 bits
Mode 0x030e: 320x200 (+640), 16 bits
Mode 0x030f: 320x200 (+1280), 24 bits
Mode 0x0311: 640x480 (+1280), 16 bits
Mode 0x0312: 640x480 (+2560), 24 bits
Mode 0x0314: 800x600 (+1600), 16 bits
Mode 0x0315: 800x600 (+3200), 24 bits
Mode 0x0317: 1024x768 (+2048), 16 bits
Mode 0x0318: 1024x768 (+4096), 24 bits
Mode 0x031a: 1280x1024 (+2560), 16 bits
Mode 0x031b: 1280x1024 (+5120), 24 bits
Mode 0x0330: 320x200 (+320), 8 bits
Mode 0x0331: 320x400 (+320), 8 bits
Mode 0x0332: 320x400 (+640), 16 bits
Mode 0x0333: 320x400 (+1280), 24 bits
Mode 0x0334: 320x240 (+320), 8 bits
Mode 0x0335: 320x240 (+640), 16 bits
Mode 0x0336: 320x240 (+1280), 24 bits
Mode 0x033d: 640x400 (+1280), 16 bits
Mode 0x033e: 640x400 (+2560), 24 bits
Config Status: cfg=new, avail=yes, need=no, active=unknown It looks like many hi-res modes are available, like 0x305, 0x307, 0x317, 0x318, 0x31a, 0x31b (by the way, what does the plus-number mean in the list of modes?). However, setting any of these modes in the kernel option string, like vga=0x305 , results in either a pitch black text console, or a screen filled with blinking color/bw dots. What is the 'modern', 'robust' way to set up high resolution in text consoles? | Newer kernels use KMS by default, so you should move away from appending vga= to your grub line as it will conflict with the native resolution of KMS. However, it depends upon the video driver you are using: the proprietary Nvidia driver doesn't support KMS , but you can work around it. You should be able to get full resolution in the framebuffer by editing your /etc/default/grub and making sure that the GFXMODE is set correctly, and then adding a GFXPAYLOAD entry like so: GRUB_GFXMODE=1680x1050x24
# Hack to force higher framebuffer resolution
GRUB_GFXPAYLOAD_LINUX=1680x1050 Remember to run sudo update-grub afterwards. | {
"source": [
"https://unix.stackexchange.com/questions/17027",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3766/"
]
} |
17,040 | I've two configuration files, the original from the package manager and a customized one modified by myself. I've added some comments to describe behavior. How can I run diff on the configuration files, skipping the comments? A commented line is defined by: optional leading whitespace (tabs and spaces) hash sign ( # ) anything other character The (simplest) regular expression skipping the first requirement would be #.* . I tried the --ignore-matching-lines=RE ( -I RE ) option of GNU diff 3.0, but I couldn't get it working with that RE. I also tried .*#.* and .*\#.* without luck. Literally putting the line ( Port 631 ) as RE does not match anything, neither does it help to put the RE between slashes. As suggested in “diff” tool's flavor of regex seems lacking? , I tried grep -G : grep -G '#.*' file This seems to match the comments, but it does not work for diff -I '#.*' file1 file2 . So, how should this option be used? How can I make diff skip certain lines (in my case, comments)? Please do not suggest grep ing the file and comparing the temporary files. | According to Gilles, the -I option only ignores a line if nothing else inside that set matches except for the match of -I . I didn't fully get it until I tested it. The Test Three files are involved in my test: File test1 : text File test2 : text
#comment File test3 : changed text
#comment The commands: $ # comparing files with comment-only changes
$ diff -u -I '#.*' test{1,2}
$ # comparing files with both comment and regular changes
$ diff -u -I '#.*' test{2,3}
--- test2 2011-07-20 16:38:59.717701430 +0200
+++ test3 2011-07-20 16:39:10.187701435 +0200
@@ -1,2 +1,2 @@
-text
+changed text
#comment The alternative way Since there is no answer so far explaining how to use the -I option correctly, I'll provide an alternative which works in bash shells: diff -u -B <(grep -vE '^\s*(#|$)' test1) <(grep -vE '^\s*(#|$)' test2) diff -u - unified diff -B - ignore blank lines <(command) - a bash feature called process substitution which opens a file descriptor for the command, this removes the need for a temporary file grep - command for printing lines (not) matching a pattern -v - show non-matching lines -E - use extended regular expressions '^\s*(#|$)' - a regular expression matching comments and empty lines ^ - match the beginning of a line \s* - match whitespace (tabs and spaces) if any (#|$) - match a hash mark, or alternatively, the end of a line | {
"source": [
"https://unix.stackexchange.com/questions/17040",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8250/"
]
} |
17,064 | echo -e 'one two three\nfour five six\nseven eight nine'
one two three
four five six
seven eight nine how can I do some "MAGIC" to get this output?: three
six
nine UPDATE:
I don't need it in this specific way, I need a general solution so that no matter how many columns are in a row, e.g.: awk always displays the last column. | Try: echo -e 'one two three\nfour five six\nseven eight nine' | awk '{print $NF}' | {
"source": [
"https://unix.stackexchange.com/questions/17064",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6960/"
]
} |
17,087 | I like to clone a whole partition or a whole hard drive onto a larger external disk but like to create a sparse file. I often use dd for cloning, but it doesn't support sparse files. As a workaround I used something like: cp --sparse=always <(dd if=/dev/sda1 bs=8M) /mount/external/backup/sda1.raw However this is a little too tricky for my taste and doesn't allow me to resume the process if aborted. It is funny that there is a NTFS tool for this ( ntfsclone ) but no such tool exists for the native file systems of Linux (EXT2-4). Is there some better tool for this, e.g. a dd variant with sparse support?
I do not look for some proprietary software for disk backups but simply want to make a sparse clone copy which I can mount as loop device if required. | You want dd_rescue . dd_rescue -a -b 8M /dev/sda1 /mount/external/backup/sda1.raw The copy may be interrupted at any time by Ctrl-C , showing the current position. This value can be used, when restarting by adding -s and the position to the original command, e.g. dd_rescue -a -b 8M -s 42000k /dev/sda1 /mount/external/backup/sda1.raw Even easier would be to specify a third file name, which acts as a log file. On restart dd_rescue will read that log file and pick up where it left off. | {
"source": [
"https://unix.stackexchange.com/questions/17087",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4245/"
]
} |
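Once the image exists, mounting it read-only as a loop device (one of the stated goals) is straightforward — a sketch, reusing the path from the question:
$ sudo mkdir -p /mnt/sda1-image
$ sudo mount -o loop,ro /mount/external/backup/sda1.raw /mnt/sda1-image
Comparing du -h sda1.raw with ls -lh sda1.raw will show how much space the sparse copy actually occupies versus its apparent size.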
17,099 | Is it true that all commands that can run in bash are not actually part of bash? I'm gradually realizing I have been confusing the shell and the applications that can run in it. | There isn't a sharp border between the shell and the applications. Some of the commands that you run in a shell have to be built into the shell, because they act on the shell process. For example, cd changes the shell's working directory, and this can't be done from the outside, so cd has to be a built-in command. But this is an implementation detail. A casual user doesn't need to know that there are technical reasons that compel cd to be a built-in command. At another extreme, there are large applications that are made by different authors, such as Firefox or Emacs. These aren't going to be built into any shell because they're too big. But this isn't a fundamental impossibility, just a matter of design. For examples, there are shells¹ that have builtins to make simple GUI applications. In the middle, there are commands that could go either way. For example, the echo command doesn't need to be built into the shell, but almost every shell has it built in because it's very small, and is used often so should be efficient. Another example is kill , which for casual usage could be an external command, but having a built-in has several advantages: you can invoke it even if you've reached a limit on the number of processes, and you can give it a shell job number ( kill %2 ) in lieu of a process ID. Even the [ … ] construct (which can also be written test ) could, in principle, be an external command, but is built into shells for the same reason as echo . If you're curious, you can check the status of a given command with the type command. For example, in my setup: % type while type setenv cp emacs
while is a reserved word
type is a shell builtin
setenv is a shell function
cp is an alias for cp -i
emacs is /usr/bin/emacs Reserved words such as while and then are part of the shell syntax. Builtins are commands that are built into the shell and don't require an external executable. Functions are compound commands that are defined and named by the user, and can be called by their name. Aliases are user-defined short names for longer commands (behaving differently from functions). The last example is an external command. ¹ dtksh . There is no free implementation. | {
"source": [
"https://unix.stackexchange.com/questions/17099",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
17,107 | I was wondering how to understand the following: Piping the stdout of a command into the stdin of another is a powerful
technique. But, what if you need to pipe the stdout of multiple
commands? This is where process substitution comes in. In other words, can process substitution do whatever pipe can do? What can process substitution do, but pipe cannot? | A good way to grok the difference between them is to do a little experimenting on the command line. In spite of the visual similarity in use of the < character, it does something very different than a redirect or pipe. Let's use the date command for testing. $ date | cat
Thu Jul 21 12:39:18 EEST 2011 This is a pointless example but it shows that cat accepted the output of date on STDIN and spit it back out. The same results can be achieved by process substitution: $ cat <(date)
Thu Jul 21 12:40:53 EEST 2011 However what just happened behind the scenes was different. Instead of being given a STDIN stream, cat was actually passed the name of a file that it needed to go open and read. You can see this step by using echo instead of cat . $ echo <(date)
/proc/self/fd/11 When cat received the file name, it read the file's content for us. On the other hand, echo just showed us the file's name that it was passed. This difference becomes more obvious if you add more substitutions: $ cat <(date) <(date) <(date)
Thu Jul 21 12:44:45 EEST 2011
Thu Jul 21 12:44:45 EEST 2011
Thu Jul 21 12:44:45 EEST 2011
$ echo <(date) <(date) <(date)
/proc/self/fd/11 /proc/self/fd/12 /proc/self/fd/13 It is possible to combine process substitution (which generates a file) and input redirection (which connects a file to STDIN): $ cat < <(date)
Thu Jul 21 12:46:22 EEST 2011 It looks pretty much the same but this time cat was passed STDIN stream instead of a file name. You can see this by trying it with echo: $ echo < <(date)
<blank> Since echo doesn't read STDIN and no argument was passed, we get nothing. Pipes and input redirects shove content onto the STDIN stream. Process substitution runs the commands, saves their output to a special temporary file and then passes that file name in place of the command. Whatever command you are using treats it as a file name. Note that the file created is not a regular file but a named pipe that gets removed automatically once it is no longer needed. | {
"source": [
"https://unix.stackexchange.com/questions/17107",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
17,122 | I'm just curious if it's possible to install the Linux kernel alone, or if you need to use one of the flavours. If it were possible, how would you do it? I don't need a detailed tutorial. I just want to know how it would be done conceptually. I'm not good with low-level stuff, and want to know how you get an OS into the computer. I imagine it has something to do with the MBR. Oh and I noticed that a lot of the answers suggest a certain distribution of some minimal Linux. I should have probably stated that I am not looking to install a minimal or bare bones Linux. This question is purely theoretical. Still, I really appreciate all the answers, and will refer to them immediately, if ever I would want to install a truly personalized Linux. | You can technically install just a bootloader and the kernel alone, but as soon as the kernel boots, it will complain about not being able to start "init", then it will just sit there and you can't do anything with it. BTW, it is a part of the bootloader that is in the MBR. The kernel sits somewhere on the regular area of a disk. The bootloader is configured to know where that is, so it can load the kernel and execute it. | {
"source": [
"https://unix.stackexchange.com/questions/17122",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9228/"
]
} |
17,170 | How can one programmatically "freeze" the Keyboard & Mouse temporarily, so that no one could mess with the system? There are several possibilities where this is useful. For instance, I have a laptop and I want to make sure no one uses it while I leave, even if somebody knows the password or can guess it (like wife or children), as well as depressing thieves' appetite (as it seems dis-functioning). or I'm doing something remotely so I want to make sure the user at the computer doesn't disturb. | Assuming your GUI is X-based (as almost all UNIX GUIs are), use xinput . First, list your devices: $ xinput --list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ Windows mouse id=6 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Windows keyboard id=7 [slave keyboard (3)] List the details for your mouse (id=6 in our example): $ xinput --list-props 6
Device 'Windows mouse':
Device Enabled (112): 1
Coordinate Transformation Matrix (114): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
Device Accel Profile (222): 0
Device Accel Constant Deceleration (223): 1.000000
Device Accel Adaptive Deceleration (224): 1.000000
Device Accel Velocity Scaling (225): 10.000000 Now disable it: $ export DISPLAY=:0
$ xinput set-int-prop 6 "Device Enabled" 8 0 To enable it do: $ xinput set-int-prop 6 "Device Enabled" 8 1 The same goes for the keyboard, just replace the int-prop number with the proper id. Tested and worked on cygwin. Of course, you have to plan beforehand how you will enable your devices again, such as scheduling it on cron, re-enabling it remotely, or disabling just one of them in the first place. | {
"source": [
"https://unix.stackexchange.com/questions/17170",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8997/"
]
} |
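One way to plan beforehand, as the answer suggests, is to schedule the re-enable before disabling anything — a sketch, assuming device id 6 and display :0 as in the example above:
$ export DISPLAY=:0
$ echo 'DISPLAY=:0 xinput set-int-prop 6 "Device Enabled" 8 1' | at now + 10 minutes
$ xinput set-int-prop 6 "Device Enabled" 8 0
That way the device comes back on its own even if you cannot type anything in the meantime.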
17,189 | I understand this is somewhat less Ubuntu related, but it affects it. So, what is so new about it that Linus decided to name it 3.0? I'm not trying to get information about the drivers that got into it or stuff that always gets improved. I want to know what really made it 3.0. I read somewhere that Linus wanted to get rid of the code that supports legacy hardware. Hm, not sure what that really meant because 3.0 is bigger (in MB), not smaller, than, say, 2.6.38. What was the cause of naming it 3.0? | Nothing new at all. Citations below are from https://lkml.org/lkml/2011/5/29/204 I decided to just bite the bullet, and call the next version 3.0. It
will get released close enough to the 20-year mark, which is excuse
enough for me, although honestly, the real reason is just that I can
no longer comfortably count as high as 40. I especially like: The whole renumbering was discussed at last years Kernel Summit, and
there was a plan to take it up this year too. But let's face it -
what's the point of being in charge if you can't pick the bike shed
color without holding a referendum on it? So I'm just going all
alpha-male, and just renumbering it. You'll like it. And finally: So what are the big changes? NOTHING. Absolutely nothing. Sure, we have the usual two thirds driver
changes, and a lot of random fixes, but the point is that 3.0 is just about renumbering, we are very much not doing a KDE-4 or a
Gnome-3 here. No breakage, no special scary new features, nothing at
all like that. We've been doing time-based releases for many years
now, this is in no way about features. If you want an excuse for the
renumbering, you really should look at the time-based one ("20 years")
instead. | {
"source": [
"https://unix.stackexchange.com/questions/17189",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9257/"
]
} |
17,255 | When SSH'd locally into my computer (don't ask, it's a workaround), I can't start graphical applications without running: export DISPLAY=:0.0 If I run this first and then run a graphical application, things work out. If not, it doesn't work, there's no display to attach to. Is there a command for listing all available displays (ie: all possible values) on a machine? | If you want the X connection forwarded over SSH, you need to enable it on both the server side and the client side. (Depending on the distribution, it may be enabled or disabled by default.) On the server side, make sure that you have X11Forwarding yes in /etc/sshd_config (or /etc/ssh/sshd_config or wherever the configuration file is). On the client side, pass the -X option to the ssh command , or put ForwardX11 in your ~/.ssh/config . If you run ssh -X localhost , you should see that $DISPLAY is (probably) localhost:10.0 . Contrast with :0.0 , which is the value when you're not connected over SSH. (The .0 part may be omitted; it's a screen number, but multiple screens are rarely used.) There are two forms of X displays that you're likely to ever encounter: Local displays, with nothing before the : . TCP displays, with a hostname before the : . With ssh -X localhost , you can access the X server through both displays, but the applications will use a different method: :NUMBER accesses the server via local sockets and shared memory, whereas HOSTNAME:NUMBER accesses the server over TCP, which is slower and disables some extensions. Note that you need a form of authorization to access an X server, called a cookie and normally stored behind the scenes in the file ~/.Xauthority . If you're using ssh to access a different user account, or if your distribution puts the cookies in a different file, you may find that DISPLAY=:0 doesn't work within the SSH session (but ssh -X will, if it's enabled in the server; you never need to mess with XAUTHORITY when doing ssh -X ). If that's a problem, you need to set the XAUTHORITY environment variable or obtain the other user's cookies . To answer your actual question: Local displays correspond to a socket in /tmp/.X11-unix . (cd /tmp/.X11-unix && for x in X*; do echo ":${x#X}"; done) Remote displays correspond to open TCP ports above 6000; accessing display number N on machine M is done by connecting to TCP port 6000+N on machine M. From machine M itself: netstat -lnt | awk '
sub(/.*:/,"",$4) && $4 >= 6000 && $4 < 6100 {
print ($1 == "tcp6" ? "ip6-localhost:" : "localhost:") ($4 - 6000)
}' (The rest of this bullet point is of academic interest only.) From another machine, you can use nmap -p 6000-6099 host_name to probe open TCP ports in the usual range. It's rare nowadays to have X servers listening on a TCP socket, especially outside the loopback interface. Strictly speaking, another application could be using a port in the range usually used by X servers. You can tell whether an X server is listening by checking which program has the port open. lsof -i -n | awk '$9 ~ /:60[0-9][0-9]$/ {print}' If that shows something ambiguous like sshd , there's no way to know for sure whether it's an X server or a coincidence. | {
"source": [
"https://unix.stackexchange.com/questions/17255",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5614/"
]
} |
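For the client-side setting mentioned in the answer above, the corresponding ~/.ssh/config entry looks like this (the host name is a placeholder):
Host myserver
    ForwardX11 yes
    # ForwardX11Trusted yes   # optional; lifts some restrictions of untrusted forwarding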
17,319 | hd and od are both dump viewers of binary content. Can hd be used wherever od is and vice versa? | hd is a synonym for hexdump -C on FreeBSD and on some Linux distributions. hexdump is from the BSD days ; od is from the dawn of time . Only od is standardized by POSIX . The Single UNIX rationale discusses why od was chosen in preference to hd or xd . These commands do very similar things: display a textual representation of a binary file, using octal, decimal or hexadecimal notation. There's no fundamental difference between the two. They have many options to control the output format, and some formats can only be achieved with one or the other command. In particular, to see a glance of what's in a binary file, I like hd 's output format, with a column on the right showing printable characters literally; od can't do that. $ od /bin/sh | head -n 2 # od default: octal, 2-byte words
0000000 042577 043114 000402 000001 000000 000000 000000 000000
0000020 000002 000076 000001 000000 170020 000101 000000 000000
$ od -Ax -t x1 /bin/sh | head -n 2 # od showing bytes in hexadecimal
000000 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
000010 02 00 3e 00 01 00 00 00 10 f0 41 00 00 00 00 00
$ hd /bin/sh | head -n 2 # hd default output: nice
00000000 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 |.ELF............|
00000010 02 00 3e 00 01 00 00 00 10 f0 41 00 00 00 00 00 |..>.......A.....| | {
"source": [
"https://unix.stackexchange.com/questions/17319",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
17,327 | Examples of commands I'm referring to are ls , pwd , and cd . Also, how are these built? Do you have an example? | It's usually plain C . The commands ls and pwd come from the GNU Coreutils package in (most?) Linux distributions (and maybe some other systems). You can find the code on their homepage . For coreutils specifically, you build them with the usual steps: after unpacking the source, issue: ./configure --prefix=/some/path
# type ./configure --help to get the available options
make
make install # could require root access depending on the path you used Be careful - installing base utilities like those over your distribution's copy of them is a bad idea . Use whatever package manager your system comes with for that. You can install to a different prefix though (installing somewhere into your home directory is a good idea if you want to experiment). Note that although there is a cd executable , the cd you'll be using in most circumstances isn't a separate executable. It has to be a shell built-in (otherwise it could not change the shell's current directory - this has to be done by the process itself), so it is written in the same language as the shell (which is often C too). Other examples: OpenSolaris pwd source. FreeBSD ls You can find many more of these online. | {
"source": [
"https://unix.stackexchange.com/questions/17327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8932/"
]
} |
17,329 | I want to collect statistics about who is using a computer and for how much time. I can use the users command to see who is logged in but I want to know who is on the currently active VT. I don't care about people who's logged in through SSH or who leaves a download running in a locked session. I only care about who actually sits in front of the machine. I need information on X and console sessions. Getting the time of the last activity (moving the mouse, etc) would be also useful but I can live without it. How can I do this? | It's usually plain C . The commands ls and pwd come from the GNU Coreutils package in (most?) Linux distributions (and maybe some other systems). You can find the code on their homepage . For coreutils specifically, you build them with the usual steps: after unpacking the source, issue: ./configure --prefix=/some/path
# type ./configure --help to get the available options
make
make install # could require root access depending on the path you used Be careful - installing base utilities like those over your distribution's copy of them is a bad idea . Use whatever package manager your system comes with for that. You can install to a different prefix though (installing somewhere into your home directory is a good idea if you want to experiment). Note that although there is a cd executable , the cd you'll be using in most circumstances isn't a separate executable. It has to be a shell built-in (otherwise it could not change the shell's current directory - this has to be done by the process itself), so it is written in the same language as the shell (which is often C too). Other examples: OpenSolaris pwd source. FreeBSD ls You can find many more of these online. | {
"source": [
"https://unix.stackexchange.com/questions/17329",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4477/"
]
} |
17,402 | I use a FUSE filesystem with no problems as my own user, but root can't access my FUSE mounts. Instead, any command gives Permission denied . How can I give root the permission to read these mounts? ~/top$ sudo ls -l
total 12
drwxr-xr-x 2 yonran yonran 4096 2011-07-25 18:50 bar
drwxr-xr-x 2 yonran yonran 4096 2011-07-25 18:50 foo
drwxr-xr-x 2 yonran yonran 4096 2011-07-25 18:50 normal-directory
~/top$ fuse-zip foo.zip foo
~/top$ unionfs-fuse ~/Pictures bar My user, yonran , can read it fine: ~/top$ ls -l
total 8
drwxr-xr-x 1 yonran yonran 4096 2011-07-25 18:12 bar
drwxr-xr-x 2 yonran yonran 0 2011-07-25 18:51 foo
drwxr-xr-x 2 yonran yonran 4096 2011-07-25 18:50 normal-directory
~/top$ ls bar/
Photos But root can't read either FUSE directory: ~/top$ sudo ls -l
ls: cannot access foo: Permission denied
ls: cannot access bar: Permission denied
total 4
d????????? ? ? ? ? ? bar
d????????? ? ? ? ? ? foo
drwxr-xr-x 2 yonran yonran 4096 2011-07-25 18:50 normal-directory
~/top$ sudo ls bar/
ls: cannot access bar/: Permission denied I'm running Ubuntu 10.04: I always install any update from Canonical. $ uname -a
Linux mochi 2.6.32-33-generic #70-Ubuntu SMP Thu Jul 7 21:13:52 UTC 2011 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 10.04.3 LTS
Release: 10.04
Codename: lucid Edit : removed the implication that root used to be able to access the mounts. Come to think of it, maybe my scripts never tried to access the directory as root. | It's the way fuse works.
If you want to allow access to root or other users, you have to add: user_allow_other in /etc/fuse.conf and mount your fuse filesystem with allow_other or allow_root as options. | {
"source": [
"https://unix.stackexchange.com/questions/17402",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9342/"
]
} |
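Putting the answer above together for the original fuse-zip/unionfs-fuse example, one possible sequence is the following sketch (it assumes both tools pass generic -o options through to FUSE, as most FUSE filesystems do):
# /etc/fuse.conf (editing it requires root)
user_allow_other
# unmount and remount with the extra option
fusermount -u foo; fusermount -u bar
fuse-zip -o allow_root foo.zip foo
unionfs-fuse -o allow_other ~/Pictures bar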
17,406 | I want to convert .txt files to .pdf . I'm using this: ls | while read ONELINE; do convert -density 400 "$ONELINE" "$(echo "$ONELINE" | sed 's/.txt/.pdf/g')"; done But this produces one "error" -- if there's a very long line in the text file, it doesn't get wrapped. Input text Output PDF -- Also, it would also be great if the output PDF could contain text, instead of images of text. I have many-many-many TXT files. So don't want to do it by hand. I need an automatic solution, like the one I mentioned above. | One method is to use CUPS and the PDF psuedo-printer to "print" the text to a PDF file. Another is to use enscript to encode to postscript and then convert from postscript to PDF using the ps2pdf file from ghostscript package. | {
"source": [
"https://unix.stackexchange.com/questions/17406",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6960/"
]
} |
17,428 | I accidentally moved all folders from root to a subfolder. ( /bin , /etc , /home , /lib , /usr ... all moved) The only ones that were not moved, since they were in use, are /bak , /boot , /dev , /proc , /sys . Now, any command that I try to execute will simply not happen. I constantly get "No such file or directory". I am connected through ssh and through ftp, but I cannot move files through ftp, as direct SU login is disabled. I also have access to the actual server if I need to do something directly from there. I'm assuming I would need to edit a configuration file in order to tell it where to find the /bin folder and that would help me get access again, but I don't know which file that would be or how to do it (since I can't even run chmod to change permissions). Is there any way out of this other than re-installing? I am working on an old version of CentOS. I'm extremely new to the world of Linux, hence this action and the question... | If you still have a root shell, you may have a chance to repair your system. Let's say that you moved all the common directories ( /bin , /etc , /lib , /sbin , /usr — these are the ones that could make recovery difficult) under /oops . You won't be able to issue the mv command directly, even if you specify the full path /oops/bin/mv . That's because mv is dynamically linked ; because you've moved the /lib directory, mv can't run because it can't find the libraries that constitute part of its code. In fact, it's even worse than that: mv can't find the dynamic loader /lib/ld-linux.so.2 (the name may vary depending on your architecture and unix variant, and the directory could be a different name such as /lib32 or /lib64 ). Therefore, until you've moved the /lib directory back, you need to invoke the linker explicitly, and you need to specify the path to the moved libraries. Here's the command tested on Debian squeeze i386. export LD_LIBRARY_PATH=/oops/lib:/oops/lib/i386-linux-gnu
/oops/lib/ld-linux.so.2 /oops/bin/mv /oops/* / You may need to adjust this a little for other distributions or architectures. For example, for CentOS on x86_64: export LD_LIBRARY_PATH=/oops/lib:/oops/lib64
/oops/lib64/ld-linux-x86-64.so.2 /oops/bin/mv /oops/* / When you've screwed up something /lib , it helps to have a statically linked toolbox lying around. Some distributions (I don't know about CentOS) provide a statically-linked copy of Busybox . There's also sash , a standalone shell with many commands built-in. If you have one of these, you can do your recovery from there. If you haven't installed them before the fact, it's too late. # mkdir /oops
# mv /lib /bin /oops
# sash
Stand-alone shell (version 3.7)
> -mv /oops/* /
> exit If you don't have a root shell anymore, but you still have an SSH daemon listening and you can log in directly as root over ssh, and you have one of these statically-linked toolboxes, you might be able to ssh in. This can work if you've moved /lib and /bin , but not /etc . ssh [email protected] /oops/bin/sash
[email protected]'s password:
Stand-alone shell (version 3.7)
> -mv /oops/* / Some administrators set up an alternate account with a statically-linked shell, or make the root account use a statically-linked shell, just for this kind of trouble. If you don't have a root shell and haven't taken precautions, you'll need to boot from a Linux live CD/USB (any will do as long as it's recent enough to be able to access your disks and filesystems) and move the files back. | {
"source": [
"https://unix.stackexchange.com/questions/17428",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9362/"
]
} |
17,466 | I am writing a shell script where I have to delete a file on a remote machine via a shell script. Manual workflow: Log on to remote machine: ssh [email protected] At the remote machine ( domain ), type the following commands cd ./some/where
rm some_file.war How should I accomplish that task in a script? | You can pass the SSH client a command to execute in place of starting a shell by appending it to the SSH command. ssh [email protected] 'rm /some/where/some_file.war' You don't have to cd to a location to remove something as long as you specify the full path, so that's another step you can skip. The next question is authentication. If you just run that, you will get prompted for a password. If you don't want to enter this interactively you should set up public key authentication. | {
"source": [
"https://unix.stackexchange.com/questions/17466",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8999/"
]
} |
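A sketch of the key-based setup mentioned above, so the script can run unattended (user and host are placeholders):
ssh-keygen -t rsa              # accept the defaults; an empty passphrase suits unattended jobs
ssh-copy-id user@remotehost    # installs the public key in the remote ~/.ssh/authorized_keys
ssh user@remotehost 'rm /some/where/some_file.war'   # now runs without a password prompt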
17,499 | I have a utility consisting of a couple of directories with some bash scripts and supporting files that will be deployed to several machines, possibly in a different directory on each machine. The scripts need to be able to reference paths relative to themselves, so I need to be able to get the path to the file that's currently being executed. I am aware of the dirname $0 idiom which works perfectly when my script is called directly. Unfortunately there is a desire to be able to create symlinks to these scripts from a totally different directory and still have the relative pathing logic work. An example of the overall directory structure is as follows: /
|-usr/local/lib
| |-foo
| | |-bin
| | | |-script.sh
| | |-res
| | | |-resource_file.txt
|-home/mike/bin
| |-link_to_script (symlink to /usr/local/lib/foo/bin/script.sh) How can I reliably reference /usr/local/lib/foo/res/resource_file.txt from script.sh whether it is invoked by /usr/local/lib/foo/bin/script.sh or ~mike/bin/link_to_script ? | Try this as a general purpose solution: DIR="$(cd "$(dirname "$0")" && pwd)" In the specific case of following symlinks, you could also do this: DIR="$(dirname "$(readlink -f "$0")")" | {
"source": [
"https://unix.stackexchange.com/questions/17499",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5981/"
]
} |
17,504 | Maybe there are some compatibility issues? I have the impression that for Intel-based systems, the Intel compiler would potentially do a better job than GCC. Perhaps there's already a distro that has attempted this? I would think this might be quite straightforward using Gentoo. | You won't be able to compile everything with icc. Many programs out there use GCC extensions to the C language. However Intel have made a lot of effort to support most of these extensions; for example, recent versions of icc can compile the Linux kernel. Gentoo is indeed your best bet if you like recompiling your software in an unusual way. The icc page on the Gentoo wiki describes the main hurdles. First make a basic Gentoo installation, and emerge icc . Don't remove icc later as long as you have any binary compiled with icc on your system. Note that icc is installed in /opt ; if that isn't on your root partition, you'll need to copy the icc libraries to your root partition if any of the programs used at boot time are compiled with icc. Set up /etc/portage/bashrc and declare your favorite compilation options; see the Gentoo wiki for a more thorough script which supports building different packages with different compilers (this is necessary because icc breaks some packages). export OCC="icc" CFLAGS="-O2 -gcc"
export OCXX="icpc" CXXFLAGS="$CFLAGS"
export CC_FOR_BUILD="${OCC}" | {
"source": [
"https://unix.stackexchange.com/questions/17504",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8189/"
]
} |
17,574 | Can I increase the size of the command history in bash? Note that I use a Red Hat Linux computer in the undergraduate astrophysics department here (so I don't get that many privileges). | Instead of specifying numbers, you can do unset HISTSIZE
unset HISTFILESIZE
shopt -s histappend in which case only your disk size (and your "largest file limit", if your OS or FS has one) is the limit. However, be aware that this will eventually slow down bash more and more. see this BashFAQ document and the debian-administration article (original link died, look in a mirror: archive.is and archive.org ) for techniques which scale better. | {
"source": [
"https://unix.stackexchange.com/questions/17574",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9431/"
]
} |
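If unlimited history is more than you want, a large but bounded history is a common middle ground; a sketch for ~/.bashrc:
HISTSIZE=100000        # lines kept in memory for the current session
HISTFILESIZE=200000    # lines kept in ~/.bash_history
shopt -s histappend    # append on exit instead of overwriting the file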
17,663 | I just learned that PDF files can be compressed to reduce their disk size. I was wondering how to know if a PDF file has already been compressed? What applications/commands can be used to compress or uncompress a PDF file? My environment is Linux Ubuntu 10.10. Some attempts don't give satisfactory results: Here are the results of trying pdftk : $ pdftk 3.pdf output 5.pdf uncompress
$ pdftk 3.pdf output 3comp.pdf compress
$ ls -l 3.pdf 3comp.pdf 5.pdf
-rwxrwx--- 1 root plugdev 8652269 2011-07-30 12:27 3comp.pdf
-rwxrwx--- 1 root plugdev 8652319 2011-07-29 22:15 3.pdf
-rwxrwx--- 1 root plugdev 16829828 2011-07-30 12:27 5.pdf Properties of the files show that all of them are not optimized. Results of converting to ps and then back to pdf: $ pdf2ps 3.pdf 3.ps
$ ps2pdf 3.ps 3c.pdf
$ ls -l 3.pdf 3.ps 3c.pdf
-rwxrwx--- 1 root plugdev 8808946 2011-07-30 13:14 3c.pdf
-rwxrwx--- 1 root plugdev 8652319 2011-07-29 22:15 3.pdf
-rwxrwx--- 1 root plugdev 122375966 2011-07-30 13:14 3.ps | in short: To know if it's compressed already: strings your.pdf | grep /Filter To (un)compress a PDF, use QPDF qpdf --stream-data=compress your.pdf compressed.pdf
qpdf --stream-data=uncompress compressed.pdf uncompressed.pdf explanation: The "Filter" keyword inside a pdf file is a indicator of the compression method used. Some of them are: CCITT G3/G4 – used for monochrome images JPEG – a lossy algorithm that is used for images JPEG2000 – a more modern alternative to JPEG, which is also used for compressing images Flate – used for compressing text as well as images JBIG2 – an alternative to CCITT compression for monochrome images LZW – used for compressing text as well as images but getting replaced by Flate RLE – used for monochrome images ZIP – used for grayscale or color images (copied from here ). However, given the PDF complex file structure, most of the time some part (or "stream") of the PDF will be compressed already in some way (and will show up when grepping /Filter) while some other part will not be, so there is no YES / NO answer to the question whether the PDF is compressed. one way to overcome this would be to add the -c option to grep, which returns the number of occurrences, so you could see relatively how well it is compressed. for example, if strings "large .pdf" | grep -c /Filter returns less then 10 it's pretty non-compressed. Another property relating to size in PDFs, is whether they have been optimized for quick access, with "optimized" PDFs being bigger in size, to quote from wikipedia : There are two layouts to the PDF files—non-linear (not "optimized") and linear ("optimized"). Non-linear PDF files consume less disk space than their linear counterparts, though they are slower to access because portions of the data required to assemble pages of the document are scattered throughout the PDF file. Linear PDF files (also called "optimized" or "web optimized" PDF files) are constructed in a manner that enables them to be read in a Web browser plugin without waiting for the entire file to download, since they are written to disk in a linear (as in page order) fashion. PDF files may be optimized using Adobe Acrobat software or QPDF. You can check whether the PDF is optimized using pdfinfo your.pdf . | {
"source": [
"https://unix.stackexchange.com/questions/17663",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
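Combining the two ideas above into a small sketch that only recompresses files which look uncompressed (the threshold of 10 is arbitrary, as noted in the answer):
n=$(strings input.pdf | grep -c /Filter)
if [ "$n" -lt 10 ]; then
    qpdf --stream-data=compress input.pdf input-compressed.pdf
fi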
17,664 | A long time ago I remember using a command that makes its input into a nicely formatted table. For example, for this input, apple 1 100
orange 20 19
pineapple 1000 87
avocado 4 30 The output will be similar to this: apple      1     100
orange     20    19
pineapple  1000  87
avocado    4     30 I'd like to know the name of this tool. | Use column -t . column is part of util-linux . $ column -t <<END
> apple 1 100
> orange 20 19
> pineapple 1000 87
> avocado 4 30
> END
apple      1     100
orange     20    19
pineapple  1000  87
avocado    4     30 | {
"source": [
"https://unix.stackexchange.com/questions/17664",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/210/"
]
} |
17,715 | It looks like when adding a directory into $PATH, its subdirectories are not added recursively. So can I do that? Or is there a reason why this is not supported? | Add them recursively using find like so: PATH=$PATH$( find $HOME/scripts/ -type d -printf ":%p" ) WARNING: As mentioned in the comments to the question this isn't encouraged as it poses a security risk because there is no guarantee that executable files in the directories added aren't malicious. It's probably a better solution to follow Gilles' answer and use stow | {
"source": [
"https://unix.stackexchange.com/questions/17715",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
17,717 | I have a bash script file, which is put under some directory added to $PATH so that I can call the script from any directory. There is another text file under the same directory as the script. I wonder how to refer to the text file in the script? For example, if the script is just to output the content of the text file, cat textfile won't work, since when calling the script from a different directory, the text file is not found. | These should work the same, as long as there are no symlinks (in the path expansion or the script itself): MYDIR="$(dirname "$(realpath "$0")")" MYDIR="$(dirname "$(which "$0")")" A two step version of any of the above: MYSELF="$(realpath "$0")" MYDIR="${MYSELF%/*}" If there is a symlink on the way to your script, then which will provide an answer not including resolution of that link. If realpath is not installed by default on your system, you can find it here . [EDIT]: As it seems that realpath has no advantage over readlink -f suggested by Caleb , it is probably better to use the latter. My timing tests indicate it is actually faster. | {
"source": [
"https://unix.stackexchange.com/questions/17717",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
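Applied to the original question, a sketch of the script itself would be:
#!/bin/bash
MYDIR="$(dirname "$(readlink -f "$0")")"
cat "$MYDIR/textfile"    # found regardless of the directory the script is called from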
17,747 | As per the following example, and as in my recent question In bash, where has the trailing newline char gone? , I want to know "why" it happens x="$(echo -ne "a\nb\n")" ; echo -n "$x" | xxd -p
# Output is: 610a62
# The trailing newline from the 'echo' command
# has been "deleted" by Command Substitution I assume there must be some very significant reason for a shell action, namely Command Substitution, to actually delete some data from the command output it is substituting... but I can't get my head around this one, as it seems to be the antithesis of what it is supposed to do.. ie. to pass the output of a command back into the script process... Holding back one character seems weird to me, but I suppose there is a sensible reason for it... I'm keen to find out what that reason is... | Because the shell was not originally intended to be a full programming language. It is quite difficult to remove a trailing \n from some command output.
However, for display purposes, almost all commands end their output with \n , so… there has to be a simple way to remove it when you want to use it in another command. Automatic removal with the $() construction was the chosen solution. So, maybe you'll accept this question as an answer: Can you find a simple way to remove the trailing \n if this was not done automatically in the following command? > echo The current date is "$(date)", have a good day! Note that quoting is required to prevent smashing of double spaces that may appear in formatted dates. | {
"source": [
"https://unix.stackexchange.com/questions/17747",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2343/"
]
} |
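If you ever do need the trailing newlines, the usual workaround (a sketch) is to append a sentinel character and strip it afterwards:
x="$(echo -ne "a\nb\n"; echo -n x)"   # nothing is stripped, because the output now ends in "x"
x="${x%x}"                            # remove the sentinel, keeping both newlines
echo -n "$x" | xxd -p                 # 610a620a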
17,815 | I was wondering if the following two ways of running a bash script
are equivalent? . ./myScript.sh
source myScript.sh Are they both running the content of the script instead of running the script, i.e. not creating a subshell for running the script? | They are equivalent in bash in that they do exactly the same thing. On the other hand, source is 5 characters longer and is not portable to POSIX-only shells or Bourne whereas . (dot) is, so I never bother using source . That is correct - sourcing a file runs the commands in the current shell and it will affect your current shell environment. You can still pass arguments to the sourced file and bash will actually look in $PATH for the file name just like a normal command if it doesn't contain any slashes. Not related to the original question of . vs source , but in your example, . ./myScript.sh is not identical to source myScript.sh because while . and source are functionally identical, myScript.sh and ./myScript.sh are not the same. Since ./myScript.sh contains a slash, it's interpreted as a path and the shell just uses ./myScript.sh . However, myScript.sh does not have a slash so the shell does a $PATH search for it first. This is the POSIX specified standard behavior for . . Most shells default to this although they may add extensions (such as searching in the current working directory after the path search) or options to change the behavior of . / source . | {
"source": [
"https://unix.stackexchange.com/questions/17815",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
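A small demonstration of the "runs in the current shell" point made above:
echo 'MYVAR=hello' > myScript.sh
. ./myScript.sh        # or: source myScript.sh
echo "$MYVAR"          # prints hello
bash ./myScript.sh     # runs in a child process and would not have set MYVAR here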
17,833 | I have not been able to understand the SYNOPSIS section in the manpage of a command. For example, let's see the manpage of man itself. By man man : SYNOPSIS
man [-C file] [-d] [-D] [--warnings[=warnings]] [-R encoding] [-L
locale] [-m system[,...]] [-M path] [-S list] [-e extension] [-i|-I]
[--regex|--wildcard] [--names-only] [-a] [-u] [--no-subpages] [-P
pager] [-r prompt] [-7] [-E encoding] [--no-hyphenation] [--no-justifi‐
cation] [-p string] [-t] [-T[device]] [-H[browser]] [-X[dpi]] [-Z]
[[section] page ...] ...
man -k [apropos options] regexp ...
man -K [-w|-W] [-S list] [-i|-I] [--regex] [section] term ...
man -f [whatis options] page ...
man -l [-C file] [-d] [-D] [--warnings[=warnings]] [-R encoding] [-L
locale] [-P pager] [-r prompt] [-7] [-E encoding] [-p string] [-t]
[-T[device]] [-H[browser]] [-X[dpi]] [-Z] file ...
man -w|-W [-C file] [-d] [-D] page ...
man -c [-C file] [-d] [-D] page ...
man [-hV] Does the SYNOPSIS section describe the syntax for the command? what do those [...] and [...] inside [...] mean? Do they mean
something optional? Does | mean OR? What does , mean in [-m system[,...]] ? Does the SYNOPSIS section follow the rules used for Regular Expressions? | The synopsis section usually gives some example use-cases. Sometimes sub-commands have different options, so several examples might be shown. Brackets [] always denote optional switches, arguments, options, etc. Yes, the pipe | means or, particularly when inside brackets or parenthesis. Brackets in brackets just means that the second part is dependent on the first, and also itself optional. Some switches you can use on their own or add a value to them. Commas at the start of a bracket would indicate there can be multiple comma separated values. They lean on Regex concepts, but are meant to be human readable so don't follow all the escaping rules etc. | {
"source": [
"https://unix.stackexchange.com/questions/17833",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
17,892 | As man mkdir states -p, --parents
no error if existing, make parent directories as needed When I ran this command mkdir -p work/{F1,F2,F3}/{temp1,temp2} It creates a folder structure like this work parent folder then F1 , F2 , F3 child folders
and temp1 and temp2 child folders under the three parent folders F1 , F2 , F3 . work
-F1
-temp1
-temp2
-F2
-temp1
-temp2
-F3
-temp1
-temp2 Now the problem is that I want to create temp1 , temp2 folders only under F1 not under F2 and F3 , but I'm confused on how I can write a command to do what I want. | Maybe this is what you are looking for? mkdir -p work/{F1/{temp1,temp2},F2,F3} | {
"source": [
"https://unix.stackexchange.com/questions/17892",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3303/"
]
} |
17,903 | If you look at the output of cal 9 1752 you will see this strange output: September 1752
 S  M Tu  W Th  F  S
       1  2 14 15 16
17 18 19 20 21 22 23
24 25 26 27 28 29 30 The following story titled " cal 9 1752 explained" " was copied off of a newslist in the early 90's and archived under Collections of funny stuff for a computer audience by David G. Wiseman (a Network Manager in the Department of Computer Science at The University of Western Ontario). The guy that originally wrote the "cal" command on some old Version 7 machine had an off-by-one error in his code. This showed up as some erroneous output when a malloc'd variable overwrote 12 extra bytes with zeroes, thus leading to the strange calendar output seen above. Now, nobody in his right mind really cares about the calendar for September 1752. Even the idea of the year 1752 does not exist under UNIX, because time did not begin for UNIX until early 1970. As a result, nobody even knew that "cal" had this error until much later. By then there were thousands of copies of "cal" floating around, many of them binary-only. It was too late to fix them all. So in mid-1975, some high-level AT&T officials met with the Pope, and came to an agreement. The calendar was retroactively changed to bring September 1752 in line with UNIX reality. Since the calendar was changed by counting backwards from September 14, 1752, none of the dates after that were affected. The dates before that were all moved by 12 days. They also fixed the man page for "cal" to document the bug as a feature. The 11 days from September 3 to September 13 were simply gone from the records. They searched the history books and found that fortunately nothing of much significance happened during those 11 days. Overall, this whole incident was pretty much a non-event. One science fiction author later heard about it, and blew the thing up into a full-length work of science-fiction called "The Lathe of Heaven", a book that in my opinion bears little resemblance to what really happened. What is the real explanation for the output anomaly? | To trace the real story, try running man cal yourself: The Gregorian Reformation is assumed to have occurred in 1752 on the 3rd
of September. By this time, most countries had recognized the reforma-
tion (although a few did not recognize it until the early 1900’s.) Ten
days following that date were eliminated by the reformation, so the cal-
endar for that month is a bit unusual. Then, if your history is sketchy, continue with Wikipedia for information about the changes introduced by Gregorian Calendar and it's history of adoption in various parts of the world: The Gregorian calendar reform contained two parts, a reform of the Julian calendar as used up to Pope Gregory's time, together with a reform of the lunar cycle used by the Church along with the Julian calendar for calculating dates of Easter. [...] In addition to the change in the mean length of the calendar year from 365.25 days (365 days 6 hours) to 365.2425 days (365 days 5 hours 49 minutes 12 seconds), a reduction of 10 minutes 48 seconds per year, the Gregorian calendar reform also dealt with the past accumulated difference between these lengths. [...] Because of the Protestant Reformation, however, many Western European countries did not initially follow the Gregorian reform, and maintained their old-style systems. Eventually other countries followed the reform for the sake of consistency, but by the time the last adherents of the Julian calendar in Eastern Europe (Russia and Greece) changed to the Gregorian system in the 20th century, they had to drop 13 days from their calendars, due to the additional accumulated difference between the two calendars since 1582 . [...] Britain and the British Empire (including the eastern part of what is now the United States) adopted the Gregorian calendar in 1752, by which time it was necessary to correct by 11 days. Wednesday, 2 September 1752 was followed by Thursday, 14 September 1752. By the time Unix came along and reset the worlds clocks to start at January 1st, 1970 there was nothing to be done about the whole mess except pick a date to show the reset on. Since the world adopted the current Gregorian calendar system at varying times in different countries, the exact time to make this correction is somewhat arbitrary. If you ever have a reason to count dates going back that far in your software, you will run into much more significant issues than just that one reset! The history of calendaring is full of surprises! | {
"source": [
"https://unix.stackexchange.com/questions/17903",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6960/"
]
} |
17,907 | I want to install tmux on a machine where I don't have root access. I already compiled libevent and installed it in $HOME/.bin-libevent and now I want to compile tmux, but configure always ends with configure: error: "libevent not found" , even though I tried to point to the libevent directory in the Makefile.am by modifying LDFLAGS and CPPFLAGS , but nothing seems to work. How can I tell the system to look in my home dir for the libevent? | Try: DIR="$HOME/.bin-libevent"
./configure CFLAGS="-I$DIR/include" LDFLAGS="-L$DIR/lib" (I'm sure there must be a better way to configure library paths with autoconf. Usually there is a --with-libevent=dir option. But here, it seems there is no such option.) | {
"source": [
"https://unix.stackexchange.com/questions/17907",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9545/"
]
} |
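A slightly fuller sketch of the same idea, which also embeds the library path into the binary so LD_LIBRARY_PATH is not needed at run time (the install prefix is just an example):
DIR="$HOME/.bin-libevent"
./configure --prefix="$HOME/.local" CFLAGS="-I$DIR/include" \
            LDFLAGS="-L$DIR/lib -Wl,-rpath,$DIR/lib"
make && make install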
17,936 | As part of doing some cold cache timings, I'm trying to free the OS cache. The kernel documentation (retrieved January 2019) says: drop_caches
Writing to this will cause the kernel to drop clean caches, as well as
reclaimable slab objects like dentries and inodes. Once dropped, their
memory becomes free.
To free pagecache:
echo 1 > /proc/sys/vm/drop_caches
To free reclaimable slab objects (includes dentries and inodes):
echo 2 > /proc/sys/vm/drop_caches
To free slab objects and pagecache:
echo 3 > /proc/sys/vm/drop_caches
This is a non-destructive operation and will not free any dirty objects.
To increase the number of objects freed by this operation, the user may run
`sync' prior to writing to /proc/sys/vm/drop_caches. This will minimize the
number of dirty objects on the system and create more candidates to be
dropped.
This file is not a means to control the growth of the various kernel caches
(inodes, dentries, pagecache, etc...) These objects are automatically
reclaimed by the kernel when memory is needed elsewhere on the system.
Use of this file can cause performance problems. Since it discards cached
objects, it may cost a significant amount of I/O and CPU to recreate the
dropped objects, especially if they were under heavy use. Because of this,
use outside of a testing or debugging environment is not recommended.
You may see informational messages in your kernel log when this file is
used:
cat (1234): drop_caches: 3
These are informational only. They do not mean that anything is wrong
with your system. To disable them, echo 4 (bit 3) into drop_caches. I'm a bit sketchy about the details. Running echo 3 > /proc/sys/vm/drop_caches frees pagecache, dentries and inodes. Ok. So, if I want the system to start caching normally again, do I need to reset it to 0 first? My system has the value currently set to 0, which I assume is the default. Or will it reset on its own? I see at least two possibilities here, and I'm not sure which one is true: echo 3 > /proc/sys/vm/drop_caches frees pagecache, dentries and inodes. The system then immediately starts caching again. I'm not sure what I would expect the value in /proc/sys/vm/drop_caches to do if this is the case. Go back to 0 almost immediately? If /proc/sys/vm/drop_caches is set to 3, the system does not do any memory caching till it is reset to 0. Which case is true? | It isn't sticky - you just write to the file to make it drop the caches and then it immediately starts caching again. Basically when you write to that file you aren't really changing a setting, you are issuing a command to the kernel. The kernel acts on that command (by dropping the caches) then carries on as before. | {
"source": [
"https://unix.stackexchange.com/questions/17936",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4671/"
]
} |
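For the cold-cache timing use case in the question, the typical sequence (run as root) is therefore:
sync                                  # flush dirty pages so more of the cache can be dropped
echo 3 > /proc/sys/vm/drop_caches     # one-shot: caching resumes immediately afterwards
time some_command_under_test          # placeholder for whatever is being measured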
17,949 | Can anyone tell me the technical differences between grep , egrep , and fgrep and provide suitable examples? When do I need to use grep over egrep and vice versa? | egrep is 100% equivalent to grep -E fgrep is 100% equivalent to grep -F Historically these switches were provided in separate binaries. On some really old Unix systems you will find that you need to call the separate binaries, but on all modern systems the switches are preferred. The man page for grep has details about this. As for what they do, -E switches grep into a special mode so that the expression is evaluated as an ERE (Extended Regular Expression) as opposed to its normal pattern matching. Details of this syntax are on the man page. -E, --extended-regexp Interpret PATTERN as an extended regular expression The -F switch switches grep into a different mode where it accepts a pattern to match, but then splits that pattern up into one search string per line and does an OR search on any of the strings without doing any special pattern matching. -F, --fixed-strings Interpret PATTERN as a list of fixed strings, separated by newlines, any of which is to be matched. Here are some example scenarios: You have a file with a list of say ten Unix usernames in plain text. You want to search the group file on your machine to see if any of the ten users listed are in any special groups: grep -F -f user_list.txt /etc/group The reason the -F switch helps here is that the usernames in your pattern file are interpreted as plain text strings. Dots for example would be interpreted as dots rather than wild-cards. You want to search using a fancy expression. For example parenthesis () can be used to indicate groups with | used as an OR operator. You could run this search using -E : grep -E '^no(fork|group)' /etc/group ...to return lines that start with either "nofork" or "nogroup". Without the -E switch you would have to escape the special characters involved because with normal pattern matching they would just search for that exact pattern; grep '^no\(fork\|group\)' /etc/group | {
"source": [
"https://unix.stackexchange.com/questions/17949",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3303/"
]
} |
17,999 | When I run: watch 'cmd >> output.txt' & the job gets suspended by the system: 3569 Stopped (tty output) Is there a workaround? | The purpose of watch is to show the results of a command full-screen and update continuously; if you're redirecting the output into a file and backgrounding it there's really no reason to use watch in the first place. If you want to just run a command over and over again with a delay ( watch waits two seconds by default), you can use something like this: while true; do
cmd >> output.txt
sleep 2
done | {
"source": [
"https://unix.stackexchange.com/questions/17999",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3722/"
]
} |
18,006 | Is there any way to use the mouse wheel to scroll through the output of a screen session? I can use the keypad scroll through previous output in screen after pressing ctrl+a [ . Is it possible to do this with the mouse wheel? (I'm using putty , but I don't think it's a putty issue, I believe it's a screen issue.) | Mouse scrolling and elevators will work if you enable them in your .screenrc. Screen FAQ Q: My xterm scrollbar does not work with screen. A: The problem is that xterm will not allow scrolling if the alternate text buffer is selected. The standard definitions of the termcap initialize capabilities ti and te switch to and from the alternate text buffer. (The scrollbar also does not work when you start e.g. 'vi'). You can tell screen not to use these initialisations by adding the line
termcapinfo xterm ti@:te@
to your ~/.screenrc file. So in my .screenrc, I have: termcapinfo xterm* ti@:te@ In tmux, it'd be something like (.tmux.conf): set -g terminal-overrides 'xterm*:smcup@:rmcup@' | {
"source": [
"https://unix.stackexchange.com/questions/18006",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9575/"
]
} |
18,043 | I want to kill all running processes of a particular user from either a shell script or native code on a Linux system. Do I have to read the /proc directory and look for these? Any ideas? Is there a dynamic mapping of the pids under UIDs in Linux? Isn't this in the proc? If not, then where is this list maintained? Should I read from it? Also where is the static list of all UIDs in the system so I can validate that this user exists and then proceed to kill all processes running under it? | Use pkill -U UID or pkill -u UID or username instead of UID. Sometimes skill -u USERNAME may work, another tool is killall -u USERNAME . Skill was Linux-specific and is now outdated, and pkill is more portable (Linux, Solaris, BSD). pkill allows both numeric and symbolic UIDs, effective and real http://man7.org/linux/man-pages/man1/pkill.1.html pkill - ... signal processes based on name and other attributes -u, --euid euid,...
Only match processes whose effective user ID is listed.
Either the numerical or symbolical value may be used.
-U, --uid uid,...
Only match processes whose real user ID is listed. Either the
numerical or symbolical value may be used. The man page of skill says it only allows a username, not a user ID: http://man7.org/linux/man-pages/man1/skill.1.html skill, snice ... These tools are obsolete and unportable. The command syntax is poorly defined. Consider using the killall, pkill -u, --user user
The next expression is a username. killall is not marked as outdated in Linux, but it also will not work with a numeric UID, only a username: http://man7.org/linux/man-pages/man1/killall.1.html killall - kill processes by name -u, --user
Kill only processes the specified user owns. Command names
are optional. I think any utility used to find processes in a Linux/Solaris-style /proc (procfs) will use the full list of processes (doing a readdir of /proc ): it will iterate over the numeric subdirectories of /proc and check every process found for a match. To get the list of users, use getpwent (it returns one user per call). The skill (procps & procps-ng) and killall (psmisc) tools both use the getpwnam library call to parse the argument of the -u option, so only a username will be parsed. pkill (procps & procps-ng) uses both atol and getpwnam to parse the -u / -U argument, so both numeric and symbolic user specifiers are allowed. | {
"source": [
"https://unix.stackexchange.com/questions/18043",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9595/"
]
} |
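To illustrate what these tools do under the hood, and the /proc approach the question asks about, a rough shell sketch:
uid=$(id -u someuser)                              # someuser is a placeholder
for d in /proc/[0-9]*; do
    owner=$(awk '/^Uid:/ {print $2; exit}' "$d/status" 2>/dev/null)
    [ "$owner" = "$uid" ] && kill "${d#/proc/}"    # real-UID match, send SIGTERM
done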
18,048 | Rather than using mount | grep , I'd like to use mount -l -t bind , but that doesn't work, and -t none shows all mounts. | Bind mounts are not a filesystem type, nor a parameter of a mounted filesystem; they're parameters of a mount operation . As far as I know, the following sequences of commands lead to essentially identical system states as far as the kernel is concerned: mount /dev/foo /mnt/one; mount --bind /mnt/one /mnt/two
mount /dev/foo /mnt/two; mount --bind /mnt/two /mnt/one So the only way to remember what mounts were bind mounts is the log of mount commands left in /etc/mtab . A bind mount operation is indicated by the bind mount option (which causes the filesystem type to be ignored). But mount has no option to list only filesystems mounted with a particular set of sets of options. Therefore you need to do your own filtering. mount | grep -E '[,(]bind[,)]'
</etc/mtab awk '$4 ~ /(^|,)bind(,|$)/' Note that /etc/mtab is only useful here if it's a text file maintained by mount . Some distributions set up /etc/mtab as a symbolic link to /proc/mounts instead; /proc/mounts is mostly equivalent to /etc/mtab but does have a few differences, one of which is not tracking bind mounts. One piece of information that is retained by the kernel, but not shown in /proc/mounts , is when a mount point only shows a part of the directory tree on the mounted filesystem. In practice this mostly happens with bind mounts: mount --bind /mnt/one/sub /mnt/partial In /proc/mounts , the entries for /mnt/one and /mnt/partial have the same device, the same filesystem type and the same options. The information that /mnt/partial only shows the part of the filesystem that's rooted at /sub is visible in the per-process mount point information in /proc/$pid/mountinfo (column 4). Entries there look like this: 12 34 56:78 / /mnt/one rw,relatime - ext3 /dev/foo rw,errors=remount-ro,data=ordered
12 34 56:78 /sub /mnt/partial rw,relatime - ext3 /dev/foo rw,errors=remount-ro,data=ordered | {
"source": [
"https://unix.stackexchange.com/questions/18048",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3645/"
]
} |
18,061 | Possible Duplicate: “No such file or directory” lies on Optware installed binaries I'm trying to add ebtables to a little router box. I went and got a binary compiled for the correct architecture, and put it on the box in /sbin/ . When I do /sbin/ebtables , the shell says /bin/sh: /sbin/ebtables: not found , but I can do ls -l /sbin/ebtables and it shows up perfectly: -rwxr-xr-x 1 admin admin 4808 Aug 4 10:36 /sbin/ebtables Any ideas about what's going on here? | It could be a missing dependency. Notably you'll get that type of message if the runtime linker ("program interpreter") set in the ELF header does not exist on your system. To check for that, run: readelf -l your_executable|grep "program interpreter" If what it gives you does not exist on your system, or has missing dependencies (check with ldd ), you'll get that strange error message. Demo: $ gcc -o test t.c
$ readelf -l test|grep "program interpreter"
[Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
$ ./test
hello!
$ gcc -Wl,--dynamic-linker -Wl,/i/dont/exist.so -o test t.c
$ readelf -l test|grep "program interpreter"
[Requesting program interpreter: /i/dont/exist.so]
$ ./test
bash: ./test: No such file or directory | {
"source": [
"https://unix.stackexchange.com/questions/18061",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2180/"
]
} |
18,077 | Is it possible to use 2 commands in the -exec part of find command ? I've tried something like: find . -name "*" -exec chgrp -v new_group {} ; chmod -v 770 {} \; and I get: find: missing argument to -exec chmod: cannot access {}: No such file or directory chmod: cannot access ;: No such file or directory | As for the find command, you can also just add more -exec commands in a row: find . -name "*" -exec chgrp -v new_group '{}' \; -exec chmod -v 770 '{}' \; Note that this command is, in its result, equivalent of using chgrp -v new_group file && chmod -v 770 file on each file. All the find 's parameters such as -name , -exec , -size and so on, are actually tests : find will continue to run them one by one as long as the entire chain so far has evaluated to true . So each consecutive -exec command is executed only if the previous ones returned true (i.e. 0 exit status of the commands). But find also understands logic operators such as or ( -o ) and not ( ! ). Therefore, to use a chain of -exec tests regardless of the previous results, one would need to use something like this: find . -name "*" \( -exec chgrp -v new_group {} \; -o -true \) -exec chmod -v 770 {} \; | {
"source": [
"https://unix.stackexchange.com/questions/18077",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/393/"
]
} |
18,087 | Is there anywhere you can download a manpage for every builtin commands? I know you can just use help or man bash and search to find info about it, but I want them separated, so I can just do man read and get the read manpage. | help read
help read | less In zsh: run-help read or type read something and press M-h (i.e. Alt+h or ESC h ). If you want to have a single man command so as not to need to know whether the command is a built-in, define this function in your ~/.bashrc : man () {
case "$(type -t "$1"):$1" in
builtin:*) help "$1" | "${PAGER:-less}";; # built-in
*[[?*]*) help "$1" | "${PAGER:-less}";; # pattern
*) command -p man "$@";; # something else, presumed to be an external command
# or options for the man command or a section number
esac
} | {
"source": [
"https://unix.stackexchange.com/questions/18087",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9191/"
]
} |
18,117 | When I just used pipe in bash, I didn't think more about this. But when I read some C code example using system call pipe() together with fork(), I wonder how to understand pipes, including both anonymous pipes and named pipes. It is often heard that "everything in Linux/Unix is a file". I wonder if a pipe is actually a file so that one part it connects writes to the pipe file, and the other part reads from the pipe file? If yes, where is the pipe file for an anonymous pipe created? In /tmp, /dev, or ...? However, from examples of named pipes, I also learned that using pipes has space and time performance advantage over explicitly using temporary files, probably because there are no files involved in implementation of pipes. Also pipes seem not store data as files do. So I doubt a pipe is actually a file. | About your performance question, pipes are more efficient than files because no disk IO is needed. So cmd1 | cmd2 is more efficient than cmd1 > tmpfile; cmd2 < tmpfile (this might not be true if tmpfile is backed on a RAM disk or other memory device as named pipe; but if it is a named pipe, cmd1 should be run in the background as its output can block if the pipe becomes full). If you need the result of cmd1 and still need to send its output to cmd2 , you should cmd1 | tee tmpfile | cmd2 which will allow cmd1 and cmd2 to run in parallel avoiding disk read operations from cmd2 . Named pipes are useful if many processes read/write to the same pipe. They can also be useful when a program is not designed to use stdin/stdout for its IO needing to use files . I put files in italic because named pipes are not exactly files in a storage point of view as they reside in memory and have a fixed buffer size, even if they have a filesystem entry (for reference purpose). Other things in UNIX have filesystem entries without being files: just think of /dev/null or others entries in /dev or /proc . As pipes (named and unnamed) have a fixed buffer size, read/write operations to them can block, causing the reading/writing process to go in IOWait state. Also, when do you receive an EOF when reading from a memory buffer ? Rules on this behavior are well defined and can be found in the man. One thing you cannot do with pipes (named and unnamed) is seek back in the data. As they are implemented using a memory buffer, this is understandable. About "everything in Linux/Unix is a file" , I do not agree. Named pipes have filesystem entries, but are not exactly file. Unnamed pipes do not have filesystem entries (except maybe in /proc ). However, most IO operations on UNIX are done using read/write function that need a file descriptor , including unnamed pipe (and socket). I do not think that we can say that "everything in Linux/Unix is a file" , but we can surely say that "most IO in Linux/Unix is done using a file descriptor" . | {
"source": [
"https://unix.stackexchange.com/questions/18117",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
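A minimal named-pipe sketch showing the "filesystem entry, memory-backed buffer" behaviour described above (bigfile and the producer/consumer commands are placeholders):
mkfifo /tmp/mypipe
gzip -c < /tmp/mypipe > out.gz &    # the reader blocks until a writer opens the pipe
cat bigfile > /tmp/mypipe           # data flows through memory, never via a temporary file
wait
rm /tmp/mypipe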
18,119 | Typically, one sees find commands that look like this: $ find . -name foo.txt when the search is to begin from the current directory. I'm finding that on my machines (Ubuntu, Cygwin) I get the same results without the dot. Why is it typically included? Is it just a convention to be explicit, or was/is it required on certain systems? | Some versions* of find require that you provide a path argument which is a directory from which to start searching. Dot . simply represents the current directory, which is usually where you want to search. You could replace this with any path that you want to be the base of the search. In some versions of find this can be left out because the current directory is implied if no path argument is present. You can run man find in your shell for details about the arguments. For example the usage synopsis for mine indicates that the path argument is optional (inside square brackets [] ): find [-H] [-L] [-P] [-D debugopts] [-Olevel] [path...] [expression] If you ran my find with no arguments at all, all files and directories starting from the current folder would be returned. Your example simply expressly states that the search should start from . and includes the expression -name foo.txt as one of the search filters. * Notably all the BSD variants and anything sticking strictly to the POSIX standard . GNU find allows it to be optional. | {
"source": [
"https://unix.stackexchange.com/questions/18119",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5045/"
]
} |
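A few quick examples of the distinction described above; the last two forms rely on GNU find specifically, and BSD or strictly POSIX versions will complain about the missing path:

$ find . -name foo.txt            # explicit: start the search in the current directory
$ find /var/log -name '*.log'     # start it anywhere else by giving another path
$ find -name foo.txt              # GNU find only: the current directory is implied
$ find                            # GNU find only: list everything under .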
18,151 | Is there a way to follow the links mentioned in a man page? For example, here's the man page for ps; how do I access the underlined link circled in red in the screenshot: top(1)? | Man pages date back to Unix First Edition. While hypertext had been invented, it was still in its infancy; the web was two decades away, and the manual was an actual printed book, often with one command per page if they fit (that's why they were called pages). The format used for manual pages has evolved somewhat since then, but most pages aren't really designed for hypertext, and the default man program doesn't support it (it's just a plain text viewer, with hacks to support some basic formatting). There are however man page viewing programs that reconstruct some hyperlinks, mainly links to other man pages, which are traditionally written in the form man(1) where man is the name of the man page and 1 is the section number: tkman, a GUI man page viewer with hyperlinks; WoMan (wiki, man comparison, formerly), a man page browser for Emacs, supporting hyperlinks; man2html, a man-to-HTML converter (plus a web browser to read the result). You can browse the manual pages of several operating systems, converted to HTML by man2html or similar tools, on a number of sites online, for example: CentOS, Debian, FreeBSD (and a bunch of other collections), macOS (archive from 10.9 Mavericks, recent version at unix.com, partial copy at ss64.com), MINIX 3, NetBSD, OpenBSD, Solaris 10, Solaris 11 and other Solaris versions, Ubuntu, Unix 1st edition, Unix 6th edition, Unix 8th edition. Some time after man pages had become the established documentation format on Unix and some time before the web was invented, the GNU project introduced the info documentation format, more advanced than man while sticking to simple markup designed for text terminals. The major innovation of info compared to man was to have multi-page documentation with hyperlinks to other pages. Info is still the preferred documentation format for GNU projects, though most Info pages are generated from a Texinfo source (or sometimes other formats) that can also generate HTML. When info documentation for a program exists, it's often the main manual, while the man pages only contain basic information about command line arguments. | {
"source": [
"https://unix.stackexchange.com/questions/18151",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9610/"
]
} |
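As a small, hedged illustration of the man(section) convention mentioned above: a cross-reference like top(1) means "the page named top in manual section 1", so even plain man can follow it by hand (which sections exist for a given name depends on what is installed):

$ man ps              # while reading, note references such as top(1)
$ man 1 top           # follow it: the page 'top' in section 1 (user commands)
$ man 5 crontab       # the same name can live in several sections...
$ man 1 crontab       # ...so the number selects which page you get
$ man -k crontab      # apropos search: list pages whose name or description matches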
18,153 | I was wondering what some formats of object files in Linux are. There are two types of object files that I know of: executables, which have ELF format, and object files that are generated by gcc after compilation but before linking. What is the format of such object files? Or are they also ELF format, but with some different sub-format than executables? Is the job of a linker to convert the format of this type of object file into the format of executables? Are there other types of object files? | The object files that gcc emits with -c are also ELF files; ELF is a container format, and the e_type field in its header records what kind of object a file is. A .o file is a relocatable object (type ET_REL): it holds machine code, data, a symbol table and relocation records, but no program headers, so the kernel cannot load it directly. Executables are ET_EXEC (or ET_DYN for position-independent executables), shared libraries are ET_DYN, and core dumps are ET_CORE. So yes, part of the linker's job is exactly that conversion: it combines one or more relocatable objects and libraries, resolves symbols, applies the relocations, lays the result out into loadable segments described by program headers, and writes an ET_EXEC/ET_DYN file. You can inspect the type of any of these with file or readelf -h. Other object formats exist (a.out and COFF were used on older Unix systems, and very old Linux used a.out), but modern Linux uses ELF for all of them. | {
"source": [
"https://unix.stackexchange.com/questions/18153",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
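A quick way to check the ELF types described above, assuming some trivial C source file hello.c; the exact wording printed by file and readelf varies between versions (a position-independent executable may be reported as "pie executable" or as type DYN):

$ gcc -c hello.c -o hello.o       # compile only: relocatable object
$ gcc hello.o -o hello            # link: executable
$ file hello.o                    # reports something like: ELF 64-bit LSB relocatable, x86-64 ...
$ file hello                      # reports: ELF 64-bit LSB executable (or pie executable), x86-64 ...
$ readelf -h hello.o | grep Type  # Type: REL (Relocatable file)
$ readelf -h hello | grep Type    # Type: EXEC or DYN, depending on whether gcc built a PIE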
18,154 | There is a folder at the root of Linux and Unix operating systems called /lost+found/ What is it for? Under what circumstances would I interact with it? How would I interact with it? | If you run fsck , the filesystem check and repair command, it might find data fragments that are not referenced anywhere in the filesystem. In particular, fsck might find data that looks like a complete file but doesn't have a name on the system — an inode with no corresponding file name. This data is still using up space, but it isn't accessible by any normal means. If you tell fsck to repair the filesystem, it will turn these almost-deleted files back into files. The thing is, the file had a name and location once, but that information is no longer available. So fsck deposits the file in a specific directory, called lost+found (after lost and found property). Files that appear in lost+found are typically files that were already unlinked (i.e. their name had been erased) but still opened by some process (so the data wasn't erased yet) when the system halted suddenly (kernel panic or power failure). If that's all that happened, these files were slated for deletion anyway, you don't need to care about them. Files can also appear in lost+found because the filesystem was in an inconsistent state due to a software or hardware bug. If that's the case, it's a way for you to find files that were lost but that the system repair managed to salvage. The files may or may not contain useful data, and even if they do they may be incomplete or out of date; it all depends how bad the filesystem damage was. On many filesystems, the lost+found directory is a bit special because it preallocates a bit of space for fsck to deposit files there. (The space isn't for the file data, which fsck leaves in place; it's for the directory entries which fsck has to make up.) If you accidentally delete lost+found , don't re-create it with mkdir , use mklost+found if available. | {
"source": [
"https://unix.stackexchange.com/questions/18154",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4232/"
]
} |
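For completeness, a hedged sketch of how you would typically end up looking inside lost+found after a crash. The device /dev/sdb1 and mount point /mnt are placeholders for your own filesystem, fsck must be run on an unmounted filesystem, and the commands need root:

$ sudo umount /dev/sdb1              # make sure the filesystem is not mounted
$ sudo fsck -f /dev/sdb1             # answer y when asked to reconnect orphaned inodes
$ sudo mount /dev/sdb1 /mnt
$ ls /mnt/lost+found                 # salvaged entries are named after their inode number, e.g. '#12345'
$ sudo file /mnt/lost+found/*        # guess what kind of data each salvaged entry holds
$ cd /mnt && sudo mklost+found       # only if lost+found itself was deleted (ext2/3/4, from e2fsprogs)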
18,166 | What are session leaders, as in ps -d which selects all processes except session leaders? | In Linux, every process has several IDs associated with it, including: Process ID (PID) This is an arbitrary number identifying the process. Every process has a unique ID, but after the process exits and the parent process has retrieved the exit status, the process ID is freed to be reused by a new process. Parent Process ID (PPID) This is just the PID of the process that started the process in question. If the parent process exits before the child does, the child's PPID is changed to another process (usually PID 1). Process Group ID (PGID) This is just the PID of the process group leader. If PID == PGID, then this process is a process group leader. Session ID (SID) This is just the PID of the session leader. If PID == SID, then this process is a session leader. Sessions and process groups are just ways to treat a number of related processes as a unit. All the members of a process group always belong to the same session, but a session may have multiple process groups. Normally, a shell will be a session leader, and every pipeline executed by that shell will be a process group. This is to make it easy to kill the children of a shell when it exits. (See exit(3) for the gory details.) I don't think there is a special term for a member of a session or process group that isn't the leader. | {
"source": [
"https://unix.stackexchange.com/questions/18166",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9610/"
]
} |
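To see these IDs in practice, ps can print them side by side; pid, ppid, pgid and sid are standard -o format keywords, while the layout described in the comments is only illustrative. A process is a session leader when PID == SID and a process group leader when PID == PGID:

$ sleep 100 | sleep 100 &                 # one pipeline started from the shell = one process group
$ ps -o pid,ppid,pgid,sid,tty,comm        # in that shell: bash has PID == PGID == SID (leader of both);
                                          # the two sleeps share a PGID of their own but keep bash's SID
$ ps -d -o pid,sid,comm                   # the -d from the question: everything except session leaders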
18,187 | Given code like this: 588 while ($linecounter > 0) {
589 $token = " ";
590 while ($token != PHP_EOL) {
591 if (fseek($handle, $pos, SEEK_END) == -1) {
592 $beginning = true;
593 break;
594 }
595 $token = fgetc($handle);
596 $pos--;
597 }
598 $linecounter--;
599 if ($beginning) {
600 rewind($handle);
601 }
602 } The cursor is at the character = on line 590. What is the most efficient way to select the code block: lines 590-597, or lines 591-596 (just the inner part)? | To do the first: Hit $ to go to the end of the line, over the {. Push v or V (depending on whether you want to select lines or not). Push % (to jump to the matching bracket). To select just the inner part, go inside the inner part and use the i{ directional modifier. For example, to delete everything inside the current {…} block, type di{. | {
"source": [
"https://unix.stackexchange.com/questions/18187",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9769/"
]
} |