source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
57,540 | I'm mainly a Windows guy, programming C#, but I often do use technologies that were intended for Linux machines like git, MySQL, perl scripts, memcached, php, etc... And therefore I am exposed to these tools. I like looking at the code base of these tools every once in a while, and something I realized in many code bases is a folder called t with a bunch of files with the t extension. What are these files? How come the folder doesn't have a more descriptive name? | Have you tried tail -f file1 file2 ? It appears to do exactly what you want, at least on my FreeBSD machine. Perhaps the tail that comes with a Debian system can do it too? | {
"source": [
"https://unix.stackexchange.com/questions/57540",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28268/"
]
} |
57,555 | After some searching, I got to know :echo @% displays the current filename in the bottom line of vim-screen. I would like to dump the filename (with and without full path) into the contents of the file without leaving vim . Is there any way to do this? | The current filename is in the "% register, so you can insert it (while in insert mode) with <c-r>% ; the full path can be inserted with <c-r>=expand("%:p") . You could make a macro of it if you use it often. For more, similar tricks, see :h expand and :h "= . | {
"source": [
"https://unix.stackexchange.com/questions/57555",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17265/"
]
} |
57,590 | I'm trying to append the current date to the end of a file name like this: TheFile.log.2012-02-11 Here is what I have so far: set today = 'date +%Y'
mkdir -p The_Logs &
find . -name The_Logs -atime -1 -type d -exec mv \{} "The_Logs_+$today" \; & However all I get is the name of the file, and it appends nothing. How do I append a current date to a filename? | More than likely it is your use of set . That will assign 'today', '=' and the output of the date program to positional parameters (aka command-line arguments). Unless you just want to use the C shell (you tagged this "bash", so likely not), you will want to use: today=`date +%Y-%m-%d.%H:%M:%S` # or whatever pattern you desire Notice the lack of spaces around the equal sign. You also do not want to use & at the end of your statements, which causes the shell not to wait for the command to finish; this matters especially when one command relies on the previous one. The find command could fail because it is started before the mkdir has finished. | {
"source": [
"https://unix.stackexchange.com/questions/57590",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28309/"
]
} |
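A minimal sketch of the fix the answer above describes, assuming Bash and the names from the question (The_Logs, TheFile.log):

```bash
#!/bin/bash
# Build the date suffix first; note: no 'set', and no spaces around '='.
today=$(date +%Y-%m-%d)

# No trailing '&': mkdir must finish before anything relies on the directory.
mkdir -p The_Logs

# Rename the log so it ends up as e.g. TheFile.log.2012-02-11.
mv TheFile.log "The_Logs/TheFile.log.$today"
```

Here $(...) is equivalent to the backticks used in the answer, but nests more cleanly.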
57,641 | I've been experimenting with different tmux keybinding settings and I've noticed the following: If I reload my tmux config (from within tmux) the keybindings I once had loaded will remain loaded. The only way (I know of) to clean this up is to quit all tmux sessions and restart. So it looks like tmux remembers all previously loaded keybindings and will only remove them on a fresh start or by explicitly unbinding them. To recreate this: open a terminal (A) start tmux check whether the keybinding shows a clock (press PREFIX C-t ) press PREFIX ? to see the keybinding in the list edit ~/.tmux.conf add a keybinding ( bind C-t display "Keybinding C-t" ) reload tmux config ( PREFIX : source-file ~/.tmux.conf ) check whether the keybinding works (press PREFIX C-t ) press PREFIX ? to see the new keybinding in the list edit ~/.tmux.conf again remove the keybinding (so remove bind C-t display "Keybinding C-t" ) reload tmux config ( PREFIX : source-file ~/.tmux.conf ) check whether the keybinding works (press PREFIX C-t ), it still displays "Keybinding C-t" press PREFIX ? to see that the new keybinding is still in the list exit tmux enter tmux check whether the original keybinding works again (press PREFIX C-t ), it should now display a clock again press PREFIX ? to see that the new keybinding has been removed from the list My question: is there a way to instruct tmux to "forget" all loaded configs and then load .tmux.conf ? | According to the tmux(1) man page, unbind-key -a is what you are looking for. Note that tmux runs a server that will only exit once all sessions are closed, and the key-bindings are per-server. Hence once you create a binding, it will be persistent over all client detaches. That said, put unbind-key -a at the very top of your configuration file, and on config reload it should do what you want - unbind everything and start binding from scratch. Or - if your modifications are smaller - unbind only what you want to change. | {
"source": [
"https://unix.stackexchange.com/questions/57641",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13296/"
]
} |
57,667 | [Edit: This looks similar to some other questions asking how to kill all spawned processes – the answers all seem to be to use pkill. So the core of my question may be: Is there a way to propagate Ctrl-C/Z to all processes spawned by a script?] When calling a SoX rec with the timeout command from coreutils (discussed here ), there doesn't seem to be any way to kill it with a keystroke once it's been invoked from within a Bash script. Examples: timeout 10 rec test.wav ...can be killed with Ctrl + C or Ctrl + Z from bash, but not when it's been called from inside a script. timeout 10 ping nowhere ...can be killed with Ctrl + C or Ctrl + Z from bash, and with Ctrl + Z when it's run from inside a script. I can find the process ID and kill it that way, but why can't I use a standard break keystroke? And is there any way to structure my script so that I can? | Signal keys such as Ctrl + C send a signal to all processes in the foreground process group . In the typical case, a process group is a pipeline. For example, in head <somefile | sort , the process running head and the process running sort are in the same process group, as is the shell, so they all receive the signal. When you run a job in the background ( somecommand & ), that job is in its own process group, so pressing Ctrl + C doesn't affect it. The timeout program places itself in its own process group. From the source code: /* Ensure we're in our own group so all subprocesses can be killed.
Note we don't just put the child in a separate group as
then we would need to worry about foreground and background groups
and propagating signals between them. */
setpgid (0, 0); When a timeout occurs, timeout goes through the simple expedient of killing the process group of which it is a member. Since it has put itself in a separate process group, its parent process will not be in the group. Using a process group here ensures that if the child application forks into several processes, all its processes will receive the signal. When you run timeout directly on the command line and press Ctrl + C , the resulting SIGINT is received both by timeout and by the child process, but not by interactive shell which is timeout 's parent process. When timeout is called from a script, only the shell running the script receives the signal: timeout doesn't get it since it's in a different process group. You can set a signal handler in a shell script with the trap builtin. Unfortunately, it's not that simple. Consider this: #!/bin/sh
trap 'echo Interrupted at $(date)' INT
date
timeout 5 sleep 10
date If you press Ctrl + C after 2 seconds, this still waits the full 5 seconds, then prints the “Interrupted” message. That's because the shell refrains from running the trap code while a foreground job is active. To remedy this, run the job in the background. In the signal handler, call kill to relay the signal to the timeout process group. #!/bin/sh
trap 'kill -INT -$pid' INT
timeout 5 sleep 10 &
pid=$!
wait $pid | {
"source": [
"https://unix.stackexchange.com/questions/57667",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17891/"
]
} |
57,705 | I just started learning Linux and all my previous experience of programming has been on the Windows platform. I came across the Vim editor and read that it is a modal editor, unlike Notepad, which is termed a modeless editor. Can you please explain the difference between modeless and modal editors in general? | A normal, "modeless" editor is like Notepad on Windows: there is only one mode, where you input text. Vi, and its successor Vim, are modal: there are two primary modes 1 , insert mode where you type text into the editor and it is committed to the document, and normal mode where you enter arguments via the keyboard that perform a variety of functions, including: moving the cursor around the document, searching, and manipulating the text in the document (for example, cutting and pasting). The Wikipedia article on Vi has a good entry on the modal interface. The primary appeal, originally a necessity in the early days of Unix computing prior to the widespread adoption of the mouse, is completely keyboard-driven editing. This approach has now been more widely adopted in Unix-land, being used for example by a variety of web browsers . This awesome project, Vim Clutch , provides a clear visualization of the concept of switching between modes. 1. There are also two other modes, command mode for entering commands as you would in a shell, and visual mode when selecting text to operate on. | {
"source": [
"https://unix.stackexchange.com/questions/57705",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28032/"
]
} |
57,725 | Unlike rdesktop, when I press ALT + F4 in remmina, it doesn't react in the Windows system, but instead closes the remmina window. Any thoughts? | Remmina has this configurable in the 'Preferences' -> tab 'Keyboard' -> Field 'Grab keyboard'. On my Ubuntu installation this is the right Ctrl key. Works for me and seems to behave like a toggle key. Pressing it again makes modifier keys working on the local machine again. | {
"source": [
"https://unix.stackexchange.com/questions/57725",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
57,844 | I am getting the error: touch: cannot touch `/opt/tsrm/compliance/cme/log/20121207.log`: No such file or directory on the touch command: touch $LOGFILE I also checked the link: touch: cannot touch `foo': No such file or directory , but I didn't understand the answer. Note: I was also getting mkdir: cannot create directory ; I fixed this by adding the -p option. Could this be something with the version of Linux I am working on? | The directory that should hold the file does not exist: /opt/tsrm/compliance/cme/log/ That's where the error comes from. | {
"source": [
"https://unix.stackexchange.com/questions/57844",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28428/"
]
} |
57,852 | I need my script to be executed a minute after each reboot. When I apply @reboot in my crontab it is too early for my script - I want the script to be executed after all other tasks that are routinely run on reboot. How might I run the script sometime after reboot? | Is the script only ever intended to run one minute after boot up, or can it be used at other times, too? In the former case, you can add sleep 60 to the beginning of your script, or in the latter case, add it to the crontab file: @reboot sleep 60 && my_script.sh As has been pointed out by sr_, though, perhaps you are tackling this in the wrong way, and a proper init.d or rc.d script would be a more robust solution. | {
"source": [
"https://unix.stackexchange.com/questions/57852",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20029/"
]
} |
57,876 | I'm trying to grep username: users | grep "^\b\w*\b" -P How can I get it to only show the first match with grep ? | To show only the first match with grep , use -m parameter, e.g.: grep -m1 pattern file -m num , --max-count=num Stop reading the file after num matches. | {
"source": [
"https://unix.stackexchange.com/questions/57876",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20029/"
]
} |
57,890 | On my Ubuntu machine, in /etc/sysctl.conf file, I've got reverse path filtering options commented out by default like this: #net.ipv4.conf.default.rp_filter=1
#net.ipv4.conf.all.rp_filter=1 but in /etc/sysctl.d/10-network-security.conf they are (again, by default) not commented out: net.ipv4.conf.default.rp_filter=1
net.ipv4.conf.all.rp_filter=1 So is reverse path filtering enabled or not? Which of the configuration locations takes priority? How do I check the current values of these and other kernel options? | Checking the value of a sysctl variable is as easy as sysctl <variable name> and, by the way, setting a sysctl variable is as straightforward as sudo sysctl -w <variable name>=<value> but changes made this way will probably hold only till the next reboot. As to which of the config locations, /etc/sysctl.conf or /etc/sysctl.d/ , takes precedence, here is what /etc/sysctl.d/README file says: End-users can use 60-*.conf and above, or use /etc/sysctl.conf directly, which overrides anything in this directory . After editing the config in any of the two locations, the changes can be applied with sudo sysctl -p | {
"source": [
"https://unix.stackexchange.com/questions/57890",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22884/"
]
} |
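For instance, checking and setting the exact keys from the question might look like this (a sketch; the output format varies slightly between distributions):

```bash
# Read the current runtime values:
sysctl net.ipv4.conf.all.rp_filter
sysctl net.ipv4.conf.default.rp_filter

# The same values are visible directly under /proc/sys:
cat /proc/sys/net/ipv4/conf/all/rp_filter

# Change a value until the next reboot:
sudo sysctl -w net.ipv4.conf.all.rp_filter=1
```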
57,920 | I've got an Intel i7 2700k here, and I'd like to know how I can tell which processors are physical and which are virtual (ie: hyperthreading). I'm currently running a Conky script to display my CPU temps, frequencies, and loads, but I'm not sure that I've done it right: I've written my own script to get temperatures and frequencies from i7z , but these only correspond to physical cores. I'm currently displaying each core like this: ${cpu cpu1} ${lua display_temp 0} ${lua display_load 0}
${cpu cpu2}
${cpu cpu3} ${lua display_temp 1} ${lua display_load 1}
${cpu cpu4}
# ... I'm not sure that this is right, because of the loads and temperatures I see sometimes. In /proc/cpuinfo , how are cores sorted? First all physical then all virtual? Each physical core then its virtual core(s)? How are they sorted? | You can know about each processor core by examining each cpuinfo entry: processor : 0
[...]
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 0
processor : 1
[...]
physical id : 0
siblings : 8
core id : 1
cpu cores : 4
apicid : 2
processor : 2
[...]
physical id : 0
siblings : 8
core id : 2
cpu cores : 4
apicid : 4
processor : 3
[...]
physical id : 0
siblings : 8
core id : 3
cpu cores : 4
apicid : 6
processor : 4
[...]
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 1
[and so on] physical id shows the identifier of the processor. Unless you have a multiprocessor setup (two separate physical processors in a machine), it will always be 0. siblings shows the number of logical processors attached to the same physical processor. core id shows the identifier of the current core, out of a total of cpu cores . You can use this information to correlate which virtual processors go into a single core. apicid (and original apicid ) shows the number of the (virtual) processor, as given by the BIOS. Note that there are 8 siblings and 4 cores, so there are 2 virtual processors per core. There is no distinction between "virtual" or "real" in hyperthreading. But using this information you can associate which processors are from the same core. | {
"source": [
"https://unix.stackexchange.com/questions/57920",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5614/"
]
} |
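To make the correlation concrete, here is a small sketch that pairs each logical processor with its (physical id, core id); entries sharing the same pair are hyperthread siblings on one core. It assumes the /proc/cpuinfo layout shown in the answer above:

```bash
awk -F': *' '
  /^processor/   { p = $2 }
  /^physical id/ { phys = $2 }
  /^core id/     { printf "cpu%s -> socket %s, core %s\n", p, phys, $2 }
' /proc/cpuinfo
```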
57,938 | I am looking for a command line tool that listens on a given port, happily accepts every HTTP POST request and dumps it. I want to use it for testing purposes, i.e. for testing clients that issue HTTP POST requests. That means I am searching for the counterpart to curl -F (which I can use to send test HTTP POSTs to an HTTP server). Perhaps something like socat TCP4-LISTEN:80,fork,bind=127.0.0.1 ... - but socat is not enough because it does not talk HTTP. | I was looking for this myself as well and ran into the Node.js http-echo-server : npm install http-echo-server -g
PORT=8081 http-echo-server It accepts all requests and echos the full request including header to the command-line. | {
"source": [
"https://unix.stackexchange.com/questions/57938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1131/"
]
} |
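If installing Node.js is not an option, a rough netcat stand-in can dump requests too. This sketch assumes BSD-style netcat (traditional netcat wants nc -l -p 8081 instead) and does not really speak HTTP: it answers every connection with an empty 200 and logs whatever the client sent:

```bash
while true; do
  printf 'HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n' |
    nc -l 8081 | tee -a requests.log
done
```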
57,940 | Many examples for trap use trap ... INT TERM EXIT for cleanup tasks. But is it really necessary to list all the three sigspecs? The manual says : If a SIGNAL_SPEC is EXIT (0) ARG is executed on exit from the shell. which I believe applies whether the script finished normally or it finished because it received SIGINT or SIGTERM . An experiment also confirms my belief: $ cat ./trap-exit
#!/bin/bash
trap 'echo TRAP' EXIT
sleep 3
$ ./trap-exit & sleep 1; kill -INT %1
[1] 759
TRAP
[1]+ Interrupt ./trap-exit
$ ./trap-exit & sleep 1; kill -TERM %1
[1] 773
TRAP
[1]+ Terminated ./trap-exit Then why do so many examples list all of INT TERM EXIT ? Or did I miss something and is there any case where a sole EXIT would miss? | The POSIX spec doesn't say much about the conditions resulting in executing the EXIT trap, only about what its environment must look like when it is executed. In Busybox's ash shell, your trap-exit test does not echo 'TRAP' before exiting due to either SIGINT or SIGTERM. I would suspect there are other shells in existence that may not work that way as well. # /tmp/test.sh & sleep 1; kill -INT %1
#
[1]+ Interrupt /tmp/test.sh
#
#
# /tmp/test.sh & sleep 1; kill -TERM %1
#
[1]+ Terminated /tmp/test.sh
# | {
"source": [
"https://unix.stackexchange.com/questions/57940",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17253/"
]
} |
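This portability gap is exactly why so many examples list all three sigspecs. A common pattern (a sketch) is to trap INT and TERM explicitly and convert them into a normal exit, so the EXIT trap runs exactly once even on shells, like BusyBox ash, that do not fire it on signals:

```sh
#!/bin/sh
cleanup() {
    echo TRAP   # remove temp files, release locks, etc.
}
trap cleanup EXIT
trap 'exit 130' INT    # 128 + signal number, the conventional exit codes;
trap 'exit 143' TERM   # 'exit' inside a signal trap still runs the EXIT trap
sleep 3
```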
57,957 | While we use * to denote zero or more previous characters in grep , we use *.c to find all C files when we use it with the ls command like ls *.c . Could someone tell how the use of * differs in these two cases? | Shell file name globbing and regular expressions use some of the same characters, and they have similar purposes, but you're right, they aren't compatible. File name globbing is a much less powerful system. In file name globbing: * means "zero or more characters" ? means "any single character" But in regexes, you have to use .* to mean "zero or more characters", and . means "any single character." A ? means something quite different in regexes: zero or one instance of the preceding RE element. Square brackets ( [] ) appear to work the same in both systems on the system I'm typing this on, for simple cases at least. This includes things like POSIX character classes (e.g. [:alpha:] ). That said, if you need your commands to work on many different system types, I recommend against using anything beyond elementary things like lists of characters (e.g. [abeq] ) and maybe character ranges (e.g. [a-c] ). These differences mean the two systems are only directly interchangeable for simple cases. If you need regex matching of file names, you need to do it another way. find -regex is one option. (Notice that there is also find -name , by the way, which uses glob syntax.) | {
"source": [
"https://unix.stackexchange.com/questions/57957",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3539/"
]
} |
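A quick side-by-side illustration of the two systems (note that find's default -regex dialect matches the whole path, not just the file name):

```bash
# Glob: the shell itself expands *.c before ls even runs.
ls *.c

# Glob syntax again, but evaluated by find (quoted so the shell leaves it alone):
find . -name '*.c'

# Regex: "anything" is .* and the literal dot must be escaped.
find . -regex '.*\.c'
```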
58,103 | Let's assume that infile contains a specific text, and I were to execute the following set of commands: exec 3<infile cat -n <&3 cat -n <&3 The first instance of cat will display the file's contents, but the second time does not seem to be doing anything. Why do they differ? | They look like the same command but the reason they differ is the system state has changed as a result of the first command. Specifically, the first cat consumed the entire file, so the second cat has nothing left to read, hits EOF (end of file) immediately, and exits. The reason behind this is you are using the exact same file description (the one you created with exec < infile and assigned to the file descriptor 3 ) for both invocations of cat . One of the things associated with an open file description is a file offset. So, the first cat reads the entire file, leaves the offset at the end, and the second one tries to pick up from the end of the file and finds nothing to read. | {
"source": [
"https://unix.stackexchange.com/questions/58103",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28555/"
]
} |
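A sketch of both the behaviour and the usual remedy: reopening the descriptor resets the offset, because it creates a brand-new open file description:

```bash
exec 3< infile
cat -n <&3      # reads to EOF; the shared offset is now at the end
cat -n <&3      # nothing left to read

exec 3< infile  # reopen: fresh open file description, offset back at 0
cat -n <&3      # prints the whole file again
exec 3<&-       # close descriptor 3 when finished
```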
58,145 | According to Wikipedia (which could be wrong) When a fork() system call is issued, a copy of all the pages corresponding to the parent process is created, loaded into a separate memory location by the OS for the child process. But this is not needed in certain cases. Consider the case when a child executes an " exec " system call (which is used to execute any executable file from within a C program) or exits very soon after the fork() . When the child is needed just to execute a command for the parent process, there is no need for copying the parent process' pages, since exec replaces the address space of the process which invoked it with the command to be executed. In such cases, a technique called copy-on-write (COW) is used. With this technique, when a fork occurs, the parent process's pages are not copied for the child process. Instead, the pages are shared between the child and the parent process. Whenever a process (parent or child) modifies a page, a separate copy of that particular page alone is made for that process (parent or child) which performed the modification. This process will then use the newly copied page rather than the shared one in all future references. The other process (the one which did not modify the shared page) continues to use the original copy of the page (which is now no longer shared). This technique is called copy-on-write since the page is copied when some process writes to it. It seems that when either of the processes tries to write to the page a new copy of the page gets allocated and assigned to the process that generated the page fault. The original page gets marked writable afterwards. My question is: what happens if the fork() gets called multiple times before any of the processes made an attempt to write to a shared page? | Nothing particular happens. All processes are sharing the same set of pages and each one gets its own private copy when it wants to modify a page. | {
"source": [
"https://unix.stackexchange.com/questions/58145",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28576/"
]
} |
58,156 | ( Crossposted from StackOverflow.com ) This question and the answer teach us how to introduce colour into tcsh prompts. This webpage explains nicely how to get colour into any output of the echo command: > echo \\e[1\;30mBLACK\\e[0m
BLACK
> echo '\e[1;30mBLACK\e[0m'
BLACK The word 'BLACK' in the example above is printed with a black (or darkgrey) foreground colour (depending on the overall color scheme). Now I'd like to introduce this into the [TAB] command autocompletion feature of tcsh . I tried: complete testcmd 'p/*/`echo '"'"'\e[1;30mf834fef\e[0m'"'"'`/' And I get: > testcmd [TAB]
> testcmd ^[\[1\;30mf834fef^[\[0m Obviously the characters lose their special meaning. Hopefully I just did not get the escaping right. But I tried several other ways. So any help is appreciated. The real use case is that I've got a command completion that offers three different types of completions and I'd like to visually distinguish the types. Also the alternatives are computed by an external command. That is why I need the completion to use the backticks with an external command, such as echo . I don't care about the details of this command. If you make it work in any way with the tcsh 's complete command I'll probably be able to adapt (thinking perl -pe wrappers and such). The reason why I believe this has to work somehow is that the tcsh itself offers coloured command completion if you e.g. type ls [TAB] . That works correctly in my setup. Also you can use ls -1F inside the autocompletion and the colours that ls outputs are also piped through. An example would be: complete testcmd 'p/*/`ls -1F`/' Update: As user mavin points out on stackoverflow , the colourization of ls in this example is indeed not piped through. The colours of ls are lost, but the auto completion can reapply colours according to LS_COLOURS variable based on hints such as the / and * marker endings as added by the ls. This can be verified by doing complete testcmd 'p/*/`ls --color -1`/' which fails to provide colour, and only provides garbled output. (Literally pipes through the escape character sequences) I'm on tcsh version 6.13.00 Any ideas? Pointers? | Nothing particular happens. All processes are sharing the same set of pages and each one gets its own private copy when it wants to modify a page. | {
"source": [
"https://unix.stackexchange.com/questions/58156",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28585/"
]
} |
58,157 | CentOS doesn't recognise www-data but I want to change ownership on my files folder. All my folders are owned by root at the moment. I'm confused as to what should be owned by apache and what should be owned by me, the root user. Also, when it says root root , does that mean root user (me) and group apache root? | There is an apache user instead of www-data in CentOS. | {
"source": [
"https://unix.stackexchange.com/questions/58157",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27824/"
]
} |
58,189 | What command can I use to split input like this: foo:bar:baz:quux into this? foo
bar
baz
quux I'm trying to figure out the cut command but it seems to only work with fixed amounts of input, like "first 1000 characters" or "first 7 fields". I need to work with arbitrarily long input. | There are a few options: tr : \\n sed 's/:/\n/g' (with GNU sed) awk '{ gsub(":", "\n") } 1' You can also do this in pure bash : while IFS=: read -ra line; do
printf '%s\n' "${line[@]}"
done | {
"source": [
"https://unix.stackexchange.com/questions/58189",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28601/"
]
} |
58,271 | I am using the reboot -f command remotely to force reboot a Unix machine.
The problem is that the ssh connection remains active for a long time, and I don't know why.
I want to close the ssh connection immediately after rebooting the machine and return to my local shell.
How can I do that?
Note that the reboot command without -f flag does not work. | The command reboot -f never returns (unless you didn't have permission to cause a reboot). At the point where it is issued, the SSH client is waiting for something to do, which could be: the SSH server notifying the client that something happened that requires its attention, for example that there is some output to display, or that the remote command has finished; some event on the client side, such as a signal to relay; a timer firing up to cause the client to send a keepalive message (and close the connection if the server doesn't reply). Since the SSH server process is dead, the SSH client won't die until the timer fires up. If you run ssh remotehost 'reboot -f >/dev/null &' , then what happens is: The remote shell launches the reboot command in the background. Because the server-side shell command has exited and there is no process holding the file descriptor for standard output open, the SSH server closes the connection. The reboot command causes the machine to reboot. However, this is not reliable: depending on timing, step 3 might happen before step 2. Adding a timer makes this unlikely: ssh remotehost '{ sleep 1; reboot -f; } >/dev/null &' To be absolutely sure that the server side is committed to running reboot , while making sure that it doesn't actually reboot before notifying the client that it is committed, you need an additional notification to go from the server to the client. This can be output through the SSH connection, but it gets complicated. | {
"source": [
"https://unix.stackexchange.com/questions/58271",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28650/"
]
} |
58,310 | What is the difference between the printf function in bash and the echo function? Specifically, running: echo "blah" >/dev/udp/localhost/8125 did not send the "blah" command to the server listening on 8125, whereas printf "blah" >/dev/udp/localhost/8125 sent the data. Does printf send an extra EOF at the end of its output? | The difference is that echo sends a newline at the end of its output. There is no way to "send" an EOF. | {
"source": [
"https://unix.stackexchange.com/questions/58310",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9519/"
]
} |
58,473 | I've got a huge (about half a GiB, impossible to use a usual text editor on) CSV file with fields enclosed in double quotes like "abc","def" but need a file without quotes (I am sure this is not going to break the file consistency - a comma is never used inside the values in it). How to remove all the quotes (without introducing spaces on their places)? | tr can do that: tr -d \" < infile > outfile You could also use sed : sed 's/"//g' < infile > outfile | {
"source": [
"https://unix.stackexchange.com/questions/58473",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2119/"
]
} |
58,514 | Possible Duplicate: running script with “. ” and with “source ” I have used both the dot command '.' and 'source' to reload a given rc file (typically to update my environment variables) but I am not sure if they are different and if one is preferred. What is the difference between the two ? | . is the Bourne and POSIX shell command while source is the C-Shell command. Some Bourne-shell derivatives like bash , zsh and most implementations of ksh also have a source command which is generally an alias for . - however, it may behave slightly differently (e.g. in zsh and ksh). For bash, . and source behave the same, but their behaviour is affected by whether they are run in POSIX mode or not¹. POSIX requires that the . command exits the shell process² if it can't open the file for reading and requires that the file be found through a search of the directories in $PATH if the provided path doesn't contain a / . csh 's source interprets the argument as a path, and never looks the file up in $PATH . bash . and source behave as POSIX requires when in POSIX mode, and as pdksh's source when not, that is they don't exit the script if they fail to open the file for reading (same as command . ) and lookup the file in $PATH and the current directory if the provided path doesn't contain a / . zsh . behaves as POSIX requires, while source looks in the current directory first and then $PATH (even in csh emulation) when the argument doesn't contain a / . (see info zsh . and info zsh source for details). If . or source fail to find/open the file, that only aborts the shell when in POSIX mode ( sh emulation) though. AT&T ksh's source doesn't exit the shell either but doesn't look for the file in the current directory. All in all, in Bourne-like shells (though not the Bourne shell that doesn't have a command builtin), if you want a consistent behaviour, you could do command . /path/to/the-file-to-source || handle-error And if the-file-to-source is meant to be in the current directory, be sure to write: command . ./the-file-to-source || handle-error In sh scripts (where sh is a POSIX sh ) you should be able to rely on the POSIX behaviour stated above. ¹ zsh and bash enable the POSIX mode when called as sh . For bash , also when it receives POSIXLY_CORRECT in its environment (even when called as bash even though there's no POSIX command called bash ), or when it receives SHELLOPTS=posix , or when called with bash --posix or bash -o posix or after a set -o posix . With zsh, you use emulate sh to emulate sh . Emulations alter a whole bunch of options that change the behaviour of zsh. In this case, the option is POSIX_BUILTINS . In bash, you can check if in POSIX mode or not with the (non-POSIX), [ -o posix ] command. In zsh, you check the output of emulate to see if you're in sh emulation, or [[ -o posixbuiltins ]] to check whether that particular option is enabled. You can also temporarily enable a given emulation mode with emulate -L (to emulate in the current local scope only). ² for non-interactive shells. For interactive shells, the behaviour varies between shells, some ignore the failure, some will return to the prompt like some syntax errors would. Also, when run in a subshell, that exits the subshell only. | {
"source": [
"https://unix.stackexchange.com/questions/58514",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28071/"
]
} |
58,539 | This is linked to this question. When I run top I get the following result: pid 3038 is using 18% cpu, however when running the result is 5.5%. And this number does not appear to be changing with time (i.e. when running the same command a bit later)... Is the ps command somehow averaging the cpu usage? | man ps in NOTES section. CPU usage is currently expressed as the percentage of time spent running
during the entire lifetime of a process. This is not ideal, and it does not
conform to the standards that ps otherwise conforms to. CPU usage is
unlikely to add up to exactly 100%. And, guess you know, but you can also do: top -p <PID> Edit : as to your comment on other answer; " Hmm yeah i Wonder how to get that (the instant CPU percentage) from ps " Short answer: you can't. Why is it so? It is like asking someone to calculate the speed of a car from a picture. While top is a monitoring tool, ps is a snapshot tool. Think of it like this: At any given moment a process either uses the CPU or not. Thus you have either 0% or 100% load in that exact moment. Giving: If ps should give instant CPU usage it would be either 0% or 100%. top on the other hand keep polling numbers and calculate load over time. ps could have given current usage – but that would require it to read data multiple times and sleep between each read. It doesn't. Calculation for ps %cpu ps calculates CPU usage in the following manner: uptime = total time system has been running.
ps_time = process start time measured in seconds from boot.
pu_time = total time process has been using the CPU.
;; Seconds process has been running:
seconds = uptime - ps_time
;; Usage:
cpu_usage = pu_time * 1000 / seconds
print: cpu_usage / 10 "." cpu_usage % 10 Example:
uptime = 344,545
ps_time = 322,462
pu_time = 3,383
seconds = 344,545 - 322,462 = 22,083
cpu_usage = 3,383 * 1,000 / 22,083 = 153
print: 153 / 10 "." 153 % 10 => 15.3 So the number printed is: time the process has been using the CPU during its lifetime. As in the example above, it has done so in 15.3% of its lifetime. In 84.7% of the time it has not been using the CPU. Data retrieval ps , as well as top , uses data from files stored under /proc/ - or the process information pseudo-file system . You have some files in the root of /proc/ that have various information about the overall state of the system. In addition each process has its own sub-folder /proc/<PID>/ where process-specific data is stored. So, for example, the process from your question had a folder at /proc/3038/ . When ps calculates CPU usage it uses two files: /proc/uptime The uptime of the system (seconds), and the amount of time spent in the idle process (seconds).
/proc/[PID]/stat Status information about the process. From uptime it uses the first value ( uptime ). From [PID]/stat it uses the following: # Name Description
14 utime CPU time spent in user code, measured in jiffies
15 stime CPU time spent in kernel code, measured in jiffies
16 cutime CPU time spent in user code, including time from children
17 cstime CPU time spent in kernel code, including time from children
22 starttime Time when the process started, measured in jiffies A jiffy is a clock tick. So in addition it uses various methods, e.g. sysconf(_SC_CLK_TCK) , to get the system's Hertz (number of ticks per second) - ultimately using 100 as a fall-back after exhausting other options. So if utime is 1234 and Hertz is 100 then: seconds = utime / Hertz = 1234 / 100 = 12.34 The actual calculation is done by: total_time = utime + stime
IF include_dead_children
total_time = total_time + cutime + cstime
ENDIF
seconds = uptime - starttime / Hertz
pcpu = (total_time * 1000 / Hertz) / seconds
print: "%CPU" pcpu / 10 "." pcpu % 10 Example (Output from a custom Bash script): $ ./psw2 30894
System information
uptime : 353,512 seconds
idle : 0
Process information
PID : 30894
filename : plugin-containe
utime : 421,951 jiffies 4,219 seconds
stime : 63,334 jiffies 633 seconds
cutime : 0 jiffies 0 seconds
cstime : 1 jiffies 0 seconds
starttime : 32,246,240 jiffies 322,462 seconds
Process run time : 31,050
Process CPU time : 485,286 jiffies 4,852 seconds
CPU usage since birth: 15.6% Calculating "current" load with ps This is a (bit?) shady endeavour but OK. Lets have a go. One could use times provided by ps and calculate CPU usage from this. When thinking about it it could actually be rather useful, with some limitations. This could be useful to calculate CPU usage over a longer period. I.e. say you want to monitor the average CPU load of plugin-container in Firefox while doing some Firefox-related task. By using output from: $ ps -p -o cputime,etimes CODE HEADER DESCRIPTION
cputime TIME cumulative CPU time, "[DD-]hh:mm:ss" format. (alias time).
etime ELAPSED elapsed time since the process was started, [DD-]hh:]mm:ss.
etimes ELAPSED elapsed time since the process was started, in seconds. I use etime over etimes in this sample, on calculations, only to be a bit more clear. Also I add %cpu for "fun". In i.e. a bash script one would obviously use etimes - or better read from /proc/<PID>/ etc. Start:
$ ps -p 30894 -o %cpu,cputime,etime,etimes
%CPU TIME ELAPSED ELAPSED
5.9 00:13:55 03:53:56 14036
End:
%CPU TIME ELAPSED ELAPSED
6.2 00:14:45 03:56:07 14167
Calculate times:
13 * 60 + 55 = 835 (cputime this far)
3 * 3,600 + 53 * 60 + 56 = 14,036 (time running this far)
14 * 60 + 45 = 885 (cputime at end)
3 * 3,600 + 56 * 60 + 7 = 14,167 (time running at end)
Calculate percent load:
((885 - 835) / (14,167 - 14,036)) * 100 = 38 Process was using the CPU 38% of the time during this period. Look at the code If you want to know how ps does it, and know a little C, do the following (it looks like you run a Gnome Debian derivative) - there is a nice attitude in the code regarding comments etc.: apt-get source procps
cd procps*/ps
vim HACKING | {
"source": [
"https://unix.stackexchange.com/questions/58539",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26675/"
]
} |
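A rough Bash reconstruction of that calculation from /proc (a sketch, not the custom script quoted in the answer; it assumes a Linux /proc and a process name without spaces, since spaces in the comm field would shift the field numbers):

```bash
#!/bin/bash
pid=$1
hertz=$(getconf CLK_TCK)                 # ticks per second, usually 100
uptime=$(cut -d' ' -f1 /proc/uptime)     # seconds since boot

# Fields 14, 15 and 22 of /proc/<pid>/stat: utime, stime, starttime (jiffies).
read -r utime stime starttime < <(awk '{ print $14, $15, $22 }' "/proc/$pid/stat")

awk -v u="$uptime" -v hz="$hertz" -v ut="$utime" -v st="$stime" -v start="$starttime" '
BEGIN {
    seconds = u - start / hz    # how long the process has been running
    total   = (ut + st) / hz    # CPU seconds consumed so far
    printf "%.1f%%\n", 100 * total / seconds
}'
```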
58,550 | I have left a script running on a remote machine from when I was locally working at it. I can connect over SSH to the machine as the same user and see the script running in ps . $ ps aux | grep ipcheck
myuser 18386 0.0 0.0 18460 3476 pts/0 S+ Dec14 1:11 /bin/bash ./ipchecker.sh It is simply outputting to stdout on a local session (I ran ./ipchecker.sh from a local terminal window, no redirection, no use of screen etc). Is there any way from an SSH session I can view the output of this running command (without stopping it)? So far the best I have found is to use strace -p 18386 but I get hordes of text flying up the screen, it's far too detailed. I can stop strace and then sift through the output and find the text being printed to stdout but it's very long and confusing, and obviously whilst it's stopped I might miss something. I would like to find a way to see the script output live as if I was working locally. Can anyone improve on this? The obvious answer is to restart the script with redirection or in a screen session etc, this isn't a mission critical script so I could do that. Rather though, I see this as a fun learning exercise. | If all you want to do is spy on the existing process, you can use strace -p1234 -s9999 -e write where 1234 is the process ID. ( -s9999 avoids having strings truncated to 32 characters, and write is the system call that produces output.) If you want to view only data written on a particular file descriptor, you can use something like strace -p1234 -e trace= -e write=3 to see only data written to file descriptor 3 ( -e trace= prevents the system calls from being logged). That won't give you output that's already been produced. If the output is scrolling by too fast, you can pipe it into a pager such as less , or send it to a file with strace -o trace.log … . With many programs, you can divert subsequent output with a ptrace hack, either to your current terminal or to a new screen session. See How can I disown a running process and associate it to a new screen shell? and other linked threads. Note that depending on how your system is set up, you may need to run all these strace commands as root even if the process is running under your user with no extra privileges. (If the process is running as a different user or is setuid or setgid, you will need to run strace as root.) Most distributions only allow a process to trace its children (this provides a moderate security benefit; it prevents some direct malware injection, but doesn't prevent indirect injection by modifying files). This is controlled by the kernel.yama.ptrace_scope sysctl. | {
"source": [
"https://unix.stackexchange.com/questions/58550",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12583/"
]
} |
58,651 | When using a tty login shell by entering Ctrl-Alt-F1 from an Ubuntu 12.04 install on a laptop the keyboard seems overly sensitive and if my finger lingers for a moment on a button I end up with repeats of the same letter. Is there a way to adjust keyboard sensitivity that would influence the keyboard response when accessing a login shell from a tty instance? | It is called 'keyboard auto repeat rate' and you can set it with kbdrate Mine is set to: $ sudo kbdrate
Typematic Rate set to 10.9 cps (delay = 250 ms) You can set same with: $ sudo kbdrate -r 10.9 -d 250
Typematic Rate set to 10.9 cps (delay = 250 ms) Check the manual page for exact options: man kbdrate Unsure where the default setting is done, but /etc/rc.local , your .bash_profile , .profile or .bashrc sounds like a good place. | {
"source": [
"https://unix.stackexchange.com/questions/58651",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12997/"
]
} |
58,763 | I have two files opened in vim, each in one tmux pane. I would like to copy let's say 10 lines from one file to another. How can I accomplish this not using the mouse's Copy -> Paste ? | You'll have to use tmux shortcuts. Assuming your tmux command shortcut is the default: Ctrl + b , then: Ctrl + b , [ Enter copy(?) mode. Move to start/end of text to highlight. Ctrl + Space Start highlighting text (on Arch Linux). When I've compiled tmux from source on OSX and other Linux's, just Space on its own usually works. Selected text changes the colours, so you'll know if the command worked. Move to opposite end of text to copy. Alt + w Copies selected text into tmux clipboard. On Mac, use Esc + w . Try Enter if none of the above work. Move cursor to opposite tmux pane, or completely different tmux window. Put the cursor where you want to paste the text you just copied. Ctrl + b , ] Paste copied text from tmux clipboard. tmux is quite good at mapping commands to custom keyboard shortcuts. See Ctrl + b , ? for the full list of set keyboard shortcuts. | {
"source": [
"https://unix.stackexchange.com/questions/58763",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15387/"
]
} |
58,824 | I run gnome shell 3.6 and Eclipse 4.2. I installed Eclipse manually, in my /opt directory, because the Ubuntu package for Eclipse is very outdated. I've created a .desktop file for it, and placed in in ~/.local/share/applications. It looks like this: [Desktop Entry]
Type=Application
Name=Eclipse
Comment=Eclipse Integrated Development Environment
Icon=/opt/eclipse-4.2.1/icon.xpm
Exec=/opt/eclipse-4.2.1/eclipse
Terminal=false
Categories=Development;IDE;Java; I can run Eclipse from the Activities menu; if I hit the super menu and type in "Eclipse" and run it, it starts just fine, and shows up in my launcher/sidebar/dock/whatever it's called. But if I right-click on its icon, there is no "Add to favorites" option. (I notice this is also the case if I run some very old programs, like xeyes and xcalc. it's amazing these are still distributed!) So what is it about a program that determines whether or not the "Add to favorites" option is available? if I knew and understood that, maybe it'd set me on the right path to fixing this Eclipse problem. | Found the answer elsewhere. The .desktop file needs to be named EXACTLY the same as the binary that's launching. Mine was something like eclipse_ide.desktop and the binary that runs is just "eclipse". Gnome shell does not seem to like that. | {
"source": [
"https://unix.stackexchange.com/questions/58824",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28962/"
]
} |
58,846 | In Windows, EXE and DLL have version info, including at least the following fields: file version product version internal name product name copyright In a Linux library / executable: Which fields are present? How to view such info? What tools/libraries read it? | The version info is not explicitly stored in an ELF file . What you have in there is the name of the library, the soname , which includes the major version.
The full version is usually stored as part of the library file name. If you have a library, say libtest.so , then you usually have: libtest.so.1.0.1 - The library file itself, containing the full version
"source": [
"https://unix.stackexchange.com/questions/58846",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18295/"
]
} |
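For completeness, this is how the naming scheme above typically comes into being when building a library (a sketch using the hypothetical libtest from the answer):

```bash
# Embed the soname (name plus major version) at link time:
gcc -shared -fPIC -Wl,-soname,libtest.so.1 -o libtest.so.1.0.1 test.c

# Runtime name (matches the SONAME) and development name used by 'ld -ltest':
ln -s libtest.so.1.0.1 libtest.so.1
ln -s libtest.so.1 libtest.so

# Verify what got embedded:
readelf -d libtest.so.1.0.1 | grep SONAME
```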
58,870 | After the last system update, the Ctrl + left/right arrow command on the zsh terminal doesn't do anything. Also, Ctrl + U behaves wrongly: usually that command erases from the cursor to the beginning of the line, while now it erases the entire line.
Does anyone know how to solve these problems?
Thank you all. | FWIW, this is what worked on my environment (rhel5.x) using zsh's default. bindkey "^[[1;5C" forward-word
bindkey "^[[1;5D" backward-word | {
"source": [
"https://unix.stackexchange.com/questions/58870",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21018/"
]
} |
58,880 | Witness the following: sh-3.2$ mkdir testcase
sh-3.2$ cd testcase
sh-3.2$ sudo touch temp
sh-3.2$ ls -al
total 0
drwxr-xr-x 3 glen staff 102 19 Dec 12:38 .
drwxr-xr-x 12 glen staff 408 19 Dec 12:38 ..
-rw-r--r-- 1 root staff 0 19 Dec 12:38 temp
sh-3.2$ echo nope > temp
sh: temp: Permission denied
sh-3.2$ vim temp
# inside vim
itheivery
# press [ESC]
:wq!
# vim exits
sh-3.2$ ls -al
total 8
drwxr-xr-x 3 glen staff 102 19 Dec 12:38 .
drwxr-xr-x 12 glen staff 408 19 Dec 12:38 ..
-rw-r--r-- 1 glen staff 7 19 Dec 12:38 temp Somehow vim has taken this root-owned file, and changed it into a user owned file! This only seems to work if the user owns the directory - but it still feels like it shouldn't be possible. Can anyone explain how this is done? | You, glen , are the owner of the directory (see the . file in your listing). A directory is just a list of files and you have the permission to alter this list (e.g. add files, remove files, change ownerships to make it yours again, etc.). You may not be able to alter the contents of the file directly, but you can read and unlink (remove) the file as a whole and add new files subsequently. 1 Only witnessing the before and after, this may look like the file has been altered. Vim uses swap files and moves files around under water, so that explains why it seems to write to the same file as you do in your shell, but it's not the same thing. 2 So, what Vim does, comes down to this: cat temp > .temp.swp # copy file by contents into a new glen-owned file
echo nope >> .temp.swp # or other command to alter the new file
rm temp && mv .temp.swp temp # move temporary swap file back 1 This is an important difference in file permission handling between Windows and Unices. In Windows, one is usually not able to remove files you don't have write permission for. 2 update: as noted in the comments, Vim does not actually do it this way for changing the ownership, as the inode number on the temp file does not change (comaring ls -li before and after). Using strace we can see exactly what vim does. The interesting part is here: open("temp", O_WRONLY|O_CREAT|O_TRUNC, 0664) = -1 EACCES (Permission denied)
unlink("temp") = 0
open("temp", O_WRONLY|O_CREAT|O_TRUNC, 0664) = 4
write(4, "more text bla\n", 14) = 14
close(4) = 0
chmod("temp", 0664) = 0 This shows that it only unlinks , but does not close the file descriptor to temp . It rather just overwrites its whole contents ( more text bla\n in my case). I guess this explains why the inode number does not change. | {
"source": [
"https://unix.stackexchange.com/questions/58880",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10223/"
]
} |
58,900 | On Windows, most programs with large, scrollable text containers (e.g. all browsers, most word processors and IDEs) let you press the middle mouse button and then move the mouse to scroll. This scrolling is smooth and allows you to scroll very quickly using just the mouse. When I've used Linux on laptops , two-finger scrolling performs roughly the same function; it's easy to scroll down a page quickly (much more quickly than one can by scrolling a mouse wheel) but the scrolling remains smooth enough to allow precise positioning. I am unsure how to achieve the same thing when running Linux on a Desktop with a mouse. As far as I can tell after a whole bunch of Googling, there are neither application-specific settings to swap to Windows-style middle mouse button behaviour, nor any system-wide settings to achieve the same effect. Just to make this concrete, let's say - if it's relevant - that I'm asking in the context of Firefox, Google Chrome, Gedit and Eclipse on a recent version of either Mint (what I use at home) or Ubuntu (what I use at work). I suspect this is a fairly distro-agnostic and application-agnostic question, though. As far as I can tell, my options for scrolling are: Scroll with the mousewheel - slow! Use the PgUp / PgDn keys - jumps a huge distance at a time so can't be used for precise positioning, and is less comfortable than using the mouse Drag the scroll bar at the right hand side of the screen up and down like I used to do on old Windows PCs with two-button mice. This is what I do in practice, but it's just plain less comfortable than Windows-style middle-mouse scrolling; on a huge widescreen, it takes me most of a second just to move the cursor over from the middle of the screen to the scrollbar, and most of a second to move it back again, and I have to take my eyes off the content I'm actually scrolling to do this. None of these satisfy me! This UI issue is the single thing that poisons my enjoyment of Linux on desktops and almost makes me wish I was using a laptop touchpad instead of a mouse. It irritates me enough that I've concluded that either I'm missing some basic Linux UI feature that solves this problem, or I'm just an oversensitive freak and it doesn't even bother anyone else - but I'm not sure which. So my questions are: Does Windows-style middle mouse button scrolling exist anywhere in the Linux world, or is it really purely a Windows thing? In particular, do any Linux web browsers let you use Windows-style scrolling? Are there any mechanisms for scrolling pages that exist in Linux but not in Windows, especially ones that perform the role I've described? Any other solutions that I'm missing? | The feature you are talking about is called "Auto-Scrolling". It lets you press and hold the middle mouse button and move your mouse to scroll smoothly. In Linux, the default behavior for this action (pressing middle mouse button) is generally pasting text. However, there is a preference setting in Firefox and an extension available for Chrome/Chromium which would let you use the middle mouse button for scrolling and activate this feature. Firefox Open the "Options" tab: "≡" (Open menu) → "Options". Navigate to "General" (it should open to "General" by default). Scroll down to "Browsing".
Under "Browsing", you will find the "Use autoscrolling" option. Put a check mark beside this to activate this functionality in Firefox. Or just search for "autoscrolling" using the search bar. In older versions of Firefox: "Edit" → "Preferences" → "Advanced" → "General" → "Browsing" → "User autoscrolling".
Chrome/Chromium For Chrome/Chromium we can use an Extension called "AutoScroll" (from kaescripts.blogspot.com). Go to this link on Chrome Web Store (obviously using Chrome/Chromium). Click on the button labeled "+ ADD TO CHROME" to install this extension. Click on "Add" in the Confirmation Dialog Box. Other Applications As far as other applications are concerned, I haven't yet found a solution for them. Anyways, it's the tall webpages that create most of the problems for which both Firefox and Chrome/Chromium have a solution. | {
"source": [
"https://unix.stackexchange.com/questions/58900",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29001/"
]
} |
58,903 | I have a script that runs a series of scripts numbered 001,002,003,004... etc down to 041 right now, will be more in the future - and these scripts them selves use some cursor control to print a progress bar and other status information and get the width and height of the terminal from tput cols and tput lines respectively. Without rewriting the sub-scripts, I would like to reserve one line at the bottom for overall status information for the outer script. I was curious if there was a way to set what tput replies for lines and cols. There must be a way because tmux achieves it. I was thinking there may be an environmental variable but the only change I can see that tmux makes when running env is setting the $TERM to screen. Any help would be greatly appreciated | The feature you are talking about is called "Auto-Scrolling". It lets you press and hold the middle mouse button and move your mouse to scroll smoothly. In Linux, the default behavior for this action (pressing middle mouse button) is generally pasting text. However, there is a preference setting in Firefox and an extension available for Chrome/Chromium which would let you use the middle mouse button for scrolling and activate this feature. Firefox Open the "Options" tab: "≡" (Open menu) → "Options". Navigate to "General" (it should open to "General" by default). Scroll down to "Browsing".
Under "Browsing", you will find the "Use autoscrolling" option. Put a check mark beside this to activate this functionality in Firefox. Or just search for "autoscrolling" using the search bar. In older versions of Firefox: "Edit" → "Preferences" → "Advanced" → "General" → "Browsing" → "User autoscrolling".
Chrome/Chromium For Chrome/Chromium we can use an Extension called "AutoScroll" (from kaescripts.blogspot.com). Go to this link on Chrome Web Store (obviously using Chrome/Chromium). Click on the button labeled "+ ADD TO CHROME" to install this extension. Click on "Add" in the Confirmation Dialog Box. Other Applications As far as other applications are concerned, I haven't yet found a solution for them. Anyways, it's the tall webpages that create most of the problems for which both Firefox and Chrome/Chromium have a solution. | {
"source": [
"https://unix.stackexchange.com/questions/58903",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16030/"
]
} |
58,925 | I have a domain set up to point to my LAN's external IP using dynamic DNS, because my external IP address changes frequently. However, I want to create an alias to this host, so I can access it with home . So I appended the following to my /etc/hosts : example.com home However, it doesn’t seem to like the domain name. If I change it to an IP: 0.0.0.0 home then it works, but of course this defeats the purpose of dynamic DNS! Is this possible? | The file /etc/hosts contains IP addresses and host names only. You cannot alias the string "home" in the way that you want by this method. If you were running your own DNS server you'd be able to add a CNAME record to make home.example.com an alias for example.com (in a BIND zone file that would be a line like home.example.com. IN CNAME example.com. ), but otherwise you're out of luck. The best thing you could do is use the same DNS client to update a fully-qualified name. | {
"source": [
"https://unix.stackexchange.com/questions/58925",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29020/"
]
} |
58,934 | Calling dhclient as root works beautifully on my debian machine. However, I would like to allow some users to execute dhclient, too. So far, I have tried these two examples: Example 1: When a normal user calls user@box:~$ dhclient ..., the result is: bash: dhclient: command not found Example 2: I have also tried user@box:~$ /sbin/dhclient ..., and got [...]
can't create /var/lib/dhcp3/dhclient.leases: Permission denied
SIOCSIFADDR: Permission denied
SIOCSIFFLAGS: Permission denied
SIOCSIFFLAGS: Permission denied
Open a socket for LPF: Operation not permitted Now... I have the feeling it is neither a good idea (nor would it work) to chmod the entire /sbin directory for user access, nor does it appear to be elegant to chmod everything dhclient complains about in the second example above. What is the best and safest way to attack this issue? | One standard approach is sudo : add a rule for just this one command to /etc/sudoers (edit it with visudo ), for example username ALL=(root) NOPASSWD: /sbin/dhclient and have those users run sudo /sbin/dhclient . That grants root only for dhclient itself and avoids loosening permissions on /sbin or on the individual files and sockets it complains about. | {
"source": [
"https://unix.stackexchange.com/questions/58934",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27804/"
]
} |
58,969 | How and where can I check what keys have been added with ssh-add to my ssh-agent ? | Use ssh-add -l to list them by fingerprint. $ ssh-add -l
2048 72:...:eb /home/gert/.ssh/mykey (RSA) Or ssh-add -L to get the full key in OpenSSH format. $ ssh-add -L
ssh-rsa AAAAB3NzaC1yc[...]B63SQ== /home/gert/.ssh/id_rsa The latter format is the same as you would put them in a ~/.ssh/authorized_keys file. | {
"source": [
"https://unix.stackexchange.com/questions/58969",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15387/"
]
} |
59,112 | I have created the following script that moves files of a given age from a source directory to a destination directory. It is working perfectly. #!/bin/bash
echo "Enter Your Source Directory"
read source
echo "Enter Your Destination Directory"
read destination
echo "Enter Days"
read days
find "$soure" -type f -mtime "-$days" -exec mv {} "$destination" \;
echo "Files which were $days Days old moved from $soure to $destination" This script moves files great, It also move files of source subdirectory, but it doesn't create subdirectory into destination directory. I want to implement this additional feature in it. with example /home/ketan : source directory
/home/ketan/hex : source subdirectory
/home/maxi : destination directory When I run this script, it also moves hex's files into the maxi directory, but I need that same hex subdirectory to be created inside maxi, with its files moved into it. | Instead of running mv /home/ketan/hex/foo /home/maxi , you'll need to vary the target directory based on the path produced by find . This is easier if you change to the source directory first and run find . . Now you can merely prepend the destination directory to each item produced by find . You'll need to run a shell in the find … -exec command to perform the concatenation, and to create the target directory if necessary. destination=$(cd -- "$destination" && pwd) # make it an absolute path
cd -- "$source" &&
find . -type f -mtime "-$days" -exec sh -c '
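# here $0 is the destination root and $1 is one file path produced by find (like ./hex/foo)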
mkdir -p "$0/${1%/*}"
mv "$1" "$0/$1"
' "$destination" {} \; Note that to avoid quoting issues if $destination contains special characters, you can't just substitute it inside the shell script. You can export it to the environment so that it reaches the inner shell, or you can pass it as an argument (that's what I did). You might save a bit of execution time by grouping sh calls: destination=$(cd -- "$destination" && pwd) # make it an absolute path
cd -- "$source" &&
find . -type f -mtime "-$days" -exec sh -c '
for x do
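# $0 is the destination root; each $x is one source-relative path from the batch find passed in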
mkdir -p "$0/${x%/*}"
mv "$x" "$0/$x"
done
' "$destination" {} + Alternatively, in zsh, you can use the zmv function , and the . and m glob qualifiers to only match regular files in the right date range. You'll need to pass an alternate mv function that first creates the target directory if necessary. autoload -U zmv
mkdir_mv () {
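# called by zmv with the source file as $2 and the target as $3; $3:h is the target's directory part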
mkdir -p -- $3:h
mv -- $2 $3
}
zmv -Qw -p mkdir_mv $source/'**/*(.m-'$days')' '$destination/$1$2' | {
"source": [
"https://unix.stackexchange.com/questions/59112",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28793/"
]
} |
59,132 | In Linux, what does the d mean in the first position of drwxr-xr-x ? And what are all of the possible letters that could be there, and what do they mean? I'm trying to learn more about the Linux file permissions system, and I'd like to see a list of the character meanings for the first slot. | It means that it is a directory. The first mode field is the "special file" designator; regular files display as - (none). As for which possible letters could be there, on Linux the following exist:
d (directory)
c (character device)
l (symlink)
p (named pipe)
s (socket)
b (block device)
D (door, not common on Linux systems, but has been ported) | {
"source": [
"https://unix.stackexchange.com/questions/59132",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/25792/"
]
} |
59,141 | I have an application that seems to have grabbed the mouse (I can move it but can't click anywhere), is there a way to find which app owns the X.org mouse grab? A keyboard shortcut that is supposed to release the mouse didn't seem to work, so I'm interested in something that could give me more information. | You can do this by pressing the XF86LogGrabInfo key, introduced in an xorg-server commit. By default, this keysym is not bound to any physical key or key combination. But you can still activate it using xdotool : xdotool key "XF86LogGrabInfo" After executing that command, a list of active grabs will be logged to the X log. On Ubuntu at least, this is /var/log/Xorg.0.log . It will be somewhere near the end of the log file, but there may be several irrelevant log messages below it. If there are no grabs, it writes:
[1199271.146] (II) End list of active device grabs If there are grabs (here, I opened a menu in Firefox), it logs something like: [1199428.782] (II) Printing all currently active device grabs:
[1199428.782] Active grab 0x4c00000 (core) on device 'Virtual core pointer' (2):
[1199428.782] client pid 15620 /usr/lib/firefox/firefox
[1199428.782] at 1199423728 (from active grab) (device thawed, state 1)
[1199428.782] core event mask 0x7c
[1199428.782] owner-events true, kb 1 ptr 1, confine 0, cursor 0x0
[1199428.782] (II) End list of active device grabs | {
"source": [
"https://unix.stackexchange.com/questions/59141",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2729/"
]
} |
59,155 | I am proficient at using Unix/Linux, but I am not an expert. If I want to open a file (for example, file.txt ), I use vi : vi file.txt This opens the file, and if I want to close it, I use :q! . I have been using this method for the two years that I have been using Unix/Linux. My institution has a cluster running Ubuntu Linux. Today, however, I tried to open a file, and I got these error messages: E576: viminfo: Missing '>' in line: newest to oldest):
E576: viminfo: Missing '>' in line: ?/CJ
E576: viminfo: Missing '>' in line: ?/CG
E576: viminfo: Missing '>' in line: ?/CC
E576: viminfo: Missing '>' in line: ?/OEP
E576: viminfo: Missing '>' in line: ?/CEP
E576: viminfo: Missing '>' in line: ?/dih
E576: viminfo: Missing '>' in line: ?/ang
E576: viminfo: Missing '>' in line: ??b
E576: viminfo: Missing '>' in line: ?/xvg
E136: viminfo: Too many errors, skipping rest of file
Press ENTER or type command to continue So I press Enter . I get the same messages: E576: viminfo: Missing '>' in line: newest to oldest):
E576: viminfo: Missing '>' in line: ?/CJ
E576: viminfo: Missing '>' in line: ?/CG
E576: viminfo: Missing '>' in line: ?/CC
E576: viminfo: Missing '>' in line: ?/OEP
E576: viminfo: Missing '>' in line: ?/CEP
E576: viminfo: Missing '>' in line: ?/dih
E576: viminfo: Missing '>' in line: ?/ang
E576: viminfo: Missing '>' in line: ??b
E576: viminfo: Missing '>' in line: ?/xvg
E136: viminfo: Too many errors, skipping rest of file
Press ENTER or type command to continue Again I press Enter , and finally the file opens for reading/editing. However, the problem repeats when I try to close the file using :q! , and also when I try to open any other file using vi . The key words CJ , CG , CC , OEP , CEP , dih , ang , and xvg (I am not sure about b , though) are all strings that often appear in files that I read using vi , although I am not certain that they all exist in the particular file that I am opening (I do not think so). Thus, perhaps something is wrong with my viminfo file? However, I am using vi , not vim . I am not sure what has happened; do you have any suggestions of how I can diagnose, and possibly fix, this problem? | Do this: rm -f ~/.viminfo The .viminfo file keeps various useful but non-critical state information (command-line history, search patterns, marks and the like). Yours is corrupt. Remove it, and a fresh one will be written the next time you exit. (Your vi is evidently a vim build, since .viminfo is vim's state file; on many systems vi is simply a link to vim.) | {
"source": [
"https://unix.stackexchange.com/questions/59155",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9605/"
]
} |
59,243 | root@server # tar fcz bkup.tar.gz /home/foo/
tar: Removing leading `/' from member names How can I solve this problem and keep the / on file names ? | Use the --absolute-names or -P option to disable this feature. tar fczP bkup.tar.gz /home/foo/
tar fcz bkup.tar.gz --absolute-names /home/foo Be aware that an archive stored with absolute names will also extract to those absolute paths when unpacked with -P , so it can overwrite files outside the working directory; use it with care. | {
"source": [
"https://unix.stackexchange.com/questions/59243",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45797/"
]
} |
59,249 | MYPATH=/var/www/html/error_logs/
TOTALFILE=$(ls $MYPATH* | wc -l)
FILETIME=$(stat --format=%y $MYPATH* | head -5 | cut -d'.' -f1)
FILE=$(ls -1tcr $MYPATH* | head -5 | rev | cut -d/ -f1 | rev)
TOPLINE=$(head -1 $MYPATH* | grep -Po '".*?"' | head -5) How can I elegantly print these 5 files of information into columns with headers? FILE CREATED TIME | FILE NAME | ERROR HEADER
---------------------------------------------
$FILETIME | $FILE | $TOPLINE
2012-11-29 11:27:45 | 684939947465 | "SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)" and so on with all 5 files. total files: $TOTALFILE is there any easy way to get what I want? This is the output I get when I echo every variable: 2012-11-29 11:27:45 2012-11-29 11:27:41 2012-11-28 23:33:01 2012-11-26 10:23:37 2012-11-19 22:49:36
684939947465 1313307654813 1311411049509 1234980770182 354797376843
"SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)" "SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)" "Connection to localhost:6379 failed: Connection refused (111)" "An error occurred connecting to Redis." "SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)" | You can use the column command which on Linux is part of the util-linux package. It's also available on FreeBSD , NetBSD , OpenBSD and DragonFly BSD . Combine this with a loop and you're in business, e.g.: #!/bin/sh
MYPATH=/
TOTALFILE=$(ls $MYPATH/* | wc -l)
FILE=$(ls -1tcr $MYPATH/* | head -5 | rev | cut -d/ -f1 | rev)
declare -a FILES
declare -a FILETIME
OUTPUT="FILENAME CREATED TIME ERROR_HEADER\n\n------------------------------ ----------------------------- ----------------------------------- ------$"
for i in $MYPATH/*;
do
FILES[${#FILES[@]}]="$i"
FILETIME[${#FILETIME[@]}]=$(stat --format=%y $i | head -5 | cut -d'.' -f1)
TOPLINE=$(head -1 $i | grep -Po '".*?"' | head -5)
OUTPUT="$OUTPUT\n${FILES[${#FILES[@]}-1]} ${FILETIME[${#FILETIME[@]}-1]} $TOPLINE\n"
done
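# column -t aligns the whitespace-separated fields into a table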
echo -ne $OUTPUT | column -t | {
"source": [
"https://unix.stackexchange.com/questions/59249",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29187/"
]
} |
59,276 | How does one extract a specific folder from a zipped archive to a given directory? I tried using unzip "/path/to/archive.zip" "in/archive/folder/" -d "/path/to/unzip/to" but that only creates the folder on the path I want it to unzip to and does nothing else. | unzip /path/to/archive.zip "in/archive/folder/*" -d "/path/to/unzip/to" The trailing /* (quoted so the shell doesn't expand it) matches the files inside the folder; without it, unzip matches only the directory entry itself, which is why you got an empty folder. | {
"source": [
"https://unix.stackexchange.com/questions/59276",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29195/"
]
} |
59,300 | I have read somewhere that one can put a file on a Linux system into memory, and loading it will be superfast. How do I do this? How do I verify the file is loaded from memory? | On Linux, you probably already have a tmpfs filesystem that you can write to at /dev/shm . $ >/dev/shm/foo
$ df /dev/shm/foo
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 224088 0 224088 0% /dev/shm This may use swap, however. For a true ramdisk (that won't swap), you need to use the ramfs filesystem. mount ramfs -t ramfs /mountpoint | {
"source": [
"https://unix.stackexchange.com/questions/59300",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24053/"
]
} |
59,303 | I used the options below in a script to print to the terminal as well as to /var/log/messages : echo "System Load is OK : $Current_loadadv"
echo "System Load is OK : $Current_loadadv" | logger but how can I do that in a single line? | logger 's -s option (in the util-linux logger at least) writes the message to standard error, i.e. the terminal, as well as to the system log, so a single call does both: logger -s "System Load is OK : $Current_loadadv" | {
"source": [
"https://unix.stackexchange.com/questions/59303",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28434/"
]
} |
59,336 | With diff -r I can do this task, however it takes so long because diff checks the files' content. I want something that determines that two files are the same based on their size, last-modified time, etc., without checking the file bit by bit (for example a video takes sooo long). Is there any other way? | rsync, by default, compares only file metadata. that means timestamp, size, and attributes. among others. but not content of files. rsync -n -a -i --delete source/ target/ explanation: -n do not actually copy or delete <-- THIS IS IMPORTANT!!1 -a compare all metadata of file like timestamp and attributes -i print one line of information per file --delete also report files which are not in source note: it is important to append the directory names with a slash. this is an rsync thing. you can shorten the one letter options like this rsync -nai --delete source/ target/ if you also want to see lines printed for files that are identical then provide -i twice rsync -naii --delete source/ target/ example output: *deleting removedfile (file in target but not in source)
.d..t...... ./ (directory with different timestamp)
>f.st...... modifiedfile (file with different size and timestamp)
>f+++++++++ newfile (file in source but not in target)
.f samefile (file that has same metadata. only with -ii) remember that rsync only compares metadata. that means if the file content changed but metadata stayed the same then rsync will report that file is same. this is an unlikely scenario. so either trust that when metadata is same then data is same, or you have to compare file data bit by bit. rsync's -c ( --checksum ) option does that comparison for you by checksumming file contents instead of looking at metadata, at the cost of reading every file in full. bonus: for progress information see here: Estimate time or work left to finish for rsync? | {
"source": [
"https://unix.stackexchange.com/questions/59336",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27807/"
]
} |
59,350 | I wanted to try vi mode in bash but now I would like to change it back to normal. How can I unset -o vi ? | The only two line editing interfaces currently available in bash are vi mode and emacs mode, so all you need to do is set emacs mode again. set -o emacs | {
"source": [
"https://unix.stackexchange.com/questions/59350",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15387/"
]
} |
59,360 | So I started using zsh . I like it all right. It seems very cool and slick, and the fact that the current working directory and actual command line are on different lines is nice, but at the same time, I'm noticing that zsh can be a bit slower than bash , especially when printing text to the screen. The thing I liked best was the fact that zsh was 'backward compatible' with all of the functions I defined in my .bashrc . One gripe though. The functions all work perfectly, but I can't figure out how the exporting system works. I had some of those .bashrc functions exported so that I could use them elsewhere, such as in scripts and external programs, with export -f . In zsh, exporting doesn't seem to even be talked about. Is it autoloading? Are those two things the same? I'm having a seriously hard time figuring that out. | Environment variables containing functions are a bash hack. Zsh doesn't have anything similar. You can do something similar with a few lines of code. Environment variables contain strings; older versions of bash, before Shellshock was discovered, stored the function's code in a variable whose name is that of the function and whose value is () { followed by the function's code followed by } . You can use the following code to import variables with this encoding, and attempt to run them with bash-like settings. Note that zsh cannot emulate all bash features, all you can do is get a bit closer (e.g. to make $foo split the value and expand wildcards, and make arrays 0-based). bash_function_preamble='
emulate -LR ksh
'
for name in ${(k)parameters}; do
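# scan all shell parameters; the next three tests keep only exported ones whose value looks like a serialized bash function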
[[ "-$parameters[name]-" = *-export-* ]] || continue
[[ ${(P)name} = '() {'*'}' ]] || continue
((! $+builtins[$name])) || continue
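# strip the leading "() {" and trailing "}" from the value and install the body as a zsh function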
functions[$name]=$bash_function_preamble${${${(P)name}#"() {"}%"}"}
done (As Stéphane Chazelas , the original discoverer of Shellshock, noted, an earlier version of this answer could execute arbitrary code at this point if the function definition was malformed. This version doesn't, but of course as soon as you execute any command, it could be a function imported from the environment.) Post-Shellshock versions of bash encode functions in the environment using invalid variable names (e.g. BASH_FUNC_myfunc%% ). This makes them harder to parse reliably as zsh doesn't provide an interface to extract such variable names from the environment. I don't recommend doing this. Relying on exported functions in scripts is a bad idea: it creates an invisible dependency in your script. If you ever run your script in an environment that doesn't have your function (on another machine, in a cron job, after changing your shell initialization files, …), your script won't work anymore. Instead, store all your functions in one or more separate files (something like ~/lib/shell/foo.sh ) and start your scripts by importing the functions that it uses ( . ~/lib/shell/foo.sh ). This way, if you modify foo.sh , you can easily search which scripts are relying on it. If you copy a script, you can easily find out which auxiliary files it needs. Zsh (and ksh before it) makes this more convenient by providing a way to automatically load functions in scripts where they are used. The constraint is that you can only put one function per file. Declare the function as autoloaded, and put the function definition in a file whose name is the name of the function. Put this file in a directory listed in $fpath (which you may configure through the FPATH environment variable). In your script, declare autoloaded functions with autoload -U foo . Furthermore zsh can compile scripts, to save parsing time. Call zcompile to compile a script. This creates a file with the .zwc extension. If this file is present then autoload will load the compiled file instead of the source code. You can use the zrecompile function to (re)compile all the function definitions in a directory. | {
"source": [
"https://unix.stackexchange.com/questions/59360",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1389/"
]
} |
59,434 | I have not had the chance to read enough about Android , Linux, or UNIX to answer this myself. sudo works on a Linux machine but doesn't work on Android unless you root the mobile device (e.g. Samsung GT-N8013). Why does the mobile device need to be rooted, but not the typical Linux install? The context of my question is related to https://stackoverflow.com/questions/14019698/adb-shell-sudo-on-windows-7/14019726#14019726 (Also, is there any way for a program to ask to run as root on Android, the same way you have escalation of privileges to "run as administrator" on Windows? If you think this question should be on its own thread, I can create one) | sudo is a normal application with the suid bit. This means that in order to use sudo it has to be installed on the system. Not all Linux systems have sudo installed by default (Debian, for example). Most Android systems are targeted at end users who don't need to know the internals of Android (i.e. each Android application runs under its own user), so there is no need to provide an interactive way for an end user to run a command as system administrator. In general you can use su instead of sudo to run a command as a different user, but you have to know the credentials for the target user for su (for sudo you have to know the credentials of the user running the command) | {
"source": [
"https://unix.stackexchange.com/questions/59434",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29268/"
]
} |
59,537 | I have set up several functions in my .bashrc file. I would like to just display the actual code of the function and not execute it, to quickly refer to something. Is there any way we could see the function definition? | The declare builtin's -f option does that: bash-4.2$ declare -f apropos1
apropos1 ()
{
apropos "$@" | grep ' (1.*) '
} I use type for that purpose; it is shorter to type ;) bash-4.2$ type apropos1
apropos1 is a function
apropos1 ()
{
apropos "$@" | grep ' (1.*) '
} | {
"source": [
"https://unix.stackexchange.com/questions/59537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17265/"
]
} |
59,585 | I use Linux and Mac OS X on a regular basis, and sometimes I have to use Windows. I need to use a flash drive on all three, and I need a filesystem that will work well on all of them. None of the ext's work on Mac or Windows, HFS+ doesn't work on Windows (or well on Linux), NTFS is read-only on Mac, and FAT sucks on all OSes. Is there a file system that would work reasonably well on all operating systems? I'd like it to work without drivers or additional installations, so it can be used on any computer. | UDF is a candidate. It works out-of-the-box on linux >= 2.6.31, Windows >= Vista, MacOS >= 9 and on many BSDs. Note: UDF comes in different versions, which are not equally supported on all platforms, see Wikipedia - Compatibility . UDF can be created on Linux with the tool mkudffs from the package udftools . | {
"source": [
"https://unix.stackexchange.com/questions/59585",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19064/"
]
} |
59,653 | I am running Mint 13 with Cinnamon 1.6. I would like my desktop wallpaper to automatically change depending on the time of day. So, the first thing that comes to mind is setting up a cron job to do it for me. Problem is, I don't know how to change the wallpaper from a script / terminal. What I would like to know: 1) How would one change the background from the terminal? 2) Is there already a built-in way of doing this? | This is the correct way to do it; anything else would just be a hack: gsettings set org.cinnamon.desktop.background picture-uri "file:///filename" | {
"source": [
"https://unix.stackexchange.com/questions/59653",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22426/"
]
} |
59,669 | I have become so used to do this: someprogram >output.file I do it whenever I want to save the output that a program generates to a file. I am also aware of the two variants of this IO redirection : someprogram 2>output.of.stderr.file (for stderr) someprogram &>output.stderr.and.stdout.file (for both stdout+stderr combined) Today I have run across a situation I have not thought possible. I use the following command xinput test 10 and as expected I have the following output: user@hostname:~$ xinput test 10
key press 30
key release 30
key press 40
key release 40
key press 32
key release 32
key press 65
key release 65
key press 61
key release 61
key press 31
^C
user@hostname:~$ I expected that this output could as usual be saved to a file like using xinput test 10 > output.file . But when contrairy to my expectation the file output.file remains empty. This is also true for xinput test 10 &> output.file just to make sure I do not miss something on stdout or stderr. I am really confused and hence ask here if the xinput program might have a way to avoid its output to be redirected? update I have looked at the source. It seems the output is generated by this code (see snippet below). It appears to me the output would be generated by an ordinary printf //in file test.c
static void print_events(Display *dpy)
{
XEvent Event;
while(1) {
XNextEvent(dpy, &Event);
// [... some other event types are omnited here ...]
if ((Event.type == key_press_type) ||
(Event.type == key_release_type)) {
int loop;
XDeviceKeyEvent *key = (XDeviceKeyEvent *) &Event
printf("key %s %d ", (Event.type == key_release_type) ? "release" : "press ", key->keycode);
for(loop=0; loopaxes_count; loop++) {
printf("a[%d]=%d ", key->first_axis + loop, key->axis_data[loop]);
}
printf("\n");
}
}
} I modified the source to this (see next snippet below), which allows me to have a copy of the output on stderr. This output I am able to redirect: //in file test.c
static void print_events(Display *dpy)
{
XEvent Event;
while(1) {
XNextEvent(dpy, &Event);
// [... some other event types are omnited here ...]
if ((Event.type == key_press_type) ||
(Event.type == key_release_type)) {
int loop;
XDeviceKeyEvent *key = (XDeviceKeyEvent *) &Event
printf("key %s %d ", (Event.type == key_release_type) ? "release" : "press ", key->keycode);
fprintf(stderr,"key %s %d ", (Event.type == key_release_type) ? "release" : "press ", key->keycode);
for(loop=0; loopaxes_count; loop++) {
printf("a[%d]=%d ", key->first_axis + loop, key->axis_data[loop]);
}
printf("\n");
}
}
} My idea at present is that maybe by doing the redirect the program looses its ability to monitor the key-press key-release events. | It's just that when stdout is not a terminal, output is buffered. And when you press Ctrl-C , that buffer is lost as/if it has not been written yet. You get the same behavior with anything using stdio . Try for instance: grep . > file Enter a few non-empty lines and press Ctrl-C , and you'll see the file is empty. On the other hand, type: xinput test 10 > file And type enough on the keyboard for the buffer to get full (at least 4k worth of ouput), and you'll see the size of file grow by chunks of 4k at a time. With grep , you can type Ctrl-D for grep to exit gracefully after having flushed its buffer. For xinput , I don't think there's such an option. Note that by default stderr is not buffered which explains why you get a different behaviour with fprintf(stderr) If, in xinput.c , you add a signal(SIGINT, exit) , that is tell xinput to exit gracefully when it receives SIGINT , you'll see the file is no longer empty (assuming it doesn't crash, as calling library functions from signal handlers isn't guaranteed safe: consider what could happen if the signal comes in while printf is writing to the buffer). If it's available, you could use the stdbuf command to alter the stdio buffering behaviour: stdbuf -oL xinput test 10 > file There are many questions on this site that cover disabling stdio type buffering where you'll find even more alternative solutions. | {
"source": [
"https://unix.stackexchange.com/questions/59669",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24394/"
]
} |
59,685 | I am using sshfs to mount a folder with some python projects over ssh to my ~/ directory. $ mkdir -p ~/mount/my-projects
$ sshfs [email protected]:/home/user/my-projects ~/mount/my-projects I can perform most commands as could be expected: $ ls ~/mount/my-projects
some-python-project But if I try to do anything with sudo , it fails with permission denied: $ sudo ls ~/mount/my-projects
ls: cannot access /home/user/mount/my-projects: Permission denied What I'm actually trying to accomplish is to test a python package installation script on my local machine: $ cd ~/mount/my-projects/some-python-project
$ sudo python setup.py install | I believe you need to use the allow_other option to sshfs. In order to do this, you should call it with sudo, as follows: sudo sshfs -o allow_other user@myserver:/home/user/myprojects ~/mount/myprojects Without this option, only the user who ran sshfs can access the mount. This is a fuse restriction. More info is available by typing man fuse . You should also note that (on Ubuntu at least) you need to be a member of the 'fuse' group, otherwise the command above will complain about not being able to access /etc/fuse.conf when run without 'sudo'. | {
"source": [
"https://unix.stackexchange.com/questions/59685",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29398/"
]
} |
59,854 | In a case where I can use only the UDP and ICMP protocols, how can I discover, in bytes, the path MTU for packet transfer from my computer to a destination IP? | I believe what you are looking for is most easily obtained via traceroute --mtu <target> ; maybe with a -6 switch thrown in for good measure depending on your interests.
Linux traceroute uses UDP by default; if you believe your luck is better with ICMP, try also -I . An alternative worth knowing is tracepath (from iputils), which needs no special privileges and prints the discovered pmtu, e.g. tracepath -n <target> . | {
"source": [
"https://unix.stackexchange.com/questions/59854",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29480/"
]
} |
59,893 | I try to search for lines that start with "1" using ls -1 | grep ^1* but it returns lines that do not start with 1. What am I missing here? | Your regular expression doesn't mean what you think it does. It matches all lines starting (^) with one (1) repeated zero or more (*) times. All strings match that regular expression. grep '^1' does what you want. | {
"source": [
"https://unix.stackexchange.com/questions/59893",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
59,929 | I want to add a permanent iptables rule to my new VPS , and after a brief Google search I was surprised to find that there are two places this rule can be added, which seem identical: /etc/rc.local and /etc/init.d/rc.local . Does anyone know why there are two places for simple startup code? Is it Linux-flavor specific (but Ubuntu has both!)? Or is one of them deprecated? | /etc/init.d is maintained on Ubuntu for backward compatibility with sysvinit stuff. If you actually look at /etc/init.d/rc.local you'll see (also from a 12.04 LTS Server): #! /bin/sh
### BEGIN INIT INFO
# Provides: rc.local
# Required-Start: $remote_fs $syslog $all
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop:
# Short-Description: Run /etc/rc.local if it exist
### END INIT INFO And "Run /etc/rc.local" is exactly what it does. The entirety of /etc/rc.local is: #!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
exit 0 I would guess the purpose in doing this is to provide a dead simple place to put shell commands you want run at boot, without having to deal with the stop|start service stuff, which is in /etc/init.d/rc.local . So it is in fact a service, and can be run as such. I added an echo line to /etc/rc.local and: »service rc.local start
hello world However, I do not believe it is referenced by anything in upstart's /etc/init (not init.d!) directory: »initctl start rc.local
initctl: Unknown job: rc.local There are a few "rc" services in upstart: »initctl list | grep rc
rc stop/waiting
rcS stop/waiting
rc-sysinit stop/waiting But none of those seem to have anything to do with rc.local. | {
"source": [
"https://unix.stackexchange.com/questions/59929",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29525/"
]
} |
59,959 | Why do Linux people always say to read the manual when it would be so much easier to just give you an answer? There's no manual! It didn't come with one. | There is a manual, you just have to know where it is. It can be accessed with the man command. If you are unsure how to use it, type man man . The man command is very important; remember it even if you forget everything else. The manual contains detailed information about a variety of topics, which are separated into several sections:
1. General commands
2. System calls
3. Library functions, covering in particular the C standard library
4. Special files (usually devices, those found in /dev ) and drivers
5. File formats and conventions
6. Games and screensavers
7. Miscellaneous
8. System administration commands and daemons
The notation ls(1) refers to the ls page in section 1. To read it type man 1 ls or man ls . To avoid being told to read the manual when you ask a question, try man command , apropos command , command -? , command --help , and a few Google searches. If you do not understand something in the manual, quote it in your question and try to explain what you don't understand. Usually when they ask you to read the manual, it is because they think it will be more beneficial to you than a simple, incomplete answer. If you don't know which man pages are relevant, ask. | {
"source": [
"https://unix.stackexchange.com/questions/59959",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29537/"
]
} |
60,007 | Sometimes, when I use synergy between my machines when one is using full-screen VirtualBox guest I get stuck with some weird key modifiers turned on. If they exist on my keyboard (like Shift_L) I can just tap it and their status is reset and I can continue typing in small letters. But some of them are not mapped to my keyboard at all (like ISO_Level3_Shift), so I have no means of turning them off at all! How to reset them? Right now, all I can do is to reboot the computer, but it's rather embarrassing solution. All I want is some program that can artificially "tap" all possible keyboard modifiers, so their status would be reset. Can it be done? I use Mint 14 (a clone of Ubuntu 12.10 Quantal). | If you have xdotool installed, you could just simply use xdotool keyup ISO_Level3_Shift Which sends a key release (for ISO_Level3_Shift, of course) event to the X server. But you wanted a program to release all modifier keys.
One could use xdotool to achieve that easily, if not for that I have no idea what modifier keysyms are defined. One possible method of finding them is to parse keysymdef.h : grep '^#define' /usr/include/X11/keysymdef.h | sed -r 's/^#define XK_(\S*?).*$/\1/;' | grep -E '_(L|R|Level.*)$' Which returns some keysyms that surely are modifiers. Unfortunately, I can't find any precise definition of a modifier key right now, so I don't know whether that's a complete list. Appending | xargs xdotool keyup to the above pipeline will release all those keys. On my system, it executes the following command: xdotool keyup Shift_L Shift_R Control_L Control_R Meta_L Meta_R Alt_L Alt_R Super_L Super_R Hyper_L Hyper_R ISO_Level2_Latch ISO_Level3_Shift ISO_Level3_Latch ISO_Level3_Lock ISO_Level5_Shift ISO_Level5_Latch ISO_Level5_Lock | {
"source": [
"https://unix.stackexchange.com/questions/60007",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17765/"
]
} |
60,034 | How are character special files and block special files different from regular files in a Unix-like system? Why are they called “character special” and “block special” respectively? | When a program reads or writes data from a file, the requests go to a kernel driver. If the file is a regular file, the data is handled by a filesystem driver and it is typically stored in zones on a disk or other storage media, and the data that is read from a file is what was previously written in that place. There are other file types for which different things happen. When data is read or written to a device file, the request is handled by the driver for that device. Each device file has an associated number which identifies the driver to use. What the device does with the data is its own business. Block devices (also called block special files) usually behave a lot like ordinary files: they are an array of bytes, and the value that is read at a given location is the value that was last written there. Data from block device can be cached in memory and read back from cache; writes can be buffered. Block devices are normally seekable (i.e. there is a notion of position inside the file which the application can change). The name “block device” comes from the fact that the corresponding hardware typically reads and writes a whole block at a time (e.g. a sector on a hard disk). Character devices (also called character special files) behave like pipes, serial ports, etc. Writing or reading to them is an immediate action. What the driver does with the data is its own business. Writing a byte to a character device might cause it to be displayed on screen, output on a serial port, converted into a sound, ... Reading a byte from a device might cause the serial port to wait for input, might return a random byte ( /dev/urandom ), ... The name “character device” comes from the fact that each character is handled individually. See Wikipedia and Understanding /dev and its subdirs and files for more information. | {
"source": [
"https://unix.stackexchange.com/questions/60034",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28032/"
]
} |
60,041 | In bash, when running with the -x option, is it possible to exempt individual commands from echoing? I'm trying to make the output as neat as possible, so I am running certain parts of my script in a subshell with set +x . However, the row set +x itself is still echoed and adds no valuable information to the output. I remember back in the bad old .bat days, when running with echo on , individual lines could be exempted by starting them with a @ . Is there any equivalent in bash? #!/bin/bash -x
function i_know_what_this_does() {
(
set +x
echo do stuff
)
}
echo the next-next line still echoes 'set +x', is that avoidable?
i_know_what_this_does
echo and we are back and echoing is back on When running the above, output is: + echo the next-next line still echoes 'set +x,' is that 'avoidable?'
the next-next line still echoes set +x, is that avoidable?
+ i_know_what_this_does
+ set +x
do stuff
+ echo and we are back and echoing is back on
and we are back and echoing is back on | xtrace output goes to stderr, so you could redirect stderr to /dev/null : i_know_what_this_does() {
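# the 2> /dev/null applied to the function's closing brace below silences the xtrace output for the whole body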
echo do stuff
} 2> /dev/null If you still want to see the errors from the commands run inside the functions, you could do i_know_what_this_does() (
{ set +x; } 2> /dev/null # silently disable xtrace
echo do stuff
) Note the use of (...) instead of {...} to provide a local scope for that function via a subshell. bash , since version 4.4 now supports local - like in the Almquist shell to make options local to the function (similar to set -o localoptions in zsh ), so you could avoid the subshell by doing: i_know_what_this_does() {
{ local -; set +x; } 2> /dev/null # silently disable xtrace
echo do stuff
} An alternative for bash 4.0 to 4.3 would be to use the $BASH_XTRACEFD variable and have a dedicated file descriptor open on /dev/null for that: exec 9> /dev/null
set -x
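# fd 9 stays open on /dev/null for the whole script; BASH_XTRACEFD=9 sends the trace there instead of the terminal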
i_know_what_this_does() {
{ local BASH_XTRACEFD=9; } 2> /dev/null # silently disable xtrace
echo do stuff
} Since bash lacks the ability to mark a fd with the close-on-exec flag, that has the side effect of leaking that fd to other commands, though. See also the locvar.sh script, which contains a few functions to implement local scope for variables and functions in POSIX scripts and also provides trace_fn and untrace_fn functions to make them xtrace d or not. | {
"source": [
"https://unix.stackexchange.com/questions/60041",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17980/"
]
} |
60,078 | Could you recommend a way to figure out which driver is being used for a USB device.
Sort of a usb equivalent of lspci -k command. | Finding the Kernel Driver(s) The victim device $ lsusb
Bus 010 Device 002: ID 046d:c01e Logitech, Inc. MX518 Optical Mouse
Bus 010 Device 003: ID 051d:0002 American Power Conversion Uninterruptible Power Supply We're going to try to find out what driver is used for the APC UPS. Note that there are two answers to this question: The driver that the kernel would use, and the driver that is currently in use. Userspace can instruct the kernel to use a different driver (and in the case of my APC UPS, nut has). Method 1: Using usbutils (easy) The usbutils package (on Debian, at least) includes a script called usb-devices . If you run it, it outputs information about the devices on the system, including which driver is used: $ usb-devices
⋮
T: Bus=10 Lev=01 Prnt=01 Port=01 Cnt=02 Dev#= 3 Spd=1.5 MxCh= 0
D: Ver= 1.10 Cls=00(>ifc ) Sub=00 Prot=00 MxPS= 8 #Cfgs= 1
P: Vendor=051d ProdID=0002 Rev=01.06
S: Manufacturer=American Power Conversion
S: Product=Back-UPS RS 1500 FW:8.g9 .D USB FW:g9
S: SerialNumber=XXXXXXXXXXXX
C: #Ifs= 1 Cfg#= 1 Atr=a0 MxPwr=24mA
I: If#= 0 Alt= 0 #EPs= 1 Cls=03(HID ) Sub=00 Prot=00 Driver=usbfs
⋮ Note that this lists the current driver, not the default one. There isn't a way to find the default one. Method 2: Using debugfs (requires root) If you have debugfs mounted, the kernel maintains a file in the same format as usb-devices prints out at /sys/kernel/debug/usb/devices ; you can view with less , etc. Note that debugfs interfaces are not stable, so different kernel versions may print in a different format, or be missing the file entirely. Once again, this only shows the current driver, not the default. Method 3: Using only basic utilities to read /sys directly (best for scripting or recovery) You can get the information out of /sys , though it's more painful than lspci . These /sys interfaces should be reasonably stable, so if you're writing a shell script, this is probably how you want to do it. Initially, lsusb seems to count devices from 1, /sys from 0. So 10-2 is a good guess for where to find the APC UPS lsusb gives as bus 10, device 3. Unfortunately, over time that mapping breaks down—sysfs re-uses numbers even when device numbers aren't. The devnum file's contents will match the device number given by lsusb, so you can do something like this: $ grep -l '^3$' /sys/bus/usb/devices/10-*/devnum # the ^ and $ to prevent also matching 13, 31, etc.
/sys/bus/usb/devices/10-2/devnum So, in this case, it's definitely 10-2 . $ cd /sys/bus/usb/devices/10-2
$ ls
10-2:1.0 bDeviceClass bMaxPower descriptors ep_00 maxchild remove urbnum
authorized bDeviceProtocol bNumConfigurations dev idProduct power serial version
avoid_reset_quirk bDeviceSubClass bNumInterfaces devnum idVendor product speed
bcdDevice bmAttributes busnum devpath ltm_capable quirks subsystem
bConfigurationValue bMaxPacketSize0 configuration driver manufacturer removable uevent We can be sure this is the right device by cat ing a few of the files: $ cat idVendor idProduct manufacturer product
051d
0002
American Power Conversion
Back-UPS RS 1500 FW:8.g9 .D USB FW:g9 If you look in 10-2:1.0 ( :1 is the "configuration", .0 the interface—a single USB device can do multiple things, and have multiple drivers; lsusb -v will show these), there is a modalias file and a driver symlink: $ cat 10-2\:1.0/modalias
usb:v051Dp0002d0106dc00dsc00dp00ic03isc00ip00in00
$ readlink driver
../../../../../../bus/usb/drivers/usbfs So, the current driver is usbfs . You can find the default driver by asking modinfo about the modalias: $ /sbin/modinfo `cat 10-2\:1.0/modalias`
filename: /lib/modules/3.6-trunk-amd64/kernel/drivers/hid/usbhid/usbhid.ko
license: GPL
description: USB HID core driver
author: Jiri Kosina
author: Vojtech Pavlik
author: Andreas Gal
alias: usb:v*p*d*dc*dsc*dp*ic03isc*ip*in*
depends: hid,usbcore
intree: Y
vermagic: 3.6-trunk-amd64 SMP mod_unload modversions
parm: mousepoll:Polling interval of mice (uint)
parm: ignoreled:Autosuspend with active leds (uint)
parm: quirks:Add/modify USB HID quirks by specifying quirks=vendorID:productID:quirks where vendorID, productID, and quirks are all in 0x-prefixed hex (array of charp) So, the APC UPS defaults to the hid driver, which is indeed correct. And it's currently using usbfs, which is correct since nut 's usbhid-ups is monitoring it. What about userspace (usbfs) drivers? When the driver is usbfs , it basically means a userspace (non-kernel) program is functioning as the driver. Finding which program it is requires root (unless the program is running as your user) and is fairly easy: whichever program has the device file open. We know that our "victim" device is bus 10, device 3. So the device file is /dev/bus/usb/010/003 (at least on a modern Debian), and lsof provides the answer: # lsof /dev/bus/usb/010/003
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
usbhid-up 4951 nut 4u CHR 189,1154 0t0 8332 /dev/bus/usb/010/003 And indeed, it's usbhid-ups as expected (lsof truncated the command name to make the layout fit; if you need the full name, you can use ps 4951 to get it, or probably some lsof output formatting options). | {
"source": [
"https://unix.stackexchange.com/questions/60078",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29436/"
]
} |
60,088 | I am using Linux Mint 14 (MATE) with Cinnamon installed. Cinnamon failed when I tried to delete an item on the menu with menu editor. The system froze, so I logged out by pressing Ctrl + Alt + Delete . When I logged back in, the desktop panel was not loaded (shortcuts were there as normal, and right-click works as well). I can barely do anything but get the terminal with Ctrl + Alt + t . But all the dialogue windows are presented without any frames. Here are screenshots: I tried reinstalling Cinnamon in the MATE desktop interface (it works fine), but it failed to solve the problem. | | {
"source": [
"https://unix.stackexchange.com/questions/60088",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29513/"
]
} |
60,098 | There is the well known while condition; do ...; done loop, but is there a do... while style loop that guarantees at least one execution of the block? | A very versatile version of a do ... while has this structure: while
Commands ...
do :; done An example is: #i=16
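# the real work and the exit test live in the while condition list; the loop body is just the no-op ":"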
while
echo "this command is executed at least once $i"
: ${start=$i} # capture the starting value of i
# some other commands # needed for the loop
(( ++i < 20 )) # Place the loop ending test here.
do :; done
echo "Final value of $i///$start"
echo "The loop was executed $(( i - start )) times " As it is (no value set for i ) the loop executes 20 times. UN-Commenting the line that sets i to 16 i=16 , the loop is executed 4 times. For i=16 , i=17 , i=18 and i=19 . If i is set to (let's say 26) at the same point (the start), the commands still get executed the first time (until the loop break command is tested). The test for a while should be truthy (exit status of 0). The test should be reversed for an until loop, i.e.: be falsy (exit status not 0). A POSIX version needs several elements changed to work: i=16
while
echo "this command is executed at least once $i"
: ${start=$i} # capture the starting value of i
# some other commands # needed for the loop
i="$((i+1))" # increment the variable of the loop.
[ "$i" -lt 20 ] # test the limit of the loop.
do :; done
echo "Final value of $i///$start"
echo "The loop was executed $(( i - start )) times " ./script.sh
this command is executed at least once 16
this command is executed at least once 17
this command is executed at least once 18
this command is executed at least once 19
Final value of 20///16
The loop was executed 4 times | {
"source": [
"https://unix.stackexchange.com/questions/60098",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2180/"
]
} |
60,205 | In Linux and Unix systems there are two common search commands: locate and find . What are the pros and cons of each? When does one have benefits over the other? | locate(1) has only one big advantage over find(1) : speed. find(1) , though, has many advantages over locate(1) : find(1) is primordial, going back to the very first version of AT&T Unix . You will even find it in cut-down embedded Linuxes via Busybox . It is all but universal. locate(1) is much younger than find(1) . The earliest ancestor of locate(1) didn't appear until 1983 , and it wasn't widely available as " locate " until 1994, when it was adopted into GNU findutils and into 4.4BSD . locate(1) is also nonstandard , thus it is not installed by default everywhere. Some POSIX type OSes don't even offer it as an option, and where it is available, the implementation may be lacking features you want because there is no independent standard specifying the minimum feature set that must be available. There is a de facto standard, being BSD locate(1) , but that is only because the other two main flavors of locate implement all of its options: -0 , -c , -d , -i , -l , -m , -s , and -S . mlocate implements 6 additional options not in BSD locate : -b , -e , -P , -q , --regex and -w . GNU locate implements those six plus another four : -A , -D , -E , and -p . (I'm ignoring aliases and minor differences like -? vs -h vs --help .) The BSDs and Mac OS X ship BSD locate . Most Linuxes ship GNU locate , but Red Hat Linuxes and Arch ship mlocate instead. Debian doesn't install either in its base install, but offers both versions in its default package repositories; if both are installed at once, " locate " runs mlocate . Oracle has been shipping mlocate in Solaris since 11.2 , released in December 2014. Prior to that, locate was not installed by default on Solaris. (Presumably, this was done to reduce Solaris' command incompatibility with Oracle Linux , which is based on Red Hat Enterprise Linux , which also uses mlocate .) IBM AIX still doesn't ship any version of locate , at least as of AIX 7.2 , unless you install GNU findutils from the AIX Toolbox for Linux Applications . HP-UX also appears to lack locate in the base system. Older "real" Unixes generally did not include an implementation of locate . find(1) has a powerful expression syntax, with many functions, Boolean operators , etc. find(1) can select files by more than just name. It can select by:
age
size
owner
file type
timestamp
permissions
depth within the subtree...
When finding files by name, you can search using file globbing syntax in all versions of find(1) , or in GNU or BSD versions, using regular expressions . Current versions of locate(1) accept glob patterns as find does, but BSD locate doesn't do regexes at all. If you're like me and have to use a variety of machine types, you find yourself preferring grep filtering to developing a dependence on -r or --regex . locate needs strong filtering more than find does because... find(1) doesn't necessarily search the entire filesystem. You typically point it at a subdirectory, a parent containing all the files you want it to operate on. The typical behavior for a locate(1) implementation is to spew up all files matching your pattern, leaving it to grep filtering and such to cut its eruption down to size. (Evil tip: locate / will probably get you a list of all files on the system!)
There are variants of locate(1) like slocate(1) which restrict output based on user permissions, but this is not the default version of locate in any major operating system. find(1) can do things to files it finds, in addition to just finding them. The most powerful and widely supported such operator is -exec , but there are others. In recent GNU and BSD find implementations, for example, you have the -delete and -execdir operators. find(1) runs in real time, so its output is always up to date. Because locate(1) relies on a database updated hours or days in the past, its output can be outdated. (This is the stale cache problem .) This coin has two sides: locate can name files that no longer exist. GNU locate and mlocate have the -e flag to make it check for file existence before printing out the name of each file it discovered in the past, but this eats away some of the locate speed advantage, and isn't available in BSD locate besides. locate will fail to name files that were created since the last database update. You learn to be somewhat distrustful of locate output, knowing it may be wrong. There are ways to solve this problem, but I am not aware of any implementation in widespread use. For example, there is rlocate , but it appears to not work against any modern Linux kernel. find(1) never has any more privilege than the user running it. Because locate provides a global service to all users on a system, it wants to have its updatedb process run as root so it can see the entire filesystem. This leads to a choice of security problems: Run updatedb as root, but make its output file world-readable so locate can run without special privileges. This effectively exposes the names of all files in the system to all users. This may be enough of a security breach to cause a real problem. BSD locate is configured this way on Mac OS X and FreeBSD. Write the database as readable only by root , and make locate setuid root so it can read the database. This means locate effectively has to reimplement the OS's permission system so it doesn't show you files you can't normally see. It also increases the attack surface of your system, specifically risking a root escalation attack. Create a special " locate " user or group to own the database file, and mark the locate binary as setuid/setgid for that user/group so it can read the database. This doesn't prevent privilege escalation attacks by itself, but it greatly mitigates the damage one could cause. mlocate is configured this way on Red Hat Enterprise Linux . You still have a problem, though, because if you can use a debugger on locate or cause it to dump core you can get at privileged parts of the database. I don't see a way to create a truly "secure" locate command, short of running it separately for each user on the system, which negates much of its advantage over find(1) . Bottom line, both are very useful. locate(1) is better when you're just trying to find a particular file by name, which you know exists, but you just don't remember where it is exactly. find(1) is better when you have a focused area to examine, or when you need any of its many advantages. | {
"source": [
"https://unix.stackexchange.com/questions/60205",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13451/"
]
} |
60,213 | I expect the following command to extract the gpg file without asking for a password: gpg --passphrase 1234 file.gpg But it asks for the password. Why? This also has the same behavior: gpg --passphrase-file passfile.txt file.gpg I use Ubuntu with GNOME 3, and remember that it was working in Fedora. | I am in your exact same boat (it worked on Fedora but not Ubuntu). Here is an apparent workaround I discovered: echo your_password | gpg --batch --yes --passphrase-fd 0 your_file.gpg Explanation: Passing 0 causes --passphrase-fd to read from STDIN rather than from a file. So, piping the passphrase will get --passphrase-fd to accept your specified password string. | {
"source": [
"https://unix.stackexchange.com/questions/60213",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20188/"
]
} |
60,219 | Given a long string that needs to be displayed with limited text-width, is there a command line tool in *nix that converts the single-line string to a multi-line string with each line being no longer than a given text-width? For example, given the following string $ MYSTRING="Call me Ishmael. Some years ago - never mind how long precisely - having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see the watery part of the world." I would like to format somewhat like this: $ echo $MYSTRING | special-format-command --width=30
Call me Ishmael. Some years ag
o - never mind how long precis
ely - having little or no mone
y in my purse, and nothing par
ticular to interest me on shor
e, I thought I would sail abou
t a little and see the watery
part of the world. | You might try the fold command: echo "$MYSTRING" | fold -w 30 | {
"source": [
"https://unix.stackexchange.com/questions/60219",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4143/"
]
} |
60,255 | I want to set the sticky bit for all directories in a directory, excluding files. Is there any wildcard to do this? #sudo chmod g+s /var/www/<WILD_CARD_FOR_ALL_DIRECTORIES> | Use */ to match only directories. chmod g+s /var/www/*/ To match all directories and subdirectories use **/*/ (provided you have globstar enabled in bash): shopt -s globstar
chmod g+s /var/www/**/*/ | {
"source": [
"https://unix.stackexchange.com/questions/60255",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27711/"
]
} |
60,257 | When I use the following, I get a result as expected: $ echo {8..10}
8 9 10 How can I use this brace expansion in an easy way, to get the following output? $ echo {8..10}
08 09 10 I know that this may be obtained using seq (didn't try), but that is not what I am looking for. Useful info may be that I am restricted to this bash version. (If you have a zsh solution, but no bash solution, please share as well) $ bash -version
GNU bash, version 3.2.51(1)-release (x86_64-suse-linux-gnu) | Prefix the first number with a 0 to force each term to have the same width. $ echo {08..10}
08 09 10 From the bash man page section on Brace Expansion: Supplied integers may be prefixed with 0 to force each term to have
the same width. When either x or y begins with a zero, the shell
attempts to force all generated terms to contain the same number of
digits, zero-padding where necessary. Also note that you can use seq with the -w option to equalize width by padding with leading zeroes: $ seq -w 8 10
08
09
10
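(One caveat, worth checking against the bash NEWS file: zero-padded brace expansion like {08..10} was only added in bash 4.0, so on the 3.2 build quoted in the question the printf builtin can do the padding instead — printf '%02d ' {8..10} prints 08 09 10 .)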
$ seq -s " " -w 8 10
08 09 10 If you want more control, you can even specify a printf style format: $ seq -s " " -f %02g 8 10
08 09 10 | {
"source": [
"https://unix.stackexchange.com/questions/60257",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14084/"
]
} |
60,276 | When I do - CD .. instead of cd .. it gives me error saying - CD: command not found Why is the terminal case sensitive when it comes to linux commands? I mean you should be able to execute the command either with "all uppercase" or "all lowercase" characters. I know it is due to some reason, but I am just curious. | Ultimately, it was an arbitrary choice made by the creators of Unix over four decades ago now. They could have chosen to make things case-insensitive like the creators of MS-DOS did a decade later, but that has its disadvantages, too. It's too deeply embedded in *ix culture to change now. The case sensitive filesystem issue brought up by eppesuig is only part of it. macOS systems — which are Unix-based — typically have case-insensitive (but case-preserving) file systems, so on such systems commands external to the shell are in fact treated case-insensitively. But, builtins like cd remain case-sensitive. Even with a case-insensitive filesystem, the history of things conspires against your wishes, Hussain. If I type ls on my Mac, I get a colorized directory listing. If I type LS instead, /bin/ls still runs, but the listing isn't colorized because the alias that adds the -C flag is case-sensitive. Best just get used to it. If you can, learn to like it. | {
"source": [
"https://unix.stackexchange.com/questions/60276",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24449/"
]
} |
60,299 | Possible Duplicate: How to know if /dev/sdX is a connected USB or HDD? The output of ls /dev/sd* on my system is - sda sda1 sda2 sda3 sda4 sda5 sda6 sda7 sdb sdc sdc1 sdc2 How should I determine which drive is which? | Assuming you're on Linux, try: sudo /lib/udev/scsi_id --page=0x80 --whitelisted --device=/dev/sdc or: cat /sys/block/sdc/device/{vendor,model} You can also get information (including labels) from the filesystems on the different partitions with sudo blkid /dev/sdc1 The device path will help to determine the type of device: readlink -f /sys/class/block/sdc/device See also: find /dev/disk -ls | grep /sdc which, with a properly working udev, would give you all the information from the other commands above. The content of /proc/partitions will give you information on size (though not in as friendly a format as lsblk , already mentioned by @Max). sudo blockdev --getsize64 /dev/sdc will give you the size in bytes of the corresponding block device. sudo smartctl -i /dev/sdc (cross-platform) will also give you a lot of information, including make, model, size, serial numbers, firmware revisions... | {
"source": [
"https://unix.stackexchange.com/questions/60299",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19506/"
]
} |
60,382 | How can I scroll in bash using only the keyboard? If it's not possible in bash, are there any other shells that support this? | In "terminal" (not a graphic emulator like gterm ), Shift + PageUp and Shift + PageDown work. | {
"source": [
"https://unix.stackexchange.com/questions/60382",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19506/"
]
} |
60,422 | root@host [/home2]# lsof /home2
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
php 3182 ctxmortg cwd DIR 8,17 4096 32858196 /home2/ctxmortg/public_html/hello
php 3182 ctxmortg 3r REG 8,17 46404 55781766 /home2/ctxmortg/public_html/hello/cache/subprimemortgagemorgage.com/cache-zch-8284-cache.txt
php 3185 ctxmortg cwd DIR 8,17 4096 32858196 /home2/ctxmortg/public_html/hello
php 3185 ctxmortg 3r REG 8,17 4185 35962154 /home2/ctxmortg/public_html/hello/cache/curl/http%3A%2F%2Fimage.yahoo.cn%2Fs%3Fq%3DNudity%26c%3D0%26s%3D%26page%3D277
php 3187 ctxmortg cwd DIR 8,17 4096 32858196 /home2/ctxmortg/public_html/hello
php 3187 ctxmortg 3r REG 8,17 54640 69699731 /home2/ctxmortg/public_html/hello/cache/newdatingfriends.com/cache-zch-1545-cache.txt
php 3188 ctxmortg cwd DIR 8,17 4096 32858196 /home2/ctxmortg/public_html/hello
php 3188 ctxmortg 3r REG 8,17 54640 21557063 /home2/ctxmortg/public_html/hello/cache/customersdeals.com/cache-zch-5715-cache.txt
php 3189 ctxmortg cwd DIR 8,17 4096 32858196 /home2/ctxmortg/public_html/hello
php 3189 ctxmortg 3r REG 8,17 4185 36028071 /home2/ctxmortg/public_html/hello/cache/curl/http%3A%2F%2Fimage.yahoo.cn%2Fs%3Fq%3DVideos%26c%3D0%26s%3D%26page%3D329
php 3200 ctxmortg cwd DIR 8,17 4096 32858196 /home2/ctxmortg/public_html/hello
php 3200 ctxmortg 3r REG 8,17 21036 9155614 /home2/ctxmortg/public_html/hello/cache/indorealestates.com/cache-zch-8562-cache.txt
lsof 3201 root cwd DIR 8,17 4096 2 /home2
lsof 3202 root cwd DIR 8,17 4096 2 /home2
webalizer 32342 ctxmortg cwd DIR 8,17 4096 32890953 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com
webalizer 32342 ctxmortg 5uW REG 8,17 12288 32890954 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com/dns_cache.db
webalizer 32360 ctxmortg cwd DIR 8,17 4096 32890953 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com
webalizer 32360 ctxmortg 5u REG 8,17 12288 32890954 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com/dns_cache.db
webalizer 32361 ctxmortg cwd DIR 8,17 4096 32890953 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com
webalizer 32361 ctxmortg 5u REG 8,17 12288 32890954 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com/dns_cache.db
webalizer 32362 ctxmortg cwd DIR 8,17 4096 32890953 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com
webalizer 32362 ctxmortg 5u REG 8,17 12288 32890954 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com/dns_cache.db
webalizer 32363 ctxmortg cwd DIR 8,17 4096 32890953 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com
webalizer 32363 ctxmortg 5u REG 8,17 12288 32890954 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com/dns_cache.db
webalizer 32364 ctxmortg cwd DIR 8,17 4096 32890953 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com
webalizer 32364 ctxmortg 5u REG 8,17 12288 32890954 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com/dns_cache.db
webalizer 32365 ctxmortg cwd DIR 8,17 4096 32890953 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com
webalizer 32365 ctxmortg 5u REG 8,17 12288 32890954 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com/dns_cache.db
webalizer 32366 ctxmortg cwd DIR 8,17 4096 32890953 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com
webalizer 32366 ctxmortg 5u REG 8,17 12288 32890954 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com/dns_cache.db
webalizer 32367 ctxmortg cwd DIR 8,17 4096 32890953 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com
webalizer 32367 ctxmortg 5u REG 8,17 12288 32890954 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com/dns_cache.db
webalizer 32368 ctxmortg cwd DIR 8,17 4096 32890953 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com
webalizer 32368 ctxmortg 5u REG 8,17 12288 32890954 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com/dns_cache.db
webalizer 32369 ctxmortg cwd DIR 8,17 4096 32890953 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com
webalizer 32369 ctxmortg 5u REG 8,17 12288 32890954 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com/dns_cache.db
bash 32409 root cwd DIR 8,17 4096 2 /home2 I want to umount a drive but cannot. So, what does cwd, 3r dir, and reg mean anyway? | COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
webalizer 32342 ctxmortg 5uW REG 8,17 12288 32890954 /home2/ctxmortg/tmp/webalizer/eyebestdatedotcomauph.ctxmortgagemortgagerefi.com/dns_cache.db FD - File Descriptor If you are looking for files being written to, look for the following flags: # - The number in front of the flag(s) is the file descriptor number the process uses to refer to the file
u - File open with Read and Write permission
r - File open with Read permission
w - File open with Write permission
W - File open with Write permission and with Write Lock on entire file
mem - Memory-mapped file, usually a shared library So the 5uW in the sample line means webalizer has file descriptor number 5 open on dns_cache.db for reading and writing, with a write lock on the entire file; likewise, 3r means descriptor number 3 open read-only. TYPE - File Type In Linux, almost everything is a file, but files come in different types. REG - REGular file, a file that shows up in a directory
DIR - Directory NODE The inode number in the filesystem You can find complete details in the man page . | {
"source": [
"https://unix.stackexchange.com/questions/60422",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29357/"
]
} |
60,459 | What I get: host:~ user$ cat example.txt
some texthost:~ stas$ What I want to get: host:~ user$ cat example.txt
some text
host:~ stas$ Is there a way I can make cat behave like this? I'm using bash on Mac OS X. | Most unix tools are designed to work well with text files. A text file consists of a sequence of lines. A line consists of a sequence of printable characters ending with a newline character. In particular, the last character of a non-empty text file is always a newline character. Evidently, example.txt contains only some text with no final newline, so it is not a text file. cat does a simple job; turning arbitrary files into text files isn't part of that job. Some other tools always turn their input into text files; if you aren't sure the file you're displaying ends with a newline, try running awk 1 instead of cat . You can make the bash display its prompt on the next line if the previous command left the cursor somewhere other than the last margin. Put this in your .bashrc (variation by GetFree of a proposal by Dennis Williamson ): shopt -s promptvars
PS1='$(printf "%$((COLUMNS-1))s\r")'$PS1 | {
"source": [
"https://unix.stackexchange.com/questions/60459",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9998/"
]
} |
60,499 | My question is simple, but I am finding it hard to frame/explain it easily. I log into several Unix boxes with different accounts. I see 2 different things for user1 and user2 while editing text files in vim: user1 When I type vim filename , vim opens and I edit the file. When I close it, the complete text from the file is gone, and I see the terminal's command/output that was previously present. user2 When I type vim filename , vim opens and I edit the file. When I close it, the part of the file that was present on the display while I was in vim still shows up on the display, and all the previous terminal output gets scrolled up. Even if the file was just 1 line, after exiting vim, the display shows the first line, with the rest all ~ , and I see the command prompt at the bottom of the screen. Details $ bash --version
GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
Copyright (C) 2005 Free Software Foundation, Inc.
$ vim --version
VIM - Vi IMproved 7.0 (2006 May 7, compiled Jun 12 2009 07:08:36) | One likely reason for that behaviour is the terminal setting of each user. For example: User1 is using TERM= xterm ; in this case, when you exit vim it will clear the terminal (xterm's terminfo entry has the alternate-screen capability, so vim's screen is switched away on exit). User2 is using TERM= vt100 ; in this case, when you exit vim it will not clear the terminal (vt100 has no alternate screen). Check what terminal user1 is using with echo $TERM and set the same for user2.
for bash: TERM=xterm; export TERM | {
"source": [
"https://unix.stackexchange.com/questions/60499",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17265/"
]
} |
60,549 | How can I get the /etc/hosts file to refer to another configuration file for its list of hosts? Example /etc/hosts :
127.0.0.1 localhost
255.255.255.255 broadcasthost
#Other Configurations
<Link to /myPath/to/MyConfig/ConfigFile.txt>
#Other Addresses
3.3.3.3 MyAwesomeDomain.com
4.4.4.4 SomeplaceIWantToGoTo.com ConfigFile.txt ##My additional Hosts
1.1.1.1 SomeLocation.com
2.2.2.2 AnotherLocation.com How do I add a link/Reference to /etc/hosts file such that ConfigFile.txt will be loaded? | You can't. The format for /etc/hosts is quite simple, and doesn't support including extra files. There are a couple approaches you could use instead: Set up a (possibly local-only) DNS server. Some of these give a lot of flexibility, and you can definitely spread your host files over multiple files, or even machines. Others (such as dnsmasq) offer less (but still sufficient) flexibility, but are easy to set up. If you're trying to include the same list of hosts on a bunch of machines, then DNS is probably the right answer. Set up some other name service (NIS, LDAP, etc.). Check the glibc NSS docs for what is supported. Personally, I think you should use DNS in most all cases. Make yourself an /etc/hosts.d directory or similar, and write some scripts to concatenate them all together (most trivial: cat /etc/hosts.d/*.conf > /etc/hosts , though you'll probably want a little better to e.g., override the default sort by current locale), and run that script at boot, or from cron, or manually whenever you update the files. Personally, at both home and work, to have machine names resolvable from every device, I run BIND 9. That does involve a few hours to learn, though. | {
"source": [
"https://unix.stackexchange.com/questions/60549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17651/"
]
} |
60,555 | I'm currently running Debian testing (Wheezy) and am trying to get SCIM working. I want to install the scim-pinyin package, but there is no such package available in the testing repository, although there was one in the previous stable (Squeeze) repository. There is a copy of the package in unstable but not for my architecture (amd64). Looking at the package versions, I notice that the version in the stable repositories is the same as that in unstable. This being the case, I have two questions: Is there any reason why I can't install a package from an older repository since I would assume that most Squeeze packages will probably have their dependencies met by the package versions currently in testing? What is the best way to achieve this? (Add the Squeeze repository to sources.list ? Download the Squeeze package and install it manually?) | In this case, yes, it's possible and safe, as Debian keeps a dependency tree for each requested package. Still, there is a risk that some libraries cannot exist in two different versions in the same installation, due to conflicts (port reservations, device drivers and so on). In that situation, apt will warn you and ask what to do. (Come back with another U&L question in that case ;-) You could add a squeeze.list to sources.list.d ( Careful! New versions of APT will ignore filenames not ending in " .list ".): cat <<eof >/etc/apt/sources.list.d/squeeze.list
deb http://ftp.be.debian.org/debian/ squeeze-updates main contrib
deb-src http://security.debian.org/ squeeze/updates main contrib
eof Add a default directive to /etc/apt/apt.conf.d/ : cat <<eof >/etc/apt/apt.conf.d/99squeeze
APT::Default-Release "wheezy";
eof
Then use the -t switch to apt-get to override the default config: apt-get -t squeeze install scim-pinyin | {
"source": [
"https://unix.stackexchange.com/questions/60555",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29526/"
]
} |
60,574 | I'm running an Ubuntu 12.04 derivative (amd64) and I've been having really strange issues recently. Out of the blue, seemingly, X will freeze completely for a while (1-3 minutes?) and then the system will reboot. This system is overclocked, but very stable as verified in Windows, which leads me to believe I'm having a kernel panic or an issue with one of my modules. Even in Linux, I can run LINPACK and won't see a crash despite putting ridiculous load on the CPU. Crashes seem to happen at random times, even when the machine is sitting idle. How can I debug what's crashing the system? On a hunch that it might be the proprietary NVIDIA driver, I reverted all the way down to the stable version of the driver, version 304 and I still experience the crash. Can anyone walk me through a good debugging procedure for after a crash? I'd be more than happy to boot into a thumb drive and post all of my post-crash configuration files, I'm just not sure what they would be. How can I find out what's crashing my system? Here are a bunch of logs, the usual culprits. .xsession-errors : http://pastebin.com/EEDtVkVm /var/log/Xorg.0.log : http://pastebin.com/ftsG5VAn /var/log/kern.log : http://pastebin.com/Hsy7jcHZ /var/log/syslog : http://pastebin.com/9Fkp3FMz I can't even seem to find a record of the crash at all. Triggering the crash is not so simple, it seem to happen when the GPU is trying to draw multiple things at once. If I put on a YouTube video in full screen and let it repeat for a while or scroll through a ton of GIFs and a Skype notification pops up, sometimes it'll crash. Totally scratching my head on this one. The CPU is overclocked to 4.8GHz, but it's completely stable and has survived huge LINPACK runs and 9 hours of Prime95 yesterday without a single crash. Update I've installed kdump , crash , and linux-crashdump , as well as the kernel debug symbols for my kernel version 3.2.0-35. When I run apport-unpack on the crashed kernel file and then crash on the VmCore crash dump, here's what I see: KERNEL: /usr/lib/debug/boot/vmlinux-3.2.0-35-generic
DUMPFILE: Downloads/crash/VmCore
CPUS: 8
DATE: Thu Jan 10 16:05:55 2013
UPTIME: 00:26:04
LOAD AVERAGE: 2.20, 0.84, 0.49
TASKS: 614
NODENAME: mightymoose
RELEASE: 3.2.0-35-generic
VERSION: #55-Ubuntu SMP Wed Dec 5 17:42:16 UTC 2012
MACHINE: x86_64 (3499 Mhz)
MEMORY: 8 GB
PANIC: "[ 1561.519960] Kernel panic - not syncing: Fatal Machine check"
PID: 0
COMMAND: "swapper/5"
TASK: ffff880211251700 (1 of 8) [THREAD_INFO: ffff880211260000]
CPU: 5
STATE: TASK_RUNNING (PANIC) When I run log from the crash utility, I see this at the bottom of the log: [ 1561.519943] [Hardware Error]: CPU 4: Machine Check Exception: 5 Bank 3: be00000000800400
[ 1561.519946] [Hardware Error]: RIP !INEXACT! 33:<00007fe99ae93e54>
[ 1561.519948] [Hardware Error]: TSC 539b174dead ADDR 3fe98d264ebd MISC 1
[ 1561.519950] [Hardware Error]: PROCESSOR 0:206a7 TIME 1357862746 SOCKET 0 APIC 1 microcode 28
[ 1561.519951] [Hardware Error]: Run the above through 'mcelog --ascii'
[ 1561.519953] [Hardware Error]: CPU 0: Machine Check Exception: 4 Bank 3: be00000000800400
[ 1561.519955] [Hardware Error]: TSC 539b174de9d ADDR 3fe98d264ebd MISC 1
[ 1561.519957] [Hardware Error]: PROCESSOR 0:206a7 TIME 1357862746 SOCKET 0 APIC 0 microcode 28
[ 1561.519958] [Hardware Error]: Run the above through 'mcelog --ascii'
[ 1561.519959] [Hardware Error]: Machine check: Processor context corrupt
[ 1561.519960] Kernel panic - not syncing: Fatal Machine check
[ 1561.519962] Pid: 0, comm: swapper/5 Tainted: P M C O 3.2.0-35-generic #55-Ubuntu
[ 1561.519963] Call Trace:
[ 1561.519964] <#MC> [<ffffffff81644340>] panic+0x91/0x1a4
[ 1561.519971] [<ffffffff8102abeb>] mce_panic.part.14+0x18b/0x1c0
[ 1561.519973] [<ffffffff8102ac80>] mce_panic+0x60/0xb0
[ 1561.519975] [<ffffffff8102aec4>] mce_reign+0x1f4/0x200
[ 1561.519977] [<ffffffff8102b175>] mce_end+0xf5/0x100
[ 1561.519979] [<ffffffff8102b92c>] do_machine_check+0x3fc/0x600
[ 1561.519982] [<ffffffff8136d48f>] ? intel_idle+0xbf/0x150
[ 1561.519984] [<ffffffff8165d78c>] machine_check+0x1c/0x30
[ 1561.519986] [<ffffffff8136d48f>] ? intel_idle+0xbf/0x150
[ 1561.519987] <<EOE>> [<ffffffff81509697>] ? menu_select+0xe7/0x2c0
[ 1561.519991] [<ffffffff815082d1>] cpuidle_idle_call+0xc1/0x280
[ 1561.519994] [<ffffffff8101322a>] cpu_idle+0xca/0x120
[ 1561.519996] [<ffffffff8163aa9a>] start_secondary+0xd9/0xdb bt outputs the backtrace: PID: 0 TASK: ffff880211251700 CPU: 5 COMMAND: "swapper/5"
#0 [ffff88021ed4aba0] machine_kexec at ffffffff8103947a
#1 [ffff88021ed4ac10] crash_kexec at ffffffff810b52c8
#2 [ffff88021ed4ace0] panic at ffffffff81644347
#3 [ffff88021ed4ad60] mce_panic.part.14 at ffffffff8102abeb
#4 [ffff88021ed4adb0] mce_panic at ffffffff8102ac80
#5 [ffff88021ed4ade0] mce_reign at ffffffff8102aec4
#6 [ffff88021ed4ae40] mce_end at ffffffff8102b175
#7 [ffff88021ed4ae70] do_machine_check at ffffffff8102b92c
#8 [ffff88021ed4af50] machine_check at ffffffff8165d78c
[exception RIP: intel_idle+191]
RIP: ffffffff8136d48f RSP: ffff880211261e38 RFLAGS: 00000046
RAX: 0000000000000020 RBX: 0000000000000008 RCX: 0000000000000001
RDX: 0000000000000000 RSI: ffff880211261fd8 RDI: ffffffff81c12f00
RBP: ffff880211261e98 R8: 00000000fffffffc R9: 0000000000000f9f
R10: 0000000000001e95 R11: 0000000000000000 R12: 0000000000000003
R13: ffff88021ed5ac70 R14: 0000000000000020 R15: 12d818fb42cfe42b
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
--- <MCE exception stack> ---
#9 [ffff880211261e38] intel_idle at ffffffff8136d48f
#10 [ffff880211261ea0] cpuidle_idle_call at ffffffff815082d1
#11 [ffff880211261f00] cpu_idle at ffffffff8101322a Any ideas? | I have two suggestions to start. The first you're not going to like. No matter how stable you think your overclocked system is, it would be my first suspect. And any developer you report the problem to will say the same thing. Your stable test workload isn't necessarily using the same instructions, stressing the memory subsystem as much, whatever. Stop overclocking. If you want people to believe the problem's not overclocking, then make it happen when not overclocking so you can get a clean bug report. This will make a huge difference in how much effort other people will invest in solving this problem. Having bug-free software is a point of pride, but reports from people with particularly questionable hardware setups are frustrating time-sinks that probably don't involve a real bug at all. The second is to get the oops data, which as you've noticed doesn't go to any of the places you've mentioned. If the crash only happens while running X11, I think local console is pretty much out (it's a pain anyway), so you need to do this over a serial console, over the network, or by saving to local disk (which is trickier than it may sound because you don't want an untrustworthy kernel to corrupt your filesystem). Here are some ways to do this: use netdump to save to a server over the network. I haven't done this in years, so I'm not sure this software is still around and working with modern kernels, but it's easy enough that it's worth a shot. boot using a serial console ; you'll need a serial port free on both machines (whether an old-school one or a USB serial adapter) and a null modem cable; you'd configure the other machine to save the output. kdump seems to be what the cool kids use nowadays, and seems quite flexible, although it wouldn't be my preference because it looks complex to set up. In short, it involves booting a different kernel that can do anything and inspect the former kernel's memory contents, but you have to essentially build the whole process and I don't see a lot of canned options out there. Update: There are some nice distro things, actually; on Ubuntu, linux-crashdump Once you get the debug info, there's a tool called ksymoops that you can use to turn the addresses into symbol names and start getting an idea how your kernel crashed. And if the symbolized dump doesn't mean anything to you, at least this is something helpful to report here or perhaps on your Linux distribution's mailing list / bug tracker. From crash on your crashdump, you can try typing log and bt to get a bit more information (things logged during the panic and a stack backtrace). Your Fatal Machine check seems to be coming from here , though. From skimming the code, your processor has reported a Machine Check Exception - a hardware problem. Again, my first bet would be due to overclocking. It seems like there might be a more specific message in the log output which could tell you more. Also from that code, it looks like if you boot with the mce=3 kernel parameter, it will stop crashing...but I wouldn't really recommend this except as a diagnostic step. If the Linux kernel thinks this error is worth crashing over, it's probably right. | {
"source": [
"https://unix.stackexchange.com/questions/60574",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5614/"
]
} |
60,577 | I have multiple files with the same header and different vectors below that. I need to concatenate all of them, but I want to keep only the header of the first file; I don't want the other headers to be concatenated since they are all the same. For example:
file1.txt <header>INFO=<ID=DP,Number=1,Type=Integer>
<header>INFO=<ID=DP4,Number=4,Type=Integer>
A
B
C file2.txt <header>INFO=<ID=DP,Number=1,Type=Integer>
<header>INFO=<ID=DP4,Number=4,Type=Integer>
D
E
F I need the output to be <header>INFO=<ID=DP,Number=1,Type=Integer>
<header>INFO=<ID=DP4,Number=4,Type=Integer>
A
B
C
D
E
F I could write a script in R but I need it in shell? | If you know how to do it in R, then by all means do it in R. With classical unix tools, this is most naturally done in awk. awk '
FNR==1 && NR!=1 { while (/^<header>/) getline; }
1 {print}
' file*.txt >all.txt The first line of the awk script matches the first line of a file ( FNR==1 ) except if it's also the first line across all files ( NR==1 ). When these conditions are met, the expression while (/^<header>/) getline; is executed, which causes awk to keep reading another line (skipping the current one) as long as the current one matches the regexp ^<header> . The second line of the awk script prints everything except for the lines that were previously skipped. | {
"source": [
"https://unix.stackexchange.com/questions/60577",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15258/"
]
} |
60,584 | I have two arrays: arrayA=(1 2 3)
arrayB=(a b c) and I want to print out one of them using a command line argument, i.e., without any if else . I tried a few variations on the syntax with no success. I am wanting to do something like this: ARG="$1"
echo ${array${ARG}[@]} but I get a "bad substitution" error. How can I achieve this? | Try doing this : $ arrayA=(1 2 3)
$ x=A
$ var=array$x[@]
$ echo ${!var}
1 2 3 NOTE from man bash (parameter expansion) : ${parameter}
The value of parameter is substituted.
The braces are required when parameter is a positional parameter with
more than one digit, or when parameter is followed by a character which
is not to be interpreted as part of its name. * If the first character of parameter is an exclamation point (!), a level of variable indirection is introduced. Bash uses the
value of the variable formed from the rest of parameter as the
name of the variable; this variable is then expanded and that value is used in the rest of the substitution, rather than the value
of parameter itself. This is known as indirect expansion.
* The exceptions to this are the expansions of ${!prefix*} and ${!name[@]} described below. The exclamation point must immediately
follow the left brace in order to introduce indirection. | {
"source": [
"https://unix.stackexchange.com/questions/60584",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29889/"
]
} |
60,595 | I ran sudo add-apt-repository ppa:noobslab/indicators to install my-weather-indicator but it requires GTK3 and I don't want to proceed further. So I'd like to undo that command. I've checked my /etc/apt/source.list but I didn't find any line related to it. What should I do now? | From Ubuntu's manual pages ( man add-apt-repository ): -r , --remove Remove the specified repository So... sudo add-apt-repository -r ppa:noobslab/indicators This removes it from the repo list in /etc/apt/sources.list.d/. Depending on what you are doing, BEFORE you run the above command - If an installed package from that repo is newer than the same package in a standard repo, then manually downgrade with ppa-purge : sudo ppa-purge ppa:noobslab/indicators For Debian, just delete the .list file in /etc/apt/sources.list.d/ | {
"source": [
"https://unix.stackexchange.com/questions/60595",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19195/"
]
} |
60,641 | On a Linux system, what is the difference between /dev/console , /dev/tty and /dev/tty0 ? What is their respective use and how do they compare? | From the Linux Kernel documentation on Kernel.org : /dev/tty Current TTY device
/dev/console System console
/dev/tty0 Current virtual console In the good old days /dev/console was the System Administrator's console, and TTYs were users' serial devices attached to a server. Now /dev/console and /dev/tty0 both represent the current display and are usually the same. You can override this, for example, by adding console=ttyS0 to grub.conf . After that, your /dev/tty0 is the monitor and /dev/console is /dev/ttyS0 . An exercise to show the difference between /dev/tty and /dev/tty0 : Switch to the 2nd console by pressing Ctrl + Alt + F2 . Login as root . Type sleep 5; echo tty0 > /dev/tty0 . Press Enter and switch to the 3rd console by pressing Alt + F3 .
Now switch back to the 2nd console by pressing Alt + F2 . Type sleep 5; echo tty > /dev/tty , press Enter and switch to the 3rd console. You can see that tty is the console where the process started, and tty0 is always the current console. | {
"source": [
"https://unix.stackexchange.com/questions/60641",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29924/"
]
} |
60,642 | I was hoping that I could just type "job.php" and be directed to the job.php in my /lib/model/ folder, but I mostly just get many other files returned: [No name]
> batch/dataFixes/jobProspectsSubscriptionId.php
> batch/dataFixes/jobProspectsRankDistance.php
> batch/dataFixes/hiredJobDistanceRankFeedback.php
> batch/sendWeeklyJobOwnersUpdateEmail.php
> batch/dataFixes/backdateJobClosureDailyStats.php
> batch/dataFixes/jobExpectedRevenue.php
> batch/dataFixes/updateJobStats.php
> batch/updateEndedJobState.php
> batch/findUnresponsiveJobPosters.php
> batch/_job_criteria.php
prt file <mru>={ files }=<buf> <-> /Users/shane/Documents/sites/zinc
>d> job.php_ Switching to 'find in path' mode, and typing lib/model/job.php brings up tonnes of other classes in that folder which have 'job' in the filename, but not job.php . Job is a pretty common word in our project, but I was hoping that an exact match for the filename would get ranked pretty highly in the results. Am I using CtrlP wrong, or is the project not really suited to it? | Easiest way is to toggle to file name only mode and regex mode, from docs: Once inside the prompt: Ctrl + D : Toggle between full-path search and filename only search. Note : in filename mode, the prompt's base is >d> instead of >>> Ctrl + R : Toggle between the string mode and full regexp mode. Note : in full regexp mode, the prompt's base is r>> instead of >>> | {
"source": [
"https://unix.stackexchange.com/questions/60642",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29918/"
]
} |
60,688 | I was wanting to initialize some strings at the top of my script with variables that have not yet been set, such as: str1='I went to ${PLACE} and saw ${EVENT}'
str2='If you do ${ACTION} you will ${RESULT}' and then later on PLACE , EVENT , ACTION , and RESULT will be set. I want to then be able to print my strings out with the variables expanded. Is my only option eval ? This seems to work: eval "echo ${str1}" is this standard? is there a better way to do this? It would be nice to not run eval considering the variables could be anything. | With the kind of input you show, the only way to leverage shell expansion to substitute values into a string is to use eval in some form. This is safe as long as you control the value of str1 and can ensure that it only references variables that are known as safe (not containing confidential data) and doesn't contain any other unquoted shell special character. You should expand the string inside double quotes or in a here document, that way only "$\` are special (they need to be preceded by a \ in str1 ). eval "substituted=\"$str1\"" It would be a lot more robust to define a function instead of a string. fill_template () {
sentence1="I went to ${PLACE} and saw ${EVENT}"
sentence2="If you do ${ACTION} you will ${RESULT}"
} Set the variables then call the function fill_template to set the output variables. PLACE=Sydney; EVENT=fireworks
ACTION='not learn from history'; RESULT='have to relive history'
fill_template
echo "During my holidays, $sentence1."
echo "Cicero said: \"$sentence2\"." | {
"source": [
"https://unix.stackexchange.com/questions/60688",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29889/"
]
} |
60,723 | I just formatted stuff. One disk I format as ext2. The other I want to format as ext4. I want to test how they perform. Now, how do I know the kind of file system in a partition? | How do I tell what sort of data (what data format) is in a file? → Use the file utility. Here, you want to know the format of data in a device file, so you need to pass the -s flag to tell file not just to say that it's a device file but look at the content. Sometimes you'll need the -L flag as well, if the device file name is a symbolic link. You'll see output like this: # file -sL /dev/sd*
/dev/sda1: Linux rev 1.0 ext4 filesystem data, UUID=63fa0104-4aab-4dc8-a50d-e2c1bf0fb188 (extents) (large files) (huge files)
/dev/sdb1: Linux rev 1.0 ext2 filesystem data, UUID=b3c82023-78e1-4ad4-b6e0-62355b272166
/dev/sdb2: Linux/i386 swap file (new style), version 1 (4K pages), size 4194303 pages, no label, UUID=3f64308c-19db-4da5-a9a0-db4d7defb80f Given this sample output, the first disk has one partition and the second disk has two partitions. /dev/sda1 is an ext4 filesystem, /dev/sdb1 is an ext2 filesystem, and /dev/sdb2 is some swap space (about 4GB). You must run this command as root, because ordinary users may not read disk partitions directly: if needed, add sudo in front. | {
"source": [
"https://unix.stackexchange.com/questions/60723",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29357/"
]
} |
60,750 | I am trying to make a curl request to one of our local development servers running a dev site with a self-signed SSL cert. I am using curl from the command line. I saw some blog posts mentioning that you can add to the list of certificates or specify a specific (self signed) certificate as valid, but is there a catch-all way of saying "don't verify" the ssl cert - like the --no-check-certificate that wget has? | Yes. From the manpage : -k, --insecure (TLS) By default, every SSL connection curl makes is verified to be
secure. This option allows curl to proceed and operate even for server
connections otherwise considered insecure. The server connection is verified by making sure the server's
certificate contains the right name and verifies successfully using
the cert store. See this online resource for further details: https://curl.haxx.se/docs/sslcerts.html See also --proxy-insecure and --cacert. The reference mentioned in that manpage entry describes some of the specific behaviors of -k . These behaviors can be observed with curl requests to test pages from BadSSL.com curl -X GET https://wrong.host.badssl.com/
curl: (51) SSL: no alternative certificate subject name matches target host name 'wrong.host.badssl.com'
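(A middle ground, sketched here with a hypothetical file name: if you save the dev server's self-signed certificate locally, curl --cacert ./dev-cert.pem https://your-dev-server/ trusts just that one certificate while keeping verification, including the hostname check, enabled.)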
curl -k -X GET https://wrong.host.badssl.com/
..returns HTML content... | {
"source": [
"https://unix.stackexchange.com/questions/60750",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10287/"
]
} |
60,755 | I had a disk with a full size NTFS partition. I just deleted it and created an EXT4 one. When it was NTFS, if it wasn't in use (mount but no in use) it was quiet. However now, using EXT4, it is constant reading and I don't know why. Using EXT3 is fine also. Any idea? | | {
"source": [
"https://unix.stackexchange.com/questions/60755",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27807/"
]
} |
60,760 | I'm trying to get a per-directory total size of all the .jpg / .jpeg images in each directory that contains such images, showing the full directory path. I've managed to cobble this together from various bits I've found: for i in $(tree -dfi --noreport); do
find . \( -iname "*.jpg" -or -iname "*.jpeg" \) -type f -exec du -c {} \; $i
done However I'm getting an error: find: paths must precede expression Anyone know what I've done wrong? Or can suggest any alternatives with bash that might do what I'm looking for? I get the same error when changing it to this: for i in $(tree -dfi --noreport); do
find $i \( -iname "*.jpg" -or -iname "*.jpeg" \) -type f -exec du {} \; $i
done | | {
"source": [
"https://unix.stackexchange.com/questions/60760",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30008/"
]
} |
60,776 | Is there an "easy" way of running an "ls -la" style command for listing all files / executable binaries in the current PATH? (I intend to pipe the output into grep, for looking for commands with unknown prefixes but basically known "names", the kind of case when the auto-completion / tabbing in bash is essentially useless. So some sort of an "inverse auto-complete feature" ...) | compgen -c # will list all the commands you could run.
compgen -a # will list all the aliases you could run.
compgen -b # will list all the built-ins you could run.
compgen -k # will list all the keywords you could run.
compgen -A function # will list all the functions you could run.
compgen -A function -abck # will list all the above in one go. | {
"source": [
"https://unix.stackexchange.com/questions/60776",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28301/"
]
} |
60,838 | When I press S in mutt, it saves the mail to a mail folder format ( cur/ tmp/ new/ ), but I want a single file to be saved, just like how attachments are saved. Is that configurable? | The actual message shows up as an attachment as well, so you can save it from the attachment list. From either the index or the message itself, hit v to open the attachments and s to save | {
"source": [
"https://unix.stackexchange.com/questions/60838",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
60,849 | I am trying to list all the files from dir1 , dir2 , dir3 and dir4 which might be anywhere as a subdirectory of my cwd, using the find command. I tried the following with no success: find . -type f -regextype posix-egrep -regex 'dir1/.+|dir2/.+|dir3/.+|dir4/.+' I tried posix-extended as well. How can I list these files? | If you want to search three folders named foo , bar , and baz for all *.py files, use this command: find foo bar baz -name "*.py" So, to display the files from dir1 dir2 dir3 , use: find dir1 dir2 dir3 -type f Or try this: find . \( -name "dir1" -o -name "dir2" \) -exec ls '{}' \; | {
"source": [
"https://unix.stackexchange.com/questions/60849",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29889/"
]
} |
60,968 | I recently started to use tmux and like it very much, but its green bottom bar is a bit distracting. Is there a way to change its color, or a way to hide it? | There are many options given in the manual . (See the OPTIONS section.)
Create an RC file: ~/.tmux.conf . The contents below enable UTF-8, set the right TERM type, and draw the status bar with a black background and white foreground. set status-utf8 on
set utf8 on
set -g default-terminal "screen-256color"
set -g status-bg black
set -g status-fg white In FreeBSD 10.1, I have had to add -g to the UTF directives. set -g status-utf8 on
set -g utf8 on On UTF-8, many SSH clients require one to explicitly define a character set to use. For example, in Putty, select Window -> Translation -> Remote character set: UTF-8 and select Use Unicode line drawing code points . And to turn off the status bar... set -g status off On colors from the manual... message-bg colour Set status line message background colour, where colour is one of:
black, red, green, yellow, blue, magenta, cyan, white, colour0 to
colour255 from the 256-colour palette, or default. So, to list the available colors, first create a script , maybe colors.sh : #!/usr/bin/env bash
for i in {0..255} ; do
printf "\x1b[38;5;${i}mcolour${i}\n"
done Next, execute the script, piping to less : colors.sh | less -r This produces a list of colors, 1-255, in this format: colour1
[...]
colour255 Pick a color from the list, perhaps colour240 , a shade of grey. In ~/.tmux.conf , use this value to set the desired color: set -g status-bg colour240 In Fedora 17, 256-color terminals are not enabled by default. The official method used to enable 256-color terminals by default is given on the Fedora Project Wiki . Follow that guide, or, as a per-user solution, create an alias for tmux to force 256-color support with the "-2" switch. alias tmux="tmux -2" Then start tmux to test it. Note that, as @ILMostro_7 points out, it would not be correct to set the TERM type for tmux from, for example, ~/.bashrc . Each tmux pane emulates a terminal - not the same thing as an xterm. The emulation in tmux needs to match screen, a different terminal description, to behave properly; but, the real terminal does not need to do so. Its description is xterm-256color . | {
"source": [
"https://unix.stackexchange.com/questions/60968",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18188/"
]
} |
60,971 | I tried removing LUKS encryption on my home directory using the following command: cryptsetup luksRemoveKey /dev/mapper/luks-3fd5-235-26-2625-2456f-4353fgdgd But it gives me an error saying: Device /dev/mapper/luks-3fd5-235-26-2625-2456f-4353fgdgd is not a
valid LUKS device. Puzzled, I tried the following: cryptsetup status luks-3fd5-235-26-2625-2456f-4353fgdgd And it says: /dev/mapper/luks-3fd5-235-26-2625-2456f-4353fgdgd is active and is in use.
type: LUKS1
cipher: ... It seems the encrypted device is active, but not valid. What could be wrong here? | Backup Reformat Restore cryptsetup luksRemoveKey would only remove an encryption key if you had more than one. The encryption would still be there. The Fedora Installation_Guide Section C.5.3 explains how luksRemoveKey works. That it's "impossible" to remove the encryption while keeping the contents is just an educated guess. I base that on two things: Because the LUKS container has a filesystem or LVM or whatever on top of it, just removing the encryption layer would require knowledge of the meaning of the data stored on top of it, which simply is not available. Also, a requirement would be that overwriting a part of the LUKS volume with its decrypted counterpart, would not break the rest of the LUKS content, and I'm not sure if that can be done. Implementing it would solve a problem that is about as far away from the purpose of LUKS as you can get, and I find it very unlikely that someone would take the time to do that instead of something more "meaningful". | {
"source": [
"https://unix.stackexchange.com/questions/60971",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24509/"
]
} |
60,994 | I want to grep smb.conf and see only lines which are not commented. | grep "^[^#;]" smb.conf The first ^ refers to the beginning of the line, so lines with comments starting after the first character will not be excluded. [^#;] means any character which is not # or ; . In other words, it reports lines that start with any character other than # and ; . It's not the same as reporting the lines that don't start with # and ; (for which you'd use grep -v '^[#;]' ) in that it also excludes empty lines, but that's probably preferable in this case as I doubt you care about empty lines. If you wanted to ignore leading blank characters, you could change it to: grep '^[[:blank:]]*[^[:blank:]#;]' smb.conf or grep -vxE '[[:blank:]]*([#;].*)?' smb.conf Or awk '$1 ~ /^[^;#]/' smb.conf | {
"source": [
"https://unix.stackexchange.com/questions/60994",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17424/"
]
} |
61,004 | I would like to create a tar file with contents belonging to an owner:group pair who do not exist on the system from which the file is being made. Here's the direction I've tried: tar ca --owner='otherowner' --group='othergroup' mydata.tgz mydata And when running this command, I get the following error: tar: otherowner: Invalid owner
tar: Error is not recoverable: exiting now Is there a way to force tar to accept the owner:group, even though neither of them exist on the system from which the file is being created? | Internally, Linux doesn't use owner and group names but numbers - UIDs and GIDs. User and group names are mapped from the contents of the /etc/passwd and /etc/group files for the user's convenience. Since you don't have an 'otherowner' entry in either of those files, Linux doesn't actually know which UID and GID should be assigned to the file. Let's try to pass a number instead: $ tar cf archive.tar test.c --owner=0 --group=0
$ tar -tvf archive.tar
-rw-rw-r-- root/root 45 2013-01-10 15:06 test.c
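(A side note to verify against your tar version: newer GNU tar — 1.28 or later, if I remember correctly — also accepts a name:number pair, e.g. tar cf archive.tar test.c --owner=otherowner:1234 --group=othergroup:1234 , so the listing can show a name that doesn't exist on the local system.)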
$ tar cf archive.tar test.c --owner=543543 --group=543543
$ tar -tvf archive.tar
-rw-rw-r-- 543543/543543 45 2013-01-10 15:06 test.c It seems to work. | {
"source": [
"https://unix.stackexchange.com/questions/61004",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30162/"
]
} |
61,021 | I know a bit about *NIX text editors (currently migrating from nano to vim ), and, after looking around a bit on the Unix & Linux SE, have noticed that vi is used instead of 'vim' in a fair number of questions. I know that 'vim' stands for 'Vi IMproved', and, with that in mind, am wondering why anyone would rather use vi instead of vim. Does vi have any significant advantage over vim? Edit: I think that my question is being misinterpreted. I know that vim is, for the most part, significantly more powerful and feature-complete than vi is. What I want to know is if there are any possible cases where vi has an advantage over vim, such as less memory use, prevalence on *nix systems, etc. | vi is (also) a POSIX standard editor . There are plenty of implementations and vim is likely the most popular. While many traditional Unix-compliant OSes provide vi implementations very close to the standard, vim has added a lot of extra features that make it a double-edged sword. Of course, these extensions are usually designed to ease the editing process and provide useful features and functionalities. However, once you are used to some of them (not the cosmetic ones like syntax coloring but those that change the editor's behavior) you can easily forget they are specific; and using a different implementation, including the ones based on the original BSD code, can be very frustrating. The opposite is also true. This is quite similar to the issue that happens when scripts using non-POSIX bashisms are run under more orthodox shell implementations like dash or ksh . | {
"source": [
"https://unix.stackexchange.com/questions/61021",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20615/"
]
} |
61,074 | Could you please give me an example of using the tree command to filter the result as follows: to ignore directories (say bin , unitTest ) only listing certain files having extensions (say .cpp , .c , .hpp , .h ) providing full path-names of only the resultant files matching the criteria. | One way is to use patterns with the -I and -P switches: tree -f -I "bin|unitTest" -P "*.[ch]|*.[ch]pp" your_dir/ The -f prints the full path for each file, and -I excludes anything matching its pattern (here the two directories), with alternatives separated by a vertical bar. The -P switch includes only the files matching its pattern, i.e. those with the given extensions. | {
"source": [
"https://unix.stackexchange.com/questions/61074",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30188/"
]
} |
61,079 | I'm trying to replicate a single directory (with sub-directories) to a bunch of new directories based on a list. For example I can: mkdir Fred Barney Thelma Louise Foo Bar How would I copy a premade directory (with some empty sub-directories) to the same set of names? For example: cp -r master_folder/ Fred Barney Thelma Louise Foo Bar Any suggestions much appreciated! | | {
"source": [
"https://unix.stackexchange.com/questions/61079",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30191/"
]
} |
61,118 | I'm trying to write a golfing library for postscript. But it needs to be condensed itself. So I need a convenient way to type-in arbitrary bytes within mostly ascii text. I know this can easily be done with absolutely any programming language, but can I do it in vi? ( :help octal was no help). Edit: Here's the resulting golfing library for postscript . Fortunately, I realized early on that golfing the library itself was a stupid idea and I did not do that. | :help i_CTRL-V_digit In insert mode, type Ctrl + V followed by a decimal number (0-255) o then an octal number (o0-o377, i.e., 255 is the maximum value) x then a hex number (x00-xFF, i.e., 255 is the maximum value) u then a 4-hexchar Unicode sequence U then an 8-hexchar Unicode sequence Decimal and octal numbers are limited to three digits.
Decimal numbers less than 100 may include leading zeroes,
which are ignored.
Octal numbers less than 100 oct (i.e., 64)
may include leading zeroes, but they are not required.
Octal numbers greater than or equal to 100 oct may not include leading zeroes
(but you may type a leading o if you want to). You can terminate a number by typing a character
that is not a valid digit for that radix.
For example, Ctrl + V 0 6 5 → A . Ctrl + V 6 5 B → Ab . Ctrl + V o 0 4 1 → ! . Ctrl + V o 4 1 9 → !9 . Regular (one-octet) hex numbers are limited to two digits.
As with the above, you can repeat the radix character
(e.g., Ctrl + V u u 0 0 4 1 → A )
for characters specified by hex codes. o and x are case-insensitive. | {
"source": [
"https://unix.stackexchange.com/questions/61118",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12551/"
]
} |