source_id | question | response | metadata
---|---|---|---|
66,007 | Can anyone explain the difference between declare and typeset, with real-life examples? | In bash , typeset and declare are exactly the same. The only difference is that typeset is considered obsolete. typeset: typeset [-aAfFgilrtux] [-p] name[=value] ...
Set variable values and attributes.
Obsolete. See `help declare'. The man page even lists them in the same breath: declare [-aAfFgilrtux] [-p] [name[=value] ...]
typeset [-aAfFgilrtux] [-p] [name[=value] ...]
Declare variables and/or give them attributes. typeset is portable to some other shells, for example, ksh93 . If you are aiming for cross-shell portability, use typeset (and make sure that the way you are calling it is portable). If you don't care about such portability, use declare . | {
"source": [
"https://unix.stackexchange.com/questions/66007",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28434/"
]
} |
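As a small illustration of the equivalence described in the entry above, here is a minimal bash sketch (the variable names are arbitrary examples, not taken from the original answer):

```bash
#!/usr/bin/env bash
declare -i count=1       # integer attribute via declare
typeset -i total=2       # identical effect in bash; typeset also works in ksh93
count+=5                 # arithmetic assignment: count is now 6, not the string "15"
declare -r tag="v1"      # read-only attribute
typeset -p count total tag   # print the variables together with their attributes
```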
66,197 | Is it possible to cause a kernel panic with a single command line? What would be the most straightforward such command for a sudoing user and what would it be for a regular user, if any? Scenarios that suggest downloading something as a part of the command do not count. | FreeBSD: sysctl debug.kdb.panic=1 Linux (more info in the kernel documentation ): echo c > /proc/sysrq-trigger | {
"source": [
"https://unix.stackexchange.com/questions/66197",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22884/"
]
} |
66,235 | Hi I have read Here that lsof is not an accurate way of getting the number of File Descriptors that are currently open. He recommended to use this command instead cat /proc/sys/fs/file-nr While this command displays the number of FD's, how do you display the list of open file descriptors that the command above just counted? | There are two reasons lsof | wc -l doesn't count file descriptors. One is that it lists things that aren't open files, such as loaded dynamically linked libraries and current working directories; you need to filter them out. Another is that lsof takes some time to run, so can miss files that are opened or closed while it's running; therefore the number of listed open files is approximate. Looking at /proc/sys/fs/file-nr gives you an exact value at a particular point in time. cat /proc/sys/fs/file-nr is only useful when you need the exact figure, mainly to check for resource exhaustion. If you want to list the open files, you need to call lsof , or use some equivalent method such as trawling /proc/*/fd manually. | {
"source": [
"https://unix.stackexchange.com/questions/66235",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32639/"
]
} |
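To illustrate the distinction made in the answer above (an exact counter versus an actual list), the following is a rough sketch of both on Linux; the loop is only an approximation of what trawling /proc/*/fd manually looks like, and needs root to see every process:

```sh
# Exact figures: allocated file handles, unused handles, system-wide maximum
cat /proc/sys/fs/file-nr

# Approximate list: enumerate the open file descriptors of each process
for p in /proc/[0-9]*; do
    ls "$p/fd" 2>/dev/null | sed "s|^|PID ${p#/proc/}: fd |"
done
```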
66,293 | I'm trying to change the password that is asked when running sudo in Ubuntu. Running sudo passwd or sudo passwd root does give me the two new password prompts and it successfully changes the password. But then I can still use my old password when running sudo again for something else. I do have a user with the exact same password but I don't know if that makes a difference. I enabled the root user and I can see the new password does work with the root user account. So the root password is changed but not the password for sudo . How do I change the sudo password? | You're changing root's password. sudo wants your user's password. To change it, try plain passwd , without arguments or running it through sudo . Alternately, you can issue: $ sudo passwd <your username> | {
"source": [
"https://unix.stackexchange.com/questions/66293",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34579/"
]
} |
66,329 | I have a machine with 62GB of RAM, and a trunk that's only 7GB, so I thought I would create a RAM disk and compile there. I am not a Linux expert. I found instructions on the internet to create the RAM disk: mkfs -q /dev/ram1 8192 but I changed the 8192 to 16777216 in an attempt to allocate 16GB of ram disk. I got the following error: mkfs.ext2: Filesystem larger than apparent device size.
Proceed anyway? (y,n) At which point I got spooked and bailed. sudo dmidecode --type 17 | grep Size shows 8x8192MB + 2048MB = 67584 MB but du on /dev gives 804K . Is that the problem? Can I overcome that /dev size? | The best way to create a RAM disk on Linux is tmpfs. It's a filesystem living in RAM, so there is no need for ext2. You can create a 16 GB tmpfs with: mount -o size=16G -t tmpfs none /mnt/tmpfs | {
"source": [
"https://unix.stackexchange.com/questions/66329",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33223/"
]
} |
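If the tmpfs mount from the answer above should come back after a reboot, one common approach is an /etc/fstab entry; the mount point and options below are only examples:

```sh
mkdir -p /mnt/tmpfs
echo 'tmpfs  /mnt/tmpfs  tmpfs  size=16G,noatime  0  0' | sudo tee -a /etc/fstab
sudo mount /mnt/tmpfs    # mounts it now using the new fstab entry
```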
66,548 | I have a git mirror on my disk and when I want to update my repo with git pull it gives me error message: Your configuration specifies to merge with the ref '3.5/master' from the remote, but no such ref was fetched. It also gives me: 1ce6dac..a5ab7de 3.4/bfq -> origin/3.4/bfq
fa52ab1..f5d387e 3.4/master -> origin/3.4/master
398cc33..1c3000a 3.4/upstream-updates -> origin/3.4/upstream-updates
d01630e..6b612f7 3.7/master -> origin/3.7/master
491e78a..f49f47f 3.7/misc -> origin/3.7/misc
5b7be63..356d8c6 3.7/upstream-updates -> origin/3.7/upstream-updates
636753a..027c1f3 3.8/master -> origin/3.8/master
b8e524c..cfcf7b5 3.8/misc -> origin/3.8/misc
* [neuer Zweig] 3.8/upstream-updates -> origin/3.8/upstream-updates When I run make menuconfig it gives me Linux version 3.5.7? What does this mean? How can I update my repo? | Check the branch you are on ( git branch ), check the configuration for that branch (in .../.git/config ), you probably are on the wrong branch or your configuration for it tells to merge with a (now?) non-existent remote branch. | {
"source": [
"https://unix.stackexchange.com/questions/66548",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14569/"
]
} |
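As a sketch of how to inspect and fix the tracking configuration the answer above refers to (an alternative to editing .git/config by hand; the branch name is only an example taken from the question's output):

```sh
git branch -vv                  # shows which upstream each local branch merges with
git fetch origin                # refresh the list of branches that actually exist
git branch --set-upstream-to=origin/3.4/master   # repoint the current branch
git pull
```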
66,579 | There's a lot of questions around this, but it's because there are so many variables. I had to piece together instructions from many sites before I got this working. First, I could not easily set up the 16 solarized colour definitions in gnome-terminal (I did it by hand/clicking only to realise that I'd not got the order/mapping correct). Once I fixed that I moved on... Then I had solarized colours working in vim OK-ish, but there was some odd black backgrounds appearing in certain highlighting. Once I fixed that, I moved on... Then I realised vim went v. wonky once running inside tmux. This is massively debated, but very few of the answers (which mostly say about setting TERM to xterm-256colors ) worked for me. I eventually fixed that too. Solarized is a very nice palette (although I darkened the darkest base colour and lightened the lightest as I prefer the higher contrast and found tmux's 'white' far far too yellow on my calibrated screen - prob fine on a typical uncalibrated laptop screen as they're usually way too blue!), so I'm posting this question with its answer to share the results of my learning. | Solarized gives very specific colours. You can't really achieve these colours in a standard 256 colour palette . The only way you can achieve this is through setting up the exact colours in your terminal emulator, then apps think they're just using standard 16 colours (8 + 8 brights) but these have been accurately mapped to the Solarized palette. Gnome terminal does not provide a very easy way to export/import palettes or profiles, but you can do it with this bash script: #!/bin/sh
DARK_BG='#000014141A1A'
# original: DARK_BG='#00002B2B3636'
LIGHTEST='#FFFFFBFBF0F0'
# original: LIGHTEST='#FDFDF6F6E3E3'
gconftool-2 --set "/apps/gnome-terminal/profiles/Default/use_theme_background" --type bool false
gconftool-2 --set "/apps/gnome-terminal/profiles/Default/use_theme_colors" --type bool false
gconftool-2 --set "/apps/gnome-terminal/profiles/Default/palette" --type string "#070736364242:#D3D301010202:#858599990000:#B5B589890000:#26268B8BD2D2:#D3D336368282:#2A2AA1A19898:#EEEEE8E8D5D5:$DARK_BG:#CBCB4B4B1616:#58586E6E7575:#65657B7B8383:#838394949696:#6C6C7171C4C4:#9393A1A1A1A1:$LIGHTEST"
gconftool-2 --set "/apps/gnome-terminal/profiles/Default/background_color" --type string "$DARK_BG"
gconftool-2 --set "/apps/gnome-terminal/profiles/Default/foreground_color" --type string "#65657B7B8383" Nb. here I've overridden Solarized's darkest and lightest colours. You can use the originals if you like, as commented. Good enough. Now install the Solarized vim colours file by placing that file in ~/.vim/colors/solarized.vim . Now you can tell Vim to use that colour scheme with colo solarized . But this did not quite work and I had to tell Vim to use a 16 colour palette, set t_Co=16 . I stuck both of those in my ~/.vimrc file. Now vim colours were working, but not if it ran inside tmux. This next bit is very confusing. Most advice is about setting TERM outside tmux to xterm-256colors , but when I did that tmux would not even start. It confused me, too: doesn't solarized say that the 256 colour palette is a poor approximation? Well, it is confusing, and anyway, it wasn't working so I needed another way forward: Create a file /tmp/foo containing: xterm-16color|xterm with 16 colors,
colors#16, use=xterm, Then install this with sudo tic /tmp/foo Finally, alias tmux as follows: alias tmux='TERMINFO=/usr/share/terminfo/x/xterm-16color TERM=xterm-16color tmux -2' I now get exactly the right colours in the terminal, in vim, and in vim-inside-tmux. Nb. the -2 option tells tmux to use a 256 colour palette, which is really confusing because the env variables would appear to be telling it otherwise... I genuinely don't know, and I'm afraid I don't really care to climb that learning curve because I now have a beautiful coloured terminal that Just Works. | {
"source": [
"https://unix.stackexchange.com/questions/66579",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23542/"
]
} |
66,581 | I wish to use shell to invoke a script on a remote server.
I would like to capture the output of that script (its logging messages) and the exit code it returns. If I do this: ssh user@server /usr/local/scripts/test_ping.sh
echo "$?" I get the exit code but can't capture the remote logging messages . If I do this: local RESULTS=$(ssh user@server /usr/local/scripts/test_ping.sh)
echo "$?"
LOG "${RESULTS}"; I get to log my output using my LOG function but can't seem to get a correct exit code; I assume the code I get is the code from the variable assignment. I would like to continue to use my LOG function to capture all output as it formats and sends things to a file, syslog, and the screen for me. How can I capture results in a var AND get the correct exit code from the remote script? | The reason you are not getting the correct error code is that local is actually the last thing executed. You need to declare the variable as local prior to running the command. local RESULTS
RESULTS=$(ssh user@server /usr/local/scripts/test_ping.sh)
echo $? You can see the issue here: $ bar() { foo=$(ls asdkjasd 2>&1); echo $?; }; bar
2
$ bar() { local foo=$(ls asdkjasd 2>&1); echo $?; }; bar
0
$ bar() { local foo; foo=$(ls asdkjasd 2>&1); echo $?; }; bar
2 | {
"source": [
"https://unix.stackexchange.com/questions/66581",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33375/"
]
} |
66,606 | I installed tmux locally (without root privileges). I also created my .tmux.conf file in my home directory with the following lines: unbind-key C-b
set -g prefix C-o
bind-key C-o send-prefix However, tmux does not seem to be sourcing this file (my bind key is still C-b ). I have tried closing and re-opening my ssh session (this is on a remote machine) with no success. What could be happening? | It's most likely that you haven't started a new tmux server process. You say that you've closed your ssh session and started a new one, but that wouldn't have any effect on the tmux server; one of the main benefits of using tmux is that sessions can survive that type of activity. Try running tmux ls to check if the server is still running. If it isn't, it should complain about that. If you instead get a list of sessions, attach to each of those in turn and close them. The tmux server process will die when the last session is closed. Then the next time you start a new session, a new server process will be created and it will read the tmux.conf file. If you don't want to close the existing sessions, you can ask the tmux server to read the configuration file with tmux source ~/.tmux.conf . | {
"source": [
"https://unix.stackexchange.com/questions/66606",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
66,766 | It looks like bc doesn't support float operations. When I do echo 1/8 | bc it gets me a zero. I checked the manual page bc (1) , but it doesn't even mention float , so I wonder if it's supported? | bc doesn't do floating point, but it does do fixed-precision decimal arithmetic. The -l flag Hauke mentions loads a math library for e.g. trig functions, but it also means [...] the default scale is 20 scale is one of a number of "special variables" mentioned in the man page. You can set it: scale=4 Anytime you want (whether -l was used or not). It refers to the number of digits kept after the decimal point. In other words, subsequent results will be rounded down to that number of digits after the decimal point (== fixed precision). The default scale sans -l is 0, meaning results are rounded (down) to whole numbers. | {
"source": [
"https://unix.stackexchange.com/questions/66766",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11318/"
]
} |
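A short transcript illustrating the scale behaviour described in the entry above:

```sh
$ echo '1/8' | bc            # default scale is 0, so the result is truncated to 0
0
$ echo 'scale=3; 1/8' | bc
.125
$ echo '1/8' | bc -l         # -l raises the default scale to 20
.12500000000000000000
```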
66,795 | Is it possible to check the progress of a running cp process? Some processes respond to various signals so that you can check their status. I know that I can run cp with the -v parameter, but what if I forgot to do that, cp has been running for a very long time, and I want to know which file is being copied, or how many were already copied? | Yes: run stat on the source file and on the target file to get their sizes, e.g. stat -c "%s" /bin/ls , then compare the two values to get the percentage of data already copied. A very basic implementation looks like this: function cpstat()
{
local pid="${1:-$(pgrep -xn cp)}" src dst
[[ "$pid" ]] || return
while [[ -f "/proc/$pid/fd/3" ]]; do
read src dst < <(stat -L --printf '%s ' "/proc/$pid/fd/"{3,4})
(( src )) || break
printf 'cp %d%%\r' $((dst*100/src))
sleep 1
done
echo
} | {
"source": [
"https://unix.stackexchange.com/questions/66795",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30258/"
]
} |
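Assuming the cpstat function above has been pasted into the shell, usage might look like this (the paths and PID are placeholders):

```sh
cp /path/to/big.file /mnt/backup/ &   # start the copy in the background
cpstat                                # finds the newest cp via pgrep and prints "cp NN%"
cpstat 12345                          # or pass the PID of a specific cp explicitly
```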
66,853 | If I call some command, for instance an echo I can use the results from that command in several other commands with tee . Example: echo "Hello world!" | tee >(command1) >(command2) >(command3) With cat I can collect the results of several commands. Example: cat <(command1) <(command2) <(command3) I would like to be able to do both things at the same time, so that I can use tee to call those commands on the output of something else (for instance the echo I've written) and then collect all their results on a single output with cat . It's important to keep the results in order, this means the lines in the output of command1 , command2 and command3 should not be intertwined, but ordered as the commands are (as it happens with cat ). There may be better options than cat and tee but those are the ones I know so far. I want to avoid using temporary files because the size of the input and output may be large. How could I do this? PD: another problem is that this happens in a loop, that makes harder handling temporary files. This is the current code I have and it works for small testcases, but it creates infinite loops when reading and writing from the auxfile in some way I don't understand. somefunction()
{
if [ $1 -eq 1 ]
then
echo "Hello world!"
else
somefunction $(( $1 - 1 )) > auxfile
cat <(command1 < auxfile) \
<(command2 < auxfile) \
<(command3 < auxfile)
fi
} Readings and writings in auxfile seem to be overlapping, causing everything to explode. | You could use a combination of GNU stdbuf and pee from moreutils : echo "Hello world!" | stdbuf -o 1M pee cmd1 cmd2 cmd3 > output pee popen(3) s those 3 shell command lines and then fread s the input and fwrite s it all three, which will be buffered to up to 1M. The idea is to have a buffer at least as big as the input. This way even though the three commands are started at the same time, they will only see input coming in when pee pclose s the three commands sequentially. Upon each pclose , pee flushes the buffer to the command and waits for its termination. That guarantees that as long as those cmdx commands don't start outputting anything before they've received any input (and don't fork a process that may continue outputting after their parent has returned), the output of the three commands won't be interleaved. In effect, that's a bit like using a temp file in memory, with the drawback that the 3 commands are started concurrently. To avoid starting the commands concurrently, you could write pee as a shell function: pee() (
input=$(cat; echo .)
for i do
printf %s "${input%.}" | eval "$i"
done
)
echo "Hello world!" | pee cmd1 cmd2 cmd3 > out But beware that shells other than zsh would fail for binary input with NUL characters. That avoids using temporary files, but that means the whole input is stored in memory. In any case, you'll have to store the input somewhere, in memory or a temp file. Actually, it's quite an interesting question, as it shows us the limit of the Unix idea of having several simple tools cooperate to a single task. Here, we'd like to have several tools cooperate to the task: a source command (here echo ), a dispatcher command ( tee ), some filter commands ( cmd1 , cmd2 , cmd3 ) and an aggregation command ( cat ). It would be nice if they could all run together at the same time and do their hard work on the data that they're meant to process as soon as it's available. In the case of one filter command, it's easy: src | tee | cmd1 | cat All commands are run concurrently, cmd1 starts to munch data from src as soon as it's available. Now, with three filter commands, we can still do the same: start them concurrently and connect them with pipes (the numbers label the pipes):

             +--2--> cmd1 --5--+
             |                 |
 src --1--> tee --3--> cmd2 --6--> cat --> out
             |                 |
             +--4--> cmd3 --7--+

Which we can do relatively easily with named pipes : pee() (
mkfifo tee-cmd1 tee-cmd2 tee-cmd3 cmd1-cat cmd2-cat cmd3-cat
{ tee tee-cmd1 tee-cmd2 tee-cmd3 > /dev/null <&3 3<&- & } 3<&0
eval "$1 < tee-cmd1 1<> cmd1-cat &"
eval "$2 < tee-cmd2 1<> cmd2-cat &"
eval "$3 < tee-cmd3 1<> cmd3-cat &"
exec cat cmd1-cat cmd2-cat cmd3-cat
)
echo abc | pee 'tr a A' 'tr b B' 'tr c C' (above the } 3<&0 is to work around the fact that & redirects stdin from /dev/null , and we use <> to avoid the opening of the pipes to block until the other end ( cat ) has opened as well) Or to avoid named pipes, a bit more painfully with zsh coproc: pee() (
n=0 ci= co= is=() os=()
for cmd do
eval "coproc $cmd $ci $co"
exec {i}<&p {o}>&p
is+=($i) os+=($o)
eval i$n=$i o$n=$o
ci+=" {i$n}<&-" co+=" {o$n}>&-"
((n++))
done
coproc :
read -p
eval tee /dev/fd/$^os $ci "> /dev/null &" exec cat /dev/fd/$^is $co
)
echo abc | pee 'tr a A' 'tr b B' 'tr c C' Now, the question is: once all the programs are started and connected, will the data flow? We've got two constraints: tee feeds all its outputs at the same rate, so it can only dispatch data at the rate of its slowest output pipe; and cat will only start reading from the second pipe (pipe 6 in the drawing above) when all data has been read from the first (5). What that means is that data will not flow in pipe 6 until cmd1 has finished. And, like in the case of the tr b B above, that may mean that data will not flow in pipe 3 either, which means it will not flow in any of pipes 2, 3 or 4 since tee feeds at the slowest rate of all 3. In practice those pipes have a non-null size, so some data will manage to get through, and on my system at least, I can get it to work up to: yes abc | head -c $((2 * 65536 + 8192)) | pee 'tr a A' 'tr b B' 'tr c C' | uniq -c Beyond that, with yes abc | head -c $((2 * 65536 + 8192 + 1)) | pee 'tr a A' 'tr b B' 'tr c C' | uniq -c We've got a deadlock, where we're in this situation (pipes 3 and 6 are full):

             +--2--> cmd1 --5--+
             |                 |
 src --1--> tee ==3==> cmd2 ==6==> cat --> out
             |                 |
             +--4--> cmd3 --7--+

We've filled pipes 3 and 6 (64kiB each). tee has read that extra byte and has fed it to cmd1 , but it's now blocked writing on pipe 3, waiting for cmd2 to empty it; cmd2 can't empty it because it's blocked writing on pipe 6, waiting for cat to empty it; cat can't empty it because it's waiting until there's no more input on pipe 5; cmd1 can't tell cat there's no more input because it is itself waiting for more input from tee ; and tee can't tell cmd1 there's no more input because it's blocked... and so on. We've got a dependency loop and thus a deadlock. Now, what's the solution? Bigger pipes 3 and 4 (big enough to contain all of src 's output) would do it. We could do that for instance by inserting pv -qB 1G between tee and cmd2/3, where pv could store up to 1G of data waiting for cmd2 and cmd3 to read them. That would mean two things though: that's using potentially a lot of memory, and moreover duplicating it; and that's failing to have all 3 commands cooperate, because cmd2 would in reality only start to process data when cmd1 has finished. A solution to the second problem would be to make pipes 6 and 7 bigger as well. Assuming that cmd2 and cmd3 produce as much output as they consume, that would not consume more memory. The only way to avoid duplicating the data (in the first problem) would be to implement the retention of data in the dispatcher itself, that is implement a variation on tee that can feed data at the rate of the fastest output (holding data to feed the slower ones at their own pace). Not really trivial. So, in the end, the best we can reasonably get without programming is probably something like (Zsh syntax):
max_hold=1G
pee() (
n=0 ci= co= is=() os=()
for cmd do
if ((n)); then
eval "coproc pv -qB $max_hold $ci $co | $cmd $ci $co | pv -qB $max_hold $ci $co"
else
eval "coproc $cmd $ci $co"
fi
exec {i}<&p {o}>&p
is+=($i) os+=($o)
eval i$n=$i o$n=$o
ci+=" {i$n}<&-" co+=" {o$n}>&-"
((n++))
done
coproc :
read -p
eval tee /dev/fd/$^os $ci "> /dev/null &" exec cat /dev/fd/$^is $co
)
yes abc | head -n 1000000 | pee 'tr a A' 'tr b B' 'tr c C' | uniq -c | {
"source": [
"https://unix.stackexchange.com/questions/66853",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33538/"
]
} |
66,878 | I have a file named .ignore . I need to replace the projdir value. For example: ignore \..*
ignore README
projdir Snake I need to replace Snake with, for example, "PacMan". I read the man page, but I have no idea what to do. | Search for a line that starts with projdir , and replace the whole line with a new one: sed -i 's/^projdir .*$/projdir PacMan/' .ignore ^ and $ are beginning/end-of-line markers, so the pattern will match the whole line; .* matches anything. The -i tells sed to write the changes directly to .ignore , instead of just outputting them | {
"source": [
"https://unix.stackexchange.com/questions/66878",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33550/"
]
} |
66,901 | I have an Arduino which sometimes gets bound to /dev/ttyUSB0 and other times to /dev/ttyUSB1 , making my script fail. I do not want to enumerate all the possibilities of where my device could be, but I'd rather have it be bound somewhere static, e.g. /dev/arduino . How do I achieve that? | As suggested, you can add some udev rules. I edited the /etc/udev/rules.d/10-local.rules to contain: ACTION=="add", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", SYMLINK+="my_uart" You can check for the variables of your device by running udevadm info -a -p $(udevadm info -q path -n /dev/ttyUSB0) There is a more in depth guide you can read on http://www.reactivated.net/writing_udev_rules.html | {
"source": [
"https://unix.stackexchange.com/questions/66901",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12471/"
]
} |
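After adding a rule like the one in the answer above, the rules can usually be applied without rebooting; replug the device afterwards and check that the symlink from the rule exists:

```sh
sudo udevadm control --reload-rules
sudo udevadm trigger
ls -l /dev/my_uart
```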
66,990 | Is there any way to create a virtual file, such that reading from the file actually reads from the stdout of a command, and writing to the file actually writes to the stdin of a command? So far I have kludged this with an inotifywait on a file, which calls a command when the file is modified, taking its input from the file and writing back to it. I don't like that the inotifywait has to be constantly restarted though (and I have to ensure that it is always running). I only use this file perhaps twice a week. | You may be looking for a named pipe . mkfifo f
{
echo 'V cebqhpr bhgchg.'
sleep 2
echo 'Urer vf zber bhgchg.'
} >f
rot13 < f Writing to the pipe doesn't start the listening program. If you want to process input in a loop, you need to keep a listening program running. while true; do rot13 <f >decoded-output-$(date +%s.%N); done Note that all data written to the pipe is merged, even if there are multiple processes writing. If multiple processes are reading, only one gets the data. So a pipe may not be suitable for concurrent situations. A named socket can handle concurrent connections, but this is beyond the capabilities for basic shell scripts. At the most complex end of the scale are custom filesystems , which lets you design and mount a filesystem where each open , write , etc., triggers a function in a program. The minimum investment is tens of lines of nontrivial coding, for example in Python . If you only want to execute commands when reading files, you can use scriptfs or fuseflt . | {
"source": [
"https://unix.stackexchange.com/questions/66990",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15359/"
]
} |
67,006 | How can one run multiple programs in the background with single command? I have tried the commands below, but they do not work. nohup ./script1.sh & && nohup ./script2.sh &
-bash: syntax error near unexpected token '&&'
nohup ./script1.sh & ; nohup ./script2.sh &
-bash: syntax error near unexpected token ';' | From a shell syntax point of view, & separates commands like ; / | / && ... (though of course with different semantic). So it's just: cmd1 & cmd2 & cmd3 & | {
"source": [
"https://unix.stackexchange.com/questions/67006",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28434/"
]
} |
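Applied to the scripts from the question, that looks like the following; the wait is optional and simply blocks until both background jobs have finished:

```sh
nohup ./script1.sh & nohup ./script2.sh &
wait
```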
67,007 | I have a list of relative paths such as this: dir1
dir2
dir2/dir3
dir2/file1
dir3/file2
dir3/dir4
dir3/dir4/file3 In the example above, the specifier dir2/file1 (for example) is redundant, because the dir2 entry would include this file. Want I want to do, essentially, is remove redundant paths from a given list of paths. The above example would output the following: dir1
dir2
dir3/file2
dir3/dir4 Note that the files and directories specified need not actually exist on the filesystem. I am willing to use any common Unix command (sed, awk, perl, etc.). | From a shell syntax point of view, & separates commands like ; / | / && ... (though of course with different semantic). So it's just: cmd1 & cmd2 & cmd3 & | {
"source": [
"https://unix.stackexchange.com/questions/67007",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33634/"
]
} |
67,095 | I have a drive with this configuration: fdisk -l
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000f1b8b
Device Boot Start End Blocks Id System
/dev/sda1 * 1 2612 20971520 83 Linux
/dev/sda3 60736 60801 525312 82 Linux swap / Solaris There is 478GB unallocated space, how would I go about adding this space to /dev/sda1 without losing the data that is currently on /dev/sda1 ? (the filesystem is ext4). NOTE: it is a server with only SSH, no GUI for running stuff like gparted. | Enlarge the partition: fdisk -u /dev/sda . p to print the partition table, take note of the number, start, end, type of sda1. Delete it: d : Recreate it using command n with same number (1), start and type but with a bigger end (taking care not to overlap with other partitions). Try to align things on a megabyte boundary that is for end , make it a multiple of 2048 minus 1. Change the type if needed with t (for partitions holding an extX or btrfs filesystem, the default of 83 is fine). Then w to write and q to quit. The partition table will have been modified but the kernel will not be able to take that into account as some partitions are mounted. However, if in-use partitions were only enlarged, you should be able to force the kernel to take the new layout with: partx /dev/sda If that fails, you'll need to reboot. The system should boot just fine. Then, resize the filesystem so it spreads to the extent of the enlarged partition (might be located in /sbin ): resize2fs /dev/sda1 Which for ext4 will work just fine even on a live FS. | {
"source": [
"https://unix.stackexchange.com/questions/67095",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33689/"
]
} |
67,192 | I have been trying to create a find command string that will find all files that end with a number 1-99 but exclude all others. e.g. I want to find myfile1 myfile99 but not myfile456 and not myfilebackup The regex I'm thinking of is myfile[1-9]{1,2} but I can't get this to work with find. find . -regex '.*myfile[0-9]{1,2}' OR find . -iname 'myfile[0-9]{1,2}' From what I can see it's the {1,2} part that is not working. (by the way can you use -name -regex interchangably?) Any help appreciated. | You could try find . -regex '.*myfile[0-9][0-9]?' or find . \( -name "myfile[0-9][0-9]" -o -name "myfile[0-9]" \) | {
"source": [
"https://unix.stackexchange.com/questions/67192",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24710/"
]
} |
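If the {1,2} interval syntax is preferred, GNU find can be switched to an extended-regex dialect; this is GNU-specific, so it is offered here only as a possible alternative:

```sh
find . -regextype posix-extended -regex '.*myfile[0-9]{1,2}'
```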
67,199 | rootwait and rootdelay are used in situations when the filesystem is not immediately available, for example if it's detected asynchroneously or mounted via usb. The thing is, it should be obvious based on the root bootarg if that's the case or not, so why can't the kernel realize automatically that it needs to wait for the filesystem to appear? Are there some technical constraints preventing this automatization from being implemented? | Sometimes the OS can't distinguish a peripheral that's slow to respond from a peripheral that's not there or completely hosed. The most obvious example is a root filesystem coming from the network (TFTP, NFS) where a slow network link or an overloaded server are difficult to distinguish from a severed network link or a crashed server. A timeout tells the kernel when to give up. This can also happen with disks that are slow to spin up, RAID arrays that need to be verified and so on. rootdelay instructs the kernel not to give up immediately if the device isn't available. The kernel can't know whether a SCSI drive is a local disk or some kind of RAID bay. rootwait is provided to wait indefinitely. It's not always desirable, for example a system may want to fall back to a different root filesystem if the normal one takes too long to respond. | {
"source": [
"https://unix.stackexchange.com/questions/67199",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12464/"
]
} |
67,334 | I'm using a Match block in OpenSSH's /etc/ssh/sshd_config (on debian) to restrict some users to SFTP: # my stuff
Match group sftponly
X11Forwarding no
AllowTcpForwarding no
ForceCommand internal-sftp -u 0002
ChrootDirectory %h As you can see, I use a #my stuff comment in custom config files to easily distinguish default configurations from those I made (and I put those at the end of the config files). Now I wanted to append the directive UseDNS no to the configuration (to speed up logins) but OpenSSH said Directive 'UseDNS' is not allowed within a Match block . Now I was wondering whether there is a syntax like End Match to end those match blocks? | To end up a match block with openssh 6.5p1 or above, use the line: Match all Here is a piece of code, taken from my /etc/ssh/sshd_config file: # Change to no to disable tunnelled clear text passwords
PasswordAuthentication no
Match host 192.168.1.12
PasswordAuthentication yes
Match all
X11Forwarding yes
X11DisplayOffset 10 A line with a sole Match won't work. (It didn't work for me, sshd refused to start) | {
"source": [
"https://unix.stackexchange.com/questions/67334",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26778/"
]
} |
67,435 | ~/bin$ cat setbrightness
id
echo $1 > /sys/class/backlight/intel_backlight/brightness
~/bin$ whoami
rag
~/bin$ sudo -l
Matching Defaults entries for rag on this host:
env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin
User rag may run the following commands on this host:
(root) NOPASSWD: /home/rag/bin/setbrightness
(ALL : ALL) ALL EDIT: /etc/sudoers Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
# Host alias specification
# User alias specification
# Cmnd alias specification
Cmnd_Alias SETBRIGHTNESS = /home/rag/bin/setbrightness
# User privilege specification
root ALL=(ALL:ALL) ALL
rag ALL=NOPASSWD:SETBRIGHTNESS
# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL
# See sudoers(5) for more information on "#include" directives:
#includedir /etc/sudoers.d Run command ~/bin$ sudo /home/rag/bin/setbrightness 3000
[sudo] password for rag: Can someone please point out why the password is prompted when running sudo for that command? | It's the order - if I replicate your sudoers file with: Cmnd_Alias TESTCOMM = /bin/more
root ALL=(ALL:ALL) ALL
dave ALL=NOPASSWD:TESTCOMM
%admin ALL=(ALL) ALL
%sudo ALL=(ALL:ALL) ALL I get the same behaviour e.g. doing sudo more asks for a password, same as sudo . However... Cmnd_Alias TESTCOMM = /bin/more
root ALL=(ALL:ALL) ALL
%admin ALL=(ALL) ALL
%sudo ALL=(ALL:ALL) ALL
dave ALL=NOPASSWD:TESTCOMM This lets me use more without a password, just prompting for anything else. I guess this is due to the order in which things are checked in sudo (bottom-to-top). | {
"source": [
"https://unix.stackexchange.com/questions/67435",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23448/"
]
} |
67,466 | I have noticed that while on Ubuntu, if I type the following: mc and it isn't installed, I get the message below: The program 'mc' is currently not installed. You can install it by typing: sudo apt-get install mc However in Debian, that is not available. It just gives a "-bash: /usr/bin/mc: No such file or directory" message. How can I implement the same functionality in bash command line on Debian? Yes, I know that if it is package suggestion that I want, I can simply do a regex search using apt-cache search . However I was hoping for the simpler suggestion immediately on typing the name of the program. As per discussions, the functionality is provided by the package command-not-found . However even after installing it, and also installing bash-completion package, this isn't available on the Debian bash shell. | The reason that installing command-not-found did not start providing suggestions for non-installed packages was that I had missed a small notification from dpkg as part of the install. One is supposed to run the command update-command-not-found immediately after running apt-get install command-not-found . In fact dpkg prompts for running this command. | {
"source": [
"https://unix.stackexchange.com/questions/67466",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33920/"
]
} |
67,503 | I have a bunch of .zip files in several directories: Fol1/Fol2
Fol3
Fol4/Fol5 How would I move them all to a common base folder? | Go to the toplevel directory of the tree containing the zip files ( cd … ), then run mv **/*.zip /path/to/single/target/directory/ This works out of the box in zsh. If your shell is bash, you'll need to run shopt -s globstar first (you can and should put this command in your ~/.bashrc ). If your shell is ksh, you'll need to run set -o globstar first (put it in your ~/.kshrc ). Alternatively, use find , which works everywhere with no special preparation but is more complicated: find . -name '*.zip' -exec mv {} /path/to/single/target/directory/ \; If you want to remove empty directories afterwards, in zsh: rmdir **/*(/^Fod) In bash or ksh: rmdir **/*/ and repeat as long as there are empty directories to remove. Alternatively, in any shell find . -depth -type d -empty -exec rmdir {} \; | {
"source": [
"https://unix.stackexchange.com/questions/67503",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9431/"
]
} |
67,508 | While logged in, I can do the following: mkdir foo
touch foo/bar
chmod 400 foo/bar
chmod 500 foo Then I can open vim (not as root ), edit bar , force a write with w! , and the file is modified. How can I make the operating system disallow any file modification? UPDATE Mar 02 2017 chmod 500 foo is a red herring: the write permission on a directory has nothing to do with the ability to modify a file's contents--only the ability to create and delete files. chmod 400 foo/bar does in fact prevent the file's contents from being changed. But , it does not prevent a file's permissions from being changed--a file's owner can always change his file's permissions (assuming they can access the file i.e. execute permission on all ancestor directories). In fact, strace(1) reveals that this is what vim (7.4.576 Debian Jessie) is doing--vim calls chmod(2) to temporarily add the write permission for the file's owner, modifies the file, and then calls chmod(2) again to remove the write permission. That is why using chattr +i works--only root can call chattr -i . Theoretically, vim (or any program) could do the same thing with chattr as it does with chmod on an immutable file if run as root. | You can set the "immutable" attribute with most filesystems in Linux. chattr +i foo/bar To remove the immutable attribute, you use - instead of + : chattr -i foo/bar To see the current attributes for a file, you can use lsattr: lsattr foo/bar The chattr(1) manpage provides a description of all the available attributes. Here is the description for i : A file with the `i' attribute cannot be modified: it cannot be deleted
or renamed, no link can be created to this file and no data can be
written to the file. Only the superuser or a process possessing the
CAP_LINUX_IMMUTABLE capability can set or clear this attribute. | {
"source": [
"https://unix.stackexchange.com/questions/67508",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33945/"
]
} |
67,537 | I'm currently using Fedora 18 gnome-terminal , then started tmux multiplexer in it. After I connected to a CentOS 5 server via ssh command, I find: ls result has no color tmux , screen , hexedit , htop all failed to start with error message like: open terminal failed: missing or unsuitable terminal: screen-256color It seems that ssh passes the $TERM environment variable to the server, but I can't find it in /etc/ssh/ssh_config file of Fedora 18. Although I can manually change the $TERM variable on the server, each time I connect, it happens again. So how to prevent it? | $TERM is to tell applications what terminal they're talking to so they know how to talk to it. Change it to a value supported by the remote host and that matches as closely as possible your terminal ( screen ). Most Linux systems should at least have a screen terminfo entry. If not, screen implements a superset of vt100 and vt100 is universal. So: TERM=screen ssh host or TERM=vt100 ssh host If you do need the 256 color support, you could try xterm-256color which should be close enough ( screen supports 256 colors the same way xterm does) and tell applications your terminal application supports 256 colors and tell them how to use them. Or you can install the terminfo entry on the remote host. infocmp -x | ssh -t root@remote-host '
cat > "$TERM.info" && tic -x "$TERM.info"' | {
"source": [
"https://unix.stackexchange.com/questions/67537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7332/"
]
} |
67,539 | I am trying to set up rsync to synchronize my main web server to the remote server by adding newly generated file to the latter. Here is the command that I use: rsync -avh --update -e "ssh -i /path/to/thishost-rsync-key" remoteuser@remotehost:/foo/bar /foo/bar But it seems that the web server actually transfers all files despite the '--update' flag. I have tried different flag combinations (e.g. omitting '-a' and using'-uv' instead) but none helped. How can I modify the rsync command to send out only newly added files? | From man rsync : --ignore-existing skip updating files that exist on receiver --update does something slightly different, which is probably why you are getting unexpected results (see man rsync ): This forces rsync to skip any files which exist on the destination and have a modified time that is newer than the source file . (If an existing destination file has a modification time equal to the source file's, it will be updated if the sizes are different.) | {
"source": [
"https://unix.stackexchange.com/questions/67539",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31464/"
]
} |
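Put together with the command from the question, the suggestion above would look something like this (the paths and key location are the question's own placeholders):

```sh
rsync -avh --ignore-existing -e "ssh -i /path/to/thishost-rsync-key" \
    remoteuser@remotehost:/foo/bar /foo/bar
```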
67,635 | I would like to get a list of all the processes whose parent is $pid. This is the simplest way I've come up with: pstree -p $pid | tr "\n" " " |sed "s/[^0-9]/ /g" |sed "s/\s\s*/ /g" Is there any command, or any simpler way to get the list of children processes? Thanks! | Yes, using the -P option of pgrep , i.e pgrep -P 1234 will get you a list of child process ids. | {
"source": [
"https://unix.stackexchange.com/questions/67635",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29319/"
]
} |
67,668 | I would like to get a list of all the processes that descend (e.g. children, grand-children, etc) from $pid . This is the simplest way I've come up with: pstree -p $pid | tr "\n" " " |sed "s/[^0-9]/ /g" |sed "s/\s\s*/ /g" Is there any command, or any simpler way to get the full list of all descendant processes? | The following is somewhat simpler, and has the added advantage of ignoring numbers in the command names: pstree -p $pid | grep -o '([0-9]\+)' | grep -o '[0-9]\+' Or with Perl: pstree -p $pid | perl -ne 'print "$1\n" while /\((\d+)\)/g' We're looking for numbers within parentheses so that we don't, for example, give 2 as a child process when we run across gif2png(3012) . But if the command name contains a parenthesized number, all bets are off. There's only so far text processing can take you. So I also think that process groups are the way to go. If you'd like to have a process run in its own process group, you can use the 'pgrphack' tool from the Debian package 'daemontools': pgrphack my_command args Or you could again turn to Perl: perl -e 'setpgid or die; exec { $ARGV[0] } @ARGV;' my_command args The only caveat here is that process groups do not nest, so if some process is creating its own process groups, its subprocesses will no longer be in the group that you created. | {
"source": [
"https://unix.stackexchange.com/questions/67668",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29319/"
]
} |
67,672 | I'm using a third party proprietary application on CentOS 5.4, which is known to malfunction on pristine 5.9 or later. I would like to incrementally upgrade to the latest CentOS version where that application works. Since yum update brings me to 5.9 right away, that's not an option. I thought of burning all four Install-DVDs 5.5 through 5.8 and installing each to find out the hard way where the breakage sets in, but maybe there's an easier way frobbing the /etc/yum.conf.d files to go from 5.4 to 5.5 etc. Can anyone provide some guidance? (Sadly, it is not an option to ask the vendor of that application for a fix, since it was acquired by a big blue company and terminated a little while after that.) Edit : Further investigation revealed it is an incompatibility in the GNU C library. With glibc-2.5-58 the app runs fine on CentOS 5.9, with stock glibc-2.5-107 it hangs. I now point LD_LIBRARY_PATH at the older glibc when running the app. | The following is somewhat simpler, and has the added advantage of ignoring numbers in the command names: pstree -p $pid | grep -o '([0-9]\+)' | grep -o '[0-9]\+' Or with Perl: pstree -p $pid | perl -ne 'print "$1\n" while /\((\d+)\)/g' We're looking for numbers within parentheses so that we don't, for example, give 2 as a child process when we run across gif2png(3012) . But if the command name contains a parenthesized number, all bets are off. There's only so far text processing can take you. So I also think that process groups are the way to go. If you'd like to have a process run in its own process group, you can use the 'pgrphack' tool from the Debian package 'daemontools': pgrphack my_command args Or you could again turn to Perl: perl -e 'setpgid or die; exec { $ARGV[0] } @ARGV;' my_command args The only caveat here is that process groups do not nest, so if some process is creating its own process groups, its subprocesses will no longer be in the group that you created. | {
"source": [
"https://unix.stackexchange.com/questions/67672",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7107/"
]
} |
67,673 | I am running the latest version of tmux (from the git repository) and xclip (0.12), and I would like to be able to use Emacs-like keyboard bindings to move around the text in copy-mode , copy ( M-w ) selections to the clipboard, and paste ( C-y ) from/to the copy buffer to the clipboard. So far I have been able to paste text with C-y , and move around in copy-mode with Emacs-like keyboard bindings, but I am still unable to copy text from a tmux buffer (e.g. in copy-mode ) I found this thread for copying the entire buffer to the clipboard (and viceversa), but it does not seem to be working for me. Also, in the tmux-users mail list I was told that recent versions of tmux (only in the git repo) provide a command called copy-pipe . The man page says the following about this command: One command in accepts an argument, copy-pipe, which copies the
selection and pipes it to a command. For example the following will
bind βC-qβ to copy the selection into /tmp as well as the paste
buffer: bind-key -temacs-copy C-q copy-pipe "cat >/tmp/out" It looks like copy-pipe is meant to be used in part to pipe the selection to another command. There also seem to be some typos in this description and in the command (what is temacs-copy ?) Either way, what I would like to do is: Copying: Enter copy mode Move to the text I want to copy using Emacs navigation commands (i.e. C-f , C-b , M-f , M-b , C-a , C-e etc. to move the cursor). No prefix for any of these. Copy the selected text into the clipboard with: M-w ( no prefix either) Pasting: I would like to be able to type C-y ( without having to enter copy-mode ) to paste text in the terminal ( no prefix either) I have tried the following for copying without luck: bind-key -n M-w run "tmux save-buffer - | xclip -i -selection clipboard" However, pasting works great: bind-key -n C-y run "xclip -o | tmux load-buffer - ; tmux paste-buffer" The odd thing is that I know that the " xclip -i -selection clipboard " part of the copy command above works well, since I can copy things to the clipboard in the command line, e.g.: echo "Hello world. How are you?" | xclip -i -selection clipboard With all this, how can I copy a selection from copy mode to the clipboard? | Use the following tmux.conf with copy-pipe in the new versions of tmux (1.8+): set -g mouse on
# To copy:
bind-key -n -t emacs-copy M-w copy-pipe "xclip -i -sel p -f | xclip -i -sel c "
# To paste:
bind-key -n C-y run "xclip -o | tmux load-buffer - ; tmux paste-buffer" prefix+[ into copy-mode select content with mouse(hold) M-w to copy that part into system clipboard C-y the paste it inside tmux, C-v to paste it inside other regular application like web browser. | {
"source": [
"https://unix.stackexchange.com/questions/67673",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
67,702 | [root@localhost ~] vgdisplay
--- Volume group ---
VG Name vg_root
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 297,59 GiB
PE Size 4,00 MiB
Total PE 76182
Alloc PE / Size 59392 / 232,00 GiB
Free PE / Size 16790 / 65,59 GiB
VG UUID XXXXXXXXXX PV: [root@localhost ~] pvdisplay
--- Physical volume ---
PV Name /dev/mapper/udisks-luks-uuid-ASDFASDF
VG Name vg_root
PV Size 297,59 GiB / not usable 2,00 MiB
Allocatable yes
PE Size 4,00 MiB
Total PE 76182
Free PE 16790
Allocated PE 59392
PV UUID YYYYYYYYYYY So I have a VG with 65 GByte free space. But when I want to shrink this Volume Group about ~50 GByte: pvresize -tv --setphysicalvolumesize 247G /dev/mapper/udisks-luks-uuid-ASDFASDF
Test mode: Metadata will NOT be updated and volumes will not be (de)activated.
Using physical volume(s) on command line
Test mode: Skipping archiving of volume group.
/dev/mapper/udisks-luks-uuid-ASDFASDF: Pretending size is 517996544 not 624087040 sectors.
Resizing volume "/dev/mapper/udisks-luks-uuid-ASDFASDF" to 624087040 sectors.
Resizing physical volume /dev/mapper/udisks-luks-uuid-ASDFASDF from 0 to 63231 extents.
/dev/mapper/udisks-luks-uuid-ASDFASDF: cannot resize to 63231 extents as later ones are allocated.
0 physical volume(s) resized / 1 physical volume(s) not resized
Test mode: Wiping internal cache
Wiping internal VG cache So the error message is: cannot resize to 63231 extents as later ones are allocated. Q: How can I defrag the vg_root so I can remove the unneeded part of it? P.S.: I already found out that I only need to resize the PV to resize the VG; or are there any better commands to do the VG resize (e.g. what can I do if I had several VGs on one PV? ...)? | You can use pvmove to move those extents to the beginning of the device or another device: sudo pvmove --alloc anywhere /dev/device:60000-76182 Then pvmove chooses where to move the extents to, or you can specify where to move them. See pvs -v --segments /dev/device to see what extents are currently allocated. | {
"source": [
"https://unix.stackexchange.com/questions/67702",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18526/"
]
} |
67,781 | I have built some libraries from source, and after make install the files are in /usr/local/lib For example, in my case I have the file libodb-2.2.so which is in this directory. However, when I launch the executable that has been linked with libodb , I get the error: error while loading shared libraries: libodb-2.2.so: cannot open shared object file: No such file or directory. Does it mean that I have built my executable incorrectly? Or should I tell the system that there may also be interesting libs in the folder /usr/local/lib ? I'm using Ubuntu 12.04, Linux kernel 3.2.0-38-generic. | For the current session you can export LD_LIBRARY_PATH=/lib:/usr/lib:/usr/local/lib or, to make the change permanent, you can add /usr/local/lib to /etc/ld.so.conf (or something it includes) and run ldconfig as root. If you're still having problems, running ldd [executable name] will show you the libraries it's trying to find, and which ones can't be found. | {
"source": [
"https://unix.stackexchange.com/questions/67781",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4175/"
]
} |
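On Debian/Ubuntu the permanent variant described above is usually done with a drop-in file under /etc/ld.so.conf.d/ rather than by editing /etc/ld.so.conf itself; the file name and program name here are placeholders:

```sh
echo '/usr/local/lib' | sudo tee /etc/ld.so.conf.d/usr-local.conf
sudo ldconfig
ldd ./your_program | grep 'not found'   # should print nothing once the library resolves
```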
67,792 | Local machine running my scripts needs to access a remote server's DB.
Last week I could do, on the local machine ssh -f -N -T -L1081:localhost:3306 ueser@remoteserver -p2222 on the local machine: # netstat -an | egrep "tcp.*:1081.*"
tcp 0 0 127.0.0.1:1081 0.0.0.0:* LISTEN
tcp6 0 0 ::1:1081 :::* LISTEN then telling my scripts to use port 1081 for mysql connections worked. This netstat command shows me that I'm accepting connections fine on the local side, but the same check on the remote machine (the one with the actual mysql server) shows it is not listening on 1081 at all, which wasn't the case before. I checked sshd_config, which seems to allow tunnelling; that is, no config change.
I also tried opening a tunnel to another server on my network and that's also not working, is my command crap or something? Tried with various combinations of -f -T | I have drawn some sketches. The machine where the ssh tunnel command is typed is called »your host«. Introduction local: -L Specifies that the given port on the local (client) host is to be forwarded to the given host and port on the remote side. ssh -L sourcePort:forwardToHost:onPort connectToHost means: connect with ssh to connectToHost , and forward all connection attempts to the local sourcePort to port onPort on the machine called forwardToHost , which can be reached from the connectToHost machine. remote: -R Specifies that the given port on the remote (server) host is to be forwarded to the given host and port on the local side. ssh -R sourcePort:forwardToHost:onPort connectToHost means: connect with ssh to connectToHost , and forward all connection attempts to the remote sourcePort to port onPort on the machine called forwardToHost , which can be reached from your local machine. Additional options -f tells ssh to background itself after it authenticates, so you don't have to sit around running something on the remote server for the tunnel to remain alive. -N says that you want an SSH connection, but you don't actually want to run any remote commands. If all you're creating is a tunnel, then including this option saves resources. -T disables pseudo-tty allocation, which is appropriate because you're not trying to create an interactive shell. Your example The first image represents your case. If you do ssh -L 1081:localhost:3306 remotehost all connection attempts to the green port 1081 are forwarded through the ssh tunnel to the pink port 3306 on the remotehost's localhost, i.e. the remotehost itself. Now your php scripts can access your database via localhost:1081 . But the netstat command on your remotehost can't find anything listening at 1081 (at least not as a result of the tunnel).
Because the pink port isn't a listening port created by ssh, but the target of the forwarding. And it is not forwarding to port 1081 but to 3306 . Is your db server really listening on port 3306 or possibly on port 1081 ? If the latter is true, then your command should be changed to look like this: ssh -L 1081:localhost:1081 remotehost If your database listens on port 1081 you should find it with netstat (independently of ssh). | {
"source": [
"https://unix.stackexchange.com/questions/67792",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34112/"
]
} |
67,794 | I'm looking for a way to search files where two word instances exist in the same file. I've been using the following to perform my searches up to this point: find . -exec grep -l "FIND ME" {} \; The problem I'm running into is that if there isn't exactly one space that between "FIND" and "ME", the search result does not yield the file. How do I adapt the former search string where both words "FIND" and "ME exist in a file as opposed to "FIND ME"? I'm using AIX. | With GNU tools: find . -type f -exec grep -lZ FIND {} + | xargs -r0 grep -l ME You can do standardly: find . -type f -exec grep -q FIND {} \; -exec grep -l ME {} \; But that would run up to two grep s per file. To avoid running that many grep s and still be portable while still allowing any character in file names, you could do: convert_to_xargs() {
sed "s/[[:blank:]\"\']/\\\\&/g" | awk '
{
if (NR > 1) {
printf "%s", line
if (!index($0, "//")) printf "\\"
print ""
}
line = $0
}
END { print line }'
}
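# Illustration (an aside, not part of the original answer): given a name with a
# blank in it, convert_to_xargs escapes it so that xargs later sees one word:
#   printf './/a b\n' | convert_to_xargs    ->    .//a\ b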
export LC_ALL=C
find .//. -type f |
convert_to_xargs |
xargs grep -l FIND |
convert_to_xargs |
xargs grep -l ME The idea being to convert the output of find into a format suitable for xargs (that expects a blank (SPC/TAB/NL in the C locale, YMMV in other locales) separated list of words where single, double quotes and backslashes can escape blanks and each other). Generally you can't post-process the output of find -print , because it separates the file names with a newline character and doesn't escape the newline characters that are found in file names. For instance if we see: ./a
./b We've got no way to know whether it's one file called b in a directory called a<NL>. or if it's the two files a and b in the current directory. By using .//. , because // cannot appear otherwise in a file path as output by find (because there's no such thing as a directory with an empty name and / is not allowed in a file name), we know that if we see a line that contains // , then that's the first line of a new filename. So we can use that awk command to escape all newline characters but those that precede those lines. If we take the example above, find would output in the first case (one file): .//a
./b Which awk escapes to: .//a\
./b So that xargs sees it as one argument. And in the second case (two files): .//a
.//b Which awk would leave as is, so xargs sees two arguments. You need the LC_ALL=C so sed , awk (and some implementations of xargs ) work for arbitrary sequences of bytes (even though that don't form valid characters in the user's locale), to simplify the blank definition to just SPC and TAB and to avoid problems with different interpretations of characters whose encoding contains the encoding of backslash by the different utilities. | {
"source": [
"https://unix.stackexchange.com/questions/67794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10979/"
]
} |
67,806 | I know you are able to see the byte size of a file when you do a long listing with ll or ls -l . But I want to know how much storage is in a directory including the files within that directory and the subdirectories within there, etc. I don't want the number of files, but instead the amount of storage those files take up. So I want to know how much storage is in a certain directory recursively? I'm guessing, if there is a command, that it would be in bytes. | Try doing this: (replace dir with the name of your directory) du -s dir That gives the cumulative disk usage (not size ) of unique (hards links to the same file are counted only once) files (of any type including directory though in practice only regular and directory file take up disk space). That's expressed in 512-byte units with POSIX compliant du implementations (including GNU du when POSIXLY_CORRECT is in the environment), but some du implementations give you kibibytes instead. Use -k to guarantee you get kibibytes. For the size (not disk usage ) in bytes, with the GNU implementation of du or compatible: du -sb dir or (still not standard): du -sh dir For human readable sizes ( disk usage ). See man du (link here is for the GNU implementation). | {
"source": [
"https://unix.stackexchange.com/questions/67806",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31012/"
]
} |
67,861 | Once I've created a new tab in gnome-terminal with Ctrl + Shift + t , how can I switch back and forth between the tabs using the keyboard? For example, in Google Chrome the keyboard shortcut is Ctrl + Tab (forward) and Ctrl + Shift + Tab (backward). I'm using Linux Mint 14 Cinnamon. | Ctrl + Page Down (forward) and Ctrl + Page Up (backward). | {
"source": [
"https://unix.stackexchange.com/questions/67861",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34148/"
]
} |
67,898 | I tried to check if the PHONE_TYPE variable contains one of three valid values. if [ "$PHONE_TYPE" != "NORTEL" ] || [ "$PHONE_TYPE" != "NEC" ] ||
[ "$PHONE_TYPE" != "CISCO" ]
then
echo "Phone type must be nortel,cisco or nec"
exit
fi The above code did not work for me, so I tried this instead: if [ "$PHONE_TYPE" == "NORTEL" ] || [ "$PHONE_TYPE" == "NEC" ] ||
[ "$PHONE_TYPE" == "CISCO" ]
then
: # do nothing
else
echo "Phone type must be nortel,cisco or nec"
exit
fi Are there cleaner ways for this type of task? | I guess you're looking for: if [ "$PHONE_TYPE" != "NORTEL" ] && [ "$PHONE_TYPE" != "NEC" ] &&
[ "$PHONE_TYPE" != "CISCO" ] The rules for these equivalences are called De Morgan's laws and in your case mean: not(A || B || C) => not(A) && not(B) && not(C) Note the change in the boolean operators: each OR becomes an AND. Whereas you tried to do: not(A || B || C) => not(A) || not(B) || not(C) Which obviously doesn't work. | {
"source": [
"https://unix.stackexchange.com/questions/67898",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8032/"
]
} |
67,937 | This has happened many times. While the SSH session is active, I'm so tired that I just close the terminal without typing exit . Is it dangerous to do that? | Not at all. When the terminal process exits, the client processes within will also die, and when this happens, the connection to the remote server will be closed by the operating system. The server will see the connection close, and terminate the processes on the server. However, it is possible that you might end up with lingering processes on the server if they were backgrounded and ignore certain signals. | {
"source": [
"https://unix.stackexchange.com/questions/67937",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17948/"
]
} |
67,940 | I have defined "SHELL" variable in /etc/crontab file: [martin@martin ~]$ grep SHELL /etc/crontab
SHELL=/usr/local/bin/bash
[martin@martin ~]$ file /usr/local/bin/bash
/usr/local/bin/bash: ELF 32-bit LSB executable, Intel 80386, version 1 (FreeBSD), dynamically linked (uses shared libs), for FreeBSD 8.0 (800107), stripped
[martin@martin ~]$ In addition, all my scripts in /etc/crontab file are started under user "martin". However /home/martin/.bash_profile (for login shell) and /home/martin/.bashrc (for non-logging shell) contain some variables which are ignored in case of cron job, but are used in case I log into machine over SSH or open new bash session. Why cron ignores those variables? Isn't cron simply executing "/usr/local/bin/bash my-script.sh" with permissions for user "martin"? | You can source the file you want at the top of the script or beginning of the job for the user that is executing the job. The "source" command is a built-in. You'd do the same thing if you made edits to those files to load the changes. * * * * * source /home/user/.bash_profile; <command> or #!/bin/bash
source /home/user/.bash_profile
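# (Hedged alternative, not from the original answer: bash also reads the file
#  named by BASH_ENV when it starts non-interactively, so a crontab line such as
#  BASH_ENV=/home/user/.bash_profile near the top of the crontab has a similar
#  effect without editing each script.)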
<commands> | {
"source": [
"https://unix.stackexchange.com/questions/67940",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33060/"
]
} |
68,035 | ${!FOO} performs a double substitution in bash , meaning it takes the (string) value of FOO and uses it as a variable name. zsh doesnβt support this feature. Is there a way to make this work the same in bash and zsh ? Background: Iβve got a list of environment variables, like PATH MAIL EDITOR and want to first print the variable names and afterwards their values. This works in bash but not zsh : for VAR in LIST
do
echo $VAR
echo ${!VAR}
done It should be somehow possible βthe old wayβ with eval , but I canβt get it to work: for VAR in LIST
do
echo $VAR
echo `eval \$$VAR`
done Iβm never going to understand why I canβt simply do arbitrary deep substitutions like ${${VAR}} or even ${${${VAR}}} if need be, so an explanation for that would be nice, too. | Both bash and zsh have a way to perform indirect expansion, but they use different syntax. It's easy enough to perform indirect expansion using eval ; this works in all POSIX and most Bourne shells. Take care to quote properly in case the value contains characters that have a special meaning in the shell. eval "value=\"\${$VAR}\""
echo "$VAR"
echo "$value" ${${VAR}} doesn't work because it's not a feature that any shell implements. The thing inside the braces must conform to syntax rules which do not include ${VAR} . (In zsh, this is supported syntax, but does something different: nested substitutions perform successive transformations on the same value; ${${VAR}} is equivalent to $VAR since this performs the identity transformation twice on the value.) | {
"source": [
"https://unix.stackexchange.com/questions/68035",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31810/"
]
} |
68,042 | Follow-up to the background part in this question . In bash I can use ${!FOO} for double substitution, in zsh ${(P)FOO} . In both, the old-school (hack-y) eval \$$FOO works. So, the smartest and most logical thing for me would be ${${FOO}}, ${${${FOO}}}β¦ for double/triple/n substitution. Why doesnβt this work as expected? Second: What does the \ do in the eval statement? I reckon itβs an escape, making something like eval \$$$FOO impossible. How to do a triple/n substitution with that that works in every shell? | The \ must be used to prevent the expansion of $$ (current process id). For triple substitution, you need double eval, so also more escapes to avoid the unwanted expansions in each eval: #! /bin/bash
l0=value
l1=l0
l2=l1
l3=l2
l4=l3
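# (Illustrative note, not part of the original answer: every echo below prints
#  "value"; each extra level of indirection needs one more eval and roughly
#  twice as many backslashes.)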
echo $l0
eval echo \$$l1
eval eval echo \\$\$$l2
eval eval eval echo \\\\$\\$\$$l3
eval eval eval eval echo \\\\\\\\$\\\\$\\$\$$l4 | {
"source": [
"https://unix.stackexchange.com/questions/68042",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31810/"
]
} |
68,079 | I have access to a cifs network drive. When I mount it under my OSX machine, I can read and write from and to it. When I mount the drive in ubuntu, using: sudo mount -t cifs -o username=${USER},password=${PASSWORD} //server-address/folder /mount/path/on/ubuntu I am not able to write to the network drive, but I can read from it.
I have checked the permissions and owner of the mount folder, they look like: 4.0K drwxr-xr-x 4 root root 0 Nov 12 2010 Mounted_folder I cannot change the owner, because I get the error: chown: changing ownership of `/Volumes/Mounted_folder': Not a directory When I descend deeper into the network drive, and change the ownership there, I get the error that I have no permission to change the folderΒ΄s owner. What should I do to activate my write permission? | You are mounting the CIFS share as root (because you used sudo ), so you cannot write as normal user. If your Linux Distribution and its kernel are recent enough that you could mount the network share as a normal user (but under a folder that the user own), you will have the proper credentials to write file (e.g. mount the shared folder somewhere under your home directory, like for instance $HOME/netshare/ . Obviously, you would need to create the folder before mounting it). An alternative is to specify the user and group ID that the mounted network share should used, this would allow that particular user and potentially group to write to the share. Add the following options to your mount : uid=<user>,gid=<group> and replace <user> and <group> respectively by your own user and default group, which you can find automatically with the id command. sudo mount -t cifs -o username=${USER},password=${PASSWORD},uid=$(id -u),gid=$(id -g) //server-address/folder /mount/path/on/ubuntu If the server is sending ownership information, you may need to add the forceuid and forcegid options. sudo mount -t cifs -o username=${USER},password=${PASSWORD},uid=$(id -u),gid=$(id -g),forceuid,forcegid, //server-address/folder /mount/path/on/ubuntu | {
"source": [
"https://unix.stackexchange.com/questions/68079",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21670/"
]
} |
68,084 | As part of a deployment script, I want to dump some cached stuff from my temp directory. I use a command like: rm /tmp/our_cache/* However, if /tmp/our_cache is empty (fairly common when pushing many changes in quick succession to our testing server), this prints the following error message: rm: cannot remove `/tmp/our_cache/*': No such file or directory It's not a big deal, but it's a little ugly and I want to cut down the noise-to-signal ratio in the output from this script. What's a concise way in unix to delete the contents of a directory without getting messages complaining that the directory is already empty? | Since you presumably want to remove all files without prompting, why not just use the -f switch to rm to ignore nonexistent files? rm -f /tmp/our_cache/* From man page: -f, --force
ignore nonexistent files, never prompt Also, if there may be any subdirectories in /tmp/our_cache/ and you want those and their contents deleted as well, don't forget the -r switch. | {
"source": [
"https://unix.stackexchange.com/questions/68084",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29001/"
]
} |
68,175 | In part of the script that I'm working on, I want to validate that the inputted IP address is in the correct format. I want to make a loop while the input format is NOT correct. The following works for a loop while the format IS correct. while [[ $range =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3} ]]
do How can I turn this around to say DOES NOT fit the format. I was hoping !=~ would work, but I'm getting a syntax error. | The solution is so simple, I'm sure you will all find it funny. No need to get worked up about negation syntax details, just use until instead of while . | {
"source": [
"https://unix.stackexchange.com/questions/68175",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33692/"
]
} |
68,236 | I know I can wait on a condition to become true in bash by doing: while true; do
test_condition && break
sleep 1
done But it creates 1 sub-process at each iteration (sleep). I could avoid them by doing: while true; do
test_condition && break
done But it uses a lot of CPU (busy waiting). To avoid sub-processes and busy waiting, I came up with the solution below, but I find it ugly: my_tmp_dir=$(mktemp -d --tmpdir=/tmp) # Create a unique tmp dir for the fifo.
mkfifo $my_tmp_dir/fifo # Create an empty fifo for sleep by read.
exec 3<> $my_tmp_dir/fifo # Open the fifo for reading and writing.
while true; do
test_condition && break
read -t 1 -u 3 var # Same as sleep 1, but without sub-process.
done
exec 3<&- # Closing the fifo.
rm $my_tmp_dir/fifo; rmdir $my_tmp_dir # Cleanup, could be done in a trap. Note: in the general case, I cannot simply use read -t 1 var without the fifo, because it will consume stdin, and will not work if stdin is not a terminal or a pipe. Can I avoid sub-processes and busy waiting in a more elegant way ? | In newer versions of bash (at least v2), builtins may be loaded (via enable -f filename commandname ) at runtime. A number of such loadable builtins is also distributed with the bash sources, and sleep is among them. Availability may differ from OS to OS (and even machine to machine), of course. For example, on openSUSE, these builtins are distributed via the package bash-loadables . | {
"source": [
"https://unix.stackexchange.com/questions/68236",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9190/"
]
} |
68,322 | unset array[0] removes the element but still if I do echo ${array[0]} I get a null value moreover there are other ways of doing this but if an element of an array contains spaces like below array[0]='james young'
array[1]='mary'
array[2]='randy orton' but these also fail to do the job array=${array[@]:1} #removed the 1st element now I want the new array to be like array[0]='mary'
array[1]='randy orton' The spaces cause the trouble after assignment and the actual array becomes like with substitution. array=(mary randy orton) | Just use array syntax on the assignment and quote your variable: array=("${array[@]:1}") #removed the 1st element Edit according to question in comment. For $@ you can use it like this: set -- "${@:2}" #removed the 1st parameter | {
"source": [
"https://unix.stackexchange.com/questions/68322",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8032/"
]
} |
68,375 | When I first got into using Slackware years ago I quickly learned to love JFS over ext3 or reiserfs given that it was reliable and if there was an unclean shutdown, its disk checking was very very quick. It's only been recently that I've found out that JFS is obscure to the point of being almost completely unmaintained by anyone. I had no idea I was in such a minority. Why is it that this happened? Is it that filesystem technology has since advanced to the point that JFS now lacks any comparative advantages? Is it that ext3 was more interoperable with other operating systems? Was a particular other filesystem blessed by a particular vendor or the kernel developers? Not so much a technical question as a historical one. | The first thing you have to get out of the way is the comparison to ext [234] . Replacing any of them is going to be like replacing NTFS in Windows. Possible, sure, but it will require a decision from the top to switch. I know you're asking about keeping existing alternatives, not removal of other alternatives, but that privileged competition is sucking up most of the oxygen in the room. Until you get rid of the competition, marginal alternatives are going to have an exceptionally hard time getting any attention. Since ext [234] aren't going away, JFS and its ilk are at a serious disadvantage from the start. (This phenomenon is called the Tyranny of the Default.) The second thing is that both JFS and XFS were contributed to Linux at about the same time, and they pretty much solve the same problems. Kernel geeks can argue about fine points between the two, but the fact is that those who have run into one of ext [234] 's limitations had two roughly equivalent solutions in XFS and JFS. So why did XFS win? I'm not sure, but here are some observations: Red Hat and SuSE endorsed it. RHEL 7 uses XFS as its default filesystem, and it was an install-time option in RHEL 6. After RHEL 6 came out, Red Hat backported official XFS support to RHEL 5. XFS was available for RHEL 5 before that through the semi-official EPEL channel. SuSE included XFS as an install-time option much earlier than Red Hat did, going back to SLES 8 , released in 2002. It is not the current default, but it has been officially supported that whole time. There are many other Linux distros, and RHEL and SuSE are not the most popular distros across the entire Linux space, but they are the big iron distros of choice. They're playing where the advantages of JFS and XFS matter most. These companies can't always wag the dog, but in questions involving big iron, they sometimes can. XFS is from SGI , a company that is essentially gone now. Before they died, they formally gave over any rights they had in XFS so the Linux folk felt comfortable including it in the kernel. IBM has also given over enough rights to JFS to make the Linux kernel maintainers comfortable, but we can't forget that they're an active, multibillion dollar company with thousands of patents. If IBM ever decided that their support of Linux no longer aligned with its interests, well, it could get ugly. Sure, someone probably owns SGI's IP rights now and could make a fuss, but it probably wouldn't turn out any worse than the SCO debacle . IBM might even weigh in and help squash such a troll, since their interests do currently include supporting Linux. The point being, XFS just feels more "free" to a lot of folk. It's less likely to pose some future IP problem. 
One of the problems with our current IP system is that copyright is tied to company lifetime, and companies don't usually die. Well, SGI did. That makes people feel better about treating SGI's contribution of XFS like that of any individual's contribution. In any system involving network effects where you have two roughly equivalent alternatives β JFS and XFS in this case β you almost never get a 50/50 market share split. Here, the network effects are training, compatibility, feature availability... These effects push the balance further and further toward the option that gained that early victory. Witness Windows vs. OS X, Linux vs. all-other-*ix, Ethernet vs. Token Ring... | {
"source": [
"https://unix.stackexchange.com/questions/68375",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34445/"
]
} |
68,414 | Say there may be hundreds of *.txt files in a directory. I only want to find the first three *.txt files and then exit the searching process. How to achieve this using the find utility? I had a quick through on its man page, seemed not such a option for this. | You could pipe the output of find through head : find . -name '*.txt' | head -n 3 | {
"source": [
"https://unix.stackexchange.com/questions/68414",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11409/"
]
} |
68,419 | How can I echo "$" in a here-doc in bash? For example, I want to have a file with the content on remote server like $ABC=home_dir . $ ssh hostname sudo -s <<EOF
echo "$ABC=home_dir" > file
EOF But it would be treated as a variable. How can I print a literal $ ? | If you want to write a here-doc and you don't want ANY of the doc to be expanded or any special characters interpreted, you can quote the label with single quotes, like this: $ cat >file <<'EOF'
echo "$ABC=home_dir"
EOF However, your situation as described in your example is much more complex, because you're really sending this content through ssh, to a remote system, to be run by sudo which is also invoking a shell (and so that shell will expand the content as well). You're going to need more levels of quoting to get this right, but even with that it still won't work because sudo requires a terminal (so it can ask for a password) and you've redirected from stdin. Even using ssh -t won't help here. Also I agree with Johan. It's not clear this is really what you want; note that it's not legal to assign a value to a shell variable reference, so if this file you're trying to create is supposed to be a shell script, it won't work as you've described it. Maybe if you back up a bit and describe what you really want to do, we can help more. | {
"source": [
"https://unix.stackexchange.com/questions/68419",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19415/"
]
} |
68,426 | I was looking at the command to burn iso image to a dvd. But I couldnt get the name of the device. In the /dev/ I could see cdrom , cdrw , dvd , dvdrw . I am using debian. When I gave the command, I got the following output dd if=debian-6.0.7-i386-DVD-1.iso of=/dev/dvdrw
dd: opening `/dev/dvdrw': Read-only file system | You can't use dd this way (it might work for DVD-RAM though). What you are looking for is growisofs - (the main) part of dvd+rw-tools . growisofs -Z /dev/dvdrw=image.iso | {
"source": [
"https://unix.stackexchange.com/questions/68426",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3539/"
]
} |
68,455 | Is there a better way on the command line to essentially accomplish the following but with a single command cp -r css/ ar/
cp -r images/ ar/
cp -r js/ ar/
cp -r backups/ ar/ I've just been stringing them together with a semicolon. | Copying folders into another folder (folder in folder): cp -r css images js backups ar/ Note: this is different from copying just the contents themselves(contents of folders in folder): cp -r css/ images/ js/ backups/ ar/ | {
"source": [
"https://unix.stackexchange.com/questions/68455",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
68,484 | In the Perl documentation, perlrun(1) suggests launching Perl scripts using a bilingual shell/Perl header: #!/bin/sh
#! -*-perl-*-
eval 'exec perl -x -wS $0 ${1+"$@"}'
if 0; What does ${1+"$@"} mean? I tried using "$@" instead (using Bash as /bin/sh), and it seems to work just as well. Edit Two answers below say that it's supposed to be ${1:+"$@"} . I am aware of the ${parameter:+word} ("Use Alternate Value") syntax documented in bash(1). However, I am unconvinced, because Both ${1+"$@"} and "$@" work just fine, even when there are no parameters. If I create simple.sh as #!/bin/sh
eval 'exec /usr/bin/perl -x -S -- $0 "$@"'
if 0;
#!perl
use Data::Dumper;
print Dumper(\@ARGV); and question.sh as #!/bin/sh
eval 'exec /usr/bin/perl -x -S -- $0 ${1+"$@"}'
if 0;
#!perl
use Data::Dumper;
print Dumper(\@ARGV); I can get both to work identically: $ ./question.sh
$VAR1 = [];
$ ./question.sh a
$VAR1 = [
'a'
];
$ ./question.sh a 'b c'
$VAR1 = [
'a',
'b c'
];
$ ./question.sh ""
$VAR1 = [
''
];
$ ./simple.sh
$VAR1 = [];
$ ./simple.sh a
$VAR1 = [
'a'
];
$ ./simple.sh a 'b c'
$VAR1 = [
'a',
'b c'
];
$ ./simple.sh ""
$VAR1 = [
''
]; Other sources on the Internet also use ${1+"$@"} , including one hacker who seems to know what he's doing. Perhaps ${parameter+word} is an undocumented alternate (or deprecated) syntax for ${parameter:+word} ? Could someone confirm that hypothesis? | That's for compatibility with the Bourne shell. The Bourne shell was an old shell that was first released with Unix version 7 in 1979 and was still common until the mid 90s as /bin/sh on most commercial Unices. It is the ancestor of most Bourne-like shells like ksh , bash or zsh . It had a few awkward features many of which have been fixed in ksh and the other shells and the new standard specification of sh , one of which is this: With the Bourne shell (at least those variants where it has not been fixed): "$@" expands to one empty argument if the list of positional parameters is empty ( $# == 0 ) instead of no argument at all. ${var+something} expands to "something" unless $var is unset. It is clearly documented in all shells but hard to find in the bash documentation as you need to pay attention to this sentence: When not performing substring expansion, using the forms documented below, bash tests for a parameter that is unset or null. Omitting the colon results in a test only for a parameter that is unset . So ${1+"$@"} expands to "$@" only if $1 is set ( $# > 0 ) which works around that limitation of the Bourne shell. Note that the Bourne shell is the only shell with that problem. Modern sh s (that is sh conforming to the POSIX specification of sh (which the Bourne shell is not)) don't have that issue. So you only need that if you need your code to work on very old systems where /bin/sh might be a Bourne shell instead of a standard shell (note that POSIX doesn't specify the location of the standard sh , so for instance on Solaris before Solaris 11, /bin/sh was still a Bourne shell (though did not have that particular issue) while the normal/standard sh was in another location ( /usr/xpg4/bin/sh )). There is a problem in that perlrun perldoc page in that $0 is not quoted though. See http://www.in-ulm.de/~mascheck/various/bourne_args/ for more information. | {
"source": [
"https://unix.stackexchange.com/questions/68484",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17535/"
]
} |
68,489 | I have a single directory that contains dozens of directories inside of it. I'm new to command line and I'm struggling to come up with a command that will zip each sub-directory into a unique sub-directory.zip file. So in the end my primary directory will be filled with all of my original sub-directories, plus the corresponding .zip files that contain the zipped up content of each sub-directory. Is something like this possible? If so, please show me how it's done. | You can use this loop in bash : for i in */; do zip -r "${i%/}.zip" "$i"; done i is the name of the loop variable. */ means every subdirectory of the current directory, and will include a trailing slash in those names. Make sure you cd to the right place before executing this. "$i" simply names that directory, including trailing slash. The quotation marks ensure that whitespace in the directory name won't cause trouble. ${i%/} is like $i but with the trailing slash removed, so you can use that to construct the name of the zip file. If you want to see how this works, include an echo before the zip and you will see the commands printed instead of executed. Parallel execution To run them in parallel you can use & : for i in */; do zip -0 -r "${i%/}.zip" "$i" & done; wait We use wait to tell the shell to wait for all background tasks to finish before exiting. Beware that if you have too many folders in your current directory, then you may overwhelm your computer as this code does not limit the number of parallel tasks. | {
"source": [
"https://unix.stackexchange.com/questions/68489",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34534/"
]
} |
68,523 | How does one find large files that have been deleted but are still open in an application? How can one remove such a file, even though a process has it open? The situation is that we are running a process that is filling up a log file at a terrific rate. I know the reason, and I can fix it. Until then, I would like to rm or empty the log file without shutting down the process. Simply doing rm output.log removes only references to the file, but it continues to occupy space on disk until the process is terminated. Worse: after rm ing I now have no way to find where the file is or how big it is! Is there any way to find the file, and possibly empty it, even though it is still open in another process? I specifically refer to Linux-based operating systems such as Debian or RHEL. | If you can't kill your application, you can truncate instead of deleting the log file to reclaim the space. If the file was not open in append mode (with O_APPEND ), then the file will appear as big as before the next time the application writes to it (though with the leading part sparse and looking as if it contained NUL bytes), but the space will have been reclaimed (that does not apply to HFS+ file systems on Apple OS/X that don't support sparse files though). To truncate it: : > /path/to/the/file.log If it was already deleted, on Linux, you can still truncate it by doing: : > "/proc/$pid/fd/$fd" Where $pid is the process id of the process that has the file opened, and $fd one file descriptor it has it opened under (which you can check with lsof -p "$pid" . If you don't know the pid, and are looking for deleted files, you can do: lsof -nP | grep '(deleted)' lsof -nP +L1 , as mentioned by @user75021 is an even better (more reliable and more portable) option (list files that have fewer than 1 link). Or (on Linux): find /proc/*/fd -ls | grep '(deleted)' Or to find the large ones with zsh : ls -ld /proc/*/fd/*(-.LM+1l0) An alternative, if the application is dynamically linked is to attach a debugger to it and make it call close(fd) followed by a new open("the-file", ....) . | {
"source": [
"https://unix.stackexchange.com/questions/68523",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9760/"
]
} |
68,531 | Is there a way to return the current watt consumption on the command line? I have found about the powertop program, but have not seen a way to return the Watt consumption as a value to the command line. I'm thinking of some file that I can cat or grep . | On my system I can obtain the power drawn from the battery from cat /sys/class/power_supply/BAT0/power_now
9616000 On Thinkpads if the tp_smapi module is loaded, the file is cat /sys/devices/platform/smapi/BAT0/power_now The value seems to be in Β΅W, though. You can convert it with any tool you're comfortable with, e.g. awk : awk '{print $1*10^-6 " W"}' /sys/class/power_supply/BAT0/power_now
9.616 W In case you cannot find the location within the sysfs file system, you can search for it: find /sys -type f -name power_now 2>/dev/null Additionally, the package lm-sensors may be used to determine the system power usage on some machines: # sensors power_meter-acpi-0
power_meter-acpi-0
Adapter: ACPI interface
power1: 339.00 W (interval = 1.00 s) | {
"source": [
"https://unix.stackexchange.com/questions/68531",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16766/"
]
} |
68,548 | I like zathura for its simple UI, very nice for just reading.
I like xournal for its rich annotation features. But what I still need is a reader that is interactive, has a "find" feature, is able to highlight an copy text to clipboard, is independant of any desktop environments (gnome, kde ...) and is able to display simple notes e.g. made with xournal (so no epdfview). Is there such a viewer? | | {
"source": [
"https://unix.stackexchange.com/questions/68548",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27330/"
]
} |
68,552 | I am trying to install tcl and tk on my linux server. I don't have the root password, so I am installing them in my home directory. I am using the method below to install them manually. cd ~/tcl8.5.11/unix
./configure --prefix=/home/cnel711 --exec-prefix=/home/cnel711
make
make install
cd ~/tk8.5.11/unix
./configure --prefix=/home/cnel711 --exec-prefix=/home/cnel711 --with-tcl=/home/cnel711/tcl8.5.11/unix
make
make install I was able to install tcl without any problem ,but I am facing problem while installing tk. configure for tk worked fine ,I am facing problem while using make .I am getting this error. X11/Xlib.h: No such file or directory I found out this file was missing on the server.So,I downloaded libX11-devel from here .Again,I installed it in my home directory.Then I exported the path to the header files and when I use which command to find Xlib.h,it locates ths file. >which Xlib.h
~/include/X11/Xlib.h Now, when I try to install tk again configure works fine as usual but I get the same error again while using make X11/Xlib.h: No such file or directory . Please help me out,what possibly is going wrong here ? | | {
"source": [
"https://unix.stackexchange.com/questions/68552",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26374/"
]
} |
68,581 | I've been reading a lot about dual-booting, and it seems as easy as loading Windows and then loading Linux with GRUB, but everybody says that Windows loves to trash GRUB when it gets the opportunity. What are some steps I can take to prevent this from happening (other than using Windows' bootloader, I want to keep this as simple as possible)? | Windows will overwrite the boot sector whenever you install it, upgrade it to a new version, or use tools like bootrec /fixmbr , bootrec /fixboot , or the older fdisk /mbr . In general, install Windows first, then Linux. The boot sector will stay put until you do one of the things above. (And perhaps there are also other ways to write onto the MBR.) But, if you lose GRUB, it is easily restored: Boot from a live CD (CD/DVD or flash drive). Become root or use sudo with commands below. List the available partitions if needed: fdisk -l Windows will almost certainly exist on /dev/sda1: mount /dev/sda1 /mnt Reinstall GRUB in the MBR: grub-install --root-directory=/mnt/ /dev/sda Reboot: shutdown -r now Restore the GRUB menu: update-grub You could also install 100% Unix, Linux, or BSD and simply run Windows in a virtual machine if the computer is strong enough for that. Also: your computer's BIOS may have an option to protect the boot sector. | {
"source": [
"https://unix.stackexchange.com/questions/68581",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34332/"
]
} |
68,698 | There's a system-created /var/backups directory on Debian-based systems. I need a place to store backups of several git repositories (the primaries are on bitbucket). If I store them in /var/backup/git will that break apt-get, or will they get automatically deleted at inopportune times? Is there any reason I shouldn't use /var/backup? If there is, what is a reasonable alternative? | /var/backups is specific to Debian. It is not specified in the FHS , and its use is not documented in Debian policy (See Debian Bug report logs - #122038 ). The behavior is described in http://ubuntuforums.org/showthread.php?t=1232703 . While I agree with @fpmurphy that there is little danger of Debian ever removing your backup files in /var/backup , I think that it is not good policy to use a directory that is so Debian-specific. For one, Debian might change its policy and break things. For another, the user community already has specific expectations about what the directory is for. And finally, because it is not "portable" in the sense that it is not clear where this directory would be in a non-Debian distribution. If my understanding of the FHS is correct, it would be appropriate to put clones of Git repositories in /opt/<project_name>/.git or in /usr/local/src/<project_name/.git . My personal inclination would be to use the former because it leaves the door open to backup project resources that are not source files and therefore not in Git. If you really want to emphasis the backup nature of these repositories, you could put them in /backups , or even /home/backups , two directory names that are often used as mount points for external storage. | {
"source": [
"https://unix.stackexchange.com/questions/68698",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5377/"
]
} |
68,790 | I know that mounting the same disk with an ext4 filesystem from two different servers (it's an iSCSI vloume) will likely corrupt data on the disk. My question is will it make any difference if one of the servers mounts the disk read-only while the other mounts it read-write? I know OCFS2 or the likes could be used for this and that I could export the disk with NFS to be accesible to the other server, but I would like to know if the setup I propose will work. | No. It won't give consistent results on the read-only client, because of caching. It's definitely not designed for it. You could expect to see IO errors returned to applications. There's probably still some number of oversights in the code, that could cause a kernel crash or corrupt memory used by any process. But most importantly, ext4 replays the journal even on readonly mounts. So a readonly mount will still write to the underlying block device. It would be unsafe even if both the mounts were readonly :). | {
"source": [
"https://unix.stackexchange.com/questions/68790",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34739/"
]
} |
68,796 | I have one built-in hard disk /dev/sda which looks like this: Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00042134
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 293048319 146523136 83 Linux
/dev/sda2 293050366 312580095 9764865 5 Extended
/dev/sda5 293050368 312580095 9764864 82 Linux swap / Solaris Now I can easily find the MBR with xxd /dev/sda | less , it's located in the first sector. According to Wikipedia the VBR must be in the first sector within the first bootable partition, in my case /dev/sda1 . But in the first sector of /dev/sda1 I only see zeroes when I do a xxd /dev/sda1 | less . I actually thought to find binary code of GRUB in there, where could it be then? | | {
"source": [
"https://unix.stackexchange.com/questions/68796",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17859/"
]
} |
68,832 | I'm running Chromium like so : chromium --no-sandbox I'm doing this because I'm running Debian Squeeze on an OpenVZ VM Container and it's the only way I can get it to work. Though I keep reading this is terrible . But I want to know why exactly. Can someone please explain it to me? Does someone need to hack into your computer to do damage? Or does the vulnerability come from a file on the web like a JavaScript file? What if I locked browsing down to only a handful of "trusted" sites?
(Gmail, stackexchange (of course), and facebook) | I was not sure I could post this as an answer, since it does not specifically address where the vulnerability comes from - it is mostly references rather than my own words. But anyhow – hopefully this sheds some light on the topic of the sandbox : Quick introduction to Chrome's sandbox. More in depth design document . With internal links to FAQ, etc. And as stated there, Google themselves recommend using another browser rather than running Chrome without the sandbox. So, obviously, if one can fix the sandbox, that would be the preferred route ;) | {
"source": [
"https://unix.stackexchange.com/questions/68832",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14560/"
]
} |
68,846 | This question is kind of a phase II to the first question I posted at here I have a directory that contains a bunch of sub-directories, .zip files, and other random files not contained within a sub-directory. I'd like a command line script to remove all sub-directories from within the parent directory, but keep all zip files and loose files that don't belong to any sub-directories. All of the sub-directories have content, so I believe I'd need to force their deletion with the -f command. So basically, a command that looks inside the parent directory (or the current directory), deletes all folders from within it, but keeps all other content and files that are not a folder or contained within a folder. I understand that deleting items from the command line requires special care, but I have already taken all necessary precautions to back up remotely. | In BASH you can use the trailing slash (I think it should work in any POSIX shell): rm -R -- */ Note the -- which separates options from arguments and allows one to remove entries starting with a hyphen - otherwise after expansion by the shell the entry name would be interpreted as an option by rm (the same holds for many other command line utilities). Add the -f option if you don't want to be prompted for confirmation when deleting non-writeable files. Note that by default, hidden directories (those whose name starts with . ) will be left alone. An important caveat : the expansion of */ will also include symlinks that eventually resolve to files of type directory . And depending on the rm implementation, rm -R -- thelink/ will either just delete the symlink, or (in most of them) delete the content of the linked directory recursively but not that directory itself nor the symlink. If using zsh , a better approach would be to use a glob qualifier to select files of type directory only: rm -R -- *(/) # or *(D/) to include hidden ones or: rm -R -- *(-/) to include symlinks to directories (but because, this time, the expansion doesn't have trailing / s, it's the symlink only that is removed with all rm implementations). With bash , AT&T ksh , yash or zsh you can do: set -- */
rm -R -- "${@%/}" to strip the trailing / . | {
"source": [
"https://unix.stackexchange.com/questions/68846",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34534/"
]
} |
68,956 | Is it possible to block the (outgoing) network access of a single process? | With Linux 2.6.24+ (considered experimental until 2.6.29), you can use network namespaces for that. You need to have the 'network namespaces' enabled in your kernel ( CONFIG_NET_NS=y ) and util-linux with the unshare tool. Then, starting a process without network access is as simple as: unshare -n program ... This creates an empty network namespace for the process. That is, it is run with no network interfaces, including no loopback . In below example we add -r to run the program only after the current effective user and group IDs have been mapped to the superuser ones (avoid sudo): $ unshare -r -n ping 127.0.0.1
connect: Network is unreachable If your app needs a network interface you can set a new one up: $ unshare -n -- sh -c 'ip link set dev lo up; ping 127.0.0.1'
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=32 time=0.066 ms Note that this will create a new, local loopback. That is, the spawned process won't be able to access open ports of the host's 127.0.0.1 . If you need to gain access to the original networking inside the namespace, you can use nsenter to enter the other namespace. The following example runs ping with network namespace that is used by PID 1 (it is specified through -t 1 ): $ nsenter -n -t 1 -- ping -c4 example.com
PING example.com (93.184.216.119) 56(84) bytes of data.
64 bytes from 93.184.216.119: icmp_seq=1 ttl=50 time=134 ms
64 bytes from 93.184.216.119: icmp_seq=2 ttl=50 time=134 ms
64 bytes from 93.184.216.119: icmp_seq=3 ttl=50 time=134 ms
64 bytes from 93.184.216.119: icmp_seq=4 ttl=50 time=139 ms
--- example.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 134.621/136.028/139.848/2.252 ms | {
"source": [
"https://unix.stackexchange.com/questions/68956",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34837/"
]
} |
69,001 | Can I assume that enough people have zsh installed to run scripts with a #!/usr/bin/env zsh as shebang? Or will this make my scripts un-runnable on too many systems? Clarification: Iβm interested in programs/scripts an end user might want to run (like on Ubuntu, Debian, SUSE, Arch &c.) | For portability, no. While zsh can be compiled on any Unix or Unix-like and even Windows at least via Cygwin, and is packaged for most Open Source Unix-likes and several commercial ones, it is generally not included in the default install. bash on the other end is installed on GNU systems (as bash is the shell of the GNU project) like the great majority of non-embedded Linux based systems and sometimes on non-GNU systems like Apple OS/X. In the commercial Unix side, the Korn shell (the AT&T variant, though more the ksh88 one) is the norm and both bash and zsh are in optional packages. On the BSDs, the preferred interactive shell is often tcsh while sh is based on either the Almquist shell or pdksh and bash or zsh need to be installed as optional packages as well. zsh is installed by default on Apple OS/X. It even used to be the /bin/sh there. It can be found by default in a few Linux distributions like SysRescCD, Grml, Gobolinux and probably others, but I don't think any of the major ones. Like for bash , there's the question of the installed version and as a consequence the features available. For instance, it's not uncommon to find systems with bash3 or zsh3 . Also, there's no guarantee that the script that you write now for zsh5 will work with zsh6 though like for bash they do try to maintain backward compatibility. For scripts, my view is: use the POSIX shell syntax as all Unices have at least one shell called sh (not necessarily in /bin ) that is able to interpret that syntax. Then you don't have to worry so much about portability. And if that syntax is not enough for your need, then probably you need more than a shell. Then, your options are: Perl which is ubiquitous (though again you may have to limit yourself to the feature set of old versions, and can't make assumptions on the Perl modules installed by default) Specify the interpreter and its version (python 2.6 or above, zsh 4 or above, bash 4.2 or above...), as a dependency for your script, either by building a package for every targeted system which specifies the dependency or by stipulating it in a README file shipped alongside your script or embedded as comments at the top of your script, or by adding a few lines in Bourne syntax at the beginning of your script that checks for the availability of the requested interpreter and bails out with an explicit error when it's not, like this script needs zsh 4.0 or above . Ship the interpreter alongside your script (beware of licensing implications) which means you also need one package for every targeted OS. Some interpreters make it easier by providing a way to pack the script and its interpreter in a single executable. Write it in a compiled language. Again, one package per targeted system. | {
"source": [
"https://unix.stackexchange.com/questions/69001",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31810/"
]
} |
69,057 | I was reviewing a set of interview questions that are asked from a unix admin; I found a topic called "named pipe". I googled the topic; to some extent I have been able to understand it :- named pipes || FIFO But still I feel that I am lacking the knowledge of when to use this particular type of pipe. Are there any special situations where unnamed pipes would fail to work ? | Named pipes (fifo) have four three advantages I can think of: you don't have to start the reading/writing processes at the same time you can have multiple readers/writers which do not need common ancestry as a file you can control ownership and permissions they are bi-directional, unnamed pipes may be unidirectional * *) Think of a standard shell | pipeline which is unidirectional, several shells ( ksh , zsh , and bash ) also offer coprocesses which allow bi-directional communication. POSIX treats pipes as half-duplex (i.e. each side can only read or write), the pipe() system call returns two file handles and you may be required to treat one as read-only and the other as write-only. Some (BSD) systems support read and write simultaneously (not forbidden by POSIX), on others you would need two pipes, one for each direction. Check your pipe() , popen() and possibly popen2() man pages. The undirectionality may not be dependent on whether the pipe is named or not, though on Linux 2.6 it is dependent. (Updated, thanks to feedback from Stephane Chazelas ) So one immediately obvious task you cannot achieve with an unnamed pipe is a conventional client/server application. The last (stricken) point above about unidirectional pipes is relevant on Linux, POSIX (see popen() ) says that a pipe need only be readable or writeable , on Linux they are unidirectional . See Understanding The Linux Kernel (3rd Ed. O'Reilly) for Linux-specific details (p787). Other OS's offer bidirectional (unnamed) pipes. As an example, Nagios uses a fifo for its command file . Various external processes (CGI scripts, external checks, NRPE etc) write commands/updates to this fifo and these are processed by the persistent Nagios process. Named pipes have features not unlike TCP connections, but there are important differences. Because a fifo has a persistent filesystem name you can write to it even when there is no reader, admittedly the writes will block (without async or non-blocking I/O), though you won't loose data if the receiver isn't started (or is being restarted). For reference, see also Unix domain sockets , and the answer to this Stackoverflow question which summarises the main IPC methods, and this one which talks about popen() | {
"source": [
"https://unix.stackexchange.com/questions/69057",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21637/"
]
} |
69,112 | I want to do: cat update_via_sed.sh | sed 's/old_name/new_name/' > new_update_via_sed.sh in my program. But I want to use variables, e.g. old_run='old_name_952'
new_run='old_name_953' I have tried using them but the substitution doesn't happen (no error).
I have tried: cat update_via_sed.sh | sed 's/old_run/new_run/'
cat update_via_sed.sh | sed 's/$old_run/$new_run/'
cat update_via_sed.sh | sed 's/${old_run}/${new_run}/' | You could do: sed "s/$old_run/$new_run/" < infile > outfile But beware that $old_run would be taken as a regular expression and so any special characters that the variable contains, such as / or . would have to be escaped . Similarly, in $new_run , the & and \ characters would need to be treated specially and you would have to escape the / and newline characters in it. If the content of $old_run or $new_run is not under your control, then it's critical to perform that escaping, or otherwise that code amounts to a code injection vulnerability . | {
"source": [
"https://unix.stackexchange.com/questions/69112",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
69,314 | I would like to make an automated script that calls ssh-keygen and creates some pub/private keypairs that I will use later on. In principle everything works fine with.... ssh-keygen -b 2048 -t rsa -f /tmp/sshkey -q ...except that it asks me for the passphrase that would encrypt the keys. This make -at present- the automation difficult. I could provide a passphrase via the command line argument -N thepassphrase , so to keep the prompt from appearing.
Still I do not even desire to have the keys - additionally secured by encryption - and want the keypairs to be plaintext. What is a (the best) solution to this problem? The -q option which supposedly means "quiet/silent" does still not avoid the passphrase interaction. Also I have not found something like this ssh-keygen ... -q --no-passphrase Please do not start preaching about or lecture me to the
pros and cons of the "missing passphrase", I am aware of that.
In the interactive form (not as a script) the user can simply hit [ENTER] twice and the key will be saved as plaintext. This is what I want to achieve in a script like this: #!/bin/bash
command1
command2
var=$(command3)
# this should not stop the script and ask for password
ssh-keygen -b 2048 -t rsa -f /tmp/sshkey -q | This will prevent the passphrase prompt from appearing and set the key-pair to be stored in plaintext (which of course carries all the disadvantages and risks of that): ssh-keygen -b 2048 -t rsa -f /tmp/sshkey -q -N "" Using Windows 10 built in SSH PowerShell: ssh-keygen -b 2048 -t rsa -f C:/temp/sshkey -q -N '""' CMD: ssh-keygen -b 2048 -t rsa -f C:/temp/sshkey -q -N "" | {
"source": [
"https://unix.stackexchange.com/questions/69314",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24394/"
]
} |
69,322 | I want to do a bash script that measures the launch time of a browser for that I am using an html which gets the time-stamp on-load in milliseconds using JavaScript. In the shell script just before i call the browser I get the time-stamp with: date +%s The problem is that it gets the time-stamp in seconds, and I need it in milliseconds since sometimes when ran a second time the browser starts in under a second and I need to be able to measure that time precisely using milliseconds instead of seconds. How can I get the time-stamp in milliseconds from a bash script? | date +%s.%N will give you, eg., 1364391019.877418748 . The %N is the
number of nanoseconds elapsed in the current second. Notice it is 9 digits,
and by default date will pad this with zeros if it is less than 100000000. This is actually a problem if we want to do math with the number, because bash treats numbers with a leading zero as octal . This padding can be disabled by using a hyphen in the field spec, so: echo $((`date +%s`*1000+`date +%-N`/1000000)) would naively give you milliseconds since the epoch. However , as Stephane Chazelas points out in comment below, that's two different date calls which will yield two slightly different times. If
the second has rolled over in between them, the calculation will be an
entire second off. So: echo $(($(date +'%s * 1000 + %-N / 1000000'))) Or optimized (thanks to comments below, though this should have been obvious): echo $(( $(date '+%s%N') / 1000000)); | {
"source": [
"https://unix.stackexchange.com/questions/69322",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22558/"
]
} |
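For the original use case of timing how long a command takes, a small sketch assuming GNU date (the %N format is not available everywhere, e.g. older BSD/macOS date; some_command is just a placeholder):

start=$(date +%s%N)
some_command                 # whatever you want to time
end=$(date +%s%N)
echo "elapsed: $(( (end - start) / 1000000 )) ms"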
70,411 | After sysadmin replaced the NAS, I can no longer mount the network share with sudo mount -t cifs //netgear.local/public /media/mountY -o uid=1000,iocharset=utf8,username="adam",password="password" Both NAS are linux-based, one with Samba 3.5.15 (the old one) and the other with Samba 3.5.16 (the new one) (information obtained from smbclient) I can, however, log in and use the share with the help of smbclient , like this: smbclient //NETGEARV2/public -U adam What can I do? There is no smbmount on Linux Mint (nor on Ubuntu) anymore. When I check dmesg I get this info: CIFS VFS: Send error in QFSUnixInfo = -95
CIFS VFS: cifs_read_super: get root inode failed | After seeing the dmseg and Googling, I found the solution: One has to add the sec=ntlm option. The problem (feature?) is introduced in recent kernels (I use 3.8.4). I just didn't realize that the problem is kernel-related. So the correct way of mounting is: sudo mount -t cifs //netgear.local/public /media/mountY -o uid=1000,iocharset=utf8,username="adam",password="password",sec=ntlm | {
"source": [
"https://unix.stackexchange.com/questions/70411",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17765/"
]
} |
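If you want the share mounted at boot instead of by hand, a hypothetical /etc/fstab line using the same options (the credentials file keeps the password off the command line; the paths and file name are only examples):

//netgear.local/public  /media/mountY  cifs  uid=1000,iocharset=utf8,credentials=/etc/samba/cred-netgear,sec=ntlm  0  0

# /etc/samba/cred-netgear (chmod 600)
username=adam
password=password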
70,438 | The manpage for grep describes the -I flag as follows: -I Ignore binary files. This option is equivalent to
--binary-files=without-match option. It also says this about binary files: --binary-files=value Controls searching and printing of binary files.
Options are binary, the default: search binary files but do not print
them; without-match: do not search binary files; and text: treat all
files as text. I cannot think of a scenario where I would care about matches in binary files. If such a scenario exists, surely it must be the exception rather than the norm. Why doesn't grep ignore binary files by default rather than requiring setting this flag to do so? | Not everything that grep thinks is a binary file, is actually a binary file. e.g. puppet's logs have ansi color coding in them, which makes grep think they're binary. I'd still want to search them if I'm grepping through /var/log though. | {
"source": [
"https://unix.stackexchange.com/questions/70438",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
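For example, the two behaviours side by side with GNU grep:

grep -rI 'pattern' /var/log                       # skip anything grep thinks is binary
grep -r --binary-files=text 'pattern' /var/log    # force-search files misdetected as binary (e.g. logs with ANSI colour codes)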
70,460 | How can I add an application to Applications > Internet in Gnome Desktop running on CentOS 6.4? Per this old Docs link , it suggests to edit /etc/xdg/menus/applications.menu . How can I add /home/danny/some/path/myprog/prog (executable) to my Gnome Applications menu? | This is how I did it: vi /usr/share/applications/newitem.desktop [Desktop Entry]
Version=1.0
Name=My Program
Exec=/home/danny/some/path/myprog/prog
Terminal=false
Type=Application
StartupNotify=true
Categories=Network;WebBrowser;
X-Desktop-File-Install-Version=0.15 An icon can be added by including Icon=/some/path . | {
"source": [
"https://unix.stackexchange.com/questions/70460",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36088/"
]
} |
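If you'd rather not touch the system-wide directory, the same .desktop file can live in your own home instead; a sketch (the update-desktop-database step is optional and may not be installed on every system):

mkdir -p ~/.local/share/applications
cp newitem.desktop ~/.local/share/applications/
update-desktop-database ~/.local/share/applications 2>/dev/null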
70,551 | On OS X, I get a nice human readable system memory reading like so: printf -v system_memory \
"$(system_profiler SPHardwareDataType \
| awk -F ': ' '/^ +Memory: /{print $2}')"
echo "$system_memory" prints out the friendly: 4 GB Although this on Linux is correct: lshw -class memory it outputs: size: 4096MiB I need to painfully parse it and try to make it into a string as nice as the one above. Am I using the wrong command? | If that's all you need, just use free : $ free -h | gawk '/Mem:/{print $2}'
7.8G free returns memory info, the -h switch tells it to print in human readable format. | {
"source": [
"https://unix.stackexchange.com/questions/70551",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10283/"
]
} |
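If your free does not support -h, a rough equivalent is to read /proc/meminfo directly (MemTotal is reported in kB):

awk '/^MemTotal:/ {printf "%.1f GB\n", $2 / 1024 / 1024}' /proc/meminfo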
70,581 | What is the canonical way to: scp a file to a remote location compress the file in transit ( tar or not, single file or whole folder, 7za or something else even more efficient) do the above without saving intermediate files I am familiar with shell pipes like this: tar cf - MyBackups | 7za a -si -mx=9 -ms=on MyBackups.tar.7z essentially: rolling a whole folder into a single tar pass data through stdout to stdin of the compressing program apply aggressive compression What's the best way to do this over an ssh link, with the file landing on the remote filesystem? I prefer not to sshfs mount. This, does not work: scp <(tar cvf - MyBackups | 7za a -si -mx=9 -so) localhost:/tmp/tmp.tar.7z because: /dev/fd/63: not a regular file | There are many ways to do what you want. The simplest is to use a pìpe: tar zcvf - MyBackups | ssh user@server "cat > /path/to/backup/foo.tgz" Here, the compression is being handled by tar which calls gzip ( z flag). You can also use compress ( Z ) and bzip ( j ). For 7z , do this: tar cf - MyBackups | 7za a -si -mx=9 -ms=on MyBackups.tar.7z |
ssh user@server "cat > /path/to/backup/foo.7z" The best way, however, is probably rsync . Rsync is a fast and extraordinarily versatile file copying tool. It can copy
locally, to/from another host over any remote shell, or to/from a remote rsync daeβ
mon. It offers a large number of options that control every aspect of its behavior
and permit very flexible specification of the set of files to be copied. It is
famous for its delta-transfer algorithm, which reduces the amount of data sent over
the network by sending only the differences between the source files and the existβ
ing files in the destination. Rsync is widely used for backups and mirroring and
as an improved copy command for everyday use. rsync has way too many options. It really is worth reading through them but they are scary at first sight. The ones you care about in this context though are: -z, --compress compress file data during the transfer
--compress-level=NUM explicitly set compression level
-z, --compress
With this option, rsync compresses the file data as it is sent to the destiβ
nation machine, which reduces the amount of data being transmitted --
something that is useful over a slow connection.
Note that this option typically achieves better compression ratios than can
be achieved by using a compressing remote shell or a compressing transport
because it takes advantage of the implicit information in the matching data
blocks that are not explicitly sent over the connection. So, in your case, you would want something like this: rsync -z MyBackups user@server:/path/to/backup/ The files would be compressed while in transit and arrive decompressed at the destination. Some more choices: scp itself can compress the data -C Compression enable. Passes the -C flag to ssh(1) to
enable compression.
$ scp -C source user@server:/path/to/backup There may be a way to get rsync and 7za to play nice but there is no point in doing so. The benefit of rsync is that it will only copy the bits that have changed between the local and remote files. However, a small local change can result in a very different compressed file so there is no point in using rsync for this. It just complicates matters with no benefit. Just use direct ssh as shown above. If you really want to do this, you can try by giving a subshell as an argument to rsync . On my system, I could not get this to work with 7za because it does not allow you to write compressed data to a terminal. Perhaps your implementation is different. Try something like ( this does not work for me ): rsync $(tar cf - MyBackups | 7za a -an -txz -si -so) \
user@server:/path/to/backup Another point is that 7z should not be used for backups on Linux . As stated on the 7z man page: DO NOT USE the 7-zip format for backup purpose on Linux/Unix
because : - 7-zip does not store the owner/group of the file. | {
"source": [
"https://unix.stackexchange.com/questions/70581",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10283/"
]
} |
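A related variant of the pipe approach above: if you want the files unpacked on the remote side rather than stored as an archive, let the remote tar do the extraction:

tar czf - MyBackups | ssh user@server 'tar xzf - -C /path/to/backup'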
70,612 | I am attempting to install Arch Linux (for the hundredth time) and I recently ran across another problem. I am trying to find a list of my partitions. In order to do this I enter gdisk When I do this however it returns Type device filename, or press to exit: I have attempted entering gdisk /dev/disk1 When I do this I get there error Problem opening /dev/disk1 for reading! Error is 2. However, I am still able to mount partitions when I know the partition. I am simply trying to get a list of my partitions so I can remember which ones they are. Any help understanding the problem would be useful. (Off topic question: Boot loaders do not need to be installed in the first partition of root correct? Last time I installed it I put it in /boot yet I was given an error) | There are many ways to do what you want. The simplest is to use a pìpe: tar zcvf - MyBackups | ssh user@server "cat > /path/to/backup/foo.tgz" Here, the compression is being handled by tar which calls gzip ( z flag). You can also use compress ( Z ) and bzip ( j ). For 7z , do this: tar cf - MyBackups | 7za a -si -mx=9 -ms=on MyBackups.tar.7z |
ssh user@server "cat > /path/to/backup/foo.7z" The best way, however, is probably rsync . Rsync is a fast and extraordinarily versatile file copying tool. It can copy
locally, to/from another host over any remote shell, or to/from a remote rsync daeβ
mon. It offers a large number of options that control every aspect of its behavior
and permit very flexible specification of the set of files to be copied. It is
famous for its delta-transfer algorithm, which reduces the amount of data sent over
the network by sending only the differences between the source files and the existβ
ing files in the destination. Rsync is widely used for backups and mirroring and
as an improved copy command for everyday use. rsync has way too many options. It really is worth reading through them but they are scary at first sight. The ones you care about in this context though are: -z, --compress compress file data during the transfer
--compress-level=NUM explicitly set compression level
-z, --compress
With this option, rsync compresses the file data as it is sent to the destiβ
nation machine, which reduces the amount of data being transmitted --
something that is useful over a slow connection.
Note that this option typically achieves better compression ratios than can
be achieved by using a compressing remote shell or a compressing transport
because it takes advantage of the implicit information in the matching data
blocks that are not explicitly sent over the connection. So, in your case, you would want something like this: rsync -z MyBackups user@server:/path/to/backup/ The files would be compressed while in transit and arrive decompressed at the destination. Some more choices: scp itself can compress the data -C Compression enable. Passes the -C flag to ssh(1) to
enable compression.
$ scp -C source user@server:/path/to/backup There may be a way to get rsync and 7za to play nice but there is no point in doing so. The benefit of rsync is that it will only copy the bits that have changed between the local and remote files. However, a small local change can result in a very different compressed file so there is no point in using rsync for this. It just complicates matters with no benefit. Just use direct ssh as shown above. If you really want to do this, you can try by giving a subshell as an argument to rsync . On my system, I could not get this to work with 7za because it does not allow you to write compressed data to a terminal. Perhaps your implementation is different. Try something like ( this does not work for me ): rsync $(tar cf - MyBackups | 7za a -an -txz -si -so) \
user@server:/path/to/backup Another point is that 7z should not be used for backups on Linux . As stated on the 7z man page: DO NOT USE the 7-zip format for backup purpose on Linux/Unix
because : - 7-zip does not store the owner/group of the file. | {
"source": [
"https://unix.stackexchange.com/questions/70612",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33838/"
]
} |
70,614 | I should echo only names of files or directories with this construction: ls -Al | while read string
do
...
done ls -Al output : drwxr-xr-x 12 s162103 studs 12 march 28 12:49 personal domain
drwxr-xr-x 2 s162103 studs 3 march 28 22:32 public_html
drwxr-xr-x 7 s162103 studs 8 march 28 13:59 WebApplication1 For example if I try: ls -Al | while read string
do
echo "$string" | awk '{print $9}
done then output only files and directories without spaces. If file or directory have spaces like "personal domain" it will be only word "personal". I need very simple solution. Maybe there is better solution than awk. | You really should not parse the output of ls . If this is a homework assignment and you are required to, your professor does not know what they're talking about. Why don't you do something like this: The good... find ./ -printf "%f\n" or for n in *; do printf '%s\n' "$n"; done ...the bad... If you really really want to use ls , you can make it a little bit more robust by doing something like this: ls -lA | awk -F':[0-9]* ' '/:/{print $2}' ...and the ugly If you insist on doing it the wrong, dangerous way and just have to use a while loop, do this: ls -Al | while IFS= read -r string; do echo "$string" |
awk -F':[0-9]* ' '/:/{print $2}'; done Seriously though, just don't. | {
"source": [
"https://unix.stackexchange.com/questions/70614",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34495/"
]
} |
70,622 | I'm using Ubuntu 12.04.
I've installed libwebp2 and libwebp-dev. So far I have found no example on the net of converting webp to jpg. Some webp files can easily be converted by using ImageMagick with the command convert file.webp file.jpg but lots of webp files cannot be converted and give this error: convert: no decode delegate for this image format `file.webp' @ error/constitute.c/ReadImage/532.
convert: missing an image filename `file.jpg' @ error/convert.c/ConvertImageCommand/3011. --------added This is the file: http://www.filedropper.com/file_144 | Google already provided the tool to decode webp images in the libwebp package, your uploaded file works on Arch. dwebp file.webp -o abc.png For the encoding tool, check the cwebp command. In Ubuntu you can install the tools with: sudo apt install webp On RHEL/CentOS: yum install libwebp libwebp-tools And you might consider using this online tool . | {
"source": [
"https://unix.stackexchange.com/questions/70622",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17948/"
]
} |
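A small sketch for converting a whole directory to jpg, assuming both the webp tools and ImageMagick's convert are installed (dwebp decodes to PNG, convert then re-encodes to JPEG):

for f in *.webp; do
    dwebp "$f" -o "${f%.webp}.png" && convert "${f%.webp}.png" "${f%.webp}.jpg"
done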
70,684 | When someone who is not in the sudoers group tries to use sudo, they get an error message like this one: yzT is not in the sudoers file. This incident will be reported. I'm trying to figure out in which log this information is logged, for example to check who tried to run a command with sudo, but I can't find it. The first Google search says /var/log/syslog but I don't see any information related to sudo there. | On Red Hat-based Linux systems like CentOS or Fedora it is in: /var/log/secure and for Debian-based systems like Ubuntu it is in: /var/log/auth.log | {
"source": [
"https://unix.stackexchange.com/questions/70684",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27807/"
]
} |
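To pull out just the failed attempts, something like this should work (the exact wording of the logged line can vary between sudo versions, so treat the pattern as an approximation):

grep 'NOT in sudoers' /var/log/auth.log    # Debian/Ubuntu
grep 'NOT in sudoers' /var/log/secure      # RHEL/CentOS/Fedora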
70,700 | Or: where can I put files belonging to a group? Suppose there are two users on a Unix system: joe and sarah . They are both members of the movies-enthusiast group. Where should I put their movie files? /home/{joe,sarah}/movies are not appropriate because those directories belongs to joe / sarah , not to their group; /home/movies-enthusiast is not appropriate too, because movies-enthusiast is a group, not a user; /var/movies-enthusiast might be an option, but I'm not sure this is allowed by the FHS; /srv/movies-enthusiast might be an option too, however movies are not files required by system services. | Don't use /usr is for sharable read-only data. Data here should only change for administrative reasons (e.g. the installation of new packages.) /opt is generally for programs that are self-contained or need to be isolated from the rest of the system for some reason (low and medium interaction honeypot programs, for example). /var is for "files whose content is expected to continually change during normal operation of the system---such as logs, spool files, and temporary e-mail files." I like to think of it like this: if your data wouldn't look right summarized in a list, it generally doesn't belong in /var (though, there are exceptions to this.) Use /home is for user home directories. Some see this directory as being an area for group files as well. The FHS actually notes that, "on large systems (especially when the /home directories are shared amongst many hosts using NFS) it is useful to subdivide user home directories. Subdivision may be accomplished by using subdirectories such as /home/staff, /home/guests, /home/students, etc." /srv is an acceptable and often-preferred location for group files. I generally use this directory for group-shared files for the reason mentioned in Chris Down's answer ; I see group file sharing as being a service that the server provides. See the hier(7) man page ( man hier ) for more information of the purpose of each directory described by the FHS. | {
"source": [
"https://unix.stackexchange.com/questions/70700",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32612/"
]
} |
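Whichever directory you pick, a typical setup so that both users can write to it and new files inherit the group (a sketch using /srv):

sudo mkdir -p /srv/movies
sudo chgrp movies-enthusiast /srv/movies
sudo chmod 2775 /srv/movies     # the setgid bit (2) makes new files belong to movies-enthusiast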
70,738 | Currently I'm mounting an ISO to a (read-only) directory (using the mount -o loop command) and then copying the contents to another normal directory. This takes a lot of time as the ISO is large. Is this the only way to do so, or is there some alternative? | You can do this with the 7-Zip tools: sudo apt-get install p7zip-full 7z x iso_file.iso on Fedora: 7za x iso_file.iso | {
"source": [
"https://unix.stackexchange.com/questions/70738",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23301/"
]
} |
70,756 | I used the following two commands to produce the same results:- [root@localhost ~]# grep line comments
The line should start with a single quote to comment in VB scripting.
Double slashes in the beginning of the line for single line comment in C.
[root@localhost ~]#
[root@localhost ~]# grep line <comments
The line should start with a single quote to comment in VB scripting.
Double slashes in the beginning of the line for single line comment in C.
[root@localhost ~]# Could any please explain to me any pros/cons if any of these 2 approaches over each other. | From the man grep page (on Debian): DESCRIPTION grep searches the named input FILEs (or standard input if no files are
named, or if a single hyphen-minus (-) is given as file name) for lines
containing a match to the given PATTERN. By default, grep prints the
matching lines. In the first case, grep opens the file; in the second, the shell opens the file and assigns it to the standard input of grep , and grep not being passed any file name argument assumes it needs to grep its standard input. Pros of 1: grep can grep more than one fileΒΉ. grep can display the file name where each occurrence of line is found. grep couldΒ² (but I don't know of any implementation that does) do a fadvise(POSIX_FADV_SEQUENTIAL) on the file descriptors it opens. Pros of 2: If the file can't be opened, the shell returns an error which will include more relevant information (like line number in the script) and in a more consistent way (if you let the shell open files for other commands as well) than when grep opens it. And if the file can't be opened, grep is not even called (which for some commands -- maybe not grep -- can make a big difference). in grep line < in > out , if in can't be opened, out won't be created or truncated. There's no problem with some files with unusual names (like - or file names starting with - )Β³. cosmetic: you can put <file anywhere on the command-line to show the command flow more naturally, like <in grep line >out if you prefer. cosmetic: with GNU grep , you can choose what label to use in front of the matching line instead of just the file name as in: <file grep --label='Found in file at line' -Hn line In terms of performance, if the file can't be opened, you save the execution of grep when using redirection, but otherwise for grep I don't expect much difference. With redirection, you save having to pass an extra argument to grep , you make grep 's argument parsing slightly easier. On the other hand, the shell will need (at least) an extra system call to dup2() the file descriptor onto file descriptor 0. In { grep -m1 line; next command; } < file , grep (here GNU grep ) will want to seek() back to just after the matching line so the next command sees the rest of the file (it will also need to determine whether the file is seekable or not). In other words, the position within stdin is another one of grep 's output. With grep -m1 line file , it can optimise that out, that's one fewer thing for grep to care about. Notes ΒΉ With zsh , you can do: grep line < file1 < file2 but that's doing the equivalent of cat file1 file2 | grep line (without invoking the cat utility) and so is less efficient, can cause confusion if the first file doesn't end in a newline character and won't let you know in which file the pattern is found. Β² That is to tell the system that grep is going to read the file sequentially so the I/O scheduler can make more educated decisions for instance as to how to read the data. grep can do that on its own fd, but it would be wrong to do it on that fd 0 that it borrows from its caller, as that fd (or rather the open file description it references) could be used later or even at the same time for non-sequential read. Β³ In the case of ksh93 and bash though, there are files like /dev/tcp/host/port (and /dev/fd/x on some systems in bash ) which, when used in the target of redirections the shell intercepts for special purposes instead of really opening the file on the file system (though generally, those files don't exist on the file system). /dev/stdin serves the same purpose as - recognised by grep , but at least, here it's more properly namespaced (anybody can create a file called - in any directory, while only administrators can create a file called /dev/tcp/host/port and administrators should know better). | {
"source": [
"https://unix.stackexchange.com/questions/70756",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21637/"
]
} |
70,766 | I have Windows 7 installed, and headphones work perfectly.
Few days ago, I installed Arch Linux along with Win7.
This issue appears with gnome/xfce4/openbox. Perhaps it is not related to desktop environment My headphones work fine under Arch Linux but: When I turn to Win7 from Arch Linux, headphones stop working. When I reboot Win7 twice, headphone works. When I use halt -p , from Arch Linux and reboot to Win7, headphone
works fine. I am using plain headphones(not usb headphone). Here is my device 00:1b.0 Audio device [0403]: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio [8086:3b56] (rev 05)
01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Redwood HDMI Audio [Radeon HD 5000 Series] [1002:aa60] | From the man grep page (on Debian): DESCRIPTION grep searches the named input FILEs (or standard input if no files are
named, or if a single hyphen-minus (-) is given as file name) for lines
containing a match to the given PATTERN. By default, grep prints the
matching lines. In the first case, grep opens the file; in the second, the shell opens the file and assigns it to the standard input of grep , and grep not being passed any file name argument assumes it needs to grep its standard input. Pros of 1: grep can grep more than one fileΒΉ. grep can display the file name where each occurrence of line is found. grep couldΒ² (but I don't know of any implementation that does) do a fadvise(POSIX_FADV_SEQUENTIAL) on the file descriptors it opens. Pros of 2: If the file can't be opened, the shell returns an error which will include more relevant information (like line number in the script) and in a more consistent way (if you let the shell open files for other commands as well) than when grep opens it. And if the file can't be opened, grep is not even called (which for some commands -- maybe not grep -- can make a big difference). in grep line < in > out , if in can't be opened, out won't be created or truncated. There's no problem with some files with unusual names (like - or file names starting with - )Β³. cosmetic: you can put <file anywhere on the command-line to show the command flow more naturally, like <in grep line >out if you prefer. cosmetic: with GNU grep , you can choose what label to use in front of the matching line instead of just the file name as in: <file grep --label='Found in file at line' -Hn line In terms of performance, if the file can't be opened, you save the execution of grep when using redirection, but otherwise for grep I don't expect much difference. With redirection, you save having to pass an extra argument to grep , you make grep 's argument parsing slightly easier. On the other hand, the shell will need (at least) an extra system call to dup2() the file descriptor onto file descriptor 0. In { grep -m1 line; next command; } < file , grep (here GNU grep ) will want to seek() back to just after the matching line so the next command sees the rest of the file (it will also need to determine whether the file is seekable or not). In other words, the position within stdin is another one of grep 's output. With grep -m1 line file , it can optimise that out, that's one fewer thing for grep to care about. Notes ΒΉ With zsh , you can do: grep line < file1 < file2 but that's doing the equivalent of cat file1 file2 | grep line (without invoking the cat utility) and so is less efficient, can cause confusion if the first file doesn't end in a newline character and won't let you know in which file the pattern is found. Β² That is to tell the system that grep is going to read the file sequentially so the I/O scheduler can make more educated decisions for instance as to how to read the data. grep can do that on its own fd, but it would be wrong to do it on that fd 0 that it borrows from its caller, as that fd (or rather the open file description it references) could be used later or even at the same time for non-sequential read. Β³ In the case of ksh93 and bash though, there are files like /dev/tcp/host/port (and /dev/fd/x on some systems in bash ) which, when used in the target of redirections the shell intercepts for special purposes instead of really opening the file on the file system (though generally, those files don't exist on the file system). /dev/stdin serves the same purpose as - recognised by grep , but at least, here it's more properly namespaced (anybody can create a file called - in any directory, while only administrators can create a file called /dev/tcp/host/port and administrators should know better). | {
"source": [
"https://unix.stackexchange.com/questions/70766",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12350/"
]
} |
70,859 | A sample script can be as below: #!/bin/bash
sudo su
ls /root When using ./test.sh as the normal user, instead of running ls as super user and exiting, it switches to root; and when I log out, it executes ls /root as the normal user. Can anybody tell me about the mechanism behind this? | The commands in a script execute one by one, independently. The script itself, as the parent of all commands in the script, is another independent process, and the su command does not and cannot change it to root: the su command creates a new process with root privileges. After that su command completes, the parent process, still running as the same user, will execute the rest of the script. What you want to do is write a wrapper script. The privileged commands go into the main script, for example ~/main.sh
ls /root The wrapper script calls the main script with root permissions, like this #!/bin/sh
su -c ~/main.sh root To launch this process you run the wrapper, which in turn launches the main script after switching user to the root user. This wrapper technique can be used to turn the script into a wrapper around itself. Basically check to see if it is running as root, if not, use "su" to re-launch itself. $0 is a handy way of making a script refer to itself, and the whoami command can tell us who we are (are we root?) So the main script with built-in wrapper becomes #!/bin/sh
[ `whoami` = root ] || exec su -c $0 root
ls /root Note the use of exec. It means "replace this program by", which effectively ends its execution and starts the new program, launched by su, with root, to run from the top. The replacement instance is "root" so it doesn't execute the right side of the || | {
"source": [
"https://unix.stackexchange.com/questions/70859",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8776/"
]
} |
70,878 | I have a situation where I want to replace a particular string in many files: replace a string AAA with another string BBB. But there are lots of strings starting with AAA or ending in AAA, and I want to replace only the one on line 34 and keep the others intact. Is it possible to specify by line number? On all files this string is exactly on the 34th line. | You can specify the line number in sed or NR (number of record) in awk. awk 'NR==34 { sub("AAA", "BBB") } 1' (the trailing 1 makes awk print every line, changed or not) or use FNR (file number record) if you want to specify more than one file on the command line. awk 'FNR==34 { sub("AAA", "BBB") } 1' or sed '34s/AAA/BBB/' to do in-place replacement with sed sed -i '34s/AAA/BBB/' file_name | {
"source": [
"https://unix.stackexchange.com/questions/70878",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36556/"
]
} |
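Since the question mentions many files, a simple loop over them with GNU sed's in-place editing (the *.txt glob is just an example):

for f in *.txt; do
    sed -i '34s/AAA/BBB/' "$f"
done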
70,885 | After a recent update (Ubuntu 12.04 LTS), TAB completion on the command line is slow. After entering a partial command (e.g. evi [TAB] ) or partial filename (e.g. evince somedocu[TAB] ) the shell, sometimes though not always, hangs for several seconds. Personally, I'd prefer a less powerful autocomplete to a slow one. Is there a simple fix? Edit: Additional information related to comments: PATH is pretty standard. ~/bin has some bash scripts $ echo $PATH
/home/USERNAME/bin:/usr/local/bin:/usr/bin:/bin:/usr/games The number of files in the working directory is less than 100. The autocomplete feature was especially slow after unusual disk activity (system upgrade). It is, thus, possible, that rereading /usr/bin and other directories caused the lag. | I don't know about fixing βΒ there are all kinds of things that could go cause delays. But I can offer a few tips to investigate. Just as a guess, maybe there's a directory somewhere in a search path ( $PATH , or some place where bash looks for completion data) that's on a filesystem which is slow to respond. Usually it's remote filesystems that are slow, but it could also be a failing hard disk, a hung FUSE driver, etc. The first step to investigate is to run set -x to get a trace of the commands that the shell executes to generate the completions. Watch where it pauses. When done, turn tracking back off with set +x . If that doesn't give enough information, bring in the big guns. Note the shell's process ID ( echo $$ ). In another terminal, run strace -f -s9999 -p$$ (or the equivalent of strace if running on another unix flavor). Strace lists the system calls performed by the process. See if it seems to be accessing files that it shouldn't, or if access to some files is slow. Adding the option -T to the strace command line makes it show the time spent in each system call. | {
"source": [
"https://unix.stackexchange.com/questions/70885",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9979/"
]
} |
70,963 | Just looking for the difference between 2>&- 2>/dev/null |& &>/dev/null >/dev/null 2>&1 and their portability with non-Bourne shells like tcsh , mksh , etc. | For background: a number 1 = standard out (i.e. STDOUT) a number 2 = standard error (i.e. STDERR) if a number isn't explicitly given, then number 1 is assumed by the shell (bash) First let's tackle the function of these. For reference see the Advanced Bash-Scripting Guide . Functions 2>&- The general form of this one is M>&- , where "M" is a file descriptor number. This will close output for whichever file descriptor is referenced, i.e. "M" . 2>/dev/null The general form of this one is M>/dev/null , where "M" is a file descriptor number. This will redirect the file descriptor, "M" , to /dev/null . 2>&1 The general form of this one is M>&N , where "M" & "N" are file descriptor numbers. It combines the output of file descriptors "M" and "N" into a single stream. |& This is just an abbreviation for 2>&1 | . It was added in Bash 4. &>/dev/null This is just an abbreviation for >/dev/null 2>&1 . It redirects file descriptor 2 (STDERR) and descriptor 1 (STDOUT) to /dev/null . >/dev/null This is just an abbreviation for 1>/dev/null . It redirects file descriptor 1 (STDOUT) to /dev/null . Portability to non-bash, tcsh, mksh, etc. I've not dealt much with other shells outside of csh and tcsh . My experience with those 2 compared to bash's redirection operators, is that bash is superior in that regard. See the tcsh man page for more details. Of the commands you asked about none are directly supported by csh/tcsh. You'd have to use different syntaxes to construct similar functions. | {
"source": [
"https://unix.stackexchange.com/questions/70963",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36369/"
]
} |
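A few concrete examples of the forms discussed above, as used in bash:

some_command > out.log 2>&1      # stdout and stderr both end up in out.log
some_command 2>/dev/null         # discard stderr, keep stdout on the terminal
some_command &>/dev/null         # bash shorthand: discard both streams
some_command 2>&-                # close stderr; the program may then see write errors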
70,966 | I have this: date +"%H hours and %M minutes" I use festival to say it up.. but it says like: "zero nine hours".. I want it to say "nine hours"! but date always give me 09... so I wonder if bash can easly make that become just 9? in the complex script I tried like printf %d 09 but it fails.. not octal :( any idea? | In your case, you can simply disable zero padding by append - after % in the format string of date: %-H By default, date pads numeric fields with zeroes. The following optional flags may follow '%': - (hyphen) do not pad the field _ (underscore) pad with spaces 0 (zero) pad with zeros ^ use upper case if possible # use opposite case if possible See date manual If you want to interpret number in different base, in bash Constants with a leading 0 are interpreted as octal numbers. A leading 0x or 0X denotes hexadecimal. Otherwise, numbers take the form [base#]n, where base is a decimal number between 2 and 64 representing the arithmetic base, and n is a number in that base So, to interpret a number as decimal, use 10#n form, eg. 10#09 echo $((10#09*2))
18 See Arithmetic Evaluation section of bash manual. | {
"source": [
"https://unix.stackexchange.com/questions/70966",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30352/"
]
} |
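Applied to the original festival example, the padding-free format and the base-10 workaround look like this:

date +"%-H hours and %-M minutes"      # e.g. "9 hours and 5 minutes"
echo $(( 10#$(date +%M) ))             # or force base 10 if you already have the zero-padded value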
71,053 | While trying to save a file out of Nano the other day, I got an error message saying "XOFF ignored, mumble mumble". I have no idea what that's supposed to mean. Any insights? | You typed the XOFF character Ctrl-S. In a traditional terminal environment, XOFF would cause the terminal to pause it's output until you typed the XON character. Nano ignores this because Nano is a full-screen editor, and pausing it's output is pretty much a nonsensical concept. As to why the wording is what it is, you'd have to ask the original devs. | {
"source": [
"https://unix.stackexchange.com/questions/71053",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36429/"
]
} |
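If you keep hitting Ctrl-S by accident and have no need for terminal flow control, you can switch it off for the current terminal (put it in your shell rc file to make it stick):

stty -ixon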
71,064 | My distribution is Fedora 17 Gnome.
Every time I reboot/restart my computer I need to run this command as root: modprobe rt2800usb How can I make it permanent? | On any distro using systemd you can automatically load the module via modules-load.d : create the config file: /etc/modules-load.d/rt2800usb.conf open it and edit like this (add the module name): rt2800usb next time you reboot the module should be automatically loaded Troubleshooting: Check if systemd service loaded the module: systemctl status systemd-modules-load.service The output should look like this: systemd-modules-load.service - Load Kernel Modules
Loaded: loaded (/usr/lib/systemd/system/systemd-modules-load.service; static)
Active: active (exited) since Wed, 03 Apr 2013 22:50:57 +0000; 46s ago
Docs: man:systemd-modules-load.service(8)
man:modules-load.d(5)
Process: 260 ExecStart=/usr/lib/systemd/systemd-modules-load (code=exited, status=0/SUCCESS) The last line contains the PID (process id) and the exit code. status=0/SUCCESS means the module was successfully inserted, confirmed by: journalctl -b _PID=260 output being: Apr 03 22:50:57 mxhst systemd-modules-load[260]: Inserted module 'rt2800usb' In case of failure, systemctl output looks like this: systemd-modules-load.service - Load Kernel Modules
Loaded: loaded (/usr/lib/systemd/system/systemd-modules-load.service; static)
Active: failed (Result: exit-code) since Wed, 03 Apr 2013 22:50:59 +0000; 43s ago
Docs: man:systemd-modules-load.service(8)
man:modules-load.d(5)
Process: 260 ExecStart=/usr/lib/systemd/systemd-modules-load (code=exited, status=1/FAILURE) with journalctl -b reporting: Apr 03 22:50:59 mxhst systemd-modules-load[260]: Failed to find module 'fakert2800usb' When the exit code is 0/SUCCESS it means your module has been successfully inserted; running lsmod | grep rt2800 should confirm that: rt2800usb 26854 0
rt2x00usb 19757 1 rt2800usb
rt2800lib 64762 1 rt2800usb
rt2x00lib 66520 3 rt2x00usb,rt2800lib,rt2800usb
mac80211 578735 3 rt2x00lib,rt2x00usb,rt2800lib If lsmod output doesn't confirm (despite the service exit code being 0/SUCCESS ) it means something removed the module after being loaded by modules-load.service . One possible cause is another *.conf file that blacklisted the module. Look for a line like: blacklist rt2800usb in /etc/modprobe.d/*.conf , /usr/lib/modprobe.d/*.conf or /run/modprobe.d/*.conf and comment it out / delete it. | {
"source": [
"https://unix.stackexchange.com/questions/71064",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36440/"
]
} |
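The config file can also be created in one line, if you prefer:

echo rt2800usb | sudo tee /etc/modules-load.d/rt2800usb.conf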
71,109 | I opened a file in readonly mode; is there a way to get out of readonly mode? | You could do this: :set noro That unsets the read-only flag, but if the underlying file is still not writable by you then vim still will be unable to write to it. | {
"source": [
"https://unix.stackexchange.com/questions/71109",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12521/"
]
} |
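If the file turns out to be unwritable because it belongs to root and you have sudo rights, a common workaround from inside vim is:

:w !sudo tee % > /dev/null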
71,113 | Are there any open source MySQL performance tools for optimizing SQL queries? | You could do this: :set noro That unsets the read-only flag, but if the underlying file is still not writable by you then vim still will be unable to write to it. | {
"source": [
"https://unix.stackexchange.com/questions/71113",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36466/"
]
} |
71,118 | How can I ignore the space in "Kuala Lumpur" when sorting this list? I cheated it by tabbing the columns and sorted on the tab which gives me the right results but I would like to know how to deal with a space in a column because re-formatting a list doesn't seem like a good habit to get into, especially if the list is much larger. Thanks in advance Kuala Lumpur 78 56
Seoul 86 66
Karachi 95 75
Tokyo 85 60
Lahore 85 75
Manila 90 85 BY CITY: Karachi 95 75
Kuala Lumpur 78 56
Lahore 85 75
Manila 90 85
Seoul 86 66
Tokyo 85 60 I also have it sorted by high temp (high-low, 2nd column) and low temp (low-high, 3rd col) BY HIGH TEMP: Karachi 95 75
Manila 90 85
Seoul 86 66
Lahore 85 75
Tokyo 85 60
Kuala Lumpur 78 56 BY LOW TEMP: Kuala Lumpur 78 56
Tokyo 85 60
Seoul 86 66
Karachi 95 75
Lahore 85 75
Manila 90 85 | You could do this: :set noro That unsets the read-only flag, but if the underlying file is still not writable by you then vim still will be unable to write to it. | {
"source": [
"https://unix.stackexchange.com/questions/71118",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36287/"
]
} |
71,121 | As per my knowledge, to determine the current shell we use echo $0 in the shell. Rather I want my script to check in which shell it is running. So, I tried to print $0 in the script and it returns the name of the script as it should. So, my question is how can I find which shell is my script running in during runtime? | Maybe not what you're asking for, but this should work to some extent to identify the interpreter currently interpreting it for a few like Thompson shell ( osh ), Bourne shell, Bourne-again shell ( bash ), Korn shell ( ksh88 , ksh93 , pdksh , mksh ), zsh , Policy-compliant Ordinary shell ( posh ), Yet Another shell ( yash ), rc shell, akanga shell, es shell, wish TCL interpreter, tclsh TCL interpreter, expect TCL interpreter, Perl, Python, Ruby, PHP, JavaScript (nodejs, SpiderMonkey shell and JSPL at least) MS/Wine cmd.exe , command.com (MSDOS, FreeDOS...). 'echo' +"'[{<?php echo chr(13)?>php <?php echo PHP_VERSION.chr(10);exit;?>}\
@GOTO DOS [exit[set 1 [[set 2 package] names];set 3 Tcl\ [info patchlevel];\
if {[lsearch -exact $1 Expect]>=0} {puts expect\ [$2 require Expect]\ ($3)} \
elseif {[lsearch -exact $1 Tk]>=0} {puts wish\ ($3,\ Tk\ [$2 require Tk])} \
else {puts $3}]]]' >/dev/null ' {\">/dev/null \
">"/dev/null" +"\'";q="#{",1//2,"}";a=+1;q='''=.q,';q=%{\"
'echo' /*>/dev/null
echo ">/dev/null;status=0;@ {status=1};*=(" '$' ");~ $status 1&&{e='"\
"';eval catch $2 ^'&version {eval ''echo <='^ $2 ^'&version''}';exit};e='"\
"';if (eval '{let ''a^~a''} >[2] /dev/null'){e='"\
"';exec echo akanga};eval exec echo rc $2 ^ version;\" > /dev/null
: #;echo possibly pre-Bourne UNIX V1-6 shell;exit
if (! $?version) set version=csh;exec echo $version
:DOS
@CLS
@IF NOT "%DOSEMU_VERSION%"=="" ECHO DOSEMU %DOSEMU_VERSION%
@ECHO %OS% %COMSPEC%
@VER
@GOTO FIN
", unless eval 'printf "perl %vd\n",$^V;exit;'> "/dev/null";eval ': "\'';
=S"';f=false e=exec\ echo n=/dev/null v=SH_VERSION;`(eval "f() { echo :
};f")2>$n` $f||$e Bourne-like shell without function
case `(: ${_z_?1}) 2>&1` in 1) $e ash/BSD sh;;esac;t(){
eval "\${$1$v+:} $f &&exec echo ${2}sh \$$1$v";};t BA ba;t Z z;t PO po;t YA ya
case `(typeset -Z2 b=0;$e $b)2>$n` in 00) (eval ':${.}')2>$n&&eval '
$e ksh93 ${.sh.version}';t K pdk;$e ksh88;;esac;case `(eval '$e ${f#*s}$($e 1
)$((1+1))')2>$n` in e12)$e POSIX shell;;esac;$e Bourne-like shell;: }
print "ruby ",RUBY_VERSION,"\n";exit;' ''';import sys
print("python "+sys.version);z='''*/;
s="";j="JavaScript";if(typeof process=="object"){p=console.log;p(process.title
,process.version)}else{p=print;p((f="function")==(t=typeof version)?"string"==
typeof(v=version())?v:(typeof build!=f?"":s= "SpiderMonkey ")+j+" "+v:(t==
"undefined"?j+"?":version)+"\n");if(s)build()}/*
:FIN } *///''' I posted the initial version of that which_interpreter script circa 2004 on usenet. Sven Mascheck has a (probably more useful to you) script called whatshell that focuses on identifying Bourne-like shells. You can also find a merged version of our two scripts there . | {
"source": [
"https://unix.stackexchange.com/questions/71121",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26374/"
]
} |
71,135 | ssh-add -l shows you all ssh-keys that have been added with ssh-add ~/.ssh/id_yourkey . How do I do the analogous thing with gpg and gpg-agent, in other words, ask it to show a list of cached keys? | You may not be able to do this, at least not yet, or at least not in the general case. However, I will share what I have learned, and look forward to updating this answer in due course. First of all, unlike the ssh-agent capability, which actually caches private keys, gpg-agent can cache either keys or passphrases. It is up to each client which to cache, and gpg just uses gpg-agent to cache the passphrase. You can interact with gpg-agent using the gpg-connect-agent utility. In the example that follows, I am passing commands one at a time via STDIN. $ CACHEID="ThisIsTheTrickyPart"
$ ERRSTR="Error+string+goes+here"
$ PMTSTR="Prompt"
$ DESSTR="Description+string+goes+here"
$ echo "GET_PASSPHRASE --data $CACHEID $ERRSTR $PMTSTR $DESSTR" | gpg-connect-agent
D MyPassPhrase
OK Upon invoking gpg-connect-agent and passing in this command, the pinentry command configured on my system uses the error, prompt, and description strings to prompt for a passphrase. In this case I entered "MyPassPhrase" which is what is returned in the structured output (see image below) . If I send GET_PASSPHRASE to gpg-agent again with the same $CACHEID , it returns the cached passphrase instead of using pinentry . GET_PASSPHRASE also accepts a --no-ask option which will return an error on a cache miss. Here I use "NotCachedID" as the cache ID, and use dummy strings for the required arguments that gpg-agent will not use. $ echo "GET_PASSPHRASE --no-ask NotCachedID Err Pmt Des" | gpg-connect-agent
ERR 67108922 No data <GPG Agent> In principle, then, you could ask the agent for each maybe-cached passphrase in turn, and check for OK or ERR in the output. The question then becomes, how do I generate the cache ID? As we see in the example above, gpg-agent is liberal in what it accepts as the cache ID. It turns out that gpg computes a fingerprint on the public key and uses a hex-coded string representation as the cache ID, but the trouble is that this fingerprint is not the same as the fingerprint you can learn via gpg --fingerprint --list-secret-keys . This digest is called keygrip (because it is computed over the raw key material only whereas the fingerprint is calculcated over the key material and the creation timestamp). If you really want to continue down this path, you will have to find out how to generate the correct fingerprint for each of the keys you wish to check (this will be easy using the next generation of GnuPG, 2.1, with the option --with-keygrip ). Warning: The output from GET_PASSPHRASE actually contains the passphrase in the clear . Even if you leave off the --data option, the passphrase is plainly visible as a hex-coded string. It is probably a Very Bad Idea(tm) to muck around with this unless you know what you are doing, and take the appropriate precautions. | {
"source": [
"https://unix.stackexchange.com/questions/71135",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36479/"
]
} |
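With GnuPG 2.1 or later the agent can be queried much more directly; a rough sketch (the keygrips come from --with-keygrip as mentioned above, and keyinfo --list marks cached keys with a 1 in its cached column):

gpg --with-keygrip --list-secret-keys
gpg-connect-agent 'keyinfo --list' /bye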
71,144 | I am at a bit of a loss as to the purpose of set and export in Bash (and I guess probably other shells too). I would think it is for setting environment variables, but that can be done just with VARIABLE=VALUE , right? Also typing set and export on their own show different values. So what is their purpose? | export marks the given variable as exported to children of the current process, by default they are not exported. For example: $ foo=bar
$ echo "$foo"
bar
$ bash -c 'echo "$foo"'
$ export foo
$ bash -c 'echo "$foo"'
bar set , on the other hand, sets shell attributes and the positional parameters. $ set foo=baz
$ echo "$1"
foo=baz Note that baz is not assigned to foo , it simply becomes a literal positional parameter. There are many other things set can do (mostly shell options), see help set . As for printing, export called with no arguments prints all of the variables in the shell's environment. set also prints variables that are not exported. It can also export some other objects (although you should note that this is not portable), see help export . | {
"source": [
"https://unix.stackexchange.com/questions/71144",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6377/"
]
} |
71,176 | Is it possible to find duplicate files on my disk which are bit to bit identical but have different file-names? | fdupes can do this. From man fdupes : Searches the given path for duplicate files. Such files are found by comparing file sizes and MD5 signatures, followed by a byte-by-byte comparison. In Debian or Ubuntu, you can install it with apt-get install fdupes . In Fedora/Red Hat/CentOS, you can install it with yum install fdupes . On Arch Linux you can use pacman -S fdupes , and on Gentoo, emerge fdupes . To run a check descending from your filesystem root, which will likely take a significant amount of time and memory, use something like fdupes -r / . As asked in the comments, you can get the largest duplicates by doing the following: fdupes -r . | {
while IFS= read -r file; do
[[ $file ]] && du "$file"
done
} | sort -n This will break if your filenames contain newlines. | {
"source": [
"https://unix.stackexchange.com/questions/71176",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5289/"
]
} |
71,253 | I'm looking for guidelines on what one should and should not include in the various startup files for zsh . I understand the order of sourcing of these files, and the conditions under which they are sourced, but it is still not clear to me what should go in each. | Here is a non-exhaustive list, in execution-order, of what each file tends to contain: .zshenv is always sourced. It often contains exported variables that should be available to other programs. For example, $PATH , $EDITOR , and $PAGER are often set in .zshenv . Also, you can set $ZDOTDIR in .zshenv to specify an alternative location for the rest of your zsh configuration. .zprofile is for login shells. It is basically the same as .zlogin except that it's sourced before .zshrc whereas .zlogin is sourced after .zshrc . According to the zsh documentation, " .zprofile is meant as an alternative to .zlogin for ksh fans; the two are not intended to be used together, although this could certainly be done if desired." .zshrc is for interactive shells. You set options for the interactive shell there with the setopt and unsetopt commands. You can also load shell modules, set your history options, change your prompt, set up zle and completion, et cetera. You also set any variables that are only used in the interactive shell (e.g. $LS_COLORS ). .zlogin is for login shells. It is sourced on the start of a login shell but after .zshrc , if the shell is also interactive. This file is often used to start X using startx . Some systems start X on boot, so this file is not always very useful. .zlogout is sometimes used to clear and reset the terminal. It is called when exiting, not when opening. You should go through the configuration files of random Github users to get a better idea of what each file should contain. | {
"source": [
"https://unix.stackexchange.com/questions/71253",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
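A bare-bones illustration of the split described above (just an example of what goes where, not a recommended configuration):

# ~/.zshenv  -- read by every zsh invocation
export EDITOR=vim
export PAGER=less

# ~/.zshrc   -- interactive shells only
setopt HIST_IGNORE_DUPS
PROMPT='%n@%m %~ %# '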
71,481 | I connect to a remote ssh server by running this command: ssh -D 12345 [email protected] This creates a socks proxy that I can use with Firefox to bypass censorship in my country. However, I can't take advantage of it to in the command line. Let's say my country blocks access to youtube. How can I use the ssh connection to run a command such as: youtube-dl "youtube.com/watch?v=3XjwiV-6_CA" Without being blocked by the government? How I can set a socks proxy for all terminal commands? | Youtube-dl doesn't support a SOCKS proxy. There's a feature request for it, with links to a couple of working proposals. Youtube-dl supports HTTP proxies out of the box. To benefit from this support, you'll need to run a proxy on myserver.com . Pretty much any lightweight proxy will do, for example tinyproxy . The proxy only needs to listen to local connections ( Listen 127.0.0.1 in tinyproxy.conf ). If the HTTP proxy is listening on port 8035 ( Port 8035 ), run the following ssh command: ssh -L 8035:localhost:8035 [email protected] and set the environment variables http_proxy and https_proxy : export http_proxy=http://localhost:8035/ https_proxy=http://localhost:8035/
youtube-dl youtube.com/watch?V=3XjwiV-6_CA | {
"source": [
"https://unix.stackexchange.com/questions/71481",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36665/"
]
} |
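Note that youtube-dl later gained SOCKS support, so if your copy is recent enough you may be able to skip the HTTP proxy entirely and point it straight at the ssh -D tunnel (check youtube-dl --help for a --proxy option first):

youtube-dl --proxy socks5://127.0.0.1:12345/ "youtube.com/watch?v=3XjwiV-6_CA"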
71,492 | I have a script that outputs text to stdout . I want to see all this output in my terminal, and at the same time I want to filter some lines and save them in a file. Example: $ myscript
Line A
Line B
Line C
$ myscript | grep -P 'A|C' > out.file
$ cat out.file
Line A
Line C I want to see output of first command in terminal, and save the output of the second command in a file. At the same time. I tried using tee , but with no result, or better, with reversed result . | I want to see output of first command in terminal, and save the output of the second command in a file. As long as you don't care whether what you are looking at is from stdout or stderr , you can still use tee : myscript | tee /dev/stderr | grep -P 'A|C' > out.file Will work on linux; I don't know if "/dev/stderr" is equally applicable on other *nixes. | {
"source": [
"https://unix.stackexchange.com/questions/71492",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36667/"
]
} |
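In bash or zsh you can also keep the unfiltered stream on the terminal and send only the filtered copy to the file with process substitution:

myscript | tee >(grep -P 'A|C' > out.file)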