source_id | question | response | metadata
---|---|---|---|
168,181 | I want to write a bash script to print all possible lower and upper case permutations of a certain word, e.g. harley: harley
harleY
harlEy
harLey
...
HARLey
HARLEy
HARLEY My naive solution is to write an n-fold (n is len(word)) nested for-loop for this specific word: #!/bin/bash
for a in {h,H}; do
for b in {a,A}; do
...
done
done However, I would have to code the script for a different word again. Is there a better way to accomplish this? | A slightly better solution: echo {h,H}{a,A}{r,R}{l,L}{e,E}{y,Y} For full scalability: echo harley \
| perl -nle 'print "echo ",
join "",map { "{" . lc . "," .uc ."}" } split //' \
| xargs -I {} bash -c "{}" If you absolutely must have one word per line, go with for w in {h,H}{a,A}{r,R}{l,L}{e,E}{y,Y};do echo $w;done thanks to mattdm's comment The corresponding scalable version would be: echo harley \
| perl -nle 'print join "",map { "{" . lc . "," .uc ."}" } split //' \
| xargs -I {} bash -c 'for w in {};do echo $w;done' For fun, try replacing "harley" with "supercalifragilisticexpialidocious" It's been 5 minutes and my computer is still crunching on this one and will probably never finish :) | {
"source": [
"https://unix.stackexchange.com/questions/168181",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72471/"
]
} |
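For comparison, a pure-bash take on the same idea (a rough sketch, assuming bash 4 or later for the ${c,,}/${c^^} case-conversion expansions; the function name case_variants is just an illustrative choice):
case_variants() {
  local prefix=$1 rest=$2
  if [ -z "$rest" ]; then printf '%s\n' "$prefix"; return; fi
  local c=${rest:0:1} tail=${rest:1}
  case_variants "$prefix${c,,}" "$tail"   # lower-case this character, recurse on the rest
  case_variants "$prefix${c^^}" "$tail"   # upper-case this character, recurse on the rest
}
case_variants '' harley
It is slower than the brace-expansion approach above, but it works for a word held in a variable without generating code.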
168,221 | In my testing (in Bash and Z Shell), I saw no problems with defining functions or aliases or executable shell scripts which have hyphens in the name, but I'm not confident that this will be okay in all shells and in all use cases. The reason I would like to do this is that a hyphen is easier to type than an underscore, and therefore faster and smoother. One reason I'm hesitant to trust that it's not a problem is that in some languages (Ruby for example) the hyphen would be interpreted as a minus sign even without spaces around it. It wouldn't surprise me if something like this might happen in some shells, where the hyphen is interpreted as signaling an option even without a space. Another reason I'm a little suspicious is that my text editor screws up the syntax highlighting for functions with hyphens. (But of course it's entirely possible that that's just a bug in its syntax highlighting configuration for shell scripts.) Is there any reason to avoid hyphens? | POSIX and Hyphens: No Guarantee According to the POSIX standard, a function name must be a valid name and a name can consist of: 3.231 Name In the shell command language, a word consisting solely of
underscores, digits, and alphabetics from the portable character set.
The first character of a name is not a digit. Additionally, an alias must be a valid alias name , which can consist of: 3.10 Alias Name In the shell command language, a word consisting solely of underscores,
digits, and alphabetics from the portable character set and any of the
following characters: '!', '%', ',', '@'. Implementations may allow other characters within alias names as an
extension. (Emphasis mine.) A hyphen is not listed among the characters that must be allowed in either case. So, if they are used, portability is not guaranteed. Examples of Shells That Do Not Support Hyphens dash is the default shell ( /bin/sh ) on the debian-ubuntu family and it does not support hyphens in function names: $ a-b() { date; }
dash: 1: Syntax error: Bad function name Interestingly enough, it does support hyphens in aliases, though, as noted above, this is an implementation characteristic , not a requirement: $ a_b() { printf "hello %s\n" "$1"; }
$ alias a-b='a_b'
$ a-b world
hello world The busybox shell (only the ash based one) also does not support hyphens in function names: $ a-b() { date; }
-sh: Syntax error: Bad function name Summary of Hyphen Support by Shell The following shells are known to support hyphens in function names: pdksh and derivatives, bash, zsh some ash derivatives such as the sh of FreeBSD ( since 2010 ) or NetBSD (since 2016. busybox sh when the selected shell at compile time is hush instead of ash . csh and tcsh (in their aliases, those shells have no function support). Those shells have a radically different syntax anyway, so there's no hope to have cross shell compatibility with those. rc and derivatives (again with a radically different syntax) fish (again with a radically different syntax) The following shells are known not to support hyphens in function names: the Bourne shell and derivatives such as ksh88 and bosh (in the Bourne shell, functions and variables shared the same namespace, you couldn't have a variable and a function by the same name). ksh93, yash, the original ash and some of its derivatives (busybox ash (the default choice for sh), dash) Conclusions Hyphens are non-standard. Stay away from them if you want cross-shell compatibility. Use underscores instead of hyphens: underscores are accepted everywhere. | {
"source": [
"https://unix.stackexchange.com/questions/168221",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3358/"
]
} |
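If you just want to probe the shell you are actually targeting, a throwaway test in a subshell does it (a sketch of mine, not from the answer; the function name a-b is arbitrary):
if (eval 'a-b() { :; }') 2>/dev/null; then
  echo "this shell accepts hyphens in function names"
else
  echo "this shell rejects them; stick to underscores"
fi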
168,232 | I am using the following command to create messages on the fly, and send them: echo "Subject:Hello \n\n I would like to buy a hamburger\n" | sendmail [email protected] It seems that when you send the information from a file, by doing something like: sendmail [email protected] mail.txt Then sendmail sees each line as a header, and parses it. But the way I sent it above, everything ends up in the subject line. If one wants to echo a message complete with headers, into sendmail, then what is the format ? How does one do it ? | Your echo statement should really output newlines not the sequence \ followed by n . You can do that by providing the -e option: echo -e "Subject:Hello \n\n I would like to buy a hamburger\n" | sendmail [email protected] To understand what is the difference have a look at the output from the following two commands: echo "Subject:Hello \n\n I would like to buy a hamburger\n"
echo -e "Subject:Hello \n\n I would like to buy a hamburger\n" | {
"source": [
"https://unix.stackexchange.com/questions/168232",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91690/"
]
} |
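Since echo -e itself is not portable across shells, printf is often the safer way to build such a message (a sketch; the recipient address is a placeholder):
printf 'Subject: Hello\n\nI would like to buy a hamburger\n' | sendmail recipient@example.com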
168,255 | I see I can do $ [ -w /home/durrantm ] && echo "writable"
writable or $ test -w /home/durrantm && echo "writable"
writable or $ [[ -w /home/durrantm ]] && echo "writable"
writable I like using the third syntax. Are they equivalent in all ways and for all negative and edge cases? Are there any differences in portability, e.g. between bash on Ubuntu and on OS X or older/newer bash versions, e.g. before/after 4.0 and do they both expand expressions the same way? | [ is a synonym for the test command and it is simultaneously a bash builtin and a separate command. But [[ is a bash keyword and is not available in all shells. So for reasons of portability you are better off using single [ ] or test : [ -w "/home/durrantm" ] && echo "writable" | {
"source": [
"https://unix.stackexchange.com/questions/168255",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
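One practical difference is worth a small illustration (my own example with a hypothetical path): [[ ... ]] does not word-split unquoted variables, while [ ... ] does, so quoting matters more with the portable forms:
d='/home/durrantm/my dir'           # hypothetical directory containing a space
[ -w "$d" ] && echo writable        # quotes are required with [ / test
[[ -w $d ]] && echo writable        # bash/ksh/zsh only; no word splitting inside [[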
168,284 | When should I use -eq vs = vs == e.g. [[ $num -eq 0 ]]
[[ $num = 'zzz' ]] I've observed a pattern of using -eq (and -ne , etc.) for numbers and = for strings. Is there a reason for this and when should I use == | Because that's the definition for those operands. From POSIX test documentation, OPERANDS section : s1 = s2 True if the strings s1 and s2 are identical; otherwise, false. ... n1 -eq n2 True if the integers n1 and n2 are algebraically equal; otherwise,
false. == is not defined by POSIX, it's an extension of bash , derived from ksh . You shouldn't use == when you want portability. From bash documentation - Bash Conditional Expressions : string1 == string2 string1 = string2 True if the strings are equal. ‘=’ should be used with the test
command for POSIX conformance. | {
"source": [
"https://unix.stackexchange.com/questions/168284",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
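A small example of why the distinction matters (my own illustration):
num=00
[ "$num" -eq 0 ] && echo "numerically equal"    # true: -eq compares integers
[ "$num" = 0 ] && echo "string equal"           # false: "00" and "0" differ as strings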
168,354 | Often I'll use tmux to start a task that will be running for a while. I will periodically go back and check on it using tmux -a and then disconnect if it still hasn't completed and check again later. Is there any way to just see a brief snapshot of what's going on in the session without fully attaching? I'm looking for something like theoretically doing a tail on the session to get the last bit of output ( but if I can avoid creating another file with a copy of the output all the better ) Maybe attaching and having it immediately detach would also work. I'm attempting to save keystrokes, perhaps such a command could be executed remotely, i.e. ssh root@server tmux --tail ? | I think capture-pane might suit your needs: tmux capture-pane -pt "$target-pane" (see “target-pane” in the man page for the ways to specify a pane) By default, that command will dump the current contents of the specified pane. You can specify a range of lines by using the -S and -E options (start and end line numbers): the first line is 0, and negative numbers refer to lines from the pane’s “scroll back” history. So adding -S -10 gets you the most recent ten lines of history plus the current contents of the pane. tmux capture-pane -pt "$target-pane" -S -10 The -p option was added in 1.8. If you are running an earlier version then you can do this instead: tmux capture-pane -t "$target_pane" \; save-buffer - \; delete-buffer But mind those semicolons if you are issuing this command via ssh since the remote shell will add an additional level of shell interpretation (the semicolons need to be passed as arguments to the final tmux command, they must not be interpreted by either the local or the remote shell). | {
"source": [
"https://unix.stackexchange.com/questions/168354",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10287/"
]
} |
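Tying this back to the ssh idea in the question, something along these lines should work (a sketch; mysession:0.0 is a placeholder target pane, and -p needs tmux 1.8 or later as noted above):
ssh root@server tmux capture-pane -pt mysession:0.0 -S -10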
168,357 | I am trying to create a graphical program for my script. Inside the script I use tar to create a tar archive. From the graphical program I get the full name of file that I want to create a tar archive. tar -cvf temp.tar /home/username/dir1/dir2/selecteddir My tar archive includes home, username, dir1, dir2 and selecteddir while i want tar to create archive only including selecteddir. | You can use the -C option of tar to accomplish this: tar -C /home/username/dir1/dir2 -cvf temp.tar selecteddir From the man page of tar : -C directory
In c and r mode, this changes the directory before adding the following files.
In x mode, change directories after opening the archive but before extracting
entries from the archive. | {
"source": [
"https://unix.stackexchange.com/questions/168357",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73285/"
]
} |
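If your tar lacks -C, a subshell gives the same archive layout (a sketch; note the archive path is then resolved relative to the new directory, so an absolute path is used here):
(cd /home/username/dir1/dir2 && tar -cvf /tmp/temp.tar selecteddir)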
168,436 | I would like to open a terminal, split it to lets say 9 parts (3x3) and execute some bash script. But for each terminal part different script. Can this be done using perl, python or even bash? How can I switch between those little terminals without using keyboard shortcuts? Oh, by the way, I'm using terminator . And if there is some other terminal emulator that enables such a functionality, which is it? | To plagiarize myself , you can set up a profile with your desired settings (instructions adapted from here ): Run terminator and set up the layout you want. You can use Ctrl + Shift + E to split windows vertically and Ctrl + Shift + O (that's O as in oodles, not zero) to split horizontally. For this example, I have created a layout with 6 panes: Right click on the terminator window and choose Preferences . Once the Preferences window is open, go to Layouts and click Add : That will populate the Layouts list with your new layout: Find each of the terminals you have created in the layout and click on them. Then on the right, enter the command you want to run in them on startup: IMPORTANT: Note that the command is followed by ; bash . If you don't do that, the terminals will not be accessible, since they will run the command you give and exit. You need to launch a shell after each command to be able to use the terminals. Once you have set all the commands, click Close and then exit terminator . Open the terminator config file ~/.config/terminator/config and delete the section under layouts for the default config. Then change the name of the layout you created to default. It should look something like this: [global_config]
[keybindings]
[profiles]
[[default]]
[layouts]
[[default]]
[[[child0]]]
position = 446:100
type = Window
order = 0
parent = ""
size = 885, 550
[[[child1]]]
position = 444
type = HPaned
order = 0
parent = child0
[[[child2]]]
position = 275
type = VPaned
order = 0
parent = child1
[[[child5]]]
position = 219
type = HPaned
order = 1
parent = child1
[[[child6]]]
position = 275
type = VPaned
order = 0
parent = child5
[[[child9]]]
position = 275
type = VPaned
order = 1
parent = child5
[[[terminal11]]]
profile = default
command = 'df -h; bash'
type = Terminal
order = 1
parent = child9
[[[terminal10]]]
profile = default
command = 'export foo="bar" && cd /var/www/; bash'
type = Terminal
order = 0
parent = child9
[[[terminal3]]]
profile = default
command = 'ssh -Yp 24222 [email protected]'
type = Terminal
order = 0
parent = child2
[[[terminal4]]]
profile = default
command = 'top; bash'
type = Terminal
order = 1
parent = child2
[[[terminal7]]]
profile = default
command = 'cd /etc; bash'
type = Terminal
order = 0
parent = child6
[[[terminal8]]]
profile = default
command = 'cd ~/dev; bash'
type = Terminal
order = 1
parent = child6
[plugins] The final result is that when you run terminator it will open with 6 panes, each of which has run or is running the commands you have specified: Also, you can set up as many different profiles as you wish and either launch terminator with the -p switch giving a profile name, or manually switch to whichever profile you want after launching. | {
"source": [
"https://unix.stackexchange.com/questions/168436",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78357/"
]
} |
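If you keep the layout under its own name instead of overwriting default, recent terminator versions can usually start it directly with the layout switch (hedged: check terminator --help in your version):
terminator -l mylayout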
168,452 | I have the following setup Linux 1 Linux 0
eth1 eth0-------------------eth0
14.14.14.80 19.19.19.20 19.19.19.10
2005::5/64 2004::3/64 2001::3/64 From Linux0, I am able to ping 14.14.14.80 or 19.19.19.20 ( 19.19.19.20 was added as a default GW) and also on Linux1 , ipv4 forwarding was enabled.
For ipv6 , I cannot add 2004::3/64 as the default ipv6 gateway on Linux0 .
I tried ip -6 route add default via 2004::3 and ip -6 route add default via 2004:: But I get the error RTNETLINK answers: No route to host What am I missing here? | You need to add the route to the gateway first: ip -6 route add 2004::3 dev eth0 | {
"source": [
"https://unix.stackexchange.com/questions/168452",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68053/"
]
} |
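Putting the two steps together on Linux0 (a sketch using the addresses from the question):
ip -6 route add 2004::3 dev eth0
ip -6 route add default via 2004::3 dev eth0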
168,807 | I am looking for a way to mount a ZIP archive as a filesystem so that I can transparently access files within the archive. I only need read access -- the ZIP will not be modified. RAM consumption is important since this is for a (resource constrained) embedded system. What are the available options? | fuse-zip is an option and claims to be faster than the competition. # fuse-zip -r archivetest.zip /mnt archivemount is another: # archivemount -o readonly archivetest.zip /mnt Both will probably need to open the whole archive, therefore won't be particularly quick. Have you considered extracting the ZIP to a HDD or USB-stick beforehand and simply mounting that read-only? There are also other libraries like fuse-archive and ratarmount which supposedly are more performant under certain situations and provide additional features. | {
"source": [
"https://unix.stackexchange.com/questions/168807",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67390/"
]
} |
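A typical read-only session with either tool looks like this (a sketch; the mount point is arbitrary, and fusermount -u is the usual way to detach a FUSE mount):
mkdir -p /mnt/zip
fuse-zip -r archivetest.zip /mnt/zip
ls /mnt/zip            # browse the archive in place
fusermount -u /mnt/zip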
168,862 | Why does this bash script ssh $SERVER bash <<EOF
sed -i "s/database_name: [^ ]*/database_name: kartable_$ME" $PARAM_FILE
exit
EOF output -> sed: -e expression #1, char 53: unterminated `s' command | The s command in sed , uses a specific syntax: s/AAAA/BBBB/options where s is the substitution command, AAAA is the regex you want to replace, BBBB is with what you want it to be replaced with and options is any of the substitution command's options, such as global ( g ) or ignore case ( i ). In your specific case, you were missing the final slash / , if you add it, sed will work just fine: ➜ ~ sed 's/database_name: [^ ]*/database_name: kartable_$ME/'
database_name: something
database_name: kartable_$ME info sed 'The "s" Command' includes the full description and usage of the s command. | {
"source": [
"https://unix.stackexchange.com/questions/168862",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92104/"
]
} |
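Applied to the script in the question, the fixed line only gains the closing slash (a sketch keeping the original variables, which are expanded locally because the here-document delimiter is unquoted):
sed -i "s/database_name: [^ ]*/database_name: kartable_$ME/" $PARAM_FILE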
168,866 | From time to time I need to do a simple task where I output basic HTML into the console. I'd like to have it minimally rendered, to make it easier to read at a glance. Is there a utility which can handle basic HTML rendering in the shell (think of Lynx -style rendering--but not an actual browser)? For example, sometimes I'll put a watch on Apache's mod_status page: watch -n 1 curl http://some-server/server-status The output of the page is HTML with some minimal markup, which shows in the shell like: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html><head>
<title>Apache Status</title>
</head><body>
<h1>Apache Server Status for localhost</h1>
<dl><dt>Server Version: Apache/2.2.22 (Ubuntu) PHP/5.3.10-1ubuntu3.15 with Suhosin-Patch</dt>
<dt>Server Built: Jul 22 2014 14:35:25
</dt></dl><hr /><dl>
<dt>Current Time: Wednesday, 19-Nov-2014 15:21:40 UTC</dt>
<dt>Restart Time: Wednesday, 19-Nov-2014 15:13:02 UTC</dt>
<dt>Parent Server Generation: 1</dt>
<dt>Server uptime: 8 minutes 38 seconds</dt>
<dt>Total accesses: 549 - Total Traffic: 2.8 MB</dt>
<dt>CPU Usage: u35.77 s12.76 cu0 cs0 - 9.37% CPU load</dt>
<dt>1.06 requests/sec - 5.6 kB/second - 5.3 kB/request</dt>
<dt>1 requests currently being processed, 9 idle workers</dt>
</dl><pre>__W._______.....................................................
................................................................
................................................................
................................................................
</pre>
<p>Scoreboard Key:<br />
"<b><code>_</code></b>" Waiting for Connection,
"<b><code>S</code></b>" Starting up,
"<b><code>R</code></b>" Reading Request,<br />
"<b><code>W</code></b>" Sending Reply,
"<b><code>K</code></b>" Keepalive (read),
"<b><code>D</code></b>" DNS Lookup,<br />
"<b><code>C</code></b>" Closing connection,
"<b><code>L</code></b>" Logging,
"<b><code>G</code></b>" Gracefully finishing,<br />
"<b><code>I</code></b>" Idle cleanup of worker,
"<b><code>.</code></b>" Open slot with no current process</p>
<p /> When viewed in Lynx the same HTML is rendered as:
Apache Status (p1 of 2)
Apache Server Status for localhost Server Version: Apache/2.2.22 (Ubuntu) PHP/5.3.10-1ubuntu3.15 with Suhosin-Patch
Server Built: Jul 22 2014 14:35:25
________________________________________________________________________________________________________
Current Time: Wednesday, 19-Nov-2014 15:23:50 UTC
Restart Time: Wednesday, 19-Nov-2014 15:13:02 UTC
Parent Server Generation: 1
Server uptime: 10 minutes 48 seconds
Total accesses: 606 - Total Traffic: 3.1 MB
CPU Usage: u37.48 s13.6 cu0 cs0 - 7.88% CPU load
.935 requests/sec - 5088 B/second - 5.3 kB/request
2 requests currently being processed, 9 idle workers
_C_______W_.....................................................
................................................................
................................................................
................................................................
Scoreboard Key:
"_" Waiting for Connection, "S" Starting up, "R" Reading Request,
"W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup,
"C" Closing connection, "L" Logging, "G" Gracefully finishing,
"I" Idle cleanup of worker, "." Open slot with no current process | lynx has a "dump" mode, which you can use with watch : $ watch lynx https://www.google.com -dump From man lynx : -dump dumps the formatted output of the default document or those
specified on the command line to standard output. Unlike
interactive mode, all documents are processed. This can be used
in the following way:
lynx -dump http://www.subir.com/lynx.html
Files specified on the command line are formatted as HTML if
their names end with one of the standard web suffixes such as
“.htm” or “.html”. Use the -force_html option to format files
whose names do not follow this convention. This Ask Ubuntu question has many more options. | {
"source": [
"https://unix.stackexchange.com/questions/168866",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26343/"
]
} |
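For the server-status example from the question, the two pieces combine like this (a sketch; -stdin makes lynx read the HTML from its standard input):
watch -n 1 'curl -s http://some-server/server-status | lynx -stdin -dump'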
169,054 | I'm tailing a log file using tail -f messages.log and this is part of the output: Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Fusce eget tellus sit amet odio porttitor rhoncus.
Donec consequat diam sit amet tellus viverra pellentesque.
tail: messages.log: file truncated
Suspendisse at risus id neque pharetra finibus in facilisis ipsum. It shows tail: messages.log: file truncated when the file gets truncated automatically and that's supposed to happen, but I just want tail to show me the output without this truncate message. I've tried using tail -f messages.log | grep -v truncated but it shows me the message anyway. Is there any method to suppress this message? | That message is output on stderr like all warning and error messages. You can either drop all the error output: tail -f file 2> /dev/null Or to filter out only the error messages that contain truncate : { tail -f file 2>&1 >&3 3>&- | grep -v truncated >&2 3>&-;} 3>&1 That means however that you lose the exit status of tail . A few shells have a pipefail option (enabled with set -o pipefail ) for that pipeline to report the exit status of tail if it fails. zsh and bash can also report the status of individual components of the pipeline in their $pipestatus / $PIPESTATUS array. With zsh or bash , you can use: tail -f file 2> >(grep -v truncated >&2) But beware that the grep command is not waited for, so the error messages if any may end up being displayed after tail exits and the shell has already started running the next command in the script. In zsh , you can address that by writing it: { tail -f file; } 2> >(grep -v truncated >&2) That is discussed in the zsh documentation at info zsh 'Process Substitution' : There is an additional problem with >(PROCESS) ; when this is attached to
an external command, the parent shell does not wait for PROCESS to
finish and hence an immediately following command cannot rely on the
results being complete. The problem and solution are the same as
described in the section MULTIOS in note Redirection:: . Hence in a
simplified version of the example above: paste <(cut -f1 FILE1) <(cut -f3 FILE2) > >(PROCESS) (note that no MULTIOS are involved), PROCESS will be run asynchronously
as far as the parent shell is concerned. The workaround is: { paste <(cut -f1 FILE1) <(cut -f3 FILE2) } > >(PROCESS) The extra processes here are spawned from the parent shell which will
wait for their completion. | {
"source": [
"https://unix.stackexchange.com/questions/169054",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60445/"
]
} |
169,079 | Variants of this question have certainly been asked several times in different places, but I am trying to remove the last M lines from a file without luck. The second most voted answer in this question recommends doing the following to get rid of the last line in a file: head -n -1 foo.txt > temp.txt However, when I try that in OSX & Zsh, I get: head: illegal line count -- -1 Why is that? How can I remove the M last lines and the first N lines of a given file? | You can remove the first 12 lines with: tail -n +13 (That means print from the 13 th line.) Some implementations of head like GNU head support: head -n -12 but that's not standard. tail -r file | tail -n +13 | tail -r would work on those systems that have tail -r (see also GNU tac ) but is sub-optimal. Where n is 1: sed '$d' file You can also do: sed '$d' file | sed '$d' to remove 2 lines, but that's not optimal. You can do: sed -ne :1 -e 'N;1,12b1' -e 'P;D' But beware that won't work with large values of n with some sed implementations. With awk : awk -v n=12 'NR>n{print line[NR%n]};{line[NR%n]=$0}' To remove m lines from the beginning and n from the end: awk -v m=6 -v n=12 'NR<=m{next};NR>n+m{print line[NR%n]};{line[NR%n]=$0}' | {
"source": [
"https://unix.stackexchange.com/questions/169079",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
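For the combined case in the question, dropping lines from both ends can be chained (a sketch; head -n -M needs GNU head, which is why the awk form above is the portable route):
tail -n +7 file | head -n -12    # drop the first 6 and the last 12 lines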
169,098 | How can I print a range of ip addresses on linux command line using the "seq" command? For eg: I need seq to print a range of ip from 10.0.0.1 to 10.0.0.23 . Seems like the period in between the octets causes the number to behave like a floating point . I am getting a "invalid floating point argument error" . I tried using the -f option . May be I am not using it correctly. But it still gave me an error. I am trying to something similar to seq 10.0.0.2 10.0.0.23 Is there another way to print IP addresses in a range in Linux other than switching over to excel ? | Use a format: $ seq -f "10.20.30.%g" 40 50
10.20.30.40
10.20.30.41
10.20.30.42
10.20.30.43
10.20.30.44
10.20.30.45
10.20.30.46
10.20.30.47
10.20.30.48
10.20.30.49
10.20.30.50 Unfortunately this is non-obvious as GNU doesn't like to write man pages. | {
"source": [
"https://unix.stackexchange.com/questions/169098",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92605/"
]
} |
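An equivalent without -f, in case printf is easier to remember (a sketch; printf reuses its format string for each argument):
printf '10.0.0.%d\n' $(seq 1 23)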
169,186 | I've just lost a small part of my audio collection, by a stupid mistake I made. :-( GLADLY I had a fairly recent backup, but it was still irritating. Apart from yours truly, the other culprit doing the mischief was mv , which will show as follows: The audio files had a certain scheme: ARTIST - Some Title YY.mp3 where YY is the 2-digit year specification. mkdir 90<invisible control character> (Up to this moment, I did not know that I had actually typed one third excess character which was invisible ...!) Instead of having all in one directory, I wanted to have all 1990s music in one directory. So I typed: find . -name '* 9?.mp3' -exec mv {} 90 \; Not so hard to get the idea what happened eh? :-> The (disastrous) result was a virgin empty directory called '90 something ' (with something being the "invisible" control character) and one single file called '90', overwritten n times. ALL FILES WERE GONE. :-(( (obviously) Wish mv would've checked in time whether the signature of the destination "file" (remember on *NIX: Everything Is A File ) starts with a d------ (e. g. drwxr-xr-x ). And, of course, whether the destination exists at all. There is a variant of the aforementioned scenario, when you simply forgot to mkdir the directory first. (but of course, you assumed that it's there...) Even our pet-hate OS starting with the capital W DOES DO THIS. You get even prompted to specify the type of destination (file? directory?) if you ask for it. Hence, I'm wondering if we *NIXers still have to write ourselves a " mv scriptlet" just to avoid these kinds of most unwanted surprises. | You can append a / to the destination if you want to move files to a
directory. In case the directory does not exist you'll receive an error: mv somefile somedir/
mv: cannot move ‘somefile’ to ‘somedir/’: Not a directory In case the directory exists, it moves the file into that directory. | {
"source": [
"https://unix.stackexchange.com/questions/169186",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22858/"
]
} |
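With GNU coreutils there is also mv -t, which takes the target directory as an option and therefore fails loudly when it does not exist; applied to the find command from the question this would be (a sketch):
find . -name '* 9?.mp3' -exec mv -t 90/ {} +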
169,492 | How can I use a variable - $BASE in my cd.
I tried the following but I get an error $ cd ~/z/repo_1_ruby_193/
23:23:57 durrantm Castle2012 /home/durrantm/z/repo_1_ruby_193
$ BASE="~/z"
23:24:03 durrantm Castle2012 /home/durrantm/z/repo_1_ruby_193
$ cd $BASE/repo_1_ruby_193
-bash: cd: ~/z/repo_1_ruby_193: No such file or directory
23:24:25 durrantm Castle2012 /home/durrantm/z/repo_1_ruby_193 | In cd ~/z/ you are using Tilde expansion to expand ~ into your home directory. In BASE="~/z" , you are not because you quoted the ~ character, so it is not expanded. That is why you get a message complaining about a nonexistent ~ directory. The solution is to not quote it, i.e. BASE=~/z in order to let the expansion occur. | {
"source": [
"https://unix.stackexchange.com/questions/169492",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
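An alternative that sidesteps the tilde question entirely is to build the path from $HOME (equivalent sketch):
BASE="$HOME/z"
cd "$BASE/repo_1_ruby_193"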
169,508 | In section 3.1.2.3 titled Double Quotes, the Bash manual says: Enclosing characters in double quotes (‘"’) preserves the literal
value of all characters within the quotes, with the exception of ‘$’,
‘`’, ‘\’, and, when history expansion is enabled, ‘!’. At the moment I am concerned with the single quote( ' ). It's special meaning, described in the preceding section, section 3.1.2.2 is: Enclosing characters in single quotes ( ' ) preserves the literal
value of each character within the quotes. A single quote may not
occur between single quotes, even when preceded by a backslash. Combining the two expositions, echo "'$a'" where variable a is not defined (hence $a = null string), should print $a on the screen, as '' , having its special meaning inside, would shield $ from the special interpretation. Instead, it prints '' . Why so? | The ' single quote character in your echo example gets its literal value (and loses its meaning) as it is enclosed in double quotes ( " ). The enclosing characters are the double quotes. What you can do is print the single quotes separately: echo "'"'$a'"'" or escape the $ : echo "'\$a'" | {
"source": [
"https://unix.stackexchange.com/questions/169508",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39843/"
]
} |
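A short demonstration of the difference (my own example):
a=hello
echo "'$a'"    # prints 'hello' : $a expands, the single quotes are literal characters
echo '"$a"'    # prints "$a"   : nothing expands inside single quotes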
169,534 | Say I have a large 800x5000 image; how would I split that into 5 separate images with dimensions 800x1000 using the command line? | Solved it using ImageMagick's convert -crop geometry +repage: convert -crop 100%x20% +repage image.png image.png | {
"source": [
"https://unix.stackexchange.com/questions/169534",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92557/"
]
} |
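The same split can also be written with an absolute tile size (a sketch; with no offset given, -crop tiles the whole image, so an 800x5000 input yields five 800x1000 pieces):
convert image.png -crop 800x1000 +repage tile_%d.png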
169,697 | I'd like to know what is the exact mechanism (implementation) used to defer mounting until after network interface is up when one uses _netdev option in /etc/fstab ? Does systemd alter this behavior? Also, what does delay_connect option to sshfs provide what _netdev does not? From mount man page : _netdev The filesystem resides on a device that requires network
access (used to prevent the system from attempting to mount
these filesystems until the network has been enabled on the
system). From sshfs man page : -o delay_connect delay connection to server | From man systemd.mount for version 231 of systemd: Mount units referring to local and network file systems are
distinguished by their file
system type specification. In some cases this is not sufficient (for example network block device based mounts, such as
iSCSI), in which case _netdev may be added to the mount option string of the unit, which forces systemd to consider the mount
unit a network mount. | {
"source": [
"https://unix.stackexchange.com/questions/169697",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5355/"
]
} |
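For illustration only, an sshfs entry using both options might look like this in /etc/fstab (a hypothetical line, not taken from the question):
user@host:/remote  /mnt/remote  fuse.sshfs  _netdev,delay_connect,noauto,user  0 0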
169,716 | Is using a while loop to process text generally considered bad practice in POSIX shells? As Stéphane Chazelas pointed out , some of the reasons for not using shell loop are conceptual , reliability , legibility , performance and security . This answer explains the reliability and legibility aspects: while IFS= read -r line <&3; do
printf '%s\n' "$line"
done 3< "$InputFile" For performance , the while loop and read are tremendously slow when reading from a file or a pipe, because the read shell built-in reads one character at a time. How about conceptual and security aspects? | Yes, we see a number of things like: while read line; do
echo $line | cut -c3
done Or worse: for line in `cat file`; do
foo=`echo $line | awk '{print $2}'`
echo whatever $foo
done (don't laugh, I've seen many of those). Generally from shell scripting beginners. Those are naive literal translations of what you would do in imperative languages like C or python, but that's not how you do things in shells, and those examples are very inefficient, completely unreliable (potentially leading to security issues), and if you ever manage to fix most of the bugs, your code becomes illegible. Conceptually In C or most other languages, building blocks are just one level above computer instructions. You tell your processor what to do and then what to do next. You take your processor by the hand and micro-manage it: you open that file, you read that many bytes, you do this, you do that with it. Shells are a higher level language. One may say it's not even a language. They're before all command line interpreters. The job is done by those commands you run and the shell is only meant to orchestrate them. One of the great things that Unix introduced was the pipe and those default stdin/stdout/stderr streams that all commands handle by default. In 50 years, we've not found better than that API to harness the power of commands and have them cooperate to a task. That's probably the main reason why people are still using shells today. You've got a cutting tool and a transliterate tool, and you can simply do: cut -c4-5 < in | tr a b > out The shell is just doing the plumbing (open the files, setup the pipes, invoke the commands) and when it's all ready, it just flows without the shell doing anything. The tools do their job concurrently, efficiently at their own pace with enough buffering so as not one blocking the other, it's just beautiful and yet so simple. Invoking a tool though has a cost (and we'll develop that on the performance point). Those tools may be written with thousands of instructions in C. A process has to be created, the tool has to be loaded, initialised, then cleaned-up, process destroyed and waited for. Invoking cut is like opening the kitchen drawer, take the knife, use it, wash it, dry it, put it back in the drawer. When you do: while read line; do
echo $line | cut -c3
done < file It's like for each line of the file, getting the read tool from the kitchen drawer (a very clumsy one because it's not been designed for that ), read a line, wash your read tool, put it back in the drawer. Then schedule a meeting for the echo and cut tool, get them from the drawer, invoke them, wash them, dry them, put them back in the drawer and so on. Some of those tools ( read and echo ) are built in most shells, but that hardly makes a difference here since echo and cut still need to be run in separate processes. It's like cutting an onion but washing your knife and put it back in the kitchen drawer between each slice. Here the obvious way is to get your cut tool from the drawer, slice your whole onion and put it back in the drawer after the whole job is done. IOW, in shells, especially to process text, you invoke as few utilities as possible and have them cooperate to the task, not run thousands of tools in sequence waiting for each one to start, run, clean up before running the next one. Further reading in Bruce's fine answer . The low-level text processing internal tools in shells (except maybe for zsh ) are limited, cumbersome, and generally not fit for general text processing. Performance As said earlier, running one command has a cost. A huge cost if that command is not builtin, but even if they are builtin, the cost is big. And shells have not been designed to run like that, they have no pretension to being performant programming languages. They are not, they're just command line interpreters. So, little optimisation has been done on this front. Also, the shells run commands in separate processes. Those building blocks don't share a common memory or state. When you do a fgets() or fputs() in C, that's a function in stdio. stdio keeps internal buffers for input and output for all the stdio functions, to avoid to do costly system calls too often. The corresponding even builtin shell utilities ( read , echo , printf ) can't do that. read is meant to read one line. If it reads past the newline character, that means the next command you run will miss it. So read has to read the input one byte at a time (some implementations have an optimisation if the input is a regular file in that they read chunks and seek back, but that only works for regular files and bash for instance only reads 128 byte chunks which is still a lot less than text utilities will do). Same on the output side, echo can't just buffer its output, it has to output it straight away because the next command you run will not share that buffer. Obviously, running commands sequentially means you have to wait for them, it's a little scheduler dance that gives control from the shell and to the tools and back. That also means (as opposed to using long running instances of tools in a pipeline) that you cannot harness several processors at the same time when available. Between that while read loop and the (supposedly) equivalent cut -c3 < file , in my quick test, there's a CPU time ratio of around 40000 in my tests (one second versus half a day). But even if you use only shell builtins: while read line; do
echo ${line:2:1}
done (here with bash ), that's still around 1:600 (one second vs 10 minutes). Reliability/legibility It's very hard to get that code right. The examples I gave are seen too often in the wild, but they have many bugs. read is a handy tool that can do many different things. It can read input from the user, split it into words to store in different variables. read line does not read a line of input, or maybe it reads a line in a very special way. It actually reads words from the input those words separated by $IFS and where backslash can be used to escape the separators or the newline character. With the default value of $IFS , on an input like: foo\/bar \
baz
biz read line will store "foo/bar baz" into $line , not " foo\/bar \" as you'd expect. To read a line, you actually need: IFS= read -r line That's not very intuitive, but that's the way it is, remember shells were not meant to be used like that. Same for echo . echo expands sequences. You can't use it for arbitrary contents like the content of a random file. You need printf here instead. And of course, there's the typical forgetting of quoting your variable which everybody falls into. So it's more: while IFS= read -r line; do
printf '%s\n' "$line" | cut -c3
done < file Now, a few more caveats: except for zsh , that doesn't work if the input contains NUL characters while at least GNU text utilities would not have the problem. if there's data after the last newline, it will be skipped inside the loop, stdin is redirected so you need to pay attention that the commands in it don't read from stdin. for the commands within the loops, we're not paying attention to whether they succeed or not. Usually, error (disk full, read errors...) conditions will be poorly handled, usually more poorly than with the correct equivalent. Many commands, including several implementations of printf also don't reflect their failure to write to stdout in their exit status. If we want to address some of those issues above, that becomes: while IFS= read -r line <&3; do
{
printf '%s\n' "$line" | cut -c3 || exit
} 3<&-
done 3< file
if [ -n "$line" ]; then
printf '%s' "$line" | cut -c3 || exit
fi That's becoming less and less legible. There are a number of other issues with passing data to commands via the arguments or retrieving their output in variables: the limitation on the size of arguments (some text utility implementations have a limit there as well, though the effect of those being reached are generally less problematic) the NUL character (also a problem with text utilities). arguments taken as options when they start with - (or + sometimes) various quirks of various commands typically used in those loops like expr , test ... the (limited) text manipulation operators of various shells that handle multi-byte characters in inconsistent ways. ... Security considerations When you start working with shell variables and arguments to commands , you're entering a mine-field. If you forget to quote your variables , forget the end of option marker , work in locales with multi-byte characters (the norm these days), you're certain to introduce bugs which sooner or later will become vulnerabilities. When you may want to use loops. TBD | {
"source": [
"https://unix.stackexchange.com/questions/169716",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38906/"
]
} |
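To make the performance point concrete, the two styles from the top of this answer can be timed side by side (a sketch; big.txt is a placeholder for any large file):
time sh -c 'while IFS= read -r line; do printf "%s\n" "$line" | cut -c3; done < big.txt' > /dev/null
time cut -c3 < big.txt > /dev/null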
169,739 | I wrote the following script to test the speed of Python's sort functionality: from sys import stdin, stdout
lines = list(stdin)
lines.sort()
stdout.writelines(lines) I then compared this to the coreutils sort command on a file containing 10 million lines: $ time python sort.py <numbers.txt >s1.txt
real 0m16.707s
user 0m16.288s
sys 0m0.420s
$ time sort <numbers.txt >s2.txt
real 0m45.141s
user 2m28.304s
sys 0m0.380s The built-in command used all four CPUs (Python only used one) but took about 3 times as long to run! What gives? I am using Ubuntu 12.04.5 (32-bit), Python 2.7.3, and sort 8.13 | Izkata's comment revealed the answer: locale-specific comparisons. The sort command uses the locale indicated by the environment, whereas Python defaults to a byte order comparison. Comparing UTF-8 strings is harder than comparing byte strings. $ time (LC_ALL=C sort <numbers.txt >s2.txt)
real 0m5.485s
user 0m14.028s
sys 0m0.404s How about that. | {
"source": [
"https://unix.stackexchange.com/questions/169739",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45158/"
]
} |
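To reproduce the comparison, the locale can be pinned explicitly on both sides (a sketch; en_US.UTF-8 is just an example locale name):
LC_ALL=C sort <numbers.txt >bytewise.txt             # byte-order comparison, like Python's default
LC_ALL=en_US.UTF-8 sort <numbers.txt >collated.txt   # locale-aware collation, much slower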
169,787 | I recently installed CentOS 7 on a machine that has been running Windows 7. I did a dual boot installation and installed CentOS in a partition. But when I boot up my machine, it only gives me two CentOS options. It does not give me the option to choose to boot Windows 7. How can I add windows 7 back to the boot options? NOTE: I'm reading this post titled: CenTOS 7 dual boot with windows , but my /grub folder only seems to have a splash.xpm.gz file in it with no other files. Also, I'm new to Linux and need something more step by step. EDIT #1 I'm getting the following results on the command line: [root@localhost home]# sudo update-grub
sudo: update-grub: command not found
[root@localhost home]# sudo grub-mkconfig
sudo: grub-mkconfig: command not found Also, I'm currently researching the possibility that these commands might not apply to CentOS. For example in this U&L Q&A titled: " Equivalent of update-grub for RHEL/Fedora/CentOS systems? ", as well as this Q&A titled: " Installed Centos 7 after Windows and can't boot into CentOS " seem to imply that I should reinstall grub2. But how do I do that? I'm just now learning Linux. EDIT #2 The following command does work. Here is the output: [root@localhost home]# sudo grub2-mkconfig 2>/dev/null
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub2-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#
### BEGIN /etc/grub.d/00_header ###
set pager=1
if [ -s $prefix/grubenv ]; then
load_env
fi
if [ "${next_entry}" ] ; then
set default="${next_entry}"
set next_entry=
save_env next_entry
set boot_once=true
else
set default="${saved_entry}"
fi
if [ x"${feature_menuentry_id}" = xy ]; then
menuentry_id_option="--id"
else
menuentry_id_option=""
fi
export menuentry_id_option
if [ "${prev_saved_entry}" ]; then
set saved_entry="${prev_saved_entry}"
save_env saved_entry
set prev_saved_entry=
save_env prev_saved_entry
set boot_once=true
fi
function savedefault {
if [ -z "${boot_once}" ]; then
saved_entry="${chosen}"
save_env saved_entry
fi
}
function load_video {
if [ x$feature_all_video_module = xy ]; then
insmod all_video
else
insmod efi_gop
insmod efi_uga
insmod ieee1275_fb
insmod vbe
insmod vga
insmod video_bochs
insmod video_cirrus
fi
}
terminal_output console
if [ x$feature_timeout_style = xy ] ; then
set timeout_style=menu
set timeout=5
# Fallback normal timeout code in case the timeout_style feature is
# unavailable.
else
set timeout=5
fi
### END /etc/grub.d/00_header ###
### BEGIN /etc/grub.d/10_linux ###
menuentry 'CentOS Linux, with Linux 3.10.0-123.el7.x86_64' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-123.el7.x86_64-advanced-77a053a9-a71b-43ce-a8d7-1a3418f5b0d9' {
load_video
set gfxpayload=keep
insmod gzio
insmod part_msdos
insmod xfs
set root='hd0,msdos5'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos5 --hint- efi=hd0,msdos5 --hint-baremetal=ahci0,msdos5 --hint='hd0,msdos5' 589631f1-d5aa-4374-a069-7aae5ca289bc
else
search --no-floppy --fs-uuid --set=root 589631f1-d5aa-4374-a069-7aae5ca289bc
fi
linux16 /vmlinuz-3.10.0-123.el7.x86_64 root=UUID=77a053a9-a71b-43ce-a8d7-1a3418f5b0d9 ro rd.luks.uuid=luks-a45243be-2514-4a81-b7a1-7e4eff712d2d vconsole.font=latarcyrheb-sun16 crashkernel=auto vconsole.keymap=us rd.luks.uuid=luks-5349515e-a082-4ff2-b035-54da7b8d4990 rhgb quiet
initrd16 /initramfs-3.10.0-123.el7.x86_64.img
}
menuentry 'CentOS Linux, with Linux 0-rescue-369d0c1b630b48cc8ef010ceb99bc668' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-0-rescue-369d0c1b630b48cc8ef010ceb99bc668-advanced-77a053a9-a71b-43ce-a8d7-1a3418f5b0d9' {
load_video
insmod gzio
insmod part_msdos
insmod xfs
set root='hd0,msdos5'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos5 --hint-efi=hd0,msdos5 --hint-baremetal=ahci0,msdos5 --hint='hd0,msdos5' 589631f1-d5aa-4374-a069-7aae5ca289bc
else
search --no-floppy --fs-uuid --set=root 589631f1-d5aa-4374-a069-7aae5ca289bc
fi
linux16 /vmlinuz-0-rescue-369d0c1b630b48cc8ef010ceb99bc668 root=UUID=77a053a9-a71b-43ce-a8d7-1a3418f5b0d9 ro rd.luks.uuid=luks-a45243be-2514-4a81-b7a1-7e4eff712d2d vconsole.font=latarcyrheb-sun16 crashkernel=auto vconsole.keymap=us rd.luks.uuid=luks-5349515e-a082-4ff2-b035-54da7b8d4990 rhgb quiet
initrd16 /initramfs-0-rescue-369d0c1b630b48cc8ef010ceb99bc668.img
}
### END /etc/grub.d/10_linux ###
### BEGIN /etc/grub.d/20_linux_xen ###
### END /etc/grub.d/20_linux_xen ###
### BEGIN /etc/grub.d/20_ppc_terminfo ###
### END /etc/grub.d/20_ppc_terminfo ###
### BEGIN /etc/grub.d/30_os-prober ###
menuentry 'Windows 7 (loader) (on /dev/sda2)' --class windows --class os $menuentry_id_option 'osprober-chain-386ED4266ED3DB28' {
insmod part_msdos
insmod ntfs
set root='hd0,msdos2'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos2 --hint-efi=hd0,msdos2 --hint-baremetal=ahci0,msdos2 --hint='hd0,msdos2' 386ED4266ED3DB28
else
search --no-floppy --fs-uuid --set=root 386ED4266ED3DB28
fi
chainloader +1
}
### END /etc/grub.d/30_os-prober ###
### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###
### BEGIN /etc/grub.d/41_custom ###
if [ -f ${config_directory}/custom.cfg ]; then
source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then
source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ### | This is usually fixed by running the scripts that detect the installed operating systems and generate the boot loader's ( grub2 in this case) configuration file. On CentOS 7, that should be grub2-mkconfig . Check that Windows is detected. Run grub2-mkconfig but discard its output: $ sudo grub2-mkconfig > /dev/null
Generating grub configuration file ...
Found background image: /usr/share/images/desktop-base/desktop-grub.png
Found linux image: /boot/vmlinuz-3.16.0-4-amd64
Found initrd image: /boot/initrd.img-3.16.0-4-amd64
Found memtest86+ image: /boot/memtest86+.bin
Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
Found Windows 7 (loader) on /dev/sda2 The output will look similar (but not identical) to what is shown above. Make sure that Windows is listed. If Windows was listed in the previous step, go ahead and save the new configuration file. Make a backup first, just in case. sudo cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg.old
sudo grub2-mkconfig -o /boot/grub2/grub.cfg If all went well, you should now be able to reboot into Windows. | {
"source": [
"https://unix.stackexchange.com/questions/169787",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92670/"
]
} |
169,798 | I know I can use this option to find file between particular modified times. But I'm curious about what does this mean? I used man find | grep newermt trying to find something. But I got no direct content. It seems -newer file and mtime stuff may have relation with it. But I'm not sure.. So, what does -newermt actually mean? | find(1) : -newerXY reference
Compares the timestamp of the current file with reference. The
reference argument is normally the name of a file (and one of
its timestamps is used for the comparison) but it may also be a
string describing an absolute time. X and Y are placeholders
for other letters, and these letters select which time belonging
to how reference is used for the comparison.
a The access time of the file reference
B The birth time of the file reference
c The inode status change time of reference
m The modification time of the file reference
t reference is interpreted directly as a time | {
"source": [
"https://unix.stackexchange.com/questions/169798",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74226/"
]
} |
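A typical use is bracketing a modification-time range (a sketch; the dates and pattern are placeholders):
find . -newermt '2014-11-01' ! -newermt '2014-12-01' -name '*.log'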
169,886 | I'm using less to parse HTTP access logs. I want to view everything neatly on single lines, so I'm using -S . The problem I have is that the first third of my terminal window is taken up with metadata that I don't care about. When I use my arrow keys to scroll right, I find that it scrolls past the start of the information that I do care about! I could just delete the start of each line, but I don't know if I may need that data in the future, and I'd rather not have to maintain separate files or run a script each time I want to view some logs. Example This line: access.log00002:10.0.0.0 - USER_X [07/Nov/2013:16:50:50 +0000] "GET /some/long/URL" Would scroll to: ng/URL" Question Is there a way I can scroll in smaller increments, either by character or by word? | The only horizontal scrolling commands scroll by half a screenful, but you can pass a numeric argument to specify the number of characters, e.g. typing 4 Right scrolls to the right by 4 characters. Less doesn't really have a notion of “current line” and doesn't split a line into words, so there's no way to scroll by a word at a time. You can define a command that scrolls by a fixed number of characters. For example, if you want Shift + Left and Shift + Right to scroll by 4 characters at a time: Determine the control sequences that your terminal sends for these key combinations. Terminals send a sequence of bytes that begin with the escape (which can be written \e , \033 , ^[ in various contexts) character for function keys and keychords. Press Ctrl + V Shift + Left at a shell prompt: this inserts the escape character literally (you'll see ^[ on the screen) instead of it being processed by your shell, and inserts the rest of the escape sequence. A common setup has Shift + Left and Shift + Right send \eO2D and \eO2C respectively. Create a file called ~/.lesskey and add the following lines (adjust if your terminal sends different escape sequences): #command
\eO2D noaction 4\e(
\eO2C noaction 4\e)
\eOD noaction 40\e(
\eOC noaction 40\e) In addition to defining bindings for Shift + arrow , you may want to define bindings for arrow alone, because motion commands reuse the numeric values from the last call. Adjust 40 to your customary terminal width. There doesn't appear to be a way to say “now use the terminal width again, whatever it is at this moment”. A downside of these bindings is that you lose the ability to pass a numeric argument to Left and Right (you can still pass a numeric argument to Esc ( and Esc ) ). Then run lesskey , which converts the human-readable ~/.lesskey into a binary file ~/.less that less reads when it starts. | {
"source": [
"https://unix.stackexchange.com/questions/169886",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63553/"
]
} |
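Depending on your version of less, the -# (--shift) option may give the same effect without a lesskey file (hedged: check man less on your system):
less -S -#4 access.log    # arrow keys then scroll 4 columns at a time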
169,898 | I recently came across this in a shell script. if ! kill -0 $(cat /path/to/file.pid); then
... do something ...
fi What does kill -0 ... do? | This one is a little hard to glean but if you look in the following 2 man pages you'll see the following notes: kill(1) $ man 1 kill
...
If sig is 0, then no signal is sent, but error checking is still performed.
... kill(2) $ man 2 kill
...
If sig is 0, then no signal is sent, but error checking is still performed;
this can be used to check for the existence of a process ID or process
group ID.
... So signal 0 will not actually in fact send anything to your process's PID, but will check whether you have permissions to do so. Where might this be useful? One obvious place would be if you were trying to determine if you had permissions to send signals to a running process via kill . You could check prior to sending the actual kill signal that you want, by wrapping a check to make sure that kill -0 <PID> was first allowed. Example Say a process was being run by root as follows: $ sudo sleep 2500 &
[1] 15693 Now in another window if we run this command we can confirm that that PID is running. $ pgrep sleep
15693 Now let's try this command to see if we have access to send that PID signals via kill . $ if ! kill -0 $(pgrep sleep); then echo "You're weak!"; fi
bash: kill: (15693) - Operation not permitted
You're weak! So it works, but the output is leaking a message from the kill command that we don't have permissions. Not a big deal, simply catch STDERR and send it to /dev/null . $ if ! kill -0 $(pgrep sleep) 2>/dev/null; then echo "You're weak!"; fi
You're weak! Complete example So then we could do something like this, killer.bash : #!/bin/bash
PID=$(pgrep sleep)
if ! kill -0 $PID 2>/dev/null; then
echo "you don't have permissions to kill PID:$PID"
exit 1
fi
kill -9 $PID Now when I run the above as a non-root user: $ ~/killer.bash
you don't have permissions to kill PID:15693
$ echo $?
1 However when it's run as root: $ sudo ~/killer.bash
$ echo $?
0
$ pgrep sleep
$ | {
"source": [
"https://unix.stackexchange.com/questions/169898",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7453/"
]
} |
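A compact variant of the same check for the pid-file case in the question (a sketch; the path is the question's placeholder):
if kill -0 "$(cat /path/to/file.pid)" 2>/dev/null; then
  echo "process is alive and we are allowed to signal it"
fi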
169,909 | I have following line in /etc/fstab: UUID=E0FD-F7F5 /mnt/zeno vfat noauto,utf8,user,rw,uid=1000,gid=1000,fmask=0113,dmask=0002 0 0 The partition is freshly created by gnome-disks under the respective user, and spans the whole card. Now: Running mount /mnt/zeno as user (1000) succeeds, but right after that I find out that it's actually not mounted: following umount /mnt/zeno fails with umount: /mnt/zeno: not mounted . When watching journalctl -f , I can see following messages appear when mounting: [...] kernel: SELinux: initialized (dev mmcblk0p1, type vfat), uses genfs_contexts
[...] systemd[1]: Unit mnt-zeno.mount is bound to inactive service. Stopping, too.
[...] systemd[1]: Unmounting /mnt/zeno...
[...] systemd[1]: Unmounted /mnt/zeno. So it seems that systemd indeed keeps unmounting the drive, but I can't find out why. I don't remember creating any custom ".mount" files. I tried to find something in /etc/systemd and in my home folder but did not find anything. So what is this "mnt-zeno.mount" file and how can I review it? And most importantly, how can I mount the drive? | mnt-zeno.mount was created by systemd-fstab-generator . According to Jonathan de Boyne Pollard's explanation on debian-user mailing list : [systemd-fstab-generator is] a program that reads /etc/fstab at boot time
and generates units that translate fstab records to the systemd way of
doing things [.....] The systemd way of doing things is mount and device units, per the
systemd.mount(5) and systemd.device(5) manual pages. In the raw systemd
way of doing things, there's a device unit named "dev-sde1.device" which
is a base requirement for a mount unit named "media-lumix\x2dphotos.mount". After altering fstab one should either run systemctl daemon-reload (this makes systemd reparse /etc/fstab and pick up the changes) or reboot. | {
"source": [
"https://unix.stackexchange.com/questions/169909",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9365/"
]
} |
170,013 | Apparently, running: perl -n -e 'some perl code' * Or find . ... -exec perl -n -e '...' {} + (same with -p instead of -n ) Or perl -e 'some code using <>' * often found in one-liners posted on this site, has security implications. What's the deal? How to avoid it? | What's the problem First, like for many utilities, you'll have an issue with file names starting with - . While in: sh -c 'inline sh script here' other args The other args are passed to the inline sh script ; with the perl equivalent, perl -e 'inline perl script here' other args The other args are scanned for more options to perl first, not to the inline script. So, for instance, if there's a file called -eBEGIN{do something evil} in the current directory, perl -ne 'inline perl script here;' * (with or without -n ) will do something evil. Like for other utilities, the work around for that is to use the end-of-options marker ( -- ): perl -ne 'inline perl script here;' -- * But even then, it's still dangerous and that's down to the <> operator used by -n / -p . The issue is explained in perldoc perlop documentation. That special operator is used to read one line (one record, records being lines by default) of input, where that input is coming from each of the arguments in turn passed in @ARGV . In: perl -pe '' a b -p implies a while (<>) loop around the code (here empty). <> will first open a , read records one line at a time until the file is exhausted and then open b ... The problem is that, to open the file, it uses the first, unsafe form of open : open ARGV, "the file as provided" With that form, if the argument is "> afile" , it opens afile in writing mode, "cmd|" , it runs cmd and reads it's output. "|cmd" , you've a stream open for writing to the input of cmd . So for instance: perl -pe '' 'uname|' Doesn't output the content of the file called uname| (a perfectly valid file name btw), but the output of the uname command. If you're running: perl -ne 'something' -- * And someone has created a file called rm -rf "$HOME"| (again a perfectly valid file name) in the current directory (for instance because that directory was once writeable by others, or you've extracted a dodgy archive, or you've run some dodgy command, or another vulnerability in some other software was exploited), then you're in big trouble. Areas where it's important to be aware of that problem is tools processing files automatically in public areas like /tmp (or tools that may be called by such tools). Files called > foo , foo| , |foo are a problem. But to a lesser extent < foo and foo with leading or trailing ASCII spacing characters (including space, tab, newline, cr...) as well as that means those files won't be processed or the wrong one will be. Also beware that some characters in some multi-byte character sets (like ǖ in BIG5-HKSCS) end in byte 0x7c, the encoding of | . $ printf ǖ | iconv -t BIG5-HKSCS | od -tx1 -tc
0000000 88 7c
210 |
0000002 So in locales using that charset, perl -pe '' ./nǖ Would try to run the ./n\x88 command as perl would not try to interpret that file name in the user's locale! How to fix/work around AFAIK, there is nothing you can do to change that unsafe default behaviour of perl once and for all system-wide. First, the problem occurs only with characters at the start and end of the file name. So, while perl -ne '' * or perl -ne '' *.txt are a problem, perl -ne 'some code' ./*.txt is not because all the arguments now start with ./ and end in .txt (so not - , < , > , | , space...). More generally, it's a good idea to prefix globs with ./ . That also avoids problems with files called - or starting with - with many other utilities (and here, that means you don't need the end-of-options ( -- ) marker any more). Using -T to turn on taint mode helps to some extent. It will abort the command if such malicious file is encountered (only for the > and | cases, not < or whitespace though). That's useful when using such commands interactively as that alerts you that there's something dodgy going on. That may not be desirable when doing some automatic processing though, as that means someone can make that processing fail just by creating a file. If you do want to process every file, regardless of their name, you can use the ARGV::readonly perl module on CPAN (unfortunately usually not installed by default). That's a very short module that does: sub import{
# Tom Christiansen in Message-ID: <24692.1217339882@chthon>
# reccomends essentially the following:
for (@ARGV){
s/^(\s+)/.\/$1/; # leading whitespace preserved
s/^/< /; # force open for input
$_.=qq/\0/; # trailing whitespace preserved & pipes forbidden
};
}; Basically, it sanitises @ARGV by turning " foo|" for instance into "< ./ foo|\0" . You can do the same in a BEGIN statement in your perl -n/-p command: perl -pe 'BEGIN{$_.="\0" for @ARGV} your code here' ./* Here we simplify it on the assumption that ./ is being used. A side effect of that (and ARGV::readonly ) though is that $ARGV in your code here shows that trailing NUL character. Update 2015-06-03 perl v5.21.5 and above have a new <<>> operator that behaves like <> except that it will not do that special processing. Arguments will only be considered as file names. So with those versions, you can now write: perl -e 'while(<<>>){ ...;}' -- * (don't forget the -- or use ./* though) without fear of it overwriting files or running unexpected commands. -n / -p still use the dangerous <> form though. And beware symlinks are still being followed, so that does not necessarily mean it's safe to use in untrusted directories. | {
"source": [
"https://unix.stackexchange.com/questions/170013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22565/"
]
} |
170,043 | I have Apache logfile, access.log , how to count number of line occurrence in that file? for example the result of cut -f 7 -d ' ' | cut -d '?' -f 1 | tr '[:upper:]' '[:lower:]' is a.php
b.php
a.php
c.php
d.php
b.php
a.php the result that I want is: 3 a.php
2 b.php
1 d.php # order doesn't matter
1 c.php | | sort | uniq -c As stated in the comments. Piping the output into sort organises the output into alphabetical/numerical order. This is a requirement because uniq only matches on consecutive repeated lines, i.e. a
b
a If you use uniq on this text file, it will return the following: a
b
a This is because the two a s are separated by the b - they are not consecutive lines. However, if you first sort the data into alphabetical order, like a
a
b Then uniq will remove the repeating lines. The -c option of uniq counts the number of duplicates and provides output in the form: 2 a
1 b References: sort(1) uniq(1) | {
"source": [
"https://unix.stackexchange.com/questions/170043",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27996/"
]
} |
170,063 | After about an hour of Googling this, I can't believe nobody has actually asked this question before... So I've got a script running on TTY1. How do I make that script launch some arbitrary program on TTY2? I found tty , which tells you which TTY you're currently on. I found writevt , which writes a single line of text onto a different TTY. I found chvt , which changes which TTY is currently displayed. I don't want to display TTY2. I just want the main script to continue executing normally, but if I manually switch to TTY2 I can interact with the second program. | setsid sh -c 'exec command <> /dev/tty2 >&0 2>&1' As long as nothing else is using the other TTY ( /dev/tty2 in this example), this should work. This includes a getty process that may be waiting for someone to login; having more than one process reading its input from a TTY will lead to unexpected results. setsid takes care of starting the command in a new session. Note that command will have to take care of setting the stty settings correctly, e.g. turn on "cooked mode" and onlcr so that outputting a newline will add a carriage return, etc. | {
"source": [
"https://unix.stackexchange.com/questions/170063",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26776/"
]
} |
170,204 | How to find the max value from the column 1 and echo the respective path location from a file which contains n number of records. $ cat version.log
112030 /opt/oracle/app/oracle/product/11.2.0
121010 /opt/oracle/app/oracle/product/12.1.0 Expected output: /opt/oracle/app/oracle/product/12.1.0 | This should work: awk -v max=0 '{if($1>max){want=$2; max=$1}}END{print want} ' version.log The -v max=0 sets the variable max to 0 , then, for each line, the first field is compared to the current value of max . If it is greater, max is set to the value of the 1st field and want is set to the second field (the path) of that line. When the program has processed the entire file, the current value of want is printed. Edit I did not test the awk solution earlier and it was really my bad to have provided it. Anyway, the edited version of the answer should work (Thanks to terdon for fixing it) and I tested the below as well. sort -nrk1,1 filename | head -1 | cut -d ' ' -f3 I am sorting on the first field, where: -n specifies numerical sort. -r reverses the sort result. -k1,1 specifies the first field as the sort key. Now, after the sorting, I pipe the output and just take the first line, which gives me the numerically highest value of column 1 in the result. Finally, I pipe it to cut with the delimiter specified as space, printing field 3, which is the intended output. Testing cat filename
112030 /opt/oracle/app/oracle/product/11.2.0
121010 /opt/oracle/app/oracle/product/12.1.0
2312 /hello/some/other/path
3423232 /this/is/really/big/number
342 /ok/not/the/maximum/number
9999899 /Max/number
9767 /average/number Now, after I run the above command for the input as above, I get the output as, /Max/number | {
"source": [
"https://unix.stackexchange.com/questions/170204",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92238/"
]
} |
170,275 | How can I list the number of lines in the files in /group/book/four/word , sorted by the number of lines they contain? ls -l command lists them down but does not sort them | You should use a command like this: find /group/book/four/word/ -type f -exec wc -l {} + | sort -rn find : search for files on the path you want. If you don't want it recursive, and your find implementation supports it, you should add -maxdepth 1 just before the -exec option. exec : tells the command to execute wc -l on every file. sort -rn : sort the results numerically in reverse order. From greater to lower. (that assumes file names don't contain newline characters). | {
"source": [
"https://unix.stackexchange.com/questions/170275",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93026/"
]
} |
170,346 | In a CentOS 7 server, I want to get the list of selectable units for which journalctl can produce logs. How can I change the following code to accomplish this? journalctl --output=json-pretty | grep -f UNIT | sort -u In the CentOS 7 terminal, the above code produces grep: UNIT: No such file or directory . EDIT: The following java program is terminating without printing any output from the desired grep. How can I change things so that the java program works in addition to the terminal version? String s;
Process p;
String[] cmd = {"journalctl --output=json-pretty ","grep UNIT ","sort -u"};
try {
p = Runtime.getRuntime().exec(cmd);
BufferedReader br = new BufferedReader(new InputStreamReader(p.getInputStream()));
while ((s = br.readLine()) != null)
System.out.println("line: " + s);
p.waitFor();
System.out.println ("exit: " + p.exitValue()+", "+p.getErrorStream());
BufferedReader br2 = new BufferedReader(new InputStreamReader(p.getErrorStream()));
while ((s = br2.readLine()) != null)
System.out.println("error line: " + s);
p.waitFor();
p.destroy();
} catch (Exception e) {} | journalctl can display logs for all units - whether these units write to the log is a different matter. To list all available units and therefore all available for journalctl to use: systemctl list-unit-files --all As to your java code, in order to make pipes work with Runtime.exec() you could either put the command in a script and invoke the script or use a string array, something like: String[] cmd = {"sh", "-c", "command1 | command2 | command3"};
p = Runtime.getRuntime().exec(cmd); or: Runtime.getRuntime().exec(new String[]{"sh", "-c", "command1 | command2 | command3"}); | {
"source": [
"https://unix.stackexchange.com/questions/170346",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92670/"
]
} |
170,398 | I know that flv and mp4 files contain aac audio, while avi files usually contain mp3 audio streams. What command (avconv, ffmpeg) would extract the audio without transcoding it? | ffmpeg -i video.mp4 -vn -acodec copy audio.aac Here’s a short explanation on what every parameter does: -i option specifies the input file. -vn option is used to skip the video part. -acodec copy will copy the audio stream keeping the original codec. | {
"source": [
"https://unix.stackexchange.com/questions/170398",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
170,399 | I am installing an OpenStack controller node on one machine, and another machine is running nova-compute only. When I run the controller node, cinder
got error. I clearly meantion it which service gor error, so please help me. cat /var/log/cinder/cinder-backup.log 1) ERROR cinder.service [-] Recovered model server connection! 2) 2014-11-28 12:43:35.415 4628 ERROR cinder.openstack.common.rpc.common AMQP server on 10.192.1.126:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 1 seconds. 3) ERROR cinder.brick.local_dev.lvm Unable to locate Volume Group cinder-volumes 4) ERROR cinder.backup.manager Error encountered during initialization of driver: LVMISCSIDriver 5) ERROR cinder.backup.manager Bad or unexpected response from the storage volume backend API: Volume Group cinder-volumes does not exist scheduler: 1) ERROR cinder.service [-] Recovered model server connection! 2) ERROR cinder.volume.flows.create_volume Failed to schedule_create_volume: No valid host was found. | ffmpeg -i video.mp4 -vn -acodec copy audio.aac Here’s a short explanation on what every parameter does: -i option specifies the input file. -vn option is used to skip the video part. -acodec copy will copy the audio stream keeping the original codec. | {
"source": [
"https://unix.stackexchange.com/questions/170399",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78285/"
]
} |
170,444 | I have a file in ~/file.txt . I have created a hard link by : ln ~/file.txt ~/test/hardfile.txt and a symlink file : ln -s ~/file.txt ~/test/symfile.txt Now, how can I find out which file is a hard link? How can I find out which file a hard link follows? We can spot a symlink by the -> in ls -l output, but what about a hard link? | -rw-r--r-- 2 kamix users 5 Nov 17:10 hardfile.txt
^ That's the number of hard links the file has. A "hard link" is actually between two directory entries; they're really the same file. You can tell by looking at the output from stat : stat hardlink.file | grep -i inode
Device: 805h/2053d Inode: 1835019 Links: 2 Notice again the number of links is 2, indicating there's another listing for this file somewhere. The reason you know this is the same file as another is they have the same inode number; no other file will have that. Unfortunately, this is the only way to find them (by inode number). There are some ideas about how best to find a file by inode (e.g., with find ) in this Q&A . | {
"source": [
"https://unix.stackexchange.com/questions/170444",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52995/"
]
} |
170,775 | How can I check if a UTF-8 text file has a BOM from command line? file command shows me: UTF-8 Unicode text But, I don't know if it means there is no BOM in the file. I'm using Ubuntu 12.04. | file will tell you if there is a BOM . You can simply test it with: printf '\ufeff...\n' | file -
/dev/stdin: UTF-8 Unicode (with BOM) text Some shells such as ash or dash have a printf builtin that does not support \u , in which case you need to use printf from the GNU coreutils, e.g. /usr/bin/printf . Note: according to the file changelog, this feature existed already in 2007. So, this should work on any current machine. | {
"source": [
"https://unix.stackexchange.com/questions/170775",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44001/"
]
} |
170,823 | I have created a custom application entry in the applications menu for Eclipse as follows in a file /usr/share/applications/eclipse.desktop as follows [Desktop Entry]
Version=1.0
Name=Eclipse
Exec=/usr/local/eclipse/eclipse
Terminal=false
Type=Application
StartupNotify=true
Categories=X-Red-Hat-Extra;Application;Development;
X-Desktop-File-Install-Version=0.15
Icon=/usr/local/eclipse/icon.xpm This now appears fine in the Programming section of the Applications menu. How can I add it to the Favorites section? | The favourite in Gnome Classic view follows the favourites in the Gnome 3 shell. Click on Activities in the top-left corner or use your keyboard's Windows button if it has one, to bring up the activities overview. Right-click on one of those activities and Add to Favourites . It should now be visible in the Gnome Classic Favourite menu. | {
"source": [
"https://unix.stackexchange.com/questions/170823",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93387/"
]
} |
170,961 | I have a list of .ts files: out1.ts ... out749.ts out8159.ts out8818.ts How can I get the total duration (running time) of all these files? | I have no .ts here (only .mp4 ) but this should work for all video files: Use ffprobe (part of ffmpeg ) to get the time in seconds, e.g: ffprobe -v quiet -of csv=p=0 -show_entries format=duration Inception.mp4 275.690000 So for all video files you could use a for loop and awk to calculate the total time in seconds: for f in ./*.mp4
do ffprobe -v quiet -of csv=p=0 -show_entries format=duration "$f"
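# each iteration prints one duration in seconds; the awk below sums them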
done | awk '{sum += $1}; END{print sum}' 2735.38 To further process the output to convert the total to DD:HH:MM:SS , see the answers here . Another way is via exiftool which has an internal ConvertDuration : exiftool -n -q -p '${Duration;our $sum;$_=ConvertDuration($sum+=$_)
}' ./*.mp4 | tail -n1 0:45:35 | {
"source": [
"https://unix.stackexchange.com/questions/170961",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81823/"
]
} |
171,025 | I am currently doing preparation for my GCSE computing controlled assessment on Linux. I type ls > list and ls >> list into the command line, but it does not do anything. I have googled it but I can't find what it exactly does. What does: ls > list and ls >> list do? | Both redirect stdout to file. ls > list If the file exists it'll be replaced. ls >> list If the file does not exist it'll be created. If it exists, it'll be appended to the end of the file. Find out more: IO Redirection | {
"source": [
"https://unix.stackexchange.com/questions/171025",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93519/"
]
} |
171,091 | I have large 3-column files (~10,000 lines) and I would like to remove lines when the contents of the third column of that line appear in the third column of another line. The files' sizes make sort a bit cumbersome, and I can't use something like the below code because the entire lines aren't identical; just the contents of column 3. awk '!seen[$0]++' filename | Just change your awk command to the column on which you want to remove duplicated lines (in your case the third column): awk '!seen[$3]++' filename This command tells awk which lines to print. The variable $3 holds the entire contents of column 3 and the square brackets are array access. So, for each third-column value in filename , the node of the array named seen is incremented and the line is printed if the content of that node (column 3) was not ( ! ) previously set. By doing this, the first line of each group (unique by the third column) is always the one kept. The above will work if the columns in your input file are delimited with spaces/tabs; if the delimiter is something else, you will need to tell awk about it with its -F option. So, for example, if the columns are delimited with a comma ( , ) and you want to remove lines based on the third column, use the command as follows: awk -F',' '!seen[$3]++' filename | {
"source": [
"https://unix.stackexchange.com/questions/171091",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93551/"
]
} |
171,314 | I have a set of log files that I need to review and I would like to search specific strings on the same files at once. Is this possible? Currently I am using grep -E 'fatal|error|critical|failure|warning|' /path_to_file How do I use this and search for the strings of multiple files at once? If this is something that needs to be scripted, can someone provide a simple script to do this? | grep -E 'fatal|error|critical|failure|warning' *.log | {
"source": [
"https://unix.stackexchange.com/questions/171314",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81926/"
]
} |
171,341 | Maybe I'm overlooking something but is there a way to get your current bash history for the current session you are using like if i run ssh host
$ pwd
$ ls
$ cd /tmp I just want to see those 3 commands and nothing else | A slightly roundabout way: history -a ~/current_history This will save the current session's unsaved bash history to ~/current_history , which you can then view. | {
"source": [
"https://unix.stackexchange.com/questions/171341",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9400/"
]
} |
171,346 | If you've been following unix.stackexchange.com for a while, you
should hopefully know by now that leaving a variable
unquoted in list context (as in echo $var ) in Bourne/POSIX
shells (zsh being the exception) has a very special meaning and
shouldn't be done unless you have a very good reason to. It's discussed at length in a number of Q&A here (Examples: Why does my shell script choke on whitespace or other special characters? , When is double-quoting necessary? , Expansion of a shell variable and effect of glob and split on it , Quoted vs unquoted string expansion ) That has been the case since the initial release of the Bourne
shell in the late 70s and hasn't been changed by the Korn shell
(one of David Korn's biggest
regrets (question #7) ) or bash which mostly
copied the Korn shell, and that's how that has been specified by POSIX/Unix. Now, we're still seeing a number of answers here and even
occasionally publicly released shell code where
variables are not quoted. You'd have thought people would have
learnt by now. In my experience, there are mainly 3 types of people who omit to
quote their variables: beginners. Those can be excused as admittedly it's a
completely unintuitive syntax. And it's our role on this site
to educate them. forgetful people. people who are not convinced even after repeated hammering,
who think that surely the Bourne shell author did not
intend us to quote all our variables . Maybe we can convince them if we expose the risk associated with
this kind of behaviours. What's the worst thing that can possibly happen if you
forget to quote your variables. Is it really that bad? What kind of vulnerability are we talking of here? In what contexts can it be a problem? | Preamble First, I'd say it's not the right way to address the problem.
It's a bit like saying " you should not murder people because
otherwise you'll go to jail ". Similarly, you don't quote your variable because otherwise
you're introducing security vulnerabilities. You quote your
variables because it is wrong not to (but if the fear of the jail can help, why not). A little summary for those who've just jumped on the train. In most shells, leaving a variable expansion unquoted (though
that (and the rest of this answer) also applies to command
substitution ( `...` or $(...) ) and arithmetic expansion ( $((...)) or $[...] )) has a very special
meaning. The best way to describe it is that it is like
invoking some sort of implicit split+glob operator¹. cmd $var in another language would be written something like: cmd(glob(split($var))) $var is first split into a list of words according to complex
rules involving the $IFS special parameter (the split part)
and then each word resulting of that splitting is considered as
a pattern which is expanded to a list of files that match it
(the glob part). As an example, if $var contains *.txt,/var/*.xml and $IFS contains , , cmd would be called with a number of arguments,
the first one being cmd and the next ones being the txt files in the current directory and the xml files in /var . If you wanted to call cmd with just the two literal arguments cmd and *.txt,/var/*.xml , you'd write: cmd "$var" which would be in your other more familiar language: cmd($var) What do we mean by vulnerability in a shell ? After all, it's been known since the dawn of time that shell
scripts should not be used in security-sensitive contexts.
Surely, OK, leaving a variable unquoted is a bug but that can't
do that much harm, can it? Well, despite the fact that anybody would tell you that shell
scripts should never be used for web CGIs, or that thankfully
most systems don't allow setuid/setgid shell scripts nowadays,
one thing that shellshock (the remotely exploitable bash bug
that made the headlines in September 2014) revealed is that
shells are still extensively used where they probably shouldn't:
in CGIs, in DHCP client hook scripts, in sudoers commands,
invoked by (if not as ) setuid commands... Sometimes unknowingly. For instance system('cmd $PATH_INFO') in a php / perl / python CGI script does invoke a shell to interpret that command line (not to
mention the fact that cmd itself may be a shell script and its
author may have never expected it to be called from a CGI). You've got a vulnerability when there's a path for privilege
escalation, that is when someone (let's call him the attacker )
is able to do something he is not meant to. Invariably that means the attacker providing data, that data
being processed by a privileged user/process which inadvertently
does something it shouldn't be doing, in most of the cases because
of a bug. Basically, you've got a problem when your buggy code processes
data under the control of the attacker . Now, it's not always obvious where that data may come from,
and it's often hard to tell if your code will ever get to
process untrusted data. As far as variables are concerned, In the case of a CGI script,
it's quite obvious, the data are the CGI GET/POST parameters and
things like cookies, path, host... parameters. For a setuid script (running as one user when invoked by
another), it's the arguments or environment variables. Another very common vector is file names. If you're getting a
file list from a directory, it's possible that files have been
planted there by the attacker . In that regard, even at the prompt of an interactive shell, you
could be vulnerable (when processing files in /tmp or ~/tmp for instance). Even a ~/.bashrc can be vulnerable (for instance, bash will
interpret it when invoked over ssh to run a ForcedCommand like in git server deployments with some variables under the
control of the client). Now, a script may not be called directly to process untrusted
data, but it may be called by another command that does. Or your
incorrect code may be copy-pasted into scripts that do (by you 3
years down the line or one of your colleagues). One place where it's
particularly critical is in answers in Q&A sites as you'll
never know where copies of your code may end up. Down to business; how bad is it? Leaving a variable (or command substitution) unquoted is by far
the number one source of security vulnerabilities associated
with shell code. Partly because those bugs often translate to
vulnerabilities but also because it's so common to see unquoted
variables. Actually, when looking for vulnerabilities in shell code, the
first thing to do is look for unquoted variables. It's easy to
spot, often a good candidate, generally easy to track back to
attacker-controlled data. There's an infinite number of ways an unquoted variable can turn
into a vulnerability. I'll just give a few common trends here. Information disclosure Most people will bump into bugs associated with unquoted
variables because of the split part (for instance, it's
common for files to have spaces in their names nowadays and space
is in the default value of IFS). Many people will overlook the glob part. The glob part is at least as dangerous as the split part. Globbing done upon unsanitised external input means the
attacker can make you read the content of any directory. In: echo You entered: $unsanitised_external_input if $unsanitised_external_input contains /* , that means the
attacker can see the content of / . No big deal. It becomes
more interesting though with /home/* which gives you a list of
user names on the machine, /tmp/* , /home/*/.forward for
hints at other dangerous practises, /etc/rc*/* for enabled
services... No need to name them individually. A value of /* /*/* /*/*/*... will just list the whole file system. Denial of service vulnerabilities. Taking the previous case a bit too far and we've got a DoS. Actually, any unquoted variable in list context with unsanitized
input is at least a DoS vulnerability. Even expert shell scripters commonly forget to quote things
like: #! /bin/sh -
: ${QUERYSTRING=$1} : is the no-op command. What could possibly go wrong? That's meant to assign $1 to $QUERYSTRING if $QUERYSTRING was unset. That's a quick way to make a CGI script callable from
the command line as well. That $QUERYSTRING is still expanded though and because it's
not quoted, the split+glob operator is invoked. Now, there are some globs that are particularly expensive to
expand. The /*/*/*/* one is bad enough as it means listing
directories up to 4 levels down. In addition to the disk and CPU
activity, that means storing tens of thousands of file paths
(40k here on a minimal server VM, 10k of which directories). Now /*/*/*/*/../../../../*/*/*/* means 40k x 10k and /*/*/*/*/../../../../*/*/*/*/../../../../*/*/*/* is enough to
bring even the mightiest machine to its knees. Try it for yourself (though be prepared for your machine to
crash or hang): a='/*/*/*/*/../../../../*/*/*/*/../../../../*/*/*/*' sh -c ': ${a=foo}' Of course, if the code is: echo $QUERYSTRING > /some/file Then you can fill up the disk. Just do a google search on shell
cgi or bash
cgi or ksh
cgi , and you'll find
a few pages that show you how to write CGIs in shells. Notice
how half of those that process parameters are vulnerable. Even David Korn's
own
one is vulnerable (look at the cookie handling). up to arbitrary code execution vulnerabilities Arbitrary code execution is the worst type of vulnerability,
since if the attacker can run any command, there's no limit on
what he may do. That's generally the split part that leads to those. That
splitting results in several arguments to be passed to commands
when only one is expected. While the first of those will be used
in the expected context, the others will be in a different context
so potentially interpreted differently. Better with an example: awk -v foo=$external_input '$2 == foo' Here, the intention was to assign the content of the $external_input shell variable to the foo awk variable. Now: $ external_input='x BEGIN{system("uname")}'
$ awk -v foo=$external_input '$2 == foo'
Linux The second word resulting of the splitting of $external_input is not assigned to foo but considered as awk code (here that
executes an arbitrary command: uname ). That's especially a problem for commands that can execute other
commands ( awk , env , sed (GNU one), perl , find ...) especially
with the GNU variants (which accept options after arguments).
Sometimes, you wouldn't suspect commands to be able to execute
others like ksh , bash or zsh 's [ or printf ... for file in *; do
[ -f $file ] || continue
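# $file is unquoted, so a crafted file name splits into extra operands for the [ test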
something-that-would-be-dangerous-if-$file-were-a-directory
done If we create a directory called x -o yes , then the test
becomes positive, because it's a completely different
conditional expression we're evaluating. Worse, if we create a file called x -a a[0$(uname>&2)] -gt 1 ,
with all ksh implementations at least (which includes the sh of most commercial Unices and some BSDs), that executes uname because those shells perform arithmetic evaluation on the
numerical comparison operators of the [ command. $ touch x 'x -a a[0$(uname>&2)] -gt 1'
$ ksh -c 'for f in *; do [ -f $f ]; done'
Linux Same with bash for a filename like x -a -v a[0$(uname>&2)] . Of course, if they can't get arbitrary execution, the attacker may
settle for lesser damage (which may help to get arbitrary
execution). Any command that can write files or change
permissions, ownership or have any main or side effect could be exploited. All sorts of things can be done with file names. $ touch -- '-R ..'
$ for file in *; do [ -f "$file" ] && chmod +w $file; done And you end up making .. writeable (recursively with GNU chmod ). Scripts doing automatic processing of files in publicly writable areas like /tmp are to be written very carefully. What about [ $# -gt 1 ] That's something I find exasperating. Some people go down all
the trouble of wondering whether a particular expansion may be
problematic to decide if they can omit the quotes. It's like saying: Hey, it looks like $# cannot be subject to
the split+glob operator, let's ask the shell to split+glob it .
Or Hey, let's write incorrect code just because the bug is
unlikely to be hit . Now how unlikely is it? OK, $# (or $! , $? or any
arithmetic substitution) may only contain digits (or - for
some²) so the glob part is out. For the split part to do
something though, all we need is for $IFS to contain digits (or - ). With some shells, $IFS may be inherited from the environment,
but if the environment is not safe, it's game over anyway. Now if you write a function like: my_function() {
[ $# -eq 2 ] || return
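# unquoted $# still undergoes word splitting, so $IFS is effectively an input to this check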
...
} What that means is that the behaviour of your function depends
on the context in which it is called. Or in other words, $IFS becomes one of the inputs to it. Strictly speaking, when you
write the API documentation for your function, it should be
something like: # my_function
# inputs:
# $1: source directory
# $2: destination directory
# $IFS: used to split $#, expected not to contain digits... And code calling your function needs to make sure $IFS doesn't
contain digits. All that because you didn't feel like typing
those 2 double-quote characters. Now, for that [ $# -eq 2 ] bug to become a vulnerability,
you'd need somehow for the value of $IFS to become under
control of the attacker . Conceivably, that would not normally
happen unless the attacker managed to exploit another bug. That's not unheard of though. A common case is when people
forget to sanitize data before using it in arithmetic
expression. We've already seen above that it can allow
arbitrary code execution in some shells, but in all of them, it allows the attacker to give any variable an integer value. For instance: n=$(($1 + 1))
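# the arithmetic expansion above evaluates the attacker-supplied $1 unsanitised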
if [ $# -gt 2 ]; then
echo >&2 "Too many arguments"
exit 1
fi And with a $1 with value (IFS=-1234567890) , that arithmetic
evaluation has the side effect of setting IFS and the next [ command fails, which means the check for too many args is
bypassed. What about when the split+glob operator is not invoked? There's another case where quotes are needed around variables and other expansions: when it's used as a pattern. [[ $a = $b ]] # a `ksh` construct also supported by `bash`
case $a in ($b) ...; esac do not test whether $a and $b are the same (except with zsh ) but if $a matches the pattern in $b . And you need to quote $b if you want to compare as strings (same thing in "${a#$b}" or "${a%$b}" or "${a##*$b*}" where $b should be quoted if it's not to be taken as a pattern). What that means is that [[ $a = $b ]] may return true in cases where $a is different from $b (for instance when $a is anything and $b is * ) or may return false when they are identical (for instance when both $a and $b are [a] ). Can that make for a security vulnerability? Yes, like any bug. Here, the attacker can alter your script's logical code flow and/or break the assumptions that your script are making. For instance, with a code like: if [[ $1 = $2 ]]; then
echo >&2 '$1 and $2 cannot be the same or damage will incur'
exit 1
fi The attacker can bypass the check by passing '[a]' '[a]' . Now, if neither that pattern matching nor the split+glob operator apply, what's the danger of leaving a variable unquoted? I have to admit that I do write: a=$b
case $a in... There, quoting doesn't harm but is not strictly necessary. However, one side effect of omitting quotes in those cases (for instance in Q&A answers) is that it can send a wrong message to beginners: that it may be all right not to quote variables . For instance, they may start thinking that if a=$b is OK, then export a=$b would be as well (which it's not in many shells as it's in arguments to the export command so in list context) or env a=$b . What about zsh ? zsh did fix most of those design awkwardnesses. In zsh (at least when not in sh/ksh emulation mode), if you want splitting , or globbing , or pattern matching , you have to request it explicitly: $=var to split, and $~var to glob or for the content of the variable to be treated as a pattern. However, splitting (but not globbing) is still done implicitly upon unquoted command substitution (as in echo $(cmd) ). Also, a sometimes unwanted side effect of not quoting variable is the empties removal . The zsh behaviour is similar to what you can achieve in other shells by disabling globbing altogether (with set -f ) and splitting (with IFS='' ). Still, in: cmd $var There will be no split+glob , but if $var is empty, instead of receiving one empty argument, cmd will receive no argument at all. That can cause bugs (like the obvious [ -n $var ] ). That can possibly break a script's expectations and assumptions and cause vulnerabilities. As the empty variable can cause an argument to be just removed , that means the next argument could be interpreted in the wrong context. As an example, printf '[%d] <%s>\n' 1 $attacker_supplied1 2 $attacker_supplied2 If $attacker_supplied1 is empty, then $attacker_supplied2 will be interpreted as an arithmetic expression (for %d ) instead of a string (for %s ) and any unsanitized data used in an arithmetic expression is a command injection vulnerability in Korn-like shells such as zsh . $ attacker_supplied1='x y' attacker_supplied2='*'
$ printf '[%d] <%s>\n' 1 $attacker_supplied1 2 $attacker_supplied2
[1] <x y>
[2] <*> fine, but: $ attacker_supplied1='' attacker_supplied2='psvar[$(uname>&2)0]'
$ printf '[%d] <%s>\n' 1 $attacker_supplied1 2 $attacker_supplied2
Linux
[1] <2>
[0] <> The uname arbitrary command was run. Also note that while zsh doesn't do globbing upon substitutions by default, as globs in zsh are much more powerful than in other shells, that means they can do a lot more damage if ever you enabled the globsubst option at the same time of the extendedglob one, or without disabling bareglobqual and left some variables unintentionally unquoted. For instance, even: set -o globsubst
echo $attacker_controlled Would be an arbitrary command execution vulnerability, because commands can be executed as part of glob expansions, for instance with the e valuation glob qualifier: $ set -o globsubst
$ attacker_controlled='.(e[uname])'
$ echo $attacker_controlled
Linux
. emulate sh # or ksh
echo $attacker_controlled doesn't cause an ACE vulnerability (though it still a DoS one like in sh) because bareglobqual is disabled in sh/ksh emulation. There's no good reason to enable globsubst other than in those sh/ksh emulations when wanting to interpret sh/ksh code. What about when you do need the split+glob operator? Yes, that's typically when you do want to leave your variable unquoted. But then you need to make sure you tune your split and glob operators correctly before using it. If you only want the split part and not the glob part (which is the case most of the time), then you do need to disable globbing ( set -o noglob / set -f ) and fix $IFS . Otherwise you'll cause vulnerabilities as well (like David Korn's CGI example mentioned above). Conclusion In short, leaving a variable (or command substitution or
arithmetic expansion) unquoted in shells can be very dangerous
indeed especially when done in the wrong contexts, and it's very
hard to know which are those wrong contexts. That's one of the reasons why it is considered bad practice . Thanks for reading so far. If it goes over your head, don't
worry. One can't expect everyone to understand all the implications of
writing their code the way they write it. That's why we have good practice recommendations , so they can be followed without
necessarily understanding why. (and in case that's not obvious yet, please avoid writing
security sensitive code in shells). And please quote your variables on your answers on this site! ¹In ksh93 and pdksh and derivatives, brace expansion is also performed unless globbing is disabled (in the case of ksh93 versions up to ksh93u+, even when the braceexpand option is disabled). ² In ksh93 and yash , arithmetic expansions can also include things like 1,2 , 1e+66 , inf , nan . There are even more in zsh , including # which is a glob operator with extendedglob , but zsh never does split+glob upon arithmetic expansion, even in sh emulation | {
"source": [
"https://unix.stackexchange.com/questions/171346",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22565/"
]
} |
171,489 | So, I was moving my laptop around (and I have the bad habit of setting things on the keyboard...) and I woke up to discover this: $
Display all 2588 possibilities? (y or n) What command would display something like this? I'm using Bash. | Hitting the TAB key helps you auto-complete either a command or a file/directory you want to use, depending on what you are requesting. Hitting the TAB key twice displays the available completions you could use next. e.g. Command completion: I want to edit my crontab. Typing cront and hitting TAB , I will see my command completed: crontab . File/Directory completion: I want to back up my crontab. crontab -l >> Type part of the destination, /ho TAB , then I will see: /home/ ; next type us TAB and I will see: /home/user/ Now, when you double hit the TAB key without typing anything, the prompt still expects something, so it will try to help you by displaying all the possibilities. With the prompt empty, it's expecting a command or a file/directory, so it will display all the commands available to you plus all the files/directories located in the directory where you are. The 2588 possibilities output means the total number of commands/files/directories available to type. | {
"source": [
"https://unix.stackexchange.com/questions/171489",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/147281/"
]
} |
171,519 | What exactly is happening here? root@bob-p7-1298c:/# ls -l /tmp/report.csv && lsof | grep "report.csv"
-rw-r--r-- 1 mysql mysql 1430 Dec 4 12:34 /tmp/report.csv
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
Output information may be incomplete. | FUSE and its access rights lsof by default checks all mounted file systems including FUSE - file systems implemented in user space which have special access rights in Linux. As you can see in this answer on Ask Ubuntu a mounted GVFS file system (special case of FUSE) is normally accessible only to the user which mounted it (the owner of gvfsd-fuse ). Even root cannot access it. To override this restriction it is possible to use mount options allow_root and allow_other . The option must be also enabled in the FUSE daemon which is described for example in this answer ...but in your case you do not need to (and should not) change the access rights. Excluding file systems from lsof In your case lsof does not need to check the GVFS file systems so you can exclude the stat() calls on them using the -e option (or you can just ignore the waring): lsof -e /run/user/1000/gvfs Checking certain files by lsof You are using lsof to get information about all processes running on your system and only then you filter the complete output using grep . If you want to check just certain files and the related processes use the -f option without a value directly following it then specify a list of files after the "end of options" separator -- . This will be considerably faster. lsof -e /run/user/1000/gvfs -f -- /tmp/report.csv General solution To exclude all mounted file systems on which stat() fails you can run something like this (in bash ): x=(); for a in $(mount | cut -d' ' -f3); do test -e "$a" || x+=("-e$a"); done
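# every mount point that fails the test -e check gets its own -e exclusion option for lsof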
lsof "${x[@]}" -f -- /tmp/report.csv Or to be sure to use stat() ( test -e could be implemented a different way): x=(); for a in $(mount | cut -d' ' -f3); do stat --printf= "$a" 2>/dev/null || x+=("-e$a"); done | {
"source": [
"https://unix.stackexchange.com/questions/171519",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61110/"
]
} |
171,603 | I have temp file with some lower-case and upper-case contents. Input Contents of my temp file: hi
Jigar
GANDHI
jiga I want to convert all upper to lower . Command I tried the following command: sed -e "s/[A-Z]/[a-z]/g" temp but got wrong output. Output I want it as: hi
jigar
gandhi
jiga What needs to be in the substitute part of argument for sed ? | If your input only contains ASCII characters, you could use tr like: tr A-Z a-z < input or (less easy to remember and type IMO; but not limited to ASCII latin letters, though in some implementations including GNU tr , still limited to single-byte characters, so in UTF-8 locales, still limited to ASCII letters): tr '[:upper:]' '[:lower:]' < input if you have to use sed : sed 's/.*/\L&/g' < input (here assuming the GNU implementation). With POSIX sed , you'd need to specify all the transliterations and then you can choose which letters you want to convert: sed 'y/AǼBCΓDEFGH.../aǽbcγdefgh.../' < input With awk : awk '{print tolower($0)}' < input | {
"source": [
"https://unix.stackexchange.com/questions/171603",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93151/"
]
} |
171,677 | I'm attempting to write a script that will be run in a given directory with many single level sub directories. The script will cd into each of the sub directories, execute a command on the files in the directory, and cd out to continue onto the next directory. What is the best way to do this? | for d in ./*/ ; do (cd "$d" && somecommand); done | {
"source": [
"https://unix.stackexchange.com/questions/171677",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93933/"
]
} |
171,832 | I have a file in UTF-8 that contains texts in multiple languages. A lot of it are people's names. I need to convert it to ASCII and I need the result to look as decent as possible. There are many ways how to approach converting from a wider encoding to a narrower one. The simplest transformation would be to replace all non-ASCII characters with some placeholder, like '_'. If I know the language the file is written in, there are additional possibilities, like romanization. What Unix tool or programming language library available on Unix can give me a decent (best-effort) conversion from UTF-8 to ASCII? Most of the text is in European, latin type based languages. | This will work for some things: iconv -f utf-8 -t ascii//TRANSLIT echo ĥéĺłœ π | iconv -f utf-8 -t ascii//TRANSLIT returns helloe ? . Any characters that iconv doesn’t know how to convert will be replaced with question marks. iconv is POSIX, but I don’t know if all systems have the TRANSLIT option. It works for me on Linux. Also, the IGNORE option will silently discard characters that cannot be represented in the target character set (see man iconv_open ). An inferior but POSIX-compliant option is to use tr . This command replaces all non-ASCII code points with a question mark. It reads UTF-8 text one byte at a time. “É” might be replaced with E? or ? , depending on whether it was encoded using a combining accent or a precomposed character. echo café äëïöü | tr -d '\200-\277' | tr '\300-\377' '[?*]' That example returns caf? ????? , using precomposed characters. | {
"source": [
"https://unix.stackexchange.com/questions/171832",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23890/"
]
} |
171,938 | How can I display the welcome message "welcome panni" every time I log into unix? | Normally, a welcome message can be shown by customizing the /etc/motd file (which stands for Message Of The Day). /etc/motd is not a script but a text file whose contents are shown before the first prompt of a login session. You can also add some messages in the /etc/profile or /etc/bashrc scripts using the echo or print commands (note that /etc/bashrc assumes you are using the bash shell). Here are examples of commands that can be added to the /etc/profile file to obtain a result like the one you expected: echo "Welcome ${USER}" or echo "Welcome $(whoami)" OBS1: If the system is correctly configured, the results of the above should be the same, but the ways they work are different: the first one shows the $USER environment variable while the second executes the command whoami . OBS2: Note that /etc/profile is run once per session and only for login shells. This means that the message will be shown when the user logs in on the console or via rsh / ssh to the machine, but not when he/she simply opens a terminal in an X session, for example. | {
"source": [
"https://unix.stackexchange.com/questions/171938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94094/"
]
} |
172,382 | I work a lot with imaged drives, meaning I do a dd-copy of the drive in question and then work on the image instead of the drive itself. For most work, I use kpartx to map the drive's partitions to a device under /dev/mapper/. What I'm wondering here is if there's a way to find which of the mappings belongs to which image. Consider this: root@vyvyan:/tmp# kpartx -a -v Image1
add map loop1p1 (254:4): 0 10240 linear /dev/loop1 2048
add map loop1p2 (254:5): 0 10240 linear /dev/loop1 12288
add map loop1p3 (254:6): 0 52848 linear /dev/loop1 22528
root@vyvyan:/tmp# kpartx -a -v Image2
add map loop2p1 (254:7): 0 33508 linear /dev/loop2 2048
add map loop2p2 (254:8): 0 39820 linear /dev/loop2 35556 Now, let's say I forget which image went to which mapping. Is there a way to let kpartx - or the kernel, or anything else - tell me which image goes where? EDIT Also, if I accidentally rm the image-file while kpartx has added the mappings, how do you remove the mappings? kpartx wants the actual image to be present. | losetup (the command normally used to set them up) will tell you: $ /sbin/losetup --list
NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
/dev/loop0 0 0 0 0 /var/tmp/jigdo/debian-7.6.0-amd64-CD-1.iso Note that with older versions you may have to use -a instead of --list , and this outputs in a different and now deprecated format. The information comes from /sys : $ cat /sys/class/block/loop0/loop/backing_file
/var/tmp/jigdo/debian-7.6.0-amd64-CD-1.iso Another, possibly more portable, option is to get it from udisks: $ udisksctl info -b /dev/loop0
/org/freedesktop/UDisks2/block_devices/loop0:
⋮
org.freedesktop.UDisks2.Loop:
Autoclear: false
BackingFile: /var/tmp/jigdo/debian-7.6.0-amd64-CD-1.iso
SetupByUID: 1000
⋮ losetup will also happily remove them for you, using the -d option. That just requires the loop device as a parameter; it doesn't care about the backing file/device. | {
"source": [
"https://unix.stackexchange.com/questions/172382",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11865/"
]
} |
172,481 | Suppose that I want to delete all files in a folder that are greater than 1 MB. $ find . -size +1M | xargs -0 rm This will not delete files that have space in their names. So I want it to quote all arguments it sends to rm . If find gives it Some report.docx it should pass "Some report.docx" to rm . How can I do that? | As you're already using that non-standard 1M , chances are your find implementation also supports -delete . So, simply use: find . -type f -size +1M -delete Where supported, that's by far the safest and most efficient. If you insist on using xargs and rm with find , just add -print0 in your command: find . -type f -size +1M -print0 | xargs -r0 rm -f -- ( -print0 and -0 are non-standard, but pretty common. -r (to avoid running rm at all if find doesn't find anything) is less common, but if your xargs doesn't support it, you can just omit it, as rm with -f won't complain if called without argument). The standard syntax would be: find . -type f -size +1048576c -exec rm -f -- {} + Other way: find . -type f -size +1M -execdir rm -f -- {} + (that's safer than -exec / xargs -0 and would work with very deep directory trees (where full file paths would end up larger than PATH_MAX ), but that's also non-standard, and runs at least one rm for each directory that contains at least one big file, so would be less efficient). From man find on a GNU system: -print0 True; print the full file name on the standard output, followed by a null
character (instead of the newline character that -print uses). This allows file names
that contain newlines or other types of white space to be correctly interpreted by
programs that process the find output. This option corresponds to the -0 option of xargs . | {
"source": [
"https://unix.stackexchange.com/questions/172481",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19506/"
]
} |
172,541 | I have a script, that does not exit when I want it to. An example script with the same error is: #!/bin/bash
function bla() {
return 1
}
bla || ( echo '1' ; exit 1 )
echo '2' I would assume to see the output: :~$ ./test.sh
1
:~$ But I actually see: :~$ ./test.sh
1
2
:~$ Does the () command chaining somehow create a scope? What is exit exiting out of, if not the script? | () runs commands in the subshell, so by exit you are exiting from subshell and returning to the parent shell. Use braces {} if you want to run commands in the current shell. From bash manual: (list) list is executed in a subshell environment. Variable assignments and builtin commands that affect the shell's environment do not remain in effect after the command completes. The return status is the exit status of list. { list; } list is simply executed in the current shell environment. list must be terminated with a newline or semicolon. This is known as a group command. The return status is the exit status of list. Note that unlike the metacharacters ( and ), { and } are reserved words and must occur where a reserved word is permitted to be recognized. Since they do not cause a word break, they must be separated from list by whitespace or another shell metacharacter. It's worth mentioning that the shell syntax is quite consistent and the subshell participates also in the other () constructs like command substitution (also with the old-style `..` syntax) or process substitution, so the following won't exit from the current shell either: echo $(exit)
cat <(exit) While it may be obvious that subshells are involved when commands are placed explicitly inside () , the less visible fact is that they are also spawned in these other structures: command started in the background exit & doesn't exit the current shell because (after man bash ) If a command is terminated by the control operator &, the shell executes the command in the background in a subshell. The shell does not wait for the command to finish, and the return status is 0. the pipeline exit | echo foo still exits only from the subshell. However different shells behave differently in this regard. For example bash puts all components of the pipeline into separate subshells (unless you use the lastpipe option in invocations where job control is not enabled), but AT&T ksh and zsh run the last part inside the current shell (both behaviours are allowed by POSIX). Thus exit | exit | exit does basically nothing in bash, but exits from the zsh because of the last exit . coproc exit also runs exit in a subshell. | {
"source": [
"https://unix.stackexchange.com/questions/172541",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88357/"
]
} |
172,624 | I've piped a command to less , and now I want to save the command's output to a file. How do I do that? In this case, I don't want to use tee , I want a solution directly from less, so that I don't have to rerun a long-running command if I forgot to use tee . This question is similar to this one, the only difference is that I want to save all of the lines, not a subset: Write lines to a file from less | From less , type s then type the file name you want to save to, then Enter . From the man page, under COMMANDS : s filename
Save the input to a file. This only works if the input is a pipe, not an ordinary file. man page also states that, depending on your particular installation, the s command might not be available. In that case, you could go to line 1 with: g or < or ESC-<
Go to line N in the file, default 1 (beginning of file). and pipe the whole content to cat with: | <m> shell-command
<m> represents any mark letter. Pipes a section of the input file to the
given shell command. The section of the file to be piped is between the
first line on the current screen and the position marked by the letter.
<m> may also be ^ or $ to indicate beginning or end of file respectively. so either: g|$cat > filename or: <|$cat > filename i.e. type g or < ( g or less-than ) | $ ( pipe then dollar ) then cat > filename and Enter . This should work whether input is a pipe or an ordinary file. | {
"source": [
"https://unix.stackexchange.com/questions/172624",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9041/"
]
} |
172,666 | When I try to log in to gmail with mutt, it flashes a quick Webalert with a url, something like accounts.gmail.com or something. It's too quick for me to see or copy it. Then it says Login failed. Then I get an email from Gmail saying: Google Account: sign-in attempt blocked
Hi Adam,
We recently blocked a sign-in attempt to your Google Account [[email protected]].
Sign in attempt details
Date & Time: Wednesday, December 10, 2014 11:55:21 PM UTC
Location: Utah, USA
If this wasn't you
Please review your Account Activity page at https://security.google.com/settings/security/activity to see if anything looks suspicious. Whoever tried to sign in to your account knows your password; we recommend that you change it right away.
If this was you
You can switch to an app made by Google such as Gmail to access your account (recommended) or change your settings at https://www.google.com/settings/security/lesssecureapps so that your account is no longer protected by modern security standards.
To learn more, see https://support.google.com/accounts/answer/6010255.
Sincerely,
The Google Accounts team I can go to the link and enable "Access for less secure apps" and then I can log in just fine, but is there a way to login with mutt without having to turn on this less secure option in Gmail? Update: I'm on mac os x Yosemite
When I run mutt -v, in the compile options, it does contain +USE_SSL_OPENSSL
I'm not using google 2-step verification
I'm not using an application specific password
Here are the messages that I get when I try to log in: Reading imaps://imap.gmail.com:993/INBOX...
Looking up imap.gmail.com...
Connecting to imap.gmail.com...
TLSv1.2 connection using TLSv1/SSLv3 (ECDHE-RSA-AES128-GCM-SHA256)
Logging in...
[WEBALERT https://accounts.google.com/ContinueSignIn?sarp=1&scc=1&plt=AKgnsbsm0P...... I found this answer, but it didn't work: https://stackoverflow.com/a/25209735/1665818 | I finally got it to work by enabling Google 2-step verification and using an app-specific password for mutt. More detail: I enabled 2-step verification on my Google account, which means that when I log in to Google, I have to enter a pin number from either a text or from the Google Authenticator app. Then I had to get an app-specific password for mutt. You can generate an app specific password here . Then I used that app-specific password for logging into mutt instead of my normal password. And then I don't have to enter a pin number. | {
"source": [
"https://unix.stackexchange.com/questions/172666",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64643/"
]
} |
173,708 | I'm trying to force a newly created user to change a password at the first time login using ssh. For security reasons I want to give him a secure password until he logs in for the first time. I did the following so far: useradd -s /bin/bash -m -d /home/foo foo
passwd foo Doing chage -d 0 foo only gives me the error Your account has expired; please contact your system administrator on ssh login. | Change the password age to 0 days. Syntax: chage -d 0 {user-name} In this case: chage -d 0 foo This works for me over ssh as well. | {
"source": [
"https://unix.stackexchange.com/questions/173708",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17859/"
]
} |
173,719 | I'm trying to configure sshd for our internal network to accept public key authentication if a user has set up their key or ask for password if the user has not, but not both. So a user should be able to login passwordless if they have their public key configured or be asked for a password if they haven't set up a public key. Ubuntu 14.04 OpenSSH-server 1:6.6p1-2ubuntu2 | change the age of password to 0 day syntax chage -d 0 {user-name} In this case chage -d0 foo This works for me over ssh also | {
"source": [
"https://unix.stackexchange.com/questions/173719",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94600/"
]
} |
173,916 | I encountered BASEDIR=$(pwd) in a script. Are there any advantages or disadvantages over using BASEDIR="$PWD" , other than maybe, that $PWD could be overwritten? | If bash encounters $(pwd) it will execute the command pwd and replace $(pwd) with this command's output. $PWD is a variable that is almost always set. pwd is a builtin shell command since a long time. So $PWD will fail if this variable is not set and $(pwd) will fail if you are using a shell that does not support the $() construct which is to my experience pretty often the case. So I would use $PWD . As every nerd I have my own shell scripting tutorial | {
"source": [
"https://unix.stackexchange.com/questions/173916",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88357/"
]
} |
173,939 | I have a script that executes a Java program, and I want to create a service to start this script at boot time. So I've created a script called run.sh. /test/run.sh #!/bin/bash
java -cp myjar:/test/lib/* com.xxxx.util.AmazonS3FileDownloader I have also created a file called test in /etc/init.d /etc/init.d/test #!/bin/bash
/test/run.sh For testing purposes I gave the test folder /test all rights (chmod 777 /test). drwxrwxrwx 7 testuser testuser 4096 Dec 12 13:28 test And this is what is inside the /etc/init.d folder: -rwxr-xr-x 1 root root 2062 Dec 12 13:18 test If I run this command, everything is fine. No error, the program runs fine. $ /test/run.sh But for some reason I don't understand, if I try to do the same thing using the service, it doesn't work. $ service test start I get a permission denied error when creating receipts_download.log in the /test folder. log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: receipts_download.log (Permission denied)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at java.io.FileOutputStream.<init>(FileOutputStream.java:142)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:290)
at org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:194)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:164)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:257)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:133)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:97)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:689)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:647)
at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:544)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:440)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:476)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:471)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:125)
at org.apache.log4j.Logger.getLogger(Logger.java:118)
at com.xxxxx.util.AmazonS3FileDownloader.<init>(Unknown Source)
at com.xxxxx.util.AmazonS3FileDownloader.main(Unknown Source) /test has all permission and why I can run $ /test/run.sh without problem but not $ service test start Thanks for your help. | If bash encounters $(pwd) it will execute the command pwd and replace $(pwd) with this command's output. $PWD is a variable that is almost always set. pwd is a builtin shell command since a long time. So $PWD will fail if this variable is not set and $(pwd) will fail if you are using a shell that does not support the $() construct which is to my experience pretty often the case. So I would use $PWD . As every nerd I have my own shell scripting tutorial | {
"source": [
"https://unix.stackexchange.com/questions/173939",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94739/"
]
} |
174,206 | I'm reading myself for the release of Jessie on Debian, so I'm extra cautious (should be said paranoid) about any message that can cause problems, namely warnings. My system is a desktop with Debian testing/unstable installed, on ext4 partitions for both /boot and / , yet I'm seeing this message while upgrading the grub-pc package in Debian: Installing for i386-pc platform.
Installation finished. No error reported.
Installing for i386-pc platform.
grub-install: warning: File system `ext2' doesn't support embedding.
grub-install: warning: Embedding is not possible. GRUB can only be installed in this setup by using blocklists. However, blocklists are UNRELIABLE and their use is discouraged..
Installation finished. No error reported.
Generating grub configuration file ... Why is grub saying that my system is embedded? What is the cause of this? I tried to check the grub-install binary, but I couldn't make sense of it. | Most people coming to this from a search engine are probably wondering, "why do I get this error?": warning: File system `ext2' doesn't support embedding.
warning: Embedding is not possible. GRUB can only be installed in this setup by using blocklists. However, blocklists are UNRELIABLE and their use is discouraged..
error: will not proceed with blocklists. Because you did, e.g.: grub-install /dev/sda1 instead of grub-install /dev/sda I.e. tried to install to a partition instead of the MBR. | {
"source": [
"https://unix.stackexchange.com/questions/174206",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41104/"
]
} |
174,210 | According to my browser (Firefox 34.0) the installed version of the Shockwave Flash plugin appears to be 11.2.202.424. This version is considered to be insecure: https://helpx.adobe.com/security/products/flash-player/apsb14-27.html The plugin is therefore blocked: https://blocklist.addons.mozilla.org/en-US/firefox/blocked/p796 In the attempt to update the plugin to the version currently considered safe (11.2.202.425), I found out that the recommended version apparantly is already installed: $ yum info flash-plugin
Loaded plugins: langpacks, refresh-packagekit
Installed Packages
Name : flash-plugin
Arch : x86_64
Version : 11.2.202.425
Release : release
Size : 19 M
Repo : installed
From repo : adobe-linux-x86_64
Summary : Adobe Flash Player 11.2
URL : http://www.adobe.com/downloads/
License : Commercial
Description : Adobe Flash Plugin 11.2.202.425
: Fully Supported: Mozilla SeaMonkey 1.0+, Firefox 1.5+, Mozilla
: 1.7.13+ My operating system: $ cat /etc/redhat-release
Fedora release 20 (Heisenbug) My questions: Do I have multiple versions of this plugin installed? How can I fix my installation? | I ran into this too, and found the answer in mozilla's bugzilla . In short, it happened because the plugin was updated while Firefox was running, and the pluginreg.dat got corrupted. So: exit firefox rm ~/.mozilla/firefox/*/pluginreg.dat start firefox again and you'll be all set. (The file will be regenerated.) Of course, you'll need to make sure that the .425 version is installed via yum update or other method. Presumably, this problem has been happening harmlessly for many updates — this is just the first where we all noticed it because of the blacklisting. | {
"source": [
"https://unix.stackexchange.com/questions/174210",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17609/"
]
} |
174,349 | I was given the files for a mini linux , that boots directly into firefox . It works for all it should be doing, only that I do not get an internet connection. We have 3 DNS servers in the network, which all work. I can ping them, too. But when trying to ping google.de or wget google.de I get a bad address error. nslookup google.de works for some reason. I tracked the issue down to my resolv.conf on the booted system not having the same contents as the resolv.conf that I put into the .iso file. I tried understanding all the factors that go into creating and modifying resolv.conf . I'm not quite sure I got it all, but I definitely didn't find my solution there. So as a last ditch effort, I tried making the resolv.conf file immutable using :~# chattr +i /etc/resolv.conf When rebuilding and booting again to my surprise my file was renamed to resolv.conf~ and in its place was the same standard file that has been haunting me. The file contents make me believe it gets the information from the network itself. When starting the .iso in Virtualbox without internet access, my file is being kept as it is. I tried changing /etc/dhcp/dhclient.conf to not get the information from the net, by deleting domain-name-server and domain-name-search from the request part of the file. Didn't work unfortunately. I don't have the NetworkManager installed. The iso is based on Ubuntu 14.04. There is probably vital information missing. I'm happy to provide it. UPDATE: I think I found the file that clears resolv.conf . It seems to be /usr/share/udhcpc/default.script #!/bin/sh
# udhcpc script edited by Tim Riker <[email protected]>
[ -z "$1" ] && echo "Error: should be called from udhcpc" && exit 1
RESOLV_CONF="/etc/resolv.conf"
[ - n "$broadcast" ] && BROADCAST="broadcast $broadcast"
[ -n "$subnet" ] && NETMASK="netmask $subnet"
case "$1" in
deconfig)
/bin/ifconfig $interface 0.0.0.0
for i in /etc/ipdown.d/*; do
[ -e $i ] && . $i $interface
done
;;
renew|bound)
/bin/ifconfig $interface $ip $BROADCAST $NETMASK
if [ -n "$router" ] ; then
echo "deleting routers"
while route del default gw 0.0.0.0 dev $interface ; do
:
done
metric=0
for i in $router ; do
route add default gw $i dev $interface metric $((metric++))
done
fi
echo -n > $RESOLV_CONF # Start ----------------
[ -n "$domain" ] && echo search $domain >> $RESOLV_CONF
for i in $dns ; do
echo adding dns $i
echo nameserver $i >> $RESOLV_CONF
done
for i in /etc/ipup.d/*; do
[ -e $i ] && . $i $interface $ip $dns
done # End ------------------
;;
esac
exit 0 It's part of the udhcpc program. A tiny dhcp client, that is part of busybox Will investigate further. UPDATE2 AND SOLUTION: I commented the part out (#Start to #End), that seemingly overwrites the /etc/resolv.conf file and sure enough. That was the culprit. So an obscure script caused all this trouble. I changed the question to reflect, what actually needed to be known to solve my problem, so it would be easier to find for people with the same problem and so I could accept an answer. Thanks for the help here in figuring things out. | You shouldn't manually update your resolv.conf , because all changes will be overwritten by data that your local DHCP server provides. If you want it to be static, run sudo dpkg-reconfigure resolvconf and answer "no" to dynamic updates. If you want to add new entries there, edit /etc/resolvconf/resolv.conf.d/base and run sudo resolvconf -u , it will append your entries and DHCP server's entries. Try to edit your /etc/network/interfaces and add your entries there, like auto eth0
iface eth0 inet dhcp
dns-search google.com
dns-nameservers dnsserverip and then restart /etc/init.d/networking restart or sudo ifdown -a and sudo ifup -a Your system uses udhcp which is a very small DHCP client program. The udhcp client negotiates a lease with the DHCP server and notifies
a set of scripts when a lease is obtained or lost. You can read about its usage here or just edit this script (as you did). | {
"source": [
"https://unix.stackexchange.com/questions/174349",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88357/"
]
} |
174,350 | I'm getting the error: Argument list too long when trying to use cUrl to send a file in base64 inside the body of my JSON. I'm using something like this: DATA=$( base64 "$FILE" )
curl -X POST -H "Content-Type: application/json" -d '{
"data": "'"$DATA"'"
}' $HOST Is there any other way to get the DATA in the body of my JSON? Take into account that I need to read a file in my filesystem, transform it into base64 and then send it inside the body. | If the base64-encoded file is too big to fit in the argument list you are going to have to pass it via a file. One of the easier ways I can think of is to pass it via standard input. From the curl man page , you can use -d @- to read from stdin instead of the command line. curl -X POST -H "Content-Type: application/json" -d @- "$HOST" <<CURL_DATA
{ "data": "$DATA" }
CURL_DATA | {
"source": [
"https://unix.stackexchange.com/questions/174350",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94997/"
]
} |
174,440 | OK I'm new to this. I installed tmux to run a several days experiment. After typing tmux new -s name I got a new window with green banner at the bottom. I compile and run java program. Now I do not know how to exit the window (while leave it running). The bash (or whatever) cursor is not responding because the java program is still running. My solution so far is to quit the Terminal program completely and reopen it again. Any ideas on how to quit the tmux window without exiting the whole Terminal program? | Detach from currently attached session Session Ctrl + b d or Ctrl + b :detach Screen Ctrl + a Ctrl + d or Ctrl + a :detach | {
"source": [
"https://unix.stackexchange.com/questions/174440",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95066/"
]
} |
174,566 | I have came across this script: #! /bin/bash
if (( $# < 3 )); then
echo "$0 old_string new_string file [file...]"
exit 0
else
ostr="$1"; shift
nstr="$1"; shift
fi
echo "Replacing \"$ostr\" with \"$nstr\""
for file in $@; do
if [ -f $file ]; then
echo "Working with: $file"
eval "sed 's/"$ostr"/"$nstr"/g' $file" > $file.tmp
mv $file.tmp $file
fi
done What is the meaning of the lines where they use shift ? I presume the script should be used with at least arguments so...? | shift is a bash built-in which kind of removes arguments from the beginning of the argument list. Given that the 3 arguments provided to the script are available in $1 , $2 , $3 , then a call to shift will make $2 the new $1 .
A shift 2 will shift by two, making the new $1 the old $3.
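As a quick stand-alone illustration (not part of the script above), you can watch the positional parameters move:
set -- old_string new_string file1 file2
echo "$1 $2" # prints: old_string new_string
shift 2
echo "$1 $2" # prints: file1 file2
This is exactly what the script relies on: after the two shift calls, "$@" holds only the file arguments.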
For more information, see here: http://ss64.com/bash/shift.html http://www.tldp.org/LDP/Bash-Beginners-Guide/html/sect_09_07.html | {
"source": [
"https://unix.stackexchange.com/questions/174566",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15387/"
]
} |
174,609 | I am trying to replace multiple words in a file by using sed -i #expression1 #expression2 file Something 123 item1
Something 456 item2
Something 768 item3
Something 353 item4 Output (Desired) anything 123 stuff1
anything 456 stuff2
anything 768 stuff3
anything 353 stuff4 Try-outs I can get the following output by using sed -i two times. sed -i 's/Some/any/g' file
sed -i 's/item/stuff/g' file Can I have any possible way of making this as a single in-place command like sed -i 's/Some/any/g' -i 's/item/stuff/g' file When I tried the above code it takes s/item/stuff/g as a file and tries working on it. | Depending on the version of sed on your system you may be able to do sed -i 's/Some/any/; s/item/stuff/' file You don't need the g after the final slash in the s command here, since you're only doing one replacement per line. Alternatively: sed -i -e 's/Some/any/' -e 's/item/stuff/' file Or: sed -i '
s/Some/any/
s/item/stuff/' file The -i option (a GNU extension now supported by a few other implementations though some need -i '' instead) tells sed to edit files in place; if there are characters immediately after the -i then sed makes a backup of the original file and uses those characters as the backup file's extension. Eg, sed -i.bak 's/Some/any/; s/item/stuff/' file or sed -i'.bak' 's/Some/any/; s/item/stuff/' file will modify file , saving the original to file.bak . Of course, on a Unix (or Unix-like) system, we normally use '~' rather than '.bak', so sed -i~ 's/Some/any/;s/item/stuff/' file | {
"source": [
"https://unix.stackexchange.com/questions/174609",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93151/"
]
} |
174,817 | I have a script that needs to create temporary files for its work, and clean up after itself. My question is about finding the right base directory for the temporary files. The script needs to work on multiple platforms: Git Bash (Windows), Solaris, Linux, OSX.
On each platform, the preferred temp directory is expressed differently: Windows: %TMP% (and possibly %TEMP% ) OSX: $TMPDIR Linux, UNIX: supposed to be $TMPDIR but appears to be unset on multiple systems I tried So in my script I added this boilerplate: if test -d "$TMPDIR"; then
:
elif test -d "$TMP"; then
TMPDIR=$TMP
elif test -d /var/tmp; then
TMPDIR=/var/tmp
else
TMPDIR=/tmp
fi This seems too tedious. Is there a better way? | A slightly more portable way to handle temporary files is to use mktemp . It'll create temporary files and return their paths for you. For instance: $ mktemp
/tmp/tmp.zVNygt4o7P
$ ls /tmp/tmp.zVNygt4o7P
/tmp/tmp.zVNygt4o7P You could use it in a script quite easily: tmpfile=$(mktemp)
echo "Some temp. data..." > $tmpfile
rm $tmpfile Reading the man page, you should be able to set options according to your needs. For instance: -d creates a directory instead of a file. -u generates a name, but does not create anything. Using -u you could retrieve the temporary directory quite easily with... $ tmpdir=$(dirname $(mktemp -u)) More information about mktemp is available here . Edit regarding Mac OS X: I have never used a Mac OSX system, but according to a comment by Tyilo below, it seems like Mac OSX's mktemp requires you to provide a template (which is an optional argument on Linux). Quoting: The template may be any file name with some number of "Xs" appended to it, for example /tmp/temp.XXXX . The trailing "Xs" are replaced with the current process number and/or a unique letter combination. The number of unique file names mktemp can return depends on the number of "Xs" provided; six "Xs" will result in mktemp selecting 1 of 56800235584 (62 ** 6) possible file names. The man page also says that this implementation is inspired by the OpenBSD man page for mktemp . A similar divergence might therefore be observed by OpenBSD and FreeBSD users as well (see the History section). Now, as you probably noticed, this requires you to specify a complete file path, including the temporary directory you are looking for in your question. This little problem can be handled using the -t switch. While this option seems to require an argument ( prefix ), it would appear that mktemp relies on $TMPDIR when necessary. All in all, you should be able to get the same result as above using... $ tmpdir=$(dirname $(mktemp tmp.XXXXXXXXXX -ut)) Any feedback from Mac OS X users would be greatly appreciated, as I am unable to test this solution myself. | {
"source": [
"https://unix.stackexchange.com/questions/174817",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17433/"
]
} |
174,990 | So Wikipedia ( link ) tells me that the command pwd is short for "print working directory", and that makes sense. But for the environment variable, the "P" has to be an acronym for something else than print. I hear people talking about "current working directory", which sounds better and is more intuitive, but still the environment variable seems to be called $PWD, and not $CWD. Nobody ever says "Did you check the print working directory variable?". I am currently playing around with the web application server uWSGI, and when running it tells me (on the uWSGI stats page): "cwd":"/home/velle/greendrinks", so they obviously like the (more intuitive acronym) cwd over pwd . I guess I am trying to figure out if I misunderstood something, or if it is just a matter of having given the environment variable an unintuitive name? | That depends on what you're doing. First of all, $PWD is an environment variable and pwd is a shell builtin or an actual binary: $ type -a pwd
pwd is a shell builtin
pwd is /bin/pwd Now, the bash builtin will simply print the current value of $PWD unless you use the -P flag. As explained in help pwd : pwd: pwd [-LP] Print the name of the current working directory. Options: -L print the value of $PWD if it names the current working directory -P print the physical directory, without any symbolic links By default, ‘pwd’ behaves as if ‘-L’ were specified. The pwd binary, on the other hand, gets the current directory through the getcwd(3) system call which returns the same value as readlink -f /proc/self/cwd .
To illustrate, try moving into a directory that is a link to another one: $ ls -l
total 4
drwxr-xr-x 2 terdon terdon 4096 Jun 4 11:22 foo
lrwxrwxrwx 1 terdon terdon 4 Jun 4 11:22 linktofoo -> foo/
$ cd linktofoo
$ echo $PWD
/home/terdon/foo/linktofoo
$ pwd
/home/terdon/foo/linktofoo
$ /bin/pwd
/home/terdon/foo/foo So, in conclusion, on GNU systems (such as Ubuntu), pwd and echo $PWD are equivalent unless you use the -P option, but /bin/pwd is different and behaves like pwd -P . Source https://askubuntu.com/a/476633/291937 | {
"source": [
"https://unix.stackexchange.com/questions/174990",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89717/"
]
} |
175,071 | I try to find a script to decrypt (unhash) the ssh hostnames in the known_hosts file by passing a list of the hostnamses . So, to do exactly the reverse of : ssh-keygen -H -f known_hosts Or also, to do the same as this if the ssh config HashKnownHosts is set to No: ssh-keygen -R know-host.com -f known_hosts
ssh-keyscan -H know-host.com >> known_hosts But without re-downloading the host key (caused by ssh-keyscan). Something like: ssh-keygen --decrypt -f known_hosts --hostnames hostnames.txt Where hostnames.txt contains a list of hostnames. | Lines in the known_hosts file are not encrypted, they are hashed. You can't decrypt them, because they're not encrypted. You can't “unhash” them, because that what a hash is all about — given the hash, it's impossible¹ to discover the original string. The only way to “unhash” is to guess the original string and verify your guess. If you have a list of host names, you can pass them to ssh-keygen -F and replace them by the host name. while read host comment; do
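# look up the existing (hashed) entry for this host and rewrite its first field as the plain host name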
found=$(ssh-keygen -F "$host" | grep -v '^#' | sed "s/^[^ ]*/$host/")
if [ -n "$found" ]; then
ssh-keygen -R "$host"
echo "$found" >>~/.ssh/known_hosts
fi
done <hostnames.txt ¹ In a practical sense, i.e. it would take all the computers existing today longer than the present age of the universe to do it. | {
"source": [
"https://unix.stackexchange.com/questions/175071",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88343/"
]
} |
175,078 | Let's say I start in my local account: avindra@host:~> then I switch to root: host:~ # Then I switch to oracle: [ oracle@host:~] Is there a way for me to drop back into the root shell (the parent), without logging out of the oracle shell? This would be convenient, because the oracle account does not have sudo privileges. A typical scenario with oracle is that I end up in /some/really/deeply/nested/directory, and all kinds of special environment variables are set in particular ways. Here comes the problem: I need to get back into root to touch some system files. Yes, I can log out of oracle to get back to root, but at the cost of losing my current working directory and environment. Is there a way to "switch" to the parent shell using known conventions? | You can simulate a CTRL-Z (which you normally use to temporarily background a process) using the kill command: [tsa20@xxx01:/home/tsa20/software]$ kill -19 $$
[1]+ Stopped sudo -iu tsa20
[root@xxx01 ~]# fg
sudo -iu tsa20
[tsa20@xxx01:/home/tsa20/software]$ bash just traps the CTRL-Z key combination. kill -19 sends SIGSTOP to the process, which is effectively the same thing. | {
"source": [
"https://unix.stackexchange.com/questions/175078",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63602/"
]
} |
175,135 | https://serverfault.com/questions/70939/how-to-replace-a-text-string-in-multiple-files-in-linux https://serverfault.com/questions/228733/how-to-rename-multiple-files-by-replacing-word-in-file-name https://serverfault.com/questions/212153/replace-string-in-files-with-certain-file-extension https://serverfault.com/questions/33158/searching-a-number-of-files-for-a-string-in-linux These mentioned articles have all answered my question. However none of them work for me. I suspect it is because the string I am trying to replace has a # in it. Is there a special way to address this? I have image file that had an é replaced by #U00a9 during a site migration. These look like this: Lucky-#U00a9NBC-80x60.jpg
Lucky-#U00a9NBC-125x125.jpg
Lucky-#U00a9NBC-150x150.jpg
Lucky-#U00a9NBC-250x250.jpg
Lucky-#U00a9NBC-282x232.jpg
Lucky-#U00a9NBC-300x150.jpg
Lucky-#U00a9NBC-300x200.jpg
Lucky-#U00a9NBC-300x250.jpg
Lucky-#U00a9NBC-360x240.jpg
Lucky-#U00a9NBC-400x250.jpg
Lucky-#U00a9NBC-430x270.jpg
Lucky-#U00a9NBC-480x240.jpg
Lucky-#U00a9NBC-600x240.jpg
Lucky-#U00a9NBC-600x250.jpg
Lucky-#U00a9NBC.jpg and I want to change it to something like this: Lucky-safeNBC-80x60.jpg
Lucky-safeNBC-125x125.jpg
Lucky-safeNBC-150x150.jpg
Lucky-safeNBC-250x250.jpg
Lucky-safeNBC-282x232.jpg
Lucky-safeNBC-300x150.jpg
Lucky-safeNBC-300x200.jpg
Lucky-safeNBC-300x250.jpg
Lucky-safeNBC-360x240.jpg
Lucky-safeNBC-400x250.jpg
Lucky-safeNBC-430x270.jpg
Lucky-safeNBC-480x240.jpg
Lucky-safeNBC-600x240.jpg
Lucky-safeNBC-600x250.jpg
Lucky-safeNBC.jpg UPDATE: These examples all start with "LU00a9ucky" but there are many images with different names. I am simply targeting the "#U00a9" portion of the string to replace with "safe". | This is not hard, simply make sure to escape the octothorpe (#) in the name by prepending a reverse-slash (\). find . -type f -name 'Lucky-*' | while read FILE ; do
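# build the new name with the #U00a9 marker replaced by "safe", then rename the file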
newfile="$(echo ${FILE} |sed -e 's/\\#U00a9/safe/')" ;
mv "${FILE}" "${newfile}" ;
done | {
"source": [
"https://unix.stackexchange.com/questions/175135",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
175,325 | I want to use the stat command to get information on a file. I did this: Josephs-MacBook-Pro:Desktop Joseph$ echo 'hello' > info.txt
Josephs-MacBook-Pro:Desktop Joseph$ stat info.txt
16777220 21195549 -rw-r--r-- 1 Joseph staff 0 6 "Dec 21 20:45:31 2014" "Dec 21 20:45:30 2014" "Dec 21 20:45:30 2014" "Dec 21 20:45:30 2014" 4096 8 0 info.txt The 3rd and 4th lines are the output I got. This happens whenever I use the stat command. Meanwhile everyone on the internet gets stuff like: File: `index.htm'
Size: 17137 Blocks: 40 IO Block: 8192 regular file
Device: 8h/8d Inode: 23161443 Links: 1
Access: (0644/-rw-r--r--)
Uid: (17433/comphope) Gid: ( 32/ www)
Access: 2007-04-03 09:20:18.000000000 -0600
Modify: 2007-04-01 23:13:05.000000000 -0600
Change: 2007-04-02
16:36:21.000000000 -0600 I tried this on Terminal and iTerm 2 and in a fresh session.
On the same laptop, I connected to my CentOS server and put in the same commands. It worked perfectly. This leads me to believe that the terminal application isn't the problem.
I'm on a MacBook Pro (Retina, 15-inch, Late 2013) with OS X Yosemite version 10.10.1 What is going on and how can I fix this? | Using the -x option for stat should give you similar output: $ stat -x foo
File: "foo"
Size: 0 FileType: Regular File
Mode: (0644/-rw-r--r--) Uid: ( 501/ Tyilo) Gid: ( 0/ wheel)
Device: 1,4 Inode: 8626874 Links: 1
Access: Mon Dec 22 06:17:54 2014
Modify: Mon Dec 22 06:17:54 2014
Change: Mon Dec 22 06:17:54 2014 To make this the default, you can create an alias and save it to ~/.bashrc : alias stat="stat -x" | {
"source": [
"https://unix.stackexchange.com/questions/175325",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95676/"
]
} |
175,345 | Is there any difference between /run directory and var/run directory. It seems the latter is a link to the former. If the contents are one and the same what is the need for two directories? | From the Wikipedia page on the Filesystem Hierarchy Standard : Modern Linux distributions include a /run directory as a temporary filesystem (tmpfs) which stores volatile runtime data, following the FHS version 3.0. According to the FHS version 2.3, this data should be stored in /var/run but this was a problem in some cases because this directory isn't always available at early boot. As a result, these programs have had to resort to trickery, such as using /dev/.udev, /dev/.mdadm, /dev/.systemd or /dev/.mount directories, even though the device directory isn't intended for such data. Among other advantages, this makes the system easier to use normally with the root filesystem mounted read-only. So if you have already made a temporary filesystem for /run , linking /var/run to it would be the next logical step (as opposed to keeping the files on disk or creating a separate tmpfs ). | {
"source": [
"https://unix.stackexchange.com/questions/175345",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94449/"
]
} |
175,352 | I am using gnome 3.14 + debian 8 jessie + nvidia optimus graphic driver. Those borders on animations are driving me insane and I would love some suggestions on how to resolve it :( ? PS, can someone please tell me what is the name of this bug ? | From the Wikipedia page on the Filesystem Hierarchy Standard : Modern Linux distributions include a /run directory as a temporary filesystem (tmpfs) which stores volatile runtime data, following the FHS version 3.0. According to the FHS version 2.3, this data should be stored in /var/run but this was a problem in some cases because this directory isn't always available at early boot. As a result, these programs have had to resort to trickery, such as using /dev/.udev, /dev/.mdadm, /dev/.systemd or /dev/.mount directories, even though the device directory isn't intended for such data. Among other advantages, this makes the system easier to use normally with the root filesystem mounted read-only. So if you have already made a temporary filesystem for /run , linking /var/run to it would be the next logical step (as opposed to keeping the files on disk or creating a separate tmpfs ). | {
"source": [
"https://unix.stackexchange.com/questions/175352",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94725/"
]
} |
175,380 | From my question Can Process id and session id of a daemon differ? , it was clear that I cannot easily decide the features of a daemon. I have read in different articles and from different forums that service --status-all command can be used to list all the daemons in my system. But I do not think that the command is listing all daemons because NetworkManager , a daemon which is currently running in my Ubuntu 14.04 system, is not listed by the command. Is there some command to list the running daemons or else is there some way to find the daemons from the filesystem itself? | The notion of daemon is attached to processes , not files . For this reason, there is no sense in "finding daemons on the filesystem". Just to make the notion a little clearer : a program is an executable file (visible in the output of ls ) ; a process is an instance of that program (visible in the output of ps ). Now, if we use the information that I gave in my answer , we could find running daemons by searching for processes which run without a controlling terminal attached to them . This can be done quite easily with ps : $ ps -eo 'tty,pid,comm' | grep ^? The tty output field contains "?" when the process has no controlling terminal. The big problem here comes when your system runs a graphical environment. Since GUI programs (i.e. Chromium) are not attached to a terminal, they also appear in the output. On a standard system, where root does not run graphical programs, you could simply restrict the previous list to root's processes. This can be achieved using ps ' -U switch. $ ps -U0 -o 'tty,pid,comm' | grep ^? Yet, two problems arise here: If root is running graphical programs, they will show up. Daemons running without root privileges won't. Note that daemons which start at boot time are usually running as root. Basically, we would like to display all programs without a controlling terminal, but not GUI programs . Luckily for us, there is a program to list GUI processes : xlsclients ! This answer from slm tells us how to use it to list all GUI programs, but we'll have to reverse it, since we want to exclude them. This can be done using the --deselect switch. First, we'll build a list of all GUI programs for which we have running processes. From the answer I just linked, this is done using... $ xlsclients | cut -d' ' -f3 | paste - -s -d ',' Now, ps has a -C switch which allows us to select by command name. We just got our command list, so let's inject it into the ps command line. Note that I'm using --deselect afterwards to reverse my selection. $ ps -C "$(xlsclients | cut -d' ' -f3 | paste - -s -d ',')" --deselect Now, we have a list of all non-GUI processes. Let's not forget our "no TTY attached" rule. For this, I'll add -o tty,args to the previous line in order to output the tty of each process (and its full command line) : $ ps -C "$(xlsclients | cut -d' ' -f3 | paste - -s -d ',')" --deselect -o tty,args | grep ^? The final grep captures all lines which begin with "?", that is, all processes without a controlling tty. And there you go! This final line gives you all non-GUI processes running without a controlling terminal. Note that you could still improve it, for instance, by excluding kernel threads (which aren't processes)... $ ps -C "$(xlsclients | cut -d' ' -f3 | paste - -s -d ',')" --ppid 2 --pid 2 --deselect -o tty,args | grep ^? ... 
or by adding a few columns of information for you to read: $ ps -C "$(xlsclients | cut -d' ' -f3 | paste - -s -d ',')" --ppid 2 --pid 2 --deselect -o tty,uid,pid,ppid,args | grep ^? | {
"source": [
"https://unix.stackexchange.com/questions/175380",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94449/"
]
} |
175,814 | What's the FreeBSD variant of Linux's lsblk and blkid ? I want something that provides the same sort of information as lsblk does in the example below: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
/dev/sda 8:0 0 465.8G 0 disk
├─/dev/sda1 8:1 0 1007K 0 part
├─/dev/sda2 8:2 0 256M 0 part /boot
├─/dev/sda3 8:3 0 9.8G 0 part [SWAP]
├─/dev/sda4 8:4 0 29.3G 0 part /
├─/dev/sda5 8:5 0 29.3G 0 part /var
├─/dev/sda6 8:6 0 297.6G 0 part /home
└─/dev/sda9 8:9 0 16.3G 0 part
/dev/sr0 11:0 1 1024M 0 rom I've tried running commands like man -k blk and apropos dev . There's devinfo , but I'm not sure if that's what I'm really looking for since it doesn't seem to give me to /dev/<DEVICE> path for the devices listed. I even tried devstat , but that seems equally unhelpful EDIT: All I really need to know is the /dev/<DEVICE> path for each block device connected, and maybe the label of said device (if any); regardless of whether or not they have been mounted yet. | Use geom disk list . This will show all disk-like devices (technically, every instance of GEOM "DISK" class). For more information: geom | FreeBSD Manual Pages | {
"source": [
"https://unix.stackexchange.com/questions/175814",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43029/"
]
} |
175,851 | On CentOS 6.4: I installed a newer version of devtoolset (1.1) and was wondering how I would go about permanently setting these to be the default. Right now, when I ssh into my server running CentOS 6, I have to run this command scl enable devtoolset-1.1 bash I tried adding it to ~/.bashrc and simply pasting it on the very last line, without success. | In your ~/.bashrc or ~/.bash_profile , simply source the "enable" script provided with the devtoolset. For example, with the Devtoolset 2, the command is: source /opt/rh/devtoolset-2/enable or source scl_source enable devtoolset-2 A lot more efficient: no forkbomb, no tricky shell | {
"source": [
"https://unix.stackexchange.com/questions/175851",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95779/"
]
} |
175,930 | When I boot, PulseAudio defaults to sending output to Headphones. I'd like it to default to sending output to Line Out. How do I do that? I can manually change where the output is current sent as follows: launch the Pulseaudio Volume Control application, go to the Output Devices tab, and next to Port, select the Line Out option instead of Headphones. However, I have to do this after each time I boot the machine -- after a reboot, Pulseaudio resets itself back to Headphones. That's a bit annoying. How do I make my selection stick and persist across reboots? Here's a screenshot of how the Volume Control application looks after a reboot, with Headphones selected: If I click on the chooser next to Port, I get the following two options: Selecting Line Out makes sound work. (Notice that both Headphones and Line Out are marked as "unplugged", but actually I do have something plugged into the Line Out port.) Comments: I'm not looking for a way to change the default output device . I have only one sound card. pacmd list-sinks shows only one sink. Therefore, pacmd set-default-sink is not helpful. ( This doesn't help either.) Here what I need to set is the "Port", not the output device. If it's relevant, I'm using Fedora 20 and pulseaudio-5.0-25.fc21.x86_64. | I had the same problem (for at least a year now), and the following seemed to work: Taken from: https://bbs.archlinux.org/viewtopic.php?id=164868 Use pavucontrol to change the port to your desired one. Then find the internal name of the port with this command: $ pacmd list | grep "active port"
active port: <hdmi-output-0>
active port: <analog-output-lineout>
active port: <analog-input-linein> Using this information about the internal name of the port, we can change it with the command: pacmd set-sink-port 0 analog-output-lineout If you (or someone else with the problem) has multiple cards, try changing the 0 to a 1. If this works, you can put: set-sink-port 0 analog-output-lineout in your /etc/pulse/default.pa file to have it across reboots. | {
"source": [
"https://unix.stackexchange.com/questions/175930",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9812/"
]
} |
176,001 | Reading about this question: In zsh how can I list all the environment variables? , I wondered, how can I list all the shell variables ? Also, does the distinction between shell variables and environment variables apply to shells other than zsh? I am primarily interested in Bash and Zsh, but it would be great to know how to do this in other mainstream shells. | List all shell variables bash : use set -o posix ; set . The POSIX option is there to avoid outputting too much information, like function definitions. declare -p also works. zsh : use typeset Shell variables and environment variables An environment variable is available to exec() -ed child processes (as a copy; if the parent process changes the variable, the child's environment is not updated). A non-environment variable is only available to the current running shell and fork() -ed subshells. This distinction is present in all shells. (completed thanks to comments) | {
"source": [
"https://unix.stackexchange.com/questions/176001",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37128/"
]
} |
176,027 | I am looking for a way to customize Ash sessions with my own sets of alias es and whatnots. What is the Ash equivalent of Bash's bashrc files? | Ash first reads the following files (if they exist): System: /etc/profile User: ~/.profile | {
"source": [
"https://unix.stackexchange.com/questions/176027",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/686/"
]
} |
176,111 | I have a binary file I would like to include in my C source code (temporarily, for testing purposes) so I would like to obtain the file contents as a C string, something like this: \x01\x02\x03\x04 Is this possible, perhaps by using the od or hexdump utilities? While not necessary, if the string can wrap to the next line every 16 input bytes, and include double-quotes at the start and end of each line, that would be even nicer! I am aware that the string will have embedded nulls ( \x00 ) so I will need to specify the length of the string in the code, to prevent these bytes from terminating the string early. | xxd has a mode for this. The -i / --include option will: output in C include file style. A complete static array definition is written (named after the input file), unless xxd reads from stdin. You can dump that into a file to be #include d, and then just access foo like any other character array (or link it in). It also includes a declaration of the length of the array. The output is wrapped to 80 bytes and looks essentially like what you might write by hand: $ xxd --include foo
unsigned char foo[] = {
0x48, 0x65, 0x6c, 0x6c, 0x6f, 0x2c, 0x20, 0x77, 0x6f, 0x72, 0x6c, 0x64,
0x21, 0x0a, 0x0a, 0x59, 0x6f, 0x75, 0x27, 0x72, 0x65, 0x20, 0x76, 0x65,
0x72, 0x79, 0x20, 0x63, 0x75, 0x72, 0x69, 0x6f, 0x75, 0x73, 0x21, 0x20,
0x57, 0x65, 0x6c, 0x6c, 0x20, 0x64, 0x6f, 0x6e, 0x65, 0x2e, 0x0a
};
unsigned int foo_len = 47; xxd is, somewhat oddly, part of the vim distribution, so you likely have it already. If not, that's where you get it — you can also build the tool on its own out of the vim source. | {
"source": [
"https://unix.stackexchange.com/questions/176111",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6662/"
]
} |
176,322 | bash and fish scripts are not compatible, but I would like to have a file that defines some some environment variables to be initialized by both bash and fish. My proposed solution is defining a ~/.env file that would contain the list of environment variables like so: PATH="$HOME/bin:$PATH"
FOO="bar" I could then just source it in bash and make a script that converts it to fish format and sources that in fish. I was thinking that there may be a better solution than this, so I'm asking for better way of sharing environment variables between bash fish. Note: I'm using OS X. Here is an example .env file that I would like both fish and bash to handle using ridiculous-fish's syntax (assume ~/bin and ~/bin2 are empty directories): setenv _PATH "$PATH"
setenv PATH "$HOME/bin"
setenv PATH "$PATH:$HOME/bin2"
setenv PATH "$PATH:$_PATH" | bash has special syntax for setting environment variables, while fish uses a builtin. I would suggest writing your .env file like so: setenv VAR1 val1
setenv VAR2 val2 and then defining setenv appropriately in the respective shells. In bash (e.g. .bashrc): function setenv() { export "$1=$2"; }
. ~/.env In fish (e.g. config.fish): function setenv; set -gx $argv; end
source ~/.env Note that PATH will require some special handling, since it's an array in fish but a colon delimited string in bash. If you prefer to write setenv PATH "$HOME/bin:$PATH" in .env, you could write fish's setenv like so: function setenv
if [ $argv[1] = PATH ]
# Replace colons and spaces with newlines
set -gx PATH (echo $argv[2] | tr ': ' \n)
else
set -gx $argv
end
end This will mishandle elements in PATH that contain spaces, colons, or newlines. The awkwardness in PATH is due to mixing up colon-delimited strings with true arrays. The preferred way to append to PATH in fish is simply set PATH $PATH ~/bin . | {
"source": [
"https://unix.stackexchange.com/questions/176322",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9191/"
]
} |
176,489 | I want to install Android NDK on my CentOS 6.5 machine. But when I ran the program, it says it needs glibc 2.14 to be able to run. My CentOS 6.5 only has Glibc 2.12 installed. So I tried to update glibc by: $ sudo yum update glibc But after that I found the glibc version is still 2.12, not 2.14. $ ldd --version
ldd (GNU libc) 2.12 I think glibc 2.14 may not be available on CentOS repositories. So how can I update it to glibc 2.14 on CentOS 6.5? | You cannot update glibc on Centos 6 safely. However you can install 2.14 alongside 2.12 easily, then use it to compile projects etc. Here is how: mkdir ~/glibc_install; cd ~/glibc_install
wget http://ftp.gnu.org/gnu/glibc/glibc-2.14.tar.gz
tar zxvf glibc-2.14.tar.gz
cd glibc-2.14
mkdir build
cd build
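# --prefix under /opt keeps the new glibc alongside the system one instead of replacing it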
../configure --prefix=/opt/glibc-2.14
make -j4
sudo make install
export LD_LIBRARY_PATH="/opt/glibc-2.14/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" | {
"source": [
"https://unix.stackexchange.com/questions/176489",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33477/"
]
} |
176,557 | [I had to change the example to make it clear that there are subdirectories.] Let's say I want to recreate a subset of my hierarchy. for arguments sake, let's say I want to backup files in filelist.conf # cat rsync-list
ab*
bb* and # find .
.
./abc
./abc/file-in-abc
./abd
./abd/file-in-abd
./aca
./bba
./bbc
./bca
./rsync-list I would have hoped that rsync -arv --include-from=rsync-list --exclude='*' . /somewhere-else would recreate abc, abd, bba, and bbc. the problem is that it does not descend into the ab* directories, so it does not do abc/file-in-abc and abd/file-in-abd. so, in this sense, the ab* is not really a wildcard that is expanded into abc and abd and then rsynced. | The manpage lists these five options: --exclude=PATTERN exclude files matching PATTERN
--exclude-from=FILE read exclude patterns from FILE
--include=PATTERN don't exclude files matching PATTERN
--include-from=FILE read include patterns from FILE
--files-from=FILE read list of source-file names from FILE --files-from is for exact filenames, and --include-from is for patterns, so you might want to try that instead. Using include-from , you don't need to specify + , but you do need to exclude everything else. For example, given: $ ls -v1 source
image1.tiff
...
image700.tiff
$ cat includes
image7*.tiff Then I can sync only image7*.tiff using: rsync -aP --include-from=includes --exclude='*' source/ target The manpage also says, in the INCLUDE/EXCLUDE PATTERN RULES section: a ’*’ matches any path component, but it stops at slashes. use ’**’ to match anything, including slashes. | {
"source": [
"https://unix.stackexchange.com/questions/176557",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70646/"
]
} |
176,717 | In a CentOS 7 server, I type in firewall-cmd --list-all , and it gives me the following: public (default, active)
interfaces: enp3s0
sources:
services: dhcpv6-client https ssh
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules: What is the dhcpv6-client service? What does it do? And what are the implications of removing it? I read the wikipedia page for dhcpv6 , but it does not tell me specifically what this service on CentOS 7 Firewalld does. This server is accessible via https and email via mydomain.com , but it is a private server that can only be accessed via https by a list of known ip addresses. In addition, this server can receive email from a list of known email addresses. Is the dhcpv6-client service required to reconcile the domain addresses from the known ip https requests and for exchanging the email with known email addresses? | This is needed if you are using DHCP v6 due to the slightly different way that DHCP works in v4 and v6. In DHCP v4 the client establishes the connection with the server and because of the default rules to allow 'established' connections back through the firewall, the returning DHCP response is allowed through. However, in DHCP v6, the initial client request is sent to a statically assigned multicast address while the response has the DHCP server's unicast address as the source (see RFC 3315 ). As the source is now different to the initial request's destination, the 'established' rule will not allow it through and consequently DHCP v6 will fail. To combat this, a new firewalld rule was created called dhcpv6-client which allows incoming DHCP v6 responses to pass - this is the dhcpv6-client rule. If you're not running DHCP v6 on your network or you are using static IP addressing, then you can disable it. | {
"source": [
"https://unix.stackexchange.com/questions/176717",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92670/"
]
} |
176,870 | Is xorg.conf.d no longer used by Arch Linux? If so, does anyone know where the configuration files that once lived under said directory now reside? | The default X config files live in /usr/share/X11/xorg.conf.d in arch. You can still put them in /etc/X11/xorg.conf.d if you want. | {
"source": [
"https://unix.stackexchange.com/questions/176870",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43029/"
]
} |
176,873 | Previously I used source command like this: source file_name But what I'm trying to do is this: echo something | source Which doesn't work. | Since source (or . ) takes a file as argument, you could try process substitution: source <(echo something) | {
"source": [
"https://unix.stackexchange.com/questions/176873",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22884/"
]
} |
176,917 | Many times I accidentally run the cat command on files that have contents up to few thousand lines. I try to kill the cat command with Ctrl + C or Ctrl + Z , but both only take effect after the total output of cat is displayed in the terminal, so I have to wait till cat gets completely executed. Is there a better solution that avoids waiting? Because sometimes the files are up to size of 100MBs, and it gets irritating to wait for it. I am using tcsh . | If the file(s) in question contain lots of data, sending the signal can actually get to cat before it finishes. What you really observe is the finite speed of your terminal - cat sends the data to the terminal and it takes some time for the terminal to display all of it. Remember that it usually has to redraw the whole output window for each line of output (i.e. move the contents of the window one line up and print the next line at the bottom). While there are techniques and algorithms to make this faster than if it was done the straightforward way, it still takes some time. Thus, if you want to get rid of the output as quickly as possible, hide the terminal window , because then (usually) no actual redrawing takes place. In a graphical environment this can mean either minimizing the window or switching to a different virtual desktop, on the Linux virtual console just switch to another one (( Ctrl +) Alt + F x ). Also notice that if you ran this over a slow network link (SSH over a GSM connection, for example), you would definitely see much less output before cat is killed by the signal, because the speed of the terminal redrawing wouldn't be the bottleneck any more. | {
"source": [
"https://unix.stackexchange.com/questions/176917",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93151/"
]
} |
176,997 | $ whoami
admin
$ sudo -S -u otheruser whoami
otheruser
$ sudo -S -u otheruser /bin/bash -l -c 'echo $HOME'
/home/admin Why isn't $HOME being set to /home/otheruser even though bash is invoked as a login shell? Specifically, /home/otheruser/.bashrc isn't being sourced.
Also, /home/otheruser/.profile isn't being sourced. - ( /home/otheruser/.bash_profile doesn't exist) | To invoke a login shell using sudo just use -i . When a command is not specified you'll get a login shell prompt, otherwise you'll get the output of your command. Example (login shell): sudo -i Example (with a specified user): sudo -i -u user Example (with a command): sudo -i -u user whoami Example (print user's $HOME ): sudo -i -u user echo \$HOME Note: The backslash character ensures that the dollar sign reaches the target user's shell and is not interpreted in the calling user's shell. I have just checked the last example with strace which tells you exactly what's happening. The output below shows that the shell is being called with --login and with the specified command, just as in your explicit call to bash, but in addition sudo can do its own work like setting the $HOME . # strace -f -e process sudo -S -i -u user echo \$HOME
execve("/usr/bin/sudo", ["sudo", "-S", "-i", "-u", "user", "echo", "$HOME"], [/* 42 vars */]) = 0
...
[pid 12270] execve("/bin/bash", ["-bash", "--login", "-c", "echo \\$HOME"], [/* 16 vars */]) = 0
... I noticed that you are using -S and I don't think it is generally a good technique. If you want to run commands as a different user without performing authentication from the keyboard, you might want to use SSH instead. It works for localhost as well as for other hosts and provides public key authentication that works without any interactive input. ssh user@localhost echo \$HOME Note: You don't need any special options with SSH as the SSH server always creates a login shell to be accessed by the SSH client.
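A smaller hammer, offered here as an assumed alternative rather than part of the answer above: if all you need is $HOME pointing at the other user's home directory, sudo's -H flag sets just that, without creating a full login shell:
sudo -H -u otheruser /bin/bash -c 'echo $HOME'
Whether .bashrc and friends get read still depends on the shell being a login or interactive one, so -i remains the more complete fix. | {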
"source": [
"https://unix.stackexchange.com/questions/176997",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49146/"
]
} |
177,138 | Help for a simple script #!/bin/bash
array1=(
prova1
prova2
slack64
)
a="slack64"
b="ab"
if [ $a = $b ]
then
echo "$a = $b : a is equal to b"
else
echo "$a = $b: a is not equal to b"
fi This script simply doesn't work. I want a script which checks if slack64 is present in a list (I use an array), and simply tells me yes, it is present, or no.
I don't know how to compare an array with a single variable. | Use a different kind of array: rather than an integer-indexed array, use an associative array, so the key (index) is what you will be checking for. bash-4.0 or later is required for this. declare -A array1=(
[prova1]=1 [prova2]=1 [slack64]=1
)
a=slack64
[[ -n "${array1[$a]}" ]] && printf '%s is in array\n' "$a" In the above we don't really care about the values, they need only be non-empty for this. You can "invert" an indexed array into a new associative array by exchanging the key and value: declare -a array1=(
prova1 prova2 slack64
)
declare -A map # required: declare explicit associative array
for key in "${!array1[@]}"; do map[${array1[$key]}]="$key"; done # see below
a=slack64
[[ -n "${map[$a]}" ]] && printf '%s is in array\n' "$a" This can pay off if you have large arrays which are frequently searched, since the implementation of associative arrays will perform better than array-traversing loops. It won't suit every use case though, since it cannot handle duplicates (though you can use the value as a counter, instead of just 1 as above), and it cannot handle an empty index. Breaking out the complex line above, to explain the "inversion": for key in "${!a[@]}" # expand the array indexes to a list of words
do
map[${array1[$key]}]="$key" # exchange the value ${array1[$key]} with the index $key
done
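And if the shell is older than bash 4.0, so associative arrays aren't available at all, a plain loop over the original indexed array answers the yes/no question directly; a minimal sketch under that assumption:
found=false
for element in "${array1[@]}"; do
[[ $element == "$a" ]] && { found=true; break; }
done
"$found" && echo "$a is in the array"
It is slower for big arrays, but it works in any halfway recent bash. | {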
"source": [
"https://unix.stackexchange.com/questions/177138",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
177,205 | In a tutorial, I'm prompted "If you are running Squeeze, follow these instructions..." and "If you are running Wheezy, follow these other instructions..." When I run uname , I get the following information: Linux dragon-debian 3.2.0-4-686-pae #1 SMP Debian 3.2.63-2+deb7u2 i686 GNU/Linux Is that information enough to know if I'm using Squeeze or Wheezy , or do I get that from somewhere else? | Commands to try: • cat /etc/*-release • cat /proc/version • lsb_release -a - this shows "certain LSB (Linux Standard Base) and distribution-specific information" . For a shell script to get the details on different platforms, there's this related question.
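For the specific Squeeze-or-Wheezy decision, two of the shortest checks on a Debian box are shown below (the printed values are illustrative): /etc/debian_version reports the numeric release, where 6.x is Squeeze and 7.x is Wheezy, and lsb_release -c prints the codename directly.
$ cat /etc/debian_version
7.8
$ lsb_release -c
Codename: wheezy
Incidentally, the deb7u2 fragment in your uname output already hints at a Debian 7 (Wheezy) kernel package. | {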
"source": [
"https://unix.stackexchange.com/questions/177205",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5769/"
]
} |
177,224 | I have the following working code: largest_prime=1
for number_under_test in {1..100}
do
is_prime=true
factors=''
for ((divider = 2; divider < number_under_test-1; divider++));
do
remainder=$(($number_under_test % $divider))
[ $remainder == 0 ] && [ is_prime ] && is_prime=false && factors+=$divider' '
done
[ $is_prime == true ] && echo "${number_under_test} is prime!" || echo "${number_under_test} is NOT prime (factors= $factors)" [ $is_prime == true ] && largest_prime=$number_under_test
done
printf "\nLargest Prime= $largest_prime\n" This code runs quickly is 0.194 seconds. However I found the && is_prime= false a bit hard to read and it could look (to the untrained eye) as if it was being tested rather than being set which is what it does.
So I tried changed the && into an if...then and this works - but is 75 times slower at 14.48 seconds. It's most noticeable on the higher numbers. largest_prime=1
for number_under_test in {1..100}
do
is_prime=true
factors=''
for ((divider = 2; divider < number_under_test-1; divider++));
do
remainder=$(($number_under_test % $divider))
if ([ $remainder == 0 ] && [ $is_prime == true ]); then
is_prime=false
factors+=$divider' '
fi
done
[ $is_prime == true ] && echo "${number_under_test} is prime!" || echo "${number_under_test} is NOT prime (factors= $factors)" [ $is_prime == true ] && largest_prime=$number_under_test
done
printf "\nLargest Prime= $largest_prime\n" Is there any was to have the clarity of the block without the slowness? Update (1/4/2015 10:40am EST) Great feedback! I am now using the following. Any other feedback ? largest_prime=1
separator=' '
for number_under_test in {1..100}; {
is_prime=true
factors=''
for ((divider = 2; divider < (number_under_test/2)+1; divider++)) {
remainder=$(($number_under_test % $divider))
if [ $remainder == 0 ]; then
is_prime=false
factors+=$divider' '
fi
}
if $is_prime; then
printf "\n${number_under_test} IS prime\n\n"
largest_prime=$number_under_test
else
printf "${number_under_test} is NOT prime, factors are: "
printf "$factors\n"
fi
}
printf "\nLargest Prime= $largest_prime\n" | That's because you're spawning a sub-shell every time: if ([ $remainder == 0 ] && [ $is_prime == true ]); then Just remove the parentheses if [ $remainder == 0 ] && [ $is_prime == true ]; then If you want to group commands, there's syntax to do that in the current shell: if { [ $remainder == 0 ] && [ $is_prime == true ]; }; then (the trailing semicolon is required, see the manual ) Note that [ is_prime ] is not the same as [ $is_prime == true ] : you could write that as simply $is_prime (with no brackets) which would invoke the bash built-in true or false command. [ is_prime ] is a test with one argument, the string "is_prime" -- when [ is given a single argument, the result is success if the argument is non-empty, and that literal string is always non-empty, hence always "true". For readability, I would change the very long line [ $is_prime == true ] && echo "${number_under_test} is prime!" || echo "${number_under_test} is NOT prime (factors= $factors)" [ $is_prime == true ] && largest_prime=$number_under_test to if [ $is_prime == true ]; then
echo "${number_under_test} is prime!"
else
echo "${number_under_test} is NOT prime (factors= $factors)"
# removed extraneous [ $is_prime == true ] test that you probably
# didn't notice off the edge of the screen
largest_prime=$number_under_test
fi Don't underestimate whitespace to improve clarity. | {
"source": [
"https://unix.stackexchange.com/questions/177224",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
177,291 | What is the best way to renew a gpg key pair when it got expired and what is the reason for the method? The key pair is already signed by many users and available on public servers. Should the new key be a subkey of the expired private key? Should it be signed by the old (I could try to edit the key and change the date of expiration to tomorrow)? Should the new key sign the old? | Private keys never expire. Only public keys do. Otherwise, the world would never notice the expiration as (hopefully) the world never sees the private keys. For the important part, there is only one way, so that saves a discussion about pros and cons. You have to extend the validity of the main key: gpg --edit-key 0x12345678
gpg> expire
...
gpg> save You have to make a decision about extending validity of vs. replacing the subkey(s). Replacing them gives you limited forward security (limited to rather large time frames). If that is important to you then you should have (separate) subkeys for both encryption and signing (the default is one for encryption only). gpg --edit-key 0x12345678
gpg> key 1
gpg> expire
...
gpg> key 1
gpg> key 2
gpg> expire
...
gpg> save You need key 1 twice for selecting and deselecting because you can extend the validity of only one key at a time. You could also decide to extend the validity unless you have some reason to assume the key has been compromised. Not throwing the whole certificate away in case of compromise makes sense only if you have an offline main key (which IMHO is the only reasonable way to use OpenPGP anyway). The users of your certificate have to get its updated version anyway (either for the new key signatures or for the new key(s)). Replacing makes the key a bit bigger but that is not a problem. If you use smartcards (or plan to do so) then having more (encryption) keys creates a certain inconvenience (a card with the new key cannot decrypt old data).
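One step worth spelling out, as a hedged addition to the answer: after extending or replacing keys you have to redistribute the updated public key, or nobody will see the new expiration date. The key ID and keyserver below are placeholders:
gpg --keyserver hkp://keyserver.example.com --send-keys 0x12345678
gpg --armor --export 0x12345678 > pubkey.asc
The exported pubkey.asc can be mailed or published wherever your correspondents originally obtained your key. | {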
"source": [
"https://unix.stackexchange.com/questions/177291",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26440/"
]
} |
177,513 | Is there a grep-like utility that will enable me to do grep searches with logic operators? I want to be able to nest and combine the logical constructs freely. For example, stuff like this should be possible: grep (term1 && term2) || (term1 && (term3 xor term4)) * I realize this can be done with vanilla grep and additional bash scripting, but my goal here is to avoid having to do that. | There are a lot of ways to use grep with logical operators. Using multiple -e options matches anything that matches any of the patterns, giving the OR operation. Example: grep -e pattern1 -e pattern2 filename In extended regular expressions ( grep -E ), you can use | to combine multiple patterns with the OR operation. Example: grep -E 'pattern1|pattern2' filename grep -v can simulate the NOT operation. There is no AND operator in grep , but you can brute-force
simulate AND by using multiple patterns with | . Example: grep -E 'pattern1.*pattern2|pattern2.*pattern1' filename The above example will match all the lines that contain both pattern1 and pattern2 in either order. This gets very ugly if there are more patterns to combine.
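When the patterns don't have to sit in a single grep invocation, piping greps together is an arguably more readable way to express AND, and grep -v mixes in NOT; for example:
grep pattern1 filename | grep pattern2 | grep -v pattern3
This prints the lines that contain both pattern1 and pattern2 but not pattern3, at the cost of running several grep processes. | {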
"source": [
"https://unix.stackexchange.com/questions/177513",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
177,572 | Used to be able to right click on the tab and change the title. Not sure how to do this anymore. Just upgraded to Fedora 21. EDIT: I have switched from gnome-terminal to ROXterm | Create a function in ~/.bashrc : function set-title() {
if [[ -z "$ORIG" ]]; then
ORIG=$PS1
fi
TITLE="\[\e]2;$*\a\]"
PS1=${ORIG}${TITLE}
} Then use your new command to set the terminal title. It works with spaces in the name too: set-title my new tab title It is possible to subsequently use set-title again (the original PS1 is preserved in ORIG ). | {
"source": [
"https://unix.stackexchange.com/questions/177572",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37688/"
]
} |
177,651 | If I do $ cat > file.txt text Ctrl - D Ctrl - D Question 1: If I don't press enter, why do I have to press Ctrl - D twice? If I do $ cat > file.txt pa bam pshhh Ctrl - Z [2]+ Stopped cat > file.txt
$ cat file.txt
$ cat > file.txt pa bam pshhh Ctrl - Z [2]+ Stopped cat > file.txt
$ cat file.txt
pa bam pshhh Why is the second time the file with 1 line? | In Unix, most objects you can read and write - ordinary files, pipes, terminals, raw disk drives - are all made to resemble files. A program like cat reads from its standard input like this: n = read(0, buffer, 512); which asks for 512 bytes. n is the number of bytes actually read, or -1 if there's an error. If you did this repeatedly with an ordinary file, you'd get a bunch of 512-byte reads, then a somewhat shorter read at the tail end of the file, then 0 if you tried to read past the end of the file. So, cat will run until n is <= 0. Reading from a terminal is slightly different. After you type in a line, terminated by the Enter key, read returns just that line. There are a few special characters you can type. One is Ctrl-D . When you type this, the operating system sends all of the current line that you've typed (but not the Ctrl-D itself) to the program doing the read. And here's the serendipitous thing: if Ctrl-D is the first character on the line, the program is sent a line of length 0 - just like the program would see if it just got to the end of an ordinary file. cat doesn't need to do anything differently , whether it's reading from an ordinary file or a terminal. Another special character is Ctrl-Z . When you type it, anywhere in a line, the operating system discards whatever you've typed up until that point and sends a SIGTSTP signal to the program, which normally stops (pauses) it and returns control to the shell. So in your example $ cat > file.txt
pa bam pshhh<Ctrl+Z>
[2]+ Stopped cat > file.txt you typed some characters that were discarded, then cat was stopped without having written anything to its output file. $ cat > file.txt
pa bam pshhh
<Ctrl+Z>
[2]+ Stopped cat > file.txt you typed in one line, which cat read and wrote to its output file, and then the Ctrl-Z stopped cat . | {
"source": [
"https://unix.stackexchange.com/questions/177651",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96175/"
]
} |
177,843 | I have a JSON output that contains a list of objects stored in a variable. (I may not be phrasing that right) [
{
"item1": "value1",
"item2": "value2",
"sub items": [
{
"subitem": "subvalue"
}
]
},
{
"item1": "value1_2",
"item2": "value2_2",
"sub items_2": [
{
"subitem_2": "subvalue_2"
}
]
}
] I need all the values for item2 in a array for a bash script to be run on ubuntu 14.04.1. I have found a bunch of ways to get the entire result into an array but not just the items I need | Using jq : $ cat json
[
{
"item1": "value1",
"item2": "value2",
"sub items": [
{
"subitem": "subvalue"
}
]
},
{
"item1": "value1_2",
"item2": "value2_2",
"sub items_2": [
{
"subitem_2": "subvalue_2"
}
]
}
] CODE: arr=( $(jq -r '.[].item2' json) )
printf '%s\n' "${arr[@]}" OUTPUT: value2
value2_2
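A caveat added here as a hedged note rather than part of the original answer: the unquoted $( ) inside the array assignment splits on whitespace, so values containing spaces would be broken into several elements. With bash 4 or later, mapfile keeps one output line per element:
mapfile -t arr < <(jq -r '.[].item2' json)
printf '%s\n' "${arr[@]}"
The jq filter is unchanged; only the way its lines are collected into the array differs. | {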
"source": [
"https://unix.stackexchange.com/questions/177843",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/97293/"
]
} |
178,069 | How do I specify command arguments in sudoers? As a background, aws command is actually a gateway to a whole bunch of sub-systems and I want to restrict the user to only run aws s3 cp ...any other args... When I try the following in /etc/sudoers Cmnd_Alias AWSS3_CMD = /usr/local/bin/aws s3 cp, /usr/local/aws/bin/aws s3 cp
gbt1 ALL=(gbt-ops) NOPASSWD: AWSS3_CMD The shell unfortunately prompts for password $ sudo -u gbt-ops aws s3 cp helloworld s3://my-bucket/hw.1
gbt-ops's password: If I remove the command args in Cmnd_Alias, then it flows as desired (without password prompt), but the authorization are way too broad. So, what is the right way of restricting to only certain types of command invocations . Cmnd_Alias AWSS3_CMD = /usr/local/bin/aws, /usr/local/aws/bin/aws Then $ sudo -u gbt-ops aws s3 cp helloworld s3://my-bucket/hw.1
...happy Thanks a lot. | You haven't used any wildcards, but have provided two arguments. Therefore sudo looks for commands exactly as written (excepting path-lookup) (from man 5 sudoers ): If a Cmnd has associated command line arguments, then the arguments in
the Cmnd must match exactly those given by the user on the command line
(or match the wildcards if there are any). Try something like: Cmnd_Alias AWSS3_CMD = /usr/local/bin/aws s3 cp *, /usr/local/aws/bin/aws s3 cp * Note that: Wildcards in command line arguments should be used with care. Because
command line arguments are matched as a single, concatenated string, a
wildcard such as ‘?’ or ‘*’ can match multiple words. So, only one wildcard is needed per command. | {
"source": [
"https://unix.stackexchange.com/questions/178069",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91672/"
]
} |
178,070 | How do I paste from the PRIMARY selection (eg. mouse-selected text) with a keyboard shortcut? Shift+Insert inconsistently pastes from PRIMARY or CLIPBOARD, depending on the application. Background: Ctrl+C copies selected text to CLIPBOARD while mouse-selection copies to PRIMARY.
Paste from CLIPBOARD with Ctrl+V and paste from PRIMARY with mouse-middle-click . In a terminal emulator (gnome-terminal), paste from CLIPBOARD with Ctrl+Shift+V . (Paste from PRIMARY with mouse-middle-click still.) I want to paste from PRIMARY with a keyboard shortcut. In gnome-terminal, this is Shift+Insert , but in gedit and Firefox, Shift+Insert pastes from CLIPBOARD. I want a shortcut that consistently pastes from CLIPBOARD and a different short cut that consistently pastes from PRIMARY. I'm running Ubuntu 14.04 with xmonad and Firefox 34.0 | All the apps you've mentioned are gtk+ apps so it's quite easy to answer Why ... Because in all gtk+ apps ( except one ), Shift + Insert pastes from CLIPBOARD - i.e. it's equivalent to Ctrl + V . The shortcut is hardcoded in gtkentry.c (line 2022) and gtktextview.c (line 1819): gtk_binding_entry_add_signal (binding_set, GDK_KEY_Insert, GDK_SHIFT_MASK,
"paste-clipboard", 0); It is also documented in the GTK+ 3 Reference Manual under GtkEntry : The “paste-clipboard” signal
void
user_function (GtkEntry *entry,
gpointer user_data)
The ::paste-clipboard signal is a keybinding signal which gets emitted
to paste the contents of the clipboard into the text view.
The default bindings for this signal are Ctrl-v and Shift-Insert. As far as I know this was done for consistency with other DE's (see KDE 's Qt key bindings in QTextEdit Class ) and Windows OS 1 . The only exception is gnome-terminal . After long debates, the devs have decided (for consistency with other terminals) that, in gnome-terminal , Shift + Insert should paste from PRIMARY and Ctrl + Shift + V should paste from CLIPBOARD (although you have the options to customize some shortcuts). As to How do you paste selection with a keyboard shortcut... there's no straightforward way. The easiest way is to assign a shortcut to a script that runs xdotool click 2 (simulates clicking the middle-mouse button). While this works (and it should work with all or most DE's and toolkits), it only works if the mouse cursor is actually over the text entry box, otherwise it fails. Another relatively easy way is via Gnome Accessibility, if it's available on your system. It also requires the presence of a numpad. Go to Universal Access >> Pointing & Clicking and enable Mouse Keys . Make sure NumLock is off. You can then use the numpad keys to move the cursor and click. To simulate a middle-mouse button click, press (and release) * (asterisk) then press 5 (here's a short guide ). This solution seems to always work in a gtk+ environment. The downside is that it requires Gnome Accessibility and a numpad. Also, you cannot customize the shortcut. An interesting solution was proposed on gnome-bugzilla (bug 643391) . (Update 2018: issue has now been moved here .) It requires patching some source files and setting configuration options in ~/.config/gtk-3.0/gtk.css (or ~/.gtkrc-2.0 for gtk+ 2 apps). I haven't tried it personally but the feedback is positive. Ideally, you would patch the source files and define a "paste-selection" signal then bind Shift + Insert to "paste-selection" instead of "paste-clipboard" . Andy's code (attached in the bug report linked above) could serve as a guide on how to do that. Even then, it would only affect gtk+ apps (I'm not a KDE/Qt guy so I have no idea how to alter Qt apps behavior). 1: (not to mention IBM's CUA) | {
"source": [
"https://unix.stackexchange.com/questions/178070",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18510/"
]
} |
178,077 | I understand what mounting is in Linux, and I understand device files. However I do not understand WHY we need to mount. For example, as explained in the accepted answer of this question , using this command: mount /dev/cdrom /media/cdrom we are mounting the CDROM device to /media/cdrom and eventually are able to access the files of CDROM with the following command ls /media/cdrom which will list the content of the CDROM. Why not skip mounting altogether, and do the following? ls /dev/cdrom And have the content of the CDROM Listed. I expect one of the answers to be: " This is how Linux is designed ". But if so, then why was it designed that way? Why not access the /dev/cdrom directory directly? What's the real purpose of mounting? | One reason is that block level access is a bit lower level than ls would be able to work with. /dev/cdrom , or dev/sda1 may be your CD ROM drive and partition 1 of your hard drive, respectively, but they aren't implementing ISO 9660 / ext4 - they're just RAW pointers to those devices known as Device Files . One of the things mount determines is HOW to use that raw access - what file system logic / driver / kernel modules are going to manage the reads/writes, or translate ls /mnt/cdrom into which blocks need to be read, and how to interpret the content of those blocks into things like file.txt . Other times, this low level access can be good enough; I've just read from and written to serial ports, usb devices, tty terminals, and other relatively simple devices. I would never try to manually read/write from /dev/sda1 to, say, edit a text file, because I'd basically have to reimplement ext4 logic, which may include, among other things: look up the file inodes, find the storage blocks, read the full block, make my change(s), write the full blocks, then update the inode (perhaps), or instead write this all to the journal - much too difficult. One way to see this for yourself is just to try it: [root@ArchHP dev]# cd /dev/sda1
bash: cd: /dev/sda1: Not a directory /dev is a directory, and you can cd and ls all you like. /dev/sda1 is not a directory; it's a special type of file that is what the kernel offers up as a 'handle' to that device. See the Wikipedia entry on Device Files for a more in-depth treatment.
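You can also ask ls what kind of file it is; in the sample output below (illustrative, the numbers and date will differ on your machine) the leading b in the mode column marks a block special device rather than a directory or regular file:
$ ls -l /dev/sda1
brw-rw---- 1 root disk 8, 1 Jan 5 09:30 /dev/sda1
That single character is the visible hint that mount, not ls, is the tool meant to interpret what lies behind it. | {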
"source": [
"https://unix.stackexchange.com/questions/178077",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95326/"
]
} |