source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
442,552 | When applying sudo to a command which doesn't actually need sudo , sometimes it doesn't ask me for my password. For example under my $HOME , sudo ls . But I remember that it does for some other command, though I forget which one. So I was wondering how sudo decides whether to ask for a password, when given a command which doesn't actually need sudo ? Is there some rule in /etc/sudoers specifying that? My real problem is that when I use du , it sometimes shows "permission denied" for some directories, and sometimes not, probably because I don't have permission on some directories? I apply sudo to du regardless, and thought I would be asked for a password regardless, but actually not on my own directories. | In a typical configuration, the command is irrelevant. You need to enter your password the first time you use sudo, and you don't need your password in that particular shell for the next 15 minutes. From the computer's perspective, there is no such thing as a “command that needs sudo”. Any user can attempt to run any command. The outcome may be nothing but an error message such as “Permission denied” or “No such file or directory”, but it's always possible to run the command. For example, if you run du on a directory tree that has contents that you don't have permission to access, you'll get permission errors. That's what “permission denied” means. If you run sudo du , sudo runs du as root, so you don't get permission errors (that's the point of the root account: root¹ always has permission). When you run sudo du , du runs as root, and sudo is not involved at all after du has started. Whether du encounters permission errors is completely irrelevant to how sudo operates. There are commands that need sudo to do something useful . Usefulness is a human concept. You need to use sudo (or some other methods to run the command as root) if the command does something useful when run as root but not when run under your account. Whether sudo asks for your password depends on two things. Based on the configuration, sudo decides whether you need to be authenticated. By default, sudo requires a password. This can be turned off in several ways, including setting the authenticate option to false and having an applicable rule with the NOPASSWD tag. If sudo requires your password, it may be content to use a cached value. That's ok because the reason sudo needs your password is not to authenticate who's calling it (sudo knows what user invoked it), but to confirm that it's still you at the commands and not somebody who took control over your keyboard. By default, sudo is willing to believe that you're still at the commands if you entered your password less than 15 minutes ago (this can be changed with the timeout option). You need to have entered the password in the same terminal (so that if you remain logged in on one terminal then leave that terminal unattended and then use another terminal, someone can't take advantage of this to use sudo on the other terminal — but this is a very weak advantage and it can be turned off by setting the tty_tickets option to false). ¹ nearly, but that's beyond the scope of this thread. | {
"source": [
"https://unix.stackexchange.com/questions/442552",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
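To make the options named above concrete, here is a minimal sketch of how they might appear in /etc/sudoers (always edited with visudo); the group and command path are placeholders, not anything taken from the question:

# extend the credential cache from the default 15 minutes to 30
Defaults timestamp_timeout=30
# share one credential cache across all of the user's terminals
Defaults !tty_tickets
# let members of the wheel group run du as root without being asked for a password
%wheel ALL=(root) NOPASSWD: /usr/bin/du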
442,598 | I'm connected to local area network with access to the Internet through gateway. There is DNS server in local network which is capable of resolving hostnames of computers from local network. I would like to configure systemd-resolved and systemd-networkd so that lookup requests for local hostnames would be directed (routed) exclusively to local DNS server and lookup requests for all other hostnames would be directed exclusively to another, remote DNS server. Let's assume I don't know where the configuration files are or whether I should add more files and require their path(s) to be specified in the answer. | In the configuration file for local network interface (a file matching the name pattern /etc/systemd/network/*.network ) we have to either specify we want to obtain local DNS server address from DHCP server using DHCP= option : [Network]
DHCP=yes or specify its address explicitly using DNS= option : [Network]
DNS=10.0.0.1 In addition we need to specify (in the same section) local domains using Domains= option Domains=domainA.example domainB.example ~example We specify local domains domainA.example domainB.example to get the following behavior (from systemd-resolved.service, systemd-resolved man page): Lookups for a hostname ending in one of the per-interface domains are
exclusively routed to the matching interfaces. This way hostX.domainA.example will be resolved exclusively by our local DNS server. We specify with ~example that all domains ending in example are to be treated as route-only domains to get the following behavior (from description of this commit) : DNS servers which have route-only domains should only be used for the
specified domains. This way hostY.on.the.internet will be resolved exclusively by our global, remote DNS server. Note Ideally, when using DHCP protocol, local domain names should be obtained from DHCP server instead of being specified explicitly in configuration file of network interface above. See UseDomains= option . However there are still outstanding issues with this feature – see systemd-networkd DHCP search domains option issue. We need to specify remote DNS server as our global, system-wide DNS server. We can do this in /etc/systemd/resolved.conf file: [Resolve]
DNS=8.8.8.8 8.8.4.4 2001:4860:4860::8888 2001:4860:4860::8844 Don't forget to reload configuration and to restart services: $ sudo systemctl daemon-reload
$ sudo systemctl restart systemd-networkd
$ sudo systemctl restart systemd-resolved Caution! Above guarantees apply only when names are being resolved by systemd-resolved – see man page for nss-resolve, libnss_resolve.so.2 and man page for systemd-resolved.service, systemd-resolved . See also: Description of routing lookup requests in systemd related man pages is unclear How to troubleshoot DNS with systemd-resolved? References: Man page for systemd-resolved.service, systemd-resolved Man page for resolved.conf, resolved.conf.d Man page for systemd-network | {
"source": [
"https://unix.stackexchange.com/questions/442598",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5355/"
]
} |
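Pulling the pieces above into a single file, a complete per-interface configuration might look like this (the interface name enp3s0 and the server 10.0.0.1 are placeholders for your own values):

# /etc/systemd/network/20-lan.network
[Match]
Name=enp3s0

[Network]
DHCP=yes
DNS=10.0.0.1
Domains=domainA.example domainB.example ~example

After restarting systemd-networkd and systemd-resolved, resolvectl status (or systemd-resolve --status on older releases) shows which DNS server and which search/route-only domains are attached to each link, which is a quick way to confirm the split is in effect.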
442,692 | I understand the subshell syntax to be (<commands...>) , is $() just a subshell that you can retrieve variable values from? Note: This applies to bash 4.4 based on different wording in their documentation. | $(…) is a subshell by definition: it's a copy of the shell runtime state¹, and changes to the state made in the subshell have no impact on the parent. A subshell is typically implemented by forking a new process (but some shells may optimize this in some cases). It isn't a subshell that you can retrieve variable values from. If changes to variables had an impact on the parent, it wouldn't be a subshell. It's a subshell whose output the parent can retrieve. The subshell created by $(…) has its standard output set to a pipe, and the parent reads from that pipe and collects the output. There are several other constructs that create a subshell. I think this is the full list for bash: Subshell for grouping : ( … ) does nothing but create a subshell and wait for it to terminate). Contrast with { … } which groups commands purely for syntactic purposes and does not create a subshell. Background : … & creates a subshell and does not wait for it to terminate. Pipeline : … | … creates two subshells, one for the left-hand side and one for the right-hand side, and waits for both to terminate. The shell creates a pipe and connects the left-hand side's standard output to the write end of the pipe and the right-hand side's standard input to the read end. In some shells (ksh88, ksh93, zsh, bash with the lastpipe option set and effective), the right-hand side runs in the original shell, so the pipeline construct only creates one subshell. Command substitution : $(…) (also spelled `…` ) creates a subshell with its standard output set to a pipe, collects the output in the parent and expands to that output, minus its trailing newlines. (And the output may be further subject to splitting and globbing, but that's another story.) Process substitution : <(…) creates a subshell with its standard output set to a pipe and expands to the name of the pipe. The parent (or some other process) may open the pipe to communicate with the subshell. >(…) does the same but with the pipe on standard input. Coprocess : coproc … creates a subshell and does not wait for it to terminate. The subshell's standard input and output are each set to a pipe with the parent being connected to the other end of each pipe. ¹ As opposed to running a separate shell . | {
"source": [
"https://unix.stackexchange.com/questions/442692",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3570/"
]
} |
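A quick demonstration of both halves of that definition: state changes stay inside the subshell, and what the parent gets back is the subshell's output:

x=outer
y=$(x=inner; echo "x is now $x")   # the assignment happens only inside the subshell
echo "$x"    # prints: outer
echo "$y"    # prints: x is now inner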
443,118 | Is there any sh code that is not syntactically valid bash code (won't barf on syntax)? I am thinking of overwriting sh with bash for certain commands. | Here is some code that does something different in POSIX sh and Bash: hello &> world Whether that is "invalid" for you I don't know. In Bash, it redirects both standard output and standard error from hello into the file world . In POSIX sh , it runs hello in the background and then makes an empty redirection into world , truncating it (i.e. it's treated as & > ). There are plenty of other cases where Bash extensions will do their thing when run under bash , and would have different effects in a pure POSIX sh . For example, brace expansion is another, and it too operates the same under Bash's POSIX mode and not. As far as static syntax errors go, Bash has both reserved words (like [[ and time ) not specified by POSIX, such that [[ x is valid POSIX shell code but a Bash syntax error, and a history of various POSIX incompatibility bugs that may result in syntax errors, such as the one from this question : x=$(cat <<'EOF'
`
EOF
)
bash: line 2: unexpected EOF while looking for matching ``'
bash: line 5: syntax error: unexpected end of file Syntax-errors-only is a pretty dangerous definition of "invalid" for any circumstance where it matters, but there it is. | {
"source": [
"https://unix.stackexchange.com/questions/443118",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
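Two small demonstrations of the points above, using dash as a stand-in for a plain POSIX sh:

# bash redirects both stdout and stderr of ls into the file "out";
# a POSIX sh backgrounds ls and separately truncates "out"
bash -c 'ls /nonexistent &> out'   # "out" receives the error message
dash -c 'ls /nonexistent &> out'   # the error goes to the terminal, "out" ends up empty

# [[ is a bash reserved word, so bash refuses to even parse this,
# while a POSIX sh just looks for a command literally named "[["
bash -c '[[ x'    # syntax error, nothing is run
dash -c '[[ x'    # parses fine, fails at run time because "[[" is not found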
443,146 | I found line sed 's~ ~~g' in a shell script on a Linux system. What is this ~ ? | It's an alternative delimiter for the sed substitute ( s ) command. Usually, the slash is used, as in s/pattern/replacement/ , but sed allows for almost any character to be used. The main reason for using another delimiter than / in the sed substitution expression is when the expression will act on literal / characters. For example, to substitute the path /some/path/here with /other/path/now , one may do s/\/some\/path\/here/\/other\/path\/now/ This suffers from what's usually referred to as "leaning toothpick syndrome" , which means it's hard to read and to properly maintain. Instead, we are allowed to use another expression delimiter: s#/some/path/here#/other/path/now# Using ~ is just another example of a valid substitution expression delimiter. Your expression s~ ~~g is the same as s/ //g and will remove all spaces from the input. In this case, using another delimiter than / is not needed at all since neither pattern nor replacement contains / . Another way of doing the same thing is tr -d ' ' <infile >outfile | {
"source": [
"https://unix.stackexchange.com/questions/443146",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/289083/"
]
} |
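For instance, the following are all the same substitution, because whatever character follows the s becomes the delimiter for that expression:

printf 'a b c\n' | sed 's~ ~~g'    # abc
printf 'a b c\n' | sed 's, ,,g'    # abc
printf 'a b c\n' | sed 's| ||g'    # abc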
443,507 | 1. Summary I don't understand, why do I need E010 bashate rule . 2. Details I use bashate for linting .sh files. E010 rule: do not on the same line as for for bashate: Correct: #!/bin/bash
for f in bash/*.sh; do
sashacommand "$f"
done Error: #!/bin/bash
for f in bash/*.sh
do sashacommand "$f"
done Is any valid arguments, why I need for and do in the same line? 3. Not useful I can't find an answer to my question in: Google Articles about best coding practices ( example ) bashate documentation . I find only : A set of rules that help keep things consistent in control blocks. These are ignored on long lines that have a continuation, because unrolling that is kind of “interesting” | Syntactically, the following two code snippets are correct and equivalent: for f in bash/*.sh; do
sashacommand "$f"
done for f in bash/*.sh
do sashacommand "$f"
done The latter one could possibly be said to be harder to read as do slightly obfuscates the command in the body of the loop. If the loop contains multiple commands and the next command is on a new line, the obfuscation would be further highlighted: for f in *
do cmd1
cmd2
done ... but to flag it as an "error" is IMHO someone's personal opinion rather than an objective truth. I would say that if you want to prepend the command in the loop with do , then feel free to do so if that makes the code consistent and readable in the eyes of whoever is reading the code. In general, almost any ; may be replaced by a newline. Both ; and newline are command terminators. do is a keyword that means "here follows what needs to be done (in this for loop)". for f in *; do ...; done is the same as for f in *
do
...
done and as for f in *; do
...
done and for f in *
do ...
done The reason to use one over another is readability and local/personal style conventions. Personal opinion: In loop headers that are very long , I think that it may make sense to put do on a new line, as in for i in animals people houses thoughts basketballs bees
do
...
done or for i in \
animals \
people \
houses \
thoughts \
basketballs \
bees
do
...
done The same goes for the then in an if statement. But again, this comes down to one's personal style preferences, or to whatever coding style one's team/project is using. | {
"source": [
"https://unix.stackexchange.com/questions/443507",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237999/"
]
} |
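If, after weighing the arguments above, the rule still feels like noise for your project, bashate can be told to skip individual checks, at least in the copies I have used; check bashate --help to confirm the exact option and rule codes:

bashate -i E010 myscript.sh        # skip only the "do on the same line as for" check
bashate -i E006,E010 myscript.sh   # skip the long-line check as well (rule codes are examples)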
443,539 | echo 'echo "hello, world!";sleep 3;' | parallel This command does not output anything until it has completed. Parallel's man page claims: GNU parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. I guess the devil is in the phrasing: you get the same output as if you would run it normally, but not the output the same as if you would run it normally. I've looked for an option that will do this, for example --results /dev/stdout , but that does not work. My use-case is seeing real-time progress output from the command that I'm running. It's not about how many tasks have completed, which parallel could display for me, but about the progress output of each command individually that I want to see. I would use a bash loop ( for i in $x; do cmd & done; ) but I want to be able to stop all tasks with a single Ctrl+C, which parallel allows me to do. Is it possible to do this in parallel? If not, is there another tool? | I think you're looking for --ungroup . The man page says: --group Group output. Output from each jobs is grouped
together and is only printed when the command is finished.
--group is the default. Can be reversed with -u. -u of course is a synonym for --ungroup . | {
"source": [
"https://unix.stackexchange.com/questions/443539",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30731/"
]
} |
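Applied to the command in the question that looks like this; --line-buffer (in versions that have it) is a useful middle ground that streams output as soon as complete lines are available, so jobs don't interleave half-written lines:

echo 'echo "hello, world!";sleep 3;' | parallel -u
echo 'echo "hello, world!";sleep 3;' | parallel --line-buffer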
443,764 | I'm using CentOS 7. I want to get the PID (if one exists) of the process running on port 3000. I would like to get this PID for the purposes of saving it to a variable in a shell script. So far I have [rails@server proddir]$ sudo ss -lptn 'sport = :3000'
State Recv-Q Send-Q Local Address:Port Peer Address:Port
Cannot open netlink socket: Protocol not supported
LISTEN 0 0 *:3000 *:* users:(("ruby",pid=4861,fd=7),("ruby",pid=4857,fd=7),("ruby",pid=4855,fd=7),("ruby",pid=4851,fd=7),("ruby",pid=4843,fd=7)) but I can't figure out how to isolate the PID all by itself without all this extra information. | Another possible solution: lsof -t -i :<port> -s <PROTO>:LISTEN For example: # lsof -i :22 -s TCP:LISTEN
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 1392 root 3u IPv4 19944 0t0 TCP *:ssh (LISTEN)
sshd 1392 root 4u IPv6 19946 0t0 TCP *:ssh (LISTEN)
# lsof -t -i :22 -s TCP:LISTEN
1392 | {
"source": [
"https://unix.stackexchange.com/questions/443764",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166917/"
]
} |
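For the script in the question, either tool's output can be captured directly into a variable (the ss variant uses GNU grep's -P option, which CentOS 7 ships):

# with lsof: one PID per line, empty if nothing is listening on the port
pid=$(lsof -t -i :3000 -s TCP:LISTEN)
# with ss: pull the pid= fields out of the users:(...) column
pid=$(ss -lptn 'sport = :3000' | grep -oP 'pid=\K\d+' | sort -u)
echo "$pid"    # may list several PIDs, one per line, if workers share the socket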
444,682 | Vim 8.1 added the :terminal command, which opens up a new bash terminal as a split. However, it always seems to be a horizontal split, and I prefer vertical splits. Is there a way to open a terminal as a vertical split without using: :vsp
:terminal
<c-w>j
:q Alternatively, is there a way I could add it as a command in my .vimrc , like so: command Vterm :vsp | :terminal | <c-w>j | :q The command above chokes on trying to execute <c-w>j , opens a new vim split with the following: executing job failed: No such file or directory Just having: command Vterm :vsp | :terminal Works fine, but leaves the original split. | You can use the :vert[ical] command modifier : :vert term :vertical works with any command that splits a window, for example: :vert copen
:vert help vert | {
"source": [
"https://unix.stackexchange.com/questions/444682",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291583/"
]
} |
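That also gives a clean answer to the second half of the question: instead of chaining :vsp and window commands, the modifier can be baked into a user command in ~/.vimrc:

" open a terminal in a vertical split with :Vterm
command! Vterm vertical terminal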
444,946 | $ ls -l /tmp/test/my\ dir/
total 0 I was wondering why the following ways to run the above command fail or succeed? $ abc='ls -l "/tmp/test/my dir"'
$ $abc
ls: cannot access '"/tmp/test/my': No such file or directory
ls: cannot access 'dir"': No such file or directory
$ "$abc"
bash: ls -l "/tmp/test/my dir": No such file or directory
$ bash -c $abc
'my dir'
$ bash -c "$abc"
total 0
$ eval $abc
total 0
$ eval "$abc"
total 0 | This has been discussed in a number of questions on unix.SE, I'll try to collect all issues I can come up with here. Below is a description of why and how the various attempts fail, a way to do it properly with a function (for a fixed command), or with shell arrays (Bash/ksh/zsh) or the $@ pseudo-array (POSIX sh), both of which also allow building the command line pieces, if you e.g. only need to vary some optoins and notes about using eval to do this. Some references at the end. For the purposes here, it doesn't matter much if it's only the command arguments or also the command name that is to be stored in a variable. They're processed similarly up to the point where the command is launched, at which point the shell just takes the first word as the name of the command to run. Why it fails The reason you face those problems is the fact that word splitting is quite simple and doesn't lend itself to complex cases, and the fact that quotes expanded from variables don't act as quotes, but are just ordinary characters. (Note that the part about quotes is similar to every other programming language: e.g. char *s = "foo()"; printf("%s\n", s) does not call the function foo() in C, but just prints the string foo() . That's different in macro processors, like m4, the C preprocessor, or Make (to some extent). The shell is a programming language, not a macro processor.) On Unix-like systems, it's the shell that processes quotes and variable expansions on the command line, turning it from a single string into the list of strings that the underlying system call passes to the launched command. The program itself doesn't see the quotes the shell processed. E.g. if given the command ls -l "foo bar" , the shell turns that into the three strings ls , -l and foo bar (removing the quotes), and passes those to ls . (Even the command name is passed, though not all programs use it.) The cases presented in the question: The assignment here assigns the single string ls -l "/tmp/test/my dir" to abc : $ abc='ls -l "/tmp/test/my dir"' Below, $abc is split on whitespace, and ls gets the three arguments -l , "/tmp/test/my and dir" . The quotes here are just data, so there's one at the front of the second argument and another at the back of the third. The option works, but the path gets incorrectly processed as ls sees the quotes as part of the filenames: $ $abc
ls: cannot access '"/tmp/test/my': No such file or directory
ls: cannot access 'dir"': No such file or directory Here, the expansion is quoted, so it's kept as a single word. The shell tries to find a program literally called ls -l "/tmp/test/my dir" , spaces and quotes included. $ "$abc"
bash: ls -l "/tmp/test/my dir": No such file or directory And here, $abc is split, and only the first resulting word is taken as the argument to -c , so Bash just runs ls in the current directory. The other words are arguments to bash, and are used to fill $0 , $1 , etc. $ bash -c $abc
'my dir' With bash -c "$abc" , and eval "$abc" , there's an additional shell processing step, which does make the quotes work, but also causes all shell expansions to be processed again , so there's a risk of accidentally running e.g. a command substitution from user-provided data, unless you're very careful about quoting. Better ways to do it The two better ways to store a command are a) use a function instead, b) use an array variable (or the positional parameters). Using functions: Simply declare a function with the command inside, and run the function as if it were a command. Expansions in commands within the function are only processed when the command runs, not when it's defined, and you don't need to quote the individual commands. Though this really only helps if you have a fixed command you need to store (or more than one fixed command). # define it
myls() {
ls -l "/tmp/test/my dir"
}
# run it
myls It's also possible to define multiple functions and use a variable to store the name of the function you want to run in the end. Using an array: Arrays allow creating multi-word variables where the individual words contain white space. Here, the individual words are stored as distinct array elements, and the "${array[@]}" expansion expands each element as separate shell words: # define the array
mycmd=(ls -l "/tmp/test/my dir")
# expand the array, run the command
"${mycmd[@]}" The command is written inside the parentheses exactly as it would be written when running the command. The processing the shell does is the same in both cases, just in one it only saves the resulting list of strings, instead of using it to run a program. The syntax for expanding the array later is slightly horrible, though, and the quotes around it are important. Arrays also allow you to build the command line piece-by-piece. For example: mycmd=(ls) # initial command
if [ "$want_detail" = 1 ]; then
mycmd+=(-l) # optional flag, append to array
fi
mycmd+=("$targetdir") # the filename
"${mycmd[@]}" or keep parts of the command line constant and use the array fill just a part of it, like options or filenames: options=(-x -v)
files=(file1 "file name with whitespace")
target=/somedir
somecommand "${options[@]}" "${files[@]}" "$target" ( somecommand being a generic placeholder name here, not any real command.) The downside of arrays is that they're not a standard feature, so plain POSIX shells (like dash , the default /bin/sh in Debian/Ubuntu) don't support them (but see below). Bash, ksh and zsh do, however, so it's likely your system has some shell that supports arrays. Using "$@" In shells with no support for named arrays, one can still use the positional parameters (the pseudo-array "$@" ) to hold the arguments of a command. The following should be portable script bits that do the equivalent of the code bits in the previous section. The array is replaced with "$@" , the list of positional parameters. Setting "$@" is done with set , and the double quotes around "$@" are important (these cause the elements of the list to be individually quoted). First, simply storing a command with arguments in "$@" and running it: set -- ls -l "/tmp/test/my dir"
"$@" Conditionally setting parts of the command line options for a command: set -- ls
if [ "$want_detail" = 1 ]; then
set -- "$@" -l
fi
set -- "$@" "$targetdir"
"$@" Only using "$@" for options and operands: set -- -x -v
set -- "$@" file1 "file name with whitespace"
set -- "$@" /somedir
somecommand "$@" Of course, "$@" is usually filled with the arguments to the script itself, so you'll have to save them somewhere before re-purposing "$@" . To conditionally pass a single argument, you can also use the alternate value expansion ${var:+word} with some careful quoting. Here, we include -f and the filename only if the filename is nonempty: file="foo bar"
somecommand ${file:+-f "$file"} Using eval (be careful here!) eval takes a string and runs it as a command, just like if it was entered on the shell command line. This includes all quote and expansion processing, which is both useful and dangerous. In the simple case, it allows doing just what we want: cmd='ls -l "/tmp/test/my dir"'
eval "$cmd" With eval , the quotes are processed, so ls eventually sees just the two arguments -l and /tmp/test/my dir , like we want. eval is also smart enough to concatenate any arguments it gets, so eval $cmd could also work in some cases, but e.g. all runs of whitespace would be changed to single spaces. It's still better to quote the variable there as that will ensure it gets unmodified to eval . However, it's dangerous to include user input in the command string to eval . For example, this seems to work: read -r filename
cmd="ls -ld '$filename'"
eval "$cmd"; But if the user gives input that contains single quotes, they can break out of the quoting and run arbitrary commands! E.g. with the input '$(whatever)'.txt , your script happily runs the command substitution. That it could have been rm -rf (or worse) instead. The issue there is that the value of $filename was embedded in the command line that eval runs. It was expanded before eval , which saw e.g. the command ls -l ''$(whatever)'.txt' . You would need to pre-process the input to be safe. If we do it the other way, keeping the filename in the variable, and letting the eval command expand it, it's safer again: read -r filename
cmd='ls -ld "$filename"'
eval "$cmd"; Note the outer quotes are now single quotes, so expansions within do not happen. Hence, eval sees the command ls -l "$filename" and expands the filename safely itself. But that's not much different from just storing the command in a function or an array. With functions or arrays, there is no such problem since the words are kept separate for the whole time, and there's no quote or other processing for the contents of filename . read -r filename
cmd=(ls -ld -- "$filename")
"${cmd[@]}" Pretty much the only reason to use eval is one where the varying part involves shell syntax elements that can't be brought in via variables (pipelines, redirections, etc.). However, you'll then need to quote/escape everything else on the command line that needs protection from the additional parsing step (see link below). In any case, it's best to avoid embedding input from the user in the eval command! References Word Splitting in BashGuide BashFAQ/050 or "I'm trying to put a command in a variable, but the complex cases always fail!" The question Why does my shell script choke on whitespace or other special characters? , which discusses a number of issues related to quoting and whitespace, including storing commands. Escape a variable for use as content of another script How can I conditionally pass an argument from a POSIX shell script? | {
"source": [
"https://unix.stackexchange.com/questions/444946",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
444,962 | I have file with this output, and I am trying to collect useful data from my file. R1#show ip route 192.168.5.130
Routing Descriptor Blocks:
* 192.168.5.128, from 192.168.5.162, 00:20:16 ago, via Serial0/0/0.2
Route metric is 2172416, traffic share count is 1
Total delay is 20100 microseconds, minimum bandwidth is 1544 Kbit/sec
Reliability 255/255, minimum MTU 1500 bytes
Loading 1/255, Hops 1 I want to grep match if my above paragraph have word "metric" then it should display the whole paragraph not just that line. Also is there a way I can check condition that if metric==2172416 then return the whole paragraph. I would like to know the simplest and easiest way to do it, since I am going to apply that in different scenarios. Also If I have this in my file, how can fetch just the lines from Apr 11? Can I use wildcard here? CPU0:Apr 11 05:22:04.768 UTC: pim[1182]: %ROUTING-IPV4_PIM-5-INTCHG :
CPU0:Apr 11 05:22:04.769 UTC: pim[1182]: %ROUTING-IPV4_PIM-5-NBRCHG :
CPU0:Apr 11 05:22:04.769 UTC: pim[1182]: %ROUTING-IPV4_PIM-5-NBRCHG :
CPU0:Apr 11 06:09:53.066 UTC: pim[1182]: %ROUTING-IPV4_PIM-5-INTCHG :
CPU0:Apr 11 06:09:53.066 UTC: pim[1182]: %ROUTING-IPV4_PIM-5-NBRCHG :
CPU0:Apr 11 06:09:56.707 UTC: pim[1182]: %ROUTING-IPV4_PIM-5-NBRCHG : | This has been discussed in a number of questions on unix.SE, I'll try to collect all issues I can come up with here. Below is a description of why and how the various attempts fail, a way to do it properly with a function (for a fixed command), or with shell arrays (Bash/ksh/zsh) or the $@ pseudo-array (POSIX sh), both of which also allow building the command line pieces, if you e.g. only need to vary some optoins and notes about using eval to do this. Some references at the end. For the purposes here, it doesn't matter much if it's only the command arguments or also the command name that is to be stored in a variable. They're processed similarly up to the point where the command is launched, at which point the shell just takes the first word as the name of the command to run. Why it fails The reason you face those problems is the fact that word splitting is quite simple and doesn't lend itself to complex cases, and the fact that quotes expanded from variables don't act as quotes, but are just ordinary characters. (Note that the part about quotes is similar to every other programming language: e.g. char *s = "foo()"; printf("%s\n", s) does not call the function foo() in C, but just prints the string foo() . That's different in macro processors, like m4, the C preprocessor, or Make (to some extent). The shell is a programming language, not a macro processor.) On Unix-like systems, it's the shell that processes quotes and variable expansions on the command line, turning it from a single string into the list of strings that the underlying system call passes to the launched command. The program itself doesn't see the quotes the shell processed. E.g. if given the command ls -l "foo bar" , the shell turns that into the three strings ls , -l and foo bar (removing the quotes), and passes those to ls . (Even the command name is passed, though not all programs use it.) The cases presented in the question: The assignment here assigns the single string ls -l "/tmp/test/my dir" to abc : $ abc='ls -l "/tmp/test/my dir"' Below, $abc is split on whitespace, and ls gets the three arguments -l , "/tmp/test/my and dir" . The quotes here are just data, so there's one at the front of the second argument and another at the back of the third. The option works, but the path gets incorrectly processed as ls sees the quotes as part of the filenames: $ $abc
ls: cannot access '"/tmp/test/my': No such file or directory
ls: cannot access 'dir"': No such file or directory Here, the expansion is quoted, so it's kept as a single word. The shell tries to find a program literally called ls -l "/tmp/test/my dir" , spaces and quotes included. $ "$abc"
bash: ls -l "/tmp/test/my dir": No such file or directory And here, $abc is split, and only the first resulting word is taken as the argument to -c , so Bash just runs ls in the current directory. The other words are arguments to bash, and are used to fill $0 , $1 , etc. $ bash -c $abc
'my dir' With bash -c "$abc" , and eval "$abc" , there's an additional shell processing step, which does make the quotes work, but also causes all shell expansions to be processed again , so there's a risk of accidentally running e.g. a command substitution from user-provided data, unless you're very careful about quoting. Better ways to do it The two better ways to store a command are a) use a function instead, b) use an array variable (or the positional parameters). Using functions: Simply declare a function with the command inside, and run the function as if it were a command. Expansions in commands within the function are only processed when the command runs, not when it's defined, and you don't need to quote the individual commands. Though this really only helps if you have a fixed command you need to store (or more than one fixed command). # define it
myls() {
ls -l "/tmp/test/my dir"
}
# run it
myls It's also possible to define multiple functions and use a variable to store the name of the function you want to run in the end. Using an array: Arrays allow creating multi-word variables where the individual words contain white space. Here, the individual words are stored as distinct array elements, and the "${array[@]}" expansion expands each element as separate shell words: # define the array
mycmd=(ls -l "/tmp/test/my dir")
# expand the array, run the command
"${mycmd[@]}" The command is written inside the parentheses exactly as it would be written when running the command. The processing the shell does is the same in both cases, just in one it only saves the resulting list of strings, instead of using it to run a program. The syntax for expanding the array later is slightly horrible, though, and the quotes around it are important. Arrays also allow you to build the command line piece-by-piece. For example: mycmd=(ls) # initial command
if [ "$want_detail" = 1 ]; then
mycmd+=(-l) # optional flag, append to array
fi
mycmd+=("$targetdir") # the filename
"${mycmd[@]}" or keep parts of the command line constant and use the array fill just a part of it, like options or filenames: options=(-x -v)
files=(file1 "file name with whitespace")
target=/somedir
somecommand "${options[@]}" "${files[@]}" "$target" ( somecommand being a generic placeholder name here, not any real command.) The downside of arrays is that they're not a standard feature, so plain POSIX shells (like dash , the default /bin/sh in Debian/Ubuntu) don't support them (but see below). Bash, ksh and zsh do, however, so it's likely your system has some shell that supports arrays. Using "$@" In shells with no support for named arrays, one can still use the positional parameters (the pseudo-array "$@" ) to hold the arguments of a command. The following should be portable script bits that do the equivalent of the code bits in the previous section. The array is replaced with "$@" , the list of positional parameters. Setting "$@" is done with set , and the double quotes around "$@" are important (these cause the elements of the list to be individually quoted). First, simply storing a command with arguments in "$@" and running it: set -- ls -l "/tmp/test/my dir"
"$@" Conditionally setting parts of the command line options for a command: set -- ls
if [ "$want_detail" = 1 ]; then
set -- "$@" -l
fi
set -- "$@" "$targetdir"
"$@" Only using "$@" for options and operands: set -- -x -v
set -- "$@" file1 "file name with whitespace"
set -- "$@" /somedir
somecommand "$@" Of course, "$@" is usually filled with the arguments to the script itself, so you'll have to save them somewhere before re-purposing "$@" . To conditionally pass a single argument, you can also use the alternate value expansion ${var:+word} with some careful quoting. Here, we include -f and the filename only if the filename is nonempty: file="foo bar"
somecommand ${file:+-f "$file"} Using eval (be careful here!) eval takes a string and runs it as a command, just like if it was entered on the shell command line. This includes all quote and expansion processing, which is both useful and dangerous. In the simple case, it allows doing just what we want: cmd='ls -l "/tmp/test/my dir"'
eval "$cmd" With eval , the quotes are processed, so ls eventually sees just the two arguments -l and /tmp/test/my dir , like we want. eval is also smart enough to concatenate any arguments it gets, so eval $cmd could also work in some cases, but e.g. all runs of whitespace would be changed to single spaces. It's still better to quote the variable there as that will ensure it gets unmodified to eval . However, it's dangerous to include user input in the command string to eval . For example, this seems to work: read -r filename
cmd="ls -ld '$filename'"
eval "$cmd"; But if the user gives input that contains single quotes, they can break out of the quoting and run arbitrary commands! E.g. with the input '$(whatever)'.txt , your script happily runs the command substitution. That it could have been rm -rf (or worse) instead. The issue there is that the value of $filename was embedded in the command line that eval runs. It was expanded before eval , which saw e.g. the command ls -l ''$(whatever)'.txt' . You would need to pre-process the input to be safe. If we do it the other way, keeping the filename in the variable, and letting the eval command expand it, it's safer again: read -r filename
cmd='ls -ld "$filename"'
eval "$cmd"; Note the outer quotes are now single quotes, so expansions within do not happen. Hence, eval sees the command ls -l "$filename" and expands the filename safely itself. But that's not much different from just storing the command in a function or an array. With functions or arrays, there is no such problem since the words are kept separate for the whole time, and there's no quote or other processing for the contents of filename . read -r filename
cmd=(ls -ld -- "$filename")
"${cmd[@]}" Pretty much the only reason to use eval is one where the varying part involves shell syntax elements that can't be brought in via variables (pipelines, redirections, etc.). However, you'll then need to quote/escape everything else on the command line that needs protection from the additional parsing step (see link below). In any case, it's best to avoid embedding input from the user in the eval command! References Word Splitting in BashGuide BashFAQ/050 or "I'm trying to put a command in a variable, but the complex cases always fail!" The question Why does my shell script choke on whitespace or other special characters? , which discusses a number of issues related to quoting and whitespace, including storing commands. Escape a variable for use as content of another script How can I conditionally pass an argument from a POSIX shell script? | {
"source": [
"https://unix.stackexchange.com/questions/444962",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291229/"
]
} |
444,970 | I have a file alphanum with these two lines: 123 abc
this is a line I am confused as to why, when I run sed 's/[a-z]*/SUB/' alphanum , I get the following output: SUB123 abc
SUB is a line I was expecting: 123 SUB
SUB is a line I found a fix (use sed 's/[a-z][a-z]*/SUB/' instead), but I don't understand why it works and mine doesn't. Can you help? | The pattern [a-z]* matches zero or more characters in the range a to z (the actual characters are dependent on the current locale). There are zero such characters at the very start of the string 123 abc (i.e. the pattern matches), and also four of them at the start of this is a line . If you need at least one match, then use [a-z][a-z]* or [a-z]\{1,\} , or enable extended regular expressions with sed -E and use [a-z]+ . To visualize where the pattern matches, add parentheses around each match: $ sed 's/[a-z]*/(&)/' file
()123 abc
(this) is a line Or, to see all matches on the lines: $ sed 's/[a-z]*/(&)/g' file
()1()2()3() (abc)
(this) (is) (a) (line) Compare that last result with $ sed -E 's/[a-z]+/(&)/g' file
123 (abc)
(this) (is) (a) (line) | {
"source": [
"https://unix.stackexchange.com/questions/444970",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/271929/"
]
} |
444,998 | I don't understand the best way to set fs.inotify.max_user_watches with sysctl . In fact, I don't understand much of what is happening here other than the fact that I need to set the number of files that can be watched by a particular process. I believe that I can see the max number of users by running this command: cat /proc/sys/fs/inotify/max_user_watches My understanding is that some people suggest changing /proc/sys/fs/inotify/max_user_watches by opening /etc/sysctl.conf in an editor and adding this to it: fs.inotify.max_user_watches=524288 Then run sudo sysctl -p to -- presumably -- process the changes made to the file. Others suggest running commands like this: sudo sysctl -w fs.inotify.max_user_instances=1024
sudo sysctl -w fs.inotify.max_user_watches=12288 I know that -w stands for write, but what is being written and where? Is it just that this command changes /proc/.../max_user_watches ? Which of the two approaches outlined above is best? I understand that 524288 and 12288 are different numbers, but I don't understand the difference between the effect of running -p and -w . | sysctl -w writes kernel parameter values to the corresponding keys under /proc/sys : sudo sysctl -w fs.inotify.max_user_watches=12288 writes 12288 to /proc/sys/fs/inotify/max_user_watches . (It’s not equivalent, it’s exactly that; interested readers can strace it to see for themselves.) sysctl -p loads settings from a file, either /etc/sysctl.conf (the default), or whatever file is specified after -p . The difference between both approaches, beyond the different sources of the parameters and values they write, is that -w only changes the parameters until the next reboot, whereas values stored in /etc/sysctl.conf will be applied again every time the system boots. My usual approach is to use -w to test values, then once I’m sure the new settings are OK, write them to /etc/sysctl.conf or a file under /etc/sysctl.d (usually /etc/sysctl.d/local.conf ). See the sysctl and sysctl.conf manual pages ( man sysctl and man sysctl.conf on your system) for details. | {
"source": [
"https://unix.stackexchange.com/questions/444998",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91728/"
]
} |
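A common workflow that combines the two: test the value with -w first, then persist it in a drop-in file so it survives reboots (the file name 99-inotify.conf is arbitrary; anything under /etc/sysctl.d/ ending in .conf is read):

# temporary, effective immediately, lost at the next reboot
sudo sysctl -w fs.inotify.max_user_watches=524288
# permanent: store it in a drop-in file and reload all configuration files
echo 'fs.inotify.max_user_watches=524288' | sudo tee /etc/sysctl.d/99-inotify.conf
sudo sysctl --system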
445,829 | I have researched the heck out of this question and found two pages about the issue but not clarifying it. In the debian-installer during the optional software selection phase you have these options: Debian desktop environment (already ticked by default)
... GNOME (not ticked)
... xfce (not ticked)
... KDE (not ticked)
... Cinnamon (not ticked)
... MATE (not ticked)
... LXDE (not ticked) What does Debian desktop environment actually install? Does it install a GUI (Gnome, my understanding, is the default) or does it just install a handful of programs useful for desktop users but which do not include a GUI? Do you have to tick off Gnome to get the GUI or not? And if not, what is the purpose of the option to tick off Gnome in addition to Debian Desktop Environment? The page concerning Desktop Environments in the Debian Wiki does not clarify the issue. This thread on the Debian User Forums concerns this very issue but has a raft of contradictory answers. | If no specific desktop environment is selected, but the “Debian desktop environment” is, the default which ends up installed is determined by tasksel : on i386 and amd64 , it’s GNOME, on other architectures, it’s XFCE. | {
"source": [
"https://unix.stackexchange.com/questions/445829",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/288980/"
]
} |
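If you want to check for yourself what that default amounts to on an installed system, tasksel can list the tasks and their package sets (task names such as desktop and gnome-desktop are the usual ones, but confirm with the first command):

tasksel --list-tasks              # show available tasks and which are installed
tasksel --task-packages desktop   # list what the "Debian desktop environment" task pulls in
tasksel --task-packages gnome-desktop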
445,890 | If I run the command ip link | awk '{print $2}' in Ubuntu 18.04, I get this output: lo:
00:00:00:00:00:00
wlp1s0:
2c:6e:85:bf:01:00
enp2s0:
14:18:77:a3:01:02 I want it formatted like this (without lo ) wlp1s0: 2c:6e:85:bf:01:00
enp2s0: 14:18:77:a3:01:02 How do I do this? | You can get the MAC address from /sys/class/net/<dev>/address : $ cat /sys/class/net/enp0s3/address
08:00:27:15:dc:fd So, something like: find /sys/class/net -mindepth 1 -maxdepth 1 ! -name lo -printf "%P: " -execdir cat {}/address \; Gives me: enp0s3: 08:00:27:15:dc:fd
docker0: 02:42:61:cb:85:33 Or, using ip 's one-line mode, which is convenient for scripting: $ ip -o link | awk '$2 != "lo:" {print $2, $(NF-2)}'
enp0s3: 08:00:27:15:dc:fd
docker0: 02:42:61:cb:85:33 | {
"source": [
"https://unix.stackexchange.com/questions/445890",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/266428/"
]
} |
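The /sys approach can also be written as a small loop if you want the same name: MAC formatting without find's -printf:

for dev in /sys/class/net/*; do
    name=${dev##*/}
    [ "$name" = lo ] && continue
    [ -r "$dev/address" ] || continue
    printf '%s: %s\n' "$name" "$(cat "$dev/address")"
done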
446,049 | Never realized that you could do this until just now: : >> file It seems to be functionally similar to: touch file Is there a reason why most resources seem to prefer touch over this shell builtin? | You don't even need to use : ; you can just > file (at least in bash ; other shells may behave differently). In practical terms, there is no real difference here (though the minuscule overhead of calling out to /bin/touch is a thing). touch , however, can also be used to modify the timestamps on a file that already exists without changing or erasing the contents; further, > file will blow out any file that already exists. This can be worked around by instead using >> file . One other difference with touch is that you can have it create (or update the timestamp on) multiple files at once (e.g. touch foo bar baz quux ) with a more succinct syntax than with redirection, where each file needs its own redirection (e.g. >foo >bar >baz >quux ). Using touch : $ touch foo; stat -x foo; sleep 2; touch foo; stat -x foo
File: "foo"
Size: 0 FileType: Regular File
Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)
Device: 1,5 Inode: 8597208698 Links: 1
Access: Fri May 25 10:55:19 2018
Modify: Fri May 25 10:55:19 2018
Change: Fri May 25 10:55:19 2018
File: "foo"
Size: 0 FileType: Regular File
Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)
Device: 1,5 Inode: 8597208698 Links: 1
Access: Fri May 25 10:55:21 2018
Modify: Fri May 25 10:55:21 2018
Change: Fri May 25 10:55:21 2018 Using redirection: $ > foo; stat -x foo; sleep 2; >> foo; stat -x foo
File: "foo"
Size: 0 FileType: Regular File
Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)
Device: 1,5 Inode: 8597208698 Links: 1
Access: Fri May 25 10:55:21 2018
Modify: Fri May 25 10:56:25 2018
Change: Fri May 25 10:56:25 2018
File: "foo"
Size: 0 FileType: Regular File
Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)
Device: 1,5 Inode: 8597208698 Links: 1
Access: Fri May 25 10:55:21 2018
Modify: Fri May 25 10:56:25 2018
Change: Fri May 25 10:56:25 2018 | {
"source": [
"https://unix.stackexchange.com/questions/446049",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/273175/"
]
} |
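One portability note on the bare > file form: what a redirection with no command does is shell-specific (zsh, for example, runs its NULLCMD, which is cat by default, instead of just creating the file), so the : form from the question is the habit that carries across shells:

: > file    # create the file, or truncate it to zero length if it exists
: >> file   # create the file if missing, leave any existing contents untouched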
446,060 | I have a dual boot, and I never use Windows, but because I'm nice I wanted at the beginning to cut my hard drive in two equal parts, Windows on the left part, Linux on the right part. But then Linux ran out of space, so I shrinked Windows, and because I used a LVM partition, I created a new partition and share them on the logical partition. But now linux still runs out of space, and I'm thinking that it is strange to create tons of small LVM partitions, so I'm thiking to move the second LVM partition I created and extend it so that I just keep 2 partitions. Is it possible? Thanks. | You don't even need to use : ; you can just > file (at least in bash ; other shells may behave differently). In practical terms, there is no real difference here (though the minuscule overhead of calling out to /bin/touch is a thing). touch , however, can also be used to modify the timestamps on a file that already exists without changing or erasing the contents; further, > file will blow out any file that already exists. This can be worked around by instead using >> file . One other difference with touch is that you can have it create (or update the timestamp on) multiple files at once (e.g. touch foo bar baz quux ) with a more succinct syntax than with redirection, where each file needs its own redirection (e.g. >foo >bar >baz >quux ). Using touch : $ touch foo; stat -x foo; sleep 2; touch foo; stat -x foo
File: "foo"
Size: 0 FileType: Regular File
Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)
Device: 1,5 Inode: 8597208698 Links: 1
Access: Fri May 25 10:55:19 2018
Modify: Fri May 25 10:55:19 2018
Change: Fri May 25 10:55:19 2018
File: "foo"
Size: 0 FileType: Regular File
Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)
Device: 1,5 Inode: 8597208698 Links: 1
Access: Fri May 25 10:55:21 2018
Modify: Fri May 25 10:55:21 2018
Change: Fri May 25 10:55:21 2018 Using redirection: $ > foo; stat -x foo; sleep 2; >> foo; stat -x foo
File: "foo"
Size: 0 FileType: Regular File
Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)
Device: 1,5 Inode: 8597208698 Links: 1
Access: Fri May 25 10:55:21 2018
Modify: Fri May 25 10:56:25 2018
Change: Fri May 25 10:56:25 2018
File: "foo"
Size: 0 FileType: Regular File
Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)
Device: 1,5 Inode: 8597208698 Links: 1
Access: Fri May 25 10:55:21 2018
Modify: Fri May 25 10:56:25 2018
Change: Fri May 25 10:56:25 2018 | {
"source": [
"https://unix.stackexchange.com/questions/446060",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169695/"
]
} |
446,065 | I'm creating a L3-Switch that modifies packets by redirecting some of them to local app. My goal is to send them further to the same MAC as before. Short "why": zero-conf device to connect with to any ethernet network, portable, does proxying. Switch is organized as ethernet bridge (br-lan) between eth0 and eth1. It is assumed by default that gateway for br-lan clients lies through eth0. Question: Let's say that packet comes from eth1 on the way to eth0 and gets redirected to local app. After that app has output and destination IP of the original packet has changed. L3 tries to route packet to new destination, but it doesn't have any default gateways (And it shouldn't, because it's switch!). Assuming I know the MAC address of default gateway, how to I force packet to go out through eth0 to specific MAC address? Technically I'm not trying to do anything "illegal" in terms of network. I want to kick the packet out of eth0 and all I'm "missing" is destination MAC, but I can retrieve it from the original packet. I know for sure that destination IP isn't local, therefore it would be sent to default gateway anyway using it's MAC address. So it's a question of implementation. I was trying to modify destination MAC at bridge -t NAT OUTPUT by doing this: ebtables -t nat -A OUTPUT -p ipv4 --ip-proto tcp --ip-src 192.168.1.251 -j dnat --to-dst 04:61:e7:d2:e2:09 But that didn't help. (Assuming 04:61:e7:d2:e2:09 is default gateway MAC and 192.168.1.251 is one of the clients just to test this theory) Actual implementation is on OpenWRT, so available packages might be limited. How did I get to that problem: More information on the local app: it's ss-redir from here, binds to 0.0.0.0:port => https://github.com/shadowsocks/shadowsocks-libev Added use cases to the [Device]: Expectation: We have 3 PC-clients connected to a regular switch. After bringing [Device] and connecting it to regular switch and reconnecting PC-clients to [Device], PC-clients gain [Result] without configuring the device. [Result]: 1)From the "outside"(other network nodes except 3 ours and everything else) it should look like every user keeps his IP/MAC pair so admin would be happy. DHCP is static-configured in the office, so IP/MAC pair won't probably change, but admin can change any of that. And device should handle any changes without reconfiguring manually. No new IP/MAC should appear in the network(being not admin-registered). 2)From the "outside" every PC-client should be accessible for all protocols in the network, whatever they are (RDP, NetBIOS for naming resolution, file sharing, or whatever local admin decides to do). 3)They should have internet access via default gateway as always, except proxying tcp via SS for particular destination ipset (which is always through the very same gateway) Under assumption that these use cases require device not having any IP/MAC knowledge of the existing network from the start(because office users won't config anything by themselves), I'm trying to make "proxying bridge" that works like a switch, intercepting packets and sends them out to eth0(WAN) after local app redirection. The problem is the after redirection packet needs to be sent on its way. I'm investigating "auto-reconfig on the fly idea" with a MAC-snat/dnat, but stuck with the problem that packet won't go to eth0 after being generated locally even if I can specify Default Gateway MAC-addr in ebtables as destination. 
| You don't even need to use : ; you can just > file (at least in bash ; other shells may behave differently). In practical terms, there is no real difference here (though the minuscule overhead of calling out to /bin/touch is a thing). touch , however, can also be used to modify the timestamps on a file that already exists without changing or erasing the contents; further, > file will blow out any file that already exists. This can be worked around by instead using >> file . One other difference with touch is that you can have it create (or update the timestamp on) multiple files at once (e.g. touch foo bar baz quux ) with a more succinct syntax than with redirection, where each file needs its own redirection (e.g. >foo >bar >baz >quux ). Using touch : $ touch foo; stat -x foo; sleep 2; touch foo; stat -x foo
File: "foo"
Size: 0 FileType: Regular File
Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)
Device: 1,5 Inode: 8597208698 Links: 1
Access: Fri May 25 10:55:19 2018
Modify: Fri May 25 10:55:19 2018
Change: Fri May 25 10:55:19 2018
File: "foo"
Size: 0 FileType: Regular File
Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)
Device: 1,5 Inode: 8597208698 Links: 1
Access: Fri May 25 10:55:21 2018
Modify: Fri May 25 10:55:21 2018
Change: Fri May 25 10:55:21 2018 Using redirection: $ > foo; stat -x foo; sleep 2; >> foo; stat -x foo
File: "foo"
Size: 0 FileType: Regular File
Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)
Device: 1,5 Inode: 8597208698 Links: 1
Access: Fri May 25 10:55:21 2018
Modify: Fri May 25 10:56:25 2018
Change: Fri May 25 10:56:25 2018
File: "foo"
Size: 0 FileType: Regular File
Mode: (0644/-rw-r--r--) Uid: (991148597/redacted) Gid: (1640268302/redacted)
Device: 1,5 Inode: 8597208698 Links: 1
Access: Fri May 25 10:55:21 2018
Modify: Fri May 25 10:56:25 2018
Change: Fri May 25 10:56:25 2018 | {
"source": [
"https://unix.stackexchange.com/questions/446065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/292622/"
]
} |
446,237 | POSIX defines a text file as: A file that contains characters organized into zero or more lines. The lines do not contain NUL characters and none can exceed {LINE_MAX} bytes in length, including the <newline> character. Although POSIX.1-2017 does not distinguish between text files and binary files (see the ISO C standard), many utilities only produce predictable or meaningful output when operating on text files. The standard utilities that have such restrictions always specify "text files" in their STDIN or INPUT FILES sections. Source: http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap03.html#tag_03_403 However, there are several things I find unclear: Must a text file be a regular file? In the above excerpt it does not explicitly say the file must be a regular file Can a file be considered a text file if contains one character and one character only (i.e., a single character that isn't terminated with a newline)? I know this question may sound nitpicky, but they use the word "characters" instead of "one or more characters". Others may disagree, but if they mean "one or more characters" I think they should explicitly say it In the above excerpt, it makes reference to "lines". I found four definitions with line in their name: "Empty Line", "Display Line", "Incomplete Line" and "Line". Am I supposed to infer that they mean "Line" because of their omission of "Empty", "Display" and "Incomplete"- or are all four of these definitions inclusive as being considered a line in the excerpt above? All questions that come after this block of text depend on inferring that "characters" means "one or more characters": Can I safely infer that if a file is empty, it is not a text file because it does not contain one or more characters? All questions that come after this block of text depend on inferring that in the above excerpt, a line is defined as a "Line", and that the other three definitions containing "Line" in their name should be excluded: Does the "zero" in "zero or more lines" mean that a file can still be considered a text file if it contains one or more characters that are not terminated with newline? Does "zero or more lines" mean that once a single "Line" (0 or more characters plus a terminating newline) comes into play, that it becomes illegal for the last line to be an "Incomplete Line" (one or more non-newline characters at the end of a file)? Does "none [no line] can exceed {LINE_MAX} bytes in length, including the newline character" mean that there a limitation to the number of characters allowed in any given "Line" in a text file (as an aside, the value of LINE_MAX on Ubuntu 18.04 and FreeBSD 11.1 is "2048")? | Must a text file be a regular file? In the above excerpt it does not explicitly say the file must be a regular file No; the excerpt even specifically notes standard input as a potential text file. Other standard utilities, such as make , specifically use the character special file /dev/null as a text file . Can a file be considered a text file if contains one character and one character only (i.e., a single character that isn't terminated with a newline)? That character must be a <newline>, or this isn't a line , and so the file it's in isn't a text file. A file containing exactly byte 0A is a single-line text file. An empty line is a valid line. In the above excerpt, it makes reference to "lines". I found four definitions with line in their name: "Empty Line", "Display Line", "Incomplete Line" and "Line". 
Am I supposed to infer that they mean "Line" because of their omission of "Empty", "Display" and "Incomplete" It's not really an inference, it's just what it says. The word "line" has been given a contextually-appropriate definition and so that's what it's talking about. Can I safely infer that if a file is empty, it is not a text file because it does not contain one or more characters? An empty file consists of zero (or more) lines and is thus a text file. Does the "zero" in "zero or more lines" mean that a file can still be considered a text file if it contains one or more characters that are not terminated with newline? No, these characters are not organised into lines. Does "zero or more lines" mean that once a single "Line" (0 or more characters plus a terminating newline) comes into play, that it becomes illegal for the last line to be an "Incomplete Line" (one or more non-newline characters at the end of a file)? It's not illegal , it's just not a text file. A utility requiring a text file to be given to it may behave adversely if given that file instead. Does "none [no line] can exceed {LINE_MAX} bytes in length, including the newline character" mean that there a limitation to the number of characters allowed in any given "Line" in a text file Yes. This definition is just trying to set some bounds on what a text-based utility ( for example, grep ) will definitely accept — nothing more. They are also free to accept things more liberally, and quite often they do in practice. They are permitted to use a fixed-size buffer to process a line, to assume a newline appears before it's full, and so on. You may be reading too much into things. | {
"source": [
"https://unix.stackexchange.com/questions/446237",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/273175/"
]
} |
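As a quick practical companion to the answer above, the two mechanical properties it keeps returning to, no NUL bytes and a newline-terminated last line, can be checked from the shell. This is only a sketch: the file name is a placeholder and the {LINE_MAX} limit is not checked.
f=somefile                                   # placeholder path, not from the question
if [ ! -s "$f" ]; then
    echo "empty: zero lines, which still counts as a text file"
elif tr -d '\0' <"$f" | cmp -s - "$f" && [ "$(tail -c 1 "$f" | wc -l)" -eq 1 ]; then
    echo "no NUL bytes and a newline-terminated last line: a text file"
else
    echo "not a text file by the POSIX definition"
fi
The tail -c 1 | wc -l test simply asks whether the very last byte of the file is a newline, which is the incomplete-last-line case discussed above.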
446,319 | how to print only the properties lines from json file example of json file {
"href" : "http://master02:8080/api/v1/clusters/HDP/configurations?type=kafka-env&tag=version1527250007610",
"items" : [
{
"href" : "http://master02:8080/api/v1/clusters/HDP/configurations?type=kafka-env&tag=version1527250007610",
"tag" : "version1527250007610",
"type" : "kafka-env",
"version" : 8,
"Config" : {
"cluster_name" : "HDP",
"stack_id" : "HDP-2.6"
},
"properties" : {
"content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi",
"is_supported_kafka_ranger" : "true",
"kafka_log_dir" : "/var/log/kafka",
"kafka_pid_dir" : "/var/run/kafka",
"kafka_user" : "kafka",
"kafka_user_nofile_limit" : "128000",
"kafka_user_nproc_limit" : "65536"
}
}
] expected output "content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi",
"is_supported_kafka_ranger" : "true",
"kafka_log_dir" : "/var/log/kafka",
"kafka_pid_dir" : "/var/run/kafka",
"kafka_user" : "kafka",
"kafka_user_nofile_limit" : "128000",
"kafka_user_nproc_limit" : "65536" | Jq is the right tool for processing JSON data: jq '.items[].properties | to_entries[] | "\(.key) : \(.value)"' input.json The output: "content : \n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi"
"is_supported_kafka_ranger : true"
"kafka_log_dir : /var/log/kafka"
"kafka_pid_dir : /var/run/kafka"
"kafka_user : kafka"
"kafka_user_nofile_limit : 128000"
"kafka_user_nproc_limit : 65536" In case if it's really mandatory to obtain each key and value double-quoted - use the following modification: jq -r '.items[].properties | to_entries[]
| "\"\(.key)\" : \"\(.value | gsub("\n";"\\n"))\","' input.json The output: "content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e "/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi",
"is_supported_kafka_ranger" : "true",
"kafka_log_dir" : "/var/log/kafka",
"kafka_pid_dir" : "/var/run/kafka",
"kafka_user" : "kafka",
"kafka_user_nofile_limit" : "128000",
"kafka_user_nproc_limit" : "65536", | {
"source": [
"https://unix.stackexchange.com/questions/446319",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
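If only part of that properties block is needed, the same to_entries idea from the answer above narrows down easily; these are just sketches against the question's input.json and assume a reasonably recent jq:
jq -r '.items[0].properties.kafka_log_dir' input.json       # one raw value: /var/log/kafka
jq -r '.items[].properties | keys_unsorted[]' input.json    # only the property names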
446,502 | In include/x86_64-linux-gnu/asm/unistd_64.h , I see a system call named tuxcall , #define __NR_tuxcall 184 There is nothing about it in man tuxcall except to say that it's an unimplemented system calls . What did it do? Was it never implemented, or did it do something in antiquity? | tuxcall is the place-holder for the tux system call which was used by user-space tools to communicate with the TUX kernel module, which implemented the TUX web server . This was a web server running entirely in the kernel; it was maintained by Ingo Molnar until improvements in other parts of Linux, notably thread support with NPTL , brought user-space web server performance up to the level attained by TUX. You can still find the TUX 3 patches for Linux 2.6.18 among Ingo’s patches , including the implementation of sys_tux (the system call in question). The user-space portion, which includes the documentation, can be found on the Wayback Machine (thanks hvd !). | {
"source": [
"https://unix.stackexchange.com/questions/446502",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3285/"
]
} |
446,624 | When using sudo iotop (latest version 0.6-2.el7 ) in a terminal in my newly installed CentOS 7.5, I get the following error message: Traceback (most recent call last):
File "/sbin/iotop", line 17, in <module>
main()
File "/usr/lib/python2.7/site-packages/iotop/ui.py", line 620, in main
main_loop()
File "/usr/lib/python2.7/site-packages/iotop/ui.py", line 610, in <lambda>
main_loop = lambda: run_iotop(options)
File "/usr/lib/python2.7/site-packages/iotop/ui.py", line 508, in run_iotop
return curses.wrapper(run_iotop_window, options)
File "/usr/lib64/python2.7/curses/wrapper.py", line 43, in wrapper
return func(stdscr, *args, **kwds)
File "/usr/lib/python2.7/site-packages/iotop/ui.py", line 501, in run_iotop_window
ui.run()
File "/usr/lib/python2.7/site-packages/iotop/ui.py", line 155, in run
self.process_list.duration)
File "/usr/lib/python2.7/site-packages/iotop/ui.py", line 434, in refresh_display
lines = self.get_data()
File "/usr/lib/python2.7/site-packages/iotop/ui.py", line 415, in get_data
return list(map(format, processes))
File "/usr/lib/python2.7/site-packages/iotop/ui.py", line 388, in format
cmdline = p.get_cmdline()
File "/usr/lib/python2.7/site-packages/iotop/data.py", line 292, in get_cmdline
proc_status = parse_proc_pid_status(self.pid)
File "/usr/lib/python2.7/site-packages/iotop/data.py", line 196, in parse_proc_pid_status
key, value = line.split(':\t', 1)
ValueError: need more than 1 value to unpack Any idea how to fix this problem? | Apparently, recent kernel versions introduced a blank line in /proc/(pid)/status that iotop does not expect: CapBnd: 0000001fffffffff
CapAmb: 0000000000000000
Seccomp: 0
SpeculationStoreBypass: vulnerable As a zeroth approximation of a fix, edit (as root) /usr/lib/python2.7/site-packages/iotop/data.py at around line 195: def parse_proc_pid_status(pid):
result_dict = {}
try:
for line in open('/proc/%d/status' % pid):
if not line.strip(): continue
key, value = line.split(':\t', 1)
result_dict[key] = value.strip()
except IOError:
pass # No such process
return result_dict where the if not line.strip(): continue is new. Beware that python does not have explicit braces, so the indentation of this line should match that of the line below it. (Also see https://bugs.launchpad.net/pkg-website/+bug/1773383 for other fixes for this bug.) | {
"source": [
"https://unix.stackexchange.com/questions/446624",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/287970/"
]
} |
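Before editing the file, it is worth confirming that the running kernel really produces /proc/PID/status lines without the key/value separator the old parser assumes; this rough check is not iotop-specific, and the backup line just guards the file named in the answer:
grep -c -v ':' /proc/self/status                                 # non-zero: lines the old parser will choke on
sudo cp /usr/lib/python2.7/site-packages/iotop/data.py{,.orig}   # keep a backup before patching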
446,847 | I have a script running on Linux that accepts some parameters. I would like to do something like: if [[ $CONDITION == "true" ]]; then
script param1 --param2
else
script param1
fi I would like to avoid the forking path of the if. Is there a more optimal way to pass the second parameter? | The most expansible and robust way would probably be to use an array to hold the optional parameter(s): params=()
if [[ $CONDITION == true ]]; then
params+=(--param2)
fi
script param1 "${params[@]}" Or in shorthand: [[ $CONDITION == true ]] && params+=(--param2)
script param1 "${params[@]}" That avoids repeating the constant part of the command and you can put more than one argument in the array, even the whole command. Note that it's important to do this with an array: if you replace the array with a regular variable ( params="--param2"; script param1 $params ) you'll either have to expand the variable unquoted, with all the problems that brings, or expand it quoted, in which case you'll pass an empty string as argument if the variable is empty. In a simple case like this, the "alternate value" expansion can also be used: cond=x
p2="--param2"
script param1 ${cond:+"$p2"} Here, if cond is nonempty (regardless of if it's cond=false or cond=0 instead of cond=true ), the value of p2 is expanded. This may be seen as less ugly than arrays, but be careful with the placement of the quotes. See also: How can we run a command stored in a variable? Using shell variables for command options Why does my shell script choke on whitespace or other special characters? | {
"source": [
"https://unix.stackexchange.com/questions/446847",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264385/"
]
} |
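A self-contained way to try the array technique from the answer above, with printf standing in for the real program; the script name and the --param2 flag are only the question's placeholders:
#!/bin/bash
CONDITION=${1:-false}
params=()
[[ $CONDITION == true ]] && params+=(--param2)
# printf stands in for "script"; each received argument is printed on its own line
printf 'arg: %s\n' param1 "${params[@]}"
Running it as ./demo.sh true and then as plain ./demo.sh shows the optional flag either arriving as one clean word or being omitted entirely, never as an empty argument.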
447,305 | I am writing an HTTP server daemon in C (there are reasons why), managing it with systemd unit file. I am rewriting an application designed 20 years ago, around 1995. And the system they use is that they chroot and then setuid, and the standard procedure. Now in my previous work, the usual policy was that you never ever run any process as root. You create a user/group for it and run from there. Of course, the system did run some things as root, but we could achieve all business logic processing without being root. Now for the HTTP daemon, I can run it without root if I don't chroot inside the application. So isn't it more secure for the application to never ever run as root? Isn't it more secure to run it as mydaemon-user from the beginning? Instead of starting it with root, chrooting, then setuid to mydaemon-user? | It seems that others have missed your point, which was not reasons why to use changed roots, which of course you clearly already know, nor what else you can do to place limits on dæmons, when you also clearly know about running under the aegides of unprivileged user accounts; but why to do this stuff inside the application . There's actually a fairly on point example of why. Consider the design of the httpd dæmon program in Daniel J. Bernstein's publicfile package. The first thing that it does is change root to the root directory that it was told to use with a command argument, then drop privileges to the unprivileged user ID and group ID that are passed in two environment variables. Dæmon management toolsets have dedicated tools for things like changing root directory and dropping to unprivileged user and group IDs. Gerrit Pape's runit has chpst . My nosh toolset has chroot and setuidgid-fromenv . Laurent Bercot's s6 has s6-chroot and s6-setuidgid . Wayne Marshall's Perp has runtool and runuid . And so forth. Indeed, they all have M. Bernstein's own daemontools toolset with setuidgid as an antecedent. One would think that one could extract the functionality from httpd and use such dedicated tools. Then, as you envision, no part of the server program ever runs with superuser privileges. The problem is that one as a direct consequence has to do significantly more work to set up the changed root, and this exposes new problems. With Bernstein httpd as it stands, the only files and directories that are in the root directory tree are ones that are to be published to the world. There is nothing else in the tree at all. Moreover, there is no reason for any executable program image file to exist in that tree. But move the root directory change out into a chain-loading program (or systemd), and suddenly the program image file for httpd , any shared libraries that it loads, and any special files in /etc , /run , and /dev that the program loader or C runtime library access during program initialization (which you might find quite surprising if you truss / strace a C or C++ program), also have to be present in the changed root. Otherwise httpd cannot be chained to and won't load/run. Remember that this is a HTTP(S) content server. It can potentially serve up any (world-readable) file in the changed root. This now includes things like your shared libraries, your program loader, and copies of various loader/CRTL configuration files for your operating system. And if by some (accidental) means the content server has access to write stuff, a compromised server can possibly gain write access to the program image for httpd itself, or even your system's program loader. 
(Remember that you now have two parallel sets of /usr , /lib , /etc , /run , and /dev directories to keep secure.) None of this is the case where httpd changes root and drops privileges itself. So you have traded having a small amount of privileged code, that is fairly easy to audit and that runs right at the start of the httpd program, running with superuser privileges; for having a greatly expanded attack surface of files and directories within the changed root. This is why it is not as simple as doing everything externally to the service program. Notice that this is nonetheless a bare minimum of functionality within httpd itself. All of the code that does things such as look in the operating system's account database for the user ID and group ID to put into those environment variables in the first place is external to the httpd program, in simple standalone auditable commands such as envuidgid . (And of course it is a UCSPI tool, so it contains none of the code to listen on the relevant TCP port(s) or to accept connections, those being the domain of commands such as tcpserver , tcp-socket-listen , tcp-socket-accept , s6-tcpserver4-socketbinder , s6-tcpserver4d , and so on.) Further reading Daniel J. Bernstein (1996). httpd . publicfile . cr.yp.to. httpd . Daniel J. Bernstein's softwares all in one . Softwares. Jonathan de Boyne Pollard. 2016. gopherd . Daniel J. Bernstein's softwares all in one . Softwares. Jonathan de Boyne Pollard. 2017. https://unix.stackexchange.com/a/353698/5132 https://github.com/janmojzis/httpfile/blob/master/droproot.c | {
"source": [
"https://unix.stackexchange.com/questions/447305",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293566/"
]
} |
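To make the trade-off concrete, this is roughly what the external, chain-loading variant the answer argues against looks like as a runit-style run script. Every name here is hypothetical: /srv/www is the changed root, mydaemon the unprivileged account, mydaemon-httpd the server, and the point of the sketch is that chpst, the server binary, its libraries and the account database must now all exist inside /srv/www:
#!/bin/sh
# privileges are dropped and the root is changed outside the server program,
# so everything to the right of "chroot /srv/www" is resolved inside the chroot
exec chroot /srv/www /bin/chpst -u mydaemon /usr/sbin/mydaemon-httpd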
447,430 | I’m looking for an “in” operator that works something like this: if [ "$1" in ("cat","dog","mouse") ]; then
echo "dollar 1 is either a cat or a dog or a mouse"
fi It's obviously a much shorter statement compared to, say, using several "or" tests. | You can use case ... esac $ cat in.sh
#!/bin/bash
case "$1" in
"cat"|"dog"|"mouse")
echo "dollar 1 is either a cat or a dog or a mouse"
;;
*)
echo "none of the above"
;;
esac Ex. $ ./in.sh dog
dollar 1 is either a cat or a dog or a mouse
$ ./in.sh hamster
none of the above With ksh , bash -O extglob or zsh -o kshglob , you could also use an extended glob pattern: if [[ "$1" = @(cat|dog|mouse) ]]; then
echo "dollar 1 is either a cat or a dog or a mouse"
else
echo "none of the above"
fi With bash , ksh93 or zsh , you could also use a regular expression comparison: if [[ "$1" =~ ^(cat|dog|mouse)$ ]]; then
echo "dollar 1 is either a cat or a dog or a mouse"
else
echo "none of the above"
fi | {
"source": [
"https://unix.stackexchange.com/questions/447430",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65536/"
]
} |
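If the same membership test is needed in several places, it also packs into a small bash helper; the function name in_list is made up for this sketch:
in_list() {
    # usage: in_list WORD ITEM...
    local needle=$1; shift
    for item do
        [ "$item" = "$needle" ] && return 0
    done
    return 1
}
if in_list "$1" cat dog mouse; then
    echo "dollar 1 is either a cat or a dog or a mouse"
fi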
447,525 | In my laptop: $ cat /etc/issue
Ubuntu 18.04 LTS \n \l There are two different folders for libraries x86 and x86_64 : ~$ ls -1 /
bin
lib
lib64
sbin
... Why does only one directory exist for binaries? P.S. I'm also interested in Android, but I hope the answer is the same. | First, why there are separate /lib and /lib64 : The Filesystem Hierarchy Standard mentions that separate /lib and /lib64 exist because: 10.1. There may be one or more variants of the /lib directory on systems which support more than one binary format requiring
separate libraries. (...) This is commonly used for 64-bit or 32-bit
support on systems which support multiple binary formats, but require
libraries of the same name. In this case, /lib32 and /lib64 might be
the library directories, and /lib a symlink to one of them. On my Slackware 14.2 for example there are /lib and /lib64 directories for 32-bit and 64-bit libraries respectively even though /lib is not a symlink as the FHS snippet would suggest: $ ls -l /lib/libc.so.6
lrwxrwxrwx 1 root root 12 Aug 11 2016 /lib/libc.so.6 -> libc-2.23.so
$ ls -l /lib64/libc.so.6
lrwxrwxrwx 1 root root 12 Aug 11 2016 /lib64/libc.so.6 -> libc-2.23.so There are two libc.so.6 libraries in /lib and /lib64 . Each dynamically built ELF binary contains a hardcoded path to the interpreter, in this case either /lib/ld-linux.so.2 or /lib64/ld-linux-x86-64.so.2 : $ file main
main: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, not stripped
$ readelf -a main | grep 'Requesting program interpreter'
[Requesting program interpreter: /lib/ld-linux.so.2]
$ file ./main64
./main64: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, not stripped
$ readelf -a main64 | grep 'Requesting program interpreter'
[Requesting program interpreter: /lib64/ld-linux-x86-64.so.2] The job of the interpreter is to load necessary shared libraries. You
can ask a GNU interpreter what libraries it would load without even
running a binary using LD_TRACE_LOADED_OBJECTS=1 or a ldd wrapper: $ LD_TRACE_LOADED_OBJECTS=1 ./main
linux-gate.so.1 (0xf77a9000)
libc.so.6 => /lib/libc.so.6 (0xf760e000)
/lib/ld-linux.so.2 (0xf77aa000)
$ LD_TRACE_LOADED_OBJECTS=1 ./main64
linux-vdso.so.1 (0x00007ffd535b3000)
libc.so.6 => /lib64/libc.so.6 (0x00007f56830b3000)
/lib64/ld-linux-x86-64.so.2 (0x00007f568347c000) As you can see a given interpreter knows exactly where to look for
libraries - 32-bit version looks for libraries in /lib and 64-bit
version looks for libraries in /lib64 . FHS standard says the following about /bin : /bin contains commands that may be used by both the system
administrator and by users, but which are required when no other
filesystems are mounted (e.g. in single user mode). It may also
contain commands which are used indirectly by scripts. IMO the reason why there are no separate /bin and /bin64 is that if we had
the file with the same name in both of these directories we couldn't call one of them
indirectly because we'd have to put /bin or /bin64 first in $PATH . However, notice that the above is just the convention - the Linux
kernel does not really care if you have separate /bin and /bin64 .
If you want them, you can create them and set up your system accordingly. You also mentioned Android - note that except for running a modified
Linux kernel it has nothing to do with GNU systems such as
Ubuntu - no glibc, no bash (by default, you can of course compile and deploy it manually), and also directory structure is
completely different. | {
"source": [
"https://unix.stackexchange.com/questions/447525",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72849/"
]
} |
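To see the same split on your own machine, you can ask file(1) which interpreter each ELF binary requests and check whether /lib is a symlink; the binaries listed are only examples and the exact wording of file's output can vary between versions:
for bin in /bin/ls /bin/bash; do
    printf '%s: ' "$bin"
    file -L "$bin" | grep -o 'interpreter [^,]*'
done
ls -ld /lib /lib64        # shows whether either directory is a symlink on this system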
447,561 | When I run systemctl status , I get State: degraded at the top, ● x230
State: degraded
Jobs: 0 queued
Failed: 1 units
Since: Wed 2018-05-30 17:09:49 CDT; 3 days ago
.... What's going on, and how do I fix it? | That means some of your services failed to start. You can see them if you run systemctl; without the status argument. They should show something like, loaded failed failed Or you can just list the failed services with systemctl --failed , in my case it shows UNIT LOAD ACTIVE SUB DESCRIPTION
● postgresql@9.4-main.service loaded failed failed PostgreSQL Cluster 9.4-main
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type. Normally, you'll need to read the journal/log to figure out what to do next about that failing item, by using journalctl -xe . If you just want to reset the units so the system "says" running with a green dot, you can run: systemctl reset-failed | {
"source": [
"https://unix.stackexchange.com/questions/447561",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3285/"
]
} |
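Putting the steps from the answer above in order, a typical investigation of one failed unit looks something like this; the unit name is simply the answer's own example:
systemctl --failed                                  # which unit is failing?
systemctl status postgresql@9.4-main.service        # summary plus the last few log lines
journalctl -u postgresql@9.4-main.service -b        # full log for the current boot
systemctl restart postgresql@9.4-main.service       # after fixing the underlying cause
systemctl reset-failed                              # clear the failed state so the system reads "running" again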
447,589 | When I turned on my Ubuntu 18.04 yesterday and wanted to start GitKraken, it did not work. After I click its icon I see how the process tries to start in the upper left corner (next to "Activities") but after a few seconds the process seems to die and nothing happens.
Trying to launch GitKraken from the console fails too with the following two messages: /snap/gitkraken/58/bin/desktop-launch: line 23: $HOME/.config/user-dirs.dirs: Permission denied
ln: failed to create symbolic link '$HOME/snap/gitkraken/58/.config/gtk-2.0/gtkfilechooser.ini': File exists Unfortunately, my Linux skills are too limited to solve this. The only thing I've tried is chmod 777 $HOME/.config/user-dirs.dirs because of the Permossion denied but that did not help. EDIT: as terdon suggested in his comment I've made ls -ld ~/.config/user-dirs.dirs and this is its output: -rwxrwxrwx 1 myusername myusername 633 Mai 6 10:30 /home/mayusername/.config/user-dirs.dirs Then, I made the mv ~/snap/gitkraken/58/.config/gtk-2.0/gtkfilechooser.ini gtkfilechooser.ini.bak command and tried to start GitKraken afterwards. I did not start showing again: /snap/gitkraken/58/bin/desktop-launch: line 23: /home/myusername/.config/user-dirs.dirs: Permission denied The ln: failed to create symbolic link ... error from my initial post did not appear. Exe cuting ll in the directory ~/snap/gitkraken/58/.config/gtk-2.0 gives me the following output: drwxrwxr-x 2 myusername myusername 4096 Jun 3 16:44 ./
drwxrwxr-x 8 myusername myusername 4096 Mai 21 12:28 ../
lrwxrwxrwx 1 myusername myusername 47 Jun 3 15:45 gtkfilechooser.ini -> /home/myusername/.config/gtk-2.0/gtkfilechooser.ini
-rw-r--r-- 1 myusername myusername 198 Jun 3 16:44 gtkfilechooser.ini.bak gtkfilechooser.ini -> /home/myusername/.config/gtk-2.0/gtkfilechooser.ini is red since the file does not exist anymore. Executing the chmod command afterwards did not change anything. GitKraken does not start and outputs the same errors. | SOLVED:
Had to install libgnome-keyring: sudo apt install libgnome-keyring0 The UI now comes up and works for me.
Still get the following warnings, but it's working: Gtk-Message: 11:19:31.343: Failed to load module "overlay-scrollbar"
Gtk-Message: 11:19:31.349: Failed to load module "canberra-gtk-module"
Node started time: 1528391971495
state: update-not-available
EVENT: Main process loaded at 441 ms
state: checking-for-update
state: update-not-available
state: checking-for-update
state: update-not-available
EVENT: Starting initial render of foreground window at 5331 ms
EVENT: Startup triggers started at 5446 ms | {
"source": [
"https://unix.stackexchange.com/questions/447589",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165579/"
]
} |
447,622 | How can I remove the last comma separator from a file on Linux? Example of file: "is_supported_kafka_ranger" : "true",
"kafka_log_dir" : "/var/log/kafka",
"kafka_pid_dir" : "/var/run/kafka",
"kafka_user" : "kafka",
"kafka_user_nofile_limit" : "128000",
"kafka_user_nproc_limit" : "65536", expected results: "is_supported_kafka_ranger" : "true",
"kafka_log_dir" : "/var/log/kafka",
"kafka_pid_dir" : "/var/run/kafka",
"kafka_user" : "kafka",
"kafka_user_nofile_limit" : "128000",
"kafka_user_nproc_limit" : "65536" | Using GNU sed : sed -i '$s/,$//' file That is, on the last line ( $ ) substitute ( s ) the comma at the end of the line ( ,$ ) by nothing. The change will be done in-place due to the -i flag. With standard sed : sed '$s/,$//' <file >file.new &&
mv file.new file Note: Someone suggested an edit to change "on the last line" to "last on the line" (or something similar). This is wrong. When $ is used to specify an address (a line where to apply an editing command), then it refers to the last line of the stream or file. This is different from using $ in a regular expression. | {
"source": [
"https://unix.stackexchange.com/questions/447622",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
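Two small variations on the answer above that come up in practice, both sketches on the same file: BSD/macOS sed spells the in-place flag slightly differently, and an awk version avoids sed addressing altogether:
sed -i '' '$s/,$//' file        # BSD/macOS sed: -i needs an (empty) suffix argument
# awk alternative: pass lines through, stripping the comma only from the last one
awk 'NR > 1 { print prev } { prev = $0 } END { if (NR) { sub(/,$/, "", prev); print prev } }' file > file.new &&
mv file.new file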
447,898 | In Program 1 Hello world gets printed just once, but when I remove \n and run it (Program 2), the output gets printed 8 times. Can someone please explain me the significance of \n here and how it affects the fork() ? Program 1 #include <sys/types.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
int main()
{
printf("hello world...\n");
fork();
fork();
fork();
} Output 1: hello world... Program 2 #include <sys/types.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
int main()
{
printf("hello world...");
fork();
fork();
fork();
} Output 2: hello world... hello world...hello world...hello world...hello world...hello world...hello world...hello world... | When outputting to standard output using the C library's printf() function, the output is usually buffered. The buffer is not flushed until you output a newline, call fflush(stdout) or exit the program (not through calling _exit() though). The standard output stream is by default line-buffered in this way when it's connected to a TTY. When you fork the process in "Program 2", the child processes inherit every part of the parent process, including the unflushed output buffer. This effectively copies the unflushed buffer to each child process. When the process terminates, the buffers are flushed. You start a grand total of eight processes (including the original process), and the unflushed buffer will be flushed at the termination of each individual process. It's eight because at each fork() you get twice the number of processes you had before the fork() (since they are unconditional), and you have three of these (2³ = 8). | {
"source": [
"https://unix.stackexchange.com/questions/447898",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293805/"
]
} |
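The buffering behaviour described above is easy to see from the shell, assuming the two listings are saved as prog1.c and prog2.c and a C compiler is available as cc: once stdout is no longer a terminal, even Program 1 (with the \n) prints eight copies, because the stream is then fully buffered instead of line-buffered:
cc -o prog1 prog1.c && cc -o prog2 prog2.c
./prog1                 # one line: the newline flushes the line-buffered tty stream
./prog2                 # eight copies, each flushed when a process exits
./prog1 | cat           # eight copies again: a pipe makes stdout fully buffered
./prog1 | wc -l         # prints 8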
448,443 | From findutils' manual: For example constructs such as these two commands # risky
find -exec sh -c "something {}" \;
find -execdir sh -c "something {}" \; are very dangerous. The reason for this is that the ‘{}’ is expanded
to a filename which might contain a semicolon or other characters
special to the shell. If for example someone creates the file /tmp/foo; rm -rf $HOME then the two commands above could delete
someone’s home directory. So for this reason do not run any command which will pass untrusted
data (such as the names of files) to commands which interpret
arguments as commands to be further interpreted (for example ‘sh’). In the case of the shell, there is a clever workaround for this
problem: # safer
find -exec sh -c 'something "$@"' sh {} \;
find -execdir sh -c 'something "$@"' sh {} \; This approach is not guaranteed to avoid every problem, but it is much
safer than substituting data of an attacker’s choice into the text of
a shell command. Is the cause of the problem in find -exec sh -c "something {}" \; that the replacement for {} is
unquoted and therefore not treated as a single string? In the solution find -exec sh -c 'something "$@"' sh {} \; , first {} is replaced, but since {} is unquoted, doesn't "$@" also have the same problem as the original command? For example, "$@" will be expanded to "/tmp/foo;" , "rm" , "-rf" , and "$HOME" ? why is {} not escaped or quoted? Could you give other examples (still with sh -c , or without it if
applicable; with or without find which may be not necessary) where the same kind of problem and solution apply, and
which are minimal examples so that we can focus on the problem and
solution with little distraction as possible? See Ways to provide arguments to a command executed by `bash -c` Thanks. | This isn’t really related to quoting, but rather to argument processing. Consider the risky example: find -exec sh -c "something {}" \; This is parsed by the shell, and split into six words: find , -exec , sh , -c , something {} (no quotes any more), ; . There’s nothing to expand. The shell runs find with those six words as arguments. When find finds something to process, say foo; rm -rf $HOME , it replaces {} with foo; rm -rf $HOME , and runs sh with the arguments sh , -c , and something foo; rm -rf $HOME . sh now sees -c , and as a result parses something foo; rm -rf $HOME ( the first non-option argument ) and executes the result. Now consider the safer variant: find -exec sh -c 'something "$@"' sh {} \; The shell runs find with the arguments find , -exec , sh , -c , something "$@" , sh , {} , ; . Now when find finds foo; rm -rf $HOME , it replaces {} again, and runs sh with the arguments sh , -c , something "$@" , sh , foo; rm -rf $HOME . sh sees -c , and parses something "$@" as the command to run, and sh and foo; rm -rf $HOME as the positional parameters ( starting from $0 ), expands "$@" to foo; rm -rf $HOME as a single value , and runs something with the single argument foo; rm -rf $HOME . You can see this by using printf . Create a new directory, enter it, and run touch "hello; echo pwned" Running the first variant as follows find -exec sh -c "printf \"Argument: %s\n\" {}" \; produces Argument: .
Argument: ./hello
pwned whereas the second variant, run as find -exec sh -c 'printf "Argument: %s\n" "$@"' sh {} \; produces Argument: .
Argument: ./hello; echo pwned | {
"source": [
"https://unix.stackexchange.com/questions/448443",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
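A closely related form is worth keeping in mind: with -exec ... {} + find hands many file names to a single sh, and the same "$@" discipline keeps them safe. A sketch, with printf standing in for the real command:
find . -type f -exec sh -c '
    for f do
        printf "Argument: %s\n" "$f"
    done
' sh {} +
Each file name arrives as a separate positional parameter, so names containing semicolons, spaces or quotes are never re-parsed by the shell.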
448,811 | Is it possible to export a gnome-terminal profile to another computer? I create a terminal profile using edit>preferences and save it as "def". I would like to save the configuration in a file and use it another computer. I try to grep "def" within .config/dconf/ and find Binary file dconf/user matches Is it possible to extract the information from the configuration (specially about the colours, takes a lot of time to find the right colurs) and use them in another computer. I am using Fedora 28 with gnome. 4.16.13-300.fc28.x86_64 , gnome-terminal-3.28.2-2.fc28.x86_64 . | You can use dconf(1) to dump and load the gnome-terminal profiles. I got the basic command usage from this source: https://gist.github.com/reavon/0bbe99150810baa5623e5f601aa93afc To export all of your gnome-terminal profiles from one system, and then load them on another, you would issue the following: source system: $ dconf dump /org/gnome/terminal/legacy/profiles:/ > gnome-terminal-profiles.dconf destination system (after transferring the gnome-terminal-profiles.dconf file): $ dconf load /org/gnome/terminal/legacy/profiles:/ < gnome-terminal-profiles.dconf | {
"source": [
"https://unix.stackexchange.com/questions/448811",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145150/"
]
} |
448,964 | I have a program that depends on a library that is linked to libboost 1.67, which is installed in the system. When I launch it, I get an error that libboost_system.so.1.58 does not exist. LD_PRELOAD and LD_LIBRARY_PATH are unset. lddtree execution doesn't show this library as a dependency, but ldd does. How can I trace where the library is required from? | If on a GNU system, try running your application with: LD_DEBUG=libs your-application See LD_DEBUG=help for more options or man ld.so . | {
"source": [
"https://unix.stackexchange.com/questions/448964",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/294838/"
]
} |
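For the concrete question of which dependency pulls in libboost_system.so.1.58, it can help to write the loader trace to a file and search it, or to ask each direct dependency what it declares as NEEDED. A sketch; your-application and the library version are just the question's examples:
LD_DEBUG=libs LD_DEBUG_OUTPUT=/tmp/ld-trace ./your-application
grep -n 'libboost_system.so.1.58' /tmp/ld-trace.*        # glibc appends the PID to the file name
# or inspect which direct dependency declares it as NEEDED
for lib in $(ldd ./your-application | awk '/=> \//{print $3}'); do
    objdump -p "$lib" | grep -q 'NEEDED.*libboost_system.so.1.58' && echo "$lib"
done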
449,067 | My Centos 7 server doesn't resolve domain names properly. From what I see, in modern Linux systems /etc/resolv.conf is often generated with dhclient , dnsmasq or Network Manager . Thus I have a general theoretical question about network stack in modern Linuxes: Who is responsible for reading /etc/resolv.conf ? What players (services or kernel subsystems) are involved in domain name resolution? SHORT ANSWER: Arch linux manual says that high-level configuration of domain name resolution is done in /etc/nsswitch.conf and relies on Name Service Switch glibc API. glibc uses nss-resolve function for sending DNS requests to DNS servers. Normally on modern CentOS systems nss-resolve relies upon systemd-resolved service. If /etc/resolv.conf was generated by something like dhclient-script , systemd-resolved reads it and works in a compatibility mode, emulating behaviour of older systems like BIND DNS client. | DNS client libraries do. C libraries contain DNS clients that wrap up name-to-address lookups in the DNS protocol and hand them over to proxy DNS servers to do all of the grunt work of query resolution. There are a lot of these DNS clients. The one that is in the main C runtime library of your operating system will very likely be the one from ISC's BIND. But there are a whole load of others from Daniel J. Bernstein's dns library through c-ares to adns. Although several of them contain their own native configuration mechanisms, they generally have a BIND library compatibility mode where they read resolv.conf , which is the configuration file for the ISC's BIND C client library. The NSS is layered on top of this, and is configured by nsswitch.conf . One of the things that NSS lookups can invoke internally is the DNS client, and nsswitch.conf is read by the NSS code in the C library to determine whether and where lookups are handed to the DNS client and how to deal with the various responses. (There is a slight complication to this idea caused by the Name Services Cache Dæmon, nscd. But this simply adds an extra upper-layer client in the C library, speaking an idiosyncratic protocol to a local server, which in its turn acts as a DNS client speaking the DNS protocol to a proxy DNS server. systemd-resolved adds similar complications.) systemd-resolved , NetworkManager , connman , dhcpcd , resolvconf , and others adjust the BIND DNS client configuration file to switch DNS clients to talk to different proxy DNS servers on the fly. This is out of scope for this answer, especially since there are plenty of answers on this WWW site already dealing with the byzantine details that such a mechanism involves. The more traditional way of doing things in the Unix world is to run a proxy DNS server either on the machine itself or on a LAN. Hence what the FreeBSD manual says about normally configured systems, where the default action of the DNS client library in the absence of resolv.conf matches what Unix system administrators normally have, which is a proxy DNS server listening on 127.0.0.1. (The FreeBSD manual for resolv.conf is actually doco that also originates from ISC's BIND, and can of course also be found where the BIND DNS client library has been incorporated into other places such as the GNU C library.) Further reading Daniel J. Bernstein. The dns library . cr.yp.to. Jonathan de Boyne Pollard (2017). What DNS name qualification is . Frequently Given Answers. Jonathan de Boyne Pollard (2004). What DNS query resolution is . Frequently Given Answers. Jonathan de Boyne Pollard (2001). 
The Big Picture for "djbdns" . Frequently Given Answers. Jonathan de Boyne Pollard (2000). "content" and "proxy" DNS servers. Frequently Given Answers. | {
"source": [
"https://unix.stackexchange.com/questions/449067",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23424/"
]
} |
449,224 | Suppose I have two resources, named 0 and 1 , that can only be accessed exclusively. Is there any way to recover the "index" of the "parallel processor" that xargs launches in order to use it as a free mutual exclusion service? E.g., consider the following parallelized computation: $ echo {1..8} | xargs -d " " -P 2 -I {} echo "consuming task {}"
consuming task 1
consuming task 2
consuming task 3
consuming task 4
consuming task 5
consuming task 6
consuming task 7
consuming task 8 My question is whether there exists a magic word, say index , where the output would look like $ echo {1..8} | xargs -d " " -P 2 -I {} echo "consuming task {} with resource index"
consuming task 1 with resource 0
consuming task 2 with resource 1
consuming task 3 with resource 1
consuming task 4 with resource 1
consuming task 5 with resource 0
consuming task 6 with resource 1
consuming task 7 with resource 0
consuming task 8 with resource 0 where the only guarantee is that there is only ever at most one process using resource 0 and same for 1 . Basically, I'd like to communicate this index down to the child process that would respect the rule to only use the resource it was told to. Of course, it'd be preferable to extend this to more than two resources. Inspecting the docs, xargs probably can't do this. Is there a minimal equivalent solution? Using/cleaning files as fake locks is not preferable. | If you're using GNU xargs , there's --process-slot-var : --process-slot-var = environment-variable-name Set the environment variable environment-variable-name to a unique
value in each running child process. Each value is a decimal integer.
Values are reused once child processes exit. This can be used in a
rudimentary load distribution scheme, for example. So, for example: ~ echo {1..9} | xargs -n2 -P2 --process-slot-var=index sh -c 'echo "$index" "$@" "$$"' _
0 1 2 10475
1 3 4 10476
1 5 6 10477
0 7 8 10478
1 9 10479 | {
"source": [
"https://unix.stackexchange.com/questions/449224",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/269078/"
]
} |
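Tying this back to the question's two exclusive resources: with -P2 the slot value is always 0 or 1, so it can be used directly to select a resource. In this sketch the per-slot resource path and the echo placeholder for the real task are hypothetical:
printf '%s\n' {1..8} | xargs -n1 -P2 --process-slot-var=SLOT sh -c '
    resource=/tmp/resource-$SLOT      # hypothetical per-slot resource
    echo "consuming task $1 with resource $SLOT ($resource)"
' sh
Because xargs never runs two children with the same slot value at the same time, no two concurrent tasks ever touch the same resource.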
449,498 | Is it possible to call a function which is declared below in bash? Example if [ "$input" = "yes" ]; then
YES_FUNCTION
elif [ "$input" = "no" ]; then
NO_FUNCTION
else
exit 0;
fi
YES_FUNCTION()
{
.....
.....
}
NO_FUNCTION()
{
.....
.....
} | Like others have said, you can't do that. But if you want to arrange the code into one file so that the main program is at the top of the file, and other functions are defined below, you can do it by having a separate main function. E.g. #!/bin/sh
main() {
if [ "$1" = yes ]; then
do_task_this
else
do_task_that
fi
}
do_task_this() {
...
}
do_task_that() {
...
}
main "$@"; exit When we call main at the end of file, all functions are already defined. Explicitly passing "$@" to main is required to make the command line arguments of the script visible in the function. The explicit exit on the same line as the call to main is not mandatory, but can be used to prevent a running script from getting messed up if the script file is modified. Without it, the shell would try to continue reading commands from the script file after main returns. (see How to read the whole shell script before executing it? ) | {
"source": [
"https://unix.stackexchange.com/questions/449498",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/270935/"
]
} |
450,008 | Python packages are frequently hosted in many distribution's repositories. After reading this tutorial, specifically the section titled "Do you really want to do this" I have avoided using pip and preferred to use the system repository, only resorting to pip when I need to install a package not in the repository. However, because this is an inconsistent installation method, would it be better to only use pip? What are the benefits/detractors to using pip over the system's own repository for packages that are available in both places? The link I included states The advantage of always using standard Debian / NeuroDebian packages, is that the packages are carefully tested to be compatible with each other. The Debian packages record dependencies with other libraries so you will always get the libraries you need as part of the install. I use arch. Is this the case with other package-management systems besides apt? | The biggest disadvantage I see with using pip to install Python modules on your system, either as system modules or as user modules, is that your distribution’s package management system won’t know about them. This means that they won’t be used for any other package which needs them, and which you may want to install in the future (or which might start using one of those modules following an upgrade); you’ll then end up with both pip - and distribution-managed versions of the modules, which can cause issues (I ran into yet another instance of this recently). So your question ends up being an all-or-nothing proposition: if you only use pip for Python modules, you can no longer use your distribution’s package manager for anything which wants to use a Python module... The general advice given in the page you linked to is very good: try to use your distribution’s packages as far as possible, only use pip for modules which aren’t packaged, and when you do, do so in your user setup and not system-wide. Use virtual environments as far as possible, in particular for module development. Especially on Arch, you shouldn’t run into issues caused by older modules; even on distributions where that can be a problem, virtual environments deal with it quite readily. It’s always worth considering that a distribution’s library and module packages are packaged primarily for the use of other packages in the distribution; having them around is a nice side-effect for development using those libraries and modules, but that’s not the primary use-case. | {
"source": [
"https://unix.stackexchange.com/questions/450008",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/288980/"
]
} |
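A minimal example of the workflow the answer recommends, shown with Arch's pacman since the question mentions Arch; the package and module names are only examples:
sudo pacman -S python-requests                  # packaged module: let the package manager own it
python -m venv ~/.venvs/myproject               # unpackaged modules: keep them in a project venv
source ~/.venvs/myproject/bin/activate
pip install some-unpackaged-module
deactivate
This keeps pip-installed modules out of the paths the distribution's own packages use, which is exactly the conflict described above.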
450,229 | The official Checkpoint out command line tool from CheckPoint, for setting up a SSL Network Extender VPN is not longer working from the Linux command line. It is also no longer actively supported by CheckPoint. However, there is a promising project, that tries to replicate the Java applet for authentication, that talks with the snx command line utility, called snxconnect . I was trying to put snxconnect text utility to work in Debian Buster, doing: sudo pip install snxvpn and export PYTHONHTTPSVERIFY=0
snxconnect -H checkpoint.hostname -U USER However, it was mostly dying either with an HTTP error of: HTTP/1.1 301 Moved Permanently: or: Got HTTP response: HTTP/1.1 302 Found or: Unexpected response, try again. What to do about it? PS. The EndPoint Security VPN official client is working well both in a Mac High Sierra and Windows 10 Pro. | SNX build 800007075 from 2012, used to support the CheckPoint VPN from the Linux command line. So I tested it, and lo and behold, it still works with the latest distributions and kernel(s) 4.x/5.x. So ultimately, my other answer in this thread holds true, if you cannot get hold of SNX build 800007075 or if that specific version of SNX stops working with the current Linux versions (it might happen in a near future) or if you need OTP support. Presently, the solution is then installing this specific last version of SNX that still supports doing the VPN from the command line. To install snx build 800007075, get it from: wget https://starkers.keybase.pub/snx_install_linux30.sh?dl=1 -O snx_install.sh For Debian and Debian-based 64-bit systems like Ubuntu and Linux Mint, you might need to add the 32-bit architecture: sudo dpkg --add-architecture i386
sudo apt-get update I had to install the following 32-bit packages: sudo apt-get install libstdc++5:i386 libx11-6:i386 libpam0g:i386 Run then the snx installation script: chmod a+rx snx_install.sh
sudo ./snx_install.sh You will now have a /usr/bin/snx 32-bit client binary executable. Check if any dynamic libraries are missing with: sudo ldd /usr/bin/snx You can only proceed to the following points when all the dependencies are satisfied. You might need to first run snx -s CheckpointURLFQDN -u USER manually, before scripting any automatic use, for the VPN signature to be saved at /etc/snx/USER.db . Before using it, you create a ~/.snxrc file, using your regular user (not root) with the following contents: server IP_address_of_your_VPN
username YOUR_USER
reauth yes For connecting, type snx $ snx
Check Point's Linux SNX
build 800007075
Please enter your password: SNX - connected. Session parameters: Office Mode IP : 10.x.x.x
DNS Server : 10.x.x.x
Secondary DNS Server: 10.x.x.x
DNS Suffix : xxx.xx, xxx.xx
Timeout : 24 hours If you understand the security risks of hard coding a VPN password in a script, you also can use it as: echo 'Password' | snx For closing/disconnecting the VPN, while you may stop/kill snx , the better and official way is issuing the command: $snx -d
SNX - Disconnecting...
done. see also Linux Checkpoint SNX tool configuration issues for some clarifications about which snx version to use. If automating the login and accepting a new signature (and understanding the security implications), I wrote an expect script, which I called the script snx_login.exp ; not very secure, however you can automate your login, calling it with the password as an argument: #!/usr/bin/expect
spawn /usr/bin/snx
set password [lindex $argv 0]
expect " ?assword: "
send -- "$password\r" expect {
"o:" {
send "y\r"
exp_continue
}
eof
} PS. Beware snx does not support OTP alone, you will have to use the snxconnect script present on the other answer if using it. PPS @gibies called to my attention that using an etoken, the password field gets the password plus the appended etoken and not a fixed password. | {
"source": [
"https://unix.stackexchange.com/questions/450229",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138261/"
]
} |
450,239 | When attempting to source a file, wouldn't you want an error saying the file doesn't exist so you know what to fix? For example, nvm recommends adding this to your profile/rc: export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm With above, if nvm.sh doesn't exist, you'll get a "silent error". But if you try . "$NVM_DIR/nvm.sh" , the output will be FILE_PATH: No such file or directory . | In POSIX shells, . is a special builtin, so its failure causes the shell to exit (in some shells like bash , it's only done when in POSIX mode). What qualifies as an error depends on the shell. Not all of them exit upon a syntax error when parsing the file, but most would exit when the sourced file can't be found or opened. I don't know of any that would exit if the last command in the sourced file returned with a non-zero exit status (unless the errexit option is on of course). Here doing: [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" Is a case where you want to source the file if it's there, and don't if it's not (or is empty here with -s ). That is, it should not be considered an error (fatal error in POSIX shells) if the file is not there, that file is considered an optional file. It would still be a (fatal) error if the file was not readable or was a directory or (in some shells) if there was a syntax error while parsing it which would be real error conditions that should be reported. Some would argue that there's a race condition. But the only thing it means would be that the shell would exit with an error if the file is removed in between the [ and . , but I'd argue it's valid to consider it an error that this fixed path file would suddenly vanish while the script is running. On the other hand, command . "$NVM_DIR/nvm.sh" 2> /dev/null where command ¹ removes the special attribute to the . command (so it doesn't exit the shell on error) would not work as: it would hide . 's errors but also the errors of the commands run in the sourced file it would also hide real error conditions like the file having the wrong permissions. Other common syntaxes (see for instance grep -r /etc/default /etc/init* on Debian systems for the init scripts that haven't been converted to systemd yet (where EnvironmentFile=-/etc/default/service is used to specify an optional environment file instead)) include: [ -e "$file" ] && . "$file" Check the file it's there, still source it if it's empty. Still fatal error if it can't be opened (even though it's there, or was there). You may see more variants like [ -f "$file" ] (exists and is a regular file), [ -r "$file" ] (is readable), or combinations of those. [ ! -e "$file" ] || . "$file" A slightly better version. Makes it clearer that the file not existing is an OK case. That also means the $? will reflect the exit status of the last command run in $file (in the previous case, if you get 1 , you don't know whether it's because $file didn't exist or if that command failed). command . "$file" Expect the file to be there, but don't exit if it can't be interpreted. [ ! -e "$file" ] || command . "$file" Combination of the above: it's OK if the file is not there, and for POSIX shells, failures to open (or parse) the file are reported but are not fatal (which may be more desirable for ~/.profile ). ¹ Note: In zsh however, you can't use command like that unless in sh emulation; note that in the Korn shell, source is actually an alias for command . , a non-special variant of . | {
"source": [
"https://unix.stackexchange.com/questions/450239",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/273817/"
]
} |
450,248 | I'm planning on doing a migration of my Debian installation from one disk to another in the near future. As a part of that, I'm thinking about setting the file systems up differently, for future-proofing as well as for simplifying the setup. My current setup is a one-device RAID1 LVM (I originally intended to set up mirroring of the system disk, but never got around to actually doing that) on a partition on a SSD. That RAID1 in turn holds the ext4 root file system, with /opt plus parts of /usr and /var separated onto ZFS storage. Particularly, /boot is part of the root file system, and I'm booting using old-style MBR using GRUB 2. The idea is to have a large root file system with a *nix-esque file system (probably ext4 to begin with), and to separate out the parts that have special needs. I'd like to leave open the possibility of migrating to UEFI boot later, possibly including a migration to GPT, without needing to move things around. (Backup/repartition/restore is another matter, and will likely be needed for migrating from MBR to GPT, but I'll probably be getting a new disk again before that becomes an issue.) I'd also like to have the option to migrate the root file system to ZFS later, or at least to set up dm-verity for data integrity verification. (Yes, it'll be a bit of a headache to get everything about that right, especially semi-in-place. That'll be a matter for a later day; their only consideration for this question is in terms of later options.) This all seems to make an obvious case for separating / , /boot and the FAT32 /boot/efi (the last of which may initially be empty), in addition to those that I have already separated from the root file system. But are there others? Which system file systems, backed by persistent storage, should be separated from the root file system and why on a modern-day Linux installation? Do any of these file systems need to go onto specific partition locations when using MBR, or are their locations arbitrary? For example, would /boot/efi need to go onto the first primary partition or something like that? | In POSIX shells, . is a special builtin, so its failure causes the shell to exit (in some shells like bash , it's only done when in POSIX mode). What qualifies as an error depends on the shell. Not all of them exit upon a syntax error when parsing the file, but most would exit when the sourced file can't be found or opened. I don't know of any that would exit if the last command in the sourced file returned with a non-zero exit status (unless the errexit option is on of course). Here doing: [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" Is a case where you want to source the file if it's there, and don't if it's not (or is empty here with -s ). That is, it should not be considered an error (fatal error in POSIX shells) if the file is not there, that file is considered an optional file. It would still be a (fatal) error if the file was not readable or was a directory or (in some shells) if there was a syntax error while parsing it which would be real error conditions that should be reported. Some would argue that there's a race condition. But the only thing it means would be that the shell would exit with an error if the file is removed in between the [ and . , but I'd argue it's valid to consider it an error that this fixed path file would suddenly vanish while the script is running. On the other hand, command . "$NVM_DIR/nvm.sh" 2> /dev/null where command ¹ removes the special attribute to the . 
command (so it doesn't exit the shell on error) would not work as: it would hide . 's errors but also the errors of the commands run in the sourced file it would also hide real error conditions like the file having the wrong permissions. Other common syntaxes (see for instance grep -r /etc/default /etc/init* on Debian systems for the init scripts that haven't been converted to systemd yet (where EnvironmentFile=-/etc/default/service is used to specify an optional environment file instead)) include: [ -e "$file" ] && . "$file" Check the file it's there, still source it if it's empty. Still fatal error if it can't be opened (even though it's there, or was there). You may see more variants like [ -f "$file" ] (exists and is a regular file), [ -r "$file" ] (is readable), or combinations of those. [ ! -e "$file" ] || . "$file" A slightly better version. Makes it clearer that the file not existing is an OK case. That also means the $? will reflect the exit status of the last command run in $file (in the previous case, if you get 1 , you don't know whether it's because $file didn't exist or if that command failed). command . "$file" Expect the file to be there, but don't exit if it can't be interpreted. [ ! -e "$file" ] || command . "$file" Combination of the above: it's OK if the file is not there, and for POSIX shells, failures to open (or parse) the file are reported but are not fatal (which may be more desirable for ~/.profile ). ¹ Note: In zsh however, you can't use command like that unless in sh emulation; note that in the Korn shell, source is actually an alias for command . , a non-special variant of . | {
"source": [
"https://unix.stackexchange.com/questions/450248",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2465/"
]
} |
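A minimal sketch of the optional-file pattern recommended in the answer above (the path is hypothetical):
#!/bin/sh
conf=./optional.conf                 # hypothetical optional configuration file
[ ! -s "$conf" ] || . "$conf"        # fine if missing or empty; real read/parse errors still abort
echo "still running; sourced $conf only if it was present and non-empty"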
450,480 | I performed a git commit command and it gave me the following reply: 7 files changed, 93 insertions(+), 15 deletions(-)
mode change 100644 => 100755 assets/internal/fonts/icomoon.svg
mode change 100644 => 100755 assets/internal/fonts/icomoon.ttf
mode change 100644 => 100755 assets/internal/fonts/icomoon.woff I know files can have user / group / other rwx permissions and those can be expressed as three bytes, like "644" or "755". But why is git showing six bytes here? I've read the following articles but didn't find an answer: Wikipedia's article on "File system permissions" How do I remove files saying “old mode 100755 new mode 100644” from unstaged changes in Git? Unix permissions made easy Chmod permissions (flags) explained: 600, 0600, 700, 777, 100 etc.. | The values shown are the 16-bit file modes as stored by Git , following the layout of POSIX types and modes : 32-bit mode, split into (high to low bits)
4-bit object type
valid values in binary are 1000 (regular file), 1010 (symbolic link)
and 1110 (gitlink)
3-bit unused
9-bit unix permission. Only 0755 and 0644 are valid for regular files.
Symbolic links and gitlinks have value 0 in this field. That file doesn’t mention directories; they are represented using object type 0100. Each digit in the six-digit value is in octal, representing three bits; 16 bits thus need six digits, the first of which only represents one bit: Type|---|Perm bits
1000 000 111101101
1 0 0 7 5 5
1000 000 110100100
1 0 0 6 4 4 Git doesn’t store arbitrary modes, only a subset of the values are allowed, from the usual POSIX types and modes (in octal, 12 for a symbolic link, 10 for a regular file, 04 for a directory) to which git adds 16 for Git links. The mode is appended, using four octal digits. For files, you’ll only ever see 100755 or 100644 (although 100664 is also technically possible); directories are 040000 (permissions are ignored), symbolic links 120000. The set-user-ID, set-group-ID and sticky bits aren’t supported at all (they would be stored in the unused bits). See also this related answer . | {
"source": [
"https://unix.stackexchange.com/questions/450480",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28160/"
]
} |
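A quick way to see these stored modes and decode them yourself, assuming you are inside a git repository with at least one commit (a sketch, not part of the original answer):
git ls-tree HEAD | head -n 3          # first column is the stored mode, e.g. 100644 or 100755
mode=100755
printf 'object type: %o, permission bits: %04o\n' $((8#$mode >> 12)) $((8#$mode & 8#7777))
# -> object type: 10, permission bits: 0755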
450,489 | I have a 588Ko file, and I want to extract bytes from 0x7E8D6 to 0x8AD5D.
I tried : dd if=file of=result bs=50311 count=1 skip=518358 50311 stands for 0x8AD5D - 0x7E8D6 518358 stands for 0x7E8D6 (from where I want to cut) dd tells me that it can't skip to the specified offset.
What can I do? Is there any other utility to do it? | | {
"source": [
"https://unix.stackexchange.com/questions/450489",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/296039/"
]
} |
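One way to extract that byte range, sketched here since it is not covered above; the first form assumes GNU dd (which accepts byte-granular skip/count flags), the second uses plain tail/head:
dd if=file of=result skip=$((0x7E8D6)) count=$((0x8AD5D - 0x7E8D6)) iflag=skip_bytes,count_bytes
tail -c +$((0x7E8D6 + 1)) file | head -c $((0x8AD5D - 0x7E8D6)) > result   # tail -c +N is 1-based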
450,877 | Brian Kernighan explains in this video the early Bell Labs attraction to small languages/programs being based on memory limitations A big machine would be 64 k-bytes--K, not M or G--and so that meant any individual program could not be very big, and so there was a natural tendency to write small programs, and then the pipe mechanism, basically input output redirection, made it possible to link one program to another. But I don't understand how this could limit memory usage considering the fact that the data has to be stored in RAM to transmit between programs. From Wikipedia : In most Unix-like systems, all processes of a pipeline are started at the same time [emphasis mine] , with their streams appropriately connected, and managed by the scheduler together with all other processes running on the machine. An important aspect of this, setting Unix pipes apart from other pipe implementations, is the concept of buffering: for example a sending program may produce 5000 bytes per second, and a receiving program may only be able to accept 100 bytes per second, but no data is lost. Instead, the output of the sending program is held in the buffer. When the receiving program is ready to read data, then next program in the pipeline reads from the buffer. In Linux, the size of the buffer is 65536 bytes (64KB). An open source third-party filter called bfr is available to provide larger buffers if required. This confuses me even more, as this completely defeats the purpose of small programs (though they would be modular up to a certain scale). The only thing I can think of as a solution to my first question (the memory limitations being problematic dependent upon the size data) would be that large data sets simply weren't computed back then and the real problem pipelines were meant to solve was the amount of memory required by the programs themselves. But given the bolded text in the Wikipedia quote, even this confuses me: as one program is not implemented at a time. All this would make a great deal of sense if temp files were used, but it's my understanding that pipes do not write to disk (unless swap is used). Example: sed 'simplesubstitution' file | sort | uniq > file2 It's clear to me that sed is reading in the file and spitting it out on a line by line basis. But sort , as BK states in the linked video, is a full stop, so the all of the data has to be read into memory (or does it?), then it's passed on to uniq , which (to my mind) would be a one-line-at-a-time program. But between the first and second pipe, all the data has to be in memory, no? | The data doesn’t need to be stored in RAM. Pipes block their writers if the readers aren’t there or can’t keep up; under Linux (and most other implementations, I imagine) there’s some buffering but that’s not required. As mentioned by mtraceur and JdeBP (see the latter’s answer ), early versions of Unix buffered pipes to disk, and this is how they helped limit memory usage: a processing pipeline could be split up into small programs, each of which would process some data, within the limits of the disk buffers. Small programs take less memory, and the use of pipes meant that processing could be serialised: the first program would run, fill its output buffer, be suspended, then the second program would be scheduled, process the buffer, etc. 
Modern systems are orders of magnitude larger than the early Unix systems, and can run many pipes in parallel; but for huge amounts of data you’d still see a similar effect (and variants of this kind of technique are used for “big data” processing). In your example, sed 'simplesubstitution' file | sort | uniq > file2 sed reads data from file as necessary, then writes it as long as sort is ready to read it; if sort isn’t ready, the write blocks. The data does indeed live in memory eventually, but that’s specific to sort , and sort is prepared to deal with any issues (it will use temporary files it the amount of data to sort is too large). You can see the blocking behaviour by running strace seq 1000000 -1 1 | (sleep 120; sort -n) This produces a fair amount of data and pipes it to a process which isn’t ready to read anything for the first two minutes. You’ll see a number of write operations go through, but very quickly seq will stop and wait for the two minutes to elapse, blocked by the kernel (the write system call waits). | {
"source": [
"https://unix.stackexchange.com/questions/450877",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/288980/"
]
} |
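The blocking described above is easy to observe in bash; this sketch shows the writer being stopped by SIGPIPE once the reader goes away (141 = 128 + SIGPIPE):
yes | head -n 1
echo "writer exit status: ${PIPESTATUS[0]}"   # typically 141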
450,944 | Is it possible to format this sample: for i in string1 string2 stringN
do
echo $i
done to something similar to this: for i in
string1
string2
stringN
do
echo $i
done EDIT: Sorry for confusion, didn't realize that there was different methods of executing script - sh <scriptname> versus bash <scriptname> and also this thing which I cannot name right now - #!/bin/sh and #!/bin/bash :) | Using arrays in bash can aid readability: this array syntax allows arbitrary whitespace between words. strings=(
string1
string2
"string with spaces"
stringN
)
for i in "${strings[@]}"; do
echo "$i"
done | {
"source": [
"https://unix.stackexchange.com/questions/450944",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
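If you would rather keep the plain for ... in list from the question, line continuations also work in both sh and bash; quoting strings with spaces is just more awkward than with the array (a sketch):
for i in \
    string1 \
    string2 \
    stringN
do
    echo "$i"
done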
451,085 | Along side the question " Username is not in the sudoers file. This incident will be reported " that explained the programical aspects of the error and suggested some workarounds, I want to know: what does this error mean? X is not in the sudoers file. This incident will be reported. The former part of the error explains, clearly, the error. But the second part says that "This error will be reported"?! But why? Why the error will be reported and where? To whom? I'm both user and administrator and didn't receive any report :)! | The administrator(s) of a system are likely to want to know when a non-privileged user tries but fails to execute commands using sudo . If this happens, it could be a sign of a curious legitimate user just trying things out, or a hacker trying to do "bad things". Since sudo by itself can not distinguish between these, failed attempts to use sudo are brought to the attention of the admins. Depending on how sudo is configured on your system, any attempt (successful or not) to use sudo will be logged. Successful attempts are logged for audit purposes (to be able to keep track of who did what when), and failed attempts for security. On a fairly vanilla Ubuntu setup that I have, this is logged in /var/log/auth.log . If a user gives the wrong password three times, or if they are not in the sudoers file, an email is sent to root (depending on the configuration of sudo , see below). This is what's meant by "this incident will be reported". The email will have a prominent subject: Subject: *** SECURITY information for thehostname *** The body of the message contains the relevant lines from the logfile, for example thehostname : Jun 22 07:07:44 : nobody : user NOT in sudoers ; TTY=console ; PWD=/some/path ; USER=root ; COMMAND=/bin/ls (Here, the user nobody tried to run ls through sudo as root, but failed since they were not in the sudoers file). No email is sent if (local) mail has not been set up on the system. All of these things are configurable as well, and that local variations in the default configuration may differ between Unix variants. Have a look at the mail_no_user setting (and related mail_* settings) in the sudoers manual (my emphasis below): mail_no_user If set, mail will be sent to the mailto user if the invoking user is not in the sudoers file. This flag is on by default . | {
"source": [
"https://unix.stackexchange.com/questions/451085",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79615/"
]
} |
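To see where such reports end up on your own machine (paths and tools vary by distribution; these assume a Debian/Ubuntu-style syslog file or a systemd journal):
sudo grep -i sudo /var/log/auth.log | tail -n 5
sudo journalctl SYSLOG_IDENTIFIER=sudo -n 5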
451,207 | I've created a self-signed certificate for foo.localhost using a Let's Encrypt recommendation using this Makefile: include ../.env
configuration = csr.cnf
certificate = self-signed.crt
key = self-signed.key
.PHONY: all
all: $(certificate)
$(certificate): $(configuration)
openssl req -x509 -out $@ -keyout $(key) -newkey rsa:2048 -nodes -sha256 -subj '/CN=$(HOSTNAME)' -extensions EXT -config $(configuration)
$(configuration):
printf "[dn]\nCN=$(HOSTNAME)\n[req]\ndistinguished_name = dn\n[EXT]\nsubjectAltName=DNS:$(HOSTNAME)\nkeyUsage=digitalSignature\nextendedKeyUsage=serverAuth" > $@
.PHONY: clean
clean:
$(RM) $(configuration) I've then assigned that to a web server. I've verified that the server returns the relevant certificate: $ openssl s_client -showcerts -connect foo.localhost:8443 < /dev/null
CONNECTED(00000003)
depth=0 CN = foo.localhost
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 CN = foo.localhost
verify error:num=21:unable to verify the first certificate
verify return:1
---
Certificate chain
0 s:/CN=foo.localhost
i:/CN=foo.localhost
-----BEGIN CERTIFICATE-----
[…]
-----END CERTIFICATE-----
---
Server certificate
subject=/CN=foo.localhost
issuer=/CN=foo.localhost
---
No client certificate CA names sent
Peer signing digest: SHA512
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 1330 bytes and written 269 bytes
Verification error: unable to verify the first certificate
---
New, TLSv1.2, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES128-GCM-SHA256
Session-ID: […]
Session-ID-ctx:
Master-Key: […]
PSK identity: None
PSK identity hint: None
SRP username: None
TLS session ticket:
[…]
Start Time: 1529622990
Timeout : 7200 (sec)
Verify return code: 21 (unable to verify the first certificate)
Extended master secret: no
---
DONE How do I make cURL trust it without modifying anything in /etc? --cacert does not work, presumably because there is no CA: $ curl --cacert tls/foo.localhost.crt 'https://foo.localhost:8443/'
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above. The goal is to enable HTTPS during development: I can't have a completely production-like certificate without a lot of work to enable DNS verification in all development environments. Therefore I have to use a self-signed certificate. I still obviously want to make my development environment as similar as possible to production, so I can't simply ignore any and all certificate issues. curl -k is like catch (Exception e) {} in this case - nothing at all like a browser talking to a web server. In other words, when running curl [something] https://project.local/api/foo I want to be confident that if TLS is configured properly except for having a self-signed certificate the command will succeed and if I have any issues with my TLS configuration except for having a self-signed certificate the command will fail. Using HTTP or --insecure fails the second criterion. | Try -k : curl -k https://yourhost/ It should "accept" self-signed certificates | {
"source": [
"https://unix.stackexchange.com/questions/451207",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3645/"
]
} |
451,479 | input json: {
"id": "3885",
"login": "050111",
"lastLoginTime": 1529730115000,
"lastLoginFrom": "192.168.66.230"
}
{
"id": "3898",
"login": "050112",
"lastLoginTime": null,
"lastLoginFrom": null
} I want to get output for login, lastLoginTime and lastLoginFrom in tabulator delimited format: 050111 1529730115000 192.168.66.230
050112 - - with below jq filter I get on output no "null" values which I could replace with "-" $ jq -r '.|[.login, .lastLoginTime, .lastLoginFrom]|@tsv' test_json
050111 1529730115000 192.168.66.230
050112 Is there any other way to get "-" printed for such null values? | Use the alternative operator // , so: $ jq -r '.|[.login, .lastLoginTime // "-" , .lastLoginFrom // "-" ]|@tsv' test_json
050111 1529730115000 192.168.66.230
050112 - - | {
"source": [
"https://unix.stackexchange.com/questions/451479",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/274554/"
]
} |
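The same alternative operator can be applied to every field at once with map, which is handy when there are many columns (a sketch; @tsv itself cannot emit nulls, which is why the substitution is needed):
jq -r '[.login, .lastLoginTime, .lastLoginFrom] | map(. // "-") | @tsv' test_json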
451,496 | json input: [
{
"name": "cust1",
"grp": [
{
"id": "46",
"name": "BA2"
},
{
"id": "36",
"name": "GA1"
},
{
"id": "47",
"name": "NA1"
},
{
"id": "37",
"name": "TR3"
},
{
"id": "38",
"name": "TS1"
}
]
}
] expected, on output are two lines: name: cust1
groups: BA2 GA1 NA1 TR3 TS1 I was trying to build filter without success.. $ jq -r '.[]|"name:", .name, "groups:", (.grp[]|[.name]|@tsv)' test_json
name:
cust1
groups:
BA2
GA1
NA1
TR3
TS1 Update:
the solution provided below works fine, but I did not predict the case when no groups exist: [
{
"name": "cust1",
"grp": null
}
] in such case, the solution provided returns error: $ jq -jr '.[]|"name:", " ",.name, "\n","groups:", (.grp[]|" ",.name),"\n"' test_json2
name: cust1
jq: error (at test_json2:6): Cannot iterate over null (null) any workaround appreciated. | Use the "join", -j $ jq -jr '.[]|"name:", " ",.name, "\n","groups:", (.grp[]|" ",.name),"\n"' test_json
name: cust1
groups: BA2 GA1 NA1 TR3 TS1 And with a place holder $ jq -jr '.[]|"name:", " ",.name, "\n","groups:", (.grp//[{"name":"-"}]|.[]|" ",.name),"\n"' test_json
name: cust1
groups: - | {
"source": [
"https://unix.stackexchange.com/questions/451496",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/274554/"
]
} |
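An alternative sketch that builds each line with string concatenation and join, and still prints "-" when grp is null:
jq -r '.[] | "name: " + .name, "groups: " + ((.grp // [{}]) | map(.name // "-") | join(" "))' test_json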
451,579 | I have thousands of unl files named something like this cbs_cdr_vou_20180624_603_126_239457.unl . I wanted to print all the lines from those files by using following command. but its giving me only file names. I don't need file names, I just need contents from those files. find -type f -name 'cbs_cdr_vou_20180615*.unl' > /home/fifa/cbs/test.txt Current Output: ./cbs_cdr_vou_20180615_603_129_152023.unl
./cbs_cdr_vou_20180615_603_128_219001.unl
./cbs_cdr_vou_20180615_602_113_215712.unl
./cbs_cdr_vou_20180615_602_120_160466.unl
./cbs_cdr_vou_20180615_603_125_174428.unl
./cbs_cdr_vou_20180615_601_101_152369.unl
./cbs_cdr_vou_20180615_603_133_193306.unl Expected output: 8801865252020|200200|20180613100325|;
8801837463298|200200|20180613111209|;
8801845136955|200200|20180613133708|;
8801845205889|200200|20180613141140|;
8801837612072|200200|20180613141525|;
8801877103875|200200|20180613183008|;
8801877167964|200200|20180613191607|;
8801845437651|200200|20180613200415|;
8801845437651|200200|20180613221625|;
8801839460670|200200|20180613235936|; Please note that, for cat command I'm getting error like -bash: /bin/logger: Argument list too long that's why wanted to use find instead of cat command. | The find utility deals with pathnames. If no specific action is mentioned in the find command for the found pathnames, the default action is to output them. You may perform an action on the found pathnames, such as running cat , by adding -exec to the find command: find . -type f -name 'cbs_cdr_vou_20180615*.unl' -exec cat {} + >/home/fifa/cbs/test.txt This would find all regular files in or under the current directory, whose names match the given pattern. For as large batches of these as possible, cat would be called to concatenate the contents of the files. The output would go to /home/fifa/cbs/test.txt . Related: Understanding the -exec option of `find` | {
"source": [
"https://unix.stackexchange.com/questions/451579",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131420/"
]
} |
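An equivalent approach using xargs; -print0/-0 keeps it safe for odd filenames and also batches the arguments, so it likewise avoids "Argument list too long" (a sketch):
find . -type f -name 'cbs_cdr_vou_20180615*.unl' -print0 | xargs -0 cat > /home/fifa/cbs/test.txt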
451,778 | I accidentially destroyed my cd command. I tried to automatically execute ls after cd is called. I found a post saying that I have to execute alias cd='/bin/cd && /bin/ls' , but now I get -bash: /bin/cd: No such file or directory and can't change directoy anymore. | Your system (like many Unix systems) does not have an external cd command (at least not at that path). Even if it had one, the ls would give you the directory listing of the original directory. An external command can never change directory for the calling process (your shell) 1 . Remove the alias from the environment with unalias cd (and also remove its definition from any shell initialization files that you may have added it to). With a shell function, you can get it to work as cd ordinarily does, with an extra invocation of ls at the end if the cd succeeded: cd () {
command cd "$@" && ls -lah
} or, cd () { command cd "$@" && ls -lah; } This would call the cd command built into your shell with the same command line arguments that you gave the function. If the change of directory was successful, the ls would run. The command command stops the shell from executing the function recursively. The function definition (as written above) would go into your shell's startup file. With bash , this might be ~/.bashrc . The function definition would then be active in the next new interactive shell session . If you want it to be active now , then execute the function definition as-is at the interactive shell prompt, which will define it within your current interactive session. 1 On systems where cd is available as an external command, this command also does not change directory for the calling process. The only real use for such a command is to provide POSIX compliance and for acting as a test of whether changing directory to a particular one would be possible . | {
"source": [
"https://unix.stackexchange.com/questions/451778",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124191/"
]
} |
452,011 | I have a huge log file compressed in .gz format and I want to just read the first line of it without uncompressing it to just check the date of the oldest log in the file. The logs are of the form: YYYY-MM-DD Log content asnsenfvwen eaifnesinrng
YYYY-MM-DD Log content asnsenfvwen eaifnesinrng
YYYY-MM-DD Log content asnsenfvwen eaifnesinrng I just want to read the date in the first line which I would do like this for an uncompressed file: read logdate otherstuff < logfile.gz
echo $logdate Using zcat is taking too long. | Piping zcat ’s output to head -n 1 will decompress a small amount of data, guaranteed to be enough to show the first line, but typically no more than a few buffer-fulls (96 KiB in my experiments): zcat logfile.gz | head -n 1 Once head has finished reading one line, it closes its input, which closes the pipe, and zcat stops after receiving a SIGPIPE (which happens when it next tries to write into the closed pipe). You can see this by running (zcat logfile.gz; echo $? >&2) | head -n 1 This will show that zcat exits with code 141, which indicates it stopped because of a SIGPIPE (13 + 128). You can add more post-processing, e.g. with AWK, to only extract the date: zcat logfile.gz | awk '{ print $1; exit }' (On macOS you might need to use gzcat rather than zcat to handle gzipped files.) | {
"source": [
"https://unix.stackexchange.com/questions/452011",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/295951/"
]
} |
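If you want to keep the read-based form from the question, bash process substitution works too (a sketch):
read -r logdate _ < <(zcat logfile.gz)
echo "$logdate"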
452,723 | Let's suppose I've declared the following variables: $ var='$test'
$ test="my string" If I print their contents I see the following: $ echo $var
$test
$ echo $test
my string I'd like to find a way to print the content of the content of $var (which is the content of $test ). So I tried to do the following: $ echo $(echo $var)
$test But here the result is $test and not "my string" ... Is it possible to print the content of the content of variables using bash? | You can accomplish this using bash's indirect variable expansion (as long as it's okay for you to leave out the $ from your reference variable): $ var=test
$ test="my string"
$ echo "$var"
test
$ echo "${!var}"
my string 3.5.3 Shell Parameter Expansion | {
"source": [
"https://unix.stackexchange.com/questions/452723",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103357/"
]
} |
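With bash 4.3 or newer, namerefs are another option; note that the reference variable then holds just the name, without the leading $ (a sketch):
var=test
declare -n ref=$var
test="my string"
echo "$ref"        # -> my string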
452,757 | As most of you have done many times, it's convenient to view long text using less : some_command | less Now its stdin is connected to a pipe (FIFO). How can it still read commands like up/down/quit? | As mentioned by William Pursell , less reads the user’s keystrokes from the terminal. It explicitly opens /dev/tty , the controlling terminal; that gives it a file descriptor, separate from standard input, from which it can read the user’s interactive input. It can simultaneously read data to display from its standard input if necessary. (It could also write directly to the terminal if necessary.) You can see this happen by running some_command | strace -o less.trace -e open,read,write less Move around the input, exit less , and look at the contents of less.trace : you’ll see it open /dev/tty , and read from both file descriptor 0 and whichever one was returned when it opened /dev/tty (likely 3). This is common practice for programs wishing to ensure they’re reading from and writing to the terminal. One example is SSH, e.g. when it asks for a password or passphrase. As explained by schily , if /dev/tty can’t be opened, less will read from its standard error (file descriptor 2). less ’s use of /dev/tty was introduced in version 177, released on April 2, 1991. If you try running cat /dev/tty | less , as suggested by Hagen von Eitzen , less will succeed in opening /dev/tty but won’t get any input from it until cat closes it. So you’ll see the screen blank, and nothing else until you press Ctrl C to kill cat (or kill it in some other way); then less will show whatever you typed while cat was running, and allow you to control it. | {
"source": [
"https://unix.stackexchange.com/questions/452757",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/211239/"
]
} |
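Any script can use the same trick the answer describes: read the keyboard from /dev/tty even while stdin is a pipe (a sketch; it needs a real terminal):
printf 'piped data\n' | {
  read -r line                  # comes from the pipe
  read -r answer < /dev/tty     # comes from the keyboard
  echo "pipe said: $line; you typed: $answer"
}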
452,865 | Given that zsh can clobber all files given the command: >* I'm thinking that setting the option noclobber would be a good idea. I can always use >| file if I want to use the default clobber behaviour in both bash and zsh. (zsh also allows the alternative syntax >!file ). I'm guessing noclobber is unset by default because of POSIX compatibility, but just to be sure: Are there any downsides to setting noclobber ? Is there anyway to set noclobber only for the interactive shell? | The reason noclobber is not set by default is tradition. As a matter of user interface design, it's a good idea to make “create this new file” the easy action and to put an extra hurdle the more dangerous action “either create a new file or overwrite an existing file”. Thus noclobber is a good idea ( > to create a new file, >| to potentially overwrite an existing file) and it would likely have been the default if the shell had been designed a few decades later. I strongly recommend to use the following in your interactive shell startup file ( .bashrc or .zshrc ): set -o noclobber
alias cp='cp -i'
alias mv='mv -i' In each case (redirection, copying, moving), the goal is to add an extra hurdle when the operation may have the side effect of erasing some existing data, even though erasing existing data is not the primary goal of the operation. I don't put rm -i in this list because erasing data is the primary goal of rm . Do note that noclobber and -i are safety nets . If they trigger, you've done something wrong . So don't use them as an excuse to not check what you're overwriting! The point is that you should have checked that the output file doesn't exist. If you're told file exists: foo or overwrite 'foo'? , it means you made a mistake and you should feel bad and be more careful. In particular, don't get into the habit of saying y if prompted to overwrite (arguably, the aliases should be alias cp='yes n | cp -i' mv='yes n | mv -i' , but pressing Ctrl + C makes the output look better): if you did mean to overwrite, cancel the command, move or remove the output file, and run the command again. It's also important not to get into the habit of triggering those safeties because if you do, one day you'll be on a machine which doesn't have your configuration, and you'll lose data because the protections you were counting on aren't there. noclobber will only be set for interactive shells, since .bashrc or .zshrc is only read by interactive shells. Of course you shouldn't change shell options in a way that would affect scripts, since it could break those scripts. | {
"source": [
"https://unix.stackexchange.com/questions/452865",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
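A short interactive demonstration of the behaviour described above (the exact error wording varies by shell; this is bash's):
set -o noclobber
echo hi > file
echo again > file      # refused: "cannot overwrite existing file"
echo again >| file     # explicit override still works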
452,978 | I use Nautilus to explore my files. I use a Debian-based OS with KDE Plasma 5. I use the keyboard a lot. When I press the key up when navigating files, if I'm already at the extremity of the list of files, Nautilus sends a big system beep which I will hear at 100% volume through my headphones. My reaction is comparable to getting electrified. I have placed the following lines in ~/.bashrc for the sudo (root) user and for my regular desktop user: # Turn off system beep in console:
xset b off
xset b 0 0 0 However, despite the beep going away from some places in the OS (such as erasing an empty line in the gnome-terminal), it's still in Nautilus. I believe it's because Nautilus doesn't source any of the .bashrc or because it ignores the xset commands. How do I fix this? What I need might be at a deeper level than the .bashrc , some file that is executed by everything, but which can still control the sound. Otherwise, disabling the sound another way or replacing it could be interesting. | Short of muting the sound entirely or disconnecting your headphones, there is no system-wide setting for events which will be followed by all applications. In your case especially, since you’re using Nautilus on a KDE system, you’ll run into issues since Nautilus won’t follow your desktop’s configured behaviour. Nautilus uses GNOME’s settings. If you have the GNOME control centre, you can disable sound effects there — go to the sound settings, and disable sound effects. Alternatively, run dconf-editor , go to “org/gnome/desktop/sound”, and disable “event-sounds” and “input-feedback-sounds”. You can do this from the command line too, see How to turn off alert sounds/sound effects on Gnome from terminal? for details. | {
"source": [
"https://unix.stackexchange.com/questions/452978",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/244375/"
]
} |
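The command-line equivalent mentioned above, assuming the GNOME GSettings schema is installed (a sketch):
gsettings set org.gnome.desktop.sound event-sounds false
gsettings set org.gnome.desktop.sound input-feedback-sounds false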
453,196 | I have one particular server that is exhibiting strange behaviour when using tr. Here is an example from a working server: -bash-3.2$ echo "abcdefghijklmnopqrstuvwxyz1234567890"|tr -d [a-z]
1234567890
-bash-3.2$ That makes perfect sense to me. This, however, is from the 'special' server: [root@host~]# echo "abcdefghijklmnopqrstuvwxyz1234567890"|tr -d [a-z]
abcdefghijklmnpqrstuvwxyz1234567890 As you can see, deleting all lower case characters fails. BUT, it has deleted the letter 'o' The interesting part is the following two examples, which make no sense to me whatsoever: [root@host~]# echo "abcdefghijklmnopqrstuvwxyz1234567890"|tr -d [a-n]
opqrstuvwxyz1234567890
[root@host~]# echo "abcdefghijklmnopqrstuvwxyz1234567890"|tr -d [a-o]
abcdefghijklmnpqrstuvwxyz1234567890
[root@host~]# (again, the 'o' is deleted in the last example) Does anyone have any idea what is going on here? I can't reproduce on any other linux box that I am using. | you have a file named o in current directory foo> ls
foo> echo "abcdefghijklmnopqrstuvwxyz1234567890"|tr -d [a-z]
1234567890
foo> touch o
foo> echo "abcdefghijklmnopqrstuvwxyz1234567890"|tr -d [a-z]
abcdefghijklmnpqrstuvwxyz1234567890 shell will expand [a-z] string if a match is found. This is called pathname expansion, according to man bash Pathname Expansion After word splitting, unless the -f option has been set, bash scans
each word for the characters *, ?, and [. ...
(...) bash will perform expansion. [...] Matches any one of the enclosed characters. | {
"source": [
"https://unix.stackexchange.com/questions/453196",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/298058/"
]
} |
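The fix that follows from the answer is simply to stop the shell from globbing the set: quote it, or drop the brackets, since tr does not need them (a sketch):
echo "abcdefghijklmnopqrstuvwxyz1234567890" | tr -d '[a-z]'   # quoted: no globbing, but also deletes [ and ]
echo "abcdefghijklmnopqrstuvwxyz1234567890" | tr -d 'a-z'     # usually what was intended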
453,222 | In Linux, is it possible to see football in VLC player without www browser from address https://areena.yle.fi/tv/ohjelmat/30-901?play=1-50003218 | | {
"source": [
"https://unix.stackexchange.com/questions/453222",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/298091/"
]
} |
453,234 | Where can I find reference for less regex search patterns? I want to search file with less using \d to find digits, but it does not seem to understand this wildcard. I tried to find a reference for less regex patterns, but could not find anything, not on man pages and not on the Internet. | less 's man page says: /pattern
Search forward in the file for the N-th line containing
the pattern. N defaults to 1. The pattern is a regular
expression, as recognized by the regular expression library
supplied by your system. so the accepted syntax may depend on your system. Off-hand, it seems to accept extended regular expressions on my Debian system, see regex(7) , and Why does my regular expression work in X but not in Y? \d is from Perl, and isn't supported by all regex engines. Use [0-9] or [[:digit:]] to match digits. (Their exact behaviour may depend on the locale.) | {
"source": [
"https://unix.stackexchange.com/questions/453234",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/298100/"
]
} |
453,364 | On a shared server, I would like to have some very low priority users such that whenever an other user (also without root privileges) needs the resources, they can kill any of the low priority users' processes. Is it possible to allow something like that? | Give the other users permission to kill the processes as the low priority user through sudo -u lowpriouser /bin/kill PID A user can only signal their own processes, unless they have root privileges. By using sudo -u a user with the correct set-up in the sudoers file may assume the identity of the low priority user and kill the process. For example: %killers ALL = (lowpriouser) /bin/kill This would allow all users in the group killers to run /bin/kill as lowpriouser . See also the sudoers manual on your system. On an OpenBSD system, the same can be done through the native doas utility with a configuration like permit :killers as lowpriouser cmd /bin/kill Then doas -u lowpriouser /bin/kill PID See the manuals for doas and doas.conf . | {
"source": [
"https://unix.stackexchange.com/questions/453364",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18200/"
]
} |
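Example usage once a rule like the one above is in place (the group, user and PID are the answer's placeholders):
pgrep -u lowpriouser                          # list the low-priority user's processes
sudo -u lowpriouser /bin/kill -TERM 12345     # then signal one of them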
453,547 | I know env is a shell command, it can be used to print a list of the current environment variables. And as far as I understand, RANDOM is also
a environment variable. So why, when I launch env on Linux, does the output not include RANDOM ? | RANDOM is not an environment variable. It's a shell variable maintained by some shells. It is generally not exported by default. This is why it doesn't show up in the output of env . Once it's been used at least once, it would show up in the output of set , which, by itself, lists the shell variables (and functions) and their values in the current shell session. This behaviour is dependent on the shell and using pdksh on OpenBSD, RANDOM would be listed by set even if not previously used. The rest of this answer concerns what could be expected to happen if RANDOM was exported (i.e. turned into an environment variable). Exporting it with export RANDOM would make it an environment variable but its use would be severely limited as its value in a child process would be "random but static" (meaning it would be an unchanging random number). The exact behaviour differs between shells. I'm using pdksh on OpenBSD in the example below and I get a new random value in each awk run (but the same value every time within the same awk instance). Using bash , I would get exactly the same random value in all invocations of awk . $ awk 'BEGIN { print ENVIRON["RANDOM"], ENVIRON["RANDOM"] }'
25444 25444
$ awk 'BEGIN { print ENVIRON["RANDOM"], ENVIRON["RANDOM"] }'
30906 30906 In bash , the exported value of RANDOM would remain static regardless of the use of RANDOM in the shell (where each use of $RANDOM would still give a new value). This is because each reference to the shell variable RANDOM in bash makes the shell access its internal get_random() function to give the variable a new random value, but the shell does not update the environment variable RANDOM . This is similar in behaviour as with other dynamic bash variables, such as LINENO , SECONDS , BASHPID etc. To update the environment variable RANDOM in bash , you would have to assign it the value of the shell variable RANDOM and re-export it: export RANDOM="$RANDOM" It is unclear to me if this would have the additional side effect of re-seeding the random number generator in bash or not (but an educated guess would be that it doesn't). | {
"source": [
"https://unix.stackexchange.com/questions/453547",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/262683/"
]
} |
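A two-line check of the distinction made above (a sketch):
echo "$RANDOM"                                     # the shell itself produces a value
env | grep '^RANDOM=' || echo 'RANDOM is not in the environment'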
453,749 | From this post it is shown that FS:[0x28] is a stack-canary. I'm generating that same code using GCC on this function, void foo () {
char a[500] = {};
printf("%s", a);
} Specifically, I'm getting this assembly.. 0x000006b5 64488b042528. mov rax, qword fs:[0x28] ; [0x28:8]=0x1978 ; '(' ; "x\x19"
0x000006be 488945f8 mov qword [local_8h], rax
...stuff...
0x00000700 488b45f8 mov rax, qword [local_8h]
0x00000704 644833042528. xor rax, qword fs:[0x28]
0x0000070d 7405 je 0x714
0x0000070f e85cfeffff call sym.imp.__stack_chk_fail ; void __stack_chk_fail(void)
; CODE XREF from 0x0000070d (sym.foo)
0x00000714 c9 leave
0x00000715 c3 ret What is setting the value of fs:[0x28] ? The kernel, or is GCC throwing in the code? Can you show the code in the kernel, or compiled into the binary that sets fs:[0x28] ? Is the canary regenerated -- on boot, or process spawn? Where is this documented? | It's easy to track this initialization, as for (almost) every process strace shows a very suspicious syscall during the very beginning of the process run: arch_prctl(ARCH_SET_FS, 0x7fc189ed0740) = 0 That's what man 2 arch_prctl says: ARCH_SET_FS
Set the 64-bit base for the FS register to addr. Yay, looks like that's what we need. To find, who calls arch_prctl , let's look for a backtrace: (gdb) catch syscall arch_prctl
Catchpoint 1 (syscall 'arch_prctl' [158])
(gdb) r
Starting program: <program path>
Catchpoint 1 (call to syscall arch_prctl), 0x00007ffff7dd9cad in init_tls () from /lib64/ld-linux-x86-64.so.2
(gdb) bt
#0 0x00007ffff7dd9cad in init_tls () from /lib64/ld-linux-x86-64.so.2
#1 0x00007ffff7ddd3e3 in dl_main () from /lib64/ld-linux-x86-64.so.2
#2 0x00007ffff7df04c0 in _dl_sysdep_start () from /lib64/ld-linux-x86-64.so.2
#3 0x00007ffff7dda028 in _dl_start () from /lib64/ld-linux-x86-64.so.2
#4 0x00007ffff7dd8fb8 in _start () from /lib64/ld-linux-x86-64.so.2
#5 0x0000000000000001 in ?? ()
#6 0x00007fffffffecef in ?? ()
#7 0x0000000000000000 in ?? () So, the FS segment base is set by the ld-linux , which is a part of glibc , during the program loading (if the program is statically linked, this code is embedded into the binary). This is where it all happens. During the startup, the loader initializes TLS . This includes memory allocation and setting FS base value to point to the TLS beginning. This is done via arch_prctl syscall . After TLS initialization security_init function is called, which generates the value of the stack guard and writes it to the memory location, which fs:[0x28] points to: Stack guard value initialization Stack guard value write , more detailed And 0x28 is the offset of the stack_guard field in the structure which is located at the TLS start. | {
"source": [
"https://unix.stackexchange.com/questions/453749",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3285/"
]
} |
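To reproduce the observation from the question on your own machine, assuming gcc and binutils are installed (a sketch):
gcc -O0 -fstack-protector-all -x c -o canary_demo - <<'EOF'
#include <stdio.h>
void foo(void) { char a[500] = {0}; printf("%s", a); }
int main(void) { foo(); return 0; }
EOF
objdump -d canary_demo | grep -B1 -A1 '%fs:0x28' | head -n 12   # shows the canary load and check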
453,753 | If you put this link in a browser: https://unix.stackexchange.com/q/453740#453743 it returns this: https://unix.stackexchange.com/questions/453740/installing-busybox-for-ubuntu#453743 However cURL drops the Hash: $ curl -I https://unix.stackexchange.com/q/453740#453743
HTTP/2 302
cache-control: no-cache, no-store, must-revalidate
content-type: text/html; charset=utf-8
location: /questions/453740/installing-busybox-for-ubuntu Does cURL have an option to keep the Hash with the resultant URL? Essentially I
am trying to write a script that will resolve URLs like a browser - this is what
I have so far but it breaks if the URL contains a Hash: $ set https://unix.stackexchange.com/q/453740#453743
$ curl -L -s -o /dev/null -w %{url_effective} "$1"
https://unix.stackexchange.com/questions/453740/installing-busybox-for-ubuntu | | {
"source": [
"https://unix.stackexchange.com/questions/453753",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17307/"
]
} |
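A sketch of one way to keep the fragment (not from the original thread): curl never sends the part after # to the server at all, so split it off in the shell and re-append it after following redirects.
url='https://unix.stackexchange.com/q/453740#453743'
base=${url%%#*}                      # URL without the fragment
frag=${url#"$base"}                  # "#453743", or empty if there was none
printf '%s%s\n' "$(curl -Ls -o /dev/null -w '%{url_effective}' "$base")" "$frag"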
453,906 | I have a text file containing lines like this: This is a thread 139737522087680
This is a thread 139737513694976
This is a thread 139737505302272
This is a thread 139737312270080
.
.
.
This is a thread 139737203164928
This is a thread 139737194772224
This is a thread 139737186379520 How can I be sure of the uniqueness of every line? NOTE: The goal is to test the file, not to modify it if duplicate lines are present. | [ "$(wc -l < input)" -eq "$(sort -u input | wc -l)" ] && echo all unique | {
"source": [
"https://unix.stackexchange.com/questions/453906",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/262070/"
]
} |
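An alternative sketch that stops at the first duplicate instead of counting the whole file twice:
awk 'seen[$0]++ { print "duplicate:", $0; exit 1 }' input && echo all unique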
454,318 | I drag and drop a folder into another by mistake in FileZilla. ~/big_folder
~/some_other_folder The folder got moved is a very huge one. It includes hundreds of thousands of files (node_modules, small image files, a lot of folders) What is so weird is that after I release my mouse, the moving is done. The folder "big_folder" is moved into "some_other_folder". ~/some_other_folder/big_folder (there is no big_folder in the ~/ after moving) Then I realize the mistake and try move back but it fails both on FileZilla and terminal. Then I have to cp -r to copy files back because there are server-side codes accessing those files in ~/big_folder And it takes like forever to wait ... What should I do? BTW, here are the output from FileZilla (it's the failure of the moving back): Status: Renaming '/root/big_folder' to '/root/some_other_folder/big_folder'
Status: /root/big_folder -> /root/some_other_folder/big_folder
Status: Renaming '/root/some_other_folder/big_folder' to '/root/big_folder'
Command: mv "big_folder" "/root/big_folder"
Error: mv /root/some_other_folder/big_folder /root/big_folder: received failure with description 'Failure' | If a directory is moved within the same filesystem (the same partition), then all that is needed is to rename the file path of the directory. No data apart from the directory entry for the directory itself has to be altered. When copying directories, the data for each and every file needs to be duplicated. This involves reading all the source data and writing it at the destination. Moving a directory between filesystems would involve copying the data to the destination and removing it from the source. This would take about as long time as copying (duplicating) the data within a single filesystem. If FileZilla successfully renamed the directory from ~/big_folder to ~/some_other_folder/big_folder , then I would revert that using mv ~/some_other_folder/big_folder ~/big_folder ... after first making sure that there were no directory called ~/big_folder (if there was, the move would put big_folder from some_other_folder into the ~/big_folder directory as a subfolder). | {
"source": [
"https://unix.stackexchange.com/questions/454318",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45317/"
]
} |
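Two quick checks before and while moving it back, assuming GNU coreutils and that ~/big_folder does not already exist (as the answer notes):
df --output=source ~/some_other_folder/big_folder ~/    # same "Filesystem" value means mv is a pure rename
mv -n ~/some_other_folder/big_folder ~/big_folder       # -n refuses to overwrite an existing destination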
454,686 | After reading some pretty nice answers from this question , I am still fuzzy on why you would want to pretend that you are root without getting any of the benefits of actually being root. So far, what I can gather is that fakeroot is used to give ownership to a file that needs to be root when it is unzip/tar'ed. My question, is why can't you just do that with chown? A Google Groups discussion here points out that you need fakeroot to compile a Debian kernel (if you want to do it from an unprivileged user). My comment is that, the reason you need to be root in order to compile is probably because read permissions were not set for other users. If so isn't it a security violation that fakeroot allows for compilation(which means gcc can now read a file that was for root)? This answer here describes that the actual system calls are made with real uid/gid of the user , so again where does fakeroot help? How does fakeroot stop unwanted privilege escalations on Linux? If fakeroot can trick tar into making a file that was owned by root, why not do something similar with SUID? From what I have gathered, fakeroot is just useful when you want to change the owner of any package files that you built to root. But you can do that with chown, so where am I lacking in my understanding of how this component is suppose to be used? | So far, what I can gather is that fakeroot is used to give ownership to a file that needs to be root when it is unzip/tar'ed. My question, is why can't you just do that with chown? Because you can’t just do that with chown , at least not as a non-root user. (And if you’re running as root, you don’t need fakeroot .) That’s the whole point of fakeroot : to allow programs which expect to be run as root to run as a normal user, while pretending that the root-requiring operations succeed. This is used typically when building a package, so that the installation process of the package being installed can proceed without error (even if it runs chown root:root , or install -o root , etc.). fakeroot remembers the fake ownership which it pretended to give files, so subsequent operations looking at the ownership see this instead of the real one; this allows subsequent tar runs for example to store files as owned by root. How does fakeroot stop unwanted privilege escalations on Linux? If fakeroot can trick tar into making a file that was owned by root, why not do something similar with SUID? fakeroot doesn’t trick tar into doing anything, it preserves changes the build wants to make without letting those changes take effect on the system hosting the build. You don’t need fakeroot to produce a tarball containing a file owned by root and suid; if you have a binary evilbinary , running tar cf evil.tar --mode=4755 --owner=root --group=root evilbinary , as a regular user, will create a tarball containing evilbinary , owned by root, and suid. However, you won’t be able to extract that tarball and preserve those permissions unless you do so as root: there is no privilege escalation here. fakeroot is a privilege de -escalation tool: it allows you to run a build as a regular user, while preserving the effects the build would have had if it had been run as root, allowing those effects to be replayed later. Applying the effects “for real” always requires root privileges; fakeroot doesn’t provide any method of acquiring them. 
To understand the use of fakeroot in more detail, consider that a typical distribution build involves the following operations (among many others): install files, owned by root ... archive those files, still owned by root, so that when they’re extracted, they’ll be owned by root The first part obviously fails if you’re not root. However, when running under fakeroot , as a normal user, the process becomes install files, owned by root — this fails, but fakeroot pretends it succeeds, and remembers the changed ownership ... archive those files, still owned by root — when tar (or whatever archiver is being used) asks the system what the file ownership is, fakeroot changes the answer to match the ownership it recorded earlier Thus you can run a package build without being root, while obtaining the same results you’d get if you were really running as root. Using fakeroot is safer: the system still can’t do anything your user can’t do, so a rogue installation process can’t damage your system (beyond touching your files). In Debian, the build tools have been improved so as not to require this any more, and you can build packages without fakeroot . This is supported by dpkg directly with the Rules-Requires-Root directive (see rootless-builds.txt ). To understand the purpose of fakeroot , and the security aspects of running as root or not, it might help to consider the purpose of packaging. When you install a piece of software from source, for use system-wide, you proceed as follows: build the software (which can be done without privileges) install the software (which needs to be done as root, or at least as a user allowed to write to the appropriate system locations) When you package a piece of software, you’re delaying the second part; but to do so successfully, you still need to “install” the software, into the package rather than onto the system. So when you package software, the process becomes: build the software (with no special privileges) pretend to install the software (again with no special privileges) capture the software installation as a package (ditto) make the package available (ditto) Now a user completes the process by installing the package, which needs to be done as root (or again, a user with the appropriate privileges to write to the appropriate locations). This is where the delayed privileged process is realised, and is the only part of the process which needs special privileges. fakeroot helps with steps 2 and 3 above by allowing us to run software installation processes, and capture their behaviour, without running as root. | {
"source": [
"https://unix.stackexchange.com/questions/454686",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230582/"
]
} |
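A one-line demonstration of the pretence, assuming fakeroot is installed (the file name is arbitrary):
fakeroot sh -c 'touch demo.txt; chown root:root demo.txt; ls -ln demo.txt'   # uid/gid 0 inside the session
ls -ln demo.txt                                                              # outside: still your own uid/gid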
454,694 | So, I deleted my home folder (or, more precisely, all files I had write access to). What happened is that I had build="build"
...
rm -rf "${build}/"*
...
<do other things with $build> in a bash script and, after no longer needing $build , removing the declaration and all its usages -- but the rm . Bash happily expands to rm -rf /* . Yea. I felt stupid, installed the backup, redid the work I lost. Trying to move past the shame. Now, I wonder: what are techniques to write bash scripts so that such mistakes can't happen, or are at least less likely? For instance, had I written FileUtils.rm_rf("#{build}/*") in a Ruby script, the interpreter would have complained about build not being declared, so there the language protects me. What I have considered in bash, besides corraling rm (which, as many answers in related questions mention, is not unproblematic): rm -rf "./${build}/"* That would have killed my current work (a Git repo) but nothing else. A variant/parameterization of rm that requires interaction when acting outside of the current directory. (Could not find any.)
Similar effect. Is that it, or are there other ways to write bash scripts that are "robust" in this sense? | set -u or set -o nounset This would make the current shell treat expansions of unset variables as an error: $ unset build
$ set -u
$ rm -rf "$build"/*
bash: build: unbound variable set -u and set -o nounset are POSIX shell options . An empty value would not trigger an error though. For that, use $ rm -rf "${build:?Error, variable is empty or unset}"/*
bash: build: Error, variable is empty or unset The expansion of ${variable:?word} would expand to the value of variable unless it's empty or unset. If it's empty or unset, the word would be displayed on standard error and the shell would treat the expansion as an error (the command would not be executed, and if running in a non-interactive shell, this would terminate). Leaving the : out would trigger the error only for an unset value, just like under set -u . ${variable:?word} is a POSIX parameter expansion . Neither of these would cause an interactive shell to terminate unless set -e (or set -o errexit ) was also in effect. ${variable:?word} causes scripts to exit if the variable is empty or unset. set -u would cause a script to exit if used together with set -e . As for your second question. There is no way to limit rm to not work outside of the current directory. The GNU implementation of rm has a --one-file-system option that stops it from recursively delete mounted filesystems, but that's as close as I believe we can get without wrapping the rm call in a function that actually checks the arguments. As a side note: ${build} is exactly equivalent to $build unless the expansion occurs as part of a string where the immediately following character is a valid character in a variable name, such as in "${build}x" . | {
"source": [
"https://unix.stackexchange.com/questions/454694",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17409/"
]
} |
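Combining both safety nets from the answer in one small script (a sketch):
#!/usr/bin/env bash
set -u                                               # expansion of unset variables is fatal
build="build"
rm -rf -- "${build:?must be set and non-empty}/"*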
454,896 | I've been tuning my Linux kernel for Intel Core 2 Quad (Yorkfield) processors, and I noticed the following messages from dmesg : [ 0.019526] cpuidle: using governor menu
[ 0.531691] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
[ 0.550918] intel_idle: does not run on family 6 model 23
[ 0.554415] tsc: Marking TSC unstable due to TSC halts in idle PowerTop shows only states C1, C2 and C3 being used for the package and individual cores: Package | CPU 0
POLL 0.0% | POLL 0.0% 0.1 ms
C1 0.0% | C1 0.0% 0.0 ms
C2 8.2% | C2 9.9% 0.4 ms
C3 84.9% | C3 82.5% 0.9 ms
| CPU 1
| POLL 0.1% 1.6 ms
| C1 0.0% 1.5 ms
| C2 9.6% 0.4 ms
| C3 82.7% 1.0 ms
| CPU 2
| POLL 0.0% 0.1 ms
| C1 0.0% 0.0 ms
| C2 7.2% 0.3 ms
| C3 86.5% 1.0 ms
| CPU 3
| POLL 0.0% 0.1 ms
| C1 0.0% 0.0 ms
| C2 5.9% 0.3 ms
| C3 87.7% 1.0 ms Curious, I queried sysfs and found that the legacy acpi_idle driver was in use (I expected to see the intel_idle driver): cat /sys/devices/system/cpu/cpuidle/current_driver acpi_idle Looking at the kernel source code, the current intel_idle driver contains a debug message specifically noting that some Intel family 6 models are not supported by the driver: if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL && boot_cpu_data.x86 == 6)
pr_debug("does not run on family %d model %d\n", boot_cpu_data.x86, boot_cpu_data.x86_model); An earlier fork (November 22, 2010) of intel_idle.c shows anticipated support for Core 2 processors (model 23 actually covers both Core 2 Duo and Quad): #ifdef FUTURE_USE
case 0x17: /* 23 - Core 2 Duo */
lapic_timer_reliable_states = (1 << 2) | (1 << 1); /* C2, C1 */
#endif The above code was deleted in December 2010 commit . Unfortunately, there is almost no documentation in the source code, so there is no explanation regarding the lack of support for the idle function in these CPUs. My current kernel configuration is as follows: CONFIG_SMP=y
CONFIG_MCORE2=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_ACPI_PROCESSOR_IDLE=y
CONFIG_CPU_IDLE=y
# CONFIG_CPU_IDLE_GOV_LADDER is not set
CONFIG_CPU_IDLE_GOV_MENU=y
# CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is not set
CONFIG_INTEL_IDLE=y My question is as follows: Is there a specific hardware reason that Core 2 processors are not supported by intel_idle ? Is there a more appropriate way to configure a kernel for optimal CPU idle support for this family of processors (aside from disabling support for intel_idle )? | While researching Core 2 CPU power states (" C-states "), I actually managed to implement support for most of the legacy Intel Core/Core 2 processors. The complete implementation (Linux patch) with all of the background information is documented here. As I accumulated more information about these processors, it started to become apparent that the C-states supported in the Core 2 model(s) are far more complex than those in both earlier and later processors. These are known as Enhanced C-states (or " CxE "), which involve the package, individual cores and other components on the chipset (e.g., memory). At the time the intel_idle driver was released, the code was not particularly mature and several Core 2 processors had been released that had conflicting C-state support. Some compelling information on Core 2 Solo/Duo C-state support was found in this article from 2006 . This is in relation to support on Windows, however it does indicate the robust hardware C-state support on these processors. The information regarding Kentsfield conflicts with the actual model number, so I believe they are actually referring to a Yorkfield below: ...the quad-core Intel Core 2 Extreme (Kentsfield) processor supports
all five performance and power saving technologies — Enhanced Intel
SpeedStep (EIST), Thermal Monitor 1 (TM1) and Thermal Monitor 2 (TM2),
old On-Demand Clock Modulation (ODCM), as well as Enhanced C States
(CxE). Compared to Intel Pentium 4 and Pentium D 600, 800, and 900
processors, which are characterized only by Enhanced Halt (C1) State,
this function has been expanded in Intel Core 2 processors (as well as
Intel Core Solo/Duo processors) for all possible idle states of a
processor, including Stop Grant (C2), Deep Sleep (C3), and Deeper
Sleep (C4). This article from 2008 outlines support for per-core C-states on multi-core Intel processors, including Core 2 Duo and Core 2 Quad (additional helpful background reading was found in this white paper from Dell ): A core C-state is a hardware C-state. There are several core idle
states, e.g. CC1 and CC3. As we know, a modern state of the art
processor has multiple cores, such as the recently released Core Duo
T5000/T7000 mobile processors, known as Penryn in some circles. What
we used to think of as a CPU / processor, actually has multiple
general purpose CPUs in side of it. The Intel Core Duo has 2 cores in
the processor chip. The Intel Core-2 Quad has 4 such cores per
processor chip. Each of these cores has its own idle state. This makes
sense as one core might be idle while another is hard at work on a
thread. So a core C-state is the idle state of one of those cores. I found a 2010 presentation from Intel that provides some additional background about the intel_idle driver, but unfortunately does not explain the lack of support for Core 2: This EXPERIMENTAL driver supersedes acpi_idle on Intel Atom
Processors, Intel Core i3/i5/i7 Processors and associated Intel Xeon
processors. It does not support the Intel Core2 processor or earlier. The above presentation does indicate that the intel_idle driver is an implementation of the "menu" CPU governor, which has an impact on Linux kernel configuration (i.e., CONFIG_CPU_IDLE_GOV_LADDER vs. CONFIG_CPU_IDLE_GOV_MENU ). The differences between the ladder and menu governors are succinctly described in this answer . Dell has a helpful article that lists C-state C0 to C6 compatibility: Modes C1 to C3 work by basically cutting clock signals used inside the
CPU, while modes C4 to C6 work by reducing the CPU voltage. "Enhanced"
modes can do both at the same time. Mode Name CPUs
C0 Operating State All CPUs
C1 Halt 486DX4 and above
C1E Enhanced Halt All socket LGA775 CPUs
C1E — Turion 64, 65-nm Athlon X2 and Phenom CPUs
C2 Stop Grant 486DX4 and above
C2 Stop Clock Only 486DX4, Pentium, Pentium MMX, K5, K6, K6-2, K6-III
C2E Extended Stop Grant Core 2 Duo and above (Intel only)
C3 Sleep Pentium II, Athlon and above, but not on Core 2 Duo E4000 and E6000
C3 Deep Sleep Pentium II and above, but not on Core 2 Duo E4000 and E6000; Turion 64
C3 AltVID AMD Turion 64
C4 Deeper Sleep Pentium M and above, but not on Core 2 Duo E4000 and E6000 series; AMD Turion 64
C4E/C5 Enhanced Deeper Sleep Core Solo, Core Duo and 45-nm mobile Core 2 Duo only
C6 Deep Power Down 45-nm mobile Core 2 Duo only From this table (which I later found to be incorrect in some cases), it appears that there were a variety of differences in C-state support with the Core 2 processors (Note that nearly all Core 2 processors are Socket LGA775, except for Core 2 Solo SU3500, which is Socket BGA956 and Merom/Penryn processors. "Intel Core" Solo/Duo processors are one of Socket PBGA479 or PPGA478). An additional exception to the table was found in this article : Intel’s Core 2 Duo E8500 supports C-states C2 and C4, while the Core 2
Extreme QX9650 does not. Interestingly, the QX9650 is a Yorkfield processor (Intel family 6, model 23, stepping 6). For reference, my Q9550S is Intel family 6, model 23 (0x17), stepping 10, which supposedly supports C-state C4 (confirmed through experimentation). Additionally, the Core 2 Solo U3500 has an identical CPUID (family, model, stepping) to the Q9550S but is available in a non-LGA775 socket, which confounds interpretation of the above table. Clearly, the CPUID must be used at least down to the stepping in order to identify C-state support for this model of processor, and in some cases that may be insufficient (undetermined at this time). The method signature for assigning CPU idle information is: #define ICPU(model, cpu) \
{ X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY, (unsigned long)&cpu } Where model is enumerated in asm/intel-family.h . Examining this header file, I see that Intel CPUs are assigned 8-bit identifiers that appear to match the Intel family 6 model numbers: #define INTEL_FAM6_CORE2_PENRYN 0x17 From the above, we have Intel Family 6, Model 23 (0x17) defined as INTEL_FAM6_CORE2_PENRYN . This should be sufficient for defining idle states for most of the Model 23 processors, but could potentially cause issues with QX9650 as noted above. So, minimally, each group of processors that has a distinct C-state set would need to be defined in this list. Zagacki and Ponnala, Intel Technology Journal 12 (3):219-227, 2008 indicate that Yorkfield processors do indeed support C2 and C4. They also seem to indicate that the ACPI 3.0a specification supports transitions only between C-states C0, C1, C2 and C3, which I presume may also limit the Linux acpi_idle driver to transitions between that limited set of C-states. However, this article indicates that may not always be the case: Bear in mind that is the ACPI C state, not the processor one, so ACPI
C3 might be HW C6, etc. Also of note: Beyond the processor itself, since C4 is a synchronized effort between
major silicon components in the platform, the Intel Q45 Express
Chipset achieves a 28-percent power improvement. The chipset I'm using is indeed an Intel Q45 Express Chipset. The Intel documentation on MWAIT states is terse but confirms the BIOS-specific ACPI behavior: The processor-specific C-states defined in MWAIT extensions can map to
ACPI defined C-state types (C0, C1, C2, C3). The mapping relationship
depends on the definition of a C-state by processor implementation and
is exposed to OSPM by the BIOS using the ACPI defined _CST table. My interpretation of the above table (combined with a table from Wikipedia , asm/intel-family.h and the above articles) is: Model 9 0x09 ( Pentium M and Celeron M ): Banias: C0, C1, C2, C3, C4 Model 13 0x0D ( Pentium M and Celeron M ): Dothan, Stealey: C0, C1, C2, C3, C4 Model 14 0x0E INTEL_FAM6_CORE_YONAH ( Enhanced Pentium M , Enhanced Celeron M or Intel Core ): Yonah ( Core Solo , Core Duo ): C0, C1, C2, C3, C4, C4E/C5 Model 15 0x0F INTEL_FAM6_CORE2_MEROM (some Core 2 and Pentium Dual-Core ): Kentsfield, Merom, Conroe, Allendale ( E2xxx/E4xxx and Core 2 Duo E6xxx, T7xxxx/T8xxxx , Core 2 Extreme QX6xxx , Core 2 Quad Q6xxx ): C0, C1, C1E, C2, C2E Model 23 0x17 INTEL_FAM6_CORE2_PENRYN ( Core 2 ): Merom-L/Penryn-L: ? Penryn ( Core 2 Duo 45-nm mobile ): C0, C1, C1E, C2, C2E, C3, C4, C4E/C5, C6 Yorkfield ( Core 2 Extreme QX9650 ): C0, C1, C1E, C2E?, C3 Wolfdale/Yorkfield ( Core 2 Quad , C2Q Xeon , Core 2 Duo E5xxx/E7xxx/E8xxx , Pentium Dual-Core E6xxx , Celeron Dual-Core ): C0, C1, C1E, C2, C2E, C3, C4 From the amount of diversity in C-state support within just the Core 2 line of processors, it appears that a lack of consistent support for C-states may have been the reason for not attempting to fully support them via the intel_idle driver. I would like to fully complete the above list for the entire Core 2 line. This is not really a satisfying answer, because it makes me wonder how much unnecessary power is used and excess heat has been (and still is) generated by not fully utilizing the robust power-saving MWAIT C-states on these processors. Chattopadhyay et al. 2018, Energy Efficient High Performance Processors: Recent Approaches for Designing Green High Performance Computing is worth noting for the specific behavior I'm looking for in the Q45 Express Chipset: Package C-state (PC0-PC10) - When the compute domains, Core and
Graphics (GPU) are idle, the processor has an opportunity for
additional power savings at uncore and platform levels, for example,
flushing the LLC and power-gating the memory controller and DRAM IO,
and at some state, the whole processor can be turned off while its
state is preserved on always-on power domain. As a test, I inserted the following at linux/drivers/idle/intel_idle.c line 127: static struct cpuidle_state conroe_cstates[] = {
{
.name = "C1",
.desc = "MWAIT 0x00",
.flags = MWAIT2flg(0x00),
.exit_latency = 3,
.target_residency = 6,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
{
.name = "C1E",
.desc = "MWAIT 0x01",
.flags = MWAIT2flg(0x01),
.exit_latency = 10,
.target_residency = 20,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
// {
// .name = "C2",
// .desc = "MWAIT 0x10",
// .flags = MWAIT2flg(0x10),
// .exit_latency = 20,
// .target_residency = 40,
// .enter = &intel_idle,
// .enter_s2idle = intel_idle_s2idle, },
{
.name = "C2E",
.desc = "MWAIT 0x11",
.flags = MWAIT2flg(0x11),
.exit_latency = 40,
.target_residency = 100,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
{
.enter = NULL }
};
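/*
 * Comment added for clarity (not part of the original patch): in the
 * cpuidle framework, .exit_latency and .target_residency are expressed
 * in microseconds, and MWAIT2flg() packs the MWAIT hint from .desc
 * (e.g. "MWAIT 0x01") into the state flags. CPUIDLE_FLAG_TLB_FLUSHED
 * marks the deeper states in which the TLB is flushed on entry. The
 * latency/residency numbers used here are rough estimates borrowed
 * from the Nehalem tables, as noted later in this answer.
 */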
static struct cpuidle_state core2_cstates[] = {
{
.name = "C1",
.desc = "MWAIT 0x00",
.flags = MWAIT2flg(0x00),
.exit_latency = 3,
.target_residency = 6,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
{
.name = "C1E",
.desc = "MWAIT 0x01",
.flags = MWAIT2flg(0x01),
.exit_latency = 10,
.target_residency = 20,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
{
.name = "C2",
.desc = "MWAIT 0x10",
.flags = MWAIT2flg(0x10),
.exit_latency = 20,
.target_residency = 40,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
{
.name = "C2E",
.desc = "MWAIT 0x11",
.flags = MWAIT2flg(0x11),
.exit_latency = 40,
.target_residency = 100,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
{
.name = "C3",
.desc = "MWAIT 0x20",
.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED,
.exit_latency = 85,
.target_residency = 200,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
{
.name = "C4",
.desc = "MWAIT 0x30",
.flags = MWAIT2flg(0x30) | CPUIDLE_FLAG_TLB_FLUSHED,
.exit_latency = 100,
.target_residency = 400,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
{
.name = "C4E",
.desc = "MWAIT 0x31",
.flags = MWAIT2flg(0x31) | CPUIDLE_FLAG_TLB_FLUSHED,
.exit_latency = 100,
.target_residency = 400,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
{
.name = "C6",
.desc = "MWAIT 0x40",
.flags = MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED,
.exit_latency = 200,
.target_residency = 800,
.enter = &intel_idle,
.enter_s2idle = intel_idle_s2idle, },
{
.enter = NULL }
}; at intel_idle.c line 983: static const struct idle_cpu idle_cpu_conroe = {
.state_table = conroe_cstates,
.disable_promotion_to_c1e = false,
};
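/*
 * Added note: as I understand intel_idle, .disable_promotion_to_c1e
 * controls whether the driver clears the C1E-autopromotion bit in
 * MSR_IA32_POWER_CTL; leaving it false here keeps the hardware's
 * automatic C1 -> C1E promotion enabled.
 */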
static const struct idle_cpu idle_cpu_core2 = {
.state_table = core2_cstates,
.disable_promotion_to_c1e = false,
}; at intel_idle.c line 1073: ICPU(INTEL_FAM6_CORE2_MEROM, idle_cpu_conroe),
ICPU(INTEL_FAM6_CORE2_PENRYN, idle_cpu_core2), After a quick compile and reboot of my PXE nodes, dmesg now shows: [ 0.019845] cpuidle: using governor menu
[ 0.515785] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
[ 0.543404] intel_idle: MWAIT substates: 0x22220
[ 0.543405] intel_idle: v0.4.1 model 0x17
[ 0.543413] tsc: Marking TSC unstable due to TSC halts in idle states deeper than C2
[ 0.543680] intel_idle: lapic_timer_reliable_states 0x2 And now PowerTOP is showing: Package | CPU 0
POLL 2.5% | POLL 0.0% 0.0 ms
C1E 2.9% | C1E 5.0% 22.4 ms
C2 0.4% | C2 0.2% 0.2 ms
C3 2.1% | C3 1.9% 0.5 ms
C4E 89.9% | C4E 92.6% 66.5 ms
| CPU 1
| POLL 10.0% 400.8 ms
| C1E 5.1% 6.4 ms
| C2 0.3% 0.1 ms
| C3 1.4% 0.6 ms
| C4E 76.8% 73.6 ms
| CPU 2
| POLL 0.0% 0.2 ms
| C1E 1.1% 3.7 ms
| C2 0.2% 0.2 ms
| C3 3.9% 1.3 ms
| C4E 93.1% 26.4 ms
| CPU 3
| POLL 0.0% 0.7 ms
| C1E 0.3% 0.3 ms
| C2 1.1% 0.4 ms
| C3 1.1% 0.5 ms
| C4E 97.0% 45.2 ms I've finally accessed the Enhanced Core 2 C-states, and it looks like there is a measurable drop in power consumption - my meter on 8 nodes appears to be averaging at least 5% lower (with one node still running the old kernel), but I'll try swapping the kernels out again as a test. An interesting note regarding C4E support - My Yorktown Q9550S processor appears to support it (or some other sub-state of C4), as evidenced above! This confuses me, because the Intel datasheet on the Core 2 Q9000 processor (section 6.2) only mentions C-states Normal (C0), HALT (C1 = 0x00), Extended HALT (C1E = 0x01), Stop Grant (C2 = 0x10), Extended Stop Grant (C2E = 0x11), Sleep/Deep Sleep (C3 = 0x20) and Deeper Sleep (C4 = 0x30). What is this additional 0x31 state? If I enable state C2, then C4E is used instead of C4. If I disable state C2 (force state C2E) then C4 is used instead of C4E. I suspect this may have something to do with the MWAIT flags, but I haven't yet found documentation for this behavior. I'm not certain what to make of this: The C1E state appears to be used in lieu of C1, C2 is used in lieu of C2E and C4E is used in lieu of C4. I'm uncertain if C1/C1E, C2/C2E and C4/C4E can be used together with intel_idle or if they are redundant. I found a note in this 2010 presentation by Intel Labs Pittsburgh that indicates the transitions are C0 - C1 - C0 - C1E - C0, and further states: C1E is only used when all the cores are in C1E I believe that is to be interpreted as the C1E state is entered on other components (e.g. memory) only when all cores are in the C1E state. I also take this to apply equivalently to the C2/C2E and C4/C4E states (Although C4E is referred to as "C4E/C5" so I'm uncertain if C4E is a sub-state of C4 or if C5 is a sub-state of C4E. Testing seems to indicate C4/C4E is correct). I can force C2E to be used by commenting out the C2 state - however, this causes the C4 state to be used instead of C4E (more work may be required here). Hopefully there aren't any model 15 or model 23 processors that lack state C2E, because those processors would be limited to C1/C1E with the above code. Also, the flags, latency and residency values could probably stand to be fine-tuned, but just taking educated guesses based on the Nehalem idle values seems to work fine. More reading will be required to make any improvements. I tested this on a Core 2 Duo E2220 ( Allendale ), a Dual Core Pentium E5300 ( Wolfdale ), Core 2 Duo E7400 , Core 2 Duo E8400 ( Wolfdale ), Core 2 Quad Q9550S ( Yorkfield ) and Core 2 Extreme QX9650 , and I have found no issues beyond the afore-mentioned preference for state C2/C2E and C4/C4E. Not covered by this driver modification: The original Core Solo / Core Duo ( Yonah , non Core 2) are family 6, model 14. This is good because they supported the C4E/C5 (Enhanced Deep Sleep) C-states but not the C1E/C2E states and would need their own idle definition. The only issues that I can think of are: Core 2 Solo SU3300/ SU3500 (Penryn-L) are family 6, model 23 and will be detected by this driver. However, they are not Socket LGA775 so they may not support the C1E Enhanced Halt C-state. Likewise for the Core 2 Solo ULV U2100/U2200 ( Merom-L ). However, the intel_idle driver appears to choose the appropriate C1/C1E based on hardware support of the sub-states. Core 2 Extreme QX9650 (Yorkfield) reportedly does not support C-state C2 or C4. I have confirmed this by purchasing a used Optiplex 780 and QX9650 Extreme processor on eBay. 
The processor supports C-states C1 and C1E. With this driver modification, the CPU idles in state C1E instead of C1, so there is presumably some power savings. I expected to see C-state C3, but it is not present when using this driver so I may need to look into this further. I managed to find a slide from a 2009 Intel presentation on the transitions between C-states (i.e., Deep Power Down): In conclusion, it turns out that there was no real reason for the lack of Core 2 support in the intel_idle driver. It is clear now that the original stub code for "Core 2 Duo" only handled C-states C1 and C2, which would have been far less efficient than the acpi_idle function which also handles C-state C3. Once I knew where to look, implementing support was easy. The helpful comments and other answers were much appreciated, and if Amazon is listening, you know where to send the check. This update has been committed to github . I will e-mail a patch to the LKML soon. Update : I also managed to dig up a Socket T/LGA775 Allendale ( Conroe ) Core 2 Duo E2220, which is family 6, model 15, so I added support for that as well. This model lacks support for C-state C4, but supports C1/C1E and C2/C2E. This should also work for other Conroe-based chips ( E4xxx / E6xxx ) and possibly all Kentsfield and Merom (non Merom-L) processors. Update : I finally found some MWAIT tuning resources. This Power vs. Performance writeup and this Deeper C states and increased latency blog post both contain some useful information on identifying CPU idle latencies. Unfortunately, this only reports those exit latencies that were coded into the kernel (but, interestingly, only those hardware states supported by the processor): # cd /sys/devices/system/cpu/cpu0/cpuidle
# for state in `ls -d state*` ; do echo c-$state `cat $state/name` `cat $state/latency` ; done
c-state0/ POLL 0
c-state1/ C1 3
c-state2/ C1E 10
c-state3/ C2 20
c-state4/ C2E 40
c-state5/ C3 20
c-state6/ C4 60
c-state7/ C4E 100 Update: An Intel employee recently published an article on intel_idle detailing MWAIT states. | {
"source": [
"https://unix.stackexchange.com/questions/454896",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120445/"
]
} |
455,013 | I'd like to have a file on my computer that stores a particular token (rather than having them just exported to the shell env). As such, I'd like that the token can only be read by sudo, so access to it requires authorisation. How can I write a file that can only be read by sudo? | Note that sudo is not synonymous with root/superuser. In fact, sudo command let you execute commands as virtually any user, as specified by the security policy: $ sudo whoami
root
$ sudo -u bob whoami
bob I assume you meant to create a file that only root user can read: # Create the file
touch file
# Change permissions of the file
# '600' means only owner has read and write permissions
chmod 600 file
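# (Note: the order matters -- tighten the permissions before handing the file to root, since once root owns it you would need sudo to chmod it)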
# Change owner of the file
sudo chown root:root file When you need to edit the content of the file: # Replace 'nano' with your preferred editor
sudo nano file See how only root can read the file: $ cat file
cat: file: Permission denied
$ sudo cat file
foo bar baz | {
"source": [
"https://unix.stackexchange.com/questions/455013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50703/"
]
} |
455,261 | I'm working with ROS, which has been installed on my Ubuntu correctly. To run the ROS, we have to first source /opt/ros/kinetic/setup.bash then execute roscore . If I execute roscore without source setup.bash , the command roscore can't be found. Now, I want to execute the ROS while the system starts up. I've read this link: https://askubuntu.com/questions/814/how-to-run-scripts-on-start-up It seems that I only need to create a custom service file and put it into /etc/systemd/system/ . But still I'm not sure what to do because I need to source setup.bash to setup some necessary environmental variables before executing roscore . Is it possible to set environmental variables in the service file? For my need, I have to set these environmental variables not only for the execution of roscore but also for the whole system. I have another idea, which is that I set these environmental variables in /etc/profile and write a service file only for the command roscore , will it work? | Normally systemd services have only a limited set of environment variables,
and things in /etc/profile , /etc/profile.d and bashrc -related files are not set. To add environment variables for a systemd service you have different possibilities. The examples that follow assume that roscore is at /opt/ros/kinetic/bin/roscore , since systemd services must have the binary or script configured with a full path. One possibility is to use the Environment option in your systemd service and a simple systemd service would be as follows. [root@localhost ~]# cat /etc/systemd/system/ros.service
[Unit]
Description=ROS Kinetic
After=sshd.service
[Service]
Type=simple
Environment="One=1" "Three=3"
Environment="Two=2"
Environment="Four=4"
ExecStart=/opt/ros/kinetic/bin/roscore
[Install]
WantedBy=multi-user.target You also can put all the environment variables into a file that can be read with the EnvironmentFile option in the systemd service. [root@localhost ~]# cat /etc/systemd/system/ros.env
One=1
Three=3
Two=2
Four=4
[root@localhost ~]# cat /etc/systemd/system/ros.service
[Unit]
Description=ROS Kinetic
After=sshd.service
[Service]
Type=simple
EnvironmentFile=/etc/systemd/system/ros.env
ExecStart=/opt/ros/kinetic/bin/roscore
[Install]
WantedBy=multi-user.target Another option would be to make a wrapper script for your ros binary and call that wrapper script from the systemd service. The script needs to be executable. To ensure that, run chmod 755 /opt/ros/kinetic/bin/roscore.startup after creating that file. [root@localhost ~]# cat /opt/ros/kinetic/bin/roscore.startup
#!/bin/bash
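# Wrapper script: load the ROS environment, then start roscore in the foreground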
source /opt/ros/kinetic/setup.bash
roscore
[root@localhost ~]# cat /etc/systemd/system/ros.service
[Unit]
Description=ROS Kinetic
After=sshd.service
[Service]
Type=simple
ExecStart=/opt/ros/kinetic/bin/roscore.startup
[Install]
WantedBy=multi-user.target Note that you need to run systemctl daemon-reload after you have edited the service file to make the changes active. To enable the service on systemboot, you have to enter systemctl enable ros . I am not familiar with the roscore binary and it might be necessary to change Type= from simple (which is the default and normally not needed) to forking in the first two examples. For normal logins, you could copy or symlink /opt/ros/kinetic/setup.bash to /etc/profile.d/ros.sh , which should be sourced on normal logins. | {
"source": [
"https://unix.stackexchange.com/questions/455261",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145824/"
]
} |
456,320 | I use a CentOS shared server environment with Bash. ll "$HOME"/public_html/cron_daily/ gives: ./
../
-rwxr-xr-x 1 user group 181 Jul 11 11:32 wp_cli.sh* I don't know why the filename has an asterisk in the end. I don't recall adding it and when I tried to change it I got this output: [~/public_html]# mv cron_daily/wp_cli.sh* cron_daily/wp_cli.sh
+ mv cron_daily/wp_cli.sh cron_daily/wp_cli.sh
mv: `cron_daily/wp_cli.sh' and `cron_daily/wp_cli.sh' are the same file This error might indicate why my Cpanel cronjob failed: Did I do anything wrong when changing the file or when running the Cpanel cron command? Because both operations seem to fail. | The asterisk is not actually part of the filename. You are seeing it because the file is executable and your alias for ll includes the -F flag: -F Display a slash ('/') immediately after each pathname that is a directory, an asterisk ('*') after each that is executable, an
at sign ('@') after each symbolic link, an equals sign (`=') after each socket, a percent sign ('%') after each whiteout, and a
vertical bar ('|') after each that is a FIFO. As Kusalananda mentioned you can't glob all scripts in a directory with cron like that. With run-parts, you can call "$HOME"/public_html/cron_daily/ to execute all scripts in the directory (not just .sh) or loop through them as mentioned in this post . | {
"source": [
"https://unix.stackexchange.com/questions/456320",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/273994/"
]
} |
456,328 | I want to find three patterns in a list. I tried typing $ pip3 list | grep -ei foo -ei bar -ei baz but the shell throws a broken pipe error and a large Traceback . How do I grep for multiple patterns passed from a list that is piped to grep ? | {
"source": [
"https://unix.stackexchange.com/questions/456328",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/289865/"
]
} |
457,166 | On Linux Debian 9 I am able to resolve a specific local domain e.g. my.sample-domain.local using some commands like nslookup or host , but not with some other commands like ping or the Postgres client psql . I think stuff like Network Manager has set up my DNS resolver correctly (the content of /etc/resolv.conf ), so I am not sure why this is happening. I checked with a colleague using Windows 10 and they don't have any custom entry in their host file, although in their case the Windows version of ping and their database UI for Postgres work as expected, resolving the domain into an IP address. Please see below: $ ping my.sample-domain.local
ping: my.sample-domain.local: Name or service not known
$ host my.sample-domain.local
my.sample-domain.local has address <THE_IP_REPRESENTING_THE_LOCAL_DOMAIN>
$ ping -c 5 <THE_IP_REPRESENTING_THE_LOCAL_DOMAIN>
PING <THE_IP_REPRESENTING_THE_LOCAL_DOMAIN> (<THE_IP_REPRESENTING_THE_LOCAL_DOMAIN>) 56(84) bytes of data.
64 bytes from <THE_IP_REPRESENTING_THE_LOCAL_DOMAIN>: icmp_seq=1 ttl=128 time=1.16 ms
64 bytes from <THE_IP_REPRESENTING_THE_LOCAL_DOMAIN>: icmp_seq=2 ttl=128 time=0.644 ms
64 bytes from <THE_IP_REPRESENTING_THE_LOCAL_DOMAIN>: icmp_seq=3 ttl=128 time=0.758 ms
64 bytes from <THE_IP_REPRESENTING_THE_LOCAL_DOMAIN>: icmp_seq=4 ttl=128 time=0.684 ms
64 bytes from <THE_IP_REPRESENTING_THE_LOCAL_DOMAIN>: icmp_seq=5 ttl=128 time=0.794 ms
--- <THE_IP_REPRESENTING_THE_LOCAL_DOMAIN> ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4056ms
rtt min/avg/max/mdev = 0.644/0.808/1.160/0.183 ms
$ nslookup my.sample-domain.local
Server: <THE_IP_REPRESENTING_THE_NAMESERVER>
Address: <THE_IP_REPRESENTING_THE_NAMESERVER>#53
Non-authoritative answer:
Name: my.sample-domain.local
Address: <THE_IP_REPRESENTING_THE_LOCAL_DOMAIN>
$ cat /etc/resolv.conf
domain <AN_INTERNAL_DOMAIN>
search <AN_INTERNAL_DOMAIN>
nameserver <THE_IP_REPRESENTING_THE_NAMESERVER>
nameserver <ANOTHER_IP_REPRESENTING_THE_NAMESERVER> EDIT: Meanwhile I realized there is an Ubuntu 16 Virtual Machine in the same office LAN, so I logged into it and tried the ping command, which is working there. Also that Ubuntu VM does not have any particular custom setting in /etc/hosts (the same as my Debian 9 laptop with an uncustomized /etc/hosts ). Both /etc/resolv.conf files look similar (some shared domains/IPs, some other IPs for the same domain). However the file /etc/nsswitch.conf is different, so I think there is something going on with this mdns4_minimal and the order of hosts resolution in there, like mdns4_minimal coming before dns : hosts: files mdns4_minimal [NOTFOUND=return] dns and on Ubuntu: hosts: files dns EDIT 2: Both the Ubuntu 16 VM and my Debian 9 laptop are able to resolve that .local domain using the dig command. | host and nslookup perform DNS lookups, however most applications use glibc's Name Service Switch to decide how host names are looked up. Your /etc/nsswitch.conf might enable mDNS, which might cause issues when resolving .local names. You could change the order in which lookups are made or just remove the mDNS service if you think you won't need it. Your nsswitch.conf has mdns4_minimal , which does mDNS lookup (for .local names). The [NOTFOUND=return] after it causes the lookup to stop and therefore DNS is never used and your application can't resolve the host name. You could either remove the whole mdns4_minimal [NOTFOUND=return] , so mDNS lookups are not used, or just remove the NOTFOUND action so a DNS lookup would be made should the mDNS lookup fail. For further details, I recommend checking out the Name Service Switch documentation . | {
"source": [
"https://unix.stackexchange.com/questions/457166",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255367/"
]
} |
457,222 | In my current directory, I execute the command: ls -1 and it gives a list of the current directory contents. In the same directory, I repeat the command: ls and it gives me the same result, with perhaps a different formatted output. Finally, I try to find out about the available commands by typing ls --help and the output is: usage: ls [-ABCFGHLOPRSTUWabcdefghiklmnopqrstuwx1] [file ...] It looks like the last option is 1 (#1). Can someone explain what the ls -1 does and how it's different to the standard ls command? | Yes, the formatting of the output is the only difference between ls -1 and ls without any options. From the ls manual on my system : -1 (The numeric digit "one".) Force output to be one entry per line.
This is the default when output is not to a terminal. This is also a POSIX option to the ls utility . The manual for ls on your system is bound to say something similar (see man ls ). Related: is piped ls the same as ls -1? | {
"source": [
"https://unix.stackexchange.com/questions/457222",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/301386/"
]
} |
457,670 | I am using the newest version of netcat ( v1.10-41.1 ) which does not seem to have an option for IPv6 addresses (as the -6 was in the older versions of nc ). If I type in nc -lvnp 2222 and check listening ports with netstat -punta , the server appears to be listening on port 2222 for IPv4 addresses only: tcp 0 0 0.0.0.0:2222 0.0.0.0:* LISTEN 2839/nc tcp6 is not active like, for example, my apache2 server: tcp6 0 0 :::80 :::* LISTEN - | There are at least 3 or 4 different implementations of netcat as seen on Debian: netcat-traditional 1.10-41 the original which doesn't support IPv6: probably what you installed. netcat6 which was made to offer IPv6 (oldstable, superseded). netcat-openbsd 1.130-3 . Does support IPv6. ncat 7.70+dfsg1-3 probably a bit newer since not in Debian stable, provided by nmap , does support IPv6. I'd go for the openbsd one. Each version can have subtly different syntax, so take care. By the way: socat is a much better tool able to really do much more than netcat. You should try it! | {
"source": [
"https://unix.stackexchange.com/questions/457670",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/288674/"
]
} |
458,265 | From https://unix.stackexchange.com/a/458074/674 Remember to use -- when passing arbitrary arguments to commands (or use redirections where possible). So sort -- "$f1" or better sort < "$f1" instead of sort "$f1" . Why is it preferred to use -- and redirection? Why is sort < "$f1" preferred over sort -- "$f1" ? Why is sort -- "$f1" preferred over sort "$f1" ? Thanks. | sort "$f1" fails for values of $f1 that start with - or here for the case of sort some that start with + (can have severe consequences for a file called -o/etc/passwd for instance). sort -- "$f1" (where -- signals the end of options) addresses most of those issues but still fails for the file called - (which sort interprets as meaning its stdin instead). sort < "$f1" Doesn't have those issues. Here, it's the shell that opens the file. It also means that if the file can't be opened, you'll also get a potentially more useful error message (for instance, most shells will indicate the line number in the script), and the error message will be consistent if you use redirections wherever possible to open files. And in sort < "$f1" > out (contrary to sort -- "$f1" > out ), if "$f1" can't be opened, out won't be created/truncated and sort not even run. To clear some possible confusion (following comments below), that does not prevent the command from mmap() ing the file or lseek() ing inside it (not that sort does either) provided the file itself is seekable. The only difference is that the file is opened earlier and on file descriptor 0 by the shell as opposed to later by the command possibly on a different file descriptor. The command can still seek/mmap that fd 0 as it pleases. That is not to be confused with cat file | cmd where this time cmd 's stdin is a pipe that cannot be mmaped/seeked. | {
"source": [
"https://unix.stackexchange.com/questions/458265",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
458,648 | When checking a service status via systemctl systemctl status docker the output is something like ● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Mon 2018-03-19 13:52:21 CST; 4min 32s ago
Docs: https://docs.docker.com
Process: 6001 ExecStart=/usr/bin/dockerd -H fd:// (code=exited, status=205/LIMITS)
Main PID: 6001 ( code=exited, status=205/LIMITS ) The question is about the part in bold: the main process exit code and status information. Is there a list of all the codes and statuses along with their explanation ? I know that most times it's self-explanatory (and I know the answer to the question here) but lately we get this question a lot at work (some people search via google but can't find it, other people open the systemd.service man page, search for e.g. code 203 and don't find it...) so I thought I might as well put it here so it's easier for people to find the answer via google. | Yes, but only since 2017 when Jan Synacek finally documented them in the systemd manual. Your work colleagues are simply reading the wrong page of the manual. ☺ Further reading Lennart Poettering (2017). " Process exit codes: systemd-specific exit codes ". systemd.exec . systemd manual pages. Freedesktop.org. Jan Synacek (2016-06-15) Document service status codes . systemd bug #3545. GitHub. | {
"source": [
"https://unix.stackexchange.com/questions/458648",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22142/"
]
} |
458,650 | How can I read a list of servers entered by the user & save it into a variable? Example: Please enter list of server:
(user will enter following:)
abc
def
ghi
END
$echo $variable
abc
def
ghi I want it to be running in a shell script. If I use the following in the shell script: read -d '' x <<-EOF It is giving me an error : line 2: warning: here-document at line 1 delimited by end-of-file (wanted `EOF') Please suggest how I can incorporate it in a shell script? | {
"source": [
"https://unix.stackexchange.com/questions/458650",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/302429/"
]
} |
458,694 | How can I search and replace horizontal tabs in nano? I've been trying to use [\t] in regex mode, but this only matches every occurrence of the character t . I've just been using sed 's/\t//g' file , which works fine, but I would still be interested in a nano solution. | In nano, to search and replace: Press Ctrl + \ Enter your search string and hit return Enter your replacement string and hit return Press A to replace all instances To replace tab characters you need to put nano in verbatim mode: Alt + Shift + V . Once in verbatim mode, you can type any character in and it'll be accepted literally; then hit return . References 3.8. Tell me more about this verbatim input stuff! Nano global search and replace tabs to spaces or spaces to tabs Is it possible to easily switch between tabs and spaces in nano? | {
"source": [
"https://unix.stackexchange.com/questions/458694",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240990/"
]
} |
459,074 | I am trying to ssh to a remote machine, but the attempt fails: $ ssh -vvv [email protected]
OpenSSH_7.7p1, OpenSSL 1.0.2o 27 Mar 2018
.....
debug2: ciphers ctos: aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc
debug2: ciphers stoc: aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc
debug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: compression ctos: none,[email protected]
debug2: compression stoc: none,[email protected]
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug1: kex: algorithm: curve25519-sha256
debug1: kex: host key algorithm: rsa-sha2-512
Unable to negotiate with 192.168.100.14 port 22: no matching cipher found. Their offer: aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc As far as I understand the last string of the log, the server offers to use one of the following 4 cipher algorithms: aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc . Looks like my ssh client doesn't support any of them, so the server and client are unable to negotiate further. But my client does support all the suggested algorithms: $ ssh -Q cipher
3des-cbc
aes128-cbc
aes192-cbc
aes256-cbc
[email protected]
aes128-ctr
... and there are several more. And if I explicitly specify the algorithm like this: ssh -vvv -c aes256-cbc [email protected] I can successfully login to the server. My ~/.ssh/config doesn't contain any cipher-related directives (actually I removed it completely, but the problem remains). So, why client and server can't decide which cipher to use without my explicit instructions? The client understands that server supports aes256-cbc , client understands that he can use it himself, why not just use it? Some additional notes: There was no such problem some time (about a month) ago. I've not changed any ssh configuration files since then. I did update installed packages though. There is a question which describes very similar-looking problem, but there is no answer my question: ssh unable to negotiate - no matching key exchange method found UPDATE: problem solved As telcoM explained the problem is with server: it suggests only the obsolete cipher algorithms. I was sure that both client and server are not outdated. I have logged into server (by the way, it's Synology, updated to latest available version), and examined the /etc/ssh/sshd_config . The very first (!) line of this file was: Ciphers aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc This is very strange (the fact that line is very first in the file), I am sure I've never touched the file before. However I've changed the line to: Ciphers aes256-ctr,aes128-cbc,3des-cbc,aes192-cbc,aes256-cbc restarted the server (did not figure out how to restart the sshd service only), and now the problem is gone: I can ssh to server as usual. | The -cbc algorithms have turned out to be vulnerable to an attack. As a result, up-to-date versions of OpenSSH will now reject those algorithms by default: for now, they are still available if you need them, but as you discovered, you must explicitly enable them. Initially when the vulnerability was discovered (in late 2008, nearly 10 years ago!) those algorithms were only placed at the tail end of the priority list for the sake of compatibility, but now their deprecation in SSH has reached a phase where those algorithms are disabled by default. According to this question in Cryptography.SE , this deprecation step was already happening in year 2014. Please consider this a gentle reminder to update your SSH server , if at all possible. (If it's a firmware-based implementation, see if updated firmware is available for your hardware.) | {
"source": [
"https://unix.stackexchange.com/questions/459074",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50667/"
]
} |
459,367 | In a Bash script, I'm trying to store the options I'm using for rsync in a separate variable. This works fine for simple options (like --recursive ), but I'm running into problems with --exclude='.*' : $ find source
source
source/.bar
source/foo
$ rsync -rnv --exclude='.*' source/ dest
sending incremental file list
foo
sent 57 bytes received 19 bytes 152.00 bytes/sec
total size is 0 speedup is 0.00 (DRY RUN)
$ RSYNC_OPTIONS="-rnv --exclude='.*'"
$ rsync $RSYNC_OPTIONS source/ dest
sending incremental file list
.bar
foo
sent 78 bytes received 22 bytes 200.00 bytes/sec
total size is 0 speedup is 0.00 (DRY RUN) As you can see, passing --exclude='.*' to rsync "manually" works fine ( .bar isn't copied), it doesn't work when the options are stored in a variable first. I'm guessing that this is either related to the quotes or the wildcard (or both), but I haven't been able to figure out what exactly is wrong. | In general, it's a bad idea to demote a list of separate items into a single string, whether it's a list of command line options or a list of pathnames. Using an array instead: rsync_options=( -rnv --exclude='.*' ) or rsync_options=( -r -n -v --exclude='.*' ) and later... rsync "${rsync_options[@]}" source/ target This way, the quoting of the individual options is maintained (as long as you double quote the expansion of ${rsync_options[@]} ). It also allows you to easily manipulate the individual entries of the array, would you need to do so, before calling rsync . In any POSIX shell, one may use the list of positional parameters for this: set -- -rnv --exclude='.*'
rsync "$@" source/ target Again, double quoting the expansion of $@ is critical here. Tangentially related: How can we run a command stored in a variable? The issue is that when you put the two sets of option into a string, the single quotes of the --exclude option's value becomes part of that value. Hence, RSYNC_OPTIONS='-rnv --exclude=.*' would have worked¹... but it's better (as in safer) to use an array or the positional parameters with individually quoted entries. Doing so would also allow you to use things with spaces in them, if you would need to, and avoids having the shell perform filename generation (globbing) on the options. ¹ provided that $IFS is not modified and that there's no file whose name starts with --exclude=. in the current directory, and that the nullglob or failglob shell options are not set. | {
"source": [
"https://unix.stackexchange.com/questions/459367",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30497/"
]
} |
459,610 | As opposed to editing /etc/hostname, or wherever is relevant? There must be a good reason (I hope) - in general I much prefer the "old" way, where everything was a text file. I'm not trying to be contentious - I'd really like to know, and to decide for myself if it's a good reason.
Thanks. | Background hostnamectl is part of systemd, and provides a proper API for dealing with setting a server's hostnames in a standardized way. $ rpm -qf $(type -P hostnamectl)
systemd-219-57.el7.x86_64 Previously each distro that did not use systemd, had their own methods for doing this which made for a lot of unnecessary complexity. DESCRIPTION
hostnamectl may be used to query and change the system hostname and
related settings.
This tool distinguishes three different hostnames: the high-level
"pretty" hostname which might include all kinds of special characters
(e.g. "Lennart's Laptop"), the static hostname which is used to
initialize the kernel hostname at boot (e.g. "lennarts-laptop"), and the
transient hostname which is a default received from network
configuration. If a static hostname is set, and is valid (something
other than localhost), then the transient hostname is not used.
Note that the pretty hostname has little restrictions on the characters
used, while the static and transient hostnames are limited to the
usually accepted characters of Internet domain names.
The static hostname is stored in /etc/hostname, see hostname(5) for
more information. The pretty hostname, chassis type, and icon name are
stored in /etc/machine-info, see machine-info(5).
Use systemd-firstboot(1) to initialize the system host name for mounted
(but not booted) system images. hostnamectl also pulls a lot of disparate data together into a single location to boot: $ hostnamectl
Static hostname: centos7
Icon name: computer-vm
Chassis: vm
Machine ID: 1ec1e304541e429e8876ba9b8942a14a
Boot ID: 37c39a452464482da8d261f0ee46dfa5
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-693.21.1.el7.x86_64
Architecture: x86-64 The info here is coming from /etc/*release , uname -a , etc. including the hostname of the server. What about the files? Incidentally, everything is still in files, hostnamectl is merely simplifying how we have to interact with these files or know their every location. As proof of this you can use strace -s 2000 hostnamectl and see what files it's pulling from: $ strace -s 2000 hostnamectl |& grep ^open | tail -5
open("/lib64/libattr.so.1", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
open("/proc/self/stat", O_RDONLY|O_CLOEXEC) = 3
open("/etc/machine-id", O_RDONLY|O_NOCTTY|O_CLOEXEC) = 4
open("/proc/sys/kernel/random/boot_id", O_RDONLY|O_NOCTTY|O_CLOEXEC) = 4 systemd-hostname.service? To the astute observer, you should notice in the above strace that not all files are present. hostnamectl is actually interacting with a service, systemd-hostnamectl.service which in fact does the "interacting" with most of the files that most admins would be familiar with, such as /etc/hostname . Therefore when you run hostnamectl you're getting details from the service. This is a ondemand service, so you won't see if running all the time. Only when hostnamectl runs. You can see it if you run a watch command, and then start running hostnamectl multiple times: $ watch "ps -eaf|grep [h]ostname"
root 3162 1 0 10:35 ? 00:00:00 /usr/lib/systemd/systemd-hostnamed The source for it is here: https://github.com/systemd/systemd/blob/master/src/hostname/hostnamed.c and if you look through it, you'll see the references to /etc/hostname etc. References systemd/src/hostname/hostnamectl.c systemd/src/hostname/hostnamed.c hostnamectl systemd-hostnamed.service | {
"source": [
"https://unix.stackexchange.com/questions/459610",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46626/"
]
} |
459,692 | I am writing a bash script to use rsync and update files on about 20 different servers. I have the rsync part figured out. What I'm having trouble with is going through a list of variables. My script thus far look like this: #!/bin/bash
SERVER1="192.xxx.xxx.2"
SERVER2="192.xxx.xxx.3"
SERVER3="192.xxx.xxx.4"
SERVER4="192.xxx.xxx.5"
SERVER5="192.xxx.xxx.6"
SERVER6="192.xxx.xxx.7"
for ((i=1; i<7; i++))
do
echo [Server IP Address]
done Where [Server IP Address] should be the value of the associated variable. So when i = 1 I should echo the value of $SERVER1. I've tried several iterations of this including echo "$SERVER$i" # printed the value of i
echo "SERVER$i" # printer "SERVER" plus the value of i ex: SERVER 1 where i = 1
echo $("SERVER$i") # produced an error SERVER1: command not found where i = 1
echo $$SERVER$i # printed a four digit number followed by "SERVER" plus the value of i
echo \$$SERVER$i # printed "$" plus the value of i It has been a long time since I scripted so I know I am missing something. Plus I'm sure I'm mixing in what I could do using C#, which I've used for the past 11 years. Is what I'm trying to do even possible? Or should I be putting these values in an array and looping through the array? I need to this same thing for production IP addresses as well as location names. This is all in an effort to not have to repeat a block of code I will be using to sync the files on the remote server. | Use an array. #! /bin/bash
servers=( 192.xxx.xxx.2 192.xxx.xxx.3
192.xxx.xxx.4 192.xxx.xxx.5
192.xxx.xxx.6 192.xxx.xxx.7
)
for server in "${servers[@]}" ; do
echo "$server"
done | {
"source": [
"https://unix.stackexchange.com/questions/459692",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/286397/"
]
} |
459,944 | [fakename]$ help time
time: time [-p] pipeline
Report time consumed by pipeline's execution... From this, it seems that time is a Bash builtin. However, I cannot find a description of it on this page: https://www.gnu.org/software/bash/manual/html_node/Shell-Builtin-Commands.html#Shell-Builtin-Commands . Why is this the case? | It is described in the "Shell Grammar/Pipelines" subsection of the bash manpage . It is also described in the
link that you provided in the Pipelines section, where it is indexed under "Reserved Words" . Pipelines A pipeline is a sequence of one or more commands separated by one of the control operators | or |&. The format for a pipeline is: [time [-p]] [ ! ] command [ | or |& command2 ... ] The standard output of command is connected via a pipe to the standard input of command2. This connection is performed before any redirections specified by the command (see REDIRECTION below). If |& is used, the standard error of command is connected to command2's standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error is performed after any redirections specified by the command. The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled. If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully. If the reserved word ! precedes a pipeline, the exit status of that pipeline is the logical negation of the exit status as described above. The shell waits for all commands in the pipeline to terminate before returning a value. If the time reserved word precedes a pipeline, the elapsed as well as user and system time consumed by its execution are reported when the pipeline terminates. The -p option changes the output format to that specified by POSIX. The TIMEFORMAT variable may be set to a format string that specifies how the timing information should be displayed; see the description of TIMEFORMAT under Shell Variables below. Each command in a pipeline is executed as a separate process (i.e., in a subshell). | {
"source": [
"https://unix.stackexchange.com/questions/459944",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/209133/"
]
} |
460,533 | A software I installed inserted a line in my profile that reads: [ -s "$SOME_FILE" ] && \. "$SOME_FILE" I know dot . is synonymous with source , so I suspect this is just sourcing the file, but I have never seen \. before; does it do something else? Edit, regarding DVs: searching for "backslash dot" leads to questions regarding ./ when calling executable files, and man source leads to a manpage where \. does not appear. I don't know what else to try, hence the question. Edit 2: see related questions Why start a shell command with a backslash Backslash at the beginning of a command Why do backslashes prevent alias expansion Run a command that is shadowed by an alias | A backslash outside of quotes means “interpret the next character literally during parsing”. Since . is an ordinary character for the parser, \. is parsed in the same way as . , and invokes the builtin . (of which source is a synonym in bash). There is one case where it could make a difference in this context. If a user has defined an alias called . earlier in .profile , and .profile is being read in a shell that expands aliases (which bash only does by default when it's invoked interactively), then . would trigger the alias, but \. would still trigger the builtin, because the shell doesn't try alias expansion on words that were quoted in any way. I suspect that . was changed to \. because a user complained after they'd made an alias for . . Note that \. would invoke a function called . . Presumably users who write functions are more knowledgeable than users who write aliases and would know that redefining a standard command in .profile is a bad idea if you're going to include code from third parties. But if you wanted to bypass both aliases and functions, you could write command . . The author of this snippet didn't do this either because they cared about antique shells that didn't have the command builtin, or more likely because they weren't aware of it. By the way, defining any alias in .profile is a bad idea because .profile is a session initialization script, not a shell initialization script. Aliases for bash belong in .bashrc . | {
"source": [
"https://unix.stackexchange.com/questions/460533",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45354/"
]
} |
460,595 | I have a scenario like the one below: if [file exists]; then
exit
elif
recheck if file exist (max 10 times)
if found exit else recheck again as per counter
fi | There are many ways to do this loop. With ksh93 syntax (also supported by zsh and bash ): for (( i=0; i<10; ++i)); do
[ -e filename ] && break
sleep 10
done For any POSIX-like shell: n=0
while [ "$n" -lt 10 ] && [ ! -e filename ]; do
n=$(( n + 1 ))
sleep 10
done Both of the loops sleep 10 seconds in each iteration before testing the existence of the file again. After the loop has finished, you will have to test for existence of the file a last time to figure out whether the loop exited due to running 10 times or due to the file appearing. If you wish, and if you have access to inotify-tools, you may replace the sleep 10 call with inotifywait -q -t 10 -e create ./ >/dev/null This would wait for a file creation event to occur in the current directory, but would time out after 10 seconds. This way your loop would exit as soon as the given filename appeared (if it appeared). The full code, with inotifywait (replace with sleep 10 if you don't want that), may look like for (( i=0; i<10; ++i)); do
[ -e filename ] && break
inotifywait -q -t 10 -e create ./ >/dev/null
done
if [ -e filename ]; then
echo 'file appeared!'
else
echo 'file did not turn up in time'
fi | {
"source": [
"https://unix.stackexchange.com/questions/460595",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/190945/"
]
} |
460,836 | I'm running this loop to check and print some things every second. However, because the calculations take maybe a few hundred milliseconds, the printed time sometimes skips a second. Is there any way to write such a loop that I am guaranteed to get a printout every second? (Provided, of course, that the calculations in the loop take less than a second :)) while true; do
TIME=$(date +%H:%M:%S)
# some calculations which take a few hundred milliseconds
FOO=...
BAR=...
printf '%s %s %s\n' $TIME $FOO $BAR
sleep 1
done | To stay a bit closer to the original code, what I do is: while true; do
sleep 1 &
...your stuff here...
wait # for sleep
done This changes the semantics a little: if your stuff took less than a second, it will simply wait for the full second to pass. However, if your stuff takes longer than a second for any reason, it won't keep spawning even more subprocesses with never any end to it. So your stuff never runs in parallel, and not in the background, so variables work as expected too. Note that if you do start additional background tasks as well, you'll have to change the wait instruction to only wait for the sleep process specifically. If you need it to be even more accurate, you'll probably just have to sync it to the system clock and sleep ms instead of full seconds. How to sync to system clock? No idea really, stupid attempt: Default: while sleep 1
do
date +%N
done Output: 003511461 010510925 016081282 021643477 028504349 03... (keeps growing) Synced: while sleep 0.$((1999999999 - 1$(date +%N)))
do
date +%N
done Output: 002648691 001098397 002514348 001293023 001679137 00... (stays same) | {
"source": [
"https://unix.stackexchange.com/questions/460836",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40335/"
]
} |
460,845 | I have a bash script that converts this file "origin.txt" cxx-yyy-zzz-999-111
2018-01-1T00:10:54.412Z
2018-01-5T00:01:19.447Z
1111-6b54-eeee-rrrr-tttt
2018-01-1T00:41:38.867Z
2018-01-5T01:14:55.744Z
1234456-1233-6666-mmmm-12123
2018-01-1T00:12:37.152Z
2018-01-5T00:12:44.307Z to cxx-yyy-zzz-999-111,2018-01-1T00:10:54.412Z,2018-01-5T00:01:19.447Z
1111-6b54-eeee-rrrr-tttt,2018-01-1T00:41:38.867Z,2018-01-5T01:14:55.744Z
1234456-1233-6666-mmmm-12123,2018-01-1T00:12:37.152Z,2018-01-5T00:12:44.307Z How could I do it in bash with AWK? | The input groups each record as three consecutive lines, so the job is simply to join every group of three lines with commas. awk can do that by switching its output record separator: print a comma after the first and second line of each group and a newline after the third, based on NR modulo 3. The same result can be obtained without awk by pasting three lines at a time with paste -d, . A concrete one-liner for both approaches is shown below. | {
"source": [
"https://unix.stackexchange.com/questions/460845",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/303997/"
]
} |
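A minimal sketch of the two approaches referenced in the answer above, using the origin.txt file from the question (output abbreviated to the first record): $ awk 'ORS = NR % 3 ? "," : "\n"' origin.txt   # comma after lines 1 and 2 of each group, newline after line 3
cxx-yyy-zzz-999-111,2018-01-1T00:10:54.412Z,2018-01-5T00:01:19.447Z
$ paste -d, - - - < origin.txt                 # same result without awk
cxx-yyy-zzz-999-111,2018-01-1T00:10:54.412Z,2018-01-5T00:01:19.447Z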
461,113 | Linux is only a kernel, and if users want to use it, then they need a complete distribution. That being said, how were the first versions of Linux used when there were no Linux distributions? | In the early stages of Linux, Linus Torvalds released the Linux kernel source in an alpha state to signal to others that work towards a new Unix-like kernel was in development. By that time, as @RalfFriedi stated, the Linux kernel was cross-compiled in Minix. As for usable software, Linus Torvalds also ported utilities to distribute along with the Linux kernel in order for others to test it. These programs were mainly bash and gcc , as described by LINUX's History by Linus Torvalds . Per the Usenet post: From: [email protected] (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: What would you like to see most in minix?
Summary: small poll for my new operating system
Message-ID: <[email protected]> Date: 25 Aug 91 20:57:08 GMT
Organization: University of Helsinki Hello everybody out there using minix - I'm doing a (free) operating system (just a hobby, won't be big and
professional like gnu) for 386(486) AT clones. This has been brewing
since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things). I've currently ported bash(1.08) and gcc(1.40), and things seem to
work. This implies that I'll get something practical within a few
months, and I'd like to know what features most people would want.
Any suggestions are welcome, but I won't promise I'll implement them
:-) Linus distributed the kernel and core utility programs in a diskette format for users to try it and possibly to contribute to it. Afterwards, there were H.J. Lu's Boot-root floppy diskettes. If this could be called a distribution, then it would gain the fame of being the first distribution capable of being installed on hard disk. These were two 5¼" diskette images containing the Linux kernel and the
minimum tools required to get started. So minimal were these tools
that to be able to boot from a hard drive required editing its master
boot record with a hex editor. Eventually the collection of utilities grew too large to fit on a single diskette. MCC Interim Linux was the first Linux distribution usable by people with slightly less technical skill, as it introduced an automated installation and new utilities such as fdisk . MCC Interim Linux was a Linux distribution first released in February
1992 by Owen Le Blanc of the Manchester Computing Centre (MCC), part
of the University of Manchester. The first release of MCC Interim Linux was based on Linux 0.12 and
made use of Theodore Ts'o's ramdisk code to copy a small root image to
memory, freeing the floppy drive for additional utilities
diskettes.[2] He also stated his distributions were "unofficial experiments",
describing the goals of his releases as being: To provide a simple installation procedure. To provide a more complete installation procedure. To provide a backup/recovery service. To back up his (then) current system. To compile, link, and test every binary file under the current versions of the kernel, gcc, and libraries. To provide a stable base system, which can be installed in a short time, and to which other software can be added with relatively little
effort. After the MCC precursor, SLS was the first distribution offering the X Window System in May of 1992. Notably, the competitor to SLS, the mythical Yggdrasil , debuted in December of 1992. Other major distributors followed as we know them today, notably Slackware in July of 1993 (based on SLS) and Debian in December of 1993 until the first official version 1.1 release in December of 1995. Image credits: * Boot/Root diskettes image from: https://www.maketecheasier.com/ * yggdrasil diskette image from: https://yggdrasilblog.wordpress.com/ | {
"source": [
"https://unix.stackexchange.com/questions/461113",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/304221/"
]
} |
462,017 | Look at the following: $ echo .[].aliases[]
..
$ echo .[].foo[]
..
$ echo .[].[]
..
$ echo .[].xyz[]
..
$ echo .xyz[].xyz[]
.xyz[].xyz[]
$ echo .xyz[].[]
.xyz[].[] Apparently this seems to be globbing something, but I don’t understand how the result comes together. From my understanding [] is an empty character class. It would be intuitive if it matched only the empty string; in this case, I’d expect bash to reproduce in its entirety since nothing matches it in this directory, but also match things like ..aliases (in the first example), or nothing at all; in this case, I’d expect bash to reproduce the string in total, too. This is with GNU bash, version 4.4.23(1)-release. | The [ starts a set. A set is terminated by ] . But there is a way to have ] as part of the set, and that is to specify the ] as the first character. As an empty set doesn't make any sense, this is not ambiguous. So your examples are basically all a dot followed by a set that contains a dot, therefore it matches two dots. The later examples don't find any files and are therefore returned verbatim. | {
"source": [
"https://unix.stackexchange.com/questions/462017",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20834/"
]
} |
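To see the bracket-set parsing described in the answer above in action (assuming a typical system where /etc/hosts exists): $ echo /etc/host[]s]      # the set is ']s' because a leading ] is taken literally
/etc/hosts
$ echo .[].aliases[]      # '.' followed by one character out of the set ].aliases[
..                        # in the question's directory only '..' matches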
462,156 | How do you find the line number in Bash where an error occurred? Example I create the following simple script with line numbers to explain what we need. The script will copy files from cp $file1 $file2
cp $file3 $file4 When one of the cp commands fail then the function will exit with exit 1 . We want to add the ability to the function to also print the error with the line number (for example, 8 or 12). Is this possible? Sample script 1 #!/bin/bash
2
3
4 function in_case_fail {
5 [[ $1 -ne 0 ]] && echo "fail on $2" && exit 1
6 }
7
8 cp $file1 $file2
9 in_case_fail $? "cp $file1 $file2"
10
11
12 cp $file3 $file4
13 in_case_fail $? "cp $file3 $file4"
14 | Rather than use your function, I'd use this method instead: $ cat yael.bash
#!/bin/bash
set -eE -o functrace
file1=f1
file2=f2
file3=f3
file4=f4
failure() {
local lineno=$1
local msg=$2
echo "Failed at $lineno: $msg"
}
trap 'failure ${LINENO} "$BASH_COMMAND"' ERR
cp -- "$file1" "$file2"
cp -- "$file3" "$file4" This works by trapping on ERR and then calling the failure() function with the current line number + bash command that was executed. Example Here I've not taken any care to create the files, f1 , f2 , f3 , or f4 . When I run the above script: $ ./yael.bash
cp: cannot stat ‘f1’: No such file or directory
Failed at 17: cp -- "$file1" "$file2" It fails, reporting the line number plus command that was executed. | {
"source": [
"https://unix.stackexchange.com/questions/462156",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
462,416 | After installing Linux Mint 19 I wanted to check how vsync affects fps in Linux, so I typed this command: CLUTTER_SHOW_FPS=1 cinnamon --replace After some time I accidentally pressed Ctrl + Z and paused that process. Immediately my Bash shell and everything except the mouse cursor froze, so I can't type the fg command. Is there a way to unpause that process without rebooting, and should I use Ctrl + C next time to properly exit that process? | Switch to a new TTY. See How to switch between tty and xorg session? for tips on how to switch TTYs. Determine the PID of the cinnamon process: ps -e | grep cinnamon Send this process the SIGCONT signal with kill -SIGCONT [pid] | {
"source": [
"https://unix.stackexchange.com/questions/462416",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/305327/"
]
} |
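If pkill is available, looking up the PID and sending the signal can be combined into a single command (it sends SIGCONT to every process whose name matches, which is harmless for processes that are not stopped): pkill -CONT cinnamon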
462,663 | What purpose does the [ -n "$PS1" ] in [ -n "$PS1" ] && source ~/.bash_profile; serve? This line is included in a .bashrc of a dotfiles repo . | This is checking whether the shell is interactive or not. In this case, only sourcing the ~/.bash_profile file if the shell is interactive. See "Is this Shell Interactive?" in the bash manual, which cites that specific idiom. (It also recommends checking whether the shell is interactive by testing whether the $- special variable contains the i character, which is a better approach to this problem.) | {
"source": [
"https://unix.stackexchange.com/questions/462663",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/305528/"
]
} |
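For reference, the $- based test mentioned above is usually written like this (a minimal sketch, sourcing ~/.bash_profile only to mirror the snippet from the question): case $- in
  *i*) . ~/.bash_profile ;;   # interactive shell
  *) ;;                       # non-interactive: do nothing
esac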
462,670 | How do I set the default profile that is used after each boot, in PulseAudio? When I boot, sound doesn't work. If I open the PulseAudio Volume Control app, and go to the Configuration pane and select "Analog Surround 4.0 Output" from the Profile drop-down menu, then sound works again. However this only lasts until the next reboot. How do I configure the system to use that profile in the future after reboots? | Add the following to /etc/pulse/default.pa : set-card-profile <cardindex> <profilename> How do we figure out what to use as cardindex and as profilename ? Here's one way. Configure the card so everything is working. The cardindex will usually be 0, but you can find it by running pacmd list-cards and looking at the line index: ... . To find the profilename , use pacmd list-cards | grep 'active profile' The name of the current profile should appear in the output. Remove the angle brackets (the < and > ). You can test your configuration by running pactl set-card-profile <cardindex> <profilename> from the command line to see if it sets the profile correctly, then add it to /etc/pulse/default.pa . Since the index name is dynamic (it can change your PCI device index if you boot with a USB audio device plugged in), you could use <symbolic-name> instead of <index> (if you run pacmd list-cards , the symbolic name is right below the index). Also, the command might fail if the device is missing when starting pulseaudio so it might worth to wrap the command with an .ifexists clause: .ifexists <symbolic-name>
pactl set-card-profile <symbolic-name> <profilename>
.endif | {
"source": [
"https://unix.stackexchange.com/questions/462670",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9812/"
]
} |
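Filled in with example values, the line added to /etc/pulse/default.pa might look like this (both the card name and the profile string are placeholders — take the real ones from the pacmd list-cards output on your machine): set-card-profile alsa_card.pci-0000_00_1b.0 output:analog-surround-40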
463,034 | I am trying to learn how to use getopts so that I can have scripts with parsed input (although I think getopts could be better). I am trying to just write a simple script to return partition usage percentages. The problem is that one of my bash functions does not seem to like that I reference $1 as a variable within the function. The reason I reference $1 is because the get_percent function can be passed a mount point as an optional argument to display instead of all of the mount points. The script #!/usr/bin/bash
set -e
set -u
set -o pipefail
get_percent(){
if [ -n "$1" ]
then
df -h $1 | tail -n +2 | awk '{ print $1,"\t",$5 }'
else
df -h | tail -n +2 | awk '{ print $1,"\t",$5 }'
fi
}
usage(){
echo "script usage: $(basename $0) [-h] [-p] [-m mount_point]" >&2
}
# If the user doesn't supply any arguments, we run the script as normal
if [ $# -eq 0 ];
then
get_percent
exit 0
fi
# ... The Output $ bash thing.sh
thing.sh: line 8: $1: unbound variable
$ bash -x thing.sh
+ set -e
+ set -u
+ set -o pipefail
+ '[' 0 -eq 0 ']'
+ get_percent
thing.sh: line 8: $1: unbound variable | set -u will abort exactly as you describe if you reference a variable which has not been set. You are invoking your script with no arguments, so get_percent is being invoked with no arguments, causing $1 to be unset. Either check for this before invoking your function, or use default expansions ( ${1-default} will expand to default if not already set to something else). | {
"source": [
"https://unix.stackexchange.com/questions/463034",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139546/"
]
} |
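Applied to the function from the question, the default-expansion fix suggested above could look like this (a sketch; quoting "$1" also protects mount points containing spaces): get_percent(){
    # "${1-}" expands to an empty string when no argument was given, so set -u no longer aborts
    if [ -n "${1-}" ]
    then
        df -h -- "$1" | tail -n +2 | awk '{ print $1,"\t",$5 }'
    else
        df -h | tail -n +2 | awk '{ print $1,"\t",$5 }'
    fi
}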
463,072 | The Ubuntu man page for apt-key includes the following note regarding apt-key add : Note: Instead of using this command a keyring should be placed
directly in the /etc/apt/trusted.gpg.d/ directory with a
descriptive name and either "gpg" or "asc" as file extension. I don't think I've ever seen this advice anywhere else. Most projects that host their own repositories say to download their key file and add it with apt-key . What is the motivation behind this advice? Is this an Ubuntu-ism, or does it apply to any APT-based distro? | Those projects have outdated instructions. I know this because I publish a Debian repository and I updated my instructions when I found out about the changes in Debian 9 APT. Indeed, this part of the manual is now out of date, as it is the wrong directory. This is not really to do with .d directories and more to do with preventing a cross-site vulnerability in APT. The older system used separate keyring files for convenience, but this is now a necessity for security; your security. This is the vulnerability. Consider two repository publishers, A and B. In the world of Debian 8 and before, both publishers' keys went in the single global keyring on users' machines. If publisher A could somehow arrange to supplant the repository WWW site of publisher B, then A could publish subversive packages, signed with A's own key , which APT would happily accept and install. A's key is, after all, trusted globally for all repositories. The mitigation is for users to use separate keyrings for individual publishers , and to reference those keyrings with individual Signed-By settings in their repository definitions. Specifically, publisher A's key is only used in the Signed-By of repository A and publisher B's key is only used in the Signed-By of repository B. This way, if publisher A supplants publisher B's repository, APT will not accept the subversive packages from it since they and the repository are signed by publisher A's key not by publisher B's. The /etc/apt/trusted.gpg.d mechanism at hand is an older Poor Man's somewhat flawed halfway house towards this, from back in 2005 or so, that is not quite good enough. It sets up the keyring in a separate file, so that it can be packaged up and just installed in one step by a package manager (or downloaded with fetch / curl / wget ) as any other file. (The package manager handles preventing publisher A's special this-is-my-repository-keyring package from installing over publisher B's, in the normal way that it handles file conflicts between packages in general.) But it still adds it to the set of keys that is globally trusted for all repositories. The full mechanism that exists now uses separate, not globally trusted, keyring files in /usr/share/keyrings/ . My instructions are already there. ☺ There are moves afoot to move Debian's own repositories to this mechanism, so that they no longer use globally trusted keys either. You might want to have a word with those "most projects" that you found. After all, they are currently instructing you to hand over global access to APT on your machine to them. Further reading Daniel Kahn Gillmor (2017-05-02). Please ship release-specific keys separately outside of /etc/apt/trusted.gpg.d/ . Debian bug #861695. Daniel Kahn Gillmor (2017-07-27). debian sources.list entries should have signed-by options pointing to specific keys . Debian bug #877012. "Sources.list entry" . Instructions to connect to a third-party repository . Debian wiki. 2018. Why isn't it a security risk to add to sources.list? Debian 9, APT, and "GPG error: ... InRelease: The following signatures were invalid:" | {
"source": [
"https://unix.stackexchange.com/questions/463072",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/121291/"
]
} |
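For illustration, a per-repository Signed-By setting as described above looks like this in an APT source entry (repository URL, suite and keyring filename are placeholders): # /etc/apt/sources.list.d/example.list
deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://repo.example.com/debian stable main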
463,904 | Is there any advantage/disadvantage of initializing the value of a bash variable in the script, either before the main code, or local variables in a function before assigning the actual value to it? Do I need to do something like this: init()
{
name=""
name=$1
}
init "Mark" Is there any risk of variables being initialized with garbage values (if not initialized) and that having a negative effect of the values of the variables? | There is no benefit to assigning an empty string to a variable and then immediately assigning another variable string to it. An assignment of a value to a shell variable will completely overwrite its previous value. There is, to my knowledge, no recommendation that says that you should explicitly initialize variables to empty strings. In fact, doing so may mask errors under some circumstances (errors that would otherwise be apparent if running under set -u , see below). An unset variable, unused since the start of a script or explicitly unset by running the unset -v command on it, will have no value. The value of such a variable will be nothing. If used as "$myvariable" , you will get the equivalent of "" , and you would never get "garbage data". If the shell option nounset is set with either set -o nounset or set -u , then referencing an unset variable will cause the shell to produce an error (and a non-interactive shell would terminate): $ set -u
$ unset -v myvariable
$ echo "$myvariable"
/bin/sh: myvariable: parameter not set or, in bash : $ set -u
$ unset -v myvariable
$ echo "$myvariable"
bash: myvariable: unbound variable Shell variables will be initialized by the environment if the name corresponds to an existing environment variable. If you expect that you are using a variable that may be initialized by the environment in this way (and if it's unwanted), then you may explicitly unset it before the main part of your script: unset -v myvariable # unset so that it doesn't inherit a value from the environment ... which would also remove it as an environment variable, or you may simply ignore its initial value and overwrite it with an assignment (which would make the environment variable change value too). You would never encounter uninitialized garbage in a shell variable (unless, as stated, that garbage already existed in an environment variable by the same name). | {
"source": [
"https://unix.stackexchange.com/questions/463904",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245789/"
]
} |
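A small illustration of the point above about pre-initialisation masking errors that set -u would otherwise catch (hypothetical variable names): set -u
name=""                    # pre-initialised "just in case"
# ... the real assignment is accidentally never executed ...
echo "name is: $name"      # runs silently with an empty value
unset -v other
echo "other is: $other"    # aborts with "other: unbound variable" -- the mistake is caught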
463,917 | I tried to restrict the number of restarts of a service (running in a container). The OS version is CentOS 7.5, and the service file is pretty much as below (some parameters removed for readability). It should be pretty straightforward, as some other posts pointed out (Post of Server Fault restart limit 1 , Post of Stack Overflow restart limit 2 ). Yet StartLimitBurst and StartLimitIntervalSec never work for me. I tested in several ways: I checked the service PID and killed the service with kill -9 **** several times. The service always gets restarted after 20s! I also tried to mess up the service file so that the container never
runs. Still, it doesn't work; the service just keeps restarting. Any idea? [Unit]
Description=Hello Fluentd
After=docker.service
Requires=docker.service
StartLimitBurst=2
StartLimitIntervalSec=150s
[Service]
EnvironmentFile=/etc/environment
ExecStartPre=-/usr/bin/docker stop "fluentd"
ExecStartPre=-/usr/bin/docker rm -f "fluentd"
ExecStart=/usr/bin/docker run fluentd
ExecStop=/usr/bin/docker stop "fluentd"
Restart=always
RestartSec=20s
SuccessExitStatus=143
[Install]
WantedBy=multi-user.target | StartLimitIntervalSec= was added as part of systemd v230. In systemd v229 and below, you can only use StartLimitInterval= . You will also need to put StartLimitInterval= and StartLimitBurst= in the [Service] section - not the [Unit] section. To check your systemd version on CentOS, run rpm -q systemd . If you ever upgrade to systemd v230 or above, the old names in the [Service] section will continue to work. Source: https://lists.freedesktop.org/archives/systemd-devel/2017-July/039255.html You can have this problem without seeing any error at all, because systemd ignores unknown directives. systemd assumes that many newer directives can be ignored and still allow the service to run. It is possible to manually check a unit file for unknown directives. At least it seems to work on recent systemd: $ systemd-analyze verify foo.service
/etc/systemd/system/foo.service:9: Unknown lvalue 'FancyNewOption' in section 'Service' | {
"source": [
"https://unix.stackexchange.com/questions/463917",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/306685/"
]
} |
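Under a pre-230 systemd such as the one shipped with CentOS 7, the relevant part of the unit from the question would therefore look something like this (a sketch; only the rate-limit lines change): [Service]
Restart=always
RestartSec=20s
# older option names, placed in [Service] rather than [Unit]
StartLimitInterval=150s
StartLimitBurst=2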
464,184 | I used to think that file changes are saved directly into the disk, that is, as soon as I close the file and decide to click/select save. However, in a recent conversation, a friend of mine told me that is not usually true; the OS (specifically we were talking about Linux systems) keeps the changes in memory and it has a daemon that actually writes the content from memory to the disk. He even gave the example of external flash drives: these are mounted into the system (copied into memory) and sometimes data loss happens because the daemon did not yet save the contents into the flash memory; that is why we unmount flash drives. I have no knowledge about operating systems functioning, and so I have absolutely no idea whether this is true and in which circumstances. My main question is: does this happen like described in Linux/Unix systems (and maybe other OSes)? For instance, does this mean that if I turn off the computer immediately after I edit and save a file, my changes will be most likely lost? Perhaps it depends on the disk type -- traditional hard drives vs. solid-state disks? The question refers specifically to filesystems that have a disk to store the information, even though any clarification or comparison is well received. | if I turn off the computer immediately after I edit and save a file, my changes will be most likely lost? They might be. I wouldn't say "most likely", but the likelihood depends on a lot of things. An easy way to increase performance of file writes, is for the OS to just cache the data, tell (lie to) the application the write went through, and then actually do the write later. This is especially useful if there's other disk activity going on at the same time: the OS can prioritize reads and do the writes later. It can also remove the need for an actual write completely, e.g., in the case where a temporary file is removed quickly afterwards. The caching issue is more pronounced if the storage is slow. Copying files from a fast SSD to a slow USB stick will probably involve a lot of write caching, since the USB stick just can't keep up. But your cp command returns faster, so you can carry on working, possibly even editing the files that were just copied. Of course caching like that has the downside you note, some data might be lost before it's actually saved. The user will be miffed if their editor told them the write was successful, but the file wasn't actually on the disk. Which is why there's the fsync() system call , which is supposed to return only after the file has actually hit the disk. Your editor can use that to make sure the data is fine before reporting to the user that the write succeeded. I said, "is supposed to", since the drive itself might tell the same lies to the OS and say that the write is complete, while the file really only exists in a volatile write cache within the drive. Depending on the drive, there might be no way around that. In addition to fsync() , there are also the sync() and syncfs() system calls that ask the system to make sure all system-wide writes or all writes on a particular filesystem have hit the disk. The utility sync can be used to call those. Then there's also the O_DIRECT flag to open() , which is supposed to "try to minimize cache effects of the I/O to and from this file." Removing caching reduces performance, so that's mostly used by applications (databases) that do their own caching and want to be in control of it.
( O_DIRECT isn't without its issues, the comments about it in the man page are somewhat amusing.) What happens on a power-out also depends on the filesystem. It's not just the file data that you should be concerned about, but the filesystem metadata. Having the file data on disk isn't much use if you can't find it. Just extending a file to a larger size will require allocating new data blocks, and they need to be marked somewhere. How a filesystem deals with metadata changes and the ordering between metadata and data writes varies a lot. E.g., with ext4 , if you set the mount flag data=journal , then all writes – even data writes – go through the journal and should be rather safe. That also means they get written twice, so performance goes down. The default options try to order the writes so that the data is on the disk before the metadata is updated. Other options or other filesystem may be better or worse; I won't even try a comprehensive study. In practice, on a lightly loaded system, the file should hit the disk within a few seconds. If you're dealing with removable storage, unmount the filesystem before pulling the media to make sure the data is actually sent to the drive, and there's no further activity. (Or have your GUI environment do that for you.) | {
"source": [
"https://unix.stackexchange.com/questions/464184",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96121/"
]
} |
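As a command-line illustration of the flushing calls mentioned above (file and mount point names are placeholders; the per-file and -f forms need a reasonably recent GNU coreutils sync, roughly 8.24 or newer): sync                 # ask the kernel to flush all pending writes
sync myfile.txt      # fsync() just this file
sync -f /mnt/usb     # flush the whole filesystem containing /mnt/usb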
464,217 | I have a fresh Alpine Linux 3.8.0 installed on a local disk, dual booted with Ubuntu 18.04. While trying to solve some GUI localization issue, I've entered a wrong keymap in setup-keymap . Sadly, after rebooting, this caused all typed letters to be displayed as squares, for example: Alpine login: øøøøø123 My username and password consist of lowercase English letters and digits. When typing letters, the result is garbage, but digits work fine. Now, because of this, I'm not able to log in again and revert the keymap setting. Previously, the keymap was set to us , and everything (almost) was working fine. How can I revert the keymap setting back to us , without having to log in to Alpine? Thanks in advance! | Since the machine dual-boots Ubuntu, the easiest way out is to repair the configuration from the Ubuntu side: boot into Ubuntu, mount the Alpine root partition, chroot into it and re-run setup-keymap us us so the keymap configuration is rewritten with the US layout. Alternatively, you can edit the console keymap configuration by hand: on Alpine the keymap is normally loaded by the loadkmap OpenRC service, whose KEYMAP= setting in /etc/conf.d/loadkmap points at a *.bmap.gz file under /etc/keymap/ (the exact paths may differ slightly between releases, so check what setup-keymap wrote there). After rebooting into Alpine the console should accept letters again. A sketch of the chroot approach is shown below. | {
"source": [
"https://unix.stackexchange.com/questions/464217",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/246490/"
]
} |
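A sketch of the chroot repair described in the answer above, run from the Ubuntu side (/dev/sdXN is a placeholder for the Alpine root partition; if setup-keymap complains inside the chroot, editing /etc/conf.d/loadkmap by hand achieves the same thing): sudo mount /dev/sdXN /mnt        # mount the Alpine root filesystem
sudo chroot /mnt /bin/sh         # enter it with Alpine's shell
setup-keymap us us               # rewrite the keymap configuration
exit
sudo umount /mnt                 # then reboot into Alpine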
464,392 | If I run a command like this one: find / -inum 12582925 Is there a chance that this will list two files on separate mounted filesystems (from separate partitions) that happen to have been assigned the same number? Is the inode number unique on a single filesystem, or across all mounted filesystems? | An inode number is only unique on a single file system. One example you’ll run into quickly is the root inode on ext2/3/4 file systems, which is 2: $ ls -id / /home
2 / 2 /home If you run (assuming GNU find ) find / -printf "%i %p\n" | sort -n | less on a system with multiple file systems you’ll see many, many duplicate inode numbers (although you need to take the output with a pinch of salt since it will also include hard links). When you’re looking for a file by inode number, you can use find ’s -xdev option to limit its search to the file system containing the start path, if you have a single start path: find / -xdev -inum 12582925 will only find files with inode number 12582925 on the root file system. ( -xdev also works with multiple start paths, but then its usefulness is reduced in this particular case.) It's the combination of inode number and device number ( st_dev and st_ino in the stat structure, %D %i in GNU find 's -printf ) that identifies a file uniquely (on a given system). If two directory entries have the same inode and dev number, they refer to the same file (though possibly through two different mounts of a same file system for bind mounts). Some find implementations also have a -samefile predicate that will find files with the same device and inode number. Most [ / test implementations also have a -ef operator to check that two files paths refer to the same file (after symlink resolution though). | {
"source": [
"https://unix.stackexchange.com/questions/464392",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9041/"
]
} |
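To see the device/inode pair discussed above for specific files, GNU stat can print both (the numbers shown are illustrative): $ stat -c 'dev=%d ino=%i %n' / /home
dev=64768 ino=2 /
dev=64769 ino=2 /home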