source_id | question | response | metadata
---|---|---|---|
381,901 | My text file looks like this:
This is one
sentence that is broken.
However this is a good one.
And this
one is
somehow, broken into
many.
I want to remove the trailing newline character for any line which is followed by a line starting with a lowercase letter. So this should be:
This is one sentence that is broken.
However this is a good one.
And this one is somehow, broken into many.
How can I do this? Edit: There are some really good answers here, but I chose to accept the first one that worked and was earliest. Thanks so much everyone! | Try awk '$NF !~ /\.$/ { printf "%s ",$0 ; next ; } {print;}' file where $NF !~ /\.$/ matches lines whose last field does not end with a dot, { printf "%s ",$0 prints such a line with a trailing space and no line feed, next ; } fetches the next line, and {print;} prints it. I am sure there will be a sed option. Note: this joins lines that do not end in a dot rather than checking whether the next line starts with a lowercase letter, so it can also merge sentences that should stay separate. See Stéphane Chazelas's answer. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/381901",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
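The answer above guesses that a sed option exists; one hedged possibility, using GNU sed syntax and implementing the asker's stated condition (join a line with the next one when the next line starts with a lowercase letter), is:

```bash
# Slurp the file into the pattern space, then replace every newline that is
# followed by a lowercase letter with a space (GNU sed; untested on other seds).
sed ':a;N;$!ba; s/\n\([a-z]\)/ \1/g' file
```

The `:a;N;$!ba` part is the usual loop that appends all lines so the substitution can see the line boundaries; a strictly POSIX sed would need the label, the N and the branch split into separate -e expressions.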
381,902 | The following error will be displayed when i use the command "npm start". > [email protected] start /var/www/html/dev/callcenter> react-scripts startsh: 1: react-scripts: Permission deniednpm ERR! Linux 4.4.0-1013-awsnpm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "start"npm ERR! node v6.10.0npm ERR! npm v3.10.10npm ERR! code ELIFECYCLEnpm ERR! [email protected] start: `react-scripts start`npm ERR! Exit status 126npm ERR!npm ERR! Failed at the [email protected] start script 'react-scripts start'.npm ERR! Make sure you have the latest version of node.js and npm installed.npm ERR! If you do, this is most likely a problem with the callcenter package,npm ERR! not with npm itself.npm ERR! Tell the author that this fails on your system:npm ERR! react-scripts startnpm ERR! You can get information on how to open an issue for this project with:npm ERR! npm bugs callcenternpm ERR! Or if that isn't available, you can get their info via:npm ERR! npm owner ls callcenternpm ERR! There is likely additional logging output above.npm ERR! Please include the following file with any support request:npm ERR! /var/www/html/dev/callcenter/npm-debug.log | Make sure that your react-script binary is executable. $ chmod +x node_modules/.bin/react-scripts | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/381902",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/243560/"
]
} |
381,924 | I updated my kernel to fix a problem with a graphics driver (namely AMDGPU-PRO from AMD) and it works like a dream now However, if you fix one problem - another problem will arise.The bcmwl-kernel-source (WiFi driver for the AC1900 wireless card)doesn't seem to work at all? Usually on the 4.10.1.27 kernel on Linux Mint 18.2, (I have no clue how to solve this problem) I have to re-install the WiFi drivers on reboot. using sudo apt install--reinstall bcmwl-kernel-source And it works as if nothing is wrong, tested it 4 hours straight and got a solid 8 megabytes/s peak. Other information that I wouldn't mind speaking of:The PCIe ID is 14e4:43e0 / BCM4360 These were the error logs when trying to do re-install whycan'tnewlyregisteredpeopleputpictureshere >:( And I'm using a computer that I've built, and the specs are: GA-Z270X UD5 motherboard i7 7700k (kabylake) CPU RX 560 graphics card Crucial ballistix elite 2666MHz (2 x 8gb kit) memory TP-link Archer T9E AC1900 (aka the problematic card) wifi card I'm also very new to Linux. so please go easy on me :p | Make sure that your react-script binary is executable. $ chmod +x node_modules/.bin/react-scripts | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/381924",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/243565/"
]
} |
381,974 | I have a local machine which is supposed to make an SSH session to a remote master machine and then another inner SSH session from the master to each of some remote slaves , and then execute 2 commands i.e. to delete a specific directory and recreate it. Note that the local machine has passwordless SSH to the master and the master has passwordless SSH to the slaves. Also all hostnames are known in .ssh/config of the local/master machines and the hostnames of the slaves are in slaves.txt locally and I read them from there. So what I do and works is this: username="ubuntu"masterHostname="myMaster"while read linedo #Remove previous folders and create new ones. ssh -n $username@$masterHostname "ssh -t -t $username@$line "rm -rf Input Output Partition"" ssh -n $username@$masterHostname "ssh -t -t $username@$line "mkdir -p EC2_WORKSPACE/$project Input Output Partition"" #Update changed files... ssh -n $username@$masterHostname "ssh -t -t $username@$line "rsync --delete -avzh /EC2_NFS/$project/* EC2_WORKSPACE/$project""done < slaves.txt This cluster is on Amazon EC2 and I have noticed that there are 6 SSH sessions created at each iteration which induces a significant delay. I would like to combine these 3 commands into 1 to get fewer SSH connections. So I tried to combine the first 2 commands into ssh -n $username@$masterHostname "ssh -t -t $username@$line "rm -rf Input Output Partition && mkdir -p EC2_WORKSPACE/$project Input Output Partition"" But it doesn't work as expected. It seems to execute the first one ( rm -rf Input Output Partition ) and then exits the session and goes on. What can I do? | Consider that && is a logical operator. It does not mean "also run this command" it means "run this command if the other succeeded". That means if the rm command fails (which will happen if any of the three directories don't exist) then the mkdir won't be executed. This does not sound like the behaviour you want; if the directories don't exist, it's probably fine to create them. Use ; The semicolon ; is used to separate commands. The commands are run sequentially, waiting for each before continuing onto the next, but their success or failure has no impact on each other. Escape inner quotes Quotes inside other quotes should be escaped, otherwise you're creating an extra end point and start point. Your command: ssh -n $username@$masterHostname "ssh -t -t $username@$line "rm -rf Input Output Partition && mkdir -p EC2_WORKSPACE/$project Input Output Partition"" Becomes: ssh -n $username@$masterHostname "ssh -t -t $username@$line \"rm -rf Input Output Partition && mkdir -p EC2_WORKSPACE/$project Input OutputPartition\"" Your current command, because of the lack of escaped quotes should be executing: ssh -n $username@$masterHostname "ssh -t -t $username@$line "rm -rf Input Output Partition if that succeeds: mkdir -p EC2_WORKSPACE/$project Input Output Partition"" # runs on your local machine You'll notice the syntax highlighting shows the entire command as red on here, which means the whole command is the string being passed to ssh. Check your local machine; you may have the directories Input Output and Partition where you were running this. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/381974",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144281/"
]
} |
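Building on the answer above, the three per-slave commands from the question could be collapsed into a single nested SSH call per slave. This is only a sketch reusing the asker's variable names and paths, with the inner quotes escaped and the commands separated by ; so a failed rm does not abort the rest:

```bash
username="ubuntu"
masterHostname="myMaster"
while read line; do
  # One inner SSH session per slave: clean up, recreate, then sync.
  ssh -n "$username@$masterHostname" "ssh -t -t $username@$line \"rm -rf Input Output Partition; mkdir -p EC2_WORKSPACE/$project Input Output Partition; rsync --delete -avzh /EC2_NFS/$project/* EC2_WORKSPACE/$project\""
done < slaves.txt
```

Use && instead of ; only if the later commands really must be skipped when an earlier one fails.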
382,003 | From bash manual, for conditional expressions string1 == string2string1 = string2 True if the strings are equal. When used with the [[ command, this performs pattern matching as described above (see Section 3.2.4.2 [Conditional Constructs], page 10). What does "pattern matching" mean here? What is "pattern matching" opposed to here? If not used with [[ but with other commands, what does "this" perform? ‘=’ should be used with the test command for posix conformance. What does POSIX say here? What is the sentence opposed to? Can == be used with test command? I tried and it seems yes. Can = be used with other commands besides test ? I tried = with [[ and [ , and it seems yes. what are the differences between == and = ? In Bash 4.3, I tried == and = with test , [[ , and [ . == and = look the same to me. Can == and = be used interchangeably in any conditional expression? Thanks. | POSIX test (or [ ... ] ) only knows about the one with a single equal sign: s1 = s2 True if the strings s1 and s2 are identical; otherwise, false. But Bash accepts the double equal sign too, though the builtin help doesn't admit to that (the manual does): $ help test | grep = -A1 STRING1 = STRING2 True if the strings are equal. STRING1 != STRING2 True if the strings are not equal. As for other shells, it depends. Well, particularly Dash is the stubborn one here: $ dash -c '[ x == x ] && echo foo'dash: 1: [: x: unexpected operator but $ yash -c '[ x == x ] && echo foo'foo$ busybox sh -c '[ x == x ] && echo foo'foo$ ksh93 -c '[ x == x ] && echo foo'foo zsh is a bit odd here, == is considered a special operator, so it must be quoted: $ zsh -c '[ x == x ] && echo foo'zsh:1: = not found$ zsh -c '[ x "==" x ] && echo foo'foo The external test / [ utility from GNU coreutils on my Debian supports == (but the manual doesn't admit that), the one on OS X doesn't. So, with test / [ .. ] , use = as it's more widely supported. With the [[ ... ]] construct , both = and == are equal (at least in Bash) and the right side of the operator is taken as a pattern, like in a filename glob, unless it is quoted. (Filenames are not expanded within [[ ... ]] ) $ bash -c '[[ xxx == x* ]] && echo foo'foo But of course that construct isn't standard: $ dash -c '[[ xxx == x* ]] && echo foo'dash: 1: [[: not found$ yash -c '[[ xx == x* ]] && echo foo'yash: no such command ‘[[’ And while Busybox has it, it does't do the pattern match: $ busybox sh -c '[[ xx == xx ]] && echo yes || echo no'yes$ busybox sh -c '[[ xx == x* ]] && echo yes || echo no'no | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/382003",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
382,054 | From Bash Manual Storing the regular expression in a shell variable is often a useful way to avoid problems with quoting characters that are special to the shell. It is sometimes difficult to specify a regular expression literally without using quotes, or to keep track of the quoting used by regular expressions while paying attention to the shell’s quote removal. Using a shell variable to store the pattern decreases these problems. For example, the following are equivalent: pattern='[[:space:]]*(a)?b'[[ $line =~ $pattern ]] and [[ $line =~ [[:space:]]*(a)?b ]] If you want to match a character that’s special to the regular expression grammar, it has to be quoted to remove its special meaning. This means that in the pattern xxx.txt , the . matches any character in the string (its usual regular expression meaning), but in the pattern "xxx.txt" it can only match a literal . . Shell programmers should take special care with backslashes, since back-slashes are used both by the shell and regular expressions to remove the special meaning from the following character. The following two sets of commands are not equivalent: pattern='\.'[[ . =~ $pattern ]][[ . =~ \. ]][[ . =~ "$pattern" ]][[ . =~ '\.' ]] The first two matches will succeed, but the second two will not, because in the second two the backslash will be part of the pattern to be matched. In the first two examples, the backslash removes the special meaning from . , so the literal . matches. If the string in the first examples were anything other than . , say a , the pattern would not match, because the quoted . in the pattern loses its special meaning of matching any single character. How is storing the regular expression in a shell variable a useful way to avoid problems with quoting characters that are special to the shell? The given examples don't seem to explain that.In the given examples, the regex literals in one method and the values of the shell variable pattern in the other method are the same. Thanks. | [[ ... ]] tokenisation clashes with regular expressions (more on that in my answer to your follow-up question ) and \ is overloaded as a shell quoting operator and a regexp operator (with some interference between the two in bash), and even when there's no apparent reason for a clash, the behaviour can be surprising. Rules can be confusing. Who can tell what these will do without trying it (on all possible input) with any given version of bash ? [[ $a = a|b ]][[ $a =~ a|b ]][[ $a =~ a&b ]][[ $a =~ (a|b) ]][[ $a =~ ([)}]*) ]][[ $a =~ [/\(] ]][[ $a =~ \s+ ]][[ $a =~ ( ) ]][[ $a =~ [ ] ]][[ $a =~ ([ ]) ]] You can't quote the regexps, because if you do, since bash 3.2 and if bash 3.1 compatibility has not been enabled, quoting the regexps removes the special meaning of RE operator. For instance, [[ $a =~ 'a|b' ]] Matches if $a contains a litteral a|b only. Storing the regexp in a variable avoids all those problems and also makes the code compatible to ksh93 and zsh (provided you limit yourself to POSIX EREs): regexp='a|b'[[ $a =~ $regexp ]] # $regexp should *not* be quoted. There's no ambiguity in the parsing/tokenising of that shell command, and the regexp that is used is the one stored in the variable without any transformation. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/382054",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
382,060 | When I run sudo and enter my password, a subsequent invocation of sudo within a few minutes will not need the password to be re-entered. How can I change the default timeout to require the password again? | man sudoers says: Once a user has been authenticated, [...] the user may then use sudo without a password for a short period of time (5 minutes unless overridden by the timestamp_timeout option). To change the timeout, run, sudo visudo and add the line: Defaults timestamp_timeout=30 where 30 is the new timeout in minutes. To always require a password, set to 0 . To set an infinite timeout, set the value to be negative. To totally disable the prompt for a password for user ravi : Defaults:ravi !authenticate | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/382060",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
382,077 | How do I replace the string I1Rov4Rvh/GtjpuuYttr== with mytest in a file mtestsed.properties using the sed command? I have tried: sed -e -i 's/I1Rov4Rvh/GtjpuuYttr==/mytest/g' mtestsed.properties | The sed delimiter can be any character, which helps precisely when the string to be replaced contains a / . Either escape the / symbol: sed -i 's/I1Rov4Rvh\/GtjpuuYttr==/mytest/g' or use another separator: sed -i 's|I1Rov4Rvh/GtjpuuYttr==|mytest|g' sed -i 's:I1Rov4Rvh/GtjpuuYttr==:mytest:g' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/382077",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/243668/"
]
} |
382,156 | So I have scriptA which does: ssh server1 -- scriptB &ssh server2 -- scriptB &ssh server3 -- scriptB &waitotherstuffhappens ScriptB does: rsync -av /important/stuff/. remoteserver:/remote/dir/.rsync -av /not/so/important/stuff/. remoteserver:/remote/dir/. &exit My desired result is scriptA will wait for all the instances of scriptB to finish before moving on, which it currently does, however it's also waiting for the background rsyncs of the not so important stuff. These are larger files that I don't want to wait on. I've read through Difference between nohup, disown and & and tried different combinations, but I'm not getting the result I'm looking for. At this point I'm pretty stumped. Any help would be appreciated! | The problem here is that sshd waits for end-of-file on the pipe it is reading the command's stdout (not the stderr one for some reason, at least with the version I'm testing on) from. And the background job inherits a fd to that pipe. So, to work around that, redirect the output of that background rsync command to some file or /dev/null if you don't care for it. You should also redirect stderr, because even if sshd is not waiting for the corresponding pipe, after sshd exits, the pipe will be broken so rsync would be killed if it tries to write on stderr. So: rsync ... > /dev/null 2>&1 & Compare: $ time ssh localhost 'sleep 2 &'ssh localhost 'sleep 2 &' 0.05s user 0.00s system 2% cpu 2.365 total$ time ssh localhost 'sleep 2 > /dev/null &'ssh localhost 'sleep 2 > /dev/null &' 0.04s user 0.00s system 12% cpu 0.349 total And: $ ssh localhost '(sleep 1; ls /x; echo "$?" > out) > /dev/null &'; sleep 2; cat out141 # ls by killed with SIGPIPE upon writing the error message$ ssh localhost '(sleep 1; ls /x; echo "$?" > out) > /dev/null 2>&1 &'; sleep 2; cat out2 # ls exited normally after writing the error on /dev/null instead # of a broken pipe | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/382156",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/243726/"
]
} |
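Applied to the asker's scriptB, the redirection suggested in the answer would look like this (a sketch of the original script with only the redirection added):

```bash
#!/bin/bash
# scriptB: the important transfer is waited on as before; the background rsync
# gets its stdout and stderr detached so sshd is not left waiting on the pipe.
rsync -av /important/stuff/. remoteserver:/remote/dir/.
rsync -av /not/so/important/stuff/. remoteserver:/remote/dir/. > /dev/null 2>&1 &
exit
```

If the background transfer's output is still wanted, point the redirection at a log file on the remote host instead of /dev/null.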
382,191 | I'm sourcing a bash script in the terminal , so exiting on error with set -o errexit kills my terminal, which is EXTREMELY ANNOYING, because I have to close the terminal, open another one, and reset some variables. So far, using command || return lines, in the script, is doing exactly what I want set -o errexit to do... But I want it done for the entire script; not just one line/command I have a file full of commands for setting up a site, and I'd rather not do command || return for every single line in the file Is there another set option, or something else that will just "return" instead of exiting the terminal? -- Just for clarity , I'd like to kill the script, and leave the terminal in the same state that pressing ctrl+C to kill a service running in the terminal would. command || return does that. But I don't want to tack on || return to every line in the file. So I'm looking for something similar to set -o errexit , that doesn't cause the terminal to shut down --- Note: Creating a dumb script with two lines in it (super.sh): create_path=~/Desktop/site_builder/create.shsource $create_path blah And placing set -o errexit at the top of create.sh, works exactly as I expect it. However, it's really stupid to have to create a file with two lines in it, just to call another bash script, instead of just calling it from the terminal. Ugghhh here's some examples: in super.sh #!/bin/bashcreate_path=~/Desktop/site_builder/create.shsource $create_path blah in create.sh #!/bin/bashset -o errexit#line below this is a line that fails and will cause the script to stop and return to the terminal as expected sed "s/@@SITE_NAME@@/$dirname" ~/Desktop/site_builder/template_files/base.html > ~/Desktop/$dirname/templates/base.html # a line with a stupid error in the terminal: $ bash super.sh output as expected: my-mac$ This works. What an annoying solution. I want , ideally, to execute what's in the stupid super.sh file from the terminal, not the super.sh file :D, without having the terminal shut down on me. This is what happens with what I'm trying to do: terminal command: my-mac$ source $create_path blah in create.sh I still have set -o errexit Here's the output on the terminal sed: 1: "s/@@SITE_NAME@@/blah": unterminated substitute in regular expressionSaving session......copying shared history......saving history...truncating history files......completed.[Process completed] And then the terminal is frozen. Ctrl+C doesn't work, neither does Ctrl+D If instead of set -o errexit , if I just use command || return statements everywhere in the create.sh file, then I get exactly what I want , while executing the lines in supser.sh directly on the terminal (instead of calling super.sh from the terminal). But that's not a practical solution either. Note: I liked @terdon 's answer about just spawning a child shell so I ended up just spawning a sub shell via the script instead of the terminal, as he showed in his answer using the braces ( ) , around the entire script.. His answer works too. | This is the only thing that works for what I needed to accomplish (creating a virtual environment then activating it, then installing requirements from a bash script): spawn a subshell / child shell from the script, as in: stupid_file.sh (set -o errexit#bunch of commands#one line fails) run the stupid_file using: source stupid_file.sh <file arguments here> || true THE END. ** takes a bow ** (credit goes to Jeff and Terdon) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/382191",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
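A minimal, self-contained illustration of the accepted trick, with hypothetical file and command names (any failing command plays the role of false):

```bash
# stupid_file.sh — the body runs in a subshell, so errexit can only terminate
# the subshell, never the interactive shell that sources this file.
(
  set -o errexit
  echo "doing work"
  false                       # the first failure stops the subshell here
  echo "never reached"
)
```

Sourcing it with source stupid_file.sh || echo "script failed, terminal still alive" leaves the interactive shell running whether or not the script succeeds.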
382,200 | Linux Mint 18.2, MATE. For example, I am copying some files from one place to another. It is quite hard to notice errors or see how many files left while a window of this file operation is minimized. In Windows data copying process displays in a minimized window in the panel by means of a green strip, or a minimized window changes its color because of some error. Is there a way to do something similar in Linux? | This is the only thing that works for what I needed to accomplish (creating a virtual environment then activating it, then installing requirements from a bash script): spawn a subshell / child shell from the script, as in: stupid_file.sh (set -o errexit#bunch of commands#one line fails) run the stupid_file using: source stupid_file.sh <file arguments here> || true THE END. ** takes a bow ** (credit goes to Jeff and Terdon) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/382200",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/243523/"
]
} |
382,208 | For an initial file with lines like this example:
1
12
123
1234
12345
The desired state of the file is:
'1'
'12'
'123'
'1234'
'12345'
I've been doing this with two commands, :%s/^/'/g and :%s/$/'/g , that I would like to get into one. However, when I try :%s/[$^]/'/g I get the error E486: Pattern not found: [$^] I know the leading ^ in brackets means exclusion, so I figured putting $ first would mean match both the beginning and end of lines, which is obviously not happening. How can I match both the beginning and end of lines in vim? | How about: :%s/.*/'&'/ "Replace zero or more characters with those characters preceded and succeeded by a single-quote". | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/382208",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99989/"
]
} |
382,214 | My question comes from How does storing the regular expression in a shell variable avoid problems with quoting characters that are special to the shell? . Why is there an error: $ [[ $a = a|b ]] bash: syntax error in conditional expression: unexpected token `|'bash: syntax error near `|b' Inside [[ ... ]] the second operand of = is expected to be aglobbing pattern. Is a|b not a valid globbing pattern? Can you point out whichsyntax rule it violates? Some comment below points out that | is interpreted as pipe. Then changing = for glob pattern to =~ for regex pattern make | work $ [[ $a =~ a|b ]] I learned from Learning Bash p180 in my previous post that | is recognized as pipe at the beginning ofinterpretation, even before any other steps of interpretation (including parse the conditional expressions in the examples). So how can | be recognized as regex operator when using =~ , without being recognized as pipein invalid use, just as when using = ? That makes me think that the syntax error in part 1 doesn't mean that | is interpreted as a pipe. Each line that the shell reads from the standard input or a script is called a pipeline; it contains one or more commands separated by zero or more pipe characters (|). For each pipeline it reads, the shell breaks it up into commands, sets up the I/O for the pipeline, then does the following for each command (Figure 7-1): Thanks. | Standard globs ("filename expansion") are: * , ? , and [ ... ] . | is not a valid glob operator in standard (non-extglob) settings. Try: shopt -s extglob[[ a = @(a|b) ]] && echo matched | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/382214",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
382,279 | I like to sign my git commits with my PGP key, so I was quite alarmed when I went to git commit -S but instead of prompting for my PGP key passphrase, git just started hanging. I haven't made a change to my GPG setup in several months and have made many commits since then with no problem. Additionally, when I attempt to view my private keys with gpg -K , gpg hangs. However, when I run gpg -k to view my public keys, it returns the list like normal. Hopefully someone will have some idea of what is causing this problem and how to fix it. | I came across this exact issue (OSX Sierra 10.12.6, gpg/GnuPG 2.2.5). Commands that would hang:
gpg -K          # --list-secret-keys
gpg -d          # --decrypt
gpg --edit-key
gpgconf --kill gpg-agent
My solution was the same as mentioned by John above (ie. kill gpg-agent) as most other methods on how-can-i-restart-gpg-agent would also hang.
# Solution
pkill -9 gpg-agent
Then for signing git commits I set the tty env as mentioned by cas above and also at gpg-failed-to-sign-commit-object . export GPG_TTY=$(tty) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/382279",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/177537/"
]
} |
382,326 | I have randomly been reading about union file system which enables a user to mount multiple filesystems on top of one another simultaneously. However, am finding trouble deciding on which one to use(Unionfs vs Aufs vs Overlayfs vs mhddfs) and why as I have not found concrete information on the subject anywhere. I know for instance that overlayFS has been adopted in the mainstream Linux kernel which means it might get wider adoption. Would appreciate if someone would give me some perspective. Also I can't find any conceiving use-case for Union file system over something like LVM (as recommended by users in separate question ) or RAID setup except in the fact that LVM requires formatting all the drives which might not be desirable if you already have valuable data on the drives. | Here are some thoughts - I am still learning this and will update this as I go. How to choose the union filesystem There are two ways to look at this: How do the features of each one compare? For some common use cases, which one should I choose? I'll compare unionfs / unionfs-fuse / overlayfs / aufs / mergerfs, the latter being a replacement for mhddfs. Features of each one Development status aufs seems to be active unionfs looks mature, but not under active development ? unionfs-fuse seems to be active mergerfs seems to be active overlayfs is active Distribution / Kernel support There are kernel mode and usersystem mode filesystems, the latter run on FUSE. Kernel mode ones have less overhead (there is overhead when code switches between user space and kernel space) but the only one currently supported in the Linux kernel is overlayfs . User mode filesystems are easier for distributions to package. unionfs and aufs need kernel patches unionfs is not distributed by Debian (the rest are) unionfs-fuse and mergerfs are based on FUSE, so don't need to additional modules in the kernel overlayfs has been part of the kernel since 3.18 (Debian Stretch) Copy on write This relates to the Live CD use case below: mergerfs does not have copy on write The others do Use cases Read-only root / The Live CD use case The idea is to have a read-only CD-ROM/partition of a linux system. The union filesystem makes it look to the user like it is a read-write system so they can make changes. There is a read-write filesystem (for example, a tmpfs RAM disk) which stores the "Delta" of any changes made by the user, but not the full snapshot. Here any of the union filesystems except mergerfs would do (lack of cow support). Docker use case I am aware this is a main use case, but don't know the details - can someone provide guidance on this? Merging hard disks For example, you might have two sets of /home directories on different filesystems. Or you might be upgrading your home computer with a second hard disk, and want a single logical volume. This is where you don't actually want copy-on-write, so possibly mergerfs is the best choice. Union filesystem versus LVM for disk pooling I'll list some use cases that can be achieved with union filesystems but not LVM: If you are upgrading an existing system with a second disk, something like mergerfs might be better because LVM would require you to reformat the first hard disk hence destoying the data on it. A union filesystem would avoid this step. LVM might split a file over two physical hard disks (assuming RAID 0), so you would lose it if one hard disk fails. Some users might like, for example, to keep their /home directory on a USB stick that they can take away. 
In the use case of one virtual partition on two physical disks, with LVM you wouldn't need to worry about whether files get saved on one disk or the other. With mergefs, the system can automatically choose which one for you depending on how much free space is available. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/382326",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/94670/"
]
} |
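Since overlayfs is the only union filesystem in the mainline kernel, a minimal mount sketch may make the read-only-base/writable-delta idea above concrete. All four directory names here are placeholders; workdir must be an empty directory on the same filesystem as upperdir:

```bash
# /lower holds the read-only base, /upper collects the changes ("delta"),
# and the combined view appears at /merged.
mkdir -p /lower /upper /work /merged
mount -t overlay overlay -o lowerdir=/lower,upperdir=/upper,workdir=/work /merged
```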
382,377 | I am using nohup command from a while like nohup bash life.bash > nohup.out 2> nohup.err & In the nohup.err file I always have a line nohup: ignoring input . Can you please help me figure out what it means? | That’s just nohup telling you what it’s set up; it outputs that to its standard error, which you’ve redirected to nohup.err . You can avoid the message by redirecting its standard input: nohup bash life.bash > nohup.out 2> nohup.err < /dev/null & nohup checks its standard input, standard output and standard error to see which are connected to a terminal; if it finds any which are connected, it handles them as appropriate (ignoring input, redirecting output to nohup.out , redirecting error to standard output), and tells you what it’s done. If it doesn’t find anything it needs to disconnect, it doesn’t output anything itself. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/382377",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83681/"
]
} |
382,390 | The goal is to unzip many zip files and rename the resulting file to the original archive name. Given a folder with many zip files, most of them containing a file called bla.txt , but of course there are some exceptions. All is known is that every file has a single file, which is a txt file. ./zips/ a.zip - contains bla.txt b.zip - contains bla.txt c.zip - contains somethingelse.txt d.zip - contains bla.txt ... - ... output should be ./zips/ a.txt b.txt c.txt d.txt ... Current best shot, which extracts everything but leaves me with a ton of folders: for z in *.zip; do unzip $z -d $(echo 'dir_'$z); done maybe something like for z in *.zip; do unzip $z; mv 'content_name.txt' $z; done But how is the content name obtained? | You could continue with the -d idea, and simply rename any extracted file to the desired "zip name minus the zip plus txt": mkdir tmpfor f in *.zip; do unzip "$f" -d tmp && mv tmp/* "${f%.zip}.txt"; donermdir tmp Alternatively, you could pipe the output from unzip into the appropriately-named file: for f in *.zip; do unzip -p "$f" > "${f%.zip}.txt"; done | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/382390",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/243919/"
]
} |
382,398 | Plasma gives me the following error after closing Apper: KDEInit could not launch '/usr/bin/apper' Prior to this I had a problem of high CPU load and rebuild the kservice desktop file with kbuildsycoca5, restarted plasma and then restarted my PC. I also cleaned cache and memory with BleachBit. And I tried to uninstall OpenJDK and deleted the "suspicious" files /usr/lib/jvm/.java-1.8.0-openjdk-amd64.jinfo and /usr/lib/jvm/.java-gcj-6.jinfo . However I already tried reinstalling OpenJDK which also restored those 2 files. Note that when running sudo dpkg --verify those 2 files are now shown too. Edit: debsums|grep -v OK tells me they are missing. Also note that otherwise apper seems to be working fine. When running sudo apper I get this in the console: QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root' QCommandLineParser: option not defined: "install-mime-type" QCommandLineParser: option not defined: "install-package-name" QCommandLineParser: option not defined: "install-provide-file" QCommandLineParser: option not defined: "install-catalog" QCommandLineParser: option not defined: "remove-package-by-file" Reusing existing ksycoca Recreating ksycoca file ("/root/.cache/ksycoca5_...", version 303) Still in the time dict (i.e. deleted files) ("apps") Menu "applications-kmenuedit.menu" not found. new: "/usr/share/applications/openjdk-8-policytool.desktop" Saving QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root' QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root' Invalid pixmap specified. Invalid pixmap specified. QObject::connect: No such slot MainUi::setCaption(QString) QObject::connect: (sender name: 'ApperKCM') I'm running Debian 9.1 with KDE. Still new to GNU/Linux and any help is welcome. Edit: It works again after I reinstalled those 2 packages via apt-get install --reinstall gcj-6-jre-headles & apt-get install --reinstall openjdk-8-jre-headless:amd64 . However I still frequently get this message when closing apper. Edit: one should not run apper as root (with sudo)! | You could continue with the -d idea, and simply rename any extracted file to the desired "zip name minus the zip plus txt": mkdir tmpfor f in *.zip; do unzip "$f" -d tmp && mv tmp/* "${f%.zip}.txt"; donermdir tmp Alternatively, you could pipe the output from unzip into the appropriately-named file: for f in *.zip; do unzip -p "$f" > "${f%.zip}.txt"; done | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/382398",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233262/"
]
} |
382,446 | I've tried several invocations of the OEM tar to create LZMA-compressed tarballs. More specifically, I tried: tar -c -f --lzma Windows\ 7.vmwarevm.tar.lzma Windows\ 7.vmwarevm My efforts created an archive with filename --lzma , and tar complained of Windows 7.vmwarevm.tar.lzma : Cannot stat: No such file or directory , probably for the same reason: --lzma was taken as the filename of the archive to be created, and consequently the actual intended archive name was taken to be the first in a list of arguments to include in the archive. I thought after some searching that MacOS had not included it in the provided options, and built GNU tar from scratch, storing it under another name in /usr/local/bin . However, my efforts to use the above invocation with the renamed and newly built tar had the same effect: I was building an archive in --lzma . My computer has a seemingly working /usr/local/bin/lzma . What invocation(s) should I use, perhaps piping tar to lzma and perhaps in a script to do the work of "tar czf foo.tgz foo", but uses lzma instead of gzip for compression? | When using tar , the first word after -f is the output filename . In your case, switching the order of options might be enough: tar -c --lzma -f foo.tar.lzma sourcefile(s) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/382446",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19841/"
]
} |
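The asker also considered piping tar into the standalone lzma binary; assuming /usr/local/bin/lzma accepts -c (write to standard output) like the xz-provided lzma does, a sketch would be:

```bash
# Write the archive to stdout and compress it with the external lzma tool.
tar -cf - "Windows 7.vmwarevm" | lzma -c > "Windows 7.vmwarevm.tar.lzma"
```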
382,531 | My os was installed from debian 8.5.0 amd64. Execute commands after installation. cat /etc/issueDebian GNU/Linux 8 \n \l No 8.5 here. cat /proc/versionLinux version 3.16.0-4-amd64 ([email protected]) (gcc version 4.8.4 (Debian 4.8.4-1) ) #1 SMP Debian 3.16.43-2+deb8u2 (2017-06-26) No 8.5 here too. Does deb8u2 means debian 8.5? | Debian does not include the minor version number in /etc/os-release , despite the clear indication in the manual that minor versions are allowed, and despite the inclusion of minor version numbers there by other Linux distributions. The only explanation that anyone has ever come up with for this is the rather weak one — admittedly proferred by a person known only by a pseudonym on a discussion forum and hardly in any way official — that the Release Announcement for 8.5 said "this update does not constitute a new version of Debian 8". Yet that self-same announcement used "8.5" as the string in its headline. You can obtain the minor number in the Debian-specific way that is still the only mechanism mentioned in its FAQ document , which makes no mention of /etc/os-release at all: Use lsb_release or read the Debian-specific /etc/debian_version file, which does include the minor version number, again demonstrating that the minor number is considered part of the version. Debian is not the only operating system which does not include the minor version number in the version string in /etc/os-release . Neither does CentOS. (See CentOS bug #9448 and bug 8359 .) Arch does not include a version string at all. As for deb8u2 , that is not a complete version string either . The actual version string, as you can see in that output, is 3.16.43-2+deb8u2 . This string follows the convention of suffixing a local version string to the origin version string. The origin version here is 3.16.43 , and the suffix is 2+deb8u2 , known as the Debian version of the package. You'll find this deb N u M scheme used a lot in Debian package versions. The suffix does indicate Debian 8, but the update number is the update number for the Debian package version , and is not the minor version number of the Debian operating system. This is the Linux kernel package in Debian with Linux version 3.16.43 and Debian version 2+deb8u2 . See the version history of that package . In the larger picture, what is in /proc/version is the version string of the (running) kernel , in particular here the version string of the Debian kernel package containing the kernel, not of the operating system. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/382531",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102745/"
]
} |
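To make the answer's point concrete, the minor version can be read with either of the commands it mentions; the output below is illustrative for a hypothetical 8.5 installation and will differ on other systems:

```bash
$ cat /etc/debian_version
8.5
$ lsb_release -d
Description:    Debian GNU/Linux 8.5 (jessie)
```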
382,535 | I am a cat-owner and a cat-lover. But I don't like it when my cat sits on my keyboard and pushes randoms keys and messes everything up. I have an idea to have a function key that turns off the keyboard (except for one special key combination). I know there is already Ctl - S , but this freezes the keyboard and keeps track of the input until the keyboard is unlocked. Is there any way have the keyboard disregard all input except one hard-to-press-accidentally key combination? Bonus points: Is there any way to do the same thing in Windows? | Open a tiny terminal window somewhere on the screen and run cat in it. Whenever you want to protect the system from your cat, change focus to that window. Not many people know this but this feature was an important design goal for the cat program :). Unfortunately, really clever cats (like my evil beast) know what Ctrl-C is. If your cat is clever enough to figure out Ctrl-C , Ctrl-D , Ctrl-\ or Ctrl-Z , run cat using this sh script wrapper ( /usr/local/bin/toodamnsmartcat.sh ): #!/bin/shtrap "" TSTP INT QUITstty raw -echowhile true; do cat -vdone | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/382535",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/244013/"
]
} |
382,578 | I have this data:
300>BRIAN
100>DANY
200>NICOLE
105>DANY
And I want to generate the following:
300>BRIAN
205>DANY
200>NICOLE
The delimiter is > and the values in the first column should be summed for each name. | Obligatory GNU Datamash solution:
datamash -st '>' groupby 2 sum 1 < data | datamash -t '>' reverse
300>BRIAN
205>DANY
200>NICOLE
 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/382578",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/244055/"
]
} |
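If datamash is not available, the same grouping can be done with plain awk (note that the order of the output groups is not guaranteed):

```bash
# Sum the first field per second field, using > as the field separator.
awk -F'>' '{ sum[$2] += $1 } END { for (name in sum) print sum[name] ">" name }' data
```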
382,592 | I used my school Windows PC to copy the content(to the USB) and now when I connect it to my home computer(running Antergos) it is not detectable. Other pen drives are working fine. My friend, who also had to copy, has a Windows PC and told me that his pen drive is showing as a shortcut(not accessible). Tried fdisk -l , lsblk and lsusb but the device is not showing.Output of lsusb : Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hubBus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hubBus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hubBus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hubBus 001 Device 008: ID 23a9:ef18 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hubBus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hubBus 004 Device 002: ID 046d:c07e Logitech, Inc. G402 Gaming MouseBus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hubBus 003 Device 002: ID 413c:2107 Dell Computer Corp. /*keyboard*/Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Pardon my English. EDIT: Reply to KyleH: output for command udevadm monitor: monitor will print the received events for:UDEV - the event which udev sends out after rule processingKERNEL - the kernel ueventKERNEL[6416.233763] add /devices/pci0000:00/0000:00:1a.7/usb1/1-2 (usb)KERNEL[6416.234592] add /devices/pci0000:00/0000:00:1a.7/usb1/1-2/1-2:1.0 (usb)UDEV [6416.238130] add /devices/pci0000:00/0000:00:1a.7/usb1/1-2 (usb)UDEV [6416.240221] add /devices/pci0000:00/0000:00:1a.7/usb1/1-2/1-2:1.0 (usb) | Obligatory GNU Datamash solution datamash -st '>' groupby 2 sum 1 < data | datamash -t '>' reverse300>BRIAN205>DANY200>NICOLE | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/382592",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/244062/"
]
} |
382,601 | I often have two KDE sessions running as two different users. When I switch between them (e.g. Ctrl+Alt+F8) music that is playing in one of the accounts is muted on the other account. The same happens when I switch to a virtual terminal (Ctrl+Alt+F1). As far as I know this is on purpose and makes sense for multiple human users but is annoying in my setup. How can I keep music hearable whenever I switch to another KDE session or a virtual terminal? I am using KDE Neon 5 based on Ubuntu 16.04 LTS (Xenial Xerus) with KDE 5.36.0 and Pulseaudio 8.0. | Solution Add all users that should be able play back to the pulse-access group # adduser problemofficer pulse-access Create /etc/systemd/system/pulseaudio.service with the following content: [Service]Type=simplePIDFile=/var/run/pulse/pidExecStart=/usr/bin/pulseaudio --daemonize=yes --system=yes --disallow-module-loading=yes --disallow-exit=yes[Install]WantedBy=multi-user.target Enable this new systemd service so that it is started on boot: # systemctl enable pulseaudio Be aware that this configuration is less secure (e.g. other users can listen to your microphone) sound output will not automatically switch to and from headphones anymore and might prevent Bluetooth from working.Also see Caveat section below. Reboot Cause The reason why the sound turns of is that Pulseaudio is started on each login with this users privileges and the system¹ does not allow other users to listen on other users audio. Solution Background Pulseaudio In order to solve this problem Pulseaudio must be started with root privileges so that it runs as a system wide daemon of which there is only for all users. Everyone will connect to this one instance and will be able to playback and listen to everything other users playback or record. Pulseaudio will not actually run as root the whole time, but will dropthose privileges and assume the user pulse . From man pulseaudio User pulse , group pulse : if PulseAudio is running as a system daemon (see --system above) and is started as root the daemon will drop privileges and become a normal user process using this user and group. If PulseAudio is running as a user daemon this user and group has no meaning. Note that "user daemon" is not the same as "system deamon". The former is how Pulseaudio ran before, the latter is how it will run if the changes are applied. In order to be able to connect to the Pulseaudio system service you need to be a member of the pulse-access group. Again from man pulseaudio Group pulse-access : if PulseAudio is running as a system daemon (see --system above) access is granted to members of this group when they connect via AF_UNIX sockets. If PulseAudio is running as a user daemon this group has no meaning. Systemd Service As a quick work around it would be possible to simply kill all "user daemon" instances of Pulseaudio and then run /usr/bin/pulseaudio --system=yes . This would start Pulseaudio without it becoming a daemon and in a more insecure way but might be useful for a quick proof-of-concept check. To make this persistent and for the Pulseaudio daemon to start automatically on startup it needs to added as a systemd service. This is what the file /etc/systemd/system/pulseaudio.service is for. Pulseaudio will not start a user daemon² when it already finds a system daemon, this is why this solution works. Caveat The official Pulseaudio documentation advices against using Pulseaudio as a system daemon . 
Some of the problems mentioned are: ...one especially problematic thing from security point of view is module loading. Anyone who has access can load and unload modules. Module loading can be disabled, but then bluetooth and alsa hotplug functionality doesn't work... (That means when you plug-in headphones the sound output does not automatically switch from speakers to headphones. The reverse is also true when you remove the headphones. Both have to be done manually.) ...much higher memory usage and CPU load in system mode... (Personally I haven't noticed any change in load though.) ...all users that have access to the server can sniff into each others audio streams, listen to their mikes, and so on... ...you also lose a lot of further functionality, like the bridging to jack... And possibly other things that I do not understand and therefore did not felt were worthwhile including here. Note regarding thecarpy's answer : None of the steps described in his answer were necessary for this solution. Ressources man pulseaudio https://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/WhatIsWrongWithSystemWide/ https://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/SystemWide/ ¹ If someone can explain this in detail, I would be very thankful. ² Assumption [citation required] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/382601",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38418/"
]
} |
382,611 | I am trying to install some software called CRISPResso on my mac using sudo pip install . When I type the command CRISPResso --help I get the following message. You need to install and have the command #####needle##### in your PATH variable to use CRISPResso! I next installed EMBOSS-6.6.0 which contains needle, after doing ./configure followed by make I get the following error Making install in plplotMaking install in libmake[3]: Nothing to be done for `install-exec-am'. ../.././install-sh -c -d '/usr/local/share/EMBOSS' /usr/bin/install -c -m 644 plstnd5.fnt plxtnd5.fnt '/usr/local/share/EMBOSS'/bin/sh ../libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I../ajax/core -I../ajax/core -I/usr/X11/include -I./ -I/usr/include/gd -DPREFIX=\"/usr/local\" -DBUILD_DIR=\".\" -DDRV_DIR=\".\" -DEMBOSS_TOP=\"/Users/hc/Downloads/EMBOSS-6.6.0\" -DAJ_MACOSXLF -O2 -I/usr/X11/include -MT gd.lo -MD -MP -MF .deps/gd.Tpo -c -o gd.lo gd.clibtool: compile: gcc -DHAVE_CONFIG_H -I. -I../ajax/core -I../ajax/core -I/usr/X11/include -I./ -I/usr/include/gd -DPREFIX=\"/usr/local\" -DBUILD_DIR=\".\" -DDRV_DIR=\".\" -DEMBOSS_TOP=\"/Users/hc/Downloads/EMBOSS-6.6.0\" -DAJ_MACOSXLF -O2 -I/usr/X11/include -MT gd.lo -MD -MP -MF .deps/gd.Tpo -c gd.c -fno-common -DPIC -o .libs/gd.ogd.c:127:16: fatal error: gd.h: No such file or directorycompilation terminated.make[2]: *** [gd.lo] Error 1make[1]: *** [install-recursive] Error 1make: *** [install-recursive] Error 1 I have also tried sudo make install and I get the same error. Any suggestions on what is going wrong here? Many thanks! | Solution Add all users that should be able play back to the pulse-access group # adduser problemofficer pulse-access Create /etc/systemd/system/pulseaudio.service with the following content: [Service]Type=simplePIDFile=/var/run/pulse/pidExecStart=/usr/bin/pulseaudio --daemonize=yes --system=yes --disallow-module-loading=yes --disallow-exit=yes[Install]WantedBy=multi-user.target Enable this new systemd service so that it is started on boot: # systemctl enable pulseaudio Be aware that this configuration is less secure (e.g. other users can listen to your microphone) sound output will not automatically switch to and from headphones anymore and might prevent Bluetooth from working.Also see Caveat section below. Reboot Cause The reason why the sound turns of is that Pulseaudio is started on each login with this users privileges and the system¹ does not allow other users to listen on other users audio. Solution Background Pulseaudio In order to solve this problem Pulseaudio must be started with root privileges so that it runs as a system wide daemon of which there is only for all users. Everyone will connect to this one instance and will be able to playback and listen to everything other users playback or record. Pulseaudio will not actually run as root the whole time, but will dropthose privileges and assume the user pulse . From man pulseaudio User pulse , group pulse : if PulseAudio is running as a system daemon (see --system above) and is started as root the daemon will drop privileges and become a normal user process using this user and group. If PulseAudio is running as a user daemon this user and group has no meaning. Note that "user daemon" is not the same as "system deamon". The former is how Pulseaudio ran before, the latter is how it will run if the changes are applied. In order to be able to connect to the Pulseaudio system service you need to be a member of the pulse-access group. 
Again from man pulseaudio Group pulse-access : if PulseAudio is running as a system daemon (see --system above) access is granted to members of this group when they connect via AF_UNIX sockets. If PulseAudio is running as a user daemon this group has no meaning. Systemd Service As a quick work around it would be possible to simply kill all "user daemon" instances of Pulseaudio and then run /usr/bin/pulseaudio --system=yes . This would start Pulseaudio without it becoming a daemon and in a more insecure way but might be useful for a quick proof-of-concept check. To make this persistent and for the Pulseaudio daemon to start automatically on startup it needs to added as a systemd service. This is what the file /etc/systemd/system/pulseaudio.service is for. Pulseaudio will not start a user daemon² when it already finds a system daemon, this is why this solution works. Caveat The official Pulseaudio documentation advices against using Pulseaudio as a system daemon . Some of the problems mentioned are: ...one especially problematic thing from security point of view is module loading. Anyone who has access can load and unload modules. Module loading can be disabled, but then bluetooth and alsa hotplug functionality doesn't work... (That means when you plug-in headphones the sound output does not automatically switch from speakers to headphones. The reverse is also true when you remove the headphones. Both have to be done manually.) ...much higher memory usage and CPU load in system mode... (Personally I haven't noticed any change in load though.) ...all users that have access to the server can sniff into each others audio streams, listen to their mikes, and so on... ...you also lose a lot of further functionality, like the bridging to jack... And possibly other things that I do not understand and therefore did not felt were worthwhile including here. Note regarding thecarpy's answer : None of the steps described in his answer were necessary for this solution. Ressources man pulseaudio https://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/WhatIsWrongWithSystemWide/ https://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/SystemWide/ ¹ If someone can explain this in detail, I would be very thankful. ² Assumption [citation required] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/382611",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/244076/"
]
} |
382,693 | Looking for the largests among the open files by all processes. lsof already has the open files with their sizes. It may be passing the right parameters to lsof and processing the output. | You can use the -F option of lsof to get almost unambiguous output which is machine-parseable with only moderate pain . The output is ambiguous because lsof rewrites newlines in file names to \n . The lsof output consists of one field per line. The first character of each name indicates the field type and the rest of the line is the field value. The fields are: p =PID (only for the first descriptor in a given process), f =descriptor, t =type ( REG for regular files, the only type that has a size), s =size (only if available), n =name. The awk code below collects entries that have a size and prints the size and the file name. The rest of the pipelines sorts the output and retains the entry with the largest size. lsof -Fnst | awk ' { field = substr($0,1,1); sub(/^./,""); } field == "p" { pid = $0; } field == "t" { if ($0 == "REG") size = 0; else next; } field == "s" { size = $0; } field == "n" && size != 0 { print size, $0; }' | sort -k1n -u | tail -n42 | sed 's/^[0-9]* //' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/382693",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/212862/"
]
} |
382,714 | I connect with the following command: sudo wpa_supplicant -B -D nl80211 -i wlan_card -c /etc/wpa_supplicant/connection.conf It connects fine, and keeps persistent connection. If AP goes down, the connection tears, if AP gets back up, the connection comes back. If I power down the wifi interface: sudo ip link set wlan_card down It goes down. When I bring it up with: sudo ip link set wlan_card up The connection, that was launched in the very beginning with wpa_supplicant, reconnects again. Such stable, persistent connection is very good, but then it causes a problem, if I want to connect to a different AP. When I try to use wpa_cli with any command, it just gives me the following error: Failed to connect to non-global ctrl_ifname: (nil) error: No such file or directory When I try to disconnect with: sudo iw dev wlan_card disconnect It disconnects, but reconnects right away, so, currently, I have to reserve to: ps -AlF|grep -i wpasudo kill -KILL wpa_pid I wish to know the correct method to stop the connection, or killing is the only way? | Before connecting a to a different AP you can stop the running instance of the wpa_supplicant service: sudo killall wpa_supplicant Configure your /etc/wpa_supplicant/connection.conf then connect through wpa_supplicant . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/382714",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/244153/"
]
} |
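Combining the question's original command with the answer, switching to a different AP would look roughly like this; the interface name and driver are the asker's, while other_ap.conf is a hypothetical configuration file for the new AP:

```bash
# Stop the running supplicant so it cannot keep reconnecting to the old AP...
sudo killall wpa_supplicant
# ...then start a fresh one pointing at the configuration for the other AP.
sudo wpa_supplicant -B -D nl80211 -i wlan_card -c /etc/wpa_supplicant/other_ap.conf
```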
382,719 | Under the MBR model, we could create four primary partitions one of which could an extended partition that's further subdivided into logical partitions. Consider this GPT schematic taking from Wikipedia: Partition entries range from LBA 1 to LBA 34, presumably we ran out of that space and I understand that's a fair amount of partitions, is it possible to make an extended partition if the disk partitioned with GPT? If possible how many extended partitions per GPT partition table we can make? I'm not sure if this is a standard to have partition entries within the range LBA 1 to LBA 34, maybe we could expand partition entries beyond that? Practically this is a fair amount of partitions, I have no intention to do that. | 128 partitions is the default limit for GPT, and it's probably painful in practice to use half that many... Linux itself originally also had some limitations in its device namespace. For /dev/sdX it assumes no more than 15 partitions (sda is 8,0, sdb is 8,16, etc.). If there are more partitions, they will be represented using 259,X aka Block Extended Major. You could certainly still do more partitions in various ways. Loop devices, LVM, or even GPT inside GPT. Sometimes this happens naturally when handing partitions as block devices to virtual machines, they see the partition as virtual disk drive and partition that. Just don't expect such partitions inside partitions to be picked up automatically. As @fpmurphy1 pointed out in the comments, I was wrong: You can change the limit, using gdisk , expert menu , resize partition table . This can also be done for existing partition tables, provided there is unpartitioned space (a 512-byte sector for 4 additional partition entries) at the start and end of the drive. However I'm not sure how widely supported this is; there doesn't seem to be an option for it in parted or other partitioners I've tried. And the highest limit you can set with gdisk seems to be 65536 but it's bugged: Expert command (? for help): s Current partition table size is 128.Enter new size (4 up, default 128): 65536Value out of range And then... Expert command (? for help): s Current partition table size is 128.Enter new size (4 up, default 128): 65535Adjusting GPT size from 65535 to 65536 to fill the sectorExpert command (? for help): sCurrent partition table size is 65536. Eeeh? Whatever you say. But try to save that partition table and gdisk is stuck in a loop for several minutes. Expert command (? for help): w--- gdisk gets stuck here --- PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 22253 root 20 0 24004 11932 3680 R 100.0 0.1 1:03.47 gdisk --- unstuck several minutes later ---Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTINGPARTITIONS!!Do you want to proceed? (Y/N): Your option? (Y/N): yOK; writing new GUID partition table (GPT) to /dev/loop0.Warning: The kernel is still using the old partition table.The new table will be used at the next reboot or after yourun partprobe(8) or kpartx(8)The operation has completed successfully. 
And here's what parted has to say about the successfully completed operation: # parted /dev/loop0 print freeBacktrace has 8 calls on stack: 8: /usr/lib64/libparted.so.2(ped_assert+0x45) [0x7f7e780181f5] 7: /usr/lib64/libparted.so.2(+0x24d5e) [0x7f7e7802fd5e] 6: /usr/lib64/libparted.so.2(ped_disk_new+0x49) [0x7f7e7801d179] 5: parted() [0x40722e] 4: parted(non_interactive_mode+0x92) [0x40ccd2] 3: parted(main+0x1102) [0x405f52] 2: /lib64/libc.so.6(__libc_start_main+0xf1) [0x7f7e777ec1e1] 1: parted(_start+0x2a) [0x40610a]You found a bug in GNU Parted! Here's what you have to do:Don't panic! The bug has most likely not affected any of your data.Help us to fix this bug by doing the following:Check whether the bug has already been fixed by checkingthe last version of GNU Parted that you can find at: http://ftp.gnu.org/gnu/parted/Please check this version prior to bug reporting.If this has not been fixed yet or if you don't know how to check,please visit the GNU Parted website: http://www.gnu.org/software/partedfor further information.Your report should contain the version of this release (3.2)along with the error message below, the output of parted DEVICE unit co print unit s printand the following history of commands you entered.Also include any additional information about your setup youconsider important.Assertion (gpt_disk_data->entry_count <= 8192) at gpt.c:793 in function_parse_header() failed.Aborted So parted refuses to work with GPT that has more than 8192 partition entries. Nobody ever does that, so it has to be corrupt, right? This is what happens when you don't stick to defaults. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/382719",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233788/"
]
} |
382,740 | Suppose I have a file bar.txt inside directory foo and create a symlink baz.txt to foo/bar.txt . Like: ./foo./foo/bar.txt./baz.txt -> foo/bar.txt If I open baz.txt my editor will think baz.txt is opened in directory . . Is there a way to create a link such that rather bar.txt is (literally) opened? Context (or why I'm trying to do this): I have a directory with a large collection of files which I index and comment inside an .odt file which remains in the same directory. In this .odt file I create hyperlinks to the indexed files in the directory, so that I can easily access the individual files with (much) more context than just the filename. I set LibreOffice to save the hyperlinks as relative paths, so that these links will work in all of my computers, which not always have the same directory tree to my user files. I'd like to create a symlink (or equivalent) to this .odt file, but (in the terms of the above example) if the link opens baz.txt then relative paths (from the point of view of LibreOffice) will be wrong. The formerly created hyperlinks will not work, and if I happen to create an hyperlink in baz.txt (figuratively, of course) it won't work in the original bar.txt . | No. But you can create a libreoffice wrapper that'll take each argument that is a symlink and turn it into $(readlink -f $the_symlink) . You can then set your file manager to open libreoffice files through that wrapper. lowrapper: #!/bin/bash -eargs=()for a; do case $a in -*) args+=("$a");; #skip flags (your file names don't start with -, right?) *) if ! [ -L "$a" ]; then #not a link args+=("$a") else #link => target args+=( "$( readlink -f "$a")" ) fi ;; esacdonelibreoffice "${args[@]}" Now if you chmod +x lowrapper , put it in some directory of your PATH, and then change the handler program of your libreoffice files from libreoffice to lowrapper , then libreoffice will be opening the link targets instead of the links. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/382740",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/244172/"
]
} |
382,742 | Let's say, somehow a malware is present on your filesystem (e.g : BusyWinman Malware ). How would you secure your files against being transfered by such a malware to somewhere else?Please, describe the most restrictive case. | No. But you can create a libreoffice wrapper that'll take each argument that is a symlink and turn it into $(readlink -f $the_symlink) . You can then set your file manager to open libreoffice files through that wrapper. lowrapper: #!/bin/bash -eargs=()for a; do case $a in -*) args+=("$a");; #skip flags (your file names don't start with -, right?) *) if ! [ -L "$a" ]; then #not a link args+=("$a") else #link => target args+=( "$( readlink -f "$a")" ) fi ;; esacdonelibreoffice "${args[@]}" Now if you chmod +x lowrapper , put it in some directory of your PATH, and then change the handler program of your libreoffice files from libreoffice to lowrapper , then libreoffice will be opening the link targets instead of the links. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/382742",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6215/"
]
} |
382,808 | I have the following bash script: for i in {0800..9999}; do for j in {001..032}; do wget http://example.com/"$i-$j".jpg done done All the photos exist and, in fact, each iteration does not depend on any other. How can I parallelize it with the possibility of controlling the number of threads? | Confiq's answer is a good one for small i and j . However, given the size of i and j in your question, you will likely want to limit the overall number of processes spawned. You can do this with the parallel command or some versions of xargs . For example, using an xargs that supports the -P flag you could parallelize your inner loop as follows: for i in {0800..9999}; do echo {001..032} | xargs -n 1 -P 8 -I{} wget http://example.com/"$i-{}".jpg done GNU parallel has a large number of features for when you need more sophisticated behavior and makes it easy to parallelize over both parameters: parallel -a <(seq 0800 9999) -a <(seq 001 032) -P 8 wget http://example.com/{1}-{2}.jpg | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/382808",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/221889/"
]
} |
382,821 | There have been some instances where I installed a utility/program and the name of the command is different from the name of the program. For example, I installed PostgreSQL recently and after installation I ran the command postgresql but it gave an error bash: postgresql command not found So I searched on the internet and found that the command to start postgresql was psql So, how do I find out which utility/program to access with what name? I did apt-cache show postgresql but even there it wasn't mentioned that the program would be accessed with the command psql Please do not suggest the locate command. It doesn't help. | One tactic would be to investigate what files the package installed into the various bin directories. For instance, on a dpkg-based distribution, you might do something like: dpkg -L postgresql-client-9.3 | grep bin or on a system using RPMs you might do something like: dnf repoquery -l PACKAGE_NAME | grep bin and then read the manual pages for the binaries you find. A challenge of this tactic is that in some cases (such as postgresql) the files are spread out over a few packages. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/382821",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/224025/"
]
} |
382,874 | I inherited a legacy development system which is poorly documented and the source code is not known if it still is available. Now I could locate some of the source code and actually build one part of the system. I wonder if I can find the rest of the source code and if there is any better way than locate *.c and manually inspecting the files (that's how I found part of the code). There are 3 machines and only one where I found the source code that seems to be a development machine. It also has 61 .deb archives that seems to be the packaged versions of the projects, but looking into the .deb archives shows that the source is not in the archives or at least not where I looked. Is there a good way to "scan" an entire drive for source code? | This won’t answer your more general question, but in your specific case, since you have packages on the system, it’s worth looking for the corresponding source code: find / -name \*.orig.tar\* -o -name \*.dsc This will look for source archives named in the way the Debian package building tools expect, and source package control files. If you find those, look for .debian.tar* or .diff.gz files alongside them. All these files combined would give you the source code and the build rules, along with all the package metadata. You could also look for unpacked control files: find / -name control These would typically live in the debian subdirectory of a package’s source, which should contain everything you need to rebuild the package from source. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/382874",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9115/"
]
} |
382,909 | Various Debian packages, including logrotate and rsyslog , put their own log rotation definitions in /etc/logrotate.d/ What is the correct way to override those definitions? If I modify the files, I get warnings at every system update and I risk losing the changes if I (or someone else) give the wrong answer; or risk not getting new upstream definitions for new log files if I (or someone else) fail to merge the files by hand. Both things have happened regularly in the past few years. I tried overriding the definition in 00_* or zz_* files, but I get a duplicate error: error: zz_mail:1 duplicate log entry for /var/log/mail.logerror: found error in /var/log/mail.log , skipping Is there any clean solution? Should I write a cron script to re-apply my changes to the definition files every day? Edit: to be more clear, ideally I would like to keep 99% of rsyslog 's log rotation definitions in place, and automatically updated with APT. Except for a single definition, that of /var/log/mail.log , for which I need to apply a different rotation policy. If Logrotate allowed duplicate definitions, and only used the first or the last one for each file, my problem would be solved. If it had an override option, to flag a definition as overriding a previous one on purpose, that would also solve it. But alas, it seems I need to override the entire /etc/logrotate.d/rsyslog (and nginx , and others) with my own versions. | First of all, I recommend using a tool such as etckeeper to keep track of changes to files in /etc ; that avoids data loss during upgrades (among other benefits). The “correct” way to override the definitions is to edit the configuration files directly; that’s why dpkg knows how to handle configuration files and prompts you when upgrades introduce changes. Unfortunately that’s not ideal, as you discovered. To actually address your specific configuration issue, in a Debian-friendly way, I would suggest actually logging your mail messages to a different log file, and setting that up in logrotate : add a new log configuration file in /etc/rsyslog.d , directing mail.* to a new log file, e.g. /var/log/ourmail.log (assuming you’re using rsyslog — change as appropriate); configure /var/log/ourmail.log in a new logrotate configuration file. Since this only involves adding new configuration files, there’s no upgrade issue. The existing log files will still be generated and rotated using the default configuration, but your log files will follow your configuration. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/382909",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34892/"
]
} |
382,943 | I am running Alpine Linux by using the alpine-standard-3.6.2-x86_64.iso image on VMWare Workstation 12.5.5. Following this guide I've been able to configure, among other things, the root password, keyboard layout and network interfaces. The gist of the process described in the guide is: create and mount a floppy device execute setup-alpine execute lbu commit floppy The issue is that the saved configuration does not get loaded when I reboot the machine. Here are some observations: when I mount the /dev/fd0 floppy it contains a localhost.apkovl.tar.gz file running lbu list-backup floppy does not list anything after committing the configuration, the result is empty I have a disk attached to the VM; when running setup-alpine I designated the disk to be used as a data volume | Alpine Linux no longer supports floppies. You will have to create a minimal hard disk image (32MB should do, depending a bit on how much configuration you need), run mkfs.vfat /dev/sda instead of /dev/fd0 , mount it on /media/usb and make sure /dev/sda is in your fstab. Then lbu ci usb or select usb when setup-alpine asks where to store the configs. I have updated the wiki.
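A rough outline of those steps, assuming the spare data disk really is /dev/sda and may be overwritten (the device name and mount options here are assumptions, adjust them to your VM):
mkfs.vfat /dev/sda        # format the data disk as FAT
mkdir -p /media/usb
echo "/dev/sda /media/usb vfat noauto,rw 0 0" >> /etc/fstab    # example fstab entry
mount /media/usb
lbu ci usb        # commit the configuration overlay to the new media | {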
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/382943",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146227/"
]
} |
382,946 | I'm using Amazon Linux. I'm trying to append some text onto a file. The file is owned by root. I thought by using "sudo", I could append the needed text, but I'm getting "permission denied", see below [myuser@mymachine ~]$ ls -al /etc/yum.repos.d/google-chrome.repo -rw-r--r-- 1 root root 186 Jul 31 15:50 /etc/yum.repos.d/google-chrome.repo [myuser@mymachine ~]$ sudo echo -e "[google-chrome]\nname=google-chrome\nbaseurl=http://dl.google.com/linux/chrome/rpm/stable/\$basearch\nenabled=1\ngpgcheck=1\ngpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub" >> /etc/yum.repos.d/google-chrome.repo -bash: /etc/yum.repos.d/google-chrome.repo: Permission denied How can I adjust my statement so that I can append the necessary text onto the file? | You have to use the tee utility to redirect or append streams to a file that needs elevated permissions, like: echo something | sudo tee /etc/file or, to append, echo something | sudo tee -a /etc/file This is because by default your shell runs with your own user's permissions, and the redirection > or >> is performed with those same permissions: you are running echo through sudo but doing the redirection without root permission. As an alternative you can also get a root shell and then use a normal redirect: sudo -i echo something >> /etc/path/to/file exit or sudo -s for a non-login shell. You can also run a non-interactive shell using root access: sudo bash -c 'echo something >> /etc/somewhere/file' | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/382946",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166917/"
]
} |
382,951 | If I have the full contents of a MIME message, what is the best utility on Linux to send the message? The MIME message would include the full headers and mail body, for example: Received: (qmail 32389 invoked by uid 0); 13 Jun 2017 09:24:51 -0400Date: Tue, 13 Jun 2017 09:24:51 -0400From: [email protected]: [email protected]: Test EmailMessage-ID: <593fe7a3.IgSR+/BLy+NYXlVZ%[email protected]>User-Agent: Heirloom mailx 12.5 7/5/10MIME-Version: 1.0Content-Type: text/plain; charset=us-asciiContent-Transfer-Encoding: 7bitThe mail body goes here I want to be able to feed the above to a command line utility which will then re-process the message exactly 'as is' without having to parse fields such as sender, subject, etc. It should send the message through a specified external SMTP server (not the local server's mail queue). What command line utility can I use for this purpose? | You may use sendmail or "sendmail look alike" provided by postfix/exim/... . /usr/sbin/sendmail -i -- $recipients < message_file -i - do not treat lines with leading dot specially You may use more exotic "sendmail look alike" (e.g. provided by msmtp ) to send directly via another smtp host without "system wide" configuration. msmtp is distributed in debian so it is likely to be included in other linux distributions. https://packages.debian.org/stretch/msmtp Package: msmtp (1.6.6-1) light SMTP client with support for server profiles msmtp is an SMTP client that can be used to send mails from Mutt and probably other MUAs (mail user agents). It forwards mails to an SMTP server (for example at a free mail provider), which takes care of the final delivery. Using profiles, it can be easily configured to use different SMTP servers with different configurations, which makes it ideal for mobile clients. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/382951",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66537/"
]
} |
383,013 | I'm using Debian 9.1 with KDE and I'm wondering why it comes without a firewall installed and enabled by default? gufw is not even in DVD1's packages. Are people expected to connect to the Internet before getting a firewall? Why? Even if all ports are closed by default various installed, updated or downloaded programs could open them (or not?) and I wish for not even a single bit leaving my machine without my permission. Edit: So I just found out about iptables but I guess the question still remains as iptables as firewall seems to be rather unknown to most, its default rules, its accessability & ease of usage and the fact that by default any iptable-rules are reset at restart . | First, Debian tends to assume you know what you are doing, and tries to avoid making choices for you. The default install of Debian is fairly small and is secure — it doesn't start any services. And even the standard optional extras (e.g., web server, ssh) that are added to an install are usually quite conservative and secure. So, a firewall is not needed in this case. Debian (or its developers) assume that if you start up additional services, you'll know how to protect them, and can add a firewall if necessary. More importantly, perhaps, Debian avoids making the choice for you regarding what firewall software to use. There are a number of choices available — which one should it use? And even regarding a basic firewall setting, what setting should be chosen? Having said that, iptables is of priority important, so it is installed by default. But of course, Debian doesn't know how you want it configured, so it doesn't configure it for you. And you might prefer to use iptables 's successor, nftables , anyway. Note also, that firewalling functionality is already built into the Linux kernel to a certain extent; e.g. nftables and netfilter . Debian and other Linux distributions do provide user space tools like iptables to manage that functionality. But what you do with them is up to you. Note that these entities are not named consistently. To quote the Wikipedia nftables page : nftables is configured via the user-space utility nft while netfilter is configured via the utilities iptables, ip6tables, arptables and ebtables frameworks. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/383013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233262/"
]
} |
383,056 | I'm trying to sort a huge list (130 Gb) in Linux. Partially like this: @K00188:14:H2LMFBBXX:6:1101:27440:1668 1:N:0:0_I_CGATGT@K00188:14:H2LMFBBXX:6:1101:30262:1668 1:N:0:0_I_CGATGT@K00188:14:H2LMFBBXX:6:1101:6501:1686 1:N:0:0_I_CGATGT@K00188:14:H2LMFBBXX:6:1101:18133:1686 1:N:0:0_I_CGATGT@K00188:14:H2LMFBBXX:6:1101:21827:1686 1:N:0:0_I_CGATGT The problem is the not enough RAM during sort . There are some solutions from previous questions (e.g. sort --parallel , python scripts etc.) but they didn't work out on my hand. I want to stick to sort but the following error message appeared while using the option --parallel sort: unrecognized option '--parallel=2'Try `sort --help' for more information. If there are some python scripts written for sorting big lists, I would like to have a try. Do you have any suggestions? | sort (except for some stripped-down implementations like busybox ) is not limited by memory as it only uses a limited amount and resorts to temporary files when it reaches that limit. More likely, you ran out of disk space. You'll need about 130GB of free disk space in the temporary area sort is using (in addition to the disk space used to store the result unless you write the file over itself with sort -o mybigfile mybigfile ). If there's not enough in the filesystem of /tmp , set $TMPDIR to a location with more free disk space (see also the -T option with some sort implementations): TMPDIR=~/tmp sort my-big-file > sorted To reduce the amount of temporary disk space needed, with some sort implementations, you can also tell it to compress the temporary files. That might even improve performance if you pick a compressor that is easy on CPU like lzop and have a file system on rotational disks. TMPDIR=~/tmp sort --compress-program=lzop my-big-file > sorted | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/383056",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240505/"
]
} |
383,061 | On a Debian system pressing END key generates ^[[F : $ showkey -aPress any keys - Ctrl-D will terminate this program^[[F 27 0033 0x1b 91 0133 0x5b 70 0106 0x46 But why this keychord is not in terminfo ? $ infocmp -1 | grep endkend=\EOF, Nevertheless, ncurses manages to correctly recognize it as KEY_END . How? TERM is xterm-256color BTW, what is the motivation behind having kend and end instead of just end ? (the same for khome and home ) EDIT As said in the comment of Johan Myréen, khome string is the sequence pressing the Home key produces. But on Debian pressing Home key produces home . Why? $ showkey -aPress any keys - Ctrl-D will terminate this program^[[H 27 0033 0x1b 91 0133 0x5b 72 0110 0x48 $ infocmp -1 | grep home home=\E[H, khome=\EOH, | Johan Myréen's answer was close, but not exactly the problem: most of the terminal emulators which you will use have normal and application modes for special keys. Terminal descriptions are written for one mode, which corresponds to what a full-screen application uses. Other applications (such as an interactive shell ) typically do not initialize the screen to use application mode. Bash is an example of that. In normal mode, xterm and similar terminals send escape [ (CSI) while in application mode, their keypads send escape O (SS3). In terminfo syntax, that escape is \E . So infocmp is showing you that the description uses application mode. The home capability is sent to the terminal, telling it how to move the cursor to the home position (upper left), and is not the same as khome (sent from the terminal using the keyboard). Full-screen applications (such as those using ncurses) may send the terminal-capability strings for initializing the keypad. Some terminal descriptions do put the terminal into application mode, some don't. The use of kend versus end is a naming convention: in terminfo by convention any name beginning with k refers to a special key (function key, cursor key, keypad-key) to make it clear that these are strings to be read by an application. For example, kcub1 (cursor-backward key ) is distinct from cub1 (move the cursor back one column). ncurses recognizes the key as KEY_END because the application you are using will call the keypad function to initialize the terminal using the smkx (the mnemonic means "start keyboard-transmit mode"). That may/may not actually turn on application mode. Linux console's terminal description does not, xterm's does. In principle, you could use tput for switching the mode (and get different results from showkey ): $ showkey -aPress any keys - Ctrl-D will terminate this program^[[H 27 0033 0x1b 91 0133 0x5b 72 0110 0x48^C 3 0003 0x03^D 4 0004 0x04$ tput smkx$ showkey -aPress any keys - Ctrl-D will terminate this program^[OH 27 0033 0x1b 79 0117 0x4f 72 0110 0x48 As a complication, curses will recognize only one name for a string. Some terminals (such as xterm) emulate older hardware terminals using different names for the keys on the editing keypad. In the xterm FAQ listed below, there's the possibility of naming that "Home" key "Insert"... Further reading: How do I fix unix so that I can use the arrow keys in a terminal? My home/end keys do not work (ncurses FAQ) Why doesn't my keypad work? (xterm FAQ) Keypad and Function Keys (terminfo manual) User-Defined Capabilities (terminfo manual, commenting on other use of "k") Keypad mode ( getch manual page) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/383061",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143661/"
]
} |
383,150 | I have some data like <td><a href="data1">abc</a> ... <a href="data2">abc</a> ... <a href="data3">abc</a> ( Would refer to above line as data in code below ) I need data1 in between the first " and " so I do echo 'data' | sed 's/.*"\(.*\)".*/\1/' but it always returns me the last string in between " and " , i.e. in this case it would return me data3 instead of data1 In order to get data1 , I end up doing echo 'data' | sed 's/.*"\(.*\)".*".*".*".*".*/\1/' How do I get data1 without this much redundancy in sed ? | The .* in the regex pattern is greedy: it matches as long a string as it can, so the quotes that are matched will be the last ones. Since the separator is only one character here, we can use an inverted bracket group to match anything but a quote, i.e. [^"] , and then repeats of that to match a number of characters that aren't quotes. $ echo '... "foo" ... "bar" ...' | sed 's/[^"]*"\([^"]*\)".*/\1/' foo Another way would be to just remove everything up to the first quote, then remove everything starting from the (new) first quote: $ echo '... "foo" ... "bar" ...' | sed 's/^[^"]*"//; s/".*$//' foo In Perl regexes, the * and + specifiers can be made non-greedy by appending a question mark, so .*? would match anything, but as few characters/bytes as possible.
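As a small illustration of that (assuming perl is available), the same extraction with a non-greedy substitution would be:
$ echo '... "foo" ... "bar" ...' | perl -pe 's/.*?"(.*?)".*/$1/'
foo
Here the first .*? stops at the first double quote and the captured (.*?) stops at the next one, so only the first quoted string is kept. | {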
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/383150",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/224025/"
]
} |
383,184 | Reading a single character, how can I tell the difference between the null <EOF> and \n ? Eg: f() { read -rn 1 -p "Enter a character: " char && printf "\nYou entered '%s'\n" "$char"; } With a printable character: $ fEnter a character: xYou entered 'x' When pressing Enter : $ fEnter a character: You entered '' When pressing Ctrl + D : $ fEnter a character: ^DYou entered ''$ Why is the output the same in the last two cases? How can I distinguish between them? Is there a different way to do this in POSIX shell vs bash ? | With read -n "$n" (not a POSIX feature), and if stdin is a terminal device, read puts the terminal out of the icanon mode, as otherwise read would only see full lines as returned by the terminal line discipline internal line editor and then reads one byte at a time until $n characters or a newline have been read (you may see unexpected results if invalid characters are entered). It reads up to $n character from one line. You'll also need to empty $IFS for it not to strip IFS characters from the input. Since we leave the icanon mode, ^D is no longer special. So if you press Ctrl+D , the ^D character will be read. You wouldn't see eof from the terminal device unless the terminal is somehow disconnected. If stdin is another type of file, you may see eof (like in : | IFS= read -rn 1; echo "$?" where stdin is an empty pipe, or with redirecting stdin from /dev/null ) read will return 0 if $n characters (bytes not forming part of valid characters being counted as 1 character) or a full line have been read. So, in the special case of only one character being requested: if IFS= read -rn 1 var; then if [ "${#var}" -eq 0 ]; then echo an empty line was read else printf %s "${#var} character " (export LC_ALL=C; printf '%s\n' "made of ${#var} byte(s) was read") fielse echo "EOF found"fi Doing it POSIXly is rather complicated. That would be something like (assuming an ASCII-based (as opposed to EBCDIC for instance) system): readk() { REPLY= ret=1 if [ -t 0 ]; then saved_settings=$(stty -g) stty -icanon min 1 time 0 icrnl fi while true; do code=$(dd bs=1 count=1 2> /dev/null | od -An -vto1 | tr -cd 0-7) [ -n "$code" ] || break case $code in 000 | 012) ret=0; break;; # can't store NUL in variable anyway (*) REPLY=$REPLY$(printf "\\$code");; esac if expr " $REPLY" : ' .' > /dev/null; then ret=0 break fi done if [ -t 0 ]; then stty "$saved_settings" fi return "$ret"} Note that we return only when a full character has been read. If the input is in the wrong encoding (different from the locale's encoding), for instance if your terminal sends é encoded in iso8859-1 (0xe9) when we expect UTF-8 (0xc3 0xa9), then you may enter as many é as you like, the function will not return. bash 's read -n1 would return upon the second 0xe9 (and store both in the variable) which is a slightly better behaviour. If you also wanted to read a ^C character upon Ctrl+C (instead of letting it kill your script; also for ^Z , ^\ ...), or ^S / ^Q upon Ctrl+S/Q (instead of flow control), you could add a -isig -ixon to the stty line. Note that bash 's read -n1 doesn't do it either (it even restores isig if it was off). That will not restore the tty settings if the script is killed (like if you press Ctrl+C . You could add a trap , but that would potentially override other trap s in the script. 
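A minimal sketch of that trap idea (it assumes the script is not already using the EXIT, INT and TERM traps for something else, which is exactly the reservation just mentioned):
saved_settings=$(stty -g)
trap 'stty "$saved_settings"' EXIT                  # restore the terminal on normal exit
trap 'stty "$saved_settings"; exit 130' INT TERM    # restore it on Ctrl+C or termination, then leave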
You could also use zsh instead of bash , where read -k (which predates ksh93 or bash 's read -n/-N ) reads one character from the terminal and handles ^D by itself (returns non-zero if that character is entered) and doesn't treat newline specially. if read -k k; then printf '1 character entered: %q\n' $kfi | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/383184",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
383,197 | Can I use read to capture the \n \012 or newline character? Define test function: f() { read -rd '' -n1 -p "Enter a character: " char && printf "\nYou entered: %q\n" "$char"; } Run the function, press Enter : $ f;Enter a character: You entered: '' Hmmm. It's a null string. How do I get my expected output: $ f;Enter a character:You entered: $'\012'$ I want the same method to be able to capture ^D or \004 . If read can't do it, what is the work around? | With read -n "$n" (not a POSIX feature), and if stdin is a terminal device, read puts the terminal out of the icanon mode, as otherwise read would only see full lines as returned by the terminal line discipline internal line editor and then reads one byte at a time until $n characters or a newline have been read (you may see unexpected results if invalid characters are entered). It reads up to $n character from one line. You'll also need to empty $IFS for it not to strip IFS characters from the input. Since we leave the icanon mode, ^D is no longer special. So if you press Ctrl+D , the ^D character will be read. You wouldn't see eof from the terminal device unless the terminal is somehow disconnected. If stdin is another type of file, you may see eof (like in : | IFS= read -rn 1; echo "$?" where stdin is an empty pipe, or with redirecting stdin from /dev/null ) read will return 0 if $n characters (bytes not forming part of valid characters being counted as 1 character) or a full line have been read. So, in the special case of only one character being requested: if IFS= read -rn 1 var; then if [ "${#var}" -eq 0 ]; then echo an empty line was read else printf %s "${#var} character " (export LC_ALL=C; printf '%s\n' "made of ${#var} byte(s) was read") fielse echo "EOF found"fi Doing it POSIXly is rather complicated. That would be something like (assuming an ASCII-based (as opposed to EBCDIC for instance) system): readk() { REPLY= ret=1 if [ -t 0 ]; then saved_settings=$(stty -g) stty -icanon min 1 time 0 icrnl fi while true; do code=$(dd bs=1 count=1 2> /dev/null | od -An -vto1 | tr -cd 0-7) [ -n "$code" ] || break case $code in 000 | 012) ret=0; break;; # can't store NUL in variable anyway (*) REPLY=$REPLY$(printf "\\$code");; esac if expr " $REPLY" : ' .' > /dev/null; then ret=0 break fi done if [ -t 0 ]; then stty "$saved_settings" fi return "$ret"} Note that we return only when a full character has been read. If the input is in the wrong encoding (different from the locale's encoding), for instance if your terminal sends é encoded in iso8859-1 (0xe9) when we expect UTF-8 (0xc3 0xa9), then you may enter as many é as you like, the function will not return. bash 's read -n1 would return upon the second 0xe9 (and store both in the variable) which is a slightly better behaviour. If you also wanted to read a ^C character upon Ctrl+C (instead of letting it kill your script; also for ^Z , ^\ ...), or ^S / ^Q upon Ctrl+S/Q (instead of flow control), you could add a -isig -ixon to the stty line. Note that bash 's read -n1 doesn't do it either (it even restores isig if it was off). That will not restore the tty settings if the script is killed (like if you press Ctrl+C . You could add a trap , but that would potentially override other trap s in the script. You could also use zsh instead of bash , where read -k (which predates ksh93 or bash 's read -n/-N ) reads one character from the terminal and handles ^D by itself (returns non-zero if that character is entered) and doesn't treat newline specially. 
if read -k k; then printf '1 character entered: %q\n' $kfi | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/383197",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
383,217 | I want to be able to capture the exact output of a command substitution, including the trailing new line characters . I realise that they are stripped by default, so some manipulation may be required to keep them, and I want to keep the original exit code . For example, given a command with a variable number of trailing newlines and exit code: f(){ for i in $(seq "$((RANDOM % 3))"); do echo; done; return $((RANDOM % 256));}export -f f I want to run something like: exact_output f And have the output be: Output: $'\n\n'Exit: 5 I'm interested in both bash and POSIX sh . | POSIX shells The usual ( 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ) trick to get the complete stdout of a command is to do: output=$(cmd; ret=$?; echo .; exit "$ret")ret=$?output=${output%.} The idea is to add an extra .\n . Command substitution will only strip that \n . And you strip the . with ${output%.} . Note that in shells other than zsh , that will still not work if the output has NUL bytes. With yash , that won't work if the output is not text. Also note that in some locales, it matters what character you use to insert at the end. . should generally be fine (see below), but some other might not. For instance x (as used in some other answers) or @ would not work in a locale using the BIG5, GB18030 or BIG5HKSCS charsets. In those charsets, the encoding of a number of characters ends in the same byte as the encoding of x or @ (0x78, 0x40) For instance, ū in BIG5HKSCS is 0x88 0x78 (and x is 0x78 like in ASCII, all charsets on a system must have the same encoding for all the characters of the portable character set which includes English letters, @ and . ). So if cmd was printf '\x88' (which by itself is not a valid character in that encoding, but just a byte-sequence) and we inserted x after it, ${output%x} would fail to strip that x as $output would actually contain ū (the two bytes making up a byte sequence that is a valid character in that encoding). Using . or / should be generally fine , as POSIX requires: “The encoded values associated with <period> , <slash> , <newline> , and <carriage-return> shall be invariant across all locales supported by the implementation.”, which means that these will have the same binary represenation in any locale/encoding. “Likewise, the byte values used to encode <period> , <slash> , <newline> , and <carriage-return> shall not occur as part of any other character in any locale.”, which means that the above cannot happen, as no partial byte sequence could be completed by these bytes/characters to a valid character in any locale/encoding.(see 6.1 Portable Character Set ) The above does not apply to other characters of the Portable Character Set. Another approach, as discussed by @Isaac , would be to change the locale to C (which would also guarantee that any single byte can be correctly stripped), only for the stripping of the last character ( ${output%.} ).It would be typically necessary to use LC_ALL for that (in principle LC_CTYPE would be enough, but that could be accidentally overridden by any already set LC_ALL ). Also it would be necessary to restore the original value (or e.g. the non-POSIX compliant locale be used in a function). But beware, that some shells don't support changing the locale while running (though this is required by POSIX). By using . or / , all that can be avoided. bash/zsh alternatives With bash and zsh , assuming the output has no NULs, you can also do: IFS= read -rd '' output < <(cmd) To get the exit status of cmd , you can do wait "$!"; ret=$? 
in bash but not in zsh . rc/es/akanaga For completeness, note that rc / es / akanga have an operator for that. In them, command substitution, expressed as `cmd (or `{cmd} for more complex commands) returns a list (by splitting on $ifs , space-tab-newline by default). In those shells (as opposed to Bourne-like shells), the stripping of newline is only done as part of that $ifs splitting. So you can either empty $ifs or use the ``(seps){cmd} form where you specify the separators: ifs = ''; output = `cmd or: output = ``()cmd In any case, the exit status of the command is lost. You'd need to embed it in the output and extract it afterwards which would become ugly. fish In fish, command substitution is with (cmd) and doesn't involve a subshell. set var (cmd) Creates a $var array with all the lines in the output of cmd if $IFS is non-empty, or with the output of cmd stripped of up to one (as opposed to all in most other shells) newline character if $IFS is empty. So there's still an issue in that (printf 'a\nb') and (printf 'a\nb\n') expand to the same thing even with an empty $IFS . To work around that, the best I could come up with was: function exact_output set -l IFS . # non-empty IFS set -l ret set -l lines ( cmd set ret $status echo ) set -g output '' set -l line test (count $lines) -le 1; or for line in $lines[1..-2] set output $output$line\n end set output $output$lines[-1] return $retend Since version 3.4.0 (released in March 2022), you can do instead: set output (cmd | string collect --allow-empty --no-trim-newlines) With older versions, you could do: read -z output < (begin; cmd; set ret $status; end | psub) With the caveat that $output is an empty list instead of a list with one empty element if there's no output. Version 3.4.0 also added support for $(...) which behaves like (...) except that it can also be used inside double quotes in which case it behaves like in the POSIX shell: the output is not split on lines but all trailing newline characters are removed. Bourne shell The Bourne shell did not support the $(...) form nor the ${var%pattern} operator, so it can be quite hard to achieve there. One approach is to use eval and quoting: eval " output='` exec 4>&1 ret=\` exec 3>&1 >&4 4>&- (cmd 3>&-; echo \"\$?\" >&3; printf \"'\") | awk 3>&- -v RS=\\\\' -v ORS= -v b='\\\\\\\\' ' NR > 1 {print RS b RS RS}; {print}; END {print RS}' \` echo \";ret=\$ret\" `" Here, we're generating a output='output of cmdwith the single quotes escaped as '\''';ret=X to be passed to eval . As for the POSIX approach, if ' was one of those characters whose encoding can be found at the end of other characters, we'd have a problem (a much worse one as it would become a command injection vulnerability), but thankfully, like . , it's not one of those, and that quoting technique is generally the one that is used by anything that quotes shell code (note that \ has the issue, so shouldn't be used (also excludes "..." inside which you need to use backslashes for some characters). Here, we're only using it after a ' which is OK). tcsh See tcsh preserve newlines in command substitution `...` (not taking care of the exit status, which you could address by saving it in a temporary file ( echo $status > $tempfile:q after the command)) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/383217",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
383,241 | Say I have a file like this: foo bar foo bar foo bar foo bar something useful"foo bar foo bar" Basically, I want to know how I can get the string something useful by itself, either saved to its own file or displayed as output alone. There will always be the same number of characters (33) before something useful and there will always be " directly after it. | Try this: cut -c 34- | cut -d '"' -f1 First cut removes the first 33 characters; second cut keeps only the part before the first " . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/383241",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194482/"
]
} |
383,285 | I'm trying to use the find command to generate a list of source files within a directory and only some of its subdirectories. Example: /Source_Files /dontexclude dontexclude.h dontexclude.c /exclude exclude.c exclude.h main.c test.c test.h I want a list of files ending in '.c' and '.h', and I want to exclude the contents of the /exclude subdirectory. find -name '*.c' -o -name '*.h' -o -path '*/exclude' -prune produces this output: ./test.h./exclude./test.c./main.c./dontexclude/dontexclude.c./dontexclude/dontexclude.h How can I use find to produce the above list without "./exclude"? | find . -name excludeme -prune -o \ \( -name '*.c' -o -name '*.h' \) -print Remember that AND (implicit) has precedence over OR ( -o ). (see also -name '*.[ch]' ) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/383285",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112758/"
]
} |
383,365 | When I try to login as root, this warning comes up. luvpreet@DHARI-Inspiron-3542:/$ sudo suPassword: zsh compinit: insecure directories, run compaudit for list.Ignore insecure directories and continue [y] or abort compinit [n]? If I say yes, it simply logs in, and my shell changes from bash to zsh. If I say no, it says that ncompinit: initialization aborted and logs in. After login, my shell changes to zsh. All I ever did related to zsh, was download oh-my-zsh from github. What is happening and why ? Using - Ubuntu 16.04 on Dell. | You can list those insecure folders by: compaudit The root cause of "insecure" is these folders are group writable. There's a one line solution to fix that: compaudit | xargs chmod g-w Please see zsh, Cygwin and Insecure Directories and zsh compinit: insecure directories for reference. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/383365",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/217864/"
]
} |
383,378 | My current script is giving output with many decimal places. Is there any way to get only 1 place in the output? # cat sample HDD Used: 15.0223T HDD Total: 55.9520T # cat sample | awk ' /HDD Total/ { hdd_total=$NF } /HDD Used/ { hdd_used=$NF } END { used=hdd_total-hdd_used print "cal =" used}' Current Output cal =40.9297 Required output cal =40.9 ---> With only one decimal Getting an error for this # isi storagepool list -v| grep -i 'HDD Total:' | awk '{print "HDD Total=%.1f", $NF -1 " TB" }' HDD Total=%.1f 54.952 TB # cat isistorage1 7.332T n/a (R) 13.01% (T) # cat isistorage1 | awk '{ print "Snapshot USED=", $1}' Snapshot USED= 7.332T | You can use printf "cal =%.1f\n", used instead, since printf gives you control over the output format. The .1f there means: print only 1 digit after the decimal point; you can change it to any number of decimal places you want.
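Applied to the script in the question (a sketch reusing the same file and variable names), the whole command becomes:
awk '/HDD Total/ { hdd_total=$NF } /HDD Used/ { hdd_used=$NF } END { used=hdd_total-hdd_used; printf "cal =%.1f\n", used }' sample
which prints cal =40.9 for the sample values shown above. | {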
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/383378",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/216946/"
]
} |
383,433 | I am trying to pass an environment variable defined in the current shell to one of the systemd unit I am writing. DB_URL=databus.dev.mysite.io:8080 I am using this in a python script which is running as a service. My systemd unit will run this script as a unit making use of the variable for its working. [Unit]Description=device-CompStatus: Computes device availability status [Service]Type=simple ExecStart=/usr/bin/bash -c "/usr/bin/python /opt/deviceCompStatus/deviceCompStatusHandler.py"Restart=always [Install]WantedBy=default.target The way am using the variable in Python script would be if os.environ.get('DB_URL') is not None: dbEndPoint = "http://" + os.environ['DB_URL'] The problem is am not able to use the variable when running the script in systemd . I looked up couple of resources Using environment variables in systemd units , it says to use assignment under [Service] directly as [Service]Environment=DB_URL=databus.dev.mysite.io:8080 As you can see, my DB_URL could change depending upon the environment I am deploying my machine, it could be a developer, or a production setup, in which the URLs would vary. How do I do this dynamically? i.e. pass whatever value available to DB_URL to systemd environment? I also tried using the EnvironmentFile= option to define a file and pass it to service. But the same problem again, my variable could be dynamic and cannot be hardcoded. Update After using the option systemctl import-environment DB_URL I am able to see the variable available in the environment of systemd which I confirmed by systemctl show-environmentDB_URL=databus.dev.mysite.io:8080LANG=en_US.UTF-8PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin but still the value is not reflected in the python application which I run. Is os.environ('DB_URL') a wrong way to access the variable? | You can affect the global systemd environment for all future commands (until reboot) by using sudo systemctl set-environment var=value or if you already have var exported in your environment, you can use sudo systemctl import-environment var After starting your unit you can remove the variable with unset-environment similarly. As this is global in effect you would be better off simply writing the line DB_URL=databus.dev.mysite.io:8080 into some file /some/file and setting EnvironmentFile=/some/file in your unit. An alternative method is to use a template unit [email protected] which is started with systemctl start myunit@'databus.dev.mysite.io:8080' . You can then recover this parameter as %i inside the unit, for example in the [Service] section with a line like: Environment=DB_URL=%i | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/383433",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112235/"
]
} |
383,491 | How do I get a remote host IP address if I don't have ping, and don't have any bind utilities like dig, nslookup, etc? I need an answer that does not include 'install X' or 'use sidecar container'. I am looking for something that relies on nothing more than bash and the basic shell commands. | Use getent : $ getent hosts unix.stackexchange.com151.101.193.69 unix.stackexchange.com unix.stackexchange.com | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/383491",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/222809/"
]
} |
383,497 | So, I have some jobs like this: sleep 30 | sleep 30 & The natural way to think would be: kill `jobs -p` But that kills only the first sleep but not the second. Doing this kills both processes: kill %1 But that only kills at most one job if there are a lot of such jobs running. It shouldn't kill processes with the same name but not run in this shell. | Use this: pids=( $(jobs -p) )[ -n "$pids" ] && kill -- "${pids[@]/#/-}" jobs -p prints the PID of process group leaders. By providing a negative PID to kill , we kill all the processes belonging to that process group ( man 2 kill ). "${pids[@]/#/-}" just negates each PID stored in array pids . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/383497",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95685/"
]
} |
383,501 | Apparently, if the same shell launches multiple ssh connections to the same server, they won't return after executing the command they're given but will hang ( Stopped (tty input) ) for ever. To illustrate: #!/bin/bashssh localhost sleep 2echo "$$ DONE!" If I run the script above more than once in the background, it never exits: $ for i in {1..3}; do foo.sh & done[1] 28695[2] 28696[3] 28697$ ## Hit enter[1] Stopped foo.sh[2]- Stopped foo.sh[3]+ Stopped foo.sh$ ## Hit enter again $ jobs -l[1] 28695 Stopped (tty input) foo.sh[2]- 28696 Stopped (tty input) foo.sh[3]+ 28697 Stopped (tty input) foo.sh Details I found this because I was ssh'ing in a Perl script to run a command. The same behavior occurs when using Perl's system() call to launch ssh . The same issue occurs when using Perl modules instead of system() . I tried Net::SSH::Perl , Net:SSH2 and Net::OpenSSH . If I run the multiple ssh commands from different shells (open multiple terminals) they work as expected. Nothing obviously useful in the ssh connection debugging info: OpenSSH_7.5p1, OpenSSL 1.1.0f 25 May 2017debug1: Reading configuration data /home/terdon/.ssh/configdebug1: Reading configuration data /etc/ssh/ssh_configdebug2: resolving "localhost" port 22debug2: ssh_connect_direct: needpriv 0debug1: Connecting to localhost [::1] port 22.debug1: Connection established.debug1: identity file /home/terdon/.ssh/id_rsa type 1debug1: key_load_public: No such file or directorydebug1: identity file /home/terdon/.ssh/id_rsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/terdon/.ssh/id_dsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/terdon/.ssh/id_dsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/terdon/.ssh/id_ecdsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/terdon/.ssh/id_ecdsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/terdon/.ssh/id_ed25519 type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/terdon/.ssh/id_ed25519-cert type -1debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_7.5debug1: Remote protocol version 2.0, remote software version OpenSSH_7.5debug1: match: OpenSSH_7.5 pat OpenSSH* compat 0x04000000debug2: fd 3 setting O_NONBLOCKdebug1: Authenticating to localhost:22 as 'terdon'debug3: hostkeys_foreach: reading file "/home/terdon/.ssh/known_hosts"debug3: record_hostkey: found key type ECDSA in file /home/terdon/.ssh/known_hosts:47debug3: load_hostkeys: loaded 1 keys from localhostdebug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521debug3: send packet: type 20debug1: SSH2_MSG_KEXINIT sentdebug3: receive packet: type 20debug1: SSH2_MSG_KEXINIT receiveddebug2: local client KEXINIT proposaldebug2: KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,ext-info-cdebug2: host key algorithms: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email 
protected],ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsadebug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbcdebug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbcdebug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1debug2: compression ctos: none,[email protected],zlibdebug2: compression stoc: none,[email protected],zlibdebug2: languages ctos: debug2: languages stoc: debug2: first_kex_follows 0 debug2: reserved 0 debug2: peer server KEXINIT proposaldebug2: KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1debug2: host key algorithms: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519debug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected]: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected]: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1debug2: compression ctos: none,[email protected]: compression stoc: none,[email protected]: languages ctos: debug2: languages stoc: debug2: first_kex_follows 0 debug2: reserved 0 debug1: kex: algorithm: curve25519-sha256debug1: kex: host key algorithm: ecdsa-sha2-nistp256debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: nonedebug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: nonedebug3: send packet: type 30debug1: expecting SSH2_MSG_KEX_ECDH_REPLYdebug3: receive packet: type 31debug1: Server host key: ecdsa-sha2-nistp256 SHA256:uxhkh+gGPiCJQPaP024WXHth382h3BTs7QdGMokB9VMdebug3: hostkeys_foreach: reading file "/home/terdon/.ssh/known_hosts"debug3: record_hostkey: found key type ECDSA in file /home/terdon/.ssh/known_hosts:47debug3: load_hostkeys: loaded 1 keys from localhostdebug1: Host 'localhost' is known and matches the ECDSA host key.debug1: Found key in /home/terdon/.ssh/known_hosts:47debug3: send packet: type 21debug2: set_newkeys: mode 1debug1: rekey after 134217728 blocksdebug1: SSH2_MSG_NEWKEYS sentdebug1: expecting SSH2_MSG_NEWKEYSdebug3: receive packet: type 21debug1: SSH2_MSG_NEWKEYS receiveddebug2: set_newkeys: mode 0debug1: rekey after 134217728 blocksdebug2: key: /home/terdon/.ssh/id_rsa (0x555a5e4b5060)debug2: key: /home/terdon/.ssh/id_dsa ((nil))debug2: key: /home/terdon/.ssh/id_ecdsa ((nil))debug2: key: /home/terdon/.ssh/id_ed25519 ((nil))debug3: send packet: type 5debug3: receive packet: type 7debug1: SSH2_MSG_EXT_INFO receiveddebug1: kex_input_ext_info: 
server-sig-algs=<ssh-ed25519,ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521>debug3: receive packet: type 6debug2: service_accept: ssh-userauthdebug1: SSH2_MSG_SERVICE_ACCEPT receiveddebug3: send packet: type 50debug3: receive packet: type 51debug1: Authentications that can continue: publickey,passworddebug3: start over, passed a different list publickey,passworddebug3: preferred publickey,keyboard-interactive,passworddebug3: authmethod_lookup publickeydebug3: remaining preferred: keyboard-interactive,passworddebug3: authmethod_is_enabled publickeydebug1: Next authentication method: publickeydebug1: Offering RSA public key: /home/terdon/.ssh/id_rsadebug3: send_pubkey_testdebug3: send packet: type 50debug2: we sent a publickey packet, wait for replydebug3: receive packet: type 60debug1: Server accepts key: pkalg rsa-sha2-512 blen 279debug2: input_userauth_pk_ok: fp SHA256:OGvtyUIFJw426w/FK/RvIhsykeP8kIEAtAeZwYBIzokdebug3: sign_and_send_pubkey: RSA SHA256:OGvtyUIFJw426w/FK/RvIhsykeP8kIEAtAeZwYBIzokdebug3: send packet: type 50debug3: receive packet: type 52debug1: Authentication succeeded (publickey).Authenticated to localhost ([::1]:22).debug2: fd 6 setting O_NONBLOCKdebug1: channel 0: new [client-session]debug3: ssh_session2_open: channel_new: 0debug2: channel 0: send opendebug3: send packet: type 90debug1: Requesting [email protected]: send packet: type 80debug1: Entering interactive session.debug1: pledge: networkdebug3: receive packet: type 80debug1: client_input_global_request: rtype [email protected] want_reply 0debug3: receive packet: type 91debug2: callback startdebug2: fd 3 setting TCP_NODELAYdebug3: ssh_packet_set_tos: set IPV6_TCLASS 0x08debug2: client_session2_setup: id 0debug1: Sending command: sleep 2debug2: channel 0: request exec confirm 1debug3: send packet: type 98debug2: callback donedebug2: channel 0: open confirm rwindow 0 rmax 32768debug2: channel 0: rcvd adjust 2097152debug3: receive packet: type 99debug2: channel_input_status_confirm: type 99 id 0debug2: exec request accepted on channel 0 This doesn't depend on my ~/.ssh/config setup. Renaming the file doesn't change anything. This happens on multiple machines. I've tried 4 or 5 different machines running updated Ubuntu and Arch distros. The command ( sleep in the dummy example but something a good deal more complex in real life) exits successfully and does what it's supposed to do. This doesn't depend on the command you're running, it's an ssh issue. This is the worst of them: it isn't consistent . Every now and then, one of the instances will exit and return control to the parent script. But not always, and there is no pattern I've been able to discern. Renaming ~/.bashrc makes no difference. Also, I've run this on machines running Ubuntu (default login shell dash ) and Arch (default login shell bash , called as sh ). Interestingly, the issue only occurs if I hit any key (for example Enter , but any seems to work) after launching the loop but before the first script exits. If I leave the terminal alone, they finish as expected. What's going on? Is this a bug in ssh? Is there an option I need to set? How can I launch multiple instances of a script that runs a command over ssh from the same shell? | Foreground processes and terminal access control To understand what is going on, you need to know a little about sharing terminals. What happens when two programs try to read from the same terminal at the same time? Each input byte goes randomly to one of the programs. 
(Not random as in the kernel uses an RNG to decide, just random as in unpredictable in practice.) The same thing happens when two programs read from a pipe, or any other file type which is a stream of bytes being moved from one place to another (socket, character device, …), rather than a byte array where any byte can be read multiple times (regular file, block device). For example, run a shell in a terminal, figure out the name of the terminal and run cat . $ tty/dev/pts/18$ cat Then from another terminal, run cat /dev/pts/18 . Now type in the terminal, and watch as lines sometimes go to one of the cat processes and sometimes to the other. Lines are dispatched as a whole when the terminal is in cooked mode. If you put the terminal in raw mode then each byte would be dispatched independently. That's messy. Surely there should be a mechanism to decide that one program gets the terminal, and the others don't. Well, there is! It triggers in typical cases, but not in the scenario I set up above. That scenario is unusual because cat /dev/pts/18 wasn't started from /dev/pts/18 . It's unusual to access a terminal from a program that wasn't started inside this terminal. In the usual case, you run a shell in a terminal, and you run programs from that shell. Then the rule is that the program in the foreground gets the terminal, and programs in the background don't. This is known as terminal access control . The way it works is: Each process has a controlling terminal (or doesn't have one, typically because it doesn't have any open file descriptor that's a terminal). When a process tries to access its controlling terminal, if the process is not in the foreground, then the kernel blocks it. (Conditions apply. Access to other terminals is not regulated.) The shell decides who is the foreground process. (Foreground process group, actually.) It calls the tcsetpgrp to let the kernel know who should be in the foreground. This works in typical cases. Run a program in a shell, and that program gets to be the foreground process. Run a program in the background (with & ), and the program doesn't get to be in the foreground. When the shell is displaying a prompt, the shell puts itself in the foreground. When you resume a suspended job with fg , the job gets to be in the foreground. With bg , it doesn't. If a background process tries to read from the terminal, the kernel sends it a SIGTTIN signal. The default action of the signal is to suspend the process (like SIGSTOP). The parent of the process can know about this by calling waitpid with the WSTOPPED flag; when a child process receives a signal that suspends it, the waitpid call in the parent returns and lets the parent know what the signal was. This is how the shell knows to print “Stopped (tty input)”. What it's telling you is that this job is suspended due to a SIGTTIN. Since the process is suspended, nothing will happen to it until it's resumed or killed (with a signal that the process doesn't catch, because if the process has set a signal handler, it won't run since the process is suspended). You can resume the process by sending it a SIGCONT, but that won't achieve anything if the process is reading from the terminal, it'll receive another SIGTTIN immediately. If you resume the process with fg , it goes to the foreground and so the read succeeds. Now you understand what happens when you run cat in the background: $ cat &$ [1] + Stopped (tty input) cat$ The case of SSH Now let's do the same thing with SSH. 
$ ssh localhost sleep 999999 &$ $ $ [1] + Stopped (tty input) ssh localhost sleep 999999$ Pressing Enter sometimes goes to the shell (which is in the foreground), and sometimes to the SSH process (at which point it gets stopped by SIGTTIN). Why? If ssh was reading from the terminal, it should receive SIGTTIN immediately, and if it wasn't then why does it receive SIGTTIN? What's happening is that the SSH process calls the select system call to know when input is available on any of the files it's interested in (or if an output file is ready to receive more data). The input sources include at least the terminal and the network socket. Unlike read , select is not forbidden to background processes, and ssh doesn't receive a SIGTTIN when it calls select . The intent of select is to find out whether data is available, without disrupting anything. Ideally select would not change the system state at all, but in fact this isn't completely true. When select tells the SSH process that input is available on the terminal file descriptor, the kernel has to commit to sending input if the process calls read afterwards. (If it didn't, and the process called read , then there might be no input available at this point, so the return value from select would have been a lie.) So if the kernel decides to route some input to the SSH process, it decides by the time the select system call returns. Then SSH calls read , and at that point the kernel sees that a background process tried to read from the terminal and suspends it with SIGTTIN. Note that you don't need to launch multiple connections to the same server. One is enough. Multiple connections merely increases the probability that the problem arises. The solution: don't read from the terminal If you need the SSH session to read from the terminal, run it in the foreground. If you don't need the SSH session to read from the terminal, make sure that its input is not coming from the terminal. There are two ways to do this: You can redirect the input: ssh … </dev/null You can instruct SSH not to forward a terminal connection with -n or -f . ( -n is equivalent to </dev/null ; -f allows SSH itself to read from the terminal, e.g. to read a password, but the command itself won't have the terminal open.) ssh -n … Note that the disconnection between the terminal and SSH has to happen on the client. The sleep process running on the server will never read from the terminal, but SSH has no way to know that. If the client receives input on standard input, it must forward it to the server, which will make the data available in a buffer in case the application ever decides to read it (and if the application calls select , it'll be informed that data is available). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/383501",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22222/"
]
} |
383,505 | We've got a Squid web cache set up that is used for caching package downloads, so that all the machines here don't have to independently redownload everything. The installer prompts for a mirror and proxy. As long as every machine uses the exact same mirror (here, http.us.debian.org ) and the proxy then it works. That's somewhat annoying, as it involves going to 'enter information manually' and typing it in each time (as the installer would do ftp.us.debian.org , which Squid doesn't realize is identical). The installer defaults to just mirror , is there a way to make that work? So I can be lazy and just hit enter a bunch? | For having a local deb cache to serve my Debian server farm, I actually prefer to use apt-cacher-ng (a caching proxy server for software repositories). It is a proxy that is specifically APT/deb aware, quite customisable, and can cache your deb files for quite a while (configurable). You install it with: apt-get install apt-cacher-ng And by default it caches repositories/debs into /home/apt-cacher-ng . Under this directory, it creates a directory per repository used by your Debian/Ubuntu servers, then per distro used, much like a mirror structure. As an added bonus, it is also much easier to manually fetch a deb from this cache than from a Squid server. To use it on all your servers, add to the directory /etc/apt/apt.conf.d a file 02proxy with the contents: Acquire::http { Proxy "http://your_proxy_APT_server:3142"; }; After you add that file, the Debian package manager will proxy all the configured repositories via the configured HTTP APT proxy. It also has a useful statistics page for checking its activity. You might also need to open 3142/TCP in your firewalls to allow the servers to talk to your new proxy APT server. The advantages of such a setup are that only one copy of each deb is downloaded for a bucketload of servers, saving bandwidth and public repository usage, and that it allows you to update internal servers that do not need to have Internet access (example: DHCP servers). As documented in Appendix B of the Official Install Guide , you can have your DHCP server give out a preseed file, by adding something like this to its config: if substring (option vendor-class-identifier, 0, 3) = "d-i" { filename "http://host/preseed.cfg";} Then using these preseed options, you can configure the mirror and proxy automatically: d-i mirror/protocol string httpd-i mirror/country string manuald-i mirror/http/hostname string http.us.debian.orgd-i mirror/http/directory string /debiand-i mirror/http/proxy string http://your_proxy_APT_server:3128/ See also: How to set up Apt caching server on Ubuntu or Debian | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/383505",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/977/"
]
} |
383,521 | How do I get all interface and associate IP address like following [root@centso ]# ifconfigenp3s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet addr:10.5.2.10 Bcast:10.5.7.255 Mask:255.255.248.0 inet6 fe80::e611:5bff:feea:5e50 prefixlen 64 scopeid 0x20<link> ether e4:11:5b:ea:5e:50 txqueuelen 1000 (Ethernet) RX packets 638000416 bytes 763371981799 (710.9 GiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 16607280 bytes 9787019600 (9.1 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0enp3s0f1: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 ether e4:11:5b:ea:5e:52 txqueuelen 1000 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0enp4s0f0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 ether e4:11:5b:ea:5e:44 txqueuelen 1000 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0enp4s0f1: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 ether e4:11:5b:ea:5e:46 txqueuelen 1000 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1 (Local Loopback) RX packets 45015 bytes 4371658 (4.1 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 45015 bytes 4371658 (4.1 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 I want output like following, also I believe there is a command ip link or something does that but didn't recall, also some machine has nic name different like enoX or ethX enp3s0f0: 10.5.2.10enp3s0f1:enp4s0f0:enp4s0f1: | I think i found something, i am sure there must be more smart way but for now i am all set. [root@server1 ~]# ip -o -4 addr show | awk '{print $1" " $2": "$4}'1: lo: 127.0.0.1/82: eno1: 192.168.100.190/243: eno2: 10.5.8.33/21 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/383521",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29656/"
]
} |
383,537 | I am working on Linux Ubuntu, and I want a bash script whose output is to convert the timezone 7 hours in advance from my server time. My server time: Mon Jul 23 23:00:00 2017 What I want to achieve: Mon Jul 24 06:00:00 2017 I have tried this one in my bash script: #!/bin/bashlet var=$(date +%H)*3600+$(date +%M)*60+$(date +%S)seven=25200time=$(($var+$seven))date=$(date --date='TZ="UTC+7"' "+%Y-%m-%d")hours=$(date -d@$time -u +%H:%M:%S)echo "$date" "$hours" the output was: 2017-07-23 06:00:00 The hours works, but the date still matches the server date. Is there another way to solve this? | You can change the time zone for the entire script by changing the TZ environment variable early in the script. It can be overridden on individual commands. For example this script #!/bin/bashexport TZ=Australia/SydneydateTZ=US/Pacific datedate Will output Sun 30 Jul 21:56:25 AEST 2017Sun 30 Jul 04:56:25 PDT 2017Sun 30 Jul 21:56:25 AEST 2017 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/383537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/244774/"
]
} |
383,541 | I have read through many questions on various stack exchange sites and unix help forums on how to modify shell options and then restore them - the most comprehensive one I found on here is at How to "undo" a `set -x`? The received wisdom seems to be to either save off the result of set +o or shopt -po and then eval it later to restore the previous settings. However, in my own testing with bash 3.x and 4.x, the errexit option does not get saved correctly when doing command substitution. Here is an example script to show the issue: set -o errexitset -o nounsetecho "LOCAL SETTINGS:"set +oOLDOPTS=$(set +o)echoecho "SAVED SETTINGS:"echo "$OLDOPTS" And output (I trimmed out some of the irrelevant variables): LOCAL SETTINGS:set -o errexitset -o nounsetSAVED SETTINGS:set +o errexitset -o nounset This seems extremely dangerous. Most scripts I write depend on errexit to halt execution if any commands fail. I just located a bug in one of my scripts caused by this, where the function that was supposed to restore errexit at the end wound up overriding it, setting it back to the default of off for the duration of the script. What I'd like to be able to do is write functions that can set options as needed and then restore all the options properly before exiting. But it seems as if in the subshell invoked by the command substitution, errexit is not inherited. I'm at a loss for how to save off the result of set +o without using command substitution or jumping through FIFO hoops. I can read from $SHELLOPTS but it is not writable or in eval -able format. I know one alternative is to use a subshell function , but that introduces a lot of headaches for being able to log output as well as pass back multiple variables. Probably related: https://stackoverflow.com/questions/29532904/bash-subshell-errexit-semantics (seems there is a workaround for bash 4.4 and up but I'd rather have a portable solution) | What you're doing should work. But bash turns off the errexit option in command substitutions, so it preserves all the options except this one. This is specific to bash and specific to the errexit option. Bash does preserve errexit when running in POSIX mode. Since bash 4.4, bash also doesn't clear errexit in a command substitution if shopt -s inherit_errexit is in effect. Since the option is turned off before any code runs inside the command substitution, you have to check it outside. OLDOPTS=$(set +o)case $- in *e*) OLDOPTS="$OLDOPTS; set -e";; *) OLDOPTS="$OLDOPTS; set +e";;esac If you don't like this complexity, use zsh instead. setopt local_options | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/383541",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52838/"
]
} |
383,552 | I want to add tags to my files (in this case to PDF files)so I can search for them in the filesystem and then process the result from the command-line or in a script. Is there a linux-tool that can do this for files in general? An Easy way would be to modify the filenames and then access those with find -exec or in pipe Though I want to tag the files with multible tags, and the filenames would get to long, but I want to process them in this kind of way For an example, lets say I have plenty of PDF files.So I want to tag some of them as bills , some of them as drafts So that later I could make an application browse through my filesystem and process all the matches.Lets say create symlinks for all of them in an appropriate folder,Or merge them to one single PDF etc... My question is not about those programms that would come second in the pipe as: ln , gs , pdfjoin , but about those working with the tags directly such as:applying the tags and searching for files containing those tags. | This isn't quite a match for what you're thinking, but if working with files that support metadata is of interest, exiftool can view and change the metadata on a large number of file types, including PDF files. For a full list, see man exiftool . I've used it to create and change metadata on PDFs on numerous occasions. For example: exiftool -Title="My PDF" \ -Subject="stuff" \ -Description="my pdf about various things" \ -Keywords="miscellanea, nonsense" \ -Author="me" \ -Creator="also me" \ "mypdf.pdf" Now here's where it becomes more closely related to your idea. The Keywords metadata field (or any other field for those file formats which support the creation of arbitrary fields - many do) can be used to store your tags in the files themselves, allowing the tag symlink farm to be automatically maintained by a script. Alternatively, a script could maintain a database (flat-text like CSV or similar, or an SQL database like sqlite ) containing a list of filenames (with full absolute path), filesystem metadata (timestamps, size, perms, etc) and their tags. Other scripts could be written to search this database and return the result(s) in a useful format. For example: vi $(search-tagged-files --date "last sunday" --keywords thesis) or localc $(search-tagged-files --keywords budget,2017 \ --mimetype=application/vnd.oasis.opendocument.spreadsheet) NOTE: the single biggest drawback to anything like this is the enormous amount of work it would take to maintain the tags for each of the files. Some of this could be automated, but much of it would be tedious, time-consuming manual work. And that's ignoring the design and development time to come up with a system to do it with. None of the programs used to create or edit files would be in any way integrated with a file management system like this, and neither would standard tools like mv or cp or rm . You could write wrapper scripts for many of them that were aware of this tags database and updated it automatically, but I wouldn't even know where to begin doing that if you used a GUI file browser to move, copy, open files etc...you'd probably have to write your own file browser. The work involved is probably the biggest reason why most people who have had ideas like this have ended up thinking "I'll just use a well-organised directory tree instead". 
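To make the search side of that idea concrete, here is a minimal sketch of the hypothetical search-tagged-files helper mentioned above. It only uses real exiftool options ( -q , -r , -if , -p ); the script name, its interface, and the choice of the Keywords field as the tag store are assumptions for illustration, not an existing tool:

#!/bin/bash
# search-tagged-files (sketch): list files whose Keywords metadata contains a given tag.
# Usage: search-tagged-files TAG [DIR...]   (defaults to the current directory)
tag=$1; shift
# -q  : suppress informational messages
# -r  : recurse into the given directories
# -if : only process files whose Keywords value matches the tag (case-insensitive)
# -p  : print the directory and file name of each matching file
exiftool -q -r -if "\$Keywords =~ /\Q$tag\E/i" -p '$Directory/$FileName' "${@:-.}"

With tags stored as in the exiftool example earlier, something like search-tagged-files bills ~/Documents would then print every file under ~/Documents whose Keywords field mentions "bills", and that list could be fed straight to ln , gs , pdfjoin or whatever comes next in the pipe.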
Even the work required to write the code to manage the documents is enormous, and the work to manage the metadata for each file is even larger - it's generally only worth the effort for very large organisations with at least tens of thousands of documents to keep track of. This isn't a new idea; there's been a lot of research and development on ideas like this. One of the names for it is Document Management System . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/383552",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240990/"
]
} |
383,566 | I tried to install zfs on debian 9.1, however I'm experiencing some errors. My first installation was only of zfs-dkms however I read on the net that also the spl-dkms is required for zfs-dkms to run. My steps were to change my sources.list adding the contrib non-free as follows: /etc/apt/sources.list deb http://ftp.nl.debian.org/debian/ stretch main contrib non-freedeb-src http://ftp.nl.debian.org/debian/ stretch main contrib non-freedeb http://security.debian.org/debian-security stretch/updates main contrib non-freedeb-src http://security.debian.org/debian-security stretch/updates main contrib non-free# stretch-updates, previously known as 'volatile'deb http://ftp.nl.debian.org/debian/ stretch-updates main contrib non-freedeb-src http://ftp.nl.debian.org/debian/ stretch-updates main contrib non-free Done a classic apt-get update and then tried installing zfs with the following: apt-get install spl-dkms and only after apt-get install zfs-dkms As a result, I have these errors: root@debian:/etc/apt# apt-get install zfs-dkmsReading package lists... DoneBuilding dependency treeReading state information... DoneThe following additional packages will be installed: libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-zed zfsutils-linux...DKMS: install completed.Setting up libzpool2linux (0.6.5.9-5) ...Setting up libzfs2linux (0.6.5.9-5) ...Setting up zfsutils-linux (0.6.5.9-5) ...Created symlink /etc/systemd/system/zfs-mount.service.wants/zfs-import-cache.service â /lib/systemd/system/zfs-import-cacCreated symlink /etc/systemd/system/zfs.target.wants/zfs-import-cache.service â /lib/systemd/system/zfs-import-cache.servCreated symlink /etc/systemd/system/zfs-share.service.wants/zfs-mount.service â /lib/systemd/system/zfs-mount.service.Created symlink /etc/systemd/system/zfs.target.wants/zfs-mount.service â /lib/systemd/system/zfs-mount.service.Created symlink /etc/systemd/system/zfs.target.wants/zfs-share.service â /lib/systemd/system/zfs-share.service.Created symlink /etc/systemd/system/multi-user.target.wants/zfs.target â /lib/systemd/system/zfs.target.zfs-import-scan.service is a disabled or a static unit, not starting it.Job for zfs-mount.service failed because the control process exited with error code.See "systemctl status zfs-mount.service" and "journalctl -xe" for details.zfs-mount.service couldn't start.Job for zfs-share.service failed because the control process exited with error code.See "systemctl status zfs-share.service" and "journalctl -xe" for details.zfs-share.service couldn't start.Setting up zfs-zed (0.6.5.9-5) ...Created symlink /etc/systemd/system/zed.service â /lib/systemd/system/zfs-zed.service.Created symlink /etc/systemd/system/zfs.target.wants/zfs-zed.service â /lib/systemd/system/zfs-zed.service.Processing triggers for libc-bin (2.24-11+deb9u1) ... 
Reading journalctl -xe as suggested I get: root@debian:/etc/apt# journalctl -xeAug 02 23:13:13 debian systemd[1]: zfs-share.service: Main process exited, code=exited, status=1/FAILUREAug 02 23:13:13 debian systemd[1]: Failed to start ZFS file system shares.-- Subject: Unit zfs-share.service has failed-- Defined-By: systemd-- Support: https://www.debian.org/support---- Unit zfs-share.service has failed.---- The result is failed.Aug 02 23:13:13 debian systemd[1]: zfs-share.service: Unit entered failed state.Aug 02 23:13:13 debian systemd[1]: zfs-share.service: Failed with result 'exit-code'.Aug 02 23:13:13 debian systemd[1]: Starting Mount ZFS filesystems...-- Subject: Unit zfs-mount.service has begun start-up-- Defined-By: systemd-- Support: https://www.debian.org/support---- Unit zfs-mount.service has begun starting up.Aug 02 23:13:13 debian zfs[81481]: The ZFS modules are not loaded.Aug 02 23:13:13 debian zfs[81481]: Try running '/sbin/modprobe zfs' as root to load them.Aug 02 23:13:13 debian systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILUREAug 02 23:13:13 debian systemd[1]: Failed to start Mount ZFS filesystems.-- Subject: Unit zfs-mount.service has failed-- Defined-By: systemd-- Support: https://www.debian.org/support---- Unit zfs-mount.service has failed.---- The result is failed.Aug 02 23:13:13 debian systemd[1]: zfs-mount.service: Unit entered failed state.Aug 02 23:13:13 debian systemd[1]: zfs-mount.service: Failed with result 'exit-code'.Aug 02 23:13:13 debian systemd[1]: Starting ZFS file system shares...-- Subject: Unit zfs-share.service has begun start-up-- Defined-By: systemd-- Support: https://www.debian.org/support---- Unit zfs-share.service has begun starting up.Aug 02 23:13:13 debian systemd[81483]: zfs-share.service: Failed at step EXEC spawning /usr/bin/rm: No such file or direc-- Subject: Process /usr/bin/rm could not be executed-- Defined-By: systemd-- Support: https://www.debian.org/support---- The process /usr/bin/rm could not be executed and failed.---- The error number returned by this process is 2. What's wrong here? I missed something else? How is the zfs-linux package related to zfs installation? What is the correct way to install zfs in debian 9 ? | The actual answer by @cas is good but have some corrections to be applied. So let's take a fresh installation of Debian 9 and assuming that the contrib non-free repositories are also not enabled. Step 0 - Enable the contrib non-free repositories I used sed to find and replace the word main inside /etc/apt/sources.list sed -i 's/main/main contrib non-free/g' /etc/apt/sources.listapt-get update Step 1 - ZFS Installation Since the last fixes spl-dkms is correctly seen as zfs-dkms dependency so it's recalled automatically and it's not necessary to install it manually before zfs-dkms . The symbolic link is needed due to a bug inside the zfs distribution in Debian, that doesn't look for rm binary in the right position. apt -y install linux-headers-$(uname -r)ln -s /bin/rm /usr/bin/rmapt-get -y install zfs-dkms Step 2 - ZFS Restart At this point zfs-dkms is installed but it throws errors in journalctl -xe ; to start zfs properly use: /sbin/modprobe zfssystemctl restart zfs-import-cachesystemctl restart zfs-import-scansystemctl restart zfs-mountsystemctl restart zfs-share Step 3 - YOU MUST CREATE AT LEAST ONE ZPOOL At this point I discovered that YOU must create a zpool before reboot otherwise zfs will not load the proper modules if there are no zpools. 
It's a sort of resource-saving mechanism ( but even in that case this will still throw errors inside journalctl -xe ) https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=864348 " We are not doing this because ZFS modules would taint the kernel, if there's no zpool available then it shouldn't be loaded. " If you miss this part you have to start again from Step 2. For example, by using the example provided by @cas, you can create this file-based zpool or directly create your disk-based ones. truncate -s 100M /root/z1truncate -s 100M /root/z2zpool create tank /root/z1 /root/z2zpool scrub tankzpool status Then after a reboot everything will work with no errors in journalctl -xe | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/383566",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143935/"
]
} |
383,613 | I have an input string with | [pipe] delimiter and would like to replace the space in the 3rd and 5th columns with the & character. Input File: a a|b b|c c|d d|e ef f|g g|h h|i i|j j Output File: a a|b b|c&c|d d|e&ef f|g g|h&h|i i|j&j You can see the space between cc, ee, hh and jj is replaced with & I have an alternate solution which involves reading the file with a while loop, splitting it with the cut command based on the delimiter, storing each piece in a variable based on position, replacing the space with '&' using sed, appending all the split variables back into one variable, and appending that to a new file. Is there a single command which can be used to achieve this? | Use awk for this: awk -F\| '{gsub(" ","\\&",$3); gsub(" ","\\&",$5)}1' OFS=\| infile.txt The -F\| tells awk that fields are delimited by the | pipe (it's escaped with \ so the shell doesn't interpret it as a pipeline; we could use -F"|" or -F'|' instead). The gsub("regexp","replacement"[, INDEX]) syntax is used to replace " " (space) with a literal & in index (column) $3 and $5 ; the diagram below shows each index position based on the | delimiter. a a|b b|c c|d d|e e^^^|^^^|^^^|^^^|^^^$1 |$2 |$3 |$4 |$5 Read more about why we escaped \\& there, and why twice. What is the 1 used at the end in awk '{...}1' ? It's awk's default action control to print; read more in details The OFS=\| brings back the specified | delimiter when the fields are printed. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/383613",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229539/"
]
} |
383,633 | I'm using "pure" Debian 9 alongside with DWM (no desktop-environment) on my laptop. After the installation, I had to install pulseaudio package in order to make the sound work. It worked well but suddenly the sound doesn't play anymore. I'm not sure what action caused this (whether it was upgrading some package or something else). I don't see any errors anywhere; it just doesn't play. I've checked on Windows that the speakers work, so it's not a hardware problem. When I issue pulseaudio --start --log-target=syslog and look to the syslog, there are no errors there. Can anyone help me how to solve this problem? Write in the comments what logs or configs should I paste there. | With help of my friend I installed the pavucontrol package and found out that the sound has been muted. I don't know how it got itself to this state, but simple clicking the button solved my problem. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/383633",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141945/"
]
} |
383,678 | To reach an isolated network I use an ssh -D socks proxy . In order to avoid having to type the details every time I added them to ~/.ssh/config : $ awk '/Host socks-proxy/' RS= ~/.ssh/configHost socks-proxy Hostname pcit BatchMode yes RequestTTY no Compression yes DynamicForward localhost:9118 Then I created a systemd-user service unit definition file: $ cat ~/.config/systemd/user/SocksProxy.service [Unit]Description=SocksProxy Over Bridge Host[Service]ExecStart=/usr/bin/ssh -Nk socks-proxy[Install]WantedBy=default.target I let the daemon reload the new service definitions, enabled the new service, started it, checked its status, and verified, that it is listening: $ systemctl --user daemon-reload$ systemctl --user list-unit-files | grep SocksPSocksProxy.service disabled$ systemctl --user enable SocksProxy.serviceCreated symlink from ~/.config/systemd/user/default.target.wants/SocksProxy.service to ~/.config/systemd/user/SocksProxy.service.$ systemctl --user start SocksProxy.service $ systemctl --user status SocksProxy.service ● SocksProxy.service - SocksProxy Over Bridge Host Loaded: loaded (/home/alex/.config/systemd/user/SocksProxy.service; enabled) Active: active (running) since Thu 2017-08-03 10:45:29 CEST; 2s ago Main PID: 26490 (ssh) CGroup: /user.slice/user-1000.slice/[email protected]/SocksProxy.service └─26490 /usr/bin/ssh -Nk socks-proxy$ netstat -tnlp | grep 118tcp 0 0 127.0.0.1:9118 0.0.0.0:* LISTEN tcp6 0 0 ::1:9118 :::* LISTEN This works as intended. Then I wanted to avoid having to manually start the service, or running it permanently with autossh , by using systemd socket-activation for on-demand (re-)spawning. That didn't work, I think (my version of) ssh cannot receive socket file-descriptors. I found the documentation ( 1 , 2 ), and an example for using the systemd-socket-proxyd -tool to create 2 "wrapper" services, a "service" and a "socket": $ cat ~/.config/systemd/user/SocksProxyHelper.socket [Unit]Description=On Demand Socks proxy into Work[Socket]ListenStream=8118#BindToDevice=lo#Accept=yes[Install]WantedBy=sockets.target$ cat ~/.config/systemd/user/SocksProxyHelper.service [Unit]Description=On demand Work Socks tunnelAfter=network.target SocksProxyHelper.socketRequires=SocksProxyHelper.socket SocksProxy.serviceAfter=SocksProxy.service[Service]#Type=simple#Accept=falseExecStart=/lib/systemd/systemd-socket-proxyd 127.0.0.1:9118TimeoutStopSec=5[Install]WantedBy=multi-user.target$ systemctl --user daemon-reload This seems to work, until ssh dies or gets killed. Then it won't re-spawn at the next connection attempt when it should. Questions: Can /usr/bin/ssh really not accept systemd-passed sockets? Or only newer versions? Mine is the one from up2date Debian 8.9 . Can only units of root use the BindTodevice option? Why is my proxy service not respawning correctly on first new connection after the old tunnel dies? Is this the right way to set-up an "on-demand ssh socks proxy"? If, not, how do you do it? | Can /usr/bin/ssh really not accept systemd-passed sockets? I think that's not too surprising, considering: OpenSSH is an OpenBSD project systemd only supports the Linux kernel systemd support would need to be explicitly added to OpenSSH, as an optional/build-time dependency, so it would probably be a hard sell. Can only units of root use the BindTodevice option? User systemd instances are generally pretty isolated, and e.g. can not communicate with the main pid-0 instance. Things like depending on system units from user unit files are not possible. 
The documentation for BindToDevice mentions: Note that setting this parameter might result in additional dependencies to be added to the unit (see above). Due to the above-mentioned restriction, we can imply that the option doesn't work from user systemd instances. Why is my proxy service not respawning correctly on first new connection after the old tunnel dies? As I understand, the chain of events is as follows: SocksProxyHelper.socket is started. A SOCKS client connects to localhost:8118. systemd starts SocksProxyHelper.service . As a dependency of SocksProxyHelper.service , systemd also starts SocksProxy.service . systemd-socket-proxyd accepts the systemd socket, and forwards its data to ssh . ssh dies or is killed. systemd notices, and places SocksProxy.service into a inactive state, but does nothing. SocksProxyHelper.service keeps running and accepting connections, but fails to connect to ssh , as it is no longer running. The fix is to add BindsTo=SocksProxy.service to SocksProxyHelper.service . Quoting its documentation (emphasis added): Configures requirement dependencies, very similar in style to Requires= . However, this dependency type is stronger: in addition to the effect of Requires= it declares that if the unit bound to is stopped, this unit will be stopped too . This means a unit bound to another unit that suddenly enters inactive state will be stopped too. Units can suddenly, unexpectedly enter inactive state for different reasons: the main process of a service unit might terminate on its own choice , the backing device of a device unit might be unplugged or the mount point of a mount unit might be unmounted without involvement of the system and service manager. When used in conjunction with After= on the same unit the behaviour of BindsTo= is even stronger. In this case, the unit bound to strictly has to be in active state for this unit to also be in active state . This not only means a unit bound to another unit that suddenly enters inactive state, but also one that is bound to another unit that gets skipped due to a failed condition check (such as ConditionPathExists= , ConditionPathIsSymbolicLink= , … — see below) will be stopped, should it be running. Hence, in many cases it is best to combine BindsTo= with After= . Is this the right way to set-up an "on-demand ssh socks proxy"? If, not, how do you do it? There's probably no "right way". This method has its advantages (everything being "on-demand") and disadvantages (dependency on systemd, the first connection not getting through because ssh hasn't begun listening yet). Perhaps implementing systemd socket activation support in autossh would be a better solution. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/383678",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/127903/"
]
} |
383,712 | I am trying to interpret this result of hdparm: janus@behemoth ~ $ sudo hdparm -Tt --direct /dev/nvme0n1/dev/nvme0n1: Timing O_DIRECT cached reads: 2548 MB in 2.00 seconds = 1273.69 MB/sec Timing O_DIRECT disk reads: 4188 MB in 3.00 seconds = 1395.36 MB/sec I do not understand how the cached reads can be slower than the direct disk reads. If I drop the --direct, I get what I would have expected: the disk reads are slower than the cached ones: janus@behemoth ~ $ sudo hdparm -Tt /dev/nvme0n1/dev/nvme0n1: Timing cached reads: 22064 MB in 2.00 seconds = 11042.86 MB/sec Timing buffered disk reads: 2330 MB in 3.00 seconds = 776.06 MB/sec (Although it says "buffered disk reads" now). Can somebody explain to me what is going on? | Per the hdparm man page: --direct Use the kernel's "O_DIRECT" flag when performing a -t timing test. This bypasses the page cache, causing the reads to go directly from the drive into hdparm's buffers, using so-called "raw" I/O. In many cases, this can produce results that appear much faster than the usual page cache method, giving a better indication of raw device and driver performance. This explains why hdparm -t --direct may be faster than hdparm -t . It also says that --direct only applies to the -t test, not to the -T test which is not supposed to involve the disk (see below). -T Perform timings of cache reads for benchmark and comparison purposes. For meaningful results, this operation should be repeated 2-3 times on an otherwise inactive system (no other active processes) with at least a couple of megabytes of free memory. This displays the speed of reading directly from the Linux buffer cache without disk access. This measurement is essentially an indication of the throughput of the processor, cache, and memory of the system under test. I guess -T works by reading the same cached part of the disk. But your --direct prevents this. So, logically, you should have the same results with -t --direct as with -T --direct . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/383712",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47265/"
]
} |
383,727 | I want to delete all .swp files recursively. However: rm -r *.swp Gives: rm: cannot remove ‘*.swp’: No such file or directory Just to be sure, ls -all gives: total 628drwxr--r--. 8 przecze przecze 4096 Aug 3 18:16 .drwxr--r--. 31 przecze przecze 4096 Aug 3 18:14 ..-rwxrwxr-x. 1 przecze przecze 108 Jul 28 21:41 build.sh-rwxrwxr-x. 1 przecze przecze 298617 Aug 3 00:52 execdrwxr--r--. 8 przecze przecze 4096 Aug 3 18:08 .gitdrwxrwxr-x. 2 przecze przecze 4096 Aug 3 18:14 inc-rw-rw-r--. 1 przecze przecze 619 Aug 3 00:52 main.cc-rw-r--r--. 1 przecze przecze 12288 Aug 3 17:29 .main.cc.swp-rw-rw-r--. 1 przecze przecze 850 Aug 1 00:30 makefile-rw-------. 1 przecze przecze 221028 Aug 3 01:47 nohup.outdrwxrwxr-x. 2 przecze przecze 4096 Aug 3 00:52 objdrwxrwxr-x. 2 przecze przecze 4096 Aug 3 00:52 outdrwxrwxr-x. 12 przecze przecze 4096 Aug 3 18:14 runs-rwxr--r--. 1 przecze przecze 23150 Aug 2 18:56 Session.vimdrwxrwxr-x. 2 przecze przecze 4096 Aug 3 18:14 src-rw-rw-r--. 1 przecze przecze 13868 Jul 31 19:28 tags-rw-rw-r--. 1 przecze przecze 2134 Aug 3 00:31 view.py-rw-r--r--. 1 przecze przecze 12288 Aug 3 17:29 .view.py.swp So there are *.swp files to delete! And rm .build.sh.swp successfully deleted one of them. What am I doing wrong? | Try to match the dot: $ rm -r .*.swp The swap files start with a dot (e.g. .main.cc.swp ), so the pattern *.swp never matches them. I hope this solves your problem. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/383727",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/244865/"
]
} |
383,751 | From https://unix.stackexchange.com/a/32227/674 The main places where it's safe not to use the double quotes are: in an assignment: foo=$bar (but note that you do need the double quotes in export "foo=$bar" or in array assignments like array=("$a" "$b") ); Does it mean that it is not required to double quote $bar in assignment foo=$bar ? Why? Why do we need the double quotes in export "foo=$bar" and in array assignments like array=("$a" "$b") ? Thanks. | foo=$bar is safe because it's an assignment , and an assignment to a scalar variable, using the scalar assignment syntax. It's a scalar context , only one value can be stored in $var , it would not make sense to split or glob $bar . If the expansion resulted in several words, the shell would need to somehow combine them again to be able to store them as one string in $foo . It's different when you use: foo=($bar) Where you're assigning to an array variable. There it's a list context. You're assigning a number of words to elements of the array. split+glob occurs. Also beware of the double-nature of things like export / local / typeset / declare / readonly in some shells (explained in more details at Are quotes needed for local variable assignment? ) You'll notice that: foo=$bar is parsed as an assignment while "foo"=$bar is just an attempt to run the foo=content_of_bar command (where the content of bar is subject to split+glob). In shells where export (and other local / typeset ...) is both a keyword and builtin (ksh, bash and recent versions of zsh), in: export foo=$bar export is recognised as a keyword and foo=$bar as an assignment, so $bar is not subject to split+glob. But it takes little for export to stop being recognised as a keyword . In which case, it's just treated as a simple command and split+glob happens like in any argument to any other command. And even in the cases where export is seen as a keyword, if the arguments don't look like variable assignments (like in the "foo"=$bar above), then they're treated like normal arguments and subject to split+glob again. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/383751",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
383,818 | Is any alias equivalent to a function, in the sense that each appearance of the alias can be replaced with the function's name, and the function name in each call to the function can be replaced with the alias? If I am correct, an arbitrary alias is defined in the following form: alias myalias=blahblah Is the alias defined in the above form always equivalent to a function defined as myfun () { blahblah $@ } ? If not, what function is the alias equivalent to? Thanks. | As the fine manual tells us, aliases are almost completely superseded by functions. Functions can do pretty much anything an alias can do, and are capable of doing a lot more as they take arguments which can be used in arbitrary order. The thing that functions can't do is prevent the expansion of their arguments. This means that the only reason for using an alias is to set up a function call without expansion. alias funny='set -f; _funny'_funny(){ set +f ; do_something_with_unexpanded_args ;} and now you can run funny * and see the * rather than a list of the files in the current directory. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/383818",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
383,836 | How to rename all the files in the directory in such a way the files get added "_1" before ".txt" apac_02_aug_2017_file.txtemea_02_May_2017_file.txtger__02_Jun_2017_file.txt To apac_02_aug_2017_file_1.txtemea_02_May_2017_file_1.txtger__02_Jun_2017_file_1.txt | With rename rename .txt _1.txt * should do what you are looking for. To quote man rename : rename [options] expression replacement file... rename will rename the specified files by replacing the first occurrence of expression in their name by replacement. With common bash commands Since you said that rename is not installed on your system, here's a solution that uses more standard Bash: for file in *.txt; do mv "$file" "${file%.txt}_1.txt"done Explanation: We loop over all files. For each file, we move it to the correct location by making use of something called "parameter expansion" (this is the ${} part). The special character % can be used within parameter expansion to match a pattern at the end of the string and delete it. For further information, see: http://wiki.bash-hackers.org/syntax/pe#from_the_end | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/383836",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/244415/"
]
} |
383,862 | As I was looking this answer https://stackoverflow.com/a/11065196/4706711 in order to figure out on how to use parameters like --something or -s some questions rised regarding the answer's script : #!/bin/bashTEMP=`getopt -o ab:c:: --long a-long,b-long:,c-long:: \ -n 'example.bash' -- "$@"`if [ $? != 0 ] ; then echo "Terminating..." >&2 ; exit 1 ; fi# Note the quotes around `$TEMP': they are essential!eval set -- "$TEMP"while true ; do case "$1" in -a|--a-long) echo "Option a" ; shift ;; -b|--b-long) echo "Option b, argument \`$2'" ; shift 2 ;; -c|--c-long) # c has an optional argument. As we are in quoted mode, # an empty parameter will be generated if its optional # argument is not found. case "$2" in "") echo "Option c, no argument"; shift 2 ;; *) echo "Option c, argument \`$2'" ; shift 2 ;; esac ;; --) shift ; break ;; *) echo "Internal error!" ; exit 1 ;; esacdoneecho "Remaining arguments:"for arg do echo '--> '"\`$arg'" ; done First of all what does the shift program in the following line: -a|--a-long) echo "Option a" ; shift ;; Afterwards what is the purpose to use the eval command in the following line: eval set -- "$TEMP" I tried to comment the line in script mentioned above and I got the following response: $ ./getOptExample2.sh -a 10 -b 20 --a-long 40 -charem --c-long=echiParam: -aOption aParam: 10Internal error! But if I uncomment it it runs like a charm: Option aOption b, argument `20'Option aOption c, argument `harem'Option c, argument `echi'Remaining arguments:--> `10'--> `40' | One of the many things that getopt does while parsing options is to rearrange the arguments, so that non-option arguments come last, and combined short options are split up. From man getopt : Output is generated for each element described in the previous section.Output is done in the same order as the elements are specified in theinput, except for non-option parameters. Output can be done incompatible (unquoted) mode, or in such way that whitespace and otherspecial characters within arguments and non-option parameters arepreserved (see QUOTING). When the output is processed in the shellscript, it will seem to be composed of distinct elements that can beprocessed one by one (by using the shift command in most shelllanguages).[...]Normally, no non-option parameters output is generated until alloptions and their arguments have been generated. Then '--' isgenerated as a single parameter, and after it the non-option parametersin the order they were found, each as a separate parameter. This effect is reflected in your code, where the option-handling loop assumes that all option arguments (including arguments to options) come first, and come separately, and are finally followed by non-option arguments. So, TEMP contains the rearranged, quoted, split-up options, and using eval set makes them script arguments. Why eval ? You need a way to safely convert the output of getopt to arguments. That means safely handling special characters like spaces, ' , " (quotes), * , etc. To do that, getopt escapes them in the output for interpretation by the shell. Without eval , the only option is set $TEMP , but you're limited to what's possible by field splitting and globbing instead of the full parsing ability of the shell. Say you have two arguments. There is no way to get those two as separate words using just field splitting without additionally restricting the characters usable in arguments (e.g., say you set IFS to : , then you cannot have : in the arguments). 
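A minimal illustration of that difference (the option letter and the test argument are invented for the example, and this assumes the enhanced util-linux getopt that the quoted script relies on):

#!/bin/bash
# -b takes an argument; pass it one that contains a space.
TEMP=$(getopt -o b: -n 'example.bash' -- -b 'two words')
printf '%s\n' "$TEMP"   # getopt prints:  -b 'two words' --

set -- $TEMP            # plain word splitting: $1=-b  $2='two  $3=words'  $4=--
echo "$#"               # 4 "words", with the quote characters kept literally

eval set -- "$TEMP"     # the shell re-parses the quoting that getopt emitted
echo "$#"               # 3 arguments: -b, "two words", --
echo "$2"               # prints: two words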
So, you need to be able to escape such characters and have the shell interpret that escaping, which is why eval is needed. Barring a major bug in getopt , it should be safe. As for shift , it does what it always does: remove the first argument, and shift all arguments (so that what was $2 will now be $1 ). This eliminates the arguments that have been processed, so that, after this loop, only non-option arguments are left and you can conveniently use $@ without worrying about options. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/383862",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/173648/"
]
} |
383,883 | I basically want to combine the output of find / with the output of: find / | xargs -L1 stat -c%Z The first command lists the files in the / directory and the second lists the timestamp of each file. I want to combine these two such that I get the file and the timestamp on one line, like: /path/to/file 1501834915 | If you have GNU find, you can do it entirely using find : find / -printf '%p %C@\n' The format specifiers are: %p File's name. %Ck File's last status change time in the format specified by k, which is the same as for %A. %Ak File's last access time in the format specified by k, which is either `@' or a directive for the C `strftime' function. The possible values for k are listed below; some of them might not be available on all systems, due to differences in `strftime' between systems. @ seconds since Jan. 1, 1970, 00:00 GMT, with fractional part. If you don't want fractional parts, use s instead of @ as the time format specifier. (There are a few systems without s , but Linux and *BSD/OSX do have s .) find / -printf '%p %Cs\n' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/383883",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/185491/"
]
} |
383,918 | I have a bash script ( x11docker ) that needs to run some commands as root ( docker ), and some commands as unprivileged user (X servers like Xephyr ). The script prompts for password at some point. It should run on arbitrary linux systems without configuring the system first. Some systems use su , some use sudo to get root privileges.How can I recognize which one will work? I tried sudo -l docker . That should tell me if sudo docker is allowed. Unfortunately, it needs a password even for this information. The point is, root may or may not have a password (that is needed to use su -c ), and sudo may or may not be allowed to run docker. How to decide which one will do the job (=executing a command with root privileges)? Checking for group sudo may be a good guess, but is not reliable, as it does not tell me if /etc/sudoers is configured to allow group sudo arbitrary root access. Also, docker can be allowed in /etc/sudoers without the user being member of group sudo. pkexec should be a solution, but is not reliable. The password prompt fails on console, fails on X servers different from DISPLAY=:0 , and fails on OpenSuse at all. Currently, the script defaults to use su , and a switch --sudo allows to use sudo . Possible, but not nifty. I am working on an update allowing to run the script as root at all and checking for the "real" unprivileged user with logname , SUDO_USER and PKEXEC_UID . Not nifty, too. Is there a way to know if I should use su or sudo ? | If you have GNU find, you can do it entirely using find : find / -printf '%p %C@\n' The format specifiers are: %p File's name. %Ck File's last status change time in the format specified by k, which is the same as for %A. %Ak File's last access time in the format specified by k, which is either `@' or a directive for the C `strftime' function. The possible values for k are listed below; some of them might not be available on all systems, due to differences in `strftime' between systems. @ seconds since Jan. 1, 1970, 00:00 GMT, with fractional part. If you don't want fractional parts, use s instead of @ as the time format specifier. (There are a few systems without s , but Linux and *BSD/OSX do have s .) find / -printf '%p %Cs\n' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/383918",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/185747/"
]
} |
383,926 | The /proc/devices file lists devices by major revision number and name. On my system it shows (partially): Block devices:259 blkext 7 loop 8 sd 9 md 11 sr 65 sd 66 sd 67 sd 68 sd 69 sd 70 sd 71 sd128 sd129 sd130 sd131 sd132 sd133 sd134 sd135 sd253 device-mapper254 mdp What are all those 'sd' devices? The first one (rev. no. 8) is probably /dev/sda but the rest of them don't show up in /dev - there are no devices with those major revision numbers. I do see a list of devices: crw-rw---- 1 root tty 7, 128 Jul 29 14:15 vcsacrw-rw---- 1 root tty 7, 129 Jul 29 14:15 vcsa1crw-rw---- 1 root tty 7, 130 Jul 29 14:15 vcsa2crw-rw---- 1 root tty 7, 131 Jul 29 14:15 vcsa3crw-rw---- 1 root tty 7, 132 Jul 29 14:15 vcsa4crw-rw---- 1 root tty 7, 133 Jul 29 14:15 vcsa5crw-rw---- 1 root tty 7, 134 Jul 29 14:15 vcsa6 where the minor number might be a match - would /proc show minor revision numbers, and why are they called sd . Either way, I don't see any device with no. 135 . Could someone explain this to me? | The first disk /dev/sda is 8:0 (major:minor), but the major number 8 contains the next 15 disks too ( Documentation/devices.txt in the kernel source): 8 block SCSI disk devices (0-15) 0 = /dev/sda First SCSI disk whole disk 16 = /dev/sdb Second SCSI disk whole disk 32 = /dev/sdc Third SCSI disk whole disk ... 240 = /dev/sdp Sixteenth SCSI disk whole disk Partitions are handled in the same way as for IDE disks (see major number 3) except that the limit on partitions is 15. The rest are for the rest of your drives (major numbers 66-71 and 128-134 are similar, and the partitioning scheme is the same for all of them): 65 block SCSI disk devices (16-31) 0 = /dev/sdq 17th SCSI disk whole disk 16 = /dev/sdr 18th SCSI disk whole disk ...135 block SCSI disk devices (240-255) 0 = /dev/sdig 241st SCSI disk whole disk ... 240 = /dev/sdiv 256th SCSI disk whole disk Well, you probably don't have that many disks, and the system only generates the nodes that are required for the devices you actually have, so you don't see anything but sda and its partitions in /dev . As for vcsa and friends, they're related to the virtual consoles: 7 char Virtual console capture devices 0 = /dev/vcs Current vc text contents 1 = /dev/vcs1 tty1 text contents ... 128 = /dev/vcsa Current vc text/attribute contents 129 = /dev/vcsa1 tty1 text/attribute contents ... Also note that /dev/vcs* are character devices, not a block devices. The first letter in the ls output tells which one it is. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/383926",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240014/"
]
} |
383,955 | Does starting Linux from USB pen drive delete anything on hard drive? | The first disk /dev/sda is 8:0 (major:minor), but the major number 8 contains the next 15 disks too ( Documentation/devices.txt in the kernel source): 8 block SCSI disk devices (0-15) 0 = /dev/sda First SCSI disk whole disk 16 = /dev/sdb Second SCSI disk whole disk 32 = /dev/sdc Third SCSI disk whole disk ... 240 = /dev/sdp Sixteenth SCSI disk whole disk Partitions are handled in the same way as for IDE disks (see major number 3) except that the limit on partitions is 15. The rest are for the rest of your drives (major numbers 66-71 and 128-134 are similar, and the partitioning scheme is the same for all of them): 65 block SCSI disk devices (16-31) 0 = /dev/sdq 17th SCSI disk whole disk 16 = /dev/sdr 18th SCSI disk whole disk ...135 block SCSI disk devices (240-255) 0 = /dev/sdig 241st SCSI disk whole disk ... 240 = /dev/sdiv 256th SCSI disk whole disk Well, you probably don't have that many disks, and the system only generates the nodes that are required for the devices you actually have, so you don't see anything but sda and its partitions in /dev . As for vcsa and friends, they're related to the virtual consoles: 7 char Virtual console capture devices 0 = /dev/vcs Current vc text contents 1 = /dev/vcs1 tty1 text contents ... 128 = /dev/vcsa Current vc text/attribute contents 129 = /dev/vcsa1 tty1 text/attribute contents ... Also note that /dev/vcs* are character devices, not a block devices. The first letter in the ls output tells which one it is. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/383955",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245036/"
]
} |
384,036 | I am trying to create a script that will get 5 numbers and then sort them from biggest to lowest.So far, this is what I have: #!/bin/bashclearecho "********Sorting********"echo "Enter first number:"read n1echo "Enter second number:"read n2echo "Enter third number:"read n3echo "Enter fourth number:"read n4echo "Enter fifth number:"read n5 | you can just use sort with the reverse switch: echo -e "$n1\n$n2\n$n3\n$n4\n$n5" | sort -rn | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/384036",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/244912/"
]
} |
384,092 | I created a simple C program and every time I load it in GDB, I see the same memory addresses allocated to the instructions of the program. For example, a function what() always loads at memory location 0x000055555555472d. In fact the stack is exactly the same for each execution (not just the content of the stack but also the memory address which rsp points to). I understand that ASLR can be disabled in Linux by setting "/proc/sys/kernel/randomize_va_space" to 0 but my Debian system has value 2 in it. root@Sierra ~ % cat /proc/sys/kernel/randomize_va_space 2 According to my understanding of ASLR, these addresses should be randomized at each run. My question is: why is this happening? Did I get something wrong? | By default, gdb disables address space randomization on Linux, overriding whatever value the kernel.randomize_va_space sysctl variable may have. The gdb command set disable-randomization off will turn this feature off, and any debugging targets created by gdb afterwards will have ASLR either on or off depending on the value of kernel.randomize_va_space . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/384092",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152971/"
]
} |
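A hedged sketch of where the setting can be applied; ./a.out is a placeholder binary, and show disable-randomization at the (gdb) prompt reports the current state:

    # one-off, for a single debugging session
    gdb -ex "set disable-randomization off" ./a.out
    # or permanently, for every future gdb session
    echo "set disable-randomization off" >> ~/.gdbinit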
384,117 | I am building a system which has the functions of an online judge system. I need to run all the executables and evaluate their output. The problem is that if all of them are placed in a container, in different folders, one of the applications may try to exit its folder and access another folder belonging to another application. In this case the main server will be protected, but not the other applications and not the evaluator. I have found a solution myself, but I think there is a better one: I will create, for example, 5 containers, each one of them running the same algorithm and each one of them evaluating 1 problem at a time. After the problem is evaluated this one will be deleted and another one received. In this case, the main server and all the applications will be protected, but not the evaluator. The evaluated application may exit its folder and start writing random text files for example, filling the entire memory. The evaluator will start the executable, measure its time (if it is longer than 1 or 2 seconds it will kill it) and its memory usage (if it reaches a certain limit it will kill it). I have also thought of creating a container each time and deleting it after the executable is evaluated, but it takes a few seconds just to create and start the container... How do I isolate the evaluated process from messing with the container and evaluator? I basically want to block a process from accessing other folders. | I have not read anything in the description of your problem that would prevent you from creating different user accounts for the applications. You can then use trivial file permissions for preventing interference: chown app1 /var/lib/myapps/app1chmod 700 /var/lib/myapps/app1sudo -u app1 /var/lib/myapps/app1/run.sh edit If the evaluator is running as root then it can simply start the applications via sudo . If the evaluator does not run as root then the applications it calls (in the normal way) can be installed with the SUID bit (set user ID) so that the process will run as the user which owns the binary file and not as the user of the evaluator process. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/384117",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245133/"
]
} |
384,121 | I'm running Centos 7 and Python 3.6. I have a python script that uses multi-threading. I want to renice or change the niceness value for all of the threads the script spawns. At present, I am able to change the niceness value of the parent process using the command below. while read -r pid; dorenice -n -20 "$pid" ; done < <(ps -o pid= -C "python /path/script.py") Then, when I use htop to view the status of the script's processes, only the parent process shows the updated or 'reniced' value. The 'child' processes all show the standard nice value (I can manually change these using the F7 key in htop). I have found a few similar questions here and on the web and I have tried the solutions suggested, but these do not seem to work on Centos 7 so they may be distro-specific. Please can anyone point me in the right direction on how to amend the above command to be able to renice the parent and child processes at the same time, or if there is a better solution I should use, then I'm happy to try it. *** Please note that the script is running in a Centos 7 docker container on a Centos 7 host. I am unable to start the script in the container with the nice value I want due to permission issues in docker, so I want to renice the script process on the host, which I can do using the above process. | I have not read anything in the description of your problem that would prevent you from creating different user accounts for the applications. You can then use trivial file permissions for preventing interference: chown app1 /var/lib/myapps/app1chmod 700 /var/lib/myapps/app1sudo -u app1 /var/lib/myapps/app1/run.sh edit If the evaluator is running as root then it can simply start the applications via sudo . If the evaluator does not run as root then the applications it calls (in the normal way) can be installed with the SUID bit (set user ID) so that the process will run as the user which owns the binary file and not as the user of the evaluator process. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/384121",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/181191/"
]
} |
384,159 | This is my loop program, running in the background and waiting for a command. #include <iostream>using namespace std;char buffer[256];int main(int argc, char *argv[]){ while(true){ fgets(buffer, 255, stdin); buffer[255] = 0; if(buffer[0] != '\0'){ cout << buffer; buffer[0] = '\0'; } } return 0;} I ran it with: myLoop & Now, how can I pipe a command to this process? | I guess that is impossible with a "real" pipeline. Instead you can use a FIFO (named pipe, see man mkfifo ) or (more elegant but more complicated) a Unix socket (AF_UNIX). ./background-proc </path/to/fifo &cat >/path/to/fifo# typing commands into cat I am not a developer so my only relation to sockets is socat . But that may help as a start. You need a "server" which communicates with your program. Such a pipeline would be started in the background: socat UNIX-LISTEN:/tmp/sockettest,fork STDOUT | sed 's/./&_/g' The sed is just for testing. Then you start one or more socat STDIN UNIX-CONNECT:/tmp/sockettest If you have a program which generates the commands for your background program then you would use a pipeline here, too: cmd_prog | socat STDIN UNIX-CONNECT:/tmp/sockettest The advantage in comparison with a FIFO is that (with the option fork on the server side) you can disconnect and reconnect the client. Using a FIFO you would need tricks for keeping the receiving side running: while true; do cat /path/to/fifo; done | background_prog | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/384159",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245159/"
]
} |
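A minimal FIFO sketch of the first suggestion, assuming the compiled loop program is ./myLoop and using the answer's own trick to keep the reading end alive between writers (paths are illustrative):

    mkfifo /tmp/myloop.fifo
    # keep feeding the program so it never sees end-of-file when a writer disconnects
    while true; do cat /tmp/myloop.fifo; done | ./myLoop &
    # later, from any other shell, send it a line:
    echo "some command" > /tmp/myloop.fifo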
384,181 | Bash manual says The first word of the replacement text is tested for aliases, but a word that is identical to an alias being expanded is not expanded a second time. This means that one may alias ls to ls -F , for instance, and Bash does not try to recursively expand the replacement text. I'm trying to figure out which alias follows "identical to" in the quote, any alias being expanded in the same sequence of alias expansion recursions, or the alias whose expansion was first started, or the alias whose expansion was last started. So I create an example $ alias a1=a2; $ alias a2=a3;$ alias a3=a4; and want to check the alias expansion result of a1 , in the following cases $ alias a4=a1; or $ alias a4=a2; or $ alias a4=a3; How can I check the alias expansion result of a1 , possibly by performing alias expansion on a1 without letting the shell going further than alias expansion? | What the manual says is that the shell will avoid any loop that may be caused by recursion of alias expansion. With your example ( a1=a2=a3=a4 ), if you execute alias a4=a1 you are creating a loop. Then, as soon as you will execute a1 (resp. a2 , a3 , a4 ), once the shell loops back to a1 (resp. a2 , a3 , a4 ) it will search for a command named a1 (resp. a2 , a3 , a4 ) that is NOT an alias (since that would create a never-ending loop). Example: $ a1() { echo Phew, I got out of the loop; }$ alias a1='echo "(a1)"; a2' a2='echo "(a2)"; a3'$ alias a3='echo "(a3)"; a4' a4='echo "(a4)"; a1'$ a1(a1)(a2)(a3)(a4)Phew, I got out of the loop$ a2 # Command a2 does not exist anywhere(a2)(a3)(a4)(a1)a2: command not found | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/384181",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
384,303 | The default target returned by systemctl [user@host system]$ systemctl get-defaultmulti-user.target differs from the value of the /usr/lib/systemd/system/default.target link: [user@host system]$ ls -l /usr/lib/systemd/system/default.targetlrwxrwxrwx. 1 root root 16 Mar 10 21:20 /usr/lib/systemd/system/default.target -> graphical.target My understanding was that these were one and the same. If systemd doesn't store the default value as the default.target symlink, where is the real value of the default target stored by systemd? | This is most likely because /etc/systemd/system/default.target exists and points to multi-user.target If you change the default.target with systemctl set-default [unit] , the new default.target link is created in /etc/systemd/system/ . The existing /usr/lib/systemd/system/default.target is not changed when using the set-default command. Like with all systemd units, the ones in /etc take precedence over /usr . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/384303",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/182211/"
]
} |
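A short sketch of inspecting and changing the effective default; systemctl set-default (re)creates the symlink under /etc/systemd/system/, which is the one that wins:

    systemctl get-default
    ls -l /etc/systemd/system/default.target      # overriding link, if present
    ls -l /usr/lib/systemd/system/default.target  # vendor default
    # switch the machine to the graphical target
    sudo systemctl set-default graphical.target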
384,314 | I have an external USB-drive which is giving me the following output on running the command $ smartctl /dev/sdb -H on it: SMART Status not supported: Incomplete response, ATA output registers missingSMART overall-health self-assessment test result: PASSED Warning: This result is based on an Attribute check. Could you elaborate if this is something to worry about or if it is just a wrong setting? Generally, what is the meaning of the health status in simplified form? Maybe as a relevant aside: The short and long tests finish without issues. | I haven't seen this kind of warning you've got, yet. But apparently it means that smartctl only evaluated the attribute table (see below) because there is no further information from SMART explicitly about the health which is typically a part of the ATA protocol. The response overall is considered not reliable in this case by the author of smartmontools. Drives attached directly to a SATA controller work better with SMART from what I've seen so far. As concerns the attribute table, when you take a look at a SMART attribute output with smartctl -A /dev/XXX , you'll see three columns VALUE , WORST and THRESH . Here a part of such an output: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0 3 Spin_Up_Time 0x0027 189 182 021 Pre-fail Always - 5508 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 18 The first column VALUE tells you the current value of the attribute. The WORST column tells you the worst (typically lowest) value SMART has ever seen. The THRESH column tells you what the vendors considers as lowest possible value considered as healthy. If the WORST column shows values below THRESH in same row, the drive is considered as not healthy. It also implies that VALUE has been seen below THRESH , of course. You can also see that only the attributes of type Pre-fail matter when evaluating health. Other thresholds are simply set to 0 and their attributes cannot fail. This table is all that smartctl used for the analysis of the drive's health. And it is not really the correct way to do it right. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/384314",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102045/"
]
} |
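As a rough illustration of the attribute check described above, a hedged awk sketch that flags any Pre-fail attribute whose WORST value has fallen below THRESH; the column numbers match the smartctl -A sample in the answer, and /dev/sdX is a placeholder:

    smartctl -A /dev/sdX | awk '$7 == "Pre-fail" && ($5 + 0) < ($6 + 0) {
        print "attribute", $1, $2 ": worst", $5, "is below threshold", $6
    }'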
384,325 | I want to create a new file or directory with ranger. I suppose I could use mkdir or touch , but I'm not sure if these would go in the current directory as viewed in ranger. | To create a directory in ranger , just type :mkdir exampledir or, :touch examplefile | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/384325",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124109/"
]
} |
384,345 | In an image browser called gpicview ( http://lxde.sourceforge.net/gpicview/ ) I can enable fullscreen mode by pressing a button with a square and four arrows. But how can I disable the fullscreen mode? | To create a directory in ranger , just type :mkdir exampledir or, :touch examplefile | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/384345",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245275/"
]
} |
384,363 | So I'm using etckeeper on my machine running Debian 9.1 with KDE and would like to view diffs (or if that isn't yet implemented: past versions) of specific files. How can I do that? | By default, with etckeeper , /etc is a git repository, so you can use git tools to view its contents (and the changes). For example, you can use gitk (after installing it) to browse the repository’s history, and if you want to focus on a specific file, you can specify it on the command line: cd /etcgitk apt/sources.list & Since you’re a KDE user, you might find qgit nicer. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/384363",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233262/"
]
} |
384,371 | Input file looks something like this: chr1 1 G 300chr1 2 A 500chr1 3 C 200chr4 1 T 35chr4 2 G 400chr4 3 C 435chr4 4 A 223chr4 5 T 400chr4 6 G 300chr4 7 G 340chr4 8 C 400 The actual file is too big to process, so I want to output a smaller file filtering by chromosome (column 1) and position (column 2) within a specific range. For example, I'm looking for a Linux command (sed, awk, grep, etc.) that will filter by chr4 from positions 3 to 7. The desired final output is: chr4 3 C 435chr4 4 A 223chr4 5 T 400chr4 6 G 300chr4 7 G 340 I don't want to modify the original file. | The solution for potentially unsorted input file: sort -k1,1 -k2,2n file | awk '$1=="chr4" && $2>2 && $2<8' The output: chr4 3 C 435chr4 4 A 223chr4 5 T 400chr4 6 G 300chr4 7 G 340 If the input file is sorted it's enough to use: awk '$1=="chr4" && $2>2 && $2<8' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/384371",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245288/"
]
} |
384,457 | I add this line to visudo , in order to give full permissions to yael user: yael ALL=(ALL) NOPASSWD: ALL But when I want to update the /etc/hosts file, I get permission denied: su – yael echo "10.10.10.10 yael_host">>/etc/hosts -bash: /etc/hosts: Permission denied sudo echo "10.10.10.10 yael_host">>/etc/hosts-bash: /etc/hosts: Permission denied ls -ltr /etc/hosts -rw-r--r--. 1 root root 185 Aug 7 09:29 /etc/hosts How can I give to user yael ability like root? | The source of the problem is that the output redirection is done by the shell (user yael) and not by sudo echo . In order to enforce that the writing to /etc/hosts will be done by user root instead of user yael - You can use the following format: echo "10.10.10.10 yael_host" | sudo tee --append /etc/hosts or sudo sh -c "echo '10.10.10.10 yael_host'>>/etc/hosts" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/384457",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
384,461 | I am trying to set up a privileged container on Ubuntu where apparmor denies to mount /run /run/lock and / sys/fs/cgroup while running lxc-start . Violations [ 1621.278919] audit: type=1400 audit(1499177276.634:12): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="/usr/bin/lxc-start" name="/run/" pid=2097 comm="systemd" fstype="tmpfs" srcname="tmpfs" flags="rw, nosuid, nodev, strictatime"[ 1621.302331] audit: type=1400 audit(1499177276.658:13): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="/usr/bin/lxc-start" name="/run/lock/" pid=2097 comm="systemd" fstype="tmpfs" srcname="tmpfs" flags="rw, nosuid, nodev, noexec"[ 1621.325944] audit: type=1400 audit(1499177276.682:14): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="/usr/bin/lxc-start" name="/sys/fs/cgroup/" pid=2097 comm="systemd" fstype="tmpfs" srcname="tmpfs" flags="rw, nosuid, nodev, noexec, strictatime" lxc-start --version : 2.0.6 Kernel version: 4.9 Any hints? | The source of the problem is that the output redirection is done by the shell (user yael) and not by sudo echo . In order to enforce that the writing to /etc/hosts will be done by user root instead of user yael - You can use the following format: echo "10.10.10.10 yael_host" | sudo tee --append /etc/hosts or sudo sh -c "echo '10.10.10.10 yael_host'>>/etc/hosts" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/384461",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70728/"
]
} |
384,488 | Device files are not files per se. They're an I/O interface to use the devices in Unix-like operating systems. They use no space on disk, however, they still use an inode as reported by the stat command: $ stat /dev/sda File: /dev/sda Size: 0 Blocks: 0 IO Block: 4096 block special fileDevice: 6h/6d Inode: 14628 Links: 1 Device type: 8,0 Do device files use physical inodes in the filesystem and why they need them at all? | The short answer is that it does only if you have a physical filesystem backing /dev (and if you're using a modern Linux distro, you probably don't). The long answer follows: This all goes back to the original UNIX philosophy that everything is a file. This philosophy is part of what made UNIX so versatile, because you could directly interact with devices from userspace without needing to have special code in your application to talk directly to the physical hardware. Originally, /dev was just another directory with a well-known name where you put your device files. Some UNIX systems still take this approach (I believe OpenBSD still does), and you can usually tell if a system is like this because it will have lots of device files for devices the system doesn't actually have (for example, files for every possible partition on every possible disk). This saves space in memory and time at boot at the cost of using a bit more disk space, which was a good trade off for early systems because they were generally very memory constrained and not very fast. This is generally referred to as having a static /dev . On modern Linux systems (and I believe also FreeBSD and possibly recent versions of Solaris), /dev is a temporary in-memory filesystem populated by the kernel (or udev if you use Systemd, because they don't trust the kernel to do almost anything). This saves some disk space at the price of some memory (usually less than a few MB) and a very small processing overhead. It also has a number of other advantages, with one of the biggest being that it's easier to detect hot-plugged hardware. This is generally referred to as having a dynamic /dev . In both cases though, device nodes are accessed through the regular VFS layer, which by definition means they have to have an inode (even if it's a virtual one that just exists so that stuff like stat() works like it's supposed to. From a practical perspective, this has zero impact on systems that use a dynamic /dev because they just store the inodes in memory or generate them as needed, and near zero impact where /dev is static because inodes take up near zero space on-disk and most filesystems either have no upper limit on them or provision way more than anybody is likely to ever need. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/384488",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233788/"
]
} |
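To see which situation a given machine is in, one can check what actually backs /dev; on most current Linux systems these report devtmpfs (a dynamic, in-memory /dev) rather than an on-disk filesystem (a hedged sketch using util-linux findmnt and GNU stat):

    findmnt /dev
    stat -f -c %T /dev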
384,499 | The unit file works when manually started. systemctl --user enable does not auto-start the service after user login. Unit File [Unit]Description = VNC Duplicate Display RDPAfter = default.target[Service]Type = simpleExecStart = /opt/tigervnc/usr/bin/x0vncserver -passwordfile /etc/.vncpasswd -display :0TimeoutSec = 30RestartSec = 10Restart = always[Install]WantedBy = default.target I have reloaded and reenabled this unit $ systemctl --user daemon-reload$ systemctl --user reenable x0vncserver Status Status after user login ● x0vncserver.service - VNC Duplicate Display RDP Loaded: loaded (/usr/lib/systemd/user/x0vncserver.service; enabled; vendor preset: enabled) Active: inactive (dead) Target Status $ systemctl --user --type targetUNIT LOAD ACTIVE SUB DESCRIPTIONbasic.target loaded active active Basic Systemdefault.target loaded active active Defaultpaths.target loaded active active Pathssockets.target loaded active active Socketstimers.target loaded active active TimersLOAD = Reflects whether the unit definition was properly loaded.ACTIVE = The high-level unit activation state, i.e. generalization of SUB.SUB = The low-level unit activation state, values depend on unit type.5 loaded units listed. Pass --all to see loaded but inactive units, too.To show all installed unit files use 'systemctl list-unit-files'. Manual Start $ systemctl --user start x0vncserver$ systemctl --user status x0vncserver● x0vncserver.service - VNC Duplicate Display RDP Loaded: loaded (/usr/lib/systemd/user/x0vncserver.service; enabled; vendor preset: enabled) Active: active (running) since Mon 2017-08-07 18:27:00 IST; 5s ago Main PID: 2999 (x0vncserver) CGroup: /user.slice/user-1004.slice/[email protected]/x0vncserver.service └─2999 /opt/tigervnc/usr/bin/x0vncserver -passwordfile /etc/.vncpasswd -display :0Aug 07 18:27:00 Machine systemd[930]: Started VNC Duplicate Display RDP.Aug 07 18:27:00 Machine x0vncserver[2999]: Mon Aug 7 18:27:00 2017Aug 07 18:27:00 Machine x0vncserver[2999]: Geometry: Desktop geometry is set to 1920x1080+0+0Aug 07 18:27:00 Machine x0vncserver[2999]: Main: XTest extension present - version 2.2Aug 07 18:27:00 Machine x0vncserver[2999]: Main: Listening on port 5900 References I've looked around and found users with similar issues but none of the proposed solutions fixed my issue https://stackoverflow.com/questions/39871883/systemctl-status-shows-inactive-dead https://bbs.archlinux.org/viewtopic.php?id=170344 Why is my Systemd unit loaded, but inactive (dead)? https://github.com/systemd/systemd/issues/4301 https://github.com/systemd/systemd/issues/2690 https://superuser.com/questions/955922/enabled-systemd-unit-does-not-start-at-boot Update This happens to a particular user. systemctl --user enable works for at least one other user on the same device. | The short answer is that it does only if you have a physical filesystem backing /dev (and if you're using a modern Linux distro, you probably don't). The long answer follows: This all goes back to the original UNIX philosophy that everything is a file. This philosophy is part of what made UNIX so versatile, because you could directly interact with devices from userspace without needing to have special code in your application to talk directly to the physical hardware. Originally, /dev was just another directory with a well-known name where you put your device files. 
Some UNIX systems still take this approach (I believe OpenBSD still does), and you can usually tell if a system is like this because it will have lots of device files for devices the system doesn't actually have (for example, files for every possible partition on every possible disk). This saves space in memory and time at boot at the cost of using a bit more disk space, which was a good trade off for early systems because they were generally very memory constrained and not very fast. This is generally referred to as having a static /dev . On modern Linux systems (and I believe also FreeBSD and possibly recent versions of Solaris), /dev is a temporary in-memory filesystem populated by the kernel (or udev if you use Systemd, because they don't trust the kernel to do almost anything). This saves some disk space at the price of some memory (usually less than a few MB) and a very small processing overhead. It also has a number of other advantages, with one of the biggest being that it's easier to detect hot-plugged hardware. This is generally referred to as having a dynamic /dev . In both cases though, device nodes are accessed through the regular VFS layer, which by definition means they have to have an inode (even if it's a virtual one that just exists so that stuff like stat() works like it's supposed to. From a practical perspective, this has zero impact on systems that use a dynamic /dev because they just store the inodes in memory or generate them as needed, and near zero impact where /dev is static because inodes take up near zero space on-disk and most filesystems either have no upper limit on them or provision way more than anybody is likely to ever need. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/384499",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/161627/"
]
} |
384,613 | I'm building a function that will calculate the gauge of wire required given amperage, distance (in feet), and allowable voltage drop. I can calculate the "circular mils" given those values and with that get the AWG requirement . I started building a large if elif statement to compare the circular mils to its respective gauge but I believe case is the right tool for this. I haven't yet found any examples of case being used to compare numbers though so I'm wondering if it's even possible to do something like below: what.gauge () { let cmils=11*2*$1*$2/$3 let amils=17*2*$1*$2/$3 case $cmils in 320-403) cawg="25 AWG" ;; 404-509) cawg="24 AWG" ;; 510-641) cawg="23 AWG" ;; etc...} | case $cmils in 3[2-9][0-9]|40[0-3]) cawg="25 AWG" ;; 40[4-9]|4[1-9][0-9]|50[0-9]) cawg="24 AWG" ;; 5[1-9][0-9]|6[0-3][0-9]|64[01]) cawg="23 AWG" ;; | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/384613",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237982/"
]
} |
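Since case only does glob-style pattern matching rather than numeric comparison, an if/elif chain with arithmetic tests is a common alternative; a sketch that keeps the question's variable names (function renamed to use an underscore):

    what_gauge () {
        cmils=$(( 11 * 2 * $1 * $2 / $3 ))
        if   [ "$cmils" -ge 320 ] && [ "$cmils" -le 403 ]; then cawg="25 AWG"
        elif [ "$cmils" -ge 404 ] && [ "$cmils" -le 509 ]; then cawg="24 AWG"
        elif [ "$cmils" -ge 510 ] && [ "$cmils" -le 641 ]; then cawg="23 AWG"
        fi
    }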
384,614 | Some time ago I wrote a bash script which now should be able to run in an environment with ash . In bash it was like: services=( "service1.service" "service2.service" "service3.service" ) for service in "${services[@]}"do START $service doneSTART(){ echo "Starting "$1 systemctl start $1} In reality there are like 40 services in the array, and I want to make this transition as painless and clean as possible. I have always been using bashisms. Now I'm in a pinch with the task of making the scripts more portable. For portability reasons it would probably be nice to have a pure ash solution. But since I have a pretty robust busybox at my disposal I might sacrifice some portability, but only if readability improves a lot, since a "clean" script is a metric too. What would be a portable and clean solution in this case? | Before arrays were in bash , ksh , and other shells, the usual method was to pick a delimiter that wasn't in any of the elements (or one that was uncommon to minimise any required escaping), and iterate over a string containing all the elements, separated by that delimiter. Whitespace is usually the most convenient delimiter choice because the shell already splits "words" by whitespace by default (you can set IFS if you want it to split on something different). For example: # backslash-escape any non-delimiter whitespace and all other characters that# have special meaning to the shell, e.g. globs, parentheses, ampersands, etc.services='service1.service service2.service service3.service'for s in $services ; do # NOTE: do not double-quote $services here. START "$s"done $services should NOT be double-quoted here because we want the shell to split it into "words". | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/384614",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/238442/"
]
} |
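Another portable option along the same lines is to reuse the positional parameters; a minimal sketch for ash/POSIX sh, assuming none of the service names contain whitespace:

    set -- service1.service service2.service service3.service
    for service in "$@"; do
        START "$service"
    done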
384,619 | My arch was installed on sda3, debian was installed on sda2. Boot with grub and select menu arch to enter into arch. sudo blkid/dev/sda1: UUID="7E91-CA50" TYPE="vfat" PARTUUID="e0c51e12-9954-4cb9-ae62-cebdec976e88"/dev/sda3: UUID="a872403e-0f73-4c64-8530-0f286fe6a4ee" TYPE="ext4" PARTLABEL="arch" PARTUUID="4329e96c-6d71-4259-9f2a-534b130aae65"/dev/sda4: UUID="eb4181c2-93ee-4f2d-8e27-5c40512b5293" TYPE="swap" PARTUUID="03d13d65-9504-4703-97e8-794171f3a9a7"/dev/sda2: PARTLABEL="debian" PARTUUID="4bfda6e3-70fa-4316-a01e-475c53e0b51b"/dev/sda5: PARTUUID="9a1fdb1d-a3c3-494a-a43f-24215320e2cc"sudo fdisk -lDisk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: gptDisk identifier: F29018A3-5A1D-41A5-B30D-DEF536D2E361Device Start End Sectors Size Type/dev/sda1 2048 117186559 117184512 55.9G Microsoft basic data/dev/sda2 117186560 195311615 78125056 37.3G BIOS boot/dev/sda3 195311616 312500223 117188608 55.9G Linux filesystem/dev/sda4 312500224 314454015 1953792 954M Linux filesystem/dev/sda5 314454016 332031999 17577984 8.4G Linux filesystem I want to mount /dev/sda2 on directory /tmp. sudo mount -t boot -o rw /dev/sda2 /tmpsudo mount -o rw /dev/sda2 /tmp None of them can mount it.Why to write the -t argument with boot? The type info on /dev/sda2 in sudo fdisk -l . Device Start End Sectors Size Type/dev/sda2 117186560 195311615 78125056 37.3G BIOS boot | Before arrays were in bash , ksh , and other shells, the usual method was to pick a delimiter that wasn't in any of the elements (or one that was uncommon to minimise any required escaping), and iterate over a string containing all the elements, separated by that delimiter. Whitespace is usually the most convenient delimiter choice because the shell already splits "words" by whitespace by default (you can set IFS if you want it to split on something different). For example: # backslash-escape any non-delimiter whitespace and all other characters that# have special meaning to the shell, e.g. globs, parenthesis, ampersands, etc.services='service1.service service2.service service3.service'for s in $services ; do # NOTE: do not double-quote $services here. START "$s"done $services should NOT be double-quoted here because we want the shell to split it into "words". | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/384619",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/243284/"
]
} |
384,621 | I would like to make this command xrandr -s 640x480 use variables like so #!/bin/bashdisplay_x=640display_y=480xrandr -s $display_xx$display_y The command does not run correctly. How can I do this? | #!/bin/bashdisplay_x=640display_y=480xrandr -s ${display_x}x${display_y} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/384621",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/95387/"
]
} |
384,676 | How to remove the comma (,) between two words? How can I place those two words in two different rows? This is my input: ent0ent4ent1,ent5ent2,ent6ent3,ent7ent29,ent30 | tr ',' '\n' would replace all , s in your input file with line breaks, that sounds like it is what you want. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/384676",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245509/"
]
} |
384,690 | How can I split a word's letters and in between one single space, with each last four letters in a line?For example, Given, 1. placing 2. backtick 3. paragraphs I would like to see in below 1. pla cing 2. back tick 3. pa ragr aphs | tr ',' '\n' would replace all , s in your input file with line breaks, that sounds like it is what you want. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/384690",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197308/"
]
} |
384,713 | I have a script which dumps out database and uploads the SQL file to Swift. I've run into the issue where the script runs fine in terminal but fails in cron. A bit of debugging and I found that the /usr/local/bin/swift command is not found in the script. Here's my crontab entry: */2 * * * * . /etc/profile; bash /var/lib/postgresql/scripts/backup Here's what I've tried: Using full path to swift as /usr/local/bin/swift Executing the /etc/profile script before executing the bash script. How do I solve this? | Cron doesn't run with the same environment as a user. If you do the following you will see what I mean: type env from your terminal prompt and make note of the output. Then set a cron job like this and compare it's output to the previous: */5 * * * * env > ~/output.txt You will find that the issue is likely because crontab does not have the same PATH variable as your user. As a solution to this you can (from your postgres user) echo $PATH and then copy the results of that to the first line of your crontab (something like this) PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/jbutryn/.local/bin:/home/jbutryn/bin Or if you want to be more specific you can simply add the following: PATH=$PATH:/usr/local/bin However I normally always put my user's PATH in crontab because I haven't yet heard a good reason not to do so and you will likely run into this issue again if you don't. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/384713",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/79830/"
]
} |
384,763 | I want to delete everything under some directory /path/to/foo , EXCEPT those sub-directories that match the meta-pattern /path/to/foo/<DIGITS>/ For example, if the contents under /path/to/foo are initially like this: /path/to/foo├── 0/│ ├── a│ └── b├── 232532/├── 42├── 73/│ ├── d│ └── e├── 8xyz/│ ├── i│ └── j├── _bar/│ ├── x│ ├── y│ └── z├── .baz/│ ├── f│ └── frobozz/│ ├── g│ └── h└── quux/ └── 123/ ...I want to end up with /path/to/foo├── 0/│ ├── a│ └── b├── 232532/└── 73/ ├── d └── e I'm looking for a find ... -delete -based incantation, or a suitable zsh glob pattern (for rm -r ), that will do this. I am using Linux. | With zsh : set -o extendedglob # best in ~/.zshrcrm -rf /path/to/foo/^<->(D) /path/to/foo/<->(^-/) ^something is not something (similar to ksh 's !(something) ) <-> is <x-y> to match decimal integers from x to y , but with none of the bounds provided (so matches any sequence of decimal digits, similar to ksh 's +([0-9]) ). (D) a glob qualifier to include hidden files ( D ot files) (^-/) a glob qualifier to say only files that are not of type directory after symlink resolution (remove the - if you also want to remove symlinks to directories). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/384763",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
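The question also asked for a find-based incantation; an untested, hedged sketch for GNU find (the -regextype option is GNU-specific) that removes every top-level entry which is not a directory with an all-digit name:

    find /path/to/foo -mindepth 1 -maxdepth 1 -regextype posix-extended \
         ! \( -type d -regex '.*/[0-9]+' \) -exec rm -rf -- {} +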
384,766 | I have a list of files that are space separated and I want to use the touch command to update their timestamps in that order. But when I supply the filenames as arguments, the timestamps get updated in a different order. touch 1.txt 2.txt 3.txt 4.txt 5.txt 6.txt 7.txt 8.txt 9.txt 10.txt 11.txt 12.txt After running the command above and running ls -t (sorting by time modified) I get the following: 1.txt 10.txt 11.txt 12.txt 2.txt 3.txt 4.txt 5.txt 6.txt 7.txt 8.txt 9.txt Does supplying arguments to commands not guarantee the execution order? If not, how can I update the timestamps of those files in that specific order? | With no time specified, touch changes the timestamps of all its arguments to the current time at the time each file is touched, which should produce a different timestamp for each file, but in many cases this ends up applying the same timestamp to all its arguments; you can verify this by running stat on all the touched files. They are processed in the order specified on the command line. To get the result you want, you need to loop and touch each file individually, with some delay: for file in {1..12}.txt; do touch $file; sleep 0.1; done (with more or less delay depending on the timestamp resolution of the underlying file system). Note that ls -t lists files sorted by descending timestamp; to see increasing times you need to use ls -rt . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/384766",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245580/"
]
} |
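With GNU touch the timestamps can also be set explicitly, one second apart, instead of sleeping between calls; a hedged sketch counting up from the current epoch time:

    now=$(date +%s)
    i=0
    for file in {1..12}.txt; do
        touch -d "@$(( now + i ))" "$file"
        i=$(( i + 1 ))
    done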
384,796 | I am on Xubuntu (16.04, btw) however lsb_release -a gives me: user@host:~$ lsb_release -aNo LSB modules are available.Distributor ID: UbuntuDescription: Ubuntu 16.04.3 LTSRelease: 16.04Codename: xenial Should it display Xubuntu , or is this legit output? And if it is, how can I (by the CLI) know the exact Ubuntu distro I'm on? And what does the warning No LSB modules are available. mean anyway? (It is output to stderr : user@host:~$ lsb_release -a >/dev/nullNo LSB modules are available. so I'm guessing in nominal case it shouldn't be there) | As a general rule, you should not be writing software that cares what 'flavor' of Ubuntu you're on. The only difference between Ubuntu, Kubuntu, Xubuntu, and most of the other upstream supported versions is the desktop environment and what high-level software is installed by default. They are all Ubuntu at the core, so the lsb_release output is correct, and once the system is installed it doesn't really matter (you can easily convert between variants with just a few apt-get commands). If you actually need specific software that is only installed on one variant by default, you should be checking for that software, not the variant itself (for example, if you need GNOME 3 for your software to work, you should depend on that, not on stock Ubuntu). If you absolutely have to check for a specific variant, each one has it's own metapackage in the package manager (for Xubuntu for example, you should see a package called xubuntu-meta in a list of installed packages). This is not a reliable way to check though because such metapackages aren't mutually exclusive (and I've seen terminal servers that have most of them installed just to let users choose their desktop environment) and can be manually installed or removed after the initial install. As for the No LSB modules are available. bit, you can safely ignore that if you're not using software that requires LSB compliance (and no sane software does these days). the LSB standard itself was originally intended to define a common platform for Linux distributions so that third-party software could easily chec in a distro-agnostic way if their dependencies were met. 'Modules' referred to optional subcomponents of the LSB standard. The whole thing is largely irrelevant today (except for the lsb_release command being the only reliable way to determine the distro you're on) because it's not been updated in years and includes things like Qt 3, and it was never well supported on Ubuntu or Debian systems to begin with (because it was largely based on Red Hat). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/384796",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148149/"
]
} |
384,971 | I'm setting up a docker container and there are two places where the timezone are set: /etc/localtime /etc/timezone Can anybody explain me what's the difference between them and what are both used for? | /etc/timezone is a text-based representation of what timezone you are in. This could be expressed as an offset from GMT/UTC, but more often it's the path under /usr/share/zoneinfo that points to the appropriate timezone data file (for example, if you're in most places in the Eastern US, this will be America/New_York or US/Eastern ). The main purpose of this is to make sure that /etc/localtime gets updated correctly when the data files in /usr/share/zoneinfo are updated (although some systems make /etc/localtime a symbolic link pointing to the correct file there) and to provide a quick user-friendly name for the timezone ( US/Eastern is a lot more user friendly than EST or EDT ). Only some systems actually use this file. /etc/localtime is a binary representation of the exact rules for calculating the time relative to UNIX time (the internal representation used by the kernel, which is measured as seconds since 1970-01-01 00:00:00 UTC). This includes things like the normal offset from UTC, as well as the rules for daylight saving time (when it starts and ends and what offset it applies), as well as the rules for leap day, and annotating how many leap seconds have been observed. This gets used by things like the date command (and its equivalent functions in various programming languages) to show you exactly what time it is locally. All Linux systems with a conventional userspace use this file. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/384971",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/243135/"
]
} |
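A few commands for inspecting (and, on systemd machines, changing) both pieces of state; the timezone name is only an example, and whether /etc/timezone gets rewritten is distribution-specific:

    cat /etc/timezone              # the text name, where the distribution maintains it
    readlink -f /etc/localtime     # where the binary rules really come from
    sudo timedatectl set-timezone America/New_York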
384,979 | I have a file that looks like this: 2017-07-30 A2017-07-30 B2017-07-30 B2017-07-30 A2017-07-30 A2017-07-30 C2017-07-31 A2017-07-31 B2017-07-31 C2017-07-31 B2017-07-31 C Each line represent an event (A, B, or C) and the day it occured on.I want to count the number events per type for each day.This can be done with sort file | uniq -c , giving output like this: 3 2017-07-30 A 2 2017-07-30 B 1 2017-07-30 C 1 2017-07-31 A 2 2017-07-31 B 2 2017-07-31 C However, I would like to have each event type as a column: A B C2017-07-30 3 2 12017-07-31 1 2 2 Is there a reasonably common command line tool that can do this? If necessary, it can be assumed that all event types (A, B, C) are known in advance, but it's better if it isn't necessary.Likewise it can be assumed that each event occurs at least once per day (meaning no zeros in the output), but here too it's better if it isn't necessary. | If "reasonably common" includes GNU datamash , then datamash -Ws crosstab 1,2 < file ex. $ datamash -Ws crosstab 1,2 < file A B C2017-07-30 3 2 12017-07-31 1 2 2 (unfortunately the formatting of this site doesn't preserve tabs - the actual output is tab aligned). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/384979",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14657/"
]
} |
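If datamash is not available, the same cross-tabulation can be sketched in GNU awk (4.0 or later, for arrays of arrays and sorted iteration); missing combinations print as 0:

    awk '
      BEGIN { PROCINFO["sorted_in"] = "@ind_str_asc" }
      { n[$1][$2]++; dates[$1]++; events[$2]++ }
      END {
        for (e in events) printf "\t%s", e
        print ""
        for (d in dates) {
          printf "%s", d
          for (e in events) printf "\t%d", n[d][e]
          print ""
        }
      }' file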
385,009 | In Ubuntu 16.04 I've added the following code to /etc/bash.bashrc : alias ll="ls -la --group-directories-first" I then rebooted. Note: I used /etc/bash.bashrc because I needed all aliases whatsoever in the one file and available for all users. My intention was to rewrite the "native" ll alias. Yet it wasn't changed; if I go to any dir that includes dirs and files, and I execute ll , I get a list without dirs being sorted above files. In other words. What did I do wrong? | The ll alias is defined in the default .bashrc . An alias definition is a command. Bash is an imperative language, it executes commands one after the other. If there are multiple definitions for the same alias, the alias is redefined each time the shell executes one of the definitions. Thus the last definition wins. When bash starts, it reads the system file /etc/bash.bashrc before the user file ~/.bashrc . Thus a definition in ~/.bashrc overrides any definition of the same alias in /etc/bash.bashrc . You can't (sanely) do anything in /etc/bash.bashrc to prevent a redefinition in ~/.bashrc . It doesn't make sense to impose convenience aliases on users. That's why ll is defined in ~/.bashrc and not in /etc/bash.bashrc . So instead of putting your preferred definition in the system file, put it in your user file. You could change the default .bashrc — that's /etc/skel/.bashrc . This file is copied to a user's home directory when the user's account is created. Changing a file in /etc/skel has no impact on already-existing accounts. But even that is not a good idea since what you're defining is a personal preference. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/385009",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
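In practice the fix is simply to (re)define the alias in the user file that wins; a minimal sketch for a single user:

    # append to your own ~/.bashrc, then reload it
    echo 'alias ll="ls -la --group-directories-first"' >> ~/.bashrc
    . ~/.bashrc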