source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
387,771 | Given a pattern of the form: a\b\c\d:text\text how can I use sed to output a/b/c/d:text\text ? That is, given a path using backslashes followed by a colon followed by arbitrary text, how do I replace only the backslashes preceding the colon with forward slashes? | Problem 1: In your example, read does not get its input from a command line argument, but from stdin. As such, the input it receives does not go through bash 's string parser. Instead, it is treated as a literal string, delimited by spaces. So with your input, your array values become:
[0]->"apple
[1]->fruit"
[2]->"orange"
[3]->"grapes"
To do what you want, you need to escape any spaces you have, to keep the delimiter from kicking in. Namely, you must enter the following input after invoking read :
apple\ fruit oranges grapes
Problem 2: In order for read to store the input it receives as an array, you must have an -a switch followed by the array name. So you need:
read -a myarray -p "Enter your items" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/387771",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/247745/"
]
} |
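A hedged sketch for the sed question above (this is an assumption, not the thread's answer): a label loop keeps rewriting the leftmost backslash that still has the colon somewhere after it, so backslashes after the colon are never touched.

```
printf '%s\n' 'a\b\c\d:text\text' | sed -e ':a' -e 's|\\\(.*:\)|/\1|' -e 'ta'
# -> a/b/c/d:text\text
```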
387,786 | I made a script (name of the file is update ) to update and upgrade in one command. All it is is:
#!/bin/bash
sudo /usr/bin/apt-get update
sudo /usr/bin/apt-get upgrade
I used the full paths, as well as putting this in its own directory, /home/user_name/custom_scripts . I also made sure this directory is owned by root ; the permissions are listed as
drwxr-xr-x. 2 root root 4096 Aug 23 00:12 custom_scripts
and the executable script is:
-rwx------. 1 root root 73 Aug 23 00:12 update
I edited my path to look like this:
/home/user_name/custom_scripts:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
Yet for some reason this won't execute if I type sudo update . The weirdest thing is that if I just try update , I get a permission denied error. I'm not really sure what's wrong. | Problem 1: In your example, read does not get its input from a command line argument, but from stdin. As such, the input it receives does not go through bash 's string parser. Instead, it is treated as a literal string, delimited by spaces. So with your input, your array values become:
[0]->"apple
[1]->fruit"
[2]->"orange"
[3]->"grapes"
To do what you want, you need to escape any spaces you have, to keep the delimiter from kicking in. Namely, you must enter the following input after invoking read :
apple\ fruit oranges grapes
Problem 2: In order for read to store the input it receives as an array, you must have an -a switch followed by the array name. So you need:
read -a myarray -p "Enter your items" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/387786",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/212657/"
]
} |
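A sketch of the two fixes implied by the question above (the exact sudoers settings are system-dependent assumptions): the script shown is only executable by root, and sudo usually replaces PATH with its secure_path setting, so a user-added PATH entry is ignored.

```
# let your user execute the root-owned script
sudo chmod 755 /home/user_name/custom_scripts/update
# sudo typically resets PATH via the secure_path setting in /etc/sudoers,
# so either call the script by its full path ...
sudo /home/user_name/custom_scripts/update
# ... or add the directory to secure_path using visudo
```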
387,847 | I've got the following script:
#!/bin/bash
echo "We are $$"
trap "echo HUP" SIGHUP
cat # wait indefinitely
When I send SIGHUP (using kill -HUP pid ), nothing happens. If I change the script slightly:
#!/bin/bash
echo "We are $$"
trap "kill -- -$BASHPID" EXIT # add this
trap "echo HUP" SIGHUP
cat # wait indefinitely
...then the script does the echo HUP thing right as it exits (when I press Ctrl+C):
roger@roger-pc:~ $ ./hupper.sh
We are 6233
^CHUP
What's going on? How should I send a signal (it doesn't necessarily have to be SIGHUP ) to this script? | The Bash manual states: If bash is waiting for a command to complete and receives a signal for which a trap has been set, the trap will not be executed until the command completes. That means that although the signal is received by bash when you send it, your trap on SIGHUP will be called only when cat ends. If this behavior is undesirable, then either use bash builtins (e.g. read + printf in a loop instead of cat ) or use background jobs (see Stéphane's answer ). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/387847",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46851/"
]
} |
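A minimal sketch of the background-job workaround mentioned at the end of the answer (assuming bash): wait is a builtin, so a trapped signal interrupts it immediately instead of waiting for cat to finish.

```
#!/bin/bash
echo "We are $$"
trap 'echo HUP' SIGHUP
cat &        # run the blocking command as a background job
wait "$!"    # interrupted by the trap; loop over wait if cat should keep running
```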
387,855 | I'm constantly having the situation where I want to correlate the output of lsblk , which prints devices in a tree with their names in the scheme of /dev/sdXY , with the drives' /dev/disk/by-id/ names. | The by-id names consist of the drive model together with the serial number, something which lsblk can be instructed to list:
lsblk -o name,model,serial
The output of this command will look something like this:
NAME   MODEL            SERIAL
sda    SAMSUNG HD203WI  S1UYJ1VZ500792
├─sda1
└─sda9
sdb    ST500DM002-1BD14 W2APGFP8
├─sdb1
└─sdb9
sdc    ST500DM002-1BD14 W2APGFS0
├─sdc1
└─sdc9
For posterity, here's also a longer command with some commonly used columns:
sudo lsblk -o name,size,fstype,label,model,serial,mountpoint
The output of which could be:
NAME     SIZE   FSTYPE     LABEL   MODEL            SERIAL         MOUNTPOINT
sda      1,8T   zfs_member         SAMSUNG HD203WI  S1UYJ1VZ500792
├─sda1   1,8T   zfs_member storage                                 /home
└─sda9   8M     zfs_member
sdb      465,8G btrfs              ST500DM002-1BD14 W2APGFP8
├─sdb1   465,8G btrfs
└─sdb9   8M     btrfs
sdc      465,8G btrfs              ST500DM002-1BD14 W2APGFS0
├─sdc1   465,8G btrfs      rpool                                   /
└─sdc9   8M     btrfs | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/387855",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27761/"
]
} |
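To cross-check in the other direction, the by-id entries are just symlinks back to the kernel names; the exact link name below is an assumption based on the model/serial shown above.

```
ls -l /dev/disk/by-id/     # each link resolves to ../../sda, ../../sdb1, ...
readlink -f /dev/disk/by-id/ata-SAMSUNG_HD203WI_S1UYJ1VZ500792   # hypothetical name
```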
387,859 | I've followed this tutorial to see how to use scp to transfer files to my server. And all is well. So I'm using commands like this:
scp examplefile yourusername@yourserver:/home/yourusername/
But I'm wondering if there's a way for me to not have to specify the destination with the prepended /home/yourusername/ . I'm already using the username in the address; is there a way to make the home directory of the remote user the "base" of the file transfer destination? Or, to clarify, I want to be able to send files to the home directory of the user on the remote computer ( yourusername@yourserver:/home/yourusername/ ) with a command like this:
scp examplefile yourusername@yourserver
Is it possible? Feasible? | Using username@server: as the target should be enough, i.e.
scp somefile username@server:
This would copy the file somefile to the server server and place it in the home directory of the user username . By "home directory" I mean whatever directory will be the default one that this user arrives in when logging in on the server system using ssh (most likely /home/username , or /Users/username on macOS, or /usr/home/username on FreeBSD). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/387859",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89959/"
]
} |
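Relatedly, any path after the colon that does not start with / is resolved relative to that same remote home directory, e.g.:

```
scp examplefile yourusername@yourserver:docs/   # lands in ~/docs on the server
```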
387,906 | I have a problem with an uncompleted upgrade on my local Debian install:
Operating System: Debian GNU/Linux buster/sid
Kernel: Linux 4.12.0-1-686-pae
Architecture: x86
I tried to fix it with apt upgrade -f and dpkg --configure lilypond-data , but whatever I do, I get the following message:
Setting up lilypond-data (2.18.2-8) ...
Running mktexlsr /usr/share/texlive/texmf-dist...
mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEDIST...
mktexlsr: Done.
ln: failed to create symbolic link 'lilypond/user': File exists
dpkg: error processing package lilypond-data (--configure):
 subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
 lilypond-data
E: Sub-process /usr/bin/dpkg returned an error code (1)
If I run dpkg --remove --force-remove-reinstreq --dry-run lilypond-data I get:
dpkg: dependency problems prevent removal of lilypond-data:
 lilypond depends on lilypond-data (= 2.18.2-8).
dpkg: error processing package lilypond-data (--remove):
 dependency problems - not removing
Errors were encountered while processing:
 lilypond-data | The general approach would be to look at the (shell script) /var/lib/dpkg/info/lilypond-data.postinst and find the ln line that's failing. Then determine why, and work around it (e.g., by rm ing the existing link, or worst case by editing the postinst). And of course then file a bug. Except someone else has already done so: see bug 871631 . And the bug has been fixed; you just need to grab (and install) 2.18.2-9 from unstable. (Which yields an important lesson: check the bug tracking system before thinking about how to fix it...) Also: you may want to install apt-listbugs. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/387906",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240990/"
]
} |
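If the fixed package were not available, a sketch of the manual workaround described above (the link path is a placeholder; the real one is whatever the failing ln line in the postinst uses):

```
grep -n 'ln ' /var/lib/dpkg/info/lilypond-data.postinst  # locate the failing ln line
# sudo rm /path/from/the/postinst/lilypond/user          # hypothetical: remove the stale link
sudo dpkg --configure lilypond-data                      # then retry the configuration
```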
388,044 | I am trying to sort file names in a directory as below.
$ ls -1v
file-1.10.0-114.1.1.x86.tb1_2.rpm
file-1.10.0-114.2.2.x86.tb1_2.rpm
file-1.10.0-114.11.2.x86.tb1_2.rpm
file-1.10.0-114.x86.tb1_2.rpm
file-1.10.0-115.1.1.x86.tb1_2.rpm
file-1.10.0-115.2.2.x86.tb1_2.rpm
file-1.10.0-115.3.1.x86.tb1_2.rpm
file-1.10.0-115.22.1.x86.tb1_2.rpm
file-1.10.0-115.x86.tb1_2.rpm
But my expectation was the below:
file-1.10.0-114.x86.tb1_2.rpm
file-1.10.0-114.1.1.x86.tb1_2.rpm
file-1.10.0-114.2.2.x86.tb1_2.rpm
file-1.10.0-114.11.2.x86.tb1_2.rpm
file-1.10.0-115.x86.tb1_2.rpm
file-1.10.0-115.1.1.x86.tb1_2.rpm
file-1.10.0-115.2.2.x86.tb1_2.rpm
file-1.10.0-115.3.1.x86.tb1_2.rpm
file-1.10.0-115.22.1.x86.tb1_2.rpm
I tried sort -V , but it showed the same result. How do I sort it this way? | Try using this command:
ls -h | sort -t. -k3,3 -k4,4n
ls -h - standard output of the ls command (you can use ls -lh with the same result, provided the user and group names don't contain dots);
-t. - sets up the separator for the sort command;
-k3,3 - sorts by the third field, and after this ...
-k4,4n - sorts by the fourth field, numerically | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/388044",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/205358/"
]
} |
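A quick check of the suggested pipeline on names like the question's (printf stands in for ls):

```
printf '%s\n' file-1.10.0-114.x86.tb1_2.rpm file-1.10.0-114.2.2.x86.tb1_2.rpm \
  file-1.10.0-114.11.2.x86.tb1_2.rpm | sort -t. -k3,3 -k4,4n
# the extension-less name sorts first (x86 compares as 0 numerically), then 2.2, then 11.2
```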
388,136 | I was splitting the output from id to provide a more readable line-by-line list of the groups of which a user is a member:
id roaima | sed 's/,/\n\t/g'
uid=1001(roaima) gid=1001(roaima) groups=1001(roaima)
        24(cdrom)
        25(floppy)
        ...
        822413650 (international (uk) location)
I wanted to separate the group number from its bracketed name, so I extended the expression like this:
id roaima | sed -e 's/,/\n\t/g' -e '2,$s/(/ (/'
However, this did not act as I initially expected. The second expression appeared to have no effect. Instead, to get the result I wanted, I needed to run two separate sed commands, like this:
id roaima | sed -e 's/,/\n\t/g' | sed '2,$s/(/ (/'
uid=1001(roaima) gid=1001(roaima) groups=1001(roaima)
        24 (cdrom)
        25 (floppy)
        ...
        822413650 (international (uk) location)
Why do I need two sed commands in a pipe rather than one with multiple instructions? Or if I can do this with one sed , how would I do so? What I would particularly like is to have the single space between the UID/GID value and its bracketed name for every single item (including the UID and GIDs on the first line), but the caveat is that in my real data I can have groups containing brackets in their names and I don't want the names themselves mangled. | sed , like awk or cut or perl -ne , works on each line individually, one after the other. sed -e code1 -e code2 is actually run as:
while (patternspace = getline()) {
    linenumber++
    code1
    code2
} continue { print patternspace }
If your code2 is 2,$ s/foo/bar/ , that's:
if (linenumber >= 2) sub(/foo/, "bar", patternspace)
As your input has only one line, the sub() will never be run. Inserting newline characters in the pattern space in code1 doesn't make the linenumber increase. Instead, you have one pattern space with several lines in it while processing the first and only line of input. If you want to do modifications on the second line and over of that multi-line pattern space, you need to do something like:
s/\(\n[^(]*\)(/\1 (/g
Though here of course, you might as well do the two operations in one go:
id | sed 's/,\([^(]*\)(/\n\t\1 (/g' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/388136",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100397/"
]
} |
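A quick way to see the single-pass version from the answer in action (sample id output faked with printf; GNU sed assumed for \n and \t in the replacement):

```
printf '%s\n' 'uid=1001(roaima) gid=1001(roaima) groups=1001(roaima),24(cdrom),25(floppy)' |
  sed 's/,\([^(]*\)(/\n\t\1 (/g'
```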
388,157 | I am trying to partition a USB 3 drive but for some reason parted can't properly set the start sector. The drive is identical to multiple other SATA drives; the only difference is that it's inside a USB 3 enclosure with a 2-port hub integrated. I wouldn't have thought it would matter. Here are the steps I always used before:
sudo parted /dev/sd?
mklabel gpt
mkpart primary 0% 100%
quit
Here is the fdisk -l output of the last 2 drives:
Disk /dev/sdk: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: DB93D173-858A-475C-81CD-DB616E91C110
Device     Start         End     Sectors     Size Type
/dev/sdk1   2048 15628053134 15628051087    7.3T Linux filesystem
Disk /dev/sdl: 7.3 TiB, 8001563221504 bytes, 15628053167 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes
Disklabel type: gpt
Disk identifier: B3791850-76F8-4CE2-B1CC-DF40886292CE
Device     Start         End     Sectors     Size Type
/dev/sdl1  65535 15628000379 15627934845    7.3T Linux filesystem
Partition 1 does not start on physical sector boundary.
The second drive is the problematic one. Performance really seems to take a big hit, as formatting to ext4 takes a long time (never waited for it to finish) where normally it would only take seconds. Why is this happening? How can I get proper alignment? The only other difference I can think of is that it was originally formatted as NTFS with some un-partitioned space. I also ran this command to clear any leftover partitions:
dd if=/dev/zero of=/dev/sdl bs=512 count=10000
with no luck. Using optimal alignment doesn't work either:
sudo parted -a optimal /dev/sdl mkpart primary 0% 100%
Warning: You requested a partition from 0.00B to 8002GB (sectors 0..15628053166).
The closest location we can manage is 17.4kB to 1048kB (sectors 34..2047).
Is this still acceptable to you? | sed , like awk or cut or perl -ne , works on each line individually, one after the other. sed -e code1 -e code2 is actually run as:
while (patternspace = getline()) {
    linenumber++
    code1
    code2
} continue { print patternspace }
If your code2 is 2,$ s/foo/bar/ , that's:
if (linenumber >= 2) sub(/foo/, "bar", patternspace)
As your input has only one line, the sub() will never be run. Inserting newline characters in the pattern space in code1 doesn't make the linenumber increase. Instead, you have one pattern space with several lines in it while processing the first and only line of input. If you want to do modifications on the second line and over of that multi-line pattern space, you need to do something like:
s/\(\n[^(]*\)(/\1 (/g
Though here of course, you might as well do the two operations in one go:
id | sed 's/,\([^(]*\)(/\n\t\1 (/g' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/388157",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67187/"
]
} |
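One commonly suggested workaround for the partitioning question above (an assumption, not from this thread): hand parted an explicit 1 MiB-aligned start instead of 0%, so the bogus 33553920-byte optimal I/O size reported by the USB bridge is ignored.

```
sudo parted /dev/sdl mklabel gpt
sudo parted /dev/sdl -- mkpart primary 2048s 100%   # 2048 * 512 B = 1 MiB boundary
```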
388,165 | Can sed make something like 12345 become 1&2&3&4&5 ? | With GNU sed :
sed 's/./\&&/2g'
( s ubstitute every ( g ) character ( . ) with the same ( & ) preceded with & ( \& ), but only starting from the second occurrence ( 2 )). Portably:
sed 's/./\&&/g;s/&//'
(replace every occurrence, but then remove the first & which we don't want). With some awk implementations (not POSIX, as the behaviour is unspecified for an empty FS):
awk -F '' -v OFS="&" '{$1=$1;print}'
(with gawk and a few other awk implementations, an empty field separator splits the records into their character constituents. The output field separator ( OFS ) is set to & . We assign a value to $1 (itself) to force the record to be regenerated with the new field separator before printing it; NF=NF also works and is slightly more efficient in many awk implementations, but the behaviour when you do that is currently unspecified by POSIX). perl :
perl -F -lape '$_=join"&",@F'
( -pe runs the code for every line, and prints the result ( $_ ); -l strips and re-adds line endings automatically; -a populates @F with input split on the delimiter set in -F , which here is an empty string. The result is to split every character into @F , then join them up with '&', and print the line.) Alternatively:
perl -pe 's/(?<=.)./&$&/g'
(replace every character provided it's preceded by another character (look-behind regexp operator (?<=...) )). Using zsh shell operators:
in=12345
out=${(j:&:)${(s::)in}}
(again, split on an empty field separator using the s:: parameter expansion flag, and join with & ). Or:
out=${in///&}
out=${out#?}
(replace every occurrence of nothing (so before every character) with & using the ${var//pattern/replacement} ksh operator (though in ksh an empty pattern means something else, and yet something else in bash ; I'm not sure what), and remove the first one with the POSIX ${var#pattern} stripping operator). Using ksh93 shell operators:
in=12345
out=${in//~(P:.(?=.))/\0&}
( ~(P:perl-like-RE) being a ksh93 glob operator to use perl-like regular expressions (different from perl's or PCRE's though), (?=.) being the look-ahead operator: replace a character, provided it's followed by another character, with itself ( \0 ) and & ). Or:
out=${in//?/&\0}; out=${out#?}
(replace every character ( ? ) with & and itself ( \0 ), and then we remove the superfluous one). Using bash shell operators:
shopt -s extglob
in=12345
out=${in//@()/&}; out=${out#?}
(same as zsh 's, except that you need @() there (a ksh glob operator for which you need extglob in bash )). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/388165",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
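A quick check of the first form (GNU sed, since the 2g flag combination is a GNU extension):

```
$ echo 12345 | sed 's/./\&&/2g'
1&2&3&4&5
```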
388,166 | Recently I had a file that reported a file size of 33P bytes on my 500GB SSD; more details here . That was through ls , and cp would report that there was not enough space. With my limited knowledge and poor understanding of the VFS, I would believe that the (SATA) drivers talk to the disk and it moves its way through the VFS until it makes it to the inodes (an assumption from the description in section 8.6 Inodes here ), and then the kernel somehow passes it to user space. In the end, I'd like to know how ls and cp know the size, but I would also like to know how a file could report the wrong size and, if it were to happen again in the future, where to look for answers. | With GNU sed :
sed 's/./\&&/2g'
( s ubstitute every ( g ) character ( . ) with the same ( & ) preceded with & ( \& ), but only starting from the second occurrence ( 2 )). Portably:
sed 's/./\&&/g;s/&//'
(replace every occurrence, but then remove the first & which we don't want). With some awk implementations (not POSIX, as the behaviour is unspecified for an empty FS):
awk -F '' -v OFS="&" '{$1=$1;print}'
(with gawk and a few other awk implementations, an empty field separator splits the records into their character constituents. The output field separator ( OFS ) is set to & . We assign a value to $1 (itself) to force the record to be regenerated with the new field separator before printing it; NF=NF also works and is slightly more efficient in many awk implementations, but the behaviour when you do that is currently unspecified by POSIX). perl :
perl -F -lape '$_=join"&",@F'
( -pe runs the code for every line, and prints the result ( $_ ); -l strips and re-adds line endings automatically; -a populates @F with input split on the delimiter set in -F , which here is an empty string. The result is to split every character into @F , then join them up with '&', and print the line.) Alternatively:
perl -pe 's/(?<=.)./&$&/g'
(replace every character provided it's preceded by another character (look-behind regexp operator (?<=...) )). Using zsh shell operators:
in=12345
out=${(j:&:)${(s::)in}}
(again, split on an empty field separator using the s:: parameter expansion flag, and join with & ). Or:
out=${in///&}
out=${out#?}
(replace every occurrence of nothing (so before every character) with & using the ${var//pattern/replacement} ksh operator (though in ksh an empty pattern means something else, and yet something else in bash ; I'm not sure what), and remove the first one with the POSIX ${var#pattern} stripping operator). Using ksh93 shell operators:
in=12345
out=${in//~(P:.(?=.))/\0&}
( ~(P:perl-like-RE) being a ksh93 glob operator to use perl-like regular expressions (different from perl's or PCRE's though), (?=.) being the look-ahead operator: replace a character, provided it's followed by another character, with itself ( \0 ) and & ). Or:
out=${in//?/&\0}; out=${out#?}
(replace every character ( ? ) with & and itself ( \0 ), and then we remove the superfluous one). Using bash shell operators:
shopt -s extglob
in=12345
out=${in//@()/&}; out=${out#?}
(same as zsh 's, except that you need @() there (a ksh glob operator for which you need extglob in bash )). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/388166",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167520/"
]
} |
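For the "how do ls and cp know the size" part of the question above: both read st_size from the inode via stat(2). A sketch for comparing the apparent size with the actually allocated blocks (GNU coreutils assumed), which is how a sparse or corrupted inode can report an absurd size:

```
stat -c '%s bytes, %b blocks of %B B' somefile   # st_size vs. allocated blocks
du -h --apparent-size somefile                   # apparent size (st_size)
du -h somefile                                   # disk usage; tiny for sparse files
```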
388,225 | I recently installed a Jenkins instance using Docker, using the official image from Blue Ocean. It is based on Alpine Linux. Now I can't run 32-bit programs on it:
$ /opt/android-sdk-linux/build-tools/25.0.3/aapt
bash: /opt/android-sdk-linux/build-tools/25.0.3/aapt: No such file or directory
And I can't find out which packages need to be installed for running 32-bit programs. Could it be that the official Blue Ocean (Jenkins) image does not support running 32-bit programs, when it is impossible to build many things without them? Also, I found this issue which says "it doesn't seem that it is possible to build android currently on alpine", but I can't wrap my head around it. Can someone confirm this? | musl (and therefore Alpine) doesn't really support "multilib" like glibc. You need to have a 32-bit environment in a chroot to run 32-bit applications. Follow the chroot install guide on the wiki, and make sure to pass --arch x86 to each apk command; this will give you a 32-bit chroot in which you can run 32-bit applications. As for running Android development tools on Alpine: I started a library called gcompat that attempts to allow glibc binaries to run natively on musl without using glibc. There are a few outstanding issues left before it can run the Android tools, but more testing is always welcome. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/388225",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248120/"
]
} |
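A rough sketch of the 32-bit chroot bootstrap the answer points to (the mirror URL and target path are placeholders; see the Alpine wiki chroot guide for the full procedure, including the bind mounts):

```
mkdir -p /srv/alpine32
apk --arch x86 -X http://dl-cdn.alpinelinux.org/alpine/latest-stable/main \
    -U --allow-untrusted -p /srv/alpine32 --initdb add alpine-base
```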
388,276 | I'm debugging using core dumps, and note that gdb needs you to supply the executable as well as the core dump. Why is this? If the core dump contains all the memory that the process uses, isn't the executable contained within the core dump? Perhaps there's no guarantee that the whole exe is loaded into memory (individual executables are not usually that big, though), or maybe the core dump doesn't contain all relevant memory after all? Is it for the symbols (perhaps they're not loaded into memory normally)? | The core dump is just the dump of your program's memory footprint; if you knew where everything was, you could just use that. You use the executable because it explains where (in terms of logical addresses) things are located in memory, i.e. in the core file. If you use the objdump command, it will dump the metadata about the executable object you are investigating. Using an executable object named a.out as an example:
objdump -h a.out
dumps the header information only; you will see sections named e.g. .data or .bss or .text (there are many more). These inform the kernel loader where in the object various sections can be found and where in the process address space each section should be loaded, and for some sections (e.g. .data , .text ) what should be loaded. (The .bss section doesn't contain any data in the file, but it refers to the amount of memory to reserve in the process for uninitialised data; it is filled with zeros.) The layout of the executable object file conforms to a standard, ELF.
objdump -x a.out
dumps everything. If the executable object still contains its symbol tables (it hasn't been stripped, see man strip , and you used -g with gcc to generate debugging information, assuming a C source compilation), then you can examine the core contents by symbol names. E.g. if you had a variable/buffer named inputLine in your source code, you could use that name in gdb to look at its content; i.e. gdb would know the offset from the start of your program's initialised data segment where inputLine starts, and the length of that variable. Further reading: Article1 , Article 2 , and for the nitty gritty the Executable and Linking Format (ELF) specification . Update after @mirabilos' comment below. Using the symbol table, as in
$ gdb --batch -s a.out -c core -q -ex "x buf1"
produces
0x601060 <buf1>: 0x72617453
and then not using the symbol table and examining the address directly, as in
$ gdb --batch -c core -q -ex "x 0x601060"
produces
0x601060: 0x72617453
I have examined memory directly without using the symbol table in the 2nd command. I also see that @user580082 's answer adds further explanation, and will up-vote. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/388276",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
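A minimal end-to-end sketch of the workflow discussed above (Linux with core dumps enabled; crash.c is a hypothetical program that segfaults):

```
ulimit -c unlimited               # allow core files in this shell
gcc -g -o a.out crash.c           # -g keeps the symbol/debug information
./a.out                           # dumps core (location depends on kernel.core_pattern)
gdb --batch -q a.out core -ex bt  # symbols come from a.out, memory from core
```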
388,298 | Today I downloaded the latest installer ( eclipse-inst-linux64.tar.gz ) of Eclipse from the official website and installed it on my system. Now I want to create a shortcut to launch the program. How can I do that? If I double-click on the eclipse file which is selected in the screenshot, I am able to launch the program. Can someone help me create a shortcut? | You can create a .desktop file:
$ cd Desktop
$ touch eclipse.desktop
Open it with your favorite text editor (gedit, for example):
$ gedit eclipse.desktop
Add this to that file:
[Desktop Entry]
Type=Application
Name=Name of your application
Icon=/path/to/icon
Exec=/path/to/application
Finally, make it executable:
$ chmod u+x eclipse.desktop
The final result should be something like: | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/388298",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/247825/"
]
} |
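A filled-in version for this Eclipse case might look like the following sketch (the installation paths are assumptions; adjust them to wherever the installer put Eclipse):

```
[Desktop Entry]
Type=Application
Name=Eclipse
Icon=/home/you/eclipse/java-oxygen/eclipse/icon.xpm
Exec=/home/you/eclipse/java-oxygen/eclipse/eclipse
Terminal=false
Categories=Development;IDE;
```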
388,328 | sudoers now supports the subfolder /etc/sudoers.d , where we can set personalized rules. I want to use it and avoid changing the main /etc/sudoers file. So, in a file /etc/sudoers.d/99_adjusts , I want to unset the main user specification rule:
ALL ALL=(ALL) ALL
I am trying to avoid commenting it out in /etc/sudoers . I would want something which revokes this rule set before:
!ALL ALL=(ALL) ALL
But the above unfortunately does not work; looking at the man pages, I can't figure out if there is some trick to do that. | I would want something which revokes this rule set before: !ALL ALL=(ALL) ALL But the above unfortunately does not work Naturally not. The !ALL at the beginning means "grant sudo permissions to no users," so the rule does nothing. It doesn't matter what follows on the line; it can't match anything. It is possible to override /etc/sudoers via /etc/sudoers.d , as pointed out in the sudoers man page : When multiple entries match for a user, they are applied in order. Where there are multiple matches, the last match is used (which is not necessarily the most specific match). Since the sudoers.d directory contents are pulled into the top-level sudoers file via a #includedir directive typically placed at the end of the file, adding a rule in /etc/sudoers.d/* will override any rule in the top-level file that is an equivalent match. The correct syntax for overriding the rule in question is:
ALL ALL = (ALL) !ALL
What this says, left to right, is: ALL users... on ALL hosts... may run the following commands as any one of the set of ALL users, defaulting to root... which commands are !ALL . Since the last bit means "no commands," it completely overrides the "grant everyone all rights to run all commands" rule in your main sudoers file . You then need to add any rules you want from the main /etc/sudoers file after this to restore their permission grants. This is a simple matter of selective copy-and-paste from the /etc/sudoers file. Remember: Last match wins. WARNING: If you do not put anything after the above line into a file, you will have removed sudo permissions from all users. On a system where the root user is disabled, so that the su command doesn't work, you stand a high risk of inadvertently locking yourself out of the machine by messing with this. I suggest that you keep a separate root terminal open while testing this in another terminal. This was verified on a macOS 10.12 system, but it should apply to any system that has the #includedir directive at the bottom of the main sudoers file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/388328",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36403/"
]
} |
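Putting the answer together, /etc/sudoers.d/99_adjusts might look like this sketch (the restore rule is an example only, and the group name is an assumption; edit with visudo -f so a syntax error cannot lock you out):

```
# revoke the blanket grant from the main sudoers file ...
ALL     ALL = (ALL) !ALL
# ... then restore the grants you actually want (example group rule)
%sudo   ALL = (ALL) ALL
```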
388,345 | $ echo $(echo x; echo y)
x y
$ a='echo x; echo y'
$ echo $($a) # expect 'x y'
x; echo y
Why does command substitution behave in this way? How can I perform command substitution for a list of commands stored in a variable without using eval or bash -c ? echo $(eval $a) and echo $(bash -c "$a") actually do what I want, but I have heard that using eval is often the wrong way to solve problems, so I want to know how to manage without these commands (using bash -c is in fact just the same thing). | Word splitting happens quite late in the evaluation of a command. Most crucially for you, it happens after variable expansion and command substitution. This means that the second line in
s="echo a; echo b"
echo $($s)
will first have $s expanded to echo a; echo b and then have that command executed without splitting it into a compound command consisting of two echo s. (Details: $s gets split into four words, echo , a; , echo and b . The encapsulating $(...) executes this as one command, echo with three arguments, a; , echo and b , i.e., you essentially have echo $('echo' 'a;' 'echo' 'b') ). What is given to the outermost echo is, therefore, the string a; echo b (actually three words, as $(...) is not in quotes), since this was what was output by the innermost echo . Compare that to
s1="echo a"
s2="echo b"
echo $($s1;$s2)
which results in what you'd expect. Yes, " eval is evil", most of the time, and sh -c could feel clunky and could be as fragile if you don't know what you're doing. But suppose you have a bit of shell code that you trust in a string in a shell script. In that case, these tools are the only (?) way to get that code to execute correctly, since this often requires explicitly evaluating the text in the string as shell code (with all evaluation phases from start to finish), especially if it's a compound command. I think it's only due to the Unix shell's intimate relation to text that you're lucky enough to have echo $($s) execute something at all. Just think about the steps you'd have to take to get a C program to execute a piece of C code that gets given as a string... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/388345",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230365/"
]
} |
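The usual eval-free alternative in bash: store the command as an argv array rather than a string, so no re-parsing is needed; several commands are best kept in a function.

```
cmd=(echo x)                     # one word per array element
"${cmd[@]}"                      # runs: echo x
run_both() { echo x; echo y; }   # a compound command as a function
echo "$(run_both)"               # -> x y
```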
388,346 | I'm looking for a way to remove one specific line from a bunch of files, but only if it occurs more than once in that file. Other lines should be kept, even if they are duplicates. For example, a file like this, where I would like to remove the duplicates of AAA :
AAA
BBB
AAA
BBB
CCC
should become
AAA
BBB
BBB
CCC
I guess I should use sed but I have no idea how to write the command. | With GNU sed :
sed '0,/^AAA$/b;//d'
That is, let everything through ( b branches off like a continue ) up to the first AAA (from the 0th line (that is, even before the first line) to the first one matching /^AAA$/ (which could be the first line)), and then for the remaining lines, delete every occurrence of AAA (an empty // pattern reuses the last pattern). GNU sed is needed for the 0 address (and the ability to have other commands after the b one in the same expression, though that could be easily worked around in other implementations by using two -e expressions). With awk :
awk '$0 != "AAA" || !n++'
(or for a regexp pattern: awk '!/^AAA$/ || !n++' ), a shorthand for:
awk '! ($0 == "AAA" && count > 0) {print; count++}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/388346",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111964/"
]
} |
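A quick check of the awk one-liner on the sample data:

```
$ printf '%s\n' AAA BBB AAA BBB CCC | awk '$0 != "AAA" || !n++'
AAA
BBB
BBB
CCC
```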
388,389 | I have a file like this, with two tab-separated columns:
ENSG00000242268.2   0.07563
ENSG00000270112.3   0.09976
ENSG00000167578.15  4.38608
ENSG00000273842.1   0.0
ENSG00000078237.5   4.08856
I would like to remove the numeric extensions from the end of the values in the 1st column, so the output will be:
ENSG00000242268     0.07563
ENSG00000270112     0.09976
ENSG00000167578     4.38608
ENSG00000273842     0.0
ENSG00000078237     4.08856
Simply doing sed 's/\..*$//' returns only the first column value, and using awk with the field separator '.' ( awk -F'.' ) removes the values from the second column too, as there are decimal numbers. A similar question has been answered here: removing extensions in a column . I am still not able to delete just from the 1st column only. | awk solution:
awk -F'\t' '{sub(/\..+$/,"",$1)}1' OFS='\t' file
-F'\t' - field separator
sub(/\..+$/,"",$1) - removes the . and the following chars from the 1st field at once
The output:
ENSG00000242268     0.07563
ENSG00000270112     0.09976
ENSG00000167578     4.38608
ENSG00000273842     0.0
ENSG00000078237     4.08856
Or with a simple sed approach:
sed 's/\.[0-9]*//' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/388389",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22327/"
]
} |
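A quick check of the awk solution on one sample line (the tab is written via printf):

```
printf 'ENSG00000242268.2\t0.07563\n' | awk -F'\t' '{sub(/\..+$/,"",$1)}1' OFS='\t'
# -> ENSG00000242268    0.07563
```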
388,475 | I read that I should quote variables in bash, e.g. "$foo" instead of $foo . However, while writing a script, I found a case where it works without quotes but not with them:
wget_options='--mirror --no-host-directories'
local_root="$1"     # ./testdir received from command line
remote_root="$2"    # ftp://XXX received from command line
relative_path="$3"  # /XXX received from command line
This one works:
wget $wget_options --directory-prefix="$local_root" "$remote_root$relative_path"
This one does not (note the double quotes around $wget_options ):
wget "$wget_options" --directory-prefix="$local_root" "$remote_root$relative_path"
What is the reason for this? Is the first line the good version, or should I suspect that there is a hidden error somewhere that causes this behavior? In general, where do I find good documentation to understand how bash and its quoting work? While writing this script I feel that I started to work on a trial-and-error basis instead of understanding the rules. | Basically, you should double quote variable expansions to protect them from word splitting (and filename generation). However, in your example,
wget_options='--mirror --no-host-directories'
wget $wget_options --directory-prefix="$local_root" "$remote_root$relative_path"
word splitting is exactly what you want . With "$wget_options" (quoted), wget doesn't know what to do with the single argument --mirror --no-host-directories and complains
wget: unknown option -- mirror --no-host-directories
For wget to see the two options --mirror and --no-host-directories as separate, word splitting has to occur. There are more robust ways of doing this. If you are using bash or any other shell that uses arrays like bash does, see glenn jackman's answer. Gilles' answer additionally describes an alternative solution for plainer shells such as the standard /bin/sh . Both essentially store each option as a separate element in an array. Related question with good answers: Why does my shell script choke on whitespace or other special characters? Double quoting variable expansions is a good rule of thumb. Do that . Then be aware of the very few cases where you shouldn't do that. These will present themselves to you through diagnostic messages, such as the above error message. There are also a few cases where you don't need to quote variable expansions. But it's easier to continue using double quotes anyway, as it doesn't make much difference. One such case is
variable=$other_variable
Another one is
case $variable in
  ...) ... ;;
esac | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/388475",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245573/"
]
} |
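The array approach the answer refers to looks like this in bash: each option is its own element, so quoting the expansion keeps the options separate without relying on word splitting.

```
wget_options=(--mirror --no-host-directories)
wget "${wget_options[@]}" --directory-prefix="$local_root" "$remote_root$relative_path"
```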
388,483 | I want to delay the start of a service if a file exists (instead of failing the service if the file exists, as with ConditionPathExists= ), but did not find anything in the unit documentation. Is it technically possible with systemd? How? | using just one unit Put TimeoutStartSec=infinity in the unit file and configure ExecStart= with a script like
#!/bin/bash
TIMEOUT=1000
test -f /path/to/testfile && sleep "$TIMEOUT"
exec /path/to/service/binary plus arguments
This cannot be done (in a useful way) with ExecStartPre= , see man systemd.service : Note that ExecStartPre= may not be used to start long-running processes. All processes forked off by processes invoked via ExecStartPre= will be killed before the next service process is run. using a helper unit If you want to do this with systemd "alone" then you can create a helper unit check_and_wait.service (a .target cannot carry ExecStart= , so this has to be a service). This one gets the entries
# check_and_wait.service
[Unit]
ConditionPathExists=/path/to/testfile
[Service]
Type=oneshot
TimeoutStartSec=infinity
ExecStart=/usr/bin/sleep 1000
RemainAfterExit=yes
The main unit gets these entries:
Wants=check_and_wait.service
After=check_and_wait.service | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/388483",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53490/"
]
} |
388,492 | I need the list of sub-directories (not files) in a directory so I can pass it to a Java program. So I am using this command to get the list on a Linux machine:
find /some_directory -depth -maxdepth 1 -mindepth 1 -exec basename {} \; > listfile.txt
And then I pass listfile.txt to the Java program as an argument. There are some issues with getting the list of directories from the Java program itself, hence I am doing this. But the above find command is taking a lot of time (~35 mins) as there are more than 200k files. Can this be optimized, or is there a better alternative? | To print only the file name instead of the path, with GNU¹ find , you can replace -exec basename with -printf '%f\n' . This is explained in the GNU find man page:
%f File's name with any leading directories removed (only the last element).
Also, if you want only directories in your output, you should probably use the -type d option:
find /some_directory -maxdepth 1 -mindepth 1 -type d -printf '%f\n' > listfile.txt
-depth is superfluous, as you're only finding files at one depth (1). ¹ -maxdepth and -mindepth are also GNU extensions, but contrary to -printf , they are also found in some other find implementations nowadays. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/388492",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248303/"
]
} |
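A rough way to see why -printf helps here (it avoids forking basename roughly 200k times); the timings are obviously machine-dependent:

```
time find /some_directory -mindepth 1 -maxdepth 1 -type d -exec basename {} \; >/dev/null
time find /some_directory -mindepth 1 -maxdepth 1 -type d -printf '%f\n' >/dev/null
```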
388,500 | Using setcap to give additional permissions to a binary should write the new permission somewhere, on storage or in memory; where is it stored? Using lsof as is doesn't work, because the process disappears too quickly. | Extended permissions such as access control lists set by setfacl and capability flags set by setcap are stored in the same place as traditional permissions and set[ug]id flags set by chmod : in the file's inode. (They may actually be stored in a separate block on the disk, because an inode has a fixed size which has room for the traditional permission bits but not for the potentially unbounded extended permissions. But that only matters in rare cases, such as having to care that setcap could run out of disk space. But even chmod could run out of disk space on a system that uses deduplication!) GNU ls doesn't display a file's setcap attributes. You can display them with getcap . You can list all the extended attributes with getfattr -d -m - ; the setcap attribute is called security.capability and it is encoded in a binary format which getcap decodes for you. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/388500",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53490/"
]
} |
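A small demonstration of the answer's commands (assumes root and a scratch binary named demo; getcap's output format varies between libcap versions):

```
sudo setcap cap_net_raw+ep demo
getcap demo              # shows the capability set just written
getfattr -d -m - demo    # shows the underlying security.capability attribute
```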
388,586 | Is there any difference between Requires vs Wants in target files?
[Unit]
Description=Graphical Interface
Documentation=man:systemd.special(7)
Requires=multi-user.target
Wants=display-manager.service
Thanks | As heemayl noted in the comment, the man page answers your question. From the web: Wants= A weaker version of Requires=. Units listed in this option will be started if the configuring unit is. However, if the listed units fail to start or cannot be added to the transaction, this has no impact on the validity of the transaction as a whole. This is the recommended way to hook start-up of one unit to the start-up of another unit. And Requires= Configures requirement dependencies on other units. If this unit gets activated, the units listed here will be activated as well. If one of the other units gets deactivated or its activation fails, this unit will be deactivated. This option may be specified more than once or multiple space-separated units may be specified in one option in which case requirement dependencies for all listed names will be created. Note that requirement dependencies do not influence the order in which services are started or stopped. This has to be configured independently with the After= or Before= options. If a unit foo.service requires a unit bar.service as configured with Requires= and no ordering is configured with After= or Before=, then both units will be started simultaneously and without any delay between them if foo.service is activated. Often, it is a better choice to use Wants= instead of Requires= in order to achieve a system that is more robust when dealing with failing services. Note that this dependency type does not imply that the other unit always has to be in active state when this unit is running. Specifically: failing condition checks (such as ConditionPathExists=, ConditionPathExists=, … — see below) do not cause the start job of a unit with a Requires= dependency on it to fail. Also, some unit types may deactivate on their own (for example, a service process may decide to exit cleanly, or a device may be unplugged by the user), which is not propagated to units having a Requires= dependency. Use the BindsTo= dependency type together with After= to ensure that a unit may never be in active state without a specific other unit also in active state (see below). From the freedesktop.org page. Your service will only start if the multi-user.target has been reached (I don't know what happens if you try to add it to that target?), and systemd will try to start the display-manager.service together with your service. If display-manager.service fails for whatever reason, your service will still be started (so if you really need the display manager, use Requires= for that). If the multi-user.target is not reached, however, your service will not be launched. What is your service? Is it a kiosk system? Intuitively I'd suppose you want to add your service to the multi-user.target (so it's launched at startup), and have it strictly depend on the display-manager.service via Requires=display-manager.service . But that's just wild guessing now. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/388586",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65498/"
]
} |
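For the kiosk-style guess at the end of the answer, a sketch of the hard dependency (the unit and service names are assumptions):

```
[Unit]
Description=My graphical service
Requires=display-manager.service
After=display-manager.service

[Install]
WantedBy=multi-user.target
```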
388,682 | I created a websites folder in the / directory, and gave it full permissions with sudo chmod -R 777 /websites/ . After that, I made a change in /etc/nginx/conf.d/default.conf to point to the websites directory:
server {
    listen 80;
    server_name localhost;
    location / {
        root /websites;
        index index.html index.htm;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /websites/nginx/html;
    }
}
But I am getting a 403 Forbidden when I try to browse to the public IP of the server. Why is it happening? How can I solve it? I have this in the nginx error.log:
2017/08/27 20:41:03 [error] 3849#3849: *37 "/websites/index.html" is forbidden (13: Permission denied), client: **.**.130.159, server: localhost, request: "GET / HTTP/1.1", host: "**.**.**.120" | The error log very clearly says: Your nginx would try to read /websites/index.html , but it can't. This is why it gives the 403 error, not because of its configuration. It is because of the 13: Permission denied . It is a system error. Thus, your nginx is configured well; it tries to read that file, but it can't. The next question is why it can't. First, you should check what it does. Sudo to the user that nginx is running as (it is probably www-data, so the command is: sudo -u www-data /bin/bash ), and try to read that file for yourself ( cat /websites/index.html ). The next step depends on what the result is. @sebasth is right in his comment: Possibly wrong permissions on the file/folder, and/or an SELinux policy not permitting access. If you have SELinux enabled, you should check the audit logs (tools such as audit2why might be helpful). I think the two most probable outcomes: Something wasn't set up correctly with the permissions, despite your chmod command looking okay. There is some SELinux thingy making your life nicer. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/388682",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248421/"
]
} |
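The checks from the answer, plus the SELinux side, as concrete commands (assumes nginx runs as www-data and that the auditd tooling is installed):

```
sudo -u www-data cat /websites/index.html      # reproduce the EACCES directly
namei -l /websites/index.html                  # permissions along every path component
ls -Z /websites/index.html                     # SELinux context, if SELinux is enabled
sudo ausearch -m avc -ts recent | audit2why    # explain recent SELinux denials
```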
388,695 | In my ~/.bashrc , I have turned off globbing within an alias like this:
alias x='set -f;. any.sh'
But which command enables globbing again, or should I set this option in any.sh ? Any response is welcome. | If you want globs to be disabled only while the shell is interpreting code in any.sh , with bash 4.4+ or ash -based shells, you can do:
x() {
  local -
  set -o noglob
  . any.sh
}
Or in zsh :
x() {
  set -o localoptions -o noglob
  . any.sh
}
That is, use a function instead of an alias (you don't want to use aliases for several commands, as that doesn't do what you want when you do cmd | x or cmd && x for instance), make sure changes to options (to the $- variable, as one way to look at it) are local to the function, disable glob and source the file. With older versions of bash , you could do:
x() {
  local ret restore
  [[ $- = *f* ]] || restore='set +o noglob'
  set -o noglob
  . any.sh
  ret=$?
  eval "$restore"
  return "$ret"
}
Or maybe a more generic helper function like:
withopt() {
  local ret option
  local -a restore
  option=$1; shift
  [[ -o $option ]] || restore=(set +o "$option")
  set -o "$option"
  "$@"
  ret=$?
  "${restore[@]}"
  return "$ret"
}
And then, you can use an alias if you like, like:
alias x='withopt noglob . any.sh'
Note that it doesn't prevent * from being expanded if you do:
x *
As the noglob option ends up being enabled long after that command has been evaluated. For that, see my other answer (an answer to a different question). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/388695",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/223883/"
]
} |
388,748 | The following commands should match "Ambari Server running", but how do I match when there are multiple spaces between the words? How do I ignore the extra spaces between words?
echo "Ambari Server running" | grep -i "Ambari Server running"
echo "Ambari  Server     running" | grep -i "Ambari Server running"
echo "   Ambari Server running" | grep -i "Ambari Server running"
The expected results should be:
Ambari Server running
Ambari Server running
Ambari Server running | Use tr with its -s option to compress consecutive spaces into single spaces, and then grep the result of that:
$ echo 'Some   spacious   string' | tr -s ' ' | grep 'Some spacious string'
Some spacious string
This would however not remove flanking spaces completely, only compress them into a single space at either end. Using sed to remove the flanking blanks as well as compressing the internal blanks to single spaces:
echo ' Some   spacious   string ' |
sed 's/^[[:blank:]]*//; s/[[:blank:]]*$//; s/[[:blank:]]\{1,\}/ /g'
This could then be passed through to grep . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/388748",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
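An alternative sketch that leaves the input alone and makes the pattern tolerate runs of blanks instead (extended regular expressions via grep -E assumed):

```
echo "  Ambari    Server running" |
  grep -iE '^[[:blank:]]*Ambari[[:blank:]]+Server[[:blank:]]+running'
```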
388,759 | I was completing the git tutorial found here: https://try.github.io/levels/1/challenges/7 And it said that I had to put single quotes around *.txt. I had not seen this before when using linux but thought it was peculiar. I also have seen single quotes when using html and php as a way to make sure that the string is interpretted literally instead of using special characters. | This is the same in the shell as in the other grammars that you mention. A single quoted string will be treated as a "string literal" (so to speak). The difference between git add '*.txt' and git add *.txt is who's doing the matching of the pattern against filenames. In the case of git add '*.txt' , git is doing the pattern matching. Since the shell won't expand the string literal '*.txt' , git add will be called with a single argument, *.txt . git then does the matching against the filenames available in the whole repository (because... git ). In the case of git add *.txt , the shell does the filename matching and will pass a list of matching filenames from the current directory to git add . Note that if there are no names matching the given pattern, the shell will (usually 1 ) pass the pattern on to git add unexpanded. If this happens, the result will be the same as if the pattern had been quoted. 1 Usually, but see e.g. the failglob shell option in bash . See also comments to this answer. When git add gets a filename pattern , it will add not only the files matching in the current directory, but it will add all files that matches in the whole repository (i.e. including any subdirectories). This is why the text in the lower right hand corner says Wildcards: We need quotes so that Git will receive the wildcard before our shell can interfere with it. Without quotes our shell will only execute the wildcard search within the current directory. Git will receive the list of files the shell found instead of the wildcard and it will not be able to add the files inside of the octofamily directory. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/388759",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248483/"
]
} |
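An easy way to see who expands what: print the arguments a command would actually receive.

```
printf '%s\n' *.txt     # shell expands: one argument per matching file
printf '%s\n' '*.txt'   # quoted: the single literal argument *.txt
```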
388,772 | I have a file containing a line in the middle of which I want to replace something:
database.url=jdbc:mysql://my.sql.ip.address:mySqlPort/mySqlDbName?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
Assume that I want to change the IP address; I used the following sed command:
sed -i -r 's/(database.url=jdbc:mysql:\/\/).+(:.+)/\1zizi/' myFile
This outputs:
database.url=jdbc:mysql://zizi
But I want:
database.url=jdbc:mysql://zizi:mySqlPort/mySqlDbName?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
How should I write my sed ? Is there any other command I can use to do this better? | Just print the second captured group as well ( \2 ); I'm not otherwise modifying or enhancing your sed :
sed -i -r 's/(database.url=jdbc:mysql:\/\/).+(:.+)/\1zizi\2/' myFile
This is an improved command that changes only the IP-matching part:
sed 's/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/NEWIP/' infile.txt
If only for the lines starting with database.url :
sed '/^database\.url/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/NEWIP/' infile.txt
Or, using more of sed's capabilities, even shorter:
sed 's/[0-9]\{1,3\}\(\.[0-9]\{1,3\}\)\{3\}/NEW/'
Note that this can also be changed and improved to match only exact IP addresses, rather than also matching e.g. 1.1.1.999, which is not a valid IP. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/388772",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61120/"
]
} |
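A quick check of the IP-matching variant against a line shaped like the question's (the numeric address is made up):

```
echo 'database.url=jdbc:mysql://10.1.2.3:3306/db?useUnicode=true' |
  sed 's/[0-9]\{1,3\}\(\.[0-9]\{1,3\}\)\{3\}/NEWIP/'
# -> database.url=jdbc:mysql://NEWIP:3306/db?useUnicode=true
```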
388,815 | I have more than 30 different text files, and each one of them contains the same word, repeated a different number of times. For example, in text1 "esr" is repeated 12 times and in text2 "esr" is repeated 21 times. Is it possible to output the number of times the word is repeated in each file separately, with one command? | With a grep + wc pipeline:
for f in *.txt; do echo -n "$f "; grep -wo 'esr' "$f" | wc -l; done
grep options:
-w - word-regexp (to match whole/separate words)
-o - print only the matched substrings
wc -l - counts the number of lines (matched words in our case) for each file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/388815",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
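One caveat worth a quick demo: grep -c counts matching lines, not occurrences, so the grep -o | wc -l form above is the one that also counts repeats on the same line.

```
printf 'esr esr\nesr\n' | grep -wc esr          # -> 2 (matching lines)
printf 'esr esr\nesr\n' | grep -wo esr | wc -l  # -> 3 (occurrences)
```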
388,827 | I want to set up a continuous task that pings a gateway every second. How would I go about it? The most performance-friendly solution would be best. No output needed; I just want it to ping. | With a grep + wc pipeline:
for f in *.txt; do echo -n "$f "; grep -wo 'esr' "$f" | wc -l; done
grep options:
-w - word-regexp (to match whole/separate words)
-o - print only the matched substrings
wc -l - counts the number of lines (matched words in our case) for each file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/388827",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122851/"
]
} |
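A hedged sketch for the ping question above (an assumption, not this thread's answer; iputils ping assumed, and the gateway address is a placeholder):

```
GATEWAY=192.0.2.1                            # placeholder; use your gateway's address
ping -i 1 -q "$GATEWAY" >/dev/null 2>&1 &    # one probe per second, no output
```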
388,844 | I can run ENV_VAR=value command to run command with a specific value for ENV_VAR . What is the equivalent to unset ENV_VAR for command ? | Other than with the -i option, which wipes the whole environment, POSIX env doesn't provide any way to unset a variable. However, with a few env implementations (including GNU's, busybox ' and FreeBSD's at least), you can do:
env -u ENV_VAR command
Which would work at removing every instance of an ENV_VAR variable from the environment (note though that it doesn't work for the environment variable with an empty name ( env -u '' either gives an error or is ineffective depending on the implementation, even though all accept env '=value' ; probably a limitation incurred by the unsetenv() C function, which POSIX requires to return an error for the empty string, while there's no such limitation for putenv() )). Portably (in POSIX shells), you can do:
(unset -v ENV_VAR; exec command)
(note that with some shells, using exec can change which command is run: it runs the one in the filesystem instead of a function or builtin for instance (and would bypass alias expansion, obviously), like env above. You'd want to omit it in those cases). But that won't work for environment variables that have a name that is not mappable to a shell variable (note that some shells like mksh would strip those variables from the environment on startup anyway), or variables that have been marked read-only. -v is for the Bourne shell and bash , whose unset without -v could unset an ENV_VAR function if there was no variable by that name. Most other shells wouldn't unset functions unless you pass the -f option. (Unlikely to make a difference in practice.) (Also beware of the bug/misfeature of bash / mksh / yash whose unset , under some circumstances, may not unset the variable but reveal the variable in an outer scope.) If perl is available, you could do:
perl -e 'delete $ENV{shift@ARGV}; exec @ARGV or die$!' ENV_VAR command
Which will work even for the environment variable with an empty name. Now all those won't work if you want to change an environment variable for a builtin or function and don't want them to run in a subshell, like in bash :
env -u LANG printf -v var %.3f 1.2        # would run /usr/bin/printf instead
(unset -v LANG; printf -v var %.3f 1.2)   # changes to $var lost afterwards
(here unsetting LANG as a misguided approach at making sure . is used and understood as the decimal separator. It would be better to use LC_ALL=C printf... for that). With some shells, you can instead create a local scope for the variable using a function:
without() {
  local "$1"   # $1 variable declared as initially unset in bash¹
  shift
  "$@"
}
without LANG printf -v var %.3f 1.2
With zsh , you can also use an anonymous function:
(){ local ENV_VAR; command }
That approach won't work in some shells (like the ones based on the Almquist shell), whose local doesn't declare the variable as initially unset (but inherits the value and attributes). In those, you can do local ENV_VAR; unset ENV_VAR , but don't do that in mksh or yash ( typeset instead of local for that one) as that wouldn't work; the unset would only be cancelling the local . ¹ Also beware that in bash , the local ENV_VAR (even though unset) would retain the export attribute. So if command was a function that assigns a value to ENV_VAR , the variable would be available in the environment of commands called afterwards. unset ENV_VAR would clear that attribute. Or you could use local +x ENV_VAR , which would also ensure a clean slate (unless that variable has been declared read-only, but then there's nothing you can do about it in bash ). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/388844",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72868/"
]
} |
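A quick way to see the two portable approaches from the answer above side by side, as a sketch (printenv is only a stand-in for the real command):
ENV_VAR=value; export ENV_VAR
env -u ENV_VAR printenv ENV_VAR             # GNU/busybox/FreeBSD env: prints nothing, exits 1
(unset -v ENV_VAR; exec printenv ENV_VAR)   # portable POSIX variant, same result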
388,875 | How can I determine or set the size limit of /etc/hosts ? How many lines can it have? | Problematical effects include slow hostname resolution (unless the OS somehow converts the linear list into a faster-to-search structure?) and the potential for surprising interaction with shell tab completion well before any meaningful file size is reached. For example! If one places 500,000 host entries in /etc/hosts # perl -E 'for (1..500000) { say "127.0.0.10 $_.science" }' >> /etc/hosts for science, the default hostname tab completion in ZSH takes about ~25 seconds on my system to return a completion prompt (granted, this is on a laptop from 2009 with a 5400 RPM disk, but still). | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/388875",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37153/"
]
} |
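To measure the slowdown from the answer above on your own system rather than take the ZSH anecdote at face value, a rough sketch (the hostname follows the perl generator shown there):
time getent hosts 499999.science    # resolved through /etc/hosts via NSS
Entries near the end of a huge file show the worst case of the linear scan.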
388,892 | I wrote the following command in order to match $a with $b, but when the value includes "-", then I get an error. How can I avoid that? # a="-Xmx5324m"# b="-Xmx5324m"### echo "$a" | grep -Fxc "$b"grep: conflicting matchers specified | Place -- before your pattern: echo "$a" | grep -Fxc -- "$b" -- specifies end of command options for many commands/shell built-ins, after which the remaining arguments are treated as positional arguments. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/388892",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
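To reproduce the failure and the fix using the question's own value, a minimal sketch:
b='-Xmx5324m'
echo "$b" | grep -Fxc "$b"      # grep parses $b as options: conflicting matchers specified
echo "$b" | grep -Fxc -- "$b"   # prints 1: one full-line fixed-string match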
388,911 | I want to be able to intercept connection establishment, e.g. to know when some process (any process) establishes a connection. Is there a way to achieve that? The only thing I can think of is to intercept the connect() syscall. But maybe there's another way? Maybe when the networking context is created in the kernel? The goal is to filter processes based on some requirements and enable/disable connection establishment in real-time. Thanks in advance. P.S. I'm doing that for absolutely legal purposes. P.P.S. I've searched in google but only found how to intercept the already established connection (not exactly what I want). I'm asking for an idea, a direction of search, not for code. | For the aspect of just extracting information, it's possible to do that with iptables using AUDIT matches (or possibly LOG matches if you don't need huge amounts of info). For the case of actually allowing or disallowing connections in real-time based on some complex rules, I'm not sure you can do that reliably on Linux. Options include: Seccomp-BPF, but I'm not certain it can do this (and the filter would be static once instantiated in a given process). Overriding various socket calls using LD_PRELOAD or some other method. This is unreliable because it's easy to bypass (making direct syscalls is trivial, and you can also dlopen() whatever libc and make calls that way). The net_cls control group. This requires firewall setup, may impact active connections, and may not work exactly as you want (it will require a daemon that moves processes into the appropriate control group as they are started). If you can tolerate some data getting onto the network, you can use the iptables NFLOG target and watch for interesting connections (if you want real-time evaluation, you'll need to log all new connections and parse things in userspace), and then reactively close the connections that you don't want. You can run each application in its own network namespace and force outbound traffic through the host system, then use policy routing based on the source to control what gets out to the actual network. That said, you may want to reevaluate why you need this. Unless you're feeding your decision making from a neural network or some other heuristic approach (both of which are problematic options for multiple reasons), you're almost always going to be better off either coding things directly into the firewall (iptables can do some seriously complex stuff, like matching only connections using a specific IP protocol to a specific port initiated by a particular UID at a uniformly distributed random rate and sending the unmatched packets to a different destination), or using scheduling tools or other hooks to update firewall rules dynamically (for example, changing firewall rules when the system is supposed to be unused, or only allowing new connections originating from a given UID when that user is logged in). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/388911",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248580/"
]
} |
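As a concrete starting point for the logging option mentioned above, a sketch of an iptables rule that records every outbound TCP connection attempt (rule placement and prefix are illustrative):
iptables -I OUTPUT -p tcp --syn -j LOG --log-prefix 'NEW-CONN: '
journalctl -kf | grep NEW-CONN    # watch the matching kernel log lines as they arrive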
389,005 | According to xrdp docs it should be possible to connect remotely without using a local VNC server: xrdp can connect to a locally created X.org session with the xorgxrdp drivers [my emphasis], to a VNC X11 server, and forward to another RDP server. I can connect with RDP from Windows: Then I select Xorg session and supply username and password. After some timeout an error pops up about an unknown connection problem. This is tail /var/log/xrdp.log and tail /var/log/xrdp-sesman.log output: [DEBUG] Closed socket 17 (AF_UNIX) ... [DEBUG] Closed socket 17 (AF_UNIX) [DEBUG] xrdp_wm_log_msg: some problem [DEBUG] xrdp_mm_module_cleanup [DEBUG] Closed socket 16 (AF_INET6 ::1 port 38094) dmesg doesn't show any problems nor references to Xorg or similar. ps -A | grep rdp shows xrdp and xrdp-sesman processes running. Tried connecting with Windows 7 to Debian: same problem. xrdp.ini and sesman.ini : In sesman.ini the AlwaysGroupCheck=false . The startwm.sh : Any ideas? Running on a virtualized minimal, clean Debian 9.1 installation. Only lxde-core and xrdp installed with apt-get . (No errors during installation.) xorgxrdp drivers installed (since they depend on xrdp ). | This bug report has the same symptoms as described in the question. It seems the xserver-xorg-legacy package is the culprit. So to make it work, it boils down to the following two commands: apt-get purge xserver-xorg-legacy followed by apt-get install xrdp The required services are started automatically after install. No need to reboot. Connecting and authenticating should automatically show the desktop. I do not know, though, what the consequences of removing xserver-xorg-legacy are; the bug report mentions removing it if it is not needed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/389005",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30593/"
]
} |
389,014 | So I wanted to call two background ssh processes: ssh -D localhost:8087 -fN aws-gateway-vpc1ssh -D localhost:8088 -fN aws-gateway-vpc2 These gateways don't have the benefit of letting me set an authorized_keys file, so I must be prompted for my interactive password. That is why I'm using the -f flag and not the shell's & which will only background the process after I authenticate interactively. In this scenario I appear to be unable to use the $! bash variable to get the pid of the recently [self] backgrounded process. What other options do I have to find the correct pid to kill later if interrupted? | Finding the pid by grepping might be error prone. Alternative option would be to use ControlPath and ControlMaster options of SSH. This way you will be able to have your ssh command listen on a control socket and wait for commands from subsequent ssh calls. Try this ssh -D localhost:8087 -S /tmp/.ssh-aws-gateway-vpc1 -M -fN aws-gateway-vpc1# (...)# later, when you want to terminate ssh connectionssh -S /tmp/.ssh-aws-gateway-vpc1 -O exit aws-gateway-vpc1 The exit command lets you kill the process without knowing the PID. If you do need the PID for anything, you can use the check command to show it: $ ssh -S /tmp/.ssh-aws-gateway-vpc1 -O check aws-gateway-vpc1Master running (pid=1234) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/389014",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126165/"
]
} |
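The same control-socket setup can live in the client configuration instead of on the command line; a sketch for ~/.ssh/config, reusing the host aliases from the question:
Host aws-gateway-vpc1 aws-gateway-vpc2
    ControlMaster auto
    ControlPath ~/.ssh/ctl-%r@%h-%p
    ControlPersist yes
After that, a plain ssh -D ... -fN aws-gateway-vpc1 creates the master, and ssh -O exit aws-gateway-vpc1 tears it down.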
389,055 | So I've been playing around with the filesystem and wondered about listing the files in /etc that contain only upper-case letters in their names. I ran ls *[A-Z]* But the console shows the files containing lower-case characters too. I want to use only the ls command. Is the console program locale dependent? What is the underlying cause? | [A-Z] doesn't mean upper case. It means letters from A to Z , which may include lower-case letters. Usually you should use [[:upper:]] instead. (This works in Bash even without extglob .) What characters [A-Z] matches depends on your locale. You have clarified that you want to show all filenames that contain at least one upper-case character anywhere--not only filenames consisting entirely of upper case--but that when you use ls *[A-Z]* , you get some filenames that don't have any upper-case characters in them. This happens when your locale's lexicographic ordering intersperses upper- and lower-case letters (e.g., AaBbCcDd...). Although you can set another locale (e.g., LC_ALL=C ), the best solution is usually to write a pattern that specifically matches upper-case letters. Which characters are upper-case letters may also vary between locales, but presumably if something is an upper-case letter in your locale then you want to include it. So that's probably an advantage of [[:upper:]] rather than a disadvantage. Use [[:upper:]] instead. Most Bourne-style shells, such as Bash, support POSIX character classes in globs. This command will list entries in /etc whose names have at least one upper-case letter: ls -d /etc/*[[:upper:]]* Some of the entries you get may be directories. If you want to show their contents rather than just list the directories, then you can remove the -d flag. You may also want to put a -- flag before the pattern, in case you have entries in /etc that begin with - . You probably don't, though. (In a script, you will usually want to use -- here.) You probably don't want dotfiles, but if you do... This will not show entries that start with . . Usually you don't want to show them. If you do want them, most shells allow you to write a single glob that also matches them or to configure globbing to include them by default. The option to automatically include leading- . entries in Bash is dotglob and it can be enabled with shopt -s dotglob . For other shells see . Or you can simply write a second glob for them: ls -d /etc/*[[:upper:]]* /etc/.*[[:upper:]]* Most popular Bourne-style shells support brace expansions, so you can write this more compactly with less repetition: ls -d /etc/{,.}*[[:upper:]]* In most shells including Bash, when you write two separate globs, you'll get an error message when either one does not expand--because the default behavior in most shells is to pass it unexpanded. But ls will still show the entries that matched the other one. But as Stéphane Chazelas has pointed out , in some shells including the very popular Zsh, the whole command fails and ls is never run. If you're using the shell interactively this is not really harmful, because you can modify the command and run it again, but such constructions are unsuitable for portable scripts. Bash will also behave this way if you set the failglob shell option. You don't need extended globbing for that. In Bash you do not need to have extended globbing enabled to use POSIX character classes in glob patterns. 
On my system with Bash 4.3.48: ek@Io:~$ shopt extglobextglob offek@Io:~$ ls -d /etc/*[[:upper:]]*/etc/ConsoleKit /etc/LatexMk /etc/ODBCDataSources /etc/UPower/etc/ImageMagick-6 /etc/NetworkManager /etc/rcS.d /etc/X11 But you do need it to match filenames of only upper-case letters. What you do need extended globbing for is if you want to match filenames consisting only of upper-case letters. Then you would use +([[:upper:]]) or *([[:upper:]]) , and those are extended globs. If you're using Bash, see this article , this guide , 3.5.8.1 Pattern Matching in the GNU Bash manual for details. See also Stéphane Chazelas's answer . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/389055",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248684/"
]
} |
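To contrast with the answer's last point, a sketch of the extended-glob form that matches names made up entirely of upper-case letters:
shopt -s extglob
ls -d /etc/+([[:upper:]])    # only names consisting solely of upper-case letters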
389,056 | I am no professional in scripting or find commands what so ever (Just a warning). I am trying to create a script that can move all my mp3 podcasts to a folder that I can then move to my phone. So the script I made needs to move all mp3's (even those with weird paths). I created a script as follows and it seems* to do what I intended it to but it keeps giving errors as follows: find /home/jason/gPodder/ -name '*.mp3' -exec bash " cp '{}' /home/jason/gPodder/mp3/ " \;Which returns: bash: cp '/home/jason/gPodder/Downloads/The Documentary/TheDocumentary20170823-GoingGreenInTheOilState.mp3' /home/jason/gPodder/mp3/ : No such file or directory But If I copy the above bash 'command' it works no problem.Please could you help me understand what my error is in the script. Thank you | As @Kusalananda said , you're missing a -c to be able to use an inline-script. But even then, never embed the {} in the shell code, that would be an arbitrary command injection vulnerability (think for instance of a file called '$(reboot)'.mp3 with your example). Instead make it an argument of the inline script (assuming you do need an inline script here and that the cp is just an example). find ... -exec sh -c 'cp "$1" /home/jason/gPodder/mp3' sh {} \; (you also don't need bash just for that. Your sh will do just as well). Or even better, pass several arguments at once to cp : find ... -exec sh -c 'cp "$@" /home/jason/gPodder/mp3' sh {} + With GNU cp , you can also make it: find ... -exec cp -t /home/jason/gPodder/mp3 {} + | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/389056",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248686/"
]
} |
389,156 | Trying to install libssl-dev on ubuntu 14.04.2 $ makefatal error: openssl/sha.h: No such file or directorycompilation terminated.$ sudo apt-get install libssl-devThe following packages have unmet dependencies: libssl-dev : Depends: zlib1g-dev but it is not going to be installedE: Unable to correct problems, you have held broken packages.$ sudo apt-get install zlib1g-devThe following packages have unmet dependencies: zlib1g-dev : Depends: zlib1g (= 1:1.2.3.4.dfsg-3ubuntu4) but 1:1.2.8.dfsg-1ubuntu1 is to be installedE: Unable to correct problems, you have held broken packages. How can I remove the held package and install the correct one? | First, try entering this: sudo dpkg --configure -a This will go a long way toward fixing broken dependencies. If that does not suffice, try: sudo apt-get install -f Then clean and update: sudo apt-get clean && sudo apt-get update Then upgrade: sudo apt-get upgrade (say 'y' to upgrades) If you get a message saying some packages are "held back", do this: sudo apt-get dist-upgrade Now do these two commands: sudo apt-get purge zlib1g-dev , then sudo apt-get autoremove Then see what happens when you enter: sudo apt-get install libssl-dev If it still doesn't work, try this: sudo aptitude install libssl-dev zlib1g-dev=1:1.2.8.dfsg-1ubuntu1 If it still doesn't work, open up Synaptic Package Manager (It should be in your Launcher. If you can't find it, then at the Terminal enter synaptic-pkexec ). Search for zlib1g-dev . If the box next to zlib1g-dev is not green, click on it and select "Mark for installation". If you see more than one version, pick 1:1.2.8.dfsg-1ubuntu1 . Then search for libssl-dev . If the box next to libssl-dev is not green, click on it and select "Mark for installation". Then select "Apply". | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/389156",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248748/"
]
} |
389,203 | I would like to replace the contents of the 3rd column of a .CSV file (using awk) by looking for a specific string and replace with another string that has a single quote and copies the output of that into another file. Any advice on what am I doing wrong? For instance: column1, coluumn2, coulumn3, column4, column51, item1, WALMART, 2.39, 502, item2, TARGET, 4.99, 523, item3, SAMS CLUB, 8.19, 154, item4, KROGER, 12.49, 335, item6, WEGMANS, 32.69, 756, item6, TARGET, 12.99, 257, item7, SAMS CLUB, 8.19, 92 I tried using the below awk but doesn't work: awk '{gsub("SAMS CLUB","SAM\'S CLUB",$3);print}' filename1 > filename2 | You need to set the field separator ( FS ) properly. By default, awk uses any horizontal whitespace(s) as the field separator, so in your case SAMS becomes one field and CLUB becomes another. Hence, {gsub("SAMS CLUB","SAM\'S CLUB",$3);print} is not working expectedly. You can do: awk -F ', +' '{gsub("SAMS CLUB","SAM'\''S CLUB",$3); print}' OFS=", " file.txt -F ', +' sets FS as comma, followed by one or more space(s). If you are unsure about the spaces, use character class [:blank:] instead to represent any horizontal whitespace and change OFS to meet your need as well. Example: % cat file.txt column1, coluumn2, coulumn3, column4, column51, item1, WALMART, 2.39, 502, item2, TARGET, 4.99, 523, item3, SAMS CLUB, 8.19, 154, item4, KROGER, 12.49, 335, item6, WEGMANS, 32.69, 756, item6, TARGET, 12.99, 257, item7, SAMS CLUB, 8.19, 92% awk -F ',[[:blank:]]+' '{gsub("SAMS CLUB","SAM'\''S CLUB",$3); print}' OFS=", " file.txtcolumn1, coluumn2, coulumn3, column4, column51, item1, WALMART, 2.39, 502, item2, TARGET, 4.99, 523, item3, SAM'S CLUB, 8.19, 154, item4, KROGER, 12.49, 335, item6, WEGMANS, 32.69, 756, item6, TARGET, 12.99, 257, item7, SAM'S CLUB, 8.19, 92 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/389203",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44638/"
]
} |
389,209 | Is "${blah}" allowed in POSIX sh, or does this require bash derived shells? | "${blah}" and "$blah" are portable shell syntax: they work on all POSIX-compliant shells as well as in traditional Bourne shells. POSIX also requires other features of variable expansion: String manipulation with ${VAR#PREFIX} , ${VAR##PREFIX} , ${VAR%SUFFIX} and ${VAR%%SUFFIX} . Conditional treatment of unset variables with ${VAR-DEFAULT} , ${VAR=DEFAULT} , ${VAR+FALLBACK} and ${VAR?MESSAGE} as well as the unset-or-empty variants with :- , := , :+ and :? . Variable length with ${#VAR} . In all cases, remember that the result of $… undergoes whitespace-splitting (more precisely, splitting at $IFS characters) and wildcard expansion (globbing) unless it's in double quotes (or a few other contexts that don't allow multiple words). You can look up what exists in POSIX by reading the specification. Modern versions of POSIX are identical to the Open Group Base Specifications (without optional components). Older versions are a subset of Single Unix v2 . Unix-like systems without a POSIX shell are extremely rare nowadays. /bin/sh is a non-POSIX Bourne shell on a few systems, notably Solaris, but a POSIX shell is available ( /usr/xpg4/bin/sh on Solaris, and you should have /usr/xpg4/bin ahead of /usr/bin in your PATH). If you need compatibility with Bourne shells, check the man page on the systems you're interested in, as there have been many versions of sh with slightly different sets of features. Sven Mascheck maintains a page with a lot of information . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/389209",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32972/"
]
} |
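A compact sketch exercising each of the POSIX expansions listed above, runnable in any plain sh:
v=/path/to/file.tar.gz
echo "${v##*/}"          # file.tar.gz   - strip longest prefix matching */
echo "${v%%.*}"          # /path/to/file - strip longest suffix matching .*
echo "${unsetvar-def}"   # def           - default used when the variable is unset
echo "${#v}"             # 20            - length of $v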
389,255 | In a text processing field is there a way to know if a tab is 8 characters in length (the default length) or less? For example, if I have a sample file with tab delimiter and the content of a field fit in less than one tab (≤7), and if I have a tab after that, then that tab will be only ‘tab size – field size’ in length. Is there a way to get the total length of tabs on a line? I'm not looking for the number of tabs (i.e. 10 tabs should not return 10) but the character length of those tabs. For the following input data (tab delimited between fields and only one tab): field0 field00 field000 last-fieldfld1 fld11 fld001 last-fldfd2 fld3 last-fld I expect to count length of tabs in each line, so 1199 | The TAB character is a control character which when sent to a terminal¹ makes the terminal's cursor move to the next tab-stop. By default, in most terminals, the tab stops are 8 columns apart, but that's configurable. You can also have tab stops at irregular intervals: $ tabs 3 9 11; printf '\tx\ty\tz\n' x y z Only the terminal knows how many columns to the right a TAB will move the cursor. You can get that information by querying the cursor position from the terminal before and after the tab has been sent. If you want to make that calculation by hand for a given line and assuming that line is printed at the first column of the screen, you'll need to: know where the tab-stops are² know the display width of every character know the width of the screen decide whether you want to handle other control characters like \r (which moves the cursor to the first column) or \b that moves the cursor back...) It can be simplified if you assume the tab stops are every 8 columns, the line fits in the screen and there are no other control characters or characters (or non-characters) that your terminal cannot display properly. With GNU wc , if the line is stored in $line : width=$(printf %s "$line" | wc -L)width_without_tabs=$(printf %s "$line" | tr -d '\t' | wc -L)width_of_tabs=$((width - width_without_tabs)) wc -L gives the width of the widest line in its input. It does that by using wcwidth(3) to determine the width of characters and assuming the tab stops are every 8 columns. For non-GNU systems, and with the same assumptions, see @Kusalananda's approach . It's even better as it lets you specify the tab stops but unfortunately currently doesn't work with GNU expand (at least) when the input contains multi-byte characters or 0-width (like combining characters) or double-width characters. ¹ note though that if you do stty tab3 , the tty device line discipline will take over the tab processing (convert TAB to spaces based on its own idea of where the cursor might be before sending to the terminal) and implement tab stops every 8 columns. Testing on Linux, it seems to handle properly CR, LF and BS characters as well as multibyte UTF-8 ones (provided iutf8 is also on) but that's about it. It assumes all other non-control characters (including zero-width, double-width characters) have a width of 1, it (obviously) doesn't handle escape sequences, doesn't wrap properly... That's probably intended for terminals that can't do tab processing. 
In any case, the tty line discipline does need to know where the cursor is and uses those heuristics above, because when using the icanon line editor (like when you enter text for applications like cat that don't implement their own line editor), when you press Tab Backspace , the line discipline needs to know how many BS characters to send to erase that Tab character for display. If you change where the tab stops are (like with tabs 12 ), you'll notice that Tabs are not erased properly. Same if you enter double-width characters before pressing Tab Backspace . ² For that, you could send tab characters and query the cursor position after each one. Something like: tabs=$( saved_settings=$(stty -g) stty -icanon min 1 time 0 -echo gawk -vRS=R -F';' -vORS= < /dev/tty ' function out(s) {print s > "/dev/tty"; fflush("/dev/tty")} BEGIN{out("\r\t\33[6n")} $NF <= prev {out("\r"); exit} {print sep ($NF - 1); sep=","; prev = $NF; out("\t\33[6n")}' stty "$saved_settings") Then, you can use that as expand -t "$tabs" using @Kusalananda's solution. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/389255",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72456/"
]
} |
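For the per-line version of the question above (assuming the default 8-column tab stops and single-width ASCII text), a sketch built on expand, in the spirit of the referenced approach:
while IFS= read -r line; do
  total=$(printf '%s\n' "$line" | expand -t 8 | awk '{ print length }')
  plain=$(printf '%s' "$line" | tr -d '\t' | wc -m)
  echo "$(( total - plain ))"
done < input.txt
On the sample data this prints 11, 9 and 9, one tab-length total per line.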
389,256 | I have an input file with non-fixed column number on which I would like to do some arithmetic calculations: input.txtID1 4651455 234 4651765 392 4652423 470ID2 16181020 176 16184958 869 16185889 347 16187777 231 The input file has tab-separated fields and always has a unique ID in column $1 (not duplicated). Not all rows have the same number of columns. What I would like to achieve is a tab-separated file as follows: output1.txt ID1 76 266 ID2 3762 62 1541 Basically it would print the $1 of the original file, then it would start from the second even column of the file ( $4 ) and subtract from its value the previous two columns ( $4 - $3 - $2 ) then do the same with all the even columns of the input file (e.g., $6 - $5 - $4 ; $8 - $7 - $6 ; ...). In my knowledge, this can be done with awk print , but I only know how to deal with it when my file has a fixed number of columns in every row. An even more ideal output for my needs would be the following: output2.txtID1 234 76 392 266 470ID2 176 3762 869 62 347 1541 231 Basically it would print the $1 of the original file, then interleave printing the odd columns from the input file to the columns as in output1.txt . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/389256",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/246707/"
]
} |
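The expected outputs in the question above are fully specified, so a candidate solution can be checked against the sample data; a minimal awk sketch for the first output (each even-numbered column minus the two columns before it):
awk -v OFS='\t' '{
  out = $1
  for (i = 4; i <= NF; i += 2)
    out = out OFS ($i - $(i-1) - $(i-2))
  print out
}' input.txt
and for the interleaved second output:
awk -v OFS='\t' '{
  out = $1
  for (i = 3; i <= NF; i += 2) {
    out = out OFS $i
    if (i + 1 <= NF)
      out = out OFS ($(i+1) - $i - $(i-1))
  }
  print out
}' input.txt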
389,263 | Is it possible to corrupt a RHEL7 system (specifically an XFS filesystem) by completely filling it with data? I can imagine bad things happening if disk writes fill to complete but I would also hope that there are protections in place. Would it matter whether / fills up or another partition? | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/389263",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248845/"
]
} |
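One way to probe the question above empirically, without risking a real system, is a throwaway loopback filesystem; a sketch (size and paths are illustrative, and this only lets you observe the behaviour safely, it is not a proof about protections):
truncate -s 512M /tmp/xfs.img
mkfs.xfs -q /tmp/xfs.img
sudo mount -o loop /tmp/xfs.img /mnt
sudo dd if=/dev/zero of=/mnt/fill bs=1M    # runs until it fails with ENOSPC
df -h /mnt                                 # shows 100% use
sudo umount /mnt && rm /tmp/xfs.img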
389,289 | I am used to the old method of calling init 0 to shutdown. Bad, I know; but when I tried it on my new Arch install I get this: # init 0Excess Arguments This confuses me because I thought systemd was supposed to support run levels? Looking at the man page, it mentions this: For compatibility with SysV, if systemd is called as init and a PID that is not 1, it will execute telinit and pass all command line arguments unmodified. That means init and telinit are mostly equivalent when invoked from normal login sessions. See telinit(8) for more information. Am I just using the wrong syntax or have I completely misunderstood systemd ? More Init/Systemd Information # command -v init/usr/bin/init# file /bin/init/usr/bin/init: symbolic link to ../lib/systemd/systemd# /lib/systemd/systemd --versionsystemd 234+PAM -AUDIT -SELINUX -IMA -APPARMOR +SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN default-hierarchy=hybrid # command -v telinit/usr/bin/telinit# file /bin/telinit/bin/telinit: symbolic link to systemctl# systemctl --versionsystemd 234+PAM -AUDIT -SELINUX -IMA -APPARMOR +SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN default-hierarchy=hybrid General System Info # uname -aLinux arch 4.12.5-1-ARCH #1 SMP PREEMPT Fri Aug 11 12:40:21 CEST 2017 x86_64 GNU/Linux# bash --versionGNU bash, version 4.4.12(1)-release (x86_64-unknown-linux-gnu) | For compatibility with SysV, […] systemd 234[…] -SYSVINIT […] You've built systemd without the compatibility option, so the compatibility behaviour described in the manual is not going to be present. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/389289",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171726/"
]
} |
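For reference, the systemd-native spellings of the old runlevel commands, which work regardless of how systemd was built:
systemctl poweroff                # init 0
systemctl reboot                  # init 6
systemctl isolate rescue.target   # roughly init 1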
389,305 | Current situation: When an application/script has been started as a background process from a terminal and is providing output to the console (i.e. to STDOUT, thus no output redirection), when I type something in the terminal, and the running process outputs something to the terminal as well at the same moment, the process' output gets "appended" to whatever I was typing at that moment and thus visually the input and output are garbled together. Desired result: I wonder if it's possible to have the "so far typed" input jump on the next line whenever the background process displays its output on the terminal (i.e. have the input text always displayed on the last line automatically, separately from the ongoing output). Basically I'm looking for a way to achieve the same results as the " logging synchronous " command allows in IOS on Cisco devices (better exemplified here ) which, once enabled, takes whatever you typed so far and puts it on a new line (always the last one) whenever there is any "system-related" output displayed during your typing. Additional stuff: I know that even though visually the input and output text are mixed together, if I continue typing my command all the way and press Enter it will execute fine, it's just rather hard to figure out exactly what you typed when the output catches you unawares. I'm on Debian Jessie with Gnome so I'm using Bash with the default Gnome Terminal but the same behavior exhibits when using a virtual console (e.g after CTRL+ALT+F1). I'm not sure if there is some very easy, well-known and obvious way to do it that I'm missing but I've been searching for the better part of last hour to no avail, so I apologize if this is a no-brainer. Or is this feature (if it exists at all) dependent on the terminal application used? Thanks for any input. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/389305",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/247110/"
]
} |
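For the interleaving problem above, a common workaround (short of a true logging synchronous equivalent, which plain Bash/readline does not offer out of the box as far as I know) is to keep the job's output off the interactive terminal entirely and watch it elsewhere, e.g. in a second terminal or tmux pane; a sketch:
some_long_job > /tmp/job.log 2>&1 &
tail -f /tmp/job.log    # run this in the other terminal or pane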
389,373 | I have an application which requires a producer to send filenames to a consumer , and have the producer indicate to the consumer when the last filename has been sent and the end of file has been reached. For simplicity, in the following example the producer is demonstrated with echo and printf , while the consumer is demonstrated with cat . I have tried to extrapolate the "here file" method without success, using <<EOF to indicate to the producer-wrapper (if such a thing exists) what to look for as an indication of end of file . If it worked cat should filter EOF from the output. Ex 1) input {echo "Hello World!" printf '\x04' echo "EOF"} <<EOF |\cat output bash: warning: here-document at line 146 delimited by end-of-file (wanted `EOF')Hello World!EOF Ex 2) input { echo "Hello World!" printf '\x04' echo "EOF"} |\cat <<EOF output bash: warning: here-document at line 153 delimited by end-of-file (wanted `EOF') Is it correct that the "here files" method for indicating a delimiter only works for static text, and not dynamically created text? -- the actual application -- inotifywait -m --format '%w%f' /Dir | <consumer> The consumer is waiting for files to be written to directory /Dir. It would be nice if, when a file "/Dir/EOF" was written, the consumer would detect a logical end-of-file condition simply by writing the shell script as follows: inotifywait -m --format '%w%f' /Dir |<</Dir/EOF <consumer> -- In response to Gilles' answer -- Is it theoretically possible to implement cat <<EOFhelloworldEOF as SpecialSymbol="EOF"{ echo hello echo world echo $SpecialSymbol} |\while read Line; do if [[ $Line == $SpecialSymbol ]] break else echo $Line fidone |\cat By theoretically possible I mean "would it support existing usage patterns and only enable extra usage patterns which had previously been illegal syntax?" - meaning no existing legal code would be broken. | For a pipe, the end of file is seen by the consumer(s) once all the producers have closed their file descriptor to the pipe and the consumer has read all the data. So, in: { echo foo echo bar} | cat cat will see end-of-file as soon as the second echo terminates and cat has read both foo\n and bar\n . There's nothing more for you to do. One thing to bear in mind though is that if one of the commands on the left side of the pipe starts some background process, that background process will inherit a fd to the pipe (its stdout), so cat will not see EOF until that process also dies or closes its stdout. As in: { echo foo sleep 10 & echo bar} | cat You see cat not returning before 10 seconds have passed. Here, you may want to redirect sleep 's stdout to something else like /dev/null if you don't want its (non)output to be fed to cat : { echo foo sleep 10 > /dev/null & echo bar} | cat If you want the writing end of the pipe to be closed before the last command in the subshell left of the | is run, you can close or redirect stdout in the middle of the subshell with exec , like: { echo foo exec > /dev/null sleep 10} | (cat; echo "cat is now gone") However note that most shells will still wait for that subshell in addition to the cat command. So while you'll see cat is now gone straight away (after foo is read), you'll still have to wait 10 seconds for the whole pipeline to finish. Of course, in that example above, it would make more sense to write it: echo foo | cat; sleep 10 As for <<ANYTHING ...content... ANYTHING , that is a here-document: it's to make the stdin of command a file that contains the content . It wouldn't be useful there. 
\4 is a byte that when read from a terminal makes the data held by the terminal device be flushed to the application reading from it (and when there's no data, read() returns 0 which means end-of-file). Again, not of any use here. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/389373",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/247579/"
]
} |
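Applied to the actual inotifywait application in the question above, the sentinel-file idea can be written as a plain read loop that stops at /Dir/EOF; a sketch:
inotifywait -m --format '%w%f' /Dir |
while IFS= read -r path; do
  [ "$path" = /Dir/EOF ] && break
  printf 'new file: %s\n' "$path"
done
Note that inotifywait itself only exits on its next write after the reader has gone away (when it receives SIGPIPE).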
389,383 | Here's my source: #!/bin/bashecho "Running script to free general cached memory!"echo "";echo "Script must be run as root!";echo "";echo "Clearing swap!";swapoff -a && swapon -a;echo "";echo "Clear inodes and page file!";echo 1 > /proc/sys/vm/drop_caches;echo ""; It clears caches and stuff, and it echoes that it needs to be run as root in the terminal. I basically just want the script to cease running if it detects it's not being executed as root. Example: "Running script to free general cached memory!""Warning: script must be run as root or with elevated privileges!""Error: script not running as root or with sudo! Exiting..." If run with elevated privileges, it just runs as normal. Any ideas? Thanks! | #!/bin/shif [ "$(id -u)" -ne 0 ]; then echo 'This script must be run by root' >&2 exit 1ficat <<HEADERHost: $(hostname)Time at start: $(date)Running cache maintenance...HEADERswapoff -a && swapon -aecho 1 >/proc/sys/vm/drop_cachescat <<FOOTERCache maintenance done.Time at end: $(date)FOOTER The root user has UID 0 (regardless of the name of the "root" account). If the effective UID returned by id -u is not zero, the user is not executing the script with root privileges. Use id -ru to test against the real ID (the UID of the user invoking the script). Don't use $EUID in the script as this may be modified by an unprivileged user: $ bash -c 'echo $EUID'1000$ EUID=0 bash -c 'echo $EUID'0 If a user did this, it would obviously not lead to privilege escalation, but may lead to commands in the script not being able to do what they are supposed to do and files being created with the wrong owner etc. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/389383",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248942/"
]
} |
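An alternative to aborting, where interactive use is acceptable, is to re-execute the script under sudo; a sketch (avoid it in scripts that must never prompt):
if [ "$(id -u)" -ne 0 ]; then
  exec sudo "$0" "$@"    # restart this script with root privileges
fi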
389,405 | I cannot ssh into my server from one of my Ubuntu installations, but if I use another Ubuntu installation or a Windows operating system, connecting with SSH works smoothly. So something is broken in one of my Ubuntu installations and I'm struggling to find the exact problem. I've tried reinstalling ssh/openssh-client/openssh- /ssh . Here are a few lines of verbose output: ssh username@MYSERVERADDRESS -vdebug1: Offering RSA public key: /home/user/.ssh/id_rsadebug1: Server accepts key: pkalg ssh-rsa blen 279debug1: Authentication succeeded (publickey).Authenticated to MYSERVER ([MYSERVERADDRESS]:22).debug1: channel 0: new [client-session]debug1: Requesting [email protected]: Entering interactive session.debug1: pledge: networkpacket_write_wait: Connection to MYSERVERADDRESS port 22: Broken pipe I tried many different solutions from googling but none worked. Deleted the .ssh directory, deleted /etc/ssh/ssh_config (it was automatically created again with default values). One more piece of information: the problem isn't on the server side, as I can SSH into the server using another OS on the same network. Update : Firewall disabled. Server hosted on cloud. I have 3 different machines, each dual booted with Windows and Linux. SSH works perfectly on all machines except one in which Linux has trouble connecting, and on the same machine using Windows everything works fine. A clearer view of point 4: of the 3 machines each loaded with Linux and Windows (dual boot), only one machine, while running Linux, has a problem with SSH. Let me know if you need more data from me (except SERVER ADDRESS and USERNAME). | I've found the solution for this problem (sorry for answering my own question). I'm answering it because if someone has this problem then he/she can use the solution that I found. Actually, the problem is on both sides, server as well as client. The server-side problem was that the /home/<user>/.ssh/known_hosts file on the server had an invalid entry for the Ubuntu installation, as both operating systems have the same hardware id and the same ip (static ip) but different keys. So what I did is: ssh-keygen -f /home/<user>/.ssh/known_hosts -R ip.ip.ip.ip In my case ip.ip.ip.ip is the static public ip of my network. Execute this command on both the server and the client machine, with ip.ip.ip.ip changed accordingly. ( ssh-keygen -R removes all keys belonging to the given hostname or IP from the known_hosts file selected with -f , so the stale entry is dropped and re-learned on the next connection.) You can also copy your client machine's known_hosts file to the other client machines or operating systems. Bingo, solved! | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/389405",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248969/"
]
} |
389,408 | I have a sequence file and want to count consecutive character "N" with its position of occurrence and the lengthSay I have a file named mySequence.fasta like this: >sequence-1ATCGCTAGCATNNNNNNNNNNNNNNCTAGCATCATGCNNNNNNATACGCATCACANNNNNNNNNCgcatATCAC and anticipated output should be like this: Position 12 N 14Position 38 N 6Position 56 N 9 Kindly help me to solve this by awk or sed providing my file name mySequence.fasta | You could do that with awk , whose match() that sets the RSTART and RLENGTH variable is quite useful for that: <mySequence.fasta awk -v C=N '{ i=0 while (match($0, C "+")) { printf "Position %d %s %d\n", i+RSTART, C, RLENGTH i += RSTART+RLENGTH-1 $0 = substr($0, RSTART+RLENGTH) }}' Or with perl using the @- and @+ arrays that record the start and end of matches: perl -ne 'printf "Position %d N %d\n", $-[0]+1, $+[0]-$-[0] while /N+/g' Another slightly faster (at least with my version of perl ) perl approach using the ( experimental ) (?{...}) regexp operator: perl -ne '0 while /N(?{$s=pos})N*(?{printf "Position %d N %s\n", $s, pos()-$s+1})/g' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/389408",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248974/"
]
} |
389,424 | At work we have to use Dell's SonicWall NetExtender software to connect to the company VPN. Some people use Windows and contractors (like myself), use whatever, which in my case is Manjaro (Arch-based) Linux. The issue is that I seem to be the only who can not connect via the client or CLI. What happens is the connection just hangs forever at Connecting to tunnel . Diagnostic outputs: netExtender log : 08/31/2017 10:01:32.792 [connect warn 4847] SSL_get_peer_certificate: err= (success?) self signed certificate in certificate chain08/31/2017 10:01:32.793 [general notice 4847] Connected.08/31/2017 10:01:32.793 [general notice 4847] Logging in...08/31/2017 10:01:32.886 [general notice 4847] Login successful.08/31/2017 10:01:32.928 [general error 4847] Version header not found08/31/2017 10:01:32.928 [epc info 4847] Server don't support EPC check. Just pass EPC check08/31/2017 10:01:33.047 [general notice 4847] SSL Connection is ready08/31/2017 10:01:34.049 [general info 4847] Using new PPP frame encoding mechanism08/31/2017 10:01:34.050 [general info 4847] Using PPP async mode (chosen by server) 08/31/2017 10:01:34.050 [general info 4847] Connecting tunnel... It stays this way without errors or timeouts for however long I let it run. journalctl -u NetworkManager seems to have no useful output for ppp , or anything related. journalctl -b --no-pager | grep pppd : aug 31 10:01:34 daniel-pc pppd[4893]: pppd 2.4.7 started by daniel, uid 1000aug 31 10:01:34 daniel-pc pppd[4893]: Using interface ppp0aug 31 10:01:34 daniel-pc pppd[4893]: Connect: ppp0 <--> /dev/pts/1aug 31 10:01:34 daniel-pc pppd[4893]: Cannot determine ethernet address for proxy ARPaug 31 10:01:34 daniel-pc pppd[4893]: local IP address <local ip>aug 31 10:01:34 daniel-pc pppd[4893]: remote IP address <remote ip>aug 31 10:19:46 daniel-pc pppd[4893]: Modem hangupaug 31 10:19:46 daniel-pc pppd[4893]: Connect time 18.2 minutes.aug 31 10:19:46 daniel-pc pppd[4893]: Sent 80 bytes, received 0 bytes.aug 31 10:19:46 daniel-pc pppd[4893]: Connection terminated.aug 31 10:19:46 daniel-pc pppd[4893]: Exit. The happens once I terminate the netExtender process. The same process worked previously on a previous installment of the same OS and Windows also, which is why I suspect an issue elsewhere. Output of uname -a : Linux daniel-pc 4.9.44-1-MANJARO #1 SMP PREEMPT Thu Aug 17 08:23:52 UTC 2017 x86_64 GNU/Linux | After trying several different methods of fixing, none working, I finally came across a forum post on the Gentoo forums about the same issue. It seems that the issue is that some files are named incorrectly and so a symbolic link needs to be created in order to successfully connect. Link to thread. To create a symbolic link to connect with NetExtender successfully you need to: cd /etc/ppp/ip-up.dln -s sslvpnroute sslvpnroute.sh This should allow you to get past the Connecting to tunnel... part.Once connected, NetExtender will create a file called sslvpnroutecleanup . You also need to link this file, so cd /etc/ppp/ip-down.dln -s sslvpnroutecleanup sslvpnroutecleanup.sh Note, you can only do that once you're successfully connected to the route. These steps fixed the issue for me. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/389424",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248990/"
]
} |
389,456 | Would it make much difference in time to log in on the machine that has the directory before doing a rm -rf on the directory, or just rm -rf the directory over NFS? | Of course, ssh is better. Nfs uses a complex network protocol with various remote procedure calls and data synchronization waiting times. In the case of ssh, these don't apply. Furthermore, there are many locks. File deletion in nfs works this way: (1) your rm command issues the unlink() syscall; (2) the nfs driver converts it to a sunrpc request and sends it to the nfs server; (3) the nfs server converts this sunrpc request back to an unlink() call; (4) it executes this unlink() call on the remote side; (5) after it succeeds, it sends back the rpc reply message equivalent of "all right, it is done" to the client; (6) the client-side kernel driver converts this back to the exit code 0 of the unlink() call of your original rm ; (7) rm iterates to the next file, goto 1. Now, the important thing is: between 2-7, rm has to wait. It could send the next unlink() call asynchronously, but it is a single-threaded, not event-oriented tool. Even if it could, it would still require tricky nfs mount flags. Until it gets the result, it waits. Nfs - and any network filesystem - is always much slower. In many cases, you can make recursive deletions quasi-instantaneous with a trick: First move the directory to a different name ( mv -vf oldfilms oldfilms- ) Delete in the background ( rm -rf oldfilms- & ) From many (but not all) aspects, this directory removal will look as if it had happened in practically zero time. Extension: As @el.pascado mentions in his excellent comment, actually steps 2-7 have to run 3x for every file: once to determine if it is a file or a directory (with an lstat() syscall), then to act accordingly. In the case of ordinary files, unlink() ; in the case of directories, opendir() , deleting all files/directories in it recursively, then closedir() , finally rmdir() . Finally, iterate to the next directory entry with a readdir() call. Thus, it requires 3 nfs RPC commands for files, and an additional 3 for directories. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/389456",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/249015/"
]
} |
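To make the comparison from the question concrete, the remote-shell variant is a single command (host and path are illustrative):
ssh filehost 'rm -rf -- /export/bigdir'
versus running rm -rf on an NFS mount of the same directory, which pays the RPC round trips described above for every single entry.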
389,465 | I'm using Awesome Windows Manager and two display ( LVDS-1 as the secondary display in left and HDMI-1 as the main display in right). Awesome, by default duplicates screens and I want to switch focus on my display using the keyboard. It's now possible by moving the mouse cursor to each monitor. Are there any hotkey commands or another way which does not require the mouse? | Yes there should be. Try Mod + Ctrl + j to focus the next screen. Then Mod + Ctrl + k should focus the previous screen. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/389465",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53918/"
]
} |
389,495 | I just found a way to start zsh when I start the bash on Windows from https://www.howtogeek.com/258518/how-to-use-zsh-or-another-shell-in-windows-10/ . It recommended adding the following code at the end of .bashrc . # Launch Zshif [ -t 1 ]; thenexec zshfi What does [ -t 1 ] mean? Is it just true? Then, can I just do this? exec zsh | [ ] is a shorthand for the test command. According to man test : -t FD True if FD is a file descriptor that is associated with a terminal. So if you are running bash as an interactive shell (in a terminal - see this thread for a terminology explanation), bash will be replaced by zsh. More about .bash* files: When bash is invoked as an interactive login shell , or as a non-interactive shell with the --login option , it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior. When a login shell exits , bash reads and executes commands from the files ~/.bash_logout and /etc/bash.bash_logout, if the files exists. When an interactive shell that is not a login shell is started, bash reads and executes commands from ~/.bashrc , if that file exists. This may be inhibited by using the --norc option. The --rcfile file option will force bash to read and execute commands from file instead of ~/.bashrc. Stéphane Chazelas's comment: Note that a shell can be interactive without stdout being a terminal, and a shell can be non-interactive with a terminal on stdout (like anytime you run a script within a terminal without redirecting/piping its output), and bash can read .bashrc even when not interactive (like in ssh host cmd where bash is the login shell of the user on host, or bash --login -c 'some code' ). case $- in *i*)... is the correct way to test if a shell is interactive. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/389495",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8557/"
]
} |
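The effect of -t 1 is easy to demonstrate: the same test flips as soon as stdout is a pipe instead of a terminal; a sketch:
bash -c '[ -t 1 ] && echo terminal || echo not-a-terminal'          # terminal
bash -c '[ -t 1 ] && echo terminal || echo not-a-terminal' | cat    # not-a-terminal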
389,514 | I have 100 files. I want to add the text 'new' before all the filenames. Please help me. Thanks in advance. Example: file1.txt ---> new_file1.txt....file100.txt ---> new_file100.txt Please provide a solution to rename multiple files. Here is what I have tried, but this is not a good solution. bala@SMS:~/test1$ ls -ltotal 0-rw-rw-r-- 1 bala bala 0 Aug 31 19:10 file1.txt-rw-rw-r-- 1 bala bala 0 Aug 31 19:10 file2.txt-rw-rw-r-- 1 bala bala 0 Aug 31 19:10 file3.txtbala@SMS:~/test1$ mv file1.txt new_file1.txtbala@SMS:~/test1$ mv file2.txt new_file2.txtbala@SMS:~/test1$ mv file3.txt new_file3.txtbala@SMS:~/test1$ lsnew_file1.txt new_file2.txt new_file3.txtbala@SMS:~/test1$ | You should use a loop to rename multiple files (the expansions are quoted so that names containing whitespace survive): for file in *do mv -v "$file" "new_$file"done The same code in one line: for file in *; do mv -v "$file" "new_$file"; done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/389514",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/249054/"
]
} |
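Where the Perl rename utility is available (sometimes installed as prename or file-rename, and not to be confused with util-linux rename), the same prefixing needs no loop; a sketch:
rename -n 's/^/new_/' *.txt    # -n previews the renames; drop it to actually rename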
389,520 | I've enabled compression (mounted with compress=lzo ) for my btrfs partition and used it for a while. I'm curious about how much benefit the compression brought me and am interested in the saved space value (sum of all file sizes) - (actual used space) . Is there any straightforward way to get this value, or would I have to write a script that sums up e.g. df output and compres it to btrfs filesystem df output? | In Debian/Ubuntu: apt install btrfs-compsizecompsize /mnt/btrfs-partition In Fedora: dnf install compsizecompsize /mnt/btrfs-partition output is like this: Processed 123574 files, 1399139 regular extents (1399139 refs), 69614 inline.Type Perc Disk Usage Uncompressed Referenced TOTAL 73% 211G 289G 289G none 100% 174G 174G 174G lzo 32% 37G 115G 115G It requires root ( sudo ) to work at all (otherwise SEARCH_V2: Operation not permitted ). It can be used on any directory (totalling the subtree), not just the whole filesystem from the mountpoint. On a system with zstd, but some old files still compressed with lzo, there will be rows for each of them. (The Perc column is the disk_size / uncompressed_size for that row, not how much of the total is compressed that way. Smaller is better.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/389520",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9266/"
]
} |
389,539 | we have 100% on / :

Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/vg08_root   20G   20G  132K 100% /

so when I do lvextend we get the following errors

# lvextend -L+5G /dev/mapper/vg08_root
  Couldn't create temporary archive name.
  Volume group "vg00" metadata archive failed.

how to resolve this? | You may be able to circumvent the space requirement for this operation by disabling the metadata backup with the -A|--autobackup option:

lvextend -An -L+5G /dev/mapper/vg08_root

If you do this, follow the operation with a vgcfgbackup to capture the new state. Post-mortem note: Since the ultimate goal was to expand the logical volume and resize the encapsulated filesystem, a one-step operation could have been used:

lvextend -An -L+5G --resizefs /dev/mapper/vg08_root

In this case, the filesystem type would have been automatically deduced, avoiding an attempt to use resize2fs in lieu of xfs_growfs . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/389539",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
389,573 | I'm logged into a Linux server and use ftp placeftp.thing.com to connect to a different server. I receive the message below and now I can transfer files to that server, however a basic Unix command echo $SHELL doesn't work. Is it because I'm in binary transfer mode or simply because of the FTP connection?

Connected to placeftp.thing.com (12.10.115.175).
220 You are connected to PLACEFTP.THING.COM.
331 User name okay, need password.
230 User logged in, proceed.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> echo $SHELL
?Invalid command

I might be asking my question incorrectly, but what am I misunderstanding about FTP connections? | FTP is not a remote shell like SSH or telnet. FTP is a protocol with only a few select commands. See the standard RFC 959 for details about the supported commands. The various terminal interfaces which exist and the various graphical FTP clients essentially just translate some local commands or clicks into an FTP command. For example, many terminal clients have ls or dir , which are translated into the FTP LIST command, put into STOR , get into RETR , etc. But there is no FTP command for the echo functionality you've tried, because such functionality does not really make sense when the single goal of the protocol is file transfer.
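Most terminal clients do let you send a raw protocol command with quote (sometimes called literal ), for example (the reply will vary by server):

ftp> quote SYST
215 UNIX Type: L8

but that only accepts FTP protocol verbs such as SYST or PWD , never shell commands.
| {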
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/389573",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/210133/"
]
} |
389,599 | Why can't I do curl --referer on my machine? When I do curl -i --referer https://google.com all I get is this: curl: no URL specified! curl: try 'curl --help' or 'curl --manual' for more information I am using Ubuntu 16.04.3 LTS. | As stated in man curl : curl [options] [URL...] --referer is an option that takes an argument, and https://google.com is the argument for it. You didn't provide any URL to fetch.
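For example ( example.com is just a placeholder target):

curl -i --referer https://google.com https://example.com/

This fetches example.com while sending google.com in the Referer request header.
| {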
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/389599",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/179748/"
]
} |
389,615 | I need internationalized utility that does the same thing as tr : gets character from stream and substitutes it with a corresponding character.Not a particular case solution like lower-to-upper, but a general case solution is needed.Without gorillion piped sed calls if possible. Note that tr does not work on Linux: it translates bytes, not characters. This fails with multibyte encodings. $ tr --version | head -n 1tr (GNU coreutils) 8.23$ echo $LC_CTYPEen_US.UTF-8$ echo 'Ångstrom' | tr Æ Œ Ņngstrom | GNU sed does work with multi-byte characters. So: $ echo 齯 | sed 'y/齯/ABŒ/'ABŒ It's not so much that GNU tr hasn't been internationalised but that it doesn't support multi-byte characters (like the non-ASCII ones in UTF-8 locales). GNU tr would work with Æ , Œ as long as they were single-byte like in the iso8859-15 character set. More on that at How to make tr aware of non-ascii(unicode) characters? In any case, that has nothing to do with Linux, it's about the tr implementation on the system. Whether that system uses Linux as a kernel or tr is built for Linux or use the Linux kernel API is not relevant as that part of the tr functionality takes place in user space. busybox tr and GNU tr are the most commonly found on distributions of software built for Linux and don't support multi-byte characters, but there are others that have been ported to Linux like the tr of the heirloom toolchest (ported from OpenSolaris) or of ast-open that do. Note that sed 's y doesn't support ranges like a-z . Also note that if that script that contains sed 'y/齯/ABŒ/' is written in the UTF-8 charset, it will no longer work as expected if called in a locale where UTF-8 is not the charset. An alternative could be to use perl : perl -Mopen=locale -Mutf8 -pe 'y/a-z齯/A-ZABŒ/' Above, the perl code is expected to be in UTF-8, but it will process the input in the locale's encoding (and output in that same encoding). If called in a UTF-8 locale, it will transliterate a UTF-8 Æ (0xc3 0x86) to a UTF-8 Œ (0xc5 0x92) and in a ISO8859-15 same but for 0xc6 -> 0xbc. In most shells, having those UTF-8 characters inside the single quotes should be OK even if the script is called in a locale where UTF-8 is not the charset (an exception is yash which would complain if those bytes don't form valid characters in the locale). If you're using other quoting than single-quotes, however, it could cause problems. For instance, perl -Mopen=locale -Mutf8 -pe "y/♣\`/&'/" would fail in a locale where the charset is BIG5-HKSCS because the encoding of \ (0x5c) also happens to be contained in some other characters there (like α : 0xa3 0x5c, and the UTF-8 encoding of ♣ happens to end in 0xa3). In any case, don't expect things like perl -Mopen=locale -Mutf8 -pe 'y/Á-Ź/A-Z/' to work at removing acute accents. The above is actually just perl -Mopen=locale -Mutf8 -pe 'y/\x{c1}-\x{179}/\x{41}-\x{5a}/' That is, the range is based on the unicode codepoints. So ranges won't be useful outside of very well defined sequences that happen to be in the " right " order in Unicode like A-Z , 0-9 . If you want to remove acute accents, you'd have to use more advanced tools like: perl -Mopen=locale -MUnicode::Normalize -pe ' $_ = NFKD($_); s/\x{301}//g; $_ = NFKC($_)' That is use Unicode normalisation forms to decompose characters, remove the acute accents (here the combining form U+0301 ) and recompose. Another useful tool to transliterate Unicode is uconv from ICU . 
For instance, the above could also be written as: uconv -x '::NFKD; \u0301>; ::NFKC;' Though it would only work on UTF-8 data. You'd need: iconv -t utf-8 | uconv -x '::NFKD; \u0301>; ::NFKC;' | iconv -f utf-8 to be able to process data in the user's locale.
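uconv also ships ready-made transforms; for instance, the Ångstrom example from the question can be stripped of its diacritic with the Latin-ASCII transform (again UTF-8 only, so wrap it in iconv as above for other locales):

$ echo 'Ångstrom' | uconv -x '::Latin-ASCII;'
Angstrom
| {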
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/389615",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/246234/"
]
} |
389,640 | While researching another problem, I noticed (annotations added)

me@it: ~ $ date ; sudo find / -maxdepth 1 -xdev -type d | grep -ve '/$' | sort
Thu Aug 31 14:58:25 MST 2017
/bin
/boot [*]
/.config
/dev
/etc
/home [*]
/lib
/lib64
/lost+found
/media
/mnt
/opt
/proc
/root
/run
/sbin
/srv
/sys
/tmp
/usr
/var

However I also know

me@it: ~ $ date ; lsblk
Thu Aug 31 14:52:58 MST 2017
NAME                  MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                     8:0    0 465.8G  0 disk
├─sda1                  8:1    0  16.6G  0 part
├─sda2                  8:2    0  97.7G  0 part
├─sda3                  8:3    0   500M  0 part  /boot [*]
├─sda4                  8:4    0     1K  0 part
└─sda5                  8:5    0   351G  0 part
  └─LVM2_crypt        254:0    0   351G  0 crypt
    ├─LVM2_crypt-swap 254:1    0   3.9G  0 lvm   [SWAP]
    ├─LVM2_crypt-root 254:2    0    20G  0 lvm   /
    └─LVM2_crypt-home 254:3    0 327.1G  0 lvm   /home [*]
sr0                    11:0    1  1024M  0 rom

So why is find / -xdev showing /boot and /home ? FWIW, the order of -maxdepth 1 and -xdev does not seem to be causing the problem:

me@it: ~ $ sudo su -
it ~ # date ; diff -wB <(find / -maxdepth 1 -xdev -type d | sort) <(find / -xdev -maxdepth 1 -type d | sort)
Thu Aug 31 15:09:53 MST 2017
it ~ # logout

Am I missing something? If not, why am I seeing /boot and /home in the 1st spew above? | find -xdev doesn't descend into directories that are mount points, but it still lists them. Try find / -xdev -maxdepth 2 , you'll see that /dev , /proc , /sys and any other mount point are listed but their contents are not. I think the rationale is that the mount point is present on the parent filesystem — even though what find lists is the root of the mounted filesystem and not the directory that serves as a mount point. There's no convenient way to omit mount points. If you do post-processing, you can check that they're on the same device as the root of the traversal by comparing the output of df -P or stat -c %d (on Linux). But that's quite a lot of overhead for a rare situation.
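A rough sketch of that post-processing (Linux stat ; keeps only directories on the same device as / , so mount points like /boot and /home drop out):

root_dev=$(stat -c %d /)
sudo find / -xdev -maxdepth 1 -type d | while IFS= read -r d; do
  [ "$(stat -c %d "$d")" = "$root_dev" ] && printf '%s\n' "$d"
done
| {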
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/389640",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38638/"
]
} |
389,717 | Makefile

my_test:
ifdef $(toto)
	@echo 'toto is defined'
else
	@echo 'no toto around'
endif

Expected behavior

$ make my_test
no toto around
$ make my_test toto
toto is defined

Current behavior

$ make my_test
no toto around
$ make my_test toto
no toto around
make: *** No rule to make target `toto'. Stop.

When I run make my_test I get the else text as expected, no toto around . However make my_test toto gives no toto around followed by make: *** No rule to make target `toto'. Stop. Makefile version $ make -v GNU Make 3.81 SLE version $ cat /etc/*release VERSION_ID="11.4" PRETTY_NAME="SUSE Linux Enterprise Server 11 SP4" PS: The point is to make make my_test verbose when toto is given; when toto is not given, the command should run silently | You need to remove the dollar around toto, and also pass toto from the command line differently: a bare word on the command line is treated by make as another target to build (hence "No rule to make target `toto'"), whereas variables are passed as name=value . Command line

make toto=1 my_test

Makefile

my_test:
ifdef toto
	@echo 'toto is defined'
else
	@echo 'no toto around'
endif
 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/389717",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142331/"
]
} |
389,731 | I have output, like the following, from postgres database

datname   | size
----
template1 | 6314 kB
template0 | 6201 kB
postgres  | 7938 kB
misago    | 6370 kB
(4 rows)

I want only these 6314, 6201, and 7938 values from output. How can I do this? awk, grep or sed are preferable. | One way with awk : print the next-to-last field of every line that carries a kB size:

awk '/kB/ { print $(NF - 1) }' file
6314
6201
7938
6370

If the unit can vary ( kB , MB , ...), split on the | separator and keep only the digits instead:

awk -F'|' 'NF == 2 { gsub(/[^0-9]/, "", $2); if ($2 != "") print $2 }' file

Both variants skip the header, the separator line and the (4 rows) footer. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/389731",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231557/"
]
} |
389,783 | I have a binary file named hello2 in my current working directory. To execute it I need to type ./hello2 and it shows the output. But when I use the following command ././hello2 it still works. Can you please explain how the shell is interpreting this command? | When you run the command $ ./hello2 the shell looks up the file hello2 in the directory . , i.e. in the current directory. It then runs the script or binary according to some rules (which are uninteresting in this context). The command $ ././hello2 also causes the shell to execute the file. This is because . and ./. are the same directory. Every directory has a . directory entry. This entry corresponds to the directory itself. So saying ./ is the same as saying ././ and ././././ etc. The only difference is that the system might have to do a few extra directory lookups (unless the shell is smart and spots the obvious simplification). Every directory also has a .. entry which points to its parent directory. This means that if the current directory is called alamin , then the following would also execute the file: $ ../alamin/hello2 as would $ ../alamin/./hello2 and $ .././alamin/./hello2 The root directory, / , is a special case. Its .. directory entry is the same as its . directory entry. This means that you can't go "above" it with /.. . See also: The number of links for a folder doesn't reflect the real status? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/389783",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174274/"
]
} |
389,830 | I am documenting some commands for future usage; some of them are too long, and I want to document them in multiple lines for visualization, and then just copy and paste them for usage. For example: Raw:

openssl pkcs12 -export -in intermediate/certs/lala-lira.cert.pem -inkey intermediate/private/lala-lira.key.pem -out intermediate/private/lala-lira.pfx

Presentational:

openssl pkcs12 -export
-in intermediate/certs/lala-lira.cert.pem
-inkey intermediate/private/lala-lira.key.pem
-out intermediate/private/lala-lira.pfx

The problem is if I copy and paste the presentational form, each line will be interpreted as one individual and independent command. | End every line but the last with a backslash. To use your command as an example:

openssl pkcs12 -export \
-in intermediate/certs/lala-lira.cert.pem \
-inkey intermediate/private/lala-lira.key.pem \
-out intermediate/private/lala-lira.pfx

What you are doing here is escaping the end-of-line, causing the shell to treat it as non-delimiting whitespace. Since the escape marker only has an effect upon the next character, the next character must be the end-of-line. (That means no trailing spaces allowed; beware!) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/389830",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/168378/"
]
} |
389,837 | I have a directory where daily subdirectories are created, literally named according to $date . How can I delete folders and their contents that are older than 7 days according to the YYYYMMDD in the file name and not the metadata date? Say I have (skipped some for brevity):

20170817
20170823
20170828
20170901

I would end up with the following folders (i.e., these should be kept):

20170828
20170901

I created a variable that holds the date 7 days ago: dt_prev=$(date -d "`date`-7days" +%Y%m%d) My thought was to ls -l a list of these folder names and compare row by row, but this involves cleaning that list, etc., and I figure there has to be an easier way. | I think the solution would be a simpler version of what glenn jackman posted , e.g.

seven_days=$(date -d "7 days ago" +%Y%m%d)
for f in [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]; do
  [ -d "$f" ] || continue
  (( $f < $seven_days )) && echo rm -r "$f"
done

Remove the echo if the results look correct. The -d test ensures that we only inspect (remove) directories. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/389837",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245705/"
]
} |
389,864 | How to generate the output below using a bash script or command. Desired output:

contacts.USA | Name:John Due | Gender:Male | Age:21 | Address: Texas
contacts.USA | Name:Ed Mundo | Gender:Male | Age:41 | Address: California
contacts.BRAZIL | Name:Tom Paul | Gender:Male | Age:26 | Address: Sau Paulo

Example input: I have hundreds of contact files for different countries. Contacts.USA:

Name:John Due
Gender:Male
Age:21
Address: Texas

Name:Ed Mundo
Gender:Male
Age:41
Address: California

Contacts.Brazil:

Name:Tom Paul
Gender:Male
Age:26
Address: Sau Paulo

I'm using the unix cmd below but am unable to generate the desired output. grep -E 'Name|Gender|Age|Address' contacts.* The output of this cmd shows the result one field per row:

contacts.USA Name:John Due
contacts.USA Gender:Male
contacts.USA Age:21
contacts.USA Address: Texas
contacts.USA Name:Ed Mundo
contacts.USA Gender:Male
contacts.USA Age:41
contacts.USA Address: California
contacts.BRAZIL Name:Tom Paul
contacts.BRAZIL Gender:Male
contacts.BRAZIL Age:26
contacts.BRAZIL Address: Sau Paulo

 | awk solution: Assuming input files contacts.USA and contacts.BRAZIL .

awk '/Name/{ printf "%s | %s",FILENAME,$0 } /Gender|Age|Address/{ printf " | %s",$0; if($0~/Address/) print "" }' contacts.*

/Name/{ printf "%s | %s",FILENAME,$0 } - captures a line with the Name keyword, appending the filename FILENAME to the start of the resulting string. if($0~/Address/) print "" - prints the record separator (newline) after encountering a line with the Address keyword. The output:

contacts.BRAZIL | Name:Tom Paul | Gender:Male | Age:26 | Address: Sau Paulo
contacts.USA | Name:John Due | Gender:Male | Age:21 | Address: Texas
contacts.USA | Name:Ed Mundo | Gender:Male | Age:41 | Address: California
 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/389864",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/244912/"
]
} |
389,879 | I would like to set up wpa_supplicant and openvpn to run as a non-root user, like the recommended setup for wireshark . I can't find any documentation for what +eip in this example means: sudo setcap cap_net_raw,cap_net_admin,cap_dac_override+eip /usr/bin/dumpcap | The way capabilities work in Linux is documented in man 7 capabilities . Permission checks are done against a process's effective capability set. File capabilities are used during an execv call (which happens when you want to run another program [1] ) to calculate the new capability sets for the process. Files have two capability sets, permitted and inheritable , plus an effective bit . Processes have three capability sets: effective , permitted and inheritable . There is also a bounding set, which limits which capabilities may be added later to a process' inherited set and affects how capabilities are calculated during a call to execv . Capabilities can only be dropped from the bounding set , not added. Permission checks for a process are made against the process' effective set . A process can raise its capabilities from the permitted to the effective set (using the capget and capset syscalls; the recommended APIs are respectively cap_get_proc and cap_set_proc ). Inheritable and bounding sets and file capabilities come into play during an execv syscall. During execv , new effective and permitted sets are calculated and the inherited and bounding sets stay unchanged. The algorithm is described in the capabilities man page:

P'(permitted)   = (P(inheritable) & F(inheritable)) | (F(permitted) & cap_bset)
P'(effective)   = F(effective) ? P'(permitted) : 0
P'(inheritable) = P(inheritable)   [i.e., unchanged]

Where P is the old capability set, P' is the capability set after execv and F is the file capability set. If a capability is in both the process's inheritable set and the file's inheritable set (intersection/logical AND), it is added to the permitted set . The file's permitted set is added (union/logical OR) to it (if it is within the bounding set). If the effective bit in file capabilities is set, all permitted capabilities are raised to effective after execv . Capabilities in the kernel are actually tracked per thread, but regarding file capabilities this distinction is usually relevant only if the process alters its own capabilities. In your example, the capabilities cap_net_raw , cap_net_admin and cap_dac_override are added to the inherited and permitted sets and the effective bit is set. When your binary is executed, the process will have those capabilities in the effective and permitted sets if they are not limited by a bounding set. [1] For the fork syscall, all the capabilities and the bounding set are copied from the parent process. Changes in uid also have their own semantics for how capabilities are set in the effective and permitted sets.
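A quick way to verify the result, sketched here with tools from the libcap package ( getcap , capsh ; the dumpcap path is the one from the question):

sudo setcap cap_net_raw,cap_net_admin,cap_dac_override+eip /usr/bin/dumpcap
getcap /usr/bin/dumpcap            # prints the file's capability sets and flags
grep Cap /proc/$$/status           # raw capability masks of the current process
capsh --decode=0000000000003002    # decodes such a mask: cap_dac_override,cap_net_admin,cap_net_raw
| {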
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/389879",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/249275/"
]
} |
389,881 | Whenever I open a new instance of a terminal, the history is empty. Why is that? Do I need to set something up? In bash there's no need for this, though. | Bash and zsh have different defaults. Zsh doesn't save the history to a file by default. When you run zsh without a configuration file, it displays a configuration interface. In this configuration interface, select (1) Configure settings for history, i.e. command lines remembered and saved by the shell. (Recommended.) then review the proposed settings and select (0) Remember edits and return to main menu (does not save file yet) Repeat for the other submenus for (2) completion, (3) keybindings and (4) options, then select (0) Exit, saving the new settings. They will take effect immediately. from the main menu. The recommended history-related settings are

HISTFILE=~/.histfile
HISTSIZE=1000
SAVEHIST=1000
setopt appendhistory

I would use a different name for the history file, to indicate it's zsh's history file. And 1000 lines can be increased on a modern system.

HISTFILE=~/.zsh_history
HISTSIZE=10000
SAVEHIST=10000
setopt appendhistory

These lines go into ~/.zshrc , by the way. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/389881",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
389,896 | I want to install a local tar file in such a way that it's immediately available on the command line, just as if I had used "sudo apt-get install XYZ" (assume there are no dependencies). I know how to unpack the tar and then compile with configure/make, but that just leaves me with an executable that I have to add a path to later. I suppose I could copy to /bin and be done with it, but I was just wondering what the standard practice was here. As a bonus, it would be nice to know how to do this with RPMs and other types of packages as well. | If the program you want to install follows good practice, you can install it with

./configure
make
make install

./configure checks if your system meets all requirements and configures the install options. make compiles everything, and make install copies all necessary files to the right places (for the default prefix under /usr/local this usually needs root, i.e. sudo make install ). You don't want to do the last step by hand, because it would be quite tedious to get all the libraries, man pages and whatnot to the right place. You can also define where the packages should be installed. For example, if you want to install a package in your home directory (because e.g. you don't have admin rights), you can use

./configure --prefix="$HOME"/somefolder

make install will then install it to this folder. Usually you won't need this, though.
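As for the bonus question: package files are handed straight to the package manager rather than configured and compiled. For instance ( foo.rpm / foo.deb being placeholder names):

sudo rpm -i foo.rpm          # RPM-based systems (or: sudo dnf install ./foo.rpm)
sudo apt install ./foo.deb   # Debian/Ubuntu (or: sudo dpkg -i foo.deb)
| {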
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/389896",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/249285/"
]
} |
389,912 | As the subject mentions, I can successfully ping the IP address of a public site like Google, i.e. ping 216.58.200.238 works, but ping www.google.com does not work and the error name or service not known is returned. My machine is a virtual machine deployed in VMware Workstation, 64-bit CentOS 7. I have another machine with exactly the same settings; let's call it B and the previous one A. ping www.google.com works on machine B but not machine A. I also made sure that NetworkManager is disabled on both machines. Below is the exact same network setting for both A and B except the IP address.

TYPE=Ethernet
DEVICE=ens33
NM_CONTROLLED=no
BOOTPROTO=static
DNS=8.8.8.8
IPADDR=192.168.0.12(for A)/13(for B)
NETMASK=255.255.255.0
GATEWAY=192.168.0.1

 | Check your resolver configuration file, /etc/resolv.conf : it contains information that is read by the resolver routines the first time they are invoked by a process. The file is designed to be human readable and contains a list of keywords with values that provide various types of resolver information. If this file does not exist, only the name server on the local machine will be queried; the domain name is determined from the hostname and the domain search path is constructed from the domain name. Edit /etc/resolv.conf and add your nameserver lines at the top of the file so they are used first, optionally removing or commenting out already listed servers. Currently, you may include a maximum of three nameserver lines. Note : Changes made to /etc/resolv.conf take effect immediately. Source: man resolv.conf
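For example, to use the Google resolver already present in your ifcfg file (8.8.4.4 added here as a secondary, purely for illustration):

nameserver 8.8.8.8
nameserver 8.8.4.4

Also note that the ifcfg files used by the CentOS network scripts expect the keys DNS1= , DNS2= , ... rather than DNS= ; with the correct key the scripts will populate resolv.conf for you.
| {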
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/389912",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/244690/"
]
} |
389,938 | I am investigating how to open a UDP port, specifically port 1194. This is what I've done: 1) Check if the port is open:

% sudo nmap -sU -p 1194 <hostnameOfMyMachine>
-> PORT     STATE  SERVICE
-> 1194/udp closed unknown

2) Open the port:

% sudo iptables -A INPUT -p udp --dport 1194 -d 0/0 -s 0/0 -j ACCEPT

Finally, I repeated step 1) to check if the port has been opened, but the same output is displayed:

-> PORT     STATE  SERVICE
-> 1194/udp closed unknown

Does anybody know why the port did not open? How can I open it?

% sudo netstat --inet --inet6 -lnp
Conexiones activas de Internet (solo servidores)
Proto Recib Enviad Dirección local    Dirección remota   Estado  PID/Program name
udp       0      0 0.0.0.0:41503      0.0.0.0:*                  1372/avahi-daemon:
udp       0      0 127.0.0.1:53       0.0.0.0:*                  2344/dnsmasq
udp       0      0 0.0.0.0:68         0.0.0.0:*                  2303/dhclient
udp       0      0 0.0.0.0:17500      0.0.0.0:*                  5935/dropbox
udp       0      0 0.0.0.0:17500      0.0.0.0:*                  6025/dropbox
udp       0      0 0.0.0.0:5353       0.0.0.0:*                  1372/avahi-daemon:
udp6      0      0 :::44533           :::*                       1372/avahi-daemon:
udp6      0      0 :::5353            :::*                       1372/avahi-daemon:

% sudo iptables -L INPUT -nv
Chain INPUT (policy ACCEPT 134K packets, 19M bytes)
 pkts bytes target  prot opt in  out  source     destination
    3    84 ACCEPT  udp  --  *   *    0.0.0.0/0  0.0.0.0/0    udp dpt:1194
    0     0 ACCEPT  udp  --  *   *    0.0.0.0/0  0.0.0.0/0    udp dpt:1194
    0     0 ACCEPT  udp  --  *   *    0.0.0.0/0  0.0.0.0/0    udp dpt:1194

 | A UDP port is considered open by nmap if a packet sent to this port results in a reply from this port. This, of course, means that there needs to be some service running at the port. Port 1194 is typically OpenVPN. Even if there is some service running at that port, it might not be reported as open if a firewall is filtering packets to or from this port. That's why you've tried to add a rule to the firewall ( iptables ) to let these packets pass. But this rule does not help if there is no service running on this port in the first place.
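To see the difference, start a throwaway listener on the port and scan again; a sketch (flag spelling varies between netcat flavours):

nc -u -l 1194      # traditional netcat may need: nc -u -l -p 1194

With something bound to the port, nmap should now report it as open (or open|filtered , since UDP services often stay silent to unsolicited probes).
| {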
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/389938",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/132673/"
]
} |
389,969 | I've compiled, linked and created a program in C++; now I have foobar.out . I want to be able to put it into the bin directory and use it like system-wide commands, e.g. ssh, echo, bash, cd... How can I achieve that? | There are two ways of allowing you to run the binary without specifying its path (not including creating aliases or shell functions to execute it with an absolute path for you): Copy it to a directory that is in your $PATH . Add the directory where it is to your $PATH . To copy the file to a directory in your path, for example /usr/local/bin ( where locally managed software should go ), you must have superuser privileges, which usually means using sudo : $ sudo cp -i mybinary /usr/local/bin Care must be taken not to overwrite any existing files in the target directory (this is why I added -i here). To add a directory to your $PATH , add a line in your ~/.bashrc file (if you're using bash ): PATH="$HOME/bin:$PATH" ... if the binary is in $HOME/bin . This has the advantage that you don't need to have superuser privileges or change/add anything in the base system on your machine. You just need to move the binary into the bin directory of your home directory. Note, changes to .bashrc take effect when the file is sourced the next time, which happens if you open a new terminal or log out and in again, or run source ~/.bashrc manually. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/389969",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/249332/"
]
} |
390,031 | I have bought new RAM, and it's not detected. In short I got new RAM, 16 GB, to switch my old one, 4 GB + 4 GB. New one isn't detected by my laptop OS(?)/software(?). But when I installed it, it didn't work, I got only 4 GB. Long one I got new RAM, 16 GB, to switch my old one, 4 GB + 4 GB, so it will be 20 GB. But when I installed it, it didn't work, what I mean is I opened System Monitor and it showed (and shows) only 4 GB. But the things is, some programs/utils cat detect it here I will paste all the output of the commands I've tried inxi -Fxz System: Host: lmde Kernel: 4.8.0-53-generic x86_64 (64 bit gcc: 5.4.0) Desktop: Cinnamon 3.4.3 (Gtk 3.18.9-1ubuntu3.3) Distro: Linux Mint 18.2 SonyaMachine: System: LENOVO product: 20250 v: Lenovo Z710 Mobo: LENOVO model: Durian 7A1 v: 31900004Std Bios: LENOVO v: 7FCN35WW date: 09/02/2013CPU: Quad core Intel Core i7-4700MQ (-HT-MCP-) cache: 6144 KB flags: (lm nx sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx) bmips: 19156 clock speeds: max: 2400 MHz 1: 2400 MHz 2: 2147 MHz 3: 2350 MHz 4: 2400 MHz 5: 2400 MHz 6: 2400 MHz 7: 2399 MHz 8: 2400 MHzGraphics: Card-1: Intel 4th Gen Core Processor Integrated Graphics Controller bus-ID: 00:02.0 Card-2: NVIDIA GK107M [GeForce GT 745M] bus-ID: 01:00.0 Display Server: X.Org 1.18.4 drivers: intel (unloaded: fbdev,vesa) FAILED: nouveau Resolution: [email protected] GLX Renderer: Mesa DRI Intel Haswell Mobile GLX Version: 3.0 Mesa 12.0.6 Direct Rendering: YesAudio: Card-1 Intel 8 Series/C220 Series High Definition Audio Controller driver: snd_hda_intel bus-ID: 00:1b.0 Card-2 Intel Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller driver: snd_hda_intel bus-ID: 00:03.0 Sound: Advanced Linux Sound Architecture v: k4.8.0-53-genericNetwork: Card-1: Intel Wireless 7260 driver: iwlwifi bus-ID: 07:00.0 IF: wlp7s0 state: up mac: <filter> Card-2: Qualcomm Atheros QCA8171 Gigabit Ethernet driver: alx port: 3000 bus-ID: 08:00.0 IF: enp8s0 state: down mac: <filter>Drives: HDD Total Size: 1240.3GB (6.3% used) ID-1: /dev/sda model: ST1000LM014 size: 1000.2GB ID-2: /dev/sdd model: ADATA_SP580 size: 240.1GBPartition: ID-1: / size: 220G used: 20G (10%) fs: ext4 dev: /dev/sdd2RAID: No RAID devices: /proc/mdstat, md_mod kernel module presentSensors: System Temperatures: cpu: 59.0C mobo: 59.0C gpu: 45.0 Fan Speeds (in rpm): cpu: N/AInfo: Processes: 294 Uptime: 1:24 Memory: 2301.1/3863.2MB Init: systemd runlevel: 5 Gcc sys: 5.4.0 Client: Shell (zsh 5.1.1) inxi: 2.2.35 sudo lshw -short -C memory (pasting with sudo so others cat just copy-paste) H/W path Device Class Description============================================================/0/0 memory 128KiB BIOS/0/4/b memory 32KiB L1 cache/0/4/c memory 256KiB L2 cache/0/4/d memory 6MiB L3 cache/0/a memory 32KiB L1 cache/0/2a memory 20GiB System Memory/0/2a/0 memory 16GiB SODIMM DDR3 Synchronous 1600 MHz (0,6 ns)/0/2a/1 memory DIMM [empty]/0/2a/2 memory 4GiB SODIMM DDR3 Synchronous 1600 MHz (0,6 ns)/0/2a/3 memory DIMM [empty] sudo lshw -class memory *-firmware description: BIOS vendor: LENOVO physical id: 0 version: 7FCN35WW date: 09/02/2013 size: 128KiB capacity: 4032KiB capabilities: pci upgrade shadowing cdboot bootselect edd int13floppynec int13floppytoshiba int13floppy360 int13floppy1200 int13floppy720 int13floppy2880 int9keyboard int10video acpi usb biosbootspecification uefi *-cache:0 description: L1 cache physical id: b slot: L1 Cache size: 32KiB capacity: 32KiB capabilities: synchronous internal write-back instruction *-cache:0 description: L1 cache physical id: b slot: 
L1 Cache size: 32KiB capacity: 32KiB capabilities: synchronous internal write-back instruction configuration: level=1 *-cache:1 description: L2 cache physical id: c slot: L2 Cache size: 256KiB capacity: 256KiB capabilities: synchronous internal write-back unified configuration: level=2 *-cache:2 description: L3 cache physical id: d slot: L3 Cache size: 6MiB capacity: 6MiB capabilities: synchronous internal write-back unified configuration: level=3 *-cache description: L1 cache physical id: a slot: L1 Cache size: 32KiB capacity: 32KiB capabilities: synchronous internal write-back data configuration: level=1 *-memory description: System Memory physical id: 2a slot: System board or motherboard size: 20GiB *-bank:0 description: SODIMM DDR3 Synchronous 1600 MHz (0,6 ns) product: CT204864BF160B.C16 vendor: Unknown physical id: 0 serial: A4205EAD slot: DIMM0 size: 16GiB width: 64 bits clock: 1600MHz (0.6ns) *-bank:1 description: DIMM [empty] product: Empty vendor: Empty physical id: 1 serial: Empty slot: DIMM1 *-bank:2 description: SODIMM DDR3 Synchronous 1600 MHz (0,6 ns) product: M471B5173BH0-YK0 vendor: Samsung physical id: 2 serial: 136B8093 slot: DIMM2 size: 4GiB width: 64 bits clock: 1600MHz (0.6ns) *-bank:3 description: DIMM [empty] product: Empty vendor: Empty physical id: 3 serial: Empty slot: DIMM3 sudo dmidecode free -m total used free shared buff/cache availableMem: 3863 2406 277 430 1178 696Swap: 0 0 0 cat /proc/cmdline BOOT_IMAGE=/boot/vmlinuz-4.8.0-53-generic root=UUID=91af3ab8-8c93-40ef-930a-2dc7038f2dfc ro elevator=deadline quiet splash vt.handoff=7 dmesg | grep -i memory did but it's very long Also:-I booted to BIOS and it showed that there is 20 GB (but it said in MB, something like 20480 MB)-I visited Intel page for my processor(google for Intel® Core™ i7-4700MQ Processor can't paste links, have newbie restrictions) for my CPU and it said that it does support 32 GB-I booted to Windows 10 live CD and it show that it has 20 GB but only 4 GB is available-I did memtest86 here are the screenshot with results what I don't like about it is that in the top left corner it shows that [c]Memory: 4009 MB[/c]. So was the 16 GB detected?-I booted to Linux Mint live CD and it showed exactly the same as current version (4 GB).-I found on internet that it can be caused by that contacts on the module was made "dirty" with dirt on my hands, so plugged it out got, wipe with ethanol, I did that but not with ethanol, instead of it I used vodka same results didn't work.-I swapped RAM modules, didn't work.I don't remember exactly, but also with installation the GRUB menu broke, what I mean is I got black screen for 3 seconds (which I can configure from /etc/default/grub ). The only thing where I kind of "broke the rule" is before buying it, I visited (google for Lenovo Lenovo Z710 compatible upgrades crucial ) crucial website for my laptop (they have very fluent interface to choose the upgrades) and it says that the max RAM is 16 GB (8 + 8 ), I've ignored it. [update] Antz answer kind of solves my problem, but the real answer with code example was given on official Linux Mint Forum . sudo dmidecode gave huge output, but there it said about error that I had Handle 0x0005, DMI type 5, 24 bytesMemory Controller Information Error Detecting Method: None Error Correcting Capabilities: None Supported Interleave: One-way Interleave Current Interleave: One-way Interleave Maximum Memory Module Size: 8192 MB | In summary, one of two things generally happens. 
The memory works, but is limited to the maximum amount supported by the motherboard, or the memory doesn't work at all. Let me go into a bit more detail. On every motherboard, there is a controller for accessing the RAM. The limiting factor is how much memory can be accessed (or addressed) by that memory controller. Theoretically, a 64-bit CPU can access 2^64 bytes of RAM. For practical reasons, however, the number of address lines actually etched into a motherboard is much smaller, and the controller is built to access up to a specific number of addresses. It can address fewer memory locations just fine as well. That determines the range and maximum amount of memory. So when memory is installed with more addressable bytes than the controller understands, the best outcome is that only the lower portion of the RAM is used. However, because of the way memory is constructed, it's also possible that the larger memory won't work at all, as is the case with yours. Again, it depends on how the motherboard handles memory errors. This Stack Exchange post gives more detailed information concerning your RAM issue: What happens when more RAM is installed than the motherboard supports? You can also read this: RAM . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/390031",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/185701/"
]
} |
390,033 | I have a Ubuntu 16.04 server install that was initially installed with the root user only, so root's home directory is /root . If I add another user such as bob , bob's home directory is created in /home/ as expected. If I want to add a public key for bob to ssh in with, I add /home/bob/.ssh/authorized_keys and put bob's public key in the authorized keys file. Is this the correct way so far? The problem is when I try ssh [email protected] I get Permission denied (publickey) The .ssh directory permissions are set to 700 and the authorized_keys file is set to 600. In my sshd_config the path to the key file is the default #AuthorizedKeysFile %h/.ssh/authorized_keys I set ssh logging to verbose but it only shows Failed publickey for.... What could I be doing wrong? Is it only looking in /root/.ssh for the key file? | Yes, that is the correct way to set it up, and sshd resolves the default AuthorizedKeysFile %h/.ssh/authorized_keys against bob's home, so it is not looking in /root/.ssh . The usual culprit with this setup is ownership rather than the path: if you created the directory and file as root, they belong to root, and sshd then can't read them on bob's behalf (with the default StrictModes it also rejects unsafe ownership/permissions outright). Check and fix:

chown -R bob:bob /home/bob/.ssh
chmod 700 /home/bob/.ssh
chmod 600 /home/bob/.ssh/authorized_keys

Also make sure /home/bob itself is not group- or world-writable, that the public key is a single unbroken line in the file, and that the client actually offers the matching private key (run ssh -v to see which keys are tried). If it still fails, the server's auth log (or a second sshd -d on a spare port) will state exactly which check rejected the key. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/390033",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/105605/"
]
} |
390,061 | I'd like to set the parent of a newly started process, is that possible? Example, let us assume that we start a new desktop environment session via a login manager, so our process tree would look something like this:

init
 \- login-manager
     \- de-session

Now I do have a script to launch my most essential applications which should start with the session; for various reasons I'd like to keep these as a script and not migrate them to the autostart manager of any DE. It looks something like this:

#!/usr/bin/env
application1 &
application2 &
application3 &

After running this automatically at the start of the session, our process tree looks like this:

init
 |- application1
 |- application2
 |- application3
 \- login-manager
     \- de-session

But what I'd actually like is to "reparent" these processes under the session, like this:

init
 \- login-manager
     \- de-session
         |- application1
         |- application2
         \- application3

So, is there any way to "reparent" a process under another one? | There is no interface in Linux for changing the parent of a running process, so you cannot "reparent" them after the fact. What you are seeing is the kernel's only reparenting mechanism working against you: your script backgrounds the applications and then exits, so they become orphans and are adopted by init (or by the nearest ancestor that has marked itself a "child subreaper" with prctl(PR_SET_CHILD_SUBREAPER) ). Two ways to get the tree you want: keep the launching script alive so its children stay below the session, e.g. end it with

application1 &
application2 &
application3 &
wait

(the script itself then sits under de-session , with the applications below it); or have the session process mark itself a subreaper with prctl(PR_SET_CHILD_SUBREAPER, 1) , so orphans are adopted by it instead of init . The second requires control over the session's code, so the wait approach is usually the practical one. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/390061",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4106/"
]
} |
390,064 | I am a teacher and I use Linux which is great! But students are curious about this "new" operating system they do not know, and in the GUI they tweak program settings, which affects hidden files inside /home/user :

[profesor@240-kateder ~]$ ls -a
.              .dbeaver4         .gtkrc-2.0        .sane
..             .dbeaver-drivers  .icons            .swt
.bash_history  .dropbox          .kde4             .themes
.bash_logout   .eclipse          .local            .thumbnails
.bash_profile  .esd_auth         .lyx              .ViberPC
.bashrc        .FlatCAM          .masterpdfeditor  .w3m
.cache         .FreeCAD          .mozilla          .Xauthority
.config        .gimp-2.8         .pki              .xinitrc
.convertall    .gnupg            .qucs             .xournal

This is unwanted because over time program interfaces will change so dramatically that programs will be missing toolbars, buttons, main menus, status menus... and students end up with a completely different GUI, so they are calling me about the issue and we spend too much time. Now to optimize this I have to make sure that program settings (hidden files inside /home/user ) aren't changed, so I tried to change them like sudo chmod -R 555 ~/.* but this didn't work out well for all of the programs, because some of the programs want to manipulate their settings at boot and they therefore fail to start without sudo . And students don't have sudo privileges. But sudo chmod -R 555 ~/.* worked for .bash_profile , .bash_logout , .bashrc , .bash_history , .xinitrc so I was thinking I would:

1. prevent users from deleting .bash_profile , .bash_logout , .bashrc , .bash_history , .xinitrc
2. copy all hidden setting files into a folder /opt/restore_settings
3. program .bash_profile to clean up all settings in the user's home directory on login using rm -r ~/.* (I assume this wouldn't delete files from point 1, if I protect them) and then restore settings from /opt/restore_settings

I want to know your opinion about this idea, or if there is any better way to do it. And I need a way to prevent users from deleting files from point 1; otherwise this can't work. | Totally different approach: Create a group students , give each student his own account with group membership in students . Have a script that restores a given home directory from a template to a known good state, possibly deleting all extra dot files. Tell students about this script. If you have a number of computers, centralize this approach (user management on a single central server), and use a central file server for student home directories, so each student gets the same home directory on any machine. Together with proper (basic chmod ) permissions everywhere, this will ensure that each student can only wreak havoc in his or her own home directory, and can restore it when it breaks, possibly losing their own customizations in the process, so they'll be more cautious next time. BTW, that's a very standard setup for many users on a cluster of machines. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/390064",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9135/"
]
} |
390,135 | I was thinking that it might be advantageous to have a user with permissions higher than the root user. You see, I would like to keep all of the activities and almost all existing root user privileges exactly as they are now. However, I would like the ability to deny privileges to root on an extremely isolated, case-by-case basis. One of the advantages of this would allow me to prevent certain unwanted files from being installed during updates. This is just an example of one possible advantage. Because apt-get updates are run by root or with sudo privileges, apt-get has the ability to replace certain unwanted files during updates. If I could deny these privileges for these particular files, I could set them as a symlink to /dev/null or possibly have a blank placeholder file with permissions that would prevent the file from being replaced during the update. Additionally, I can't help but be reminded about a line which was said in an interview with one of the Ubuntu creators when the guy said something about how users better trust "us" (referring to the Ubuntu devs) "because we have root", which was a reference to how system updates are performed with root permission. Simply altering the installation procedure to, say, work around this problem is absolutely not what I am interested in here. Now that my mind has a taste for the idea of having the power to deny root access, I would like to figure out a way to make this happen just for the sake of doing it. I just thought about this and have not spent any time on the idea so far, and I'm fairly confident that this could be figured out. However, I am curious to know if this has already been done or if this is possibly not a new idea or concept. Basically, it seems like there should be some way to have a super super-user which would have permission beyond that of the system by only one degree. Note: Although I feel the accepted answer fits the criteria the most, I really like the answer by @CR. also. I would like to create an actual user higher on the tree (me) but I guess I'll just have to sit down one day when I have the time to figure it out. Additionally, I'm not trying to pick on Ubuntu here; I wouldn't use it as my main distro if I felt negative about it. | The "user" you want is called an LSM: Linux security module. The most well known are SELinux and AppArmor. With these you can prevent certain binaries (and their child processes) from doing certain stuff (even if their UID is root ). But you may allow these operations to getty and its child processes so that you can do it manually. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/390135",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70847/"
]
} |
390,141 | I am in ubuntu, when I write dmesg the error messages appear in red, how could I print in the console only those? | use --level option described in man dmesg : -l, --level list Restrict output to the given (comma-separated) list of levels. For example: dmesg --level=err,warn will print error and warning messages only. For all supported levels see the --help output. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/390141",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226042/"
]
} |
390,197 | Service name is zabbix_agentd and OS: Amazon Linux AMI release 2014.09. When I type service --status-all I see zabbix_agentd (pid 10052) is running But when I type service status zabbix_agentd I get: zabbix_agentd: unrecognized service Why is my service not recognized even though I can see it among the running services? | The service helper expects the service name first and the action second:

service zabbix_agentd status

With the order reversed, the arguments end up in the wrong roles and the lookup fails. If the correct order still reports an unrecognized service, the init script is probably not named zabbix_agentd : service only knows the script names under /etc/init.d , and those don't have to match the daemon's process name (Zabbix packages commonly install the agent script as zabbix-agent ). Check with

ls /etc/init.d/ | grep -i zabbix

and use whatever name that prints, e.g. service zabbix-agent status . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/390197",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/248947/"
]
} |
390,215 | On CentOS 7.2 I downloaded PyCharm CE and unarchived it. If I want to use it, I can run ./pycharm.sh in that directory, but it then occupies the terminal, and if I press Control+C to exit, the PyCharm interface exits too. So, can I open PyCharm from the desktop, like on Windows or Mac? | PyCharm dies with Ctrl+C because it runs as a foreground child of your shell. The quick fix is to start it detached from the terminal:

nohup ./pycharm.sh >/dev/null 2>&1 &
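For a real desktop launcher, create a .desktop file. A minimal sketch, assuming you unpacked PyCharm under /opt/pycharm-community (adjust Exec and Icon to your actual unpack location); save it as ~/.local/share/applications/pycharm.desktop :

[Desktop Entry]
Type=Application
Name=PyCharm Community
Exec=/opt/pycharm-community/bin/pycharm.sh
Icon=/opt/pycharm-community/bin/pycharm.png
Terminal=false
Categories=Development;IDE;

PyCharm then shows up in the desktop environment's application menu like any other program. (Recent PyCharm versions can also generate such an entry themselves via Tools → Create Desktop Entry.)
| {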
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/390215",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/239156/"
]
} |
390,223 | It looks like I have logcheck set up as a cron job, and whenever it runs, the grep process spawned by logcheck takes up around ¼ of my CPU. Now I have certain times during which I need my full CPU capacity and want my system to take up as few resources as possible, except for specific processes (which I maybe could specify somehow). Is it possible to set my Debian 9.1 with KDE machine into some sort of performance mode (or 'Gaming mode') that prevents processes not explicitly started by the user from taking up much system resources, lowers the load of background processes and, most importantly, delays cron jobs until that mode is stopped again? | If “certain times” aren’t fixed, i.e. you want to specify manually when your system enters and leaves “performance mode”, you can simply stop and start cron : sudo systemctl stop cron will prevent any cron jobs from running, and sudo systemctl start cron will re-enable them. You could also check out anacron instead of cron ; it might be easier to tweak globally in a way which would fit your uses.
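If you toggle this often, a tiny wrapper script keeps it to one command (a sketch; the service name is assumed to be cron , as on Debian):

#!/bin/sh
# perfmode on|off: pause/resume cron for "performance mode"
case "$1" in
  on)  sudo systemctl stop cron ;;
  off) sudo systemctl start cron ;;
  *)   echo "usage: perfmode on|off" >&2; exit 1 ;;
esac

Note that jobs whose scheduled time passes while cron is stopped are simply skipped, not queued; anacron is the tool to reach for if you want them run later instead.
| {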
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/390223",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233262/"
]
} |
390,248 | I totally understand that --dig-holes creates a sparse file in-place. That is, if the file has runs of zeroes, the --dig-holes option turns them into holes: Let's take it in a very simplified way; let's say we have a huge file named non-sparse: non-sparse: aaaaaaaaaaaaaaaaaaaaaaaaaaaa\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00bbbbbbbbbbbbbbbbbbbbbbbbbbbb\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00cccccccccccccccccccccccccccc non-sparse has many zeros in it; assume that the interleaving zeros are in gigabytes. fallocate --dig-holes de-allocates the space available for the zeros (holes) while the actual file size remains the same (preserved). Now, there's --punch-hole : what does it really do? I read the man page and still don't understand: -p, --punch-hole Deallocates space (i.e., creates a hole) in the byte range starting at offset and continuing for length bytes. Within the specified range, partial filesystem blocks are zeroed, and whole filesystem blocks are removed from the file. After a successful call, subsequent reads from this range will return zeroes. Creating a hole: that seems like the opposite of the --dig-holes option, so how come digging a hole isn't the same as creating a hole?! Help! we need a logician :). The names of the two options are linguistically synonymous, which perhaps causes the confusion. What's the difference between --dig-holes and --punch-holes operationally (not logically or linguistically please!)? | --dig-holes doesn’t change the file’s contents, as determined when the file is read: it just identifies runs of zeroes which can be replaced with holes. --punch-hole uses the --offset and --length arguments to punch a hole in a file, regardless of what the file contains at that offset: it works even if the file contains non-zeroes there, but the file’s contents change as a result. Considering your example file, running fallocate --punch-hole --offset 2 --length 10 would replace ten a characters with zeroes, starting after the second one.
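A quick way to watch --punch-hole in action, as a throwaway sketch ( testfile is just a scratch name):

dd if=/dev/urandom of=testfile bs=1M count=1             # 1 MiB of non-zero data
fallocate --punch-hole --offset 0 --length 524288 testfile
ls -l testfile               # apparent size still 1 MiB
du -h testfile               # disk usage dropped by the punched 512 KiB
hexdump -C testfile | head   # the punched range now reads back as zeroes
| {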
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/390248",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233788/"
]
} |
390,283 | Using kernel 2.6.x How would you script the result below with the following variables using sh (not bash, zsh, etc.)?

VAR1="abc def ghi"
VAR2="1 2 3"
CONFIG="$1"
for i in $VAR1; do
  for j in $VAR2; do
    [ "$i" -eq "$j" ] && continue
  done
  command $VAR1 $VAR2
done

Desired result:

command abc 1
command def 2
command ghi 3

 | One way to do it:

#! /bin/sh
VAR1="abc def ghi"
VAR2="1 2 3"
fun()
{
  set $VAR2
  for i in $VAR1; do
    echo command "$i" "$1"
    shift
  done
}
fun

Output:

command abc 1
command def 2
command ghi 3
 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/390283",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227235/"
]
} |
390,300 | I'm looking for a way to extract the first column of a text file that has no specific delimiters except for arbitrary digits that begin the next column. Example:

John Smith 1234 Main Street
Amy Brown and Sally Williams 9 Drury Lane
Sunny's 1000 Brown Avenue

Expected output would be:

John Smith
Amy Brown and Sally Williams
Sunny's

It appears that cut doesn't support functionality such as cut file.txt -d {0..9} -f 1 Solutions can use any standard unix utility. |

$ awk -F'[0-9]' '{ print $1 }' file
John Smith
Amy Brown and Sally Williams
Sunny's

With -F'[0-9]' we say that digits are to be considered field separators in the input data, and with print $1 we output the first digit-separated field. Change -F'[0-9]' to -F' *[0-9]' to also get rid of any spaces before the digit. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/390300",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226626/"
]
} |
390,307 | On a fresh installation of Debian 9 Stretch on a desktop PC when booting the ...Failed to start Raise network interfaces... error occurres. The (cable) LAN-connection works but the (USB) WiFi is not working properly (detecting the WiFi networks but failing to connect). Previously on the same harware Debian 8 Jessie was installed working fine without any errors. Seems the issues are connected to the recent predictable network interface names changes. Found users A , B , C , D , and E had similar symptoms. However, they had upgraded Ubuntu systems (without a clean install). Aditionally the proposed solutions are suggesting disabling the assignment of fixed/predictable/unique names . I would prefer to keep the new naming scheme/standard, eventually to find and eliminate the reason why( ? ) it is not working properly. Found also users F , and G with the same problem -- without solution. Would be very thankful for any hint. Also, I'm happy to answer your questions if you need more in depth details. Further you find some detailed system output. $ sudo systemctl status networking.service ● networking.service - Raise network interfaces Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled) Active: failed (Result: exit-code) since Mon 2017-09-04 17:21:42 IST; 1h 27min ago Docs: man:interfaces(5) Process: 534 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=1/FAILURE) Process: 444 ExecStartPre=/bin/sh -c [ "$CONFIGURE_INTERFACES" != "no" ] && [ -n "$(ifquery --read-environment --list --exclude=lo)" ] && udevadm settle (code=exited, status=0/SUCCESS) Main PID: 534 (code=exited, status=1/FAILURE)Sep 04 17:21:42 XXX ifup[534]: than a configuration issue please read the section on submittingSep 04 17:21:42 XXX ifup[534]: bugs on either our web page at www.isc.org or in the README fileSep 04 17:21:42 XXX ifup[534]: before submitting a bug. 
These pages explain the properSep 04 17:21:42 XXX ifup[534]: process and the information we find helpful for debugging..Sep 04 17:21:42 XXX ifup[534]: exiting.Sep 04 17:21:42 XXX ifup[534]: ifup: failed to bring up eth0Sep 04 17:21:42 XXX systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURESep 04 17:21:42 XXX systemd[1]: Failed to start Raise network interfaces.Sep 04 17:21:42 XXX systemd[1]: networking.service: Unit entered failed state.Sep 04 17:21:42 XXX systemd[1]: networking.service: Failed with result 'exit-code'.$ cat /etc/network/interfaces.d/setupauto loiface lo inet loopbackauto eth0iface eth0 inet dhcp EDIT2start: $ sudo ifconfig[sudo] password for XXX: enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.178.31 netmask 255.255.255.0 broadcast 192.168.178.255 inet6 xxxx::xxx:xxxx:xxxx:xxxx prefixlen 64 scopeid 0x20<link> ether xx:xx:xx:xx:xx:xx txqueuelen 1000 (Ethernet) RX packets 765 bytes 523923 (511.6 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 803 bytes 101736 (99.3 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 device interrupt 17 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1 (Local Loopback) RX packets 50 bytes 3720 (3.6 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 50 bytes 3720 (3.6 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0wlxf4f26d1b7521: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 ether xx:xx:xx:xx:xx:xx txqueuelen 1000 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 EDIT2end. $ ip link1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:002: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff3: wlxf4f26d1b7521: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DORMANT group default qlen 1000 link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff EDITstart: $ lsusb...Bus 001 Device 004: ID 0cf3:9271 Atheros Communications, Inc. AR9271 802.11n...$ sudo cat /etc/network/interfaces# This file describes the network interfaces available on your system# and how to activate them. For more information, see interfaces(5).source /etc/network/interfaces.d/*# The loopback network interfaceauto loiface lo inet loopback EDITend. EDIT3start: $ sudo systemctl status networking.service● networking.service - Raise network interfaces Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled) Active: active (exited) since Tue 2017-09-05 10:29:16 IST; 44min ago Docs: man:interfaces(5) Process: 565 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=0/SUCCESS) Process: 438 ExecStartPre=/bin/sh -c [ "$CONFIGURE_INTERFACES" != "no" ] && [ -n "$(ifquery --read-environment --list --exclude=lo)" ] && udevadm settle (code=exited, status=0/SUCCESS) Main PID: 565 (code=exited, status=0/SUCCESS) Tasks: 0 (limit: 4915) CGroup: /system.slice/networking.serviceSep 05 10:26:56 sdd9 systemd[1]: Starting Raise network interfaces...Sep 05 10:26:56 sdd9 ifup[565]: ifup: waiting for lock on /run/network/ifstate.enp3s0Sep 05 10:29:16 sdd9 systemd[1]: Started Raise network interfaces. EDIT3end. 
| Remove the /etc/network/interfaces.d/setup file, then edit your /etc/network/interfaces as follows:
auto lo
iface lo inet loopback
(friendly edit: GAD3R suggested there should be nothing else in the file. It appears that entries can also be ignored if a line starts with # followed by a space.) Save and reboot. From man interfaces : INCLUDING OTHER FILES Lines beginning with "source" are used to include stanzas from other files, so configuration can be split into many files. The word "source" is followed by the path of the file to be sourced. Shell wildcards can be used. (See wordexp(3) for details.) In your case you are using /etc/network/interfaces.d/setup to configure the network instead of /etc/network/interfaces . Lines beginning with "allow-" are used to identify interfaces that should be brought up automatically by various subsystems. This may be done using a command such as "ifup --allow=hotplug eth0 eth1", which will only bring up eth0 or eth1 if it is listed in an "allow-hotplug" line. Note that "allow-auto" and "auto" are synonyms. (Interfaces marked "allow-hotplug" are brought up when udev detects them. This can either be during boot if the interface is already present, or at a later time, for example when plugging in a USB network card. Please note that this does not have anything to do with detecting a network cable being plugged in.)
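For reference, a minimal sketch of what /etc/network/interfaces could look like if you also want ifupdown to manage the wired NIC under the new naming scheme — the name enp3s0 is taken from the ip link output above; substitute whatever your system reports:
# /etc/network/interfaces -- minimal sketch, interface name taken from `ip link`
auto lo
iface lo inet loopback

# optional: let ifupdown manage the wired NIC under its predictable name
allow-hotplug enp3s0
iface enp3s0 inet dhcp
After editing, sudo systemctl restart networking (or a reboot) picks up the change.
| {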
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/390307",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58310/"
]
} |
390,360 | I've been looking into escape sequences lately, and I'm surprised by what they can do. You can even move an xterm X11 window with them (try printf '\e[3;0;0t' ), wow! The most common way to know what features a terminal supports seems to be using a database. That's what ncurses does, and it's used by 99.9% of applications that rely on escape sequences. Ncurses reads the terminfo database for your shell's TERM environment variable in order to decide which features are supported by your console. You can change the TERM environment variable of your shell, and if you do it most applications could start using fewer features or misbehaving (try running nano or vim after setting TERM="" ). I've seen that some escape codes cause the terminal to report stuff. For instance <ESC>[6n causes the terminal to report the cursor position. ( printf '\e[6n' ) Why don't we use similar report mechanisms to let the console report which features it supports? Instead of coupling the features with the value of TERM , each console could advertise its own features, making the whole thing more precise and reliable. Why isn't this a thing? Edit: something that I should have asked before... I'd like to create a new escape sequence, to hack konsole and gnome-terminal in order to support it and to use it in some scripts. I'd like to be able to query the console in order to know whether the one I'm running supports this feature - what's the suggested way to do that? | It's not as simple as you might suppose. xterm (like the DEC VTxxx terminals starting with VT100) has a number of reports for various features (refer to XTerm Control Sequences ). The most generally useful is that which tells what type of terminal it is: CSI Ps c Send Device Attributes (Primary DA). Not all terminals have that type of response (Sun hardware console has/had none ). But there are more features than reports (for instance, how to tell whether a terminal is really interpreting UTF-8: the accepted route for that is via the locale environment variables, so no need has been established for another control sequence/response). In practice, while there are a few applications that pay attention to reports (such as vim , checking the actual values of function keys, the number of colors using DCS + p Pt ST , and even the cursor appearance using DCS $ q Pt ST ), the process is unreliable because some developers find it simpler to return a given report-response than to implement the feature. If you read through the source code for various programs, you'll find interesting quirks where someone has customized a response to make it look like some version of xterm.
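As a rough illustration of the report mechanism described above, here is a bash sketch that sends the Primary DA query and reads back the terminal's reply. The raw-mode/timeout idiom is ad hoc (not part of any standard), and it must run on a real terminal:
#!/bin/bash
# Query Primary Device Attributes (CSI c) and show the raw reply.
exec < /dev/tty
old=$(stty -g)               # save terminal settings
stty raw -echo min 0 time 5  # non-canonical read, ~0.5 s timeout
printf '\e[c' > /dev/tty     # send the DA1 query
reply=$(dd count=1 2>/dev/null)
stty "$old"                  # restore settings
printf 'DA1 reply: %q\n' "$reply"
A VT100-compatible terminal answers with something like ESC [ ? 6 4 ; ... c ; support for a private sequence of your own would still have to be inferred from such replies (for example, an empty reply after the timeout meaning "not supported").
| {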
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/390360",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4041/"
]
} |
390,367 | I am getting a series of errors similar to this file /usr/share/doc/glibc/NEWS from install of glibc-2.25-10.fc26.i686 conflicts with file from package glibc-2.25-7.fc26.x86_64 when I try to dnf update , and a Python exception when I dnf install anything. These are a new errors that appeared at the same time, possibly related to a power failure I suffered yesterday in the middle of a dnf update , although the history log appears to suggest it had finished, with some errors, before the power went out. Full errors for a current dnf update : Error: Transaction check error: file /usr/share/doc/glibc/NEWS from install of glibc-2.25-10.fc26.i686 conflicts with file from package glibc-2.25-7.fc26.x86_64 file /usr/share/man/man1/xmlwf.1.gz from install of expat-2.2.4-1.fc26.i686 conflicts with file from package expat-2.2.1-1.fc26.x86_64 file /usr/share/doc/sqlite-libs/README.md from install of sqlite-libs-3.20.1-1.fc26.i686 conflicts with file from package sqlite-libs-3.19.3-1.fc26.x86_64 file /usr/share/doc/gdk-pixbuf2/NEWS from install of gdk-pixbuf2-2.36.9-1.fc26.i686 conflicts with file from package gdk-pixbuf2-2.36.7-1.fc26.x86_64 file /usr/share/locale/cs/LC_MESSAGES/gdk-pixbuf.mo from install of gdk-pixbuf2-2.36.9-1.fc26.i686 conflicts with file from package gdk-pixbuf2-2.36.7-1.fc26.x86_64 file /usr/share/locale/de/LC_MESSAGES/gdk-pixbuf.mo from install of gdk-pixbuf2-2.36.9-1.fc26.i686 conflicts with file from package gdk-pixbuf2-2.36.7-1.fc26.x86_64 file /usr/share/locale/es/LC_MESSAGES/gdk-pixbuf.mo from install of gdk-pixbuf2-2.36.9-1.fc26.i686 conflicts with file from package gdk-pixbuf2-2.36.7-1.fc26.x86_64 file /usr/share/locale/fr/LC_MESSAGES/gdk-pixbuf.mo from install of gdk-pixbuf2-2.36.9-1.fc26.i686 conflicts with file from package gdk-pixbuf2-2.36.7-1.fc26.x86_64 file /usr/share/locale/fur/LC_MESSAGES/gdk-pixbuf.mo from install of gdk-pixbuf2-2.36.9-1.fc26.i686 conflicts with file from package gdk-pixbuf2-2.36.7-1.fc26.x86_64 file /usr/share/locale/gl/LC_MESSAGES/gdk-pixbuf.mo from install of gdk-pixbuf2-2.36.9-1.fc26.i686 conflicts with file from package gdk-pixbuf2-2.36.7-1.fc26.x86_64 file /usr/share/locale/hu/LC_MESSAGES/gdk-pixbuf.mo from install of gdk-pixbuf2-2.36.9-1.fc26.i686 conflicts with file from package gdk-pixbuf2-2.36.7-1.fc26.x86_64 file /usr/share/locale/id/LC_MESSAGES/gdk-pixbuf.mo from install of gdk-pixbuf2-2.36.9-1.fc26.i686 conflicts with file from package gdk-pixbuf2-2.36.7-1.fc26.x86_64 file /usr/share/locale/kk/LC_MESSAGES/gdk-pixbuf.mo from install of gdk-pixbuf2-2.36.9-1.fc26.i686 conflicts with file from package gdk-pixbuf2-2.36.7-1.fc26.x86_64 file /usr/share/locale/lt/LC_MESSAGES/gdk-pixbuf.mo from install of gdk-pixbuf2-2.36.9-1.fc26.i686 conflicts with file from package gdk-pixbuf2-2.36.7-1.fc26.x86_64 file /usr/share/locale/pl/LC_MESSAGES/gdk-pixbuf.mo from install of gdk-pixbuf2-2.36.9-1.fc26.i686 conflicts with file from package gdk-pixbuf2-2.36.7-1.fc26.x86_64 file /usr/share/locale/pt_BR/LC_MESSAGES/gdk-pixbuf.mo from install of gdk-pixbuf2-2.36.9-1.fc26.i686 conflicts with file from package gdk-pixbuf2-2.36.7-1.fc26.x86_64 file /usr/share/locale/sl/LC_MESSAGES/gdk-pixbuf.mo from install of gdk-pixbuf2-2.36.9-1.fc26.i686 conflicts with file from package gdk-pixbuf2-2.36.7-1.fc26.x86_64 file /usr/share/locale/sr/LC_MESSAGES/gdk-pixbuf.mo from install of gdk-pixbuf2-2.36.9-1.fc26.i686 conflicts with file from package gdk-pixbuf2-2.36.7-1.fc26.x86_64 file /usr/share/locale/sr@latin/LC_MESSAGES/gdk-pixbuf.mo from install of 
gdk-pixbuf2-2.36.9-1.fc26.i686 conflicts with file from package gdk-pixbuf2-2.36.7-1.fc26.x86_64 file /usr/share/locale/sv/LC_MESSAGES/gdk-pixbuf.mo from install of gdk-pixbuf2-2.36.9-1.fc26.i686 conflicts with file from package gdk-pixbuf2-2.36.7-1.fc26.x86_64 file /usr/share/locale/tr/LC_MESSAGES/gdk-pixbuf.mo from install of gdk-pixbuf2-2.36.9-1.fc26.i686 conflicts with file from package gdk-pixbuf2-2.36.7-1.fc26.x86_64 file /usr/share/man/man1/gdk-pixbuf-query-loaders.1.gz from install of gdk-pixbuf2-2.36.9-1.fc26.i686 conflicts with file from package gdk-pixbuf2-2.36.7-1.fc26.x86_64 file /usr/share/doc/libidn2/README.md from install of libidn2-2.0.4-1.fc26.i686 conflicts with file from package libidn2-2.0.2-1.fc26.x86_64 file /usr/share/doc/libidn2/NEWS from install of libidn2-2.0.4-1.fc26.i686 conflicts with file from package libidn2-2.0.2-1.fc26.x86_64 file /usr/share/info/libidn2.info.gz from install of libidn2-2.0.4-1.fc26.i686 conflicts with file from package libidn2-2.0.2-1.fc26.x86_64 file /usr/share/man/man1/idn2.1.gz from install of libidn2-2.0.4-1.fc26.i686 conflicts with file from package libidn2-2.0.2-1.fc26.x86_64 file /usr/share/man/man5/cert8.db.5.gz from install of nss-3.32.0-1.1.fc26.i686 conflicts with file from package nss-3.30.2-1.1.fc26.x86_64 file /usr/share/man/man5/cert9.db.5.gz from install of nss-3.32.0-1.1.fc26.i686 conflicts with file from package nss-3.30.2-1.1.fc26.x86_64 file /usr/share/man/man5/key3.db.5.gz from install of nss-3.32.0-1.1.fc26.i686 conflicts with file from package nss-3.30.2-1.1.fc26.x86_64 file /usr/share/man/man5/key4.db.5.gz from install of nss-3.32.0-1.1.fc26.i686 conflicts with file from package nss-3.30.2-1.1.fc26.x86_64 file /usr/share/man/man5/pkcs11.txt.5.gz from install of nss-3.32.0-1.1.fc26.i686 conflicts with file from package nss-3.30.2-1.1.fc26.x86_64 file /usr/share/man/man5/secmod.db.5.gz from install of nss-3.32.0-1.1.fc26.i686 conflicts with file from package nss-3.30.2-1.1.fc26.x86_64 file /usr/share/man/man5/k5identity.5.gz from install of krb5-libs-1.15.1-25.fc26.i686 conflicts with file from package krb5-libs-1.15.1-17.fc26.x86_64 file /usr/share/man/man5/k5login.5.gz from install of krb5-libs-1.15.1-25.fc26.i686 conflicts with file from package krb5-libs-1.15.1-17.fc26.x86_64 file /usr/share/man/man5/krb5.conf.5.gz from install of krb5-libs-1.15.1-25.fc26.i686 conflicts with file from package krb5-libs-1.15.1-17.fc26.x86_64 file /usr/share/doc/wine-core/AUTHORS from install of wine-core-2.15-1.fc26.i686 conflicts with file from package wine-core-2.12-1.fc26.x86_64 file /usr/share/doc/wine-core/VERSION from install of wine-core-2.15-1.fc26.i686 conflicts with file from package wine-core-2.12-1.fc26.x86_64 file /usr/share/doc/wine-core/ANNOUNCE from install of wine-core-2.15-1.fc26.i686 conflicts with file from package wine-core-2.12-1.fc26.x86_64 file /usr/share/doc/pango/NEWS from install of pango-1.40.11-3.fc26.i686 conflicts with file from package pango-1.40.7-1.fc26.x86_64 file /usr/share/man/man1/pango-view.1.gz from install of pango-1.40.11-3.fc26.i686 conflicts with file from package pango-1.40.7-1.fc26.x86_64 file /usr/share/doc/gtk3/README from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/doc/gtk3/NEWS from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/locale/cs/LC_MESSAGES/gtk30.mo from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package 
gtk3-3.22.17-2.fc26.x86_64 file /usr/share/locale/de/LC_MESSAGES/gtk30.mo from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/locale/es/LC_MESSAGES/gtk30.mo from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/locale/fi/LC_MESSAGES/gtk30.mo from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/locale/fr/LC_MESSAGES/gtk30.mo from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/locale/fur/LC_MESSAGES/gtk30.mo from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/locale/gl/LC_MESSAGES/gtk30.mo from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/locale/hr/LC_MESSAGES/gtk30.mo from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/locale/id/LC_MESSAGES/gtk30.mo from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/locale/kk/LC_MESSAGES/gtk30.mo from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/locale/lt/LC_MESSAGES/gtk30.mo from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/locale/ne/LC_MESSAGES/gtk30.mo from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/locale/pl/LC_MESSAGES/gtk30.mo from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/locale/pt_BR/LC_MESSAGES/gtk30.mo from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/locale/sk/LC_MESSAGES/gtk30.mo from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/locale/sl/LC_MESSAGES/gtk30.mo from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/locale/sr/LC_MESSAGES/gtk30.mo from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/locale/sr@latin/LC_MESSAGES/gtk30.mo from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/man/man1/broadwayd.1.gz from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/man/man1/gtk-launch.1.gz from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/man/man1/gtk-query-immodules-3.0.1.gz from install of gtk3-3.22.19-1.fc26.i686 conflicts with file from package gtk3-3.22.17-2.fc26.x86_64 file /usr/share/doc/libsoup/NEWS from install of libsoup-2.58.2-1.fc26.i686 conflicts with file from package libsoup-2.58.1-2.fc26.x86_64 file /usr/share/doc/libgusb/NEWS from install of libgusb-0.2.11-1.fc26.i686 conflicts with file from package libgusb-0.2.10-1.fc26.x86_64 file /usr/share/doc/p11-kit/NEWS from install of p11-kit-0.23.8-1.fc26.i686 conflicts with file from package p11-kit-0.23.5-3.fc26.x86_64 file /usr/share/man/man1/trust.1.gz from install of p11-kit-0.23.8-1.fc26.i686 conflicts with file from package p11-kit-0.23.5-3.fc26.x86_64 file 
/usr/share/man/man5/pkcs11.conf.5.gz from install of p11-kit-0.23.8-1.fc26.i686 conflicts with file from package p11-kit-0.23.5-3.fc26.x86_64 file /usr/share/man/man8/p11-kit.8.gz from install of p11-kit-0.23.8-1.fc26.i686 conflicts with file from package p11-kit-0.23.5-3.fc26.x86_64Error Summary------------- Error from dnf install [anything] : Last metadata expiration check: 0:15:05 ago on Tue 05 Sep 2017 11:09:50 AEST.Traceback (most recent call last): File "/bin/dnf", line 58, in <module> main.user_main(sys.argv[1:], exit_code=True) File "/usr/lib/python3.6/site-packages/dnf/cli/main.py", line 179, in user_main errcode = main(args) File "/usr/lib/python3.6/site-packages/dnf/cli/main.py", line 64, in main return _main(base, args, cli_class, option_parser_class) File "/usr/lib/python3.6/site-packages/dnf/cli/main.py", line 99, in _main return cli_run(cli, base) File "/usr/lib/python3.6/site-packages/dnf/cli/main.py", line 115, in cli_run cli.run() File "/usr/lib/python3.6/site-packages/dnf/cli/cli.py", line 962, in run return self.command.run() File "/usr/lib/python3.6/site-packages/dnf/cli/commands/install.py", line 120, in run self.base.install(pkg_spec, strict=strict, forms=forms) File "/usr/lib/python3.6/site-packages/dnf/base.py", line 1582, in install subj._is_arch_specified(self.sack): File "/usr/lib/python3.6/site-packages/dnf/subject.py", line 71, in _is_arch_specified q = self._nevra_to_filters(sack.query(), nevra) File "/usr/lib/python3.6/site-packages/dnf/subject.py", line 49, in _nevra_to_filters query._filterm(*flags, **{name + '__glob': attr}) File "/usr/lib/python3.6/site-packages/dnf/query.py", line 93, in _filterm return super(Query, self)._filterm(*args, **nargs)AttributeError: 'super' object has no attribute '_filterm' Errors from the end of dnf history info 52 (the update before the power failure): Scriptlet output: 1 warning: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.144-5.b01.fc26.x86_64/jre/lib/security/java.security created as /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.144-5.b01.fc26.x86_64/jre/lib/security/java.security.rpmnew 2 error: db5 error(-30969) from dbenv->open: BDB0091 DB_VERSION_MISMATCH: Database environment version mismatch 3 error: cannot open Packages index using db5 - (-30969) 4 error: cannot open Packages database in /var/lib/rpm 5 error: db5 error(-30969) from dbenv->open: BDB0091 DB_VERSION_MISMATCH: Database environment version mismatch 6 error: cannot open Packages index using db5 - (-30969) 7 error: cannot open Packages database in /var/lib/rpm 8 error: db5 error(-30969) from dbenv->open: BDB0091 DB_VERSION_MISMATCH: Database environment version mismatch 9 error: cannot open Packages index using db5 - (-30969) 10 error: cannot open Packages database in /var/lib/rpm 11 error: db5 error(-30969) from dbenv->open: BDB0091 DB_VERSION_MISMATCH: Database environment version mismatch 12 error: cannot open Packages index using db5 - (-30969) 13 error: cannot open Packages database in /var/lib/rpm 14 restored /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.144-5.b01.fc26.x86_64/jre/lib/security/java.security.rpmnew to /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.144-5.b01.fc26.x86_64/jre/lib/security/java.security | The most typical cause of this error is trying to install packages without having everything up to date. That sometimes causes new dependencies to be brought in which conflict with packages already on disk, and dnf doesn't know that it should update those otherwise-unrelated packages. 
In your case, it seems to be something else, since you are seeing the problem while running the update. Here, though, the error is definitely something out of sync between available and installed versions in the different architectures. For example (line breaks added to make this obvious):
file /usr/share/doc/glibc/NEWS
from install of glibc-2.25-10.fc26.i686
conflicts with file from package glibc-2.25-7.fc26.x86_64
It's trying to install newer i686 packages and it doesn't know to update older glibc. The first thing I'd check is to see if you have multiple versions of the x86_64 glibc installed. If so, run sudo dnf repoquery --duplicates to see the extra versions, and dnf remove --duplicates to clean up. (This will leave the files belonging to the latest package, so is safe even if it tries to remove things which seem important.) If that's not the case, it may simply be that the mirror you are hitting now is not as up-to-date as the one you got earlier. In that case, try sudo dnf clean all and update again. If that doesn't work, you can often resolve this by temporarily removing all i686 packages. The system will function without them, and then you can do the update, and then put back what you need for compatibility with 32-bit apps. If you are still getting db errors, you may have a different problem at the RPM level — but start with the above.
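A hedged sketch of the "remove i686 temporarily" route — record what you have first so it can be reinstalled, and treat the package names at the end as examples only:
# record the installed 32-bit packages so they can be reinstalled later
rpm -qa --qf '%{NAME}.%{ARCH}\n' | grep '\.i686$' > i686-packages.txt

# remove them, then rerun the update that previously failed
sudo dnf remove $(cat i686-packages.txt)
sudo dnf upgrade

# afterwards, reinstall whichever 32-bit packages you actually need, e.g.:
# sudo dnf install wine-core.i686 nss.i686
| {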
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/390367",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53164/"
]
} |
390,480 | I need to run some long and heavy commands, but at the same time, I'd like to keep my desktop system responsive. Examples: btrfs deduplication, btrfs balance, etc. I don't mind if such commands take longer to finish if I give them a lower priority, but my system should always be responsive. Using nice -n 19 and ionice -c 3 should solve my problem, but I'm not sure which command should come first for maximum benefit. Option A: nice -n 19 ionice -c 3 btrfs balance start --full-balance / Option B: ionice -c 3 nice -n 19 btrfs balance start --full-balance / Is there some subtle difference between options A and B? Are they equivalent perhaps? | If nice caused lots of I/O, you would want to do: ionice -c 3 nice ... so that the impact of the I/O would be minimized. Conversely, if ionice performed lots of computation, you would want to do nice -n 19 ionice ... to minimize its CPU impact. But neither of these is true, they're both very simple commands (they just make a system call to change a process parameter, then execute the command). So the difference should be negligible. And just to be complete, if both were true, you can't really win -- the impact of one of them can't be reduced.
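To convince yourself that both orderings end up with identical scheduling parameters, a quick check — long_running_command is a placeholder for your own job:
# start the job either way round
nice -n 19 ionice -c 3 long_running_command &
pid=$!

ps -o pid,ni,comm -p "$pid"   # NI column should show 19
ionice -p "$pid"              # should report: idle
Swap the two wrappers and the output should be the same.
| {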
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/390480",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
390,518 | Within the output of top, there are two fields, marked "buff/cache" and "avail Mem" in the memory and swap usage lines. What do these two fields mean? I've tried Googling them, but the results only bring up generic articles on top, and they don't explain what these fields signify. | top’s manpage doesn’t describe the fields, but free’s does:
buffers
Memory used by kernel buffers ( Buffers in /proc/meminfo )
cache
Memory used by the page cache and slabs ( Cached and SReclaimable in /proc/meminfo )
buff/cache
Sum of buffers and cache
available
Estimation of how much memory is available for starting new applications, without swapping. Unlike the data provided by the cache or free fields, this field takes into account page cache and also that not all reclaimable memory slabs will be reclaimed due to items being in use ( MemAvailable in /proc/meminfo , available on kernels 3.14, emulated on kernels 2.6.27+, otherwise the same as free)
Basically, "buff/cache" counts memory used for data that's on disk or should end up there soon, and as a result is potentially usable (the corresponding memory can be made available immediately, if it hasn't been modified since it was read, or given enough time, if it has); "available" measures the amount of memory which can be allocated and used without causing more swapping (see How can I get the amount of available memory portably across distributions? for a lot more detail on that).
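To see the raw counters these fields are built from, you can read /proc/meminfo directly; the second command roughly reconstructs top's buff/cache sum (field availability varies slightly between kernel versions):
grep -E '^(MemTotal|MemFree|MemAvailable|Buffers|Cached|SReclaimable)' /proc/meminfo

# buff/cache in KiB, approximately as top computes it
awk '/^(Buffers|Cached|SReclaimable):/ { sum += $2 } END { print sum " kB" }' /proc/meminfo
| {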
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/390518",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
390,574 | If I watch a video with mpv, it closes after the video ends. How can I configure it such that it doesn't close, for example just freezes the last image of the movie, so that I can seek back and forth without restarting the video. | You'd use mpv --keep-open=yes , which you can find in the mpv manpage . It allows three values: no (close/advance to next at end of video, the default), yes (advance if there is a next video, otherwise pause), and always (always pause at end of video, even if there is a next video). You should also be able to put keep-open=yes in your ~/.config/mpv/mpv.conf or ~/.mpv/config (whichever you're using) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/390574",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5289/"
]
} |
390,710 | I'm really new to linux (like earlier this week is when I started) so excuse me if this is a really simple question. I am running a program on a remote server and it's a lengthy process dealing with a couple hundred GB of data so I wanted to leave it running overnight. Long story short I ssh in, I start the program, watch to see that it's running and then close out of terminal. I come back this morning and see it stopped exactly where it was when I terminated my ssh the night before. Is there a way to keep the process running on the server when I close out? | As the other answers suggest, you can use nohup <command> & . You can also use screen , which is (basically) a detachable terminal. You can start a terminal using the command screen , and when you want to detach from it, press Ctrl+a d . To reattach your terminal, run screen -r <terminal_name> . If you have more than one detached terminal, you can use the command screen -r to see the names of the detached terminals. Screen is a full-screen window manager that multiplexes a physical terminal between several processes (typically interactive shells). Each virtual terminal provides the functions of a DEC VT100 terminal and, in addition, several control functions from the ISO 6429 (ECMA 48, ANSI X3.64) and ISO 2022 standards (e.g. insert/delete line and support for multiple character sets). There is a scrollback history buffer for each virtual terminal and a copy-and-paste mechanism that allows moving text regions between windows. EDIT: You can also use tmux . Have a look here for a basic "how to use tmux ". tmux is a terminal multiplexer. What is a terminal multiplexer? It lets you switch easily between several programs in one terminal, detach them (they keep running in the background) and reattach them to a different terminal. And do a lot more.
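A minimal worked session for the screen route — the session name backup and the script name are placeholders for your own long-running job:
ssh user@server
screen -S backup          # start a named session
./process_data.sh         # launch the long job (placeholder)
# press Ctrl+a then d to detach; the job keeps running
exit                      # log out safely

# the next morning:
ssh user@server
screen -ls                # list detached sessions
screen -r backup          # reattach and check progress
| {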
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/390710",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/249798/"
]
} |
390,757 | I want to write a script to reference multiple arrays from another array which holds these array's variable names. Here's my code so far:
#!/bin/bash
array1=('array1string1' 'array1string2')
array2=('array2string1' 'array2string2')
array_names=('array1' 'array2')
for a in ${array_names[@]}
do
    for b in ${a[@]}
    do
        echo $b
    done
done
I'd like the output to scan through both arrays (from the outer for loop) and print the respective strings in the inner for loop which calls echo. My current output is just showing me:
array1
array2
I'd be grateful for any pointers regarding this. Thank you! | Bash 4.3 and later supports "name references", or namerefs (a similar concept exists in ksh93 , but the scoping is annoyingly different ):
#!/bin/bash
array1=('array1string1' 'array1string2')
array2=('array2string1' 'array2string2')
array_names=('array1' 'array2')
for a in "${array_names[@]}"; do
    declare -n arr="$a"
    for b in "${arr[@]}"; do
        echo "$b"
    done
done
The variable arr is a nameref that acts like an alias for the named variable (the variable with name $a in this example). Without namerefs, in earlier Bash versions, one solution would be to create a new array that contains all the elements from the other arrays:
all=( "${array1[@]}" "${array2[@]}" )
... a bit like the array_names array in the question but with the contents of all arrays, and then iterate over "${all[@]}" . It's also possible to use eval , but the resulting code looks astoundingly awful. See glenn jackman's answer for a variation with variable indirection (introduced in its current form with Bash version 2).
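For pre-4.3 shells, the variable-indirection variant mentioned at the end can look like this sketch — note the [@] baked into the name that ${!ref} expands:
#!/bin/bash
array1=('array1string1' 'array1string2')
array2=('array2string1' 'array2string2')
array_names=('array1' 'array2')

for a in "${array_names[@]}"; do
    ref="${a}[@]"             # e.g. "array1[@]"
    for b in "${!ref}"; do    # indirect expansion of the named array
        echo "$b"
    done
done
| {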
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/390757",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/249958/"
]
} |
390,768 | To catch mail sent to any recipient I have added the following router directive in my exim4 configuration:
local_catchall:
  debug_print = "R: catchall for $local_part@$domain"
  driver = redirect
  domains = +local_domains
  allow_fail
  allow_defer
  data = johanna
How can I exclude some recipient addresses, like it is possible with sender addresses per:
acl_check_data:
  deny senders = /etc/deny_senders
 |
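One way to exclude selected recipients is to add a precondition to the router itself — a hedged sketch only, assuming the list file holds one local part per line and that your Exim build accepts a negated lsearch list here (check the router-precondition and list sections of the Exim spec before relying on it):
local_catchall:
  debug_print = "R: catchall for $local_part@$domain"
  driver = redirect
  domains = +local_domains
  local_parts = ! lsearch;/etc/exim4/catchall_exclude
  allow_fail
  allow_defer
  data = johanna
With such a precondition the router is skipped for the listed local parts, so those addresses fall through to the next router instead of being caught.
| {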
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/390768",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/214992/"
]
} |
390,831 | With the Bourne shell family, the shell variables all have upper-case names, which means you can't tell if a particular variable is an environment variable or not just by looking at its name. How do you determine which Bourne shell variables are local (defined only within the current shell)? | If you want to see if a variable is exported or not, use declare :
$ foo=a bar=b
$ export foo
$ declare -p foo bar
declare -x foo="a"
declare -- bar="b"
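To list every exported variable at once, or to test a single name in a script, two related sketches (bash):
export -p        # one "declare -x ..." line per exported variable

# test one variable programmatically; assumes it carries no other flags
if [[ "$(declare -p foo 2>/dev/null)" == 'declare -x'* ]]; then
    echo "foo is exported"
else
    echo "foo is shell-local (or unset)"
fi
| {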
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/390831",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250010/"
]
} |
390,871 | If I understand correctly, the default separator for the output of awk is space . However, the following script does not behave as I expect. I do not manage to parse the output of awk into an array:
#!/bin/bash
echo "------ with input string from awk ------"
ALL_TTY_OWNERS_STR=$(ls -l /dev | grep tty | awk '{print $3}')
read -r -a ALL_TTY_OWNERS_ARRAY <<< "$ALL_TTY_OWNERS_STR"
echo "${#ALL_TTY_OWNERS_ARRAY[@]}"  # This says 1
echo "${ALL_TTY_OWNERS_ARRAY[0]}"   # "root", as expected
echo "${ALL_TTY_OWNERS_ARRAY[1]}"   # empty string, expected "root"
echo "${ALL_TTY_OWNERS_ARRAY[2]}"   # empty string, expected "root"
echo "------ with my manually created input string ------"
ALL_TTY_OWNERS_STR="root root root" # only for testing
read -r -a ALL_TTY_OWNERS_ARRAY <<< "$ALL_TTY_OWNERS_STR"
echo "${#ALL_TTY_OWNERS_ARRAY[@]}"  # 3, as expected
echo "${ALL_TTY_OWNERS_ARRAY[0]}"   # "root", as expected
echo "${ALL_TTY_OWNERS_ARRAY[1]}"   # "root", as expected
echo "${ALL_TTY_OWNERS_ARRAY[2]}"   # "root", as expected
Why can't I parse the output of awk with read as I expected I could? | It's about the record separator: awk prints one field per line, so you need to set the output record separator to a space to put every string on a single line. Use the ORS parameter:
ls -l /dev | grep tty | awk 'BEGIN { ORS=" " }; {print $3}'
Without it, your output will be:
root
root
root
etc... and since read consumes only the first line of its input, when you feed it $ALL_TTY_OWNERS_STR only the first string ends up in the first element of the array. Because of this your array will contain only one element — and that is exactly what you are getting.
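An alternative sketch that avoids touching the awk output at all — with bash ≥ 4 you can read one array element per line via mapfile (a.k.a. readarray):
#!/bin/bash
# one array element per output line; no ORS tweaking needed
mapfile -t ALL_TTY_OWNERS_ARRAY < <(ls -l /dev | grep tty | awk '{print $3}')

echo "${#ALL_TTY_OWNERS_ARRAY[@]}"
printf '%s\n' "${ALL_TTY_OWNERS_ARRAY[@]}"
| {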
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/390871",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38106/"
]
} |
391,011 | I have a file that contains a total of 482 lines I want to remove ~~adt*~~ from. This is how the content looks:
478|~~adt00000aa9~~~~adt0000000b~~14395189_p0.jpg
479|~~adt00000995~~44836628_p0.jpg
480|~~adt00000aae~~Miku_Collab_2_by_Luciaraio.jpg
I tried sed 's/~~adt*~//' file > new_file but it didn't remove everything. How can I remove everything between the first and the last two ~~ signs? | Given that you want to remove ~~adt(something)~~ and that there may be ~~(something different)~~ on other lines (not shown in the question):
$ sed 's/~~adt[^~]*~~//g' file.in >file.out
For the given data, this will generate
478|14395189_p0.jpg
479|44836628_p0.jpg
480|Miku_Collab_2_by_Luciaraio.jpg
Changing the adt of the last line to xxx , the command generates
478|14395189_p0.jpg
479|44836628_p0.jpg
480|~~xxx00000aae~~Miku_Collab_2_by_Luciaraio.jpg
The pattern ~~adt[^~]*~~ will match all occurrences of ~~adt followed by any number of characters that are not ~ , and then ~~ again. The /g at the end will ensure that all such matches on every line are removed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391011",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174627/"
]
} |
391,028 | using time sleep 1 yields:
$ time sleep 1
real 0m1.005s
user 0m0.001s
sys 0m0.001s
is there a command I can use to print the exit code of sleep or whatever command I want to run? Something like:
$ log-exit-code sleep 1
perhaps this is sufficient?
sleep 1 && echo "$?" | cmd && echo "$?" wouldn't work since it would by necessity only print zeroes (the echo would only execute on successful completion of the preceding command). Here's a short shell function for you:
tellexit () {
    "$@"
    local err="$?"
    printf 'exit code\t%d\n' "$err" >/dev/tty
    return "$err"
}
This prints the exit code of the given command in a similar manner as the time command does.
$ tellexit echo "hello world"
hello world
exit code 0
$ tellexit false
exit code 1
By redirecting the printf to /dev/tty in the function, we may still use tellexit with redirections without getting junk in our standard output or error streams:
$ tellexit bash -c 'echo hello; echo world >&2' >out 2>err
exit code 0
$ cat out
hello
$ cat err
world
By saving the exit code in a variable we are able to return it to the caller:
$ tellexit false || echo 'failed'
exit code 1
failed
A fancier version of the same function also prints the signal that killed the command if the exit code is greater than 128 (which means it terminated due to a signal):
tellexit () {
    "$@"
    local err="$?"
    if [ "$err" -gt 128 ]; then
        printf 'exit code\t%d (%s)\n' "$err" "$(kill -l "$err")" >/dev/tty
    else
        printf 'exit code\t%d\n' "$err" >/dev/tty
    fi
    return "$err"
}
Testing:
$ tellexit sh -c 'kill $$'
exit code 143 (TERM)
$ tellexit sh -c 'kill -9 $$'
Killed
exit code 137 (KILL)
(The local thing requires ash / pdksh / bash / zsh , or you can change it to typeset which a few other shells also understand.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391028",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
391,040 | I would like to start a service using a systemd unit file. This service requires a password to start. I don't want to store the password in plaintext in the systemd unit file, because it is world-readable. I also don't want to provide this password interactively. If I were writing a normal script for this, I would store the credentials in a file owned by root with restricted permissions (400 or 600), and then read the file as part of the script. Is there any particular systemd-style way to do this, or should I just follow the same process as I would in a regular shell script? | There are two possible approaches here, depending on your requirements. If you do not want to be prompted for the password when the service is activated, use the EnvironmentFile directive. From man systemd.exec : Similar to Environment= but reads the environment variables from a text file. The text file should contain new-line-separated variable assignments. If you do want to be prompted, you would use one of the systemd-ask-password directives. From man systemd-ask-password : systemd-ask-password may be used to query a system password or passphrase from the user, using a question message specified on the command line. When run from a TTY it will query a password on the TTY and print it to standard output. When run with no TTY or with --no-tty it will use the system-wide query mechanism, which allows active users to respond via several agents
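A hedged sketch of the EnvironmentFile approach — the service name, variable name and paths are illustrative, not from any real unit:
# /etc/myservice/secret.env  -- owned by root, chmod 600
SERVICE_PASSWORD=s3cret

# /etc/systemd/system/myservice.service (excerpt)
[Service]
EnvironmentFile=/etc/myservice/secret.env
ExecStart=/usr/local/bin/myservice --password=${SERVICE_PASSWORD}
systemd expands ${SERVICE_PASSWORD} in ExecStart from the environment file, so the secret never appears in the world-readable unit itself.
| {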
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/391040",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102852/"
]
} |
391,076 | I just did a system update on Arch Linux ( pacman -Syu ) and saw a warning about there being old Perl modules: WARNING: '/usr/lib/perl5/site_perl' contains data from at least 2 packages which will NOT be used by the installed perl interpreter. -> Run the following command to get a list of affected packages: pacman -Qqo '/usr/lib/perl5/site_perl' WARNING: '/usr/lib/perl5/vendor_perl' contains data from at least 8 packages which will NOT be used by the installed perl interpreter. -> Run the following command to get a list of affected packages: pacman -Qqo '/usr/lib/perl5/vendor_perl' Running the recommended commands yielded the following:
$ pacman -Qqo '/usr/lib/perl5/site_perl'
gscan2pdf
perl-filesys-df
$ pacman -Qqo '/usr/lib/perl5/vendor_perl'
perl-config-general
perl-data-uuid
perl-goo-canvas
perl-gtk2-ex-simple-list
perl-gtk2-imageview
perl-pdf-api2
perl-sane
perl-set-intspan
I encountered something similar once before when the version of Perl was updated. If I recall correctly those Perl packages needed to be rebuilt; however, I do not remember where I found the incantation to do this. Please note that I have absolutely no experience with Perl. Those modules are simply used by gscan2pdf which I installed through AUR. How do I rebuild these Perl modules? | To rebuild the perl libraries from AUR for pacmanager I use yaourt :
yaourt -S --asdeps perl-gnome2-gconf glade-perl perl-crypt-blowfish perl-gnome2-vte perl-expect perl-crypt-rijndael perl-gtk2-ex-simple-list perl-io-stty perl-io-tty perl-net-arp perl-yaml perl-crypt-cbc perl-gtk2-unique perl-socket6 uuid
so just replace the libraries with the ones pacman gave you:
yaourt -S --asdeps gscan2pdf perl-filesys-df perl-config-general perl-data-uuid perl-goo-canvas perl-gtk2-ex-simple-list perl-gtk2-imageview perl-pdf-api2 perl-sane perl-set-intspan
Update 2018: the perl 5.28 update in Arch means that you have to rebuild all unsupported packages again. The solution to fix dependency errors is to remove the offending AUR perl application & then remove the orphan perl libraries & update your system:
sudo pacman -Rns $(pacman -Qtdq)
sudo pacman -Syu
Finally reinstall the AUR perl application. Update 2019: with the perl 5.30.0 update pacman now gives warnings about old perl libs. This time I had a big update & needed to do the following:
sudo pacman -R $conflicting_perl_mod_&_app
sudo pacman -Syu
# pacman now generates a warning for unused perl libs
pacman -R $(pacman -Qqo '/usr/lib/perl5/5.28')
pacman -S $perl_app
# optional: remove orphaned packages
sudo pacman -Rns $(pacman -Qtdq)
sudo pacman -Syu | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/391076",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29398/"
]
} |